True or False: Application acceleration solutions teach developers to write inefficient code
It has been suggested that the use of application acceleration solutions as a means to improve application performance will result in programmers writing less efficient code. In a comment on “The House that Load Balancing Built” a reader replies:

Not only will it cause the application to grow in cost and complexity, it's teaching new and old programmers to not write efficient code and rely on other products and services on [sic] thier behalf. I.E. Why write security into the app, when the ADC can do that for me. Why write code that executes faster, the ADC will do that for me, etc., etc.

While no one can control whether a programmer writes “fast” code, the truth is that application acceleration solutions do not affect the execution of code in any way. A poorly constructed loop will run just as slowly with or without an application acceleration solution in place. Complex mathematical calculations will execute at the same speed regardless of the external systems that may be in place to assist in improving application performance. The answer is, unequivocally, that the presence or absence of an application acceleration solution should have no impact on the application developer, because it does nothing to affect the internal execution of written code. If you answered false, you got the answer right.

The question has to be, then, just what does an application acceleration solution do that improves performance? If it isn’t making the application logic execute faster, what’s the point? It’s a good question, and one that deserves an answer. Application acceleration is part of a solution we call “application delivery”.
Application delivery focuses on improving application performance through optimization of the use and behavior of the transport (TCP) and application transport (HTTP/S) protocols, offloading certain functions from the application that are more efficiently handled by an external, often hardware-based, system, and accelerating the delivery of the application data.

OPTIMIZATION

Application acceleration improves performance by understanding how these protocols (TCP, HTTP/S) interact across a WAN or LAN and acting on that understanding to improve overall performance. A large number of performance-enhancing RFCs (standards) around TCP are commonly implemented by application acceleration solutions:

Selective Acknowledgments (RFC 2018)
Explicit Congestion Notification (RFC 3168)
Limited Transmit and Fast Retransmit/Recovery (RFC 3042 and RFC 2582)
Increased Initial Congestion Windows (RFC 3390)
Slow Start with Congestion Avoidance (RFC 2581)
Timestamps and Window Scaling (RFC 1323)

All of these RFCs deal with TCP and therefore have very little to do with the code developers create. Most developers code within a framework that hides the details of TCP and HTTP connection management from them. It is the rare programmer today who writes code to directly interact with HTTP connections, and rarer still to find one coding directly at the TCP socket layer. The execution of code written by the developer takes just as long regardless of whether these RFCs are implemented. The application acceleration solution improves the performance of the delivery of application data over TCP and HTTP, which increases the performance of the application as seen from the user’s point of view.

OFFLOAD

Offloading compute-intensive processing from application and web servers improves performance by reducing the consumption of CPU and memory required to perform those tasks.
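The premise of offload, that cryptographic work eats server CPU that could otherwise run the application, is easy to see with nothing but the standard library. This is a sketch, not a benchmark; SHA-256 stands in for the far costlier public-key operations an SSL-offload device absorbs, and the payload size and iteration count are arbitrary:

```python
import hashlib
import time

payload = b"x" * 65536  # 64 KB of application data

# Plain data movement: roughly what the server does when crypto is offloaded.
start = time.perf_counter()
for _ in range(100):
    buf = bytearray(payload)  # bytearray() forces a real copy
plain_seconds = time.perf_counter() - start

# Cryptographic work on the same data (SHA-256 as a cheap stand-in for the
# RSA handshakes and bulk encryption an offload device would handle).
start = time.perf_counter()
for _ in range(100):
    digest = hashlib.sha256(payload).digest()
crypto_seconds = time.perf_counter() - start

print("copy: %.4fs  hash: %.4fs" % (plain_seconds, crypto_seconds))
```

On any reasonable machine the hashing loop takes many times longer than the copy loop; real TLS handshakes widen the gap by orders of magnitude.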
SSL and other encryption/decryption functions (cookie security, for example) are computationally expensive and require additional CPU and memory on the server. Offloading these functions to an application delivery controller or stand-alone application acceleration solution improves application performance because it frees CPU and memory on the server, allowing them to be dedicated to the application. If the application or web server does not need to perform these tasks, it saves the CPU cycles that would otherwise be spent on them, and those cycles can be used by the application instead, increasing its performance.

Also beneficial is the way in which application delivery controllers manage TCP connections made to the web or application server. Opening and closing TCP connections takes time, and that time is not something a developer - coding within a framework - can affect. Application acceleration solutions proxy connections for the client, reducing both the number of TCP connections required on the web or application server and the frequency with which those connections need to be opened and closed. This increases application performance because the server no longer spends time opening and closing TCP connections, which are necessarily part of the performance equation but are not directly affected by anything the developer does in his or her code.

The commenter believes that an application delivery controller implementation should be an afterthought. However, the ability of modern application delivery controllers to offload certain application logic functions, such as cookie security and HTTP header manipulation, in a centralized, optimized manner through network-side scripting can be both a performance benefit and a way to address browser-specific quirks, and therefore should be seriously considered during the development process.
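The connection-reuse effect described above can be demonstrated with a throwaway local HTTP server. This sketch (all names illustrative, stdlib only) counts how many TCP connections the server accepts for three requests, first over one persistent connection, then with a fresh connection per request - the difference a connection-multiplexing proxy exploits:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

connections_accepted = 0

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive

    def setup(self):
        global connections_accepted
        connections_accepted += 1  # setup() runs once per TCP connection
        super().setup()

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Three requests over one persistent connection...
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(3):
    conn.request("GET", "/")
    conn.getresponse().read()
conn.close()
reused = connections_accepted

# ...versus a brand-new connection per request.
for _ in range(3):
    c = http.client.HTTPConnection("127.0.0.1", port)
    c.request("GET", "/", headers={"Connection": "close"})
    c.getresponse().read()
    c.close()
per_request = connections_accepted - reused

print(reused, per_request)  # 1 3
server.shutdown()
```

Each of those extra connections costs a TCP handshake and teardown; a proxy that funnels many client connections onto a few persistent server-side ones removes that cost from the server entirely.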
ACCELERATION

Finally, application acceleration solutions improve performance through the use of caching and compression technologies. Caching includes not just server-side caching but the intelligent use of the client (usually browser) cache to reduce the number of requests that must be handled by the server. By reducing the number of requests the server is responding to, the web or application server is less burdened in terms of managing TCP and HTTP sessions and state, and has more CPU cycles and memory that can be dedicated to executing the application.

Compression, whether using traditional industry-standard web-based compression (GZip) or WAN-focused data de-duplication techniques, decreases the amount of data that must be transferred from the server to the client. Decreasing traffic (bandwidth) results in fewer packets traversing the network, which results in quicker delivery to the user. This makes it appear that the application is performing faster than it is, simply because it arrived sooner.

Of all these techniques, the only one that could possibly contribute to the delinquency of developers is caching, because application acceleration caching features act on HTTP caching headers that can be set by the developer, but rarely are. These headers can also be configured by the web or application server administrator, but rarely are in a way that makes sense, because most content today is generated dynamically and is rarely static, even though individual components inside the dynamically generated page may in fact be very static (CSS, JavaScript, images, headers, footers, etc.). However, the methods through which caching (pragma) headers are set are fairly standard, and the actual code is usually handled by the framework in which the application is developed, meaning the developer ultimately cannot affect the efficiency of this mechanism because it was written by someone else. The point of the comment was likely more broad, however.
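As an aside, the caching headers discussed above take only a few lines to set, and the payoff of GZip on repetitive markup is dramatic. A sketch with the standard library (the file-type rules and max-age value are illustrative choices, not recommendations):

```python
import gzip
from email.utils import formatdate

STATIC_MAX_AGE = 30 * 86400  # let browsers cache static components for 30 days

def caching_headers(path, last_modified_ts):
    """Choose HTTP caching headers based on whether the asset is static."""
    headers = {"Last-Modified": formatdate(last_modified_ts, usegmt=True)}
    if path.endswith((".css", ".js", ".png", ".jpg", ".gif")):
        # Static components of a dynamic page: safe to cache client-side.
        headers["Cache-Control"] = "public, max-age=%d" % STATIC_MAX_AGE
    else:
        # Dynamically generated pages: force revalidation on every request.
        headers["Cache-Control"] = "no-cache"
    return headers

# Repetitive HTML compresses extremely well, so far fewer bytes (and
# therefore packets) cross the network.
page = b"<tr><td class='cell'>value</td></tr>\n" * 500
compressed = gzip.compress(page)

print(caching_headers("/static/site.css", 0)["Cache-Control"])
print(len(page), "->", len(compressed), "bytes")
```

An acceleration device can apply both techniques without any of this code existing in the application, which is exactly the point of the article.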
I am fairly certain the commenter meant to imply that if developers know the performance of the application they are developing will be accelerated by an external solution, they will not be as concerned about writing efficient code. That’s a layer 8 (people) problem that isn’t peculiar to application delivery solutions at all. If a developer is going to write inefficient code, there’s a problem - but that problem isn’t with the solutions implemented to improve the end-user experience or scalability, it’s a problem with the developer. No technology can fix that.

If You Have to Ask What the Big(O) Is You've Never ... Calculated It
I recently made a passing remark about the value of being able to write the code for a linked list. The night before, Don and I had been arguing with our oldest son about whether he should be using a stack or a linked list to implement a Java version of Freecell, hence data structures had been on my mind. Because he, like many college students (and graduates) today, hasn't had proper instruction in the basics of these data structures, he's somewhat at a loss to understand why a linked list is, in fact, a better solution to his problem.

Compiler theory is no longer taught at most colleges, and operating systems is a class of the past (good thing, too, I still have nightmares). Colleges are no longer turning out computer scientists; they're turning out - for the most part - enterprise application developers. Like Big(O) calculations and the red dragon book, a thorough foundation in computer science has been left to those who plan a career in writing compilers (someone has to do it, after all) and enhancing operating systems.

Part of the reason most college graduates no longer have a firm foundation in such concepts is that in the majority of organizations it's not necessary. These kids are, for the most part, going to be application developers, not compiler specialists, and they certainly aren't going to be writing code in assembly or tweaking TCP/IP stacks. Developers have "moved up the stack" out of necessity and demand. There are far more organizations that need good application developers than there are that require someone who can implement Dijkstra's algorithm in their sleep. Their focus is on writing applications that are usable and meet the needs of the business owners for whom these applications become paramount for success, not on twiddling bits. Translating business speak into functional specifications is not always easy.
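For reference, the linked list at the heart of that argument is a few dozen lines in any language. A minimal Python sketch (Java would be nearly identical in shape) showing the O(1) head operations that make it a natural fit for stack-like card movement:

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # O(1): no shifting of elements, just one pointer relink.
        self.head = Node(value, self.head)

    def pop_front(self):
        # O(1): unlink the head and return its value.
        node = self.head
        self.head = node.next
        return node.value

    def __iter__(self):
        node = self.head
        while node:
            yield node.value
            node = node.next

ll = LinkedList()
for card in ["ace", "king", "queen"]:
    ll.push_front(card)
print(list(ll))  # ['queen', 'king', 'ace']
```

Pushed and popped only at the head, it behaves exactly like a stack - which is why the stack-versus-linked-list argument is really about what other operations the game needs.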
It can be as frustrating as trying to marry network and application concepts in order to deliver applications when you know a lot about one but nothing about the other. And trying to talk to "those other people" - you know, the ones in that other IT silo - can be as frustrating as traveling to a new country where the only language spoken is one you don't understand. Hence the rise of "analysts" in IT and "application delivery networks" in the age of the Internet. Both solutions work because they essentially sit on the fence, between two worlds, and translate between them. Because of them we end up with applications that fit into the organization's architecture and solve the problems of the business, and a way to deliver them in the most efficient manner possible.

Application delivery is about applications and networks, and how they can collaborate to deliver secure, fast applications. The folks who build application delivery products do understand the nitty-gritty underside of operating systems, network stacks, transport protocols, and application protocol optimization. These folks understand how to tweak the delivery of applications and squeeze every last drop of performance out of the network, and they implement products to do just that. Deploying an application delivery solution is like getting the advantages of an entire team of bare-metal, bit-tweaking, compiler-writing coders in a box, without sacrificing the headcount necessary to fulfill the application development needs of the business.

Application developers today aren't likely able to calculate the Big(O) of that loop they just wrote or read hex as naturally as they do English, but they are able to translate requirements into technical specifications and build applications that solve the problems of their particular business. That's okay; that's their job. Let them focus on their job, and let application delivery do its job by ensuring those applications are secure, fast, and available.
Imbibing: Coffee

All your performance are belong to us
The evolution of programming languages and environments and the impact on performance

Chances are that if I ask my son, a third-year computer science major, about Big(O) I'll either get that look - the one that says he's had that discussion with his father years ago and he really doesn't want to discuss such things with his mother - or he'll dismiss it as not relevant to today's computing environment. Big(O) and algorithmic performance are just not that important to today's generation of developers, who are too often being taught to code within a vacuum or, to be more accurate, inside a virtual machine that takes care of most of the dirty work for them.

The most popular developer environments and languages today, those being taught to would-be developers in college and used extensively in the enterprise, are primarily interpreted, virtual-machine-executed languages. Java. C#. VB.NET. These languages take the "guesswork" out of dealing with low-level, dirty concepts like memory allocation and deallocation and automatically optimize common syntactic constructs. But that means developers can no longer choose between malloc() and calloc(), and indeed they likely do not understand the performance implications of choosing one over the other. They cannot decide when garbage collection occurs, a process that has long plagued Java as one of the primary causes of temporary performance problems in high-volume server-side applications. They turn to pre-built data structures such as Vectors or Collections, and haven't the foggiest clue how to create, manipulate, and manage a true array on their own. The layer of veneer placed over these low-level functions removes complexity and thus improves the quality of the code, but it does so at a price: performance.

In short, today's developers are limited in their ability to optimize code by their education, experience, and development environment.
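The Big(O) in question is easy to make concrete. A sketch (function names are mine) comparing two ways to detect a duplicate, counting the work each actually performs rather than timing it:

```python
def has_duplicate_quadratic(items):
    """Compare every pair: O(n^2) comparisons in the worst case."""
    comparisons = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            comparisons += 1
            if items[i] == items[j]:
                return True, comparisons
    return False, comparisons

def has_duplicate_linear(items):
    """Track seen values in a hash set: O(n) average-case lookups."""
    seen, lookups = set(), 0
    for item in items:
        lookups += 1
        if item in seen:
            return True, lookups
        seen.add(item)
    return False, lookups

data = list(range(1000))  # no duplicates: the worst case for both
print(has_duplicate_quadratic(data)[1])  # 499500 comparisons
print(has_duplicate_linear(data)[1])     # 1000 lookups
```

Same answer, a 500-fold difference in work at n = 1000 - and that gap keeps widening as n grows, which no amount of external acceleration can undo.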
They can make performance worse, but there is an upper bound to how well they can make an application perform, and that upper bound is controlled entirely by their environment. This means enterprises are dependent upon the ability of the pre-compilers and interpreters (virtual machines) to optimize the code and the run-time environments. But because the ISVs providing those tools and environments must account for hundreds of scenarios, the tools are rarely as optimized as they could be. They are "good enough" for general use and rely upon the steady increase of resources and performance on the computing platforms upon which these tools and applications will be deployed to keep performance at an acceptable level. It's not always enough.

Traditionally, once an application is deployed and performance problems crop up, the finger pointing begins. Developers point to the network or the platform as the cause; conversely, the network team points at the developers. Both are likely wrong in assuming the fault lies with the other. In many cases there simply aren't any more tweaks possible within the code, and the network is performing exactly as it should. The fault often lies in the one place that neither the network nor the application team can affect: the operating system and the application platform. It's a matter of protocol efficiency, from layer 4 (TCP) up through layer 7 (HTTP). These inefficiencies are inherent in the operating system and network stacks, and they are not something an enterprise can affect directly.

They can, however, address these inefficiencies with application acceleration solutions. Application acceleration products address the problems inherent in HTTP, like chattiness, and in TCP in a variety of ways, including implementing the rather lengthy list of RFCs designed specifically to improve TCP performance and behavior.
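For a sense of how far below application code these stack-level settings sit: touching them directly means per-socket option calls of this kind. A sketch (option availability and effective values vary by operating system, and the kernel may round or cap buffer requests):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm so small writes are sent immediately
# instead of being buffered for coalescing with later data.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Request a larger receive buffer, which also influences the TCP
# window the kernel can advertise.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)

nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()

print(nodelay, rcvbuf)
```

Almost no application developer writes code like this, and the congestion-control behaviors in the RFCs above aren't exposed per-socket at all - which is why a device sitting in the network path is the practical place to optimize them.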
Products like F5's WebAccelerator add file- and object-level caching and compression on top of protocol-specific enhancements to overcome the limitations of operating system network stacks and application platform environments. In many cases these solutions can further optimize specific applications, like those from Oracle, Microsoft, Siebel, and SAP, by addressing inefficiencies in their protocols, improving performance even further.

Application acceleration solutions improve the performance of applications by addressing issues that cannot be readily addressed by developers or application and network administrators. Developers can, of course, go back to using more easily optimized code such as C/C++ or even assembly, but that's not likely for a number of reasons, including increased complexity, increased development time, and the reality that there is a dwindling number of developers skilled in these languages. You can tweak the network all you want, and send developers back to stare at code that can't be optimized any further, but in the end neither option will likely improve the performance of the application. The most cost-effective, optimal solution is to implement an application acceleration solution for those applications for which performance is critical.

Imbibing: Pink Lemonade

Technorati tags: F5, MacVittie, application acceleration, developers, performance