application acceleration
True or False: Application acceleration solutions teach developers to write inefficient code
It has been suggested that the use of application acceleration solutions as a means to improve application performance would result in programmers writing less efficient code. In a comment on “The House that Load Balancing Built” a reader replies:

“Not only will it cause the application to grow in cost and complexity, it's teaching new and old programmers to not write efficient code and rely on other products and services on [sic] thier behalf. I.E. Why write security into the app, when the ADC can do that for me. Why write code that executes faster, the ADC will do that for me, etc., etc.”

While no one can control whether a programmer writes “fast” code, the truth is that application acceleration solutions do not affect the execution of code in any way. A poorly constructed loop will run just as slowly with or without an application acceleration solution in place. Complex mathematical calculations will execute at the same speed regardless of the external systems that may be in place to assist in improving application performance. The answer is, unequivocally, that the presence or absence of an application acceleration solution should have no impact on the application developer, because it does nothing to affect the internal execution of written code. If you answered false, you got the answer right.

The question has to be, then, just what does an application acceleration solution do that improves performance? If it isn’t making the application logic execute faster, what’s the point? It’s a good question, and one that deserves an answer. Application acceleration is part of a solution we call “application delivery”. Application delivery focuses on improving application performance through optimization of the use and behavior of transport (TCP) and application transport (HTTP/S) protocols, offloading certain functions from the application that are more efficiently handled by an external, often hardware-based system, and accelerating the delivery of the application data.

OPTIMIZATION

Application acceleration improves performance by understanding how these protocols (TCP, HTTP/S) interact across a WAN or LAN and acting on that understanding to improve their overall performance. There are a large number of performance-enhancing RFCs (standards) around TCP that are usually implemented by application acceleration solutions:

- Delayed and Selective Acknowledgments (RFC 2018)
- Explicit Congestion Notification (RFC 3168)
- Limited and Fast Re-Transmits (RFC 3042 and RFC 2582)
- Adaptive Initial Congestion Windows (RFC 3390)
- Slow Start with Congestion Avoidance (RFC 2581)
- TimeStamps and Window Scaling (RFC 1323)

All of these RFCs deal with TCP and therefore have very little to do with the code developers create. Most developers code within a framework that hides the details of TCP and HTTP connection management from them. It is the rare programmer today who writes code to directly interact with HTTP connections, and an even rarer one who codes directly at the TCP socket layer. The execution of code written by the developer takes just as long regardless of the implementation or lack of implementation of these RFCs. The application acceleration solution improves the performance of the delivery of the application data over TCP and HTTP, which increases the performance of the application as seen from the user’s point of view.
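To underline how far below application code these behaviors live: on Linux, every one of the RFC features listed above is a kernel setting, not something a request handler ever touches. Here is a minimal sketch (assuming a Linux host; the sysctl names are the standard ones under /proc/sys/net/ipv4) that simply reports which of them the kernel has enabled:

```python
# Sketch: report which TCP-level RFC features the (Linux) kernel has enabled.
# These are OS settings -- application code never sees them, which is the point.
from pathlib import Path

TCP_FEATURES = {
    "tcp_sack": "Selective Acknowledgments (RFC 2018)",
    "tcp_ecn": "Explicit Congestion Notification (RFC 3168)",
    "tcp_window_scaling": "Window Scaling (RFC 1323)",
    "tcp_timestamps": "TimeStamps (RFC 1323)",
}

for sysctl, rfc in TCP_FEATURES.items():
    path = Path("/proc/sys/net/ipv4") / sysctl
    value = path.read_text().strip() if path.exists() else "n/a"
    print(f"{rfc:45} {sysctl} = {value}")
```

Nothing in an application’s code changes when any of these flip from 0 to 1 — which is exactly the point being made here.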
OFFLOAD

Offloading compute-intensive processing from application and web servers improves performance by reducing the consumption of CPU and memory required to perform those tasks. SSL and other encryption/decryption functions (cookie security, for example) are computationally expensive and require additional CPU and memory on the server. The reason offloading these functions to an application delivery controller or stand-alone application acceleration solution improves application performance is that it frees the CPU and memory available on the server and allows them to be dedicated to the application. If the application or web server does not need to perform these tasks, it saves CPU cycles that would otherwise be used to perform them. Those cycles can be used by the application, and thus the performance of the application increases.

Also beneficial is the way in which application delivery controllers manage TCP connections made to the web or application server. Opening and closing TCP connections takes time, and the time required is not something a developer – coding within a framework – can affect. Application acceleration solutions proxy connections for the client and subsequently reduce the number of TCP connections required on the web or application server, as well as the frequency with which those connections need to be opened and closed. By reducing the connections and the frequency of connections, application performance is increased because the server is not spending time opening and closing TCP connections, which are necessarily part of the performance equation but not directly affected by anything the developer does in his or her code.

The commenter believes that an application delivery controller implementation should be an afterthought. However, the ability of modern application delivery controllers to offload certain application logic functions such as cookie security and HTTP header manipulation in a centralized, optimized manner through network-side scripting can be a performance benefit as well as a way to address browser-specific quirks, and therefore should be seriously considered during the development process.

ACCELERATION

Finally, application acceleration solutions improve performance through the use of caching and compression technologies. Caching includes not just server-side caching, but the intelligent use of the client (usually the browser) cache to reduce the number of requests that must be handled by the server. By reducing the number of requests the server is responding to, the web or application server is less burdened in terms of managing TCP and HTTP sessions and state, and has more CPU cycles and memory that can be dedicated to executing the application.

Compression, whether using traditional industry-standard web-based compression (GZip) or WAN-focused data de-duplication techniques, decreases the amount of data that must be transferred from the server to the client. Decreasing traffic (bandwidth) results in fewer packets traversing the network, which results in quicker delivery to the user. This makes it appear that the application is performing faster than it is, simply because it arrived sooner.

Of all these techniques, the only one that could possibly contribute to the delinquency of developers is caching. This is because application acceleration caching features act on HTTP caching headers that can be set by the developer, but rarely are.
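For a sense of how little code is involved when a developer does set those headers, here is a minimal sketch of a WSGI handler marking a response as cacheable. The app and the header values are illustrative assumptions, not a recommendation for any particular site:

```python
# Sketch: a WSGI app that sets an HTTP caching header on a static-ish asset.
# An acceleration tier (or the browser cache) can then serve repeat requests
# without bothering the application server again.
from wsgiref.simple_server import make_server

CSS = b"body { font-family: sans-serif; }"

def app(environ, start_response):
    headers = [
        ("Content-Type", "text/css"),
        ("Content-Length", str(len(CSS))),
        # Tell downstream caches this component is safe to keep for a day.
        ("Cache-Control", "public, max-age=86400"),
    ]
    start_response("200 OK", headers)
    return [CSS]

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, app).serve_forever()
```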
These headers can also be configured by the web or application server administrator, but rarely are in a way that makes sense, because most content today is generated dynamically and is rarely static – even though individual components inside the dynamically generated page may in fact be very static (CSS, JavaScript, images, headers, footers, etc…). However, the methods through which caching (pragma) headers are set are fairly standard, and the actual code is usually handled by the framework in which the application is developed, meaning the developer ultimately cannot affect the efficiency of the mechanism itself, because it was developed by someone else.

The point of the comment was likely broader, however. I am fairly certain that the commenter meant to imply that if developers know the performance of the application they are developing will be accelerated by an external solution, they will not be as concerned about writing efficient code. That’s a layer 8 (people) problem that isn’t peculiar to application delivery solutions at all. If a developer is going to write inefficient code, there’s a problem – but that problem isn’t with the solutions implemented to improve the end-user experience or scalability; it’s a problem with the developer. No technology can fix that.

The Disadvantages of DSR (Direct Server Return)
I read a very nice blog post yesterday discussing some of the traditional pros and cons of load-balancing configurations. The author comes to the conclusion that if you can use direct server return, you should. I agree with the author's list of pros and cons; DSR is the least intrusive method of deploying a load balancer in terms of network configuration. But there are quite a few disadvantages missing from the author's list.

Author's List of Disadvantages of DSR

The disadvantages of Direct Routing are:

- The backend server must respond to both its own IP (for health checks) and the virtual IP (for load-balanced traffic).
- Port translation or cookie insertion cannot be implemented.
- The backend server must not reply to ARP requests for the VIP (otherwise it will steal all the traffic from the load balancer).
- Prior to Windows Server 2008, some odd routing behavior could occur.
- In some situations either the application or the operating system cannot be modified to utilise Direct Routing.

Some additional disadvantages:

- Protocol sanitization can't be performed. This means vulnerabilities introduced due to manipulation or lax enforcement of RFCs and protocol specifications can't be addressed.
- Application acceleration can't be applied. Even the simplest of acceleration techniques, e.g. compression, can't be applied because the return traffic bypasses the load balancer (a.k.a. application delivery controller).
- Implementing caching solutions becomes more complex. With a DSR configuration, the routing that makes it so easy to implement requires that caching solutions be deployed elsewhere, such as via WCCP on the router. This requires additional configuration and changes to the routing infrastructure, and introduces another point of failure as well as an additional hop, increasing latency.
- Error/Exception/SOAP fault handling can't be implemented. In order to address failures in applications such as missing files (404) and SOAP Faults (500), it is necessary for the load balancer to inspect outbound messages. In a DSR configuration this ability is lost, which means errors are passed directly back to the user without the ability to retry a request, write an entry in the log, or notify an administrator.
- Data Leak Prevention can't be accomplished. Without the ability to inspect outbound messages, you can't prevent sensitive data (SSNs, credit card numbers) from leaving the building.
- Connection optimization functionality is lost. TCP multiplexing can't be accomplished in a DSR configuration because it relies on separating client connections from server connections. This reduces the efficiency of your servers and minimizes the value added to your network by a load balancer.

There are more disadvantages than you're likely willing to read, so I'll stop there. Suffice to say that the problem with the suggestion to use DSR whenever possible is this: if you're an application-aware network administrator, you know that most of the time DSR isn't the right solution, because it restricts the ability of the load balancer (application delivery controller) to perform additional functions that improve the security, performance, and availability of the applications it is delivering. DSR is well suited, and always has been, to UDP-based streaming applications such as audio and video delivered via RTSP. However, in the increasingly sensitive environment that is application infrastructure, it is necessary to do more than just "load balancing" to improve the performance and reliability of applications.
Additional application delivery techniques are an integral component of a well-performing, efficient application infrastructure. DSR may be easier to implement and, in some cases, may be the right solution. But in most cases, it's going to leave you simply serving applications instead of delivering them. Just because you can, doesn't mean you should.

I am wondering why not all websites enabling this great feature GZIP?
Understanding the impact of compression on server resources and application performance

While doing some research on a related topic, I ran across this question and thought “that deserves an answer” because it certainly seems like a no-brainer. If you want to decrease bandwidth – which subsequently decreases response time and improves application performance – turn on compression. After all, a large portion of web site traffic is text-based: CSS, JavaScript, HTML, RSS feeds – which means it will greatly benefit from compression. Typical GZIP compression affords at least a 3:1 reduction in size, with hardware-assisted compression yielding an average of 4:1 compression ratios. That can dramatically affect the response time of applications. As I said, seems like a no-brainer.

Here’s the rub: turning on compression often has a negative impact on capacity because it is CPU-bound, and under certain conditions it can actually cause a degradation in performance due to the latency inherent in compressing data compared to the speed of the network over which the data will be delivered. Here comes the science.

IMPACT ON CPU UTILIZATION

Compression via GZIP is CPU-bound. It requires a lot more CPU than you might think. The larger the file being compressed, the more CPU resources are required. Consider for a moment what compression is really doing: it’s finding all similar patterns and replacing them with representations (symbols, indexes into a table, etc…) of a single instance of the text. So it makes sense that the larger a file is, the more resources – RAM and CPU – are required to execute such a process. Of course, the larger the file is, the more benefit you see from compression in terms of bandwidth and improvement in response time. It’s kind of a Catch-22: you want the benefits but you end up paying in terms of capacity. If CPU and RAM are being chewed up by the compression process, then the server can handle fewer requests and fewer concurrent users.

You don’t have to take my word for it – there are quite a few examples of testing done on web servers and compression that illustrate the impact on CPU utilization:

- Measuring the Performance Effects of Dynamic Compression in IIS 7.0
- Measuring the Performance Effects of mod_deflate in Apache 2.2
- HTTP Compression for Web Applications

They all essentially say the same thing: if you’re serving dynamic content (or static content without local caching enabled on the web server), there is a significant negative impact on CPU utilization when enabling GZIP/compression for web applications. Given the exceedingly dynamic nature of Web 2.0 applications, the use of AJAX and similar technologies, and the data-driven world in which we live today, there are very few types of applications running on web servers for which compression will not negatively impact the capacity of the web server.

In case you don’t (want || have time) to slog through the above articles, here’s a quick recap:

              File Size   Bandwidth decrease   CPU utilization increase
  IIS 7.0     10KB        55%                  4x
              50KB        67%                  20x
              100KB       64%                  30x
  Apache 2.2  10KB        55%                  4x
              50KB        65%                  10x
              100KB       63%                  30x

It’s interesting to note that IIS 7.0 and Apache 2.2 mod_deflate have essentially the same performance characteristics. This data falls in line with the aforementioned Intel report on HTTP compression, which noted that CPU utilization increased 25-35% when compression was enabled.
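You can get a feel for the size-versus-CPU trade-off yourself with nothing but the standard library. This little sketch is illustrative only – real web content, real traffic, and different compression levels will produce different numbers – but it compresses payloads of roughly the sizes in the table above and reports the ratio and the CPU time spent:

```python
# Sketch: measure gzip ratio and CPU cost for payloads of increasing size.
import gzip
import time

SAMPLE = b"<div class=\"item\"><a href=\"/products/123\">Sample markup</a></div>\n"

for target in (10_000, 50_000, 100_000):
    data = (SAMPLE * (target // len(SAMPLE) + 1))[:target]  # repetitive, like HTML
    start = time.process_time()                             # CPU time, not wall time
    compressed = gzip.compress(data, compresslevel=6)
    cpu = time.process_time() - start
    print(f"{target // 1000:>4} KB -> ratio {len(data) / len(compressed):5.1f}:1, "
          f"CPU {cpu * 1000:6.2f} ms")
```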
So essentially, when you enable compression you are trading its benefits – bandwidth reduction, response time improvement – for a reduction in capacity. You’re robbing Peter to pay Paul, because instead of paying for bandwidth you’re paying for more servers to handle the same load.

THE MYTH OF IMPROVED RESPONSE TIME

One of the reasons you’d want to compress content is to improve response time by decreasing the total number of packets that have to traverse a wire. This is a necessity when transferring content via a WAN, but it can actually cause a decrease in performance for application delivery over the LAN. This is because the time it takes to compress the content and then deliver it is actually greater than the time to just transfer the original file via the LAN. The speed of the network over which the content is being delivered is highly relevant to whether compression yields benefits for response time. The increasing consumption of CPU resources as volume increases also has a negative impact on the ability of the server to process and subsequently respond – which means an increase in application response time, not the desired result.

Maybe you’re thinking “I’ll just get more CPU then. After all, there are billion-core servers out there, that ought to solve the problem!” Compression algorithms, like FTP, are greedy. FTP will, if allowed, consume as much bandwidth as possible in an effort to transfer data as quickly as possible. Compression will do the same thing to CPU resources: consume as much as it can to perform its task as quickly as possible. Eventually, yes, you’ll find a machine with enough cores to support both compression and capacity needs, but at what cost? It may well have been more financially efficient to invest in a better solution (that also brings additional benefits to the table) than just increasing the size of the server. But hey, it’s your data, you need to do what you need to do.

The size of the content, too, has an impact on whether compression will benefit application performance. Consider that the goal of compression is to decrease the number of packets being transferred to the client. Generally speaking, the standard MTU for most networks is 1500 bytes, because that’s what works best with Ethernet and IP. That means you can assume around 1400 bytes per packet available to transfer data. So if content is 1400 bytes or less, you get absolutely no benefit out of compression, because it’s already going to take only one packet to transfer – you can’t really send half-packets, after all. And in some networks, packets that are too small can actually freak out network devices, because they’re optimized to handle the large content being served today – which means many full packets.

TO COMPRESS OR NOT COMPRESS

There is real benefit to compression; it’s part of the core set of techniques used by both application acceleration and WAN application delivery services to improve performance and reduce costs. It can drastically reduce the size of data, and especially when you might be paying by the MB or GB transferred (such as applications deployed in cloud environments), this is a very important feature to consider. But if you end up paying for additional servers (or instances in a cloud) to make up for the lost capacity due to the higher CPU utilization caused by that compression, you’ve pretty much ended up right where you started: no financial benefit at all. The question is not if you should compress content; it’s when and where and what you should compress.
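Distilled into code, the rules of thumb from this post – the roughly 1400-byte packet payload, text versus already-compressed content, LAN versus WAN – might look something like the sketch below. The threshold and the client_on_wan flag are illustrative assumptions, not a tested policy:

```python
# Sketch: a context-aware "should we compress this response?" decision.
MIN_COMPRESSIBLE = 1400  # bytes -- roughly one packet's worth of payload

TEXT_TYPES = ("text/", "application/json", "application/javascript",
              "application/xml")

def should_compress(content_type: str, content_length: int,
                    client_on_wan: bool) -> bool:
    if content_length <= MIN_COMPRESSIBLE:
        return False              # already fits in one packet; nothing to gain
    if not content_type.startswith(TEXT_TYPES):
        return False              # images/video are usually compressed already
    # On a fast LAN the compression latency can exceed the transfer savings.
    return client_on_wan

# Example: a 50 KB HTML page headed across the WAN is worth compressing.
print(should_compress("text/html", 50_000, client_on_wan=True))   # True
print(should_compress("image/png", 50_000, client_on_wan=True))   # False
```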
The answer to “should I compress this content” almost always needs to be based on a set of criteria that require context-awareness – the ability to factor into the decision-making process the content, the network, the application, and the user. If the user is on a mobile device and the size of the content is greater than 2000 bytes and the type of content is text-based and … It is this type of intelligence that is required to effectively apply compression such that the greatest benefits – reduction in costs, better application performance, and maximization of server resources – are achieved. Any implementation that can’t factor all these variables into the decision to compress or not is not an optimal solution; it’s just guessing or blindly applying the same policy to all kinds of content. Such implementations effectively defeat the purpose of employing compression in the first place.

That’s why the answer to where is almost always “on the load balancer or application delivery controller”. Not only are such devices capable of factoring in all the necessary variables, but they also generally employ specialized hardware designed to speed up the compression process. By offloading compression to an application delivery device, you can reap the benefits without sacrificing performance or CPU resources.

- Measuring the Performance Effects of Dynamic Compression in IIS 7.0
- Measuring the Performance Effects of mod_deflate in Apache 2.2
- HTTP Compression for Web Applications
- The Context-Aware Cloud
- The Revolution Continues: Let Them Eat Cloud
- Nerd Rage

Enterprise Apps are Not Written for Speed
#fasterapp #ccevent

They’re written for readability, for integration, for business function, and for long-term maintenance…

When I was first entering IT, I had the good (or bad, depending on how you look at it) fortune to be involved in some of the first Internet-facing projects at a global transportation organization. We made mistakes and learned lessons and eventually got down to the business of architecting a framework that would span the entire IT portfolio. One of the lessons I learned early on was that maintainability always won over performance, especially at the code level. Oh, some basic tenets of optimization in the code could be followed – choosing between while, for, and do..until conditionals based on performance-related concerns – but for the most part, many of the tricks used to improve performance were verboten, some based solely on factors like readability. The introduction of local scope for an if…then…else statement, for example, was required for readability, even though in terms of performance it introduces many unnecessary clock ticks that under load can have a negative impact on overall capacity and response time. Microseconds of delay add up to seconds of delay, after all.

But coding standards in the enterprise lean heavily toward the reality that (1) code lives for a long time and (2) someone other than the original developer will likely be maintaining it. This means readability is paramount to ensuring the long-term success of any development project. Thus, performance suffers, and “rewriting the application” is not an option. It’s costly, and the changes necessary would likely conflict with the overriding need to ensure long-term maintainability. Even modern web-focused organizations like Twitter and Facebook have run into performance issues based on architectural decisions made early in the lifecycle. Many no doubt recall the often very technical discussions regarding Twitter’s design and its interaction with its database as a source of performance woes, with hundreds of experts offering advice and criticism.

Applications are not often designed with performance in mind. They are architected and designed to perform specific functions and tasks, usually business-related, and they are developed with long-term maintenance in mind. This leads to the problem of performance, which can rarely be addressed by the developers due to the constraints placed upon them – not least of which may be an active and very vocal user base.

APPLICATION DELIVERY PUTS the FAST back in APPLICATIONS

This is a core reason the realm of application delivery exists: to compensate for issues within the application that cannot – for whatever reason – be addressed through modification of the application itself. Application acceleration, WAN optimization, and load balancing services combine to form a powerful tier of application delivery services within the data center through which performance-related issues can be addressed. This tier allows load balancing services, for example, to be leveraged as a means to scale out an application, which effectively results in similar (and often greater) performance gains as simply scaling up to redress inherent performance constraints within the application. Application acceleration techniques improve the delivery of application-related content and objects through caching, compression, transformation, and concatenation.
And WAN optimization services address bandwidth constraints that may inhibit delivery of the application, especially for applications heavy on the data and content side. While developers could certainly modify applications to rearrange content or reduce the size of data being delivered, it is rarely practical or cost-effective to do so. Similarly, it is not cost-effective or practical to ask developers to modify applications to remove processing bottlenecks if doing so results in unreadable code.

Enterprise applications are not written for speed, but that is exactly what is demanded of them by their users. Both needs must be met, and the introduction of an application delivery tier into the architecture can serve to provide the balance between performance and maintenance by applying acceleration services dynamically. In this way applications need not be modified, but performance and scale are greatly improved.

I’ll be at CloudConnect 2012 and we’ll discuss the subject of cloud and performance a whole lot more at the show!

Sessions:

- From Point A to Point B.
- The Three Axioms of Application Delivery
- WILS: WPO versus FEO
- The Full-Proxy Data Center Architecture
- Even the best written code has a weakness
- At the Intersection of Cloud and Control…
- What is a Strategic Point of Control Anyway?
- The Battle of Economy of Scale versus Control and Flexibility
- What CIOs Can Learn from the Spartans

Architecturally, Is There Such A Thing As Too Scalable?
We’ve all had that chilling moment when the gate attendant at the airport comes over the loudspeaker and, doing her best Charlie Brown’s Teacher imitation, announces “Jursim Puzzling vlordid Netting, gollink dummole Neptune.” (This flight is in an oversold situation, we’re looking for volunteers…). While we could discuss the causes of and solutions to this being an all-too-frequent event in the daily operation of airlines, for the purposes of this blog, let’s talk about the back end. The problem on the back end is, quite simply, that the plane cannot be expanded to handle the burden demanded of it. That makes perfect sense in an airplane – I for one would be slow to get onto an airplane that could be expanded like a pop-up camper. But it makes no sense whatsoever in an IT infrastructure. While a particular application might never need to expand, the overall architecture of the datacenter must meet shifting demands on a minute-by-minute basis, and must be prepared to offer more power to an application that is currently overburdened.

There are many parts to making your architecture that dynamic. In the application sense, developers (be they internal or external) need to have thought of the issues that scaling brings up. You will likely need a virtualization engine – you can do it without one, but that means leaving a lot of servers sitting idle all of the time, and in the 21st century we don’t tend to do a lot of that. You’ll also need a dynamic infrastructure. It does you no good to scale up a server unless the network, security, and optimization tools available can not only handle the additional load but also adapt to the presence of a new server popping up or an existing one going away. In short, you want the functionality of a hardware ADC with an adaptability to match the VM capabilities of your organization. And as time goes on you will want the same functionality to extend to the cloud, because your ADC brings your datacenter policies to VMs running on your preferred cloud vendor.

But that’s the problem with a single-faceted deployment model where architecture is concerned. In the old world you wanted hardware ADCs to offload things like encryption and compression to, so your servers could be servers and not bulk-processing engines. In the virtual world, some would tell you that you want virtual ADCs to maintain a level of adaptability that matches your virtualization environment. The problem is where to put all of these virtual machines. When virtually offloading encryption or compression to a VM on the same machine, you’re offloading nothing – you’re just shifting which VM makes the request of your hardware; the burden is still there. Much like an airplane, it just doesn’t expand very well unless you dedicate hardware to the ADC, making it less of a bargain in terms of architecture and cost savings.

We have talked in the past about the hybrid model, and this is where it shines. By maintaining a central, physical ADC at the WAN strategic point of control (that point between the world and your servers), you can then give a separate virtualized ADC to your virtualized environment – or a dozen virtualized ADCs – with computationally expensive operations like encryption and compression turned off, and route their traffic through the physical ADC for that processing. Your applications get the benefits of an ADC – from load balancing to security – from the virtual ADC, and the benefits of offloading the heavy lifting to the physical ADC.
By placing the physical ADC at the WAN strategic point of control, you can also place virtual ADCs at your cloud provider and either physical or virtual (depending upon throughput needs) ADCs at remote datacenters. With the physical ADC coordinating the efforts of this network of ADCs, you can create centralized policies and profiles that are applied no matter where the final target ADC resides. And if your network grows, you can exponentially expand your physical ADC with a Virtual Clustered Multiprocessing system, so that scalability becomes an issue of yesteryear. More on that in a future blog, promise.

No, there is no such thing as too scalable from an architectural perspective. Putting the right tools into place means your options are practically unlimited as your traffic patterns grow and change, day by day, month by month, year by year. And with an ADC processing data as it passes through – before it ever reaches your servers – you are also able to offer a more secure environment, should the organization have that need. It’s a bright future, and the more technology moves forward, the brighter it seems to get. Soon you’ll be able to meet all of your employer’s IT architecture needs with the speed and grace that virtualization brought to spinning up new servers on demand – without working weekends and dropping entire systems to achieve upgrades. So ditch the airbus architecture, save your customers the “We are in an overutilized network situation…” horror, and usher in the age of adaptability. Your network, your staff, and the business will all thank you.

The Right (Platform) Tool For the Job(s).
One of my hobbies is modeling – mostly for wargaming, but also for the sake of modeling. In an average year I do a lot of WWII models, some modern military, some civilian vehicles, figures from an array of historical time periods, and the occasional sci-fi figure for one of my sons… the oldest (24 y/o) being a WarHammer 40k player and the youngest (3 y/o) just plain enjoying anything that looks like a robot. While I have been modeling more or less for decades, only in the last five years have I had the luxury of owning an airbrush, and I restrict it to very limited uses – mostly base-coating larger models like cars, tanks, or spaceships.

The other day I was reading on my airbrush vendor’s website and discovered that they had purchased a competitor that specialized in detailing airbrushes – so detailed that the line is used to decorate fingernails. This got me to thinking that I could do more detailed bits on models – like shovel blades and flesh tones – with an airbrush if I had one of these little detail brushes. Lori told me to send her a link to them so that she had it on the list for possible gifts, so I went out and started researching which model of the line was most suited to my goals.

The airbrush I have is one of the best on the market – a Badger Airbrush Company model 150. It has dual action, which means that pushing down on the trigger lets air out, and pulling the trigger back while pushing down lets an increasing amount of paint flow through. I use this to determine the density of paint I’m applying, but have never thought too much about it. Well, in my research I wanted to see how much difference there was between my airbrush and the Omni that I was interested in. The answer… almost none. Which confused me at first, as my airbrush, even with the finest needle and tip available and a pressure valve on my compressor to control the amount of air being pumped through it, sprays a lot of paint at once. So I researched further, and guess what? The volume-of-paint adjustment controlled by how far you draw back the trigger, combined with the PSI you allow through the regulator, controls the width of the paint flow. My existing airbrush can get down to 2mm – sharpened-pencil-point widths. I have a brand-new fine tip and needle (in poor lighting I confused my fine needle with my reamer and bent the tip a few weeks ago, so I ordered a new one), and my pressure regulator is a pretty good one. All that is left is to play with it until I have the right pressure, and I may be doing more detailed work with my airbrush in the near future.

Airbrushing isn’t necessarily better – for some jobs I like the results better, like single-color finishes, because if you thin the paint and go with several coats, you can get a much more uniform worn look to surfaces – but overall it is just different. The reason I would want to use my airbrush more is, simply, time. Because you don’t have to worry about crevices and such (the air blows paint into them), you don’t have to take nearly as long to paint a given part with an airbrush as you do with a brush. At least for the base coat, anyway – you still need a brush for highlighting and shadowing… or at least I do… But it literally cuts hours off of a group of models if I can arrange one trip down to the spray area versus brush-painting those same models.

What does all of this have to do with IT? The same thing it usually does.
You have a ton of tools in your datacenter that do one job very well, but you have never had reason to look into alternate uses that the tool might do just as well or better. This is relatively common with Application Delivery Controllers, which are brought in just to do load balancing, or just for application acceleration, or just for WAN optimization, while the other things the tool does just as well haven’t been explored. But you might want to do some research on your platforms, just to see if they can serve other needs than you’re putting them to today. Let’s face it, you’ve paid for them, and in many cases they will work as-is, or with a slight cost add-on, to do even more. It is worth knowing what “more” is for a given product, if for no other reason than having that information in your pocket when exploring solutions going forward.

A similar situation is starting to develop with our ARX family of products, and no doubt with some competitors also (though I haven’t heard of it from competitors, I’m simply conjecturing) – as ARX grows in its capabilities, many existing customers aren’t taking advantage of the sweet new tools that are available to them for free or for a modest premium on their existing investment. ARX Cloud Extender is the largest case of this phenomenon that I know of, but this week’s EMC Atmos announcement might well go a long way toward reconciling that bit. To me it is very cool that ARX can virtualize your NAS devices AND include cloud and/or object storage alongside NAS so as to appear to be one large pool of storage. Whether you’re a customer or not, it’s worth checking out.

Of course, like my airbrush, you’ll have some learning to do if you try new things with your existing hardware. I’ll spend a couple of hours with the airbrush figuring out how to make reliable lines of those sizes, then determine where best to use it. While I could have achieved the same or similar results with masking, the time investment for masking is large and repetitive, and the dollar cost is repetitive too. I also could have paid a large chunk of money for a specialized detail airbrush, but then I’d have two tools to maintain, when one will do it all… And this is true of alternatives to learning new things about your existing hardware – the learning curve will be there whether you implement new functionality on your existing platforms or purchase a point solution, so it is best to figure out the cost in time and money of solving the problem from either direction. Often, you’ll find the cost of learning a new function on familiar hardware is much lower than purchasing and learning all-new hardware.

WWII Russians – vehicle is airbrushed, figures not.

Load Balancing For Developers: Security and TCP Optimizations
It has been a while since I wrote a Load Balancing for Developers installment, and since they’re pretty popular and there’s still a lot about Application Delivery Controllers (ADCs) that is taken for granted in the networking industry but relatively unknown in the development world, I thought I’d throw one out about making your security more resilient with ADCs. For those who are just joining this series, here’s the full list of posts I’ve tagged as Load Balancing for Developers, though only the ones whose title starts with “Load Balancing for Developers” or “Advanced Load Balancing for Developers” were actually written from this perspective, utilizing our fictional web application Zap’N’Go! as an example. This post, like most of them, doesn’t require that you read the other entries in the series, but if you’re interested in the topic, they are all written from the developer’s perspective, and only bring in the networking/ops portions where it makes sense.

So your organization has a truly successful web application called Zap’N’Go! that has taken the Internet by storm. Your hits are in the thousands an hour, and orders are rolling in. All was going well until your server couldn’t keep up and you went to a load-balanced scenario so that multiple servers could share the load. The problem is that with the money you’ve generated off of Zap’N’Go, you’ve bought a competitor and started several new web applications, set up a forum or portal for your customers to communicate with you and each other directly, and are using the old datacenter from the company you purchased as a redundant datacenter in case the worst should happen. And all of that means that you are suffering server (and VM) sprawl. The CPU cycles being eaten up by your applications are truly astounding, and you’re looking into ways to drive them down. Virtualization helped you to be more agile in responding to the requests of the business, but it also brings a lot of management overhead in making certain servers aren’t overloaded with too high a virtual density.

One of the cool bits about an ADC is that it does a lot more than load balance, and much of that can be utilized to improve application performance without re-architecting the entire system. While there are a lot of ways that an ADC can improve application performance, we’ll look at a couple of easy ones here, and leave some of the more difficult or involved ones for another time. That keeps me in writing topics, and makes certain that I can give each one the attention it deserves in the space available.

The biggest and most obvious improvement in an ADC is of course load balancing. This blog assumes you already have an ADC in place, and load balancing was your primary reason for purchasing it. While I don’t have market numbers in front of me, it is my experience that this is true of the vast majority of ADC customers. If you have overburdened web applications and have not looked into load balancing, before you go rewriting your entire system, take a look at the rest of this series. There really are options out there to help.

After that win, I think the biggest place – in a virtualized environment – that developers can reap benefits from an ADC is one that developers wouldn’t normally think of. That’s the reason for this series, so I suppose that would be a good thing. Nearly every application out there hits a point where SSL is enabled.
That point may be simply the act of accessing it, or it may be when users go to the “shopping cart” section of the web site, but they all use SSL to protect sensitive user data being passed over the Internet. As a developer, you don’t have to care too much about this fact. Pay attention to the protocol if you’re writing at that level, and to the ports if you have reason to, but beyond that you don’t have to care. Networking takes care of all of that for you. But what if you could put in a request to your networking group that would greatly improve performance without changing a thing in your code, and from a security perspective wouldn’t change much – most companies would see it as not changing anything, while a few will want to talk about it first? What if you could make this change over lunch and users wouldn’t know the difference?

Here’s the background. SSL encryption is expensive in terms of CPU cycles. No doubt you know that; most developers have to face this issue head-on at some point. It takes a lot of power to do encryption, and while commodity hardware is now fast enough that it isn’t a problem on a stand-alone server, in a VM environment the number of applications requesting SSL encryption on the same physical hardware is many times what it once was. That creates a burden that, at this time at least, often drags on the hardware. It’s not the fault of any one application or a rogue programmer; it is the summation of the burdens placed by each application requiring SSL encryption.

One solution to this problem is to try to manage VM deployment such that encryption is only required on a couple of applications per physical server, but this is not a very appealing long-term solution as loads shift and priorities change. From a developer’s point of view, do you trust the systems/network teams to guarantee your application is not sharing hardware with a zillion applications that all require SSL encryption? Over time, this is not going to be their number one priority, and when performance troubles crop up, the first place that everyone looks in an in-house developed app is at the development team. We could argue whether that’s the right starting point or not, but it certainly is where we start.

Another, more generic solution is to take advantage of a non-development feature of your ADC. This feature is SSL termination. Since the ADC sits between your application and the Internet, you can tell your ADC to handle encryption for your application, and then not worry about it again. If your network team sets this up for all of your applications, then you have no worries that SSL is burning up your CPU cycles behind your back. Is there a negative? A minor one that most organizations (as noted above) just won’t see as an issue: from the ADC to your application, communications will happen in the clear. If your application is internal, this really isn’t a big deal at all. If you suspect a bad guy on your internal network, you have much more to worry about than whether communications between two boxes are in the clear. If your application is in the cloud, this concern is more realistic, but in that case SSL termination is limited in usefulness anyway, because you can’t know if the other apps on the same hardware are utilizing it. So you simply flick a switch on your ADC to turn on SSL termination, and then turn it off on your applications, and you have what the ADC industry calls “SSL offload”.
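To make the “flick a switch” idea concrete, here is a toy sketch of what SSL termination is doing on your behalf: accept TLS from clients, then relay plain traffic to the application. This is a bare-bones illustration – the backend address and certificate paths are placeholders, and a real ADC adds hardware crypto, certificate management, and far more robust connection handling:

```python
# Sketch: a toy TLS-terminating proxy. Clients speak HTTPS to us; the backend
# application sees plain HTTP and spends no cycles on crypto.
import socket
import ssl
import threading

BACKEND = ("127.0.0.1", 8080)   # placeholder: your plaintext app server

def pump(src, dst):
    """Relay bytes one way until either side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("cert.pem", "key.pem")   # placeholder cert/key paths

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        while True:
            client, _ = tls_listener.accept()   # TLS handshake happens here
            upstream = socket.create_connection(BACKEND)
            threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```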
If your ADC is purpose-built hardware (like our BIG-IP), then there is encryption hardware in the box and you don’t have to worry about the impact of overloading the ADC with SSL requests; it’s built to handle the load. If your ADC is software or a VM (like our BIG-IP LTM VE), then you’ll have to do a bit of testing to see what the tolerance level for SSL load is on the hardware you deployed it on – but you can ask the network staff to worry about all of that once you’ve started the conversation.

Is this the only security-based performance boost you can get? No, but it is the easy one. Everything on the Internet remains encrypted, but your application is not burdening the server’s CPU with encryption requests each time communications in or out occur.

The other easy one is TCP optimizations. This one requires less talk because it is completely out of the realm of the developer. Simply put, TCP is a well-designed protocol that sometimes gets bogged down communicating and carries a lot of overhead in those situations. Turning on TCP optimizations in your ADC can reduce that overhead – more or less, depending upon what is on the other end of the communications network – and improve perceived performance, which honestly is one of the most important measures of web application availability. By making the application seem to load faster, you’ve improved your customer experience, and nothing about your development has to change. TCP optimizations are not new, and thus the ones that are turned on when you activate the option on most ADCs are stable and won’t disrupt most applications. Of course you should run a short test cycle with them enabled, just to be certain, but I would be surprised if you saw any issues. They’re not unheard of, but they are very rare.

That’s enough for now, I think. I don’t want these to get so long that you wander off to develop some more. Keep doing what you do. And strive to keep your users from doing this:

Slow apps anger users

Technical Options. Opportunity and Confusion
One of the things that I love about technology is the fact that every time there is a problem, five solutions crop up to solve it. One of the things I hate about technology is the fact that every time there is a problem, five solutions crop up to solve it… and there are marketing geeks and pundits willing to tell you which one to choose before you even know that you have the problem.

I was out in Anaheim last week with F5’s rockstar salesforce, telling them about the Future of IT. Or trying to – you’ll have to ask them if I imparted any worthwhile information, since I haven’t seen evaluations of my presentations yet. One thing that struck me from the ensuing discussions, though, is that there are people in IT who know their stuff but are still confused about which solutions are best for long-distance problems. The sales team told me repeatedly that their customers sometimes are uncertain of their needs when talking about access control and acceleration. They of course got the F5-biased, product-laden answers; I’ll skip that for you all here and just mention that “F5 has products in each of these spaces – talk to your sales folks.” Though I’ve included the F5 product list in this article’s tags if you want an idea what to talk with sales people about.

Remote office communications are often slowed by the need for a WAN connection to the home datacenter. They also have more precise security requirements than your average Internet connection – you need to know that those accessing your applications from the remote office actually have the rights to do so, since most often remote office users have access to your core systems. So you need an SSL VPN and/or application-level authentication, along with something to make those connections speedy. Normally this would be application acceleration, but you might possibly also require WAN optimization if there is a lot of repetitive data being thrown across the line. If you’re not using an SSL VPN, then you need some form of secure tunnel over the line between remote office and datacenter – after all, locking down both ends does you no good if you’re unencrypted in the middle.

I didn’t get a picture of any of my sessions, so you’ll have to settle for this PowerPoint image.

Datacenter-to-datacenter communications are less user-intensive, and thus less browser-intensive, so the benefit of application acceleration is less, and the benefit of WAN optimization is commensurately greater. You still need secure connections, but perhaps not an SSL VPN – you might, it all depends upon how the secondary datacenter’s systems are managed. If they’re managed from the primary datacenter, then you probably want an SSL VPN just to put something between the ne’er-do-wells and your systems. Otherwise, secure, encrypted tunnels to transfer data will do the trick. Of course there are a lot of considerations here, and you know your systems better than anyone else, so consider how many remote logins the remote datacenter has, and that will give you an idea whether you need an SSL VPN.

For users hitting your website, the requirements are closer to those of a remote office, but not quite so stringent. You’ll still want an application firewall, and you’ll want to speed things up in a manner that won’t impact browsers negatively – faster is only useful if the page remains unchanged from your implementation. So application acceleration and a web application firewall should do the trick.
My experience with application acceleration is that you want a tool that has a lot of knobs and dials, because no two websites are the same. You’ll want to exclude some content from acceleration, tweak the settings on other content, etc. And with all of these solutions you’ll want frequent updates (particularly to firewalls) and a world-class service organization, because the products sit right in your line of production and you don’t want to waste a ton of time figuring out what’s going wrong or waiting for replacement parts.

We’re not the only vendor on the planet that offers you solutions in these spaces, so check out the market. Of course I think ours are the best – if I didn’t, I’d be off working where I DID think they were the best. But every organization is different; find a vendor (or some vendors) that suits your organization’s needs the best. And check to see how they support cloud, because it is coming to a datacenter near you.

As Network Speeds Increase, Focus Shifts
Someone said something interesting to me the other day, and they’re right: “at 10 Gig WAN connections with compression turned on, you’re not likely to fill the pipe; the key is to make certain you’re not the bottleneck.” (The “other day” is relative – I’ve been sitting on this post for a while.)

I saw this happen when 1 Gig LANs came about. Applications at the time were hard-pressed to actually use up a gigabit of bandwidth, so the focus became how slow the server and application were, whether the backplane on the switch was big enough to handle all that was plugged into it, etc. After this had gone on for a while, server hardware became so fast that we chucked application performance under the bus in most enterprises. And then those applications were running on the WAN, where we didn’t have really fast connections, and we started looking at optimizing those connections in lieu of optimizing the entire application.

But there is only so much that an application developer can do to speed network communications. Most of the work of network communications is out of their hands, and all they control is the amount of data they send over the pipe. Even then, if persistence is being maintained, even how much data they send may be dictated by the needs of the application. And if you are one of those organizations that has databases communicating over your WAN connection, that is completely outside the control of application developers. So the speed bottleneck became the WAN.

For every problem in high tech there is a purchasable solution, though, and several companies (including F5) offer solutions for both WAN optimization and application acceleration. The cool thing about solutions like BIG-IP WebAccelerator, EDGE Gateway, and WOM is that they speed application performance (WebAccelerator for web-based applications and WOM for more back-end applications or remote offices) while reducing the amount of data being sent over the wire – without requiring work on the part of developers. As I’ve said before: if developers can focus on solving the business problems at hand and not the technical issues that sit in the background, they are more productive.

Now that WAN connections are growing again, you would think we would be poised to shift the focus back to some other piece of the huge performance puzzle, but this stuff doesn’t happen in a vacuum, and there are other pressures growing on your WAN connection that keep the focus squarely on how much data it can pass. Those pressures are multi-core, virtualization, and cloud.

Multi-core increases the CPU cycles available to applications. To keep up, server vendors have been putting more NICs in every given server, increasing potential traffic on both the LAN and the WAN. With virtualization we have a ton more applications running on the network, and the comparative ease with which they can be brought online implies this trend will continue. Cloud not only does the same thing, but puts the instances on a remote network that requires trips back to your datacenter for integration and database access (yeah, there are exceptions; I would argue not many). Both of these trends mean that the size of your pipe out to the world is not only important but – because it is a monthly expense – must be maximized. By putting in both WAN optimization and web application acceleration, you stand a chance of keeping your pipe from growing to the size of the Alaska pipeline, and that means savings for you on a monthly basis.
You’ll also see that improved performance that is so elusive. Never mind that as soon as one bottleneck is cleared another will crop up; that comes with the territory. By clearing this one you’ll have improved performance until you hit the next plateau, and you can then focus on settling it, secure in the knowledge that the WAN is not the bottleneck. And with most technologies – certainly with those offered by F5 – you’ll have the graphs and data to show that the WAN link isn’t the bottleneck. Meanwhile, your developers will be busy solving business problems, and all of those cores won’t go to waste.

Photo of caribou walking alongside the pipeline, taken July 1998 by Stan Shebs

WAN Optimization is not Application Acceleration
Increasingly, WAN optimization solutions are adopting the application acceleration moniker, implying a focus that just does not exist. WAN optimization solutions are designed to improve the performance of the network, not applications, and while the former does beget improvements in the latter, true application acceleration solutions offer greater opportunity for improving efficiency and end-user experience, as well as aiding in consolidation efforts that result in a reduction in operating and capital expenditure costs.

WAN optimization solutions are, as their title implies, focused on the WAN – on the network. It is their task to improve the utilization of bandwidth, arrest the effects of network congestion, and apply quality of service policies to speed delivery of critical application data by respecting application prioritization. WAN optimization solutions achieve these goals primarily through the use of data de-duplication techniques. These techniques require a pair of devices, as the technology is most often based on a replacement algorithm that seeks out common blocks of data and replaces them with a smaller representative tag or indicator that is interpreted by the paired device, which reinserts the common block of data before passing it on to the receiver. The base techniques used by WAN optimization are thus highly effective in scenarios in which large files are transferred back and forth over a connection by one or many people, as large chunks of data are often repeated and the de-duplication process significantly reduces the amount of data traversing the WAN and thus improves performance. Most WAN optimization solutions specifically implement “application” level acceleration for protocols aimed at the transfer of files, such as CIFS and SAMBA.

But WAN optimization solutions do very little to aid in the improvement of application performance when the data being exchanged is highly volatile and already transferred in small chunks. Web applications today are highly dynamic and personalized, making it less likely that a WAN optimization solution will find chunks of duplicated data large enough to make the overhead of the replacement process beneficial to application performance. In fact, the process of examining small chunks of data for potential duplicates can introduce additional latency that actually degrades performance, much in the same way compression of small chunks of data can be detrimental to application performance. Too, WAN optimization solutions require deployment in pairs, which results in what little benefit these solutions offer for web applications being enjoyed only by end-users in a location served by a “remote” device. Customers, partners, and roaming employees will not see improvements in performance because they are not served by a “remote” device.

Application acceleration solutions, however, are not constrained by such limitations. Application acceleration solutions act at the higher layers of the stack, from TCP to HTTP, and attempt to improve performance through the optimization of protocols and the applications themselves. The optimization of TCP, for example, reduces the overhead associated with TCP session management on servers and improves the capacity and performance of the actual application, which in turn results in improved response times.
The understanding of HTTP, and of both the browser and the server, allows application acceleration solutions to employ techniques that leverage cached data and industry-standard compression to reduce the amount of data transferred without requiring a “remote” device. Application acceleration solutions are generally asymmetric, with some few also offering a symmetric mode. The former ensures that regardless of the location of the user, partner, or employee, some form of acceleration will provide a better end-user experience, while the latter employs more traditional WAN optimization-like functionality to increase the improvements for clients served by a “remote” device. Regardless of the mode, application acceleration solutions improve the efficiency of servers and applications, which results in higher capacity and can aid in consolidation efforts (fewer servers are required to serve the same user base with better performance) or simply lengthen the time available before additional investment in servers – and the associated licensing and management costs – must be made.

Both WAN optimization and application acceleration aim to improve application performance, but they are not the same solutions, nor do they even focus on the same types of applications. It is important to understand the type of application you want to accelerate before choosing a solution. If you are primarily concerned with office productivity applications and the exchange of large files (including backups, virtual images, etc…) between offices, then certainly WAN optimization solutions will provide greater benefits than application acceleration. If you’re concerned primarily about web application performance, then application acceleration solutions will offer the greatest boost in performance and efficiency gains. But do not confuse WAN optimization with application acceleration. There is a reason WAN optimization-focused providers have recently begun to partner with application acceleration and application delivery providers – because there is a marked difference between the two types of solutions, and a single offering that combines them both is not (yet) available.
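As a footnote, the paired-device de-duplication described above is easy to see in miniature. The sketch below is a deliberately simplified illustration – real WAN optimization appliances use rolling, variable-size chunking and carefully synchronized dictionaries rather than this fixed-block toy – but it shows why repeated large transfers shrink dramatically while small, volatile payloads gain nothing:

```python
# Sketch: toy fixed-block de-duplication between a paired sender and receiver.
import hashlib

CHUNK = 4096  # fixed block size; real appliances use variable-size chunks

sender_seen = {}    # tag -> chunk, built up on the "local" device
receiver_seen = {}  # the "remote" device builds the same dictionary

def dedup(data: bytes):
    """Sender side: replace previously seen chunks with short tags."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        tag = hashlib.sha256(chunk).digest()[:8]
        if tag in sender_seen:
            out.append(("tag", tag))        # 8 bytes instead of up to 4 KB
        else:
            sender_seen[tag] = chunk
            out.append(("raw", chunk))      # first sighting travels in full
    return out

def rehydrate(stream) -> bytes:
    """Receiver side: expand tags back into the original chunks."""
    data = bytearray()
    for kind, payload in stream:
        if kind == "tag":
            data += receiver_seen[payload]
        else:
            receiver_seen[hashlib.sha256(payload).digest()[:8]] = payload
            data += payload
    return bytes(data)

big_file = b"quarterly-report " * 100_000
first = dedup(big_file)    # everything travels raw the first time
second = dedup(big_file)   # same file again: nothing but 8-byte tags
assert rehydrate(first) == rehydrate(second) == big_file
```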