compression
Selective Compression on BIG-IP
BIG-IP provides Local Traffic Policies that simplify the way you manage traffic associated with a virtual server. You can use a local traffic policy to apply selective compression to types of content that benefit from it, such as HTML, XML, and CSS stylesheets. Compressing these file types can yield noticeable performance improvements, especially across slow connections. You can easily configure your BIG-IP system to use a simple Local Traffic Policy that selectively compresses these file types. To use a policy in BIG-IP v12, you create and configure a draft policy, publish that policy, and then associate it with a virtual server.

Alright, let’s log into a BIG-IP. The first thing you’ll need to do is create a draft policy. On the main menu select Local Traffic > Policies > Policy List and then click the Create or + button. This takes us to the create policy configuration screen. We’ll name the policy SelectiveCompression, add a description like ‘This policy compresses file types,’ and leave the Strategy at the default of Execute First matching rule, so the policy uses the first rule that matches the request. Click Create Policy, which saves the policy to the policies list. When saved, the Rules search field appears but contains no rules.

Click Create under Rules. This brings us to the rule’s General Properties area. We’ll give this rule a name (CompressFiles), and the first settings we need to configure are the conditions that must match the request. Click the + button to add a condition. The files we want to compress are identified by specific content types in the Content-Type HTTP header, so we choose HTTP Header and select Content-Type in the Named field. Select ‘begins with’ as the operator, enter ‘text/’ as the value, and apply the condition at response time. We’ll add another condition to manage CPU usage: select CPU Usage from the list with a duration of 1 minute and a conditional operator of ‘less than or equal to’ 5 as the usage level, again at response time. Next, under Do the following, click the create + button to add the action taken when those conditions are met. Here, we’ll enable compression at response time. Click Save. The draft policy screen now shows the General Properties and a list of rules. Click Save Draft.

Now we need to publish the draft policy and associate it with a virtual server. Select the policy and click Publish. Next, on the main menu click Local Traffic > Virtual Servers > Virtual Server List and click the name of the virtual server you’d like to associate with the policy. On the menu bar click Resources, and for Policies click Manage. Move SelectiveCompression to the Enabled list and click Finished. The SelectiveCompression policy now appears in the Policies list and is associated with the chosen virtual server, which will compress the file types you specified. Congrats! You’ve now added a local traffic policy for selective compression! You can also watch the full video demo thanks to our TechPubs team.
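For readers who find logic easier to follow than GUI steps, here is a minimal sketch of the same decision the CompressFiles rule expresses. It is plain Python, not BIG-IP configuration syntax; the text/ prefix and the 5% one-minute CPU threshold are the values from the walkthrough above.

```python
def should_compress(content_type: str, cpu_usage_1min: float) -> bool:
    """Mirror of the CompressFiles rule: compress text/* responses
    only while 1-minute CPU usage is at or below 5 percent."""
    is_text = content_type.lower().startswith("text/")
    cpu_ok = cpu_usage_1min <= 5.0
    return is_text and cpu_ok

# Illustrative checks
print(should_compress("text/html", 3.2))   # True  -> enable compression
print(should_compress("text/css", 12.0))   # False -> CPU too busy
print(should_compress("image/png", 1.0))   # False -> already-compressed content type
```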
I am wondering why not all websites enabling this great feature GZIP?

Understanding the impact of compression on server resources and application performance

While doing some research on a related topic, I ran across this question and thought “that deserves an answer,” because it certainly seems like a no-brainer. If you want to decrease bandwidth – which subsequently decreases response time and improves application performance – turn on compression. After all, a large portion of web site traffic is text-based: CSS, JavaScript, HTML, RSS feeds – which means it will greatly benefit from compression. Typical GZIP compression affords at least a 3:1 reduction in size, with hardware-assisted compression yielding an average of 4:1 compression ratios. That can dramatically affect the response time of applications. As I said, seems like a no-brainer.

Here’s the rub: turning on compression often has a negative impact on capacity because it is CPU-bound, and under certain conditions it can actually degrade performance because the latency inherent in compressing the data exceeds the time saved on the network over which the data will be delivered. Here comes the science.

IMPACT ON CPU UTILIZATION

Compression via GZIP is CPU bound. It requires a lot more CPU than you might think. The larger the file being compressed, the more CPU resources are required. Consider for a moment what compression is really doing: it’s finding all similar patterns and replacing them with representations (symbols, indexes into a table, etc…) of a single instance of the text. So it makes sense that the larger a file is, the more resources – RAM and CPU – are required to execute such a process. Of course, the larger the file is, the more benefit you see from compression in terms of bandwidth and improvement in response time. It’s kind of a Catch-22: you want the benefits, but you end up paying in terms of capacity. If CPU and RAM are being chewed up by the compression process, then the server can handle fewer requests and fewer concurrent users.

You don’t have to take my word for it – there are quite a few examples of testing done on web servers and compression that illustrate the impact on CPU utilization:

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications

They all essentially say the same thing: if you’re serving dynamic content (or static content without local caching enabled on the web server), there is a significant negative impact on CPU utilization when enabling GZIP/compression for web applications. Given the exceedingly dynamic nature of Web 2.0 applications, the use of AJAX and similar technologies, and the data-driven world in which we live today, that means there are very few types of applications running on web servers for which compression will not negatively impact the capacity of the web server.

In case you don’t (want || have time) to slog through the above articles, here’s a quick recap:

              File Size   Bandwidth decrease   CPU utilization increase
  IIS 7.0        10KB            55%                     4x
                 50KB            67%                    20x
                100KB            64%                    30x
  Apache 2.2     10KB            55%                     4x
                 50KB            65%                    10x
                100KB            63%                    30x

It’s interesting to note that IIS 7.0 and Apache 2.2 mod_deflate have essentially the same performance characteristics. This data falls in line with the aforementioned Intel report on HTTP compression, which noted that CPU utilization increased 25-35% when compression was enabled.
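If you want a feel for the trade-off on your own content, a few lines of Python are enough to reproduce the shape of the numbers above. This is a minimal sketch, not a benchmark: the payload is synthetic repetitive text, and real ratios and timings will vary with content and hardware.

```python
import time
import zlib

# Synthetic, text-like payload; real pages compress differently.
sample = b"<div class='item'>hello compression world</div>\n" * 4096

for size in (10_000, 50_000, 100_000):
    data = sample[:size]
    start = time.perf_counter()
    compressed = zlib.compress(data, 6)   # level 6 is the common gzip default
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"{size // 1000}KB: ratio {ratio:.1f}:1, "
          f"saved {100 * (1 - len(compressed) / len(data)):.0f}% bandwidth, "
          f"CPU time {elapsed * 1000:.2f} ms")
```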
So essentially, when you enable compression you are trading its benefits – bandwidth reduction, response time improvement – for a reduction in capacity. You’re robbing Peter to pay Paul, because instead of paying for bandwidth you’re paying for more servers to handle the same load.

THE MYTH OF IMPROVED RESPONSE TIME

One of the reasons you’d want to compress content is to improve response time by decreasing the total number of packets that have to traverse the wire. This is a necessity when transferring content via a WAN, but it can actually decrease performance for application delivery over the LAN, because the time it takes to compress the content and then deliver it is greater than the time to just transfer the original file over the LAN. The speed of the network over which the content is being delivered is highly relevant to whether compression yields any benefit to response time. The increasing consumption of CPU resources as volume increases also hampers the server’s ability to process and respond, which means an increase in application response time – not the desired result.

Maybe you’re thinking, “I’ll just get more CPU then. After all, there’s like a billion-core servers out there, that ought to solve the problem!” Compression algorithms, like FTP, are greedy. FTP will, if allowed, consume as much bandwidth as possible in an effort to transfer data as quickly as possible. Compression will do the same thing to CPU resources: consume as much as it can to perform its task as quickly as possible. Eventually, yes, you’ll find a machine with enough cores to support both compression and capacity needs, but at what cost? It may well be more financially efficient to invest in a better solution (one that also brings additional benefits to the table) than to just keep increasing the size of the server. But hey, it’s your data, you need to do what you need to do.

The size of the content also has an impact on whether compression will benefit application performance. Consider that the goal of compression is to decrease the number of packets being transferred to the client. Generally speaking, the standard MTU for most networks is 1500 bytes, because that’s what works best with Ethernet and IP. That leaves roughly 1400 bytes per packet available for data. So if the content is 1400 bytes or less, you get absolutely no benefit out of compression, because it’s already going to take only one packet to transfer; you can’t really send half-packets, after all. And in some networks, packets that are too small can actually freak out some network devices, because they’re optimized to handle the large content being served today – which means many full packets.

TO COMPRESS OR NOT COMPRESS

There is real benefit to compression; it’s one of the core techniques used by both application acceleration and WAN application delivery services to improve performance and reduce costs. It can drastically reduce the size of data, and especially when you might be paying by the MB or GB transferred (such as applications deployed in cloud environments), this is a very important feature to consider. But if you end up paying for additional servers (or instances in a cloud) to make up for the capacity lost to higher CPU utilization because of that compression, you’ve pretty much ended up right where you started: no financial benefit at all. The question is not whether you should compress content, it’s when and where and what you should compress.
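The 1400-byte observation above is easy to sanity-check: what matters for packet count is how many ~1400-byte payloads a response occupies before and after compression. A quick sketch, assuming a 1400-byte usable payload per packet and using zlib as a stand-in for gzip:

```python
import math
import zlib

PAYLOAD_PER_PACKET = 1400  # assumed usable bytes per packet (1500 MTU minus headers)

def packets(n_bytes: int) -> int:
    return max(1, math.ceil(n_bytes / PAYLOAD_PER_PACKET))

small = b"{'status': 'ok', 'items': []}" * 30              # under 1400 bytes
large = b"<p>some repetitive page content</p>\n" * 3000    # ~100 KB

for name, body in (("small response", small), ("large response", large)):
    comp = zlib.compress(body)
    print(f"{name}: {len(body)} -> {len(comp)} bytes, "
          f"packets {packets(len(body))} -> {packets(len(comp))}")

# The small response occupies one packet either way; only the large one
# actually ships in fewer packets after compression.
```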
The answer to “should I compress this content?” almost always needs to be based on a set of criteria that require context-awareness – the ability to factor the content, the network, the application, and the user into the decision-making process. If the user is on a mobile device and the size of the content is greater than 2000 bytes and the type of content is text-based and … It is this type of intelligence that is required to apply compression effectively, so that the greatest benefits – reduced costs, better application performance, and maximum use of server resources – are achieved. Any implementation that can’t factor all these variables into the decision to compress or not is not an optimal solution; it’s just guessing, or blindly applying the same policy to all kinds of content. Such implementations effectively defeat the purpose of employing compression in the first place.

That’s why the answer to “where” is almost always “on the load balancer or application delivery controller.” Not only are such devices capable of factoring in all the necessary variables, they also generally employ specialized hardware designed to speed up the compression process. By offloading compression to an application delivery device, you can reap the benefits without sacrificing performance or CPU resources.

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications
The Context-Aware Cloud
The Revolution Continues: Let Them Eat Cloud
Nerd Rage
WILS: How can a load balancer keep a single server site available?

Most people don’t start thinking they need a “load balancer” until they need a second server. But even if you’ve only got one server, a “load balancer” can help with availability and performance, and make the later transition to a multiple-server site a whole lot easier. Before we reveal the secret sauce, let me first say that if you have only one server and the application crashes or the network stack flakes out, you’re out of luck. There are a lot of things load balancers/application delivery controllers can do with only one server, but automagically fixing application crashes or network connectivity issues ain’t in the list. If those are your concerns, then you really do need a second server. But if you’re just worried about standing up to the load, then a load balancer in front of even a single server can definitely give you a boost.
The Order of (Network) Operations

Thought those math rules you learned in 6th grade were useless? Think again… some are more applicable to the architecture of your data center than you might think. Remember back when you were in the 6th grade, learning about the order of operations in math class? You might recall that the order in which mathematical operators are applied can have a significant impact on the result. That’s why we learned there’s an order of operations – a set of rules – that we follow to ensure we always get the correct answer when evaluating mathematical expressions.

Rule 1: First perform any calculations inside parentheses.
Rule 2: Next perform all multiplications and divisions, working from left to right.
Rule 3: Lastly, perform all additions and subtractions, working from left to right.

Similarly, the order in which network and application delivery operations are applied can dramatically impact the performance and efficiency of application delivery – no matter where those applications reside.
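One concrete instance of order-sensitivity in a delivery chain – a classic, though not one the excerpt above spells out – is that compressing before encrypting works, while encrypting before compressing does not, because ciphertext has no patterns left to find. A minimal sketch, using zlib and a toy XOR stream cipher purely as a stand-in for real encryption:

```python
import os
import zlib

data = b"<html><body>" + b"<p>order of operations matters</p>" * 2000 + b"</body></html>"

def toy_encrypt(plaintext: bytes) -> bytes:
    """Toy XOR stream cipher -- for illustration only, not real security."""
    keystream = os.urandom(len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

compress_then_encrypt = toy_encrypt(zlib.compress(data))
encrypt_then_compress = zlib.compress(toy_encrypt(data))

print(f"original:                {len(data)} bytes")
print(f"compress, then encrypt:  {len(compress_then_encrypt)} bytes")
print(f"encrypt, then compress:  {len(encrypt_then_compress)} bytes (no savings)")
```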
Hardware Acceleration Critical Component for Cost-Conscious Data Centers

Better performance, reduced costs, and a smaller data center footprint are not niche-market interests. The fast-paced world of finance is taking a hard look at the benefits of hardware acceleration for performance and finding additional benefits, such as a reduction in rack space via consolidation of server hardware. Rich Miller over at Data Center Knowledge writes:

Hardware acceleration addresses computationally-intensive software processes that task the CPU, incorporating special-purpose hardware such as a graphics processing unit (GPUs) or field programmable gate array (FPGA) to shift parallel software functions to the hardware level. … “The value proposition is not just to sustain speed at peak but also a reduction in rack space at the data center,” Adam Honore, senior analyst at Aite Group, told WS&T. Depending on the specific application, Honore said a hardware appliance can reduce the amount of rack space by 10-to-1 or 20-to-1 in certain market data and some options events. Thus, a trend that bears watching for data center providers.

But confining the benefits associated with hardware acceleration to data center providers or the financial industry is short-sighted, because similar benefits can be achieved by any data center in any industry looking for cost-cutting technologies. And today, that’s just about … everyone.

USING SSL? YOU CAN BENEFIT FROM HARDWARE ACCELERATION

Now maybe I’m just too into application delivery and hardware and all its associated benefits, but the idea of hardware acceleration and offloading of certain computationally expensive tasks – encryption, decryption, TCP session management, and so on – seems pretty straightforward, and not exclusive to financial markets. Any organization using SSL, for example, can see benefits in both performance and a reduction in costs through consolidation by offloading the responsibility for SSL to an external device that employs some form of hardware-based acceleration for the computationally expensive functions. This is the same concept used by routers and switches, and why they employ FPGAs and ASICs to perform network processing: they’re faster and capable of much greater speeds than their software predecessors. Unlike routers and switches, however, solutions capable of hardware-based acceleration provide the added benefit of reducing the utilization on servers while improving the speed at which such computations can be executed.

Reducing the utilization on servers means increased capacity on each server, which means you can either eliminate a number of servers or avoid investing in even more of them. Both strategies reduce costs by offloading the expensive functionality. Combine hardware-based acceleration of SSL operations with hardware-based acceleration of compression and you can offload yet another computationally expensive piece of functionality to an external device, which again saves resources on the server, increases its capacity, and improves the overall response time for transfers requiring compression. Now put that functionality onto your load balancer – a fairly logical place in your architecture to apply it, both ingress and egress – and what you’ve got is an application delivery controller.
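To make the consolidation argument concrete, here is a back-of-the-envelope sketch. Every number in it – per-request CPU milliseconds, the share spent on TLS and compression, the target load – is an illustrative assumption, not a measurement from this article; the point is only the shape of the calculation.

```python
import math

# Illustrative assumptions (replace with your own measurements):
cpu_ms_per_request = 20.0         # total CPU time per request on the web server
tls_and_gzip_share = 0.40         # fraction of that spent on TLS + compression
target_rps = 5000                 # aggregate load to be served
cores_per_server = 8
usable_cpu_fraction = 0.70        # headroom you leave per server

def servers_needed(cpu_ms: float) -> int:
    rps_per_core = 1000.0 / cpu_ms
    rps_per_server = rps_per_core * cores_per_server * usable_cpu_fraction
    return math.ceil(target_rps / rps_per_server)

before = servers_needed(cpu_ms_per_request)
after = servers_needed(cpu_ms_per_request * (1 - tls_and_gzip_share))
print(f"servers needed, crypto/compression on the servers: {before}")
print(f"servers needed, crypto/compression offloaded:      {after}")
```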
Add to the hardware-based acceleration of SSL and compression an optimized TCP stack that reuses TCP connections and you not only increase performance but decrease utilization on the server yet again, because it’s handling fewer connections and not going through the tedium of opening and closing connections at a fairly constant rate.

NOT JUST FOR ADMINS and NETWORK ARCHITECTS

Developers and architects, too, can apply the benefits of hardware-accelerated services to their applications and frameworks. Cookie encryption, for example, is a fairly standard method of protecting web applications against cookie-based attacks such as cookie tampering and poisoning. Encrypting cookies mitigates that risk by ensuring that cookies stored on clients are not human-readable. But encryption and decryption of cookies can be expensive; it often comes at the cost of application performance and, if not implemented as part of the original design, can cost the time and money necessary to add the feature to the application. Leveraging the network-side scripting capabilities of application delivery controllers removes the need to rewrite the application by allowing cookies to be encrypted and decrypted on the application delivery controller. By moving the task of (de|en)cryption to the application delivery controller, the expensive computations required by the process are accelerated in hardware and will not negatively impact the performance of the application. If the functionality is moved from within the application to an application delivery controller, the resulting shift in computational burden can reduce utilization on the server – particularly in heavily used applications or those with a larger set of cookies – which, like other reductions in server utilization, can lead to the ability to consolidate or retire servers in the data center.

HARDWARE ACCELERATION REDUCES COSTS, INCREASES EFFICIENCY

By the time you get finished, the case for consolidating servers seems fairly obvious: you’ve offloaded so much compute-intensive functionality that you can cut the number of servers you need by a considerable amount, and either retire them (decreasing power, cooling, and rack space in the process) or re-provision them for other projects (decreasing investment and acquisition costs for those projects while maintaining current operating expenses rather than increasing them). Basically, if you need load balancing you’ll benefit both technically and financially from investing in an application delivery controller rather than a traditional simple load balancer. And if you don’t need load balancing, you can still benefit simply by employing the offloading capabilities inherent in platforms endowed with hardware-assisted acceleration technologies. The increased efficiency of servers resulting from hardware-assisted offload of computationally expensive operations can be applied to any data center and any application in any industry.
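For readers who want to see what the cookie-encryption work itself looks like – the work being moved off the application servers in the scenario above – here is a minimal sketch in Python using the third-party cryptography package’s Fernet recipe. It illustrates the computation, not F5’s network-side scripting implementation.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, a secret held by whatever does the (de|en)cryption
f = Fernet(key)

session_cookie = b"user=alice; role=admin; cart=42"

# What the client actually stores: an opaque, tamper-evident token.
encrypted = f.encrypt(session_cookie)
print("Set-Cookie: session=" + encrypted.decode())

# On the way back in, decrypt (and implicitly verify) before the app sees it.
print("decrypted:", f.decrypt(encrypted).decode())
```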
Deduplication and Compression – Exactly the same, but different.

One day many years ago, Lori’s and my oldest son held up two sheets of paper and said, “These two things are exactly the same, but different!” Now, he’s a very bright individual; he was just young, and didn’t even get how incongruous the statement was. Being a fun-loving family that likes to tease each other on occasion, we of course have not yet let him live it down. It was honestly more than a decade ago, but all is fair – he doesn’t let Lori live down something funny that she did before he was born. It is all in good fun, of course. Why am I bringing up this family story? Because that phrase does come to mind when you start talking about deduplication and compression. Highly complementary and very similar, they are pretty much “exactly the same, but different.” Since these technologies are both used pretty heavily in WAN optimization, and are growing in use on storage products, the topic intrigued me.

To get this out of the way: at F5, compression is built into the BIG-IP family as a feature of the core BIG-IP LTM product, and deduplication is an added layer implemented over BIG-IP LTM on the BIG-IP WAN Optimization Module (WOM). Other vendors have similar but varied (there goes a variant of that phrase again) implementation details. Before we delve too deeply into this topic, though, what caught my attention and started me pondering the whys was that F5’s deduplication is applied before compression, and reversing the order changes the performance characteristics. I love a good puzzle, and while the fact that one should come before the other was no surprise, I wanted to know why the order is what it is, and what the impact of reversing them in processing might be. So I started working to understand the details of implementation for these two technologies – not from an F5 perspective, though that is certainly where I started, but to understand how they interact and complement each other. While much of this discussion also applies to in-place compression and deduplication such as that used on many storage devices, some of it does not, so assume that I am talking about networking, specifically WAN networking, throughout this blog.

At the very highest level, deduplication and compression are the same thing. They both look for ways to shrink your dataset before passing it along. After that, it gets a bit more complex. If it were really that simple, after all, we wouldn’t call them two different things. Well, okay, we might – IT has a way of taking competing standards, product categories, even jobs and lumping them together under the same name. But still, they wouldn’t warrant two different names in the same product, as F5 does with BIG-IP WOM. The thing is that compression can apply transformations to data to shrink it, and it also looks for small groupings of repetitive byte patterns and replaces them, while deduplication looks for larger groupings of repetitive byte patterns and replaces them. In the implementation you’ll see on BIG-IP WOM, deduplication looks for larger byte patterns repeated across all streams, while compression applies transformations to the data and, when removing duplication, only looks for smaller combinations within a single stream. The net result? The two are very complementary, but if you run compression before deduplication, compression will find a whole collection of small repeating byte patterns, and between that and the transformations, deduplication will find nothing – making compression work harder and deduplication spin its wheels.
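A toy pipeline makes the ordering argument visible. The sketch below is a simplification of what any dedupe engine does (fixed-size chunks and SHA-256 fingerprints rather than BIG-IP WOM’s actual algorithm): deduplicate across “streams” first, then compress only the unique bytes that remain.

```python
import hashlib
import zlib

CHUNK = 2048  # assumed chunk size; real engines use larger or variable-size chunks

def dedupe(streams):
    """Replace repeated chunks (across all streams) with references to one copy."""
    store, refs = {}, []
    for stream in streams:
        for i in range(0, len(stream), CHUNK):
            chunk = stream[i:i + CHUNK]
            digest = hashlib.sha256(chunk).digest()
            store.setdefault(digest, chunk)   # keep only the first copy seen
            refs.append(digest)
    return store, refs

# Two "streams" sharing a large block of identical data (think: same attachment sent twice).
shared = b"Q3 sales deck, slide contents repeated verbatim... " * 800
streams = [shared + b"cover letter for Alice", shared + b"cover letter for Bob"]

store, refs = dedupe(streams)
unique_bytes = b"".join(store.values())
compressed_unique = zlib.compress(unique_bytes)

total_in = sum(len(s) for s in streams)
print(f"input: {total_in} bytes, after dedupe: {len(unique_bytes)} bytes, "
      f"after compression: {len(compressed_unique)} bytes "
      f"(+{len(refs) * 32} bytes of chunk references)")
```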
There are other differences. Because deduplication deals with large runs of repetitive data (I believe that in BIG-IP the minimum size is over a K), it uses some form of caching to hold the patterns that duplicates can match against, and the larger the cache, the more strings of bytes you have to compare to. This introduces some fun around where the cache should be stored: in memory is fast but limited in size; on flash disk is fast with greater capacity, but expensive; and on disk is slow but has a huge advantage in size. Good deduplication engines can support all three and are thus customizable to what your organization needs and can afford.

Some workloads just won’t benefit from one but will get a huge benefit from the other, and the extremes are good examples of this phenomenon. If you have a lot of in-the-stream repetitive data that is too small for deduplication to pick up, and little or no cross-stream duplication, then deduplication will be of limited use to you, and the act of running through the dedupe engine might actually degrade performance a negligible amount – of course, everything is algorithm-dependent, so depending upon your vendor it might degrade performance a large amount, too. On the other extreme, if you have a lot of large-byte-count duplication across streams but very little within a given stream, deduplication is going to save your day, while compression will, at best, offer you a little benefit. So yes, they’re exactly the same from the 50,000-foot view, but very, very different from the benefits and use cases view. And they’re very complementary, giving you more bang for the buck.
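The memory/flash/disk trade-off above is really about how many fingerprints you can afford to keep around. A hypothetical in-memory tier with a least-recently-used eviction policy might look like the sketch below; the capacity number is arbitrary and LRU is just one reasonable choice, not how BIG-IP WOM actually manages its store.

```python
from collections import OrderedDict

class FingerprintCache:
    """Bounded fingerprint -> chunk store with least-recently-used eviction."""

    def __init__(self, max_entries: int = 100_000):
        self.max_entries = max_entries
        self._entries = OrderedDict()

    def lookup(self, digest: bytes):
        chunk = self._entries.get(digest)
        if chunk is not None:
            self._entries.move_to_end(digest)   # mark as recently used
        return chunk

    def insert(self, digest: bytes, chunk: bytes) -> None:
        self._entries[digest] = chunk
        self._entries.move_to_end(digest)
        if len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)   # evict the least recently used entry

# Tiny illustration of the eviction behavior:
cache = FingerprintCache(max_entries=2)
cache.insert(b"aa", b"chunk-1")
cache.insert(b"bb", b"chunk-2")
cache.lookup(b"aa")                  # touch "aa" so "bb" becomes the eviction candidate
cache.insert(b"cc", b"chunk-3")      # evicts "bb"
```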
True or False: Application acceleration solutions teach developers to write inefficient code

It has been suggested that the use of application acceleration solutions as a means to improve application performance would result in programmers writing less efficient code. In a comment on “The House that Load Balancing Built,” a reader replies:

Not only will it cause the application to grow in cost and complexity, it's teaching new and old programmers to not write efficient code and rely on other products and services on [sic] thier behalf. I.E. Why write security into the app, when the ADC can do that for me. Why write code that executes faster, the ADC will do that for me, etc., etc.

While no one can control whether a programmer writes “fast” code, the truth is that application acceleration solutions do not affect the execution of code in any way. A poorly constructed loop will run just as slowly with or without an application acceleration solution in place. Complex mathematical calculations will execute at the same speed regardless of the external systems that may be in place to assist in improving application performance. The answer, unequivocally, is that the presence or absence of an application acceleration solution has no impact on the application developer, because it does nothing to affect the internal execution of written code. If you answered false, you got the answer right.

The question has to be, then, just what does an application acceleration solution do that improves performance? If it isn’t making the application logic execute faster, what’s the point? It’s a good question, and one that deserves an answer. Application acceleration is part of a solution we call “application delivery.” Application delivery focuses on improving application performance through optimization of the use and behavior of transport (TCP) and application transport (HTTP/S) protocols, offloading certain functions from the application that are more efficiently handled by an external, often hardware-based system, and accelerating the delivery of the application data.

OPTIMIZATION

Application acceleration improves performance by understanding how these protocols (TCP, HTTP/S) interact across a WAN or LAN and acting on that understanding to improve their overall performance. There are a large number of performance-enhancing RFCs (standards) around TCP that are usually implemented by application acceleration solutions:

Delayed and Selective Acknowledgments (RFC 2018)
Explicit Congestion Notification (RFC 3168)
Limited and Fast Re-Transmits (RFC 3042 and RFC 2582)
Adaptive Initial Congestion Windows (RFC 3390)
Slow Start with Congestion Avoidance (RFC 2581)
TCP Slow Start (RFC 3390)
TimeStamps and Windows Scaling (RFC 1323)

All of these RFCs deal with TCP and therefore have very little to do with the code developers create. Most developers code within a framework that hides the details of TCP and HTTP connection management from them. It is the rare programmer today who writes code to directly interact with HTTP connections, and even rarer to find one coding directly at the TCP socket layer. The execution of code written by the developer takes just as long regardless of the implementation or lack of implementation of these RFCs. The application acceleration solution improves the performance of the delivery of the application data over TCP and HTTP, which increases the performance of the application as seen from the user’s point of view.
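To underline how little of that list application code ever touches: the knobs a developer can reach from a typical program are a handful of per-socket options, while the RFC behaviors listed above live in the operating system’s TCP stack or in the ADC sitting in the path. A small sketch of what is reachable from Python:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# About the extent of TCP tuning visible to application code:
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)         # disable Nagle
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)   # request a bigger send buffer

print("TCP_NODELAY:", sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
print("SO_SNDBUF:  ", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))

# Selective ACKs, ECN, window scaling, initial congestion windows, etc. are
# negotiated and enforced by the kernel (or by a proxy such as an ADC),
# not by anything written here.
sock.close()
```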
OFFLOAD

Offloading compute-intensive processing from application and web servers improves performance by reducing the consumption of CPU and memory required to perform those tasks. SSL and other encryption/decryption functions (cookie security, for example) are computationally expensive and require additional CPU and memory on the server. The reason offloading these functions to an application delivery controller or stand-alone application acceleration solution improves application performance is that it frees CPU and memory on the server and allows them to be dedicated to the application. If the application or web server does not need to perform these tasks, it saves CPU cycles that would otherwise be spent on them. Those cycles can be used by the application and thus increase its performance.

Also beneficial is the way in which application delivery controllers manage TCP connections made to the web or application server. Opening and closing TCP connections takes time, and the time required is not something a developer – coding within a framework – can affect. Application acceleration solutions proxy connections for the client and thereby reduce the number of TCP connections required on the web or application server, as well as the frequency with which those connections need to be opened and closed. By reducing the number of connections and how often they are opened and closed, application performance increases because the server is not spending time on connection management – a necessary part of the performance equation, but not one directly affected by anything the developer does in his or her code. The commenter believes that an application delivery controller implementation should be an afterthought. However, the ability of modern application delivery controllers to offload certain application logic functions, such as cookie security and HTTP header manipulation, in a centralized, optimized manner through network-side scripting can be a performance benefit as well as a way to address browser-specific quirks, and therefore should be seriously considered during the development process.

ACCELERATION

Finally, application acceleration solutions improve performance through the use of caching and compression technologies. Caching includes not just server-side caching, but the intelligent use of the client (usually the browser) cache to reduce the number of requests that must be handled by the server. By reducing the number of requests the server is responding to, the web or application server is less burdened in terms of managing TCP and HTTP sessions and state, and has more CPU cycles and memory that can be dedicated to executing the application. Compression, whether using traditional industry-standard web-based compression (GZip) or WAN-focused data de-duplication techniques, decreases the amount of data that must be transferred from the server to the client. Decreasing traffic (bandwidth) results in fewer packets traversing the network, which results in quicker delivery to the user. This makes it appear that the application is performing faster than it is, simply because the data arrived sooner.

Of all these techniques, the only one that could possibly contribute to the delinquency of developers is caching. This is because application acceleration caching features act on HTTP caching headers that can be set by the developer, but rarely are.
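For reference, setting those headers from application code is a one-liner per response. A minimal, framework-agnostic sketch; the helper name and the one-day max-age are arbitrary choices for illustration:

```python
import time
from email.utils import formatdate

def cache_headers(max_age_seconds: int = 86400):
    """Build Cache-Control and Expires headers for a cacheable response."""
    return [
        ("Cache-Control", f"public, max-age={max_age_seconds}"),
        ("Expires", formatdate(time.time() + max_age_seconds, usegmt=True)),
    ]

# e.g. attach to a static asset response in whatever framework you use:
for name, value in cache_headers():
    print(f"{name}: {value}")
```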
These headers can also be configured by the web or application server administrator, but rarely are in a way that makes sense, because most content today is generated dynamically and is rarely static – even though individual components inside the dynamically generated page may in fact be very static (CSS, JavaScript, images, headers, footers, etc…). However, the methods through which caching (pragma) headers are set are fairly standard, and the actual code is usually handled by the framework in which the application is developed, meaning the developer ultimately cannot affect the efficiency of this mechanism because it was implemented by someone else.

The point of the comment was likely broader, however. I am fairly certain that the commenter meant to imply that if developers know the performance of the application they are developing will be accelerated by an external solution, they will not be as concerned about writing efficient code. That’s a layer 8 (people) problem that isn’t peculiar to application delivery solutions at all. If a developer is going to write inefficient code, there’s a problem – but that problem isn’t with the solutions implemented to improve the end-user experience or scalability; it’s a problem with the developer. No technology can fix that.
The “All of the Above” Approach to Improving Application Performance

#ado #fasterapp #stirling Carnegie Mellon testing of ADO solutions answers the age-old question: less filling or tastes great?

You probably recall the old “Tastes Great vs. Less Filling” advertisements from years ago – the ones that always concluded that the beer in question was not one or the other, but both. Whenever there are two ostensibly competing technologies attempting to solve the same problem, we run into the same style of argument. This time, in the SPDY versus Web Acceleration debate, we’re inevitably going to arrive at the conclusion that it’s both less filling and tastes great.

SPDY versus Web Acceleration

In general, what may appear on the surface to be competing technologies are actually complementary. Testing by Carnegie Mellon supports this conclusion, showing marked improvements in web application performance when both SPDY and web acceleration techniques are used together. That’s primarily because web application traffic shows a similar pattern across modern, interactive Web 2.0 sites: big, fat initial pages followed by a steady stream of small requests and a variety of response sizes, typically small to medium in content length. We know from experience and testing that web acceleration techniques like compression provide the greatest improvements in performance when acting upon medium-to-large responses, though actual improvement rates depend highly on the network over which data is being exchanged. We know that compression can actually be detrimental to performance when responses are small (in the 1K range) and being transferred over a LAN. That’s because the processing time incurred to compress that data is greater than the time to traverse the network. But when used to compress larger responses traversing congested or bandwidth-constrained connections, compression is a boon to performance. It’s less filling.

SPDY, though relatively new on the scene, is the rising star of web acceleration. Its primary purpose is to optimize the application layer exchanges that typically occur via HTTP (requests and responses) by streamlining connection management (SPDY uses only one connection per client-host), dramatically reducing header sizes, and introducing asynchronicity along with prioritization. It tastes great. What Carnegie Mellon’s testing shows is that when you combine the two, you get the best results, because each improves the performance of specific data exchanges that occur over the life of a user interaction.

HERE COMES the DATA

The testing was specifically designed to measure the impact of each of the technologies separately and then together. For the web acceleration functionality they chose to employ BoostEdge (a software ADC), though one can reasonably expect similar results from other ADCs provided they offer the same web acceleration and optimization capabilities, which is generally a good bet in today’s market. The testing specifically looked at two approaches:

Two of the most promising software approaches are (a) content optimization and compression, and (b) optimizing network protocols. Since network protocol optimization and data optimization operate at different levels, there is an opportunity for improvement beyond what can be achieved by either of the approaches individually. In this paper, we report on the performance benefits observed by following a unified approach, using both network protocol and data optimization techniques, and the inherent benefits in network performance by combining these approaches in to a single solution.
-- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012

The results should not be surprising – when SPDY is combined with ADC optimization technologies, the result is both less filling and it tastes great.

When tested in various combinations, we find that the effects are more or less additive and that the maximum improvement is gained by using BoostEdge and SPDY together. Interestingly, the two approaches are also complimentary; i.e., in situations where data predominates (i.e. “heavy” data, and fewer network requests), BoostEdge provides a larger boost via its data optimization capabilities and in cases where the data is relatively small, or “light”, but there are many network transactions required, SPDY provides an increased proportion of the overall boost. The general effect is that relative level of improvement remains consistent over various types of websites.

-- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012

NOT THE WHOLE STORY

Interestingly, the testing results show there is room for even greater improvement. The paper notes that comparisons include “an 11% cost of running SSL” (p 13). This is largely due to the use of a software ADC solution, which employs no specialized hardware to address the latency incurred by compute-intensive processing such as compression and cryptography. Leveraging a hardware ADC with cryptographic hardware and compression acceleration capabilities should further improve results by reducing the latency incurred by both processes. The testers also did not test under load (which would have a significant negative impact on the results) or leverage other proven application delivery optimization (ADO) techniques such as TCP multiplexing. While the authors mention the use of front-end optimization (FEO) techniques such as image optimization and client-side cache optimization, it does not appear that these were enabled during testing. Other techniques such as EXIF stripping, CSS caching, domain sharding, and core TCP optimizations are likely to provide even greater benefits to web application performance when used in conjunction with SPDY.

CHOOSE “ALL of the ABOVE”

What the testing concluded was that an “all of the above” approach would appear to net the biggest benefits in terms of application performance. Using SPDY along with complementary ADO technologies provides the best mitigation of the latency-inducing issues that ultimately degrade the end-user experience. Ultimately, SPDY is one of a plethora of ADO technologies designed to streamline and improve web application performance. Like most ADO technologies, it is not universally beneficial to every exchange. That’s why a comprehensive, inclusive ADO strategy is necessary. Only an approach that leverages “all of the above” at the right time and on the right data will net optimal performance across the board.

The HTTP 2.0 War has Just Begun
Stripping EXIF From Images as a Security Measure
F5 Friday: Domain Sharding On-Demand
WILS: WPO versus FEO
WILS: The Many Faces of TCP
HTML5 Web Sockets Changes the Scalability Game
Network versus Application Layer Prioritization
The Three Axioms of Application Delivery
F5 Friday: The Mobile Road is Uphill. Both Ways.

Mobile users feel the need… the need for spe- please wait. Loading…

We spent the week, like many other folks, at O’Reilly’s Velocity Conference 2011 – a conference dedicated to speed; of web sites, that is. This year the conference organizers added a new track called Mobile Performance. With the consumerization of IT ongoing and the explosion of managed and unmanaged devices allowing ever-increasing amounts of time “connected” to enterprise applications and services, mobile performance – if it isn’t already – will surely become an issue in the next few years. The adoption of HTML5 as a standard platform across mobile and traditional devices is a boon – optimizing the performance of HTML-based applications is something F5 knows a thing or two about. After all, there are more than 50 ways to use your BIG-IP system, and many of them are ways to improve performance – often in ways you may not have considered before.

NARROWBAND is the NEW NORMAL

The number of people who are “always on” today is astounding, and most of them are always on thanks to rapid technological improvements in mobile devices. Phones and tablets are now commonplace just about anywhere you look, and “that guy” is ready to whip out his device and verify (or debunk) whatever debate may be ongoing in the vicinity. Unfortunately, the increase in use has coincided with an increase in the amount of data being transferred, without a similar increase in the bandwidth available to carry it. The attention on video these past few years – which is certainly increasing in both size and length – has overshadowed similarly astounding bloat in the size and complexity of web page composition. It is this combination – size and complexity – that is likely to cause even more performance woes for mobile users than video.

“A Google engineer used the Google bot to crawl and analyze the Web, and found that the average web page is 320K with 43.9 resources per page (Ramachandran 2010). The average web page used 7.01 hosts per page, and 6.26 resources per host.” (Average Web Page Size Septuples Since 2003)

Certainly the increase in broadband usage – which has “more than kept pace with the increase in the size and complexity of the average web page” (Average Web Page Size Septuples Since 2003) – has mitigated most of the performance issues that might have arisen had we remained stuck in the modem age. But the fact is that mobile users are not so fortunate, and it is their last mile that we must now focus on, lest we lose their attention to slow, unresponsive sites and applications. The consumerization of IT, too, means that enterprise applications are more and more being accessed via mobile devices – tablets, phones, etc. The result is the possibility not just of losing attention and a potential customer, but of losing productivity – a much more easily defined value that can be used to impart the potential severity of performance issues to those ultimately responsible for them.

ADDRESSING MOBILE PERFORMANCE

If you thought the need for application and network acceleration solutions was long over, thanks to the rise of broadband, you thought too quickly. Narrowband – i.e., mobile connectivity – is still in the early stages of growth and as such still exhibits the same restricted bandwidth characteristics as pre-broadband solutions such as ISDN and A/DSL. The users, however, are far beyond broadband and expect instantaneous responses regardless of access medium.
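The quoted page statistics make the last-mile problem easy to quantify. The sketch below uses the 320 KB / 43.9-resource averages from the article; the link speeds and round-trip times are illustrative assumptions, and it deliberately ignores parallel connections and pipelining to keep the arithmetic simple.

```python
AVG_PAGE_BYTES = 320 * 1024    # from the quoted Google crawl data
AVG_RESOURCES = 43.9

profiles = {
    # name: (bandwidth in bits/sec, round-trip time in seconds) -- assumed values
    "broadband": (20_000_000, 0.020),
    "3G mobile": (1_000_000, 0.200),
}

for name, (bps, rtt) in profiles.items():
    transfer = AVG_PAGE_BYTES * 8 / bps          # raw payload time
    request_overhead = AVG_RESOURCES * rtt       # one round trip per resource, serialized
    print(f"{name:10s}: ~{transfer + request_overhead:5.1f} s "
          f"({transfer:.1f} s payload + {request_overhead:.1f} s request round-trips)")
```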
Thus there is a need to return to (if you ever left it) the use of web application acceleration techniques to redress performance issues as soon as possible. Caching and compression are but two of the most common acceleration techniques available, and F5 is no stranger to such solutions. BIG-IP WebAccelerator implements both, along with other performance-enhancing features such as Intelligent Browser Referencing (IBR) and OneConnect, and can dramatically improve the performance of web applications by leveraging the browser to load those 6.26 resources per host more quickly while eliminating most, if not all, of the overhead associated with TCP session management on the servers (TCP multiplexing). WebAccelerator – combined with some of the innate network protocol optimizations available in all F5 BIG-IP solutions thanks to their shared internal platform, TMOS – can do a lot to mitigate the performance issues associated with narrowband mobile connections. The mobile performance problem isn’t new, after all, and these proven solutions should provide relief to end users in both the customer and employee communities who grow weary of waiting for the web.

HTML5 – the darling of the mobile world – will also have an impact on the usage patterns of web applications regardless of client device and network type. HTML5 inherently results in more requests and objects, and its adoption among developers is fairly significant. A recent Evans Data survey indicates increasing adoption rates: in 2010, 28% of developers were using HTML5 markup, with 48.9% planning on using it in the future.

More traffic. More users. More devices. More networks. More data. More connections. It’s time to start considering how to address mobile performance before it becomes an even steeper hill to climb.

The third greatest (useful) hack in the history of the Web
Achieving Scalability Through Fewer Resources
Long Live(d) AJAX
The Impact of AJAX on the Network
The AJAX Application Delivery Challenge
What is server offload and why do I need it?
3 Really good reasons you should use TCP multiplexing
Architecting for Speed

I'm going to give you an engine low to the ground. An extra-big oil pan that'll cut the wind underneath you. That'll give you more horsepower. I'll give you a fuel line that'll hold an extra gallon of gas. I'll shave half an inch off you and shape you like a bullet. When I get you primed, painted and weighed... ...you're going to be ready to go out on that racetrack. You're going to be perfect. (From the movie Days of Thunder)

In the monologue above, Harry Hogge, crew chief, is talking to the frame of a car, explaining how he's going to architect her for speed. What I love about this monologue is that Harry isn't focusing on any one aspect of the car; he's looking at the big picture – inside and out. This is the way we should architect web application infrastructures for speed: holistically and completely, taking the entire application delivery infrastructure into consideration, because each component in that infrastructure can have an effect – positive or negative – on the performance of web applications.

Analyst firm Forrester recently hosted a teleconference (download available soon) on this very subject entitled "Web Performance Architecture Best Practices." In a single slide, analysts Mike Gualtieri and James Staten captured the essence of Harry's monologue by promoting a holistic view of web application performance that includes the inside and outside of an application.

"Performance depends upon a holistic view of your architecture"
SOURCE: "Teleconference: Web Performance Architecture Best Practices", Forrester Research, July 2008.

The discussion goes on to describe how to ensure speedy delivery of applications, and includes the conclusion that cutting web-tier response time by half delivers an overall 40% improvement in the performance of applications. Cutting response time is the primary focus of web application acceleration solutions. Combining intelligent caching and compression with technologies that make the browser more efficient improves the overall responsiveness of the web tier of your applications. And what's best is that you don't have to do anything to the web applications to get that improvement. While improving performance in the application and data tiers of an application architecture can require changes to the application, including a lot of coding, the edge and application infrastructure can often provide a significant boost in performance simply by transparently adding the ability to optimize web application protocols as well as their underlying transport protocols (TCP, HTTP).

Steve Souders, author of "High Performance Web Sites" (O'Reilly Media, Inc., 2007), further encourages an architecture that includes compressing everything as well as maximizing the use of the browser's cache. But my absolute favorite line from the teleconference? "Modern load balancers do far more than just spread the load." Amen, brothers! Can I get a hallelujah? If you weren't able to attend, I highly recommend downloading the teleconference when it's available and giving it a listen. It also includes a great case study on how to build a high-performing, scalable web application, which helps wrap some reality around the concepts discussed.

Perhaps one day we'll be talking to our applications like Harry Hogge does to the car he's about to build...

I'm going to give you code with tightly written loops. An extra-fast infrastructure that'll offload functionality for you. That'll give you more horsepower.
I'll give you a network that'll hold an extra megabit of bandwidth. I'll compress and shape your data like a bullet. When I get you optimized, secured and deployed... ...you're going to be ready to go out on the Internet. You're going to be perfect.