The Order of (Network) Operations
Thought those math rules you learned in 6th grade were useless? Think again… some are more applicable to the architecture of your data center than you might think.

Remember back in 6th grade, learning about the order of operations in math class? The order in which mathematical operators are applied can have a significant impact on the result. That's why we learned there's an order of operations – a set of rules – to follow so that we always get the correct answer when evaluating an expression.

Rule 1: First perform any calculations inside parentheses.
Rule 2: Next perform all multiplications and divisions, working from left to right.
Rule 3: Lastly, perform all additions and subtractions, working from left to right.

Similarly, the order in which network and application delivery operations are applied can dramatically impact the performance and efficiency of the delivery of applications – no matter where those applications reside.
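A quick worked example of why those rules matter – a minimal sketch in Python, with purely illustrative numbers:

```python
# Same numbers, same operators, different order of operations, different answer.
print(2 + 3 * 4)    # multiplication before addition: 14
print((2 + 3) * 4)  # parentheses first: 20
```

Swap the order and the result changes – which is exactly the effect that reordering network and application delivery operations can have on performance.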
WILS: How can a load balancer keep a single server site available?

Most people don't start thinking they need a "load balancer" until they need a second server. But even if you've only got one server, a "load balancer" can help with availability and performance, and make the transition later on to a multiple-server site a whole lot easier.

Before we reveal the secret sauce, let me first say that if you have only one server and the application crashes or the network stack flakes out, you're out of luck. There are a lot of things load balancers/application delivery controllers can do with only one server, but automagically fixing application crashes or network connectivity issues ain't in the list. If those are your concerns, then you really do need a second server. But if you're just worried about standing up to the load, then a load balancer in front of even a single server can definitely give you a boost.
I am wondering why not all websites enabling this great feature GZIP?

Understanding the impact of compression on server resources and application performance.

While doing some research on a related topic, I ran across this question and thought "that deserves an answer," because it certainly seems like a no-brainer. If you want to decrease bandwidth – which subsequently decreases response time and improves application performance – turn on compression. After all, a large portion of web site traffic is text-based: CSS, JavaScript, HTML, RSS feeds – which means it will greatly benefit from compression. Typical GZIP compression affords at least a 3:1 reduction in size, with hardware-assisted compression yielding an average 4:1 compression ratio. That can dramatically affect the response time of applications. As I said, it seems like a no-brainer.

Here's the rub: turning on compression often has a negative impact on capacity because it is CPU-bound, and under certain conditions it can actually cause a degradation in performance due to the latency inherent in compressing data compared to the speed of the network over which the data will be delivered. Here comes the science.

IMPACT ON CPU UTILIZATION

Compression via GZIP is CPU-bound. It requires a lot more CPU than you might think. The larger the file being compressed, the more CPU resources are required. Consider for a moment what compression is really doing: it's finding all similar patterns and replacing them with references (symbols, indexes into a table, etc.) to a single instance of the text. So it makes sense that the larger a file is, the more resources – RAM and CPU – are required to execute such a process. Of course, the larger the file is, the more benefit you see from compression in terms of bandwidth and improvement in response time. It's kind of a Catch-22: you want the benefits, but you end up paying in terms of capacity. If CPU and RAM are being chewed up by the compression process, then the server can handle fewer requests and fewer concurrent users.

You don't have to take my word for it – there are quite a few examples of testing done on web servers and compression that illustrate the impact on CPU utilization:

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications

They all essentially say the same thing: if you're serving dynamic content (or static content without local caching enabled on the web server), there is a significant negative impact on CPU utilization when enabling GZIP/compression for web applications. Given the exceedingly dynamic nature of Web 2.0 applications, the use of AJAX and similar technologies, and the data-driven world in which we live today, that means there are very few types of applications running on web servers for which compression will not negatively impact the capacity of the web server. In case you don't (want || have time) to slog through the above articles, here's a quick recap:

Server       File Size   Bandwidth decrease   CPU utilization increase
IIS 7.0      10KB        55%                  4x
             50KB        67%                  20x
             100KB       64%                  30x
Apache 2.2   10KB        55%                  4x
             50KB        65%                  10x
             100KB       63%                  30x

It's interesting to note that IIS 7.0 and Apache 2.2 mod_deflate have essentially the same performance characteristics. This data falls in line with the aforementioned Intel report on HTTP compression, which noted that CPU utilization increased 25–35% when compression was enabled.
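If you want a feel for the CPU-versus-bandwidth trade-off on your own content, a rough sketch like the following will do; the file names and compression level are placeholders, and the numbers will vary wildly with content type:

```python
import gzip
import time
from pathlib import Path

def measure(path, level=6):
    """Compress one file and report the size reduction and CPU time spent."""
    data = Path(path).read_bytes()
    t0 = time.process_time()                          # CPU time, not wall-clock
    compressed = gzip.compress(data, compresslevel=level)
    cpu_seconds = time.process_time() - t0
    ratio = len(data) / max(len(compressed), 1)
    print(f"{path}: {len(data)} -> {len(compressed)} bytes "
          f"({ratio:.1f}:1), {cpu_seconds * 1000:.2f} ms CPU")

# Hypothetical files -- substitute representative assets from your own site.
for f in ["styles.css", "app.js", "index.html"]:
    measure(f)
```

Multiply the per-response CPU time by your peak requests per second and you have a first-order estimate of the capacity you're giving up.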
So essentially, when you enable compression you are trading its benefits – bandwidth reduction, response time improvement – for a reduction in capacity. You're robbing Peter to pay Paul, because instead of paying for bandwidth you're paying for more servers to handle the same load.

THE MYTH OF IMPROVED RESPONSE TIME

One of the reasons you'd want to compress content is to improve response time by decreasing the total number of packets that have to traverse the wire. This is a necessity when transferring content across a WAN, but it can actually decrease performance for application delivery over the LAN, because the time it takes to compress the content and then deliver it is greater than the time to just transfer the original file over the LAN. The speed of the network over which the content is being delivered is highly relevant to whether compression yields any response time benefit. The increasing consumption of CPU resources as volume grows also degrades the server's ability to process and respond to requests, which means an increase in application response time – not the desired result.

Maybe you're thinking "I'll just get more CPU then. After all, there are, like, billion-core servers out there, that ought to solve the problem!" Compression algorithms, like FTP, are greedy. FTP will, if allowed, consume as much bandwidth as possible in an effort to transfer data as quickly as possible. Compression will do the same thing to CPU resources: consume as much as it can to perform its task as quickly as possible. Eventually, yes, you'll find a machine with enough cores to support both compression and capacity needs, but at what cost? It may well be more financially efficient to invest in a better solution (one that also brings additional benefits to the table) than to just increase the size of the server. But hey, it's your data – you need to do what you need to do.

The size of the content also has an impact on whether compression will benefit application performance. Consider that the goal of compression is to decrease the number of packets transferred to the client. Generally speaking, the standard MTU for most networks is 1500 bytes, because that's what works best with Ethernet and IP. That leaves roughly 1400 bytes per packet available for data. So if the content is 1400 bytes or less, you get absolutely no benefit from compression, because it will take only one packet to transfer either way; you can't send half a packet, after all. And in some networks, packets that are too small can actually confuse network devices optimized to handle the large content being served today – which means many full packets.

TO COMPRESS OR NOT COMPRESS

There is real benefit to compression; it's one of the core techniques used by both application acceleration and WAN application delivery services to improve performance and reduce costs. It can drastically reduce the size of data, and especially when you're paying by the MB or GB transferred (such as applications deployed in cloud environments) it is a very important feature to consider. But if you end up paying for additional servers (or instances in a cloud) to make up for the capacity lost to higher CPU utilization, you've pretty much ended up right where you started: no financial benefit at all. The question is not whether you should compress content; it's when and where and what you should compress.
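To see the LAN math concretely, here's a back-of-the-envelope sketch; the link speeds, sizes, compression ratio, and compression time are assumptions you'd replace with your own measurements:

```python
def transfer_ms(size_bytes, link_mbps):
    """Time to push size_bytes over a link of link_mbps, ignoring latency."""
    return (size_bytes * 8) / (link_mbps * 1_000_000) * 1000

def compare(size_bytes, link_mbps, compress_ms, ratio=3.0):
    plain = transfer_ms(size_bytes, link_mbps)
    gzipped = compress_ms + transfer_ms(size_bytes / ratio, link_mbps)
    winner = "compress" if gzipped < plain else "send as-is"
    print(f"{size_bytes/1024:.0f}KB over {link_mbps}Mbps: "
          f"plain {plain:.1f}ms vs gzip {gzipped:.1f}ms -> {winner}")

# Assumed numbers: 5 ms of CPU to compress a 100KB response at 3:1.
compare(100 * 1024, 1000, compress_ms=5)   # gigabit LAN: compression often loses
compare(100 * 1024, 2, compress_ms=5)      # 2 Mbps WAN link: compression wins big
```

The same arithmetic explains the 1400-byte floor: anything that already fits in a single packet gains nothing from shrinking further.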
The answer to "should I compress this content?" almost always needs to be based on a set of criteria that require context awareness – the ability to factor the content, the network, the application, and the user into the decision-making process. If the user is on a mobile device and the size of the content is greater than 2000 bytes and the type of content is text-based and… It is this type of intelligence that is required to apply compression effectively, so that the greatest benefits – reduced costs, better application performance, and maximum use of server resources – are achieved. Any implementation that can't factor all these variables into the decision to compress or not is not an optimal solution; it's just guessing, or blindly applying the same policy to all kinds of content. Such implementations effectively defeat the purpose of employing compression in the first place.

That's why the answer to "where" is almost always "on the load balancer or application delivery controller." Not only are such devices capable of factoring in all the necessary variables, they also generally employ specialized hardware designed to speed up the compression process. By offloading compression to an application delivery device, you can reap the benefits without sacrificing server performance or CPU resources.

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications
The Context-Aware Cloud
The Revolution Continues: Let Them Eat Cloud
Nerd Rage
F5 Friday: The 2048-bit Keys to the Kingdom

There's a rarely mentioned move from 1024-bit to 2048-bit key lengths in the security demesne… are you ready? More importantly, are your infrastructure and applications ready?

Everyone has likely read about DNSSEC and the exciting day on which the root servers were signed. In response to security concerns – and very valid ones at that – around the veracity of responses returned by DNS, which underpins the entire Internet, the practice of signing responses was introduced. Everyone who had anything to do with encryption and certificates said something about the initiative.

Less mentioned was a move to leverage longer RSA key lengths as a means to increase the security of encrypted data, a la SSL (Secure Sockets Layer). While there have been a few stories on SSL vulnerabilities – Dan Kaminsky illustrated flaws in the system at Black Hat last year – there's been very little public discussion about the transition in key sizes across the industry. The last time we had such a massive move in the cryptography space was back when we moved from 128-bit to 256-bit keys. Some folks may remember that many early adopters of the Internet had browser support issues back then, and the performance and capacity of infrastructure were very negatively impacted. Well, that's about to happen again as we move from 1024-bit to 2048-bit keys – and the recommended transition deadline is fast approaching. In fact, NIST is recommending the transition by January 1st, 2011, and several key providers of certificates are already restricting issuance to 2048-bit keys.

NIST – Recommends transition to 2048-bit key lengths by Jan 1st, 2011: Special Publication 800-57 Part 1, Table 4
VeriSign – Started focusing on 2048-bit keys in 2006; complete transition by October 2010. Indicates their transition is to comply with best practices as recommended by NIST.
GeoTrust – Clearly indicates why it transitioned to only 2048-bit keys in June 2010.
Entrust – Also following NIST recommendations: TN 7710 – Entrust is moving to 2048-bit RSA keys.
GoDaddy – "We enforced a new policy where all newly issued and renewed certificates must be 2048-bit." Extended Validation (EV) required 2048-bit keys on 1/1/09.

Note that it isn't just providers who are making this move. Microsoft uses and recommends 2048-bit keys per the NIST guidelines for all servers and other products. Red Hat recommends 2048-bit or longer keys for the RSA algorithm. And as of December 31, 2013, Mozilla will disable or remove all root certificates with RSA key sizes smaller than 2048 bits. That means sites that have not made the move as of that date will find it difficult for customers and visitors to hook up, as it were.

THE IMPACT ON YOU

The impact on organizations that take advantage of encryption and decryption to secure web sites, sign code, and authenticate access is primarily in performance and capacity. The decrease in performance as key sizes increase is not linear; it's closer to exponential. For example, though the key size is only doubling, F5 internal testing indicates that the shift results in approximately a 5x reduction in performance (as measured by TPS – transactions per second). This reduction in performance has also been seen by others in the space, as indicated by a recent Citrix announcement of a 5x increase in performance of its cryptographic processing. The decrease in TPS is due primarily to heavy use of the key during the handshaking process.
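You can get a rough feel for why the drop is so steep by timing private-key operations yourself. Here is a minimal sketch using the third-party cryptography package, with signing standing in for the private-key operation performed during the handshake and an arbitrary iteration count:

```python
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def ops_per_second(key_size, iterations=200):
    """Time repeated RSA private-key signatures for a given key size."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=key_size)
    payload = b"x" * 64
    start = time.perf_counter()
    for _ in range(iterations):
        key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
    return iterations / (time.perf_counter() - start)

for bits in (1024, 2048):
    print(f"{bits}-bit RSA: {ops_per_second(bits):.0f} private-key ops/sec")
```

On commodity hardware the 2048-bit figure typically lands at a small fraction of the 1024-bit one, which is exactly the gap dedicated RSA acceleration hardware is meant to close.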
The impact on you is heavily dependent on how much of your infrastructure leverages SSL. For some organizations – those that require SSL end-to-end – the impact will be much higher. Any infrastructure component that terminates SSL and re-encrypts the data to provide inline functionality (think IDS, load balancer, web application firewall, anti-virus scanning) will need to support 2048-bit keys as well, and if new certificates are necessary, these too will need to be deployed throughout the infrastructure. Any organization with security/encryption requirements over and above SSL encryption, such as FIPS 140-2 or higher, is looking at new or additional hardware to support the migration. (Note: there are architectural solutions that avoid this type of forklift upgrade; we'll get to that shortly.)

If your infrastructure currently terminates SSL on your web/application servers, you'll certainly want to start investigating the impact on capacity and performance now. SSL with 1024-bit keys typically consumes about 30% of a server's resources (RAM, CPU), and the increase to 2048-bit keys will require more – resources that necessarily come out of what's available to the application. That means a decrease in capacity for applications running on servers where SSL is terminated, and typically a degradation in performance. In general, the decrease we (and others) have seen in TPS on hardware should give you a good idea of what to expect on software or virtual network appliances. As a general rule, determine what level of SSL transactions you are currently licensed for and divide that number by five to determine whether you can maintain today's capacity after a migration to 2048-bit keys. It may not be a pretty picture.

ADVANTAGES OF SSL OFFLOAD

If the advantages of offloading SSL to an external infrastructure component were significant before, the move from 1024-bit to 2048-bit keys makes them nearly indispensable to maintaining the performance and capacity of existing applications and infrastructure. Offloading SSL to an external infrastructure component equipped with specialized hardware further improves the capacity and performance of these mathematically complex and compute-intensive processes.

ARCHITECTURAL SOLUTION TO SUPPORT 1024-BIT-KEY-ONLY APPLICATIONS

If you were thinking about leveraging a virtual network appliance for this purpose, you might want to think again. Early testing of RSA operations using 2048-bit keys on 64-bit commodity hardware shows a capacity in the hundreds of transactions per second. Not tens of thousands, not even thousands, but hundreds. Even if the only use of SSL in your organization is to provide secure web-based access to e-mail, a la Microsoft Outlook Web Access, this is likely unacceptable. Remember that there is rarely a 1:1 relationship between connections and web applications today, and each connection requires those SSL operations, which can drastically impact capacity in terms of user concurrency.

Perhaps as important is the ability to architect around limitations imposed by applications on the security infrastructure. For example, many legacy applications (Lotus Notes, IIS 5.0) do not support 2048-bit keys, so meeting the recommendation to migrate to 2048-bit keys is all but impossible for this class of application.
Leveraging the capabilities of an application delivery controller that can support 2048-bit keys, however, allows for the continued use of 1024-bit keys to the application while presenting 2048-bit keys to the client.

ARE YOU READY?

That's a question only you can answer, and you can only answer it by taking a good look at your infrastructure and applications. Now is a good time to evaluate your SSL strategy to ensure it's up to the challenge of 2048-bit keys. Check your licenses, determine your current capacity and requirements, and compare those to what can realistically be expected once the migration is complete. Validate whether applications currently using 1024-bit keys can support 2048-bit keys or whether such a migration is contraindicated by the application, and investigate whether a proxy-based (mediation) solution might be appropriate. And don't forget to determine whether compliance with regulations may require new hardware solutions.

Now, this is an F5 Friday post, so you knew there had to be some tie-in, right? Other than the fact that the red glowing ball on every BIG-IP just looks hawesome in the dim light of a data center, F5 solutions can mitigate many potential negative impacts of a migration from 1024-bit to 2048-bit key lengths:

BIG-IP Specialized Hardware – BIG-IP hardware platforms include specialized RSA acceleration hardware that improves the performance of the RSA operations necessary to support encryption/decryption and SSL communication, and enables higher capacities of the same.
EM (Enterprise Manager) Streamlines Certificate Management – F5's centralized management solution, Enterprise Manager, allows an organization to better manage a cryptographic infrastructure by providing the means to monitor and manage key expirations across all F5 solutions and to collect TPS history and usage when sizing, to better understand capacity constraints.
BIG-IP Flexibility – BIG-IP is a full proxy-based solution. It can mediate between clients and applications that have disparate requirements, such as different key sizes. This allows you to use 2048-bit keys to the client but retain 1024-bit keys to web/application servers and other infrastructure solutions. Strong partnerships and integration with leading centralized key management and crypto vendors provide automated key migration and provisioning through open, standards-based APIs and robust scripting capabilities.
DNSSEC – Enhance security through DNSSEC to validate domain names. Although it has been suggested that 1024-bit keys might be sufficient for signing zones, the forced migration to 2048-bit keys will put increased pressure on the DNS infrastructure and may require a new solution for your DNS systems.

THIS IS, IN MANY REGARDS, INFOSEC'S "Y2K"

In many ways a change of this magnitude is Information Security's "Y2K," because such a migration will have an impact on nearly every component and application in the data center. Unfortunately for the security folks, we had a lot more time to prepare for Y2K… so get started, go through the checklist, and get yourself ready to make the switch now, before the eleventh hour is upon us.
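Before you leave the checklist behind, here's the divide-your-licensed-TPS-by-five rule of thumb from above as a worked example; every number here is a hypothetical placeholder for your own licensed TPS and measured peak load:

```python
licensed_ssl_tps_1024 = 25_000   # hypothetical: current licensed/benchmarked SSL TPS
measured_peak_tps = 6_500        # hypothetical: observed peak SSL transactions/second

# Rough expectation after moving to 2048-bit keys (~5x costlier handshakes).
expected_tps_2048 = licensed_ssl_tps_1024 / 5

print(f"Expected 2048-bit capacity: ~{expected_tps_2048:,.0f} TPS")
print("Headroom remains" if expected_tps_2048 >= measured_peak_tps
      else "Capacity shortfall -- plan for offload or more horsepower")
```

In this made-up case, 25,000 licensed TPS shrinks to roughly 5,000 – below the 6,500 TPS peak – which is the kind of gap that makes offload to dedicated hardware attractive.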
Related blogs & articles:

The Anatomy of an SSL Handshake [Network Computing]
DNSSEC Readiness [ISC.org]
Get Ready for the Impact of 2048-bit RSA Keys [Network Computing]
SSL handshake latency and HTTPS optimizations [semicomplete.com]
Pete Silva Demonstrates the FirePass SSL-VPN
Data Center Feng Shui: SSL
WILS: SSL TPS versus HTTP TPS over SSL
SSL performance - DevCentral - F5 DevCentral > Community > Group ...
DevCentral Weekly Roundup | Audio Podcast - SSL
iControl Apps - #12 - Global SSL Statistics > DevCentral > F5 ...
Oracle 10g SSL Offload - JInitiator:X509CertChainInvalidErr error ...
Requiring an SSL Certificate for Parts of an Application ...
The Order of (Network) Operations
Cloud Computing: Vertical Scalability is Still Your Problem

Horizontal scalability achieved through the implementation of a load balancing solution is easy. It's vertical scalability that has always been – and remains – difficult to achieve, and it's even more important in a cloud computing or virtualized environment because now it can hurt you where it counts: the bottom line.

Horizontal scalability is the ability of an application to be scaled up to meet demand through replication and the distribution of requests across a pool or farm of servers. It's the traditional load-balanced model, and it's an integral component of cloud computing environments. Vertical scalability is the ability of an application to scale under load – to maintain performance levels as the number of concurrent requests increases. While load balancing solutions can certainly assist in optimizing the environment in which an application needs to scale, by reducing overhead that can negatively impact performance (such as TCP session management, SSL operations, and compression/caching functionality), they can't solve the core problems that prevent vertical scalability.

The problem is that a single poorly constructed database table or SQL query can destroy vertical scalability and actually increase the cost of deploying in the cloud. Because you generally pay on a resource basis, if the application isn't scaling up well it will require more resources to maintain performance levels and thus cost a lot more. Cloud computing isn't going to magically optimize code or database queries or design database tables with performance in mind; that's still squarely in the hands of the developers, regardless of whether cloud computing is used as the deployment model.

The issue of vertical scalability is very important when considering the use of cloud computing because you're often charged based on compute resources used, much like the old mainframe model. If an application doesn't vertically scale well, it's going to increase the cost of running in the cloud. Cloud computing providers can't address vertical scalability issues – and probably wouldn't if they could (it makes them money, after all) – because those issues are peculiar to the application. No external solution can optimize code such that the application will magically scale up vertically. External solutions can certainly improve overall performance by optimizing protocols, reducing protocol and application overhead, and reducing bandwidth requirements, but they can't dig into the application code and rearrange the order in which joins are performed inside an SQL query, rewrite a particularly poorly written loop, or refactor code to use a more efficient data structure.

Vertical scalability, whether the application is deployed inside the local data center or out there in the cloud, is still the domain of the application developer. While developers can certainly take advantage of technologies like network-side scripting and inherent features in application delivery solutions to assist with efforts to increase vertical scalability, there is a limit to what such solutions can do to address the root cause of an application's failure to scale vertically. Improving the vertical scalability of applications is important in achieving the cost reductions associated with cloud computing and virtualization. Applications that fail to vertically scale well may end up costing more when deployed in the cloud because of the additional compute resources required as demand increases.
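A concrete (and common) example of the kind of code-level problem no load balancer can fix is the N+1 query pattern. A minimal sketch using Python's built-in sqlite3 module, with a made-up schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
""")

# Anti-pattern: one query per user (N+1 queries). Per-request database work
# grows with the number of users, so the application stops scaling up.
def order_totals_slow(conn):
    totals = {}
    for (user_id, name) in conn.execute("SELECT id, name FROM users"):
        row = conn.execute(
            "SELECT SUM(total) FROM orders WHERE user_id = ?", (user_id,)
        ).fetchone()
        totals[name] = row[0] or 0.0
    return totals

# Set-based alternative: one query, one pass over the data.
def order_totals_fast(conn):
    rows = conn.execute("""
        SELECT u.name, COALESCE(SUM(o.total), 0)
        FROM users u LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """)
    return dict(rows)
```

The first version's cost – and the database's CPU and I/O – rises linearly with the data; the second does the same work in a single query. That difference is invisible to the delivery infrastructure but dominates how far the application can scale vertically.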
Things you can do to improve vertical scalability:

Optimize SQL / database queries
Take advantage of the offload capabilities of available application delivery solutions
Be aware of the impact on performance and scalability of decomposing applications into too finely grained services
Remember that API usage will impact vertical scalability
Understand the bottlenecks associated with the programming language(s) used and address them

Cloud computing and virtualization can certainly address vertical scalability limitations by using horizontal scaling techniques to ensure capacity meets demand and performance-level agreements are met. But doing so may cost you dearly and eliminate many of the financial incentives that led you to adopt cloud computing or virtualization in the first place.

Related articles by Zemanta:

Infrastructure 2.0: The Diseconomy of Scale Virus
Why you should not use clustering to scale an application
Top 10 Concepts That Every Software Engineer Should Know
Can Today's Hardware Handle the Cloud?
Vendors air the cloud's pros and cons
Twitter and the Architectural Challenges of Life Streaming Applications
Wanna know a secret? You can consolidate servers by using acceleration technologies

Forrester Research recently conducted a survey on virtualization, citing server consolidation as one of the primary drivers for the 73% of enterprises that have already implemented, or plan to implement, virtualization technology. But virtualization, particularly operating system virtualization, assumes you have spare cycles on your servers. In some cases, that's just not true. Your application servers are working as hard as they can to serve up your applications, and virtualizing them isn't going to change that fact. But application acceleration technologies can change that, and offer you the chance to consolidate servers.

I know that sounds crazy. How can making something faster result in needing fewer servers? That doesn't make any sense. Usually when you want faster applications it means more servers, because reducing the load on each application server makes the application execute faster and thus deliver content more quickly to end-users. That's one of the secrets of application acceleration: some of the "tricks of the trade" that make applications faster are techniques that reduce the load on application servers, which means, ultimately, that you can consolidate and use fewer servers while still improving performance.

There are three primary mechanisms used by application acceleration technologies that can help you reduce the burden on servers and thus consolidate your application infrastructure: offloading, optimization, and acceleration. Let's say you have 10 servers in a server farm, each with a total capacity of 1,000 concurrent HTTP requests, and that you need to support at least 10,000 concurrent HTTP requests. You're full up. In order to consolidate, you're going to need to maintain support for those 10,000 concurrent requests with fewer servers. Let's take a look at how application acceleration solutions can enable you to meet that goal.

1. Caching (Offloading)

Let's assume that at least five of the objects on each page are actually static, even though they are written into the page dynamically. CSS, external scripts, and images are good examples of this. Those objects don't change all that often, but developers and administrators probably aren't inserting the proper cache-control headers for them, meaning that every time the page is loaded those objects are re-requested from the server. Application acceleration solutions employ caching to relieve the situation, recognizing when static content is being served and automatically caching it. When a request for one of those objects hits the application acceleration solution, it doesn't bother the server; it serves the object out of its own cache or, in many cases, tells the browser to retrieve it from the browser's cache. We've just cut the load roughly in half, and we've made the application appear faster because the content is served by a device physically closer to the user or retrieved from the browser's cache, which is much faster than transferring it again. (There's a minimal sketch of this idea right after the summary bullets below.)

Reduces load by offloading requests
Accelerates application delivery by obviating the need to transfer static content
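Here's that idea in miniature – a toy in-memory cache in front of an origin fetch. Everything here (the content-type list, the TTL, and the fetch_from_origin callable) is a placeholder for illustration, not any particular product's behavior:

```python
import time

# Minimal sketch of offload-by-caching: a front-end proxy keeps copies of
# static objects and answers for them without touching the origin server.
CACHEABLE_TYPES = {"text/css", "application/javascript", "image/png", "image/jpeg"}
cache = {}  # url -> (expires_at, content_type, body)

def handle_request(url, fetch_from_origin, ttl=300):
    entry = cache.get(url)
    if entry and entry[0] > time.time():
        return entry[1], entry[2]        # served from cache: origin never sees it
    content_type, body = fetch_from_origin(url)
    if content_type in CACHEABLE_TYPES:
        cache[url] = (time.time() + ttl, content_type, body)
    return content_type, body
```

A real proxy also honors Cache-Control/Expires headers and validators like ETag, but even this toy version shows why the origin server stops seeing most static requests.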
2. TCP Multiplexing (Optimization, Acceleration)

TCP multiplexing allows full-proxy application acceleration solutions to optimize the use of TCP connections. Rather than opening and closing new connections to the server for every page (two, or more if the browser is Firefox), the application acceleration solution sets up connections ahead of time and reuses them. That means the server doesn't have to spend time opening and closing TCP connections, which can actually be quite costly in terms of time. The application responds faster because it isn't concerned with connection management; it's just executing logic and serving up the application. It also means the server has additional resources available to handle requests, because they aren't being spent on opening and closing connections. That increases the capacity of individual servers, meaning you can reduce the total number of servers or, at least, stave off the purchase of additional ones. (A quick client-side demonstration of this effect appears after the last technique below.)

Reduces load by optimizing the use of connections
Accelerates application delivery by reducing the amount of time required to respond to a request

3. Content Spooling (Offloading)

Servers can only serve content as fast as users can consume it. Even in the world of nearly ubiquitous broadband access there are still folks on dial-up, or accessing applications and sites from far-reaching locations. The speed of light is a law, not a guideline, so there are inherent limits on how fast an object can be delivered to the user. If the server is hanging around waiting for a user – spoon-feeding it content because the user is far away, the network is congested somewhere, or the connection is dial-up – it can't process other requests. That connection is tied up for as long as the user is receiving data. Application acceleration solutions resolve this problem by taking responses from servers as fast as the server can provide them, and then spoon-feeding the data to the client. That means the server is freed up and can respond to someone else's request rather than sit idle while one user takes forever (and on the internets, even 10 seconds is forever).

Reduces load by offloading responsibility for delivering content to clients

4. Protocol Optimizations (Optimization, Acceleration)

There is a lengthy list of RFCs (Requests for Comments) regarding the optimization of TCP. By implementing these RFCs, application acceleration solutions improve the performance of the underlying transport protocol used under the covers by HTTP, which in turn improves the performance of your applications. HTTP has no such list of RFCs, but it is a chatty protocol and there are ways to improve its performance through optimization as well; these mechanisms are generally also implemented by application acceleration solutions.

Accelerates application delivery by making the protocols used to deliver applications more efficient

5. SSL Acceleration (Offloading, Acceleration)

When SSL is used to secure data in transit, it can degrade performance and consume additional resources on servers. By placing the burden of negotiating SSL sessions and bulk encryption/decryption on an application acceleration solution, the server can reclaim the resources used to handle SSL. Most application acceleration solutions employ hardware-based acceleration to improve the performance of SSL, allowing such devices to support a much higher number of concurrent SSL-enabled connections than any single server. This improves the capacity of your application without requiring additional servers.

Reduces load by offloading responsibility for SSL
Accelerates application delivery by increasing the performance of SSL through hardware acceleration
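To see the connection-reuse effect from the client side (the same principle an acceleration device applies on the server side, and it also skips repeated TLS handshakes), here's a rough sketch using the third-party requests library; the URL and request count are placeholders:

```python
import time
import requests

URL = "https://example.com/"   # assumption: any reachable HTTP(S) endpoint
N = 20

# A new TCP connection (and TLS handshake) for every request.
t0 = time.perf_counter()
for _ in range(N):
    requests.get(URL)
cold = time.perf_counter() - t0

# One pooled, keep-alive connection reused for every request --
# roughly what an ADC does toward the server with TCP multiplexing.
t0 = time.perf_counter()
with requests.Session() as session:
    for _ in range(N):
        session.get(URL)
warm = time.perf_counter() - t0

print(f"no reuse: {cold:.2f}s   reuse: {warm:.2f}s")
```

On most endpoints the reused-connection loop finishes noticeably faster because the TCP and TLS setup cost is paid once instead of N times; that setup cost is exactly what TCP multiplexing and SSL offload remove from the servers.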
By reducing load on servers, capacity is increased. When the capacity of each individual server is increased, you can handle the same volume with fewer servers – which means you can consolidate servers (or just stave off the purchase of new ones) while simultaneously improving the performance of your web applications.

Virtualization is one way to consolidate servers when you have extra cycles lying around to spare. But if you don't, and are still tasked with consolidating servers, consider application acceleration solutions as an alternative way to meet your goals. Need an example? Here's one in which application acceleration technology reduced server load by 50%, lowered bandwidth usage by 20% to 50%, and reduced download times by 20%.