consolidation
WAN Optimization is not Application Acceleration
Increasingly, WAN optimization solutions are adopting the application acceleration moniker, implying a focus that just does not exist. WAN optimization solutions are designed to improve the performance of the network, not applications, and while the former does beget improvements in the latter, true application acceleration solutions offer greater opportunity for improving efficiency and end-user experience, as well as aiding in consolidation efforts that reduce operating and capital expenditure costs.

WAN optimization solutions are, as their title implies, focused on the WAN; on the network. It is their task to improve the utilization of bandwidth, arrest the effects of network congestion, and apply quality of service policies to speed delivery of critical application data by respecting application prioritization. WAN optimization solutions achieve these goals primarily through the use of data de-duplication techniques. These techniques require a pair of devices, as the technology is most often based on a replacement algorithm that seeks out common blocks of data and replaces them with a smaller representative tag or indicator that is interpreted by the paired device, which reinserts the common block of data before passing it on to the receiver. The base techniques used by WAN optimization are thus highly effective in scenarios in which large files are transferred back and forth over a connection by one or many people, as large chunks of data are often repeated and the de-duplication process significantly reduces the amount of data traversing the WAN, thereby improving performance. Most WAN optimization solutions specifically implement “application” level acceleration for file transfer protocols such as CIFS/SMB.

But WAN optimization solutions do very little to improve application performance when the data being exchanged is highly volatile and already transferred in small chunks. Web applications today are highly dynamic and personalized, making it less likely that a WAN optimization solution will find chunks of duplicated data large enough to make the overhead of the replacement process beneficial to application performance. In fact, the process of examining small chunks of data for potential duplicates can introduce additional latency that actually degrades performance, much in the same way compression of small chunks of data can be detrimental to application performance. In addition, WAN optimization solutions require deployment in pairs, which means that what little benefit these solutions offer web applications is enjoyed only by end-users in a location served by a “remote” device. Customers, partners, and roaming employees will not see improvements in performance because they are not served by a “remote” device.

Application acceleration solutions, however, are not constrained by such limitations. Application acceleration solutions act at the higher layers of the stack, from TCP to HTTP, and attempt to improve performance through the optimization of protocols and the applications themselves. The optimization of TCP, for example, reduces the overhead associated with TCP session management on servers and improves the capacity and performance of the actual application, which in turn results in improved response times.
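To make the de-duplication idea described above a bit more concrete, here is a minimal, hypothetical sketch of block-level replacement: both peers keep a dictionary of previously seen blocks keyed by a fingerprint, the sending device substitutes short tags for blocks it has already sent, and the paired device re-inflates them before handing the data to the receiver. The block size, tag length, and dictionary handling are illustrative assumptions, not how any particular WAN optimization product works; the point is simply that large, repetitive transfers shrink dramatically while small, volatile payloads do not.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products use variable, content-defined chunking

def deduplicate(payload: bytes, dictionary: dict) -> list:
    """Replace blocks already seen by both peers with short reference tags."""
    stream = []
    for i in range(0, len(payload), BLOCK_SIZE):
        block = payload[i:i + BLOCK_SIZE]
        tag = hashlib.sha256(block).digest()[:8]  # short fingerprint used as the reference
        if tag in dictionary:
            stream.append(("ref", tag))           # send ~8 bytes instead of the whole block
        else:
            dictionary[tag] = block               # the sender learns the block
            stream.append(("raw", block))
    return stream

def reinflate(stream: list, dictionary: dict) -> bytes:
    """The paired device re-inserts the original blocks before passing data on."""
    out = bytearray()
    for kind, value in stream:
        if kind == "ref":
            out += dictionary[value]
        else:
            out += value
            dictionary[hashlib.sha256(value).digest()[:8]] = value  # the receiver learns it too
    return bytes(out)

sender_dict, receiver_dict = {}, {}
big_file = b"the same report paragraph " * 10_000    # large, highly repetitive payload

first = deduplicate(big_file, sender_dict)            # dictionaries are populated on the first transfer
assert reinflate(first, receiver_dict) == big_file    # paired device reconstructs the original exactly
repeat = deduplicate(big_file, sender_dict)           # a re-send is almost entirely short "ref" tags
assert reinflate(repeat, receiver_dict) == big_file
print(sum(1 for kind, _ in repeat if kind == "ref"), "of", len(repeat), "blocks sent as tags")
```

Run against a large, repetitive file, the repeat transfer is almost entirely tags; run against a small, unique web response, nearly every block goes out raw, which is exactly the limitation described above.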
The understanding of HTTP and of both the browser and server allows application acceleration solutions to employ techniques that leverage cached data and industry-standard compression to reduce the amount of data transferred without requiring a “remote” device. Application acceleration solutions are generally asymmetric, with a few also offering a symmetric mode. The former ensures that, regardless of the location of the user, partner, or employee, some form of acceleration will provide a better end-user experience, while the latter employs more traditional WAN optimization-like functionality to increase the improvements for clients served by a “remote” device. Regardless of the mode, application acceleration solutions improve the efficiency of servers and applications, which results in higher capacities and can aid in consolidation efforts (fewer servers are required to serve the same user base with better performance) or simply lengthen the time available before additional investment in servers – and the associated licensing and management costs – must be made.

Both WAN optimization and application acceleration aim to improve application performance, but they are not the same solutions, nor do they even focus on the same types of applications. It is important to understand the type of application you want to accelerate before choosing a solution. If you are primarily concerned with office productivity applications and the exchange of large files (including backups, virtual images, etc.) between offices, then certainly WAN optimization solutions will provide greater benefits than application acceleration. If you’re concerned primarily about web application performance, then application acceleration solutions will offer the greatest boost in performance and efficiency gains. But do not confuse WAN optimization with application acceleration. There is a reason WAN optimization-focused providers have recently begun to partner with application acceleration and application delivery providers – because there is a marked difference between the two types of solutions, and a single offering that combines them both is not (yet) available.
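As a rough sketch of the asymmetric techniques mentioned above, the following shows how an acceleration tier in front of an application might apply industry-standard Cache-Control, ETag, and gzip handling so that any browser, anywhere, can reuse or skip content without a paired remote device. The header values, size threshold, and helper name are assumptions made up for illustration, not any specific product's behavior.

```python
import gzip
import hashlib

MIN_COMPRESS_BYTES = 1400  # compressing tiny payloads can cost more than it saves

def accelerate_response(body: bytes, request_headers: dict, cacheable: bool) -> tuple:
    """Apply standards-based caching and compression hints that any browser understands."""
    headers = {}
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    if cacheable:
        headers["Cache-Control"] = "public, max-age=86400"
        headers["ETag"] = etag
        # If the browser already holds this version, send 304 and no body at all.
        if request_headers.get("If-None-Match") == etag:
            return 304, headers, b""
    if len(body) >= MIN_COMPRESS_BYTES and "gzip" in request_headers.get("Accept-Encoding", ""):
        body = gzip.compress(body)
        headers["Content-Encoding"] = "gzip"
    return 200, headers, body

# First request transfers compressed bytes; the revalidation transfers almost nothing.
page = b"<html>" + b"static stylesheet and script content " * 500 + b"</html>"
status, hdrs, payload = accelerate_response(page, {"Accept-Encoding": "gzip"}, cacheable=True)
status2, _, _ = accelerate_response(
    page, {"Accept-Encoding": "gzip", "If-None-Match": hdrs["ETag"]}, cacheable=True)
print(status, len(payload), "bytes, then revalidation returns", status2)
```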
Lightboard Lessons: Service Consolidation on BIG-IP

The consolidation of point devices and services in your datacenter or cloud can help with the cost, complexity, efficiency, management, provisioning, and troubleshooting of your infrastructure and systems. In this Lightboard Lesson, I light up many of the services you can consolidate on BIG-IP. ps
Virtual Server Sprawl: FUD or FACT?

At Interop this week, security experts have begun sounding the drum regarding the security risks of virtualization, reminding us that virtual server sprawl magnifies that risk because, well, there are more virtual servers at risk to manage.

Virtual sprawl isn't defined by numbers; it's defined as the proliferation of virtual machines without adequate IT control, [David] Lynch said.

That's good, because the numbers as often cited just don't add up. A NetworkWorld article in December 2007 cited two different sets of numbers from Forrester Research on the implementation of virtualization in surveyed organizations. First we are told that:

IT departments already using virtualization have virtualized 24% of servers, and that number is expected to grow to 45% by 2009.

And later in the article we are told:

The latest report finds that 37% of IT departments have virtualized servers already, and another 13% plan to do so by July 2008. An additional 15% think they will virtualize x86 servers by 2009.

It's not clear where the first data point is coming from; it appears to come from a Forrester Research survey cited in the first paragraph, while the latter data set appears to come from the same recent study. The Big Hairy Question is: how many virtual servers does that mean?

This sounds a lot like the great BPM (Business Process Management) scare of 2005, when it was predicted that business users would be creating SOA-based composite applications willy-nilly using BPM tools because it required no development skills, just a really good mouse finger with which you could drag and drop web services to create your own customized application. Didn't happen. Or if it did, it happened in development and test and local environments and never made it to the all-important production environment, where IT generally maintains strict control.

Every time you hear virtual server sprawl mentioned it goes something like this: "When your users figure out how easy it is..." "Users", whether IT or business, are not launching virtual servers in production in the data center. If they are, then an organization has bigger concerns on their hands than the issue of sprawl. Are they launching virtual servers on their desktop? Might be. On a test or development machine? Probably. In production? Not likely. And that's where management and capacity issues matter; that's where the bottom line is potentially impacted by a technological black plague like virtual server sprawl; that's where the biggest security and management risks associated with virtualization are going to show themselves.

None of the research cited ever discusses the number of virtual servers running, just the number of organizations in which virtualization has been implemented. That could mean 1 or 10 or 100 virtual servers. We just don't know, because no one has real numbers to back it up; nothing but limited anecdotal evidence has been presented to indicate that there is a problem with virtual server sprawl.

I see problems with virtualization. I see the potential for virtualizing solutions that shouldn't be virtualized for myriad reasons. I see the potential problems inherent in virtualizing everything from the desktop to the data center. But I don't see virtual server sprawl as the Big Hairy Monster hiding under the virtual bed.
So as much as I'd like to jump on the virtual sprawl bandwagon and make scary faces in your general direction about the dangers that lie within the virtual world - because many of them are very real and you do need to be aware of them - there just doesn't seem to be any real data to back up the claim that virtual sprawl is - or will become - a problem.
Hardware Acceleration Critical Component for Cost-Conscious Data Centers

Better performance, reduced costs, and a smaller data center footprint are not niche-market interests. The fast-paced world of finance is taking a hard look at the benefits of hardware acceleration for performance and finding additional benefits such as a reduction in rack space via consolidation of server hardware. Rich Miller over at Data Center Knowledge writes:

Hardware acceleration addresses computationally-intensive software processes that task the CPU, incorporating special-purpose hardware such as a graphics processing unit (GPUs) or field programmable gate array (FPGA) to shift parallel software functions to the hardware level. … “The value proposition is not just to sustain speed at peak but also a reduction in rack space at the data center,” Adam Honore, senior analyst at Aite Group, told WS&T. Depending on the specific application, Honore said a hardware appliance can reduce the amount of rack space by 10-to-1 or 20-to-1 in certain market data and some options events. Thus, a trend that bears watching for data center providers.

But confining the benefits associated with hardware acceleration to just data center providers or financial industries is short-sighted, because similar benefits can be achieved by any data center in any industry looking for cost-cutting technologies. And today, that’s just about … everyone.

USING SSL? YOU CAN BENEFIT FROM HARDWARE ACCELERATION

Now maybe I’m just too into application delivery and hardware and all its associated benefits, but the idea of hardware acceleration and offloading certain computationally expensive tasks (encryption, decryption, TCP session management, etc.) seems pretty straightforward, and not exclusive to financial markets. Any organization using SSL, for example, can see benefits in both performance and a reduction in costs through consolidation by offloading the responsibility for SSL to an external device that employs some sort of hardware-based acceleration of the specific computationally expensive functions. This is the same concept used by routers and switches, and why they employ FPGAs and ASICs to perform network processing: they’re faster and capable of much greater speeds than their software predecessors. Unlike routers and switches, however, solutions capable of hardware-based acceleration provide the added benefit of reducing the utilization of hardware servers while improving the speed at which such computations can be executed.

Reducing the utilization of servers means increased capacity on each server, which results in either the ability to eliminate a number of servers or to delay the need to invest in even more servers. Both strategies result in a reduction in the costs associated with the offloaded functionality. Add hardware-based acceleration of SSL operations to hardware-based acceleration for compression of data and you can offload yet another computationally expensive piece of functionality to an external device, which again saves resources on the server and increases its capacity as well as the overall response time for transfers requiring compression. Now put that functionality onto your load balancer, a fairly logical place in your architecture to apply such functionality both ingress and egress, and what you’ve got is an application delivery controller.
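To ground the SSL offload concept, here is a minimal, hypothetical sketch of a TLS-terminating front end in Python: the proxy performs the handshake and all encryption and decryption on behalf of the pool and forwards plaintext to a backend member, so the application servers never spend cycles on cryptography. The certificate file names, backend address, and single-threaded accept loop are placeholders for illustration only; a hardware-assisted application delivery controller does the same job at vastly higher concurrency.

```python
import socket
import ssl

BACKEND = ("10.0.0.10", 8080)                   # plaintext pool member (illustrative address)
CERT, KEY = "proxy-cert.pem", "proxy-key.pem"   # placeholder certificate and key files

def serve_tls_offload(listen_port: int = 443) -> None:
    """Terminate TLS at the proxy and relay decrypted requests to the backend."""
    tls = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    tls.load_cert_chain(certfile=CERT, keyfile=KEY)   # crypto burden lives here, not on the app servers

    with socket.create_server(("", listen_port)) as listener:
        while True:                                    # one connection at a time, purely for clarity
            raw_client, _ = listener.accept()
            try:
                with tls.wrap_socket(raw_client, server_side=True) as client:
                    request = client.recv(65536)       # already decrypted by the proxy
                    with socket.create_connection(BACKEND) as backend:
                        backend.sendall(request)       # backend sees plain HTTP
                        response = backend.recv(65536)
                    client.sendall(response)           # re-encrypted on the way back out
            except (ssl.SSLError, OSError):
                pass                                   # a production proxy would log and continue

if __name__ == "__main__":
    serve_tls_offload()
```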
Add to the hardware-based acceleration of SSL and compression an optimized TCP stack that reuses TCP connections and you not only increase performance but decrease utilization on the server yet again, because it’s handling fewer connections and not going through the tedium of opening and closing connections at a fairly regular rate.

NOT JUST FOR ADMINS and NETWORK ARCHITECTS

Developers and architects, too, can apply the benefits of hardware-accelerated services to their applications and frameworks. Cookie encryption, for example, is a fairly standard method of protecting web applications against cookie-based attacks such as cookie tampering and poisoning. Encryption of cookies mitigates that risk by ensuring that cookies stored on clients are not human-readable. But encryption and decryption of cookies can be expensive and often comes at the cost of application performance and, if not implemented as part of the original design, can cost in terms of the time and money necessary to add the feature to the application.

Leveraging the network-side scripting capabilities of application delivery controllers removes the need to rewrite the application by allowing cookies to be encrypted and decrypted on the application delivery controller. By moving the task of (de|en)cryption to the application delivery controller, the expensive computations required by the process are accelerated in hardware and will not negatively impact the performance of the application. If the functionality is moved from within the application to an application delivery controller, the resulting shift in computational burden can reduce utilization on the server – particularly in heavily used applications or those with a larger set of cookies – which, like other reductions in server utilization, can lead to the ability to consolidate or retire servers in the data center.

HARDWARE ACCELERATION REDUCES COSTS, INCREASES EFFICIENCY

By the time you get finished, the case for consolidating servers seems fairly obvious: you’ve offloaded so much intense functionality that you can cut the number of servers you need by a considerable amount, and either retire them (decreasing power, cooling, heating, and rack space in the process) or re-provision them for use on other projects (decreasing investment and acquisition costs for the other project and maintaining current operating expenses rather than increasing them). Basically, if you need load balancing you’ll benefit both technically and financially from investing in an application delivery controller rather than a traditional simple load balancer. And if you don’t need load balancing, you can still benefit simply by employing the offloading capabilities inherent in such platforms endowed with hardware-assisted acceleration technologies. The increased efficiency of servers resulting from the use of hardware-assisted offload of computationally expensive operations can be applied to any data center and any application in any industry.
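As a rough illustration of the cookie-encryption offload described in this post, the sketch below encrypts cookie values on responses and decrypts (and validates) them on requests at a proxy layer, so the application itself never changes. It assumes the third-party cryptography package and made-up cookie names; it is a conceptual sketch, not the way any particular delivery controller implements its network-side scripting.

```python
from cryptography.fernet import Fernet, InvalidToken  # third-party library, assumed available

SECRET = Fernet.generate_key()   # in practice the key is configured on the delivery tier
fernet = Fernet(SECRET)

def encrypt_cookies_on_response(set_cookie_headers: list) -> list:
    """On the way out to the browser, replace each cookie value with ciphertext."""
    protected = []
    for header in set_cookie_headers:
        name, _, rest = header.partition("=")
        value, _, attrs = rest.partition(";")
        token = fernet.encrypt(value.encode()).decode()
        protected.append(f"{name}={token};{attrs}" if attrs else f"{name}={token}")
    return protected

def decrypt_cookie_on_request(cookie_value: str) -> str:
    """On the way in, restore the plaintext the application expects; reject tampering."""
    try:
        return fernet.decrypt(cookie_value.encode()).decode()
    except InvalidToken:
        return ""   # a tampered or forged cookie is simply dropped

outbound = encrypt_cookies_on_response(["cart=sku123,sku456; Path=/; HttpOnly"])
print(outbound[0])                                   # browser stores only ciphertext
inbound = outbound[0].split(";")[0].split("=", 1)[1]
print(decrypt_cookie_on_request(inbound))            # application still sees "sku123,sku456"
```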
Simplify VMware View Deployments

Virtual Desktop Infrastructure (VDI), or the ability to deliver desktops as a managed service, is an attractive and cost-effective solution for managing a corporate desktop environment. The success of virtual desktop deployments hinges on the user experience, availability and performance, security, and IT's ability to reduce desktop operating expenses. VDI deployments virtualize user desktops by delivering them to distinct end-point devices over the network from a central location.

Since the user's primary work tool is now located in a data center rather than on their own local machine, VDI can put a strain on network resources while the user experience can be less than desired. This is due to the large amounts of data required to deliver a graphical user interface (GUI) based virtual desktop. For users who want to access their desktops and applications from anywhere in the world, network latency can be especially noticeable when the virtual desktop is delivered over a WAN. Organizations might have to provision more bandwidth to account for the additional network traffic, which in turn reduces any cost savings realized with VDI. In addition, VMware has introduced the PCoIP (PC over IP) communications display protocol, which makes more efficient use of the network by encapsulating video display packets in UDP instead of TCP. Many remote access devices are incapable of correctly handling this distinctive protocol, and this can deteriorate the user experience.

Keeping mobile users connected to their own unique, individual environments can also pose a challenge. When a user is moving from one network to another, their session could be dropped, requiring them to re-connect, re-authenticate, and navigate to where they were prior to the interruption. Session persistence can maintain the stateful desktop information, helping users reconnect quickly without the need to re-authenticate. Secure access and access control are always concerns when deploying any system, and virtual desktops are no different. Users are still accessing sensitive corporate information, so enforcing strong authentication and security policies and ensuring that the client is compliant all still apply to VDI deployments. Lastly, IT must make sure that the virtual systems themselves are available and can scale when needed to realize all the benefits from both a virtual server and virtual desktop deployment.

The inclusion of BIG-IP APM's fine-grained access control in BIG-IP LTM VE offers a very powerful enhancement to a VMware View deployment. BIG-IP APM for LTM VE is an exceptional way to optimize, secure, and deliver a VMware View virtual desktop infrastructure. This is a 100% virtual remote access solution for VMware View 4.5 VDI solutions. In addition, the BIG-IP APM for LTM VE system runs as a virtual machine in a VMware hypervisor environment, so you can easily add it to your existing infrastructure. As the number of users on virtual desktops grows, customers can easily transition from the BIG-IP virtual edition to a BIG-IP physical appliance. The BIG-IP provides important load balancing, health monitoring, and SSL offload for VMware View deployments for greater system availability and scalability. Network and protocol optimizations help organizations manage bandwidth efficiently and, in some cases, reduce the bandwidth requirements while maintaining and improving the user experience.
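To illustrate the session-persistence idea described above, here is a small, conceptual sketch of a connection broker that remembers which virtual desktop is already serving an authenticated user and sends a reconnecting client back to it within an idle window, falling back to a least-loaded placement otherwise. The class, pool names, token handling, and timeout are invented for illustration and are not VMware View or BIG-IP behavior.

```python
import time

PERSISTENCE_TIMEOUT = 1800  # seconds a dropped session remains resumable (illustrative value)

class DesktopPersistence:
    """Map an authenticated user to the virtual desktop instance already serving them."""

    def __init__(self, desktop_pool: list):
        self.pool = desktop_pool
        self.sessions = {}          # user -> (desktop, last_seen, auth_token)

    def connect(self, user: str, auth_token: str) -> str:
        entry = self.sessions.get(user)
        if entry and entry[2] == auth_token and time.time() - entry[1] < PERSISTENCE_TIMEOUT:
            desktop = entry[0]      # roaming or dropped client resumes the same desktop
        else:
            desktop = min(self.pool, key=self._load)   # otherwise pick the least-loaded instance
        self.sessions[user] = (desktop, time.time(), auth_token)
        return desktop

    def _load(self, desktop: str) -> int:
        return sum(1 for d, _, _ in self.sessions.values() if d == desktop)

broker = DesktopPersistence(["view-host-1", "view-host-2"])
first = broker.connect("alice", "token-abc")    # initial placement
resumed = broker.connect("alice", "token-abc")  # network change: same desktop, no re-login prompt
print(first, resumed, first == resumed)
```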
BIG-IP APM for LTM VE also opens the possibility of making virtual server load balancing decisions based on the user’s identity, ensuring the user is connected to the optimal virtual instance based on their needs. F5 also overcomes the PCoIP challenge with our Datagram Transport Layer Security (DTLS) feature. This transport protocol is uniquely capable of providing all the desired security for transporting PCoIP communications, but without the degradation in performance. In addition, F5 supports View’s automatic fallback to TCP if a high-performance UDP tunnel cannot be established. Users no longer have to RDP to their virtual desktops but can now connect directly with PCoIP, or organizations can plan a phased migration to PCoIP.

The BIG-IP APM for LTM VE comes with powerful security controls to keep the entire environment secure. Pre-login host checks inspect the requesting client and determine whether it meets certain access criteria, such as OS patch level, anti-virus/firewall state, or the presence of a certificate. BIG-IP APM for LTM VE offers a wide range of authentication mechanisms, including two-factor, to protect corporate resources from unauthorized access. BIG-IP APM enables authentication pass-through for convenient single sign-on, and once a session is established, all traffic, including PCoIP, is encrypted to protect the data, while session persistence helps users reconnect quickly without having to re-authenticate. BIG-IP APM for LTM VE simplifies deployment of authentication and session management for VMware View enterprise virtual desktop management. ps

Resources
F5 Accelerates VMware View Deployments with BIG-IP Access Policy Manager on a Virtual Platform
BIG-IP Local Traffic Manager Virtual Edition
BIG-IP Access Policy Manager
Application Delivery and Load Balancing for VMware View Desktop Infrastructure
Deploying F5 Application Ready Solutions with VMware View 4.5
Optimizing VMware View VDI Deployments
Global Distributed Service in the Cloud with F5 and VMware
WILS: The Importance of DTLS to Successful VDI
F5 Friday: The Dynamic VDI Security Game
F5 Friday: Secure, Scalable and Fast VMware View Deployment

Technorati Tags: F5, BIG-IP, VMWare, Optimization, Pete Silva, F5, vmview, virtualization, mobile applications, access control, security, context-aware, strategic point of control
CloudFucius Shares: Cloud Research and Stats

Sharing is caring, according to some and with the shortened week, CloudFucius decided to share some resources he’s come across during his Cloud exploration in this abbreviated post. A few are aged just to give a perspective of what was predicted and written about over time.

Some Interesting Cloud Computing Statistics (2008)
Mobile Cloud Computing Subscribers to Total Nearly One Billion by 2014 (2009)
Server, Desktop Virtualization To Skyrocket By 2013: Report (2009)
Gartner: Brace yourself for cloud computing (2009)
A Berkeley View of Cloud Computing (2009)
Cloud computing belongs on your three-year roadmap (2009)
Twenty-One Experts Define Cloud Computing (2009)
5 cool cloud computing research projects (2009)
Research Clouds (2010)
Cloud Computing Growth Forecast (2010)
Cloud Computing and Security - Statistics Center (2010)
Cloud Computing Experts Reveal Top 5 Applications for 2010 (2010)
List of Cloud Platforms, Providers, and Enablers 2010 (2010)
The Cloud Computing Opportunity by the Numbers (2010)
Governance grows more integral to managing cloud computing security risks, says survey (2010)
The Cloud Market EC2 Statistics (2010)
Experts believe cloud computing will enhance disaster management (2010)
Cloud Computing Podcast (2010)
Security experts ponder the cost of cloud computing (2010)
Cloud Computing Research from Business Exchange (2010)
Just how green is cloud computing? (2010)
Senior Analyst Guides Investors Through Cloud Computing Sector And Gives His Top Stock Winners (2010)
Towards Understanding Cloud Performance Tradeoffs Using Statistical Workload Analysis and Replay (2010)

…along with F5’s own Lori MacVittie who writes about this stuff daily.

And one from Confucius: Study the past if you would define the future.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6, 7, 8
Simplifying your S/Gi Network with a consolidated architecture

Guest blog post by Misbah Mahmoodi, Product Marketing Manager, Service Providers

Service providers are constantly challenged with ensuring their networks run at optimal performance, especially as they cope with the increasing usage of mobile data traffic, which leads to increased CapEx and OpEx. At the same time, revenue has not kept pace with increasing data consumption, resulting in declining profitability as the total cost of ownership continues to rise. As a result, service providers are looking for solutions that will allow them to scale more efficiently with traffic growth yet limit cost increases, and at the same time accelerate revenue growth.

Many of the services operators deliver to their subscribers, such as video optimization, parental control, firewall, and Carrier-Grade NAT, reside on the S/Gi network, which is the interface between the PGW and the internet. Along with these services, service providers have deployed load-balancing solutions coupled with intelligent traffic steering and dynamic service chaining capabilities to steer traffic to the relevant VAS solutions based on a subscriber-aware and context-aware framework. This ensures, for example, that only subscribers using video are steered to a parental control service to check whether the subscriber can watch the video, and subsequently on to a video optimization server, whereas all other traffic is sent straight through to the internet.

Typically, service providers have deployed these services using point solutions. As traffic increases, service providers continue to expand these point solutions, which not only increases the overall network footprint but also results in an overwhelmingly complex network, making it more difficult to manage and increasing the risk of network failures when different vendor solutions are incompatible with each other. Continuing down this path is becoming less viable, and service providers need a solution that not only simplifies their S/Gi network but also reduces the total cost of ownership. Service providers need a solution that can consolidate core services onto a single platform, provide the scalability and capacity to accommodate future increases in mobile broadband traffic, and deliver greater subscriber and application visibility and control than a solution using multiple point products, leading to increased revenues and profitability.

With a consolidated architecture, service providers can leverage a common hardware and software framework to deliver multiple services. Adding or removing services within this framework is done via licensing, and having a unified framework means there is common technology to understand and manage, enabling simpler configuration and management of network resources, which significantly simplifies operations and reduces cost. As all the major functionality of the S/Gi network is consolidated on a unified framework, service providers now have the ability to scale performance on demand or, using software-based virtualized solutions, create an elastic infrastructure that can efficiently adapt as business needs change.
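To make the subscriber-aware steering and dynamic service chaining described above concrete, here is a small, hypothetical sketch: only video flows are sent through the parental-control and video-optimization services, and only when the subscriber's profile calls for them, while everything else bypasses the VAS farm entirely. The service names and profile fields are invented for illustration.

```python
# Value-added services on the S/Gi side, in the order they should be chained (illustrative names).
PARENTAL_CONTROL = "parental-control-vas"
VIDEO_OPTIMIZER = "video-optimizer-vas"
INTERNET = "direct-to-internet"

def build_service_chain(subscriber_profile: dict, flow: dict) -> list:
    """Steer only the traffic that needs a service; everything else bypasses the VAS farm."""
    chain = []
    if flow.get("application") == "video":
        if subscriber_profile.get("parental_controls"):
            chain.append(PARENTAL_CONTROL)          # check whether this subscriber may watch it
        if subscriber_profile.get("plan") == "low-bandwidth":
            chain.append(VIDEO_OPTIMIZER)           # transcode/optimize only where it pays off
    chain.append(INTERNET)
    return chain

# A capped-plan subscriber watching video takes the long path; plain web browsing goes straight out.
print(build_service_chain({"parental_controls": True, "plan": "low-bandwidth"},
                          {"application": "video"}))
print(build_service_chain({"parental_controls": True, "plan": "low-bandwidth"},
                          {"application": "web"}))
```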
Recently, F5 conducted a study with an independent research analyst firm to analyze the total cost of ownership of a consolidated architecture versus point products. Based on this study, it was found that the F5 unified solution has a 36 percent lower TCO than the alternative point-products solution, and a 53 percent to 88 percent lower TCO with intelligent traffic steering as compared to a solution with no intelligent traffic steering. With F5, service providers have a solution that can optimize, secure, and monetize mobile broadband networks and provide a unified platform that simplifies the network, yielding improved efficiency, lower costs, and secure service delivery.
CloudFucius Combines: Security and Acceleration

CloudFucius has explored Cloud Security with AAA Important to the Cloud and Hosts in the Cloud, along with wanting An Optimized Cloud. Now he desires the sweet spot of Cloud Application Delivery: combining Security and Acceleration. Few vendors want to admit that adding a web application security solution can also add latency, which can be kryptonite for websites. No website, cloud or otherwise, wants to add any delay to users’ interaction. Web application security that also delivers blazing fast websites might sound like an oxymoron, but not to CloudFucius. And in light of Lori MacVittie’s Get your SaaS off my cloud and the accompanying dramatic reading, I’m speaking of IaaS and PaaS cloud deployments, where the customer has some control over the applications, software, and systems deployed. It’s like the old Reese’s peanut butter cups commercial: “You’ve stuck your security in our acceleration.” “Yeah, well your acceleration has broken our security.”

Securing applications and preventing attacks while simultaneously ensuring consistent, rapid user response is a basic web application requirement. Yet web application security traditionally comes at the expense of speed. This is an especially important issue for online retailers, where slow performance can mean millions of dollars in lost revenue and a security breach can be just as devastating, as more than 70 percent of consumers say they would no longer do business with a company that exposed their sensitive information. Web application performance in the cloud is also critical for corporate operations, particularly for remote workers, where slow access to enterprise applications can destroy productivity. As more applications are delivered through a standard browser from the cloud, the challenge of accelerating web applications without compromising security grows. This has usually required multiple dedicated units, either from the customer or the provider, along with staff to properly configure and manage them. Because each of these “extra” devices has its own way of proxying transactions, packets can slow to a crawl due to the extra overhead of TCP and application processing. Fast and secure in a single, individually wrapped unit does seem like two contrary goals.

The Security Half

As the cloud has evolved, so have security issues. And as more companies become comfortable deploying critical systems in the cloud, solutions like web application firewalls are a requirement, particularly for regulatory compliance situations. Plus, as the workforce becomes more mobile, applications need to be available in more places and on more devices, adding to the complexity of enforcing security without impacting productivity. Consider that a few years back, the browser’s main purpose was to surf the net. Today, the browser is a daily tool for both personal and professional needs. In addition to the usual web application activities like ordering supplies, checking traffic, and booking travel, we also submit more private data like health details and payroll information. The browser acts as a secret confidant in many areas of our lives since it transmits highly sensitive data in both our work and social spheres. And it goes both ways; while other people, providers, sites, and systems have our sensitive data, we may also be carrying someone else’s sensitive data on our own machines. Today, the Cloud, and really the Internet at large, is more than a function of paying bills or getting our jobs done—it holds our digital identity for both work and play.
And once a digital identity is out there, there’s no retracting it. We just hope there are proper controls in place to keep it secret and safe.

The Acceleration Half

For retail web applications and search engines, downtime or poor performance can mean lost revenue along with significant, tangible costs. A couple of years ago, the Warwick Business School published research showing that an unplanned outage lasting just an hour can mean more than $500,000 in lost revenue. For financial institutions, the loss can be in the several-million-dollar range. And downtime costs more than just lost revenue. Not adhering to a service level agreement can incur remediation costs or penalties, and non-compliance with certain regulatory laws can result in fines. Additionally, the damage to a company’s brand reputation—whether it’s from an outage, poor performance, or a breach—can have long-lasting, detrimental effects on the company.

These days, many people have high-speed connections at home for accessing applications in the cloud. But applications have matured and now offer users pipe-clogging rich data like video and other multimedia. If the website is slow, users will probably go somewhere else. It happens all the time. You type in a URL only to watch the browser icon spin and spin. You might try to reload or retype, but more often, you simply type a different URL to a similar site. With an e-commerce site, poor performance usually means a lost sale because you probably won’t wait around if your cart doesn’t load quickly or stalls during the secure check-out process. If it’s a business application and you’re stuck with a sluggish site, then that’s lost productivity, a frustrated user, and likely a time-consuming trouble ticket for IT. When application performance suffers, the business suffers.

What’s the big deal?

Typically, securing an application can come at the cost of end-user productivity because of deployment complexity. Implementing website security—like a web application firewall—adds yet another mediation point where the traffic between the client and the application is examined and processed. This naturally increases the latency of the application, especially in the cloud, since the traffic might have to make multiple trips. This can become painfully apparent with globally dispersed users or metered bandwidth agreements, but the solution is not always simple. Web application performance and security administration can cross organizational structures within companies, making ownership splintered and ambiguous. Add a cloud provider to the mix and the finger pointing can look like Harry Nilsson's The Point! (Oh how I love pulling out obscure childhood references in my blogs!!)

The Sweet Spot

Fortunately, you can integrate security and acceleration into a single device with BIG-IP Local Traffic Manager (LTM) and the BIG-IP LTM Virtual Edition (VE). By adding the BIG-IP Application Security Manager (ASM) module and the BIG-IP WebAccelerator module to BIG-IP LTM, not only are you able to deliver web application security and acceleration, but the combination provides faster cloud deployment and simplifies the process of managing and deploying web applications in the cloud. This is a true, internal system integration and not just co-deployment of multiple proxies on the same device. These integrated components provide the means to both secure and accelerate your web applications with ease.
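The following is a minimal, conceptual sketch of that combined idea: one proxy inspects a request for obvious attack patterns, then serves a cached copy or fetches from the origin, then compresses, all while working on the same request object so the security and acceleration steps share context. The patterns, cache keying, and handler names are simplifications invented for illustration, not ASM or WebAccelerator behavior.

```python
import gzip
import re

ATTACK_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (r"<script", r"union\s+select", r"\.\./")]
cache = {}   # shared context: the security and acceleration steps see the same request and cache

def handle(request: dict, origin_fetch) -> tuple:
    """One proxy, one pass: inspect, then serve from cache or origin, then compress."""
    # Security step: block obvious injection attempts before any backend work is done.
    if any(p.search(request.get("query", "")) for p in ATTACK_PATTERNS):
        return 403, b"blocked"
    # Acceleration step: a clean, repeat request never reaches the application servers.
    key = (request["path"], request.get("query", ""))
    if key not in cache:
        cache[key] = origin_fetch(request)
    body = cache[key]
    if "gzip" in request.get("accept_encoding", ""):
        body = gzip.compress(body)
    return 200, body

def origin(request: dict) -> bytes:
    return b"product catalog page " * 200   # stand-in for the real application response

print(handle({"path": "/catalog", "query": "page=1", "accept_encoding": "gzip"}, origin)[0])
print(handle({"path": "/catalog", "query": "page=1 UNION SELECT password"}, origin)[0])
```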
The unified security and web application acceleration solution takes a single-platform approach that receives, examines, and acts upon application traffic as a single operation, in the shortest possible time and with the least complexity. The management GUI allows varying levels of access to system administrators according to their roles. This ensures that administrators have appropriate management access without granting them access to restricted, role-specific management functions. Cloud providers can segment customers; customers can segment departments. The single-platform integration of these functions means that BIG-IP can share context between security and acceleration—something you don’t get with multiple units. That shared context enables both the security side and the acceleration side to make intelligent, real-time decisions for delivering applications from your cloud infrastructure. You can deploy and manage a highly available, very secure, and incredibly fast cloud infrastructure all from the same unified platform, one that minimizes WAN bandwidth utilization, safeguards web applications, and prevents data leakage, all while directing traffic to the application server best able to service a request. Using the unified web application security and acceleration solution, a single proxy secures, accelerates, optimizes, and ensures application availability for all your cloud applications.

And one from Confucius: He who will not economize will have to agonize.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6
Wanna know a secret? You can consolidate servers by using acceleration technologies

Forrester Research recently conducted a survey on virtualization, citing server consolidation as one of the primary drivers behind the 73% of enterprises already implementing or planning to implement virtualization technology. But virtualization, particularly operating system virtualization, assumes you have additional cycles on servers to spare. In some cases, that's just not true. Your application servers are working as hard as they can to serve up your applications, and virtualizing them isn't going to change that fact. But application acceleration technologies can change that, and offer you the chance to consolidate servers.

I know that sounds crazy. How can making something faster result in needing fewer servers? That doesn't make any sense. Usually when you want faster applications it means more servers, because reducing the load on the application servers makes the application execute faster, thus delivering it more quickly to end-users. That's one of the secrets of application acceleration. Some of the "tricks of the trade" that make applications faster include techniques that reduce the load on application servers, which means, ultimately, that you can consolidate and use fewer servers while still improving performance.

There are three primary mechanisms used by application acceleration technologies that can help you reduce the burden on servers and thus consolidate your application infrastructure: offloading, optimization, and acceleration. Let's say you have 10 servers in a server farm, each with a total capacity of 1000 concurrent HTTP requests, and that you need to support at least 10,000 concurrent HTTP requests. You're full up. In order to consolidate you're going to need to maintain support for those 10,000 concurrent HTTP requests with fewer servers. Let's take a look at how application acceleration solutions can enable you to meet that goal.

1. Caching (Offloading)

Let's assume that at least 5 of the objects on each page are actually static even though they are written into the page dynamically. CSS, external scripts, and images are good examples of this. Those objects don't change all that often, but developers and administrators probably aren't inserting the proper cache control headers for them, meaning that every time the page is loaded the images are re-requested from the server. Application acceleration solutions employ caching to relieve the situation, recognizing when static content is being served and automatically caching it. When the request for those objects hits the application acceleration solution, it recognizes them and doesn't bother the server; it just serves them out of cache or, in many cases, tells the browser to retrieve them from its cache. Basically we've just cut the load in half and we've made the application appear faster because the content is being served by a device physically closer to the user or is being retrieved from the browser's cache, which is really much faster than transferring it.

Reduces load by offloading requests
Accelerates application delivery by obviating the need to transfer static content

2. TCP Multiplexing (Optimization, Acceleration)

TCP multiplexing allows full proxy-based application acceleration solutions to optimize the use of TCP connections. Rather than opening and closing two new connections to the server for every page (or more if the browser is Firefox), the application acceleration solution sets up connections ahead of time and reuses them.
That means the server doesn't have to spend time opening and closing TCP connections, which can actually be quite costly in terms of time spent. This means the application responds faster, because it isn't concerned with connection management; it's just executing logic and serving up the application. It also means the server has additional resources it can use to handle requests because they aren't being spent on opening and closing connections. That increases the capacity of individual servers, meaning you can reduce the total number of servers or, at least, stave off the purchase of additional servers.

Reduces load by optimizing the use of connections
Accelerates application delivery by reducing the amount of time required to respond to a request

3. Content spooling (Offloading)

Servers can only serve content as fast as users can consume it. Even in the world of nearly ubiquitous broadband access there are still folks on dial-up or who are accessing applications and sites from a far-reaching location. The speed of light is a law, not a guideline, so there are inherent limitations on how fast an object can be delivered to the user. If the server is hanging around waiting for a user, spoon-feeding it content because it's far away, the network is congested somewhere, or it's connected via dial-up, it can't process other requests. That connection is tied up for as long as the user is receiving data. Application acceleration solutions resolve this problem by sucking up responses from servers as fast as the server can provide them, and then spoon-feeding them to the client. That means the server is freed up and can respond to someone else's request rather than hang around bored while one user takes forever (and in the internets, even 10 seconds is forever).

Reduces load by offloading responsibility for delivering content to clients

4. Protocol Optimizations (Optimization, Acceleration)

There is a lengthy list of RFCs (Request for Comments) regarding the optimization of TCP. By implementing these RFCs, application acceleration solutions improve the performance of the underlying transport protocol used under the covers by HTTP, which in turn improves the performance of your applications. HTTP has no such list of RFCs, but it is a chatty protocol and there are ways to improve its performance through optimization as well, and these mechanisms are generally also implemented by application acceleration solutions.

Accelerates application delivery by making more efficient the protocols used to deliver applications

5. SSL Acceleration (Offloading, Acceleration)

When SSL is used to secure data in transit it can degrade performance and consume additional resources on servers. By placing the burden of negotiating SSL sessions and bulk encryption/decryption on an application acceleration solution, the server can reclaim the resources used to handle SSL. Most application acceleration solutions employ hardware-based acceleration to improve the performance of SSL, allowing such devices to support a much higher number of concurrent SSL-enabled connections than any single server. This improves the capacity of your application without requiring additional servers.

Reduces load by offloading responsibility for SSL
Accelerates application delivery by increasing the performance of SSL through hardware acceleration
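Here is a small, simulated sketch of the TCP multiplexing idea from point 2: many short-lived client requests are funneled over a handful of pre-established, reusable server-side connections, so the servers see only a few connection setups instead of one per request. The classes and pool size are invented for illustration; a real proxy manages actual sockets, keep-alive timers, and per-server limits.

```python
import queue

POOL_SIZE = 4   # a handful of long-lived, kept-alive connections to the application servers

class ServerConnection:
    """Stands in for a pre-established TCP connection to one pool member."""
    opened = 0                                   # how many real connections (3-way handshakes) were needed

    def __init__(self):
        ServerConnection.opened += 1
        self.id = ServerConnection.opened

    def send(self, request: str) -> str:
        return f"response to {request!r} over server connection {self.id}"

class Multiplexer:
    """Funnel many short-lived client connections over a few reusable server connections."""

    def __init__(self):
        self.idle = queue.Queue()
        for _ in range(POOL_SIZE):
            self.idle.put(ServerConnection())    # opened once, ahead of time

    def proxy(self, client_request: str) -> str:
        conn = self.idle.get()                   # borrow an already-open connection
        try:
            return conn.send(client_request)
        finally:
            self.idle.put(conn)                  # hand it back; no open/close per request

mux = Multiplexer()
for i in range(1000):                            # 1,000 client requests arrive and are proxied
    mux.proxy(f"GET /page{i}")
print("client requests served:", 1000, "| server connections opened:", ServerConnection.opened)
```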
By reducing load on servers, capacity is increased. When the capacity of each individual server is increased, it allows you to reduce the total number of servers because you can handle the same volume with fewer servers. This enables you to consolidate servers (or just stave off the purchase of new ones) while simultaneously improving the performance of your web applications. Virtualization is one way to consolidate servers when you have extra cycles laying around to spare. But if you don't and are still tasked with consolidating servers, consider application acceleration solutions as an alternative to meeting your goals. Need an example? Here's one in which application acceleration technology reduced server load by 50%, lowered bandwidth usage by 20% to 50%, and reduced download times by 20%.