Filling the SLA Gap in the Cloud
#webperf #ado Meeting user expectations of fast and available applications becomes more difficult as you relinquish more and more control…

User expectations with respect to performance are always a concern for IT. Whether it's monitoring performance or responding to a fire drill because an application is "slow", IT is ultimately responsible for maintaining the consistent levels of performance expected by end-users – whether internal or external. Virtualization and cloud computing introduce a variety of challenges for operations whose primary focus is performance. From lack of visibility to lack of control, dealing with performance issues is getting more and more difficult.

The situation is one of which IT is acutely aware. A ServicePilot Technologies survey (2011) cites virtualization, the pace of emerging technology, lack of visibility and inconsistent service models as challenges to discovering the root cause of application performance issues. Visibility, unsurprisingly, was cited as the biggest challenge, with 74% of respondents checking it off.

These challenges are not unrelated. Virtualization's tendency toward east-west traffic patterns can inhibit visibility, with few solutions available to monitor traffic between virtual machines deployed on the same physical machine. Cloud computing – highly virtual in both form factor and in model – contributes to the lack of visibility as well as to the challenges associated with disconnected service models, as enterprise and cloud computing providers rarely leverage the same monitoring systems. Most disturbing, all these challenges contribute to an expanding gap between performance expectations (SLA) and the ability of IT to address application performance issues, especially in the cloud.

YES, YET ANOTHER GAP

There are many "gaps" associated with virtualization and cloud computing: the gap between dev and ops, the gap between ops and the network, the gap between the scalability of operations and the volatility of the network. The gap between application performance expectations and the ability to affect it is just another example of how technology designed to solve one problem can often illuminate or even create another.

Unfortunately for operations, application performance is critical. Degrading performance impacts reputation, productivity, and ultimately the bottom line. It increases IT costs as end-users phone the help desk and redirects resources from other, just as important, tasks toward solving the problem, ultimately delaying other projects. This gap is not one that can be ignored or put off or dismissed with a "we'll get to that". Application performance always has been – and will continue to be – a primary focus for IT operations.

An even bigger challenge than knowing there's a performance problem is what to do about it – particularly in a cloud computing environment where tweaking QoS policies just isn't an option. What IT needs – both in the data center and in the cloud – is a single, strategic point of control at which to apply services designed to improve performance at three critical points in the delivery chain: the front, middle, and back-end.

FILLING THE GAP IN THE CLOUD

Such a combined performance solution is known as ADO – Application Delivery Optimization – and it uses a variety of acceleration and optimization techniques to fill the gap between SLA expectations and the lack of control in cloud computing environments.
A single, strategic implementation and enforcement point for such policies is necessary in cloud computing (and highly volatile virtualized) environments because of the topological challenges created by the core model. Not only is the reality of application instances (virtual machines) popping up and moving around problematic, but the same occurs with virtualized network appliances and services designed to address specific pain points involving performance. Dealing with a topologically mobile architecture point by point – particularly in public cloud computing environments – is likely to prove more trouble than it's worth. A single, unified ADO solution, however, provides a single control plane through which optimizations and enhancements can be applied across all three critical points in the delivery chain – without the topological obstacles.

By leveraging a single, strategic point of control, operations is able to leverage the power of dynamism and context to ensure that the appropriate performance-related services are applied intelligently. That means not applying compression to already compressed content (such as JPEG images) and recognizing the unique quirks of browsers when used on different devices. ADO further enhances load balancing services by providing performance-aware algorithms and network-related optimizations that can dramatically impact the load and thus the performance of applications.

What's needed to fill the gap between user expectations and actual performance in the cloud is the ability of operations to apply appropriate services with alacrity. Operations needs a simple yet powerful means by which performance-related concerns can be addressed in an environment where visibility into the root cause is likely extremely limited. A single service solution that can simultaneously address all three delivery chain pain points is the best way to accomplish that and fill the gap between expectations and reality.

The Need for (HTML5) Speed
Mobile versus Mobile: An Identity Crisis
SPDY versus HTML5 WebSockets
WILS: WPO versus FEO
WILS: Application Acceleration versus Optimization
F5 Friday: F5 Application Delivery Optimization (ADO)
The Four V's of Big Data
The "All of the Above" Approach to Improving Application Performance
Y U No Support SPDY Yet?

Y U No Support SPDY Yet?
#fasterapp #ado #interop Mega-sites like Twitter and popular browsers are all moving to support SPDY – but there's one small glitch in the game plan…

SPDY is gaining momentum as "big" sites begin to enable support for the would-be HTTP 2.0 protocol of choice. Most recently Twitter announced its support for SPDY:

Twitter has embraced Google's vision of a faster web and is now serving webpages over the SPDY protocol to browsers that support it. SPDY is still only available for about 40 percent of desktop users. But with large services like Twitter throwing their weight behind it, SPDY may well start to take the web by storm — the more websites that embrace SPDY the more likely it is that other browsers will add support for the faster protocol.
-- Twitter Catches the 'SPDY' Train

But even with existing support from Google (where the protocol originated) and Amazon, there's a speed bump on the road to fast for SPDY:

There is not yet any spdy support for the images on the Akamai CDN that twitter uses, and that's obviously a big part of performance. But still real deployed users of this are Twitter, Google Web, Firefox, Chrome, Silk, node etc.. this really has momentum because it solves the right problems. Big pieces still left are a popular CDN, open standardization, a httpload balancing appliance like F5 or citrix. And the wind is blowing the right way on all of those things. This is happening very fast.
-- Twitter, SPDY, and Firefox

The "big pieces" missing theme is one that's familiar at this point; many folks are citing the lack of SPDY support at various infrastructure layers (in particular the application delivery tier and load balancing services) as an impediment to embracing what is certainly the early frontrunner in the HTTP 2.0 War. While mod_spdy may address the basic requirement to support SPDY at the web server infrastructure layer, mod_spdy does not (and really cannot) address the lack of support in the rest of the infrastructure.

Like the IPv4 to IPv6 transition, the move to support SPDY is transitory in nature and counts on HTTP 2.0 adopting SPDY as part of its overhaul of the aging HTTP protocol. Thus, infrastructure must maintain a dual SPDY-HTTP stack, much in the same way it must support both IPv4 and IPv6 until adoption is complete – or a firm cutover date is arrived at (unlikely).

U No Support SPDY because SPDY Support is No Easy

The use of SPDY is transparent to the user. But for IT, the process of supporting SPDY is no simple task. To understand the impact on infrastructure, first take a look at the typical steps a browser uses to switch to SPDY from HTTP:

1. The first request to a server is sent over HTTP.
2. The server-side end-point (server or application delivery service), when it supports SPDY, can reply with an HTTP header 'Alternative-Protocols: 443:npn-spdy/2', indicating that the content can also be retrieved on this server on port 443, and that on that port there is a TLS 1.2 endpoint that understands NPN and SPDY can be negotiated.
3. Alternatively, the server can just 301 redirect the user to an https end-point.
4. The client starts a TLS connection to the port described and, when the server-side end-point indicates SPDY as part of the TLS NPN extension, it will use SPDY on that connection.

What this means is that infrastructure must be updated not just to handle SPDY – which has some unique behavior in its bi-directional messaging model – but also TLS 1.2 and NPN (Next Protocol Negotiation).
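For a concrete sense of what that negotiation looks like from the client side, here is a minimal sketch of the two advertisement paths described above. It uses only Python's standard library; the host name is a placeholder, and NPN support in the ssl module depends on the underlying OpenSSL build, so treat this as illustrative rather than production code.

```python
import socket
import ssl

HOST = "www.example.com"  # placeholder; substitute a SPDY-capable endpoint

# Steps 1-2: a plain HTTP request; a SPDY-capable endpoint may advertise the
# alternate protocol via the Alternative-Protocols response header.
def check_alternative_protocols(host):
    with socket.create_connection((host, 80)) as s:
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() +
                  b"\r\nConnection: close\r\n\r\n")
        response = s.recv(4096).decode("iso-8859-1", errors="replace")
    for line in response.split("\r\n"):
        if line.lower().startswith("alternative-protocols:"):
            return line.split(":", 1)[1].strip()
    return None

# Step 4: negotiate SPDY over TLS using NPN (Next Protocol Negotiation).
def negotiate_npn(host):
    if not ssl.HAS_NPN:                 # NPN support depends on the OpenSSL build
        raise RuntimeError("this OpenSSL build has no NPN support")
    ctx = ssl.create_default_context()
    ctx.set_npn_protocols(["spdy/2", "http/1.1"])  # offer SPDY, fall back to HTTP/1.1
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.selected_npn_protocol()      # e.g. "spdy/2" if the server agreed

if __name__ == "__main__":
    print("Alternative-Protocols:", check_alternative_protocols(HOST))
    print("Negotiated protocol:", negotiate_npn(HOST))
```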
This is no trivial task, mind you, and it's not only relevant to infrastructure; it's relevant to site administrators who want to enable SPDY support. Doing so has several pre-requisites that will make the task a challenging one, including supporting TLS 1.2 and the NPN extension, and careful attention to links served, which must use HTTPS instead of HTTP. While you technically can force SPDY to be used without TLS, this is not something a typical user will (or in the case of many mobile platforms, can) attempt. For testing purposes, running SPDY without TLS can be beneficial, but for general deployment? SSL Everywhere will become a reality.

Doing so without negatively impacting performance, however, is going to be a challenge.

Establishing a secure SSL connection requires 15x more processing power on the server than on the client.
-- THC SSL DOS Tool Released

The need to support secure connections becomes a big issue for sites desiring to support both HTTP and HTTPS without forcing SSL everywhere on every user, because there must be a way to rewrite links from HTTP to HTTPS (or vice-versa) without incurring a huge performance penalty. Network-side scripting provides a solution to this quandary, with many options available – all carrying varying performance penalties, from minimal to unacceptable.

PROS and CONS

Basically, there are a whole lot of pros to SPDY – and a whole lot of cons. Expect to see more sites – particularly social media focused sites like Facebook – continue to jump on the SPDY bandwagon. HTTP was not designed to support the real-time interaction inherent in social media today, and its bursty, synchronous nature often hinders the user experience. SPDY has shown the highest benefits to just such sites – highly interactive applications with lots of small object transfers – so we anticipate a high rate of early adoption amongst those who depend on such applications to support their business models.

Supporting SPDY will be much easier for organizations that take advantage of an intelligent intermediary. Such solutions will be able to provide support for both HTTP and SPDY simultaneously – including all the pre-requisite capabilities. Using an intermediary application delivery platform further alleviates the need to attempt to deploy pre-production quality modules on critical application server infrastructure, and reduces the burden on operations to manage certificate sprawl (and all the associated costs that go along with that).

SPDY is likely to continue to gain more and more momentum, especially in the mobile device community. Unless Microsoft's Speed+Mobility offers a compelling reason it should ascend to the top of the HTTP 2.0 stack over SPDY, it's likely that supporting SPDY will become a "must do" instead of a "might do" sooner rather than later.

SPDY Momentum Fueled by Juggernauts
Introducing mod_spdy, a SPDY module for the Apache HTTP server
The HTTP 2.0 War has Just Begun
The Chromium Projects: SPDY
Oops! HTML5 Does It Again
F5 Friday: Mitigating the THC SSL DoS Threat
SPDY - without TLS?
Web App Performance: Think 1990s.
Google SPDY Protocol Would Require Mass Change in Infrastructure

SPDY versus HTML5 WebSockets
#HTML5 #fasterapp #webperf #SPDY So much alike, yet such a vastly different impact on the data center…

A recent post on the beginning of the HTTP 2.0 War garnered a very relevant question regarding WebSockets and where it fits in (what might shape up to be) an epic battle. The answer to the question, "Why not consider WebSockets here?" could be easily answered with two words: HTTP headers. It could also be answered with two other words: infrastructure impact. But I'm guessing Nagesh (and others) would like a bit more detail on that, so here comes the (computer) science.

Different Solutions Have Different Impacts

Due to a simple (and yet profound) difference between the two implementations, WebSockets is less likely to make an impact on the web (and yet more likely to make an impact inside data centers, but more on that another time). Nagesh is correct in that in almost all the important aspects, WebSockets and SPDY are identical (if not in implementation, in effect).

Both are asynchronous, which eliminates the overhead of "polling" generally used to simulate "real time" updates a la Web 2.0 applications.

Both use only a single TCP connection. This also reduces overhead on servers (and infrastructure), which can translate into better performance for the end-user.

Both can make use of compression (although only via extensions in the case of WebSockets) to reduce the size of data transferred, resulting, one hopes, in better performance, particularly over more constrained mobile networks.

Both protocols operate "outside" HTTP and use an upgrade mechanism to initiate. While WebSockets uses the HTTP connection header to request an upgrade, SPDY uses Next Protocol Negotiation (a proposed enhancement to the TLS specification). This mechanism engenders better backwards-compatibility across the web, allowing sites to support both next-generation web applications as well as traditional HTTP.

Both specifications are designed, as pointed out, to solve the same problems. And both do, in theory and in practice. The difference lies in the HTTP headers – or the lack thereof in the case of WebSockets.

Once established, WebSocket data frames can be sent back and forth between the client and the server in full-duplex mode. Both text and binary frames can be sent full-duplex, in either direction at the same time. The data is minimally framed with just two bytes. In the case of text frames, each frame starts with a 0x00 byte, ends with a 0xFF byte, and contains UTF-8 data in between. WebSocket text frames use a terminator, while binary frames use a length prefix.
-- HTML5 Web Sockets: A Quantum Leap in Scalability for the Web

WebSockets does not use HTTP headers; SPDY does. This seemingly simple difference has an inversely proportional impact on supporting infrastructure.

The Impact on Infrastructure

The impact on infrastructure is why WebSockets may be more trouble than it's worth – at least when it comes to public-facing web applications. While both specifications will require gateway translation services until (if) they are fully adopted, WebSockets has a much harsher impact on the intervening infrastructure than does SPDY. WebSockets effectively blinds infrastructure. IDS, IPS, ADC, firewalls, anti-virus scanners – any service which relies upon HTTP headers to determine the specific content type or location (URI) of the object being requested – is unable to inspect or validate requests due to the lack of HTTP headers.
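Both upgrade paths start out as ordinary HTTP, which is part of what makes them hard for infrastructure to police. As a point of reference for the WebSocket side of that comparison, here is a minimal sketch of the RFC 6455-style opening handshake; the endpoint name is a placeholder and the code is illustrative only.

```python
import base64
import hashlib
import os
import socket

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from the WebSocket spec

def websocket_handshake(host, port=80, path="/"):
    # The client's opening request is plain HTTP with Upgrade/Connection headers.
    key = base64.b64encode(os.urandom(16)).decode()
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n\r\n"
    )
    # The server proves it understood by echoing SHA1(key + GUID) back, base64-encoded.
    expected = base64.b64encode(
        hashlib.sha1((key + WS_GUID).encode()).digest()).decode()

    with socket.create_connection((host, port)) as s:
        s.sendall(request.encode())
        response = s.recv(4096).decode("iso-8859-1", errors="replace")

    upgraded = response.startswith("HTTP/1.1 101") and expected in response
    return upgraded, response.split("\r\n")[0]

if __name__ == "__main__":
    # "ws.example.com" is hypothetical; substitute a real WebSocket endpoint.
    print(websocket_handshake("ws.example.com"))
```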
Now, SPDY doesn't make it easy – HTTP request headers are compressed – but it doesn't make it nearly as hard, because gzip is pretty well understood and even intermediate infrastructure can deflate and recompress with relative ease (and without needing special data, such as is the case with SSL/TLS and certificates). Let me stop for a moment and shamelessly quote myself from a blog on this very subject, "Oops! HTML5 Does it Again":

One of the things WebSockets does to dramatically improve performance is eliminate all those pesky HTTP headers. You know, things like CONTENT-TYPE. You know, the header that tells the endpoint what kind of content is being transferred, such as text/html and video/avi. One of the things anti-virus and malware scanning solutions are very good at is detecting anomalies in specific types of content. The problem is that without a MIME type, the ability to correctly identify a given object gets a bit iffy. Bits and bytes are bytes and bytes, and while you could certainly infer the type based on format "tells" within the actual data, how would you really know? Sure, the HTTP headers could be lying, but generally speaking the application serving the object doesn't lie about the type of data and it is a rare vulnerability that attempts to manipulate that value. After all, you want a malicious payload delivered via a specific medium, because that's the cornerstone upon which many exploits are based – execution of a specific operation against a specific manipulated payload. That means you really need the endpoint to believe the content is of the type it thinks it is. But couldn't you just use the URL? Nope – there is no URL associated with objects via a WebSocket. There is also no standard application information that next-generation firewalls can use to differentiate the content; developers are free to innovate and create their own formats and micro-formats, and undoubtedly will. And trying to prevent its use is nigh-unto impossible because of the way in which the upgrade handshake is performed – it's all over HTTP, and stays HTTP. One minute the session is talking understandable HTTP, the next they're whispering in Lakota, a traditionally oral-only language which neatly illustrates the overarching point of this post thus far: there's no way to confidently know what is being passed over a WebSocket unless you "speak" the language used, which you may or may not have access to.

The result of all this confusion is that security software designed to scan for specific signatures or anomalies within specific types of content can't. They can't extract the object flowing through a WebSocket because there's no indication of where it begins or ends, or even what it is. The loss of HTTP headers that indicate not only type but length is problematic for any software – or hardware for that matter – that uses the information contained within to extract and process the data.

SPDY, however, does not eliminate these Very-Important-to-Infrastructure-Services HTTP headers; it merely compresses them. Which makes SPDY a much more compelling option than WebSockets. SPDY can be enabled for an entire data center via the use of a single component: a SPDY gateway. WebSockets ostensibly requires the upgrade or replacement of many more infrastructure services and introduces risks that may be unacceptable to many organizations.
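A toy illustration of that blindness: given an HTTP message, an inspection service can pull the declared type and length straight from the headers; given a WebSocket data frame, there is nothing comparable to anchor a signature or policy to. The frame bytes below follow the RFC 6455 layout and are purely illustrative.

```python
# A toy "inspector": with HTTP it can classify the payload from headers;
# with a WebSocket frame it sees only opaque bytes.

def inspect_http(message: bytes):
    head, _, body = message.partition(b"\r\n\r\n")
    headers = {}
    for line in head.split(b"\r\n")[1:]:          # skip the status line
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    return {
        "content_type": headers.get(b"content-type", b"?").decode(),
        "declared_length": headers.get(b"content-length", b"?").decode(),
        "body_bytes": len(body),
    }

def inspect_websocket_frame(frame: bytes):
    # No MIME type, no URI, no object boundaries -- just bytes whose meaning
    # is private to the application on each end of the socket.
    return {"content_type": "unknown", "body_bytes": len(frame)}

http_msg = (b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"
            b"Content-Length: 13\r\n\r\n<p>hello</p>\n")
ws_frame = b"\x81\x05hello"   # an unmasked RFC 6455 text frame carrying "hello"

print(inspect_http(http_msg))             # type and length are visible to policy
print(inspect_websocket_frame(ws_frame))  # nothing to anchor a signature to
```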
And thus my answer to the question "Why not consider WebSockets here?" is simply this: while the end result (better performance) of implementing the two may be the same, WebSockets is unlikely to gain widespread acceptance as the protocol du jour for public-facing web applications due to the operational burden it imposes on the rest of the infrastructure. That doesn't mean it won't gain widespread acceptance inside the enterprise. But that's a topic for another day…

HTML5 Web Sockets: A Quantum Leap in Scalability for the Web
Oops! HTML5 Does it Again
The HTTP 2.0 War has Just Begun
Fire and Ice, Silk and Chrome, SPDY and HTTP
Grokking the Goodness of MapReduce and SPDY
Google SPDY Protocol Would Require Mass Change in Infrastructure

F5 Friday: F5 Application Delivery Optimization (ADO)
#ado #fasterapp #webperf The "all of the above" approach to improving application performance

A few weeks ago (at Interop 2012 to be exact) F5 announced its latest solution designed to improve application performance. One facet of this "all of the above" approach is a SPDY gateway. Because of the nature of SPDY and the need for a gateway-based architectural approach to adoption, this piece of the announcement became a focal point. But lest you think the entire announcement (and F5's entire strategy) revolves around SPDY, let's take a moment to consider the overall solution.

F5 ADO is a comprehensive approach to optimizing application delivery, i.e. it makes apps go faster. It accomplishes this seemingly impossible feat by intelligently applying accelerating technologies and policies at a strategic point of control in the network, the application delivery tier. Because of its location in the architecture, a BIG-IP has holistic visibility; it sees and understands factors on the client, in the network, and in the server infrastructure that are detrimental to application performance. By evaluating each request in the context it was made, BIG-IP can intelligently apply a wide variety of optimization and acceleration techniques that improve performance. These range from pure client-side (FEO) techniques to more esoteric server-side techniques. Being able to evaluate requests within context means BIG-IP can apply the technology or policy appropriate for that request to address specific pain points or challenges that may impede performance.

Some aspects of ADO may seem irrelevant. After all, decreasing the size of a JavaScript by a couple of KB isn't really going to have all that much impact on transfer times. But it does have a significant impact on the parsing time on the client, which, whether we like it or not, is one piece of the equation that counts from an end-user perspective, because it directly impacts the time it takes to render a page and be considered "loaded". So if we can cut that down through minification or front-loading the scripts, we should – especially when we know clients are on a device with constrained CPU cycles, like most mobile platforms.

But it's important to recognize when applying technologies might do more harm than good. Clients connecting over the LAN or even via WiFi do not have the same characteristics as those connecting over the Internet or via a mobile network. "Optimization" of any kind that takes longer than it would to just transfer the entire message to the end-user is bad; it makes performance worse for clients, which is counter to the intended effect. Context allows BIG-IP to know when to apply certain techniques – and when not to apply them – for optimal performance.

By using an "all of the above" approach to optimizing and accelerating delivery of applications, F5 ADO can increase the number of milliseconds shaved off the delivery of applications. It makes the app go faster. I could go into details about each and every piece of F5 ADO, but that would take thousands of words. Since a picture is worth a thousand words (sometimes more), I'll just leave you with a diagram and a list of resources you can use to dig deeper into F5 ADO and its benefits to application performance.

Resources:
The "All of the Above" Approach to Improving Application Performance
Y U No Support SPDY Yet?
Stripping EXIF From Images as a Security Measure
F5's Application Delivery Optimization – SlideShare Presentation
Application Delivery Optimization – White Paper
Interop 2012 - Application Delivery Optimization with F5's Lori MacVittie – Video
When Big Data Meets Cloud Meets Infrastructure
F5 Friday: Ops First Rule
New Communications = Multiplexification
F5 Friday: Are You Certifiable?
The HTTP 2.0 War has Just Begun
Getting Good Grades on your SSL
WILS: The Many Faces of TCP
WILS: WPO versus FEO
The Three Axioms of Application Delivery

The Need for (HTML5) Speed
#mobile #HTML5 #webperf #fasterapp #ado The importance of understanding acceleration techniques in the face of increasing mobile and HTML5 adoption

An old English proverb observes that "Even a broken clock is right twice a day." A more modern idiom involves a blind squirrel and an acorn, and I'm certain there are many other culturally specific nuggets of wisdom that succinctly describe what is essentially blind luck. The proverb and modern idioms fit the case of modern acceleration techniques as applied to content delivered to mobile devices well. A given configuration of options and solutions may inadvertently be "right" twice a day purely by happenstance, but the rest of the time they may not be doing all that much good. With HTML5 adoption increasing rapidly across the globe, the poor performance of parsing on mobile devices will require more targeted and intense use of acceleration and optimization solutions.

THE MOBILE LAST MILES

One of the reasons content delivery to mobile devices is so challenging is the number of networks and systems through which the content must flow. Unlike content delivered to WiFi-connected devices, which traverses controllable networks as well as the Internet, content delivered to mobile devices connected via carrier networks must also traverse the mobile (carrier) network. Add to that challenge the constrained processing power of mobile devices imposed by carriers and manufacturers alike, and delivering content to these devices in an acceptable timeframe becomes quite challenging.

Organizations must contend not only with network conditions across three different networks but also with the capabilities and innate limitations of the devices themselves. Such limitations include processing capabilities, connection models, and differences in web application support. Persistence and in-memory caching are far more limited on mobile devices, making reliance on traditional caching strategies as a key component of acceleration techniques less than optimal. Compression and de-duplication of data, even over controlled WAN links when mobile devices are in WiFi mode, may not be as helpful as they are for desktop and laptop counterparts given mobile hardware limitations.

Differences in connection models – on mobile devices connections are sporadic, shorter-lived, and ad-hoc – render traditional TCP-related enhancements ineffective. TCP slow-start mechanisms, for example, are particularly frustrating under the hood for mobile device connections because connections are constantly being dropped and restarted, forcing TCP to begin again very slowly. TCP, in a nutshell, was designed for fixed networks, not mobile networks.

A good read on this topic is Ben Strong's "Google and Microsoft Cheat on Slow-Start. Should You?" His testing (in 2010) showed both organizations push the limits for the IW (initial window) higher than the RFC allows, with Microsoft nearly ignoring the limitations altogether. Proposals to increase the IW in the RFC to 10 have been submitted, but thus far there does not appear to be consensus on whether or not to allow this change. Also not discussed is the impact of changing the IW on fixed (desktop, laptop, LAN) connected devices; the assumption being that the IW is specified as it is because it was optimal for fixed end-points, and changing it would be detrimental to performance for those devices. The impact of TCP on mobile performance (and vice-versa) should not be underestimated.
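A back-of-the-envelope sketch of why the initial window matters, especially when mobile connections keep dropping and forcing slow-start to begin again: the numbers below (object size, MSS, RTT) are illustrative assumptions, not measurements.

```python
import math

def slow_start_round_trips(object_bytes, initial_window_segments, mss=1460):
    """Round trips to deliver an object if the congestion window doubles
    every RTT starting from the initial window (idealized slow-start, no loss)."""
    segments_needed = math.ceil(object_bytes / mss)
    cwnd, sent, rtts = initial_window_segments, 0, 0
    while sent < segments_needed:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

page = 120 * 1024  # a hypothetical 120 KB response
for iw in (3, 4, 10):
    rtts = slow_start_round_trips(page, iw)
    # On a 200 ms mobile RTT, each extra round trip is another 0.2 seconds.
    print(f"IW={iw:>2}: {rtts} round trips (~{rtts * 0.2:.1f}s at 200 ms RTT)")
```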
CloudFlare has a great blog post on the impact of mobility on TCP-related performance, concluding that:

TCP would actually work just fine on a phone except for one small detail: phones don't stay in one location. Because they move around (while using the Internet) the parameters of the network (such as the latency) between the phone and the web server are changing and TCP wasn't designed to detect the sort of change that's happening.
-- CloudFlare blog: Why mobile performance is difficult

One answer is more intelligent intermediate acceleration components, capable of detecting not only the type of end-point initiating the connection (mobile or fixed) but actually doing something about it, i.e. manipulating the IW and other TCP-related parameters on the fly. Dynamically and intelligently.

Of course innate parsing and execution performance on mobile devices contributes significantly to the perception of performance on the part of the end-user. While HTML5 may be heralded as a solution to cross-platform, cross-environment compatibility issues, it brings to the table performance challenges that will need to be overcome.

http://thenextweb.com/dd/2012/05/22/html5-runs-up-to-thousands-of-times-slower-on-mobile-devices-report/

In the latest research by Spaceport.io on the performance of HTML5 on desktop vs smartphones, it appears that there are performance issues for apps, and in particular games, on mobile devices. Spaceport.io used its own Perfmarks II benchmarking suite to test HTML rendering techniques across desktop and mobile browsers. Its latest report says:

We found that when comparing top of the line, modern smartphones with modern laptop computers, mobile browsers were, on average, 889 times slower across the various rendering techniques tested. At best the iOS phone was roughly 6 times slower, and the best Android phone 10 times slower. At worst, these devices were thousands of times slower.

Combining the performance impact of parsing HTML5 on mobile devices with mobility-related TCP impacts paints a dim view of performance for mobile clients in the future. Especially as improving the parsing speed of HTML5 is (mostly) out of the hands of operators and developers alike. Very little can be done to impact the parsing speed aside from transformative acceleration techniques, many of which are often not used for fixed client end-points today. Which puts the onus back on operators to use the tools at their disposal (acceleration and optimization) to improve delivery as a means to offset and hopefully improve the overall performance of HTML5-based applications to mobile (and fixed) end-points.

DON'T RELY on BLIND LUCK

Organizations seeking to optimize delivery to mobile and traditional end-points need more dynamic and agile infrastructure solutions capable of recognizing the context in which requests are made and adjusting delivery policies – from TCP to optimization and acceleration – on-demand, as necessary, to ensure the best delivery performance possible. Such infrastructure must be able to discern whether the improvements from minification and image optimization will be offset by TCP optimizations designed for fixed end-points interacting with mobile end-points – and do something about it. It's not enough to configure a delivery chain composed of acceleration and optimization designed for delivery of content to traditional end-points, because the very same services that enhance performance for fixed end-points may be degrading performance for mobile end-points.
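What "recognizing the context and doing something about it" might look like at a policy level is easier to see in a sketch. This is a deliberately simplified, hypothetical example: the thresholds, profile names, and user-agent heuristic are all assumptions for illustration, not any vendor's API.

```python
# A simplified, hypothetical policy selector for an intermediary: choose
# delivery optimizations per request based on client and network context.

MOBILE_HINTS = ("iphone", "ipad", "android", "mobile")

def delivery_policy(user_agent: str, rtt_ms: float, content_type: str,
                    content_length: int) -> dict:
    mobile = any(hint in user_agent.lower() for hint in MOBILE_HINTS)
    already_compressed = content_type.startswith(("image/", "video/")) \
        or content_type in ("application/zip", "application/gzip")

    return {
        # Don't burn CPU (and time) compressing tiny or already-compressed payloads.
        "compress": (not already_compressed) and content_length > 1400 and rtt_ms > 10,
        # Minification and image optimization pay off most on constrained clients.
        "minify_and_optimize_images": mobile or rtt_ms > 100,
        # Hypothetical TCP profile names standing in for differently tuned stacks.
        "tcp_profile": "tcp-mobile-tuned" if mobile else "tcp-lan-default",
    }

print(delivery_policy("Mozilla/5.0 (iPhone ...)", rtt_ms=220,
                      content_type="text/html", content_length=48_000))
print(delivery_policy("Mozilla/5.0 (Windows ...)", rtt_ms=2,
                      content_type="image/jpeg", content_length=900))
```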
It may be that twice a day, like a broken clock, the network and end-point parameters align in such a way that the same services enhance performance for both fixed and mobile end-points. But relying on such a convergence of conditions as a performance management strategy is akin to relying on blind luck.

Addressing mobile performance requires a more thorough understanding of acceleration techniques – particularly from the perspective of what constraints they best address and under what conditions. Trying to leverage the browser cache, for example, is a great way to improve fixed end-point performance, but it may backfire on mobile devices because of limited capabilities for caching. On the other hand, HTML5 introduces client-side cache APIs that may be useful, but they are different enough from previous HTTP caching directives that supporting both will require planning and a flexible infrastructure for execution. In many ways this API will provide opportunities to better leverage client-side caching capabilities, but it will require infrastructure support to ensure targeted caching policies can be implemented.

As HTML5 continues to become more widely deployed, it's important to understand the various acceleration and optimization techniques, what each is designed to overcome, and what networks and platforms they are best suited to serve in order to overcome the inherent limitations of HTML5 and the challenge of mobile delivery.

Google and Microsoft Cheat on Slow-Start. Should You?
HTML5 runs up to thousands of times slower on mobile devices: Report
Application Security is a Stack
Y U No Support SPDY Yet?
The "All of the Above" Approach to Improving Application Performance
What Does Mobile Mean, Anyway?
The HTTP 2.0 War has Just Begun

The "All of the Above" Approach to Improving Application Performance
#ado #fasterapp #stirling Carnegie Mellon testing of ADO solutions answers the age-old question: less filling or tastes great?

You probably recall years ago the old "Tastes Great vs Less Filling" advertisements. The ones that always concluded in the end that the beer in question was not one or the other, but both. Whenever there are two ostensibly competing technologies attempting to solve the same problem, we run into the same old style argument. This time, in the SPDY versus Web Acceleration debate, we're inevitably going to arrive at the conclusion that it's both less filling and tastes great.

SPDY versus Web Acceleration

In general, what may appear on the surface to be competing technologies are actually complementary. Testing by Carnegie Mellon supports this conclusion, showing marked improvements in web application performance when both SPDY and Web Acceleration techniques are used together. That's primarily because web application traffic shows a similar pattern across modern, interactive Web 2.0 sites: big, fat initial pages with a subsequent steady stream of small requests and a variety of response sizes, typically small to medium in content length.

We know from experience and testing that web acceleration techniques like compression provide the greatest improvements in performance when acting upon medium-large sized responses, though actual improvement rates depend highly on the network over which data is being exchanged. We know that compression can actually be detrimental to performance when responses are small (in the 1K range) and being transferred over a LAN. That's because the processing time incurred to compress that data is greater than the time to traverse the network. But when used to compress larger responses traversing congested or bandwidth constrained connections, compression is a boon to performance. It's less filling.

SPDY, though relatively new on the scene, is the rising star of web acceleration. Its primary purpose is to optimize the application layer exchanges that typically occur via HTTP (requests and responses) by streamlining connection management (SPDY only uses one connection per client-host), dramatically reducing header sizes, and introducing asynchronicity along with prioritization. It tastes great.

What Carnegie Mellon testing shows is that when you combine the two, you get the best results, because each improves the performance of specific data exchanges that occur over the life of a user interaction.

HERE COMES the DATA

The testing was specifically designed to measure the impact of each of the technologies separately and then together. For the web acceleration functionality they chose to employ BoostEdge (a software ADC), though one can reasonably expect similar results from other ADCs provided they offer the same web acceleration and optimization capabilities, which is generally a good bet in today's market. The testing specifically looked at two approaches:

Two of the most promising software approaches are (a) content optimization and compression, and (b) optimizing network protocols. Since network protocol optimization and data optimization operate at different levels, there is an opportunity for improvement beyond what can be achieved by either of the approaches individually. In this paper, we report on the performance benefits observed by following a unified approach, using both network protocol and data optimization techniques, and the inherent benefits in network performance by combining these approaches in to a single solution.
-- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012
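Before getting to the results, the "less filling" half of that combination – the compression tradeoff described above – is easy to make concrete. A back-of-the-envelope sketch; all figures (compression ratio, CPU cost, link speeds) are illustrative assumptions, not numbers from the paper.

```python
# Back-of-envelope: is compressing this response worth it?

def transfer_ms(size_bytes, link_mbps):
    return (size_bytes * 8) / (link_mbps * 1_000_000) * 1000

def worth_compressing(size_bytes, ratio, compress_ms, link_mbps):
    """Compare time-to-send uncompressed vs. compress-then-send."""
    plain = transfer_ms(size_bytes, link_mbps)
    squeezed = compress_ms + transfer_ms(size_bytes * ratio, link_mbps)
    return round(plain, 3), round(squeezed, 3)

# 1 KB object on a gigabit LAN: the compression time dwarfs the transfer time.
print(worth_compressing(1_000, ratio=0.5, compress_ms=1.0, link_mbps=1000))
# 200 KB HTML over a congested 2 Mbps mobile link: compression wins by a wide margin.
print(worth_compressing(200_000, ratio=0.25, compress_ms=8.0, link_mbps=2))
```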
The results should not be surprising – when SPDY is combined with ADC optimization technologies, the result is both less filling and it tastes great.

When tested in various combinations, we find that the effects are more or less additive and that the maximum improvement is gained by using BoostEdge and SPDY together. Interestingly, the two approaches are also complimentary; i.e., in situations where data predominates (i.e. "heavy" data, and fewer network requests), BoostEdge provides a larger boost via its data optimization capabilities and in cases where the data is relatively small, or "light", but there are many network transactions required, SPDY provides an increased proportion of the overall boost. The general effect is that relative level of improvement remains consistent over various types of websites.
-- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012

NOT THE WHOLE STORY

Interestingly, the testing results show there is room for even greater improvement. The paper notes that comparisons include "an 11% cost of running SSL" (p 13). This is largely due to the use of a software ADC solution, which employs no specialized hardware to address the latency incurred by compute-intense processing such as compression and cryptography. Leveraging a hardware ADC with cryptographic hardware and compression acceleration capabilities should further improve results by reducing the latency incurred by both processes.

Testers further did not test under load (which would have a significant negative impact on the results) or leverage other proven application delivery optimization (ADO) techniques such as TCP multiplexing. While the authors mention the use of front-end optimization (FEO) techniques such as image optimization and client-side cache optimization, it does not appear that these were enabled during testing. Other techniques such as EXIF stripping, CSS caching, domain sharding, and core TCP optimizations are likely to provide even greater benefits to web application performance when used in conjunction with SPDY.

CHOOSE "ALL of the ABOVE"

What the testing concluded was that an "all of the above" approach would appear to net the biggest benefits in terms of application performance. Using SPDY along with complementary ADO technologies provides the best mitigation of latency-inducing issues that ultimately degrade the end-user experience. Ultimately, SPDY is one of a plethora of ADO technologies designed to streamline and improve web application performance. Like most ADO technologies, it is not universally beneficial to every exchange. That's why a comprehensive, inclusive ADO strategy is necessary. Only an approach that leverages "all of the above" at the right time and on the right data will net optimal performance across the board.

The HTTP 2.0 War has Just Begun
Stripping EXIF From Images as a Security Measure
F5 Friday: Domain Sharding On-Demand
WILS: WPO versus FEO
WILS: The Many Faces of TCP
HTML5 Web Sockets Changes the Scalability Game
Network versus Application Layer Prioritization
The Three Axioms of Application Delivery

Speed Matters, but Dev Speed or App Speed?
In running, speed matters. But how the speed matters is very important, and what type of running is your forte should determine what you are involved in. As a teen, I was never a very good sprinter. I just didn't get up to speed fast enough, and was consistently overcome by more nimble opponents. But growing up on a beach was perfect conditioning for cross country track. Running five miles in beach sand that gave way underfoot and drained your energy much faster than it allowed you to move forward was solid practice for running through the woods mile after mile. And I wasn't a bad runner – not a world champion to be sure – but I won more often than I lost when the "track" was ten or fifteen miles through the woods.

The same is true of mobile apps, though most organizations don't seem to realize it yet. There are two types of mobile apps – those that are developed for sprinting, by getting them to market rapidly, and those that are developed for the long haul, by implementing solutions based around the platform in question. By "platform" in this case, I mean the core notions of "mobile" apps – wireless, limited resources, touch interfaces, and generally different use cases than a laptop or desktop machine.

It is certainly a more rapid go-to-market plan to have an outsourcer of some kind dump your existing HTML into an "app" or develop a little HTML5 and wrap it in an "app", but I would argue that the goals of such an endeavor are short term. Much like sprinting, you'll get there quickly, but then the race is over. How the judges (customers in this case) gauge the result is much more important. There are three basic bits to judging in mobile apps – ease of use, which is usually pretty good in a wrapped HTML or "hybrid" app; security, which is usually pretty horrendous in a hybrid app; and performance, which is usually pretty horrendous in a hybrid app. The security bit could be fixed with some serious security folks looking over the resultant application, but the performance issue is not so simple. You see, the performance of a hybrid application is a simple equation:

Speed of original web content + overhead of a cell phone + overhead of the app wrapper around the HTML.

Sure, you'll get faster development time wrapping HTML pages in an app, but you'll get worse long-term performance. Kind of the same issue you get when a sprinter tries to run cross country. They rock for the first while, but burn out before the cross country racers are up to speed. You can use tools like our Application Delivery Optimization (ADO) engine to make the wrapped app perform better, but that's not a panacea. Longer term it will be necessary to develop a more targeted, comprehensive solution. Because when you need a little bit of data and could wrap display functionality around it on the client side, transferring that display functionality and then trying to make it work in a client is pure overhead. Overhead that must be transmitted on a slower network over what is increasingly a pay-as-you-go bandwidth model. Even if the application somehow performs adequately, apps that are bandwidth hogs are not going to be repaid with joy as increasing numbers of carriers drop unlimited bandwidth plans.

So before you shell out the money for an intermediate step, stop and consider your needs. Enterprises are being beaten about the head and shoulders with claims that if you don't have a mobile app, you're doomed. Think really carefully before you take the chicken-little mentality to heart. Are your customers demanding an app?
If so, are they demanding it right this instant? If so, perhaps a hybrid app is a good option, if you're willing to spend whatever it costs to get it developed only to rewrite the app native in six or ten months. Take a look at the Play store or the Apple store, and you'll see that just throwing an app out there is not enough. You need to develop a method to let your customers know it's available, and it has to offer them… Something. If you can't clearly define both of those requirements, then you can't clearly define what you need, and should take a deep breath while considering your options.

Let's say you have a web-based calculator for mortgage interest rates. It is calling web services to do the interest rate calculations. For not much more development time, it is possible to build a very sweet version of the same calculator in native mode for either iPhones or Android (depending upon your platform priorities, could be either), with a larger up-front investment but less long-term investment, by re-using those web services calls from within the mobile app. A little more money now, and no need to rewrite for better performance or targeting mobile in the future? Take the little extra hit now and do it right. There are plenty of apps out there, and unless you can prove you're losing money every day over the lack of a mobile app, no one will notice that your application came out a month or two later – but they will notice how cool it is.

While we're on the topic, I hate to burst any bubbles, but not every single website needs a dedicated app. We have to get over the hype bit and get to reality. Most people do not want 50 reader apps on their phone, each one just a simple hybrid shell to allow easier reading of a single website. They just don't. So consider whether you even need an app. Seriously. If the purpose of your app is to present your website in a different format, well, news flash: all mobile devices have this nifty little tool called a web browser that's pretty good at presenting your website.

Of course, when you do deploy apps, or even before you do, consider F5's ADO and security products. They do a lot with mobile that is specific to the mobile world.

App development is no simple task, and good app development, like all good development, will cost you money. Make the right choices and drive the best app you can out to your customers, because they're not very forgiving of slow or buggy apps, and they're completely unforgiving about apps that mess up their mobile devices. And maybe one day soon, if we're lucky, we'll have a development toolkit that works well and delivers something like this.

Related Articles and Blogs
F5 Solutions for VMware View Mobile Secure Desktop
Drama in the Cloud: Coming to a Security Theatre Near You
Scary App Games. SSL without benefit.
Will BYOL Cripple BYOD?
Four Best Practices for Reducing Risk in the Cloud
Birds on a Wire(less)
22 Beginner Travel Tips
Dreaming of Work
20 Lines or Less #59: SSL Re-encryption, Mobile Browsing, and iFiles
Scaling Web Security Operations with DAST and One-Click Virtual Patching
BIG-IP Edge Client v1.0.4 for iOS

Performance versus Presentation
#webperf #ado You remember the service, not the plating (unless you're a foodie)

One morning, while reading the Internet (yes, the entire Internet), I happened upon a rather snarky (and yes, I liked the tone and appreciated the honesty) blog on the value (or lack thereof) of A/B testing, "Most of your AB-tests will fail". The blog is really a discussion on the rule of diminishing returns, and it notes the reality that at some point, the value of moving a button 2 pixels to the right is not worth the effort of going through the testing and analysis. When you combine the eventual statistical irrelevance of presentation with the very real impact on conversion rates due to performance (both negative and positive, depending on the direction of performance), it becomes evident that at some point it becomes more valuable to focus on performance over presentation.

If you think about it, most people remember service over plating at a restaurant. As long as the meal isn't dumped on a plate in a manner that's completely unappetizing, most people are happy as long as the service was good, i.e. it was delivered within their anticipated time frame. Even those of us who appreciate an aesthetically pleasing plate will amend a description of our dining experience with "but it took f-o-r-e-v-e-r" if the service was too slow. Service – performance – ends up qualifying even our dining experiences.

And really, how many people do you know who go around praising the color and font choices* on a website or application? How many gush over the painstakingly created icons or the layout that took months to decide upon? Now, how many do you hear complain about the performance? About how s-l-o-w the site was last night, or how lag caused their favorite character in their chosen MMORPG to die? See what I mean? Performance, not plating, is what users remember and it's what they discuss.

Certainly a well-designed and easy to use (and navigate) application is desirable. A poorly designed application can be as much a turn off as a meal dumped unceremoniously on a plate. But pretty only gets you so far, and eventually performance is going to be more of a hindrance than plating, and you need to be ready for that.

A/B testing (and other devops patterns) is a hot topic right now, especially given new tools and techniques that make it easy to conduct. But the aforementioned blog was correct in that at some point, it's just not worth the effort any more. The math says improving performance, not plating, at that point will impact conversion rates and increase revenue far more than moving a button or changing an image.

As more and more customers move to mobile means of interacting with applications and web sites, performance is going to become even more critical. Mobile devices come with a wide variety of innate issues that impede performance and that cannot be addressed directly. After all, unless it's a corporate or corporate-managed device, you don't get to mess with the device. Instead, you'll need to leverage a variety of mobile acceleration techniques including minification, content-inlining, compression, image optimization, and even SPDY support.

A/B testing is important in early stages of design, no doubt about that. Usability is not something to be overlooked. But recognize the inflection point, the point at which tweaking is no longer really returning value when compared to the investment in time.
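To show the shape of that math – and nothing more – here is a toy comparison. Every figure (traffic, conversion rate, order value, and both lift percentages) is an assumption chosen for illustration, not data from the blog or from any study.

```python
# Illustrative only: made-up assumptions to show the shape of the argument.
visits_per_month = 500_000
base_conversion = 0.020          # assume 2% of visits convert
avg_order_value = 80.00          # assume $80 average order

def monthly_revenue(conversion_rate):
    return visits_per_month * conversion_rate * avg_order_value

baseline = monthly_revenue(base_conversion)

# A late-stage A/B tweak: assume a 0.5% relative lift in conversions.
ab_tweak = monthly_revenue(base_conversion * 1.005)

# A one-second page-load improvement: assume a 7% relative lift,
# in the range of commonly cited industry figures (again, an assumption).
faster_pages = monthly_revenue(base_conversion * 1.07)

print(f"baseline       ${baseline:,.0f} / month")
print(f"button tweak   +${ab_tweak - baseline:,.0f} / month")
print(f"1s faster      +${faster_pages - baseline:,.0f} / month")
```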
Performance improvements, however, seem to contradict the law of diminishing returns based on study after study, and they always bring value to users and the bottom line alike. So don't get so wrapped up in how the application looks that you overlook how it performs.

*Except, of course, if you use Comic Sans. If you use Comic Sans you will be mocked, loudly and publicly, across the whole of the Internets no matter how fast your site is. Trust me.

You can check out your application's performance using F5's FAST.

There is more to it than performance.
Did you ever notice that sometimes, "high efficiency" furnaces aren't? That some things the furnace just doesn't cover – like the quality of your ductwork, for example? The same is true of a "high performance" race car. Yes, it is VERY fast, assuming a nice long flat surface for it to drive on. Put it on a dirt road in the rainy season, and, well, it's just a gas hog. Or worse, a stuck car. A "high energy" employee can be relatively useless if they are assigned tasks at which brainpower, not activity rate, determines success… But I'll leave it at those three, I think you get the idea.

The same is true of your network. Frankly, increasing your bandwidth in many scenarios will not yield the results you expected. Oh, it will improve traffic flow, and overall the performance of apps on the network will improve; the question is "how much?" It would be reasonable – or at least not unreasonable – to expect that doubling Internet bandwidth should stave off problems until you double bandwidth usage. But oftentimes the problems are with the overloading apps we're placing upon the network. Sometimes, it's not the network at all.

Check the ecosystem, not just the network.

When I was the Storage and Servers Editor over at NWC, I reviewed a new (at the time) HP server that was loaded. It had a ton of memory, a ton of cores, and could make practically any application scream. It even had two gigabit NICs in it. But they weren't enough. While I had almost nothing bad to say about the server itself, I did observe in the summary of my article that the network was now officially the bottleneck. Since the machine had high-speed SAS disks, disk I/O was not as big a deal as it traditionally has been, high-speed cached memory meant memory I/O wasn't a problem at all, and multiple cores meant you could cram a ton of processing power in. But team those two NICs and you'd end up with slightly less than 2 Gigabits of network throughput. Assuming 100% saturation, that was really less than 250 Megabytes per second, and that counts both in and out. For query-intensive database applications or media streaming servers, that just wasn't keeping pace with the server. Now here we are, six or so years later, and similar servers are in use all over the globe… running VMs. Meaning that several copies of the OS are now carving up that throughput.

So start with your server. Check it first; if the rest of the network is performing, it might just be the problem. And while we're talking about servers, the obvious one needs to be mentioned… Don't forget to check CPU usage. You just might need a bigger server or load balancing, or, these days, fewer virtuals on your server. Heck, as long as we're talking about servers, let's consider the app too. The last few years, for a variety of reasons, we've seen less focus on apps whose performance is sucktacular, but it still happens. Worth looking into if the server turns out to be the bottleneck.

Old gear is old.

I was working on a network that deployed an ancient Cisco switch. The entire network was 1 Gig, except for that single switch. But tracing wires showed that switch to lie between the Internet and the internal network. A simple error, easily fixed, but an easy error to have in a complex environment, and certainly one to be aware of. That switch was 10/100 only. We pulled it out of the network entirely, and performance instantly improved.
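The teamed-NIC arithmetic above is worth keeping handy whenever you suspect the server rather than the WAN. A quick sketch for turning link speeds into the megabytes per second your servers – and the VMs carving them up – actually share; the 95% efficiency figure is a rough assumption for protocol overhead.

```python
def usable_mb_per_sec(link_gbps, links=1, efficiency=0.95):
    """Rough usable throughput: gigabits -> megabytes, minus protocol overhead.
    The 95% efficiency figure is a ballpark assumption."""
    return link_gbps * links * 1000 / 8 * efficiency

two_nics = usable_mb_per_sec(1, links=2)          # teamed dual gigabit NICs
print(f"2 x 1GbE : ~{two_nics:.0f} MB/s total, counting traffic in and out")

# Split that across a handful of VMs on the same host and the per-VM share shrinks fast.
for vms in (1, 4, 8):
    print(f"{vms} VMs   : ~{two_nics / vms:.0f} MB/s each (even split)")
```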
There's necessary traffic, and then there's…

Not all of the traffic on your network needs to be. And all that does need to be doesn't have to be so bloated. Look for sources of UDP broadcasts. More often than you would think, applications send broadcasts you don't care about. Cut them off. For other traffic, well, there is Application Delivery Optimization. ADO is improving application delivery through a variety of technical solutions, but we'll focus on those that make your network and your apps seem faster. You already know about them – compression, caching, image optimization… and in the case of back-end services, de-duplication. But have you considered what they do other than improve perceived or actual performance?

Free Bandwidth

Anything that reduces the size of application data leaving your network also reduces the burden on your Internet connection. This goes without saying, but as I alluded to above, we sometimes overlook the fact that it is not just application performance we're impacting, but the effectiveness of our connections – connections that grow more expensive by leaps and bounds each time we have to upgrade them. While improving application performance is absolutely a valid reason to seek out ADO, delaying or eliminating the need to upgrade your Internet connection(s) is another. Indeed, in many organizations it is far easier to do TCO justification based upon deferred upgrades than it is based upon "our application will be faster", while both are benefits of ADO.

New stuff!

And as time wears on, SPDY, IPv6, and a host of other technologies will be more readily available to help you improve your network. Meanwhile, check out gateways for these protocols to make the transition easier.

In Summation

There are a lot of reasons for apps not to perform, and there are a lot of benefits to ADO. I've listed some of the easier problems to ferret out; the deeper into your particular network you get, the harder it is to generalize problems. But these should give you a taste for the types of things to look for. And a bit more motivation to explore ADO. Of course I hope you choose F5 gear for ADO and overall application networking, but there are other options out there. I think. Maybe.

Hype Cycles, VDI, and BYOD
Survive the hype. Get real benefits. #F5 #BYOD #VDI

An interesting thing is occurring in the spaces of Virtual Desktop Infrastructure (VDI) and Bring Your Own Device (BYOD) that is subtle, and while not generally part of the technology hype cycle, it seems to be an adjunct of it in these two very specific cases. In espionage, when launching a misinformation campaign, several different avenues are approached for maximum impact, and to increase believability. While I don't think this over-hype cycle is anything like an espionage campaign, the similarities that do exist are intriguing.

The standard hype cycle tells you how a product category will change the world; we've seen it over and over in different forms. Get X or fall behind your competitors. Product Y will solve all your business problems and cook you toast in the morning. More revolutionary than water, product Z is the next big thing. Product A will eliminate the need for (insert critical IT function here)… The hype cycle goes on, and eventually wears out as people find actual uses for the product category, and the limitations. At that point, the product type settles into the enterprise, or into obscurity.

This is all happening with both VDI and BYOD. VDI will reduce your expenses while making employees happier and licensing easier… Well, that was the hype. We've discovered as an industry that neither of these hype points was true, and realized that the anecdotal evidence was there from the early days. That means it is useful for reducing hardware expenses, but even that claim is being called into question by those who have implemented and paid for servers and some form of client. There are very valid reasons for VDI, but it is still early yet from a mass-adoption standpoint, and enterprises seem to acknowledge it needs some time. Still, the hype engine declares each new year "The Year of VDI". Server virtualization shows that there won't be a year of VDI when its time comes, but rather a string of very successful years that slowly add up.

Meanwhile, BYOD is everything for everyone. Employees get the convenience of using the device of their choice, employers get to reduce expenses on mobile devices, the sun comes out, birds sing beautiful madrigals… Except that, like every hype cycle ever, that's nowhere near the reality. Try this one on for size to see what I'm talking about:

If you're not familiar with my example corporation ZapNGo, see my Load Balancing For Developers series of blogs.

Because that is never going to happen. Which means it is not "bring your own device", it is "let us buy the device you like", which will also not last forever unless there is management convergence. In the desktop space, organizations standardized on Windows because it allowed them to care for their gear without having to have intimate knowledge of three (or ten) different operating systems. There is no evidence that corporations are going to willy-nilly support different device OS's and the associated different apps that go along with using multiple device OS's. In many industries, the need to control the device at a greater level than most users want their personal devices messed with (health care and financial services spring to mind) will drive a dedicated device for work, like it or not.

So why is there so much howling about "the democratization of IT"? Simple: the people howling for BYOD belong to two groups.
C-Level execs that need access 24x7 if an emergency occurs, and people like me – techno geeks that want to pop onto work systems from their home tablet. The other 90% of the working population? They do not want work stuff on their personal tablet or phone. They just don't. They'll take a free tablet from their employer and even occasionally use it for work, but even that is asking much of a large percentage of workers.

Am I saying BYOD is bogus? No. But bear with me while I circle back to VDI for a moment… In the VDI space, there was originally a subtle pressure to do massive implementations. We've had decades of experience as an industry at this point, and we know better than to do massive rip-n-replace projects unless we have to. They just never go as planned; they're too big. So most organizations have the standard pilot / limited implementation / growing implementation cycle. And that makes sense, even if it doesn't make VDI vendors too happy. In the interim, if you discover something that makes a given VDI implementation unsuitable for your needs, you still have time to replace it with a parallel project.

The most cutting-edge (to put it nicely) claims are that your employees – all of them! – want access to their desktops on their BYOD device. Let me see, how do you spell that… S L O W. That's it. Most of your employees don't want work desktops at work, let alone on their tablet at the beach. It's just us geeks and the overachievers that even think it's a good idea, and performance will quickly disillusion even the most starry-eyed of those of us who do.

So let's talk realities. What will we end up with after the hype is done, and how can you best proceed? Well, here are some steps you can take. Some of these are repeated from my VDI blog post of a few months ago, but repetition is not a terrible thing, as long as it's solid advice.

Pick targeted deployments of both. Determine who most wants/needs BYOD and see if those groups can agree on a platform. Determine what segments of your employee population are best suited to VDI. Low CPU usage or high volume of software installs are good places to start.

Make certain your infrastructure can take the load and new types of traffic. If not, check the options F5 has to accelerate and optimize your network for both products.

Move slowly. The largest VDI vendors are certainly not going anywhere, and at a minimum, neither are Android and iOS. So take your time and do it right.

SELL your chosen solutions. IT doesn't generally do very well on the selling front, but if you replace a local OS with a remote one, every performance issue will be your fault unless you point out that no matter where they are geographically, their desktop stays the same, that it is easier to replace client hardware when you don't have to redo the entire desktop, etc.

Gauge progress. Many VDI deployments fell on hard times because they were too ambitious, or because they weren't ambitious enough and didn't generate benefits, or because they weren't properly sold to constituents. BYOD will face similar issues. Get a solid project manager and have them track progress not in terms of devices or virtual desktops, but in terms of user satisfaction and issues that could blow up as the project grows.

That's it for now. As always, remember that you're doing what's best for your organization; do not concern yourself with hype, market, or vendor health – those will take care of themselves.
Related Articles and Blogs:
Four Best Practices for Reducing Risk in the Cloud
Dreaming of Work
Speed Matters, but Dev Speed or App Speed?
Infographic: Protect Yourself Against Cybercrime
Multiscreen Multitasking
Upcoming Event: F5 Agility Summit 2012
Enterprise Still Likes Apple over Android