Form (Factor) versus Function
#webperf Purpose-built hardware is an integral component for some functions but not others. Understanding the difference will help you architect networks that deliver highly scalable, top-performing applications.

In the world of web performance there are two distinct functions that are constantly conflated: acceleration and optimization. The confusion is detrimental because it impedes the ability to architect network services in a way that maximizes each, with the goal of delivering highly performant applications.

In the increasingly bifurcated networks of today, acceleration is tightly coupled to hardware-enhanced network components that improve application performance by speeding up specific compute-intensive functions. Services like SSL termination, compression, hardware-assisted load balancing and DDoS protection offer applications a dramatic boost in performance by performing certain tasks in hardware*. Obviously, if you take a network component that was designed to exploit purpose-built hardware acceleration and deploy it on a software or virtual form factor, you lose the benefits of that hardware acceleration. Form factor is important to acceleration.

On the other side of the fence is optimization. Optimization seeks to eke out every bit or CPU cycle possible from network and compute resources in order to reduce network overhead (most of these techniques are TCP-related) or reduce the size of data being transferred (WAN optimization and application-specific services like minification and image optimization). Optimizations are functions and are not reliant on specialized hardware**.

This separation between acceleration and optimization, between form factor and function, is imperative to recognize because it enables more efficient and better-performing data center architectures to be designed. It also offers guidance on which functions may reasonably be moved to software and/or virtual form factors without negatively impacting performance. SSL termination in software of any kind, for example, is unable to equal the performance of hardware-assisted implementations. Every gain made by general-purpose hardware is also experienced by its purpose-built counterparts, and the latter have the advantage of additional special-purpose processing. Thus, when deploying support for SSL, it makes sense to implement it in the core network. Functions such as minification and other application content-specific processing can be deployed in the application network, close to the applications they support. Such functions are affected by compute configuration, of course, but primarily benefit from highly tuned algorithms that do not require purpose-built hardware and can be deployed on hardware, software or virtual form factors. While such functions benefit from highly tuned networking stacks on purpose-built hardware, they still provide considerable benefits when deployed on alternative form factors.

Understanding the difference between acceleration and optimization and their relationship to form factor is a critical step in designing highly available, fast and secure next-generation data center networks as the network continues to split into two equally important but different networks.

* You can claim that hardware-assisted network functions are going away or are no longer necessary because of advances being made by chip makers like Intel and AMD, but read carefully what they're actually doing. They're moving compute-intensive processing into hardware to accelerate it.
In other words, they're doing what much of the networking industry has done for decades: leveraging hardware acceleration to improve performance.

** WAN optimization relies heavily on caching as part of its overall strategy, and thus the amount of memory available is an important factor in its effectiveness, but that memory is rarely specialized.
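To put the "function" side of that distinction in concrete terms, here is a minimal sketch in Python of two optimizations the post treats as pure functions: collapsing extraneous whitespace (a crude stand-in for minification) and gzip compression. Both run on any general-purpose CPU; the sample payload and whatever savings it shows are illustrative only. The point is simply that the gains come from the algorithm, not from the form factor it runs on.

```python
import gzip
import re

def naive_minify(text: str) -> str:
    """Collapse runs of whitespace -- a crude stand-in for real minification."""
    return re.sub(r"\s+", " ", text).strip()

def optimize(payload: str) -> bytes:
    """Apply two software-only optimizations: minify, then gzip-compress."""
    return gzip.compress(naive_minify(payload).encode("utf-8"))

if __name__ == "__main__":
    page = "<html>\n  <body>\n" + "    <p>hello   world</p>\n" * 200 + "  </body>\n</html>"
    smaller = optimize(page)
    print(f"original: {len(page)} bytes, optimized: {len(smaller)} bytes")
```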
1024 Words: I Didn't Say It Was Your Fault, I Said I Was Going to Blame You

#webperf We often lay the blame for application performance woes on the nebulous (and apparently sentient-with-malevolent-tendencies) "network". But the truth is that the causes of application performance woes are more often than not related to the "first*" and "last" mile of connectivity. That's why optimization is as important as acceleration, and often more so. And yes, there is a difference.

* That whole "first mile" thing isn't the network as we generally see it, per se, but the network internal to the server. It's complicated, engineering things. Trust me, there's a bus over which data has to travel that slows things down. Besides, the analogy doesn't work well if there isn't a "first" mile to match the "last" mile, so just run with it, okay?
1024 Words: SEND Buffers and Concurrent Connections

#devops #webperf #ado The relationship between SEND buffers and concurrent connections.

The send buffer setting is the maximum amount of data a system will send before receiving an acknowledgement, and the receive window setting is the maximum size window the system will advertise. Many folks tweak the send buffer settings on web and application servers as well as intermediate devices (proxies, load balancers, etc.) to optimize for high-speed, high-latency pipes. However, as this is often a system-wide configuration change, mucking with the SEND buffer sizes can directly impact the connection capacity of the system. Remember, "buffering" takes up memory, and so do connections (session and state table storage), so memory devoted to bigger buffers is memory that isn't available to track more concurrent connections.
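As a back-of-the-envelope illustration of that relationship (the buffer sizes and connection count below are assumptions, not recommendations), per-connection send buffers multiply directly into memory consumed at scale. The sketch also shows the standard per-socket knob, SO_SNDBUF, as an alternative to a system-wide change:

```python
import socket

def send_buffer_memory_mb(buffer_bytes: int, connections: int) -> float:
    """Rough memory footprint of send buffering alone, in megabytes."""
    return buffer_bytes * connections / (1024 * 1024)

# Hypothetical numbers: 64 KB vs. 1 MB send buffers across 10,000 concurrent connections.
for size in (64 * 1024, 1024 * 1024):
    print(f"{size // 1024} KB buffer x 10,000 connections ~= "
          f"{send_buffer_memory_mb(size, 10_000):,.0f} MB just for send buffers")

# Per-socket tuning instead of a system-wide change: set SO_SNDBUF on one socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 256 * 1024)
print("effective SO_SNDBUF:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()
```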
Performance versus Presentation

#webperf #ado You remember the service, not the plating (unless you're a foodie).

One morning, while reading the Internet (yes, the entire Internet), I happened upon a rather snarky (and yes, I liked the tone and appreciated the honesty) blog on the value (or lack thereof) of A/B testing, "Most of your AB-tests will fail". The blog is really a discussion of the rule of diminishing returns, and it notes the reality that at some point the value of moving a button 2 pixels to the right is not worth the effort of going through the testing and analysis. When you combine the eventual statistical irrelevance of presentation with the very real impact of performance on conversion rates (both negative and positive, depending on the direction of performance), it becomes evident that at some point it is more valuable to focus on performance over presentation.

If you think about it, most people remember service over plating at a restaurant. As long as the meal isn't dumped on a plate in a manner that's completely unappetizing, most people are happy as long as the service was good, i.e. it was delivered within their anticipated time frame. Even those of us who appreciate an aesthetically pleasing plate will amend a description of our dining experience with "but it took f-o-r-e-v-e-r" if the service was too slow. Service - performance - ends up qualifying even our dining experiences.

And really, how many people do you know who go around praising the color and font choices* on a website or application? How many gush over the painstakingly created icons or the layout that took months to decide upon? Now, how many do you hear complain about the performance? About how s-l-o-w the site was last night, or how lag caused their favorite character in their chosen MMORPG to die? See what I mean? Performance, not plating, is what users remember and it's what they discuss.

Certainly a well-designed and easy to use (and navigate) application is desirable. A poorly designed application can be as much of a turn-off as a meal dumped unceremoniously on a plate. But pretty only gets you so far; eventually poor performance is going to be more of a hindrance than poor plating, and you need to be ready for that. A/B testing (and other devops patterns) is a hot topic right now, especially given new tools and techniques that make it easy to conduct. But the aforementioned blog was correct in that at some point it's just not worth the effort any more. The math says that improving performance, not plating, at that point will impact conversion rates and increase revenue far more than moving a button or changing an image.

As more and more customers move to mobile means of interacting with applications and web sites, performance is going to become even more critical. Mobile devices come with a wide variety of innate issues that impede performance and that cannot be addressed directly. After all, unless it's a corporate or corporate-managed device, you don't get to mess with the device. Instead, you'll need to leverage a variety of mobile acceleration techniques including minification, content inlining, compression, image optimization, and even SPDY support.

A/B testing is important in the early stages of design, no doubt about that. Usability is not something to be overlooked. But recognize the inflection point, the point at which tweaking is no longer really returning value when compared to the investment in time.
Performance improvements, however, seem to contradict the law of diminishing returns, based on study after study, and they always bring value to users and the bottom line alike. So don't get so wrapped up in how the application looks that you overlook how it performs.

* Except, of course, if you use Comic Sans. If you use Comic Sans you will be mocked, loudly and publicly, across the whole of the Internets no matter how fast your site is. Trust me.

You can check out your application's performance using F5's FAST.
Programmable Cache-Control: One Size Does Not Fit All

#webperf For addressing challenges related to the performance of #mobile devices and networks, caching is making a comeback.

It's interesting - and almost amusing - to watch the circle of technology run around best practices with respect to performance over time. Back in the day, caching was the ultimate means by which web application performance was improved, and there was no lack of solutions and techniques that manipulated caching capabilities to achieve optimal performance. Then it was suddenly in vogue to address the performance issues associated with Javascript on the client. As Web 2.0 ascended and AJAX-based architectures ruled the day, Javascript was Enemy #1 of performance (and security, for that matter). Solutions and best practices arose to address when Javascript loaded, from where, and whether or not it was even active. And now, once again, we're back at the beginning with caching. In the interim years, it turns out, developers have not become better about how they mark content for caching, and with the proliferation of access from mobile devices over sometimes constrained networks, it has once again come to the attention of operations (who are, for some reason, ultimately responsible for the performance of web applications) that caching can dramatically improve the performance of web applications.

[ Excuse me while I take a breather - that was one long thought to type. ]

Steve Souders, web performance engineer extraordinaire, gave a great presentation at HTML5DevCon that was picked up by High Scalability: Cache is King!. The aforementioned article notes:

Use HTTP cache control mechanisms: max-age, etag, last-modified, if-modified-since, if-none-match, no-cache, must-revalidate, no-store. Want to prevent HTTP sending conditional GET requests, especially over high latency mobile networks. Use a long max-age and change resource names any time the content changes so that it won't be cached improperly.
-- Better Browser Caching Is More Important Than No Javascript Or Fast Networks For HTTP Performance

The problem is, of course, that developers aren't putting all these nifty-neato-keen tags and meta-data in their content, and the cost to modify existing applications to do so may result in a prioritization somewhere right below having an optional, unnecessary root canal. In other cases, the way web applications are built today - we're still using AJAX-based, real-time updates of chunks of content rather than whole pages - means simply adding tags and meta-data to the HTML isn't necessarily going to help, because those directives refer to the page and not to the data/content being retrieved and updated for that "I'm a live, real-time application" feel that everyone has to have today.

Caching tags and meta-data in HTML don't address every type of data, either. JSON, for example, commonly returned as the response to an API call (API responses are used as the building blocks of web applications more and more frequently these days), isn't going to be impacted by HTML caching directives. That has to be addressed in a different way, either on the server side (think Apache mod_expires) or on the client (HTML5 contains new capabilities specifically for this purpose, and there are usually cache directives hidden in AJAX frameworks like jQuery).
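Because JSON API responses have to be handled on the server side (or at an intermediary), here is a minimal sketch of that idea as a WSGI middleware that stamps a Cache-Control header onto JSON responses. The max-age value and the content-type check are illustrative assumptions, not a recommended policy:

```python
class JsonCacheControl:
    """WSGI middleware: add a Cache-Control header to JSON responses."""

    def __init__(self, app, max_age=300):
        self.app = app
        self.max_age = max_age  # assumed value; tune per API

    def __call__(self, environ, start_response):
        def patched_start(status, headers, exc_info=None):
            content_type = next((v for k, v in headers if k.lower() == "content-type"), "")
            if "application/json" in content_type:
                # Drop any existing Cache-Control header, then add ours.
                headers = [(k, v) for k, v in headers if k.lower() != "cache-control"]
                headers.append(("Cache-Control", f"max-age={self.max_age}"))
            return start_response(status, headers, exc_info)

        return self.app(environ, patched_start)

# Usage sketch (hypothetical app object): app = JsonCacheControl(app, max_age=60)
```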
The Programmable Network to the Rescue

What you need is the ability to insert the appropriate tags, on the appropriate content, in such a way as to make sure whatever you're about to do (a) doesn't break the application and (b) is actually going to improve the performance of the end-user experience for that specific request. Note that (b) is pretty important, actually, because there are things you do to content being delivered to end users on mobile devices over mobile networks that might make things worse if you do them to the same content being delivered to the same end user on the same device over the wireless LAN. Network capabilities matter, so it's important to remember that.

To avoid rewriting applications (and perhaps changing the entire server-side architecture by adding on modules), you could just take advantage of programmability in the network. When enabled as part of a full-proxy network intermediary, the ability to programmatically modify content in flight becomes invaluable as a mechanism for improving performance, particularly with respect to adding (or modifying) cache headers, tags, and meta-data. By allowing the intermediary to cache the cacheable content while simultaneously inserting the appropriate cache control headers to manage the client-side cache, performance is improved. By leveraging programmability, you can start to apply device or network or application logic (or any combination thereof) to manipulate the cache as necessary, while also using additional performance-enhancing techniques like compression (when appropriate) or image optimization (for mobile devices).

The thing is that a generic "all on" or "all off" setting for caching isn't always going to result in the best performance. There's logic to it that says you need the capability to say "if X and Y then ON, else if Z then OFF". That's the power of a programmable network: the ability to write the kind of logic that takes into consideration the context of a request and takes the appropriate actions in real time. Because one size (setting) simply does not fit all.
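A hedged sketch of that "if X and Y then ON, else if Z then OFF" logic as it might be expressed at a programmable intermediary. The device and network categories and the max-age values are illustrative assumptions; a real deployment would key off much richer context:

```python
from typing import Optional

def cache_policy(device: str, network: str, content_type: str) -> Optional[str]:
    """Return a Cache-Control value to insert, or None to leave the response alone."""
    static = content_type.startswith(("image/", "text/css")) or "javascript" in content_type

    if device == "mobile" and network == "cellular" and static:
        return "public, max-age=86400"   # constrained network: cache aggressively
    if device == "mobile" and network == "wifi" and static:
        return "public, max-age=3600"
    if content_type == "application/json":
        return "private, max-age=60"     # API responses: short-lived client cache only
    return None

# cache_policy("mobile", "cellular", "image/png") -> "public, max-age=86400"
```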
Service Chaining and Unintended Consequences

#webperf #infosec #ado #SDN #cloud Service chaining is a popular term today to describe a process in the network that's been done in the land of application integration for a long time.

Service chaining, in a nutshell, is basically orchestration of network services. The concept is being put forth as the way future data center networks will be designed and executed. Its unintended consequence is, of course, that chaining can have a profound impact on performance, particularly when (or if) those chains extend across providers. Let's consider an existing example of service chaining that's challenging for SSL in terms of performance.

The Rest of the "SSL Performance" Story

Now, we're all aware that SSL handshaking introduces latency. It has to, because in addition to the already time-consuming process of performing cryptographic functions, it requires additional round trips between the client (browser) and server (or intermediate network proxy acting as the endpoint, such as a load balancer or ADC) to exchange the information needed to encrypt and decrypt subsequent communication.

But that's not all it needs to do. The certificate offered up by the server-side device is increasingly suspect thanks to a variety of incidents in which essentially forged certificates were used to impersonate a site and trick the user into believing the site was safe. As the SSL Everywhere movement continues to grow, so has the decision by browsers to properly validate certificates by querying an OCSP (Online Certificate Status Protocol) responder as to the status of the certificate (this is increasingly favored over the use of CRLs (Certificate Revocation Lists) to address certain shortcomings of that technology).

What this means is that during the SSL handshake, the client makes a request to an OCSP responder. It's an additional service in the connection chain that adds time to the "load" process. Thus, it needs to be as fast as possible, because it's counted in the "load time" for a page - if not technically, then from the perspective of the user, which, as we all know, is what really counts.

So the browser makes a request to the responder. It does this by choosing a responder from a list of those that support the CA (Certificate Authority, the issuer of the certificate in question). While there are a large number of CAs, the actual number of global CAs for SSL is fairly small. Thus the responder is almost certainly very large and likely to see billions of requests a day, from around the globe. This "link in the chain" is increasingly important to the overall performance experienced by the end user. Its impact on mobile users, in particular, is worthy of note given the impact of mobile networks and constrained device capabilities, as noted by Mike Belshe, one of the folks who helped create the SPDY protocol (emphasis mine):

But this process is pretty costly, especially on mobile networks. For my own service, I just did a quick trace over 3G:
DNS (1334ms)
TCP handshake (240ms)
SSL handshake (376ms)
Follow certificate chain (1011ms) — server should have bundled this.
DNS to CA (300ms)
TCP to CA (407ms)
OCSP to CA #1 (598ms) — StartSSL CA uses connection close on each!
TCP to CA #2 (317ms)
OCSP to CA #2 (444ms)
Finish SSL handshake (1270ms)
-- Rethinking SSL for Mobile Apps

The emphasized portions of the transaction indicate those related to the certificate verification process being carried out by the browser as a security precaution.
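To put that trace in perspective, a bit of quick arithmetic, treating the certificate-chain fetch and the CA/OCSP steps as the verification-related portion (that grouping is my reading of the trace, not Belshe's), shows how much of the total the chained verification service accounts for:

```python
# Timings (ms) from the 3G trace quoted above.
trace = {
    "DNS": 1334, "TCP handshake": 240, "SSL handshake": 376,
    "Follow certificate chain": 1011, "DNS to CA": 300, "TCP to CA": 407,
    "OCSP to CA #1": 598, "TCP to CA #2": 317, "OCSP to CA #2": 444,
    "Finish SSL handshake": 1270,
}
verification_steps = {"Follow certificate chain", "DNS to CA", "TCP to CA",
                      "OCSP to CA #1", "TCP to CA #2", "OCSP to CA #2"}

total = sum(trace.values())
verification = sum(ms for step, ms in trace.items() if step in verification_steps)
print(f"total: {total} ms, verification-related: {verification} ms "
      f"({verification / total:.0%} of the handshake)")
# Roughly 6.3 seconds in total, with ~3.1 seconds (about half) spent on verification.
```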
Over a non-mobile network, one would expect the performance to improve, but the impact on "regular" browsers should not be underestimated, either. Early last year Adam Langley noted this and proposed disabling OCSP validation in Chrome:

The median time for a successful OCSP check is ~300ms and the mean is nearly a second. This delays page loading and discourages sites from using HTTPS. They are also a privacy concern because the CA learns the IP address of users and which sites they're visiting. On this basis, we're currently planning on disabling online revocation checks in a future version of Chrome.
http://www.imperialviolet.org/2012/02/05/crlsets.html

I'll save the security-related arguments for another time, but suffice it to say that the impact of service chaining on performance, in the case of SSL and certificate validation, is significant enough at times to be noticed.

Key Takeaway

Certainly service chaining in other contexts, say in the data center network, would not experience the same magnitude of delay, based purely on the fact that we're talking about LAN speeds rather than what often end up being inter- or cross-continental communications. Still, the very real impact of service chaining, particularly when such chains are comprised of a long string of services, should not be ignored or underestimated. Such chains introduce additional latency, often in the form of unnecessary, duplicated functions, as well as the possibility of failure. Load and utilization monitoring and scaling strategies for individual (dependent) services are vital to the overall success of any architecture that employs an orchestrated (chained) services strategy.

And while technologies like SDN and cloud offer corrective action in the face of failure, it should be noted that such corrections tend to be reactions to failure. That means at least one user experiences a failure before a correction is made. In some cases that failure will go unnoticed except for a lengthier response time, but the key takeaway there is that it is noticeable. And when it comes to web application performance, noticeable degradations are not something the business, or operations for that matter, likes to see. Not even for a single user.
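The "additional latency and possibility of failure" point generalizes with simple arithmetic: in a serial chain, latency is additive and availability is multiplicative, so every added link compounds both. The per-service numbers below are purely illustrative assumptions:

```python
def chain_latency_ms(per_service_ms: float, services: int) -> float:
    """Latencies of serially chained services add up."""
    return per_service_ms * services

def chain_availability(per_service_availability: float, services: int) -> float:
    """Availability of a serial chain is the product of its links."""
    return per_service_availability ** services

for n in (2, 5, 10):
    print(f"{n} services: +{chain_latency_ms(5.0, n):.0f} ms, "
          f"availability {chain_availability(0.999, n):.4f}")
# 0.999 looks great for one service; ten chained services drop to roughly 0.990.
```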
Filling the SLA Gap in the Cloud

#webperf #ado Meeting user expectations of fast and available applications becomes more difficult as you relinquish more and more control.

User expectations with respect to performance are always a concern for IT. Whether it's monitoring performance or responding to a fire drill because an application is "slow", IT is ultimately responsible for maintaining the consistent levels of performance expected by end users, whether internal or external. Virtualization and cloud computing introduce a variety of challenges for operations whose primary focus is performance. From lack of visibility to lack of control, dealing with performance issues is getting more and more difficult.

The situation is one of which IT is acutely aware. A ServicePilot Technologies survey (2011) identifies virtualization, the pace of emerging technology, lack of visibility and inconsistent service models as challenges to discovering the root cause of application performance issues. Visibility, unsurprisingly, was cited as the biggest challenge, with 74% of respondents checking it off. These challenges are not unrelated. Virtualization's tendency toward east-west traffic patterns can inhibit visibility, with few solutions available to monitor traffic between virtual machines deployed on the same physical machine. Cloud computing – highly virtual in both form factor and model – contributes to the lack of visibility as well as to the challenges associated with disconnected service models, as enterprise and cloud computing providers rarely leverage the same monitoring systems. Most disturbing, all these challenges contribute to an expanding gap between performance expectations (SLA) and the ability of IT to address application performance issues, especially in the cloud.

YES, YET ANOTHER GAP

There are many "gaps" associated with virtualization and cloud computing: the gap between dev and ops, the gap between ops and the network, the gap between the scalability of operations and the volatility of the network. The gap between application performance expectations and the ability to affect it is just another example of how technology designed to solve one problem can often illuminate or even create another. Unfortunately for operations, application performance is critical. Degrading performance impacts reputation, productivity, and ultimately the bottom line. It increases IT costs as end users phone the help desk, and it redirects resources from other, just as important, tasks toward solving the problem, ultimately delaying other projects. This gap is not one that can be ignored or put off or dismissed with a "we'll get to that". Application performance always has been – and will continue to be – a primary focus for IT operations.

An even bigger challenge than knowing there's a performance problem is what to do about it – particularly in a cloud computing environment where tweaking QoS policies just isn't an option. What IT needs – both in the data center and in the cloud – is a single, strategic point of control at which to apply services designed to improve performance at three critical points in the delivery chain: the front, middle, and back end.

FILLING THE GAP IN THE CLOUD

Such a combined performance solution is known as ADO – Application Delivery Optimization – and it uses a variety of acceleration and optimization techniques to fill the gap between SLA expectations and the lack of control in cloud computing environments.
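The next paragraphs note that applying services intelligently means, among other things, not compressing already-compressed content such as JPEG images. A minimal sketch of just that one check follows; the content-type list and encodings are illustrative assumptions, and a real ADO policy would weigh far more context than this:

```python
ALREADY_COMPRESSED = ("image/jpeg", "image/png", "image/gif",
                      "video/", "audio/", "application/zip", "application/gzip")

def should_compress(content_type: str, content_encoding: str = "") -> bool:
    """Skip compression when the payload is already compressed."""
    if content_encoding in ("gzip", "br", "deflate"):
        return False
    return not content_type.startswith(ALREADY_COMPRESSED)

# should_compress("text/html") -> True; should_compress("image/jpeg") -> False
```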
A single, strategic implementation and enforcement point for such policies is necessary in cloud computing (and highly volatile virtualized) environments because of the topological challenges created by the core model. Not only is the reality of application instances (virtual machines) popping up and moving around problematic, but the same occurs with virtualized network appliances and services designed to address specific pain points involving performance. The challenge of dealing with a topologically mobile architecture – particularly in public cloud computing environments – is likely to prove more trouble than it's worth. A single, unified ADO solution, however, provides a single control plane through which optimizations and enhancements can be applied across all three critical points in the delivery chain – without the topological obstacles.

By leveraging a single, strategic point of control, operations is able to harness the power of dynamism and context to ensure that the appropriate performance-related services are applied intelligently. That means not applying compression to already compressed content (such as JPEG images) and recognizing the unique quirks of browsers when used on different devices. ADO further enhances load balancing services by providing performance-aware algorithms and network-related optimizations that can dramatically impact the load on, and thus the performance of, applications.

What's needed to fill the gap between user expectations and actual performance in the cloud is the ability of operations to apply appropriate services with alacrity. Operations needs a simple yet powerful means by which performance-related concerns can be addressed in an environment where visibility into the root cause is likely extremely limited. A single service solution that can simultaneously address all three delivery chain pain points is the best way to accomplish that and fill the gap between expectations and reality.

The Need for (HTML5) Speed
Mobile versus Mobile: An Identity Crisis
SPDY versus HTML5 WebSockets
WILS: WPO versus FEO
WILS: Application Acceleration versus Optimization
F5 Friday: F5 Application Delivery Optimization (ADO)
The Four V’s of Big Data
The “All of the Above” Approach to Improving Application Performance
Y U No Support SPDY Yet?
The Need for (HTML5) Speed

#mobile #HTML5 #webperf #fasterapp #ado The importance of understanding acceleration techniques in the face of increasing mobile and HTML5 adoption.

An old English proverb observes that "Even a broken clock is right twice a day." A more modern idiom involves a blind squirrel and an acorn, and I'm certain there are many other culturally specific nuggets of wisdom that succinctly describe what is essentially blind luck. The proverb and modern idioms fit well the case of modern acceleration techniques as applied to content delivered to mobile devices. A given configuration of options and solutions may inadvertently be "right" twice a day purely by happenstance, but the rest of the time it may not be doing all that much good. With HTML5 adoption increasing rapidly across the globe, the poor performance of parsing on mobile devices will require more targeted and intense use of acceleration and optimization solutions.

THE MOBILE LAST MILES

One of the reasons content delivery to mobile devices is so challenging is the number of networks and systems through which the content must flow. Unlike WiFi-connected devices, which traverse controllable networks as well as the Internet, content delivered to mobile devices connected via carrier networks must also traverse the mobile (carrier) network. Add to that challenge the constrained processing power of mobile devices imposed by carriers and manufacturers alike, and delivering content to these devices in an acceptable timeframe becomes quite challenging. Organizations must contend not only with network conditions across three different networks but also with the capabilities and innate limitations of the devices themselves. Such limitations include processing capabilities, connection models, and differences in web application support.

Persistence and in-memory caching are far more limited on mobile devices, making reliance on traditional caching strategies as a key component of acceleration techniques less than optimal. Compression and de-duplication of data, even over controlled WAN links when mobile devices are in WiFi mode, may not be as helpful as they are for desktop and laptop counterparts, given mobile hardware limitations. Differences in connection models – on mobile devices connections are sporadic, shorter-lived, and ad hoc – render traditional TCP-related enhancements ineffective. TCP slow-start mechanisms, for example, are particularly frustrating under the hood for mobile device connections because connections are constantly being dropped and restarted, forcing TCP to begin again very slowly. TCP, in a nutshell, was designed for fixed networks, not mobile networks.

A good read on this topic is Ben Strong's "Google and Microsoft Cheat on Slow-Start. Should You?" His testing (in 2010) showed both organizations push the limits for the IW (initial window) higher than the RFC allows, with Microsoft nearly ignoring the limitations altogether. Proposals to increase the IW in the RFC to 10 have been submitted, but thus far there does not appear to be consensus on whether or not to allow this change. Also not discussed is the impact of changing the IW on fixed (desktop, laptop, LAN) connected devices; the assumption is that the IW is specified as it is because it was optimal for fixed end-points, and that changing it would be detrimental to performance for those devices. The impact of TCP on mobile performance (and vice versa) should not be underestimated.
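A simplified model makes the initial-window point concrete. Assuming no loss, no delayed ACKs, a congestion window that doubles every round trip, and a 1460-byte MSS (all simplifying assumptions), a larger IW shaves whole round trips off exactly the kind of short transfers that short-lived mobile connections are made of:

```python
MSS = 1460  # assumed maximum segment size, in bytes

def rtts_to_deliver(payload_bytes: int, initial_window_segments: int) -> int:
    """Round trips needed under idealized slow start (cwnd doubles each RTT)."""
    segments_needed = -(-payload_bytes // MSS)  # ceiling division
    cwnd, sent, rtts = initial_window_segments, 0, 0
    while sent < segments_needed:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

for iw in (3, 10):  # roughly the classic initial window at this MSS vs. the proposed IW of 10
    print(f"IW={iw}: a 14 KB response needs {rtts_to_deliver(14 * 1024, iw)} round trips")
# With these assumptions, the 14 KB response takes 3 round trips at IW=3 and 1 at IW=10.
```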
CloudFlare has a great blog post on the impact of mobility on TCP-related performance, concluding that:

TCP would actually work just fine on a phone except for one small detail: phones don't stay in one location. Because they move around (while using the Internet) the parameters of the network (such as the latency) between the phone and the web server are changing and TCP wasn't designed to detect the sort of change that's happening.
-- CloudFlare blog: Why mobile performance is difficult

One answer is more intelligent intermediate acceleration components, capable not only of detecting the type of end-point initiating the connection (mobile or fixed) but of actually doing something about it, i.e. manipulating the IW and other TCP-related parameters on the fly. Dynamically and intelligently.

Of course, innate parsing and execution performance on mobile devices contributes significantly to the perception of performance on the part of the end user. While HTML5 may be heralded as a solution to cross-platform, cross-environment compatibility issues, it brings to the table performance challenges that will need to be overcome.

http://thenextweb.com/dd/2012/05/22/html5-runs-up-to-thousands-of-times-slower-on-mobile-devices-report/

In the latest research by Spaceport.io on the performance of HTML5 on desktop vs smartphones, it appears that there are performance issues for apps and in particular games for mobile devices. Spaceport.io used its own Perfmarks II benchmarking suite to test HTML rendering techniques across desktop and mobile browsers. Its latest report says: We found that when comparing top of the line, modern smartphones with modern laptop computers, mobile browsers were, on average, 889 times slower across the various rendering techniques tested. At best the iOS phone was roughly 6 times slower, and the best Android phone 10 times slower. At worst, these devices were thousands of times slower.

Combining the performance impact of parsing HTML5 on mobile devices with mobility-related TCP impacts paints a dim view of performance for mobile clients in the future, especially as improving the parsing speed of HTML5 is (mostly) out of the hands of operators and developers alike. Very little can be done to impact the parsing speed aside from transformative acceleration techniques, many of which are often not used for fixed client end-points today. That puts the onus back on operators to use the tools at their disposal (acceleration and optimization) to improve delivery as a means to offset, and hopefully improve, the overall performance of HTML5-based applications to mobile (and fixed) end-points.

DON'T RELY ON BLIND LUCK

Organizations seeking to optimize delivery to mobile and traditional end-points need more dynamic and agile infrastructure solutions capable of recognizing the context in which requests are made and adjusting delivery policies – from TCP to optimization and acceleration – on demand, as necessary to ensure the best delivery performance possible. Such infrastructure must be able to discern whether the improvements from minification and image optimization will be offset by TCP optimizations designed for fixed end-points interacting with mobile end-points – and do something about it. It's not enough to configure a delivery chain comprised of acceleration and optimization designed for delivering content to traditional end-points, because the very same services that enhance performance for fixed end-points may be degrading performance for mobile end-points.
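A hedged sketch of what "recognizing the context in which requests are made and adjusting delivery policies" might look like as policy selection. The user-agent heuristic and every parameter value here are assumptions for illustration; actual TCP tuning happens in the stack or intermediary profile, not in application code like this:

```python
MOBILE_HINTS = ("iphone", "ipad", "android", "mobile")  # crude, assumed heuristic

def delivery_profile(user_agent: str) -> dict:
    """Pick TCP- and optimization-related settings per client class (illustrative values)."""
    is_mobile = any(hint in user_agent.lower() for hint in MOBILE_HINTS)
    if is_mobile:
        return {"initial_window_segments": 10, "idle_timeout_s": 30,
                "image_optimization": True, "minify": True}
    return {"initial_window_segments": 3, "idle_timeout_s": 300,
            "image_optimization": False, "minify": True}

# delivery_profile("Mozilla/5.0 (iPhone; ...)") selects the mobile profile.
```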
It may be that twice a day, like a broken clock, the network and end-point parameters align in such a way that the same services enhance performance for both fixed and mobile end-points. But relying on such a convergence of conditions as a performance management strategy is akin to relying on blind luck.

Addressing mobile performance requires a more thorough understanding of acceleration techniques, particularly from the perspective of which constraints they best address and under what conditions. Trying to leverage the browser cache, for example, is a great way to improve fixed end-point performance, but it may backfire on mobile devices because of their limited caching capabilities. On the other hand, HTML5 introduces client-side cache APIs that may be useful, but they are so different from previous HTML caching directives that supporting both will require planning and a flexible infrastructure for execution. In many ways this API will provide opportunities to better leverage client-side caching capabilities, but it will require infrastructure support to ensure targeted caching policies can be implemented.

As HTML5 continues to become more widely deployed, it's important to understand the various acceleration and optimization techniques, what each is designed to overcome, and which networks and platforms they are best suited to serve, in order to overcome the inherent limitations of HTML5 and the challenge of mobile delivery.

Google and Microsoft Cheat on Slow-Start. Should You?
HTML5 runs up to thousands of times slower on mobile devices: Report
Application Security is a Stack
Y U No Support SPDY Yet?
The “All of the Above” Approach to Improving Application Performance
What Does Mobile Mean, Anyway?
The HTTP 2.0 War has Just Begun
There is more to it than performance.

Did you ever notice that sometimes "high efficiency" furnaces aren't? That some things the furnace just doesn't cover – like the quality of your ductwork, for example? The same is true of a "high performance" race car. Yes, it is VERY fast, assuming a nice long flat surface for it to drive on. Put it on a dirt road in the rainy season and, well, it's just a gas hog. Or worse, a stuck car. I could continue the list. A "high energy" employee can be relatively useless if they are assigned tasks at which brainpower, not activity rate, determines success… But I'll leave it at those three; I think you get the idea.

The same is true of your network. Frankly, increasing your bandwidth in many scenarios will not yield the results you expected. Oh, it will improve traffic flow, and overall the performance of apps on the network will improve; the question is "how much?" It would be reasonable – or at least not unreasonable – to expect that doubling Internet bandwidth should stave off problems until you double bandwidth usage. But oftentimes the problems lie with the apps we're placing upon the network and the load they generate. Sometimes, it's not the network at all.

Check the ecosystem, not just the network.

When I was the Storage and Servers Editor over at NWC, I reviewed a new (at the time) HP server that was loaded. It had a ton of memory, a ton of cores, and could make practically any application scream. It even had two gigabit NICs in it. But they weren't enough. While I had almost nothing bad to say about the server itself, I did observe in the summary of my article that the network was now officially the bottleneck. Since the machine had high-speed SAS disks, disk I/O was not as big a deal as it traditionally has been, high-speed cached memory meant memory I/O wasn't a problem at all, and multiple cores meant you could cram a ton of processing power in. But team those two NICs and you'd end up with slightly less than 2 Gigabits of network throughput. Assuming 100% saturation, that was really less than 250 Megabytes per second, and that counts both in and out. For query-intensive database applications or media streaming servers, that just wasn't keeping pace with the server. Now here we are, six or so years later, and similar servers are in use all over the globe… running VMs. Meaning that several copies of the OS are now carving up that throughput.

So start with your server. Check it first if the rest of the network is performing; it might just be the problem. And while we're talking about servers, the obvious one needs to be mentioned… don't forget to check CPU usage. You just might need a bigger server or load balancing, or, these days, fewer virtual machines on your server. Heck, as long as we're talking about servers, let's consider the app too. The last few years, for a variety of reasons, we've seen less focus on apps whose performance is sucktacular, but it still happens. Worth looking into if the server turns out to be the bottleneck.

Old gear is old.

I was working on a network that deployed an ancient Cisco switch. The entire network was 1 Gig, except for that single switch. But tracing wires showed that switch to lie between the Internet and the internal network. A simple error, easily fixed, but an easy error to have in a complex environment, and certainly one to be aware of. That switch was 10/100 only. We pulled it out of the network entirely, and performance instantly improved.

There's necessary traffic, and then there's…

Not all of the traffic on your network needs to be.
And all that does need to be doesn't have to be so bloated. Look for sources of UDP broadcasts. More often than you would think, applications broadcast traffic that you don't care about. Cut them off. For other traffic, well, there is Application Delivery Optimization. ADO improves application delivery through a variety of technical solutions, but we'll focus on those that make your network and your apps seem faster. You already know about them – compression, caching, image optimization… and in the case of back-end services, de-duplication. But have you considered what they do other than improve perceived or actual performance?

Free Bandwidth

Anything that reduces the size of application data leaving your network also reduces the burden on your Internet connection. This goes without saying, but as I alluded to above, we sometimes overlook the fact that it is not just application performance we're impacting, but the effectiveness of our connections – connections that grow more expensive by leaps and bounds each time we have to upgrade them. While improving application performance is absolutely a valid reason to seek out ADO, delaying or eliminating the need to upgrade your Internet connection(s) is another. Indeed, in many organizations it is far easier to do TCO justification based upon deferred upgrades than based upon "our application will be faster", while both are benefits of ADO.

New stuff!

And as time wears on, SPDY, IPv6, and a host of other technologies will be more readily available to help you improve your network. Meanwhile, check out gateways for these protocols to make the transition easier.

In Summation

There are a lot of reasons for apps not to perform, and there are a lot of benefits to ADO. I've listed some of the easier problems to ferret out; the deeper into your particular network you get, the harder it is to generalize problems. But these should give you a taste for the types of things to look for. And a bit more motivation to explore ADO. Of course I hope you choose F5 gear for ADO and overall application networking, but there are other options out there. I think. Maybe.
Hype Cycles, VDI, and BYOD

Survive the hype. Get real benefits. #F5 #BYOD #VDI

Something subtle is occurring in the spaces of Virtual Desktop Infrastructure (VDI) and Bring Your Own Device (BYOD) that, while not generally part of the technology hype cycle, seems to be an adjunct of it in these two very specific cases. In espionage, when launching a misinformation campaign, several different avenues are approached for maximum impact and to increase believability. While I don't think this over-hype cycle is anything like an espionage campaign, the similarities that do exist are intriguing.

The standard hype cycle tells you how a product category will change the world; we've seen it over and over in different forms. Get X or get behind your competitors. Product Y will solve all your business problems and cook you toast in the morning. More revolutionary than water, product Z is the next big thing. Product A will eliminate the need for (insert critical IT function here)… The hype cycle goes on, and eventually wears out as people find actual uses for the product category, and its limitations. At that point, the product type settles into the enterprise, or into obscurity. This is all happening with both VDI and BYOD.

VDI will reduce your expenses while making employees happier and licensing easier… Well, that was the hype. We've discovered as an industry that neither of these hype points was true, and realized that the anecdotal evidence was there from the early days. VDI is useful for reducing hardware expenses, but even that claim is being called into question by those who have implemented it and paid for servers and some form of client. There are very valid reasons for VDI, but it is still early from a mass-adoption standpoint, and enterprises seem to acknowledge it needs some time. Still, the hype engine declares each new year "The Year of VDI". Server virtualization shows that there won't be a single year of VDI when its time comes, but rather a string of very successful years that slowly add up.

Meanwhile, BYOD is everything for everyone. Employees get the convenience of using the device of their choice, employers get to reduce expenses on mobile devices, the sun comes out, birds sing beautiful madrigals… Except that, like every hype cycle ever, that's nowhere near the reality. Try this one on for size to see what I'm talking about (if you're not familiar with my example corporation ZapNGo, see my Load Balancing For Developers series of blogs)… Because that is never going to happen. Which means it is not "bring your own device", it is "let us buy the device you like", which will also not last forever unless there is management convergence. In the desktop space, organizations standardized on Windows because it allowed them to care for their gear without having to have intimate knowledge of three (or ten) different operating systems. There is no evidence that corporations are going to willy-nilly support different device OSs and the different apps that go along with using multiple device OSs. In many industries, the need to control the device at a greater level than most users want their personal devices messed with (health care and financial services spring to mind) will drive a dedicated device for work, like it or not.

So why is there so much howling about "the democratization of IT"? Simple: the people howling for BYOD belong to two groups:
C-level execs who need access 24x7 in case an emergency occurs, and people like me – techno geeks who want to pop onto work systems from their home tablet. The other 90% of the working population? They do not want work stuff on their personal tablet or phone. They just don't. They'll take a free tablet from their employer and even occasionally use it for work, but even that is asking much of a large percentage of workers.

Am I saying BYOD is bogus? No. But bear with me while I circle back to VDI for a moment… In the VDI space, there was originally a subtle pressure to do massive implementations. We've had decades of experience as an industry at this point, and we know better than to do massive rip-n-replace projects unless we have to. They just never go as planned; they're too big. So most organizations follow the standard pilot / limited implementation / growing implementation cycle. And that makes sense, even if it doesn't make VDI vendors too happy. In the interim, if you discover something that makes a given VDI implementation unsuitable for your needs, you still have time to replace it with a parallel project.

The most cutting-edge (to put it nicely) claims are that your employees – all of them! – want access to their desktops on their BYOD device. Let me see, how do you spell that… S L O W. That's it. Most of your employees don't want work desktops at work, let alone on their tablet at the beach. It's just us geeks and the overachievers who even think it's a good idea, and performance will quickly disillusion even the most starry-eyed of those of us who do.

So let's talk realities. What will we end up with after the hype is done, and how can you best proceed? Well, here are some steps you can take. Some of these are repeated from my VDI blog post of a few months ago, but repetition is not a terrible thing, as long as it's solid advice.

Pick targeted deployments of both. Determine who most wants/needs BYOD and see if those groups can agree on a platform. Determine what segments of your employee population are best suited to VDI; low CPU usage or a high volume of software installs are good places to start.

Make certain your infrastructure can take the load and the new types of traffic. If not, check the options F5 has to accelerate and optimize your network for both products.

Move slowly. The largest VDI vendors are certainly not going anywhere, and at a minimum, neither are Android and iOS. So take your time and do it right.

SELL your chosen solutions. IT doesn't generally do very well on the selling front, but if you replace a local OS with a remote one, every performance issue will be your fault unless you point out that, no matter where they are geographically, their desktop stays the same, that it is easier to replace client hardware when you don't have to redo the entire desktop, etc.

Gauge progress. Many VDI deployments fell on hard times because they were too ambitious, or because they weren't ambitious enough and didn't generate benefits, or because they weren't properly sold to constituents. BYOD will face similar issues. Get a solid project manager and have them track progress not in terms of devices or virtual desktops, but in terms of user satisfaction and issues that could blow up as the project grows.

That's it for now. As always, remember that you're doing what's best for your organization. Do not concern yourself with hype, market, or vendor health; those will take care of themselves.
Related Articles and Blogs:
Four Best Practices for Reducing Risk in the Cloud
Dreaming of Work
Speed Matters, but Dev Speed or App Speed?
Infographic: Protect Yourself Against Cybercrime
Multiscreen Multitasking
Upcoming Event: F5 Agility Summit 2012
Enterprise Still Likes Apple over Android