F5 Friday: In the NOC at Interop
#interop #fasterapp #adcfw #ipv6 Behind the scenes in the Interop network

Interop Las Vegas expects somewhere in the realm of 10,000+ attendees this year. Most of them will no doubt be carrying smartphones, many tablets, and of course the old standby, the laptop. Nearly every one will want access to some service – inside or out. The Interop network provides that access – and more. F5 solutions will provide IT services, including IPv4–IPv6 translation, firewall, SSL VPN, and web optimization technologies, for the Network Operations Center (NOC) at Interop.

The Interop 2012 network comprises the show floor Network Operations Center (NOC) and three co-location sites: Colorado (DEN), California (SFO), and New Jersey (EWR). The NOC moves with the show to its four venues: Las Vegas, Tokyo, Mumbai, and New York. F5 has taken a hybrid application delivery network architectural approach – leveraging both physical devices (in the NOC) and virtual equivalents (in the Denver DC). Both physical and virtual instances of F5 solutions are managed via a BIG-IP Enterprise Manager 4000, providing operational consistency across the various application delivery services provided: DNS, SMTP, NTP, global traffic management (GSLB), remote access via SSL VPN, local caching of conference materials, and data center firewall services in the NOC DMZ.

Because the Interop network supports both IPv6 and IPv4, F5 is also providing NAT64 and DNS64 services.

NAT64: Network address translation is performed between IPv6 and IPv4 on the Interop network, allowing IPv6-only clients and servers to communicate with hosts on IPv4-only networks.

DNS64: IPv6-to-IPv4 DNS translations are also performed by these BIG-IPs, allowing A records originating from IPv4-only DNS servers to be converted into AAAA records for IPv6 clients.

F5 is also providing SNMP, SYSLOG, and NETFLOW services to vendors at the show for live demonstrations.
This is accomplished by cloning the incoming traffic and replicating it out through the network. At the network layer, such functionality is often implemented by simply mirroring ports. While this is sometimes necessary, it does not necessarily provide the level of granularity (and thus control) required. Mirrored traffic does not distinguish between SNMP and SMTP, for example, unless specifically configured to do so. While cloning via an F5 solution can be configured to act in a manner consistent with port mirroring, cloning via F5 also allows intermediary devices to intelligently replicate traffic based on information gleaned from deep content inspection (DCI). For example, traffic can be cloned to a specific pool of devices based on the URI, client IP address, client device type, or destination IP. Virtually any contextual data can be used to determine whether or not to clone traffic.

You can poke around with more detail, photos, and network diagrams at F5’s microsite supporting its Interop network services. Dashboards, documentation, pictures, and more general information on the network and the F5 services supporting the show are all available. And of course, if you’re going to be at Interop, stop by the booth and say “hi”! I’ll keep the light on for ya…

F5 Interopportunities at Interop 2012
F5 Secures and Optimizes Application and Network Services for the Interop 2012 Las Vegas Network Operations Center
When Big Data Meets Cloud Meets Infrastructure
Mobile versus Mobile: 867-5309
Why Layer 7 Load Balancing Doesn’t Suck
BYOD–The Hottest Trend or Just the Hottest Term
What Does Mobile Mean, Anyway?
Mobile versus Mobile: An Identity Crisis
The Three Axioms of Application Delivery
Don’t Let Automation Water Down Your Data Center
The Four V’s of Big Data

Fixing Internet Explorer & AJAX
A few weeks ago, as developers are wont to do, I rewrote our online gameroom. Version 1 was getting crusty, and I'd written all the AJAX handlers manually and wanted to clean up the code by using Prototype and Script.aculo.us. You may recall we discussed using these tools to build a Web 2.0 interface to iControl. So I rewrote it and was pretty pleased with myself. Until one of our players asked why it wasn't working in Internet Explorer (IE).

Now, Version 1 hadn't worked in IE either, but because I have a captive set of users I ignored the problem and forced them all to use Firefox instead. But this player's wife will be joining us soon, and she's legally blind. She uses a reader to get around the Internet, and as luck would have it, the reader only works with IE. So I started digging into the problem.

I had thought it was my code (silly me), and thus that moving to Prototype would solve the problem. No such luck. Everything but the periodically updated pieces of the application worked fine. The real-time updating components? Broken in IE. I looked around and found a very interesting article on Wikipedia regarding known problems with IE and XMLHttpRequest, the core of AJAX. From the Wikipedia article:

Most of the implementations also realize HTTP caching. Internet Explorer and Firefox do, but there is a difference in how and when the cached data is revalidated. Firefox revalidates the cached response every time the page is refreshed, issuing an "If-Modified-Since" header with its value set to the value of the "Last-Modified" header of the cached response. Internet Explorer does so only if the cached response is expired (i.e., after the date of the received "Expires" header).

Basically, the problem lies with IE's caching mechanisms. So if you were trying to build an AJAX application with a real-time updating component and it didn't seem to work in IE, now you may know why that is.
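The difference described in that quote is the whole bug. The Python sketch below is a toy model of the quoted IE behavior, not actual browser code; the URI, TTL, and values are made up for illustration:

```python
class IeStyleCache:
    """Toy model of the behavior quoted above: a cached response is
    reused, without asking the server, until its Expires time passes."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds        # hypothetical Expires horizon sent by the server
        self.store = {}               # uri -> (expires_at, body)

    def get(self, uri, fetch, now):
        entry = self.store.get(uri)
        if entry and now < entry[0]:
            return entry[1]           # served from cache: the request never leaves the browser
        body = fetch()                # miss or expired: actually hit the server
        self.store[uri] = (now + self.ttl, body)
        return body

server_state = {"value": 0}           # the state a polling AJAX call is supposed to track

cache = IeStyleCache()
first = cache.get("/gameroom/status", lambda: server_state["value"], now=0)
server_state["value"] = 42            # the game state changes server-side...
stale = cache.get("/gameroom/status", lambda: server_state["value"], now=30)
fresh = cache.get("/gameroom/status?t=30", lambda: server_state["value"], now=30)
print(first, stale, fresh)            # 0 0 42
```

The second lookup never reaches the server, which is exactly why the real-time components appeared broken; the third lookup previews the standard fix, since a unique URI defeats the expiry check entirely.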
There are workarounds:

1. Modify the AJAX call (within the client-side script) to check the response and, if necessary, make a second call with a Date value in the past to force the call to the server.
2. Append a unique query string to the call, for example a timestamp. This makes the URI unique, ensuring it won't be in the cache and forcing IE to call out to the server to get it.
3. Change all requests to use POST instead of GET.
4. Force the "Expires" header to be set in the past (much in the way we expire cookies programmatically). Setting cache control headers may also help force IE to act according to expectations.

I used option #3, because it was a simple, quick fix for me to search the single script using Ajax.PeriodicalUpdater and automatically change all the GETs to POSTs. That may not be feasible for everyone, hence the other available options.

Option #4 could easily be achieved using iRules, and could be coded such that only requests sent via IE are modified. In fact, Joe has a great post on how to prevent caching on specific file types that can be easily modified to solve the problem with IE. First we want to know if the browser is IE, and if so, we want to modify the caching behavior on the response. Don't forget that IE7 uses a slightly different User-Agent header than previous versions of IE. Don't look for specific versions; just try to determine if the browser is a version of IE.

when HTTP_REQUEST {
    if {[string tolower [HTTP::header "User-Agent"]] contains "msie"} {
        set foundmatch 1
    } else {
        set foundmatch 0
    }
}
when HTTP_RESPONSE {
    if {$foundmatch == 1} {
        HTTP::header replace Cache-Control no-cache
        HTTP::header replace Pragma no-cache
        HTTP::header replace Expires -1
    }
}

You could also use an iRule to accomplish #3 dynamically, changing the code only for IE browsers instead of all browsers. This requires a bit more work, as you'll have to search through the payload for 'GET' and replace it with 'POST'.
It's a good idea to make the search string as specific as possible to ensure that only the HTTP methods are replaced in the Ajax.PeriodicalUpdater calls, and not every place the letters may appear in the document – hence the inclusion of the quotes around the methods. Happy Coding!

Imbibing: Coffee

F5 Friday: When the Solution to a Vulnerability is Vulnerable, You Need a New Solution
#v11 Say hello to DNS Express

You may recall we recently expounded upon the need for the next generation of infrastructure to provide more protection of critical DNS services. This is particularly important given recent research on behalf of VeriSign that found “60% of respondents rely on their websites for at least 25% of their annual revenue.” Combined with findings that DDoS attacks, DNS failures, and attacks comprised 65% of unplanned downtime in the past year, the financial impact on organizations is staggering.

We also described the most popular solution today, DNS caching, and mentioned that it turns out this solution is itself vulnerable to attack. DNS caching can be defeated by simply requesting non-existent resources. This is not peculiar to DNS, by the way, but rather to caching and the way it works. Caching is designed as a proxy for content – content that is always obtained from the originating server. Thus, if you request a resource that does not exist in the cache, the cache must in turn query the originating server to retrieve it. If you start looking up randomly created host names you know don’t exist, you can quickly overwhelm the originating server (and potentially the cache) and voila! Successful DDoS.

Like an increasing number of modern attacks, this vulnerability is no one’s fault per se; it’s an exploitation of the protocol’s assumptions and designed behavior. But as has been noted before, expected behavior is not necessarily acceptable behavior. For IT, that only matters inasmuch as it aids in finding a more secure, i.e. non-vulnerable, solution.

INTRODUCING DNS Express

BIG-IP v11 introduced DNS Express, comprising several new capabilities that provide comprehensive DNS protection and address just this vulnerability as part of an overall feature set designed to maintain availability for critical DNS services.
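Before looking at the mechanics, the cache-defeating attack above is worth making concrete. The Python sketch below is a toy resolver model (not any product's behavior; the zone, names, and counters are invented) showing that never-repeating host names give a caching tier a 100% miss rate, forwarding every single query to the origin:

```python
import random
import string

random.seed(7)                 # deterministic, for the sake of the example

authoritative_queries = 0      # load actually seen by the originating DNS server

def origin_lookup(name):
    """Stand-in for the authoritative server; the random names don't exist."""
    global authoritative_queries
    authoritative_queries += 1
    return None                # NXDOMAIN

cache = {}

def resolve(name):
    if name in cache:          # legitimate, repeating traffic is absorbed here
        return cache[name]
    answer = origin_lookup(name)
    cache[name] = answer       # even negative answers get cached...
    return answer

# ...but an attacker who never repeats a name never gets a cache hit.
for _ in range(1000):
    label = "".join(random.choices(string.ascii_lowercase, k=12))
    resolve(label + ".example.com")

print(authoritative_queries)   # 1000: every query reached the origin
```

Negative caching (storing NXDOMAIN answers, as the sketch does) doesn't help, because the attacker simply never asks for the same name twice.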
DNS Express is a new DNS service available in BIG-IP v11 that implements an authoritative in-memory DNS service capable of storing tens of millions of records. This caching-style solution is enhanced by the CMP (Clustered Multi-Processing) enabled TMOS platform, which allows BIG-IP Global Traffic Manager (GTM) to respond to hundreds of thousands of queries per second (millions per second on the VIPRION hardware platforms). Rounding out this strategic trifecta of DNS goodness is IP Anycast integration, which obfuscates the number and topological attributes of DNS servers while simultaneously distributing load. This is an important facet, as attackers often target DNS servers one by one; without the ability to determine how many servers may be present, attackers must choose whether to forge ahead – possibly wasting their valuable time – or concede defeat.

A DNS infrastructure based on DNS Express allows customers to leverage the ability of BIG-IP to withstand even the most persistent DDoS load by enacting a zone transfer from a DNS pool to BIG-IP GTM, which subsequently acts as a high-speed authoritative slave DNS service. It is an architectural solution that is fairly non-disruptive to the existing architecture and, by leveraging core TMOS features such as iRules, adds control and flexibility in designing solutions specifically for a data center’s unique needs and business requirements. This solution realizes the benefits of a DNS-caching solution while mitigating the risk that an attacker will exploit the behavior of caching solutions with a barrage of randomly generated host name requests.

Happy Safe Resolving!

DNS is Like Your Mom
All F5 Friday Posts on DevCentral
It’s DNSSEC Not DNSSUX
Introducing v11: The Next Generation of Infrastructure
BIG-IP v11 Information Page
The End of DNS As We Know It
Taking Down Twitter as easy as D.N.S.
Cloud Balancing, Cloud Bursting, and Intercloud
Achieving Enterprise Agility in the Cloud (Cloudbursting with VMware, BlueLock, and F5)
DNSSEC: The Antidote to DNS Cache Poisoning and Other DNS Attacks

The Need for (HTML5) Speed
#mobile #HTML5 #webperf #fasterapp #ado The importance of understanding acceleration techniques in the face of increasing mobile and HTML5 adoption

An old English proverb observes that "Even a broken clock is right twice a day.” A more modern idiom involves a blind squirrel and an acorn, and I’m certain there are many other culturally specific nuggets of wisdom that succinctly describe what is essentially blind luck. The proverb and modern idioms fit well the case of modern acceleration techniques as applied to content delivered to mobile devices. A given configuration of options and solutions may inadvertently be “right” twice a day purely by happenstance, but the rest of the time it may not be doing all that much good. With HTML5 adoption increasing rapidly across the globe, the poor performance of parsing on mobile devices will require more targeted and intense use of acceleration and optimization solutions.

THE MOBILE LAST MILES

One of the reasons content delivery to mobile devices is so challenging is the number of networks and systems through which the content must flow. Unlike WiFi-connected devices, which traverse controllable networks as well as the Internet, content delivered to mobile devices connected via carrier networks must also traverse the mobile (carrier) network. Add to that challenge the constrained processing power of mobile devices imposed by carriers and manufacturers alike, and delivering content to these devices in an acceptable timeframe becomes quite challenging. Organizations must contend not only with network conditions across three different networks but also with the capabilities and innate limitations of the devices themselves. Such limitations include processing capabilities, connection models, and differences in web application support. Persistence and in-memory caching are far more limited on mobile devices, making reliance on traditional caching strategies as a key component of acceleration techniques less than optimal.
Compression and de-duplication of data, even over controlled WAN links when mobile devices are in WiFi mode, may not be as helpful as they are for desktop and laptop counterparts, given mobile hardware limitations. Differences in connection models – on mobile devices connections are sporadic, shorter-lived, and ad hoc – render traditional TCP-related enhancements ineffective. TCP slow-start mechanisms, for example, are particularly frustrating under the hood for mobile device connections because connections are constantly being dropped and restarted, forcing TCP to begin again very slowly. TCP, in a nutshell, was designed for fixed networks, not mobile networks.

A good read on this topic is Ben Strong’s “Google and Microsoft Cheat on Slow-Start. Should You?” His testing (in 2010) showed both organizations push the limits for the IW (initial window) higher than the RFC allows, with Microsoft nearly ignoring the limitations altogether. Proposals to increase the IW in the RFC to 10 have been submitted, but thus far there does not appear to be consensus on whether or not to allow this change. Also not discussed is the impact of changing the IW on fixed (desktop, laptop, LAN) connected devices – the assumption being that the IW is specified as it is because it was optimal for fixed end-points, and changing it would be detrimental to performance for those devices.

The impact of TCP on mobile performance (and vice versa) should not be underestimated. CloudFlare has a great blog post on the impact of mobility on TCP-related performance, concluding that:

TCP would actually work just fine on a phone except for one small detail: phones don't stay in one location. Because they move around (while using the Internet) the parameters of the network (such as the latency) between the phone and the web server are changing and TCP wasn't designed to detect the sort of change that's happening.
-- CloudFlare blog: Why mobile performance is difficult

One answer is more intelligent intermediate acceleration components, capable not only of detecting the type of end-point initiating the connection (mobile or fixed) but of actually doing something about it, i.e. manipulating the IW and other TCP-related parameters on the fly – dynamically and intelligently. Of course, innate parsing and execution performance on mobile devices contributes significantly to the perception of performance on the part of the end-user. While HTML5 may be heralded as a solution to cross-platform, cross-environment compatibility issues, it brings to the table performance challenges that will need to be overcome.

http://thenextweb.com/dd/2012/05/22/html5-runs-up-to-thousands-of-times-slower-on-mobile-devices-report/

In the latest research by Spaceport.io on the performance of HTML5 on desktop vs smartphones, it appears that there are performance issues for apps, and in particular games, on mobile devices. Spaceport.io used its own Perfmarks II benchmarking suite to test HTML rendering techniques across desktop and mobile browsers. Its latest report says:

We found that when comparing top of the line, modern smartphones with modern laptop computers, mobile browsers were, on average, 889 times slower across the various rendering techniques tested. At best the iOS phone was roughly 6 times slower, and the best Android phone 10 times slower. At worst, these devices were thousands of times slower.

Combining the performance impact of parsing HTML5 on mobile devices with mobility-related TCP impacts paints a dim view of performance for mobile clients in the future, especially as improving the parsing speed of HTML5 is (mostly) out of the hands of operators and developers alike. Very little can be done to impact the parsing speed aside from transformative acceleration techniques, many of which are often not used for fixed client end-points today.
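The significance of the IW debate is easy to see with a little arithmetic. The Python sketch below is an idealized model (no loss, the congestion window simply doubles each round trip, a typical 1460-byte MSS), not a faithful TCP simulation:

```python
import math

MSS = 1460  # bytes per segment; typical for Ethernet-sized paths

def slow_start_rtts(size_bytes, initial_window):
    """Round trips needed to push size_bytes while the congestion
    window doubles from initial_window each RTT (idealized, lossless)."""
    segments = math.ceil(size_bytes / MSS)
    sent, cwnd, rtts = 0, initial_window, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

print(slow_start_rtts(100_000, 3))   # 5 round trips for a 100 KB response at IW=3
print(slow_start_rtts(100_000, 10))  # 3 round trips at the proposed IW=10
```

On a mobile path with a 200 ms RTT, those two extra round trips are 400 ms of pure waiting – and because mobile connections are constantly torn down and re-established, that slow-start tax is paid over and over.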
Which puts the onus back on operators to use the tools at their disposal (acceleration and optimization) to improve delivery as a means to offset and hopefully improve the overall performance of HTML5-based applications to mobile (and fixed) end-points.

DON’T RELY on BLIND LUCK

Organizations seeking to optimize delivery to mobile and traditional end-points need more dynamic and agile infrastructure solutions capable of recognizing the context in which requests are made and adjusting delivery policies – from TCP to optimization and acceleration – on demand, as necessary to ensure the best delivery performance possible. Such infrastructure must be able to discern whether the improvements from minification and image optimization will be offset by TCP optimizations designed for fixed end-points interacting with mobile end-points – and do something about it.

It’s not enough to configure a delivery chain comprised of acceleration and optimization designed for delivery of content to traditional end-points, because the very same services that enhance performance for fixed end-points may be degrading performance for mobile end-points. It may be that twice a day, like a broken clock, the network and end-point parameters align in such a way that the same services enhance performance for both fixed and mobile end-points. But relying on such a convergence of conditions as a performance management strategy is akin to relying on blind luck.

Addressing mobile performance requires a more thorough understanding of acceleration techniques – particularly from the perspective of what constraints they best address and under what conditions. Trying to leverage the browser cache, for example, is a great way to improve fixed end-point performance, but may backfire on mobile devices because of their limited caching capabilities.
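As a sketch of what adjusting delivery policy per client can mean at the caching layer, the Python fragment below varies response freshness by device type. The header values and the crude User-Agent sniff are illustrative assumptions, not a recommended policy:

```python
import time
from email.utils import formatdate

MOBILE_TOKENS = ("iPhone", "Android", "Mobile")   # crude, illustrative detection

def caching_headers(user_agent, now=None):
    """Give fixed clients a long freshness lifetime; assume a mobile
    device's small cache won't hold assets long, so keep max-age short."""
    now = time.time() if now is None else now
    mobile = any(token in user_agent for token in MOBILE_TOKENS)
    max_age = 300 if mobile else 86400            # hypothetical per-asset values
    return {
        "Cache-Control": "public, max-age=%d" % max_age,
        "Expires": formatdate(now + max_age, usegmt=True),
    }

print(caching_headers("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0)")["Cache-Control"])
# public, max-age=300
print(caching_headers("Mozilla/5.0 (Windows NT 6.1)")["Cache-Control"])
# public, max-age=86400
```

The point is not the particular values but that the policy is computed per request from context, rather than configured once for an imaginary average client.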
On the other hand, HTML5 introduces client-side cache APIs that may be useful, but they are different enough from previous HTML caching directives that supporting both will require planning and a flexible infrastructure for execution. In many ways this API will provide opportunities to better leverage client-side caching capabilities, but it will require infrastructure support to ensure targeted caching policies can be implemented.

As HTML5 continues to become more widely deployed, it’s important to understand the various acceleration and optimization techniques, what each is designed to overcome, and what networks and platforms they are best suited to serve in order to overcome the inherent limitations of HTML5 and the challenge of mobile delivery.

Google and Microsoft Cheat on Slow-Start. Should You?
HTML5 runs up to thousands of times slower on mobile devices: Report
Application Security is a Stack
Y U No Support SPDY Yet?
The “All of the Above” Approach to Improving Application Performance
What Does Mobile Mean, Anyway?
The HTTP 2.0 War has Just Begun

Stripping EXIF From Images as a Security Measure
#fasterapp #infosec And you thought FourSquare was a security risk…

Mobile phones with great cameras are an awesome tool. Many of these photos end up on Facebook, visible to friends, family and, well, friends of friends and maybe even the public. They get shared around so much, you can’t really be sure where they might eventually wind up. According to Justin Mitchell, an engineer for Facebook Photos, answering a Quora question on the subject last year, Facebook has “over 200 million photos uploaded per day, or around 6 billion per month. There are currently almost 90 billion photos total on Facebook. This means we are, by far, the largest photos site on the Internet.”

As most of these are uploaded via modern cameras – whether on mobile phones or digital cameras – which are almost universally enabled with GPS technology, they almost certainly include some data that you might not want others to find: the exact location the picture was taken.

Pshaw! many may think. After all, “checking in” via FourSquare and adding location to Facebook and Twitter posts is something many do regularly. But this data can be very dangerous, and not just for soldiers who have been warned against geotagging photos uploaded to Facebook, as cited by a recent Gizmodo article, “US Soldiers Are Giving Away Their Positions with Geotagged Photos”:

The Army has issued a warning to its soldiers to stop geotagging their photos on Facebook and other social media outlets. Because it's putting soldiers in danger, and has been for years.

Now, you might not be worried about giving away the location of helicopters inside a compound, leading to the enemy being able to “conduct a mortar attack, destroying four of the AH-64 Apaches” there, but the risk exists for everyone. Those who share photos of their home or things in their home (can’t resist showing off your latest collectible addition to friends, can you?)
are opening themselves up to theft, especially if they also like to broadcast their latest travel schedules via a host of other socially connected tools. Even if you aren’t actively sharing your address, all a potential thief needs to do is grab a photo of your fat loot and extract the GPS coordinates hidden in the EXIF data to find his target and then move in – right after you made sure everyone knew you were out of town by broadcasting your latest flight information (ATL –> ORD –> SEA).

“But I’ve locked down my photos using Facebook’s privacy features!” you say. You might have done so, but do you really know everyone on your list of “friends”? Are they really who they claim to be? And did any of them share your photo with their friends, and their friends? Facebook privacy doesn’t prevent the old standby of “save as” and “upload”, and a quick tag of your name and a Twitter search and bam! You’ve shared data with people you wouldn’t have, if only you had known.

While perhaps requiring a bit more paranoia than the average user has (and an inherent distrust in humanity), there are very real security implications for a wide variety of folks in embedding geotags in photos via EXIF – though perhaps more for those in service to their country than for others. There is a simple and more automated mitigation for this risk. In addition to turning off geolocation tags on your camera or phone or manually eradicating the EXIF info from photos, a mediating application delivery service can, on demand, strip this data from images.

MEDIATED EXIF STRIPPING

With the right application delivery tier implementation, mediated EXIF stripping is as simple as other content scrubbing exercises. Requests are received as normal for an image object. When the image is retrieved from the origin server, a service in the application delivery tier is invoked that strips EXIF data from the image before it is returned to the end-user or deposited in a caching solution.
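A minimal version of that scrubbing step can be sketched in pure Python at the JPEG byte level: EXIF (and its GPS tags) lives in APP1 marker segments, so dropping those segments removes the metadata without touching the pixels. This is an illustration of the idea, not production image handling (it ignores malformed streams and non-JPEG formats):

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1/Exif segments from a JPEG byte stream."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:
            break                      # malformed; stop rather than guess
        marker = jpeg[i + 1]
        if marker == 0xDA:             # SOS: entropy-coded image data follows, copy verbatim
            out += jpeg[i:]
            break
        (length,) = struct.unpack(">H", jpeg[i + 2:i + 4])
        segment = jpeg[i:i + 2 + length]
        # APP1 segments beginning with the "Exif\0\0" signature carry the
        # metadata block, including any GPS tags; skip them, keep the rest.
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    return bytes(out)

# A hand-built JPEG skeleton: SOI, an Exif APP1 segment, a JFIF APP0
# segment, then a (fake) scan. Only the APP1 segment should disappear.
app1 = b"\xff\xe1\x00\x0f" + b"Exif\x00\x00" + b"GPSdata"
app0 = b"\xff\xe0\x00\x07" + b"JFIF\x00"
scan = b"\xff\xda\x00\x04ZZ\xff\xd9"
print(strip_exif(b"\xff\xd8" + app1 + app0 + scan) == b"\xff\xd8" + app0 + scan)  # True
```

Because the metadata rides in its own marker segment ahead of the image data, removing it leaves the picture rendering identically – just without the coordinates.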
Subsequent requests for that same image, then, though served out of cache, are also clear of potentially dangerous GPS information – without modifying the original.* That’s important, as for some folks having that information available may be necessary or desirable, but serving it up to the public may simply incur too much risk. Given the velocity with which we click and share photos today, we may be underestimating the associated risk.

Others may think that’s just far too paranoid and desire to keep EXIF data in their images. This is another opportunity for providers to monetize a service. The right application delivery tier, capable of interpreting context as well as being instructed by external infrastructure (including applications), could be configured such that only image-containing responses with specific HTTP headers are subject to EXIF stripping. The more security-minded users may desire such a service – and be willing to pay for it – while others could simply continue on as they were, EXIF and all.

And even if you aren’t concerned with the potential security risks associated with EXIF, you might want to consider that stripping extraneous data from images like thumbnails and product shots can reduce the overall size of the image – a boon if you’re trying to improve overall performance, particularly on network- and resource-constrained devices like mobile phones.

* Image optimization techniques are always best-effort and sometimes cannot be applied to an image given other factors. Also, if a positive caching model is used, the original image is served the first time it is requested, but not cached.

Network versus Application Layer Prioritization
Web App Performance: Think 1990s.
Mobile versus Mobile: 867-5309
Watch out for cloud congestion
What Does Mobile Mean, Anyway?
More Users, More Access, More Clients, Less Control
The Context-Aware Cloud
WILS: WPO versus FEO

True or False: Application acceleration solutions teach developers to write inefficient code
It has been suggested that the use of application acceleration solutions as a means to improve application performance would result in programmers writing less efficient code. In a comment on “The House that Load Balancing Built,” a reader replies:

Not only will it cause the application to grow in cost and complexity, it's teaching new and old programmers to not write efficient code and rely on other products and services on [sic] thier behalf. I.E. Why write security into the app, when the ADC can do that for me. Why write code that executes faster, the ADC will do that for me, etc., etc.

While no one can control whether a programmer writes “fast” code, the truth is that application acceleration solutions do not affect the execution of code in any way. A poorly constructed loop will run just as slowly with or without an application acceleration solution in place. Complex mathematical calculations will execute with the same speed regardless of the external systems that may be in place to assist in improving application performance. The answer is, unequivocally, that the presence or absence of an application acceleration solution should have no impact on the application developer, because it does nothing to affect the internal execution of written code. If you answered false, you got the answer right.

The question has to be, then, just what does an application acceleration solution do that improves performance? If it isn’t making the application logic execute faster, what’s the point? It’s a good question, and one that deserves an answer. Application acceleration is part of a solution we call “application delivery”.
Application delivery focuses on improving application performance through optimization of the use and behavior of the transport (TCP) and application transport (HTTP/S) protocols, offloading certain functions from the application that are more efficiently handled by an external, often hardware-based system, and accelerating the delivery of the application data.

OPTIMIZATION

Application acceleration improves performance by understanding how these protocols (TCP, HTTP/S) interact across a WAN or LAN and acting on that understanding to improve their overall performance. There are a large number of performance-enhancing RFCs (standards) around TCP that are usually implemented by application acceleration solutions:

Delayed and Selective Acknowledgments (RFC 2018)
Explicit Congestion Notification (RFC 3168)
Limited and Fast Re-Transmits (RFC 3042 and RFC 2582)
Adaptive Initial Congestion Windows (RFC 3390)
Slow Start with Congestion Avoidance (RFC 2581)
TCP Slow Start (RFC 3390)
TimeStamps and Windows Scaling (RFC 1323)

All of these RFCs deal with TCP and therefore have very little to do with the code developers create. Most developers code within a framework that hides the details of TCP and HTTP connection management from them. It is the rare programmer today who writes code to directly interact with HTTP connections, and rarer still to find one coding directly at the TCP socket layer. The execution of code written by the developer takes just as long regardless of the implementation or lack of implementation of these RFCs. The application acceleration solution improves the performance of the delivery of the application data over TCP and HTTP, which increases the performance of the application as seen from the user’s point of view.

OFFLOAD

Offloading compute-intensive processing from application and web servers improves performance by reducing the consumption of CPU and memory required to perform those tasks.
SSL and other encryption/decryption functions (cookie security, for example) are computationally expensive and require additional CPU and memory on the server. The reason offloading these functions to an application delivery controller or stand-alone application acceleration solution improves application performance is that it frees the CPU and memory available on the server and allows them to be dedicated to the application. If the application or web server does not need to perform these tasks, it saves CPU cycles that would otherwise be used to perform them. Those cycles can be used by the application, which increases the performance of the application.

Also beneficial is the way in which application delivery controllers manage TCP connections made to the web or application server. Opening and closing TCP connections takes time, and the time required is not something a developer – coding within a framework – can affect. Application acceleration solutions proxy connections for the client and subsequently reduce the number of TCP connections required on the web or application server, as well as the frequency with which those connections need to be opened and closed. By reducing the number and frequency of connections, application performance is increased because the server is not spending time opening and closing TCP connections, which are necessarily part of the performance equation but not directly affected by anything the developer does in his or her code.

The commenter believes that an application delivery controller implementation should be an afterthought. However, the ability of modern application delivery controllers to offload certain application logic functions, such as cookie security and HTTP header manipulation, in a centralized, optimized manner through network-side scripting can be a performance benefit as well as a way to address browser-specific quirks, and therefore should be seriously considered during the development process.
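The connection-management arithmetic above is worth a quick sketch. The per-connection cost below is a hypothetical placeholder (the real figure varies with RTT and whether TLS is involved), but the shape of the saving from connection multiplexing is the point:

```python
HANDSHAKE_COST_MS = 1.5   # hypothetical cost to open and close one server-side TCP connection

def server_connection_overhead_ms(requests, pooled_connections=None):
    """Toy accounting for connection reuse: without a proxy, every client
    request opens (and later closes) its own server-side TCP connection;
    with a multiplexing proxy, requests share a small pool of long-lived
    connections, so the server pays the setup cost once per pool slot."""
    if pooled_connections is None:
        opens = requests
    else:
        opens = min(requests, pooled_connections)
    return opens * HANDSHAKE_COST_MS

print(server_connection_overhead_ms(10_000))     # 15000.0 ms of pure connection churn
print(server_connection_overhead_ms(10_000, 8))  # 12.0 ms with a hypothetical 8-connection pool
```

None of that time was spent executing application code, which is precisely why the developer cannot recover it in the application itself.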
ACCELERATION

Finally, application acceleration solutions improve performance through the use of caching and compression technologies. Caching includes not just server-side caching, but the intelligent use of the client (usually browser) cache to reduce the number of requests that must be handled by the server. By reducing the number of requests the server must respond to, the web or application server is less burdened in terms of managing TCP and HTTP sessions and state, and has more CPU cycles and memory to dedicate to executing the application. Compression, whether using traditional industry-standard web-based compression (GZip) or WAN-focused data de-duplication techniques, decreases the amount of data that must be transferred from the server to the client. Decreasing traffic (bandwidth) results in fewer packets traversing the network, which results in quicker delivery to the user. This makes it appear that the application is performing faster than it is, simply because the data arrived sooner. Of all these techniques, the only one that could possibly contribute to the delinquency of developers is caching. This is because application acceleration caching features act on HTTP caching headers that can be set by the developer, but rarely are. These headers can also be configured by the web or application server administrator, but rarely are in a way that makes sense, because most content today is generated dynamically and is rarely static – even though individual components inside the dynamically generated page may in fact be very static (CSS, JavaScript, images, headers, footers, etc…). However, the methods through which caching (pragma) headers are set are fairly standard, and the actual code is usually handled by the framework in which the application is developed, meaning the developer ultimately cannot affect the efficiency of this method because it was implemented by someone else. The point of the comment was likely broader, however.
I am fairly certain that the commenter meant to imply that if developers know the performance of the application they are developing will be accelerated by an external solution that they will not be as concerned about writing efficient code. That’s a layer 8 (people) problem that isn’t peculiar to application delivery solutions at all. If a developer is going to write inefficient code, there’s a problem – but that problem isn’t with the solutions implemented to improve the end-user experience or scalability, it’s a problem with the developer. No technology can fix that.
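For completeness, here is a minimal sketch of what setting the caching headers discussed above looks like from application code. The helper name and values are illustrative, and nothing here is specific to any particular framework:

```python
from datetime import datetime, timedelta, timezone

def cache_headers(max_age=86400):
    """Build response headers that let the browser cache a static asset.

    As the article notes, these headers are usually left unset by
    developers or injected by the acceleration tier; nothing prevents
    the application from setting them itself. Values are illustrative.
    """
    expires = datetime.now(timezone.utc) + timedelta(seconds=max_age)
    return {
        # Modern caches honor Cache-Control...
        "Cache-Control": f"public, max-age={max_age}",
        # ...while Expires remains for HTTP/1.0-era intermediaries.
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }

headers = cache_headers(3600)   # cache static components for an hour
```

Attaching headers like these to the static components of a dynamic page (CSS, JavaScript, images) is what allows an acceleration tier, or the browser itself, to skip repeat requests entirely.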
#ado #fasterapp #stirling Carnegie Mellon testing of ADO solutions answers the age-old question: less filling or tastes great? You probably recall the old “Tastes Great vs Less Filling” advertisements from years ago – the ones that always concluded that the beer in question was not one or the other, but both. Whenever two ostensibly competing technologies attempt to solve the same problem, we run into the same old style of argument. This time, in the SPDY versus Web Acceleration debate, we’re inevitably going to arrive at the conclusion that it’s both less filling and tastes great.

SPDY versus Web Acceleration

In general, what may appear on the surface to be competing technologies are actually complementary. Testing by Carnegie Mellon supports this conclusion, showing marked improvements in web application performance when both SPDY and web acceleration techniques are used together. That’s primarily because web application traffic shows a similar pattern across modern, interactive Web 2.0 sites: big, fat initial pages followed by a steady stream of small requests and a variety of response sizes, typically small to medium in content length. We know from experience and testing that web acceleration techniques like compression provide the greatest improvements in performance when acting on medium-to-large responses, though actual improvement rates depend heavily on the network over which the data is being exchanged. We also know that compression can actually be detrimental to performance when responses are small (in the 1K range) and transferred over a LAN, because the processing time incurred to compress the data is greater than the time to traverse the network. But when used to compress larger responses traversing congested or bandwidth-constrained connections, compression is a boon to performance. It’s less filling. SPDY, though relatively new on the scene, is the rising star of web acceleration.
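The size-dependent compression tradeoff described above – worthwhile for large responses, counterproductive for tiny ones – can be sketched as a simple size gate. The threshold value is illustrative; a real solution would tune it to the network in play:

```python
import gzip

MIN_COMPRESS_SIZE = 1024  # ~1K: below this, compression tends to cost more than it saves

def maybe_compress(body: bytes):
    """Compress a response only when it is big enough to be worth it.

    Returns the payload to send and the Content-Encoding to advertise.
    A sketch of the decision, not a full HTTP implementation.
    """
    if len(body) < MIN_COMPRESS_SIZE:
        return body, "identity"      # too small: skip the CPU cost
    compressed = gzip.compress(body)
    if len(compressed) >= len(body):
        return body, "identity"      # incompressible: skip it too
    return compressed, "gzip"

small, enc_small = maybe_compress(b"{}")                    # tiny JSON
large, enc_large = maybe_compress(b"<li>item</li>" * 1000)  # repetitive HTML
```

An intelligent intermediary makes this call per response and per network, which is why the testing found the techniques additive rather than redundant.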
Its primary purpose is to optimize the application layer exchanges that typically occur via HTTP (requests and responses) by streamlining connection management (SPDY uses only one connection per client-host pair), dramatically reducing header sizes, and introducing asynchronicity along with prioritization. It tastes great. What the Carnegie Mellon testing shows is that when you combine the two, you get the best results, because each improves the performance of specific data exchanges that occur over the life of a user interaction.

HERE COMES the DATA

The testing was specifically designed to measure the impact of each of the technologies separately and then together. For the web acceleration functionality the researchers chose to employ BoostEdge (a software ADC), though one can reasonably expect similar results from other ADCs provided they offer the same web acceleration and optimization capabilities – generally a good bet in today’s market. The testing specifically looked at two approaches: Two of the most promising software approaches are (a) content optimization and compression, and (b) optimizing network protocols. Since network protocol optimization and data optimization operate at different levels, there is an opportunity for improvement beyond what can be achieved by either of the approaches individually. In this paper, we report on the performance benefits observed by following a unified approach, using both network protocol and data optimization techniques, and the inherent benefits in network performance by combining these approaches in to a single solution. -- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012 The results should not be surprising – when SPDY is combined with ADC optimization technologies, the result is both less filling and it tastes great. When tested in various combinations, we find that the effects are more or less additive and that the maximum improvement is gained by using BoostEdge and SPDY together.
Interestingly, the two approaches are also complimentary; i.e., in situations where data predominates (i.e. “heavy” data, and fewer network requests), BoostEdge provides a larger boost via its data optimization capabilities and in cases where the data is relatively small, or “light”, but there are many network transactions required, SPDY provides an increased proportion of the overall boost. The general effect is that relative level of improvement remains consistent over various types of websites. -- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012

NOT THE WHOLE STORY

Interestingly, the testing results show there is room for even greater improvement. The paper notes that comparisons include “an 11% cost of running SSL” (p 13). This is largely due to the use of a software ADC solution, which employs no specialized hardware to address the latency incurred by compute-intense processing such as compression and cryptography. Leveraging a hardware ADC with cryptographic hardware and compression acceleration capabilities should further improve results by reducing the latency incurred by both processes. The testers further did not test under load (which would have a significantly negative impact on the results) or leverage other proven application delivery optimization (ADO) techniques such as TCP multiplexing. While the authors mention the use of front-end optimization (FEO) techniques such as image optimization and client-side cache optimization, it does not appear that these were enabled during testing. Other techniques such as EXIF stripping, CSS caching, domain sharding, and core TCP optimizations are likely to provide even greater benefits to web application performance when used in conjunction with SPDY.

CHOOSE “ALL of the ABOVE”

What the testing concluded was that an “all of the above” approach would appear to net the biggest benefits in terms of application performance.
Using SPDY along with complementary ADO technologies provides the best mitigation of the latency-inducing issues that ultimately degrade the end-user experience. Ultimately, SPDY is one of a plethora of ADO technologies designed to streamline and improve web application performance. Like most ADO technologies, it is not universally beneficial to every exchange. That’s why a comprehensive, inclusive ADO strategy is necessary. Only an approach that leverages “all of the above” at the right time and on the right data will net optimal performance across the board.

The HTTP 2.0 War has Just Begun
Stripping EXIF From Images as a Security Measure
F5 Friday: Domain Sharding On-Demand
WILS: WPO versus FEO
WILS: The Many Faces of TCP
HTML5 Web Sockets Changes the Scalability Game
Network versus Application Layer Prioritization
The Three Axioms of Application Delivery
Mobile users feel the need …. the need for spe- please wait. Loading… We spent the week, like many other folks, at O’Reilly’s Velocity Conference 2011 – a conference dedicated to the speed of web sites. This year the conference organizers added a new track called Mobile Performance. With the consumerization of IT ongoing and the explosion of managed and unmanaged devices allowing ever-increasing amounts of time “connected” to enterprise applications and services, mobile performance – if it isn’t already an issue – will surely become one in the next few years. The adoption of HTML5 as a standard platform across mobile and traditional devices is a boon – optimizing the performance of HTML-based applications is something F5 knows a thing or two about. After all, there are more than 50 ways to use your BIG-IP system, and many of them are ways to improve performance – often in ways you may not have considered before.

NARROWBAND is the NEW NORMAL

The number of people who are “always on” today is astounding, and most of them are always on thanks to rapid technological improvements in mobile devices. Phones and tablets are now commonplace just about anywhere you look, and “that guy” is ready to whip out his device and verify (or debunk) whatever debate may be ongoing in the vicinity. Unfortunately, the increase in use has coincided with an increase in the amount of data being transferred without a similar increase in the bandwidth available to carry it. The attention on video these past few years – which is certainly increasing in both size and length – has overshadowed a similarly astounding bloat in the size and complexity of web page composition. It is this combination – size and complexity – that is likely to cause even more performance woes for mobile users than video. “A Google engineer used the Google bot to crawl and analyze the Web, and found that the average web page is 320K with 43.9 resources per page (Ramachandran 2010).
The average web page used 7.01 hosts per page, and 6.26 resources per host.” (Average Web Page Size Septuples Since 2003) Certainly the increase in broadband usage – which has “more than kept pace with the increase in the size and complexity of the average web page” (Average Web Page Size Septuples Since 2003) – has mitigated most of the performance issues that might have arisen had we remained stuck in the modem age. But mobile users are not so fortunate, and it is their last mile we must now focus on, lest we lose their attention to slow, unresponsive sites and applications. The consumerization of IT also means that enterprise applications are more and more being accessed via mobile devices – tablets, phones, etc… The result is the possibility not just of losing the attention of a potential customer, but of losing productivity – a much more easily quantified value that can be used to impart the potential severity of performance issues to those ultimately responsible for addressing them.

ADDRESSING MOBILE PERFORMANCE

If you thought the need for application and network acceleration solutions was long since over thanks to the rise of broadband, you thought too quickly. Narrowband, i.e. mobile connectivity, is still in the early stages of growth and as such still exhibits the same restricted bandwidth characteristics as pre-broadband solutions such as ISDN and A/DSL. The users, however, are far beyond broadband and expect instantaneous responses regardless of access medium. Thus there is a need to return to (if you ever left) the use of web application acceleration techniques to redress performance issues as soon as possible. Caching and compression are but two of the most common acceleration techniques available, and F5 is no stranger to such solutions.
BIG-IP WebAccelerator implements both, along with other performance-enhancing features such as Intelligent Browser Referencing (IBR) and OneConnect, and can dramatically improve the performance of web applications by leveraging the browser to load those 6.26 resources per host more quickly while simultaneously eliminating most if not all of the overhead associated with TCP session management on the servers (TCP multiplexing). WebAccelerator – combined with some of the innate network protocol optimizations available in all F5 BIG-IP solutions thanks to their shared internal platform, TMOS – can do a lot to mitigate performance issues associated with narrowband mobile connections. The mobile performance problem isn’t new, after all, and these proven solutions should provide relief to end-users in both the customer and employee communities who weary of waiting for the web. HTML5 – the darling of the mobile world – will also have an impact on the usage patterns of web applications regardless of client device and network type. HTML5 inherently results in more requests and objects, and its adoption rate in the developer community is significant: a recent Evans Data survey indicates that in 2010, 28% of developers were using HTML5 markup, with 48.9% planning to use it in the future. More traffic. More users. More devices. More networks. More data. More connections. It’s time to start considering how to address mobile performance before it becomes an even steeper hill to climb.

The third greatest (useful) hack in the history of the Web
Achieving Scalability Through Fewer Resources
Long Live(d) AJAX
The Impact of AJAX on the Network
The AJAX Application Delivery Challenge
What is server offload and why do I need it?
3 Really good reasons you should use TCP multiplexing
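One way to cut down those 6.26 resources per host over a high-latency mobile link is to combine many small assets into one cache-friendly response. The sketch below is illustrative only; features like Intelligent Browser Referencing perform this kind of cache management transparently at the delivery tier, without changing the application. The function name and fingerprint scheme are hypothetical:

```python
import hashlib

def bundle(resources):
    """Concatenate many small assets into one fingerprinted response.

    Fewer objects per page means fewer round trips over a high-latency
    mobile connection. The content hash in the name lets the bundle be
    cached for a long time yet invalidated automatically by any change.
    """
    body = "\n".join(resources)
    digest = hashlib.sha1(body.encode()).hexdigest()[:8]
    return f"bundle-{digest}.css", body

name, body = bundle([".nav{color:red}", ".foot{color:gray}"])
```

One request replaces many, and because the name changes with the content, the browser can be told to cache the bundle aggressively without risk of serving stale styles.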
I'm going to give you an engine low to the ground. An extra-big oil pan that'll cut the wind underneath you. That'll give you more horsepower. I'll give you a fuel line that'll hold an extra gallon of gas. I'll shave half an inch off you and shape you like a bullet. When I get you primed, painted and weighed... ...you're going to be ready to go out on that racetrack. You're going to be perfect. (From the movie: Days of Thunder) In the monologue above, Harry Hogge, crew chief, is talking to the framework of a car; explaining how it is that he's going to architect her for speed. What I love about this monologue is that Harry isn't focusing on any one aspect of the car, he's looking at the big picture - inside and out. This is the way we should architect web application infrastructures for speed: holistically and completely, taking the entire application delivery infrastructure into consideration, because each component in that infrastructure can have an effect - positive or negative - on the performance of web applications. Analyst firm Forrester recently hosted a teleconference (download available soon) on this very subject entitled "Web Performance Architecture Best Practices." In one single slide analysts Mike Gualtieri and James Staten captured the essence of Harry's monologue by promoting a holistic view of web application performance that includes the inside and outside of an application. "Performance depends upon a holistic view of your architecture" SOURCE: "Teleconference: Web Performance Architecture Best Practices", Forrester Research, July 2008. The discussion goes on to describe how to ensure speedy delivery of applications, and includes the conclusion that cutting Web-tier response time by half delivers an overall 40% improvement in the performance of applications. Cutting response time is the primary focus of web application acceleration solutions. 
Combining intelligent caching and compression with technologies that make the browser more efficient improve the overall responsiveness of the web tier of your web applications. And what's best is that you don't have to do anything to the web applications to get that improvement. While improving performance in the application and data tiers of an application architecture can require changes to the application including a lot of coding, the edge and application infrastructure can often provide a significant boost in performance simply by transparently adding the ability to optimize web application protocols as well as their underlying transport protocols (TCP, HTTP). Steve Souders, author of "High Performance Web Sites" (O'Reilly Media, Inc., 2007) further encourages an architecture that includes compressing everything as well as maximizing the use of the browser's cache. But my absolute favorite line from the teleconference? "Modern load balancers do far more than just spread the load." Amen, brothers! Can I get a hallelujah? If you weren't able to attend, I highly recommend downloading the teleconference when it's available and giving it a listen. It includes a great case study, as well, on how to build a high performing, scalable web application that helps wrap some reality around the concepts discussed. Perhaps one day we'll be talking to our applications like Harry Hogge does to the car he's about to build... I'm going to give you code with tightly written loops. An extra-fast infrastructure that'll offload functionality for you. That'll give you more horsepower. I'll give you a network that'll hold an extra megabit of bandwidth. I'll compress and shape your data like a bullet. When I get you optimized, secured and deployed... ...you're going to be ready to go out on the Internet. You're going to be perfect.
#stirling #bigdata #ado #interop “Big data” focuses almost entirely on data at rest. But before it was at rest, it was transmitted over the network – and that ultimately means trouble for application performance. The problem of “big data” is highly dependent upon whom you are speaking to. It could be an issue of security, of scale, of processing, of transferring from one place to another. What’s rarely discussed as a problem is that all that data got where it is the same way: over a network and via an application. What’s also rarely discussed is how it was generated: by users. If the amount of data at rest is mind-boggling, consider the number of transactions and users that must be involved to create that data in the first place – and how that must impact the network. Which in turn, of course, impacts the users and applications creating it. It’s a vicious cycle, when you stop and think about it – and one with no end in sight. The amount of data being transferred over networks, according to Cisco, is only going to grow at a staggering rate, right along with the number of users and variety of devices generating that data. The impact on the network will be increasing amounts of congestion and latency, leading to poorer application performance and greater user frustration.

MITIGATING the RISKS of BIG DATA SIDE EFFECTS

Addressing that frustration and improving performance is critical to maintaining a vibrant and increasingly fickle user community. A Yotta blog detailing the business impact of site performance (compiled from a variety of sources) indicates a serious risk to the business. According to its compilation, a delay of 1 second in page load time results in:

7% Loss in Conversions
11% Fewer Pages Viewed
16% Decrease in Customer Satisfaction

This delay is particularly noticeable on mobile networks, where latency is high and bandwidth is low – a deadly combination for those trying to maintain service level agreements with respect to application performance.
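The figures above invite a quick back-of-the-envelope calculation. Using the cited 7%-per-second conversion loss with entirely hypothetical traffic numbers:

```python
def lost_conversions(visitors, conv_rate, delay_s, loss_per_s=0.07):
    """Rough cost of page-load delay in conversions.

    Uses the 7%-per-second conversion-loss figure cited above; the
    visitor count and baseline conversion rate are made up for the
    example, not measured data.
    """
    baseline = visitors * conv_rate          # conversions with no delay
    return baseline * loss_per_s * delay_s   # conversions lost to delay

# 100,000 visitors converting at 2%, slowed by 1 second:
lost = lost_conversions(100_000, 0.02, 1)
```

Even a single second of added latency translates into a measurable, reportable business number, which is what makes the performance argument land with those who own the budget.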
But users accessing sites over the LAN or Internet are hardly immune from the impact; the increasing pressure on networks inside and outside the data center inevitably results in failures to perform – and frustrated users who are as likely as mobile users to abandon a site and never return. Thus, the importance of optimizing the delivery of applications amidst potentially difficult network conditions is rapidly growing. The definition of “available” is broadening and now includes performance as a key component. A user considers a site or application “available” if it responds within a specific time interval – and that time interval is steadily decreasing. Optimizing the delivery of applications while taking into consideration the network type and conditions is no easy task, and requires a level of intelligence (to apply the right optimization at the right time) that can only be achieved by a solution positioned in a strategic point of control – at the application delivery tier.

Application Delivery Optimization (ADO)

Application delivery optimization (ADO) is a comprehensive, strategic approach to addressing performance issues, period. It is not a focus on mobile, or on cloud, or on wireless networks. It is a strategy that employs visibility and intelligence at a strategic point of control in the data path, enabling solutions to apply the right type of optimization at the right time to ensure individual users are assured the best performance possible given their unique set of circumstances. The underpinnings of ADO are both technological and topological, leveraging location along with technologies like load balancing, caching, and protocol optimization to improve performance on a per-session basis.
The difficulty in executing an overarching, comprehensive ADO strategy is addressing the variables of myriad environments, networks, devices, and applications with the fewest number of components possible, so as not to compound the problem by introducing more latency through additional processing and network traversal. A unified platform approach to ADO is necessary to ensure the solution itself has minimal impact on the results. ADO must therefore support topology and technology in such a way as to allow the flexible application of any combination that may be required to mitigate performance problems on demand.

Topologies
Symmetric Acceleration
Front-End Optimization (Asymmetric Acceleration)

Lengthy debate has surrounded the advantages and disadvantages of symmetric and asymmetric optimization techniques. The reality is that both are beneficial to optimization efforts. Each approach has varying benefits in specific scenarios, as each focuses on specific problem areas within the application delivery chain. Neither is necessarily appropriate for every situation, nor will either one necessarily resolve performance issues whose root cause lies outside the approach’s intended domain. A successful application delivery optimization strategy leverages both techniques when appropriate.

Technologies
Protocol Optimization
Load Balancing
Offload
Location

Whether the technology is new – SPDY – or old – the hundreds of RFC standards improving on TCP – it is undeniable that technology implementation plays a significant role in improving application performance across a broad spectrum of networks, clients, and applications.
From improving the way existing protocols behave to implementing emerging ones, from offloading computationally expensive processing to choosing the best location from which to serve a user, the technologies of ADO achieve the best results when applied intelligently and dynamically, taking into consideration real-time conditions across the user-network-server spectrum. ADO cannot effectively scale as a solution if it focuses on only one or two component techniques. It must necessarily address what is a polyvariable problem with a polyvariable solution: one that can apply the right set of technological and topological solutions to the problem at hand. That requires a level of collaboration across ADO solutions that is almost impossible to achieve unless the solutions are tightly integrated. A holistic approach to ADO is the most operationally efficient and effective means of realizing performance gains in the face of increasingly hostile network conditions.

Mobile versus Mobile: 867-5309
Identity Gone Wild! Cloud Edition
Network versus Application Layer Prioritization
Performance in the Cloud: Business Jitter is Bad
The Three Axioms of Application Delivery
Fire and Ice, Silk and Chrome, SPDY and HTTP
The HTTP 2.0 War has Just Begun
Stripping EXIF From Images as a Security Measure
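Choosing the best location from which to serve a user can be illustrated with a toy GSLB-style decision. Real solutions weigh health, capacity, and persistence alongside proximity; this sketch considers only measured round-trip time, and the site names and RTT values are made up:

```python
def pick_site(client_rtt_ms):
    """Pick the serving site with the lowest measured RTT to the client.

    A deliberately minimal stand-in for location-aware global traffic
    management; inputs are hypothetical measurements in milliseconds.
    """
    return min(client_rtt_ms, key=client_rtt_ms.get)

# For this client, the west-coast site wins on proximity alone:
site = pick_site({"DEN": 48, "SFO": 12, "EWR": 80})
```

Swapping the single metric for a weighted score over several real-time conditions is exactly the kind of polyvariable decision the paragraph above describes.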