caching
True or False: Application acceleration solutions teach developers to write inefficient code
It has been suggested that the use of application acceleration solutions as a means to improve application performance would result in programmers writing less efficient code. In a comment on “The House that Load Balancing Built” a reader replies: Not only will it cause the application to grow in cost and complexity, it's teaching new and old programmers to not write efficient code and rely on other products and services on [sic] thier behalf. I.E. Why write security into the app, when the ADC can do that for me. Why write code that executes faster, the ADC will do that for me, etc., etc.

While no one can control whether a programmer writes “fast” code, the truth is that application acceleration solutions do not affect the execution of code in any way. A poorly constructed loop will run just as slowly with or without an application acceleration solution in place. Complex mathematical calculations will execute with the same speed regardless of the external systems that may be in place to assist in improving application performance. The answer, unequivocally, is that the presence or absence of an application acceleration solution should have no impact on the application developer, because it does nothing to affect the internal execution of written code. If you answered false, you got the answer right.

The question has to be, then, just what does an application acceleration solution do that improves performance? If it isn’t making the application logic execute faster, what’s the point? It’s a good question, and one that deserves an answer. Application acceleration is part of a solution we call “application delivery”. Application delivery focuses on improving application performance through optimization of the use and behavior of transport (TCP) and application transport (HTTP/S) protocols, offloading certain functions from the application that are more efficiently handled by an external, often hardware-based, system, and accelerating the delivery of the application data.

OPTIMIZATION
Application acceleration improves performance by understanding how these protocols (TCP, HTTP/S) interact across a WAN or LAN and acting on that understanding to improve overall performance. There are a large number of performance-enhancing RFCs (standards) around TCP that are usually implemented by application acceleration solutions:
Delayed and Selective Acknowledgments (RFC 2018)
Explicit Congestion Notification (RFC 3168)
Limited and Fast Re-Transmits (RFC 3042 and RFC 2582)
Adaptive Initial Congestion Windows (RFC 3390)
Slow Start with Congestion Avoidance (RFC 2581)
TimeStamps and Window Scaling (RFC 1323)
All of these RFCs deal with TCP and therefore have very little to do with the code developers create. Most developers code within a framework that hides the details of TCP and HTTP connection management from them. It is the rare programmer today who writes code to directly interact with HTTP connections, and rarer still to find one coding directly at the TCP socket layer. The execution of code written by the developer takes just as long regardless of the implementation or lack of implementation of these RFCs. The application acceleration solution improves the performance of the delivery of the application data over TCP and HTTP, which increases the performance of the application as seen from the user’s point of view.
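Most developers will never touch this layer directly, which is exactly the point: the RFC-level behaviors listed above live in the operating system's TCP stack, not in application code. For the rare case of coding at the socket layer, a minimal Python sketch (the host name and options are illustrative, and assume a platform that exposes them) looks like this:

    import socket

    # A minimal sketch of coding "directly at the TCP socket layer".
    # The RFC-level behaviors above (SACK, window scaling, congestion control)
    # are implemented by the OS TCP stack; application code can only nudge a
    # few per-socket knobs such as these.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # disable Nagle's algorithm
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)   # keep idle connections alive
    sock.connect(("www.example.com", 80))                        # hypothetical host
    sock.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
    response = sock.recv(65535)
    sock.close()
    print(response.split(b"\r\n", 1)[0])                         # status line only

Nothing in that snippet changes how quickly the application logic itself executes, which is the whole argument.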
OFFLOAD
Offloading compute-intensive processing from application and web servers improves performance by reducing the consumption of CPU and memory required to perform those tasks. SSL and other encryption/decryption functions (cookie security, for example) are computationally expensive and require additional CPU and memory on the server. The reason offloading these functions to an application delivery controller or stand-alone application acceleration solution improves application performance is that it frees the CPU and memory available on the server and allows them to be dedicated to the application. If the application or web server does not need to perform these tasks, it saves CPU cycles that would otherwise be used to perform them. Those cycles can be used by the application instead, which increases the performance of the application.

Also beneficial is the way in which application delivery controllers manage TCP connections made to the web or application server. Opening and closing TCP connections takes time, and the time required is not something a developer – coding within a framework – can affect. Application acceleration solutions proxy connections for the client and subsequently reduce the number of TCP connections required on the web or application server, as well as the frequency with which those connections need to be opened and closed. By reducing the number and frequency of connections, application performance is increased because the server is not spending time opening and closing TCP connections, which are necessarily part of the performance equation but not directly affected by anything the developer does in his or her code. The commenter believes that an application delivery controller implementation should be an afterthought. However, the ability of modern application delivery controllers to offload certain application logic functions such as cookie security and HTTP header manipulation in a centralized, optimized manner through network-side scripting can be a performance benefit as well as a way to address browser-specific quirks, and therefore should be seriously considered during the development process.

ACCELERATION
Finally, application acceleration solutions improve performance through the use of caching and compression technologies. Caching includes not just server-side caching, but the intelligent use of the client (usually the browser) cache to reduce the number of requests that must be handled by the server. By reducing the number of requests the server is responding to, the web or application server is less burdened in terms of managing TCP and HTTP sessions and state, and has more CPU cycles and memory that can be dedicated to executing the application. Compression, whether using traditional industry-standard web-based compression (GZip) or WAN-focused data de-duplication techniques, decreases the amount of data that must be transferred from the server to the client. Decreasing traffic (bandwidth) results in fewer packets traversing the network, which results in quicker delivery to the user. This makes it appear that the application is performing faster than it is, simply because it arrived sooner.
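The compression trade-off is easy to see with a small sketch using standard gzip on synthetic payloads (sizes and content are illustrative, and the repetitive test data flatters the ratio): even when the ratio looks good, the absolute bytes saved on a roughly 1 KB response are trivial next to the processing time spent, while a large text response saves hundreds of kilobytes.

    import gzip
    import time

    def measure(payload):
        start = time.perf_counter()
        compressed = gzip.compress(payload)
        elapsed_ms = (time.perf_counter() - start) * 1000
        return len(payload), len(compressed), elapsed_ms

    # Synthetic payloads: a ~1 KB JSON-ish snippet and a ~200 KB HTML-ish page.
    small = (b'{"id": 1, "status": "ok", "items": [1, 2, 3]} ' * 25)[:1024]
    large = b"<div class='row'>lorem ipsum dolor sit amet</div>\n" * 4000

    for name, payload in (("small", small), ("large", large)):
        original, squeezed, ms = measure(payload)
        saved = original - squeezed
        print(f"{name}: {original} -> {squeezed} bytes, saved {saved} bytes in {ms:.3f} ms")

This is why acceleration solutions apply compression selectively – saving a few hundred bytes on a LAN costs more time than it buys, while squeezing a large response on a constrained link is a clear win.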
Of all these techniques, the only one that could possibly contribute to the delinquency of developers is caching. This is because application acceleration caching features act on HTTP caching headers that can be set by the developer, but rarely are. These headers can also be configured by the web or application server administrator, but rarely are in a way that makes sense, because most content today is generated dynamically and is rarely static, even though individual components inside the dynamically generated page may in fact be very static (CSS, JavaScript, images, headers, footers, etc.). However, the methods through which caching (pragma) headers are set are fairly standard, and the actual code is usually handled by the framework in which the application is developed, meaning the developer ultimately cannot affect the efficiency of this method because it was developed by someone else.

The point of the comment was likely broader, however. I am fairly certain that the commenter meant to imply that if developers know the performance of the application they are developing will be accelerated by an external solution, they will not be as concerned about writing efficient code. That’s a layer 8 (people) problem that isn’t peculiar to application delivery solutions at all. If a developer is going to write inefficient code, there’s a problem – but that problem isn’t with the solutions implemented to improve the end-user experience or scalability, it’s a problem with the developer. No technology can fix that.

Programmable Cache-Control: One Size Does Not Fit All
#webperf For addressing challenges related to performance of #mobile devices and networks, caching is making a comeback.

It's interesting - and almost amusing - to watch the circle of technology run around best practices with respect to performance over time. Back in the day caching was the ultimate means by which web application performance was improved, and there was no lack of solutions and techniques that manipulated caching capabilities to achieve optimal performance. Then it was suddenly in vogue to address the performance issues associated with Javascript on the client. As Web 2.0 ascended and AJAX-based architectures ruled the day, Javascript was Enemy #1 of performance (and security, for that matter). Solutions and best practices began to arise to address when Javascript loaded, from where, and whether or not it was even active. And now, once again, we're back at the beginning with caching. In the interim years, it turns out, developers have not become better about how they mark content for caching, and with the proliferation of access from mobile devices over sometimes constrained networks it has once again come to the attention of operations (who, for some reason, are ultimately responsible for the performance of web applications) that caching can dramatically improve the performance of web applications. [ Excuse me while I take a breather - that was one long thought to type. ]

Steve Souders, web performance engineer extraordinaire, gave a great presentation at HTML5DevCon that was picked up by High Scalability: Cache is King!. The aforementioned article notes: Use HTTP cache control mechanisms: max-age, etag, last-modified, if-modified-since, if-none-match, no-cache, must-revalidate, no-store. Want to prevent HTTP sending conditional GET requests, especially over high latency mobile networks. Use a long max-age and change resource names any time the content changes so that it won't be cached improperly. -- Better Browser Caching Is More Important Than No Javascript Or Fast Networks For HTTP Performance

The problem is, of course, that developers aren't putting all these nifty-neato-keen tags and meta-data in their content, and the cost to modify existing applications to do so may result in a prioritization somewhere right below having an optional, unnecessary root canal. In other cases, the way in which web applications are built today - we're still using AJAX-based, real-time updates of chunks of content rather than whole pages - means simply adding tags and meta-data to the HTML isn't necessarily going to help, because it refers to the page and not the data/content being retrieved and updated for that "I'm a live, real-time application" feel that everyone has to have today. Too, caching tags and meta-data in HTML don't address every type of data. JSON, for example, commonly returned as the response to an API call (API calls are used as the building blocks for web applications more and more frequently these days), isn't going to be impacted by the HTML caching directives. That has to be addressed in a different way, either on the server side (think Apache mod_expires) or on the client (HTML5 contains new capabilities specifically for this purpose, and there are usually cache directives hidden in AJAX frameworks like jQuery).
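What setting those mechanisms looks like on the server side can be sketched with nothing but the Python standard library (the payload, port, and max-age value are illustrative): a long max-age plus an ETag, with a 304 Not Modified returned when the client presents a matching If-None-Match.

    import hashlib
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CONTENT = b'{"message": "hello"}'                       # illustrative JSON payload
    ETAG = '"' + hashlib.sha1(CONTENT).hexdigest() + '"'

    class CachingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Conditional GET: if the client's cached copy is still valid,
            # answer 304 and send no body at all.
            if self.headers.get("If-None-Match") == ETAG:
                self.send_response(304)
                self.send_header("ETag", ETAG)
                self.end_headers()
                return
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Cache-Control", "public, max-age=86400")  # long max-age
            self.send_header("ETag", ETAG)
            self.send_header("Content-Length", str(len(CONTENT)))
            self.end_headers()
            self.wfile.write(CONTENT)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), CachingHandler).serve_forever()

In practice the framework usually emits (or omits) these headers on the developer's behalf, which is precisely why they so often go unset.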
The Programmable Network to the Rescue
What you need is the ability to insert the appropriate tags, on the appropriate content, in such a way as to make sure whatever you're about to do (a) doesn't break the application and (b) is actually going to improve the performance of the end-user experience for that specific request. Note that (b) is pretty important, actually, because there are things you do to content being delivered to end users on mobile devices over mobile networks that might make things worse if you do them to the same content being delivered to the same end user on the same device over the wireless LAN. Network capabilities matter, so it's important to remember that. To avoid rewriting applications (and perhaps changing the entire server-side architecture by adding on modules) you could just take advantage of programmability in the network. When enabled as part of a full-proxy network intermediary, the ability to programmatically modify content in-flight becomes invaluable as a mechanism for improving performance, particularly with respect to adding (or modifying) cache headers, tags, and meta-data. By allowing the intermediary to cache the cacheable content while simultaneously inserting the appropriate cache control headers to manage the client-side cache, performance is improved. By leveraging programmability, you can start to apply device or network or application (or any combination thereof) logic to manipulate the cache as necessary while also using additional performance-enhancing techniques like compression (when appropriate) or image optimization (for mobile devices).

The thing is that a generic "all on" or "all off" for caching isn't always going to result in the best performance. There's logic to it that says you need the capability to say "if X and Y then ON else if Z then OFF". That's the power of a programmable network, of the ability to write the kind of logic that takes into consideration the context of a request and takes the appropriate actions in real-time. Because one size (setting) simply does not fit all.
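On BIG-IP this kind of logic would be expressed through network-side scripting (iRules); as a rough stand-in to show the shape of the decision – if X and Y then ON, else if Z then OFF – here is a hypothetical WSGI middleware sketch that rewrites Cache-Control based on the requesting device. The user-agent hints and max-age values are assumptions for illustration only.

    # A hypothetical WSGI middleware sketch, standing in for network-side logic:
    # pick a caching policy per request instead of one global setting.
    MOBILE_HINTS = ("iphone", "android", "ipad", "mobile")   # illustrative only

    class ContextualCacheControl:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "").lower()
            is_mobile = any(hint in ua for hint in MOBILE_HINTS)

            def patched_start_response(status, headers, exc_info=None):
                headers = [(k, v) for k, v in headers if k.lower() != "cache-control"]
                if is_mobile:
                    # High-latency mobile network: lean hard on the client cache.
                    headers.append(("Cache-Control", "public, max-age=604800"))
                else:
                    # Fixed client: shorter lifetime, revalidate more often.
                    headers.append(("Cache-Control", "public, max-age=300, must-revalidate"))
                return start_response(status, headers, exc_info)

            return self.app(environ, patched_start_response)

The point is not this particular middleware; it is that the decision lives in a programmable layer between client and server, where request context is visible.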
The Need for (HTML5) Speed
#mobile #HTML5 #webperf #fasterapp #ado The importance of understanding acceleration techniques in the face of increasing mobile and HTML5 adoption

An old English proverb observes that "Even a broken clock is right twice a day.” A more modern idiom involves a blind squirrel and an acorn, and I’m certain there are many other culturally specific nuggets of wisdom that succinctly describe what is essentially blind luck. The proverb and modern idioms fit well the case of modern acceleration techniques as applied to content delivered to mobile devices. A given configuration of options and solutions may inadvertently be “right” twice a day purely by happenstance, but the rest of the time they may not be doing all that much good. With HTML5 adoption increasing rapidly across the globe, the poor performance of parsing on mobile devices will require more targeted and intense use of acceleration and optimization solutions.

THE MOBILE LAST MILES
One of the reasons content delivery to mobile devices is so challenging is the number of networks and systems through which the content must flow. Unlike WiFi-connected devices, which traverse controllable networks as well as the Internet, content delivered to mobile devices connected via carrier networks must also traverse the mobile (carrier) network. Add to that challenge the constrained processing power of mobile devices imposed by carriers and manufacturers alike, and delivering content to these devices in an acceptable timeframe becomes quite challenging. Organizations must contend not only with network conditions across three different networks but also with the capabilities and innate limitations of the devices themselves. Such limitations include processing capabilities, connection models, and differences in web application support. Persistence and in-memory caching are far more limited on mobile devices, making reliance on traditional caching strategies as a key component of acceleration techniques less than optimal. Compression and de-duplication of data, even over controlled WAN links when mobile devices are in WiFi mode, may not be as helpful as they are for desktop and laptop counterparts, given mobile hardware limitations. Differences in connection models – on mobile devices connections are sporadic, shorter-lived, and ad-hoc – render traditional TCP-related enhancements ineffective. TCP slow-start mechanisms, for example, are particularly frustrating under the hood for mobile device connections, because connections are constantly being dropped and restarted, forcing TCP to begin again very slowly. TCP, in a nutshell, was designed for fixed networks, not mobile networks.

A good read on this topic is Ben Strong’s “Google and Microsoft Cheat on Slow-Start. Should You?” His testing (in 2010) showed both organizations push the limits for the IW (initial window) higher than the RFC allows, with Microsoft nearly ignoring the limitations altogether. Proposals to increase the IW in the RFC to 10 have been submitted, but thus far there does not appear to be consensus on whether or not to allow this change. Also not discussed is the impact of changing the IW on fixed (desktop, laptop, LAN) connected devices, the assumption being that the IW is specified as it is because it was optimal for fixed end-points, and changing it would be detrimental to performance for those devices. The impact of TCP on mobile performance (and vice-versa) should not be underestimated.
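A back-of-the-envelope sketch shows why the initial window matters so much on high-latency, short-lived mobile connections. The MSS, RTT, and loss-free doubling below are simplifying assumptions, and connection handshakes are ignored:

    import math

    def round_trips(object_bytes, initial_window, mss=1460):
        """Rough RTT count to deliver an object under loss-free slow start."""
        segments = math.ceil(object_bytes / mss)
        window, sent, rtts = initial_window, 0, 0
        while sent < segments:
            sent += window          # segments delivered this round trip
            window *= 2             # congestion window doubles each round trip
            rtts += 1
        return rtts

    for size_kb in (10, 50, 320):
        for iw in (3, 10):
            rtts = round_trips(size_kb * 1024, iw)
            # On a 300 ms mobile round trip, each extra RTT is felt directly.
            print(f"{size_kb:>3} KB object, IW={iw:>2}: {rtts} round trips "
                  f"(~{rtts * 0.3:.1f}s at 300 ms RTT)")

Small objects – exactly the kind of chunked, AJAX-style responses modern sites send constantly – feel the difference most, because each short-lived connection never gets past its first few windows.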
CloudFlare has a great blog post on the impact of mobility on TCP-related performance, concluding that: TCP would actually work just fine on a phone except for one small detail: phones don't stay in one location. Because they move around (while using the Internet) the parameters of the network (such as the latency) between the phone and the web server are changing and TCP wasn't designed to detect the sort of change that's happening. -- CloudFlare blog: Why mobile performance is difficult

One answer is more intelligent intermediate acceleration components, capable not only of detecting the type of end-point initiating the connection (mobile or fixed) but of actually doing something about it, i.e. manipulating the IW and other TCP-related parameters on the fly. Dynamically and intelligently. Of course innate parsing and execution performance on mobile devices contributes significantly to the perception of performance on the part of the end-user. While HTML5 may be heralded as a solution to cross-platform, cross-environment compatibility issues, it brings to the table performance challenges that will need to be overcome. http://thenextweb.com/dd/2012/05/22/html5-runs-up-to-thousands-of-times-slower-on-mobile-devices-report/

In the latest research by Spaceport.io on the performance of HTML5 on desktop vs smartphones, it appears that there are performance issues for apps, and in particular games, on mobile devices. Spaceport.io used its own Perfmarks II benchmarking suite to test HTML rendering techniques across desktop and mobile browsers. Its latest report says: We found that when comparing top of the line, modern smartphones with modern laptop computers, mobile browsers were, on average, 889 times slower across the various rendering techniques tested. At best the iOS phone was roughly 6 times slower, and the best Android phone 10 times slower. At worst, these devices were thousands of times slower.

Combining the performance impact of parsing HTML5 on mobile devices with mobility-related TCP impacts paints a dim view of performance for mobile clients in the future, especially as improving the parsing speed of HTML5 is (mostly) out of the hands of operators and developers alike. Very little can be done to impact the parsing speed aside from transformative acceleration techniques, many of which are often not used for fixed client end-points today. Which puts the onus back on operators to use the tools at their disposal (acceleration and optimization) to improve delivery as a means to offset and hopefully improve the overall performance of HTML5-based applications to mobile (and fixed) end-points.

DON’T RELY on BLIND LUCK
Organizations seeking to optimize delivery to mobile and traditional end-points need more dynamic and agile infrastructure solutions capable of recognizing the context in which requests are made and adjusting delivery policies – from TCP to optimization and acceleration – on demand, as necessary to ensure the best delivery performance possible. Such infrastructure must be able to discern whether the improvements from minification and image optimization will be offset by TCP optimizations designed for fixed end-points interacting with mobile end-points – and do something about it. It’s not enough to configure a delivery chain comprised of acceleration and optimization designed for delivery of content to traditional end-points, because the very same services that enhance performance for fixed end-points may be degrading performance for mobile end-points.
It may be that twice a day, like a broken clock, the network and end-point parameters align in such a way that the same services enhance performance for both fixed and mobile end-points. But relying on such a convergence of conditions as a performance management strategy is akin to relying on blind luck. Addressing mobile performance requires a more thorough understanding of acceleration techniques – particularly from the perspective of what constraints they best address and under what conditions. Trying to leverage the browser cache, for example, is a great way to improve fixed end-point performance, but may backfire on mobile devices because of limited capabilities for caching. On the other hand, HTML5 introduces client-side cache APIs that may be useful but are very different from previous HTML caching directives, so supporting both will require planning and a flexible infrastructure for execution. In many ways this API will provide opportunities to better leverage client-side caching capabilities, but will require infrastructure support to ensure targeted caching policies can be implemented. As HTML5 continues to become more widely deployed, it’s important to understand the various acceleration and optimization techniques, what each is designed to overcome, and what networks and platforms they are best suited to serve, in order to overcome inherent limitations of HTML5 and the challenge of mobile delivery.

Google and Microsoft Cheat on Slow-Start. Should You?
HTML5 runs up to thousands of times slower on mobile devices: Report
Application Security is a Stack
Y U No Support SPDY Yet?
The “All of the Above” Approach to Improving Application Performance
What Does Mobile Mean, Anyway?
The HTTP 2.0 War has Just Begun

F5 Friday: In the NOC at Interop
#interop #fasterapp #adcfw #ipv6 Behind the scenes in the Interop network

Interop Las Vegas expects somewhere in the realm of 10,000+ attendees this year. Most of them will no doubt be carrying smart phones, many tablets, and of course the old standby, the laptop. Nearly every one will want access to some service – inside or out. The Interop network provides that access – and more. F5 solutions will provide IT services, including IPv4–IPv6 translation, firewall, SSL VPN, and web optimization technologies, for the Network Operations Center (NOC) at Interop. The Interop 2012 network comprises the show floor Network Operations Center (NOC) and three co-location sites: Colorado (DEN), California (SFO), and New Jersey (EWR). The NOC moves with the show to its 4 venues: Las Vegas, Tokyo, Mumbai, and New York.

F5 has taken a hybrid application delivery network architectural approach – leveraging both physical devices (in the NOC) and virtual equivalents (in the Denver DC). Both physical and virtual instances of F5 solutions are managed via a BIG-IP Enterprise Manager 4000, providing operational consistency across the various application delivery services provided: DNS, SMTP, NTP, global traffic management (GSLB), remote access via SSL VPNs, local caching of conference materials, and data center firewall services in the NOC DMZ. Because the Interop network is supporting both IPv6 and IPv4, F5 is also providing NAT64 and DNS64 services.
NAT64: Network address translation is performed between IPv6 and IPv4 on the Interop network, to allow IPv6-only clients and servers to communicate with hosts on IPv4-only networks.
DNS64: IPv6-to-IPv4 DNS translations are also performed by these BIG-IPs, allowing A records originating from IPv4-only DNS servers to be converted into AAAA records for IPv6 clients (a short sketch of this mapping appears below).

F5 is also providing SNMP, SYSLOG, and NETFLOW services to vendors at the show for live demonstrations. This is accomplished by cloning the incoming traffic and replicating it out through the network. At the network layer, such functionality is often implemented by simply mirroring ports. While this is sometimes necessary, it does not necessarily provide the level of granularity (and thus control) required. Mirrored traffic does not distinguish between SNMP and SMTP, for example, unless specifically configured to do so. While cloning via an F5 solution can be configured to act in a manner consistent with port mirroring, cloning via F5 also allows intermediary devices to intelligently replicate traffic based on information gleaned from deep content inspection (DCI). For example, traffic can be cloned to a specific pool of devices based on the URI, client IP address, client device type, or destination IP. Virtually any contextual data can be used to determine whether or not to clone traffic.

You can poke around with more detail, photos, and network diagrams at F5’s microsite supporting its Interop network services. Dashboards are available, along with documentation, pictures, and more information in general on the network and the F5 services supporting the show.
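The DNS64 mapping itself is compact enough to sketch: the IPv4 address from an A record is embedded in the low 32 bits of a NAT64 prefix to synthesize an AAAA record for IPv6-only clients. The well-known 64:ff9b::/96 prefix is used here purely for illustration; the prefix actually configured on the Interop network may differ.

    import ipaddress

    def synthesize_aaaa(ipv4_literal, nat64_prefix="64:ff9b::"):
        """Embed an IPv4 address in the low 32 bits of a NAT64 prefix (sketch)."""
        prefix = int(ipaddress.IPv6Address(nat64_prefix))
        v4 = int(ipaddress.IPv4Address(ipv4_literal))
        return ipaddress.IPv6Address(prefix | v4)

    # An IPv4-only host's A record becomes a synthesized AAAA for IPv6-only
    # clients; NAT64 reverses the mapping when the traffic arrives.
    print(synthesize_aaaa("192.0.2.10"))   # 64:ff9b::c000:20a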
And of course if you’re going to be at Interop, stop by the booth and say “hi”! I’ll keep the light on for ya…

F5 Interopportunities at Interop 2012
F5 Secures and Optimizes Application and Network Services for the Interop 2012 Las Vegas Network Operations Center
When Big Data Meets Cloud Meets Infrastructure
Mobile versus Mobile: 867-5309
Why Layer 7 Load Balancing Doesn’t Suck
BYOD–The Hottest Trend or Just the Hottest Term
What Does Mobile Mean, Anyway?
Mobile versus Mobile: An Identity Crisis
The Three Axioms of Application Delivery
Don’t Let Automation Water Down Your Data Center
The Four V’s of Big Data

The “All of the Above” Approach to Improving Application Performance
#ado #fasterapp #stirling Carnegie Mellon testing of ADO solutions answers an age-old question: less filling or tastes great?

You probably recall years ago the old “Tastes Great vs Less Filling” advertisements. The ones that always concluded in the end that the beer in question was not one or the other, but both. Whenever there are two ostensibly competing technologies attempting to solve the same problem, we run into the same old style argument. This time, in the SPDY versus Web Acceleration debate, we’re inevitably going to arrive at the conclusion it’s both less filling and tastes great.

SPDY versus Web Acceleration
In general, what may appear on the surface to be competing technologies are actually complementary. Testing by Carnegie Mellon supports this conclusion, showing marked improvements in web application performance when both SPDY and web acceleration techniques are used together. That’s primarily because web application traffic shows a similar pattern across modern, interactive Web 2.0 sites: big, fat initial pages with a subsequent steady stream of small requests and a variety of response sizes, typically small to medium in content length. We know from experience and testing that web acceleration techniques like compression provide the greatest improvements in performance when acting upon medium-to-large sized responses, though actual improvement rates depend highly on the network over which data is being exchanged. We know that compression can actually be detrimental to performance when responses are small (in the 1K range) and being transferred over a LAN. That’s because the processing time incurred to compress that data is greater than the time to traverse the network. But when used to compress larger responses traversing congested or bandwidth-constrained connections, compression is a boon to performance. It’s less filling. SPDY, though relatively new on the scene, is the rising star of web acceleration. Its primary purpose is to optimize the application layer exchanges that typically occur via HTTP (requests and responses) by streamlining connection management (SPDY only uses one connection per client-host), dramatically reducing header sizes, and introducing asynchronicity along with prioritization. It tastes great. What Carnegie Mellon testing shows is that when you combine the two, you get the best results, because each improves performance of specific data exchanges that occur over the life of a user interaction.

HERE COMES the DATA
The testing was specifically designed to measure the impact of each of the technologies separately and then together. For the web acceleration functionality they chose to employ BoostEdge (a software ADC), though one can reasonably expect similar results from other ADCs provided they offer the same web acceleration and optimization capabilities, which is generally a good bet in today’s market. The testing specifically looked at two approaches: Two of the most promising software approaches are (a) content optimization and compression, and (b) optimizing network protocols. Since network protocol optimization and data optimization operate at different levels, there is an opportunity for improvement beyond what can be achieved by either of the approaches individually. In this paper, we report on the performance benefits observed by following a unified approach, using both network protocol and data optimization techniques, and the inherent benefits in network performance by combining these approaches in to a single solution.
-- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012

The results should not be surprising – when SPDY is combined with ADC optimization technologies, the result is both less filling and it tastes great. When tested in various combinations, we find that the effects are more or less additive and that the maximum improvement is gained by using BoostEdge and SPDY together. Interestingly, the two approaches are also complimentary; i.e., in situations where data predominates (i.e. “heavy” data, and fewer network requests), BoostEdge provides a larger boost via its data optimization capabilities and in cases where the data is relatively small, or “light”, but there are many network transactions required, SPDY provides an increased proportion of the overall boost. The general effect is that relative level of improvement remains consistent over various types of websites. -- Data and Network Optimization Effect on Web Performance, Carnegie Mellon, Feb 2012

NOT THE WHOLE STORY
Interestingly, the testing results show there is room for even greater improvement. The paper notes that comparisons include “an 11% cost of running SSL” (p 13). This is largely due to the use of a software ADC solution, which employs no specialized hardware to address the latency incurred by compute-intense processing such as compression and cryptography. Leveraging a hardware ADC with cryptographic hardware and compression acceleration capabilities should further improve results by reducing the latency incurred by both processes. Testers further did not test under load (which would have a significant negative impact on the results) or leverage other proven application delivery optimization (ADO) techniques such as TCP multiplexing. While the authors mention the use of front-end optimization (FEO) techniques such as image optimization and client-side cache optimization, it does not appear that these were enabled during testing. Other techniques such as EXIF stripping, CSS caching, domain sharding, and core TCP optimizations are likely to provide even greater benefits to web application performance when used in conjunction with SPDY.

CHOOSE “ALL of the ABOVE”
What the testing concluded was that an “all of the above” approach would appear to net the biggest benefits in terms of application performance. Using SPDY along with complementary ADO technologies provides the best mitigation of latency-inducing issues that ultimately degrade the end-user experience. Ultimately, SPDY is one of a plethora of ADO technologies designed to streamline and improve web application performance. Like most ADO technologies, it is not universally beneficial to every exchange. That’s why a comprehensive, inclusive ADO strategy is necessary. Only an approach that leverages “all of the above” at the right time and on the right data will net optimal performance across the board.

The HTTP 2.0 War has Just Begun
Stripping EXIF From Images as a Security Measure
F5 Friday: Domain Sharding On-Demand
WILS: WPO versus FEO
WILS: The Many Faces of TCP
HTML5 Web Sockets Changes the Scalability Game
Network versus Application Layer Prioritization
The Three Axioms of Application Delivery

The Four V’s of Big Data
#stirling #bigdata #ado #interop “Big data” focuses almost entirely on data at rest. But before it was at rest – it was transmitted over the network. That ultimately means trouble for application performance.

The problem of “big data” is highly dependent upon to whom you are speaking. It could be an issue of security, of scale, of processing, of transferring from one place to another. What’s rarely discussed as a problem is that all that data got where it is in the same way: over a network and via an application. What’s also rarely discussed is how it was generated: by users. If the amount of data at rest is mind-boggling, consider the number of transactions and users that must be involved to create that data in the first place – and how that must impact the network. Which in turn, of course, impacts the users and applications creating it. It’s a vicious cycle, when you stop and think about it. This cycle shows no end in sight. The amount of data being transferred over networks, according to Cisco, is only going to grow at a staggering rate – right along with the number of users and variety of devices generating that data. The impact on the network will be increasing amounts of congestion and latency, leading to poorer application performance and greater user frustration.

MITIGATING the RISKS of BIG DATA SIDE EFFECTS
Addressing that frustration and improving performance is critical to maintaining a vibrant and increasingly fickle user community. A Yotta blog detailing the business impact of site performance (compiled from a variety of sources) indicates a serious risk to the business. According to its compilation, a delay of 1 second in page load time results in:
7% Loss in Conversions
11% Fewer Pages Viewed
16% Decrease in Customer Satisfaction
This delay is particularly noticeable on mobile networks, where latency is high and bandwidth is low – a deadly combination for those trying to maintain service level agreements with respect to application performance. But users accessing sites over the LAN or Internet are hardly immune from the impact; the increasing pressure on networks inside and outside the data center inevitably results in failures to perform – and frustrated users who are as likely to abandon and never return as are mobile users. Thus, the importance of optimizing the delivery of applications amidst potentially difficult network conditions is rapidly growing. The definition of “available” is broadening and now includes performance as a key component. A user considers a site or application “available” if it responds within a specific time interval – and that time interval is steadily decreasing. Optimizing the delivery of applications while taking into consideration the network type and conditions is no easy task, and requires a level of intelligence (to apply the right optimization at the right time) that can only be achieved by a solution positioned in a strategic point of control – at the application delivery tier.

Application Delivery Optimization (ADO)
Application delivery optimization (ADO) is a comprehensive, strategic approach to addressing performance issues, period. It is not a focus on mobile, or on cloud, or on wireless networks. It is a strategy that employs visibility and intelligence at a strategic point of control in the data path, enabling solutions to apply the right type of optimization at the right time to ensure individual users are assured the best performance possible given their unique set of circumstances.
The underpinnings of ADO are both technological and topological, leveraging location along with technologies like load balancing, caching, and protocols to improve performance on a per-session basis. The difficulty in executing an overarching, comprehensive ADO strategy is addressing the variables of myriad environments, networks, devices, and applications with the fewest number of components possible, so as not to compound the problems by introducing more latency due to additional processing and network traversal. A unified platform approach to ADO is necessary to ensure minimal impact from the solution on the results. ADO must therefore support topology and technology in such a way as to ensure the flexible application of any combination as may be required to mitigate performance problems on demand.

Topologies
Symmetric Acceleration
Front-End Optimization (Asymmetric Acceleration)
Lengthy debate has surrounded the advantages and disadvantages of symmetric and asymmetric optimization techniques. The reality is that both are beneficial to optimization efforts. Each approach has varying benefits in specific scenarios, as each approach focuses on specific problem areas within the application delivery chain. Neither is necessarily appropriate for every situation, nor will either one necessarily resolve performance issues in which the root cause lies outside the approach's intended domain expertise. A successful application delivery optimization strategy is to leverage both techniques when appropriate.

Technologies
Protocol Optimization
Load Balancing
Offload
Location
Whether the technology is new – SPDY – or old – hundreds of RFC standards improving on TCP – it is undeniable that technology implementation plays a significant role in improving application performance across a broad spectrum of networks, clients, and applications. From improving upon the way in which existing protocols behave to implementing emerging protocols, from offloading computationally expensive processing to choosing the best location from which to serve a user, the technologies of ADO achieve the best results when applied intelligently and dynamically, taking into consideration real-time conditions across the user-network-server spectrum.

ADO cannot effectively scale as a solution if it focuses on one or two comprising solutions. It must necessarily address what is a polyvariable problem with a polyvariable solution: one that can apply the right set of technological and topological solutions to the problem at hand. That requires a level of collaboration across ADO solutions that is almost impossible to achieve unless the solutions are tightly integrated. A holistic approach to ADO is the most operationally efficient and effective means of realizing performance gains in the face of increasingly hostile network conditions.

Mobile versus Mobile: 867-5309
Identity Gone Wild! Cloud Edition
Network versus Application Layer Prioritization
Performance in the Cloud: Business Jitter is Bad
The Three Axioms of Application Delivery
Fire and Ice, Silk and Chrome, SPDY and HTTP
The HTTP 2.0 War has Just Begun

Stripping EXIF From Images as a Security Measure
#fasterapp #infosec And you thought FourSquare was a security risk…

Mobile phones with great cameras are an awesome tool. Many of these photos end up on Facebook, visible to friends, family and, well, friends of friends and maybe even the public. They get shared around so much, you can’t really be sure where they might eventually wind up. According to Justin Mitchell, an engineer for Facebook Photos, answering a Quora question on the subject last year, Facebook has “over 200 million photos uploaded per day, or around 6 billion per month. There are currently almost 90 billion photos total on Facebook. This means we are, by far, the largest photos site on the Internet.” As most of these are uploaded via modern cameras – whether on mobile phones or digital cameras – which are almost universally enabled with GPS technology, they almost certainly include some data that you might not want others to find: the exact location the picture was taken.

Pshaw! Many may think. After all, “checking in” via FourSquare and adding location to Facebook and Twitter posts is something many do regularly. But this data can be very dangerous, and not just for soldiers who have been warned against geotagging photos uploaded to Facebook, as cited by a recent Gizmodo article, “US Soldiers Are Giving Away Their Positions with Geotagged Photos”: The Army has issued a warning to its soldiers to stop geotagging their photos on Facebook and other social media outlets. Because it's putting soldiers in danger, and has been for years.

Now you might not be worried about giving away the location of helicopters inside a compound, which allowed the enemy to “conduct a mortar attack, destroying four of the AH-64 Apaches” there, but the risks to everyone exist. Those who share photos of their home or things in their home (can’t resist showing off your latest collectible addition to friends, can you?) are opening themselves up to theft, especially if they also like to broadcast their latest travel schedules via a host of other socially connected tools. Even if you aren’t actively sharing your address, all a potential thief needs to do is grab a photo of your fat loot and extract the GPS coordinates hidden in the EXIF data to find his target and then move in, right after you made sure everyone knew you were out of town by broadcasting your latest flight information (ATL –> ORD –> SEA).

“But I’ve locked down my photos using Facebook’s privacy features!” you say. You might have done so, but do you really know everyone on your list of “friends”? Are they really who they claim to be? And did any of them share your photo with their friends, and their friends? Facebook privacy doesn’t prevent the old standby of “save as” and “upload”, and a quick tag of your name and a Twitter search and bam! You’ve shared data with people you wouldn’t have, if only you had known. While it perhaps requires a bit more paranoia than the average user has (and an inherent distrust in humanity), there are very real security implications to embedding geotags in photos via EXIF for a wide variety of folks, though perhaps more for those in service to their country than others.

There is a simple and more automated mitigation for this risk. In addition to turning off geolocation tags on your camera or phone or manually eradicating the EXIF info from photos, a mediating application delivery service can, on-demand, strip this data from images.
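Stripping the metadata yourself is straightforward. A minimal sketch using the Pillow imaging library (a third-party package assumed here; file names are illustrative) copies only the pixel data into a fresh image, leaving the EXIF block – GPS tags included – behind:

    from PIL import Image   # Pillow, assumed installed: pip install Pillow

    def strip_exif(source_path, dest_path):
        """Re-save only the pixel data, dropping EXIF (and its GPS tags) entirely."""
        with Image.open(source_path) as original:
            clean = Image.new(original.mode, original.size)
            clean.putdata(list(original.getdata()))
            clean.save(dest_path)

    strip_exif("vacation_photo.jpg", "vacation_photo_clean.jpg")  # illustrative paths

A mediating service does conceptually the same thing, just in the response path and without touching the original file.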
MEDIATED EXIF STRIPPING
With the right application delivery tier implementation, mediated EXIF stripping is as simple as other content scrubbing exercises. Requests are received as normal for an image object. When the image is retrieved from the origin server, a service in the application delivery tier is invoked that strips EXIF data from the image before it is returned to the end-user or deposited in a caching solution. Subsequent requests for that same image, then, though served out of cache, are also clear of potentially dangerous GPS information – without modifying the original.* That’s important, as for some folks having that information available to them may be necessary or desirable, but serving it up to the public may simply incur too much risk. Given the velocity with which we click and share photos today, we may be underestimating the associated risk.

Others may think that’s just far too paranoid and desire to keep EXIF data in their images. This is another opportunity to monetize a service for providers. The right application delivery tier, capable of interpreting context as well as being instructed by external infrastructure (including applications), could be configured such that only image-containing responses with specific HTTP headers are subject to EXIF stripping. The more security-minded users may desire such a service – and be willing to pay for it – while others could simply continue on as they were, EXIF and all. And even if you aren’t concerned with potential security risks associated with EXIF, you might want to consider that stripping that extraneous data from images like thumbnails and product shots can reduce the overall size of the image, which is a boon if you’re trying to improve overall performance – particularly on network- and resource-constrained devices like mobile phones.

* Image optimization techniques are always best-effort and sometimes cannot be applied to an image given other factors. Also, if a positive caching model is used, the original image is served the first time it is requested, but not cached.

Network versus Application Layer Prioritization
Web App Performance: Think 1990s.
Mobile versus Mobile: 867-5309
Watch out for cloud congestion
What Does Mobile Mean, Anyway?
More Users, More Access, More Clients, Less Control
The Context-Aware Cloud
WILS: WPO versus FEO

Enterprise Apps are Not Written for Speed
#fasterapp #ccevent They’re written for readability, for integration, for business function, and for long-term maintenance…

When I was first entering IT I had the good (or bad, depending on how you look at it) fortune to be involved in some of the first Internet-facing projects at a global transportation organization. We made mistakes and learned lessons and eventually got down to the business of architecting a framework that would span the entire IT portfolio. One of the lessons I learned early on was that maintainability always won over performance, especially at the code level. Oh, some basic tenets of optimization in the code could be followed – choosing between while, for, and do..until constructs based on performance-related concerns – but for the most part, many of the tricks used to improve performance were verboten, some based solely on factors like readability. The introduction of local scope for an if…then…else statement, for example, was required for readability, even though in terms of performance this introduces many unnecessary clock ticks that under load can have a negative impact on overall capacity and response time. Microseconds of delay add up to seconds of delay, after all. But coding standards in the enterprise lean heavily toward the reality that (1) code lives for a long time and (2) someone other than the original developer will likely be maintaining it. This means readability is paramount to ensuring the long-term success of any development project. Thus, performance suffers and “rewriting the application” is not an option. It’s costly, and the changes necessary would likely conflict with the overriding need to ensure long-term maintainability. Even modern web-focused organizations like Twitter and Facebook have run into performance issues based on architectural decisions made early in the lifecycle. Many no doubt recall the often very technical discussions regarding Twitter’s design and interaction with its database as a source of performance woes, with hundreds of experts offering advice and criticism.

Applications are not often designed with performance in mind. They are architected and designed to perform specific functions and tasks, usually business-related, and they are developed with long-term maintenance in mind. This leads to the problem of performance, which can rarely be addressed by the developers due to the constraints placed upon them, not least of which may be an active and very vocal user base.
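A trivial sketch of that trade-off (illustrative only, not code from the project described above): the readable version pays a small per-call price for clarity, and under enough load those microseconds are exactly the kind of gap that ends up being someone else's problem to absorb.

    import timeit

    orders = [(5, 19.99), (2, 3.50), (7, 42.00)] * 100

    def line_total(quantity, unit_price):
        """Readable: intent is obvious, and the helper is reusable elsewhere."""
        return quantity * unit_price

    def order_total_readable(items):
        return sum(line_total(q, p) for q, p in items)

    def order_total_inlined(items):
        # Micro-optimized: skips the helper call, shaving a little off each iteration.
        total = 0.0
        for q, p in items:
            total += q * p
        return total

    for fn in (order_total_readable, order_total_inlined):
        seconds = timeit.timeit(lambda: fn(orders), number=10_000)
        print(f"{fn.__name__}: {seconds:.3f}s for 10,000 calls")

Coding standards rightly pick the first version; the performance gap has to be made up somewhere else.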
APPLICATION DELIVERY PUTS the FAST back in APPLICATIONS
This is a core reason the realm of application delivery exists: to compensate for issues within the application that cannot – for whatever reason – be addressed through modification of the application itself. Application acceleration, WAN optimization, and load balancing services combine to form a powerful tier of application delivery services within the data center through which performance-related issues can be addressed. This tier allows load balancing services, for example, to be leveraged as a means to scale out an application, which effectively results in performance gains similar to (and often greater than) those of simply scaling up to redress inherent performance constraints within the application. Application acceleration techniques improve the delivery of application-related content and objects through caching, compression, transformation, and concatenation. And WAN optimization services address bandwidth constraints that may inhibit delivery of the application, especially for applications heavy on the data and content side.

While developers could certainly modify applications to rearrange content or reduce the size of data being delivered, it is rarely practical or cost-effective to do so. Similarly, it is not cost-effective or practical to ask developers to modify applications to remove processing bottlenecks when doing so may result in unreadable code. Enterprise applications are not written for speed, but that is exactly what is demanded of them by their users. Both needs must be met, and the introduction of an application delivery tier into the architecture can serve to provide the balance between performance and maintenance by applying acceleration services dynamically. In this way applications need not be modified, but performance and scale are greatly improved.

I’ll be at CloudConnect 2012 and we’ll discuss the subject of cloud and performance a whole lot more at the show!

Sessions
From Point A to Point B.
The Three Axioms of Application Delivery
WILS: WPO versus FEO
The Full-Proxy Data Center Architecture
Even the best written code has a weakness
At the Intersection of Cloud and Control…
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility
What CIOs Can Learn from the Spartans

F5 Friday: When the Solution to a Vulnerability is Vulnerable You Need a New Solution
#v11 Say hello to DNS Express

You may recall we recently expounded upon the need for the next generation of infrastructure to provide more protection of critical DNS services. This is particularly important given recent research on behalf of Verisign that found “60% of respondents rely on their websites for at least 25% of their annual revenue.” Combined with findings that DDoS attacks, DNS failures and attackers comprised 65% of unplanned downtime in the past year, the financial impact on organizations is staggering. We also described the most popular solution today, DNS caching, and mentioned that it turns out this solution is itself vulnerable to attack.

DNS caching can be defeated by simply requesting non-existent resources. This is not peculiar to DNS, by the way, but rather to caching and the way it works. Caching is designed as a proxy for content; content that is always obtained from the originating server. Thus if you request a resource that does not exist in the cache, it must in turn query the originating server to retrieve it. If you start randomly creating host names you know don’t exist to look up, you can quickly overwhelm the originating server (and potentially the cache) and voila! Successful DDoS. Like an increasing number of modern attacks, this vulnerability is no one’s fault per se; it’s an exploitation of the protocol’s assumptions and designed behavior. But as has been noted before, expected behavior is not necessarily acceptable behavior. For IT, that only matters forasmuch as it aids in finding a more secure, i.e. non-vulnerable, solution.

INTRODUCING DNS Express
BIG-IP v11 introduced DNS Express, comprising several new capabilities that provide comprehensive DNS protection and address just this vulnerability as part of its overall features designed to maintain availability for critical DNS services. DNS Express is a new DNS service available in BIG-IP v11 that implements an authoritative in-memory DNS service capable of storing tens of millions of records. This caching-style solution is enhanced by the CMP (Clustered Multi-Processing) enabled TMOS platform, which allows BIG-IP Global Traffic Manager (GTM) to respond to hundreds of thousands of queries per second (millions per second on the VIPRION hardware platforms). Rounding out this strategic trifecta of DNS goodness is IP Anycast integration, which has the result of obfuscating the number and topological attributes of DNS servers while simultaneously distributing load. This is an important facet, as attackers often target DNS servers one by one, and without the ability to determine how many servers may be present, attackers must choose whether to forge ahead – possibly wasting their valuable time – or concede defeat. A DNS infrastructure based on DNS Express allows customers to leverage the ability of BIG-IP to withstand even the most persistent DDoS load by enacting a zone transfer from a DNS pool to BIG-IP GTM, which subsequently acts as a high-speed authoritative slave DNS service. It is a fairly non-disruptive addition to existing architecture, and by leveraging core TMOS features such as iRules it adds control and flexibility in designing solutions specifically for a data center’s unique needs and business requirements. This solution realizes the benefits of a DNS caching solution while mitigating the risk that an attacker will exploit the behavior of caching solutions with a barrage of randomly generated host name requests.
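The mechanics of the cache-busting attack, and why holding the entire zone in memory defeats it, can be sketched abstractly – no real DNS traffic, just a toy model with hypothetical names: a resolver-style cache forwards every miss to the origin, while an authoritative copy of the zone can answer “no such name” entirely on its own.

    import random
    import string

    ZONE = {"www.example.com": "192.0.2.1", "mail.example.com": "192.0.2.2"}  # toy zone

    class CachingResolver:
        """Forwards every cache miss to the origin server."""
        def __init__(self):
            self.cache, self.origin_queries = {}, 0
        def resolve(self, name):
            if name not in self.cache:
                self.origin_queries += 1              # the origin takes the hit
                self.cache[name] = ZONE.get(name)     # NXDOMAIN cached as None
            return self.cache[name]

    class AuthoritativeCopy:
        """Holds the entire zone in memory; never needs to ask anyone."""
        def resolve(self, name):
            return ZONE.get(name)                     # answers NXDOMAIN locally

    resolver = CachingResolver()
    for _ in range(10_000):
        junk = "".join(random.choices(string.ascii_lowercase, k=12)) + ".example.com"
        resolver.resolve(junk)                        # random names never hit the cache
    print("origin queries forced through the cache:", resolver.origin_queries)  # ~10,000

Happy Safe Resolving!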
DNS is Like Your Mom
All F5 Friday Posts on DevCentral
It’s DNSSEC Not DNSSUX
Introducing v11: The Next Generation of Infrastructure
BIG-IP v11 Information Page
The End of DNS As We Know It
Taking Down Twitter as easy as D.N.S.
Cloud Balancing, Cloud Bursting, and Intercloud
Achieving Enterprise Agility in the Cloud (Cloudbursting with VMware, BlueLock, and F5)
DNSSEC: The Antidote to DNS Cache Poisoning and Other DNS Attacks

F5 Friday: The Mobile Road is Uphill. Both Ways.
Mobile users feel the need …. the need for spe- please wait. Loading…

We spent the week, like many other folks, at O’Reilly’s Velocity Conference 2011 – a conference dedicated to speed, of web sites, that is. This year the conference organizers added a new track called Mobile Performance. With the consumerization of IT ongoing and the explosion of managed and unmanaged devices allowing ever-increasing amounts of time “connected” to enterprise applications and services, mobile performance – if it isn’t already – will surely become an issue in the next few years. The adoption of HTML5 as a standard platform across mobile and traditional devices is a boon – optimizing the performance of HTML-based applications is something F5 knows a thing or two about. After all, there are more than 50 ways to use your BIG-IP system, and many of them are ways to improve performance – often in ways you may not have before considered.

NARROWBAND is the NEW NORMAL
The number of people who are “always on” today is astounding, and most of them are always on thanks to rapid technological improvements in mobile devices. Phones and tablets are now commonplace just about anywhere you look, and “that guy” is ready to whip out his device and verify (or debunk) whatever debate may be ongoing in the vicinity. Unfortunately the increase in use has coincided with an increase in the amount of data being transferred without a similar increase in the available bandwidth in which to do it. The attention on video these past few years – which is increasing, certainly, in both size and length – has overshadowed similar astounding bloat in the size and complexity of web page composition. “A Google engineer used the Google bot to crawl and analyze the Web, and found that the average web page is 320K with 43.9 resources per page (Ramachandran 2010). The average web page used 7.01 hosts per page, and 6.26 resources per host.” (Average Web Page Size Septuples Since 2003)

Certainly the increase in broadband usage – which has “more than kept pace with the increase in the size and complexity of the average web page” (Average Web Page Size Septuples Since 2003) – has mitigated most of the performance issues that might have arisen had we remained stuck in the modem age. But the fact is that mobile users are not so fortunate, and it is their last mile that we must now focus on, lest we lose their attention due to slow, unresponsive sites and applications. The consumerization of IT, too, means that enterprise applications are more and more being accessed via mobile devices – tablets, phones, etc. The result is the possibility not just of losing attention and a potential customer, but of losing productivity, a much more easily defined value that can be used to impart the potential severity of performance issues to those ultimately responsible for it.
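Plugging the page-composition numbers above into a crude model (assumed link speeds and round-trip times; real connections are messier, with parallelism, slow start, and variable signal) shows why the same page feels so different on a narrowband mobile link.

    PAGE_BYTES = 320 * 1024      # average page size cited above
    REQUESTS = 44                # ~43.9 resources per page, rounded
    HOSTS = 7                    # ~7 hosts per page

    # (name, bandwidth in bits/sec, round-trip time in seconds) -- assumed values
    links = [
        ("broadband", 10_000_000, 0.030),
        ("3G mobile",  1_000_000, 0.250),
    ]

    for name, bps, rtt in links:
        transfer = PAGE_BYTES * 8 / bps
        # Crude request overhead: one connection setup per host plus one RTT per
        # request, ignoring parallel connections, pipelining, and TCP slow start.
        overhead = HOSTS * rtt + REQUESTS * rtt
        print(f"{name}: ~{transfer + overhead:.1f}s "
              f"(transfer {transfer:.1f}s + request overhead {overhead:.1f}s)")

Even a model this rough makes the point: on the mobile last mile, both the bytes and the round trips hurt, and they hurt a lot more.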
ADDRESSING MOBILE PERFORMANCE
If you thought the need for application and network acceleration solutions was long over thanks to the rise of broadband, you thought too quickly. Narrowband, i.e. mobile connectivity, is still in the early stages of growth and as such still exhibits the same restricted bandwidth characteristics as pre-broadband solutions such as ISDN and A/DSL. The users, however, are far beyond broadband and expect instantaneous responses regardless of access medium. Thus there is a need to return to (if you ever left it) the use of web application acceleration techniques to redress performance issues as soon as possible. Caching and compression are but two of the most common acceleration techniques available, and F5 is no stranger to such solutions. BIG-IP WebAccelerator implements both, along with other performance-enhancing features such as Intelligent Browser Referencing (IBR) and OneConnect, and can dramatically improve performance of web applications by leveraging the browser to load those 6.26 resources per host more quickly while simultaneously eliminating most if not all of the overhead associated with TCP session management on the servers (TCP multiplexing). WebAccelerator – combined with some of the innate network protocol optimizations available in all F5 BIG-IP solutions thanks to their shared internal platform, TMOS – can do a lot to mitigate performance issues associated with narrowband mobile connections. The mobile performance problem isn’t new, after all, and these proven solutions should provide relief to end-users in both the customer and employee communities who are weary of waiting for the web.

HTML5 – the darling of the mobile world – will also have an impact on the usage patterns of web applications regardless of client device and network type. HTML5 inherently results in more requests and objects, and the adoption rate among developers is fairly significant. A recent Evans Data survey indicates increasing adoption rates; in 2010, 28% of developers were using HTML5 markup, with 48.9% planning on using it in the future. More traffic. More users. More devices. More networks. More data. More connections. It’s time to start considering how to address mobile performance before it becomes an even steeper hill to climb.

The third greatest (useful) hack in the history of the Web
Achieving Scalability Through Fewer Resources
Long Live(d) AJAX
The Impact of AJAX on the Network
The AJAX Application Delivery Challenge
What is server offload and why do I need it?
3 Really good reasons you should use TCP multiplexing