context-aware
WARNING: Security Device Enclosed
If you aren’t using all the security tools at your disposal, you’re doing it wrong. How many times have you seen an employee wave a customer on by when the “security device enclosed” in some item – be it DVD, CD, or clothing – sets off the alarm at the doors? Just a few weeks ago I heard one young lady explain the alarm away with “it must have been the CD I bought at the last place I was at…” This apparently satisfied the young man at the doors, who nodded and turned back to whatever he’d been doing. All the data the security guy needed to make a determination was there; he had all the context necessary in which to analyze the situation and make a determination based upon that information. But he ignored it all. He failed to leverage all the tools at his disposal and potentially allowed dollars to walk out the door. In doing so he also set a precedent and unintentionally sent a message to anyone who really wanted to commit a theft: I ignore warning signs, go ahead.

I am wondering why not all websites enabling this great feature GZIP?
Understanding the impact of compression on server resources and application performance

While doing some research on a related topic, I ran across this question and thought “that deserves an answer” because it certainly seems like a no-brainer. If you want to decrease bandwidth – which subsequently decreases response time and improves application performance – turn on compression. After all, a large portion of web site traffic is text-based: CSS, JavaScript, HTML, RSS feeds, which means it will greatly benefit from compression. Typical GZIP compression affords at least a 3:1 reduction in size, with hardware-assisted compression yielding an average of 4:1 compression ratios. That can dramatically affect the response time of applications. As I said, seems like a no-brainer.

Here’s the rub: turning on compression often has a negative impact on capacity because it is CPU-bound, and under certain conditions it can actually cause a degradation in performance due to the latency inherent in compressing data compared to the speed of the network over which the data will be delivered. Here comes the science.

IMPACT ON CPU UTILIZATION

Compression via GZIP is CPU bound. It requires a lot more CPU than you might think. The larger the file being compressed, the more CPU resources are required. Consider for a moment what compression is really doing: it’s finding all similar patterns and replacing them with representations (symbols, indexes into a table, etc…) of a single instance of the text instead. So it makes sense that the larger a file is, the more resources – RAM and CPU – are required to execute such a process. Of course, the larger the file is, the more benefit you see from compression in terms of bandwidth and improvement in response time. It’s kind of a Catch-22: you want the benefits but you end up paying in terms of capacity. If CPU and RAM are being chewed up by the compression process, then the server can handle fewer requests and fewer concurrent users.

You don’t have to take my word for it – there are quite a few examples of testing done on web servers and compression that illustrate the impact on CPU utilization.

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications

They all essentially say the same thing: if you’re serving dynamic content (or static content and don’t have local caching on the web server enabled), there is a significant negative impact on CPU utilization when enabling GZIP/compression for web applications. Given the exceedingly dynamic nature of Web 2.0 applications, the use of AJAX and similar technologies, and the data-driven world in which we live today, that means there are very few types of applications running on web servers for which compression will not negatively impact the capacity of the web server.

In case you don’t (want || have time) to slog through the above articles, here’s a quick recap:

Server       File Size   Bandwidth decrease   CPU utilization increase
IIS 7.0      10KB        55%                  4x
             50KB        67%                  20x
             100KB       64%                  30x
Apache 2.2   10KB        55%                  4x
             50KB        65%                  10x
             100KB       63%                  30x

It’s interesting to note that IIS 7.0 and Apache 2.2 mod_deflate have essentially the same performance characteristics. This data falls in line with the aforementioned Intel report on HTTP compression, which noted that CPU utilization increased 25-35% when compression was enabled.
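It’s easy to verify both effects yourself. The following is a minimal sketch using Python’s standard zlib module (which implements the DEFLATE algorithm underlying GZIP); the sample payload and compression levels are arbitrary choices for illustration, not taken from the tests above:

```python
import time
import zlib

def compression_stats(payload: bytes, level: int = 6) -> dict:
    """Compress a payload and report the size reduction and CPU time spent."""
    start = time.process_time()              # CPU time, not wall-clock time
    compressed = zlib.compress(payload, level)
    cpu_seconds = time.process_time() - start
    return {
        "original_bytes": len(payload),
        "compressed_bytes": len(compressed),
        "ratio": round(len(payload) / len(compressed), 1),
        "cpu_seconds": cpu_seconds,
    }

if __name__ == "__main__":
    # Text-heavy content (HTML/CSS/JS) is highly repetitive, so it compresses
    # well; this payload is roughly the 100KB size class from the table above.
    sample = b"<div class='row'>lorem ipsum dolor sit amet</div>\n" * 2048
    for level in (1, 6, 9):
        print(level, compression_stats(sample, level))
```

Running it at increasing compression levels shows the trade-off in miniature: the ratio creeps up while the CPU time climbs much faster – the same shape as the IIS and Apache numbers in the recap.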
So essentially, when you enable compression you are trading its benefits – bandwidth reduction, response time improvement – for a reduction in capacity. You’re robbing Peter to pay Paul, because instead of paying for bandwidth you’re paying for more servers to handle the same load.

THE MYTH OF IMPROVED RESPONSE TIME

One of the reasons you’d want to compress content is to improve response time by decreasing the total number of packets that have to traverse a wire. This is a necessity when transferring content via a WAN, but it can actually cause a decrease in performance for application delivery over the LAN. This is because the time it takes to compress the content and then deliver it is actually greater than the time to just transfer the original file via the LAN. The speed of the network over which the content is being delivered is highly relevant to whether compression yields benefits for response time. The increasing consumption of CPU resources as volume increases also degrades the ability of the server to process and respond to requests, which means an increase in application response time – not the desired result.

Maybe you’re thinking “I’ll just get more CPU then. After all, there’s, like, billion-core servers out there; that ought to solve the problem!” Compression algorithms, like FTP, are greedy. FTP will, if allowed, consume as much bandwidth as possible in an effort to transfer data as quickly as possible. Compression will do the same thing to CPU resources: consume as much as it can to perform its task as quickly as possible. Eventually, yes, you’ll find a machine with enough cores to support both compression and capacity needs, but at what cost? It may well be more financially efficient to invest in a better solution (one that also brings additional benefits to the table) than to just increase the size of the server. But hey, it’s your data; you need to do what you need to do.

The size of the content, too, has an impact on whether compression will benefit application performance. Consider that the goal of compression is to decrease the number of packets being transferred to the client. Generally speaking, the standard MTU for most networks is 1500 bytes, because that’s what works best with Ethernet and IP. That means you can assume around 1400 bytes per packet available to transfer data. So if content is 1400 bytes or less, you get absolutely no benefit out of compression because it’s already going to take only one packet to transfer; you can’t really send half-packets, after all. And in some networks, packets that are too small can actually freak out network devices because they’re optimized to handle the large content being served today – which means many full packets.

TO COMPRESS OR NOT COMPRESS

There is real benefit to compression; it’s part of the core set of techniques used by both application acceleration and WAN application delivery services to improve performance and reduce costs. It can drastically reduce the size of data, and especially when you might be paying by the MB or GB transferred (such as applications deployed in cloud environments), this is a very important feature to consider. But if you end up paying for additional servers (or instances in a cloud) to make up for the capacity lost to higher CPU utilization because of that compression, you’ve pretty much ended up right where you started: no financial benefit at all. The question is not if you should compress content; it’s when and where and what you should compress.
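Before answering, a quick back-of-the-envelope model shows why the network link decides the outcome. The numbers below (a 3:1 ratio, roughly 50 MB/s of GZIP throughput) are assumptions for illustration, not measurements:

```python
def transfer_ms(size_bytes: int, link_mbps: float) -> float:
    """Naive serialization delay for a payload on a link, ignoring RTT and loss."""
    return size_bytes * 8 / (link_mbps * 1_000_000) * 1000

def worth_compressing(size_bytes: int, link_mbps: float,
                      ratio: float = 3.0, gzip_mb_per_sec: float = 50.0) -> bool:
    """Compression pays off only if compress-then-send beats sending as-is."""
    compress_ms = size_bytes / (gzip_mb_per_sec * 1_000_000) * 1000
    plain_ms = transfer_ms(size_bytes, link_mbps)
    gzipped_ms = compress_ms + transfer_ms(size_bytes / ratio, link_mbps)
    return gzipped_ms < plain_ms

# The same 100KB file: a clear win on a 1.5 Mbps WAN link...
print(worth_compressing(100_000, 1.5))      # True  (~533ms down to ~180ms)
# ...and a net loss on a gigabit LAN, where the wire is faster than the CPU.
print(worth_compressing(100_000, 1000.0))   # False (~0.8ms up to ~2.3ms)
```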
The answer to “should I compress this content” almost always needs to be based on a set of criteria that require context-awareness – the ability to factor the content, the network, the application, and the user into the decision-making process. If the user is on a mobile device and the size of the content is greater than 2000 bytes and the type of content is text-based and … (a sketch of such a policy follows at the end of this post). It is this type of intelligence that is required to effectively apply compression such that the greatest benefits – reduction in costs, application performance, and maximization of server resources – are achieved. Any implementation that can’t factor all these variables into the decision to compress or not is not an optimal solution, as it’s just guessing or blindly applying the same policy to all kinds of content. Such implementations effectively defeat the purpose of employing compression in the first place.

That’s why the answer to where is almost always “on the load balancer or application delivery controller”. Not only are such devices capable of factoring in all the necessary variables, but they also generally employ specialized hardware designed to speed up the compression process. By offloading compression to an application delivery device, you can reap the benefits without sacrificing performance or CPU resources.

Measuring the Performance Effects of Dynamic Compression in IIS 7.0
Measuring the Performance Effects of mod_deflate in Apache 2.2
HTTP Compression for Web Applications
The Context-Aware Cloud
The Revolution Continues: Let Them Eat Cloud
Nerd Rage
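Here is the promised sketch of such a context-aware policy. The thresholds and criteria are hypothetical – lifted from the “if the user is on a mobile device…” example above – and not any product’s actual decision logic:

```python
from dataclasses import dataclass

TEXT_TYPES = {"text/html", "text/css", "application/javascript", "application/json"}
MIN_COMPRESSIBLE_BYTES = 1400   # roughly one packet's worth of payload

@dataclass
class RequestContext:
    content_type: str
    content_length: int
    client_is_mobile: bool
    link_mbps: float            # estimated bandwidth to this client

def should_compress(ctx: RequestContext) -> bool:
    """Hypothetical context-aware policy: compress only when it can actually help."""
    if ctx.content_type not in TEXT_TYPES:
        return False            # images and video are already compressed
    if ctx.content_length <= MIN_COMPRESSIBLE_BYTES:
        return False            # fits in a single packet anyway
    if ctx.link_mbps >= 100 and not ctx.client_is_mobile:
        return False            # LAN-class link: sending as-is is faster
    return True

print(should_compress(RequestContext("text/html", 48_000, True, 2.0)))   # True
print(should_compress(RequestContext("image/png", 48_000, True, 2.0)))   # False
```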
WILS: Network Load Balancing versus Application Load Balancing

Are you load balancing servers or applications? Network traffic or application requests? If your strategy for application availability is network-based, you might need a change in direction (up the stack). Can you see the application now?

Network load balancing is the distribution of traffic based on network variables, such as IP address and destination ports. It is layer 4 (TCP) and below and is not designed to take into consideration anything at the application layer, such as content type, cookie data, custom headers, user location, or application behavior. It is context-less, caring only about the network-layer information contained within the packets it is directing this way and that.

Application load balancing is the distribution of requests based on multiple variables, from the network layer to the application layer. It is context-aware and can direct requests based on any single variable as easily as it can a combination of variables. Applications are load balanced based on their peculiar behavior and not solely on server (operating system or virtualization layer) information.

The difference between the two is important because network load balancing cannot assure availability of the application. This is because it bases its decisions solely on network and TCP-layer variables and has no awareness of the application at all. Generally a network load balancer will determine “availability” based on the ability of a server to respond to an ICMP ping, or to correctly complete the three-way TCP handshake. An application load balancer goes much deeper, and is capable of determining availability based on not only a successful HTTP GET of a particular page but also verification that the content is as expected based on the input parameters (see the sketch at the end of this post).

This is also important to note when considering the deployment of multiple applications on the same host sharing IP addresses (virtual hosts in old skool speak). A network load balancer will not differentiate between Application A and Application B when checking availability (indeed it cannot, unless the ports are different), but an application load balancer will differentiate between the two applications by examining the application-layer data available to it. This difference means that a network load balancer may end up sending requests to an application that has crashed or is offline, but an application load balancer will never make that same mistake.

WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly-dallying around.

WILS: InfoSec Needs to Focus on Access not Protection
WILS: Applications Should Be Like Sith Lords
WILS: Cloud Changes How But Not What
WILS: Application Acceleration versus Optimization
WILS: Automation versus Orchestration
Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Business-Layer Load Balancing
Not all application requests are created equal
Cloud Balancing, Cloud Bursting, and Intercloud
The Infrastructure 2.0 Trifecta
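As promised, a sketch contrasting the two availability checks described above, using only the Python standard library; the hostname and expected content string are placeholders:

```python
import socket
import urllib.request

def network_level_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """A network load balancer's view: can the server complete the TCP handshake?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def application_level_check(url: str, expected: bytes, timeout: float = 2.0) -> bool:
    """An application load balancer's view: does the app return the content we expect?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200 and expected in resp.read()
    except OSError:
        return False

# A crashed application behind a still-listening web server passes the first
# check but fails the second - exactly the mistake described above.
print(network_level_check("www.example.com", 80))
print(application_level_check("http://www.example.com/", b"Example Domain"))
```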
What Does Mobile Mean, Anyway?

We tend to assume characteristics upon hearing the term #mobile. We probably shouldn’t…

There are – according to about a bazillion studies – 4 billion mobile devices in use around the globe. It is interesting to note that nearly everyone who cites this statistic and then attempts to break it down into useful data (usually for marketing) almost always does so based on OS or device type – but never, ever, ever based on connectivity. Consider the breakdown offered by W3C for October 2011: device type is the chosen taxonomy, with operating system being the alternative view. Unfortunately, aside from providing useful trending on device type for application developers and organizations, this data does not provide the full range of information necessary to actually make these devices, well, useful.

Consider that my Blackberry can connect to the Internet via either 3G or WiFi. When using WiFi my user experience is infinitely better than via 3G and, if one believes the hype, will be even better once 4G is fully deployed. Also not accounted for is the ability to pair my Blackberry Playbook to my Blackberry phone and connect to the Internet via that (admittedly convoluted) chain of connectivity: Bluetooth to 3G or WiFi (which in my house has an additional chain on the LAN and then back out through a fairly unimpressive so-called broadband connection). But I could also be using the Playbook’s built-in WiFi (after trying both, this is the preferred method, but in a pinch…). You also have to wonder how long it will be before “mobile” is the GPS in your car, integrated with services via Google Maps or Bing to “find nearby” while you’re driving. Or, for some of us an even better option, find the nearest restroom off this highway because the four-year-old has to use it – NOW.

Trying to squash “mobile” into a little box is about as useful as trying to squash “cloud” into a bigger box. It doesn’t work. The variations in actual implementation in communication channels across everything that is “mobile” require different approaches to mitigating operational risk, just as you approach SaaS differently than IaaS differently than PaaS. Defining “mobile” by its device characteristics is only helpful when you’re designing applications or access management policies. In order to address real user-experience issues you have to know more about the type of connection over which the user is connecting – and more.

CONTEXT is the NEW BLACK in MOBILE

This is not to say that device type is not important. It is, and luckily device type (as well as browser and often operating system) is an integral part of the formula we call “context.” Context is the combined set of variables that make it possible to interpret any given connection with respect to its unique client, server, network, and application needs. It’s what allows organizations to localize, to hyperlocalize, and to provide content based on location. It’s what enables the ability to ensure performance whether over 3G, 4G, LAN, or congested WAN connections. It’s the agility to route application requests to the best server-side location based on a combination of client location, connection type, and current capacity across multiple sites – whether cloud, managed hosting, or secondary data centers. Context is the ‘secret sauce’ to successful application delivery. It’s the ingredient that makes it possible to make the right decisions at the right time, based on current conditions, that address operational risk – performance, security, and availability.
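To make that concrete, here is a minimal sketch of what keying delivery policy on connection type as well as device type might look like; the policy values are invented for illustration:

```python
def delivery_policy(device: str, network: str) -> dict:
    """Hypothetical policy selection keyed on connectivity, not just device type."""
    wan_like = network in {"3g", "4g", "congested-wan"}
    return {
        "compress": wan_like,                              # bandwidth is the constraint off-LAN
        "minify_scripts": device in {"phone", "tablet"},   # spare a constrained CPU the parsing
        "image_quality": "low" if network == "3g" else "high",
        "tcp_profile": "high-latency" if wan_like else "lan",
    }

# Same user, same device, two different networks -> two different policies.
print(delivery_policy("tablet", "wifi"))
print(delivery_policy("tablet", "3g"))
```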
Context is what makes the application delivery tier of the modern data center able to adapt dynamically. It’s the shared data that forms the foundation for the collaboration between application delivery network infrastructure and provisioning systems, both local and in the cloud, enabling on-demand scalability and, at some point, instant mobility in an inter-cloud architecture. Context is a key component of an agile data center, because it is only by inspecting all the variables that you can interpret them in a way that leads to optimal decisions with respect to the delivery of an application, which includes choosing the right application instance, whether it’s deployed remotely in a cloud computing environment or locally on an old-fashioned piece of hardware.

Knowing what device a given request is coming from is not enough, especially when the connection type and conditions cannot be assumed. The same user on the same device may connect via two completely different networking methods within the same day – or even the same hour. It is the network connection which becomes a critical decision point around which to apply proper security and performance-related policies, as different networks vary in their conditions.

So while we all like to believe that our love of our chosen mobile platform is vindicated by statistics, we need to dig deeper when we talk about mobile strategies within the walls of IT. The device type is only one small piece of a much larger puzzle called context. “Mobile” is as much about the means of connectivity as it is the actual physical characteristics of a small untethered device. We need to recognize that, and incorporate it into our mobile delivery strategies sooner rather than later.

[Updated: This post was updated 2/17/2012 – the graphic was updated to reflect the proper source of the statistics, w3schools]

Long-distance live migration moves within reach
HTML5 Web Sockets Changes the Scalability Game
At the Intersection of Cloud and Control…
F5 Friday: The Mobile Road is Uphill. Both Ways
More Users, More Access, More Clients, Less Control
Cloud Needs Context-Aware Provisioning
Call Me Crazy but Application-Awareness Should Be About the Application
The IP Address – Identity Disconnect
The Context-Aware Cloud

F5 Friday: Creating a DNS Blackhole. On Purpose
#infosec #DNS #v11 DNS is like your mom, remember? Sometimes she knows better.

Generally speaking, blackhole routing is a problem, not a solution. A route to nowhere is not exactly a good thing, after all. But in some cases it’s an approved and even recommended solution, usually implemented as a means to filter out bad packets at the routing level that might be malformed or are otherwise dangerous to pass around inside the data center. This technique is also used at the DNS layer as a means to avoid responding to queries with known infected or otherwise malicious sites. Generally speaking, DNS does nothing more than act like a phone book: you ask for an address, it gives it to you. That may have been acceptable through the last decade, but it is increasingly undesirable as it often unwittingly serves as part of the distribution network for malware and other malicious intent.

In networking, black holes refer to places in the network where incoming traffic is silently discarded (or "dropped"), without informing the source that the data did not reach its intended recipient. When examining the topology of the network, the black holes themselves are invisible, and can only be detected by monitoring the lost traffic; hence the name. (http://en.wikipedia.org/wiki/Black_hole_(networking))

What we’d like to do is prevent DNS servers from returning addresses for sites which we know – or are at least pretty darn sure – are infected. While we can’t provide such safeguards for everyone (unless you’re the authoritative server for such sites), we can at least better protect the corporate network and users from such sites by ensuring such queries are not answered with the infected addresses. Such a solution requires the implementation of a DNS blackhole – a filtering of queries at the DNS level. This can be done using F5 iRules to inspect queries against a list of known bad sites and return an internal address for those that match (a simplified illustration of the filtering decision follows at the end of this post). What’s cool about using iRules to perform this function is the ability to leverage external lookups to perform the inspection. Sideband connections were introduced in BIG-IP v11, and these connections allow external, i.e. off-device, lookups for solutions like this. Such a solution is similar to the way in which you’d want to look up the IP address and/or domain of the sender during an e-mail exchange, to validate the sender is not on the “bad spammer” lists maintained by a variety of organizations and offered as a service.

Jason Rahm recently detailed this solution, as architected by Hugh O’Donnel, complete with iRules, in a DevCentral Tech Tip. You can find a more comprehensive description of the solution as well as the iRules to implement it in the tech tip.

v11.1: DNS Blackhole with iRules

Happy (DNS) Routing!

F5 Friday: No DNS? No … Anything.
BIG-IP v11 Information
High-Performance DNS Services in BIG-IP Version 11
DNS is Like Your Mom
F5 Friday: Multi-Layer Security for Multi-Layer Attacks
The Many Faces of DDoS: Variations on a Theme or Two
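The actual solution is implemented in iRules (see the tech tip above); as promised, here is a simplified illustration of the filtering decision itself, sketched in Python. The sinkhole address and blocklist entries are made up:

```python
SINKHOLE_A_RECORD = "10.0.0.254"    # internal "walled garden" address (assumption)
BLACKHOLE_ZONES = {"malware.example", "botnet-c2.example"}   # fed by an external list

def answer_for(qname: str) -> str | None:
    """Return a sinkhole address for known-bad names; None means resolve normally."""
    name = qname.rstrip(".").lower()
    labels = name.split(".")
    # Match the listed domain itself and any subdomain of it.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLACKHOLE_ZONES:
            return SINKHOLE_A_RECORD
    return None

print(answer_for("cdn.malware.example."))   # 10.0.0.254 - client never reaches the real host
print(answer_for("www.example.com."))       # None - query passes through untouched
```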
The Consumerization of IT: The OpsStore

Tablets, smart phones and emerging mobile devices with instant access to applications are impacting the way in which IT provides services and developers architect applications.

When pundits talk about the consumerization of IT they’re mostly referring to the ability of IT consumers, i.e. application developers and business stakeholders, to provision and manage, on demand, certain IT resources, most usually applications. There’s no doubt that the task of provisioning the hardware and software resources for an application is not only tedious but time-consuming, and that it can easily – using virtualization and cloud computing technologies – be enabled with a self-service interface. Consumers, however, are demanding even more, and some have begun to speculate on the existence of “app stores” within IT: a catalog of application resources available to consumers through a “so easy my five-year old can do it” interface.

Unfortunately, such systems always seem to lie upon the surface. It’s putting lipstick on a pig: the pig is still there and, like the eight-hundred pound gorilla, demands attention. The infrastructure responsible for delivering and securing the applications so readily available in such “enterprise app stores” is lagging far behind in terms of the ability to also be automatically and easily provisioned, configured and managed. What we need is an Ops Store.

IT as a SERVICE

Cloud computing environments, specifically IaaS, have gone about half-way toward creating the Ops Store necessary to complete the consumerization of IT and enable IT as a Service. Consider the relative ease with which one can provision load balancing services using most cloud computing environments today. Using third-party cloud computing provisioning and management frameworks, such processes are made even simpler, with many affording the point-and-click style of deployment required to be worthy of the monikers “on-demand” and “self-service.” But in the enterprise, such systems still lag behind the application layer.

Devops continues to focus primarily on the automation of configuration; on scripts and recipes that reduce the time to deploy an application and create a repeatable deployment experience that takes out much of the guesswork and checkbox task management previously required to achieve a successful deployment. But in terms of providing an “ops store” – a simple, self-service, point-and-click, “so easy my five-year old can do it” interface to such processes – we are still waiting. These automations are still primarily focused on topology and configuration of the basics, not on the means by which configuration and policies can be easily created, tested and deployed by the people responsible: developers and business stakeholders.

Developers end up duplicating many infrastructure-related services – security, performance, etc… – not because they think they know better (although that is certainly sometimes the case) but because they have no means of integrating existing infrastructure services during the development process. It’s not that they don’t want to; they often aren’t even aware the services exist, and even if they are, they can’t easily integrate them with the application they are developing.
And because ultimately the developer is responsible to the business stakeholder for the application, the developer is not about to abrogate that responsibility in favor of some unknown, untestable infrastructure service that will be “incorporated during deployment.” Anyone who’s sat through a user acceptance meeting for an application knows that the business stakeholders expect the application to work as expected when they test it, not later in production. It’s a Catch-22, actually, as the application can’t move to production from QA until it’s accepted by the business stakeholder, who won’t accept it until it meets all requirements. If one of those requirements is, say, encryption of sensitive data, then it had better be encrypted at the time the stakeholders test the application for acceptance. If it’s not, the application is not ready to move to production.

The developer must provide all functionality and incorporate all services necessary to meet business requirements into the application before it’s accepted. That means operational services provided by the infrastructure must be available to developers at the time the application is being developed, particularly for those services that impact the way in which data might be handled. Identity and access management services, for example, are critical during development to ensure that the application behavior respects and adheres to access policies.

In a DevOps world, the operations team provides infrastructure as a service to product teams, such as the ability to spin up production-like environments on demand for testing and release purposes, and manage them programmatically. [emphasis added] -- Tired of Playing Ping Pong with Dev, QA and Ops? (CIO Update, May 2011)

Developers need a way to manage infrastructure services programmatically; an “Ops Store”, if you will, that enables them to take advantage of infrastructure services.

NEEDED: INFRASTRUCTURE DEVELOPERS

While it “would be nice” if an Ops Store were as simple to navigate and use as existing consumer-oriented application stores, that’s not reasonable. What is reasonable, however, is to expect that a catalog of services is provided such that developers can not only provision such services but subsequently configure and invoke them during development. It seems logical that such services would be provided by means of some sort of operational API, whether SOAP or REST-based. But more important than how is that they are provided; made accessible to the developers who need them to incorporate such services as required into the broadening definition of an “application.”

It is not likely to be operational-minded folks that enable such an interface. Unfortunately, devops today is still more concerned with ops than it is with development, and continues to focus on providing operational automation without much concern for application integration – even though that remains a vital component of enabling IT as a Service and realizing the benefits of a truly dynamic data center. This concern will likely be left to a new role, one that has yet to truly emerge in the enterprise: the infrastructure developer. One that understands how developers interface and integrate with services, in general, and can subsequently provide operational services in a form more usable to developers; closer to an “ops store” than an installation script.
While scripting and pre-execution automated configuration systems are great for deployment, they’re not necessarily well-suited to on-demand modification and application of delivery and access policies. There are situations in which an application is aware that “something” needs to be done but can’t do it because of its topological location. The delivery infrastructure, however, can.

Consider that the dynamic nature of applications is such that it is often the case that only the application, at execution time, knows the content and size of a particular response. Consider, too, that it may also recognize that the user is a “premium” member and therefore is guaranteed “higher performance.” The application developer should be able to put 2 and 2 together and instruct the infrastructure in such a way as to leverage whatever delivery policies might enable the fulfillment of that guarantee. But today there’s a disconnect. The developer, even if aware, can’t necessarily enable that collaboration, because the operational automation today focuses on deployment, not execution. Developers need the means by which they can enable applications to be more contextually aware of their environment and provide actionable data to infrastructure regarding how any given response should be treated (a sketch of such a collaboration follows at the end of this post).

If we’re going to go down the path of consumerization and take advantage of the operational efficiencies afforded by cloud and service-oriented concepts, eventually the existence of Infrastructure 2.0 enabled components has to be recognized and then leveraged in the form of services that can be invoked from within the application. That will take developers, not operations, because of the nature of that integration.

Now Witness the Power of this Fully Operational Feedback Loop
An Aristotlean Approach to Devops and Infrastructure Integration
The Impact of Security on Infrastructure Integration
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
The World Doesn’t Care About APIs
Cloud, Standards, and Pants
Infrastructure 2.0: Squishy Name for a Squishy Concept
Choosing a Load Balancing Algorithm Requires DevOps Fu
How to Build a Silo Faster: Not Enough Ops in your Devops
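As promised, a sketch of what that collaboration might look like. The header names and policy values here are entirely hypothetical – an invented convention for illustration, not an existing API:

```python
def annotate_response(headers: dict, user_tier: str, body_bytes: int) -> dict:
    """Application side: only the app knows, at execution time, who the user
    is and how large the response turned out to be."""
    headers["X-Delivery-Class"] = "premium" if user_tier == "premium" else "standard"
    headers["X-Uncompressed-Length"] = str(body_bytes)
    return headers

def delivery_decision(headers: dict, link_mbps: float) -> dict:
    """Infrastructure side: combine the app's hints with what only the delivery
    tier can see - the condition of the client's network connection."""
    premium = headers.get("X-Delivery-Class") == "premium"
    large = int(headers.get("X-Uncompressed-Length", "0")) > 100_000
    return {
        "queue_priority": 0 if premium else 10,   # premium traffic jumps the queue
        "compress": large and link_mbps < 100,    # worth it only off the LAN
    }

hints = annotate_response({}, user_tier="premium", body_bytes=250_000)
print(delivery_decision(hints, link_mbps=5.0))   # {'queue_priority': 0, 'compress': True}
```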
Location-Aware Load Balancing

No, it’s not global server load balancing or GeoLocation. It’s something more… because knowing location is only half the battle, and the other half requires the ability to make on-demand decisions based on context.

In most cases today, global application delivery bases the decision on which location should service a given client on the location of the user, the availability of the application at each deployment location and, if the user is lucky, some form of performance-related service-level agreement. With the advent of concepts like cloud bursting and migratory applications that can be deployed at any number of locations at any given time based on demand, the ability to accurately determine not just the user’s location but the physical location of the application as well is becoming increasingly important to addressing concerns regarding regulatory compliance. Making the equation more difficult is that these regulations vary from country to country and the focus of each varies greatly. In the European Union the focus is on privacy for the consumer, while in the United States the primary focus is on a combination of application location (export laws) and user location (access restrictions). These issues become problematic not just for application providers who want to tap into the global market, but for organizations whose employee and customer bases span the globe.

Many of the benefits of cloud computing are based on the ability to tap into cloud providers’ inexpensive resources, not just at any time capacity is needed (cloud bursting) but at any time costs can be minimized (cloud balancing). These benefits are appealing, but can quickly run organizations afoul of regulations governing data and application location. In order to maximize benefits, maintain compliance with regulations relating to the physical location of data and applications, and ensure availability and performance levels acceptable to both the organization and the end-user, some level of awareness must be present in the application delivery architecture. Awareness of location provides a flexible application delivery infrastructure with the ability to make on-demand decisions regarding where to route any given application request based on all the variables required; based on the context. Because of the flexible nature of deployment (or at least the presumed flexibility of application deployment), it would be a poor choice to hard-code such decisions so that users in location X are always directed to the application at location Y. Real-time performance and availability data must also be taken into consideration, as well as the capacity of each location.
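A minimal sketch of what such a location-aware decision might look like, with compliance treated as a hard constraint and performance as a preference; the sites, thresholds, and tie-breaking rule are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    country: str
    healthy: bool
    headroom_pct: int    # free capacity remaining
    rtt_ms: float        # measured latency from this user

def pick_site(sites: list[Site], allowed_countries: set[str]) -> Site | None:
    """Compliance first, then availability and capacity, then performance."""
    candidates = [s for s in sites
                  if s.healthy and s.country in allowed_countries and s.headroom_pct > 10]
    if not candidates:
        return None   # fail rather than violate a data-residency rule
    # Prefer low latency; break ties toward the site with more headroom.
    return min(candidates, key=lambda s: (s.rtt_ms, -s.headroom_pct))

sites = [Site("us-east", "US", True, 40, 120.0),
         Site("eu-west", "DE", True, 25, 35.0),
         Site("eu-cloud", "IE", False, 90, 30.0)]
# A user whose data must stay in the EU lands on eu-west: the faster EU site
# is down, and the US site is off-limits regardless of its performance.
print(pick_site(sites, allowed_countries={"DE", "IE"}))
```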
F5 Friday: F5 Application Delivery Optimization (ADO)

#ado #fasterapp #webperf The “all of the above” approach to improving application performance

A few weeks ago (at Interop 2012, to be exact) F5 announced its latest solution designed to improve application performance. One facet of this “all of the above” approach is a SPDY gateway. Because of the nature of SPDY and the need for a gateway-based architectural approach to adoption, this piece of the announcement became a focal point. But lest you think the entire announcement (and F5’s entire strategy) revolves around SPDY, let’s take a moment to consider the overall solution.

F5 ADO is a comprehensive approach to optimizing application delivery, i.e. it makes apps go faster. It accomplishes this seemingly impossible feat by intelligently applying acceleration technologies and policies at a strategic point of control in the network: the application delivery tier. Because of its location in the architecture, a BIG-IP has holistic visibility; it sees and understands factors on the client, in the network, and in the server infrastructure that are detrimental to application performance. By evaluating each request in the context in which it was made, BIG-IP can intelligently apply a wide variety of optimization and acceleration techniques that improve performance. These range from pure client-side (FEO) techniques to more esoteric server-side techniques. Being able to evaluate requests within context means BIG-IP can apply the technology or policy appropriate for that request to address specific pain points or challenges that may impede performance.

Some aspects of ADO may seem irrelevant. After all, decreasing the size of a JavaScript file by a couple of KB isn’t really going to have all that much impact on transfer times. But it does have a significant impact on parsing time on the client, which, whether we like it or not, is one piece of the equation that counts from an end-user perspective, because it directly impacts the time it takes to render a page and be considered “loaded”. So if we can cut that down through minification or front-loading the scripts, we should – especially when we know clients are on a device with constrained CPU cycles, like most mobile platforms.

But it’s important to recognize when applying technologies might do more harm than good. Clients connecting over the LAN or even via WiFi do not have the same characteristics as those connecting over the Internet or via a mobile network. “Optimization” of any kind that takes longer than it would to just transfer the entire message to the end-user is bad; it makes performance worse for clients, which is counter to the intended effect. Context allows BIG-IP to know when to apply certain techniques – and when not to apply them – for optimal performance.

By using an “all of the above” approach to optimizing and accelerating the delivery of applications, F5 ADO can increase the number of milliseconds shaved off the delivery of applications. It makes the app go faster. I could go into details about each and every piece of F5 ADO, but that would take thousands of words. Since a picture is worth a thousand words (sometimes more), I’ll just leave you with a diagram and a list of resources you can use to dig deeper into F5 ADO and its benefits to application performance.

Resources:
The “All of the Above” Approach to Improving Application Performance
Y U No Support SPDY Yet?
Stripping EXIF From Images as a Security Measure
F5’s Application Delivery Optimization – SlideShare Presentation
Application Delivery Optimization – White Paper
Interop 2012 - Application Delivery Optimization with F5's Lori MacVittie – Video
When Big Data Meets Cloud Meets Infrastructure
F5 Friday: Ops First Rule
New Communications = Multiplexification
F5 Friday: Are You Certifiable?
The HTTP 2.0 War has Just Begun
Getting Good Grades on your SSL
WILS: The Many Faces of TCP
WILS: WPO versus FEO
The Three Axioms of Application Delivery

What is Network-based Application Virtualization and Why Do You Need It?
Need it you do, even if know it you do not. But you will…heh. You will.

With all the attention being paid these days to VDI (virtual desktop infrastructure) and application virtualization and server virtualization and virtualization it’s easy to forget about network-based application virtualization. But it’s the one virtualization technique you shouldn’t forget, because it is a foundational technology upon which myriad other solutions will be enabled.

WHAT IS NETWORK-BASED APPLICATION VIRTUALIZATION?

This term may not be familiar to you, but that’s because since its inception, oh, more than a decade ago, it’s always just been called “server virtualization”. After the turn of the century (I love saying that, by the way) it was always referred to as service virtualization in SOA and XML circles. With the rise of the likes of VMware and Citrix and Microsoft server virtualization solutions, it’s become impossible to just use the term “server virtualization”, and “service virtualization” is just as ambiguous, so it seems appropriate to give it a few more modifiers to make it clear that we’re talking about the network-based virtualization (aggregation) of applications. That “aggregation” piece is important, because unlike server virtualization, which bifurcates servers, network-based application virtualization abstracts applications, making many instances appear to be one.

Network-based application virtualization resides in the network, in the application delivery “tier” of an architecture. This tier is normally physically deployed somewhere near the edge of the data center (the perimeter) and acts as the endpoint for user requests. In other words, a client request to http://www.example.com is answered by an application delivery controller (load balancer) which in turn communicates internally with applications that may be virtualized or not, local or in a public cloud.

Many, many, many organizations take advantage of this type of virtualization as a means to implement a scalable, load balancing based infrastructure for high-volume, high-availability applications. Many, many, many organizations do not take advantage of network-based application virtualization for applications that are not high-volume, high-availability applications. They should.

FOUR REASONS to USE NETWORK-BASED APPLICATION VIRTUALIZATION for EVERY APPLICATION

There are many reasons to use network-based application virtualization for every application, but these four are at the top of the list.

FUTURE-PROOF SCALABILITY. Right now that application may not need to be scaled, but it may in the future. If it’s deployed on its own, without network-based application virtualization, you’ll have a dickens of a time rearranging your network later to enable it. Leveraging network-based application virtualization for all applications ensures that if an application ever needs to be scaled, it can be done without disruption – no downtime for it or for other applications that may be impacted by moving things around. This creates a scalability domain that enables the opportunity to more easily implement infrastructure scalability patterns, even for applications that don’t need to scale beyond a single server/instance yet.

IMPROVES PERFORMANCE. Even for a single-instance application, an application delivery controller provides value – including aiding in availability. It can offload computationally intense functions, optimize connection management, and apply acceleration policies that make even a single-instance application more pleasant to use.
An architecture that leverages network-based application virtualization for every application also allows the architect to employ client-side and server-side techniques for improving performance, tweaking policies on both sides of “the stack” for optimal delivery of the application to users regardless of the device from which they access the application. The increasing demand for enterprise applications to be accessible from myriad mobile devices – iPad, Blackberry, and smart phones – can create problems with performance when application servers are optimized for LAN delivery to browsers. The ability to intelligently apply the appropriate delivery policies based on client device (part of its context-aware capabilities) can improve the performance of even a single-instance application for all users, regardless of device.

STRATEGIC POINT of CONTROL. Using network-based application virtualization allows you to architect strategic points of control through which security and other policies can be applied. These include authentication, authorization, and virtual patching through web application firewall capabilities. As these policies change, they can be applied at the point of control rather than in the application. This removes the need to cycle applications through the implementation-test-deploy cycle as often as vulnerabilities and security policies change, and provides flexibility in scheduling. Applications that may be deployed in a virtualized environment and that may “move” around the data center – because they are not a priority and are therefore subject to being migrated to whatever resources may be available – can do so without concern for being “lost”. Because the application delivery controller is the endpoint, no matter where the application migrates it can always be accessed in the same way by end-users. Business continuity is an important challenge for organizations to address, and as infrastructure continues to become virtualized and highly mobile, the ability to maintain its interfaces becomes imperative in reducing the disruption to the network and applications as components migrate around.

IMPROVES VISIBILITY. One of the keys to a healthy data center is keeping an eye on things. You can’t do anything about a crashed application if you don’t know it’s crashed, and the use of network-based application virtualization allows you to implement health monitoring that can notify you before you get that desperate 2am call. In a highly virtualized or cloud computing environment, this also provides critical feedback to automation systems that may be able to take action immediately upon learning an application is unavailable for any reason. Such action might be as simple as spinning up a new instance of the application elsewhere while taking the “downed” instance off-line, making it invaluable for maintaining availability of even single-instance applications. When the application delivery infrastructure is the “access point” for all applications, it also becomes a collection point for performance-related data and usage patterns, better enabling operations to plan for increases in capacity based on actual use or as a means to improve performance.

To summarize, four key reasons to leverage network-based application virtualization are: visibility, performance, control, and flexibility.
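In miniature, the aggregation idea looks something like the sketch below: one stable endpoint in front of N instances, even when N is one. This is an illustrative toy in Python, not how any particular application delivery controller is implemented:

```python
class VirtualApplication:
    """One stable virtual endpoint fronting a pool of application instances."""

    def __init__(self, instances):
        self.instances = list(instances)
        self.down = set()
        self._i = 0

    def add_instance(self, instance):
        """Future-proof scalability: grow the pool without clients noticing."""
        self.instances.append(instance)

    def mark(self, instance, healthy: bool):
        """Fed by an active health monitor (e.g., an HTTP content check)."""
        (self.down.discard if healthy else self.down.add)(instance)

    def pick(self):
        """Round-robin across whichever instances are currently healthy."""
        for _ in range(len(self.instances)):
            candidate = self.instances[self._i % len(self.instances)]
            self._i += 1
            if candidate not in self.down:
                return candidate
        raise RuntimeError("no healthy instances")   # alert before the 2am call

app = VirtualApplication(["10.1.0.11:8080"])   # a lone, single-instance application
app.add_instance("10.1.0.12:8080")             # scaled later; clients never know
app.mark("10.1.0.11:8080", healthy=False)      # the monitor notices a crash
print(app.pick())                              # 10.1.0.12:8080
```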
APPLICATION DELIVERY INFRASTRUCTURE is a PART OF ENTERPRISE ARCHITECTURE

The inclusion of an application delivery network architecture as a part of a larger, holistic enterprise architecture is increasingly a “must” rather than a “should”. Organizations must move beyond viewing application delivery as simple load balancing in order to take full advantage of the strategic architectural advantages of using network-based application virtualization for every application. The additional control and visibility alone are worth a second look at that application delivery controller that’s been distilled down to little more than a load balancer in the data center.

The whole is greater than the sum of its parts, and load balancing is just one of the “parts” of an application delivery controller. Architects and devops should view such data center components with an eye toward how to leverage their many infrastructure services to achieve the flexibility and control necessary to move the enterprise architecture along the path of maturity toward a truly automated data center. Your applications – even the lonely, single-instance ones – will thank you for it.

Infrastructure 2.0: As a matter of fact that isn't what it means
We've been talking a lot about the benefits of Infrastructure 2.0, or Dynamic Infrastructure: a lot about why it's necessary, and what's required to make it all work. But we've never really laid out what it is, and that's beginning to lead to some misconceptions. As Daryl Plummer of Gartner pointed out recently, the definition of cloud computing is still, well, cloudy. Multiple experts can't agree on the definition, and the same is quickly becoming true of dynamic infrastructure. That's no surprise; we're at the beginning of what Gartner would call the hype cycle for both concepts, so there's some work to be done on fleshing out exactly what each means. That dynamic infrastructure is tied to cloud computing is no surprise, either, as dynamic infrastructure is very much an enabler of such elastic models of application deployment. But dynamic infrastructure is applicable to all kinds of models of application deployment: so-called legacy deployments, cloud computing and its many faces, and likely new models that have yet to be defined.

The biggest confusion out there seems to be that dynamic infrastructure is being viewed as Infrastructure as a Service (IaaS). Dynamic infrastructure is not the same thing as IaaS. IaaS is a deployment model in which application infrastructure resides elsewhere, in the cloud, and is leveraged by organizations desiring an affordable option for scalability that reduces operating and capital expenses by sharing compute resources "out there" somewhere, at a provider. Dynamic infrastructure is very much a foundational technology for IaaS, but it is not, in and of itself, IaaS.

Indeed, simply providing network or application network solution services "as a service" has never required dynamic infrastructure. CDNs (Content Delivery Networks), managed VPNs, secure remote access, and DNS services have long been available as services to be used by organizations as a means by which they can employ a variety of "infrastructure services" without the capital expenditure in hardware and the time/effort required to configure, deploy, and maintain such solutions. Simply residing "in the cloud" is not enough. A CDN is not "dynamic infrastructure", nor are hosted DNS servers. They are Infrastructure 1.0, legacy infrastructure, whose very nature is such that physical location has never been important to their deployment. Indeed, these services were designed without physical location as a requirement, as their core functions are supposed to work in a distributed, location-agnostic manner.

Dynamic infrastructure is an evolution of traditional network and application network solutions to be more adaptable, support integration with its environment and other foundational technologies, and be aware of context (connectivity intelligence).

Adaptable

It is able to understand its environment and react to conditions in that environment in order to provide scale, security, and optimal performance for applications. This adaptability comes in many forms, from the ability to make management and configuration changes on the fly as necessary, to providing the means by which administrators and developers can manually or automatically make changes to the way in which applications are being delivered. The configuration and policies applied by dynamic infrastructure are not static; they are able to change based on predefined criteria or events that occur in the environment, such that the security, scalability, or performance of an application and its environs are preserved.
Some solutions implement this capability through event-driven architectures, with events such as "IP_ADDRESS_ASSIGNED" or "HTTP_REQUEST_MADE" (a minimal sketch of this event-driven model appears at the end of this post). Some provide network-side scripting capabilities to extend the ability to react and adapt to situations requiring flexibility, while others provide the means by which third-party solutions can be deployed on the solution to address the need for application- and user-specific capabilities at specific touch-points in the architecture.

Context Aware

Dynamic infrastructure is able to understand the context that surrounds an application, its deployment environment, and its users, and apply relevant policies based on that information. Being context aware means being able to recognize that a user accessing Application X from a coffee shop has different needs than the same user accessing Application X from home or from the corporate office. It is able to recognize that a user accessing an application over a WAN or high-latency connection requires different policies than one accessing that application via a LAN or from close physical proximity over the Internet. Being context aware means being able to recognize the current conditions of the network and the application, and then leveraging its adaptable nature to choose the right policies at the time the request is made, such that the application is delivered most efficiently and quickly.

Collaborative

Dynamic infrastructure is capable of integrating with other application network and network infrastructure, as well as the management and control solutions required to manage both the infrastructure and the applications it is tasked with delivering. The integration capabilities of dynamic infrastructure require that the solution be able to direct and take direction from other solutions, such that changes in the infrastructure at all layers of the stack can be recognized and acted upon. This integration allows network and application network solutions to leverage their awareness of context in a way that ensures they are adaptable and can support the delivery of applications in an elastic, flexible manner. Most solutions use a standards-based control plane through which they can be integrated with other systems to provide the connectivity intelligence necessary to implement IaaS, virtualized architectures, and other cloud computing models in such a way that the perceived benefits of reduced operating expenses and increased productivity through automation can actually be realized.

These three properties of dynamic infrastructure work together, in concert, to provide the connectivity intelligence and the ability to act on information gathered through that intelligence. All three together form the basis for a fluid, adaptable, dynamic application infrastructure foundation on which emerging compute models such as cloud computing and virtualized architectures can be implemented. But dynamic infrastructure is not exclusively tied to emerging compute models and next-generation application architectures. Dynamic infrastructure can be leveraged to provide benefit to traditional architectures as well. The connectivity intelligence and adaptable nature of dynamic infrastructure improve the security, availability, and performance of applications in so-called legacy architectures, too.

Dynamic infrastructure is a set of capabilities implemented by network and application network solutions that provide the means by which an organization can improve the efficiency of its application delivery and network architecture.
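As promised, a minimal sketch of that event-driven model: infrastructure raises named events, and registered handlers adapt policy on the fly. The event names mirror the examples above; the registration API and handler logic are invented for illustration:

```python
handlers = {}

def on(event_name):
    """Decorator registering a handler for a named infrastructure event."""
    def register(fn):
        handlers.setdefault(event_name, []).append(fn)
        return fn
    return register

def fire(event_name, ctx):
    """Dispatch an event; handlers may mutate the shared context."""
    for fn in handlers.get(event_name, []):
        fn(ctx)
    return ctx

@on("HTTP_REQUEST_MADE")
def adapt_to_client(ctx):
    if ctx.get("client_rtt_ms", 0) > 200:
        ctx["policy"] = "wan-optimized"   # swap the delivery profile on the fly

@on("IP_ADDRESS_ASSIGNED")
def track_new_instance(ctx):
    print("new instance joined:", ctx["address"])   # e.g., add it to a pool

fire("IP_ADDRESS_ASSIGNED", {"address": "10.1.0.13"})
print(fire("HTTP_REQUEST_MADE", {"client_rtt_ms": 340}).get("policy"))  # wan-optimized
```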
That's why it's just not accurate to equate Infrastructure 2.0/Dynamic Infrastructure with Infrastructure as a Service cloud computing models. The former is a description of the next generation of network and application network infrastructure solutions: the evolution from static, brittle solutions to fluid, dynamic, adaptable ones. The latter is a deployment model that, while likely built atop dynamic infrastructure solutions, is not wholly comprised of dynamic infrastructure. IaaS is not a product, it's a service. Dynamic infrastructure is a product that may or may not be delivered "as a service". Glad we got that straightened out.