application delivery network
F5 Friday: ADN = SDN at Layer 4-7
The benefits of #SDN have long been realized by #ADN architectures.

There are, according to the ONF, three core characteristics of SDN:

“In the SDN architecture, the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications. As a result, enterprises and carriers gain unprecedented programmability, automation, and network control, enabling them to build highly scalable, flexible networks that readily adapt to changing business needs.” -- Software-Defined Networking: The New Norm for Networks, ONF

Let’s enumerate them so we’re all on the same page:

1. Control and data planes are decoupled
2. Intelligence and state are logically centralized
3. Underlying network infrastructure is abstracted from applications

Interestingly enough, these characteristics – and benefits – have existed for quite some time at layers 4-7 in something known as an application delivery network.

ADN versus SDN

First, let’s recognize that the separation of the control and data planes is not a new concept. It’s been prevalent in application architecture for decades. It’s a core premise of SOA, where implementation and interface are decoupled and abstracted, and it has been part of application delivery networking for nearly a decade now. When F5 redesigned the internal architecture of BIG-IP back in the day, the core premise was separation of the control plane from the data plane as a means to achieve a more programmable, flexible, and extensible solution. The data plane and control plane are completely separate, and that’s part of the reason F5 is able to “plug in” modules and extend the functionality of the core platform.

Now, one of the benefits of this architecture is programmability – the ability to redefine how flows, if you will, are handled as they traverse the system. iRules acts in a manner similar to the goals of OpenFlow, in that it allows the implementation of new functionality down to the packet level. You can, if you desire, implement new protocols using iRules. In many cases, this is how F5 engineers develop support for emerging protocols and technologies in between major releases. BIG-IP support for SIP, for example, was initially handled entirely by iRules. Eventually, demand and need resulted in that functionality being moved into the core control plane. That is part of the value proposition of SDN + OpenFlow – the ability to “tinker” with and “test” experimental and new protocols before productizing them.

So the separation of control from data is not new, though BIG-IP certainly doesn’t topologically separate the two as is the case in SDN architectures. One could argue physical separation exists internal to the hardware, but that would be splitting hairs. Suffice to say that the separation of the two exists on BIG-IP platforms, but it is not the physical (topological) separation described by SDN definitions.

Intelligence and control are logically centralized in the application delivery (wait for it… wait for it…) controller. Agility is realized through the ability to dynamically adjust flows and policy on demand. Adjustments in application routing are made based on policy defined in the control plane, and as an added bonus, context is shared across components focusing on specific application delivery domain policy.
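To make the idea of programmable, per-flow handling concrete, here is a minimal Python sketch of the kind of content-aware routing decision described above. It is purely illustrative – not iRules and not BIG-IP internals – and the pool names and matching rules are hypothetical.

```python
# Illustrative only: a toy "data plane hook" that routes each HTTP request
# to a pool based on inspected content, the way a programmable proxy might.

from dataclasses import dataclass

@dataclass
class Request:
    method: str
    uri: str
    headers: dict

# Hypothetical pools; in a real deployment these would be defined in the
# controller's configuration (the "control plane"), not hard-coded.
POOLS = {
    "sip_pool": ["10.0.1.10", "10.0.1.11"],
    "video_pool": ["10.0.2.10", "10.0.2.11"],
    "default_pool": ["10.0.3.10", "10.0.3.11"],
}

def choose_pool(req: Request) -> str:
    """Per-request (per-flow) routing decision based on inspected content."""
    if req.headers.get("Content-Type", "").startswith("application/sdp"):
        return "sip_pool"                      # emerging-protocol traffic
    if req.uri.startswith("/video/"):
        return "video_pool"                    # latency-sensitive content
    return "default_pool"

if __name__ == "__main__":
    r = Request("GET", "/video/live/stream.m3u8", {"Host": "example.com"})
    pool = choose_pool(r)
    print(pool, "->", POOLS[pool])             # video_pool -> ['10.0.2.10', '10.0.2.11']
```

The point of the sketch is simply that the decision runs per flow, against inspected content, under a policy the operator can change without touching the servers behind it.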
Offering “unprecedented programmability, automation, and network control” that enables organizations “to build highly scalable, flexible [application] networks that readily adapt to changing business needs” is exactly what an application delivery network-based architecture does – at least for that sub-section of the network dedicated to delivering applications.

WOULD ADN BENEFIT FROM SDN?

There is not much SDN can provide to improve ADN. As mentioned before, there may be advantages to implementing SDN topologically downstream from an ADN, to manage the volatility and scale needed in the application infrastructure, and that might require support for protocols like OpenFlow to participate at layer 2-3. But at layer 4-7 there is, thus far, no significant play for SDN that has an advantage over an existing ADN. SDN focuses on core routing and switching, and while that’s certainly important for an ADN – which has to know where to forward flows to the appropriate resource after application layer routing decisions have been made – it’s not a core application delivery concern.

The argument could (and can and probably will) be made that SDN is not designed to perform the real-time updates required to implement layer 4-7 routing capabilities. SDN is designed to address network routing and forwarding automatically, but such changes in the network topology are (or hopefully are, at least) minimal. Certainly such a change is massive in scale at the time it happens, but it does not happen for every request as is the case at layer 4-7. Load balancing necessarily makes routing decisions for every session (flow) and, in some cases, every request. That means thousands upon thousands of application routing decisions per second. In SDN terms, this would require a similar scale of updates to forwarding tables, a capability SDN has, at this time, been judged unable to accomplish. Making this more difficult, forwarding of application layer requests is accomplished on a per-session basis, not unilaterally across all sessions, whereas forwarding information in switches is generally more widely applicable to the entire network. In order for per-session or per-request routing to be scalable, it must be done dynamically, in real time. Certainly SDN could manage the underlying network for the ADN, but above layer 3 SDN will not suffice to support the scale and performance required to ensure fast and available applications.

When examining the reasons for implementing SDN, it becomes apparent that most of the challenges addressed by the concept have already been addressed at layer 4-7 by a combination of ADN and integration with virtualization management platforms. The benefits afforded at layer 2-3 by SDN are already duplicated at layer 4-7 by a unified ADN. In practice, if not definition, ADN is SDN at layer 4-7.

OpenFlow/SDN Is Not A Silver Bullet For Network Scalability
F5 Friday: Performance, Throughput and DPS
Cyclomatic Complexity of OpenFlow-Based SDN May Drive Market Innovation
QoS without Context: Good for the Network, Not So Good for the End user
The Full-Proxy Data Center Architecture
SDN, OpenFlow, and Infrastructure 2.0
Searching for an SDN Definition: What Is Software-Defined Networking?
OpenFlow/Software-Defined Networking (SDN)
A change is blowing in from the North (-bound API)
F5 Friday: Avoiding the Operational Debt of Cloud

#F5CLP The F5 Cloud Licensing Program enables #cloud providers to differentiate and accelerate advanced infrastructure service offerings while reducing operational debt for the enterprise.

If you ask three different people why they are adopting cloud, it’s likely you’ll get three different reasons. The rationale for adopting cloud – whether private or public – depends entirely on the strategy IT has in place to address the unique combination of operational and business requirements for their organizations. But one thing seems clear through all these surveys: cloud is here to stay, in one form or another. Those who are “going private” today may “go hybrid” tomorrow. Those who are “in the cloud” today may reverse direction and decide to, as Alan Leinwand puts it so well, “own the base and rent the spike” by going “hybrid.” What the future seems to hold is hybrid architectures, with use of public and private cloud mixed together to provide the best of both worlds.

This state of possibility certainly leaves both enterprise and service providers alike somewhat on edge. How can service providers entice the enterprise? How do they prove their services are above and beyond the other thousand-or-so offerings out there? How does the enterprise go about choosing an IaaS partner (and have no doubts, enterprises want partners, not providers, when it comes to managing their data and applications)? How do they ensure the operational efficiency gained through their private cloud implementation isn’t lost by disjointed processes imposed by differences in core application delivery services in public offerings? How do organizations avoid going into operational debt from managing two environments with two different sets of management and solutions? Architectural consistency is key to the answer, achieved through a fully cloud-enabled application delivery network.

The F5 Cloud Licensing Program

Whether the goal is scalability, security, better performance, availability, consolidation, or reducing costs, F5 enterprise customers have achieved these goals using F5 solutions. The next step is ensuring these same goals can be achieved in a public cloud, whether the implementation is pure public or hybrid cloud. To do that requires enabling cloud service providers with the ability to offer a complete application delivery network (ADN) in the cloud, with a cost structure appropriate to a utility service model. Given that 43% of respondents in a Cloud Computing Outlook 2011 survey indicated “lack of training” was inhibiting their cloud adoption, being able to offer services that customers are already familiar with is important.

That’s the impetus behind the creation of the F5 Cloud Licensing Program, a new service provider-focused licensing model for the industry’s only complete cloud-enabled ADN. With services encompassing the entire application delivery chain – from security to acceleration to access control – this offering brings to the table the ability to maintain operational consistency from the data center into the cloud, without compromising on the infrastructure services needed by enterprises to take advantage of public cloud models.

The Conspecific Hybrid Cloud
Complexity Drives Consolidation
Cloud Bursting: Gateway Drug for Hybrid Cloud
Ecosystems are Always in Flux
The Pythagorean Theorem of Operational Risk
At the Intersection of Cloud and Control…
Cloud Computing and the Truth About SLAs

F5 Friday: In the NOC at Interop
#interop #fasterapp #adcfw #ipv6 Behind the scenes in the Interop network Interop Las Vegas expects somewhere in the realm of 10,000+ attendees this year. Most of them will no doubt be carrying smart phones, many tablets, and of course the old standby, the laptop. Nearly every one will want access to some service – inside or out. The Interop network provides that access – and more. F5 solutions will provide IT services, including IPv4–IPv6 translation, firewall, SSL VPN, and web optimization technologies, for the Network Operations Center (NOC) at Interop. The Interop 2012 network is comprised of the show floor Network Operations Center (NOC), and three co-location sites: Colorado (DEN), California (SFO), and New Jersey(EWR). The NOC moves with the show to its 4 venues: Las Vegas, Tokyo, Mumbai, and New York. F5 has taken a hybrid application delivery network architectural approach – leveraging both physical devices (in the NOC) and virtual equivalents (in the Denver DC). Both physical and virtual instances of F5 solutions are managed via a BIG-IP Enterprise Manager 4000, providing operational consistency across the various application delivery services provided: DNS, SMTP, NTP, global traffic management (GSLB), remote access via SSL VPNs, local caching of conference materials, and data center firewall services in the NOC DMZ. Because the Interop network is supporting both IPv6 and IPv4, F5 is also providing NAT64 and DNS64 services. NAT64: Network address translation is performed between IPv6 and IPv4 on the Interop network, to allow IPv6-only clients and servers to communicate with hosts on IPv4-only networks DNS64: IPv6-to-IPv4 DNS translations are also performed by these BIG-IPs, allowing A records originating from IPv4-only DNS servers to be converted into AAAA records for IPv6 clients. F5 is also providing SNMP, SYSLOG, and NETFLOW services to vendors at the show for live demonstrations. This is accomplished by cloning the incoming traffic and replicating it out through the network. At the network layer, such functionality is often implemented by simply mirroring ports. While this is sometimes necessary, it does not necessarily provide the level of granularity (and thus control) required. Mirrored traffic does not distinguish between SNMP and SMTP, for example, unless specifically configured to do so. While cloning via an F5 solution can be configured to act in a manner consistent with port mirroring, cloning via F5 also allows intermediary devices to intelligently replicate traffic based on information gleaned from deep content inspection (DCI). For example, traffic can be cloned to a specific pool of devices based on the URI, or client IP address or client device type or destination IP. Virtually any contextual data can be used to determine whether or not to clone traffic. You can poke around with more detail and photos and network diagrams at F5’s microsite supporting its Interop network services. Dashboards are available, documentation, pictures, and more information in general on the network and F5 services supporting the show. And of course if you’re going to be at Interop, stop by the booth and say “hi”! 
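As a rough illustration of the NAT64/DNS64 services described above, the sketch below shows only the address-synthesis step defined by RFC 6052: an IPv4 address is embedded in the low 32 bits of a /96 IPv6 prefix. This is not the Interop or BIG-IP configuration; it uses the well-known prefix 64:ff9b::/96, whereas a real deployment may use a network-specific prefix.

```python
# Rough sketch of the address mapping behind DNS64/NAT64 (RFC 6052).
# Not a product configuration - just the concept: an IPv4 address is
# embedded in the low 32 bits of a /96 IPv6 translation prefix.

import ipaddress

WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal: str) -> ipaddress.IPv6Address:
    """What DNS64 does: turn an A record's IPv4 address into a synthetic AAAA."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return ipaddress.IPv6Address(int(WELL_KNOWN_PREFIX.network_address) | int(v4))

def extract_ipv4(ipv6_literal: str) -> ipaddress.IPv4Address:
    """What the NAT64 translator does on egress: recover the real IPv4 target."""
    v6 = ipaddress.IPv6Address(ipv6_literal)
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

if __name__ == "__main__":
    aaaa = synthesize_aaaa("192.0.2.10")      # an IPv4-only host (example address)
    print(aaaa)                               # 64:ff9b::c000:20a
    print(extract_ipv4(str(aaaa)))            # 192.0.2.10
```

The IPv6-only client sees an ordinary AAAA answer and connects to it; the translator strips the prefix back off and forwards the traffic to the IPv4-only host.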
I’ll keep the light on for ya…

F5 Interopportunities at Interop 2012
F5 Secures and Optimizes Application and Network Services for the Interop 2012 Las Vegas Network Operations Center
When Big Data Meets Cloud Meets Infrastructure
Mobile versus Mobile: 867-5309
Why Layer 7 Load Balancing Doesn’t Suck
BYOD–The Hottest Trend or Just the Hottest Term
What Does Mobile Mean, Anyway?
Mobile versus Mobile: An Identity Crisis
The Three Axioms of Application Delivery
Don’t Let Automation Water Down Your Data Center
The Four V’s of Big Data

At the Intersection of Cloud and Control…
Arises the fourth data center architecture tier – application delivery. The battle of efficiency versus economy continues in the division of the cloud market between public and private environments. Public cloud proponents argue, correctly, that private cloud simply does not offer the same economy of scale as that of public cloud. But that only matters if economy of scale is more important than the efficiency gains realized through any kind of cloud computing implementation. Cloud for most organizations has been recognized as transformational not necessarily in where the data center lives, but rather in how the data center operates. Private cloud is desired for its ability to transform the operational model of IT from its long reign of static, inefficient architectures to a more dynamic and ultimately efficient architectural model, one able to more rapidly adapt to new, well, everything. In many respects the transformative power of cloud computing within the enterprise is not focused on the cost savings but rather on the efficiencies that can be realized through the service-focused design; through the creation of a new “virtual” tier of control in the data center that enables the flexibility of cloud inside the data center. That tier is necessary to ensure things like SLAs between IT and business organizations. SLAs that are, as Bernard Golden recently pointed out in “Cloud Computing and the Truth About SLAs”, nearly useless in public cloud. There are no guarantees on the Internet, it’s a public, commoditized medium designed specifically for failure, not performance or even uptime of nodes. That, as always has been the case, is the purview of the individuals responsible for maintaining the node, i.e. IT. It’s no surprise, then, that public providers are fairly laid back when it comes to SLAs and that they provide few if any tools through which performance and uptime can be guaranteed. It’s not because they can’t – the technology to do so certainly exists, many organizations use such today in their own data centers – but rather it’s because the investment required to do so would end up passed on to consumers, many of whom simply aren’t willing to pay to ensure SLAs that today, at least, are not relevant to their organization. Test and development, for example, requires no SLAs. Many startups, while desiring 100% uptime and fantabulous performance, do not have the impetus (yet) to fork over additional cents per instance per hour per megabit transferred to enforce any kind of performance or availability guarantee. But existing organizations, driven by business requirements and the increasing pressure to “add value”, do have an impetus to ensure performance and uptime. Seconds count, in business, and every second delay – whether from poor performance or downtime – can rack up a hefty bill from lost productivity, lost customers, and lost revenue. Thus while the SLA may be virtually useless in the public cloud for its ability to not only compensate those impacted by an outage or poor performance but the inability of providers to enforce and meet SLAs to enterprise-class specifications, they are important. Important enough, in fact, that many organizations are, as anticipated, turning to private cloud to reap the benefits of both worlds – cloud and control. CONTROL meets CLOUD And thus we are seeing the emergence of a fourth tier within the data center architecture; a flexible tier in which those aspects of delivering applications are addressed: security, performance, and availability. 
This tier is a necessary evolution in data center architecture because as cloud transforms the traditional server (application) tiers into mobile, virtualized containers, it abstracts the application – and the servers as well – from the infrastructure, leaving it bereft of the ability to easily integrate with the infrastructure and systems typically used to provide these functions. The topology of a virtualized application infrastructure is necessarily transient and, in order to be more easily developed and deployed, those applications are increasingly relying on external services to provide security, access management, and performance-related functionality. The insertion of a fourth tier in the architecture affords IT architects and operations the ability to easily manage these services and provide them in an application-specific way to the virtualized application infrastructure. It has the added advantage of presenting a unified, consistent interface to the consumer – internal or external – that insulates them from failure as well as from changes in service location. This is increasingly important as applications and infrastructure become more mobile and move not only from server to server but from data center to data center and cloud to cloud. Insulating the consumers of applications and services is critical to ensuring a consistent experience and to enforcing SLAs.

Consider the simple case of accessing an application. Many access control strategies are topologically constrained, either in implementation or in integration with applications, making implementation in a dynamic environment challenging. Leveraging an application delivery tier, which focuses on managing applications and not IP addresses or servers, enables an access management strategy that is able to deal with changing topology and locations without disruption. This is a more service-focused approach that melds well with the service-oriented design of modern, cloud-based data centers and architectures.

The alternative is to return to an agent-based approach, which has its own challenges and has already been tried and, for the most part, rejected as a viable long-term strategy. Unfortunately, cloud computing is driving us back toward this approach and, while effective in addressing many of the current gaps in cloud computing services, it fractures operations and has the effect of increasing operational investment, as two very disconnected sets of management frameworks and processes must be simultaneously managed. An effective application delivery tier, on the other hand, unifies operations while providing the services necessary across multiple environments. This means consistent processes and policies can be applied to applications regardless of location, making it possible to ensure governance and meet business-required SLAs. This level of control is necessary for enterprise-class services, no matter where the services may actually be deployed.

That public providers do not and indeed cannot today provide support for enterprise-class SLAs is no surprise, but partly because of this, neither should the data showing enterprises gravitating toward private cloud be surprising. The right data center architecture can both support the flexibility and operational benefits of using cloud computing and ensure performance and availability guarantees.

Force Multipliers and Strategic Points of Control Revisited
On occasion I have talked about military force multipliers. These are things like terrain and minefields that can make your force able to do their job much more effectively if utilized correctly. In fact, a study of military history is every bit as much a study of battlefields as it is a study of armies. He who chooses the best terrain generally wins, and he who utilizes tools like minefields effectively often does too. Rommel in the desert often used Wadis to hide his dreaded 88mm guns – that at the time could rip through any tank the British fielded. For the last couple of years, we’ve all been inundated with the story of The 300 Spartans that held off an entire army. Of course it was more than just the 300 Spartans in that pass, but they were still massively outnumbered. Over and over again throughout history, it is the terrain and the technology that give a force the edge. Perhaps the first person to notice this trend and certainly the first to write a detailed work on the topic was von Clausewitz. His writing is some of the oldest military theory, and much of it is still relevant today, if you are interested in that type of writing. For those of us in IT, it is much the same. He who chooses the best architecture and makes the most of available technology wins. In this case, as in a war, winning is temporary and must constantly be revisited, but that is indeed what our job is – keeping the systems at their tip-top shape with the resources available. Do you put in the tool that is the absolute best at what it does but requires a zillion man-hours to maintain, or do you put in the tool that covers everything you need and takes almost no time to maintain? The answer to that question is not always as simple as it sounds like it should be. By way of example, which solution would you like your bank to put between your account and hackers? Probably a different one than the one you would you like your bank to put in for employee timekeeping. An 88 in the desert, compliments of WW2inColor Unlike warfare though, a lot of companies are in the business of making tools for our architecture needs, so we get plenty of options and most spaces have a happy medium. Instead of inserting all the bells and whistles they inserted the bells and made them relatively easy to configure, or they merged products to make your life easier. When the terrain suits a commanders’ needs in wartime, the need for such force multipliers as barbed wire and minefields are eliminated because an attacker can be channeled into the desired defenses by terrain features like cliffs and swamps. The same could be said of your network. There are a few places on the network that are Strategic Points of Control, where so much information (incidentally including attackers, though this is not, strictly speaking, a security blog) is funneled through that you can increase your visibility, level of control, and even implement new functionality. We here at F5 like to talk about three of them… Between your users and the apps they access, between your systems and the WAN, and between consumers of file services and the providers of those services. These are places where you can gather an enormous amount of information and act upon that information without a lot of staff effort – force multipliers, so to speak. 
When a user connects to your systems, the strategic point of control at the edge of your network can perform pre-application-access security checks, route them to a VPN, determine the best of a pool of servers to service their requests, encrypt the stream (on front, back, or both sides), redirect them to a completely different datacenter or an instance of the application they are requesting that actually resides in the cloud… The possibilities are endless. When a user accesses a file, the strategic point of control between them and the physical storage allows you to direct them to the file no matter where it might be stored, allows you to optimize the file for the pattern of access that is normally present, allows you to apply security checks before the physical file system is ever touched, again, the list goes on and on. When an application like replication or remote email is accessed over the WAN, the strategic point of control between the app and the actual Internet allows you to encrypt, compress, dedupe, and otherwise optimize the data before putting it out of your bandwidth-limited, publicly exposed WAN connection. The first strategic point of control listed above gives you control over incoming traffic and early detection of attack attempts. It also gives you force multiplication with load balancing, so your systems are unlikely to get overloaded unless something else is going on. Finally, you get the security of SSL termination or full-stream encryption. The second point of control gives you the ability to balance your storage needs by scripting movement of files between NAS devices or tiers without the user having to see a single change. This means you can do more with less storage, and support for cloud storage providers and cloud storage gateways extends your storage to nearly unlimited space – depending upon your appetite for monthly payments to cloud storage vendors. The third force-multiplies the dollars you are spending on your WAN connection by reducing the traffic going over it, while offloading a ton of work from your servers because encryption happens on the way out the door, not on each VM. Taking advantage of these strategic points of control, architectural force multipliers offers you the opportunity to do more with less daily maintenance. For instance, the point between users and applications can be hooked up to your ADS or LDAP server and be used to authenticate that a user attempting to access internal resources from… Say… and iPad… is indeed an employee before they ever get to the application in question. That limits the attack vectors on software that may be highly attractive to attackers. There are plenty more examples of multiplying your impact without increasing staff size or even growing your architectural footprint beyond the initial investment in tools at the strategic point of control. For F5, we have LTM at the Application Delivery Network Strategic Point of Control. Once that investment is made, a whole raft of options can be tacked on – APM, WOM, WAM, ASM, the list goes on again (tired of that phrase for this blog yet?). Since each resides on LTM, there is only one “bump in the wire”, but a ton of functionality that can be brought to bear, including integration with some of the biggest names in applications – Microsoft, Oracle, IBM, etc. Adding business value like remote access for devices, while multiplying your IT force. 
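The pattern described above – several policy functions applied at one strategic point of control before traffic ever touches the application tier – can be sketched in a few lines. This is purely conceptual, not LTM or APM; the checks, device classes, and pool addresses are invented for illustration.

```python
# Toy sketch of "one bump in the wire, many functions": a single point of
# control that chains independent policy checks, then selects a server.

from typing import Callable, Optional

Policy = Callable[[dict], Optional[str]]   # returns a rejection reason, or None to continue

def require_authenticated(ctx: dict) -> Optional[str]:
    # In practice this would query a directory (AD/LDAP) or SSO service.
    return None if ctx.get("user") else "authentication required"

def restrict_unmanaged_devices(ctx: dict) -> Optional[str]:
    allowed = {"laptop", "managed-tablet"}
    return None if ctx.get("device") in allowed else "device not permitted"

def deliver(ctx: dict, policies: list[Policy], pool: list[str]) -> str:
    for policy in policies:
        reason = policy(ctx)
        if reason:
            return f"REJECTED at point of control: {reason}"
    # Only now does the request touch the application tier.
    server = pool[hash(ctx.get("user", "")) % len(pool)]
    return f"forwarded to {server}"

if __name__ == "__main__":
    ctx = {"user": "alice", "device": "managed-tablet"}
    print(deliver(ctx, [require_authenticated, restrict_unmanaged_devices],
                  ["10.0.0.11", "10.0.0.12"]))
```

The force-multiplying part is that each additional check is added at the same insertion point rather than on every server behind it.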
I recommend that you check it out if you haven’t; there is definitely a lot to be gained, and it costs you nothing but a little bit of your precious time to look into it. No matter what you do, looking closely at these strategic points of control and making certain you are using them effectively to meet the needs of your organization is easy and important. The network is not just a way to hook users to machines anymore, so make certain that’s not all you’re using it for. Make the most of the terrain.

And yes, if you also read Lori’s blog, we were indeed watching the same shows and talking about this concept, so it’s no surprise our blogs are on similar wavelengths.

Related Blogs:

What is a Strategic Point of Control Anyway?
Is Your Application Infrastructure Architecture Based on the ...
F5 Tech Field Day – Intro To F5 As A Strategic Point Of Control
What CIOs Can Learn from the Spartans
What We Learned from Anonymous: DDoS is now 3DoS
What is Network-based Application Virtualization and Why Do You ...
They're Called Black Boxes Not Invisible Boxes
Service Virtualization Helps Localize Impact of Elastic Scalability
F5 Friday: It is now safe to enable File Upload

HTTP Now Serving … Everything
You can’t assume anything about an application’s performance and delivery needs based on the fact that it rides on HTTP. I read an interesting article during my daily perusal of most of the Internet (I’ve had to cut back because the Internet is growing faster than my ability to consume) on “Virtual Micro Networks.” The VMN concept goes well beyond Virtual Local Area Networks (VLANs). Like VLANs or any other network, VMNs transport data from source to destination. But VMNs extend beyond transport to consider security, location, users, and applications. VMNs address: Where is the information? The answer to this question used to be a physical server or storage device but application switching and server/storage virtualization makes this more dynamic and complex. […] VMNs also must be aware of traffic type. For example, voice, video, and storage traffic is extremely latency-sensitive while HTTP traffic is not. Additionally, some network traffic may contain confidential information that should be encrypted or even blocked. What are the specific characteristics of the information? Network-based applications may be made up of numerous services that come together at the user browser. How they get there isn’t always straightforward, thus the rise of vendors like Citrix NetScaler and F5 Networks. This is also where security comes into play as certain traffic may be especially sensitive, suspicious, or susceptible. [emphasis added] What’s driving creation of Virtual Micro Networks Okay, so first things first: the author really is using another term to describe what we’ve been calling for some time an application delivery network. That was cool in and of itself; not the emergence of yet another TLA but that the concept is apparently out there and rising. But what was even more interesting was the conversation this started on Twitter. If you don’t follow @csoandy (that’s the Twitternym of Andy Ellis of Akamai Networks) you might want to start. Andy pointed out that the statement “HTTP traffic is not latency-sensitive” is a bit too broad and went on to point out that it really depends on what you’re delivering. Live video, after all, is sensitive to latency no matter what protocol is transporting it. Andy put it well in an off-line conversation when he said, “There's also the myth that HTTP isn't for low latency apps. HTTP lets you take advantage of optimizations done in TCP and HTTP to accelerate delivery.” A great point and very true. All the “built-in acceleration and optimization” of an application delivery controller’s TCP stack is free for HTTP because after all, HTTP rides on TCP. But ironically this is also where things get a bit wonky. The reality is that the application data is sensitive, not the protocol. But because the data HTTP was initially designed to transport was not considered to be latency sensitive, you almost have to look at HTTP as though it is, which is why the broad statement bothered Andy in the first place. We wouldn’t say something like “TCP” or “UDP” is not sensitive to latency because these are transport layer protocols. We need to know about the data that’s being transported. Similarly, we can’t (anymore) say “HTTP isn’t sensitive to latency” because HTTP is the de facto transport protocol of web applications. As I remarked to Andy, the move to deliver everything via HTTP changes things significantly. “Things” being the entire realm of optimization, acceleration, and application delivery. 
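A small sketch may make the argument concrete: the verb and the protocol are identical, but the delivery policy has to follow the data. The policy fields, content types, and defaults below are invented for illustration only; they are not any product’s settings.

```python
# Minimal sketch: same protocol (HTTP), different treatment based on content.

from dataclasses import dataclass

@dataclass
class DeliveryPolicy:
    buffer_response: bool     # fine for static text/images, harmful for live video
    compress: bool            # helps text, wastes cycles on already-compressed media
    latency_sensitive: bool   # drives queuing and priority decisions

POLICIES = {
    "text/html":        DeliveryPolicy(buffer_response=True,  compress=True,  latency_sensitive=False),
    "image/jpeg":       DeliveryPolicy(buffer_response=True,  compress=False, latency_sensitive=False),
    "video/mp4":        DeliveryPolicy(buffer_response=False, compress=False, latency_sensitive=True),
    "application/json": DeliveryPolicy(buffer_response=True,  compress=True,  latency_sensitive=False),
}

def policy_for(content_type: str) -> DeliveryPolicy:
    """Pick a delivery policy from the data, not the protocol."""
    base = content_type.split(";")[0].strip().lower()
    # Unknown content: assume it is latency sensitive rather than buffering it blindly.
    return POLICIES.get(base, DeliveryPolicy(False, False, True))

if __name__ == "__main__":
    print(policy_for("video/mp4"))
    print(policy_for("text/html; charset=utf-8"))
```

Two GET requests, one lookup table apart, end up with opposite buffering and compression behavior – which is the whole point of context-aware delivery.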
Context is Everything As the initial blog post that started this conversation pointed out, and which nothing Andy and I discussed really changed, is that our nearly complete reliance on HTTP as the de facto transport protocol for everything means that the infrastructure really needs to be aware of the context in which requests and responses are handled. When an HTTP GET and its associated response might in one case be a simple binary image and in another case it might be the initiation of a live video stream, well, the infrastructure better be able to not only recognize the difference but handle them differently. HTTP doesn’t change regardless, but the delivery needs of the data do change. This is the “application aware” mantra we (as in the entire application delivery industry) have been chanting for years. And now it’s becoming an imperative because HTTP no longer implies text and images, it implies nothing. The infrastructure responsible for delivering (securing, optimizing, accelerating, load balancing) the access to that application data cannot assume anything; not session length, not content length, not content type, not optimal network conditions. The policies that might ensure a secure, fast, and available web application are almost certainly not the same policies that will provide the same assurance for video, or audio, or even lengthy binary data. The policies that govern the delivery of a user-focused application are not the same ones that should govern the delivery of integration-driven applications like Web 2.0 and cloud computing APIs. These are different data types, different use cases, different needs. Yet they are all delivered via the same protocol: HTTP. Dynamic Infrastructure Needed What makes this all even more complicated (yes, it does get worse as a matter of fact) is that not only is the same protocol used to deliver different types of data but in many cases it may be delivered to the same user in the same session. A user might move from an article or blog to a video back to text all the while a stream of Twitter and Facebook updates are updating a gadget in that application. And the same infrastructure has to handle all of it. Simultaneously. Wheeeeeeeeeee! That HTTP can be extended (and has been, and will continue to be) to include broad advanced capabilities has been a blessing and a curse, for as we deliver more and more differenter content over the same protocol the infrastructure must be able to ascertain dynamically what type of data is being delivered and apply the appropriate actions dynamically. And it has to incorporate user information, as well, because applications highly sensitive to latency need special care and feeding when delivered over a congested, bandwidth constrained network as opposed to delivery via a high-speed, low latency LAN. The application delivery network, from user to application and back, must be context-aware and able to “turn on a dime” as it were, and adjust delivery policies based on conditions at the time of the request and subsequent responses. It’s got to by dynamic. Consider the comparison offered by Andy regarding video served via traditional protocols and HTTP: Consider a live stream; say, Hope for Haiti. A user opens a browser, and has a small embedded video, with a button to expand to full screen. With most streaming protocols, to get a higher resolution stream, your player needs to either: a) start grabbing a second, high res stream in the background, and guess when to splice them over. 
(Now consider what happens if the stream is too fat and you need to downgrade.)

b) pause (drop the existing stream) and grab a new stream, exposing buffering to the user.

c) signal somehow to the streaming server that it should splice in new content (we built this; it’s *hard* to get right, and you have to do it differently for each protocol).

With HTTP, instead what you see is:

a) The browser player grabs short (usually 2-second) chunks of live streaming content. When it detects that it has gone full screen, and by inferring available bandwidth from how long it takes to download a chunk, it asks for a higher resolution chunk for the next available piece.

Quite the difference, isn’t it? But underlying that simplicity is the ubiquity of HTTP and a highly dynamic, flexible infrastructure capable of adapting to sensitivities specific not only to the protocol but to the data and the type of data being delivered.

So it turns out that Andy and I are both right; it just depends on how you’re looking at it. It isn’t that HTTP is sensitive to latency; it isn’t. But the data being delivered over HTTP most certainly is. It is confusing to discuss HTTP in broad, general terms, because you can’t assume anymore that what’s being delivered is text and images. We don’t talk in terms of TCP when we talk web applications, so maybe it’s time to stop generalizing about “HTTP” and start focusing on applications and data, on the content, because that’s where the real challenges surrounding performance and security are hiding.

Related Posts

What’s driving creation of Virtual Micro Networks
HTTP: The de facto application transport protocol of the Web
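Returning to the chunked-streaming example in this post: the client-driven rate adaptation Andy describes – short HTTP chunks, with the next bitrate inferred from how long the last chunk took to download – can be sketched roughly as below. The bitrate ladder and safety margin are invented, and real HLS/DASH players are considerably more sophisticated.

```python
# Toy model of HTTP chunked rate adaptation: measure how long each ~2-second
# chunk took to fetch, estimate throughput, and pick the next rendition.

BITRATES_KBPS = [400, 1200, 2500, 5000]   # hypothetical rendition ladder
CHUNK_SECONDS = 2
SAFETY = 0.8                               # only budget 80% of estimated bandwidth

def next_bitrate(last_bitrate_kbps: int, download_seconds: float) -> int:
    """Choose the highest rendition the measured throughput can sustain."""
    bits_fetched = last_bitrate_kbps * 1000 * CHUNK_SECONDS
    throughput_kbps = bits_fetched / download_seconds / 1000
    budget = throughput_kbps * SAFETY
    eligible = [b for b in BITRATES_KBPS if b <= budget]
    return max(eligible) if eligible else BITRATES_KBPS[0]

if __name__ == "__main__":
    # 2s of 1200 kbps video downloaded in 0.6s -> ~4000 kbps available -> step up
    print(next_bitrate(1200, 0.6))   # 2500
    # the same chunk took 2.5s -> falling behind real time -> step down
    print(next_bitrate(1200, 2.5))   # 400
```

No signaling protocol, no server-side splice logic: each plain HTTP GET simply names the rendition the client thinks it can afford next.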
A Hardware Platform and a Virtual Appliance Walk into a Bar at Interop…

Invariably, when new technology is introduced it causes an upheaval. When that technology has the power to change the way in which we architect networks and application infrastructure, it can be disruptive but beneficial. When that technology simultaneously requires that you abandon advances and best practices in architecture in order to realize those benefits, that’s not acceptable.

Virtualization at the server level is disruptive, but in a good way. It forces organizations to reconsider the applications deployed in their data center and turn a critical eye toward the resources available and how they’re partitioned across applications, projects, and departments. It creates an environment in which the very make-up of the data center can be re-examined, with the goal of making more efficient the network, storage, and application network infrastructure over which those applications are delivered.

Virtualization at the network layer is even more disruptive. From a network infrastructure perspective, few changes are required in the underlying infrastructure to support server virtualization, because the application and its behavior don’t really change when moving from a physical deployment to a virtual one. But the network, ah, the network does require changes when it moves from a physical to a virtual form factor. The way in which scale, fault-tolerance, and availability of the network infrastructure – from storage to application delivery network – are achieved is impacted by the simple change from physical to virtual. In some cases this impact is a positive one; in others, it’s not so positive. Understanding how to take advantage of virtual network appliances such that core network characteristics like fault-tolerance, reliability, and security are not negatively impacted is one of the key factors in the successful adoption of virtual network technology.

Combining virtualization of “the data center network” with the deployment of applications in a public cloud computing environment brings to the fore the core issues of lack of control and visibility in externalized environments. While the benefits of public cloud computing are undeniable (though perhaps not nearly as world-shaking as some would have us believe), the inclusion of externally controlled environments in the organization’s data center strategy will prove to have its challenges. Many of these challenges can be addressed thanks to the virtualization of the network (despite the lack of choice and dearth of services available in today’s cloud computing offerings).

Can the future of application delivery networks be found in neural network theory?
I spent a big chunk of time a few nights ago discussing neural networks with my oldest son over IM. It's been a long time since I've had reason to dig into anything really related to AI (artificial intelligence) and at first I was thinking how cool it would be to be back in college just exploring topics like that. Then, because I was trying to balance a conversation with my oldest while juggling my (fussy) youngest on my lap, I thought no, no it wouldn't. Artificial neural networks (ANN) are good for teaching a system how to recognize patterns, discern complex mathematical relationships, and make predictions based on a variety of inputs. It learns by trying and trying again until the output matches what is expected given a sample (training) data set. That learning process requires feedback; feedback that is often given via backpropagation. Backpropagation can be tricky, but essentially it's the process of determining how far off the output is from the expected output, and then propagating that back into the network so it can essentially learn from its mistakes. Just like us. If you guessed that this was going to tie back into application delivery, you guessed correctly. An application delivery network is not a neural network, but it often has many of the same properties, such as using something similar to a hidden layer (the application delivery controller) to make decisions about application messages, such as to which server to distribute them and how to best optimize those messages. More interestingly, perhaps, is the ability to backpropagate errors and information through the application delivery network such that the application delivery network automatically adjusts itself and makes different decisions for subsequent requests. If the application delivery network is enabled with a services-based API, for example, it can be integrated into applications to provide valuable feedback regarding the state of that application and the messages it receives to the application delivery controller, which can then be adjusted to reflect changes in the state of that application. This is how we change the weights of individual servers in the load balancing algorithms in what is somewhat akin to modifying the weights of the connections between neurons in a neural net. But it's merely a similarity now; it's not a real ANN as it's missing some key attributes and behaviors that would make it one. When you look at the way in which an application delivery network is deployed and how it acts, you can (or at least I can) see the possibilities of employing a neural network model in building an even smarter, more adaptable delivery network. Right now we have engineers that deploy, configure, and test application delivery networks for specific applications like Oracle, Microsoft, and BEA. It's an iterative process in which they continually tweak the configuration of the solutions that make up an application delivery network based on feedback such as response time, size of messages, and load on individual servers. When they're finished, they've documented an Application Ready Network with a configuration that is configured for optimal performance and scalability for that application that can easily be deployed by customers. But the feedback loop for this piece is mostly manual right now, and we only have so many engineers available for the hundreds of thousands of applications out there. And that's not counting all the in-house developed applications that could benefit from a similar process. 
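The feedback loop sketched here is loosely what the post means by changing the weights of individual servers: nudge a pool member’s weight up or down based on an observed signal such as response time, the way a connection weight is adjusted from an error signal. The learning rate, target, clamping bounds, and server names below are invented for illustration; this is a toy, not a product algorithm.

```python
# Toy sketch of feedback-driven weight adjustment for a weighted pool.

TARGET_RESPONSE_MS = 200.0
LEARNING_RATE = 0.1

weights = {"app-server-1": 1.0, "app-server-2": 1.0, "app-server-3": 1.0}

def update_weight(server: str, observed_ms: float) -> None:
    """Shift traffic toward servers beating the target, away from laggards."""
    error = (TARGET_RESPONSE_MS - observed_ms) / TARGET_RESPONSE_MS  # positive = faster than target
    new_weight = weights[server] * (1.0 + LEARNING_RATE * error)
    weights[server] = max(0.1, min(new_weight, 10.0))  # clamp so no server starves or floods

if __name__ == "__main__":
    # Feedback arrives (e.g., via a monitoring or service API) after each sample window.
    for server, rt in [("app-server-1", 120), ("app-server-2", 310), ("app-server-3", 200)]:
        update_weight(server, rt)
    print(weights)  # server-1's weight rises, server-2's falls, server-3's is unchanged
```

Run continuously, a loop like this is the "backpropagation" analogy in miniature: observed outcomes flow back into the decision-making layer and change how the next requests are distributed.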
And our environment is not your environment. In the future, it would be awesome if application delivery networks acted more like neural networks, incorporating feedback themselves based on designated thresholds (response time must be less than X, load on the server must not exceed Y) and tweaking themselves until they meet those goals; all based on the applications and environment unique to the organization.

It’s close. An intelligent application delivery controller is able to use thresholds for response time and size of application messages to determine to which server an individual request should be sent. And it can incorporate feedback through the use of service-based APIs integrated with the application. But it’s not necessarily modifying its own configuration permanently based on that information; it doesn’t have a “learning mode” like so many application firewall and security solutions. That’s an important piece we’re missing - the ability to learn the behavior of an application in a specific environment and adjust automatically to that unique configuration. Like learning that in your environment a specific application task runs faster on server X than it does on servers Y and Z, so it always sends that task to server X. We can do the routing via layer 7 switching, but we can’t (yet) deduce what that routing should be from application behavior and automatically configure it.

We’ve come a long way since the early days of load balancing, where the goal was simply to distribute requests across machines equally. We’ve learned how to intelligently deliver applications, not just distribute them, in the years since the web was born. So it’s not completely crazy to think that in the future the concepts used to build neural networks will be used to build application delivery neural networks. At least I don’t think it is. But then crazy people don’t think they’re crazy, do they?

If Load Balancers Are Dead Why Do We Keep Talking About Them?
Commoditized from solution to feature, from feature to function, load balancing is no longer a solution but rather a function of more advanced solutions that’s still an integral component for highly-available, fault-tolerant applications. Unashamed Parody of Monty Python and the Holy Grail Load balancers: I'm not dead. The Market: 'Ere, it says it’s not dead. Analysts: Yes it is. Load balancers: I'm not. The Market: It isn't. Analysts: Well, it will be soon, it’s very ill. Load balancers: I'm getting better. Analysts: No you're not, you'll be stone dead in a moment. Earlier this year, amidst all the other (perhaps exaggerated) technology deaths, Gartner declared that Load Balancers are Dead. It may come as surprise, then, that application delivery network folks keep talking about them. As do users, customers, partners, and everyone else under the sun. In fact, with the increased interest in cloud computing it seems that load balancers are enjoying a short reprieve from death. LOAD BALANCERS REALLY ARE SO LAST CENTURY They aren’t. Trust me, load balancers aren’t enjoying anything. Load balancing on the other hand, is very much in the spotlight as scalability and infrastructure 2.0 and availability in the cloud are highlighted as issues today’s IT staff must deal with. And if it seems that we keep mentioning load balancers despite their apparent demise, it’s only because the understanding of what a load balancer does is useful to slowly moving people toward what is taking its place: application delivery. Load balancing is an integral component to any high-availability and/or on-demand architecture. The ability to direct application requests across a (cluster|pool|farm|bank) of servers (physical or virtual) is an inherent property of cloud computing and on-demand architectures in general. But it is not the be-all and end-all of application delivery, it’s just the point at which application delivery begins and an integral function of application delivery controllers. Load balancers, back in their day, were “teh bomb.” These simple but powerful pieces of software (which later grew into appliances and later into full-fledged application switches) offered a way for companies to address the growing demand for Web-based access to everything from their news stories to their products to their services to their kids’ pictures. But as traffic demands grew so did the load on servers and eventually new functionality began to be added to load balancers – caching, SSL offload and acceleration, and even security-focused functionality. From the core that was load balancing grew an entire catalog of application-rich features that focused on keeping applications available while delivering them fast and securely. At that point we were no longer simply load balancing applications, we were delivering them. Optimizing them. Accelerating them. Securing them. LET THEM REST IN PEACE… So it made sense that in order to encapsulate the concept of application delivery and move people away from focusing on load balancing that we’d give the product and market a new name. Thus arose the term “application delivery network” and “application delivery controller.” But at the core of both is still load balancing. Not load balancers, but load balancing. A function, if you will, of application delivery. But not the whole enchilada; not by a long shot. 
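Since the function itself keeps coming up, a bare-bones sketch of what load balancing actually does may help ground the discussion: pick a member of a pool for each new request or flow. The pool members, connection counts, and policy choice below are invented for illustration; real controllers layer health checks, persistence, and many more algorithms on top of this.

```python
# Minimal sketch of the core load-balancing function: member selection.

import itertools

class Pool:
    def __init__(self, members: list[str]):
        self.active = {m: 0 for m in members}      # current connection counts
        self._rr = itertools.cycle(members)        # fallback round-robin order

    def pick_least_connections(self) -> str:
        member = min(self.active, key=self.active.get)
        self.active[member] += 1
        return member

    def pick_round_robin(self) -> str:
        return next(self._rr)

    def release(self, member: str) -> None:
        self.active[member] = max(0, self.active[member] - 1)

if __name__ == "__main__":
    pool = Pool(["web-1", "web-2", "web-3"])
    pool.active.update({"web-1": 12, "web-2": 3, "web-3": 7})
    print(pool.pick_least_connections())   # web-2: fewest active connections right now
    print(pool.pick_round_robin())         # web-1: simple rotation, ignores load entirely
```

Everything else an application delivery controller adds – acceleration, offload, security, context – wraps around this decision rather than replacing it.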
If we’re still mentioning load balancing (and even load balancers, as incorrect as that term may be today) it’s because the function is very, very, very important (I could add a few more “verys” but I think you get the point) to so many different architectures and to meeting business goals around availability and performance and security that it should be mentioned, if not centrally then at least peripherally.

So yes. Load balancers are very much outdated and no longer able to provide the biggest bang for your buck. But load balancing, particularly when leveraged as a core component in an application delivery network, is very much in vogue (it’s trendy, like iPhones) and very much a necessary part of a successfully implemented high-availability or on-demand architecture. Long live load balancing.

The House that Load Balancing Built
A new era in application delivery
Infrastructure 2.0: The Diseconomy of Scale Virus
The Politics of Load Balancing
Don't just balance the load, distribute it
WILS: Network Load Balancing versus Application Load Balancing
Cloud computing is not Burger King. You can’t have it your way. Yet.
The Revolution Continues: Let Them Eat Cloud

Do you control your application network stack? You should.
Owning the stack is important to security, but it’s also integral to a lot of other application delivery functions. And in some cases, it’s downright necessary. Hoff rants with his usual finesse in a recent posting with which I could not agree more. Not only does he point out the wrongness of equating SaaS with “The Cloud”, but points out the importance of “owning the stack” to security. Those that have control/ownership over the entire stack naturally have the opportunity for much tighter control over the "security" of their offerings. Why? because they run their business and the datacenters and applications housed in them with the same level of diligence that an enterprise would. They have context. They have visibility. They have control. They have ownership of the entire stack. Owning the stack has broader implications than just security. The control, visibility, and context-awareness implicit in owning the stack provides much more flexibility in all aspects covering the delivery of applications. Whether we’re talking about emerging or traditional data center architectures the importance of owning the application networking stack should not be underestimated. The arguments over whether virtualized application delivery makes more sense in a cloud computing- based architecture fail to recognize that a virtualized application delivery network forfeits that control over the stack. While it certainly maintains some control at higher levels, it relies upon other software – the virtual machine, hypervisor, and operating system – which shares control of that stack and, in fact, processes all requests before it reaches the virtual application delivery controller. This is quite different from a hardened application delivery controller that maintains control over the stack and provides the means by which security, network, and application experts can tweak, tune, and exert that control in myriad ways to better protect their unique environment. If you don’t completely control layer 4, for example, how can you accurately detect and thus prevent layer 4 focused attacks, such as denial of service and manipulation of the TCP stack? You can’t. If you don’t have control over the stack at the point of entry into the application environment, you are risking a successful attack. As the entry point into application, whether it’s in “the” cloud, “a” cloud, or a traditional data center architecture, a properly implemented application delivery network can offer the control necessary to detect and prevent myriad attacks at every layer of the stack, without concern that an OS or hypervisor-targeted attack will manage to penetrate before the application delivery network can stop it. The visibility, control, and contextual awareness afforded by application delivery solutions also allows the means by which finer-grained control over protocols, users, and applications may be exercised in order to improve performance at the network and application layers. As a full proxy implementation these solutions are capable of enforcing compliance with RFCs for protocols up and down the stack, implement additional technological solutions that improve the efficiency of TCP-based applications, and offer customized solutions through network-side scripting that can be used to immediately address security risks and architectural design decisions. The importance of owning the stack, particularly at the perimeter of the data center, cannot and should not be underestimated. 
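To illustrate what control at the point of entry buys you, here is a conceptual sketch of one thing a full proxy can do because it owns the stack: vet a raw request before a server-side connection is ever opened. It is not a working proxy and not any product’s behavior; the method list and size limits are invented examples.

```python
# Conceptual sketch: enforcement at the point of entry, before the app sees anything.

ALLOWED_METHODS = {"GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS"}
MAX_HEADERS = 50
MAX_HEADER_BYTES = 8192

def vet_request_head(raw_head: bytes) -> tuple[bool, str]:
    """Return (accept, reason). Runs before any server-side connection is opened."""
    if len(raw_head) > MAX_HEADER_BYTES:
        return False, "header block too large"            # crude oversize/abuse check
    try:
        text = raw_head.decode("ascii")
    except UnicodeDecodeError:
        return False, "non-ASCII bytes in header block"
    lines = text.split("\r\n")
    parts = lines[0].split(" ")
    if len(parts) != 3 or parts[0] not in ALLOWED_METHODS or not parts[2].startswith("HTTP/"):
        return False, "malformed request line"
    header_lines = [line for line in lines[1:] if line]
    if len(header_lines) > MAX_HEADERS:
        return False, "too many header fields"
    return True, "ok"

if __name__ == "__main__":
    good = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
    bad = b"GET /index.html\r\nHost: example.com\r\n\r\n"
    print(vet_request_head(good))   # (True, 'ok')
    print(vet_request_head(bad))    # (False, 'malformed request line')
```

The same idea extends downward: because the proxy terminates the client’s TCP connection itself, layer 4 misbehavior can be absorbed or rejected at the perimeter instead of being passed along to a hypervisor, guest OS, and application to sort out.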
The loss of control, the addition of processing points at which the stack may be exploited, and the inability to change the very behavior of the stack at the point of entry all come from putting into place solutions incapable of controlling the stack. If you don’t own the stack, you don’t have control. And if you don’t have control, who does?