application delivery controller
WILS: Virtual Server versus Virtual IP Address
Load balancing intermediaries have long used the terms “virtual server” and “virtual IP address”. With the widespread adoption of virtualization these terms have become even more confusing to the uninitiated. Here’s how load balancing and application delivery use the terminology.

I often find it easiest to explain the difference between a “virtual server” and a “virtual IP address (VIP)” by walking through the flow of traffic as it is received from the client. When a client queries for “www.yourcompany.com” it gets back an IP address, of course. If the site is served by a load balancer or application delivery controller, in many cases that IP address is a virtual IP address. That simply means the IP address is not tied to a specific host. It’s kind of floating out there, waiting for requests. It’s more like a taxi than a public bus: a public bus has a predefined route from which it does not deviate, while a taxi can take you wherever you want within the confines of its territory. In the case of a virtual IP address, that territory is the set of virtual servers and services offered by the organization.

The client (the browser, probably) uses the virtual IP address to make a request to “www.yourcompany.com” for a particular resource, such as a web application (HTTP), or to send an e-mail (SMTP). Using the VIP and a TCP port appropriate for the resource, the application delivery controller directs the request to a “virtual server”. The virtual server is also an abstraction; it doesn’t really “exist” anywhere but in the application delivery controller’s configuration. The virtual server determines – via myriad options – which pool of resources will best serve the user’s request. That pool of resources contains “nodes”, which ultimately map to one (or more) physical or virtual web/application servers (or mail servers, or X servers).
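That flow can be sketched in a few lines of Python (the addresses, server names, and pool contents are invented for illustration; this is not any vendor's configuration syntax):

```python
# The VIP is not tied to a host; (VIP, TCP port) selects a virtual server,
# which exists only in the controller's configuration.
virtual_servers = {
    ("203.0.113.10", 80): "vs_http",
    ("203.0.113.10", 25): "vs_smtp",
}

# Each virtual server fronts a pool of nodes: the actual physical or
# virtual web/application (or mail) servers.
pools = {
    "vs_http": ["web1:8080", "web2:8080"],
    "vs_smtp": ["mail1:25"],
}

def direct(vip, port):
    """Client hits the VIP; the controller maps it to a virtual server
    and hands the request to a node from that server's pool."""
    vs = virtual_servers.get((vip, port))
    if vs is None:
        return None
    return pools[vs][0]  # stand-in for a real load balancing decision
```

The last line stands in for whatever load balancing algorithm the virtual server's "myriad options" would actually apply across the pool.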
A virtual IP address can represent multiple virtual servers, and the correct mapping between them is generally accomplished by further delineating virtual servers by TCP destination port. So a single virtual IP address can point to a virtual “HTTP” server, a virtual “SMTP” server, a virtual “SSH” server, etc. Each virtual “X” server is a separate instantiation, all essentially listening on the same virtual IP address. It is also true, however, that a single virtual server can be represented by multiple virtual IP addresses. So “www1” and “www2” may represent different virtual IP addresses, but they might both use the same virtual server. This allows an application delivery controller to make routing decisions based on the host name: “images.yourcompany.com” and “content.yourcompany.com” might resolve to the same virtual IP address and the same virtual server, but the “pool” of resources to which requests for images are directed will be different from the “pool” of resources to which requests for content are directed. This allows for greater flexibility in architecture and scalability of resources at the content-type and application level rather than at the server level.

WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly-dallying around.

WILS: How can a load balancer keep a single server site available?
Most people don’t start thinking they need a “load balancer” until they need a second server. But even if you’ve only got one server, a “load balancer” can help with availability and performance, and can make the later transition to a multiple-server site a whole lot easier. Before we reveal the secret sauce, let me first say that if you have only one server and the application crashes or the network stack flakes out, you’re out of luck. There are a lot of things load balancers/application delivery controllers can do with only one server, but automagically fixing application crashes or network connectivity issues ain’t in the list. If these are concerns, then you really do need a second server. But if you’re just worried about standing up to the load, then a load balancer for even a single server can definitely give you a boost.

4 things you can do in your code now to make it more scalable later
No one likes to hear that they need to rewrite or re-architect an application because it doesn't scale. I'm sure no one at Twitter thought they'd need to overhaul their architecture because it gained popularity as quickly as it did. Many developers, especially in the enterprise space, don't worry about the kind of scalability that sites like Twitter or LinkedIn need to concern themselves with, but they still need to be (or at least should be) concerned with scalability in general and with the effects of inserting an application into a high-scalability environment, such as one fronted by a load balancer or application delivery controller. There are some very simple things you can do in your code, while you're developing an application, that can ease the transition into a high-availability architecture and that will eventually lead to a faster, more scalable application. Here are four things you can do now - and why - to make your application fit better into a high-availability environment in the future and avoid rewriting or re-architecting your solutions later.

1. Don't assume your application is always responsible for cookie encryption

Encrypting cookies in the privacy-lax environment that is today's Internet is the responsible thing to do. In the first iterations of your application you will certainly be responsible for handling the encryption and decryption of cookies, but later on, when the application is inserted into a high-availability environment and an application delivery controller (ADC) is present, that functionality can be offloaded to the ADC.
Offloading the responsibility for encryption and decryption of cookies to the ADC improves performance because the ADC employs hardware acceleration. To make it easier to offload this responsibility to an ADC in the future while still supporting it early on, use a configuration flag to indicate whether you should decrypt or encrypt cookies before examining them. That way you can simply change the configuration flag later on and immediately take advantage of a performance boost from the network infrastructure.

2. Don't assume the client IP is accurate

If you need to use/store/access the client's IP address, don't assume the source address on the connection is accurate. Early on it certainly will be, but when the application is inserted into a high-availability environment and a full-proxy solution is sitting in front of your application, it won't be. A full proxy mediates between client and server, which means it is the client when talking to the server, so its IP address becomes the "client IP". Almost all full proxies insert the real client IP address into the X-Forwarded-For HTTP header, so you should always check that header before checking the client IP address. If there is an X-Forwarded-For value, you'll more than likely want to use it instead of the client IP address. This simple check should alleviate the need to make changes to your application when it's moved into a high-availability environment.

3. Don't use relative paths

Always use the FQDN (fully qualified domain name) when referencing images, scripts, etc. inside your application. Furthermore, use different host names for different content types - i.e. images.example.com and scripts.example.com. Early on all the hosts will probably point to the same server, but ensuring that you're using the FQDN now makes architecting that high-availability environment much easier.
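Points 1 and 2 above can be sketched in a few lines of Python (the flag and helper names are my own, purely illustrative, not a prescribed API):

```python
# Configuration flag: flip to True once an ADC decrypts cookies upstream.
OFFLOAD_COOKIE_CRYPTO = False

def read_cookie(raw_value, decrypt):
    """Point 1: only decrypt locally while the app still owns cookie crypto."""
    if OFFLOAD_COOKIE_CRYPTO:
        return raw_value  # the ADC already decrypted it for us
    return decrypt(raw_value)

def client_ip(remote_addr, headers):
    """Point 2: prefer X-Forwarded-For, inserted by a full proxy,
    over the peer address of the connection."""
    xff = headers.get("X-Forwarded-For")
    if xff:
        # The header may carry a chain ("client, proxy1, proxy2");
        # the left-most entry is the original client.
        return xff.split(",")[0].strip()
    return remote_addr
```

One caveat worth noting: X-Forwarded-For is client-forgeable, so in production you would only trust it when the request actually arrives from a proxy you control.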
While any intelligent application delivery controller can perform layer 7 switching on any part of the URI and arrive at the same architecture, it's much more efficient to load balance and route application data based on the host name. By using the FQDN and separating host names by content type, you can later optimize and tune specific servers for delivery of that content, or use the CNAME trick to improve parallelism and performance in request-heavy applications.

4. Separate out API rate limiting functionality

If you're writing an application with an API for integration later, separate out the rate limiting functionality. Initially you may need it, but when the application is inserted into a high-availability environment with an intelligent application delivery controller, the ADC can take over that functionality and spare your application from having to reject requests that exceed the set limits. As with cookie encryption, use a configuration flag to determine whether you should check this limitation, so it can easily be turned on and off at will. By offloading the responsibility for rate limiting to an application delivery controller you remove the need for the server to waste resources (connections, RAM, cycles) on requests it won't respond to anyway. This improves the capacity of the server and thus your application, making it more efficient and more scalable.

By thinking now about the ways in which your application will need to interact with a high-availability infrastructure later, and adjusting your code to take that into consideration, you can save yourself a lot of headaches when your application is inserted into that infrastructure. That means less rewriting of applications, less troubleshooting, and fewer servers needed to scale up quickly to meet demand. Happy coding!

I CAN HAS DEFINISHUN of SoftADC and vADC?
In the networking side of the world, vendors often seek to differentiate their solutions not just on features and functionality, but on form factor as well. Using a descriptor to impart an understanding of the deployment form factor of a particular solution has always been quite common: appliance, hardware, platform, etc. Sometimes these terms come from analysts; other times they come from vendors themselves. Regardless of where they originate, they quickly propagate, and unfortunately often do so without the benefit of a clear definition. A reader recently asked a question that reminded me that we’ve done just that as cloud computing and virtualization creep into our vernacular. Quite simply, the question was, “What’s the definition of a Soft ADC and vADC?” That’s actually an interesting question, as it’s more broadly applicable than just to ADCs. For example, over the last several years we’ve been hearing about “Soft WOC (WAN Optimization Controller)” in addition to just plain old WOC, and the definition of Soft WOC is very similar to Soft ADC. The definitions are, if not well understood and often used, consistent across the entire application delivery realm – from WAN to LAN to cloud. So this post addresses the question in relation to ADCs more broadly, as there’s an emerging “xADC” model that should probably be mentioned as well. Let’s start with the basic definition of an Application Delivery Controller (ADC) and go from there, shall we?

ADC

An application delivery controller is a device that is typically placed in a data center between the firewall and one or more application servers (an area known as the DMZ). First-generation application delivery controllers primarily performed application acceleration and handled load balancing between servers. The latest generation of application delivery controllers handles a much wider variety of functions, including rate shaping and SSL offloading, as well as serving as a Web application firewall.
If you said an application delivery controller was a “load balancer on steroids” (which is how I usually describe them to the uninitiated) you wouldn’t be far from the truth. The core competency of an ADC is load balancing, and from that core functionality has been derived, over time, the means by which optimization, acceleration, security, remote access, and a wealth of other functions directly related to application delivery in scalable architectures can be applied in a unified fashion. Hence the use of the term “Unified Application Delivery.” If you prefer a gaming metaphor, an application delivery controller is like a multi-classed D&D character (probably a 3e character, because many of the “extra” functions available in an ADC are more like skills or feats than class abilities).

SOFT ADC

So a "Soft ADC" is simply an ADC in software form, deployed on commodity hardware. That hardware may or may not have additional hardware processing (like PCI-based SSL acceleration) to assist in offloading compute-intense processes, and the integration of the software with that hardware varies from vendor to vendor. Soft ADCs are sometimes offered as “softpliances” (many people hate this term), or an “appliance comprised of commodity hardware pre-loaded and configured with the ADC software.” This option allows the vendor to harden and optimize the operating system on which the Soft ADC runs, which can be advantageous to the organization as it will not need to worry about upgrades and/or patches to the solution impacting the functionality of the Soft ADC. This option can also result in higher capacity and better performance for the ADC and the applications it manages, as the operating system’s network stack is often “tweaked” and “tuned” to support the application delivery functions of the Soft ADC.

VIRTUAL ADC (vADC)

A "vADC" is a virtualized version of an ADC.
The ADC may or may not have first been a "Soft ADC"; BIG-IP, for example, is not available as a "Soft ADC" but is available as a traditional hardware ADC or a virtual ADC. vADCs are ADCs deployed in a virtual network appliance (VNA) form factor, as an image compatible with modern virtual machines (VMware, Xen, Hyper-V).

ADC as a SERVICE

There is an additional "type" of ADC emerging, mainly because of proprietary virtual image formats in clouds like Amazon: the "ADC as a service", which is offered as a provisionable service within a specific cloud computing environment and is not portable (or usable) outside that environment. In all other respects the “ADC as a service” is indistinguishable from the vADC, as it, too, is deployed on commodity hardware and lacks integration with the underlying hardware platform or available acceleration chipsets.

A PLACE for EVERYTHING and EVERYTHING in its PLACE

In the general category of application delivery (and most networking solutions as well) we can make the following abstractions regarding these definitions:

“Solution”: a traditional hardware-based solution.
Soft “Solution”: a traditional hardware-based solution in a software form factor that can be deployed on an “appliance” or commodity hardware.
v”Solution”: a traditional hardware-based solution in a virtualized form factor that can be deployed as a virtual network appliance (VNA) on a variety of virtualization platforms.
“Solution” as a Service*: a traditional hardware-based solution in a proprietary form factor (software or virtual) that is not usable or portable outside the environment in which it is offered.

So if we were to tackle “Soft WOC” as well, we’d find that the general definition – a traditional hardware-based solution in a software form factor – also fits that category of solution well.
It may seem to follow logically that any version of an ADC (or network solution) is “as good” as the next, given that the core functionality is almost always the same regardless of form factor. There are, however, pros and cons to each form factor that should be taken into consideration when designing an architecture that may take advantage of an ADC. In some cases a Soft ADC or vADC will provide the best value, in others a traditional hardware ADC, and in many cases a highly scalable and flexible architecture will take advantage of both in the appropriate places within the architecture.

*Some solutions offered “as a service” are more akin to SaaS in that they are truly web services, regardless of underlying implementation, that are “portable” because they can be accessed from anywhere, though they cannot be “moved” or integrated internally as private solutions.

I do not think that word means what you think it means
Greg Ferro over at My Etherealmind has a, for lack of a better word, interesting entry in his Network Dictionary on the term "Application Delivery Controller." He says:

Application Delivery Controller (ADC) - Historically known as a “load balancer”, until someone put a shiny chrome exhaust and new buttons on it and so it needed a new marketing name. However, the Web Application Firewall and Application Acceleration / Optimisation that are in most ADC are not really load balancing so maybe its alright. Feel free to call it a load balancer when the sales rep is on the ground, guaranteed to upset them.

I take issue with this definition primarily because an application delivery controller (ADC) is different from a load balancer in many ways, and most of them aren't just "shiny chrome exhaust and new buttons". He's right that web application firewalls and web application acceleration/optimization features are also included, but application delivery controllers do more than just load balancing these days. Application delivery controller is not just a "new marketing name"; it's a new name because "load balancing" doesn't properly describe the functionality of the products that fall under the ADC moniker today.

First, load balancing is not the same as layer 7 switching. The former is focused on distribution of requests across a farm or pool of servers, whilst the latter is about directing requests based on application-layer data such as HTTP headers or application messages. An application delivery controller is capable of performing layer 7 switching, something a simple load balancer is not. When the two are combined you get layer 7 load balancing, which is a very different beast than the simple load balancing offered in the past and often offered today by application server clustering technologies, ESB (enterprise service bus) products, and solutions designed primarily for load balancing.
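The distinction is easy to see in code. Here is a sketch (Python, with invented pool names; a toy, not how any real device is implemented) of simple load balancing next to layer 7 switching:

```python
import itertools

pool = ["app1", "app2", "app3"]
_round_robin = itertools.cycle(pool)

def simple_balance():
    """Simple load balancing: distribute requests across a pool,
    blind to what the request actually contains."""
    return next(_round_robin)

def l7_switch(headers):
    """Layer 7 switching: direct the request based on application-layer
    data such as HTTP headers."""
    if headers.get("Host", "").startswith("api."):
        return "api_pool"
    if "soap" in headers.get("Content-Type", ""):
        return "soap_pool"
    return "web_pool"
```

Combine the two - pick a pool at layer 7, then distribute within it - and you have layer 7 load balancing.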
Layer 7 load balancing is the purview of application delivery controllers, not load balancers, because it requires application fluency and run-time inspection of application messages - not packets, mind you, but messages. That's an important distinction, but one best left for another day.

The core functionality of an application delivery controller is load balancing, as this is the primary mechanism through which high availability and failover are provided. But a simple load balancer does little more than take requests and distribute them based on simple algorithms; it does not augment the delivery of applications by offering additional features such as L7 rate shaping, application security, acceleration, message security, and dynamic inspection and manipulation of application data.

Second, a load balancer isn't a platform; an application delivery controller is. It's a platform to which tasks generally left to the application can be offloaded, such as cookie encryption and decryption, input validation, transformation of application messages, and exception handling. A load balancer can't dynamically determine the client link speed, decide whether compression would improve or degrade performance, and apply it or not based on that decision. A simple load balancer can't inspect application messages, determine whether one is a SOAP fault, and then execute logic to handle that exception.

An application delivery controller is the evolution of load balancing to something more: to application delivery. If you really believe that an application delivery controller is just a marketing name for a load balancer, then you haven't looked into the differences, or into how an ADC can be an integral part of a secure, fast, and available application infrastructure in a way that load balancers never could. Let me 'splain. No, there is too much. Let me sum up. A load balancer is a paper map.
An ADC is a Garmin or a TomTom.

Block Attack Vectors, Not Attackers
When an army is configuring defenses, it is not merely the placement of troops and equipment that must be considered, but also the likely avenues of attack, the directions the attack could develop if it is successful, the terrain around those avenues – because the most likely avenues of attack will be those most favorable to the attacker – and emplacements. Emplacements include such things as barricades, bunkers, barbed wire, tank traps, and land mines. While the long-term effects of land mines on civilian populations have recently become evident, there is no denying that they hinder an enemy, and they will continue to be used for the foreseeable future. That is because the emplacement category includes several things known as “force multipliers”, land mines being one of the primary ones.

I’ve mentioned force multipliers before, but those of you who are new to my blog, and those who missed that entry, might want a quick refresher. Force multipliers swell the effect of your troops (as if multiplying their number) by impacting the enemy or making your troops more powerful. While the term is relatively recent, the idea has been around for a while. Limit the number of attackers that can actually participate in an attack, and you have a force multiplier, because you can bring more defenses to bear than the attacker can overcome. Land mines are a force multiplier because they channel attackers away from themselves and into areas more suited to defense. They can also slow down attackers and leave them in a pre-determined field of fire longer than would have been their choice. No one likes to stand in a field full of bombs, picking their way through, while the enemy is raining fire down upon them. A study of the North African campaign in World War II gives a good understanding of the ways force multipliers can be employed to astounding effect.
By cutting off avenues of attack and channeling attackers where they wanted them, the defenders of Tobruk – mostly from the Australian 9th Infantry Division – held off repeated, determined attacks, because the avenues left open for attack were tightly controlled by the defenders. And that is possibly the most effective form of defense available to IT security as well. It is no longer enough to detect that you’re being attacked and then try to block it. The sophistication of attackers means that if they can get to your web application from the Internet, they can attack application and OS in very rapid succession, looking for known vulnerabilities. While “script kiddie” is a phrase of scorn in the hacker community, the fact is that running a scripted attack to see if there are any easy penetrations is simple these days, and script kiddies are as real a threat as full-on, high-skill hackers. Particularly if you don’t patch on the day a vulnerability is announced, for any reason.

Let’s start talking about detecting malevolent connections before they touch your server, about asking for login credentials before attackers can footprint what OS you are running, and about sending those who are not trusted off to a completely different server, isolated from the core datacenter network. While we’re at it, let’s start talking about an interface to the public Internet that can withstand huge DDoS and 3DoS attacks without failing, so that not only is the attack averted, it never actually makes it to the server it was intended for, and is shunted off to a different location and/or dropped. Just like force multipliers in the military world, these channel traffic the way you want, stop the attack before it gets rolling, and leave your servers and security staff free to worry about other things. Like serving legitimate customers. It really is easy as a security professional to get cynical.
After all, it is the information security professional’s job to deal with ne’er-do-wells all of the time. And to play the bad cop whenever the business or IT has a “great new idea”. Between the two, it could drag you down. But if you have these two force multipliers in place, more of those great ideas can get past you, because you have a solid wall of protection in place. In fact, add in a web application firewall (WAF) for added protection at the application layer, and you’ve got a solid architecture that will allow you to be more flexible when a “great idea” really sounds like one. And it might just return some optimism, because the bad guys will have fewer avenues of attack, and you’ll feel just that bit ahead of them.

If information technology is undervalued in the organization, information security is really undervalued. But from someone who knows: thank you. It’s a tough job that has to be approached from a “we stopped them today” perspective, and you’re keeping us safe – from the bad guys, and often from ourselves. I’ve done it, and I’m glad you’re doing it.

DISCLAIMER: Yes, F5 makes products that vaguely fill all of the spaces I mention above. That doesn’t mean no one else does. For some of the spaces, anyway. This blog was inspired by a whitepaper I’m working on, so it’s no surprise the areas top-of-mind while writing it are things we do. That doesn’t make them bad ideas; in fact, I would argue the opposite. It makes them better ideas than fluff thrown out there to attract you with no solutions available.

PS: Trying out a new “Related Articles and Blogs” plug-in that Lori found. Let me know if you like the results better.

Related Articles and Blogs:
F5 at RSA: Multilayer Security without Compromise
Making Security Understandable: A New Approach to Internet Security
Committing to Overhead: Proceed With Caution.
F5 Enables Mobile Device Management Security On-Demand
RSA 2012 - Interview with Jeremiah Grossman
RSA 2012 - BIG-IP Data Center Firewall Solution
RSA 2012 - F5 MDM Solutions
The Conspecific Hybrid Cloud
Why BYOD Doesn't Always Work In Healthcare
Federal Cybersecurity Guidelines Now Cover Cloud, Mobility

Community: Force Multiplier for Network Programmability
#SDN Programmability on its own is not enough; a strong community is required to drive the future of network capabilities.

One of the lesser-mentioned benefits of an OpenFlow SDN as articulated by the ONF is the ability to "customize the network":

It promotes rapid service introduction through customization, because network operators can implement the features they want in software they control, rather than having to wait for a vendor to put it in plan in their proprietary products. -- Key Benefits of OpenFlow-Based SDN

This ability is not peculiar to SDN or OpenFlow; rather, it's tied to the concept of a programmable, centralized control model architecture. It's an extension of the decoupling of control and data planes, as doing so affords an opportunity to insert a programmable layer or framework at a single, strategic point of control in the network. It's ostensibly going to be transparent and non-disruptive to the network because any extension of functionality will be deployed in a single location in the network rather than on every network element in the data center.

This is actually a much more powerful benefit than it is often given credit for. The ability to manipulate data in flight is the foundation for a variety of capabilities, from security to acceleration to load distribution. Being able to direct flows in real-time has become, for many organizations, a critical capability in enabling the dynamism required to implement modern solutions, including cloud computing. This is very true at layers 4-7, where ADN provides the extensibility of functionality for application-layer flows, and it will be true at layers 2-3, where SDN will ostensibly provide the same for network-layer flows. One of the keys to success in real-time flow manipulation, a.k.a. network programmability, will be a robust community supporting the controller.
Community is vital to such efforts because it provides organizations with broader access to experts across various domains, as well as to experts in the controller's programmatic environment. Community experts will be vital in assisting with optimization, troubleshooting, and even development of customized solutions for a given controller.

THE PATH to PRODUCTIZATION

What the ONF does not go on to say about this particular benefit is that eventually customizations end up incorporated into the controller as native functionality. That's important, because no matter how you measure it, software-defined flow manipulation will never achieve the same level of performance as the same manipulations implemented in hardware. And while many organizations can accept a few milliseconds of latency, others cannot or will not. Also true is that some customized functionality eventually becomes so broadly adopted that it requires a more turn-key solution; one that does not require the installation of additional code to enable.

This was the case, for example, with session persistence – the capability of an ADC (application delivery controller) to ensure session affinity with a specific server. Such a capability is considered core to load balancing services and is required for a variety of applications, including VDI. Originally, this capability was provided via real-time flow manipulation. It was code that extended the functionality of the ADC and had to be implemented individually by every organization that needed it – which was most of them. The code providing this functionality was shared and refined over and over by the community and eventually became so demanded that it was rolled into the ADC as a native capability. This improved performance, of course, but it also offered a turn-key "checkbox" configuration for something that had previously required code to be downloaded and "installed" on the controller.
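The essence of what that community-shared persistence code did can be sketched in a few lines (Python; an illustrative toy, not the actual code that circulated):

```python
import hashlib

servers = ["app1", "app2", "app3"]
_affinity = {}  # session id -> server

def pick_server(session_id):
    """Session persistence: the first request for a session picks a server;
    every subsequent request with the same session cookie returns to it."""
    if session_id not in _affinity:
        # Stable hash spread; a real controller would apply its
        # configured load balancing algorithm here instead.
        digest = hashlib.sha256(session_id.encode()).hexdigest()
        _affinity[session_id] = servers[int(digest, 16) % len(servers)]
    return _affinity[session_id]
```

Once productized, the same behavior becomes a checkbox on the virtual server rather than code the operator maintains.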
The same path will need to be available for SDN as has been afforded ADN, to mitigate deployment complexity as well as to address the potential performance implications of implementing network functionality in software. That path will be a powerful one, if it is leveraged correctly. While organizations always maintain the ability to extend network services through programmability, if community support exists to assist in refinement and optimization and, ultimately, a path to productization, the agility of network services increases ten- or a hundredfold over the traditional vendor-driven model.

There are four requirements for such a model to be successful for customers and vendors alike:

1. A community that encourages sharing and refinement of "applications".
2. A repository of "applications" that is integrated with the controller and enables simple deployment of "applications". Such a repository may require oversight to certify or verify applications as being non-malicious or error-free.
3. A means by which applications can be rated by consumers. This is the feedback mechanism through which the market indicates to vendors which features and functionality are in high demand and would be valuable implemented as native capabilities.
4. A basic level of configuration management control that enables roll-back of "applications" on the controller. This affords protection against the introduction of applications with errors or that interact poorly when deployed in a given environment.

The programmability of the network, like programmability of the application delivery network, is a powerful capability for customers and vendors alike. Supporting a robust, active community of administrators and operators who develop, share, and refine "control-plane applications" that manipulate flows in real-time to provide additional value and functionality when it's needed is critical to the success of such a model.
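The fourth requirement, roll-back, amounts to keeping enough deployment history to undo a bad "application". A minimal sketch (hypothetical names; no controller exposes this exact interface):

```python
class AppStore:
    """Tracks deployed 'applications' so a misbehaving one can be
    rolled back to whatever version (if any) preceded it."""

    def __init__(self):
        self.history = []   # stack of (name, previous version)
        self.active = {}    # name -> currently deployed version

    def deploy(self, name, version):
        # Remember what was active before, so deploy is reversible.
        self.history.append((name, self.active.get(name)))
        self.active[name] = version

    def rollback(self, name):
        # Undo the most recent deployment of this application.
        for i in range(len(self.history) - 1, -1, -1):
            if self.history[i][0] == name:
                _, previous = self.history.pop(i)
                if previous is None:
                    del self.active[name]
                else:
                    self.active[name] = previous
                return
```

The design choice here is simply that every deploy records its predecessor; that is what makes roll-back safe even for an application deployed several times.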
Building and supporting such a community should be a top priority, and integrating it into the product development cycle should be right behind it.

HTML5 WebSockets Illustrates Need for Programmability in the Network
Midokura – The SDN with a Hive Mind
Reactive, Proactive, Predictive: SDN Models
SDN is Network Control. ADN is Application Control.
F5 Friday: Programmability and Infrastructure as Code
Integration Topologies and SDN
What we’ve got here is a failure to communicate. Some apps you just can’t reach … in the cloud. Доброе утро! What? You don’t speak Russian? Not even “baby” Russian? French? Spanish? Indonesian? Korean? Chinese? If you’ve traveled, you’ve probably picked up a few words here and there, but it’s unlikely you are, at this point, fluent in any of the world’s languages except English. Luckily, most other people in the world speak English better than you speak their language, so you should get along just fine. Unfortunately for folks stuck in the data center, most of their network and application network devices don’t have even that much in common. If you immediately thought “Hey, they have IP and TCP and HTTP in common,” then think again. IP and TCP and even HTTP today are used as transport protocols, not data exchange formats. Your voice and the written word are IP and TCP and HTTP, but the actual data being exchanged? That’s where the difference between English and Russian comes in and rears its ugly head. (Well, there and at bedtime, when you’re trying to explain to two non-English-speaking girls that it’s time to sleep.)
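To make the transport-versus-format distinction concrete, here is a hypothetical order record carried two different ways. HTTP will happily deliver either body over the same plumbing; the receiver still has to speak the payload's language:

```python
import json
import xml.etree.ElementTree as ET

# The same logical record, spoken in two different "languages".
order = {"id": "42", "item": "widget"}

json_body = json.dumps(order)                                # one app's dialect
xml_body = "<order><id>42</id><item>widget</item></order>"   # another's

# Identical TCP/HTTP transport, yet each body needs its own parser:
from_json = json.loads(json_body)
root = ET.fromstring(xml_body)
from_xml = {child.tag: child.text for child in root}
```

Both parses recover the same record, but only because each side knew which format to expect – that agreement, not the transport, is the hard part.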
Now things are starting to get interesting… Nearly everyone has played musical chairs – as a child, if not as an adult with a child. When the music stops there's a desperate scramble to pair up with a chair, lest you end up sitting on the sidelines watching the game while others continue to play until, finally, one stands alone. The ADC market has recently been a lot like a game of musical chairs, with players scrambling every few months for a chair upon which they can plant themselves and stay in the game. While many of the players in adjacent markets – storage, WAN optimization, switching – have been scrambling for chairs for months, it is only today, when the big kids have joined the game, that folks are really starting to pay attention. While a deepening Cisco-Citrix partnership is certainly worthy of such attention, what's likely to be missed in the distraction caused by such an announcement is that the ADC has become such a critical component in data center and cloud architectures that it has left would-be and has-been players scrambling for an ADC chair so they can stay in the game. ADCs have become critical to architectures because of the strategic position they maintain: they are the point through which all incoming application and service traffic flows. They are the pivotal platform upon which identity and access management, security, and cloud integration heavily rely. And with application and device growth continuing unabated, as well as a growing trend toward offering not only applications but APIs, the incoming flows that must be secured, managed, directed, and optimized are only going to increase in the future. F5 has been firmly attached to a chair for the past 16 years, providing market-leading application- and context-aware infrastructure that improves the delivery and security of applications, services, and APIs, irrespective of location or device.
That has made the past three months particularly exciting for us (and that is the corporate "us") as the market has continued to be shaken up by a variety of events. The lawsuit between A10 and Brocade was particularly noteworthy, putting a damper on A10's ability not only to continue its evolution but to compete in the market. Cisco's "we're out, we're not, well, maybe we are" message regarding its ACE product line shook things up again, and was both surprising and yet not surprising. After all, for those of us who've been keeping score, ACE was the third attempt from Cisco at grabbing an ADC chair. Its track record in the ADC game hasn't been all that inspiring. Unlike Brocade and Riverbed, players in peripherally related games who recognized the critical nature of an ADC and jumped into the market through acquisition (Brocade with Foundry, Riverbed with Zeus), Cisco is now trying a new tactic to stay in a game it recognizes as critical: a deeper, more integrated relationship with Citrix. It would be foolish to assume that either party is a big winner in forging such a relationship. Citrix is struggling simply to maintain NetScaler. Revised market share figures for CYQ1 show a player struggling to prop NetScaler up, and doing so primarily through VDI and XenApp opportunities – opportunities that are becoming more and more difficult for Citrix to execute on. This is particularly true for customers moving to dual-vendor strategies in their virtualization infrastructure. Strategies that require an ADC capable of providing feature parity across virtual environments in addition to the speeds and feeds required to support a heterogeneous environment. Strategies that include solutions capable of addressing operational complexity; that enable cloud and software-defined data centers with a strong, integrated, and programmable platform.
While Microsoft applications and Apache continue to be the applications BIG-IP is most often tasked with delivering, virtualization is growing rapidly and Citrix XenApp on BIG-IP is no exception. In fact, we've seen almost 200% growth of Citrix XenApp on BIG-IP from Q2 to Q3 (FY12), owing not just to BIG-IP's strength and experience in delivery optimization and its ability to solve the core architectural challenges associated with VDI, but also to compelling security and performance capabilities, coupled with integration with the orchestration and automation platforms driving provisioning and management of virtualization across desktop and server infrastructure. Citrix's announcement makes much of forthcoming integration, of ecosystems and ongoing development. Yet Cisco has made such announcements in the past, and it leaves one curious as to why it would put so many resources toward integrating with Citrix when it could have done so at any time with its own solution. Integration via partnership is a much more difficult and lengthy undertaking than integration with one's own products, for which one has complete control over source code and entry points. If you think about it, Cisco is asking the market to believe that it will be successful with a partner where it has been unsuccessful with its own products. What we have is a vendor struggling to sell its ADC solution asking for help from a vendor who is struggling to sell its own ADC solution. It's a great vision, don't get me wrong; one that sounds as magically delicious as AON. But it's a vision that relies on integration and development efforts, which require resources – resources that, if Cisco has them, could have been put toward ACE and its integration, but that either do not exist or do not align with Cisco's priorities.
It's a vision that puts Citrix's CloudStack at the center of a combined cloud strategy, one that conflicts with other efforts such as the recent release of Cisco's own version of OpenStack – which, of course, is heavily supported by competing virtualization partner VMware. In the game of musical ADC chairs, only one player has remained consistently in step with the beat of the market's drum, and that player is F5.

Cisco ACE Trade-In Program
Latest F5 Information
F5 News Articles
F5 Press Releases
F5 Events
F5 Web Media
F5 Technology Alliance Partners
F5 YouTube Feed
One of the ways in which traditional architectures and deployment models are actually superior (yes, I said superior) to cloud computing is provisioning. Before you label me a cloud heretic, let me explain. In traditional deployment models, capacity is generally allocated based on anticipated peaks in demand. Because the time required to acquire, deploy, and integrate hardware into the network and application infrastructure is substantial, this process is planned for and well understood, and the resources required are in place before they are needed. In cloud computing, the benefit is that the time required to acquire those resources shrinks to virtually nothing, which makes capacity planning much more difficult. The goal is just-in-time provisioning – resources are not provisioned until you are sure you're going to need them, because part of the value proposition of cloud and highly virtualized infrastructure is that you don't pay for resources until you need them. But it's very hard to provision just-in-time, and sometimes the result will end up being almost-but-not-quite-in-time. Here's a cute [whale | squirrel | furry animal] to look at until service is restored. While fans of Twitter's fail whale are loyal, and everyone will likely agree its inception and subsequent use bought Twitter more than a bit of patience with its oftentimes unreliable service, not everyone will be as lucky or have customers as understanding as Twitter's. We'd all really rather not see the Fail Whale, regardless of how endearing he (she? it?) might be. But we also don't want to overprovision and potentially end up spending more money than we need to. So how can these two needs be balanced?
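For those who want an actual formula behind "just-in-time", one hedged sketch is to provision when the time until demand exhausts current capacity falls within the provisioning lead time. The linear growth model and the numbers in the usage comments are illustrative assumptions, not a product rule:

```python
def should_provision(current_load, capacity, growth_per_min, lead_time_min):
    """Return True when projected load will hit capacity before a newly
    requested instance could possibly come online.

    current_load / capacity: in the same units (e.g. requests per second)
    growth_per_min: observed demand growth per minute, same units
    lead_time_min: minutes needed to boot and integrate a new instance
    """
    if growth_per_min <= 0:
        return False  # demand is flat or falling; provisioning can wait
    minutes_to_exhaustion = (capacity - current_load) / growth_per_min
    return minutes_to_exhaustion <= lead_time_min
```

So at 800 of 1000 req/s, growing 10 req/s each minute with a 5-minute boot time, there are 20 minutes of headroom and no action is needed; at 960 req/s only 4 minutes remain and the trigger fires. Provision too early and you pay for idle capacity; too late and out comes the whale.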