Building an elastic environment requires elastic infrastructure

One of the reasons some folks push for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits capacity in terms of throughput - regardless of the layer of the application stack at which it happens - it's frustrating to think you might need to upgrade the hardware rather than simply add more compute power via a virtual image.

The truth is that this makes sense. The infrastructure supporting a virtualized environment should be elastic. It should be able to dynamically expand without requiring a new network architecture, a higher performing platform, or new configuration. You should be able to just add more compute resources and walk away.

The good news is that this is possible today. It just requires that you carefully consider your choices in network and application network infrastructure when you build out your virtualized infrastructure.


Last year F5 introduced VIPRION, an elastic, dynamic application networking delivery platform capable of expanding capacity without requiring any changes to the infrastructure.

VIPRION is a chassis-based, bladed application delivery controller, and its bladed system behaves in much the same way a virtualized equivalent would. Say you start with one blade in the system, and soon after you discover you need more throughput and more processing power. Rather than bringing online a new virtual image of such an appliance to increase capacity, you add a blade to the system and voila! VIPRION immediately recognizes the blade and simply adds it to its pools of processing power and capacity. There's no need to reconfigure anything; VIPRION essentially treats each blade like a virtual image and automatically distributes requests and traffic across the network and application delivery capacity available on the blade. Just like a virtual appliance model would, but without concerns about the reliability and security of the platform.
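As a rough sketch of the pooling behavior described above - class and method names here are purely illustrative, not an F5 API - adding a member to a shared capacity pool can look like this: capacity grows and traffic spreads across the new member with no reconfiguration step.

```python
import itertools

class Blade:
    """One unit of capacity in the pool (illustrative only)."""
    def __init__(self, blade_id, capacity_gbps):
        self.blade_id = blade_id
        self.capacity_gbps = capacity_gbps

class Chassis:
    """Treats each blade as another member of a shared capacity pool."""
    def __init__(self):
        self.blades = []
        self._rr = None  # round-robin iterator over current blades

    def add_blade(self, blade):
        # Inserting a blade just grows the pool; no other configuration changes.
        self.blades.append(blade)
        self._rr = itertools.cycle(self.blades)

    @property
    def total_capacity_gbps(self):
        return sum(b.capacity_gbps for b in self.blades)

    def route_request(self):
        # Distribute requests across whatever blades are present right now.
        return next(self._rr).blade_id

chassis = Chassis()
chassis.add_blade(Blade("blade-1", 10))
print(chassis.total_capacity_gbps)
chassis.add_blade(Blade("blade-2", 10))
print(chassis.total_capacity_gbps)
```

The point of the sketch is the `add_blade` call: nothing upstream of the pool has to be told about the new member.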

Traditional application delivery controllers can also be scaled out horizontally to provide similar functionality and behavior. By deploying additional application delivery controllers in what is often called an active-active model, you can rapidly deploy and synchronize configuration of the master system to add more throughput and capacity. Meshed deployments comprising more than a pair of application delivery controllers can also provide additional network compute resources beyond what is offered by a single system.

The latter option (the traditional scaling model) requires more work to deploy than the former (VIPRION) simply because it requires additional hardware and all the overhead required of such a solution. The elastic option with bladed, chassis-based hardware is really the best option in terms of elasticity and the ability to grow on-demand as your infrastructure needs increase over time.


Often overlooked in the network diagrams detailing virtualized infrastructures is the storage layer. The increase in storage needs in a virtualized environment can be overwhelming, because the storage access layer must be standardized so that virtual images of applications can be deployed in a common, unified way regardless of which server they need to execute on at any given time.

This means a shared, unified storage layer on which to store images that are necessarily large. This unified storage layer must also be expandable. As the number of applications and associated images grows, storage needs increase. What's needed is a system in which additional storage can be added in a non-disruptive manner. If you have to modify the automation and orchestration systems driving your virtualized environment whenever additional storage is added, you've lost some of the benefits of a virtualized storage infrastructure.

F5's ARX series of storage virtualization devices provides that layer of unified storage infrastructure. By normalizing the namespaces through which files (images) are accessed, the systems driving a virtualized environment can be assured that images are available via the same access method regardless of where the file or image is physically located. Virtualized storage infrastructure systems are dynamic; additional storage can be added to the infrastructure and "plugged in" to the global namespace, increasing the storage available in a non-disruptive manner.
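A minimal sketch of the global-namespace idea, assuming a simple longest-prefix mapping from logical paths to physical back-ends (none of these names or mechanisms come from ARX itself): clients keep resolving the same logical paths while new physical storage plugs in underneath.

```python
class GlobalNamespace:
    """Maps logical path prefixes to physical storage back-ends (illustrative)."""
    def __init__(self):
        self.mounts = {}  # logical prefix -> physical target

    def add_backend(self, logical_prefix, physical_target):
        # New storage "plugs in" under the existing namespace;
        # clients keep using the same logical paths.
        self.mounts[logical_prefix] = physical_target

    def resolve(self, logical_path):
        # Longest-prefix match from a logical path to its physical location.
        for prefix in sorted(self.mounts, key=len, reverse=True):
            if logical_path.startswith(prefix):
                return logical_path.replace(prefix, self.mounts[prefix], 1)
        raise FileNotFoundError(logical_path)

ns = GlobalNamespace()
ns.add_backend("/images/", "nas1:/vol/images/")
print(ns.resolve("/images/web01.img"))      # served from nas1
ns.add_backend("/images/archive/", "nas2:/vol/old/")
print(ns.resolve("/images/archive/db.img"))  # now served from nas2
```

Adding `nas2` changed nothing for callers: the logical `/images/...` paths they were already using still resolve.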

An intelligent virtualized storage infrastructure can further improve the efficiency of the available storage by tiering it. Images and files accessed more frequently can be stored on fast, tier-one storage so they load and execute more quickly, while less frequently accessed files and images can be moved to less expensive and perhaps less performant storage systems.
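The tiering policy above can be sketched as a simple frequency threshold. The threshold value, tier names, and access counts here are assumptions for illustration only; a real tiering engine would use richer policy inputs.

```python
def assign_tiers(access_counts, hot_threshold=100):
    """Map each file to a storage tier by access frequency (illustrative policy)."""
    placement = {}
    for path, count in access_counts.items():
        # Frequently accessed images stay on fast tier-one storage;
        # cold files migrate to cheaper capacity storage.
        placement[path] = "tier1-fast" if count >= hot_threshold else "tier2-capacity"
    return placement

counts = {"web01.img": 520, "db.img": 310, "legacy-app.img": 4}
print(assign_tiers(counts))
```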


By deploying elastic application delivery network infrastructure instead of virtual appliances you maintain stability, reliability, security, and performance across your virtualized environment. Elastic application delivery network infrastructure is already dynamic, and offers a variety of options for integration into automation and orchestration systems via standards-based control planes, many of which are nearly turn-key solutions.

The reasons why some folks might desire a virtual appliance model for their application delivery network infrastructure are valid. But the reality is that the elasticity and on-demand capacity offered by a virtual appliance is already available in proven, reliable hardware solutions today that do not require sacrificing performance, security, or flexibility.



Published Jan 13, 2009
Version 1.0



  • Colin_Walker_12 (Historic F5 Account)



    "So what does one do when 60Gbps is required? Do we have any hardware option out there right capable of handling it? Clustered software solution is an easy match to the challenge."



    I like your definition of "easy". :) I'd love to see an "easy" to configure, deploy, manage and maintain software solution that can push 60Gbps. I really would, but that doesn't mean they exist.



    The fact of the matter is that you can easily make an argument for a software solution if you slant the discussion in that direction. When you look at the facts and the products used by the giants out there that are actually pushing the huge amounts of traffic you're talking about, they just aren't using software. Do you really think that's because they don't know any better? Don't they pay people millions of dollars to know what the best option is?



    If I, as an admin, had the choice between a handful of hardware systems that used a single management tool for licensing, version updates, patching and administration, vs. what...20? 40? 100? servers running virtual instances of a software based load balancer trying to do the same job, the choice would be easy.



    As Lori already pointed out, absolutely every part of the servers from the hardware to the OS to security patches for the OS and any and ALL other software running on the system, to the virtualization software itself, to the controller, to the virtual instances and all software running on them, etc. ... all of it must be 100% in sync, 100% secure, and easy to manage / patch when new things must be rolled out. That management nightmare alone would be enough to turn me off to the idea, let alone the complex and painful process of trying to configure that many systems into a cluster.



    That's not even getting into the differences in data center space, power, cooling, etc. How do your hundreds of clustered systems look on a heat map compared to 10 hardware solution boxes? What about the increase in individual component failure that you're now driving up at an exponential level? Hard drives, power supplies, backplanes... these things fail, and the more systems you're running, the more often they're going to do so in aggregate for the solution. Have you factored in long-term hardware maintenance costs?



    Your idea of infinitely scaling a cluster is a pretty one, but also unrealistic. You're saying that at some point the hardware solution will be outdated, but the hardware in your cluster won't? Does it somehow upgrade itself so it's never old or underpowered? You're going to just keep adding more and more boxes to it, so now you have mis-matched systems in your cluster, each with their own parts to keep in stock for replacement? Sounds...interesting.



    The bottom line is, there is not a single, easy solution when talking about traffic loads in the stratosphere. The benefits of a hardware solution, though, are very clear to me. Call me biased if you want, but look at the market and I think it's hard for anyone who's interested in the best solution to disagree.



  • Ah, no, we won't likely be on the same page.



    We haven't agreed on performance, reliability, or security at this point - only the suitability of traffic managers as virtual appliances for testing, development, and training.



    You're not comparing MTBF of the solutions, nor are you including MTBF of each component required in a virtual appliance - OS, VM, and solution. You're not taking into consideration that the security of a virtual appliance relies upon not only the solution's security, but the security of the VM, the hypervisor, the OS, and all related software executing on that server. And you're not taking into consideration performance as it relates to throughput - hardware and software have vastly different throughput capabilities, and software is further limited by its reliance on the multiple layers of software required.



    In my long years evaluating application delivery solutions for Network Computing I have never run across a software solution capable of matching the throughput and performance of a hardware solution. If you distribute the software across multiple instances, you might be able to match the performance, but then your CapEx story goes out the window as each instance requires more hardware and more maintenance and increases the chances of a failure.
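The reliability arithmetic running through the two comments above - layered dependencies and aggregate component failure - can be made concrete with two small calculations. All rates below are assumptions for illustration; neither comment cites actual figures.

```python
def stacked_availability(layers):
    """Availability of a serial stack: every layer must be up at once."""
    result = 1.0
    for availability in layers:
        result *= availability
    return result

def p_any_failure(per_box_annual_rate, boxes):
    """Probability that at least one of n boxes fails within a year."""
    return 1 - (1 - per_box_annual_rate) ** boxes

# A virtual appliance depends on server hardware, host OS, hypervisor,
# guest OS, and the application itself (five assumed layers at 99.9% each),
# versus a purpose-built appliance treated as a single layer.
print(round(stacked_availability([0.999] * 5), 4))
print(round(stacked_availability([0.999]), 4))

# Aggregate hardware failures grow with fleet size: at an assumed 5%
# annual per-box failure rate, the chance of at least one failure in the
# cluster rises quickly as the box count grows.
for n in (10, 40, 100):
    print(n, round(p_any_failure(0.05, n), 3))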



  • @Izzy



    If one were to infer that because I said a particular technology has benefits that F5 is working on a solution in that area then F5 would be spreading its resources across a much wider variety of markets than application delivery.



    Time and deployments will tell, of course. You can go today and talk to vendors like Blue Lock and Joyent and ask them which solution they prefer, and what features it is they require to build the foundation of their cloud computing infrastructure.



    It isn't so much about hardware vs software as it is flexibility, integration capabilities and features.
  • "Another thing that virtual traffic managers can address is development infrastructure. Can you imagine the cost providing access to legacy hardware traffic managers to developers in order for them to develop their application in conjunction with traffic managers (and utilize intelligent traffic manager functions as an extension of their application)?"



    And that's exactly where a virtual appliance version of a network or application network solution makes sense and where they *should* be employed.