on 04-May-2011 02:44
#vcmp #interop Whether it’s a need to support cloud computing or to manage the myriad requirements of internal customers, the new network must go beyond multi-tenancy.
It’s only natural, after all, that once we worked out the quirks and flaws of server and storage virtualization we’d move on to the next layer of the data center: the network. What enterprises are discovering as they build out their own cloud computing or IT-as-a-Service environments is that multi-tenancy doesn’t always go far enough to meet the needs of their internal constituents. It’s not just a matter of role-based administrative access or isolating one department’s configuration from another’s; it goes deeper than that, to maintenance windows, fault isolation, and even the need to run different versions of the same solution. Addressing these diverse needs non-disruptively requires a virtualized approach. But just as valid are the arguments against moving some network-oriented solutions to a virtual form factor, as pointed out by Mike Brandenburg:
For all of the compute power that a virtual environment can bring to bear on a workload, there are still many tasks that favor dedicated hardware. Network processes and tasks like SSL offload and network forensics have deferred pre-processing tasks, such as processing gigabytes of network packets, to discrete chipsets built into the hardware appliances. These chipsets take the burden off of the appliance’s general CPU. Dedicated hardware remains first choice for these specific network tasks.
-- Replacing hardware-based network appliances with virtual appliances, 28 April 2011
The reason analysts and the industry at large are embracing a virtualized network is to achieve the fault isolation, provisioning flexibility, and improved efficiency of network components that lead to better returns on capital investments. Some point to opposing views that hardware, even with multi-tenant capabilities, isn’t enough to meet the challenges of modern data center architectures, and conclude that the only viable option is to move the network to a virtual form factor, counting on improvements in x86 processing power to offset the loss of performance. But there is an alternative.
Multi-tenancy has thus far been considered the solution for addressing the diverse needs of applications, departments, organizations, and customers. But even though many network hardware solutions have gone multi-tenant, there remain very real operational and business requirements that such an approach does not meet.
This is due to the way multi-tenancy is implemented inside a solution, as opposed to the way an internally virtualized network device is partitioned. If we look at how resource allocation and operating systems are partitioned in each model, we find that the results are very different.
These differences lead to different benefits and different limitations. For example, because a multi-tenant model shares a single operating system, it requires less memory than a virtualized system running multiple operating system instances. Sharing an operating system also means sharing the underlying resources, so partitions can expand until all resources on the system are utilized.
But the downside of that flexibility is the inability to run different versions of the solution. A network device tightly couples its operating system and software to improve performance and ensure reliability, but that means every instance in a multi-tenant system must run the same operating system and solution version. This is a drawback when certain features or functions of an older or newer version are required to meet business requirements or to solve a particular technical challenge. Multi-tenant systems also offer a lesser degree of fault isolation than virtualized systems because of the shared operating system and its tight coupling with the device software. An issue that causes the device to reset, reboot, or restart in a multi-tenant system necessarily disrupts every instance on the device.
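The version-coupling and fault-isolation difference can be illustrated with a toy model. This is purely conceptual, not any vendor’s API; the class and attribute names are invented for illustration:

```python
# Toy model contrasting a multi-tenant device (one shared OS and
# software version, one fault domain) with an internally virtualized
# device (one guest OS image per instance).

class MultiTenantDevice:
    """All partitions share a single OS/software version and fault domain."""
    def __init__(self, version, tenants):
        self.version = version                      # one version for everyone
        self.tenants = {t: "up" for t in tenants}

    def reboot(self):
        # A reset of the shared OS necessarily disrupts every partition.
        for t in self.tenants:
            self.tenants[t] = "down"

class VirtualizedDevice:
    """Each guest runs its own OS image, so versions and faults are isolated."""
    def __init__(self):
        self.guests = {}  # name -> {"version": ..., "state": ...}

    def add_guest(self, name, version):
        # Guests may run different software versions side by side.
        self.guests[name] = {"version": version, "state": "up"}

    def reboot_guest(self, name):
        # Only the named guest is disrupted; the others keep running.
        self.guests[name]["state"] = "down"

mt = MultiTenantDevice("11.0", ["finance", "hr"])
mt.reboot()
print(mt.tenants)  # every tenant goes down together

vd = VirtualizedDevice()
vd.add_guest("finance", "10.2")  # older version for one group
vd.add_guest("hr", "11.0")       # newer version for another
vd.reboot_guest("finance")
print(vd.guests["finance"]["state"], vd.guests["hr"]["state"])
```

The sketch captures the two claims above: the shared-OS model forces a single version on all instances and propagates a restart to all of them, while per-guest OS images decouple both.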
Lastly, a multi-tenant solution with an expanding (flexible) resource allocation model, while appropriate and helpful in situations where unanticipated traffic may be encountered, can negatively impact performance. A spike in processing from one instance impacts all other instances by consuming shared resources. That is detrimental to the reliable, predictable performance required for real-time traffic processing in many vertical industries.
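The noisy-neighbor effect of an expandable shared pool can be sketched with a few lines of arithmetic. Again this is a deliberately simplified illustration, with invented function names and capacity numbers, not a model of any real device:

```python
# Toy sketch of shared (expandable) vs. fixed-partition resource allocation.

SHARED_CAPACITY = 100  # total units of CPU/memory on the device (made up)

def shared_allocation(demands):
    """Expandable model: instances draw from one pool, so a spike in one
    instance's demand shrinks what is left for everyone else."""
    alloc, remaining = {}, SHARED_CAPACITY
    for name, demand in demands.items():
        alloc[name] = min(demand, remaining)
        remaining -= alloc[name]
    return alloc

def partitioned_allocation(demands, share):
    """Fixed partitions: each instance is capped at its own share, so one
    instance's spike cannot consume another's capacity."""
    return {name: min(demand, share) for name, demand in demands.items()}

# Instance "a" spikes to 90 units while "b" and "c" each need only 20.
demands = {"a": 90, "b": 20, "c": 20}

print(shared_allocation(demands))                         # "a" starves "b" and "c"
print(partitioned_allocation(demands, SHARED_CAPACITY // 3))  # "b" and "c" unaffected
```

Under the shared model the spiking instance consumes 90 of the 100 units and the well-behaved instances are starved; under fixed partitions each instance keeps its guaranteed share, which is exactly the predictability argument made above.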
This is still a completely valid means of achieving the required fault isolation and reliability of performance. But to move forward, and to continue evolving infrastructure to meet the rapidly changing requirements of an increasingly dynamic data center, we need a network-based solution that addresses these same concerns without compromising the benefits of tightly coupled hardware and software, namely predictable performance and enterprise-class throughput. A solution that also addresses the very real possibility of network sprawl, that unsustainable model of growth that has traditionally been addressed through consolidation.
If a network can’t go virtual, then virtual must come to the network…