Network Virtualization Reality Check
There are quite a few pundits out there who would like to convince you that a purely virtual infrastructure is the wave of the future. Most of them have a bias driving them to this conclusion, and they’re hoping you’ll overlook it. Others just want to see everything virtualized because they’re aware of the massive benefits that server virtualization – and in most cases desktop virtualization – has brought to the enterprise.
But there’s always a caveat with people who look ahead and see One True Way. The current state of high tech rarely allows a single architectural solution to emerge, if for no other reason than the sheer volume of legacy code, devices, and processes already in place. Ask anyone in storage networking about that: there have been several attempts at One True Way to access your storage, and unfortunately for those who propound them, the market continues to purchase what’s best for its needs. Those needs vary greatly from one organization to the next – or even from one application to the next within an organization.
Network appliances – software running on Commercial Off The Shelf (COTS) server hardware – have been around forever. F5 BIG-IP devices used to fall into this category, and like most networking companies that survive their first few years, we eventually moved to purpose-built hardware to handle the speeds required of a high-performance networking device. The software IP stacks available weren’t fast enough, and truth be told, the other built-in bottlenecks of commodity hardware were causing performance problems too.
And that, in a nutshell, is why network virtualization everywhere will not be the future. There are certainly places where virtualized networking gear makes sense – like in the cloud, where you don’t have physical hardware deployed. But there are places – like your primary datacenter – where it does not. Pushing datacenter-scale traffic through a VM, with at least two functional operating systems (the hypervisor host and the guest) sitting between the code and the hardware, on physical hardware that is more than likely shared with other guest images, is just not feasible in many situations.
You can scale out, that’s true, but how many VMs equal a physical box? It’s not just the cost of the VMs: the servers they reside on cost money to acquire and maintain too, and as you scale out, it takes more and more of them. Placing a second instance of the VM on the same server to alleviate a network throughput problem would be… counterproductive, after all.
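To make that scale-out math concrete, here’s a back-of-envelope sketch in Python. Every number in it – per-VM throughput, VMs per host, hardware and license costs – is a hypothetical placeholder, not a benchmark or a vendor quote:

```python
# Back-of-envelope comparison: one physical ADC vs. scaled-out virtual ADCs.
# Every number below is a hypothetical placeholder, not real pricing or a benchmark.

import math

PHYSICAL_THROUGHPUT_GBPS = 80   # purpose-built appliance (hypothetical)
PHYSICAL_COST = 90_000          # appliance acquisition cost (hypothetical)

VIRTUAL_THROUGHPUT_GBPS = 5     # per VM, limited by the software stack (hypothetical)
VIRTUAL_LICENSE_COST = 8_000    # per VM instance (hypothetical)

VMS_PER_HOST = 2                # co-located VMs contend for the same physical NICs
HOST_COST = 12_000              # per commodity server (hypothetical)

# How many virtual instances does it take to match the physical box's throughput?
vms_needed = math.ceil(PHYSICAL_THROUGHPUT_GBPS / VIRTUAL_THROUGHPUT_GBPS)

# Each host can only carry so many network-heavy VMs before they fight over
# the same NICs -- the "second instance on the same server" problem above.
hosts_needed = math.ceil(vms_needed / VMS_PER_HOST)

virtual_total = vms_needed * VIRTUAL_LICENSE_COST + hosts_needed * HOST_COST

print(f"VMs needed:              {vms_needed}")
print(f"Hosts needed:            {hosts_needed}")
print(f"Virtual scale-out cost:  ${virtual_total:,}")
print(f"Physical appliance cost: ${PHYSICAL_COST:,}")
```

With these made-up numbers the virtual build-out costs more than twice the physical appliance, before you even count power, cooling, and management overhead. With different numbers the math can flip the other way – which is exactly the point: run it for your own environment rather than taking a pundit’s word for it.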
So there are plenty of reasons to make use of a hybrid networking architecture, and those reasons aren’t going away any time soon. As I often say, treat pundits who tell you there is only one wave of the future with a bit of skepticism; they normally have a vested interest in seeing a particular solution become the One True Way.
Just like in the storage networking space, ignore the voices that don’t suit your needs and choose solutions that address your architecture and solve your problems. Eventually they’ll stop trying to tell you what to do, because they’ll realize the futility of doing so. And you’ll still be rocking the house, because in the end it’s about you serving the needs of the business in the manner that is most efficient.
*** Disclaimer: yes, F5 sells both physical and virtual ADCs, which fall into the category of “network infrastructure,” but I don’t feel that creates a bias in my views; it simply seems odd to me to claim that all solutions are best served by one or the other. Rather, I think that F5 in general, like me in particular, sees the need for both types of solutions and is fulfilling that need.
Think of it like this… I reject One True Way in my roleplaying, and I reject it in my technology – the two things I most enjoy. So working at F5 isn’t the cause of my belief, just a happy coincidence.