Building an elastic environment requires elastic infrastructure
One of the reasons behind some folks pushing for infrastructure as virtual appliances is the on-demand nature of a virtualized environment. When network and application delivery infrastructure hits c...
Published Jan 13, 2009
Version 1.0
Lori_MacVittie
Employee
Joined October 17, 2006
Colin_Walker_12
Historic F5 Account
Jan 14, 2009
Izzy,
"So what does one do when 60Gbps is required? Do we have any hardware option out there right capable of handling it? Clustered software solution is an easy match to the challenge."
I like your definition of "easy". :) I'd love to see a software solution that's "easy" to configure, deploy, manage and maintain and that can push 60Gbps. I really would, but that doesn't mean one exists.
The fact of the matter is that you can easily make an argument for a software solution if you slant the discussion in that direction. When you look at the facts and the products used by the giants out there that are actually pushing the huge amounts of traffic you're talking about, they just aren't using software. Do you really think that's because they don't know any better? Don't they pay people millions of dollars to know what the best option is?
If I, as an admin, had the choice between a handful of hardware systems that used a single management tool for licensing, version updates, patching and administration, vs. what...20? 40? 100? servers running virtual instances of a software based load balancer trying to do the same job, the choice would be easy.
As Lori already pointed out, absolutely every part of the servers from the hardware to the OS to security patches for the OS and any and ALL other software running on the system, to the virtualization software itself, to the controller, to the virtual instances and all software running on them, etc. ... all of it must be 100% in sync, 100% secure, and easy to manage / patch when new things must be rolled out. That management nightmare alone would be enough to turn me off to the idea, let alone the complex and painful process of trying to configure that many systems into a cluster.
That's not even getting into the differences in data center space, power, cooling, etc. How do your hundreds of clustered systems look on a heat map compared to 10 hardware solution boxes? What about the increase in individual component failures now that you've multiplied the part count many times over? Hard drives, power supplies, backplanes... these things fail, and the more systems you're running, the more often they're going to do so in aggregate across the solution. Have you factored in long-term hardware maintenance costs?
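The aggregate-failure point is easy to put numbers on: if each box fails independently, the expected number of failures per year scales linearly with box count, so a 100-node cluster sees far more maintenance events than a handful of appliances even if each individual node is fairly reliable. A minimal sketch (the per-box failure probabilities here are illustrative assumptions, not vendor figures):

```python
# Rough comparison of expected annual hardware failures:
# a few purpose-built appliances vs. many commodity servers in a cluster.
# The annual per-box failure probabilities below are hypothetical.

def expected_failures(box_count: int, annual_failure_prob: float) -> float:
    """Expected number of box failures per year, assuming independent failures."""
    return box_count * annual_failure_prob

def prob_at_least_one_failure(box_count: int, annual_failure_prob: float) -> float:
    """Probability that at least one box fails during the year."""
    return 1 - (1 - annual_failure_prob) ** box_count

# 10 appliances at a 2% annual failure rate vs. 100 servers at 5%
appliances = expected_failures(10, 0.02)
cluster = expected_failures(100, 0.05)

print(f"Appliances: {appliances:.1f} expected failures/year")
print(f"Cluster:    {cluster:.1f} expected failures/year")
print(f"P(>=1 cluster failure) = {prob_at_least_one_failure(100, 0.05):.3f}")
```

With these assumed rates, the cluster generates 25x the failure events of the appliance deployment, and a failure somewhere in the cluster in any given year is a near-certainty, which is the operational burden Colin is describing.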
Your idea of infinitely scaling a cluster is a pretty one, but also unrealistic. You're saying that at some point the hardware solution will be outdated, but the hardware in your cluster won't? Does it somehow upgrade itself so it's never old or underpowered? You're going to just keep adding more and more boxes to it, so now you have mismatched systems in your cluster, each with their own parts to keep in stock for replacement? Sounds...interesting.
The bottom line is, there is not a single, easy solution when talking about traffic loads in the stratosphere. The benefits of a hardware solution, though, are very clear to me. Call me biased if you want, but look at the market, and I think it's hard for anyone who's interested in the best solution to disagree.
Colin