Managing a datacenter is often like managing a multi-generational family – you’ve got applications across a variety of life stages that need to be managed individually, and keeping costs down while doing so is a constant concern.
Those who know Don and me know we have a multi-generational family. Our oldest son is twenty-three and “The Toddler” is, well, almost three. There’s still “The Teenager” at home, and there’s also a granddaughter in there who is also almost three, so we’ve got a wide variety of children across which we have to share our limited resources.
Each one, of course, ends up consuming resources in a very different pattern. The Toddler grazes. All day and sometimes into the night. The Teenager eats very sporadically and seems to enjoy consuming my resources, perhaps because they’re often frozen, sweet, and smothered in chocolate. Our oldest son will consume resources voraciously when he is at home, a fairly regular event that generally follows a set pattern. The oldest daughter and The Granddaughter are generally speaking fairly minimal consumers of whatever might be in the refrigerator when they visit and their pattern of visiting is more sporadic than that of the oldest son.
Now I’m a mom so I try to make certain that each one of these very different children is able to consume the resources they enjoy/need when they need them. On-demand. And it’s not just resource consumption in terms of tasty treats that varies over time and across children, it’s also our time and, of course, money. If there’s one thing managing a data center and a family share, it’s the financial considerations of spreading a limited budget across multiple children (applications).
If you extrapolate the difficulties (and there are difficulties, I assure you) in trying to manage the needs of four children who span two different generations, you can see why it’s so difficult to manage the needs of a multi-generational datacenter. Just replace “children” with “applications” and you’ll start to see what I mean.
Most enterprise folks understand the lifecycle of an application and that its lifecycle spans years (and sometimes decades). They also understand that as applications move through their lifecycle from Toddler to Teenager to Adult that they have different resource consumption patterns. And not just compute resource, but storage, network, application delivery network, financial and people resources. Not only does the rate of resource consumption change over time but the patterns also change over time.
That means, necessarily, that the policies and processes in place to manage the resource consumption of those applications must also change over time based on the stage of “life” the application is in. That’s in addition to the differences in policies and processes between applications at any given point in time. After all, while I might respond favorably and without question to a request from The Granddaughter for a cookie, the same isn’t always true for The Toddler even though they’re both at the same stage of life.
What you need is to be able to ensure that an application which only occasionally consumes vast amounts of resources can do so, but that those resources aren’t just sitting idle when they aren’t needed. Unlike foodstuffs, the resources won’t spoil, but they will “go to waste” in the sense that you’re paying for them to sit around doing nothing. Now certainly virtualization is a solution to ensuring that the application has what it needs, when it needs it, without wasting resources. The challenges aren’t really about leveraging virtualization to solve that problem; they’re more about balancing and prioritizing – you know, managing – the resources than anything else.
The problem is, of course, that some of these applications were developed or acquired “way back when” and thus the task of updating/upgrading them to meet current data center delivery policies – especially security – may be troublesome to say the least. Everyone in IT has a story about “that” application; the one that is kept humming along by one guy sitting in the basement, and God forbid something happen to him. Applications simply can’t be “turned off” because there’s a new data center model being promoted. That’s the fallacy of the “data center.next” marketing hype: that datacenters will magically transform into this new, pristine image of the latest, greatest model.
Doesn’t happen that way. Most enterprise data centers are not green fields and they don’t transform entirely because they are multi-generational. They have aging applications that are still in use because they’re necessary to some piece of the business. The problem is that modifying them may be nigh-unto-impossible for a variety of reasons, including (1) the vendor no longer updates the application (or exists), (2) no one understands it, (3) the reward (ROI) is not worth the effort.
And yet the business will invariably demand that these applications be included; that they be accessible via the web or be incorporated into a single sign-on initiative or secured against some attack. The answer becomes a solution external to the application. But the cost of acquiring those solutions to achieve such goals is oftentimes far greater than the reward.
This is where it becomes necessary to apply some architectural flexibility. There already exist strategic points of control within the data center that can be more effectively leveraged to enable the application and enforcement of data center policies across all applications – from Toddler to Teenager. The application delivery network provides a strategic point at which security, availability, and access control can be applied without modifying applications. This is particularly beneficial in situations in which there exist applications which cannot be modified, such as third-party sourced applications or those for which it is no longer financially feasible to update.
Remember that an application delivery controller is, in its simplest form, a proxy. That means it can provide a broad set of functions for applications it manages. Early web-based applications, for example, leveraged their own proprietary methods of identity management. They stored simple username-password combinations in a simple database, or leveraged something like Apache’s HTTP basic authentication. In today’s highly complex and rapidly changing environments, managing such a store is not only tedious, but it offers very little return on investment. The optimal solution is to leverage a central, authoritative source of credentials such as a directory (AD, LDAP, etc.) such that changes to the single identity will automatically propagate across all applications. Enabling such an integration for an elderly application managed by a full proxy such as an application delivery controller – provided the application delivery platform is able to provide application access control – is not only possible but feasible, and it reduces the OPEX associated with later-life application maintenance.
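The pattern is straightforward to sketch in code. Below is a minimal, hypothetical illustration of proxy-based access control: a WSGI middleware stands in front of an unmodified “elderly” application, validates HTTP Basic credentials against a central store, and only then passes the request through. The `CENTRAL_DIRECTORY` dict is a stand-in for a real directory (in practice this check would be an LDAP bind or AD query, not a dict lookup), and all names here are illustrative, not any vendor’s actual API.

```python
import base64

# Stand-in for a central, authoritative credential source (AD/LDAP).
# In a real deployment this lookup would be an LDAP bind, not a dict.
CENTRAL_DIRECTORY = {"alice": "s3cret"}

def authenticate(auth_header):
    """Validate an HTTP Basic Authorization header against the central
    store. Returns the username on success, None otherwise."""
    if not auth_header or not auth_header.startswith("Basic "):
        return None
    try:
        decoded = base64.b64decode(auth_header[6:]).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return None
    user, _, password = decoded.partition(":")
    return user if CENTRAL_DIRECTORY.get(user) == password else None

def auth_proxy(app):
    """Middleware playing the proxy's role: enforce access control in
    front of a legacy app that knows nothing about the directory."""
    def wrapper(environ, start_response):
        user = authenticate(environ.get("HTTP_AUTHORIZATION"))
        if user is None:
            start_response("401 Unauthorized",
                           [("WWW-Authenticate", 'Basic realm="legacy"')])
            return [b"authentication required"]
        environ["REMOTE_USER"] = user  # hand identity downstream
        return app(environ, start_response)
    return wrapper

# The unmodified "elderly" application: it simply trusts the proxy.
def legacy_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    user = environ.get("REMOTE_USER", "anonymous")
    return [("hello, " + user).encode("utf-8")]

protected = auth_proxy(legacy_app)
```

The point of the sketch is the shape, not the code: the legacy application is untouched, and swapping the dict for a real directory changes only `authenticate`, which is exactly why terminating access control at the proxy pays off over an application’s long life.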
And make no mistake, it is the maintenance and support and people costs associated with applications over their lengthy lives that add up. Reducing those investments can reap a much greater reward than the actual cost of acquisition.
One of the exciting (hopefully) side effects of the emerging devops role is the potential impact on overall architecture. Developers tend to think only of code as a solution while router jockeys tend to think only in terms of networking components. Devops bridges the seemingly bottomless chasm between them and brings to the table not only a new set of skills but a different perspective on the datacenter.
This is true not only for applications – leveraging the network to enable more modern capabilities – but for the network, too, by leveraging development to automate operational processes and enable greater collaboration across the network infrastructure. Architectural solutions can be as effective, and in many cases more efficient – operationally and financially – than traditional answers, but in order to architect such solutions one must first be aware that such possibilities exist.