Last week it was boats and sacrificial anodes. This week: cars and variable displacement engines. Everyone clear on what I’m trying to say?
Some background: "Variable displacement is an automobile engine technology that allows the engine displacement to change, usually by de-activating cylinders, for improved fuel economy. Cadillac, in conjunction with Eaton Corporation, developed the innovative V-8-6-4 system which used the industry's first engine control unit to switch the engine from 8- to 6- to 4-cylinder operation depending on the amount of power needed".
Three main points.
1. How cool is that?!
2. Can you believe that was done in 1981?
3. Did you know you can do that with data centres?
Cadillac may not build them, but Variable Displacement Applications do exist. And while not in use en masse today, I expect they will be to the 2010s what the Millennium Falcon was to Han Solo: Mission Critical Technology!
And so we arrive at today’s topic: Dynamic Provisioning, or DP.
DP exists in a few different forms, but in this post I want to look at dynamic provisioning within a single data centre. The key differentiator from last week’s Dynamic Allocation post is that, with DP, new elements are created or introduced as required and removed or destroyed when they’re not.
This introduces a little more complexity in that we must now also manage change.
For those looking to introduce DP, trust in automation, and in the services that monitor that automation, is key. New virtual machines get turned on or cloned automatically. This introduces an element of risk and contradicts change management policy which, as stated in a previous post, can be a significant barrier. DP isn’t a technical problem to be overcome; it is one of governance.
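To make the create-on-demand, destroy-when-idle idea concrete, here is a minimal sketch of a provisioning control loop. The `VmPool` class and its `provision`/`deprovision` methods are hypothetical stand-ins for whatever your platform’s actual API provides (a hypervisor SDK, a cloud API, and so on); the floor and ceiling on pool size are the governance guardrails the paragraph above argues for.

```python
class VmPool:
    """Hypothetical pool that can clone and destroy virtual machines.
    min_vms / max_vms act as governance guardrails on the automation."""

    def __init__(self, min_vms=2, max_vms=8):
        self.min_vms = min_vms
        self.max_vms = max_vms
        self.vms = ["vm-%d" % i for i in range(min_vms)]

    def provision(self):
        # Clone a new VM, but never exceed the agreed ceiling.
        if len(self.vms) < self.max_vms:
            self.vms.append("vm-%d" % len(self.vms))

    def deprovision(self):
        # Destroy an idle VM, but never drop below the agreed floor.
        if len(self.vms) > self.min_vms:
            self.vms.pop()


def rebalance(pool, load_per_vm, high=0.75, low=0.25):
    """Create VMs when average load is high, destroy them when low --
    the data-centre equivalent of switching between 4 and 8 cylinders."""
    if load_per_vm > high:
        pool.provision()
    elif load_per_vm < low:
        pool.deprovision()
    return len(pool.vms)
```

In practice a monitoring service would call `rebalance` on a timer with real utilisation figures; the point of the sketch is that the policy (thresholds, floor, ceiling) is explicit and auditable, which is exactly where change management needs to focus.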
Technically, DP works and has benefits. It should provoke re-examination of how technology is applied. The thing that enables it – the equivalent of Cadillac’s revolutionary engine control unit – is your application delivery controller.