DYNAMIC INFRASTRUCTURE MODELS
CHAPTER ONE: AUTOMATION
What I want to talk about first, in this opening post of a series, is encapsulated by something Paul Maritz, VMware’s CEO, says about cloud: “Cloud is not a destination, but a way of doing computing”. This reflects my experience in the field: the biggest misunderstandings about cloud amongst the people I meet generally stem from not knowing what it is they actually like or dislike about it.
In fact, when I hear anyone say that they are ‘adopting cloud’ or ‘cloud is not for them’, it gets to me. It’s not about using or not using cloud – it’s about whether cloud plays a part in delivering organisational objectives.
There is no cloud as such.
It doesn’t come on a DVD.
You can’t download it.
There’s no manual or user guide.
Cloud is a concept whereby resources can be acquired from anywhere, and one of its big benefits is that it can play an important part in a leaner, more automated IT setup: Dynamic Infrastructure (DI). I see DI as being characterised by three main implementations. More on those later.
Automation is a big part of adopting DI. It matters because it replaces certain aspects of human labour, and it is key to adopting a cloud computing methodology. It is only ever as effective, though, as the rules and metrics that govern it. That last point is extremely important, because it introduces the concept of Strategic Points of Control (SPOCs). A simple sketch of what such a rule might look like follows below.
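To make the “rules and metrics” point concrete, here is a minimal sketch, in Python, of the kind of decision an automated setup might evaluate at a strategic point of control. All of the names, thresholds and metrics here are hypothetical illustrations, not any particular product’s API; the point is simply that the automation is only as good as the rule and the metric feeding it.

```python
# Illustrative only: a metric-driven automation rule with hypothetical names.
from dataclasses import dataclass


@dataclass
class ScalingRule:
    """A simple 'rule plus metric' pairing: adjust capacity based on a watched metric."""
    metric_name: str      # e.g. "cpu_utilisation" (hypothetical)
    threshold: float      # scale out above this value
    max_instances: int    # never grow beyond this

    def decide(self, current_value: float, current_instances: int) -> int:
        """Return the desired instance count for the observed metric value."""
        if current_value > self.threshold and current_instances < self.max_instances:
            return current_instances + 1  # add capacity
        if current_value < self.threshold * 0.5 and current_instances > 1:
            return current_instances - 1  # release capacity
        return current_instances  # no change


if __name__ == "__main__":
    rule = ScalingRule(metric_name="cpu_utilisation", threshold=0.75, max_instances=10)
    print(rule.decide(current_value=0.82, current_instances=3))  # 4: scale out
    print(rule.decide(current_value=0.20, current_instances=3))  # 2: scale in
```

Notice that the code itself is trivial; everything that matters lives in the threshold and the choice of metric, which is exactly why the rules and metrics governing automation deserve the attention.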
Until the next post, where I’ll expound a little more on what DI allows and how the different models are characterised and implemented, I leave you with this. To paraphrase Asimov’s ‘Three Laws of Robotics’:
• A Dynamic Infrastructure must not cost you your job.
• A Dynamic Infrastructure must alleviate manual workloads without human intervention, except where doing so would conflict with the First Law.
• A Dynamic Infrastructure must protect itself from human error, unless such protection conflicts with the First or Second Law.
Technorati Tags: Application Delivery, Dynamic Infrastructure, f5, Next Generation Datacenter