Beware the Cloud Programmer

We need to be careful that we do not repeat the era of “HTML programmers” with “cloud programmers”.

If you’re old enough you might remember a time when your dad or brother worked on the family car themselves. They changed the oil, bled the brakes, changed the fluids and even replaced head gaskets when necessary. They’d tear apart the engine if need be to repair it; no mechanic necessary. But cars have become highly dependent on technology, and today it’s hard to find anyone without specific training who works on their own car. Sure, an oil change or topping off the fluids might be feasible, but diagnosing and subsequently repairing a car today is simply not a task for everyone.

This is not necessarily because the core technology has changed – the engines still work the same way, the interaction between fuel-injectors and pistons and axles is no different, but the interfaces and interconnects between many of the various moving parts that make an engine go have changed, and changed dramatically. They’re computerized, they’re automated, they’re complicated.

This is the change we’re seeing in IT as a result of cloud computing, virtualization and automation. The core functions that deliver applications are still the same and based on the same protocols and principles, but the interfaces and increasingly the interconnects are rapidly evolving. They’re becoming more complicated.


The change in skills necessary to effectively deploy and manage emerging data center architectures drives one of the lesser spoken of benefits of public cloud computing: offloading the cost of managing components in this new and more complicated way. Most server admins and network operators do not have the development-oriented skills necessary to integrate systems in a way that promotes the loosely coupled, service-oriented collaboration necessary to fully liberate a data center and shift the operational management burden from people to technology.

Conversely, the developers with those very skills do not have the knowledge of the various data center network and application delivery network components necessary to implement the integration required to enable that collaboration.

Public cloud computing, with its infrastructure as a black box mentality, promises to alleviate the need to make operators into developers and vice-versa. It promises to lift the burden and pressure on IT to transform itself into a services-enabled system. And in that respect it succeeds. When you leverage infrastructure as a black box you only need to interact with the management framework, the constrained interfaces offered by the provider that allow you to configure and manage components as a service. You need not invest in training, in architecture, or in any of the costly endeavors necessary to achieve a more service-focused infrastructure.

The danger in this strategy is that it encourages investing in admins and operators who are well-versed in interfaces (APIs) and know little about the underlying technology.


We saw this phenomenon in the early days of the web, when everything was more or less static HTML and there was very little architecture in the data center supporting the kind of rich interactive applications prevalent on the web today. There was a spate of HTML “programmers”: folks who understood markup language, but little more.

They understood the interface language, but nothing about how applications were assembled, how an application generated HTML, nor how that “code” was delivered to the client and subsequently rendered into something useful. It’s like being trained to run the diagnostic computers that interface with a car but not knowing how to do anything about the problems that might be reported.

The days of the HTML “programmers” were fleeting, because Web 2.0 and demand for highly interactive and personalized applications grew faster than the US national debt. A return to professionals who not only understood the interfaces but the underlying technological principles and practices was required, and the result has been a phenomenal explosion of interactive, interconnected and highly integrated web applications requiring an equally impressive infrastructure to deliver, secure and accelerate.

We are now in the days when we are seeing similar patterns in infrastructure, where it is expected that developers become operators through the use of interfaces (APIs) without necessarily requiring any depth of knowledge regarding how the infrastructure is supposed to work.


Luckily, routers still route and switches still switch and load balancers still balance the load regardless of the interface used to manage them. Firewalls still deny or allow access, and identity and access management solutions still guard the gates to applications regardless of where they reside or on what platform.

But the interfaces to these services have evolved and are still evolving; they’re becoming API driven, and integration is a requirement for automation of the most complicated operational processes, the ones in which many components act in concert to provide the services necessary to deliver applications.

Like the modern mechanic who uses computer diagnostics to interface with your car before pulling out a single tool, you need people who understand the technology, because interfaces alone won’t let you really tune up your data center infrastructure and the processes that leverage it. It doesn’t matter whether that infrastructure is “in the cloud” or “in the data center”; leveraging infrastructure services requires an understanding of how they work and how they impact the overall delivery process. Something as simple as choosing the wrong load balancing algorithm for your application can have a profound impact on its performance and availability; it can also cause the application to appear to misbehave when the interaction between load balancing services and applications is not well understood.
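To see how algorithm choice alone changes outcomes, here is a minimal Python sketch. The request durations, server count, and the `least_load` function (a simplified stand-in for a least-connections policy) are invented for illustration, not taken from any particular load balancer:

```python
from itertools import cycle

def round_robin(durations, n_servers=2):
    """Assign each request to servers in strict rotation,
    ignoring how busy each server already is."""
    load = [0] * n_servers
    servers = cycle(range(n_servers))
    for d in durations:
        load[next(servers)] += d
    return load

def least_load(durations, n_servers=2):
    """Assign each request to the currently least-loaded server --
    a simplified stand-in for a least-connections policy."""
    load = [0] * n_servers
    for d in durations:
        load[load.index(min(load))] += d
    return load

# Alternating long (5) and short (1) requests -- a pattern that
# round robin handles badly because it ignores request cost.
requests = [5, 1, 5, 1, 5, 1]

print(round_robin(requests))  # [15, 3] -- one server absorbs all the long work
print(least_load(requests))   # [11, 7] -- noticeably more even distribution
```

Same requests, same servers; only the algorithm differs, and the round-robin server carrying 15 units of work will look slow or flaky to users even though nothing is “broken.”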

It’s a fine thing to be able to provision infrastructure services, and indeed we must be able to do so if we are to realize IT as a Service, the liberation of the data center. But we should not forget that provisioning infrastructure is the easy part; the hard part is understanding the relationship between the various infrastructure components, not only as they relate to one another but to the application as well. It is as important, perhaps even more so, that operators and administrators and developers – whoever may be provisioning these services – understand the impact of that service on the broader delivery ecosystem. Non-disruptive does not mean non-impactful, after all.

An EFI [Electronic Fuel Injection] system requires several peripheral components in addition to the injector(s), in order to duplicate all the functions of a carburetor. A point worth noting during times of fuel metering repair is that early EFI systems are prone to diagnostic ambiguity.

-- Fuel Injection, Wikipedia

Changes to most of those peripheral components that impact EFI are non-disruptive, i.e. they don’t require changes to other components. But they are definitely impactful, as changes to any one of the peripheral components can and often do change the way in which the system delivers fuel to the engine. Too fast, too slow, too much air, not enough fuel. Any one of these minor, non-disruptive changes can have a major negative impact on how the car performs overall. The same is true in the data center: a change to any one of the delivery components may in fact be non-disruptive, yet still have a major impact on the performance and availability of the applications it is delivering.


Public cloud computing lends itself to an “HTML programmer” mode of thinking, one in which those who may not have the underlying infrastructure knowledge are tasked with managing that infrastructure simply because it’s “easy”. Just as early EFI systems were prone to “diagnostic ambiguity”, so too are these early cloud computing and automated systems prone to “architectural ambiguity”.

Knowing you need a load balancing service is not the same as knowing what kind of load balancing service you need, and it is not the same as understanding its topological and architectural requirements and constraints.

The changes being wrought by cloud computing and IT as a Service are as profound as the explosion of web applications at the turn of the century. Cloud computing promises easy interfaces and management of infrastructure components with no investment whatsoever in the underlying technology. We need to be cautious that we do not run willy-nilly toward a rerun of the evolution of web applications, with “cloud programmers” whose key strength is their understanding of interfaces instead of infrastructure. A long-term successful IT as a Service strategy will take into consideration that infrastructure services are a critical component of application deployment and delivery. Understanding how those services work, as well as how they interact with one another and with the applications they ultimately deliver, secure and accelerate, is necessary in order to achieve the efficient and dynamic data center of the future.

A successful long term IT as a Service strategy includes public and private and hybrid cloud computing and certainly requires leveraging interfaces. But it also requires that components be integrated in a way that is architecturally and topologically sound to maintain a positive operational posture. It requires that those responsible for integrating and managing infrastructure services – regardless of where they may be deployed – understand not only how to interface with them but how they interact with other components.

The “cloud programmer” is likely only to understand the interface; they’re able to run the diagnostic computer, but can’t make heads or tails of the information it provides. To make sense of the diagnostics you’re still going to need a highly knowledgeable data center mechanic.


Published Jul 20, 2011
Version 1.0
