For many years those models remained static. Cloud was focused simply on proving it was secure enough, affordable enough, and reliable enough to support enterprise applications.
Fast forward almost ten years and the sky is filled with even more cloud models to choose from. Whether as a response to security concerns, latency, or control, there are new faces in the clouds. To start our series of posts focusing on cloud this month here on DevCentral, it seems appropriate to enumerate just how much the cloudy landscape has expanded.
COLO CLOUD (a.k.a. Cloud Interconnect)
Colo cloud, or cloud interconnect as providers prefer to call it, marries the traditional hosting environment with blazing-fast access to public cloud providers. An attractive model for those who need to control latency and manage security, cloud interconnects are popping up all over. Colo cloud works by offering organizations a way to lift and shift traditional infrastructure and applications to a hosted environment while encouraging the use of public cloud through fast interconnects (as in, on the backbone) to the most popular providers.
Colo cloud is a good choice when:
You need to maintain consistency of security and access policies for applications that will be deployed in a public cloud
You don’t want the added overhead of deploying and maintaining keys and certificates for every app deployed in the public cloud and prefer a centralized approach
You want to leverage multiple public cloud providers to ensure performance and availability for users
SERVERLESS (a.k.a. Function as a Service)
Serverless is a relatively new offering that piggybacks on the concept of PaaS (Platform as a Service) but resides as part of an IaaS (Infrastructure as a Service) offering. The core premise of serverless is that sometimes, all you need to do is one thing, and that one thing can be encapsulated in a single function. On the “server” (cloud) side, that single function fires only when called, thus consuming very few resources unless it’s being called a lot. It’s often mentioned in the context of APIs, where path routing can easily map a single API call to a “function” you deploy in a supporting cloud provider’s environment (the big three all do, by the way).
Serverless is a good choice when:
You have a single task that might require the unique scale and capacity of the cloud, like significant video processing
You want to deliver software updates to your 2 million IoT gadget customers and don't have the capacity or bandwidth to do that in house
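The "one API call maps to one function" idea is easiest to see in code. Below is a minimal sketch of a serverless handler in the style of AWS Lambda's Python runtime, which is one of the big-three offerings mentioned above. The function name and the `video_id` field are hypothetical illustrations; the `(event, context)` signature and the status-code/body response shape follow the convention Lambda documents for API-style (proxy) invocations.

```python
import json

def process_video_handler(event, context):
    """Entry point the cloud provider invokes only when the mapped
    API route is called; no always-on server sits in between."""
    # The provider delivers request details in `event`; for HTTP-style
    # invocations the request body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    video_id = body.get("video_id", "unknown")

    # The single task itself (e.g., kicking off video processing)
    # would run here; resources are consumed only for this call.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": video_id}),
    }
```

Because the function holds no state between calls, the provider can spin up as many copies as incoming requests demand and bill only for the invocations, which is exactly what makes the bursty use cases above a good fit.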
PRIVATE CLOUD
Ah, private cloud. I put this one last not because it’s the least useful (on the contrary, it’s more popular than most models except SaaS) but because it’s still the most contentious. When I say “private cloud” I do mean on-premises, private cloud. You know, OpenStack or VMware or, increasingly, a container-based private cloud.
Private cloud is essentially IaaS, on-premises. Our own data shows it’s still the model organizations prefer for a wide variety of application (workload) types. That may be shifting with the growth of colo cloud options, but so significant an investment is hard to ditch at this point in the implementation cycle.
Private cloud is a good choice when:
You need to retain control over data for compliance, governance, and regulatory reasons
You can’t risk data crossing international boundaries in the event of a disaster
You will be supporting IIoT (Industrial Internet of Things) with your private cloud
So that makes twice as many cloud models today as there were back in 2008. It’s actually somewhat of a surprise that PaaS remains a viable option, but it does. It may be entirely subsumed by serverless in the future, as the function-as-a-service model seems to have more legs than PaaS ever did, at least on the public side.
The question seems to be whether public or private IaaS will rule. Most would answer a resounding “public,” but I remain unconvinced. I promise I’ll eat that crow if, in 2027 when we check back in, it turns out that public IaaS has handily swallowed up private cloud.