Is Cloud Built to Fail or Built to Scale?
#webperf #cloud The difference matters. A lot.
There's been a growing focus on scalability as the Internet of Things has continued its rapid growth. Perhaps it's due in part to highly visible online failures during seasonal or one-off events, perhaps due in part to simple growth; the reason matters less than the reality that scalability is a critical technological driver for a variety of new technologies - cloud and SDN being the most often referenced.
But while we've been focusing on scalability we may have been overlooking the related and no less important availability factor. These two "itys" are related, as scalability is one way to achieve availability when dealing with growth, rapid or otherwise. But availability also means being sensitive to failure.
Cloud, in general, is designed for scalability. It is specifically architected to provide elasticity - which is scalability both in and out. Cloud is designed to enable resource growth and contraction to match demand.
In this way, cloud addresses one aspect of availability: capacity. But it does not always address the other aspect - failure. Cloud is built to scale, not necessarily fail.
The Many Faces of Availability
Availability and scale are achieved primarily through the same mechanism in both data centers and cloud environments (and SDN network fabrics, for that matter): load balancing. At the network layer we've long used techniques like link aggregation (trunking, teaming, bundling) to manage both scale and failure. Multiple network links are bound together, usually using an established network protocol such as LACP, and traffic is distributed via load balancing across those links.
Similarly, the same technique is used at the application layers (layers 4-7) to provide the same measure of scalability and resilience to failure for servers and applications. Multiple resources are bound together, usually behind a concept known as a virtual server (or virtual IP address) in a load balancing service, and requests are distributed via load balancing across those resources.
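A minimal sketch of that idea in Python may help make it concrete. The names here (VirtualServer, pool members, the round-robin choice of algorithm) are illustrative assumptions, not any particular vendor's API; the point is simply that clients address one virtual endpoint while requests are spread across a pool of real resources.

```python
import itertools

class VirtualServer:
    """Illustrative virtual server (VIP) fronting a pool of members."""

    def __init__(self, vip, pool_members):
        self.vip = vip                      # the one address clients connect to
        self.pool = list(pool_members)      # real servers behind the VIP
        self._rr = itertools.cycle(self.pool)

    def pick_member(self):
        """Round-robin selection: each new request goes to the next member."""
        return next(self._rr)

# Clients only ever see the VIP; the service fans requests out across the pool.
vs = VirtualServer("203.0.113.10:443",
                   ["10.0.0.11:8443", "10.0.0.12:8443", "10.0.0.13:8443"])
for _ in range(4):
    print(vs.pick_member())   # cycles through the members in order
```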
In this way, scalability is achieved. As demand grows, resources are transparently added to increase capacity. Similarly, as demand contracts, resources can be transparently decommissioned to decrease capacity. Voila! Elasticity.
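The elasticity decision itself is usually just arithmetic over a demand metric. The sketch below assumes a hypothetical requests-per-second metric and per-instance capacity; real auto-scaling policies differ, but the shape of the calculation is the same: grow toward demand, shrink when demand recedes, bounded by a floor and ceiling.

```python
import math

def desired_instance_count(current_rps, rps_per_instance,
                           min_instances=2, max_instances=20):
    """How many instances the observed demand calls for.

    current_rps      -- observed requests per second (illustrative metric)
    rps_per_instance -- capacity one instance can comfortably handle
    """
    needed = math.ceil(current_rps / rps_per_instance)
    return max(min_instances, min(max_instances, needed))

# Demand grows: scale out. Demand contracts: scale in. Voila, elasticity.
print(desired_instance_count(current_rps=900, rps_per_instance=100))   # 9
print(desired_instance_count(current_rps=150, rps_per_instance=100))   # 2
```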
But failure, the other and lesser mentioned aspect impacting availability, is not so easily managed.
The Impact of Failure
In the case of the network, the failure of a single link in an aggregated bundle (or trunk) is handled by simply ignoring the failed link. All traffic is distributed across the remaining links, every packet still gets pushed, and availability is maintained. Except when it isn't, because oversubscription or congestion on the surviving links drives up latency, and the resulting delays degrade application performance. While in the purest sense of the word "available" the application is still accessible, most businesses today consider unresponsive or poorly performing applications to be "unavailable", especially when those applications are revenue generating.
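The arithmetic behind that oversubscription risk is simple enough to sketch. Assuming (purely for illustration) that traffic spreads evenly across whatever links remain:

```python
def per_link_utilization(offered_gbps, link_count, link_capacity_gbps):
    """Utilization of each surviving link, assuming an even spread of traffic."""
    return offered_gbps / link_count / link_capacity_gbps

# A 4 x 10G bundle carrying 28 Gbps sits at a comfortable 70% per link...
print(per_link_utilization(28, 4, 10))   # 0.70
# ...but lose one link and the same load pushes the survivors past 90%,
# where queuing delay (and thus application latency) climbs sharply.
print(per_link_utilization(28, 3, 10))   # ~0.93
```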
At the application layers, failure is even more detrimental to availability. Oversubscription of an application due to the failure of resources often results in true downtime: errors or timeouts that prevent the end user from accessing the application at all. Worse, users who were active may suddenly find they have "lost" their connection, along with any work they were doing, when the resource failed.
Load balancing architectures compensate, of course, by directing those users to other application instances. Cloud environments imbued with auto-scaling capabilities may be able to redress the failure by provisioning a new instance to take its place and thus maintain the proper levels of capacity. But that does not mitigate the loss of productivity and access experienced due to the original failure. It addresses scalability, not availability.
That's because when a failure occurs in most cloud environments, all active sessions to the failed application instance are simply discarded. The users must start anew. The cloud infrastructure fabric will certainly redirect them to a new instance and start a new session (and in this way it will "handle" failure), but this is disruptive to the user; it is noticeable. And noticeable degradations of performance or availability are a no-no for most business stakeholders.
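A toy sketch of why that disruption happens, under the (labeled) assumption that session state lives only in the memory of the instance serving it - a common default. The Pool class and member names below are hypothetical; the point is that marking an instance down redirects its users but does not recover their state.

```python
class Pool:
    """Sketch of failure handling in a load balancing pool with per-instance sessions."""

    def __init__(self, members):
        self.members = {m: {"up": True, "sessions": set()} for m in members}

    def mark_down(self, member):
        """Take a failed instance out of rotation; its in-memory sessions are simply gone."""
        lost = self.members[member]["sessions"]
        self.members[member]["up"] = False
        self.members[member]["sessions"] = set()
        return lost   # these users get redirected, but must start anew

pool = Pool(["app-1", "app-2"])
pool.members["app-1"]["sessions"].update({"alice", "bob"})
print(pool.mark_down("app-1"))   # {'alice', 'bob'} -- disrupted, and they notice
```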
Beware the Long Term
It's not necessarily the immediate reaction that should be of concern, but the long term impact. Everyone cites the data presented by Microsoft, Google, and Shopzilla at Velocity 2009 with respect to the impact of seconds of delay on revenue (spoiler: it's not good) but they tend to ignore the long term impact - the behavioral impact - of such delays and disruption on the end user [emphasis mine]:
Their data showed that slow sites get fewer search queries per user, less revenue per visitor, fewer clicks, fewer searches, and lower search engine rankings. They found that in some cases even after site performance was improved users continued to interact as if it was slow. Bad experiences have a lasting influence on customer behavior.
-- More on how web performance impacts revenue…
Did you catch that? Bad experiences (of which disruption is certainly one, I don't think we need to argue about that, do we?) have a lasting influence on customer behavior.
It's important, therefore, to understand the limitations of the environment in which you are deploying an application - particularly one that is customer-facing. Understanding that cloud is built to scale - not fail - is a key piece of knowledge you need to make decisions regarding which applications and workloads are fit for migration to the cloud, and which may be a fit if they are re-architected to address such failure themselves.