How Containers Scale – Service Mesh versus Traditional Architecture

Containers continue to be a hot topic. Some claim they are on the verge of a meteoric rise to dominate the data center. Others find them suitable only for the cloud. And still others are waiting patiently to see whether containers are the SDN of app infrastructure – highly touted by pundits but rarely put into practice in production.

A quick perusal of research and surveys shows that containers certainly are gaining traction – somewhere.

Like most infrastructure – whether app or network – it’s likely that for the foreseeable future, containers will live alongside apps that still run daily on mainframes and midranges alike. That’s true of most significant shifts in app infrastructure. When the web stack rose to dominance, it didn’t eliminate fat client-server apps. They coexisted, at least for a while.

What it did do, however, was force a change in how we scaled those apps. Web apps imposed dramatically different stresses on the network and its servers, requiring new ways to expand capacity and assure availability. The use of containers to deploy applications – particularly those employing a microservices architecture – is no different.

There is a marked difference between the scaling model employed by a containerized environment and that of a traditional web application.

The Traditional Model

First, let’s refresh our memory on the traditional model of scaling applications.

Whether in a cloud (public, private, what have you) or data center, the traditional model employs a fairly standard pattern. It uses fixed “pools” of resources with configuration and behavior ultimately based on ports and IP addresses.

The software responsible for actually scaling apps (whether deployed on purpose-built hardware or COTS) executes on a fairly isolated operating premise: “I have everything I need to know to make a decision right here and the destination is behind me.” From algorithms to the available resources, everything is right at the scaling service’s proverbial fingertips.

This includes the status of those resources. In a traditional model, it is generally the scaling software that tracks the health of resources and takes them out of rotation if they become unavailable.

Traditional models of scale rely on an imperative mode of configuration, and with a few notable exceptions (like status), changes are driven by configuration events. That means an operator or external script has issued a very specific command – via API, CLI, or GUI – to change the configuration.
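
To make that premise concrete, here is a minimal sketch in Go – all names hypothetical, not any particular vendor's API – of a fixed pool whose members are IP:port pairs, whose health is tracked by the scaling software itself, and whose configuration changes only when explicitly commanded:

```go
package main

import "fmt"

// Member is a fixed pool entry, identified purely by IP and port.
type Member struct {
	IP      string
	Port    int
	Healthy bool
}

// Pool is the scaler's entire world view: everything it needs to
// make a decision is right here.
type Pool struct {
	members []Member
	next    int
}

// AddMember is an imperative configuration change: an operator or
// script explicitly tells the scaler about a new resource.
func (p *Pool) AddMember(ip string, port int) {
	p.members = append(p.members, Member{IP: ip, Port: port, Healthy: true})
}

// MarkDown is the health-driven exception: the scaler itself tracks
// status and takes a member out of rotation.
func (p *Pool) MarkDown(ip string, port int) {
	for i := range p.members {
		if p.members[i].IP == ip && p.members[i].Port == port {
			p.members[i].Healthy = false
		}
	}
}

// Pick does simple round-robin over healthy members.
func (p *Pool) Pick() (Member, bool) {
	for i := 0; i < len(p.members); i++ {
		m := p.members[p.next%len(p.members)]
		p.next++
		if m.Healthy {
			return m, true
		}
	}
	return Member{}, false
}

func main() {
	pool := &Pool{}
	pool.AddMember("10.0.0.10", 8080) // explicit CLI/API-style change
	pool.AddMember("10.0.0.11", 8080)
	pool.MarkDown("10.0.0.11", 8080) // health check takes it out of rotation

	if m, ok := pool.Pick(); ok {
		fmt.Printf("routing to %s:%d\n", m.IP, m.Port)
	}
}
```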

The Cloud Half-Step

Cloud began to impact this model when the notion of “auto-scaling” entered the domain. Auto-scaling is a half-step between the traditional model and the service mesh model likely to be employed by most containerized environments. It introduces the idea of environmental changes – like increased demand – triggering configuration changes, such as adding or removing resources. The model is still a “PUSH” model, however, meaning the system responsible for scale must still be explicitly told about changes that must be made.
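
A rough sketch of that half-step, again with hypothetical names: the trigger is environmental (rising demand), but the mechanism is still a configuration command pushed at the scaler:

```go
package main

import "fmt"

// Scaler stands in for the system responsible for scale; it still has
// to be told, imperatively, about every change.
type Scaler struct{ members []string }

func (s *Scaler) AddMember(addr string) {
	s.members = append(s.members, addr)
	fmt.Println("config change pushed: added", addr)
}

// autoScale is the "half-step": the *trigger* is environmental
// (demand), but the *mechanism* is still a pushed command.
func autoScale(s *Scaler, currentRPS, rpsPerInstance int) {
	needed := (currentRPS + rpsPerInstance - 1) / rpsPerInstance
	for i := len(s.members); i < needed; i++ {
		// In a real system this would launch an instance first; here we
		// just push the resulting config change.
		s.AddMember(fmt.Sprintf("10.0.0.%d:8080", 10+i))
	}
}

func main() {
	s := &Scaler{}
	autoScale(s, 950, 300) // rising demand forces four members in
}
```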

The Service Mesh Model

Enter containers and the highly volatile environment in which they not only live, but seem to thrive. Management of containers is often achieved by some external system – like Kubernetes, Mesos, or OpenShift – via a “master” controller that acts as a command-and-control center for container clusters. Its job is to manage containers and keep a catalog of them up to date.

The “pools” of resources available for a given service (or application) are dynamic. Much is made of how long any given container actually lives, and it is true that a container may be available for only minutes or hours, compared to the weeks or months of its virtualized predecessors.

This pace is impossible to track manually, which is why service registries exist – to keep a real-time list of what resources are available, where they are, and to what service they belong.

This is one of the reasons the service mesh model eschews tightly coupling applications and services to IP addresses and ports. They are still used, of course, but volatility (and reuse of network attributes) requires that apps and services be identified by something else – like labels and tags.

All configuration and behavior within the overall system, then, is based on those tags. These are loosely akin to FQDNs, which of course are mapped to IP addresses by DNS.
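
A toy sketch of both ideas together – the registry and label-based identity, all names hypothetical – keeping a real-time list of short-lived endpoints and resolving a label selector to whatever happens to exist right now:

```go
package main

import "fmt"

// Instance is a short-lived container endpoint. The IP and port are
// still there, but nothing is configured against them directly.
type Instance struct {
	Addr   string
	Labels map[string]string
}

// Registry keeps the real-time list of what is available, where it
// is, and to what service it belongs.
type Registry struct{ instances []Instance }

func (r *Registry) Register(addr string, labels map[string]string) {
	r.instances = append(r.instances, Instance{Addr: addr, Labels: labels})
}

func (r *Registry) Deregister(addr string) {
	kept := r.instances[:0]
	for _, in := range r.instances {
		if in.Addr != addr {
			kept = append(kept, in)
		}
	}
	r.instances = kept
}

// Select resolves a label selector to current endpoints – loosely the
// way DNS maps an FQDN to IP addresses.
func (r *Registry) Select(selector map[string]string) []Instance {
	var out []Instance
	for _, in := range r.instances {
		match := true
		for k, v := range selector {
			if in.Labels[k] != v {
				match = false
				break
			}
		}
		if match {
			out = append(out, in)
		}
	}
	return out
}

func main() {
	reg := &Registry{}
	reg.Register("10.1.0.4:8080", map[string]string{"app": "checkout", "tier": "web"})
	reg.Register("10.1.0.9:8080", map[string]string{"app": "checkout", "tier": "web"})
	reg.Deregister("10.1.0.4:8080") // a container died minutes after it started

	for _, in := range reg.Select(map[string]string{"app": "checkout"}) {
		fmt.Println("current endpoint:", in.Addr)
	}
}
```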

All this gives rise to the need for a more collaborative operating premise. Software responsible for scaling containerized apps and services operates on a very different premise from traditional models: “I need information from other services to make a decision, and my destination might be in another location.”

But you can’t expect the master controller to notify every component of every change. That kind of centralized control does not scale given the number of components in the system. Remember, it’s not just containers. There are ethereal constructs like services and rules, in addition to daemons that monitor and report the telemetry necessary for business and operational analysis. But the scaling software still needs to know when things change (or resources move).

In a service mesh model, changes are driven by operational events published elsewhere. It is the responsibility of the scaling software to pull those changes and act on them, not have specific configuration changes pushed to it via scripts or human operators.

That means the changes must be agnostic of the implementation. Changes cannot be specific API calls or commands that need to be executed; they must express “what” instead of “how”. This is a declarative model of configuration, rather than the imperative one associated with traditional models of scale.
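
A minimal sketch of that pull-based, declarative premise, with hypothetical event shapes: the environment publishes what changed, and the scaling software pulls those events and works out how to reconfigure itself:

```go
package main

import "fmt"

// Event describes *what* happened, not *how* to reconfigure anything.
type Event struct {
	Kind string // "added" or "removed"
	Addr string
}

// Scaler pulls events and derives its own configuration from them,
// instead of having commands pushed at it.
type Scaler struct{ endpoints map[string]bool }

func (s *Scaler) reconcile(ev Event) {
	switch ev.Kind {
	case "added":
		s.endpoints[ev.Addr] = true
	case "removed":
		delete(s.endpoints, ev.Addr)
	}
	fmt.Printf("reconciled %q: now tracking %d endpoints\n", ev.Addr, len(s.endpoints))
}

func main() {
	// The channel stands in for wherever operational events are published.
	events := make(chan Event, 3)
	events <- Event{Kind: "added", Addr: "10.1.0.4:8080"}
	events <- Event{Kind: "added", Addr: "10.1.0.9:8080"}
	events <- Event{Kind: "removed", Addr: "10.1.0.4:8080"}
	close(events)

	s := &Scaler{endpoints: map[string]bool{}}
	for ev := range events { // pull, don't get pushed to
		s.reconcile(ev)
	}
}
```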

What That Means

These changes have a rather dramatic impact on the flow of traffic on the network.

A traditional model can be closely visualized as a traffic-light system:

• Fixed by configuration

• Restricted directions

• Routes predefined

A service-mesh model, on the other hand, is more akin to modern roundabout systems of managing traffic:

• Dynamic based on real-time conditions

• Paths variable 

• Routes flexible 

The most difficult part of embracing this model (and I say that from personal experience) is how many more moving parts there are in a service mesh, which makes it difficult to trace the data path from one end (the client) to the other (the app). The dependence of service-mesh models on master controllers and service registries also demands additional care and feeding from operators, as those components are as important to routing requests as ARP tables are to routing packets in the core network.

The service mesh model is gaining a lot of interest and traction amongst those adopting containers. It’s up to us on the other side of the wall to understand the changes it imposes on the network, and be prepared to manage it.

Published Sep 14, 2017