WILS: Content (Application) Switching is like VLANs for HTTP
We focus a lot on encouraging developers to get more “ops” oriented, but seem to have forgotten that networking pros also need to get more “apps” oriented.
Most networking professionals know their relevant protocols, the ones they work with day in and day out, so well that many of them are able to read a live packet capture without requiring a protocol translation to “plain English”. These folks can narrow down a packet as having come from a specific component by its MAC address because they’ve spent a lot of time analyzing and troubleshooting network issues.
And while these same pros understand load balancing from a traffic-routing, decision-making point of view, because in many ways it is similar to trunking and link aggregation (LAG) – teaming and bonding – things get a bit less clear as we move up the stack. Sure, TCP (layer 4) load balancing makes sense: it’s port and IP based, and there are plenty of ways in which networking protocols can be manipulated and routed based on a combination of the two. But let’s move up to HTTP and Layer 7 load balancing, beyond the simple traffic in -> traffic out decision making that’s associated with simple load balancing algorithms like round robin or its cousins, least connections and fastest response time.
Content – or application – switching is the use of application protocols or data in making a load balancing (application routing) decision. Instead of letting an algorithm decide which pool of servers will service a request, the decision is made by inspecting the HTTP headers and data in the exchange. The simplest and most common case involves using the URI as the basis for a sharding-style scalability domain, in which content is sorted out at the load balancing device and directed to appropriate pools of compute resources.
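To make that concrete, here’s a minimal sketch in Python of the decision a content switch makes. The pool names and URI prefixes are hypothetical, and a real load balancer does this inside its proxy layer; this only illustrates the “pick a pool by URI” logic.

```python
# Minimal sketch of URI-based content switching (hypothetical pools and prefixes).
POOLS = {
    "/images/": ["img-srv-1:8080", "img-srv-2:8080"],   # static content pool
    "/api/":    ["api-srv-1:8080", "api-srv-2:8080"],   # application pool
}
DEFAULT_POOL = ["web-srv-1:8080", "web-srv-2:8080"]

def select_pool(uri):
    """Pick a pool based on the request URI instead of a load balancing algorithm."""
    for prefix, pool in POOLS.items():
        if uri.startswith(prefix):
            return pool
    return DEFAULT_POOL

print(select_pool("/images/logo.png"))  # -> image pool
print(select_pool("/api/v1/orders"))    # -> application pool
```

Note that the decision keys on a URI prefix, not on every possible URI – that “common factor” is what keeps the configuration manageable, as discussed below.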
CONTENT SWITCHING = VLANs for HTTP
A simple diagram shows it’s a fairly trivial configuration and architecture, requiring only that the URIs upon which decisions will be made are known and reduced to a common factor. You wouldn’t want to specify every single possible URI in the configuration; that would be like configuring static routing tables for every IP address in your network. Ugly – and not of the Shrek ugly kind, but the “made for SyFy” horror-flick kind, ugly and painful.
Networking pros would likely never architect a solution that requires that level of routing granularity, as it would negatively impact performance and make any changes behind the switch horribly disruptive. Instead, they’d likely leverage VLANs and VLAN routing to provide the separation of traffic necessary to implement the desired network architecture. When a packet arrives at the switch in question, it may carry a VLAN tag. The switch intercepts the packet, inspects it, and upon finding the VLAN tag routes the packet out the appropriate egress port to the next hop. In this way traffic, users, and applications can be segregated, bandwidth utilization more evenly distributed across a network, and routing tables simplified because they can be based on VLAN ID rather than individual IP addresses, making adds and removals non-disruptive from a network configuration viewpoint.
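The VLAN-side decision looks structurally identical to the content-switching sketch above. Here’s an equally minimal (and equally hypothetical – the VLAN IDs and port names are made up) sketch of a switch keying its forwarding decision on the VLAN tag rather than on every individual destination IP:

```python
# Sketch of VLAN-tag-based forwarding: the decision keys on the VLAN ID
# carried in the frame, not on per-IP routing entries. IDs/ports are illustrative.
VLAN_EGRESS = {
    10: "port-1/0/24",   # e.g. user traffic
    20: "port-1/0/25",   # e.g. application traffic
    30: "port-1/0/26",   # e.g. storage/backup traffic
}

def forward(vlan_id):
    """Return the egress port for a tagged frame; untagged frames use a default."""
    if vlan_id is None:
        return "port-1/0/1"              # native/untagged VLAN
    return VLAN_EGRESS.get(vlan_id, "drop")

print(forward(10))    # -> port-1/0/24
print(forward(None))  # -> default (native VLAN) port
```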
The use of VLAN tagging enables network virtualization in much the same way server virtualization is used: to divvy up physical resources into discrete, virtual packages that can be constrained and more easily managed. Content switching moves into the realm of application virtualization, in which an application is divvied up and distributed across resources as a means to achieve higher efficiency and better performance.
Content (application or layer 7) switching utilizes the same concepts: an HTTP request arrives, the load balancing service intercepts it, inspects the HTTP header (instead of the IP headers) for the URI “tag”, and then routes the request to the appropriate pool (next hop) of resources. Basically, if you treat content switching as though it were VLANs for HTTP, with the “tag” being the HTTP header URI, you’d be right on the money.
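Put side by side, the two sketches above are the same pattern with a different “tag”. A purely illustrative, generic version makes the analogy explicit: extract a tag from the traffic, look it up, send the traffic on.

```python
# The common pattern behind VLAN routing and content switching:
# inspect a tag, look up the next hop, forward.
def dispatch(tag, table, default):
    """Generic 'inspect a tag, look up the destination' decision."""
    return table.get(tag, default)

# VLAN-style: tag = VLAN ID, destination = egress port
print(dispatch(20, {10: "port-24", 20: "port-25"}, "native-port"))

# HTTP-style: tag = URI prefix, destination = server pool
print(dispatch("/api/", {"/images/": "image_pool", "/api/": "app_pool"}, "default_pool"))
```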
WILS: Write It Like Seth. Seth Godin always gets his point across with brevity and wit. WILS is an attempt to be concise about application delivery topics and just get straight to the point. No dilly dallying around.
- WILS: Layer 7 (Protocol) versus Layer 7 (Application)
- What is Network-based Application Virtualization and Why Do You Need It?
- WILS: Three Ways To Better Utilize Resources In Any Data Center
- WILS: The Concise Guide to *-Load Balancing
- WILS: Network Load Balancing versus Application Load Balancing
- Infrastructure Scalability Pattern: Sharding Sessions
- Applying Scalability Patterns to Infrastructure Architecture
- Infrastructure Scalability Pattern: Partition by Function or Type