F5 Distributed Cloud - Customer Edge Site - Deployment & Routing Options

Introduction:

F5 Distributed Cloud’s Customer Edge (CE) software is an incredibly powerful solution for Multi-Cloud Networking, Application Delivery, and Application Security.  The Customer Edge gives you flexibility in how routing is distributed across a multi-cloud fabric, how client-side and server-side connections are handled, and ultimately how highly effective L4-L7 services are injected into a proxy architecture.  Best of all, these powerful data planes are 100% managed from a central control plane, F5 Distributed Cloud’s Console, giving you a single pane of glass for configuration management and observability.

That all sounds wonderful, but… there are a lot of details surrounding the deployment of the CE software.  Details matter.  The options at hand must be thoroughly examined to find the deployment model that best fits your enterprise use case, existing network, cloud architecture(s), performance and scale requirements, day-two operations, and personnel/team expertise.  In this article, I hope to provide an overview of how to attach CEs to your network, how traffic flows from the network to the CEs for the different attachment models, and how CEs can benefit your enterprise.

First, we must understand which environments a CE can be deployed in.  Keep in mind, the CE software is the same software deployed in our Regional Edges (REs).  The difference is that the REs are the SaaS data plane of F5 Distributed Cloud, which F5 maintains and scales on behalf of enterprises consuming services on the REs, whereas the Customer Edge (CE) software is deployed within the enterprise environment.  The CE software can be installed on four different platform types:

  1. Hypervisor - such as VMware // KVM
  2. Hyperscaler - such as AWS // Azure // GCP
  3. Bare metal
  4. Kubernetes

In this article, we will focus on the first three options and leave the Kubernetes attachment for another day, as it is a little different from the other three.  If you’re familiar with F5 Distributed Cloud, you may also know that the CEs have two personas: one is Mesh, for the use cases I mentioned above, and the other is AppStack, which turns the CE into its own k8s cluster that you can bring workloads to.  We will not be focusing on AppStack in this article.

Deployment & Routing Options

As you can see, there are five ways of getting traffic to a Customer Edge site (cluster) and the individual nodes making up that site/cluster.  These five deployment models are grouped into three attachment types: Layer 3 attached (blue), Layer 2 attached (purple), and externally attached (green).  You may have also noticed there are three CE nodes in each of the diagrams.  If a single node is deployed, there is less to think about regarding scale and failover, but these attachments can still be utilized.  To achieve high availability when deploying multiple nodes, F5 Distributed Cloud requires three nodes; the underlying software stack requires three nodes to form a cluster.

Layer 3 Attached:

This is personally my preferred method.  I am a big believer in Equal-Cost Multi-Path routing (ECMP) to establish active/active/active pathing for traffic traversing the F5 Distributed Cloud Customer Edge software.  However, not every environment has routing available, especially dynamic routing via BGP.  This may be due to limitations of the existing network, or the comfort level with routing of the individuals deploying the software.  If you are comfortable with routing, and the environment supports it, this can be a great model for your enterprise.  Both models shown above, static and BGP, support the expansion of a cluster via worker nodes.  These worker nodes provide horizontal scale and additional performance for the site/cluster.
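
To make the ECMP idea concrete, here is a minimal Python sketch, with entirely hypothetical addresses, of how a router typically hashes a flow’s 5-tuple to pin it to one of three equal-cost next hops (our three CE nodes); it is an illustration of the concept, not the CE’s implementation:

  import hashlib

  # Hypothetical addresses: three CE nodes acting as equal-cost next hops.
  CE_NEXT_HOPS = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]

  def ecmp_next_hop(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
      """Hash the flow's 5-tuple to pick one path, the way a typical
      router pins a given flow to a single ECMP next hop."""
      key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{proto}".encode()
      digest = int(hashlib.sha256(key).hexdigest(), 16)
      return CE_NEXT_HOPS[digest % len(CE_NEXT_HOPS)]

  # Different flows spread across the nodes, but every packet within a
  # single flow always takes the same path.
  print(ecmp_next_hop("192.0.2.10", 51000, "10.200.0.1", 443))
  print(ecmp_next_hop("192.0.2.11", 51001, "10.200.0.1", 443))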

When statically routed, depending on your route/switch fabric, you may not get the desired effect.  This could be because of a lack of support for ECMP, or because the route persists even when the network cannot ARP for the next hop, blackholing traffic.  That said, setting up static routing is simple, quick, and takes less network expertise to accomplish.

In the picture below, you’ll notice we’re using custom VIPs associated with four “color” applications/FQDNs.  These custom VIPs act like loopback addresses on a traditional router, as they are locally significant to the F5 Distributed Cloud Customer Edge nodes.  The three static routes configured in the network each use a Customer Edge node’s SLO or SLI interface as the next hop to reach the custom VIP.  Once the connection is established to the custom VIP, the software matches the application on criteria higher in the stack, such as port, SNI, or host information.
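
As a rough illustration only (the addresses are hypothetical and the route format is simplified), the three static routes, and the ARP caveat from the previous paragraph, could be modeled like this:

  # Hypothetical static routes: the same custom VIP reachable via the
  # SLO interface of each of the three CE nodes.
  STATIC_ROUTES = [
      {"prefix": "10.200.0.1/32", "next_hop": "10.0.1.11"},  # ce-node0 SLO
      {"prefix": "10.200.0.1/32", "next_hop": "10.0.1.12"},  # ce-node1 SLO
      {"prefix": "10.200.0.1/32", "next_hop": "10.0.1.13"},  # ce-node2 SLO
  ]

  def usable_routes(arp_reachable):
      """Model the caveat above: a health-aware fabric only keeps routes
      whose next hop still answers ARP; a naive fabric keeps all three
      installed and blackholes a share of traffic when a node dies."""
      return [r for r in STATIC_ROUTES if r["next_hop"] in arp_reachable]

  # ce-node1 is down: two healthy paths remain to the custom VIP.
  print(usable_routes({"10.0.1.11", "10.0.1.13"}))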

The BGP-attached model works exactly the same way, except that it is dynamic in nature.  Each custom VIP is injected into the route/switch fabric as a /32 route.  If a node becomes unavailable for any reason, the routes it advertises are withdrawn from the route/switch fabric automatically.
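
Here is a small sketch of that dynamic behavior, again with hypothetical names and addresses; it is a toy model of route advertisement and withdrawal, not the CE’s actual BGP implementation:

  # Every healthy node advertises each custom VIP as a /32, and a failed
  # node's advertisements disappear when its BGP session drops.
  CUSTOM_VIPS = ["10.200.0.1/32", "10.200.0.2/32"]

  def fabric_routes(healthy_nodes):
      """Return VIP prefix -> list of next hops, as the route/switch
      fabric would see it after BGP convergence."""
      return {vip: sorted(healthy_nodes.values()) for vip in CUSTOM_VIPS}

  nodes = {"ce-node0": "10.0.1.11", "ce-node1": "10.0.1.12", "ce-node2": "10.0.1.13"}
  print(fabric_routes(nodes))   # three ECMP paths per VIP
  del nodes["ce-node1"]         # node fails -> session drops, /32s withdrawn
  print(fabric_routes(nodes))   # two paths per VIP, no manual cleanup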

Layer 2 Attached:

A Layer 2 attached model might be the most common for customers who are familiar with many other network appliances such as firewalls or load balancers, even BIG-IP.  Think of traffic groups and floating IPs in BIG-IP.  These concepts typically utilize a First Hop Redundancy Protocol (FHRP) known as VRRP.  In F5 Distributed Cloud CE software, VRRP utilizes a VIP as a virtual address that is shared among the three nodes.  However, the VIP is only active on one of the nodes, which creates an Active/Standby/Standby topology.  The active node’s MAC address is what is returned during the ARP process for the VIP.  If a different node becomes active, the new active node’s MAC is associated with the VIP, and the broadcast domain is updated via a process called Gratuitous ARP (GARP).
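
A toy Python model of that failover sequence, with hypothetical MACs and addresses, looks like this:

  # The VIP resolves to the active node's MAC; a failover is announced
  # with a gratuitous ARP that rewrites that mapping across the segment.
  NODE_MACS = {
      "node0": "00:50:56:aa:00:00",
      "node1": "00:50:56:aa:00:01",
      "node2": "00:50:56:aa:00:02",
  }
  arp_table = {}  # IP -> MAC, as cached by hosts on the broadcast domain

  def garp(vip, new_active):
      """Gratuitous ARP: the newly active node announces the VIP with
      its own MAC so every host updates its ARP cache."""
      arp_table[vip] = NODE_MACS[new_active]

  garp("10.0.1.100", "node0")   # node0 wins mastership of the VIP
  garp("10.0.1.100", "node2")   # node0 fails; node2 takes over and GARPs
  print(arp_table)              # the VIP now maps to node2's MAC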

Today, in F5 Distributed Cloud, if you’re using VRRP for your attachment, the VIP becomes active on a node at random; we do not expose any priority settings for the VIP.  If you are using multiple custom VIPs with VRRP, each node in the cluster can potentially be active for one or more of the VIPs.  In other words, traffic can actively utilize all the nodes, but each node is only active for specific VIP(s) and the subset of apps associated with those VIP(s).

In our picture below, we again have four color applications, randomly active across the three nodes.  Blue and purple are active on node0; red is active on node1, and green is active on node2.  Take a close look at the ARP table and how the custom VIPs map to the MAC addresses of the physical SLO interfaces.  Lastly, worker nodes participate in VRRP and can be utilized to scale the cluster horizontally.
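
For readers without the diagram handy, the resulting ARP table can be reconstructed roughly like this (the MACs and VIP addresses are hypothetical):

  # Each color VIP resolves to the physical SLO MAC of whichever node is
  # currently VRRP-active for it.
  SLO_MACS = {"node0": "00:50:56:aa:00:00",
              "node1": "00:50:56:aa:00:01",
              "node2": "00:50:56:aa:00:02"}

  arp_table = {
      "10.0.1.101": SLO_MACS["node0"],  # blue   active on node0
      "10.0.1.102": SLO_MACS["node0"],  # purple active on node0
      "10.0.1.103": SLO_MACS["node1"],  # red    active on node1
      "10.0.1.104": SLO_MACS["node2"],  # green  active on node2
  }
  print(arp_table)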

External Attachment:

The two external attachments for scaling services have been around for a very long time and have grown in popularity as enterprises have taken their tooling to the cloud.  When moving tooling from on-prem to cloud, the lack of L2 technologies such as ARP/GARP forced many enterprises to rethink how traffic is routed to/through the tooling.  This tooling includes Firewalls, Next-Generation Firewalls, Proxies, Load Balancers, Web Application Firewalls, API Gateways, Access Proxies and Federation tooling, and so on.

An external Load Balancer (LB) can be deployed as an L4 or L7 load balancer to send traffic to/through the Customer Edge software.  If an L4 LB is chosen, then depending on the LB technology, the source IP will likely be lost.  If L7, you can use headers to maintain the source IP information, but if using TLS, you’ll need to manage certificates at the L7 LB, which may not be operationally efficient for your organization.  We can scale the cluster with worker nodes by adding them to the external LB pool.  In this deployment model, custom VIPs are less necessary, and the SLO or SLI interfaces can be the target.
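
As a simplified sketch of that source IP trade-off, using the common X-Forwarded-For header convention and hypothetical addresses:

  # Behind an L7 LB, the CE sees the LB's IP as the TCP peer; the
  # original client address survives only if the LB inserts it into a
  # header such as X-Forwarded-For.
  def client_ip(peer_ip, headers):
      """Prefer the first X-Forwarded-For entry when an L7 LB is in
      path; fall back to the TCP peer address otherwise."""
      xff = headers.get("X-Forwarded-For")
      if xff:
          return xff.split(",")[0].strip()
      return peer_ip

  print(client_ip("10.0.2.5", {"X-Forwarded-For": "203.0.113.7, 10.0.2.5"}))
  print(client_ip("10.0.2.5", {}))  # plain L4 LB: the real client IP is lost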

Like the inline LB, we can use an out-of-path LB via DNS.  This DNS could be as simple as round-robin A records, or as advanced as Global Server Load Balancing (GSLB), which incorporates configured intelligence and health checking into the logic of which IP is sent in response to a DNS query.  In this model, while health checking is available, traffic flows are still subject to DNS caching and TTLs for failover.  As with the inline LB, worker nodes can be used to scale the cluster, and the node interface IPs can be used as the DNS LB targets.
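
A toy model of that DNS behavior, with hypothetical node IPs and TTL, might look like:

  import random

  # A GSLB-style response: answer only with nodes that pass health
  # checks, while resolvers may keep serving a cached (possibly dead)
  # answer for up to the TTL after a failure.
  NODE_IPS = ["198.51.100.11", "198.51.100.12", "198.51.100.13"]
  TTL_SECONDS = 30

  def resolve(healthy):
      """Return one healthy node IP plus the TTL governing failover lag."""
      candidates = [ip for ip in NODE_IPS if ip in healthy]
      return random.choice(candidates), TTL_SECONDS

  print(resolve({"198.51.100.11", "198.51.100.12", "198.51.100.13"}))
  print(resolve({"198.51.100.11", "198.51.100.13"}))  # .12 failed its check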

Summary:

The F5 Distributed Cloud Customer Edge software is a flexible component of the platform.  The CE takes F5 Distributed Cloud from a pure SaaS solution to a multi-cloud fabric for Application Delivery and Security.  Depending on the architecture of an enterprise’s different “service centers,” such as data centers and clouds, the Customer Edge software can attach to the network in many ways.  Please consult your account team and F5 Distributed Cloud specialist for collaborative details on what may work best for your enterprise’s network and use case(s).
