F5 Distributed Cloud - Listener Logic

Introduction:

F5 Distributed Cloud is a SaaS-enabled, cloud-agnostic solution that provides application delivery, application security, and networking functions for a customer's ever-growing, complex yarn ball of services spread across multiple service-centers.

Service centers?  The industry-standard name for these types of environments is multi-cloud, or sometimes hybrid-cloud.  To me, the word cloud quickly eliminates data centers, co-location (colo) rack spaces, and hosting facilities.  We still see many customers that haven't made the journey to a public cloud provider but have multiple physical facilities, from their own data centers to a colo data center.  Sometimes customers have just a single cloud provider, maybe a few hosted applications, but still maintain physical data center(s).  I like to refer to all of them as service-centers, as your applications and services are deployed within these different facilities.  F5 is agnostic to the center in which your services live.

For Application Delivery and Security, F5 has always been known for its ability to add value as a proxy layer between the client and the server.  By terminating a client-side connection and establishing a server-side connection, F5 can inject services from L4 through L7 to best optimize the delivery and security of applications.  This article takes a deeper look at the termination of the client-side connection, answering: how does F5 Distributed Cloud pick up request traffic?

Let’s first define the data planes within F5 Distributed Cloud.  F5 Distributed Cloud has two data plane types.  First is the Regional Edge (RE), which is a SaaS data plane.  On the RE, a customer can consume services with no responsibility for maintaining the hardware, software, or scale of the data plane.  These REs are deployed in F5 regional locations.  The second data plane is the Customer Edge (CE).  The CE is owned and managed by the customer.  The CE gives flexibility in where the data plane lives, as these software data planes can be deployed within the customer's data center on their hypervisor, within a public cloud or hyperscaler, on bare metal, or even within Kubernetes (k8s) clusters.

Listener Logic:

In terms of listener logic, there is another critical factor and difference between the RE and CE.  What are the two important elements to establishing a connection?  IP address and port.  For a CE, since it is deployed within the customer's facility, IP addressing is straightforward, as the customer owns it.  However, on an RE, F5 owns the deployment, which means F5 owns the IP addressing.  What options does a customer have?  We’ll revisit those options in a bit.

If you’re familiar with F5’s flagship product, BIG-IP, you’re very familiar with a “Virtual Server”.  A Virtual Server is the most common entry point for a client’s traffic, which is where the BIG-IP’s listener logic begins.  If you remember, in a list of Virtual Servers, the most specific combination of source, destination, and port matching the request will pick up that traffic flow.  Then, from the Virtual Server, you can associate services relevant to the application(s) picked up by that Virtual Server.

There is no difference in F5 Distributed Cloud; however, in F5 Distributed Cloud the listener logic is configured from within a Load Balancer object.  We define this listener logic as an Advertise Policy.  The Advertise Policy is a combination of {IP Address} + {Port} and, depending on the type of load balancer you deploy, L4-L7 information.

Let’s start by taking a closer look at an HTTP Load Balancer.  Remember, HTTP is the application layer (Layer 7) protocol parsed and proxied, but you can utilize SSL/TLS on this same Load Balancer type to achieve HTTPS.  Looking at an HTTP LB deployed on a Regional Edge, you’ll notice the combination of IP and port as part of the advertise policy within the green columns pictured below.

From there, you have some options on how you’d like to configure the Load Balancer to pick up traffic.  You can specify a domain or use a wildcard domain within the configuration, which adds logic to the advertise policy on how to pick up traffic for that specific Load Balancer object.  A specific domain would look like “host.domain.com” and a wildcard would look like “*.domain.com”.  The most specific match determines which LB object is used when client-side traffic requests are received on the IP address and port.  These domains are gathered either from the Server Name Indication (SNI) value of the request when using HTTPS, or from the Host header when using HTTP.

Lastly, there is the concept of a Default Load Balancer, which is used on an HTTPS load balancer for anything that isn’t caught by a more specific domain match.  For example, if you have a wildcard certificate for *.domain.com and you haven’t defined a domain of app3.domain.com on any LB, that traffic would be picked up by the Default Load Balancer.  You could use routes to act on the requests that match the Default Load Balancer.  Note that the Default Load Balancer selection is only available on 1 LB object per Advertise Policy (IP+Port).
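To make that matching order concrete, here is a minimal Python sketch of the selection described above.  The function and data-structure names are hypothetical, not F5's actual implementation; it only illustrates the exact-match, then wildcard, then Default LB precedence for one IP + port combination:

```python
def select_lb(lbs, sni_or_host):
    """Pick the most specific Load Balancer for one (IP, port) advertise policy.

    lbs: list of dicts like {"name": ..., "domains": [...], "default": bool}
    Precedence: exact domain match > wildcard match > Default Load Balancer.
    """
    exact, wildcard, default = None, None, None
    for lb in lbs:
        for domain in lb.get("domains", []):
            if domain == sni_or_host:
                exact = lb
            elif domain.startswith("*.") and sni_or_host.endswith(domain[1:]):
                # "*.domain.com" matches "app.domain.com" but not "domain.com"
                wildcard = lb
        if lb.get("default"):
            default = lb  # only one Default LB is allowed per IP+port
    return exact or wildcard or default

lbs = [
    {"name": "app-b-lb", "domains": ["app-b.domain.com"]},
    {"name": "wild-lb", "domains": ["*.domain.com"]},
    {"name": "catch-all", "domains": [], "default": True},
]
print(select_lb(lbs, "app-b.domain.com")["name"])   # exact match wins
print(select_lb(lbs, "app-x.domain.com")["name"])   # falls to the wildcard
print(select_lb(lbs, "other.example.net")["name"])  # Default LB picks it up
```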

The other Load Balancer type is a TCP Load Balancer, or Layer 4 Load Balancer.  In this case, the IP and port are still involved when advertising at the Regional Edge.  One could argue the IP and port are even more critical for multiple TCP Load Balancers to be unique, as there isn’t much past Layer 4 to match on.  That is an accurate assumption, but in F5 Distributed Cloud, if the service utilizing the TCP Load Balancer supports SNI, we do allow a domain to be configured and matched based on SNI.  The concept of a Default Load Balancer doesn’t exist on a TCP or Layer 4 Load Balancer; instead, an LB that does not match on SNI behaves as a default for any client-side traffic flows entering the Regional Edge that match the IP and port utilized by the Load Balancer.
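The Layer 4 behavior can be sketched the same way (again, hypothetical names, not the platform's internals): an SNI match wins if one is configured, and the TCP LB without SNI domains effectively acts as the default for that IP and port:

```python
def select_tcp_lb(lbs, sni=None):
    """Sketch of TCP/L4 LB selection for one (IP, port) pair.

    If the client presents SNI and a TCP LB has that domain configured,
    it wins; otherwise the LB configured without any SNI domains behaves
    as the de-facto default for the IP + port.
    """
    no_sni_lb = None
    for lb in lbs:
        domains = lb.get("sni_domains", [])
        if sni and sni in domains:
            return lb  # explicit SNI match
        if not domains:
            no_sni_lb = lb  # candidate default: no SNI matching configured
    return no_sni_lb

tcp_lbs = [
    {"name": "db-lb", "sni_domains": ["db.domain.com"]},
    {"name": "plain-tcp-lb", "sni_domains": []},
]
```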

Now that we understand how critical an Advertise Policy is to matching F5 Distributed Cloud Load Balancer objects to traffic flows, let’s take a look at how we can utilize IP addressing to differentiate configuration objects.  Typically, with an HTTP/HTTPS solution, you wouldn’t need many IP addresses to differentiate configuration objects.  However, there are cases where you have multiple Default LBs, or perhaps operationally you like to split out different types of traffic, such as non-production and production.  On the Customer Edge, if you need more IP addresses, you can simply configure them as a custom IP address under the VIP advertisement.  How your CEs attach to your network determines how that IP address is utilized across the CEs.  You can see a previous article of mine that talks through these attachment types - F5 Distributed Cloud - Customer Edge Site - Deployment & Routing Options.  On a Regional Edge, where F5 owns the data plane, how can you get additional IPs to utilize from your tenant for advertisement of Load Balancer objects?

There are three IP types available within a tenant.  First, when you sign up as a customer for F5 Distributed Cloud, we provide you a Default Tenant IP.  As it stands today, this Tenant IP is attached to a global Virtual Site and is advertised out of all Regional Edges.  When building a Load Balancer, to utilize this default tenant IP you’d simply select that you want to advertise to the Internet under VIP Advertisement.

The next two types of IP addresses are similar in that they are additional, or secondary, to the tenant.  You can choose to pay F5 for an Additional Tenant IP address (/32) or addresses, which we provide from our pool of IPs.  There is a cost associated with each additional IP address F5 provides to the customer tenant.  If you have some IP space just lying around, you can also choose to bring your own IP addresses (BYO-IP) to our platform, but they must be a /24 or larger.  We charge per /24 that is brought to our platform; if you bring a contiguous /22, we’d charge for 4x /24s.  This IP space is then advertised out of F5’s global network.  You’ll see your list of IPs under shared public IP information from within the tenant.

Both the Additional Tenant IP and the BYO Tenant IPs can be added to an LB object on a Regional Edge under VIP Advertisement as Internet (Specified VIP).  You can also attach Virtual Sites to this Specified VIP to limit which Regional Edge locations we’ll utilize for traffic.  You can learn more about Virtual Sites and our advertisement in another previous article of mine - F5 Distributed Cloud - Regional Decryption with Virtual Sites.  Below, I have provided an example of the tenant IP (purple) advertised out of all REs, an additional IP advertised out of 3 REs, and an IP from a BYO-IP space advertised out of 2 REs.
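The per-/24 charge for BYO-IP space works out to simple powers of two.  A quick sanity check using Python's standard `ipaddress` module and documentation example prefixes (the function name is my own, not an F5 API):

```python
import ipaddress

def billable_slash24s(cidr: str) -> int:
    """Number of /24s billed for a BYO-IP block (must be a /24 or larger)."""
    net = ipaddress.ip_network(cidr)
    if net.prefixlen > 24:
        raise ValueError("BYO-IP blocks must be a /24 or larger")
    # each bit shorter than /24 doubles the count of contained /24s
    return 2 ** (24 - net.prefixlen)

print(billable_slash24s("203.0.113.0/24"))   # a single /24
print(billable_slash24s("198.51.100.0/22"))  # a contiguous /22 = 4x /24s
```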

Caveats:

There are two caveats today to the listener logic.  First, for an HTTP LB that is truly listening on HTTP port 80, we do not support a Default Load Balancer.  At the time of this article, there is a feature request in to enhance this service.  It is never recommended to use non-encrypted protocols; however, there are use cases where a basic HTTP port 80 listener could be helpful, such as a redirect service.  The other caveat to the platform today is that a Customer Edge (CE) doesn’t support advertise policies that use certain ports.  These ports are specific to the platform and cannot be used for advertise policies associated with Load Balancers.

Examples:

I’d like to give an example of listener logic.  This example provides 3 IP addresses to the tenant: the default tenant IP and two additional tenant IPs.  Note: the IPs in my example are completely random and made up.  For my environment, I have a client with a browser and a local DNS server.  DNS, while indirectly related to listener logic, plays a big role in how a client is provided the addressing for a service.  These example services are a combination of fully qualified domain names (FQDNs) and wildcard listeners.  I’ve tried to visualize a simplified waterfall, or flow-down, approach of per-IP most specific to least specific.  Then, by using a default origin setting or L7 route logic, I can select the proper origin for the traffic that matched the listener logic for a particular Load Balancer object.

#1 - Basic Listener Logic

In this example, very simply, the client has entered app-b.domain.com in the browser with a protocol of HTTPS, and DNS resolves this name to an IP of 78.54.32.11.  When traffic is routed to the 78.54.32.11 IP address, the Regional Edge picks up the traffic, and because the request was for HTTPS, the port requested is 443.  At this point we’ve matched on the basics of the advertise policy.  Now we look at the list of LBs with this Advertise Policy and find the most specific match for app-b.domain.com.  We happen to have an exact match, which proxies to an origin pool object of “origin-b”.  This example is straightforward and basic but gives a good visual of the process.  Let’s look at some other interesting use cases you can solve with additional listener logic.

#2 - Default http-to-https redirect

In F5 Distributed Cloud, for every HTTPS Load Balancer you create, you have the option of checking a box for HTTP to HTTPS redirect.  When you check this box, the platform builds another object on HTTP port 80 for the same domain, but with a route for HTTP to HTTPS and no origin attached.  When using the checkbox, we essentially double the Load Balancer object count.  Another way to handle this is to skip the checkbox and manually create a redirect LB.

In this example, you see the client has requested app-y.domain.com, but over HTTP.  This domain resolves in DNS to IP 85.45.44.32 and would come to the RE with a port request of 80.  Looking at the LBs listening on 80, we have one specific app, app-z.domain.com, but that isn’t what was requested.  For app-y.domain.com, we fall to a wildcard domain defined on a default-80-lb.  On this LB we only have a route defined to always redirect HTTP to HTTPS.  This route responds to the request with a 3xx response code and a location to redirect to HTTPS.

When the client receives this 3xx response code and location, the browser creates a new request for the service over HTTPS.  When the traffic arrives back at the RE, it is received on port 443, where we have a specific app-y.domain.com listener defined.  The associated LB then forwards traffic to the proper origin, “origin-y”.
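The redirect route's behavior can be sketched as follows (a hypothetical helper, not the platform's internals): the port-80 LB never contacts an origin; it answers directly with a 3xx that points the browser at the HTTPS scheme:

```python
def http_to_https_redirect(host: str, path: str) -> dict:
    """Answer a plain-HTTP request with a 301 and an https:// Location.

    No origin is contacted; the Load Balancer responds directly, and the
    browser then re-requests the same host and path on port 443.
    """
    return {
        "status": 301,
        "headers": {"Location": f"https://{host}{path}"},
        "body": "",
    }

resp = http_to_https_redirect("app-y.domain.com", "/login")
print(resp["headers"]["Location"])  # the HTTPS URL the browser retries
```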

#3 - Unused Domains & Direct Response Routes

It is common for an organization to purchase additional domains, such as vanity domains that encompass “typos” or are simply similar enough to the real domain that they want to ensure traffic is still sent to the proper place.  Another situation is where, either by acquisition or merger, domains are no longer in use.  In both situations, you may choose to send a 3xx response with the location to send the traffic to.  However, that can cause additional unintended traffic on the destination, especially from web scraping services, search bots, or other automated services.

In this example, we’re looking for host.domain2.com over HTTPS.  The client resolves the domain to IP 55.16.77.9.  We have a wildcard and default LB for domain2.com.  In this case, imagine domain2 has been abandoned, or maybe it’s new and has never been put into use.  Instead of redirecting via 3xx, we could send a direct response from F5 Distributed Cloud.  This is done via a Direct Response Route, which might say something like “This domain is not in use.”  You could even include a URL within that direct response.  By including the URL instead of a 3xx redirect, you force a human to take action and click the link to continue to a default page.
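A Direct Response Route can be sketched the same way (again, hypothetical names): instead of a Location header that bots follow automatically, the body carries a static message and an optional link a human must click:

```python
def direct_response(message, url=None):
    """Serve a static body from the LB itself - no redirect, no origin.

    Unlike a 3xx, automated clients (scrapers, search bots) won't be
    forwarded anywhere; only a human clicking the link continues on.
    """
    body = message
    if url:
        body += f' If you were looking for our main site, visit <a href="{url}">{url}</a>.'
    return {"status": 200, "headers": {"Content-Type": "text/html"}, "body": body}

resp = direct_response("This domain is not in use.", "https://www.domain.com")
```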

Summary:

F5 Distributed Cloud provides you the tooling to be creative with solutions that best fit your organization.  I set out to help better educate on the functions and capabilities of listener logic in F5 Distributed Cloud.  I hope you’ve found this helpful and that it has given you some ideas of how you can use listener logic, multiple IPs, and creative routing to better satisfy both your organization’s and your customers’ needs.

If you have questions, please post below.  Also be sure to visit additional content on DevCentral at the links below.

Next, I want to learn more about:

  • Link - Regional Edges, Advertise-Policies, & Virtual Sites
  • Link - Regional Edges & Origin Health Monitors
  • Link - Customer Edges (CEs) Deployment & Attachment models
Updated Mar 14, 2024
Version 5.0
