Technical Articles
F5 SMEs share good practice.

As apps rapidly shift into public cloud and Kubernetes-based deployments, the path of least resistance for quick migrations is usually the platform-native solution (e.g., Azure's AKS, AWS's EKS, and Google Cloud Platform's GKE). Connectivity between these clusters/microservices, with the appropriate WAF/Firewall/Auth/API Security policies, presents a unique challenge for organizations trying to scale beyond the initial migrations into public clouds and microservices architecture while instituting process rigor.

The east-west bridging of microservices becomes imperative for organizations trying to connect apps deployed in different clouds, using different Kubernetes implementations. Consider a company that has standardized on Azure's AKS and acquires an entity that has standardized on Google's GKE. Platform-native tools on both sides are certainly an option for bridging the microservices, but the ongoing maintenance, inconsistent security posture, and operations and visibility challenges can quickly become an outsized issue.

With F5 Distributed Cloud (F5 XC) secure multi-cloud networking capabilities, an identical security posture can be deployed across different Kubernetes implementations. Service Discovery of services running inside the cluster provides an easy, seamless way to target services without exposing the entire cluster, along with a single point of policy enforcement.

F5 XC provides a construct called a Kubernetes Site, which deploys an XC Cloud Mesh as a pod inside the Kubernetes cluster. Once deployed, the pod establishes two tunnels to the two closest Regional Edges on the F5 XC App Delivery Network (ADN), providing a new transit fabric for communication between the K8s sites. The pods are deployed in the 'ves-system' namespace. Once up and running, they'll look like this:
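Assuming kubectl access to the cluster, the Cloud Mesh pods can also be checked from the command line (pod names and replica counts will vary by deployment):

```shell
# List the XC Cloud Mesh pods deployed by the Kubernetes Site
kubectl get pods -n ves-system
```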


Now that the fabric is created, we can target specific services deployed in each cluster as part of an origin pool. We can create an Origin Pool and select the service we want to target, using its service name. The service name needs to be specified in the <service-name>.<namespace> format:

Note - F5 XC provides a mechanism to discover K8s Site type services using Service Discovery. This is not a requirement for the procedure below, but it provides a convenient way to discover K8s services within XC, making it easy to find service names in different clusters right from within the XC Console.
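As a concrete illustration (service and namespace names here are hypothetical), a Service named frontend in the demo namespace would be entered as frontend.demo. Assuming kubectl access, candidate Service names and their namespaces can be listed with:

```shell
# List Services across all namespaces to find <service-name>.<namespace> targets
kubectl get svc --all-namespaces
```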

Let's create an Origin Pool, starting with an Origin Server.


F5 XC provides two types of origin servers to select from: "K8s Service Name of Origin Server on given Sites" and "DNS Name of Origin Server on given Sites" (shown below).

Both methods accomplish the same outcome.


Once this origin pool is created, attach it to a load balancer (HTTP or TCP) in F5 XC, and select the appropriate WAAP protections (WAF, API, Bot, DoS) to complete the build. 

This is where F5 XC provides great flexibility in service accessibility. For some services, a publicly available endpoint and FQDN is acceptable/needed, while some services need to stay private. Let's review these options: 

Option 1 - Public Endpoint, ADN Transit:

[Diagram: ADN Transit]

An HTTP/TCP load balancer endpoint can be created, using a DNS domain managed by XC and advertised to the Internet. This endpoint is publicly accessible and resolvable, and can be used to target services from anywhere, including from inside other clusters (assuming they can resolve and reach outside the cluster). The diagram above shows an AKS Service talking to a GKE Service, behind a public load balancer on the XC ADN. In this scenario, data plane traffic rides the tunnels from one cluster to the ADN, and back from the ADN to the other cluster. 

When creating the HTTP LB, the FQDN for this endpoint would use a domain managed by XC (Delegated, Primary/Secondary DNS).  
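As a quick sanity check (the domain below is a placeholder), delegation can be confirmed by verifying that the domain's NS records point at the XC-assigned name servers:

```shell
# Show the authoritative name servers for the delegated domain
dig NS example.com +short
```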


F5 XC provides automatic certificate generation when creating a Load Balancer endpoint.


At this point, simply select the origin pool (created above) in the load balancer configuration, and you're done. 
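Once DNS has propagated, the public endpoint can be tested from anywhere (the FQDN below is a placeholder for the domain configured on the load balancer):

```shell
# Verify TLS and routing through the XC load balancer
curl -sv https://app.example.com/
```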



Option 2 - Private Endpoint, ADN Transit:

[Diagram: ADN Transit]

Similar to Option 1, an HTTP/TCP load balancer endpoint can be created on F5 XC without using a managed domain or advertising to the Internet. This load balancer endpoint is selectively advertised to specific sites and can be used to target services only from the sites where the VIP is advertised. The example below shows an AKS Service talking to a GKE Service, behind a private load balancer on the XC ADN. In this scenario, data plane traffic rides the tunnels from one cluster to the ADN, and back from the ADN to the other cluster.

In the load balancer configuration, look for "VIP Advertisement" (under "other settings"). For Option 1, we chose "Internet"; for Option 2, we'll choose "Custom".



Now we can specify which sites we want to advertise this VIP to:



This step creates a 'private' load balancer endpoint that is only accessible to services in the site where it's advertised. Under the hood, XC creates a NodePort Service (with the same name as the VIP) in the site where the service is advertised. An annotation on the Service helps identify it as created by XC.



Note - when creating a private load balancer as shown above, the listed domain should be in the <service-name>.<namespace> format. Using this name, XC will create the NodePort Service in the site where this load balancer is being advertised.
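For instance, if a private load balancer with the domain frontend.demo is advertised to a site (names here are hypothetical), the NodePort Service that XC creates in that cluster can be inspected with:

```shell
# Look for the XC-created Service of type NodePort in the target namespace
kubectl get svc -n demo

# Describe it to see the identifying annotation and the assigned NodePort
kubectl describe svc frontend -n demo
```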




Enabling secure microservices communication, especially across different implementations of K8s (such as AKS, GKE, and EKS), does not have to be a complicated task. Using F5 XC, you can discover and connect microservices right from within the XC Console, with the added benefit of attaching F5's industry-leading WAAP policies to the same endpoints.


Version history
Last update:
‎06-Feb-2023 12:09