Securely Connecting Kubernetes Microservices with F5 Distributed Cloud

Introduction

As apps rapidly shift into public cloud and Kubernetes-based deployments, the path of least resistance for quick migrations is usually the platform-native solution (e.g., AKS from Azure, EKS from AWS, and GKE from Google Cloud Platform). Connecting these clusters and microservices, with the appropriate WAF/Firewall/Auth/API Security policies, presents a unique challenge for organizations trying to scale beyond the initial migrations into public clouds and microservices architecture, and to institute process rigor.

East-west bridging of microservices becomes imperative for organizations trying to connect apps deployed in different clouds, using different Kubernetes implementations. Consider a company that has standardized on Azure's AKS and acquires an entity that has standardized on Google's GKE. Platform-native tools on both sides are certainly an option for bridging the microservices, but the ongoing maintenance, inconsistent security posture, and operations and visibility challenges can quickly become an outsized issue.

With F5 Distributed Cloud (F5 XC) secure multi-cloud networking capabilities, an identical security posture can be deployed across different Kubernetes implementations. Service Discovery of services running inside the cluster provides an easy, seamless way to target services without exposing the entire cluster, along with a single point of policy enforcement.

F5 XC provides a construct called a Kubernetes Site, which deploys XC Cloud Mesh as a pod inside the Kubernetes cluster. Once deployed, the pod establishes two tunnels with the two closest Regional Edges on the F5 XC Application Delivery Network (ADN), providing a new transit fabric for communication between the K8s sites. The pods are deployed in the 'ves-system' namespace. Once up and running, they'll look like this:
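From the cluster side, you can see the same thing with kubectl. The output below is an illustrative sketch; actual pod names and counts vary by release:

    # List the Cloud Mesh pods created by the Kubernetes Site registration
    kubectl get pods -n ves-system

    # Illustrative output (names and counts will differ in your deployment):
    # NAME           READY   STATUS    RESTARTS   AGE
    # vp-manager-0   1/1     Running   0          3d
    # ver-0          2/2     Running   0          3d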

Now that the fabric is created, we can target specific services deployed in each cluster as part of an origin pool. We can create an Origin Pool and select the service we want to target, using the service name. The service name needs to be specified in the <service-name>.<namespace> format:
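For example, for a hypothetical Service named 'frontend' in the 'demo' namespace on the GKE cluster:

    # The in-cluster Service we want to expose (all names are hypothetical)
    kubectl get svc frontend -n demo
    # NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    # frontend   ClusterIP   10.8.4.21    <none>        80/TCP    12d

    # In the Origin Pool, this Service would be referenced as:
    #   frontend.demo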

Note: F5 XC provides a mechanism to discover K8s Site type services using Service Discovery. This is not a requirement for the procedure below, but it provides a convenient way to discover K8s services within XC, making it easy to find service names in different clusters, right from within the XC Console.

Let's create an Origin Pool, starting with an Origin Server.

F5 XC provides two types of origin servers to select from: "K8s Service Name of Origin Server on given Sites" and "DNS Name of Origin Server on given Sites" (shown below).

Both methods accomplish the same outcome.
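Sticking with the hypothetical frontend.demo Service, the DNS-name method simply uses the standard in-cluster DNS form, which you can sanity-check from any pod in that cluster:

    # K8s Service Name form:                 frontend.demo
    # DNS name form (standard cluster DNS):  frontend.demo.svc.cluster.local

    # Optional sanity check from inside the GKE cluster:
    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup frontend.demo.svc.cluster.local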

Once this origin pool is created, attach it to a load balancer (HTTP or TCP) in F5 XC, and select the appropriate WAAP protections (WAF, API, Bot, DoS) to complete the build. 

This is where F5 XC provides great flexibility in service accessibility. For some services, a publicly available endpoint and FQDN is acceptable or needed, while others need to stay private. Let's review these options:

Option 1 - Public Endpoint, ADN Transit:

An HTTP/TCP load balancer endpoint can be created using a DNS domain managed by XC and advertised to the Internet. This endpoint is publicly accessible and resolvable, and can be used to target services from anywhere, including from inside other clusters (assuming they can resolve and reach outside the cluster). The diagram above shows an AKS Service talking to a GKE Service behind a public load balancer on the XC ADN. In this scenario, data plane traffic rides the tunnels from one cluster to the ADN, and back from the ADN to the other cluster.

When creating the HTTP LB, the FQDN for this endpoint would use a domain managed by XC (Delegated, Primary/Secondary DNS).  

F5 XC provides automatic certificate generation when creating a load balancer endpoint.

At this point, simply select the origin pool (created above) in the load balancer configuration, and you're done. 
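As a quick smoke test (a sketch; frontend.example.com stands in for whatever FQDN you configured on the load balancer), call the public endpoint from a pod in the consuming cluster:

    # From the AKS cluster, reach the GKE Service via the public XC endpoint
    kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
      curl -sv https://frontend.example.com/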

Option 2 - Private Endpoint, ADN Transit:

Similar to Option 1, an HTTP/TCP load balancer endpoint can be created on F5 XC, without using a managed domain or advertising to the Internet. This load balancer endpoint is selectively advertised to specific sites, and can be used to target services only from the sites where the VIP is advertised. The example below shows an AKS Service talking to a GKE Service behind a private load balancer on the XC ADN. In this scenario, data plane traffic rides the tunnels from one cluster to the ADN, and back from the ADN to the other cluster.

In the load balancer configuration, look for "VIP Advertisement" (under "Other Settings"). For Option 1, we chose "Internet"; for Option 2, we'll choose "Custom".

Now we can specify which sites we want to advertise this VIP to:

This step creates a 'private' load balancer endpoint that is only accessible to services in the site where it's advertised. Under the hood, a NodePort Service (with the same name as the VIP) is created by XC in the site where the service is advertised. The annotation (ves.io/discoveryCreator:xxxxxx) helps identify the service created by XC.

Note: when creating a private load balancer as shown above, the listed domain should be in the <service-name>.<namespace> format. Using this name, XC will create the NodePort Service in the site where this load balancer is being advertised.
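A minimal cluster-side check, again assuming the hypothetical frontend.demo domain from earlier:

    # The NodePort Service that XC creates for the advertised VIP
    kubectl get svc frontend -n demo

    # Dump the annotations and look for ves.io/discoveryCreator
    kubectl get svc frontend -n demo -o jsonpath='{.metadata.annotations}'

    # Consume the private VIP from any pod in this cluster, by the same name
    # (adjust the port if your LB listener isn't on 80)
    kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -n demo -- \
      curl -s http://frontend.demo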

Option 3 - Private Endpoint, Private Transit:

Leveraging the F5 Distributed Cloud global transit is the easiest method, though some situations may necessitate the use of private transit between the two sites. Consider this a separation of control plane and data plane, where data plane traffic can transit a site-to-site IPSec tunnel between CE sites (represented above by the yellow line). Private transit paths for data plane traffic can be chosen without needing to install cumbersome site-to-site VPN appliances/software/licensing in each site. The only additional configurations needed to accomplish this in XC are below:

1) In the XC Console, go to Shared Config --> Virtual Sites --> Add Virtual Site

The creation process looks as follows.

A few key config items to remember/review:

- The site type chosen is CE, since this is a site mesh group of Customer Edges only (specifically, K8s Site type CEs).

- Selector Expressions are what tell a site which other sites to peer with. It helps to have multiple selectors to narrow down the targets (as ves-io-xxx labels can apply to several sites), but a custom key and value are the best way to know exactly which sites will be part of the Site Mesh Group. In the example above, I've used the selector expression "siteowner=k8s-mesh" (the custom key is "siteowner" and the custom value is "k8s-mesh").

Next, we need to create a Site Mesh Group. 

2) Go to Multi-Cloud Network Connect --> Networking --> Site Mesh Groups --> Add Site Mesh Group

This step creates the Site Mesh Group, specifying the virtual site to include in this group and the type of Site Mesh Group. The mesh choices are Full Mesh (as shown below) and Hub and Spoke. Additionally, we can choose between a Data Plane Mesh and a Control and Data Plane Mesh.

Finally, we add the custom label/selector expression to each K8s site that we want to participate in the Site Mesh Group.

3) Go to Multi-Cloud Network Connect --> Sites (under Overview) --> identify the K8s site by name --> click the three dots in the "Actions" column --> Manage Configuration

4) Now click "Edit Configuration" and, in the Labels section, add the selector expression. It should look as follows:

To verify that the Site Mesh tunnel is up, go to Multi-Cloud Network Connect --> Networking --> Topology (tab), and find the Site Mesh Group that was created. It should look something like this:

With this configuration, traffic from one K8s site to another will now prefer the site-to-site tunnel that was set up by the Site Mesh Group, and default to the ADN transit in case the S2S tunnel is not available. Considering multiple K8s clusters, in dissimilar environments with dissimilar transit/ingress/egress options, the potential to simplify cluster-to-cluster communications here is significant.
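Nothing changes for the consuming workloads; a sketch using the same hypothetical names as before:

    # Pods keep targeting the VIP by name, exactly as in Option 2.
    # With the Site Mesh Group tunnel up, traffic takes the direct CE-to-CE
    # path; if that tunnel drops, it falls back to the ADN transit.
    kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -n demo -- \
      curl -s http://frontend.demo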

Conclusion

Enabling secure microservices communication, especially across different implementations of K8s (such as AKS, GKE, and EKS), does not have to be a complicated task. Using F5 XC, you can discover and connect microservices right from within the XC Console, with the added benefit of attaching F5's industry-leading WAAP policies to the same endpoints.
