Kubernetes architecture options with F5 Distributed Cloud Services

 

Summary

F5 Distributed Cloud Services (F5 XC) can integrate with your existing Kubernetes (K8s) clusters, host K8s workloads itself, or both. Within these distinctions, we have multiple architecture options. This article explores four major architectures in ascending order of sophistication and advantages.

Kubernetes Architecture Options

As K8s adoption continues to grow, so do the options for running K8s and integrating with existing K8s platforms. F5 XC can integrate with your existing K8s clusters, run a managed K8s platform itself, or both. Multiple architectures exist within these offerings too, so I was thoroughly confused when I first heard about these possibilities.

A colleague recently laid it out for me in a conversation:

"Michael, listen up: XC can either integrate with your K8s platform, run inside your K8s platform, host virtual K8s (Namespace-aaS), or run a K8s platform in your environment."
I replied, "That's great. Now I have a mental model for differentiating between architecture options."

This article will overview these architectures and provide 101-level context: when, how, and why would you implement these options?

Side note 1: F5 XC concepts and terms

F5 XC is a global platform that can provide networking and app delivery services, as well as compute (K8s workloads). We call each of our global PoPs a Regional Edge (RE). REs are highly meshed to form the backbone of the global platform. They connect your sites, they can expose your services to the Internet, and they can run workloads. This platform is extensible into your data center by running one or more XC Nodes in your network, also called a Customer Edge (CE). A CE is a compute node in your network that registers with our global control plane and is then managed by you as SaaS.

The registration of one or more CEs creates a customer site in F5 XC. A CE can run on a hypervisor (VMware/KVM/etc.), a hyperscaler (AWS, Azure, GCP, etc.), bare metal, or even as a K8s pod, and can be deployed in HA clusters.

XC Mesh functionality provides connectivity between sites, security services, and observability. Optionally, in addition, XC App Stack functionality allows a large and arbitrary number of managed clusters to be logically grouped into a virtual site with a single K8s management interface. So where Mesh services provide the networking, App Stack services provide the Kubernetes compute management. Our first two architectures require Mesh services only, and our last two require App Stack.

Side note 2: Service-to-service communication

I'm often asked how to allow services between clusters to communicate with each other. This is possible and easy with XC. Each site can publish services to every other site, including K8s sites. This means that any K8s service can be reachable from other sites you choose. And this can be true in any of the architectures below, although more granular controls are possible with the more sophisticated architectures. I'll explore this common question more in a separate article.

Architecture 1: External Load Balancer (Secure K8s Gateway)

In a Secure Kubernetes Gateway architecture, you integrate with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC. The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. This allows exposing both NodePort and ClusterIP services via the XC node.
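If it helps to visualize the setup, the RBAC behind that ServiceAccount might look roughly like the sketch below. This is a minimal, illustrative sketch only: the exact resources and permissions F5 XC requires are in its documentation, and all names, the namespace, and the resource list here are my assumptions.

```yaml
# Minimal read-only RBAC sketch for external service discovery.
# All names are illustrative; consult F5 XC docs for the required permissions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: xc-discovery
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xc-discovery-read
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "nodes", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: xc-discovery-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: xc-discovery-read
subjects:
  - kind: ServiceAccount
    name: xc-discovery
    namespace: kube-system
```

A kubeconfig built from a token for this ServiceAccount is then uploaded to XC so the node can watch the API server for services to expose.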

 

XC is not hosting any workloads in this architecture, but it is exposing your services to your local network, remote sites, or the Internet. In the diagram above, I show a web application being accessed from a remote site (and/or the Internet) where the origin pool is a NodePort service discovered in a K8s cluster.

Architecture 2: Run a site within a K8s cluster (K8s site type)

Creating a K8s site is easy - just deploy a single manifest found here. This file deploys multiple resources in your cluster, and together these resources work to provide the services of a CE and create a customer site. I've heard this referred to as "running a CE inside of K8s" or "running your CE as a pod". However, when I say "CE node" I'm usually referring to a discrete compute node like a VM or piece of hardware; this architecture is actually a group of pods and related resources that run within K8s to create an XC customer site.

With XC running inside your existing cluster, you can expose services within the cluster by DNS name because the site will resolve these from within the cluster. Your service can then be exposed anywhere by the F5 XC platform. This is similar to Architecture 1 above, but with this model, your site is simply a group of pods within K8s. An advantage here is the ability to expose services of other types (e.g. ClusterIP). 
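For example, a plain ClusterIP Service like the sketch below resolves inside the cluster as web.demo.svc.cluster.local, and an in-cluster site can use that DNS name as an origin. All names here are illustrative:

```yaml
# Illustrative ClusterIP Service; inside the cluster it resolves as
# web.demo.svc.cluster.local, which an in-cluster CE site can resolve
# and use as an origin server for an XC load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```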

A site deployed into a K8s cluster will only support Mesh functionality and does not support App Stack functionality (i.e., you cannot run a cluster within your cluster). In this architecture, XC acts as a K8s ingress controller with built-in application security. It also enables Mesh features, such as publishing other sites' services on this site, and publishing this site's discovered services on other sites.

Architecture 3: vK8s (Namespace-as-a-Service)

If the services you use include App Stack capabilities, then architectures #3 and #4 are possible for you. In these scenarios, our XC node actually runs your workloads on K8s. We are no longer integrating XC with your existing K8s platform. XC is the platform.

A simple way to run K8s workloads is to use a virtual K8s (vK8s) architecture. This could be referred to as a "managed Namespace" because by creating a vK8s object in XC you get a single namespace in a virtual cluster.

Your Namespace can be fully hosted (deployed to REs), run on your VMs (CEs), or both. Your kubeconfig file will allow access to your Namespace via the hosted API server. Via your regular kubectl CLI (or via the web console) you can create/delete/manage K8s resources (Deployments, Services, Secrets, ServiceAccounts, etc.) and view application resource metrics.
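In practice, working in a vK8s Namespace looks like ordinary kubectl work. The sketch below is an illustrative Deployment you might apply with the vK8s kubeconfig (e.g., kubectl --kubeconfig vk8s.yaml apply -f deploy.yaml); the image, names, and filename are assumptions, and I've deliberately used a non-root image given the guard rails that apply to managed Namespaces:

```yaml
# Illustrative Deployment applied against the hosted vK8s API server.
# Uses a non-root image on an unprivileged port, which suits the
# managed-Namespace restrictions described in this article.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginxinc/nginx-unprivileged:stable
          ports:
            - containerPort: 8080
```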

This is great if you have workloads that you want to deploy to remote regions where you do not have infrastructure and would prefer to run in F5's REs, or if you have disparate clusters across multiple sites and you'd like to manage multiple K8s clusters via a single centralized, virtual cluster.

Best practice guard rails for vK8s

With a vK8s architecture, you don't have your own cluster, but rather a managed Namespace. So there are some restrictions (for example, you cannot run a container as root, bind to a privileged port, or bind to the host network). You cannot create CRDs, ClusterRoles, PodSecurityPolicies, or Namespaces, so K8s operators are not supported. In short, you don't have a managed cluster, but a managed Namespace on a virtual cluster.
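In pod-spec terms, staying within those guard rails looks something like this fragment. This is a sketch under the restrictions listed above, not an official policy; the UID, image, and port values are illustrative:

```yaml
# Illustrative pod-spec fragment that respects typical managed-Namespace
# guard rails: non-root user, no privilege escalation, unprivileged port.
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - name: app
      image: example/app:1.0   # illustrative image
      securityContext:
        allowPrivilegeEscalation: false
      ports:
        - containerPort: 8080  # unprivileged port (> 1024)
```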

Architecture 4: mK8s (Managed K8s)

In a managed K8s (mK8s, also known as physical K8s or pK8s) deployment, we have an enterprise-level K8s distribution that is run at your site. This means you can use XC to deploy/manage/upgrade the K8s infrastructure, while you manage the Kubernetes resources. The benefits include what is typical for third-party K8s management solutions, but also some key differentiators:

  • multi-cloud, with automation for Azure, AWS, and GCP environments
  • consumed by you as SaaS
  • enterprise-level traffic control natively
  • allows a large and arbitrary number of managed clusters to be logically managed with a single K8s management interface

You can enable kubectl access against your local cluster and disable the hosted API server, so your kubeconfig file can point to a global URL or a local endpoint on-prem.

Another benefit of mK8s is that you are running a full K8s cluster at your site, not just a Namespace in a virtual cluster. The restrictions that apply to vK8s (see above) do not apply to mK8s, so you could run privileged pods if required, use Operators that make use of ClusterRoles and CRDs, and perform other tasks that require cluster-wide access.

Traffic management controls with mK8s

Because your workloads run in a cluster managed by XC, we can apply more sophisticated, native policies to K8s traffic than we can to the non-managed clusters in the earlier architectures:

  • Service isolation can be enforced within the cluster, so that pods in a given namespace cannot communicate with services outside of that namespace, by default.
  • More service-to-service controls exist so that you can decide, with more granularity, which services can reach which other services.
  • Egress control can be natively enforced for outbound traffic from the cluster, by namespace, labels, IP ranges, or other methods. E.g.: Svc A can reach myapi.example.com but no other Internet service.
  • WAF policies, bot defense, L3/4 policies, and more: all of the policies that you have typically applied with network firewalls, WAFs, etc., can be applied natively within the platform.
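XC enforces these controls natively in the platform, but for readers who think in K8s terms, the default namespace isolation described above is conceptually similar to a default-deny NetworkPolicy that permits only intra-namespace traffic. This is a rough analogue for intuition, not how XC actually implements it, and the namespace name is illustrative:

```yaml
# Conceptual analogue only: restrict pods in the "demo" namespace so they
# accept ingress only from pods in that same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: demo
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # only pods in this same namespace
```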

This architecture took me a long time to understand, and longer to fully appreciate. But once you have run your workloads natively on a managed K8s platform that is connected to a global backbone and capable of performing network and application delivery within the platform, the security and traffic management benefits become very compelling.

Conclusion

As K8s adoption continues to expand, cluster management solutions make it possible to secure your K8s services, whether they are managed by XC or exist in disparate clusters. With F5 XC as a global platform consumed as a service (not a discrete installation managed by you), the architectures presented here are unique and can accommodate the diverse (and changing!) ways we see K8s run today.

Updated Mar 14, 2024
Version 12.0

5 Comments

  • Hi Nikoolayy1 

    Thanks for asking. I am happy to hear you know about CIS and BIG-IP and different CNIs, because that makes understanding the options with XC pretty easy.

    With the Secure Gateway architecture, where the CE is outside of the cluster, you can only expose ClusterIP services if they are reachable from the CE. That means either

    1) you have BGP and are using Calico, OR

    2) maybe your pods are reachable from the CE because you are using EKS, AKS, GKE, etc, where the pods are on the same subnet as the nodes.

    The VXLAN/GENEVE tunnels are not an option at this time. You could also use an ingress controller and expose that via NodePort, but have your backend service be ClusterIP. Foo-Bang_Chan covers some of these options in his article:  Multi-Cluster, Multi-Cloud Networking (MCN) for Kubernetes (see architecture #2 in his article).

    Feel free to email/message/LinkedIn and we can chat too.

    Mike.


  • Hi Nikoolayy1 

    Generally, modern CNIs do provide access to the ClusterIP. What needs to happen is to route the respective subnets to the K8s nodes, and the CNI will handle the routing to the internal pods. OpenShift used to require you to create a VXLAN tunnel to the ClusterIP because it used OpenShiftSDN. With OCP 4.9 (if I am not wrong), Red Hat defaulted the CNI to OVN-Kubernetes, which supports going direct to the ClusterIP. EKS, AKS (Azure CNI), GKE, and Calico are a few of those that I know support going direct to the ClusterIP. AKS (kubenet) doesn't. So, it's CNI-dependent. F5 CIS relies on the CNI to send traffic. You can run cluster mode or tunnel mode.

  • @MichaelOLeary
    In the original article, in Architecture 1, it stated:

    Architecture 1: External Load Balancer (Secure K8s Gateway)

    In a Secure Kubernetes Gateway architecture you have integration with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC. The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. One disadvantage is that this is true for NodePort services only (not ClusterIP).

    However, when it came to the revised version in Oct 2023, this changed to:

    This allows exposing both NodePort and ClusterIP services via the XC node.

    Could you please let me know how we can expose via ClusterIP? Because ClusterIP is internal to the cluster and provides internal connectivity, what am I missing here?

  • adit - you are correct. Not many people know this, but you can actually see previous versions of DevCentral articles. I don't recommend looking through old versions though, because I update my articles if/when they need to be updated due to product updates. Anyway, yes, you can now use NodePort or ClusterIP services with the Secure Gateway architecture!

  • MichaelOLeary  About exposing a service of type ClusterIP using the Secure Gateway: is this done with a CNI like Calico (BGP to pod) or Cilium (VXLAN or GENEVE tunnels), as then something needs to be installed in the Kubernetes cluster? Sorry to ask this, but I became interested in this, and F5 CIS for BIG-IP uses the CNI to expose ClusterIP-type services, and there are not enough articles about this.