
F5 Distributed Cloud Kubernetes Integration: Securing Services with Direct Pod Connectivity

Introduction

As organizations embrace Kubernetes for container orchestration, they face critical challenges in exposing services securely to external consumers while maintaining granular control over traffic management and security policies. Traditional approaches using NodePort services or basic ingress controllers often fall short in providing the advanced application delivery and security features required for production workloads.

F5 Distributed Cloud (F5 XC) addresses these challenges by offering enterprise-grade application delivery and security services through its Customer Edge (CE) nodes. By establishing direct connectivity to Kubernetes pods, F5 XC can provide sophisticated load balancing, WAF protection, API security, and multi-cloud connectivity without the limitations of NodePort-based architectures.

This article demonstrates how to architect and implement F5 XC CE integration with Kubernetes clusters to expose and secure services effectively, covering both managed Kubernetes platforms (AWS EKS, Azure AKS, Google GKE) and self-managed clusters using K3S with Cilium CNI.

Understanding F5 XC Kubernetes Service Discovery

F5 Distributed Cloud includes a native Kubernetes service discovery feature that communicates directly with Kubernetes API servers to retrieve information about services and their associated pods. This capability operates in two distinct modes:

Isolated Mode

In this mode, F5 XC CE nodes are isolated from the Kubernetes cluster pods and can only reach services exposed as NodePort services. While the discovery mechanism can retrieve all services, connectivity is limited to NodePort-exposed endpoints, with the inherent NodePort limitations (a minimal NodePort manifest follows the list):

  • Port Range Restrictions: Limited to ports 30000-32767
  • Security Concerns: Exposes services on all node IPs
  • Performance Overhead: Additional network hops through kube-proxy
  • Limited Load Balancing: Basic round-robin without advanced health checks
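
For illustration, a minimal NodePort service manifest looks like the sketch below (the service name and ports are placeholders; the nodePort value must fall within 30000-32767):

# Illustrative NodePort service: every node exposes port 30080
# and kube-proxy forwards the traffic to the selected pods
apiVersion: v1
kind: Service
metadata:
  name: microbot-nodeport
spec:
  type: NodePort
  selector:
    app: microbot
  ports:
  - port: 80          # service port
    targetPort: 80    # container port
    nodePort: 30080   # must be within 30000-32767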

Non-Isolated Mode, Direct Pod Connectivity (and why it matters)

This is the focus of our implementation. In non-isolated mode, F5 XC CE nodes can reach Kubernetes pods directly using their pod IP addresses. This provides several advantages:

  • Simplified Architecture: Eliminates NodePort complexity and port management limitations
  • Enhanced Security: Apply WAF, DDoS protection, and API security directly at the pod level
  • Advanced Load Balancing: Sophisticated algorithms, circuit breaking, and retry logic

Architectural Patterns for Pod IP Accessibility

To enable direct pod connectivity from external components like F5 XC CEs, the pod IP addresses must be routable outside the Kubernetes cluster. The implementation approach varies based on your infrastructure:

Cloud Provider Managed Kubernetes

Cloud providers typically handle pod IP routing through their native Container Network Interfaces (CNIs):

Figure 1: Cloud providers' K8S CNIs route POD IPs into the cloud provider's private cloud routing table

  • AWS EKS: Uses Amazon VPC CNI, which assigns VPC IP addresses directly to pods
  • Azure AKS: Traditional CNI mode allocates Azure VNET IPs to pods
  • Google GKE: VPC-native clusters provide direct pod IP routing

In these environments, the cloud provider's CNI automatically updates routing tables to make pod IPs accessible within the VPC/VNET.

Self-Managed Kubernetes Clusters

For self-managed clusters, you need an advanced CNI that can expose the Kubernetes overlay network. The most common solutions are:

  • Cilium: Provides eBPF-based networking with BGP support
  • Calico: Offers flexible network policies with BGP peering capabilities, as well as eBPF support

These CNIs typically use BGP to advertise pod subnets to external routers, making them accessible from outside the cluster.

Figure 2: Self-managed K8S clusters use advanced CNI with BGP to expose the overlay subnet

Cloud Provider Implementations

AWS EKS Architecture

Figure 3: AWS EKS with F5 XC CE integration using VPC CNI

With AWS EKS, the VPC CNI plugin assigns real VPC IP addresses to pods, making them directly routable within the VPC without additional configuration.
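
From the cluster side, this can be verified by listing the pods with their IPs; with the VPC CNI they should come from the VPC CIDR rather than an overlay range. A quick sketch:

# The IP column should show addresses taken from the VPC CIDR
kubectl get pods -o wide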

Azure AKS Traditional CNI

Figure 4: Azure AKS with traditional CNI mode for direct pod connectivity

Azure's traditional CNI mode allocates IP addresses from the VNET subnet directly to pods, enabling native Azure networking features.

Google GKE VPC-Native

Figure 5: Google GKE VPC-native clusters with alias IP ranges for pods

GKE's VPC-native mode uses alias IP ranges to provide pods with routable IP addresses within the Google Cloud VPC.
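
For reference, both managed platforms expose this behavior at cluster-creation time. The sketch below uses placeholder names, resource groups, and subnet IDs:

# Azure AKS: Azure CNI assigns VNET IPs directly to pods
az aks create --resource-group my-rg --name my-aks \
  --network-plugin azure --vnet-subnet-id <subnet-id>

# Google GKE: VPC-native cluster using alias IP ranges
gcloud container clusters create my-gke --enable-ip-alias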

Deeper dive into the implementation

Implementation Example 1: AWS EKS Integration

Let's walk through a complete implementation using AWS EKS as our Kubernetes platform.

Prerequisites and Architecture

Network Configuration:

  • VPC CIDR: 10.154.0.0/16
  • Three private subnets (one per availability zone)
  • F5 XC CE deployed in Private Subnet 1
  • EKS worker nodes distributed across all three subnets

Figure 6: Complete EKS implementation architecture with F5 XC CE integration

Kubernetes Configuration:

  • EKS cluster with AWS VPC CNI
  • Sample application: microbot (simple HTTP service)
  • Three replicas distributed across nodes
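
The sample application can be deployed with a manifest along these lines (a minimal sketch; the commonly used dontrebootme/microbot image is assumed here):

# Three microbot replicas spread across the worker nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microbot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: microbot
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1   # returns the POD name and an image
        ports:
        - containerPort: 80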

What is running inside the K8S cluster?

The PODs

We have three PODs in the default namespace.

Figure 7: The running PODs in the EKS cluster

One is running with POD IP 10.154.125.116, another with POD IP 10.154.76.183, and a third with POD IP 10.154.69.183.

The microbot POD is a simple HTTP application that returns the full name of the POD and an image.

Figure 8: The microbot app

The services

Figure 9: The services running in the EKS cluster

Configure F5 XC Kubernetes Service Discovery

Create a K8S service discovery object.

Figure 10: Kubernetes service discovery configuration

In the “Access Credentials” section, activate the “Show Advanced Fields” slider. This is the key!

Figure 11: The "advanced fields" slider

Then provide the Kubeconfig file of the K8S cluster and select “Kubernetes POD reachable”.

Figure 12: Kubernetes POD network reachability

The K8S cluster should then be displayed under “Service Discoveries”.

Figure 13: The discovered POD IPs

One can see that the services are discovered by the F5 XC node and, more interestingly, so are the POD IPs.

Are the pods reachable from the F5 XC CE?

Figure 14: Testing connectivity to pod 10.154.125.116

Figure 15: Testing connectivity to pod 10.154.76.183

Figure 16: Testing connectivity to pod 10.154.69.183

Yes, they are!

Create Origin Pool with K8S Service

Create an origin pool that references your Kubernetes service:

Figure 17: Creating origin pool with Kubernetes service type

Create an HTTPS Load-Balancer and test the service

Just create a regular F5 XC HTTPS Load-Balancer and use the origin pool created above.

Figure 18: Traffic load-balanced across the three PODs

The result shows traffic being load-balanced across all EKS pods.

Implementation Example 2: Self-Managed K3S with Cilium CNI

This example uses one infrastructure subnet (10.154.1.0/24) in which the following components are deployed:

  • F5 XC CE single node (10.154.1.100)
  • Two Linux Ubuntu nodes (10.154.1.10 & 10.154.1.11)

On the Linux Ubuntu nodes, a Kubernetes cluster is going to be deployed using K3S (www.k3s.io) with the following specifications:

  • PODs overlay subnet: 10.160.0.0/16
  • Services overlay subnet: 10.161.0.0/16
  • Default K3S CNI (flannel) will be disabled

The default K3S CNI will be replaced by Cilium to expose the POD overlay subnet directly to the “external world”.

Figure 19: Self-managed K3S cluster with Cilium CNI and BGP peering to F5 XC CE

 

What is running inside the K8S cluster?

The PODs

We have two PODs in the default namespace.

Figure 20: The running PODs in the K8S cluster

One is running on node “k3s-1” with POD IP 10.160.0.203 and the other on node “k3s-2” with POD IP 10.160.1.208.

The microbot POD is a simple HTTP application that returns the full name of the POD and an image.

The services

Figure 21: The services running in the K8S cluster

Two Kubernetes services are created to expose the microbot PODs: one of type ClusterIP and the other of type LoadBalancer. The type of service doesn’t really matter for F5 XC because we are working in a fully routed mode between the CE and the K8S cluster. F5 XC only needs to “know” the POD IPs, which are discovered through the services.
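
As a sketch, the two services could look like the following (names and ports are illustrative; only the selector matters for discovering the POD IPs):

# ClusterIP service
apiVersion: v1
kind: Service
metadata:
  name: microbot-clusterip
spec:
  type: ClusterIP
  selector:
    app: microbot
  ports:
  - port: 80
    targetPort: 80
---
# LoadBalancer service
apiVersion: v1
kind: Service
metadata:
  name: microbot-lb
spec:
  type: LoadBalancer
  selector:
    app: microbot
  ports:
  - port: 80
    targetPort: 80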

Configure F5 XC Kubernetes Service Discovery

The steps are identical to what we did for EKS. Once done, the services and POD IPs are discovered by F5 XC.

Figure 22: The discovered POD IPs

Configure the BGP peering on F5XC CE

In this example topology, BGP peerings are established directly between the K8S nodes and the F5 XC CE. Other implementations are possible, for instance, with an intermediate router.

Figure 23: BGP peerings

Check if the peerings are established.

Figure 24: Verification of the BGP peerings
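
On the cluster side, the same peering state can also be checked with the Cilium CLI:

# Lists the configured BGP peers and their session state per node
cilium bgp peers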

Are the pods reachable from the F5 XC CE?

Figure 25: POD reachability test

They are!

Create Origin Pool with K8S Service

As we did for the EKS configuration, create an origin pool that references your Kubernetes service.

Create an HTTPS Load-Balancer and test the service

Just create a regular F5 XC HTTPS Load-Balancer and use the origin pool created above.

Figure 26: Traffic load-balanced across the two PODs

Scaling up?

Let’s add another POD to the deployment to see how F5 XC handles the load-balancing afterwards.
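
For instance, assuming the deployment is named microbot, scaling to three replicas is a one-liner:

kubectl scale deployment microbot --replicas=3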

Figure 27: Scaling up the Microbot PODs

And it’s working! Load is spread automatically as soon as new POD instances are available for the given service.

Figure 28: Traffic load-balanced across the three PODs

 

Appendix - K3S and Cilium deployment example

Step 1: Install K3S without Default CNI

On the master node:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" \
  INSTALL_K3S_EXEC="--flannel-backend=none \
  --disable-network-policy \
  --disable=traefik \
  --disable=servicelb \
  --cluster-cidr=10.160.0.0/16 \
  --service-cidr=10.161.0.0/16" sh -

# Export kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Get token for worker nodes
sudo cat /var/lib/rancher/k3s/server/node-token

On worker nodes:

IP_MASTER=10.154.1.10
K3S_TOKEN=<token-from-master>
curl -sfL https://get.k3s.io | K3S_URL=https://${IP_MASTER}:6443 K3S_TOKEN=${K3S_TOKEN} sh -
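
Once the agents have joined, the cluster membership can be checked from the master node:

# Both k3s-1 and k3s-2 should show up as Ready
sudo k3s kubectl get nodes -o wide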

 

Step 2: Install and Configure Cilium

On the K3S master node, please perform the following:

Install Helm and Cilium CLI:

# Install Helm
sudo snap install helm --classic

# Download Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin

Install Cilium with BGP support:

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.16.5 \
  --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.160.0.0/16" \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=10.154.1.10 \
  --set k8sServicePort=6443 \
  --set bgpControlPlane.enabled=true \
  --namespace kube-system \
  --set bpf.hostLegacyRouting=false \
  --set bpf.masquerade=true

# Monitor installation
cilium status --wait

 

Step 3: Configure BGP Peering

Label nodes for BGP:

kubectl label nodes k3s-1 bgp=true
kubectl label nodes k3s-2 bgp=true

Create BGP configuration:

# BGP Cluster Config
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      bgp: "true"
  bgpInstances:
  - name: "k3s-instance"
    localASN: 65001
    peers:
    - name: "f5xc-ce"
      peerASN: 65002
      peerAddress: 10.154.1.100
      peerConfigRef:
        name: "cilium-peer"
---
# BGP Peer Config
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
  - afi: ipv4
    safi: unicast
    advertisements:
      matchLabels:
        advertise: "bgp"
---
# BGP Advertisement
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
  - advertisementType: "PodCIDR"
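
Apply and verify the configuration (assuming the three manifests above are saved in a single file named bgp-config.yaml):

kubectl apply -f bgp-config.yaml
kubectl get ciliumbgpclusterconfigs,ciliumbgppeerconfigs,ciliumbgpadvertisements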

 

Published Nov 13, 2025
Version 1.0