F5 Container Ingress Services (CIS) deployment using Cilium CNI and static routes
F5 Container Ingress Services (CIS) supports static route configuration to enable direct routing from F5 BIG-IP to Kubernetes/OpenShift Pods as an alternative to VXLAN tunnels.
Static routes are enabled in the F5 CIS deployment manifest (CLI arguments or Helm values YAML) with the argument --static-routing-mode=true.
In this article, we will use Cilium as the Container Network Interface (CNI) and configure static routes for an NGINX deployment.
For initial configuration of the BIG-IP, including AS3 installation, please see:
https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/installation.html and https://clouddocs.f5.com/containers/latest/userguide/kubernetes/#cis-installation
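As a quick sanity check before proceeding, you can confirm AS3 is installed by querying its info endpoint (assumes admin credentials; replace the placeholders with your environment's values):
curl -sku admin:<password> https://<bigip-mgmt-ip>/mgmt/shared/appsvcs/info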
-
The first step is to install the Cilium CLI on the Linux host, then use it to install the Cilium CNI:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
cilium install --version 1.18.3
Validate the installation and wait for all components to become ready:
root@ciliumk8s-ubuntu-server:~# cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium-envoy             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 1
                       cilium-envoy             Running: 1
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods:          6/6 managed by Cilium
Helm chart version:    1.18.3
Image versions         cilium             quay.io/cilium/cilium:v1.18.3@sha256:5649db451c88d928ea585514746d50d91e6210801b300c897283ea319d68de15: 1
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.34.10-1761014632-c360e8557eb41011dfb5210f8fb53fed6c0b3222@sha256:ca76eb4e9812d114c7f43215a742c00b8bf41200992af0d21b5561d46156fd15: 1
                       cilium-operator    quay.io/cilium/operator-generic:v1.18.3@sha256:b5a0138e1a38e4437c5215257ff4e35373619501f4877dbaf92c89ecfad81797: 1
Run a connectivity test to verify basic cluster networking:
root@ciliumk8s-ubuntu-server:~# cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [default] Creating namespace cilium-test-1 for connectivity check...
✨ [default] Deploying echo-same-node service...
✨ [default] Deploying DNS test server configmap...
✨ [default] Deploying same-node deployment...
✨ [default] Deploying client deployment...
✨ [default] Deploying client2 deployment...
✨ [default] Deploying ccnp deployment...
⌛ [default] Waiting for deployment cilium-test-1/client to become ready...
⌛ [default] Waiting for deployment cilium-test-1/client2 to become ready...
⌛ [default] Waiting for deployment cilium-test-1/echo-same-node to become ready...
⌛ [default] Waiting for deployment cilium-test-ccnp1/client-ccnp to become ready...
⌛ [default] Waiting for deployment cilium-test-ccnp2/client-ccnp to become ready...
⌛ [default] Waiting for pod cilium-test-1/client-645b68dcf7-s5mdb to reach DNS server on cilium-test-1/echo-same-node-f5b8d454c-qkgq9 pod...
⌛ [default] Waiting for pod cilium-test-1/client2-66475877c6-cw7f5 to reach DNS server on cilium-test-1/echo-same-node-f5b8d454c-qkgq9 pod...
⌛ [default] Waiting for pod cilium-test-1/client-645b68dcf7-s5mdb to reach default/kubernetes service...
⌛ [default] Waiting for pod cilium-test-1/client2-66475877c6-cw7f5 to reach default/kubernetes service...
⌛ [default] Waiting for Service cilium-test-1/echo-same-node to become ready...
⌛ [default] Waiting for Service cilium-test-1/echo-same-node to be synchronized by Cilium pod kube-system/cilium-lxjxf
⌛ [default] Waiting for NodePort 10.69.12.2:32046 (cilium-test-1/echo-same-node) to become ready...
🔭 Enabling Hubble telescope...
⚠️ Unable to contact Hubble Relay, disabling Hubble telescope and flow validation: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:4245: connect: connection refused"
ℹ️ Expose Relay locally with:
cilium hubble enable
cilium hubble port-forward&
ℹ️ Cilium version: 1.18.3
🏃[cilium-test-1] Running 126 tests ...
[=] [cilium-test-1] Test [no-policies] [1/126]
....................
[=] [cilium-test-1] Skipping test [no-policies-from-outside] [2/126] (skipped by condition)
[=] [cilium-test-1] Test [no-policies-extra] [3/126]
<- snip ->
-
For this article, we will install k3s and use Cilium as its CNI:
root@ciliumk8s-ubuntu-server:~# curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none --disable-kube-proxy --disable servicelb --disable-network-policy --disable traefik --cluster-init --node-ip=10.69.12.2 --cluster-cidr=10.42.0.0/16
root@ciliumk8s-ubuntu-server:~# mkdir -p $HOME/.kube
root@ciliumk8s-ubuntu-server:~# sudo cp -i /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
root@ciliumk8s-ubuntu-server:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@ciliumk8s-ubuntu-server:~# echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bashrc
root@ciliumk8s-ubuntu-server:~# source $HOME/.bashrc
API_SERVER_IP=10.69.12.2
API_SERVER_PORT=6443
CLUSTER_ID=1
CLUSTER_NAME=`hostname`
POD_CIDR="10.42.0.0/16"
root@ciliumk8s-ubuntu-server:~# cilium install --set cluster.id=${CLUSTER_ID} --set cluster.name=${CLUSTER_NAME} --set k8sServiceHost=${API_SERVER_IP} --set k8sServicePort=${API_SERVER_PORT} --set ipam.operator.clusterPoolIPv4PodCIDRList=$POD_CIDR --set kubeProxyReplacement=true --helm-set=operator.replicas=1
root@ciliumk8s-ubuntu-server:~# cilium config view | grep cluster
bpf-lb-external-clusterip false
cluster-id 1
cluster-name ciliumk8s-ubuntu-server
cluster-pool-ipv4-cidr 10.42.0.0/16
cluster-pool-ipv4-mask-size 24
clustermesh-enable-endpoint-sync false
clustermesh-enable-mcs-api false
ipam cluster-pool
max-connected-clusters 255
policy-default-local-cluster false
root@ciliumk8s-ubuntu-server:~# cilium status --wait
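Because CIS in static routing mode derives Pod subnets from the Kubernetes node object, it is worth checking what CIDR information the node exposes (a quick check; with Cilium cluster-pool IPAM the io.cilium.network.ipv4-pod-cidr annotation may be missing, which is covered under Troubleshooting below):
kubectl get node ciliumk8s-ubuntu-server -o jsonpath='{.spec.podCIDR}{"\n"}'
kubectl get node ciliumk8s-ubuntu-server -o jsonpath='{.metadata.annotations.io\.cilium\.network\.ipv4-pod-cidr}{"\n"}'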
- The F5 CIS values YAML manifest for deployment using Helm is shown below.
- Note that these arguments are required for CIS to leverage static routes (their command-line equivalents are shown after this list):
1. static-routing-mode: true
2. orchestration-cni: cilium-k8s
- We will also be installing custom resources, so this argument is also required:
3. custom-resource-mode: true
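For reference, these settings render as container arguments on the CIS controller (a minimal sketch; flag names per the CIS configuration parameters page, values here are placeholders):
--bigip-url=<bigip-mgmt-ip>
--bigip-partition=kubernetes
--static-routing-mode=true
--orchestration-cni=cilium-k8s
--custom-resource-mode=true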
Values YAML manifest for the Helm deployment:
bigip_login_secret: f5-bigip-ctlr-login
bigip_secret:
  create: false
  username:
  password:
rbac:
  create: true
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: k8s-bigip-ctlr
# This namespace is where the Controller lives;
namespace: kube-system
ingressClass:
  create: true
  ingressClassName: f5
  isDefaultIngressController: true
args:
  # See https://clouddocs.f5.com/containers/latest/userguide/config-parameters.html
  # NOTE: helm has difficulty with values using `-`; `_` are used for naming
  # and are replaced with `-` during rendering.
  # REQUIRED Params
  bigip_url: X.X.X.X
  bigip_partition: <BIG-IP_PARTITION>
  # OPTIONAL PARAMS -- uncomment and provide values for those you wish to use.
  static-routing-mode: true
  orchestration-cni: cilium-k8s
  # verify_interval:
  # node_poll_interval:
  # log_level: DEBUG
  # python_basedir: ~
  # VXLAN
  # openshift_sdn_name:
  # flannel_name: cilium-vxlan
  # KUBERNETES
  # default_ingress_ip:
  # kubeconfig:
  # namespaces: ["foo", "bar"]
  # namespace_label:
  # node_label_selector:
  pool_member_type: cluster
  # resolve_ingress_names:
  # running_in_cluster:
  # use_node_internal:
  # use_secrets:
  insecure: true
  custom-resource-mode: true
  log-as3-response: true
  as3-validation: true
  # gtm-bigip-password
  # gtm-bigip-url
  # gtm-bigip-username
  # ipam: true
image:
  # Use the tag to target a specific version of the Controller
  user: f5networks
  repo: k8s-bigip-ctlr
  pullPolicy: Always
version: latest
# affinity:
#   nodeAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#       nodeSelectorTerms:
#       - matchExpressions:
#         - key: kubernetes.io/arch
#           operator: Exists
# securityContext:
#   runAsUser: 1000
#   runAsGroup: 3000
#   fsGroup: 2000
# If you want to specify resources, uncomment the following
# limits_cpu: 100m
# limits_memory: 512Mi
# requests_cpu: 100m
# requests_memory: 512Mi
# Set podSecurityContext for Pod Security Admission and Pod Security Standards
# podSecurityContext:
#   runAsUser: 1000
#   runAsGroup: 1000
#   privileged: true
- Installation steps for deploying F5 CIS using Helm can be found at this link:
https://clouddocs.f5.com/containers/latest/userguide/kubernetes/
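- As a rough sketch of those steps (the linked guide is authoritative; adjust names and credentials to your environment), the BIG-IP login secret referenced above is created first, then the chart is installed with the values file:
kubectl create secret generic f5-bigip-ctlr-login -n kube-system \
  --from-literal=username=admin --from-literal=password=<bigip-password>
helm repo add f5-stable https://f5networks.github.io/charts/stable
helm install f5-bigip-ctlr f5-stable/f5-bigip-ctlr -f values.yaml -n kube-system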
- Once F5 CIS is validated to be up and running, we can deploy the following example application:
root@ciliumk8s-ubuntu-server:~# cat application.yaml
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  labels:
    f5cr: "true"
  name: goblin-virtual-server
  namespace: nsgoblin
spec:
  host: goblin.com
  pools:
    - path: /green
      service: svc-nodeport
      servicePort: 80
    - path: /harry
      service: svc-nodeport
      servicePort: 80
  virtualServerAddress: X.X.X.X
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goblin-backend
  namespace: nsgoblin
spec:
  replicas: 2
  selector:
    matchLabels:
      app: goblin-backend
  template:
    metadata:
      labels:
        app: goblin-backend
    spec:
      containers:
        - name: goblin-backend
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nodeport
  namespace: nsgoblin
spec:
  selector:
    app: goblin-backend
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
root@ciliumk8s-ubuntu-server:~# k apply -f application.yaml
(Here and below, k is an alias for kubectl.)
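- After applying, confirm CIS admitted the VirtualServer and the Service has endpoints (a quick check; output omitted here):
root@ciliumk8s-ubuntu-server:~# k -n nsgoblin get virtualserver goblin-virtual-server
root@ciliumk8s-ubuntu-server:~# k -n nsgoblin get endpoints svc-nodeport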
- We can now verify the Kubernetes Pods are created.
- Then we will create a sample HTML page in each Pod's NGINX web root to test access to the backend:
root@ciliumk8s-ubuntu-server:~# k -n nsgoblin get po -owide
NAME                              READY   STATUS    RESTARTS   AGE    IP           NODE                      NOMINATED NODE   READINESS GATES
goblin-backend-7485b6dcdf-d5t48   1/1     Running   0          6d2h   10.42.0.70   ciliumk8s-ubuntu-server   <none>           <none>
goblin-backend-7485b6dcdf-pt7hx   1/1     Running   0          6d2h   10.42.0.97   ciliumk8s-ubuntu-server   <none>           <none>
root@ciliumk8s-ubuntu-server:~# k -n nsgoblin exec -it po/goblin-backend-7485b6dcdf-pt7hx -- /bin/sh
# cat > /usr/share/nginx/html/green <<'EOF'
<!DOCTYPE html>
<html>
<head>
<title>Green Goblin</title>
<style>
body { background-color: #4CAF50; color: white; text-align: center; padding: 50px; }
h1 { font-size: 3em; }
</style>
</head>
<body>
<h1>I am the green goblin!</h1>
<p>Access me at /green</p>
</body>
</html>
EOF
root@ciliumk8s-ubuntu-server:~# k -n nsgoblin exec -it goblin-backend-7485b6dcdf-d5t48 -- /bin/sh
# cat > /usr/share/nginx/html/green <<'EOF'
<!DOCTYPE html>
<html>
<head>
<title>Green Goblin</title>
<style>
body { background-color: #4CAF50; color: white; text-align: center; padding: 50px; }
h1 { font-size: 3em; }
</style>
</head>
<body>
<h1>I am the green goblin!</h1>
<p>Access me at /green</p>
</body>
</html>
EOF
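To confirm the page landed where NGINX serves it from (the stock nginx image serves files from /usr/share/nginx/html):
root@ciliumk8s-ubuntu-server:~# k -n nsgoblin exec goblin-backend-7485b6dcdf-d5t48 -- cat /usr/share/nginx/html/green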
- We can now validate the pools are created on the F5 BIG-IP:
root@(ciliumk8s-bigip)(cfg-sync Standalone)(Active)(/kubernetes/Shared)(tmos)# list ltm pool all
ltm pool svc_nodeport_80_nsgoblin_goblin_com_green {
    description "crd_10_69_12_40_80 loadbalances this pool"
    members {
        /kubernetes/10.42.0.70:http {
            address 10.42.0.70
        }
        /kubernetes/10.42.0.97:http {
            address 10.42.0.97
        }
    }
    min-active-members 1
    partition kubernetes
}
ltm pool svc_nodeport_80_nsgoblin_goblin_com_harry {
    description "crd_10_69_12_40_80 loadbalances this pool"
    members {
        /kubernetes/10.42.0.70:http {
            address 10.42.0.70
        }
        /kubernetes/10.42.0.97:http {
            address 10.42.0.97
        }
    }
    min-active-members 1
    partition kubernetes
}
root@(ciliumk8s-bigip)(cfg-sync Standalone)(Active)(/kubernetes/Shared)(tmos)# list ltm virtual crd_10_69_12_40_80
ltm virtual crd_10_69_12_40_80 {
    creation-time 2025-12-22:10:10:37
    description Shared
    destination /kubernetes/10.69.12.40:http
    ip-protocol tcp
    last-modified-time 2025-12-22:10:10:37
    mask 255.255.255.255
    partition kubernetes
    persist {
        /Common/cookie {
            default yes
        }
    }
    policies {
        crd_10_69_12_40_80_goblin_com_policy { }
    }
    profiles {
        /Common/f5-tcp-progressive { }
        /Common/http { }
    }
    serverssl-use-sni disabled
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vs-index 2
}
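To check the runtime status of the discovered pool members, tmsh show can be used (illustrative, not part of the original output; a health monitor must be attached to the pool for members to report as available):
root@(ciliumk8s-bigip)(cfg-sync Standalone)(Active)(/kubernetes/Shared)(tmos)# show ltm pool svc_nodeport_80_nsgoblin_goblin_com_green members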
CIS log output
2025/12/22 18:10:25 [INFO] [Request: 1] cluster local requested CREATE in VIRTUALSERVER nsgoblin/goblin-virtual-server
2025/12/22 18:10:25 [INFO] [Request: 1][AS3] creating a new AS3 manifest
2025/12/22 18:10:25 [INFO] [Request: 1][AS3][BigIP] posting request to https://10.69.12.1 for tenants
2025/12/22 18:10:26 [INFO] [Request: 2] cluster local requested UPDATE in ENDPOINTS nsgoblin/svc-nodeport
2025/12/22 18:10:26 [INFO] [Request: 3] cluster local requested UPDATE in ENDPOINTS nsgoblin/svc-nodeport
2025/12/22 18:10:43 [INFO] [Request: 1][AS3][BigIP] post resulted in SUCCESS
2025/12/22 18:10:43 [INFO] [AS3][POST] SUCCESS: code: 200 --- tenant:kubernetes --- message: success
2025/12/22 18:10:43 [INFO] [Request: 3][AS3] Processing request
2025/12/22 18:10:43 [INFO] [Request: 3][AS3] creating a new AS3 manifest
2025/12/22 18:10:43 [INFO] [Request: 3][AS3][BigIP] posting request to https://10.69.12.1 for tenants
2025/12/22 18:10:43 [INFO] Successfully updated status of VirtualServer:nsgoblin/goblin-virtual-server in Cluster
W1222 18:10:49.238444 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
2025/12/22 18:10:52 [INFO] [Request: 3][AS3][BigIP] post resulted in SUCCESS
2025/12/22 18:10:52 [INFO] [AS3][POST] SUCCESS: code: 200 --- tenant:kubernetes --- message: success
2025/12/22 18:10:52 [INFO] Successfully updated status of VirtualServer:nsgoblin/goblin-virtual-server in Cluster
-
Troubleshooting:
1. If static routes are not added, the first step is to inspect CIS logs for entries similar to these:
Cilium annotation warning logs
2025/12/22 17:44:45 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ?
2025/12/22 17:46:41 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ?
2025/12/22 17:46:42 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ?
2025/12/22 17:46:43 [WARNING] Cilium node podCIDR annotation not found on node ciliumk8s-ubuntu-server, node has spec.podCIDR ?
2. These are resolved by adding annotations to the node, as described in https://clouddocs.f5.com/containers/latest/userguide/static-route-support.html
Cilium annotation for node
root@ciliumk8s-ubuntu-server:~# k annotate node ciliumk8s-ubuntu-server io.cilium.network.ipv4-pod-cidr=10.42.0.0/16
root@ciliumk8s-ubuntu-server:~# k describe node | grep -E "Annotations:|PodCIDR:|^\s+.*pod-cidr"
Annotations:    alpha.kubernetes.io/provided-node-ip: 10.69.12.2
                io.cilium.network.ipv4-pod-cidr: 10.42.0.0/16
PodCIDR:        10.42.0.0/24
3. Verify a static route has been created and test connectivity to k8s pods
root@(ciliumk8s-bigip)(cfg-sync Standalone)(Active)(/kubernetes)(tmos)# list net route
net route k8s-ciliumk8s-ubuntu-server-10.69.12.2 {
    description 10.69.12.1
    gw 10.69.12.2
    network 10.42.0.0/16
    partition kubernetes
}
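With the static route in place, Pod reachability can also be tested directly from the BIG-IP advanced (bash) shell (assuming ICMP is permitted to the Pods; Pod IPs from the earlier pod listing):
ping -c 2 10.42.0.70
ping -c 2 10.42.0.97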
Using pup (a command-line HTML parser) -> https://commandmasters.com/commands/pup-common/
root@ciliumk8s-ubuntu-server:~# curl -s http://goblin.com/green | pup 'body text{}'
I am the green goblin!
Access me at /green
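The packet trace below shows both legs of the proxied connection. On the BIG-IP, a comparable capture could be taken with tcpdump (an illustrative invocation, not necessarily the one used here; interface 0.0 captures across all VLANs):
tcpdump -nni 0.0 host 10.69.12.40 or net 10.42.0.0/16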
1 0.000000 10.69.12.34 → 10.69.12.40 TCP 78 34294 → 80 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM TSval=2984295232 TSecr=0 WS=128
2 0.000045 10.69.12.40 → 10.69.12.34 TCP 78 80 → 34294 [SYN, ACK] Seq=0 Ack=1 Win=23360 Len=0 MSS=1460 WS=512 SACK_PERM TSval=1809316303 TSecr=2984295232
3 0.001134 10.69.12.34 → 10.69.12.40 TCP 70 34294 → 80 [ACK] Seq=1 Ack=1 Win=64256 Len=0 TSval=2984295234 TSecr=1809316303
4 0.001151 10.69.12.34 → 10.69.12.40 HTTP 149 GET /green HTTP/1.1
5 0.001343 10.69.12.40 → 10.69.12.34 TCP 70 80 → 34294 [ACK] Seq=1 Ack=80 Win=23040 Len=0 TSval=1809316304 TSecr=2984295234
6 0.002497 10.69.12.1 → 10.42.0.97 TCP 78 33707 → 80 [SYN] Seq=0 Win=23360 Len=0 MSS=1460 WS=512 SACK_PERM TSval=1809316304 TSecr=0
7 0.003614 10.42.0.97 → 10.69.12.1 TCP 78 80 → 33707 [SYN, ACK] Seq=0 Ack=1 Win=64308 Len=0 MSS=1410 SACK_PERM TSval=1012609408 TSecr=1809316304 WS=128
8 0.003636 10.69.12.1 → 10.42.0.97 TCP 70 33707 → 80 [ACK] Seq=1 Ack=1 Win=23040 Len=0 TSval=1809316307 TSecr=1012609408
9 0.003680 10.69.12.1 → 10.42.0.97 HTTP 149 GET /green HTTP/1.1
10 0.004774 10.42.0.97 → 10.69.12.1 TCP 70 80 → 33707 [ACK] Seq=1 Ack=80 Win=64256 Len=0 TSval=1012609409 TSecr=1809316307
11 0.004790 10.42.0.97 → 10.69.12.1 TCP 323 HTTP/1.1 200 OK [TCP segment of a reassembled PDU]
12 0.004796 10.42.0.97 → 10.69.12.1 HTTP 384 HTTP/1.1 200 OK
13 0.004820 10.69.12.40 → 10.69.12.34 TCP 448 HTTP/1.1 200 OK [TCP segment of a reassembled PDU]
14 0.004838 10.69.12.1 → 10.42.0.97 TCP 70 33707 → 80 [ACK] Seq=80 Ack=254 Win=23552 Len=0 TSval=1809316308 TSecr=1012609410
15 0.004854 10.69.12.40 → 10.69.12.34 HTTP 384 HTTP/1.1 200 OK
Summary:
There we have it: we have successfully deployed an NGINX application on a Kubernetes cluster running Cilium CNI and used F5 CIS in static routing mode to forward traffic from the BIG-IP directly to the Kubernetes Pods.