F5 Container Ingress Services (CIS) and using k8s traffic policies to send traffic directly to pods
Integrating external ADC and WAF appliances with Kubernetes (k8s) or OpenShift clusters is becoming more and more important: to effectively protect k8s, the appliance needs to see inside the cluster, and this is where CIS comes into play. Service meshes, however, introduce a new challenge!
This article takes a look at how you can use health monitors on the BIG-IP to solve the issue of constant AS3 REST API pool member changes, or to handle a sidecar service mesh like Istio (F5 has its own Istio-based version called Aspen Mesh) or Linkerd. I have also described some possible enhancements for CIS/AS3, Nginx Ingress Controller, and Nginx Gateway Fabric that would be nice to have in the future.
- Intro
- Install Nginx Ingress Open source and CIS
- F5 CIS without Ingress/Gateway
- F5 CIS with Ingress
- F5 CIS with Gateway fabric
- Summary
1. Intro
F5 CIS integrates F5 BIG-IP with Kubernetes or OpenShift clusters. CIS has two modes, NodePort and ClusterIP, and this is well documented at https://clouddocs.f5.com/containers/latest/userguide/config-options.html . There is also a mode called auto, which I prefer, as it knows how to configure the pool members based on the k8s service type (NodePort or ClusterIP).
CIS in ClusterIP mode is generally much better, as you bypass kube-proxy and send traffic directly to the pods, but there can be issues if k8s pods are constantly being scaled up or down, because CIS uses the AS3 REST API to talk to and configure the F5 BIG-IP. I have also seen issues where a bug or a poorly validated config error can bring the entire CIS-to-BIG-IP control channel down, at which point you see 422 errors in both the F5 logs and the CIS logs.
By using NodePort with "externalTrafficPolicy: Local" (and, if there is an ingress, also "internalTrafficPolicy: Local") you can also bypass kube-proxy and send traffic directly to the pods. BIG-IP health monitoring will mark the nodes that don't have pods as down, because the traffic policies prevent nodes that do not have the web application pods from forwarding the traffic to other nodes.
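A quick illustration of the two traffic policy fields (a minimal sketch; the service name "my-app" and the port are placeholders, not from my cluster):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep NodePort traffic on the node it arrived at
  internalTrafficPolicy: Local   # keep ClusterIP traffic on the local node (e.g. when an ingress pod is the client)
  selector:
    app: my-app
  ports:
  - port: 8080
    targetPort: 8080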
2. Install Nginx Ingress Open source and CIS
As I already have the k8s version of nginx and F5 CIS, I need 3 different ingress classes. The k8s nginx (ingress-nginx) is end of life (https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/), so my example also shows how you can run the two nginx versions, the k8s nginx and the F5 nginx, in parallel.
There is a new option to use the Operator Lifecycle Manager (OLM), which, once installed, will install the components. This is an even better way than helm (you can install OLM itself with helm, and this is an even newer way to manage nginx ingress!), but I found it still at an early stage for k8s, while for OpenShift it is much more mature.
I have installed Nginx as a DaemonSet, not a Deployment (I will mention why later on), and I have added a listener config for the F5 TransportServer, even though, as will be seen later, it is not usable at the moment.
helm install -f values.yaml nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress \
--version 2.4.1 \
--namespace f5-nginx \
--set controller.kind=daemonset \
--set controller.image.tag=5.3.1 \
--set controller.ingressClass.name=nginx-nginxinc \
--set controller.ingressClass.create=true \
--set controller.ingressClass.setAsDefaultIngress=false
cat values.yaml
controller:
  enableCustomResources: true
  globalConfiguration:
    create: true
    spec:
      listeners:
      - name: nginx-tcp
        port: 88
        protocol: TCP
kubectl get ingressclasses
NAME CONTROLLER PARAMETERS AGE
f5 f5.com/cntr-ingress-svcs <none> 8d
nginx k8s.io/ingress-nginx <none> 40d
nginx-nginxinc nginx.org/ingress-controller <none> 32s
niki@master-1:~$ kubectl get pods -o wide -n f5-nginx
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-2zbdr 1/1 Running 0 62s 10.10.133.234 worker-2 <none> <none>
nginx-ingress-controller-rrrc9 1/1 Running 0 62s 10.10.226.87 worker-1 <none> <none>
niki@master-1:~$
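To confirm that the helm values created the GlobalConfiguration with the custom TCP listener, you can check the CRD (a quick sketch; the exact resource name depends on the release name, so list first):
kubectl get globalconfigurations -n f5-nginx
kubectl describe globalconfiguration <name-from-above> -n f5-nginx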
The CIS config is shown below. I have used "pool_member_type: auto" as this allows ClusterIP or NodePort services to be used at the same time.
helm install -f values.yaml f5-cis f5-stable/f5-bigip-ctlr
cat values.yaml
bigip_login_secret: f5-bigip-ctlr-login
rbac:
  create: true
serviceAccount:
  create: true
  name:
namespace: f5-cis
args:
  bigip_url: X.X.X.X
  bigip_partition: kubernetes
  log_level: DEBUG
  pool_member_type: auto
  insecure: true
  as3_validation: true
  custom_resource_mode: true
  log-as3-response: true
  load-balancer-class: f5
  manage-load-balancer-class-only: true
  namespaces: [default, test, linkerd-viz, ingress-nginx, f5-nginx]
  # verify-interval: 35
image:
  user: f5networks
  repo: k8s-bigip-ctlr
  pullPolicy: Always
nodeSelector: {}
tolerations: []
livenessProbe: {}
readinessProbe: {}
resources: {}
version: latest
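The "bigip_login_secret" referenced above must exist before CIS starts. A minimal sketch of creating it (the credentials are placeholders; adjust the namespace to where CIS runs):
kubectl create namespace f5-cis
kubectl create secret generic f5-bigip-ctlr-login -n f5-cis \
  --from-literal=username=admin \
  --from-literal=password=<your-bigip-password>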
3. F5 CIS without Ingress/Gateway
Without an Ingress the F5 configuration is actually much simpler, as you just need to create a NodePort service and the VirtualServer CR. The health monitor marks as down the control node and the worker node that do not have a pod from "hello-world-app-new-node", as shown in the F5 picture below.
Sending traffic without Ingresses or Gateways removes one extra hop and avoids sub-optimal traffic patterns: when the Ingress or Gateway runs as a Deployment, for example, there could be 20 nodes but only 2 ingress/gateway pods on 1 node each, so traffic can enter the cluster only through those 2 nodes.
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app-new-node
  labels:
    app: hello-world-app-new-node
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: hello-world-app-new
  type: NodePort
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-hello-new
  namespace: default
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.71"
  virtualServerHTTPPort: 80
  host: www.example.com
  hostGroup: "new"
  snat: auto
  pools:
  - monitor:
      interval: 10
      recv: ""
      send: "GET /"
      timeout: 31
      type: http
    path: /
    service: hello-world-app-new-node
    servicePort: 8080
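A quick way to verify the setup end to end (the address and host come from the VirtualServer CR above; I use the fully qualified CRD name because the Nginx CRDs also define a VirtualServer kind):
kubectl get virtualservers.cis.f5.com -n default
curl -v -H "Host: www.example.com" http://192.168.1.71/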
For Istio and Linkerd integration, an iRule could be needed to send custom ALPN extensions to the backend pods that now have a sidecar. For more information, see my article on Medium: https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1
Keep in mind that with the new Ambient mesh (sidecarless) option, CIS without an Ingress will not work, as F5 does not speak the HBONE (HTTP-Based Overlay Network Environment) protocol. HBONE is sent in an HTTP CONNECT tunnel to inform the ztunnel (the layer 3/4 proxy that starts or terminates the mTLS) about the real source identity (SPIFFE/SPIRE), which may not be the same as the one in the CN/SAN of the client SSL certificate. Maybe in the future there could be a CRD-based option to provide the IP address of an external device like F5 and have the ztunnel proxy terminate the TLS/SSL and send the traffic to the pod (the waypoint layer 7 proxy, usually Envoy, is not needed in this case, as F5 will do the HTTP processing), but for now I see no way to make F5 work directly with Ambient mesh. If the ztunnel took the identity from the client certificate CN/SAN, F5 would not even have to speak HBONE.
4. F5 CIS with Ingress
Why might we need an Ingress just as a gateway into k8s, you may ask? Nowadays a service mesh like Linkerd, Istio, or F5 Aspen Mesh is often used, and the pods talk to each other with mTLS handled by the sidecars. An Ingress, as shown in https://linkerd.io/2-edge/tasks/using-ingress/, is an easy way for the client side to be HTTPS while the server side is the service mesh mTLS. Even ambient mesh works with Ingresses, as it captures traffic after them. From my tests it is possible for F5 to talk directly to Linkerd-injected pods, for example, but it is hard!
I have described this in more detail at https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1
Unfortunately, when there is an Ingress, things are much more complex! F5 has an integration called "IngressLink", but as I recently found out, it is for the case where the BIG-IP does only Layer 3/4 load balancing while the Nginx Ingress Controller does the decryption and the AppProtect WAF also runs on the Nginx (see the DevCentral thread "F5 CIS IngressLink attaching WAF policy on the big-ip through the CRD?").
I wish F5 would make an integration like "IngressLink" but in reverse, where each node has an nginx ingress pod (this can be done with a DaemonSet instead of a Deployment) and Nginx Ingress is the layer 3/4 tier, as the Nginx VirtualServer CRD supports this, and F5 is simply allowed into the k8s cluster.
Below is how this can currently be done. I have created a TransportServer, but it is not used, as it does not at the moment support the "use-cluster-ip" option set to true, which would make Nginx not bypass the service and not go directly to the endpoints. Without it, nodes that have an nginx ingress pod but no application pod send the traffic to other nodes, and we do not want that, as it adds one more layer of load balancing latency and performance impact.
The same applies to the Gateway in section 5: you can have a different gateway per namespace, or a shared one, just like with an Ingress.
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app-new-cluster
  labels:
    app: hello-world-app-new-cluster
spec:
  internalTrafficPolicy: Local
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: hello-world-app-new
  type: ClusterIP
---
apiVersion: k8s.nginx.org/v1
kind: TransportServer
metadata:
  name: nginx-tcp
  annotations:
    nginx.org/use-cluster-ip: "true"
spec:
  listener:
    name: nginx-tcp
    protocol: TCP
  upstreams:
  - name: nginx-tcp
    service: hello-world-app-new-cluster
    port: 8080
  action:
    pass: nginx-tcp
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: nginx-http
spec:
  host: "app.example.com"
  upstreams:
  - name: webapp
    service: hello-world-app-new-cluster
    port: 8080
    use-cluster-ip: true
  routes:
  - path: /
    action:
      pass: webapp
The second part of the configuration is to expose the Ingress to BIG-IP using CIS.
---
apiVersion: v1
kind: Service
metadata:
  name: f5-nginx-ingress-controller
  namespace: f5-nginx
  labels:
    app.kubernetes.io/name: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-hello-ingress
  namespace: f5-nginx
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.81"
  virtualServerHTTPPort: 80
  snat: auto
  pools:
  - monitor:
      interval: 10
      recv: "200"
      send: "GET / HTTP/1.1\r\nHost:app.example.com\r\nConnection: close\r\n\r\n"
      timeout: 31
      type: http
    path: /
    service: f5-nginx-ingress-controller
    servicePort: 80
Only the nodes that have a pod will answer the health monitor.
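You can confirm this on the BIG-IP with tmsh (a sketch; the pool name is generated by CIS, so list the pools in the partition first and then check the members):
tmsh list ltm pool one-line
tmsh show ltm pool <pool-name-from-above> members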
Hopefully F5 can make some integration and CRD that makes this configuration simpler, like "IngressLink", and add the "use-cluster-ip" option to the TransportServer, as Nginx does not need to see the HTTP traffic at all. This is on my wish list for this year 🙂 Also, if AS3 could reference an existing group of nodes with just different ports, CIS would need to push the AS3 declaration of the nodes only once, and the different VirtualServers could then reference it with different ports; this would make the AS3 REST API traffic much smaller.
5. F5 CIS with Gateway fabric
This does not work at the moment, as Gateway Fabric unfortunately does not support the "use-cluster-ip" option. The idea is to deploy Gateway Fabric as a DaemonSet and inject it with a sidecar (or even without one, it will work with ambient meshes). As the k8s world is moving away from Ingress, this will be a good option.
Gateway Fabric natively supports TCP and UDP traffic, and even TLS traffic that is not HTTPS. By exposing Gateway Fabric with a ClusterIP or NodePort service and using different hostnames, Gateway Fabric will select the correct route to send the traffic to (see the TLSRoute sketch after the HTTPRoute example below)!
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f values-gateway.yaml
cat values-gateway.yaml
nginx:
  # Run the data plane per-node
  kind: daemonSet
  # How the data plane gets exposed when you create a Gateway
  service:
    type: NodePort
# (optional) if you're using Gateway API experimental channel features:
nginxGateway:
  gwAPIExperimentalFeatures:
    enable: true
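After the helm install, you can check that the control plane is up and the GatewayClass was registered:
kubectl get pods -n nginx-gateway
kubectl get gatewayclass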
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gw
  namespace: nginx-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app
spec:
  parentRefs:
  - name: shared-gw
    namespace: nginx-gateway
  hostnames:
  - app.example.com
  rules:
  - backendRefs:
    - name: app-svc
      port: 8080
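As mentioned above, Gateway Fabric can also route plain (non-HTTPS) TLS traffic by SNI hostname. Below is a sketch using the experimental-channel TLSRoute; it assumes the Gateway also has a TLS listener with mode Passthrough (my Gateway above only defines an HTTPS listener), and "tcp-app-svc" is a placeholder service:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: tls-passthrough-route
  namespace: app
spec:
  parentRefs:
  - name: shared-gw
    namespace: nginx-gateway
  hostnames:
  - tcp-app.example.com
  rules:
  - backendRefs:
    - name: tcp-app-svc
      port: 8443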
F5 Nginx Gateway Fabric is evolving really fast from what I see, so hopefully we will see the features I mentioned soon, and you can always open a GitHub issue. The documentation is at https://docs.nginx.com/nginx-gateway-fabric and, as this uses k8s CRDs, the full list of options can be seen in the Kubernetes Gateway API documentation (for example the TLS section).
6. Summary
With the release of TMOS 21, F5 now supports many more health monitors and pool members, so this way of deploying CIS with NodePort services may offer benefits with TMOS 21.1, which will be the stable version, as shown in https://techdocs.f5.com/en-us/bigip-21-0-0/big-ip-release-notes/big-ip-new-features.html . With auto mode, some services can still be directly exposed to the BIG-IP, as a CIS config change usually removes a pool member pod faster than a BIG-IP health monitor can mark a node as down.
The new version of CIS, which will be CIS Advanced, may take care of the concerns about hitting a bug or a poorly validated configuration that brings the control channel down, and TMOS 21.1 may also handle AS3 config changes better with fewer CPU/memory issues, so in the future there may be no need for traffic policies, NodePort mode, and k8s services of this type.
For ambient mesh, my examples with an Ingress or a Gateway seem to be the only option for direct communication at the moment.
We will see what the future holds!