BIG-IP Next for Kubernetes CNFs deployment walkthrough
Introduction
The F5 Application Delivery and Security Platform covers different deployment scenarios to help deliver and secure any application anywhere. The BIG-IP Next for Kubernetes CNF architecture aligns with cloud-native principles: it supports the horizontal scaling patterns typical of cloud-native applications, so workloads can expand seamlessly, while preserving the deterministic performance characteristics required for telecom environments and real-time processing.
F5's CNF implementation enables dynamic traffic steering across CNF pods and optimizes resource utilization through intelligent workload distribution.
Lab environment
Below are the lab components:
- Kubernetes cluster with a control plane and two worker nodes.
- TMM is deployed on worker node 1.
- The client connects to the subscriber VLAN via worker node 2.
- Grafana is reachable through the internal network.
This lab walkthrough assumes:
- A working Kubernetes cluster running Red Hat OpenShift
- A local repository is configured.
- Storage is configured, whether local or NFS.
Below is an overview of the installation flow.
- Create the CNF namespaces; in this lab we use cne-core and cnf-fw-01:
lab$ oc create namespace cne-core
lab$ oc create namespace cnf-fw-01
lab$ oc get ns
NAME STATUS AGE
cne-core Active 60d
cnf-fw-01 Active 59d
- Install the Helm charts with the required values for your environment:
lab$ helm install f5-cert-manager oci://repo.f5.com/charts/f5-cert-manager --version 0.23.35-0.0.10 -f cert-manager-values.yaml -n cne-core
lab$ helm install f5-fluentd oci://repo.f5.com/charts/f5-toda-fluentd --version 1.31.30-0.0.7 -f fluentd-values.yaml -n cne-core --wait
lab$ helm install f5-dssm oci://repo.f5.com/charts/f5-dssm --version 1.27.1-0.0.20 -f dssm-values.yaml -n cne-core --wait
lab$ helm install cnf-rabbit oci://repo.f5.com/charts/rabbitmq --version 0.6.1-0.0.13 -f rabbitmq-values.yaml -n cne-core --wait
lab$ helm install cnf-cwc oci://repo.f5.com/charts/cwc --version 0.43.1-0.0.15 -f cwc-values.yaml -n cne-core --wait
lab$ helm install f5ingress oci://repo.f5.com/charts/f5ingress --version v13.7.1-0.3.22 -f values-ingress.yaml -n cnf-fw-01 --wait
lab$ oc get pods -n cne-core
NAME READY STATUS RESTARTS AGE
f5-cert-manager-656b6db84f-t9dhn 2/2 Running 0 3h46m
f5-cert-manager-cainjector-5cd9454d6c-nz46d 1/1 Running 0 3h46m
f5-cert-manager-webhook-6d87b5797b-pmlwv 1/1 Running 0 3h46m
f5-dssm-db-0 3/3 Running 0 3h43m
f5-dssm-db-1 3/3 Running 0 3h42m
f5-dssm-db-2 3/3 Running 0 3h41m
f5-dssm-sentinel-0 3/3 Running 0 3h43m
f5-dssm-sentinel-1 3/3 Running 0 3h42m
f5-dssm-sentinel-2 3/3 Running 0 3h41m
f5-rabbit-64c984d4c6-rjd2d 2/2 Running 0 3h40m
f5-spk-cwc-77d487f955-7vqtl 2/2 Running 0 3h39m
f5-toda-fluentd-558cd5b9bd-9cr6w 1/1 Running 0 3h43m
lab$ oc get pods -n cnf-fw-01
NAME READY STATUS RESTARTS AGE
f5-afm-76c7d76fff-8pj4c 2/2 Running 0 3h37m
f5-downloader-657b7fc749-nzfgt 2/2 Running 0 3h37m
f5-dwbld-d858c485b-c6bmf 2/2 Running 0 3h37m
f5-ipsd-79f97fdb9c-dbsqb 2/2 Running 0 3h37m
f5-tmm-7565b4c798-hvsfd 5/5 Running 0 3h37m
f5-zxfrd-d9db549c4-qqhtr 2/2 Running 0 3h37m
f5ingress-f5ingress-7bcc94b9c8-jcbfg 5/5 Running 0 3h37m
otel-collector-75cd944bcc-fsvz4 1/1 Running 0 3h37m
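Rather than eyeballing the pod listings, readiness of both namespaces can be checked with a short loop. This is a sketch using the standard `oc wait` command; the namespace names match this lab, and the 300-second timeout is an arbitrary choice:

```shell
# Wait for every pod in both CNF namespaces to report Ready (5-minute timeout each)
for ns in cne-core cnf-fw-01; do
  oc wait --for=condition=Ready pods --all -n "$ns" --timeout=300s
done
```

If any pod fails to become Ready, `oc describe pod <name> -n <namespace>` is the usual next step.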
- Deploy the subscriber and external (data) VLANs:
lab$ cat 01-cr-vlan.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "subscriber-vlan"
  namespace: "cnf-fw-01"
spec:
  name: subscriber
  interfaces:
    - "1.2"
  selfip_v4s:
    - 10.1.20.100
    - 10.1.20.101
    - 10.1.20.102
  prefixlen_v4: 24
  mtu: 1500
  cmp_hash: SRC_ADDR
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "data-vlan"
  namespace: "cnf-fw-01"
spec:
  name: data
  interfaces:
    - "1.1"
  selfip_v4s:
    - 10.1.30.100
    - 10.1.30.101
    - 10.1.30.102
  prefixlen_v4: 24
  mtu: 1500
  cmp_hash: SRC_ADDR
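The VLAN CRs can then be applied and verified. The lowercase resource name `f5bignetvlan` is an assumption derived from the CRD kind and may differ slightly in your cluster:

```shell
# Apply the subscriber and data VLAN CRs
oc apply -f 01-cr-vlan.yaml

# List the created CRs (resource name assumed from the F5BigNetVlan kind)
oc get f5bignetvlan -n cnf-fw-01
```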
Now we can start testing from the client side and observe our monitoring system; OTEL is already configured to send data to Grafana.
Now, let’s have a look at our firewall policy CR:
$ cat 06-cr-fw-policy-crd.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnfcop-n6policy"
spec:
  rule:
    - name: deny-53-hsl-log
      ipProtocol: udp
      source:
        addresses:
          - "0.0.0.0/0"
        ports: []
        zones:
          - "subscriber"
      destination:
        addresses:
          - "0.0.0.0/0"
        ports:
          - "53"
        zones:
          - "data"
      action: "drop"
      logging: true
    - name: permit-any-hsl-log
      ipProtocol: any
      source:
        addresses:
          - "0.0.0.0/0"
        ports: []
        zones:
          - "subscriber"
      destination:
        addresses:
          - "0.0.0.0/0"
        ports: []
        zones:
          - "data"
      action: "accept"
      logging: true
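A minimal sketch of applying the policy and exercising it from the client follows. The target address 10.1.30.50 on the data network is a hypothetical lab example, not part of the CR, and the `dig`/`curl` checks simply mirror the two rules above:

```shell
# Apply the firewall policy CR in the CNF namespace
oc apply -f 06-cr-fw-policy-crd.yaml -n cnf-fw-01

# From the client on the subscriber VLAN (10.1.30.50 is a hypothetical server):
# DNS toward the data zone should be dropped by the deny-53-hsl-log rule,
# so this query is expected to time out
dig @10.1.30.50 example.com +timeout=3 +tries=1

# Other traffic matches permit-any-hsl-log and should be accepted
curl -m 5 http://10.1.30.50/
```

Both rules have logging enabled, so each test should produce a corresponding log entry.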
Let’s observe the logs from our monitoring system:
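If Grafana is not at hand, the same log stream can be inspected directly on the collector pod. The deployment name matches the pod listing earlier in this lab, but the exact log format and the `fw` filter string depend on your OTEL pipeline configuration:

```shell
# Tail the OTEL collector in the CNF namespace and filter for firewall entries
oc logs deploy/otel-collector -n cnf-fw-01 --tail=100 | grep -i fw
```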
Conclusion
In conclusion, BIG-IP Next for Kubernetes CNFs optimizes edge environments and AI workloads by providing a consolidated data plane with BIG-IP’s market-leading application delivery and security capabilities across different deployment models.
In this article, we explored a CNF implementation with an example focused on a CNF edge firewall; following articles will cover additional CRs (IPS, DNS, etc.).