kubernetes
F5 Container Ingress Services (CIS) and using k8s traffic policies to send traffic directly to pods
This article looks at how you can use health monitors on the BIG-IP to work around constant AS3 REST-API pool member changes, or to integrate when there is a sidecar service mesh such as Istio (F5 offers Aspen Mesh, its distribution of Istio) or Linkerd. I also describe some possible enhancements for CIS/AS3, NGINX Ingress Controller and NGINX Gateway Fabric that would be nice to have in the future.

Contents
1. Intro
2. Install NGINX Ingress open source and CIS
3. F5 CIS without Ingress/Gateway
4. F5 CIS with Ingress
5. F5 CIS with Gateway Fabric
6. Summary

1. Intro

F5 CIS provides integration between F5 BIG-IP and Kubernetes or OpenShift clusters. CIS has two modes, NodePort and ClusterIP, and this is well documented at https://clouddocs.f5.com/containers/latest/userguide/config-options.html. There is also a mode called auto, which I prefer, because it configures the pool members based on whether the Kubernetes service is of type NodePort or ClusterIP.

CIS in ClusterIP mode is generally much better because you bypass kube-proxy and send traffic directly to the pods, but there can be issues if pods are constantly being scaled up or down, since CIS uses the AS3 REST API to talk to and configure the F5 BIG-IP. I have also seen cases where a bug or a configuration error that is not well validated can bring the entire CIS-to-BIG-IP control channel down; you then see 422 errors in the F5 logs and in the CIS logs.

By using NodePort together with "externalTrafficPolicy: Local" (and, if there is an Ingress in the path, also "internalTrafficPolicy: Local") you can also bypass kube-proxy and send traffic directly to the pods. The BIG-IP health monitor will then mark nodes that have no application pods as down, because the traffic policies prevent nodes without the web application pods from forwarding the traffic to other nodes.

2. Install NGINX Ingress open source and CIS

As I already have the Kubernetes community version of nginx and F5 CIS, I need three different Ingress classes. The community ingress-nginx is end of life (https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/), so my example also shows how you can run the two nginx versions, the community one and the F5 one, in parallel. There is a new option to use the Operator Lifecycle Manager (OLM), which installs the components for you and is arguably an even better way than Helm (you can install OLM itself with Helm, which is an even newer way to manage NGINX Ingress!), but I found it still in an early stage for Kubernetes, while for OpenShift it is much more mature. I have installed NGINX Ingress as a DaemonSet rather than a Deployment (I will mention why later), and I have added a listener configuration for the NGINX TransportServer, even though we will see later why it is not usable at the moment.
helm install -f values.yaml nginx-ingress oci://ghcr.io/nginx/charts/nginx-ingress \
  --version 2.4.1 \
  --namespace f5-nginx \
  --set controller.kind=daemonset \
  --set controller.image.tag=5.3.1 \
  --set controller.ingressClass.name=nginx-nginxinc \
  --set controller.ingressClass.create=true \
  --set controller.ingressClass.setAsDefaultIngress=false

cat values.yaml
controller:
  enableCustomResources: true
  globalConfiguration:
    create: true
    spec:
      listeners:
      - name: nginx-tcp
        port: 88
        protocol: TCP

kubectl get ingressclasses
NAME             CONTROLLER                     PARAMETERS   AGE
f5               f5.com/cntr-ingress-svcs       <none>       8d
nginx            k8s.io/ingress-nginx           <none>       40d
nginx-nginxinc   nginx.org/ingress-controller   <none>       32s

niki@master-1:~$ kubectl get pods -o wide -n f5-nginx
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE       NOMINATED NODE   READINESS GATES
nginx-ingress-controller-2zbdr   1/1     Running   0          62s   10.10.133.234   worker-2   <none>           <none>
nginx-ingress-controller-rrrc9   1/1     Running   0          62s   10.10.226.87    worker-1   <none>           <none>
niki@master-1:~$

The CIS configuration is shown below. I have used "pool_member_type: auto", as this allows ClusterIP and NodePort services to be used at the same time.

helm install -f values.yaml f5-cis f5-stable/f5-bigip-ctlr

cat values.yaml
bigip_login_secret: f5-bigip-ctlr-login
rbac:
  create: true
serviceAccount:
  create: true
  name:
namespace: f5-cis
args:
  bigip_url: X.X.X.X
  bigip_partition: kubernetes
  log_level: DEBUG
  pool_member_type: auto
  insecure: true
  as3_validation: true
  custom_resource_mode: true
  log-as3-response: true
  load-balancer-class: f5
  manage-load-balancer-class-only: true
  namespaces: [default, test, linkerd-viz, ingress-nginx, f5-nginx]
  # verify-interval: 35
image:
  user: f5networks
  repo: k8s-bigip-ctlr
  pullPolicy: Always
nodeSelector: {}
tolerations: []
livenessProbe: {}
readinessProbe: {}
resources: {}
version: latest

3. F5 CIS without Ingress/Gateway

Without an Ingress the F5 configuration is actually much simpler, as you only need to create a NodePort service and the VirtualServer CR. As the F5 screenshot below shows, the health monitor marks the control-plane node and the worker node that do not have a pod behind "hello-world-app-new-node" as down. Sending traffic without an Ingress or Gateway also removes one extra hop and avoids sub-optimal traffic patterns: when the Ingress or Gateway runs as a Deployment there could be, for example, 20 nodes but only 2 ingress/gateway pods on 2 of those nodes, and all traffic has to enter the cluster through those 2 nodes.

apiVersion: v1
kind: Service
metadata:
  name: hello-world-app-new-node
  labels:
    app: hello-world-app-new-node
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: hello-world-app-new
  type: NodePort
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-hello-new
  namespace: default
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.71"
  virtualServerHTTPPort: 80
  host: www.example.com
  hostGroup: "new"
  snat: auto
  pools:
  - monitor:
      interval: 10
      recv: ""
      send: "GET /"
      timeout: 31
      type: http
    path: /
    service: hello-world-app-new-node
    servicePort: 8080
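A quick way to confirm which nodes will pass the BIG-IP health monitor is to check where the application pods actually run and which endpoints back the NodePort service. This is only a sketch using the example names above; adjust the labels and service name to your deployment:

kubectl get pods -l app=hello-world-app-new -o wide
kubectl get endpointslices -l kubernetes.io/service-name=hello-world-app-new-node
kubectl get svc hello-world-app-new-node -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'

With "externalTrafficPolicy: Local", only the nodes that show up with local endpoints will answer on the allocated NodePort, which is exactly what the BIG-IP monitor relies on.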
For Istio and Linkerd integration an iRule may be needed to send custom ALPN extensions to the backend pods that now have a sidecar. For more information I suggest my Medium article: https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1

Keep in mind that with the newer Ambient mesh (sidecarless) options, CIS without an Ingress will not work, because F5 does not speak the HBONE (HTTP-Based Overlay Network Environment) protocol that is sent inside the HTTP CONNECT tunnel to inform the ztunnel (the layer 3/4 proxy that initiates or terminates the mTLS) about the real source identity (SPIFFE/SPIRE), which may not be the same as the identity in the CN/SAN of the client SSL certificate. Maybe in the future there could be a CRD-based option to provide the IP address of an external device like F5 so that the ztunnel proxy terminates the TLS/SSL and sends the traffic to the pod (the waypoint layer 7 proxy, usually Envoy, is not needed in this case because F5 will do the HTTP processing), but for now I see no way to make F5 work directly with Ambient mesh. If the ztunnel took the identity from the client certificate CN/SAN, F5 would not even have to speak HBONE.

4. F5 CIS with Ingress

Why might we need an Ingress just as a gateway into Kubernetes, you may ask? Nowadays a service mesh like Linkerd, Istio or F5 Aspen Mesh is often used and the pods talk to each other with mTLS handled by the sidecars; an Ingress, as shown in https://linkerd.io/2-edge/tasks/using-ingress/, is an easy way for the client side to be HTTPS while the server side stays on the service mesh mTLS. Even Ambient mesh works with Ingresses, as it captures traffic after them. From my tests it is possible for F5 to talk to Linkerd-injected pods directly, for example, but it is hard! I have described this in more detail at https://medium.com/@nikoolayy1/connecting-kubernetes-k8s-cluster-to-external-router-using-bgp-with-calico-cni-and-nginx-ingress-2c45ebe493a1

Unfortunately, when there is an Ingress, things are much more complex. F5 has an integration called "IngressLink", but as I recently found out it is designed for the case where BIG-IP does only layer 3/4 load balancing and the NGINX Ingress Controller does the decryption, with App Protect WAF also running on NGINX (see "F5 CIS IngressLink attaching WAF policy on the big-ip through the CRD?" on DevCentral). I wish F5 would make an integration like IngressLink but the reverse: each node runs an NGINX Ingress pod (possible with a DaemonSet instead of a Deployment), NGINX Ingress handles only layer 3/4 (the NGINX VirtualServer CRD supports this), and F5 is simply allowed into the Kubernetes cluster.

Below is how this can currently be done. I have created a TransportServer, but it is not used, because it does not at the moment support setting "use-cluster-ip" to true, which would make NGINX send traffic to the service ClusterIP instead of bypassing the service and going directly to the endpoints. Without that option, nodes that have an NGINX Ingress pod but no application pod would forward the traffic to pods on other nodes, and we do not want that, as it adds one more layer of load balancing with extra latency and performance impact. The gateway is shared: you can have a different gateway per namespace or a shared one, just like with an Ingress.
apiVersion: v1
kind: Service
metadata:
  name: hello-world-app-new-cluster
  labels:
    app: hello-world-app-new-cluster
spec:
  internalTrafficPolicy: Local
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: hello-world-app-new
  type: ClusterIP
---
apiVersion: k8s.nginx.org/v1
kind: TransportServer
metadata:
  name: nginx-tcp
  annotations:
    nginx.org/use-cluster-ip: "true"
spec:
  listener:
    name: nginx-tcp
    protocol: TCP
  upstreams:
  - name: nginx-tcp
    service: hello-world-app-new-cluster
    port: 8080
  action:
    pass: nginx-tcp
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: nginx-http
spec:
  host: "app.example.com"
  upstreams:
  - name: webapp
    service: hello-world-app-new-cluster
    port: 8080
    use-cluster-ip: true
  routes:
  - path: /
    action:
      pass: webapp

The second part of the configuration is to expose the Ingress itself to BIG-IP using CIS.

apiVersion: v1
kind: Service
metadata:
  name: f5-nginx-ingress-controller
  namespace: f5-nginx
  labels:
    app.kubernetes.io/name: nginx-ingress
spec:
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app.kubernetes.io/name: nginx-ingress
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
---
apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: vs-hello-ingress
  namespace: f5-nginx
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "192.168.1.81"
  virtualServerHTTPPort: 80
  snat: auto
  pools:
  - monitor:
      interval: 10
      recv: "200"
      send: "GET / HTTP/1.1\r\nHost:app.example.com\r\nConnection: close\r\n\r\n"
      timeout: 31
      type: http
    path: /
    service: f5-nginx-ingress-controller
    servicePort: 80

Only the nodes that have an NGINX Ingress pod will answer the health monitor. Hopefully F5 can provide an integration and CRD that makes this configuration simpler, like IngressLink, and add the "use-cluster-ip" option to the TransportServer, since NGINX does not need to see the HTTP traffic at all. This is on my wish list for this year. It would also help if AS3 could reference an existing group of nodes with different ports: CIS would then push the AS3 declaration of the nodes only once, the different VirtualServers could reference it with different ports, and the AS3 REST-API traffic would be much smaller.
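To check the full path end to end (client to BIG-IP VIP, to a local NGINX Ingress pod, to a local application pod), a request against the CIS VirtualServer address is enough. This is only a sketch using the example VIP 192.168.1.81 and the host app.example.com from the configuration above:

# Expect an HTTP 200 served through the NGINX Ingress on a node that has an application pod
curl -s -o /dev/null -w '%{http_code}\n' -H "Host: app.example.com" http://192.168.1.81/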
5. F5 CIS with Gateway Fabric

This does not work at the moment, as NGINX Gateway Fabric unfortunately does not support the "use-cluster-ip" option. The idea is to deploy the Gateway Fabric data plane as a DaemonSet and to inject it with a sidecar; even without one it will work with ambient meshes. As the Kubernetes world is moving away from the Ingress, this will be a good option. Gateway Fabric natively supports TCP and UDP traffic and even TLS traffic that is not HTTPS, and by exposing the Gateway Fabric with a ClusterIP or NodePort service, the Gateway Fabric will select the correct route for the traffic based on the different hostnames!

helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric --create-namespace -n nginx-gateway -f values-gateway.yaml

cat values-gateway.yaml
nginx:
  # Run the data plane per-node
  kind: daemonSet
  # How the data plane gets exposed when you create a Gateway
  service:
    type: NodePort
# (optional) if you're using Gateway API experimental channel features:
nginxGateway:
  gwAPIExperimentalFeatures:
    enable: true

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gw
  namespace: nginx-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app
spec:
  parentRefs:
  - name: shared-gw
    namespace: nginx-gateway
  hostnames:
  - app.example.com
  rules:
  - backendRefs:
    - name: app-svc
      port: 8080

F5 NGINX Gateway Fabric is evolving really fast from what I see, so hopefully we will see the features I mentioned soon, and you can always open a GitHub issue. The documentation is at https://docs.nginx.com/nginx-gateway-fabric and, as this uses the Kubernetes Gateway API CRDs, the full options can be seen at "TLS - Kubernetes Gateway API".

6. Summary

With the release of TMOS 21, F5 now supports many more health monitors and pool members, so this way of deploying CIS with NodePort services may offer benefits with TMOS 21.1, which will be the stable version, as shown in https://techdocs.f5.com/en-us/bigip-21-0-0/big-ip-release-notes/big-ip-new-features.html. With auto mode some services can still be exposed directly to BIG-IP, as a CIS configuration change usually removes a pool member pod faster than a BIG-IP health monitor can mark a node as down. The new version of CIS, CIS Advanced, may address the concern of a bug or a poorly validated configuration bringing the control channel down, and TMOS 21.1 may also handle AS3 configuration changes better with fewer CPU/memory issues, so in the future there may be no need for traffic policies, NodePort mode and Kubernetes services of this type. For Ambient mesh, my examples with an Ingress or a Gateway seem to be the only option for direct communication at the moment. We will see what the future holds!

The Ingress NGINX Alternative: F5 NGINX Ingress Controller for the Long Term
The Kubernetes community recently announced that Ingress NGINX will be retired in March 2026. After that date, there won't be any more new updates, bugfixes, or security patches. ingress-nginx is no longer a viable enterprise solution for the long term, and organizations using it in production should move quickly to explore alternatives and plan to shift their workloads to Kubernetes ingress solutions that are continuing development.

Your Options (And Why We Hope You'll Consider NGINX)

There are several good Ingress controllers available: Traefik, HAProxy, Kong, Envoy-based options, and Gateway API implementations. The Kubernetes docs list many of them, and they all have their strengths. Security start-up Chainguard is maintaining a status-quo version of ingress-nginx and applying basic safety patches as part of their EmeritOSS program, but this program is designed as a stopgap to keep users safe while they transition to a different ingress solution.

F5 maintains an OSS, permissively licensed NGINX Ingress Controller. The project is open source, Apache 2.0 licensed, and will stay that way. There is a team of dedicated engineers working on it with a slate of upcoming upgrades. If you're already comfortable with NGINX and just want something that works without a significant learning curve, we believe that the F5 NGINX Ingress Controller for Kubernetes is your smoothest path forward.

The benefits of adopting NGINX Ingress Controller open source include:

- Genuinely open source: Apache 2.0 licensed with 150+ contributors from diverse organizations, not just F5. All development happens publicly on GitHub, and F5 has committed to keeping it open source forever. Plus community calls every 2 weeks.
- Minimal learning curve: Uses the same NGINX engine you already know. Most Ingress NGINX annotations have direct equivalents, and the migration guide provides clear mappings for your existing configurations (see the example after this list). Supported annotations include popular ones such as:
  - nginx.org/client-body-buffer-size mirrors nginx.ingress.kubernetes.io/client-body-buffer-size (sets the maximum size of the client request body buffer); also available in VirtualServer and ConfigMap
  - nginx.org/rewrite-target mirrors nginx.ingress.kubernetes.io/rewrite-target (sets a replacement path for URI rewrites)
  - nginx.org/ssl-ciphers mirrors nginx.ingress.kubernetes.io/ssl-ciphers (configures enabled TLS cipher suites)
  - nginx.org/ssl-prefer-server-cipher mirrors nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers (controls server-side cipher preference during the TLS handshake)
- Optional enterprise-grade capabilities: While the OSS version is robust, NGINX Plus integration is available for enterprises needing high availability, authentication and authorization, session persistence, advanced security, and commercial support.
- Sustainable maintenance: A dedicated full-time team at F5 ensures regular security updates, bug fixes, and feature development.
- Production-tested at scale: NGINX Ingress Controller powers approximately 40% of Kubernetes Ingress deployments with over 10 million downloads. It's battle-tested in real production environments.
- Kubernetes-native design: Custom Resource Definitions (VirtualServer, Policy, TransportServer) provide cleaner configuration than annotation overload, with built-in validation to prevent errors.
- Advanced capabilities when you need them: Support for canary deployments, A/B testing, traffic splitting, JWT validation, rate limiting, mTLS, and more, available in the open source version.
- Future-proof architecture: Active development of NGINX Gateway Fabric provides a clear migration path when you're ready to move to Gateway API. NGINX Gateway Fabric is a conformant Gateway API solution under CNCF conformance criteria and it is one of the most widely used open source Gateway API solutions.
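As an illustration of how close the annotation names are, here is a hypothetical Ingress manifest after the switch; the host, service and class names are placeholders, and only the annotation keys change from the community prefix to nginx.org:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp
  annotations:
    # community ingress-nginx used:
    #   nginx.ingress.kubernetes.io/rewrite-target: "/"
    #   nginx.ingress.kubernetes.io/client-body-buffer-size: "16k"
    nginx.org/rewrite-target: "/"
    nginx.org/client-body-buffer-size: "16k"
spec:
  ingressClassName: nginx   # the IngressClass created for the F5 NGINX Ingress Controller
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80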
Moving to NGINX Ingress Controller

Here's a rough migration guide. You can also check the more detailed migration guide on our documentation site.

Phase 1: Take Stock
- See what you have: Document your current Ingress resources, annotations, and ConfigMaps
- Check for snippets: Identify any annotations like nginx.ingress.kubernetes.io/configuration-snippet
- Confirm you're using it: Run kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
- Set it up alongside: Install NGINX Ingress Controller in a separate namespace while keeping your current setup running

Phase 2: Translate Your Config
- Convert annotations: Most of your existing annotations have equivalents in NGINX Ingress Controller; there's a comprehensive migration guide that maps them out
- Consider VirtualServer resources: These custom resources are cleaner than annotation-heavy Ingress and give you more control, but it's your choice (a minimal example follows this guide)
- Or keep using Ingress: If you want minimal changes, it works fine with standard Kubernetes Ingress resources
- Handle edge cases: For anything that doesn't map directly, you can use snippets or Policy resources

Phase 3: Test Everything
- Try it with test apps: Create some test Ingress rules pointing to NGINX Ingress Controller
- Run both side-by-side: Keep both controllers running and route test traffic through the new one
- Verify functionality: Check routing, SSL, rate limiting, CORS, auth, or whatever else you're using
- Check performance: Verify it handles your traffic the way you need

Phase 4: Move Over Gradually
- Start small: Migrate your less-critical applications first
- Shift traffic slowly: Update DNS/routing bit by bit
- Watch closely: Keep an eye on logs and metrics as you go
- Keep an escape hatch: Make sure you can roll back if something goes wrong

Phase 5: Finish Up
- Complete the migration: Move your remaining workloads
- Clean up the old controller: Uninstall community Ingress NGINX once everything's moved
- Tidy up: Remove old ConfigMaps and resources you don't need anymore
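For Phase 2, a minimal VirtualServer resource equivalent to a simple host-and-path Ingress rule looks roughly like this (a sketch; the host, service name and port are placeholders):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: app.example.com
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp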
Enterprise-grade capabilities and support

Once an ingress layer becomes mission-critical, enterprise features become necessary. High availability, predictable failover, and supportability matter as much as features. Enterprise-grade capabilities available for NGINX Ingress Controller with NGINX Plus include high availability, authentication and authorization, commercial support, and more. These ensure production traffic remains fast, secure, and reliable.

Capabilities include:

Commercial support
- Backed by vendor commercial support (SLAs, escalation paths) for production incidents
- Access to tested releases, patches, and security fixes suitable for regulated/enterprise environments
- Guidance for production architecture (HA patterns, upgrade strategies, performance tuning)
- Helps organizations standardize on a supported ingress layer for platform engineering at scale

Dynamic Reconfiguration
- Upstream configuration updates via API without process reloads
- Eliminates memory bloat and connection timeouts, as upstream server lists and variables are updated in real time when pods scale or configurations change

Authentication & Authorization
- Built-in authentication support for OAuth 2.0 / OIDC, JWT validation, and basic auth
- External identity provider integration (e.g., Okta, Azure AD, Keycloak) via auth request patterns
- JWT validation at the edge, including signature verification, claims inspection, and token expiry enforcement
- Fine-grained access control based on headers, claims, paths, methods, or user identity

Optional Web Application Firewall
- Native integration with F5 WAF for NGINX for OWASP Top 10 protection, gRPC schema validation, and OpenAPI enforcement
- DDoS mitigation capabilities when combined with F5 security solutions
- Centralized policy enforcement across multiple ingress resources

High availability (HA)
- Designed to run as multiple Ingress Controller replicas in Kubernetes for redundancy and scale
- State sharing: maintains session persistence, rate limits, and key-value stores for seamless uptime

Here's the full list of differences between NGINX Open Source and NGINX One, a package that includes NGINX Plus Ingress Controller, NGINX Gateway Fabric, F5 WAF for NGINX, and NGINX One Console for managing NGINX Plus Ingress Controllers at scale.

Get Started Today

Ready to begin your migration? Here's what you need:
- Read the full documentation: NGINX Ingress Controller Docs
- Clone the repository: github.com/nginx/kubernetes-ingress
- Pull the image: Docker Hub - nginx/nginx-ingress
- Follow the migration guide: Migrate from Ingress-NGINX to NGINX Ingress Controller
- Interested in the enterprise version? Try NGINX One for free and give it a whirl

The NGINX Ingress Controller community is responsive and full of passionate builders -- join the conversation in the GitHub Discussions or the NGINX Community Forum. You've got time to plan this migration right, but don't wait until March 2026 to start.

F5 Distributed Cloud Kubernetes Integration: Securing Services with Direct Pod Connectivity
Introduction

As organizations embrace Kubernetes for container orchestration, they face critical challenges in exposing services securely to external consumers while maintaining granular control over traffic management and security policies. Traditional approaches using NodePort services or basic ingress controllers often fall short in providing the advanced application delivery and security features required for production workloads.

F5 Distributed Cloud (F5 XC) addresses these challenges by offering enterprise-grade application delivery and security services through its Customer Edge (CE) nodes. By establishing direct connectivity to Kubernetes pods, F5 XC can provide sophisticated load balancing, WAF protection, API security, and multi-cloud connectivity without the limitations of NodePort-based architectures.

This article demonstrates how to architect and implement F5 XC CE integration with Kubernetes clusters to expose and secure services effectively, covering both managed Kubernetes platforms (AWS EKS, Azure AKS, Google GKE) and self-managed clusters using K3S with Cilium CNI.

Understanding F5 XC Kubernetes Service Discovery

F5 Distributed Cloud includes a native Kubernetes service discovery feature that communicates directly with Kubernetes API servers to retrieve information about services and their associated pods. This capability operates in two distinct modes:

Isolated Mode

In this mode, F5 XC CE nodes are isolated from the Kubernetes cluster pods and can only reach services exposed as NodePort services. While the discovery mechanism can retrieve all services, connectivity is limited to NodePort-exposed endpoints with the inherent NodePort limitations:
- Port Range Restrictions: Limited to ports 30000-32767
- Security Concerns: Exposes services on all node IPs
- Performance Overhead: Additional network hops through kube-proxy
- Limited Load Balancing: Basic round-robin without advanced health checks

Non-Isolated Mode, Direct Pod Connectivity (and why it matters)

This is the focus of our implementation. In non-isolated mode, F5 XC CE nodes can reach Kubernetes pods directly using their pod IP addresses. This provides several advantages:
- Simplified Architecture: Eliminates NodePort complexity and port management limitations
- Enhanced Security: Apply WAF, DDoS protection, and API security directly at the pod level
- Advanced Load Balancing: Sophisticated algorithms, circuit breaking, and retry logic

Architectural Patterns for Pod IP Accessibility

To enable direct pod connectivity from external components like F5 XC CEs, the pod IP addresses must be routable outside the Kubernetes cluster. The implementation approach varies based on your infrastructure.

Cloud Provider Managed Kubernetes

Cloud providers typically handle pod IP routing through their native Container Network Interfaces (CNIs):

Figure 1: Cloud providers' K8S CNI routes pod IPs into the cloud provider's private-cloud routing table

- AWS EKS: Uses Amazon VPC CNI, which assigns VPC IP addresses directly to pods
- Azure AKS: Traditional CNI mode allocates Azure VNET IPs to pods
- Google GKE: VPC-native clusters provide direct pod IP routing

In these environments, the cloud provider's CNI automatically updates routing tables to make pod IPs accessible within the VPC/VNET.
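Before wiring up F5 XC, it is worth sanity-checking that the pod IPs really are VPC/VNET addresses and are reachable from outside the cluster. A minimal check (a sketch; run the ping from any VM in the same VPC/VNET, such as the CE node):

kubectl get pods -o wide    # with AWS VPC CNI, Azure CNI or GKE VPC-native, pod IPs come from the VPC/VNET subnets
ping -c 3 <pod-ip>          # from a VM in the same network, for example the F5 XC CE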
Self-Managed Kubernetes Clusters

For self-managed clusters, you need an advanced CNI that can expose the Kubernetes overlay network. The most common solutions are:
- Cilium: Provides eBPF-based networking with BGP support
- Calico: Offers flexible networking policies with BGP peering capabilities and eBPF support as well

These CNIs typically use BGP to advertise pod subnets to external routers, making them accessible from outside the cluster.

Figure 2: Self-managed K8S clusters use an advanced CNI with BGP to expose the overlay subnet

Cloud Provider Implementations

AWS EKS Architecture

Figure 3: AWS EKS with F5 XC CE integration using VPC CNI

With AWS EKS, the VPC CNI plugin assigns real VPC IP addresses to pods, making them directly routable within the VPC without additional configuration.

Azure AKS Traditional CNI

Figure 4: Azure AKS with traditional CNI mode for direct pod connectivity

Azure's traditional CNI mode allocates IP addresses from the VNET subnet directly to pods, enabling native Azure networking features.

Google GKE VPC-Native

Figure 5: Google GKE VPC-native clusters with alias IP ranges for pods

GKE's VPC-native mode uses alias IP ranges to provide pods with routable IP addresses within the Google Cloud VPC.

Deeper Dive into the Implementation

Implementation Example 1: AWS EKS Integration

Let's walk through a complete implementation using AWS EKS as our Kubernetes platform.

Prerequisites and Architecture

Network Configuration:
- VPC CIDR: 10.154.0.0/16
- Three private subnets (one per availability zone)
- F5 XC CE deployed in Private Subnet 1
- EKS worker nodes distributed across all three subnets

Figure 6: Complete EKS implementation architecture with F5 XC CE integration

Kubernetes Configuration:
- EKS cluster with AWS VPC CNI
- Sample application: microbot (simple HTTP service)
- Three replicas distributed across nodes

What is running inside the K8S cluster?

The PODs

We have three pods in the default namespace.

Figure 7: The running PODs in the EKS cluster

One is running with pod IP 10.154.125.116, another with pod IP 10.154.76.183 and one with pod IP 10.154.69.183. The microbot pod is a simple HTTP application that returns the full name of the pod and an image.

Figure 8: The microbot app

The services

Figure 9: The services running in the EKS cluster

Configure F5 XC Kubernetes Service Discovery

Create a K8S service discovery object.

Figure 10: Kubernetes service discovery configuration

In the "Access Credentials", activate the "Show Advanced Fields" slider. This is the key!

Figure 11: The "advanced fields" slider

Then provide the kubeconfig file of the K8S cluster and select "Kubernetes POD reachable".

Figure 12: Kubernetes POD network reachability

The K8S cluster should then be displayed under "Service Discoveries".

Figure 13: The discovered PODs IPs

One can see that the services are discovered by the F5 XC node and, more interestingly, the pod IPs.

Are the pods reachable from the F5 XC CE?

Figure 14: Testing connectivity to pod 10.154.125.116
Figure 15: Testing connectivity to pod 10.154.76.183
Figure 16: Testing connectivity to pod 10.154.69.183

Yes, they are!

Create Origin Pool with K8S Service

Create an origin pool that references your Kubernetes service:

Figure 17: Creating origin pool with Kubernetes service type

Create an HTTPS Load-Balancer and test the service

Just create a regular F5 XC HTTPS Load-Balancer and use the origin pool created above.

Figure 18: Traffic load-balanced across the three PODs

The result shows traffic being load-balanced across all EKS pods.
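To see this from the client side, repeating a request against the HTTPS Load-Balancer domain shows the responding pod name change between requests. This is only a sketch: the domain microbot.example.com is an assumption (use whatever domain is configured on the Load-Balancer), and the grep pattern relies on microbot returning its pod name in the response body.

for i in $(seq 1 6); do
  curl -sk https://microbot.example.com/ | grep -o 'microbot-[a-z0-9-]*'
done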
Implementation Example 2: Self-Managed K3S with Cilium CNI

One infrastructure subnet (10.154.1.0/24) in which the following components are going to be deployed:
- F5 XC CE single node (10.154.1.100)
- Two Linux Ubuntu nodes (10.154.1.10 & 10.154.1.11)

On the Linux Ubuntu nodes, a Kubernetes cluster is going to be deployed using K3S (www.k3s.io) with the following specifications:
- PODs overlay subnet: 10.160.0.0/16
- Services overlay subnet: 10.161.0.0/16
- Default K3S CNI (flannel) will be disabled
- The K3S CNI will be replaced by Cilium to expose the PODs overlay subnet directly to the "external world"

Figure 19: Self-managed K3S cluster with Cilium CNI and BGP peering to F5 XC CE

What is running inside the K8S cluster?

The PODs

We have two pods in the default namespace.

Figure 20: The running PODs in the K8S cluster

One is running on node "k3s-1" with pod IP 10.160.0.203 and the other on node "k3s-2" with pod IP 10.160.1.208. The microbot pod is a simple HTTP application that returns the full name of the pod and an image.

The services

Figure 21: The services running in the K8S cluster

Different Kubernetes services are created to expose the microbot pods, one of type ClusterIP and the other of type LoadBalancer. The type of service doesn't really matter for F5 XC, because we are working in a fully routed mode between the CE and the K8S cluster. F5 XC only needs to "know" the pod IPs, which are discovered through the services.

Configure F5 XC Kubernetes Service Discovery

The steps are identical to what we did for EKS. Once done, services and pod IPs are discovered by F5 XC.

Figure 22: The discovered PODs IPs

Configure the BGP peering on the F5 XC CE

In this example topology, BGP peerings are established directly between the K8S nodes and the F5 XC CE. Other implementations are possible, for instance with an intermediate router.

Figure 23: BGP peerings

Check that the peerings are established.

Figure 24: Verification of the BGP peerings

Are the pods reachable from the F5 XC CE?

Figure 25: PODs reachability test

They are!

Create Origin Pool with K8S Service

As we did for the EKS configuration, create an origin pool that references your Kubernetes service.

Create an HTTPS Load-Balancer and test the service

Just create a regular F5 XC HTTPS Load-Balancer and use the origin pool created above.

Figure 26: Traffic load-balanced across the two PODs

Scaling up?

Let's add another pod to the deployment to see how F5 XC handles the load balancing afterwards.

Figure 27: Scaling up the Microbot PODs

And it's working! Load is spread automatically as soon as new pod instances are available for the given service.

Figure 28: Traffic load-balanced across the three PODs
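The scale-up in Figure 27 is just a standard Deployment scale. Assuming the Deployment is named microbot (an assumption; adjust to your deployment name), something like:

kubectl scale deployment microbot --replicas=3
kubectl get pods -o wide    # the new pod IP is discovered through the service and added to the origin pool automatically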
Appendix - K3S and Cilium deployment example

Step 1: Install K3S without Default CNI

On the master node:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" \
  INSTALL_K3S_EXEC="--flannel-backend=none \
  --disable-network-policy \
  --disable=traefik \
  --disable servicelb \
  --cluster-cidr=10.160.0.0/16 \
  --service-cidr=10.161.0.0/16" sh -

# Export kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Get token for worker nodes
sudo cat /var/lib/rancher/k3s/server/node-token

On worker nodes:

IP_MASTER=10.154.1.10
K3S_TOKEN=<token-from-master>
curl -sfL https://get.k3s.io | K3S_URL=https://${IP_MASTER}:6443 K3S_TOKEN=${K3S_TOKEN} sh -

Step 2: Install and Configure Cilium

On the K3S master node, please perform the following.

Install Helm and the Cilium CLI:

# Install Helm
sudo snap install helm --classic

# Download Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin

Install Cilium with BGP support:

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.16.5 \
  --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.160.0.0/16" \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=10.154.1.10 \
  --set k8sServicePort=6443 \
  --set bgpControlPlane.enabled=true \
  --namespace kube-system \
  --set bpf.hostLegacyRouting=false \
  --set bpf.masquerade=true

# Monitor installation
cilium status --wait

Step 3: Configure BGP Peering

Label nodes for BGP:

kubectl label nodes k3s-1 bgp=true
kubectl label nodes k3s-2 bgp=true

Create the BGP configuration:

# BGP Cluster Config
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      bgp: "true"
  bgpInstances:
  - name: "k3s-instance"
    localASN: 65001
    peers:
    - name: "f5xc-ce"
      peerASN: 65002
      peerAddress: 10.154.1.100
      peerConfigRef:
        name: "cilium-peer"
---
# BGP Peer Config
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
  - afi: ipv4
    safi: unicast
    advertisements:
      matchLabels:
        advertise: "bgp"
---
# BGP Advertisement
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
  - advertisementType: "PodCIDR"
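Once the objects are applied, the sessions and the BGP resources can be checked from the Cilium side. This is only a sketch; "cilium bgp peers" is available in recent Cilium CLI versions, so adjust to your version:

cilium bgp peers    # expect an established session from each labelled node towards 10.154.1.100
kubectl get ciliumbgpclusterconfigs,ciliumbgppeerconfigs,ciliumbgpadvertisements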
F5 CNF/BNK issue with DNS Express tmm scaling and zone notifications

I saw an interesting issue with DNS Express on BIG-IP Next for Kubernetes while playing in a test environment. The DNS zone mirroring is done by the zxfrd pod, and you need to create an "F5BigDnsApp" listener, as shown in https://clouddocs.f5.com/cnfs/robin/latest/cnf-dnsexpress.html#create-a-dns-zone-to-answer-dns-queries, for the optional NOTIFY that feeds the zone to the TMM and then to the zxfrd pod.

The issue happens when you have 2 or more TMM pods in the same namespace: the "F5BigDnsApp" acts like a virtual server/listener, and on the internal VLANs there is an ARP conflict, because the two TMMs on two different Kubernetes/OpenShift nodes advertise the same IP address on layer 2. You can see this with "kubectl logs" ("oc logs" for OpenShift) on the TMM pods, which mention the duplicate ARP detected. Interestingly, the same does not happen for the normal listener on the external VLAN (the one that accepts and answers the client DNS queries), as I think ARP is disabled by default for the external listener, which can live on 2 or more TMMs because ECMP BGP is used to redistribute the traffic to the TMMs by design.

I see 4 possible solutions. One is to be able to control ARP for the "F5BigDnsApp" CRD on internal or external VLANs (BGP ECMP would then be used on the server side as well). The second is to be able to pin the "F5BigDnsApp" to just 1 TMM even if there are more. A third would be to configure a listener IP address that is not part of the internal IP address range, but as I see with "kubectl logs" on the ingress controller (f5ing-tmm-pod-manager), the config is then not pushed to the TMM; "configview" from the debug sidecar container on the TMM pods shows no listener at all. The manager logs suggest this is because the listener IP address is not part of the self-IP range under the internal VLAN. This may be a system limitation, and perhaps no one thought about this use case, since on classic BIG-IP it is supported to have a VIP outside the self-IP range precisely so that it is not advertised with ARP. The last solution, which can work at the moment, is to have multiple TMMs in different namespaces on different Kubernetes nodes, with affinity rules that place each TMM on a different node even across namespaces by matching a configured label (see the example below). Maybe that is the current intended design, one zxfrd pod with one TMM pod per namespace, but then auto-scaling may not work, as auto-scale should create a new TMM pod in the same namespace if needed.

Example:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: tmm
      # Match Pods in any namespaces that have this label
      namespaceSelector: {}   # empty selector = all namespaces
      topologyKey: "kubernetes.io/hostname"

It should also be considered whether the zxfrd pod can push the DNS zone into the RAM of more than one TMM pod; maybe it cannot, and only one-to-one is currently supported. Maybe it was never tested what happens when you have a Security Context IP address on the internal network and multiple TMM pods. Interesting stuff that I just wanted to share, as this was just testing things out.

F5 Kubernetes CNF/BNK GSLB functionality ?
Hello everyone, is there F5 CNF/BNK GSLB functionality? I see the containers gslb-engine (probably the main GTM/DNS module) and gslb-probe-agent (probably the big3d in a container/pod), but no CR/CRD definitions for them. Can this data be shared between F5 TMMs in different clusters (something like DNS sync groups), or used for probing normal F5 BIG-IP devices (not in Kubernetes)?

https://clouddocs.f5.com/cnfs/robin/latest/cnf-software-install.html
https://clouddocs.f5.com/cnfs/robin/latest/intro.html

Announcing F5 NGINX Gateway Fabric 2.0.0 with a New Distributed Architecture
Today, F5 NGINX Gateway Fabric is reaching an important milestone. The release of NGINX Gateway Fabric 2.0 marks our transition into a distributed architecture that is highly scalable, secure, and flexible. This architecture also easily enables more advanced capabilities and prepares us for integration for observability and fleet management with F5 NGINX One. Here are the big highlights for this major release:

- Control and Data Plane Separation
- Multiple Gateway Support
- HTTP Request Mirror
- Listener Isolation
- As always, bug fixes

Data and Control Plane Separation

Before 2.0, NGINX Gateway Fabric contained both the "control plane," the container responsible for reading and applying configuration, and the "data plane," the NGINX container where all traffic flows through, within the same pod. This meant if you scaled the NGINX Gateway Fabric pod, you would be forced to scale the data and control plane together.

Now with our distributed architecture, the data plane deploys in a separate pod from the control plane. This allows us to enable more flexible use cases such as multiple gateways per control plane, a highly available control plane, and directly scaling NGINX replicas per Gateway. This makes NGINX Gateway Fabric much more resource efficient at scale.

This change also improves NGINX Gateway Fabric's security posture by limiting how much is accessible if a single pod is compromised. While NGINX Gateway Fabric and NGINX have always been secure by default, this architecture enables a two-tier defense against potential attacks: any security intrusion on the control plane has no way to directly access traffic, nor can the data plane directly access control plane interfaces.

Multiple Gateways

NGINX Gateway Fabric has historically been limited to a single Gateway object. We first chose this architecture for the short term because Routes are a good way to separate routing and access control for developers and cluster operators. As our product matures, we know that more advanced use cases require the isolation of infrastructure for separate teams, customers, or SLAs.

Our new architecture enables you to do just that: in NGINX Gateway Fabric 2.0, every time you define a Gateway object, the control plane will provision an NGINX deployment, which can then be independently scaled by adding more replicas if needed. With this pattern, it's easy to create a second, third, or many more Gateways. You can choose to give each customer or team in your cluster their own Gateway so they can each own their own infrastructure. Or you may want to separate infrastructure to apply separate policies based on hostname or product group. Each Gateway can also be scaled independently for varying levels of traffic, all managed by a single control plane.

HTTP Request Mirrors

Request mirrors are often useful when you want to mirror traffic to analyze it for security issues or to test a new version of an application with production traffic without impacting current users. All responses from a mirrored location are ignored, so request mirrors serve as another useful tool for testing and analysis. These mirrors are added using a filter on the route rule. They will only affect the traffic you choose, and if you need more, you can add as many as you need.
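A mirror is expressed as a standard Gateway API filter on the route rule. Below is a minimal sketch (the Gateway and Service names and the port are placeholders) that mirrors requests for app.example.com to a second Service while the primary backend continues to serve the actual responses:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route-mirrored
spec:
  parentRefs:
  - name: my-gateway
  hostnames:
  - app.example.com
  rules:
  - filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          name: app-svc-canary   # responses from the mirror are ignored
          port: 8080
    backendRefs:
    - name: app-svc
      port: 8080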
Listener Isolation

We decided to also include the concept of listener isolation in this release to make advanced configurations more intuitive to work with and guard against accidental misconfiguration. Listener isolation means that any request should match at most one Listener within a Gateway. The Gateway API gives this example:

If Listeners are defined for "foo.example.com" and "*.example.com", a request to "foo.example.com" SHOULD only be routed using routes attached to the "foo.example.com" Listener and NOT the "*.example.com" Listener.

The alternative is that a request may match multiple Listeners unintentionally, which can become a problem when different policies are applied to the other Listeners, or the request is routed to the wrong location. Now, NGINX Gateway Fabric will ensure that a Route only matches the most specific Listener on the Gateway it is attached to.

What's Next

Our next release will be primarily focused around delivering more extended features from the Gateway API in an effort to support all extended features at the extended support level. If you are unfamiliar, the Gateway API has three separate support levels for every feature in the specification:

- Core: Portable features that all implementations should support.
- Extended: Portable features that are not universally supported across implementations. Implementations that support the feature should have the same behavior and semantics. Some extended features may eventually move to core.
- Implementation-specific: These features are not portable and are vendor-specific, and they are far less defined in API and schema.

You can see the full description of support levels here. So far, NGINX Gateway Fabric has supported all core features since 1.0 and has slowly been adding extended features over the past few releases. For 2.1, we will have a greater focus on these features to enable more advanced use cases. We plan to have all extended features available in NGINX Gateway Fabric by 2.2.

For F5 NGINX Plus users, 2.1 will bring connection to the F5 NGINX One Console with basic fleet management capabilities. We plan on expanding these capabilities to include Observability for all NGINX deployments in your organization in the near future. 2.1 will also include support for F5 NGINX App Protect Web Application Firewall to protect your applications from any incoming malicious traffic. Specifically, NGINX Gateway Fabric will be implementing support for v5, with a new configuration experience coming later this year.

Resources

For the complete changelog for NGINX Gateway Fabric 2.0.0, see the Release Notes. To try NGINX Gateway Fabric for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.

If you would like to get involved, see what is coming next, or see the source code for NGINX Gateway Fabric, check out our repository on GitHub! We have weekly community meetings on Tuesdays at 9:30AM Pacific/12:30PM Eastern/5:30PM GMT. Meeting links, updates, agenda, and notes are on the NGINX Gateway Fabric Meeting Calendar. Links are also always available from our GitHub readme.

Can I deploy the application study tool on Kubernetes? (AST)
Hi everyone. I followed the AST guide and confirmed that AST works on Docker, but I want to deploy it on Kubernetes. Is there a how-to or guide for that? If you have any suggestions on availability, or any alternative links or instructions I can refer to, I would really appreciate it. Thank you for your help.

BIG-IP Next does not work on KubeVirt
Hi all, I use a KubeVirt based hypervisor called Harvester (https://harvesterhci.io) and tried to start a BIG-IP Next instance with the BIG-IP-Next-20.3.0-2.716.2+0.0.50.qcow2.tar.gz image. The VM does start, but I cannot see any service on TCP port 5443 running after doing setup. What I have seen when I checked the logs of the f5-platform-manager deployment is the following:

2025-02-06T12:52:25.985842479Z stdout F "ts"="2025-02-06 12:52:25.985"|"l"="info"|"m"="Found an unknown Bios vendor"|"id"="19093-000230"|"lt"="A"|"vendor"="KubeVirt"|"pod"="f5-platform-manager-6f78695744-p48tr"|"ct"="f5-platform-manager"|"v"="1.0"|"src"="surveyor/z100_detector.go:60"
2025-02-06T12:52:25.98585795Z stdout F "ts"="2025-02-06 12:52:25.985"|"l"="error"|"m"="Failed to run surveyor probe"|"id"="19093-000259"|"lt"="A"|"error"="Unsupported virtual platform: '{ChassisAssetTag: MachineId: Mfr:KubeVirt Product:None Serial: Uuid:aeada22c-3bf3-5220-a678-91f04ac6db0d Version:pc-q35-7.1}'"|"pod"="f5-platform-manager-6f78695744-p48tr"|"ct"="f5-platform-manager"|"v"="1.0"|"src"="surveyor/surveyor.go:344"

So what can I do to get this BIG-IP Next instance running on the VM?

Thanks,
Peter
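For reference, the vendor string the platform manager complains about comes from the SMBIOS data the hypervisor presents to the guest. A quick way to confirm what the guest reports (a sketch; run inside any Linux VM on the same Harvester/KubeVirt host):

sudo dmidecode -s system-manufacturer   # prints "KubeVirt" on Harvester/KubeVirt guests, matching the Mfr field in the log
sudo dmidecode -s system-product-name   # matches the Product field ("None") in the log above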