application delivery
BIG-IP Next for Kubernetes CNF 2.2 what's new
Introduction
BIG-IP Next CNF v2.2.0 offers new enhancements to BIG-IP Next for Kubernetes CNFs, with a focus on analytics capabilities, traffic distribution, subscriber management, and operational improvements that address real-world challenges in high-scale deployments.

High-Speed Logging for Traffic Analysis
The Reporting feature introduces high-speed logging (HSL) capabilities that capture session- and flow-level metrics in CSV format. Key data points include subscriber identifiers, traffic volumes, transaction counts, video resolution metrics, and latency measurements, exported via Syslog (RFC5424, RFC3164, or legacy-BIG-IP formats) over TCP or UDP. Fluent-bit handles TMM container log processing, forwarding to Fluentd for external analytics servers. Custom Resources simplify configuration of log publishers, reporting intervals, and enforcement policies, making it straightforward to integrate into existing Kubernetes workflows.

DNS Cache Inspection and Management
New utilities provide detailed visibility into DNS cache operations. The bdt_cli tool supports listing, counting, and selectively deleting cache records using filters for domain names, TTL ranges, response codes, and cache types (RRSet, message, or nameserver). Complementing this, dns-cache-stats delivers performance metrics including hit/miss ratios, query volumes, response time distributions across intervals, and nameserver behavior patterns. These tools enable systematic cache analysis and maintenance directly from debug sidecars.

Stateless and Bidirectional DAG Traffic Distribution
Stateless DAG implements pod-based hashing to distribute traffic evenly across TMM pods without maintaining flow state. This approach embeds directly within the CNE installation, eliminating separate DAG infrastructure. Bidirectional DAG extends this with symmetric routing for client-to-server and return flows, using consistent redirect VLANs and hash tables. Deployments must align TMM pod counts with self-IP configurations on pod_hash-enabled VLANs to ensure balanced distribution.

Dynamic GeoDB Updates for Edge Firewall Policies
Edge Firewall Geo Location policies now support dynamic GeoDB updates, replacing static country/region lists embedded in container images. The Controller and PCCD components automatically incorporate new locations and handle deprecated entries with appropriate logging. Firewall Policy CRs can reference newly available geos immediately, enabling responsive policy adjustments without container restarts or rebuilds. This keeps policies current in environments that require frequent threat intelligence updates.

Subscriber Creation and CGNAT Logging
RADIUS-triggered subscriber creation integrates with distributed session storage (DSSM) for real-time synchronization across TMM pods. Subscriber records capture identifiers such as IMSI, MSISDN, or NAI, enabling automated session lifecycle management. CGNAT logging enhancements include the Subscriber ID in translation events, providing clear IP-to-subscriber mapping. This facilitates correlation of network activity with individual users, supporting troubleshooting, auditing, and regulatory reporting requirements.

Kubernetes Secrets Integration for Sensitive Configuration
Custom Resources now reference sensitive data through Kubernetes' native Secrets using secretRef fields (name, namespace, key). The cne-controller fetches secrets securely via mTLS, monitors for updates, and propagates changes to consuming components.
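For a sense of what this looks like in a manifest, here is a minimal, hypothetical excerpt; the CR kind and the surrounding field names are placeholders, and only the secretRef shape (name, namespace, key) reflects the fields described above.

# Illustrative only: "F5BigExampleSetting" is a placeholder kind, not an actual CNF CRD.
apiVersion: "k8s.f5net.com/v1"
kind: F5BigExampleSetting
metadata:
  name: "example-setting"
  namespace: "cnf-fw-01"
spec:
  certificate:
    secretRef:
      name: "app-tls-cert"       # name of the Kubernetes Secret
      namespace: "cnf-fw-01"     # namespace holding the Secret
      key: "tls.crt"             # key inside the Secret's data map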
Certificate rotation through cert-manager is supported without reapplying the CR. RBAC controls ensure appropriate access while keeping plaintext sensitive data out of YAML manifests.

Dynamic Log Management and Storage Optimization
REST API endpoints and ConfigMap watching enable runtime log level adjustments per pod without restarts. Changes propagate through pod-specific ConfigMaps monitored by the F5 logging library. An optional Folder Cleaner CronJob automatically removes orphaned log directories, preventing storage exhaustion in long-running deployments with heavy Fluentd usage.

Key Enhancements Overview
Several refinements improve day-to-day operations:
CNE Controller RBAC: Configurable CRD monitoring via ConfigMap eliminates cluster-wide list permissions; a manual controller restart is required when the list changes.
CGNAT/DNAT HA: F5Ingress automatically distributes VLAN configurations to standby TMM pods (excluding self-IPs) for seamless failover.
Memory Optimization: 1GB huge page support via the tmm.hugepages.preferredhugepagesize parameter.
Diagnostics: QKView requests can be canceled by ID, generating partial diagnostics from the data collected so far.
Metrics Control: Per-table aggregation modes (Aggregated, Semi-Aggregated, Diagnostic) with configurable export intervals via the f5-observer-operator-config ConfigMap.

Related content
BIG-IP Next for Kubernetes CNF - latest release
BIG-IP Next Cloud-Native Network Functions (CNFs)
BIG-IP Next for Kubernetes CNFs deployment walkthrough | DevCentral
BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
F5 BIG-IP Next CNF solutions suite of Kubernetes native 5G Network Functions

AI Inference for VLLM models with F5 BIG-IP & Red Hat OpenShift
This article shows how to perform Intelligent Load Balancing for AI workloads using the new features of BIG-IP v21 and Red Hat OpenShift. Load balancing decisions are based on business-logic rules, defined without iRule programming, and on state metrics of the vLLM inference servers gathered from OpenShift's Prometheus.

Multi‑Cluster Kubernetes App Delivery Made Simple with F5 BIG‑IP CIS & Nutanix Kubernetes Platform
Organizations are increasingly deploying applications across multiple Kubernetes clusters to achieve greater resilience, scalability, and operational flexibility. However, as environments expand, so does the complexity. Managing traffic, ensuring consistent security policies, and delivering applications seamlessly across multiple Kubernetes clusters can quickly become operationally overwhelming. F5 and Nutanix jointly address these challenges by combining the application delivery and security capabilities of F5 BIG-IP with the simplicity and operational consistency of the Nutanix Kubernetes Platform (NKP). See it in action by watching the demo video.

F5 BIG-IP Container Ingress Services (CIS) Overview
F5 BIG‑IP Container Ingress Services (CIS) is a Kubernetes‑native ingress and automation controller that connects F5 BIG‑IP directly to Kubernetes. F5 BIG-IP CIS watches the Kubernetes API in real time and translates native Kubernetes resources—including Ingress, Routes, VirtualServer, TransportServer, and AS3 declarations—into F5 BIG‑IP configurations (a minimal VirtualServer sketch appears at the end of this article). This transforms F5 BIG‑IP from an external appliance into a declarative, automated extension of the Kubernetes environment, enabling cloud‑native workflows and eliminating manual, error‑prone configuration. This tight integration ensures that application delivery, security, and traffic management remain consistent and automatically adapt as Kubernetes environments change.

Multi-Cluster Application Delivery with CIS
Multi-cluster architectures are rapidly becoming the enterprise standard. But delivering applications across multiple Kubernetes clusters introduces challenges, including:
Maintaining consistent security policies
Automatically routing traffic to the most appropriate cluster as workloads scale or shift
Avoiding configuration drift and fragmented visibility
Reducing operational friction caused by manual updates
Without the right tooling, these challenges can lead to operational sprawl and deployment delays. F5 BIG-IP CIS addresses these challenges through its built‑in multi‑cluster capabilities, enabling a single BIG‑IP Virtual Server to front applications that span multiple Kubernetes clusters. This approach:
Consolidates application access behind one unified entry point
Automatically updates traffic routing as clusters scale or workloads migrate
Enforces consistent policies across environments
Significantly reduces operational overhead by eliminating per‑cluster configuration
F5 BIG-IP CIS supports both standalone mode and high‑availability (HA) mode for multi-cluster environments. In HA mode, the primary F5 BIG-IP CIS instance is responsible for managing F5 BIG‑IP configuration, while a secondary instance continuously monitors its health. If the primary instance becomes unavailable, the secondary automatically takes over, ensuring uninterrupted management and application delivery continuity.

F5 BIG-IP CIS + Nutanix Kubernetes Platform (NKP): Better Together
When F5 BIG‑IP CIS is combined with the Nutanix Kubernetes Platform (NKP), organizations gain a unified and automated approach to delivering, securing, and scaling applications across multiple Kubernetes clusters—a cohesive multi‑cluster application services solution. Key benefits include:
Unified North–South Control Plane: F5 BIG‑IP acts as the intelligent front door for all Kubernetes clusters, centralizing traffic management and visibility.
Consistent Security Policies: WAF, DDoS protection, and traffic policies can be applied uniformly across Kubernetes clusters to maintain a consistent security posture.
Automated Orchestration and Reduced Operational Overhead: F5 BIG-IP CIS's event‑driven automation aligns with NKP's streamlined cluster lifecycle management, reducing manual configuration and operational complexity.
Direct Pod Routing in Cluster Mode: Static route support in cluster mode enables CIS to automatically configure static routes on BIG‑IP using the node subnets assigned to Kubernetes cluster nodes. This allows BIG‑IP to route directly to Kubernetes pod subnets without requiring any tunnel configuration, greatly simplifying the networking architecture.
Flexible Deployment Topologies (Standalone or HA): CIS supports both standalone and high‑availability deployment in multi-cluster environments, enabling resilient application exposure across Kubernetes clusters.

Conclusion
As Kubernetes environments continue to expand, the need for consistent, secure, and efficient multi‑cluster application delivery becomes increasingly critical. Together, F5 BIG‑IP CIS and Nutanix Kubernetes Platform (NKP) provide a unified, automated, and future‑ready solution that removes much of the operational complexity traditionally associated with distributed architectures. This joint solution delivers consistent security enforcement, intelligent traffic management, and streamlined operations across any number of Kubernetes clusters. Whether an organization is focused on modernization, expanding into multi‑cluster architectures, or working to streamline and secure Kubernetes traffic flows, F5 and Nutanix offer a forward-looking path. Multi‑cluster Kubernetes doesn't have to be complex—and with F5 BIG‑IP CIS and Nutanix Kubernetes Platform (NKP), it's never been simpler.

Related URLs
F5 BIG-IP Container Ingress Services (CIS) for Multi-Cluster: https://clouddocs.f5.com/containers/latest/userguide/multicluster/
Nutanix Kubernetes Platform (NKP): https://www.nutanix.com/products/kubernetes-management-platform
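To make the CIS resource model referenced in the overview above more concrete, here is a minimal VirtualServer sketch. It is illustrative only: the hostname, address, service name, and port are placeholders, and the exact fields for multi-cluster service references should be taken from the CIS multi-cluster documentation linked above rather than from this snippet.

apiVersion: "cis.f5.com/v1"
kind: VirtualServer
metadata:
  name: sample-vs
  namespace: default
spec:
  host: app.example.com                 # placeholder hostname
  virtualServerAddress: "10.10.10.10"   # placeholder BIG-IP virtual server address
  pools:
    - path: /
      service: app-svc                  # placeholder Kubernetes Service
      servicePort: 8080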
Identity-centric F5 ADSP Integration Walkthrough
In this article we explore F5 ADSP through the identity lens, using BIG-IP APM and BIG-IP SSLO and adding BIG-IP AWAF to the service chain. The F5 ADSP addresses four core areas: deploying at scale, securing against evolving threats, delivering applications reliably, and operating your day-to-day work efficiently. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: Deployment, Security, Delivery and XOps.

BIG-IP Next for Kubernetes CNFs - DNS walkthrough
Introduction
F5 enables advanced DNS implementations across different deployments, whether on hardware, Virtual Functions, or F5 Distributed Cloud, and also in Kubernetes environments through the F5BigDnsApp Custom Resource Definition (CRD), which allows declarative configuration of DNS listeners, pools, monitors, and profiles directly in-cluster. Deploying DNS services such as Express, Cache, and DoH within the Kubernetes cluster using BIG-IP Next for Kubernetes CNF DNS saves external traffic by resolving queries locally (reducing egress to upstream resolvers by up to 80% with caching) and enhances security through in-cluster isolation, mTLS enforcement, and protocol encryption such as DoH, preventing plaintext DNS exposure across cluster boundaries. This article provides a walkthrough for DNS Express, DNS Cache, and DNS-over-HTTPS (DoH) on top of Red Hat OpenShift.

Prerequisites
Deploy BIG-IP Next for Kubernetes CNF following the steps in F5's Cloud-Native Network Functions (CNFs). Then verify the nodes and CNF components are installed:

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get nodes
NAME                      STATUS   ROLES                         AGE      VERSION
master-1.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
master-2.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
master-3.ocp.f5-udf.com   Ready    control-plane,master,worker   2y221d   v1.29.8+f10c92d
worker-1.ocp.f5-udf.com   Ready    worker                        2y221d   v1.29.8+f10c92d
worker-2.ocp.f5-udf.com   Ready    worker                        2y221d   v1.29.8+f10c92d

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get pods -n cne-core
NAME                                          READY   STATUS    RESTARTS       AGE
f5-cert-manager-656b6db84f-dmv78              2/2     Running   10 (15h ago)   19d
f5-cert-manager-cainjector-5cd9454d6c-sc8q2   1/1     Running   21 (15h ago)   19d
f5-cert-manager-webhook-6d87b5797b-954v6      1/1     Running   4              19d
f5-dssm-db-0                                  3/3     Running   13 (18h ago)   15d
f5-dssm-db-1                                  3/3     Running   0              18h
f5-dssm-db-2                                  3/3     Running   4 (18h ago)    42h
f5-dssm-sentinel-0                            3/3     Running   0              14h
f5-dssm-sentinel-1                            3/3     Running   10 (18h ago)   5d8h
f5-dssm-sentinel-2                            3/3     Running   0              18h
f5-rabbit-64c984d4c6-xn2z4                    2/2     Running   8              19d
f5-spk-cwc-77d487f955-j5pp4                   2/2     Running   9              19d

[cloud-user@ocp-provisioner f5-cne-2.1.0]$ kubectl get pods -n cnf-fw-01
NAME                                   READY   STATUS    RESTARTS      AGE
f5-afm-76c7d76fff-5gdhx                2/2     Running   2             42h
f5-downloader-657b7fc749-vxm8l         2/2     Running   0             26h
f5-dwbld-d858c485b-6xfq8               2/2     Running   2             26h
f5-ipsd-79f97fdb9c-zfqxk               2/2     Running   2             26h
f5-tmm-6f799f8f49-lfhnd                5/5     Running   0             18h
f5-zxfrd-d9db549c4-6r4wz               2/2     Running   2 (18h ago)   26h
f5ingress-f5ingress-7bcc94b9c8-zhldm   5/5     Running   6             26h
otel-collector-75cd944bcc-xnwth        1/1     Running   1             42h

DNS Express Walkthrough
DNS Express configures BIG-IP to authoritatively answer queries for a zone by pulling it via AXFR/IXFR from an upstream server, with optional TSIG authentication, keeping zone data in-cluster for low-latency authoritative resolution.

Step 1: Create an F5BigDnsZone CR for zone transfer (e.g., example.com from upstream 10.1.1.12).
# cat 10-cr-dnsxzone.yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigDnsZone
metadata:
  name: example.com
spec:
  dnsxAllowNotifyFrom: ["10.1.1.12"]
  dnsxServer:
    address: "10.1.1.12"
    port: 53
  dnsxEnabled: true
  dnsxNotifyAction: consume
  dnsxVerifyNotifyTsig: false

# kubectl apply -f 10-cr-dnsxzone.yaml -n cnf-fw-01

Step 2: Deploy the F5BigDnsApp CR with DNS Express enabled.

# cat 11-cr-dnsx-app-udp.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "dnsx-app-listener"
  namespace: "cnf-fw-01"
spec:
  destination:
    address: "10.1.30.100"
    port: 53
  ipProtocol: "udp"
  snat:
    type: "automap"
  dns:
    dnsExpressEnabled: true
  logProfile: "cnf-log-profile"

# kubectl apply -f 11-cr-dnsx-app-udp.yaml -n cnf-fw-01

Step 3: Validate: query from our client pod and check TMM statistics.

dig @10.1.30.100 www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.30.100 www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43865
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.              IN      A

;; ANSWER SECTION:
www.example.com.        604800  IN      A       192.168.1.11

;; AUTHORITY SECTION:
example.com.            604800  IN      NS      ns.example.com.

;; ADDITIONAL SECTION:
ns.example.com.         604800  IN      A       192.168.1.10

;; Query time: 0 msec
;; SERVER: 10.1.30.100#53(10.1.30.100) (UDP)
;; WHEN: Thu Jan 22 11:10:24 UTC 2026
;; MSG SIZE  rcvd: 93

kubectl exec -it deploy/f5-tmm -c debug -n cnf-fw-01 -- bash
/tmctl -id blade tmmdns_zone_stat name=example.com
name        dnsx_queries dnsx_responses dnsx_xfr_msgs dnsx_notifies_recv
----------- ------------ -------------- ------------- ------------------
example.com 2            2              0             0

DNS Cache Walkthrough
DNS Cache reduces latency by storing responses non-authoritatively, referenced via a separate cache CR in the DNS profile, cutting repeated upstream queries and external bandwidth use.

Step 1: Create a DNS Cache CR (F5BigDnsCache).

# cat 13-cr-dnscache.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsCache
metadata:
  name: "cnf-dnscache"
spec:
  cacheType: resolver
  resolver:
    useIpv4: true
    useTcp: false
    useIpv6: false
    forwardZones:
      - forwardZone: "example.com"
        nameServers:
          - ipAddress: 10.1.1.12
            port: 53
      - forwardZone: "."
        nameServers:
          - ipAddress: 8.8.8.8
            port: 53

# kubectl apply -f 13-cr-dnscache.yaml -n cnf-fw-01

Step 2: Deploy the F5BigDnsApp CR with DNS Cache enabled.

# cat 11-cr-dnsx-app-udp.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "dnsx-app-listener"
  namespace: "cnf-fw-01"
spec:
  destination:
    address: "10.1.30.100"
    port: 53
  ipProtocol: "udp"
  snat:
    type: "automap"
  dns:
    dnsCache: "cnf-dnscache"
  logProfile: "cnf-log-profile"

# kubectl apply -f 11-cr-dnsx-app-udp.yaml -n cnf-fw-01

Step 3: Validate: query from our client pod.

dig @10.1.30.100 www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.30.100 www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18302
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.              IN      A

;; ANSWER SECTION:
www.example.com.        19076   IN      A       192.168.1.11

;; Query time: 4 msec
;; SERVER: 10.1.30.100#53(10.1.30.100) (UDP)
;; WHEN: Thu Jan 22 11:04:45 UTC 2026
;; MSG SIZE  rcvd: 60

DoH Walkthrough
DoH exposes DNS over HTTPS (port 443) for encrypted queries, using BIG-IP's protocol inspection and UDP profiles, securing in-cluster DNS from eavesdropping and MITM attacks.

Step 1: Ensure the TLS client-SSL settings and HTTP profiles exist.

# cat 14-tls-clientsslsettings.yaml
apiVersion: k8s.f5net.com/v1
kind: F5BigClientsslSetting
metadata:
  name: "cnf-clientssl-profile"
  namespace: "cnf-fw-01"
spec:
  enableTls13: true
  enableRenegotiation: false
  renegotiationMode: "require"

# cat 15-http-profiles.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigHttp2Setting
metadata:
  name: http2-profile
spec:
  activationModes: "alpn"
  concurrentStreamsPerConnection: 10
  connectionIdleTimeout: 300
  frameSize: 2048
  insertHeader: false
  insertHeaderName: "X-HTTP2"
  receiveWindow: 32
  writeSize: 16384
  headerTableSize: 4096
  enforceTlsRequirements: true
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigHttpSetting
metadata:
  name: http-profile
spec:
  oneConnect: false
  responseChunking: "sustain"
  lwsMaxColumn: 80

# kubectl apply -f 14-tls-clientsslsettings.yaml -n cnf-fw-01
# kubectl apply -f 15-http-profiles.yaml -n cnf-fw-01

Step 2: Create the DNSApp for the DoH service.

# cat 16-DNSApp-doh.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigDnsApp
metadata:
  name: "cnf-dohapp"
  namespace: "cnf-fw-01"
spec:
  ipProtocol: "udp"
  dohProtocol: "udp"
  destination:
    address: "10.1.20.100"
    port: 443
  snat:
    type: "automap"
  dns:
    dnsExpressEnabled: false
    dnsCache: "cnf-dnscache"
  clientSslSettings: "clientssl-profile"
  pool:
    members:
      - address: "10.1.10.50"
    monitors:
      dns:
        enabled: true
        queryName: "www.example.com"
        queryType: "a"
        recv: "192.168.1.11"

# kubectl apply -f 16-DNSApp-doh.yaml -n cnf-fw-01

Step 3: Testing from our client pod.

ubuntu@client:~$ dig @10.1.20.100 -p 443 +https +notls-ca www.google.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.20.100 -p 443 +https +notls-ca www.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4935
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.google.com.               IN      A

;; ANSWER SECTION:
www.google.com.         69      IN      A       142.251.188.103
www.google.com.         69      IN      A       142.251.188.147
www.google.com.         69      IN      A       142.251.188.106
www.google.com.         69      IN      A       142.251.188.105
www.google.com.         69      IN      A       142.251.188.99
www.google.com.         69      IN      A       142.251.188.104

;; Query time: 8 msec
;; SERVER: 10.1.20.100#443(10.1.20.100) (HTTPS)
;; WHEN: Thu Jan 22 11:27:05 UTC 2026
;; MSG SIZE  rcvd: 139

ubuntu@client:~$ dig @10.1.20.100 -p 443 +https +notls-ca www.example.com

; <<>> DiG 9.18.30-0ubuntu0.20.04.2-Ubuntu <<>> @10.1.20.100 -p 443 +https +notls-ca www.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20401
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.example.com.              IN      A

;; ANSWER SECTION:
www.example.com.        17723   IN      A       192.168.1.11

;; Query time: 4 msec
;; SERVER: 10.1.20.100#443(10.1.20.100) (HTTPS)
;; WHEN: Thu Jan 22 11:27:18 UTC 2026
;; MSG SIZE  rcvd: 60

Conclusion
BIG-IP Next DNS CRs transform Kubernetes into a production-grade DNS platform, delivering authoritative resolution, caching efficiency, and encrypted DoH, all while optimizing external traffic costs and hardening security boundaries for cloud-native deployments.

Related Content
BIG-IP Next for Kubernetes CNF guide
BIG-IP Next Cloud-Native Network Functions (CNFs)
BIG-IP Next for Kubernetes CNF deployment walkthrough
BIG-IP Next Edge Firewall CNF for Edge workloads | DevCentral
Modern Applications-Demystifying Ingress solutions flavors | DevCentral

Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud
Introduction
F5 Application Delivery and Security Platform (ADSP) is the premier solution for converging high-performance delivery and security for every app and API across any environment. It provides a unified platform offering granular visibility, streamlined operations, and AI-driven insights — deployable anywhere and in any form factor. The F5 ADSP Partner Ecosystem brings together a broad range of partners to deliver customer value across the entire lifecycle. This includes cohesive solutions, cloud synergies, and access to expert services that help customers maximize outcomes while simplifying operations. In this article, we'll explore the upcoming integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments.

Integration Overview
At the heart of this integration is the capability to deploy an F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture gives us complete control over application delivery and security within the VPC. We can selectively advertise HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity. Additionally, the integration simplifies and secures network segmentation across hybrid and multi-cloud environments. By leveraging F5 Distributed Cloud to segment and extend the network to remote locations, combined with Nutanix Flow Security for microsegmentation within VPCs, we deliver comprehensive end-to-end network security. This approach enforces a consistent security posture while simplifying segmentation across environments. In this article, we'll focus on application delivery and security, and explore segmentation in the next article.

Demo Walkthrough
Let's walk through a demo to see how this integration works. The goal of this demo is to enable secure application delivery for nutanix5.f5-demo.com within the Nutanix Flow Virtual Private Cloud (VPC) named dev3.

Our demo environment, dev3, is a Nutanix Flow VPC with an F5 Distributed Cloud Customer Edge (CE) named jy-nutanix-overlay-dev3 deployed inside. (Note: the CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in Nutanix Prism Central.)

eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway.

On the F5 Distributed Cloud Console, we created an HTTP Load Balancer named jy-nutanix-internal-5 serving the FQDN nutanix5.f5-demo.com. This load balancer distributes workloads across hybrid multicloud environments and is protected by a WAF policy named nutanix-demo.

We advertised this HTTP Load Balancer with a Virtual IP (VIP) 10.10.111.175 to the CE jy-nutanix-overlay-dev3 deployed inside Nutanix Flow VPC dev3. The CE then advertised the VIP route to its peer via BGP – the Nutanix Flow BGP Gateway. The Nutanix Flow BGP Gateway received the VIP route and installed it in the VPC routing table. Finally, the VMs in dev3 can securely access nutanix5.f5-demo.com while continuing to use the VPC logical router as their default gateway.

F5 Distributed Cloud Console observability provides deep visibility into applications and security events. For example, it offers comprehensive dashboards and metrics to monitor the performance and health of applications served through HTTP load balancers.
These include detailed insights into traffic patterns, latency, HTTP error rates, and the status of backend services.

Furthermore, the built-in AI assistant provides real-time visibility and actionable guidance on security incidents, improving situational awareness and supporting informed decision-making. This capability enables rapid threat detection and response, helping maintain a strong and resilient security posture.

Conclusion
The integration demonstrates how F5 Distributed Cloud and Nutanix Flow collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Together, F5 and Nutanix enable organizations to scale with confidence, optimize application performance, and maintain robust security—empowering businesses to achieve greater agility and resilience across any environment. This integration is coming soon in CY2026. If you're interested in early access, please contact your F5 representative.

Related URLs
Simplifying and Securing Network Segmentation with F5 Distributed Cloud and Nutanix Flow | DevCentral
F5 Distributed Cloud - https://www.f5.com/products/distributed-cloud-services
Nutanix Flow Virtual Networking - https://www.nutanix.com/products/flow/networking
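As a quick, generic way to confirm the advertised VIP is working from a VM inside the dev3 VPC, standard client tools are enough; the hostname and VIP below come from the walkthrough above, while the exact responses will depend on your environment.

# Check that the FQDN resolves and that the application answers through the advertised VIP (10.10.111.175).
dig +short nutanix5.f5-demo.com
curl -sk -o /dev/null -w "%{http_code}\n" https://nutanix5.f5-demo.com/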
File Permissions Errors When Installing F5 Application Study Tool? Here's Why.
F5 Application Study Tool is a powerful utility for monitoring and observing your BIG-IP ecosystem. It provides valuable insights into the performance of your BIG-IPs, the applications it delivers, potential threats, and traffic patterns. In my work with my own customers and those of my colleagues, we have sometimes run into permissions errors when initially launching the tool post-installation. This generally prevents the tool from working correctly and, in some cases, from running at all. I tend to see this more in RHEL installations, but the problem can occur with any modern Linux distribution. In this blog, I go through the most common causes, the underlying reasons, and how to fix it.

Signs that You Have a File Permissions Issue
These issues can appear as empty dashboard panels in Grafana, dashboards with errors in each panel (pink squares with white warning triangles, as seen in the image below), or the Grafana dashboard not loading at all. This image shows the Grafana dashboard with errors in each panel.

When diving deeper, we see at least one of the three containers is down or continuously restarting. In the example below, the Prometheus container is continuously restarting:

ubuntu@ubuntu:~$ docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED         STATUS                          PORTS                                       NAMES
59a5e474ce36   prom/prometheus                           "/bin/prometheus --c…"   2 minutes ago   Restarting (2) 18 seconds ago                                               prometheus
c494909b8317   grafana/grafana                           "/run.sh"                2 minutes ago   Up 2 minutes                    0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   grafana
eb3d25ff00b3   ghcr.io/f5devcentral/application-stu...   "/otelcol-custom --c…"   2 minutes ago   Up 2 minutes                    4317/tcp, 55679-55680/tcp                   application-study-tool_otel-collector_1

A look at the container's logs shows a file permissions error:

ubuntu@ubuntu:~$ docker logs 59a5e474ce36
ts=2025-10-09T21:41:25.341Z caller=main.go:184 level=info msg="Experimental OTLP write receiver enabled"
ts=2025-10-09T21:41:25.341Z caller=main.go:537 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" file=/etc/prometheus/prometheus.yml err="open /etc/prometheus/prometheus.yml: permission denied"

Note that the path, "/etc/prometheus/prometheus.yml", is the path of the file within the container, not the actual location on the host. There are several ways to get the file's actual location on the host. One easy method is to view the docker-compose.yaml file. Within the prometheus service, in the volumes section, you will find the following line:

- ./services/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml

This indicates the file is located at "./services/prometheus/prometheus.yml" on the host. If we look at its permissions, we see that the permissions for the user "other" (represented by the three right-most characters in the permissions information to the left of the filename) are all dashes ("-"). This means the permissions are unset (they are disabled) for this user for reading, writing, or executing the file:

ubuntu@ubuntu:~$ ls -l services/prometheus/prometheus.yml
-rw-rw---- 1 ubuntu ubuntu 270 Aug 10 21:16 services/prometheus/prometheus.yml

For a description of default user roles in Linux and file permissions, see Red Hat's guide, "Managing file system permissions". Since all containers in the Application Study Tool run as "other" by default, they will not have any access to this file. At minimum, they require read permissions. Without this, you will see the error above.

The Fix!
Once you figure out the problem lies in file permissions, it's usually straightforward to fix it.
A simple "chmod o+r" (or "chmod 664" for those who like numbers) on the file, followed by a restart of Docker Compose, will get you back up and running most of the time. For example:

ubuntu@ubuntu:~$ ls -l services/prometheus/prometheus.yml
-rw-rw---- 1 ubuntu ubuntu 270 Aug 10 21:16 services/prometheus/prometheus.yml
ubuntu@ubuntu:~$ chmod o+r services/prometheus/prometheus.yml
ubuntu@ubuntu:~$ ls -l services/prometheus/prometheus.yml
-rw-rw-r-- 1 ubuntu ubuntu 270 Aug 10 21:16 services/prometheus/prometheus.yml
ubuntu@ubuntu:~$ docker-compose down
ubuntu@ubuntu:~$ docker-compose up -d

The above is sufficient when read permission issues only affect a few specific files. To ensure read permissions are enabled for "other" for all files in the services directory tree (which is where the AST containers read from), you can recursively set these permissions with the following commands:

cd services
chmod -R o+r .

For AST to work, all containing directories also need to be executable by "other", or the tool will not be able to traverse these directories and reach the files. In that case, you will continue to see permissions errors, and you can set execute permission recursively, just like the read permission setting performed above. To do this only for the services directory (which is the only place you should need it), run the following commands:

# If you just ran the steps in the previous command section, you will still be in the services/ subdirectory. In that case, run "cd .." before running the following commands.
chmod o+x services
cd services
chmod -R o+X .

Notes:
The dot (".") must be included at the end of the command. This tells chmod to start with the current working directory. The "-R" tells it to recursively act on all subdirectories.
The "X" in "o+X" must be capitalized to tell chmod to only operate on directories, not regular files. Execute permission is not needed for regular files in AST.
For a good description of how directory permissions work in Linux, see https://linuxvox.com/blog/understanding-linux-directory-permissions-reasoning/

But Why Does this Happen?
While the above discussion will fix file permissions issues after they've occurred, I wanted to understand what was actually causing this. Until recently, I had just chalked this up to some odd behavior in certain Red Hat installations (RHEL was the only place I had seen this) that modifies file permissions when files are pulled from GitHub repos. However, there is a better explanation. Many organizations have specific hardening practices when configuring new Linux machines. This sometimes involves the use of "umask" to set default file permissions for new files. Certain umask settings, such as 0007 and 0027 (anything ending with 7), will remove all permissions for "other". This only affects newly created files, such as those pulled from a Git repo. It does not alter existing files. This example shows how the newly created file, testfile, gets created without read permissions for "other" when the umask is set to 0007:

ubuntu@ubuntu:~$ umask 0007
ubuntu@ubuntu:~$ umask
0007
ubuntu@ubuntu:~$ touch testfile
ubuntu@ubuntu:~$ ls -l testfile
-rw-rw---- 1 ubuntu ubuntu 0 Oct 9 22:34 testfile

Notes:
In the above command block, note the last three characters in the permissions information, "-rw-rw----". These are all dashes ("-"), indicating the permission is disabled for user "other".
The umask setting is available in any modern Linux distribution, but I see it more often on RHEL.
Also, if you are curious, this post offers a good explanation of how umask works: What is "umask" and how does it work?

To prevent permissions problems in the first place, you can run "umask" on the command line to check the setting before cloning the GitHub repo. If it ends in a 7, modify it (assuming your user account has permissions to do this) to something like "0002" or "0022". This removes write permissions from "other", or from "group" and "other", respectively, but does not modify read or execute permissions for anyone. You can also set it to "0000", which will cause it to make no changes to the file permissions of any new files.

Alternatively, you can take a reactive approach, installing and launching AST as you normally would and only modifying file permissions when you encounter permission errors. If your umask is set to strip out read and/or execute permissions for "other", this will take more work than setting umask ahead of time. However, you can facilitate this by running the recursive "chmod -R o+r ." and "chmod -R o+X ." commands, as discussed above, to give "other" read permissions for all files and execute permissions for all subdirectories in the directory tree. (Note that this will also enable read permissions on all files, including those where it is not needed, so consider this before selecting this approach.)

For a more in-depth discussion of file permissions, see Red Hat's guide, "Managing file system permissions". Hope this is helpful when you run into this type of error. Feel free to post questions below.

F5 Distributed Cloud - Traffic Steering based on Client IP Address
When client traffic needs to be routed to specific origin servers based on the IP address, F5 Distributed Cloud Services allows us to control traffic as required in canary deployments, or where resources in one location are more appropriate to process requests from some clients. This solution uses the Origin Server Subset Rules feature, which provides the ability to create match conditions for incoming traffic to the HTTP Load Balancer using country, ASN, F5 Regional Edge (RE), IP address, or client label selectors for selection of the destination (origin servers).

This example uses origin servers in two different locations connected through F5XC Customer Edges using IPsec tunnels. Location 1 hosts application version 1 and Location 2 hosts application version 2. The goal is to forward requests from specific IP addresses to Location 2 and requests from all other IP addresses to Location 1.

Configuration

1. Create a Known Label
A known label is a key-value pair that can be attached to objects for referencing the objects using the label.
Go to Home > Shared Configuration > Manage > Labels > Known Keys.
Click on Add Known Key and provide a Label Key and the Label values. After adding the values, click on Add key.
To verify that the labels are created, go to Manage > Labels > Known Labels.

2. Add the labels to the origin servers
Go to Home > Multi-Cloud App Connect or Web App & API Protection > Manage > Load Balancers > Origin Pools.
Identify the Origin Pool to configure, click the three-dot menu (•••) on the right, and select Manage Configuration.
Edit the configuration for each origin server to add the label: select Show Advanced Fields and, from the Origin Server Labels, select the created Label and the value. This is the configuration after the labels are added to each Origin Server.

3. Enable Subset Load Balancing
In the Other Settings section of the Origin Pool configuration, click on Configure and, from the Enable/Disable Subset Load Balancing menu, select Enable Subset Load Balancing and then click on Configure.
From the Subset Classes section, click on Add Item to add the label key created in step 1. This is the result after adding the label key.
Click Apply twice and save the configuration of the Origin Pool.

4. Create an IP Prefix Set
An IP Prefix Set contains an ordered list of IP prefixes. It will be used to forward traffic to a specific origin server using origin server subset rules. IP Prefix Sets can be created in multiple workspaces: Web App & API Protection, Multi-Cloud App Connect, or Shared Configuration.
Go to Home > Shared Configuration > Security > Shared Objects > IP Prefix Sets.
Click on Add IP Prefix Set and enter a name and description as needed.
After adding all the IP addresses, click on Add IP prefix set.

5. Configure the Load Balancer
Go to Home > Multi-Cloud App Connect or Web App & API Protection > Manage > Load Balancers > HTTP Load Balancers.
Identify the Load Balancer to configure, click the three-dot menu (•••) on the right, and select Manage Configuration and then Edit Configuration.
In the Origins section, add the origin pool configured in step 2.
Click on Show Advanced Fields and then on Configure in the Origin Server Subset Rules section. Click on Add Item to add the rules.

Provide a name and an optional description, and from the Action menu, select the created Label key and the appropriate key value. From the Clients section, click on Source IPv4 Match and select IPv4 Prefix List. Click Add Item to add the IP Prefix Set created in step 4. This is the first rule, which forwards traffic to the origin servers where application v2 is hosted.

Click Apply and repeat the same steps to forward the traffic of all other IP addresses to the origin server where v1 of the application is hosted. For this rule, add a name and an optional description, select the label key and app-v1 for the key value, and for Source IPv4 Match keep the default value, Any Source IP. These are the Server Subset Rules created.

Click on Apply and then on Save HTTP Load Balancer.

6. Validate the Solution
When requests from IP addresses that are part of the IP Prefix Set (for example, 187.188.10.147) reach the HTTP Load Balancer, they are forwarded to origin server 2 in the f5xc-aws-ce Customer Edge, which hosts application v2. All other traffic is forwarded to origin server 1 in the f5xc-onprem-ce Customer Edge, which hosts application v1.

Conclusion
F5 Distributed Cloud features can perform traffic management based on various criteria to route requests to the most optimal path, improving user experience and supporting multiple use cases such as canary deployments.
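As a final sanity check from the command line, you can compare responses from a client whose public IP is listed in the IP Prefix Set against one that is not. This is only a rough sketch: the load balancer hostname and the way each application version identifies itself are assumptions, not part of the walkthrough above.

# From a client whose source IP is in the IP Prefix Set (e.g., 187.188.10.147),
# the response should come from the origin behind f5xc-aws-ce hosting application v2.
curl -s https://app.example.com/

# From any other client, the response should come from the origin behind
# f5xc-onprem-ce hosting application v1.
curl -s https://app.example.com/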