AI Inference for VLLM models with F5 BIG-IP & Red Hat OpenShift
This article shows how to perform intelligent load balancing for AI workloads using the new features of BIG-IP v21 and Red Hat OpenShift. Load-balancing decisions are driven by business-logic rules, with no iRule programming required, and by state metrics of the vLLM inference servers gathered from OpenShift's Prometheus.

Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud
Introduction

F5 Application Delivery and Security Platform (ADSP) is the premier solution for converging high-performance delivery and security for every app and API across any environment. It provides a unified platform offering granular visibility, streamlined operations, and AI-driven insights, deployable anywhere and in any form factor. The F5 ADSP Partner Ecosystem brings together a broad range of partners to deliver customer value across the entire lifecycle, including cohesive solutions, cloud synergies, and access to expert services that help customers maximize outcomes while simplifying operations.

In this article, we'll explore the upcoming integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments.

Integration Overview

At the heart of this integration is the capability to deploy an F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture provides complete control over application delivery and security within the VPC. We can selectively advertise HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity.

Additionally, the integration simplifies and secures network segmentation across hybrid and multi-cloud environments. By leveraging F5 Distributed Cloud to segment and extend the network to remote locations, combined with Nutanix Flow Security for microsegmentation within VPCs, we deliver comprehensive end-to-end network security. This approach enforces a consistent security posture while simplifying segmentation across environments. In this article, we'll focus on application delivery and security; segmentation is explored in the next article.

Demo Walkthrough

Let's walk through a demo to see how this integration works. The goal of this demo is to enable secure application delivery for nutanix5.f5-demo.com within the Nutanix Flow Virtual Private Cloud (VPC) named dev3. Our demo environment, dev3, is a Nutanix Flow VPC with an F5 Distributed Cloud Customer Edge (CE) named jy-nutanix-overlay-dev3 deployed inside.

*Note: The CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in Nutanix Prism Central.

eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway.

On the F5 Distributed Cloud Console, we created an HTTP Load Balancer named jy-nutanix-internal-5 serving the FQDN nutanix5.f5-demo.com. This load balancer distributes workloads across hybrid multicloud environments and is protected by a WAF policy named nutanix-demo.

We advertised this HTTP Load Balancer with a Virtual IP (VIP) 10.10.111.175 to the CE jy-nutanix-overlay-dev3 deployed inside Nutanix Flow VPC dev3. The CE then advertised the VIP route via BGP to its peer, the Nutanix Flow BGP Gateway, which received the VIP route and installed it in the VPC routing table.

Finally, the VMs in dev3 can securely access nutanix5.f5-demo.com while continuing to use the VPC logical router as their default gateway. A quick way to confirm this from a dev3 VM is shown below.
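The following is a minimal verification sketch run from a VM inside dev3. The FQDN and VIP come from the demo above; everything else, including the assumption that lab DNS may not yet resolve the FQDN and the use of standard Linux tooling on the VM, is illustrative only.

# From a VM in the dev3 VPC: confirm the VIP route egresses via the
# VPC logical router (the VM's default gateway).
ip route get 10.10.111.175

# Request the application through the advertised HTTP Load Balancer VIP.
# --resolve pins the FQDN to the VIP in case lab DNS is not set up.
curl -sv --resolve nutanix5.f5-demo.com:443:10.10.111.175 \
    https://nutanix5.f5-demo.com/ -o /dev/null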
F5 Distributed Cloud Console observability provides deep visibility into applications and security events. For example, it offers comprehensive dashboards and metrics to monitor the performance and health of applications served through HTTP load balancers, including detailed insights into traffic patterns, latency, HTTP error rates, and the status of backend services. Furthermore, the built-in AI assistant provides real-time visibility and actionable guidance on security incidents, improving situational awareness and supporting informed decision-making. This capability enables rapid threat detection and response, helping maintain a strong and resilient security posture.

Conclusion

This integration demonstrates how F5 Distributed Cloud and Nutanix Flow collaborate to deliver secure, resilient application services across hybrid and multi-cloud environments. Together, F5 and Nutanix enable organizations to scale with confidence, optimize application performance, and maintain robust security, empowering businesses to achieve greater agility and resilience across any environment. This integration is coming soon in CY2026; if you're interested in early access, please contact your F5 representative.

Related URLs
Simplifying and Securing Network Segmentation with F5 Distributed Cloud and Nutanix Flow | DevCentral
F5 Distributed Cloud - https://www.f5.com/products/distributed-cloud-services
Nutanix Flow Virtual Networking - https://www.nutanix.com/products/flow/networking
Simplifying and Securing Network Segmentation with F5 Distributed Cloud and Nutanix Flow
Introduction

Enterprises often separate environments, such as development and production, to improve efficiency, reduce risk, and maintain compliance. A critical enabler of this separation is network segmentation, which isolates networks into smaller, secured segments, strengthening security, optimizing performance, and supporting regulatory standards. In this article, we explore the integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to simplify and secure network segmentation across diverse environments: on-premises, remote, and hybrid multicloud.

Integration Overview

At the heart of this integration is the capability to deploy an F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture provides full control over application delivery and security within the VPC. It enables selective advertisement of HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity.

By leveraging F5 Distributed Cloud to segment and extend networks to remote locations, whether on-premises or in the public cloud, combined with Nutanix Flow for microsegmentation within VPCs, enterprises achieve comprehensive end-to-end security. This approach enforces a consistent security posture while reducing complexity across diverse infrastructures. In our previous article, we explored application delivery and security. Here, we focus on network segmentation and how this integration simplifies connectivity across environments.

Demo Walkthrough

The demo consists of two parts:
1. Extending a local network segment from a Nutanix Flow VPC to a remote site using F5 Distributed Cloud.
2. Applying microsegmentation within the network segment using Nutanix Flow Security Next-Gen.

San Jose (SJ) serves as our local site, and the demo environment dev3 is a Nutanix Flow VPC with an F5 Distributed Cloud Customer Edge (CE) deployed inside.

*Note: The SJ CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in Nutanix Prism Central.

On the F5 Distributed Cloud Console, we created a network segment named jy-nutanix-sjc-nyc-segment and assigned it to the subnet 192.170.84.0/24. eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway in this segment.

At the remote site in NYC, a CE named jy-nutanix-nyc is deployed with a local subnet of 192.168.60.0/24. To extend jy-nutanix-sjc-nyc-segment from SJ to NYC, simply assign the segment to the NYC CE local subnet 192.168.60.0/24 in the F5 Distributed Cloud Console. Effortlessly and in no time, the segment is extended across environments from SJ to NYC.

Checking the CE routing tables, we can see that the local routes originated by each CE are exchanged between them. At the local site, the SJ CE jy-nutanix-overlay-dev3 advertises the remote route originating from the NYC CE jy-nutanix-nyc to the Nutanix Flow BGP Gateway via BGP, and the route is installed in the dev3 routing table.

SJ VMs can now reach NYC VMs and vice versa, while continuing to use their Nutanix Flow VPC logical router as the default gateway. A simple reachability check from an SJ VM is sketched below.
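As a sanity check, the cross-site path can be exercised with ordinary Linux tooling from a VM in dev3. The subnets come from the demo above; the specific host address 192.168.60.10 is a hypothetical NYC VM chosen for illustration.

# From an SJ VM in dev3 (192.170.84.0/24): verify reachability to a NYC VM.
ping -c 3 192.168.60.10          # hypothetical NYC VM in 192.168.60.0/24

# Confirm the path goes via the VPC logical router (default gateway),
# which forwards segment traffic to the CE.
traceroute -n 192.168.60.10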
To enforce granular security within the segment, Nutanix Flow Security Next-Gen provides microsegmentation. Together, F5 Distributed Cloud and Nutanix Flow Security Next-Gen deliver a cohesive solution: F5 Distributed Cloud seamlessly extends network segments across environments, while Nutanix Flow Security Next-Gen ensures fine-grained security controls within those segments.

Our demo extends a network segment between two data centers, but the same approach can also be applied between on-premises and public cloud environments, delivering flexibility across hybrid multicloud environments.

Conclusion

F5 Distributed Cloud simplifies network segmentation across hybrid and multi-cloud environments, making it both secure and effortless. By seamlessly extending network segments across any environment, F5 removes the complexity traditionally associated with connecting diverse infrastructures. Combined with Nutanix Flow Security Next-Gen for microsegmentation within each segment, this integration delivers end-to-end protection and consistent policy enforcement. Together, F5 and Nutanix help enterprises reduce operational overhead, maintain compliance, and strengthen security, while enabling agility and scalability across all environments. This integration is coming soon in CY2026; if you're interested in early access, please contact your F5 representative.

Related URLs
Delivering Secure Application Services Anywhere with Nutanix Flow and F5 Distributed Cloud | DevCentral
F5 Distributed Cloud - https://www.f5.com/products/distributed-cloud-services
Nutanix Flow Network Security - https://www.nutanix.com/products/flow
F5 Distributed Cloud L7 DoS Attack Mitigation Roundup

DoS attack mitigation provides service resiliency for applications connected and delivered at Layer 7 through F5 Distributed Cloud Regional Edges (REs). Attacks are mitigated at the F5 regional edge and global backbone before reaching bandwidth-limited network segments and vulnerable services downstream. New capabilities have been added to L7 DoS protection in Distributed Cloud:

Requests per second thresholds
A Requests Per Second (RPS) threshold is now configurable for L7 DDoS detection. L7 DDoS protection and mitigation engage when the defined RPS threshold is exceeded and origin health degradation is detected.

Alternate mitigation with JS or CAPTCHA challenge
You can now configure JS or CAPTCHA challenges as an L7 DDoS mitigation action, applied to all users when a Layer 7 DDoS attack is detected, providing an additional layer of security against such threats.

When combined, RPS tuning can trigger mitigation events sooner, and the custom mitigation action enables flexible protection settings: either immediately block connectivity to the service, or prompt clients with a challenge. This makes DoS protection adaptable for apps and services with differing traffic volumes and types of users. A simple way to observe which behavior a client sees is sketched below.
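To see a mitigation take effect from the client's perspective, a plain HTTP client can be pointed at the protected endpoint while traffic ramps up. This is a generic probe, not a Distributed Cloud API: the URL is a placeholder, and the exact status codes returned for a blocked request or a served JS/CAPTCHA challenge depend on your configuration.

#!/usr/bin/env bash
# Probe a protected endpoint repeatedly and report the HTTP status codes
# observed. During an L7 DoS mitigation with a JS/CAPTCHA challenge enabled,
# plain curl clients (which cannot execute the challenge) typically stop
# receiving 200s. URL is a placeholder.
URL="https://app.example.com/"
for i in $(seq 1 20); do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$URL")
  echo "request $i -> HTTP $code"
  sleep 0.5
done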
Under attack?
Distributed Cloud also supports custom service policies that apply only while under attack. This makes it possible to configure exception-tier services by allowing apps to remain available to select groups of users and customers, while broadly mitigating unidentified users and traffic. The following video provides an example showing how custom service policies in Distributed Cloud can be used to provide different tiers of service while under a DoS attack.

L7 DoS settings & streamlined observability
In addition to DoS attack mitigation capabilities, enhancements to the Distributed Cloud load balancer security dashboards make it easy to spot detected DoS attacks and their origin, and to see auto-mitigations that have occurred. The following video provides an overview of recent security dashboard enhancements focusing on L7 DoS mitigation.

An interactive product experience
The following interactive product experience provides an L3/L4 volumetric (routed) DDoS overview as well as a separate L7 DoS walkthrough, with more detail on configuring L7 DoS in Distributed Cloud and where to observe mitigations, attack events, and security alerts. https://f5.navattic.com/mra0ud8

Additional Resources
Easily Protect Your Applications from DDoS with F5 Distributed Cloud DDoS Auto-Mitigation
Video: F5 Distributed Cloud (F5 XC) DoS Protection – Basic features
F5 Distributed Cloud L3-L7 DDoS Mitigation – Basic setup and configuration
BIG-IP Next for Kubernetes CNFs deployment walkthrough
Introduction

F5 Application Delivery and Security Platform covers different deployment scenarios to help deliver and secure any application anywhere. The BIG-IP Next for Kubernetes CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. The way F5 implements CNFs enables dynamic traffic steering across CNF pods and optimizes resource utilization through intelligent workload distribution. The architecture supports the horizontal scaling patterns typical of cloud-native applications while maintaining the deterministic performance characteristics required for telecommunications workloads.

Lab environment

Below are the lab components:
- Kubernetes cluster with a control plane and two worker nodes.
- TMM is deployed on worker node 1.
- The client connects to the subscriber VLAN via worker node 2.
- Grafana is reachable through the internal network.

The lab walkthrough below assumes:
- A working Kubernetes cluster with Red Hat OpenShift.
- A local repo is configured.
- Storage is configured, whether local or NFS.

The installation flow is: create the CNF namespaces, install the helm charts, then deploy the network and security CRs.

Creating our CNF namespaces; in our lab, we are using cne-core and cnf-fw-01:

lab$ oc create namespace cne-core
lab$ oc create namespace cnf-fw-01
lab$ oc get ns
NAME        STATUS   AGE
cne-core    Active   60d
cnf-fw-01   Active   59d

Installing the helm charts with the required values per environment:

lab$ helm install f5-cert-manager oci://repo.f5.com/charts/f5-cert-manager --version 0.23.35-0.0.10 -f cert-manager-values.yaml -n cne-core
lab$ helm install f5-fluentd oci://repo.f5.com/charts/f5-toda-fluentd --version 1.31.30-0.0.7 -f fluentd-values.yaml -n cne-core --wait
lab$ helm install f5-dssm oci://repo.f5.com/charts/f5-dssm --version 1.27.1-0.0.20 -f dssm-values.yaml -n cne-core --wait
lab$ helm install cnf-rabbit oci://repo.f5.com/charts/rabbitmq --version 0.6.1-0.0.13 -f rabbitmq-values.yaml -n cne-core --wait
lab$ helm install cnf-cwc oci://repo.f5.com/charts/cwc --version 0.43.1-0.0.15 -f cwc-values.yaml -n cne-core --wait
lab$ helm install f5ingress oci://repo.f5.com/charts/f5ingress --version v13.7.1-0.3.22 -f values-ingress.yaml -n cnf-fw-01 --wait

lab$ oc get pods -n cne-core
NAME                                          READY   STATUS    RESTARTS   AGE
f5-cert-manager-656b6db84f-t9dhn              2/2     Running   0          3h46m
f5-cert-manager-cainjector-5cd9454d6c-nz46d   1/1     Running   0          3h46m
f5-cert-manager-webhook-6d87b5797b-pmlwv      1/1     Running   0          3h46m
f5-dssm-db-0                                  3/3     Running   0          3h43m
f5-dssm-db-1                                  3/3     Running   0          3h42m
f5-dssm-db-2                                  3/3     Running   0          3h41m
f5-dssm-sentinel-0                            3/3     Running   0          3h43m
f5-dssm-sentinel-1                            3/3     Running   0          3h42m
f5-dssm-sentinel-2                            3/3     Running   0          3h41m
f5-rabbit-64c984d4c6-rjd2d                    2/2     Running   0          3h40m
f5-spk-cwc-77d487f955-7vqtl                   2/2     Running   0          3h39m
f5-toda-fluentd-558cd5b9bd-9cr6w              1/1     Running   0          3h43m

lab$ oc get pods -n cnf-fw-01
NAME                                   READY   STATUS    RESTARTS   AGE
f5-afm-76c7d76fff-8pj4c                2/2     Running   0          3h37m
f5-downloader-657b7fc749-nzfgt         2/2     Running   0          3h37m
f5-dwbld-d858c485b-c6bmf               2/2     Running   0          3h37m
f5-ipsd-79f97fdb9c-dbsqb               2/2     Running   0          3h37m
f5-tmm-7565b4c798-hvsfd                5/5     Running   0          3h37m
f5-zxfrd-d9db549c4-qqhtr               2/2     Running   0          3h37m
f5ingress-f5ingress-7bcc94b9c8-jcbfg   5/5     Running   0          3h37m
otel-collector-75cd944bcc-fsvz4        1/1     Running   0          3h37m

Deploy the subscriber and external (data) VLANs:

lab$ cat 01-cr-vlan.yaml
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "subscriber-vlan"
  namespace: "cnf-fw-01"
spec:
  name: subscriber
  interfaces:
    - "1.2"
  selfip_v4s:
    - 10.1.20.100
    - 10.1.20.101
    - 10.1.20.102
  prefixlen_v4: 24
  mtu: 1500
  cmp_hash: SRC_ADDR
---
apiVersion: "k8s.f5net.com/v1"
kind: F5BigNetVlan
metadata:
  name: "data-vlan"
  namespace: "cnf-fw-01"
spec:
  name: data
  interfaces:
    - "1.1"
  selfip_v4s:
    - 10.1.30.100
    - 10.1.30.101
    - 10.1.30.102
  prefixlen_v4: 24
  mtu: 1500
  cmp_hash: SRC_ADDR
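The CRs can be applied with oc and then listed by kind to confirm the controller accepted them. Listing custom resources with oc get is standard Kubernetes behavior; the exact status columns depend on the CRD definition. The file name comes from the cat command above.

# Apply the VLAN CRs and confirm they exist in the namespace.
lab$ oc apply -f 01-cr-vlan.yaml
lab$ oc get f5bignetvlan -n cnf-fw-01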
"subscriber-vlan" namespace: "cnf-fw-01" spec: name: subscriber interfaces: - "1.2" selfip_v4s: - 10.1.20.100 - 10.1.20.101 - 10.1.20.102 prefixlen_v4: 24 mtu: 1500 cmp_hash: SRC_ADDR --- apiVersion: "k8s.f5net.com/v1" kind: F5BigNetVlan metadata: name: "data-vlan" namespace: "cnf-fw-01" spec: name: data interfaces: - "1.1" selfip_v4s: - 10.1.30.100 - 10.1.30.101 - 10.1.30.102 prefixlen_v4: 24 mtu: 1500 cmp_hash: SRC_ADDR Now, we start testing from the client side and observe our monitoring system, already configured OTEL to send data to Grafana Now, let’s have a look at our firewall policy, CR $ cat 06-cr-fw-policy-crd.yaml apiVersion: "k8s.f5net.com/v1" kind: F5BigFwPolicy metadata: name: "cnfcop-n6policy" spec: rule: - name: deny-53-hsl-log ipProtocol: udp source: addresses: - "0.0.0.0/0" ports: [] zones: - "subscriber" destination: addresses: - "0.0.0.0/0" ports: - "53" zones: - "data" action: "drop" logging: true - name: permit-any-hsl-log ipProtocol: any source: addresses: - "0.0.0.0/0" ports: [] zones: - "subscriber" destination: addresses: - "0.0.0.0/0" ports: [] zones: - "data" action: "accept" logging: true Let’s observe the logs from our monitoring system Conclusion In conclusion, BIG-IP Next for Kubernetes CNFs optimizes edge environments and AI workloads by providing consolidated data plane with BIG-IP market-leading application delivery and security platform capabilities for different deployment models. In this article, we explored CNF implementation with an example focused around CNF edge firewall, following articles to cover additional CRs (IPS, DNS, etc.) Related Content BIG-IP Next Cloud-Native Network Functions (CNFs) CNF Home F5 BIG-IP Next CGNAT CNF - Redhat Directory Modern Applications-Demystifying Ingress solutions flavors BIG-IP Next for Kubernetes Nvidia DPU deployment walkthrough From virtual to cloud-native, infrastructure evolution | DevCentral151Views2likes0CommentsThe Ingress NGINX Alternative: F5 NGINX Ingress Controller for the Long Term
The Ingress NGINX Alternative: F5 NGINX Ingress Controller for the Long Term

The Kubernetes community recently announced that Ingress NGINX will be retired in March 2026. After that date, there won't be any more updates, bug fixes, or security patches. ingress-nginx is no longer a viable enterprise solution for the long term, and organizations using it in production should move quickly to explore alternatives and plan to shift their workloads to Kubernetes ingress solutions that are under continuing development.

Your Options (And Why We Hope You'll Consider NGINX)

There are several good Ingress controllers available: Traefik, HAProxy, Kong, Envoy-based options, and Gateway API implementations. The Kubernetes docs list many of them, and they all have their strengths. Security start-up Chainguard is maintaining a status-quo version of ingress-nginx and applying basic safety patches as part of their EmeritOSS program, but this program is designed as a stopgap to keep users safe while they transition to a different ingress solution.

F5 maintains an OSS, permissively licensed NGINX Ingress Controller. The project is open source, Apache 2.0 licensed, and will stay that way. A team of dedicated engineers works on it, with a slate of upcoming upgrades. If you're already comfortable with NGINX and just want something that works without a significant learning curve, we believe that the F5 NGINX Ingress Controller for Kubernetes is your smoothest path forward.

The benefits of adopting NGINX Ingress Controller open source include:

Genuinely open source: Apache 2.0 licensed with 150+ contributors from diverse organizations, not just F5. All development happens publicly on GitHub, and F5 has committed to keeping it open source forever. Plus community calls every two weeks.

Minimal learning curve: Uses the same NGINX engine you already know. Most Ingress NGINX annotations have direct equivalents, and the migration guide provides clear mappings for your existing configurations. Supported annotations include popular ones such as:
- nginx.org/client-body-buffer-size mirrors nginx.ingress.kubernetes.io/client-body-buffer-size (sets the maximum size of the client request body buffer); also available in VirtualServer and ConfigMap.
- nginx.org/rewrite-target mirrors nginx.ingress.kubernetes.io/rewrite-target (sets a replacement path for URI rewrites).
- nginx.org/ssl-ciphers mirrors nginx.ingress.kubernetes.io/ssl-ciphers (configures enabled TLS cipher suites).
- nginx.org/ssl-prefer-server-cipher mirrors nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers (controls server-side cipher preference during the TLS handshake).

Optional enterprise-grade capabilities: While the OSS version is robust, NGINX Plus integration is available for enterprises needing high availability, authentication and authorization, session persistence, advanced security, and commercial support.

Sustainable maintenance: A dedicated full-time team at F5 ensures regular security updates, bug fixes, and feature development.

Production-tested at scale: NGINX Ingress Controller powers approximately 40% of Kubernetes Ingress deployments with over 10 million downloads. It's battle-tested in real production environments.

Kubernetes-native design: Custom Resource Definitions (VirtualServer, Policy, TransportServer) provide cleaner configuration than annotation overload, with built-in validation to prevent errors.

Advanced capabilities when you need them: Support for canary deployments, A/B testing, traffic splitting, JWT validation, rate limiting, mTLS, and more, available in the open source version.

Future-proof architecture: Active development of NGINX Gateway Fabric provides a clear migration path when you're ready to move to Gateway API. NGINX Gateway Fabric is a conformant Gateway API solution under CNCF conformance criteria and is one of the most widely used open source Gateway API solutions.

To make the annotation mapping concrete, a small example follows.
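Here is a minimal sketch of an Ingress resource using one of the nginx.org annotations listed above. The resource name, hostname, service name, and paths are hypothetical; the annotation name comes from the mapping list, and the ingress class is assumed to be the default "nginx" class registered by the controller.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp            # hypothetical
  annotations:
    # NGINX Ingress Controller equivalent of
    # nginx.ingress.kubernetes.io/rewrite-target
    nginx.org/rewrite-target: "/"
spec:
  ingressClassName: nginx
  rules:
    - host: webapp.example.com        # hypothetical hostname
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: webapp-svc      # hypothetical service
                port:
                  number: 80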
Moving to NGINX Ingress Controller

Here's a rough migration guide. You can also check our more detailed migration guide on our documentation site.

Phase 1: Take Stock
- See what you have: Document your current Ingress resources, annotations, and ConfigMaps.
- Check for snippets: Identify any annotations like nginx.ingress.kubernetes.io/configuration-snippet.
- Confirm you're using it: Run kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx
- Set it up alongside: Install NGINX Ingress Controller in a separate namespace while keeping your current setup running.

Phase 2: Translate Your Config
- Convert annotations: Most of your existing annotations have equivalents in NGINX Ingress Controller; a comprehensive migration guide maps them out.
- Consider VirtualServer resources: These custom resources are cleaner than annotation-heavy Ingress and give you more control, but it's your choice.
- Or keep using Ingress: If you want minimal changes, it works fine with standard Kubernetes Ingress resources.
- Handle edge cases: For anything that doesn't map directly, you can use snippets or Policy resources.

Phase 3: Test Everything
- Try it with test apps: Create some test Ingress rules pointing to NGINX Ingress Controller (see the sketch after this list).
- Run both side-by-side: Keep both controllers running and route test traffic through the new one.
- Verify functionality: Check routing, SSL, rate limiting, CORS, auth, and whatever else you're using.
- Check performance: Verify it handles your traffic the way you need.

Phase 4: Move Over Gradually
- Start small: Migrate your less-critical applications first.
- Shift traffic slowly: Update DNS/routing bit by bit.
- Watch closely: Keep an eye on logs and metrics as you go.
- Keep an escape hatch: Make sure you can roll back if something goes wrong.

Phase 5: Finish Up
- Complete the migration: Move your remaining workloads.
- Clean up the old controller: Uninstall community Ingress NGINX once everything's moved.
- Tidy up: Remove old ConfigMaps and resources you don't need anymore.
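For the side-by-side test in Phase 3, the key is that the two controllers register different ingress classes, so a test Ingress can target the new controller without touching production. A minimal, hypothetical sketch follows; the namespace and service names depend on how you installed NGINX Ingress Controller, so adjust to your environment.

# Confirm both controllers are registered under distinct classes.
kubectl get ingressclass

# Send a test request through the new controller's service (namespace and
# service name are hypothetical; use the actual values from your install).
kubectl -n nginx-ingress port-forward svc/nginx-ingress 8080:80 &
curl -H "Host: webapp-test.example.com" http://127.0.0.1:8080/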
Enterprise-grade capabilities and support

Once an ingress layer becomes mission-critical, enterprise features become necessary: high availability, predictable failover, and supportability matter as much as features. Enterprise-grade capabilities available for NGINX Ingress Controller Plus include high availability, authentication and authorization, commercial support, and more, ensuring production traffic remains fast, secure, and reliable. Capabilities include:

Commercial support
- Backed by vendor commercial support (SLAs, escalation paths) for production incidents.
- Access to tested releases, patches, and security fixes suitable for regulated/enterprise environments.
- Guidance for production architecture (HA patterns, upgrade strategies, performance tuning).
- Helps organizations standardize on a supported ingress layer for platform engineering at scale.

Dynamic reconfiguration
- Upstream configuration updates via API without process reloads.
- Eliminates memory bloat and connection timeouts, as upstream server lists and variables are updated in real time when pods scale or configurations change.

Authentication & authorization
- Built-in authentication support for OAuth 2.0 / OIDC, JWT validation, and basic auth.
- External identity provider integration (e.g., Okta, Azure AD, Keycloak) via auth request patterns.
- JWT validation at the edge, including signature verification, claims inspection, and token expiry enforcement.
- Fine-grained access control based on headers, claims, paths, methods, or user identity.

Optional Web Application Firewall
- Native integration with F5 WAF for NGINX for OWASP Top 10 protection, gRPC schema validation, and OpenAPI enforcement.
- DDoS mitigation capabilities when combined with F5 security solutions.
- Centralized policy enforcement across multiple ingress resources.

High availability (HA)
- Designed to run as multiple Ingress Controller replicas in Kubernetes for redundancy and scale.
- State sharing: maintains session persistence, rate limits, and key-value stores for seamless uptime.

Here's the full list of differences between NGINX Open Source and NGINX One, a package that includes NGINX Plus Ingress Controller, NGINX Gateway Fabric, F5 WAF for NGINX, and NGINX One Console for managing NGINX Plus Ingress Controllers at scale.

Get Started Today

Ready to begin your migration? Here's what you need:
📚 Read the full documentation: NGINX Ingress Controller Docs
💻 Clone the repository: github.com/nginx/kubernetes-ingress
🐳 Pull the image: Docker Hub - nginx/nginx-ingress
🔄 Follow the migration guide: Migrate from Ingress-NGINX to NGINX Ingress Controller

Interested in the enterprise version? Try NGINX One for free and give it a whirl. The NGINX Ingress Controller community is responsive and full of passionate builders; join the conversation in the GitHub Discussions or the NGINX Community Forum. You've got time to plan this migration right, but don't wait until March 2026 to start.
Getting Started with the Certified F5 NGINX Gateway Fabric Operator on Red Hat OpenShift

As enterprises modernize their Kubernetes strategies, the shift from standard Ingress controllers to the Kubernetes Gateway API is redefining how we manage traffic. For years, the F5 NGINX Ingress Controller has been a foundational component in OpenShift environments. With the certification of F5 NGINX Gateway Fabric (NGF) 2.2 for Red Hat OpenShift, that legacy enters its next chapter. This new certified operator brings the high-performance NGINX data plane into the standardized, role-oriented Gateway API model, with full integration into the OpenShift Operator Lifecycle Manager (OLM). Whether you're a platform engineer managing cluster ingress or a developer routing traffic to microservices, NGF on OpenShift 4.19+ delivers a unified, secure, and fully supported traffic fabric. In this guide, we walk through installing the operator, configuring the NginxGatewayFabric resource, and addressing OpenShift-specific networking patterns such as NodePort + Route.

Why NGINX Gateway Fabric on OpenShift?

While Red Hat OpenShift 4.19+ includes native support for the Gateway API (v1.2.1), integrating NGF adds critical enterprise capabilities:

✔ Certified & OpenShift-ready: The operator is fully validated by Red Hat, ensuring UBI-compliant images and compatibility with OpenShift's strict Security Context Constraints (SCCs).
✔ High performance, low complexity: NGF delivers the core benefits long associated with NGINX: efficiency, simplicity, and predictable performance.
✔ Advanced traffic capabilities: Capabilities like regular-expression path matching and support for ExternalName services allow for complex, hybrid-cloud traffic patterns.
✔ AI/ML readiness: NGF 2.2 supports the Gateway API Inference Extension, enabling inference-aware routing for GenAI and LLM workloads on platforms like Red Hat OpenShift AI.

Prerequisites

Before we begin, ensure you have:
- Cluster administrator access to an OpenShift cluster (version 4.19 or later is recommended for Gateway API GA support).
- Access to the OpenShift Console and the oc CLI.
- The ability to pull images from ghcr.io or your internal mirror.

Step 1: Installing the Operator from OperatorHub

We leverage the Operator Lifecycle Manager (OLM) for a point-and-click installation that handles lifecycle management and upgrades.
1. Log into the OpenShift web console as an administrator.
2. Navigate to Operators > OperatorHub.
3. Search for NGINX Gateway Fabric in the search box.
4. Select the NGINX Gateway Fabric Operator card and click Install.
5. Accept the default installation mode (All namespaces) or select a specific namespace (e.g., nginx-gateway), and click Install.
6. Wait until the status shows Succeeded.

Once installed, the operator will manage the NGF lifecycle automatically.

Step 2: Configuring the NginxGatewayFabric Resource

Unlike the Ingress Controller, which used NginxIngressController resources, NGF uses the NginxGatewayFabric Custom Resource (CR) to configure the control plane and data plane.
1. In the console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select Create NginxGatewayFabric.
3. Select YAML view to configure the deployment specifics.

Step 3: Exposing the NGF Data Plane Service

NGF uses a Kubernetes Service to expose its data plane. Before the data plane launches, we must tell the controller how to expose it.

Option A - LoadBalancer (ROSA, ARO, managed OpenShift)

By default, the NGINX Gateway Fabric Operator configures the service type as LoadBalancer. On public cloud managed OpenShift services (like ROSA on AWS or ARO on Azure), this native default works out of the box to provision a cloud load balancer; no additional steps are required. A quick check that the external address was provisioned is sketched below.
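On a managed cluster, one way to confirm provisioning is to watch the data plane Service for an external address. The namespace and Service name below match the defaults seen later in this walkthrough, but they may differ in your installation.

# Watch the NGF data plane Service until the cloud load balancer
# assigns an EXTERNAL-IP (names may differ in your install).
oc get svc -n nginx-gateway -w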
Option B - NodePort with OpenShift Route (on-prem/hybrid)

For on-premises or bare-metal OpenShift clusters lacking a native LoadBalancer implementation, the common pattern is to use a NodePort service exposed via an OpenShift Route.

Update the NGF CR to use NodePort:
1. In the console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select NginxGatewayFabric.
3. Select YAML view to directly edit the configuration.
4. Change spec.nginx.service.type to NodePort:

apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGatewayFabric
metadata:
  name: default
  namespace: nginx-gateway
spec:
  nginx:
    service:
      type: NodePort

Create the OpenShift Route. After applying the CR, create a Route to expose the NGINX Service:

oc create route edge ngf \
  --service=nginxgatewayfabric-sample-nginx-gateway-fabric \
  --port=http \
  -n nginx-gateway

Note: This creates an edge TLS termination route. For passthrough TLS (allowing NGINX to handle certificates), use --passthrough and target the https port.

Step 4: Validating the Deployment

Verify that the operator has deployed the control plane pods successfully:

oc get pod -n nginx-gateway
NAME                                                              READY   STATUS    RESTARTS   AGE
nginx-gateway-fabric-controller-manager-dd6586597-bfdl5          1/1     Running   0          23m
nginxgatewayfabric-sample-nginx-gateway-fabric-564cc6df4d-hztm8  1/1     Running   0          18m

oc get gatewayclass
NAME    CONTROLLER                                   ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-controller   True       4d1h

You should also see a GatewayClass named nginx. This indicates the controller is ready to manage Gateway resources.

Step 5: Functional Check with Gateway API

To test traffic, we will use the standard Gateway API resources (Gateway and HTTPRoute).

Deploy a test application (cafe service). Ensure you have a backend service running; you can use a simple service for validation.

Create a Gateway. This resource opens the listener on the NGINX data plane:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      port: 80
      protocol: HTTP

Create an HTTPRoute. This binds the traffic to your backend service:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
    - name: cafe
  hostnames:
    - "cafe.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: coffee
          port: 80

Test connectivity: if you used Option B (Route), send a request to your OpenShift Route hostname; if you used Option A, send it to the LoadBalancer IP. A minimal request sketch follows.
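This is a sketch only. For the HTTPRoute above to match, the request must ultimately carry Host: cafe.example.com; how that maps onto a Route hostname depends on how you created the Route, so adjust to your environment.

# Option A (LoadBalancer): pin the HTTPRoute hostname to the external IP.
curl --resolve cafe.example.com:80:<EXTERNAL_IP> http://cafe.example.com/

# Option B (edge Route): request the Route hostname reported below.
oc get route ngf -n nginx-gateway
curl -k https://<ROUTE_HOST>/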
OpenShift 4.19 Compatibility

Meanwhile, it is vital to understand the under-the-hood constraints of OpenShift 4.19.

Gateway API version pinning: OpenShift 4.19 ships with Gateway API CRDs pinned to v1.2.1. While NGF 2.2 supports v1.3.0 features, it has been conformance-tested against v1.2.1 to ensure stability within OpenShift's version-locked environment.

oc get crd gateways.gateway.networking.k8s.io -o yaml | grep "gateway.networking.k8s.io/"
    gateway.networking.k8s.io/bundle-version: v1.2.1
    gateway.networking.k8s.io/channel: standard

However, looking ahead, future NGINX Gateway Fabric releases may rely on newer Gateway API specifications that are not natively supported by the pinned CRDs in OpenShift 4.19. If you anticipate running a newer NGF version that may not be compatible with the current OpenShift Gateway API version, please reach out to us to discuss your compatibility requirements.

Security Context Constraints (SCC): In previous manual deployments, you might have wrestled with NET_BIND_SERVICE capabilities or creating custom SCCs. The certified operator handles these permissions automatically, using UBI-based images that comply with Red Hat's security standards out of the box.

Next Steps: AI Inference

With NGF running, you are ready for advanced use cases. For AI inference, explore the Gateway API Inference Extension to route traffic to LLMs efficiently, optimizing GPU usage on Red Hat OpenShift AI. The certified NGINX Gateway Fabric Operator simplifies the operational burden, letting you focus on what matters: delivering secure, high-performance applications and AI workloads.

References:
NGINX Gateway Fabric Operator on Red Hat Catalog
F5 NGINX Gateway Fabric Certified for Red Hat OpenShift
NGINX Gateway Fabric Installation Docs

BIG-IP Next Edge Firewall CNF for Edge workloads
Introduction

The CNF architecture aligns with cloud-native principles by enabling horizontal scaling, ensuring that applications can expand seamlessly without compromising performance. It preserves the deterministic reliability essential for telecom environments, balancing scalability with the stringent demands of real-time processing. For more background on the value CNFs bring to an environment, see https://community.f5.com/kb/technicalarticles/from-virtual-to-cloud-native-infrastructure-evolution/342364

Telecom service providers use CNFs for performance optimization to:
- Enable efficient and secure processing of N6-LAN traffic at the edge to meet the stringent requirements of 5G networks.
- Optimize AI-RAN deployments with dynamic scaling and enhanced security, ensuring that AI workloads are processed efficiently and securely at the edge, improving overall network performance.
- Deploy advanced AI applications at the edge with the confidence of carrier-grade security and traffic management, ensuring real-time processing and analytics for a variety of edge use cases.

CNF Firewall Implementation Overview

Let's start with understanding how different CRs are enabled within a CNF implementation; this allows the CNF to achieve more optimized performance, CapEx, and OpEx. Compared with the traditional way of inserting services into Kubernetes, moving to a consolidated data plane approach saved 60% of the Kubernetes environment's performance.

The F5BigFwPolicy Custom Resource (CR) applies industry-standard firewall rules to the Traffic Management Microkernel (TMM), ensuring that only connections initiated by trusted clients are accepted. When a new F5BigFwPolicy CR configuration is applied, the firewall rules are first sent to the Application Firewall Management (AFM) Pod, where they are compiled into Binary Large Objects (BLOBs) to enhance processing performance. Once the firewall BLOB is compiled, it is sent to the TMM Proxy Pod, which begins inspecting and filtering network packets based on the defined rules.

Enabling AFM within BIG-IP Controller

Let's explore how to enable and configure the CNF firewall. Below is an overview of the steps needed to set up the environment, up to the CNF CR installation.

[Enabling the AFM] Enable the AFM CR within the BIG-IP Controller definition:

global:
  afm:
    enabled: true
  pccd:
    enabled: true
f5-afm:
  enabled: true
cert-orchestrator:
  enabled: true
afm:
  pccd:
    enabled: true
  image:
    repository: "local.registry.com"

[Configuration] Example firewall policy settings:

apiVersion: "k8s.f5net.com/v1"
kind: F5BigFwPolicy
metadata:
  name: "cnf-fw-policy"
  namespace: "cnf-gateway"
spec:
  rule:
    - name: allow-10-20-http
      action: "accept"
      logging: true
      servicePolicy: "service-policy1"
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:20:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "80"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-10-30-ftp
      action: "accept"
      logging: true
      ipProtocol: tcp
      source:
        addresses:
          - "2002::10:30:0:0/96"
        zones:
          - "zone1"
          - "zone2"
      destination:
        ports:
          - "20"
          - "21"
        zones:
          - "zone3"
          - "zone4"
    - name: allow-us-traffic
      action: "accept"
      logging: true
      source:
        geos:
          - "US:California"
      destination:
        geos:
          - "MX:Baja California"
          - "MX:Chihuahua"
    - name: drop-all
      action: "drop"
      logging: true
      ipProtocol: any
      source:
        addresses:
          - "::0/0"
          - "0.0.0.0/0"

A quick way to apply and confirm the policy is sketched below.
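The policy can be applied and checked with standard oc commands against the F5BigFwPolicy kind shown above. The file name is hypothetical, and the columns returned by oc get depend on the CRD definition.

# Apply the firewall policy CR and confirm it exists in the namespace.
lab$ oc apply -f cnf-fw-policy.yaml
lab$ oc get f5bigfwpolicy -n cnf-gateway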
apiVersion: "k8s.f5net.com/v1" kind: F5BigLogProfile metadata: name: "cnf-log-profile" namespace: "cnf-gateway" spec: name: "cnf-logs" firewall: enabled: true network: publisher: "cnf-hsl-pub" events: aclMatchAccept: true aclMatchDrop: true tcpEvents: true translationFields: true Verifying the CNF firewall settings can be done through the sidecar container kubectl exec -it deploy/f5-tmm -c debug -n cnf-gateway – bash tmctl -d blade fw_rule_stat context_type context_name ------------ ------------------------------------------ virtual cnf-gateway-cnf-fw-policy-SecureContext_vs rule_name micro_rules counter last_hit_time action ------------------------------------ ----------- ------- ------------- ------ allow-10-20-http-firewallpolicyrule 1 2 1638572860 2 allow-10-30-ftp-firewallpolicyrule 1 5 1638573270 2 Conclusion To conclude our article, we showed how CNFs with consolidated data planes help with optimizing CNF deployments. In this article we went through the overview of BIG-IP Next Edge Firewall CNF implementation, sample configuration and monitoring capabilities. More use cases to cover different use cases to be following. Related content F5BigFwPolicy BIG-IP Next Cloud-Native Network Functions (CNFs) CNF Home177Views3likes2CommentsHow I did it.....again "High-Performance S3 Load Balancing with F5 BIG-IP"
Introduction

Welcome back to the "How I did it" series! In the previous installment, we explored high-performance S3 load balancing of Dell ObjectScale with F5 BIG-IP. This follow-up builds on that foundation with BIG-IP v21.x's S3-focused profiles and how to apply them in the wild. We'll also put the external monitor to work, validating health with real PUT/GET/DELETE checks so your S3-compatible backends aren't just "up," they're truly dependable.

New S3 Profiles for the BIG-IP…..well kind of

A big part of why F5 BIG-IP excels is its advanced traffic profiles, like TCP and SSL/TLS. These profiles let you fine-tune connection behavior, optimizing throughput, reducing latency, and managing congestion, while enforcing strong encryption and protocol settings for secure, efficient data flow. Available with version 21.x, the BIG-IP now includes new S3-specific profiles (s3-tcp and s3-default-clientssl). These profiles are based on existing default parent profiles (tcp and clientssl, respectively) that have been customized, or "tuned," to optimize S3 traffic. Let's take a closer look.

Anatomy of a TCP Profile

The BIG-IP includes a number of pre-defined TCP profiles that define how the system manages TCP traffic for virtual servers, controlling aspects like connection setup, data transfer, congestion control, and buffer tuning. These profiles allow administrators to optimize performance for different network conditions by adjusting parameters such as the initial congestion window, retransmission timeout, and algorithms like Nagle's or Delayed ACK. The s3-tcp profile has been tweaked with respect to data transfer and congestion window sizes, as well as memory management, to optimize typical S3 traffic patterns (high-throughput data transfer, varying request sizes, large payloads, etc.).

Tweaking the Client SSL Profile for S3

Client SSL profiles on BIG-IP define how the system terminates and manages SSL/TLS sessions from clients at the virtual server. They specify critical parameters such as certificates, private keys, cipher suites, and supported protocol versions, enabling secure decryption for advanced traffic handling like HTTP optimization, security policies, and iRules. The s3-default-clientssl profile has been modified from the default client SSL profile to optimize SSL/TLS settings for high-throughput object storage traffic, ensuring better performance and compatibility with S3-specific requirements. A sketch of attaching both profiles to a virtual server follows.
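For illustration, here is a minimal tmsh sketch that attaches both S3 profiles to a virtual server fronting an S3 pool. The virtual server name, addresses, and pool are hypothetical; the profile names come from this article, and in practice you would layer your own certificate and key into a child clientssl profile rather than using the default directly.

# Hypothetical S3 pool and virtual server using the v21.x S3 profiles.
tmsh create ltm pool s3_pool members add { 10.10.20.11:443 10.10.20.12:443 }
tmsh create ltm virtual vs_s3 destination 10.10.10.50:443 ip-protocol tcp \
    pool s3_pool profiles add { s3-tcp s3-default-clientssl { context clientside } }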
Advanced S3-compatible health checking with EAV

Has anyone ever told you how cool BIG-IP Extended Application Verification (EAV), aka external monitors, are? Okay, I suppose "coolness" is subjective, but EAVs are objectively cool. Let me prove it to you. Health monitoring of backend S3-compatible servers typically involves making an HTTP GET request to either the exposed S3 ingest/egress API endpoint or a liveness probe: get a 200 and all's good. Wouldn't it be cool if you could verify a backend server's health by verifying it can actually perform the operations as intended? Fortunately, we can do just that using an EAV monitor. Therefore, based on the transitive property, EAVs are cool. Mic drop.

The bash script located at the bottom of the page performs health checks on S3-compatible storage by executing PUT, GET, and DELETE operations on a test object. The health check creates a temporary file with a timestamp, retrieves the file to verify read access, and removes the test file to clean up. If all three operations return the expected HTTP status codes, the node is marked up; otherwise, the node is marked down.

Installing and using the EAV health check

Import the monitor script
Save the bash script (.sh extension, located at the bottom of this page) locally and import the file onto the BIG-IP. Log in to the BIG-IP Configuration Utility and navigate to System > File Management > External Monitor Program File List > Import. Use the file selector to navigate to and select the newly created .sh file, provide a name for the file, and select 'Import'.

Create a new external monitor
Navigate to Local Traffic > Monitors > Create. Provide a name for the monitor. Select 'External' for the type, and select the previously uploaded file for the 'External Program'. The 'Interval' and 'Timeout' settings can be modified or left at the default as desired. In addition to the backend host and port, the monitor must pass three (3) additional variables to the backend:
- bucket - The name of an existing bucket where the monitor can place a small text file. During the health check, the monitor will create a file, request the file, and delete the file.
- access_key - An S3-compatible access key with permissions to perform the above operations on the specified bucket.
- secret_key - The corresponding S3-compatible secret key.
Select 'Finished' to create the monitor.

Associate the monitor with the pool
Navigate to Local Traffic > Pools > Pool List and select the relevant backend S3 pool. Under 'Health Monitors', select the newly created monitor and move it from 'Available' to 'Active'. Select 'Update' to save the configuration.

Additional Links
How I did it - "High-Performance S3 Load Balancing of Dell ObjectScale with F5 BIG-IP"
F5 BIG-IP v21.0 brings enhanced AI data delivery and ingestion for S3 workflows
Overview of BIG-IP EAV external monitors

EAV Bash Script

#!/bin/bash
################################################################################
# S3 Health Check Monitor for F5 BIG-IP (External Monitor - EAV)
################################################################################
#
# Description:
#   This script performs health checks on S3-compatible storage by
#   executing PUT, GET, and DELETE operations on a test object. It uses AWS
#   Signature Version 4 for authentication and is designed to run as a BIG-IP
#   External Application Verification (EAV) monitor.
#
# Usage:
#   This script is intended to be configured as an external monitor in BIG-IP.
#   BIG-IP automatically provides the first two arguments:
#     $1 - Pool member IP address (may be IPv6-mapped format: ::ffff:x.x.x.x)
#     $2 - Pool member port number
#
#   Additional arguments must be configured in the monitor's "Variables" field:
#     bucket     - S3 bucket name
#     access_key - Access key for authentication
#     secret_key - Secret key for authentication
#
# BIG-IP Monitor Configuration:
#   Type: External
#   External Program: /path/to/this/script.sh
#   Variables:
#     bucket="your-bucket-name"
#     access_key="your-access-key"
#     secret_key="your-secret-key"
#
# Health Check Logic:
#   1. PUT    - Creates a temporary health check file with timestamp
#   2. GET    - Retrieves the file to verify read access
#   3. DELETE - Removes the test file to clean up
#   Success: All three operations return expected HTTP status codes
#   Failure: Any operation fails or times out
#
# Exit Behavior:
#   - Prints "UP" to stdout if all checks pass (BIG-IP marks pool member up)
#   - Silent exit if any check fails (BIG-IP marks pool member down)
#
# Requirements:
#   - openssl (for SHA256 hashing and HMAC signing)
#   - curl (for HTTP requests)
#   - xxd (for hex encoding)
#   - Standard bash utilities (date, cut, sed, awk)
#
# Notes:
#   - Handles IPv6-mapped IPv4 addresses from BIG-IP (::ffff:x.x.x.x)
#   - Uses AWS Signature Version 4 authentication
#   - Logs activity to syslog (local0.notice)
#   - Creates temporary files that are automatically cleaned up
#
# Author: [Gregory Coward/F5]
# Version: 1.0
# Last Modified: 12/2025
#
################################################################################

# ===== PARAMETER CONFIGURATION =====
# BIG-IP automatically provides these
HOST="$1"                        # Pool member IP (may include ::ffff: prefix for IPv4)
PORT="$2"                        # Pool member port
BUCKET="${bucket}"               # S3 bucket name
ACCESS_KEY="${access_key}"       # S3 access key
SECRET_KEY="${secret_key}"       # S3 secret key
OBJECT="${6:-healthcheck.txt}"   # Test object name (default: healthcheck.txt)

# Strip IPv6-mapped IPv4 prefix if present (::ffff:10.1.1.1 -> 10.1.1.1)
# BIG-IP may pass IPv4 addresses in IPv6-mapped format
if [[ "$HOST" =~ ^::ffff: ]]; then
    HOST="${HOST#::ffff:}"
fi

# ===== S3/AWS CONFIGURATION =====
ENDPOINT="http://$HOST:$PORT"    # S3 endpoint URL
SERVICE="s3"                     # AWS service identifier for signature
REGION=""                        # AWS region (leave empty for S3 compatible such as MinIO/Dell)

# ===== TEMPORARY FILE SETUP =====
# Create temporary file for health check upload
TMP_FILE=$(mktemp)
printf "Health check at %s\n" "$(date)" > "$TMP_FILE"

# Ensure temp file is deleted on script exit (success or failure)
trap "rm -f $TMP_FILE" EXIT

# ===== CRYPTOGRAPHIC HELPER FUNCTIONS =====

# Calculate SHA256 hash and return as hex string
# Input: stdin
# Output: hex-encoded SHA256 hash
hex_of_sha256() {
    openssl dgst -sha256 -hex | sed 's/^.* //'
}

# Sign data using HMAC-SHA256 and return hex signature
# Args: $1=hex-encoded key, $2=data to sign
# Output: hex-encoded signature
sign_hmac_sha256_hex() {
    local key_hex="$1"
    local data="$2"
    printf "%s" "$data" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$key_hex" | awk '{print $2}'
}

# Sign data using HMAC-SHA256 and return binary as hex
# Args: $1=hex-encoded key, $2=data to sign
# Output: hex-encoded binary signature (for key derivation chain)
sign_hmac_sha256_binary() {
    local key_hex="$1"
    local data="$2"
    printf "%s" "$data" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$key_hex" -binary | xxd -p -c 256
}

# ===== AWS SIGNATURE VERSION 4 IMPLEMENTATION =====

# Generate AWS Signature Version 4 for S3 requests
# Args:
#   $1 - HTTP method (PUT, GET, DELETE, etc.)
#   $2 - URI path (e.g., /bucket/object)
#   $3 - Payload hash (SHA256 of request body, or empty hash for GET/DELETE)
#   $4 - Content-Type header value (empty string if not applicable)
# Output: pipe-delimited string "Authorization|Timestamp|Host"
aws_sig_v4() {
    local method="$1"
    local uri="$2"
    local payload_hash="$3"
    local content_type="$4"

    # Generate timestamp in AWS format (YYYYMMDDTHHMMSSZ)
    local timestamp=$(date -u +"%Y%m%dT%H%M%SZ" 2>/dev/null || gdate -u +"%Y%m%dT%H%M%SZ")
    local datestamp=$(date -u +"%Y%m%d")

    # Build host header (include port if non-standard)
    local host_header="$HOST"
    if [ "$PORT" != "80" ] && [ "$PORT" != "443" ]; then
        host_header="$HOST:$PORT"
    fi

    # Build canonical headers and signed headers list
    local canonical_headers=""
    local signed_headers=""

    # Include Content-Type if provided (for PUT requests)
    if [ -n "$content_type" ]; then
        canonical_headers="content-type:${content_type}"$'\n'
        signed_headers="content-type;"
    fi

    # Add required headers (must be in alphabetical order)
    canonical_headers="${canonical_headers}host:${host_header}"$'\n'
    canonical_headers="${canonical_headers}x-amz-content-sha256:${payload_hash}"$'\n'
    canonical_headers="${canonical_headers}x-amz-date:${timestamp}"
    signed_headers="${signed_headers}host;x-amz-content-sha256;x-amz-date"

    # Build canonical request (AWS Signature V4 format)
    # Format: METHOD\nURI\nQUERY_STRING\nHEADERS\n\nSIGNED_HEADERS\nPAYLOAD_HASH
    local canonical_request="${method}"$'\n'
    canonical_request+="${uri}"$'\n\n'   # Empty query string (double newline)
    canonical_request+="${canonical_headers}"$'\n\n'
    canonical_request+="${signed_headers}"$'\n'
    canonical_request+="${payload_hash}"

    # Hash the canonical request
    local canonical_hash
    canonical_hash=$(printf "%s" "$canonical_request" | hex_of_sha256)

    # Build string to sign
    local algorithm="AWS4-HMAC-SHA256"
    local credential_scope="$datestamp/$REGION/$SERVICE/aws4_request"
    local string_to_sign="${algorithm}"$'\n'
    string_to_sign+="${timestamp}"$'\n'
    string_to_sign+="${credential_scope}"$'\n'
    string_to_sign+="${canonical_hash}"

    # Derive signing key using HMAC-SHA256 key derivation chain
    #   kSecret  = HMAC("AWS4" + secret_key, datestamp)
    #   kRegion  = HMAC(kSecret, region)
    #   kService = HMAC(kRegion, service)
    #   kSigning = HMAC(kService, "aws4_request")
    local k_secret
    k_secret=$(printf "AWS4%s" "$SECRET_KEY" | xxd -p -c 256)
    local k_date
    k_date=$(sign_hmac_sha256_binary "$k_secret" "$datestamp")
    local k_region
    k_region=$(sign_hmac_sha256_binary "$k_date" "$REGION")
    local k_service
    k_service=$(sign_hmac_sha256_binary "$k_region" "$SERVICE")
    local k_signing
    k_signing=$(sign_hmac_sha256_binary "$k_service" "aws4_request")

    # Calculate final signature
    local signature
    signature=$(sign_hmac_sha256_hex "$k_signing" "$string_to_sign")

    # Return authorization header, timestamp, and host header (pipe-delimited)
    printf "%s|%s|%s" \
        "${algorithm} Credential=${ACCESS_KEY}/${credential_scope}, SignedHeaders=${signed_headers}, Signature=${signature}" \
        "$timestamp" \
        "$host_header"
}

# ===== HTTP REQUEST FUNCTION =====

# Execute HTTP request using curl with AWS Signature V4 authentication
# Args:
#   $1 - HTTP method (PUT, GET, DELETE)
#   $2 - Full URL
#   $3 - Authorization header value
#   $4 - Timestamp (x-amz-date header)
#   $5 - Host header value
#   $6 - Payload hash (x-amz-content-sha256 header)
#   $7 - Content-Type (optional, empty for GET/DELETE)
#   $8 - Data file path (optional, for PUT with body)
# Output: HTTP status code (e.g., 200, 404, 500)
do_request() {
    local method="$1"
    local url="$2"
    local auth="$3"
    local timestamp="$4"
    local host_header="$5"
    local payload_hash="$6"
    local content_type="$7"
    local data_file="$8"

    # Build curl command with required headers
    local cmd="curl -s -o /dev/null --connect-timeout 5 --write-out %{http_code} \"$url\""
    cmd="$cmd -X $method"
    cmd="$cmd -H \"Host: $host_header\""
    cmd="$cmd -H \"x-amz-date: $timestamp\""
    cmd="$cmd -H \"x-amz-content-sha256: $payload_hash\""

    # Add optional headers
    [ -n "$content_type" ] && cmd="$cmd -H \"Content-Type: $content_type\""
    cmd="$cmd -H \"Authorization: $auth\""
    [ -n "$data_file" ] && cmd="$cmd --data-binary @\"$data_file\""

    # Execute request and return HTTP status code
    eval "$cmd"
}

# ===== MAIN HEALTH CHECK LOGIC =====

# ===== STEP 1: PUT (Upload Test Object) =====
# Calculate SHA256 hash of the temp file content
UPLOAD_HASH=$(openssl dgst -sha256 -binary "$TMP_FILE" | xxd -p -c 256)
CONTENT_TYPE="application/octet-stream"

# Generate AWS Signature V4 for PUT request
SIGN_OUTPUT=$(aws_sig_v4 "PUT" "/$BUCKET/$OBJECT" "$UPLOAD_HASH" "$CONTENT_TYPE")
AUTH_PUT=$(cut -d'|' -f1 <<< "$SIGN_OUTPUT")
DATE_PUT=$(cut -d'|' -f2 <<< "$SIGN_OUTPUT")
HOST_PUT=$(cut -d'|' -f3 <<< "$SIGN_OUTPUT")

# Execute PUT request (expect 200 OK)
PUT_STATUS=$(do_request "PUT" "$ENDPOINT/$BUCKET/$OBJECT" "$AUTH_PUT" "$DATE_PUT" "$HOST_PUT" "$UPLOAD_HASH" "$CONTENT_TYPE" "$TMP_FILE")

# ===== STEP 2: GET (Download Test Object) =====
# SHA256 hash of empty body (for GET requests with no payload)
EMPTY_HASH="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

# Generate AWS Signature V4 for GET request
SIGN_OUTPUT=$(aws_sig_v4 "GET" "/$BUCKET/$OBJECT" "$EMPTY_HASH" "")
AUTH_GET=$(cut -d'|' -f1 <<< "$SIGN_OUTPUT")
DATE_GET=$(cut -d'|' -f2 <<< "$SIGN_OUTPUT")
HOST_GET=$(cut -d'|' -f3 <<< "$SIGN_OUTPUT")

# Execute GET request (expect 200 OK)
GET_STATUS=$(do_request "GET" "$ENDPOINT/$BUCKET/$OBJECT" "$AUTH_GET" "$DATE_GET" "$HOST_GET" "$EMPTY_HASH" "" "")

# ===== STEP 3: DELETE (Remove Test Object) =====
# Generate AWS Signature V4 for DELETE request
SIGN_OUTPUT=$(aws_sig_v4 "DELETE" "/$BUCKET/$OBJECT" "$EMPTY_HASH" "")
AUTH_DEL=$(cut -d'|' -f1 <<< "$SIGN_OUTPUT")
DATE_DEL=$(cut -d'|' -f2 <<< "$SIGN_OUTPUT")
HOST_DEL=$(cut -d'|' -f3 <<< "$SIGN_OUTPUT")

# Execute DELETE request (expect 204 No Content)
DEL_STATUS=$(do_request "DELETE" "$ENDPOINT/$BUCKET/$OBJECT" "$AUTH_DEL" "$DATE_DEL" "$HOST_DEL" "$EMPTY_HASH" "" "")

# ===== LOG RESULTS =====
# Log all operation results for troubleshooting
#logger -p local0.notice "S3 Monitor: PUT=$PUT_STATUS GET=$GET_STATUS DEL=$DEL_STATUS"

# ===== EVALUATE HEALTH CHECK RESULT =====
# BIG-IP considers the pool member "UP" only if this script prints "UP" to stdout
# Check if all operations returned expected status codes:
#   PUT:    200 (OK)
#   GET:    200 (OK)
#   DELETE: 204 (No Content)
if [ "$PUT_STATUS" -eq 200 ] && [ "$GET_STATUS" -eq 200 ] && [ "$DEL_STATUS" -eq 204 ]; then
    #logger -p local0.notice "S3 Monitor: UP"
    echo "UP"
fi

# If any check fails, script exits silently (no "UP" output)
# BIG-IP will mark the pool member as DOWN
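Before attaching the monitor to a pool, it can help to smoke-test the script from a shell. The script reads bucket, access_key, and secret_key from its environment (as BIG-IP does via the monitor's Variables field) and takes the member IP and port as positional arguments. The values below are placeholders.

# Hypothetical smoke test against a single S3 backend (values are placeholders).
chmod +x s3-monitor.sh
bucket="health-bucket" access_key="AKIA..." secret_key="..." \
    ./s3-monitor.sh 10.10.20.11 80
# Expected output on success: UP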