Equinix and F5 Distributed Cloud Services: Business Partner Application Exchanges
As organizations adopt hybrid and multicloud architectures, one of the challenges they face is how to securely connect their partners to specific applications while maintaining control over cost and limiting complexity. Traditional private connectivity models tend to struggle with complex setups, slow onboarding, and rigid policies that make it hard to adapt to changing business needs. F5 Distributed Cloud Services on Equinix Network Edge provides a solution that simplifies the partner connectivity process, enhances security with integrated WAF and API protection, and enables consistent policy enforcement across hybrid and multicloud environments. This integration allows businesses to modernize their connectivity strategies, ensuring faster access to applications while maintaining robust security and compliance.

Key Benefits

The benefits of using Distributed Cloud Services with Equinix Network Edge include:

• Seamless Delivery: Deploy apps close to partners for faster access.
• API & App Security: Protect data with integrated security features.
• Hybrid Cloud Support: Enforce consistent policies in multi-cloud setups.
• Compliance Readiness: Meet data protection regulations with built-in security features.
• Proven Integration: F5 + Equinix connectivity is optimized for performance and security.

Before: Traditional Private Connectivity Challenges

Many organizations still rely on traditional private connectivity models that are complex, rigid, and difficult to scale. In a traditional architecture using Equinix, setting up infrastructure is complex and time-consuming. For every connection, an engineer must manually configure circuits through Equinix Fabric, set up BGP routing, apply load balancing, and define firewall rules. These steps are repeated for each partner or application, which adds significant overhead and slows down the onboarding process.

Each DMZ is managed separately with its own set of WAFs, routers, firewalls, and load balancers, which makes the environment harder to maintain and scale. If something changes, such as moving an app to a different region or giving a new partner access, it often requires redoing the configuration from scratch. This rigid approach limits how quickly a business can respond to new needs.

Manual setups also increase the risk of mistakes. Missing or misconfigured firewall rules can accidentally expose sensitive applications, creating security and compliance risks. Overall, this traditional model is slow, inflexible, and difficult to manage as environments grow and change.

After: F5 Distributed Cloud Services with Equinix

Deploying F5 Distributed Cloud Customer Edge (CE) software on Equinix Network Edge addresses these pain points with a modern, simplified model, enabling the creation of secure business partner app exchanges. By integrating Distributed Cloud Services with Equinix, connecting partners to internal applications is faster and simpler. Instead of manually configuring each connection, Distributed Cloud Services automates the process through a centralized management console.

Deploying a CE is straightforward and can be done in minutes. From the Distributed Cloud Console, open "Multi-Cloud Network Connect" and create a "Secure Mesh Site" where you can select Equinix as a Provider. Next, open the Equinix Console and deploy the CE image. This can be done through the Equinix Marketplace, where you can select F5 Distributed Cloud Services and deploy it to your desired location.
A CE can replace the need for multiple components like routers, firewalls, and load balancers. It handles BGP routing, traffic inspection through a built-in WAF, and load balancing, all managed through a single web interface. In this case, the CE connects directly to the Arcadia application in the customer's data center using at least two IPsec tunnels. BGP peering is quickly established with partner environments, allowing dynamic route exchange without manual setup of static routes. Adding a new partner is as simple as configuring another BGP session and applying the correct policy from the central Distributed Cloud Console.

Instead of opening up large network subnets, security is enforced at Layer 7, and this app-aware connectivity is inherently zero trust. Each partner only sees and connects to the exact application they're supposed to, without accessing anything else. Policies are reusable and consistent, so they can be applied across multiple partners with no duplication.

The built-in observability gives real-time visibility into traffic and security events. DevOps, NetOps, and SecOps teams can monitor everything from the Distributed Cloud Console, reducing troubleshooting time and improving incident response. This setup avoids the delays and complexity of traditional connectivity methods, while making the entire process more secure and easier to operate.

Simplified Partner Onboarding with Segments

The integration of F5 and Equinix allows for simplified partner onboarding using Network Segments. This approach enables organizations to create logical groupings of partners, each with its own set of access rules and policies, all managed centrally.

With Distributed Cloud Services and Equinix, onboarding multiple partners is fast, secure, and easy to manage. Instead of creating separate configurations for each partner, a single centralized service policy is used to control access. Different partner groups can be assigned to segments with specific rules, which are all managed from the Distributed Cloud Console. This means one unified policy can control access across many Network Segments, reducing complexity and speeding up the onboarding process.

To configure a Segment, you can simply attach an interface to a CE and assign it to a specific segment. Each segment can have its own set of policies, such as which applications are accessible, what security measures are in place, and how traffic is routed. Each partner tier gets access only to the applications allowed by the policy. In this example, Gold partners might get access to more services than Silver partners.

Security policies are enforced at Layer 7, so partners interact only with the allowed applications. There is no low-level network access and no direct IP-level reachability. WAF, load balancing, and API protection are also controlled centrally, ensuring consistent security for all partners. BGP routing through Equinix Fabric makes it simple to connect multiple partner networks quickly, with minimal configuration steps. This approach scales much better than traditional setups and keeps the environment organized, secure, and transparent.

Scalable and Secure Connectivity

F5 Distributed Cloud Services makes it simple to expand application connectivity and security across multiple regions using Equinix Network Edge. CE nodes can be quickly deployed at any Equinix location from the Equinix Marketplace.
This allows teams to extend app delivery closer to end users and partners, reducing latency and improving performance without building new infrastructure from scratch.

Distributed Cloud Services allows you to organize your CE nodes into a "Virtual Site". This Virtual Site can span multiple Equinix locations, enabling you to manage all your CE nodes as a single entity. When you need to add a new region, you can deploy a new CE node in that location and all configurations are automatically applied from the associated Virtual Site.

Once a new CE is deployed, existing application and security policies can be automatically replicated to the new site. This standardized approach ensures that all regions follow the same configurations for routing, load balancing, WAF protection, and Layer 7 access control. Policies for different partner tiers are centrally managed and applied consistently across all locations.

Built-in observability gives full visibility into traffic flows, segment performance, and app access from every site, all from the Distributed Cloud Console. Operations teams can monitor and troubleshoot with a unified view, without needing to log into each region separately. This centralized control greatly reduces operational overhead and allows the business to scale out quickly while maintaining security and compliance.

Service Policy Management

When scaling out to multiple regions, centralized management of service policies becomes crucial. Distributed Cloud Services allows you to define service policies that can be applied across all CE nodes in a Virtual Site. This means you can create a single policy that governs how applications are accessed, secured, and monitored, regardless of where they are deployed.

For example, you can define a service policy that adds a specific HTTP header to all incoming requests for a particular segment, which can be useful for tracking, logging, or enforcing security measures. Another example is setting up a policy that rate-limits API calls from partners to prevent abuse. This policy can be applied across all CE nodes in the Virtual Site, ensuring that all partners are subject to the same rate limits without needing to configure each node individually. The policy works at Layer 7, meaning it passes only HTTP traffic and blocks any non-HTTP traffic. This ensures that only legitimate web requests are processed, enhancing security and reducing the risk of attacks.

Distributed Cloud Services provides different types of dashboards to monitor the performance and security of your applications across all regions, allowing you to track security incidents, such as WAF alerts or API abuse, from a single dashboard. The Distributed Cloud Console provides detailed logs with information about each request, including the source IP, HTTP method, response status, and any applied policies. If a request is blocked by a WAF or security policy, the logs show the reason for the block, making it easier to troubleshoot issues and maintain compliance.

The centralized management of service policies and observability features in Distributed Cloud Services allows organizations to save costs and time when managing their hybrid and multi-cloud environments. By applying consistent policies across all regions, businesses can reduce the need for manual configurations and minimize the risk of misconfigurations. This not only enhances security but also simplifies operations, allowing teams to focus on delivering value rather than managing complex network setups.
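To sanity-check a rate-limiting service policy like the one described above, you can replay a short burst of requests from a partner network and watch the response codes once the limit is exceeded. The following is a minimal sketch, not an official procedure: the hostname and path are hypothetical placeholders, and the exact response for over-limit requests (commonly HTTP 429, or a block page) depends on the action configured in your policy.

#!/usr/bin/env bash
# Fire a burst of requests and print only the HTTP status codes.
# partner-api.example.com and /api/orders are illustrative names only.
HOST="partner-api.example.com"
for i in $(seq 1 20); do
  code=$(curl -s -o /dev/null -w '%{http_code}' "https://${HOST}/api/orders")
  echo "request ${i}: HTTP ${code}"
done

Because the policy is attached at the Virtual Site level, running the same burst against CEs in different regions should trip the same limit, which is a quick way to confirm the policy is being enforced consistently.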
Offload Services to Equinix Network Edge

For organizations that require edge compute capabilities, Distributed Cloud Services provides a Virtual Kubernetes Cluster (vK8s) that can be deployed on Equinix Network Edge in combination with F5 Distributed Cloud Regional Edge (RE) nodes. This solution allows you to run containerized applications in a distributed manner, close to your partners and end users, to reduce latency. For example, you can deploy frontend services closer to your partners while your backend services remain in your data center or in a cloud provider. The more services you move to the edge, the more you can benefit from reduced latency and improved performance.

You can use vK8s like a regular Kubernetes cluster, deploying applications, managing resources, and scaling as needed. The F5 Distributed Cloud Console provides a CLI and web interface to manage your vK8s clusters, making it easy to deploy and manage applications across multiple regions.

Demos

Example use-case part 1 - F5 Distributed Cloud & Equinix: Business Partner App Exchange for Edge Services (Video link TBD)
Example use-case part 2 - Go beyond the network with Zero Trust Application Access from F5 and Equinix (Video link TBD)
Standalone Setup, Configuration, Walkthrough, & Tutorial

Conclusion

F5 Distributed Cloud on Equinix Network Edge transforms how organizations connect partners and applications. With its centralized management, automated connectivity, and built-in security features, it becomes a solid foundation for modern hybrid and multi-cloud environments. This integration simplifies partner onboarding, enhances security, and enables consistent policy enforcement across regions. Learn more about how F5 Distributed Cloud Services and Equinix can help your organization increase agility while reducing complexity and avoiding the pitfalls of traditional private connectivity models.

Additional Resources

F5 & Equinix Partnership: https://www.f5.com/partners/technology-alliances/equinix
F5 Community Technical Article: Building a secure Application DMZ
F5 Blogs:
F5 and Equinix Simplify Secure Deployment of Distributed Apps
F5 and Equinix unite to simplify secure multicloud application delivery
Extranets aren't dead; they just need an upgrade
Multicloud chaos ends at the Equinix Edge with F5 Distributed Cloud CE
Mitigating OWASP Web Application Risk: Insecure Design using F5 XC platform
Overview:

This article is the last part in a series of articles on the mitigation of OWASP Web Application vulnerabilities using the F5 Distributed Cloud platform (F5 XC).

Introduction to Insecure Design:

In an effort to speed up the development cycle, some phases may be reduced in scope, which opens the door to many vulnerabilities. To highlight the risks that are overlooked between the design and deployment phases, a new category, "Insecure Design," was added to the OWASP Web Application Top 10 2021 list. Insecure Design represents weaknesses, i.e., the lack of security controls integrated into the website or application throughout the development cycle. If there are no security controls to defend against specific attacks, an insecure design cannot be fixed by a perfect implementation; at the same time, a secure design can still have an implementation flaw that leads to vulnerabilities that may be exploited. As a result, attackers get wide scope to leverage the vulnerabilities created by insecure design principles. Here are some of the scenarios that fall under insecure design vulnerabilities:

• Credential leak
• Authentication bypass
• Injection vulnerabilities
• Scalper bots, etc.

In this article we will see how the F5 XC platform helps to mitigate the scalper bot scenario.

What is a Scalper Bot:

In the e-commerce industry, scalping is a practice that leads to denial of inventory. Online scalping uses bots, i.e., automated scripts that check product availability periodically (every few seconds), add the items to the cart, and check out the products in bulk. As a result, genuine users do not get a fair chance to grab the deals or discounts offered by the website or company. Attackers may also use these scalper bots to later abandon the items added to the cart, causing losses to the business as well.

Demonstration:

In this demonstration, we are using an open-source application, "Evershop," which provides an end-to-end online shopping cart. It also provides an Admin page for adding or deleting items from the website, while from the customer site users can log in and check out items based on availability.

Admin Page:
Customer Page:

Scalper bot with automation script:

The above Selenium script logs in to the e-commerce application as a customer, checks product availability, and checks out the items by adding them to the cart (a simplified, curl-based approximation of this behavior is sketched at the end of this walkthrough). To mitigate this problem, F5 XC can identify and block these bots based on the bot defense configuration applied to the HTTP load balancer. Here is the procedure to configure bot defense with the mitigation action "block" on the load balancer and associate the backend application, Evershop, as the origin pool:

1. Create an origin pool. Refer to pool-creation for more info.
2. Create an HTTP load balancer (LB) and associate the above origin pool with it. Refer to LB-creation for more info.
3. Configure bot defense on the load balancer and add the policy with the mitigation action set to "block".
4. Click on "Save and Exit" to save the load balancer configuration.
5. Run the automation script, providing the LB domain details, to exploit the items in the application.
6. Validate product availability for a genuine user manually.
7. Monitor the logs through F5 XC: navigate to WAAP --> Apps & APIs --> Security Dashboard, select your LB, and click on the "Security Event" tab.
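The Selenium script itself appears only as a screenshot above, but its behavior can be approximated with a much simpler sketch. The shell loop below polls a product, then fires add-to-cart and checkout requests in quick succession, which is exactly the repetitive, scripted pattern that bot defense fingerprints and blocks. The hostname, endpoint paths, and payload are hypothetical placeholders for illustration, not EverShop's actual API routes.

#!/usr/bin/env bash
# Crude approximation of a scalper bot: poll availability, add to cart,
# and attempt checkout in a tight loop. Hostname and paths are illustrative only.
SHOP="https://shop.example.com"
for i in $(seq 1 50); do
  # 1. Check product availability (hypothetical endpoint)
  curl -s -o /dev/null "${SHOP}/api/products/limited-edition-sneaker"
  # 2. Add the item to the cart (hypothetical endpoint and payload)
  curl -s -o /dev/null -X POST "${SHOP}/api/cart/items" \
       -H 'Content-Type: application/json' \
       -d '{"sku":"SNEAKER-001","qty":5}'
  # 3. Attempt checkout (hypothetical endpoint)
  curl -s -o /dev/null -w "attempt ${i}: HTTP %{http_code}\n" \
       -X POST "${SHOP}/api/checkout"
  sleep 1
done

With the bot defense policy set to block, this kind of traffic is rejected and surfaces as bot events in the security dashboard, while a human shopper using a browser continues to check out normally.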
Conclusion:

As seen in the demonstration, F5 Distributed Cloud WAAP (Web Application and API Protection) detected the scalpers using the bot defense configuration applied on the load balancer and mitigated the scalper bot exploits. Besides "block", it also provides the mitigation actions "allow" and "redirect". Please refer to the link for more info.

Reference links:

OWASP Top 10 - 2021
Overview of OWASP Web Application Top 10 2021
F5 Distributed Cloud Services
F5 Distributed Cloud Platform
Authentication Bypass
Injection vulnerabilities

App Delivery & Security for Hybrid Environments using F5 Distributed Cloud
As enterprises modernize and expand their digital services, they increasingly deploy multiple instances of the same applications across diverse infrastructure environments—such as VMware, OpenShift, and Nutanix—to support distributed teams, regional data sovereignty, redundancy, or environment-specific compliance needs. These application instances often integrate into service chains that span across clouds and data centers, introducing both scale and operational complexity. F5 Distributed Cloud provides a unified solution for secure, consistent application delivery and security across hybrid and multi-cloud environments. It enables organizations to add workloads seamlessly—whether for scaling, redundancy, or localization—without sacrificing visibility, security, or performance.

Simplifying and Securing Network Segmentation with F5 Distributed Cloud and Nutanix Flow
Introduction

Enterprises often separate environments, such as development and production, to improve efficiency, reduce risk, and maintain compliance. A critical enabler of this separation is network segmentation, which isolates networks into smaller, secured segments, strengthening security, optimizing performance, and supporting regulatory standards. In this article, we explore the integration between Nutanix Flow and F5 Distributed Cloud, showcasing how F5 and Nutanix collaborate to simplify and secure network segmentation across diverse environments: on-premises, remote, and hybrid multicloud.

Integration Overview

At the heart of this integration is the capability to deploy an F5 Distributed Cloud Customer Edge (CE) inside a Nutanix Flow VPC, establish BGP peering with the Nutanix Flow BGP Gateway, and inject CE-advertised BGP routes into the VPC routing table. This architecture provides full control over application delivery and security within the VPC. It enables selective advertisement of HTTP load balancers (LBs) or VIPs to designated VPCs, ensuring secure and efficient connectivity.

By leveraging F5 Distributed Cloud to segment and extend networks to remote locations, whether on-premises or in the public cloud, combined with Nutanix Flow for microsegmentation within VPCs, enterprises achieve comprehensive end-to-end security. This approach enforces a consistent security posture while reducing complexity across diverse infrastructures. In our previous article (click here), we explored application delivery and security. Here, we focus on network segmentation and how this integration simplifies connectivity across environments.

Demo Walkthrough

The demo consists of two parts:

1. Extending a local network segment from a Nutanix Flow VPC to a remote site using F5 Distributed Cloud.
2. Applying microsegmentation within the network segment using Nutanix Flow Security Next-Gen.

San Jose (SJ) serves as our local site, and the demo environment dev3 is a Nutanix Flow VPC with an F5 Distributed Cloud Customer Edge (CE) deployed inside.

*Note: The SJ CE is named jy-nutanix-overlay-dev3 in the F5 Distributed Cloud Console and xc-ce-dev3 in Nutanix Prism Central.

On the F5 Distributed Cloud Console, we created a network segment named jy-nutanix-sjc-nyc-segment and assigned it specifically to the subnet 192.170.84.0/24. eBGP peering is ESTABLISHED between the CE and the Nutanix Flow BGP Gateway in this segment.

At the remote site in NYC, a CE named jy-nutanix-nyc is deployed with a local subnet of 192.168.60.0/24. To extend jy-nutanix-sjc-nyc-segment from SJ to NYC, simply assign the segment jy-nutanix-sjc-nyc-segment to the NYC CE local subnet 192.168.60.0/24 in the F5 Distributed Cloud Console. Effortlessly and in no time, the segment jy-nutanix-sjc-nyc-segment is now extended across environments from SJ to NYC.

Checking the CE routing table, we can see that the local routes originated by the CEs are being exchanged among them. At the local site SJ, the SJ CE jy-nutanix-overlay-dev3 advertises the remote route originating from the NYC CE jy-nutanix-nyc to the Nutanix Flow BGP Gateway via BGP, and installs the route in the dev3 routing table.

SJ VMs can now reach NYC VMs and vice versa, while continuing to use their Nutanix Flow VPC logical router as the default gateway. To enforce granular security within the segment, Nutanix Flow Security Next-Gen provides microsegmentation.
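As a quick sanity check of the extended segment (before any microsegmentation policies are applied), cross-site reachability can be verified from a VM on either side. A minimal sketch, assuming Linux guests with standard networking tools, the SJ subnet 192.170.84.0/24 and NYC subnet 192.168.60.0/24 shown above, a hypothetical NYC VM address of 192.168.60.25, and ICMP permitted end to end:

# Run from a VM in the SJ dev3 VPC (192.170.84.0/24).
# 192.168.60.25 is a hypothetical NYC VM address on the extended segment.
ip route get 192.168.60.25      # should resolve via the VM's default gateway (the Flow VPC logical router)
ping -c 4 192.168.60.25         # basic reachability across the extended segment
traceroute -n 192.168.60.25     # the path should leave via the Flow gateway toward the CEs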
Together, F5 Distributed Cloud and Nutanix Flow Security Next-Gen deliver a cohesive solution: F5 Distributed Cloud seamlessly extends network segments across environments, while Nutanix Flow Security Next-Gen ensures fine-grained security controls within those segments. Our demo extends a network segment between two data centers, but the same approach can also be applied between on-premises and public cloud environments, delivering flexibility across hybrid multicloud environments.

Conclusion

F5 Distributed Cloud simplifies network segmentation across hybrid and multi-cloud environments, making it both secure and effortless. By seamlessly extending network segments across any environment, F5 removes the complexity traditionally associated with connecting diverse infrastructures. Combined with Nutanix Flow Security Next-Gen for microsegmentation within each segment, this integration delivers end-to-end protection and consistent policy enforcement. Together, F5 and Nutanix help enterprises reduce operational overhead, maintain compliance, and strengthen security, while enabling agility and scalability across all environments.

This integration is coming soon in CY2026. If you're interested in early access, please contact your F5 representative.

Reference URLs

https://www.f5.com/products/distributed-cloud-services
https://www.nutanix.com/products/flow
Using AWS CloudHSM with F5 BIG-IP
With the release of TMOS version 17.5.1, BIG-IP now supports the latest AWS CloudHSM hardware security module (HSM) type, hsm2m.medium, and the latest AWS CloudHSM Client SDK, version 5. This article explains how to install and configure AWS CloudHSM Client SDK 5 on BIG-IP 17.5.1.

Getting Started with the Certified F5 NGINX Gateway Fabric Operator on Red Hat OpenShift
As enterprises modernize their Kubernetes strategies, the shift from standard Ingress Controllers to the Kubernetes Gateway API is redefining how we manage traffic. For years, the F5 NGINX Ingress Controller has been a foundational component in OpenShift environments. With the certification of F5 NGINX Gateway Fabric (NGF) 2.2 for Red Hat OpenShift, that legacy enters its next chapter.

This new certified operator brings the high-performance NGINX data plane into the standardized, role-oriented Gateway API model, with full integration into OpenShift Operator Lifecycle Manager (OLM). Whether you're a platform engineer managing cluster ingress or a developer routing traffic to microservices, NGF on OpenShift 4.19+ delivers a unified, secure, and fully supported traffic fabric. In this guide, we walk through installing the operator, configuring the NginxGatewayFabric resource, and addressing OpenShift-specific networking patterns such as NodePort + Route.

Why NGINX Gateway Fabric on OpenShift?

While Red Hat OpenShift 4.19+ includes native support for the Gateway API (v1.2.1), integrating NGF adds critical enterprise capabilities:

✔ Certified & OpenShift-Ready: The operator is fully validated by Red Hat, ensuring UBI-compliant images and compatibility with OpenShift's strict Security Context Constraints (SCCs).
✔ High Performance, Low Complexity: NGF delivers the core benefits long associated with NGINX: efficiency, simplicity, and predictable performance.
✔ Advanced Traffic Capabilities: Capabilities like regular expression path matching and support for ExternalName services allow for complex, hybrid-cloud traffic patterns.
✔ AI/ML Readiness: NGF 2.2 supports the Gateway API Inference Extension, enabling inference-aware routing for GenAI and LLM workloads on platforms like Red Hat OpenShift AI.

Prerequisites

Before we begin, ensure you have:

• Cluster Administrator access to an OpenShift cluster (version 4.19 or later is recommended for Gateway API GA support).
• Access to the OpenShift Console and the oc CLI.
• Ability to pull images from ghcr.io or your internal mirror.

Step 1: Installing the Operator from OperatorHub

We leverage the Operator Lifecycle Manager (OLM) for a "point-and-click" installation that handles lifecycle management and upgrades.

1. Log into the OpenShift Web Console as an administrator.
2. Navigate to Operators > OperatorHub.
3. Search for NGINX Gateway Fabric in the search box.
4. Select the NGINX Gateway Fabric Operator card and click Install.
5. Accept the default installation mode (All namespaces) or select a specific namespace (e.g. nginx-gateway), and click Install.
6. Wait until the status shows Succeeded.

Once installed, the operator will manage the NGF lifecycle automatically.

Step 2: Configuring the NginxGatewayFabric Resource

Unlike the Ingress Controller, which used NginxIngressController resources, NGF uses the NginxGatewayFabric Custom Resource (CR) to configure the control plane and data plane.

1. In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select Create NginxGatewayFabric.
3. Select YAML view to configure the deployment specifics.
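Before moving on to service exposure, it can be worth confirming from the CLI that the operator and the relevant CRDs are in place. A quick, optional check, assuming the operator was installed into (or is visible from) the nginx-gateway namespace; adjust -n to match your install mode:

# Confirm the operator's ClusterServiceVersion reports Succeeded
oc get csv -n nginx-gateway

# Confirm the Gateway API and NGF CRD groups are registered
oc get crd | grep -E 'gateway.networking.k8s.io|gateway.nginx.org'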
Step 3: Exposing the NGINX Data Plane Service

NGF uses a Kubernetes Service to expose its data plane. Before the data plane launches, we must tell the Controller how to expose it.

Option A - LoadBalancer (ROSA, ARO, Managed OpenShift)

By default, the NGINX Gateway Fabric Operator configures the service type as LoadBalancer. On public cloud managed OpenShift services (like ROSA on AWS or ARO on Azure), this native default works out-of-the-box to provision a cloud load balancer. No additional steps are required.

Option B - NodePort with OpenShift Route (On-Prem/Hybrid)

For on-premise or bare-metal OpenShift clusters lacking a native LoadBalancer implementation, the common pattern is to use a NodePort service exposed via an OpenShift Route.

Update the NGF CR to use NodePort:

1. In the Console, go to Installed Operators > NGINX Gateway Fabric Operator.
2. Click the NginxGatewayFabric tab and select NginxGatewayFabric.
3. Select YAML view to directly edit the configuration specifics.
4. Change spec.nginx.service.type to NodePort:

apiVersion: gateway.nginx.org/v1alpha1
kind: NginxGatewayFabric
metadata:
  name: default
  namespace: nginx-gateway
spec:
  nginx:
    service:
      type: NodePort

Create the OpenShift Route. After applying the CR, create a Route to expose the NGINX Service:

oc create route edge ngf \
  --service=nginxgatewayfabric-sample-nginx-gateway-fabric \
  --port=http \
  -n nginx-gateway

Note: This creates an Edge TLS termination route. For passthrough TLS (allowing NGINX to handle certificates), use --passthrough and target the https port.

Step 4: Validating the Deployment

Verify that the operator has deployed the control plane pods successfully.

oc get pod -n nginx-gateway
NAME                                                              READY   STATUS    RESTARTS   AGE
nginx-gateway-fabric-controller-manager-dd6586597-bfdl5          1/1     Running   0          23m
nginxgatewayfabric-sample-nginx-gateway-fabric-564cc6df4d-hztm8  1/1     Running   0          18m

oc get gatewayclass
NAME    CONTROLLER                                   ACCEPTED   AGE
nginx   gateway.nginx.org/nginx-gateway-controller   True       4d1h

You should also see a GatewayClass named nginx. This indicates the controller is ready to manage Gateway resources.

Step 5: Functional Check with Gateway API

To test traffic, we will use the standard Gateway API resources (Gateway and HTTPRoute).

Deploy a Test Application (Cafe Service)

Ensure you have a backend service running. You can use a simple service for validation.

Create a Gateway

This resource opens the listener on the NGINX data plane.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cafe
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    port: 80
    protocol: HTTP

Create an HTTPRoute

This binds the traffic to your backend service.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: coffee
spec:
  parentRefs:
  - name: cafe
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: coffee
      port: 80

Test Connectivity

If you used Option B (Route), send a request to your OpenShift Route hostname. If you used Option A, send it to the LoadBalancer IP.

OpenShift 4.19 Compatibility

Meanwhile, it is vital to understand the "under the hood" constraints of OpenShift 4.19:

Gateway API Version Pinning: OpenShift 4.19 ships with Gateway API CRDs pinned to v1.2.1. While NGF 2.2 supports v1.3.0 features, it has been conformance-tested against v1.2.1 to ensure stability within OpenShift's version-locked environment.

oc get crd gateways.gateway.networking.k8s.io -o yaml | grep "gateway.networking.k8s.io/"
    gateway.networking.k8s.io/bundle-version: v1.2.1
    gateway.networking.k8s.io/channel: standard

However, looking ahead, future NGINX Gateway Fabric releases may rely on newer Gateway API specifications that are not natively supported by the pinned CRDs in OpenShift 4.19.
If you anticipate running a newer NGF version that may not be compatible with the current OpenShift Gateway API version, please reach out to us to discuss your compatibility requirements.

Security Context Constraints (SCC): In previous manual deployments, you might have wrestled with NET_BIND_SERVICE capabilities or creating custom SCCs. The Certified Operator handles these permissions automatically, using UBI-based images that comply with Red Hat's security standards out of the box.

Next Steps: AI Inference

With NGF running, you are ready for advanced use cases. AI Inference: explore the Gateway API Inference Extension to route traffic to LLMs efficiently, optimizing GPU usage on Red Hat OpenShift AI. The certified NGINX Gateway Fabric Operator simplifies the operational burden, letting you focus on what matters: delivering secure, high-performance applications and AI workloads.

References:

NGINX Gateway Fabric Operator on Red Hat Catalog
F5 NGINX Gateway Fabric Certified for Red Hat OpenShift
NGINX Gateway Fabric Installation Docs

Get Started with BIG-IP and BIG-IQ Virtual Edition (VE) Trial
Welcome to the BIG-IP and BIG-IQ trials page! This will be your jumping off point for setting up a trial version of BIG-IP VE or BIG-IQ VE in your environment. As you can see below, everything you'll need is included and organized by operating environment, namely by public/private cloud or virtualization platform. To get started with your trial, use the following software and documentation, which can be found in the links below. Upon requesting a trial, you should have received an email containing your license keys. Please bear in mind that it can take up to 30 minutes to receive your licenses.

Don't have a trial license? Get one here. Or if you're ready to buy, contact us. Looking for other resources like tools, compatibility matrix...

BIG-IP VE and BIG-IQ VE

When you sign up for the BIG-IP and BIG-IQ VE trial, you receive a set of license keys. Each key will correspond to a component listed below:

• BIG-IQ Centralized Management (CM) — Manages the lifecycle of BIG-IP instances including analytics, licenses, configurations, and auto-scaling policies
• BIG-IQ Data Collection Device (DCD) — Aggregates logs and analytics of traffic and BIG-IP instances to be used by BIG-IQ
• BIG-IP Local Traffic Manager (LTM), Access (APM), Advanced WAF (ASM), Network Firewall (AFM), DNS — Keep your apps up and running with BIG-IP application delivery controllers. BIG-IP Local Traffic Manager (LTM) and BIG-IP DNS handle your application traffic and secure your infrastructure. You'll get built-in security, traffic management, and performance application services, whether your applications live in a private data center or in the cloud.

Select the hypervisor or environment where you want to run VE:

AWS
• CFT for single NIC deployment
• CFT for three NIC deployment
• BIG-IP VE images in the AWS Marketplace
• BIG-IQ VE images in the AWS Marketplace
• BIG-IP AWS documentation
• BIG-IP video: Single NIC deploy in AWS
• BIG-IQ AWS documentation
• Setting up and Configuring a BIG-IQ Centralized Management Solution
• BIG-IQ Centralized Management Trial Quick Start

Azure
• Azure Resource Manager (ARM) template for single NIC deployment
• Azure ARM template for three NIC deployment
• BIG-IP VE images in the Azure Marketplace
• BIG-IQ VE images in the Azure Marketplace
• BIG-IQ Centralized Management Trial Quick Start
• BIG-IP VE Azure documentation
• Video: BIG-IP VE Single NIC deploy in Azure
• BIG-IQ VE Azure documentation
• Setting up and Configuring a BIG-IQ Centralized Management Solution

VMware/KVM/OpenStack
• Download BIG-IP VE image
• Download BIG-IQ VE image
• BIG-IP VE Setup
• BIG-IQ VE Setup
• Setting up and Configuring a BIG-IQ Centralized Management Solution

Google Cloud
• Google Deployment Manager template for single NIC deployment
• Google Deployment Manager template for three NIC deployment
• BIG-IP VE images in Google Cloud
• Google Cloud Platform documentation
• Video: Single NIC deploy in Google

Other Resources
• AskF5
• Github community (f5devcentral, f5networks)

Tools to automate your deployment
• BIG-IQ Onboarding Tool
• F5 Declarative Onboarding
• F5 Application Services 3 Extension
• Other Tools: F5 SDK (Python), F5 Application Services Templates (FAST), F5 Cloud Failover, F5 Telemetry Streaming

Find out which hypervisor versions are supported with each release of VE:
• BIG-IP Compatibility Matrix
• BIG-IQ Compatibility Matrix

Do you have any comments or questions? Ask here

Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part Two
Introduction

This is the second part of our series on leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge deployments. In Part One, we explored the high-level concepts, architecture decisions, and design principles that make BGP and ECMP such a powerful combination for Customer Edge high availability and maintenance operations. This article provides step-by-step implementation guidance, including:

• High-level and low-level architecture diagrams
• Complete BGP peering and routing policy configuration in F5 Distributed Cloud Console
• Practical configuration examples for Fortinet FortiGate and Palo Alto Networks firewalls

By the end of this article, you'll have everything you need to implement BGP-based high availability for your Customer Edge deployment.

Architecture Overview

Before diving into configuration, let's establish a clear picture of the architecture we're implementing. We'll examine this from two perspectives: a high-level logical view and a detailed low-level view showing specific IP addressing and AS numbers.

High-Level Architecture

The high-level architecture illustrates the fundamental traffic flow and BGP relationships in our deployment.

Key Components:
• Internet: external connectivity to the network
• Next-Generation Firewall: acts as the BGP peer and performs ECMP distribution to Customer Edge nodes
• Customer Edge Virtual Site: two or more CE nodes advertising identical VIP prefixes via BGP

The architecture follows a straightforward principle: the upstream firewall establishes BGP peering with each CE node. Each CE advertises its VIP addresses as /32 routes. The firewall, seeing multiple equal-cost paths to the same destination, distributes incoming traffic across all available CE nodes using ECMP.

Low-Level Architecture with IP Addressing

The low-level diagram provides the specific details needed for implementation, including IP addresses and AS numbers.

Network Details:
• Firewall (Inside): 10.154.4.119/24 - BGP peer, ECMP router
• CE1 (Outside): 10.154.4.160/24 - Customer Edge node 1
• CE2 (Outside): 10.154.4.33/24 - Customer Edge node 2
• Global VIP: 192.168.100.10/32 - load balancer VIP

BGP Configuration:
• AS Number: 65001 on the firewall, 65002 on the Customer Edges
• Router ID: 10.154.4.119 on the firewall, auto-assigned based on interface IP on the Customer Edges
• Advertised Prefix: none from the firewall, 192.168.100.0/24 le 32 from the Customer Edges

This configuration uses eBGP (External BGP) between the firewall and CE nodes, with different AS numbers for each. The CE nodes share the same AS number (65002), which is the standard approach for multi-node CE deployments advertising the same VIP prefixes.

Configuring BGP in F5 Distributed Cloud Console

The F5 Distributed Cloud Console provides a centralized interface for configuring BGP peering and routing policies on your Customer Edge nodes. This section walks you through the complete configuration process.

Step 1: Configure the BGP peering

Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies

Click on Add BGP Peer, then add the following information:

• Object name
• Site where to apply this BGP configuration
• ASN
• Router ID

Here is an example of the required parameters. Then click on Peers --> Add Item and fill in the relevant fields as shown below, adapting the parameters to your requirements.
Step 2: Configure the BGP routing policies

Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies --> BGP Routing Policies

Click on Add BGP Routing Policy. Add a name for your BGP routing policy object and click on Configure to add the rules. Click on Add Item to add a rule. Here we are going to allow the /32 prefixes from our VIP subnet (192.168.100.0/24). Save the BGP Routing Policy.

Repeat the steps to create another BGP routing policy with exactly the same parameters except the Action Type, which should be of type Deny.

Now we have two BGP routing policies:

• One to allow the VIP prefixes (for normal operations)
• One to deny the VIP prefixes (for maintenance mode)

We still need to add a third and final BGP routing policy, in order to deny any prefixes on the CE. For that, create a third BGP routing policy with this match.

Step 3: Apply the BGP routing policies

To apply the BGP routing policies in your BGP peer object, edit the Peer and:

• Enable the BGP routing policy
• Apply the BGP routing policy objects created before for Inbound and Outbound

Fortinet FortiGate Configuration

FortiGate firewalls are widely deployed as network security appliances and support robust BGP capabilities. This section provides the minimum configuration for establishing BGP peering with Customer Edge nodes and enabling ECMP load distribution.

Step 1: Configure the Router ID and AS Number

Configure the basic BGP settings:

config router bgp
    set as 65001
    set router-id 10.154.4.119
    set ebgp-multipath enable

Step 2: Configure BGP Neighbors

Add each CE node as a BGP neighbor:

    config neighbor
        edit "10.154.4.160"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
        edit "10.154.4.33"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
    end
end

Step 3: Create Prefix List for VIP Range

Define the prefix list that matches the CE VIP range:

config router prefix-list
    edit "CE-VIP-PREFIXES"
        config rule
            edit 1
                set prefix 192.168.100.0 255.255.255.0
                set ge 32
                set le 32
            next
        end
    next
end

Important: The ge 32 and le 32 parameters ensure we only match /32 prefixes within the 192.168.100.0/24 range, which is exactly what CE nodes advertise for their VIPs.
Step 4: Create Route Maps

Configure route maps to implement the filtering policies.

Inbound Route Map (Accept VIP prefixes):

config router route-map
    edit "ACCEPT-CE-VIPS"
        config rule
            edit 1
                set match-ip-address "CE-VIP-PREFIXES"
            next
        end
    next
end

Outbound Route Map (Deny all advertisements):

config router route-map
    edit "DENY-ALL"
        config rule
            edit 1
                set action deny
            next
        end
    next
end

Step 5: Verify BGP Configuration

After applying the configuration, verify the BGP sessions and routes.

Check BGP neighbor status:

get router info bgp summary
VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor        V   AS     MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
10.154.4.33     4   65002  2092     2365     0       0    0     00:05:33  1
10.154.4.160    4   65002  2074     2346     0       0    0     00:14:14  1

Total number of neighbors 2

Verify ECMP routes:

get router info routing-table bgp
Routing table for VRF=0
B       192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:00:11, [1/0]
                          [20/255] via 10.154.4.33 (recursive is directly connected, port2), 00:00:11, [1/0]

Palo Alto Networks Configuration

Palo Alto Networks firewalls provide enterprise-grade security with comprehensive routing capabilities. This section covers the minimum BGP configuration for peering with Customer Edge nodes.

Note: This part assumes that the Palo Alto firewall is configured in the new "Advanced Routing Engine" mode, and we will use the logical router named "default".

Step 1: Configure ECMP parameters

set network logical-router default vrf default ecmp enable yes
set network logical-router default vrf default ecmp max-path 4
set network logical-router default vrf default ecmp algorithm ip-hash

Step 2: Configure address objects and firewall rules for BGP peering

set address CE1 ip-netmask 10.154.4.160/32
set address CE2 ip-netmask 10.154.4.33/32
set address-group BGP_PEERS static [ CE1 CE2 ]
set address LOCAL_BGP_IP ip-netmask 10.154.4.119/32
set rulebase security rules ALLOW_BGP from service
set rulebase security rules ALLOW_BGP to service
set rulebase security rules ALLOW_BGP source LOCAL_BGP_IP
set rulebase security rules ALLOW_BGP destination BGP_PEERS
set rulebase security rules ALLOW_BGP application bgp
set rulebase security rules ALLOW_BGP service application-default
set rulebase security rules ALLOW_BGP action allow

Step 3: Palo Alto Configuration Summary (CLI Format)

set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry network 192.168.100.0/24
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 action permit
set network routing-profile filters prefix-list ALLOWED_PREFIXES description "Allow only /32 inside 192.168.100.0/24"
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry network 0.0.0.0/0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 action deny
set network routing-profile filters prefix-list DENY_ALL description "Deny all prefixes"
set network routing-profile bgp filtering-profile FILTER_INBOUND ipv4 unicast inbound-network-filters prefix-list ALLOWED_PREFIXES
set network routing-profile bgp filtering-profile FILTER_OUTBOUND ipv4 unicast inbound-network-filters prefix-list DENY_ALL
set network logical-router default vrf default bgp router-id 10.154.4.119
set network logical-router default vrf default bgp local-as 65001
set network logical-router default vrf default bgp install-route yes
set network logical-router default vrf default bgp enable yes
set network logical-router default vrf default bgp peer-group BGP_PEERS type ebgp
set network logical-router default vrf default bgp peer-group BGP_PEERS address-family ipv4 ipv4-unicast-default
set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_INBOUND
set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_OUTBOUND
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-as 65002
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address interface ethernet1/2
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address ip svc-intf-ip
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-address ip 10.154.4.160
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-as 65002
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address interface ethernet1/2
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address ip svc-intf-ip
set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-address ip 10.154.4.33

Step 4: Verify BGP Configuration

After committing the configuration, verify the BGP sessions and routes.

Check BGP neighbor status:

run show advanced-routing bgp peer status logical-router default
Logical Router: default
==============
Peer Name: CE2
  BGP State: Established, up for 00:01:55
Peer Name: CE1
  BGP State: Established, up for 00:00:44

Verify ECMP routes:

run show advanced-routing route logical-router default
Logical Router: default
==========================
flags: A:active, E:ecmp, R:recursive, Oi:ospf intra-area, Oo:ospf inter-area, O1:ospf ext 1, O2:ospf ext 2

destination         protocol   nexthop        distance  metric  flag  tag  age       interface
0.0.0.0/0           static     10.154.1.1     10        10      A          01:47:33  ethernet1/1
10.154.1.0/24       connected                 0         0       A          01:47:37  ethernet1/1
10.154.1.99/32      local                     0         0       A          01:47:37  ethernet1/1
10.154.4.0/24       connected                 0         0       A          01:47:37  ethernet1/2
10.154.4.119/32     local                     0         0       A          01:47:37  ethernet1/2
192.168.100.10/32   bgp        10.154.4.33    20        255     A E        00:01:03  ethernet1/2
192.168.100.10/32   bgp        10.154.4.160   20        255     A E        00:01:03  ethernet1/2

total route shown: 7

Implementing CE Isolation for Maintenance

As discussed in Part One, one of the key advantages of BGP-based deployments is the ability to gracefully isolate CE nodes for maintenance. Here's how to implement this in practice.

Isolation via F5 Distributed Cloud Console

To isolate a CE node from receiving traffic, in your BGP peer object, edit the Peer and change the Outbound BGP routing policy from the one that allows the VIP prefixes to the one that denies the VIP prefixes. The CE will stop advertising its VIP routes, and within seconds (based on BGP timers), the upstream firewall will remove this CE from its ECMP paths.
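To confirm that users are unaffected while the CE is being isolated, it can help to leave a simple availability probe running from a client against the VIP for the duration of the change. A minimal sketch, assuming the Global VIP used throughout this article (192.168.100.10) and a hypothetical load balancer hostname; the status codes should stay healthy as the firewall drops to a single ECMP path:

# Continuously probe the application while the CE is isolated and restored.
# app.example.com is a hypothetical LB hostname; 192.168.100.10 is the Global VIP.
while true; do
  printf '%s ' "$(date +%T)"
  curl -s -o /dev/null -w 'HTTP %{http_code}\n' \
       --resolve app.example.com:80:192.168.100.10 http://app.example.com/
  sleep 1
done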
Verification During Maintenance

On your firewall, verify the route withdrawal (in this case we are using a FortiGate firewall):

get router info bgp summary
VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor        V   AS     MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
10.154.4.33     4   65002  2070     2345     0       0    0     00:04:05  0
10.154.4.160    4   65002  2057     2326     0       0    0     00:12:46  1

Total number of neighbors 2

We are no longer receiving any prefixes from the 10.154.4.33 peer.

get router info routing-table bgp
Routing table for VRF=0
B       192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:06:34, [1/0]

And we now have only one path.

Restoring the CE in the data path

After maintenance is complete:

1. Return to the BGP Peer configuration in the F5 XC Console
2. Restore the original export policy (permit VIP prefixes)
3. Save the configuration
4. On the upstream firewall, confirm that CE prefixes are received again and that ECMP paths are restored

Conclusion

This article has provided the complete implementation details for deploying BGP and ECMP with F5 Distributed Cloud Customer Edge nodes. You now have:

• A clear understanding of the architecture at both high and low levels
• Step-by-step instructions for configuring BGP in F5 Distributed Cloud Console
• Ready-to-use configurations for both Fortinet FortiGate and Palo Alto Networks firewalls
• Practical guidance for implementing graceful CE isolation for maintenance

By combining the concepts from the first article with the practical configurations in this article, you can build a robust, highly available application delivery infrastructure that maximizes resource utilization, provides automatic failover, and enables zero-downtime maintenance operations. The BGP-based approach transforms your Customer Edge deployment from a traditional Active/Standby model into a full active topology where every node contributes to handling traffic, and any node can be gracefully removed for maintenance without impacting your users.

Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part One
Introduction

Achieving high availability for application delivery while maintaining operational flexibility is a fundamental challenge for modern enterprises. When deploying F5 Distributed Cloud Customer Edge (CE) nodes in private data centers, on-premises environments, or in some cases public cloud environments, the choice of how traffic reaches these nodes significantly impacts both service resilience and operational agility.

This article explores how Border Gateway Protocol (BGP) combined with Equal-Cost Multi-Path (ECMP) routing provides an elegant solution for two critical operational requirements:

• High availability of traffic for load balancers running on Customer Edge nodes
• Easier maintenance and upgrades of CE nodes without service disruption

By leveraging dynamic routing protocols instead of static configurations, you gain the ability to gracefully remove individual CE nodes from the traffic path, perform maintenance or upgrades, and seamlessly reintroduce them, all without impacting your application delivery services.

Understanding BGP and ECMP Benefits for Customer Edge Deployments

Why BGP and ECMP?

Traditional approaches to high availability often rely on protocols like VRRP, which create Active/Standby topologies. While functional, this model leaves standby nodes idle and creates potential bottlenecks on the active node. BGP with ECMP fundamentally changes this paradigm.

The Power of ECMP

Equal-Cost Multi-Path routing allows your network infrastructure to distribute traffic across multiple CE nodes simultaneously. When each CE node advertises the same VIP prefix via BGP, your upstream router learns multiple equal-cost paths and distributes traffic across all available nodes. This creates a true Active/Active topology where:

• All CE nodes actively process traffic
• Load is distributed across the entire set of CEs
• Failure of any single node automatically redistributes traffic to the remaining nodes
• No manual intervention is required for failover

Key Benefits

• Active/Active/Active: all nodes handle traffic simultaneously, maximizing resource utilization
• Automatic Failover: when a CE stops advertising its VIP, traffic automatically shifts to the remaining nodes
• Graceful Maintenance: withdraw BGP advertisements to drain traffic before maintenance
• Horizontal Scaling: add new CE nodes and they automatically join the traffic distribution

Understanding Customer Edge VIP Architecture

F5 Distributed Cloud Customer Edge nodes support a flexible VIP architecture that integrates seamlessly with BGP. Understanding how VIPs work is essential for proper BGP configuration.

The Global VIP

Each Customer Edge site can be configured with a Global VIP, a single IP address that serves as the default listener for all load balancers instantiated on that CE. Key characteristics:

• Configured at the CE site level in the F5 Distributed Cloud Console
• Acts as the default VIP for any load balancer that doesn't have a dedicated VIP configured
• Advertised as a /32 prefix in the routing table

To know: the Global VIP is NOT generated in the CE's routing table until at least one load balancer is configured on that CE. This last point is particularly important: if you configure a Global VIP but haven't deployed any load balancers, the VIP won't be advertised via BGP. This prevents advertising unreachable services.

For this article, we are going to use 192.168.100.0/24 as the VIP subnet for all the examples.
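To make the "default listener" behavior concrete: once at least one HTTP load balancer exists on the site, its hostnames should answer on the Global VIP, with the individual applications distinguished by the requested hostname. A quick client-side check, assuming a Global VIP of 192.168.100.10 (the address used throughout this series), hypothetical load balancer hostnames, and plain HTTP on port 80 (swap in 443/https and -k if your LB terminates TLS):

# Two different LB hostnames on the same CE, both pinned to the Global VIP.
# app1.example.com and app2.example.com are hypothetical LB domains.
curl -sv --resolve app1.example.com:80:192.168.100.10 http://app1.example.com/ -o /dev/null
curl -sv --resolve app2.example.com:80:192.168.100.10 http://app2.example.com/ -o /dev/null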
Load Balancer Dedicated VIPs

Individual load balancers can be configured with their own Dedicated VIP, separate from the Global VIP. When a dedicated VIP is configured:

• The load balancer responds only to its dedicated VIP
• The load balancer does not respond to the Global VIP
• The dedicated VIP is also advertised as a /32 prefix
• Multiple load balancers can have different dedicated VIPs on the same CE

This flexibility allows you to:

• Separate different applications on different VIPs
• Implement different routing policies per application
• Maintain granular control over traffic distribution

VIP Summary

• Global VIP: per CE site, /32 prefix, advertised when at least one LB is configured on the CE
• Dedicated VIP: per load balancer, /32 prefix, advertised when the specific LB is configured

BGP Filtering Best Practices

Proper BGP filtering is essential for security and operational stability. This section covers the recommended filtering policies for both the upstream network device (firewall/router) and the Customer Edge nodes.

Design Principles

The filtering strategy follows the principle of explicit allow, implicit deny:

• Only advertise what is necessary
• Only accept what is expected
• Use prefix lists with appropriate matching for /32 routes

Upstream Device Configuration (Firewall/Router)

The device peering with your CE nodes should implement strict filtering.

Inbound policy on the firewall/router: the firewall/router should accept only the CE VIP prefixes. In our example, all VIPs fall within 192.168.100.0/24.

Why "or longer" (le 32)? Since VIPs are advertised as /32 prefixes, you need to match prefixes more specific than /24. The le 32 (less than or equal to 32) or "or longer" modifier ensures your filter matches the actual /32 routes while still using a manageable prefix range.

Outbound policy on the firewall/router: by default, the firewall/router should not advertise any prefixes to the CE nodes.

Customer Edge Configuration

The CE nodes should implement complementary filtering.

Outbound policy: CEs should advertise only their VIP prefixes. Since all VIPs on Customer Edge nodes are /32 addresses, your prefix filters must also follow the "or longer" approach.

Inbound policy: CEs should not accept any prefixes from the upstream firewall/router.

Filtering Summary

• Firewall/Router, inbound (from CE): accept the VIP range only, prefix match 192.168.100.0/24 le 32
• Firewall/Router, outbound (to CE): deny all
• CE, outbound (to router): advertise VIPs only, prefix match 192.168.100.0/24 le 32
• CE, inbound (from router): deny all

Graceful CE Isolation for Maintenance

One of the most powerful benefits of using BGP is the ability to gracefully remove a CE node from the traffic path for maintenance, upgrades, or troubleshooting. This section explains how to isolate a CE by manipulating its BGP route advertisements.

The Maintenance Challenge

When you need to perform maintenance on a CE node (OS upgrade, software update, reboot, troubleshooting), you want to:

1. Stop new traffic from reaching the node
2. Allow existing connections to complete gracefully
3. Perform your maintenance tasks
4. Reintroduce the node to the traffic pool

With VRRP, this can require manual failover procedures. With BGP, you simply stop advertising VIP routes.

Isolation Process Overview

Step 1: Configure BGP Route Filtering on the CE

To isolate a CE, you need to apply a BGP policy that prevents the VIP prefixes from being advertised or received.

Where to Apply the Policy?
There are two possible approaches to stop a CE from receiving traffic:

1. On the BGP peer (firewall/router): configure an inbound filter on the upstream device to reject routes from the specific CE you want to isolate.
2. On the Customer Edge itself: configure an outbound export policy on the CE to stop advertising its VIP prefixes.

We recommend the F5 Distributed Cloud approach (option 2) for several reasons:

• Automation: the firewall/router approach requires separate automation for network devices; the F5 Distributed Cloud approach can be performed in the F5 XC Console or fully automated through an API/Terraform infrastructure-as-code approach.
• Team ownership: the firewall/router approach requires coordination with the network team; with the F5 Distributed Cloud approach, the CE team has full autonomy.
• Consistency: configuration syntax varies by firewall/router vendor; the F5 Distributed Cloud approach provides a single, consistent interface.
• Audit trail: with the firewall/router approach, changes are spread across multiple systems; with the F5 Distributed Cloud approach, they are centralized in the F5 XC Console.

In many organizations, the team responsible for managing the Customer Edge nodes is different from the team managing the network infrastructure (firewalls, routers). By implementing isolation policies on the CE side, you eliminate cross-team dependencies and enable self-service maintenance operations.

Applying the Filter

The filter is configured through the F5 Distributed Cloud Console on the specific CE site. The filter configuration will:

• Match the VIP prefix range (192.168.100.0/24 or longer)
• Set the action to Deny
• Apply to the outbound direction (export policy)

Once applied, the CE stops advertising its VIP /32 routes to its BGP peers.

Step 2: Perform Maintenance

With the CE isolated from the traffic path, you can safely:

• Reboot the CE node
• Perform OS upgrades
• Apply software updates

Existing long-lived connections to the isolated CE will eventually time out, while new connections are automatically directed to the remaining CEs.

Step 3: Reintroduce the CE in the data path

After maintenance is complete:

1. Remove or modify the BGP export filter to allow VIP advertisement
2. The CE will begin advertising its VIP /32 routes again
3. The upstream firewall/router will add the CE back to its ECMP paths
4. Traffic will automatically start flowing to the restored CE

Isolation Benefits Summary

• Zero-touch failover: traffic automatically shifts to the remaining CEs
• Controlled maintenance windows: isolate at your convenience
• No application impact: users experience no disruption
• Reversible: simply re-enable route advertisement to restore
• Per-node granularity: isolate individual nodes without affecting others

Rolling Upgrade Strategy

Using this isolation technique, you can implement rolling upgrades across your CEs.

Rolling Upgrade Sequence:

Step 1: Isolate CE1 → Upgrade CE1 → Put CE1 back in the data path
Step 2: Isolate CE2 → Upgrade CE2 → Put CE2 back in the data path

Throughout this process:

• At least one CE is always handling traffic
• No service interruption occurs
• Each CE is validated before moving to the next

Conclusion

BGP with ECMP provides a robust, flexible foundation for high-availability F5 Distributed Cloud Customer Edge deployments.
By leveraging dynamic routing protocols:

• Traffic is distributed across all active CE nodes, maximizing resource utilization
• Failover is automatic when a CE becomes unavailable
• Maintenance is graceful through controlled route withdrawal
• Scaling is seamless, as new CEs automatically join the traffic distribution once they are BGP-peered

The combination of proper BGP filtering (accepting only VIP prefixes, advertising only what's necessary) and the ability to isolate individual CEs through route manipulation gives you complete operational control over your application delivery infrastructure. Whether you're performing routine maintenance, emergency troubleshooting, or rolling out upgrades, BGP-based CE deployments ensure your applications remain available and your operations remain smooth.