distributed cloud
How to deploy an F5XC SMSv2 site with the help of automation
To deploy an F5XC Customer Edge (CE) in SMSv2 mode with the help of automation, it is necessary to follow the three main steps below:

- Verify the prerequisites at the technical architecture level for the environment in which the CE will be deployed (public cloud or datacenter/private cloud)
- Create the necessary objects at the F5XC platform level
- Deploy the CE instance in the target environment

We will provide more details for all the steps, as well as the simplest Terraform skeleton code to deploy an F5XC CE in the main cloud environments (AWS, GCP and Azure).

Step 1: verification of architecture prerequisites

To be deployed, a CE must have an interface (which is and will always be its default interface) with Internet access. This access is necessary to perform the installation steps and to provide the "control plane" part of the CE. This interface is referred to as the "Site Local Outside" (SLO) interface.

This Internet access can be provided in several ways.

"Cloud provider" type site:
- Public IP address directly on the interface
- Private IP address on the interface and use of a NAT gateway as default route
- Private IP address on the interface and use of a security appliance (a firewall, for example) as default route
- Private IP address on the interface and use of an explicit proxy

Datacenter or "private cloud" type site:
- Private IP address on the interface and use of a security appliance (a firewall, for example) or router as default route
- Private IP address on the interface and use of an explicit proxy
- Public IP address on the interface and "direct" routing to the Internet

It is highly recommended (not to say required) to add at least a second interface during the site's first deployment, because depending on the infrastructure (for example, GCP) it is not possible to add network interfaces after the creation of the VM. Even on platforms where adding a network interface is possible, a reboot of the F5XC CE is needed. An F5XC SMSv2 CE can have up to eight interfaces overall. Additional interfaces are (most of the time) used as "Site Local Inside" (SLI) interfaces or "Segment" interfaces (that specific part will be covered in another article).

Basic CE flow matrix:

Interface | Direction and protocols | Use case / purpose
SLO | Egress – TCP 53 (DNS), TCP 443 (HTTPS), UDP 53 (DNS), UDP 123 (NTP), UDP 4500 (IPsec) | Registration, software download and upgrade, VPN tunnels towards the F5XC infrastructure for the control plane
SLO | Ingress – None | RE/CE use case, CE-to-CE use case using the F5 ADN
SLO | Ingress – UDP 4500 | Site Mesh Group for direct CE-to-CE secure connectivity over the SLO interface (no usage of the F5 ADN)
SLO | Ingress – TCP 80 (HTTP), TCP 443 (HTTPS) | HTTP/HTTPS load balancer on the CE for WAAP use cases
SLI | Egress | Depends on the use case / application, but if the security constraints permit it, no restriction
SLI | Ingress | Depends on the use case / application, but if the security constraints permit it, no restriction

For advanced details regarding the IPs and domains used for:
- Registration / software upgrade
- Tunnel establishment towards the F5XC infrastructure

please refer to: https://docs.cloud.f5.com/docs-v2/platform/reference/network-cloud-ref#new-secure-mesh-v2-sites

Step 2: creation of necessary objects at the F5XC platform level

This step will be performed by the Terraform script by:
- Creating an SMSv2 token
- Creating an F5XC site of SMSv2 type

API certificate and Terraform variables

First, it is necessary to create an API certificate.
Please follow the instructions in our official documentation here: https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-my-credentials or here: https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-service-credentials, depending on the type of API certificate you want to create and use (user credential or service credential).

In the Terraform variables, these are the ones you need to modify. The "location of the api key" should be the full path where your API P12 file is stored.

variable "f5xc_api_p12_file" {
  type        = string
  description = "F5XC tenant api key"
  default     = "<location of the api key>"
}

If your F5XC console URL is https://mycompany.console.ves.volterra.io, then the value for f5xc_api_url will be https://mycompany.console.ves.volterra.io/api.

variable "f5xc_api_url" {
  type    = string
  default = "https://<tenant name>.console.ves.volterra.io/api"
}

When using Terraform, you will also need to export the P12 certificate password as an environment variable:

export VES_P12_PASSWORD=<password of P12 cert>

Creation of the SMSv2 token. This is achieved with the following Terraform code and the "type = 1" parameter.

#
# F5XC objects creation
#
resource "volterra_token" "smsv2-token" {
  depends_on = [volterra_securemesh_site_v2.site]
  name       = "${var.f5xc-ce-site-name}-token"
  namespace  = "system"
  type       = 1
  site_name  = volterra_securemesh_site_v2.site.name
}

Creation of the F5XC SMSv2 site. This is achieved with the following Terraform code (example for GCP). This is where you need to configure all the options you want to be applied at site creation.

resource "volterra_securemesh_site_v2" "site" {
  name                    = format("%s-%s", var.f5xc-ce-site-name, random_id.suffix.hex)
  namespace               = "system"
  block_all_services      = false
  logs_streaming_disabled = true
  enable_ha               = false

  labels = {
    "ves.io/provider" = "ves-io-GCP"
  }

  re_select {
    geo_proximity = true
  }

  gcp {
    not_managed {}
  }
}

For instance, if you want to use a corporate proxy and have the CE tunnels pass through the proxy, the following should be added:

custom_proxy {
  enable_re_tunnel = true
  proxy_ip_address = "10.154.32.254"
  proxy_port       = 8080
}

And if you want to force CE-to-RE connectivity over SSL, the following should be added:

tunnel_type = "SITE_TO_SITE_TUNNEL_SSL"
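The skeleton's provider.tf (referenced later in the directory structure) is where the API certificate variables and the exported password come together. The following is a minimal sketch, assuming the volterraedge/volterra provider and the variable names shown above; the azurerm provider is included only because the walkthrough later targets Azure, and version pinning is left out on purpose.

terraform {
  required_providers {
    volterra = {
      source = "volterraedge/volterra"
    }
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

# The P12 password is read from the VES_P12_PASSWORD environment variable exported above.
provider "volterra" {
  api_p12_file = var.f5xc_api_p12_file
  url          = var.f5xc_api_url
}

provider "azurerm" {
  features {}
}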
Step 3: creation of the CE instance in the target environment

This step will be performed by the Terraform script by:
- Generating a cloud-init file
- Creating the F5XC site instance in the environment, based on the marketplace images or the available F5XC images

How to list the available F5XC images in Azure:

az vm image list --all --publisher f5-networks --offer f5xc_customer_edge --sku f5xccebyol --output table | sort -k4 -V

Then check in the output for the entry with the highest version.

x64  f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:9.2025.17    9.2025.17
x64  f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.1    2024.40.1
x64  f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.2    2024.40.2
x64  f5xc_customer_edge  f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.44.1    2024.44.1
x64  f5xc_customer_edge  f5-networks  f5xccebyol_2  f5-networks:f5xc_customer_edge:f5xccebyol_2:2024.44.2  2024.44.2
Architecture  Offer               Publisher    Sku           Urn                                                     Version
------------- ------------------  -----------  ------------  ------------------------------------------------------  ---------

We are going to re-use some of these parameters in the Terraform script, to instruct the Terraform code which image it should use.

source_image_reference {
  publisher = "f5-networks"
  offer     = "f5xc_customer_edge"
  sku       = "f5xccebyol"
  version   = "9.2025.17"
}

Also, for Azure, you need to accept the legal terms of the F5XC CE image. This needs to be performed only once, by running the following commands.

Select the Azure subscription in which you are planning to deploy the F5XC CE:

az account set -s <subscription-id>

Accept the terms and conditions for the F5XC CE for this subscription:

az vm image terms accept --publisher f5-networks --offer f5xc_customer_edge --plan f5xccebyol

How to list the available F5XC images in GCP:

gcloud compute images list --project=f5-7626-networks-public --filter="name~'f5xc-ce'" --sort-by=~creationTimestamp --format="table(name,creationTimestamp)"

Then check in the output for the entry with the highest version.

NAME                         CREATION_TIMESTAMP
f5xc-ce-crt-20250701-0123    2025-07-09T02:15:08.352-07:00
f5xc-cecrt-20250701-0099-9   2025-07-02T01:32:40.154-07:00
f5xc-ce-202505151709081      2025-06-25T22:31:23.295-07:00

How to list the available F5XC images in AWS:

aws ec2 describe-images \
  --region eu-west-3 \
  --filters "Name=name,Values=*f5xc-ce*" \
  --query "reverse(sort_by(Images, &CreationDate))[*].{ImageId:ImageId,Name:Name,CreationDate:CreationDate}" \
  --output table

Then check in the output for the AMI with the latest creation date.

Also, for AWS, you need to accept the legal terms of the F5XC CE image. This needs to be performed only once. Go to this page in your AWS Console, then select "View purchase options" and then select "Subscribe".

Putting everything together: global overview

We are going to use Azure as the target environment to deploy the F5XC CE. The CE will be deployed with two NICs, the SLO being in a public subnet with a public IP attached to the NIC. We assume that all the prerequisites from Step 1 are met.

A Terraform skeleton for Azure is available here: https://github.com/veysph/Prod-TF/

It's not intended to be the perfect thing, just an example of the minimum basic things needed to deploy an F5XC SMSv2 CE with automation. Changes and enhancements based on the different needs you might have are more than welcome. It's really intended to be flexible and not too strict.

Structure of the Terraform directory:
- provider.tf contains everything related to the needed providers
- variables.tf contains all the variables used in the Terraform files
- f5xc_sites.tf contains everything related to the F5XC objects creation
- main.tf contains everything needed to start the F5XC CE in the target environment (a sketch of what this can look like follows below)
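To give an idea of what main.tf can contain, here is a minimal sketch of the Azure virtual machine resource for the CE. The resource group, NIC, SSH key variable and cloud_init local names are hypothetical placeholders, the VM size is only an assumption (check F5's sizing guidance), and the authoritative version is the one in the skeleton repository linked above.

# Minimal sketch of the CE virtual machine (hypothetical names, adapt to your skeleton).
resource "azurerm_linux_virtual_machine" "f5xc_ce" {
  name                = var.f5xc-ce-site-name
  resource_group_name = azurerm_resource_group.ce_rg.name
  location            = azurerm_resource_group.ce_rg.location
  size                = "Standard_D8s_v3" # assumption: size according to F5 sizing guidance

  # First NIC is always the SLO (default) interface, the second one the SLI interface.
  network_interface_ids = [
    azurerm_network_interface.slo.id,
    azurerm_network_interface.sli.id,
  ]

  admin_username = "cloud-user"
  admin_ssh_key {
    username   = "cloud-user"
    public_key = file(var.ssh_public_key_file)
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
    disk_size_gb         = 80
  }

  # Image selected with the az/gcloud/aws listing commands above.
  source_image_reference {
    publisher = "f5-networks"
    offer     = "f5xc_customer_edge"
    sku       = "f5xccebyol"
    version   = "9.2025.17"
  }

  # Marketplace images also need the matching plan block (terms must have been accepted first).
  plan {
    publisher = "f5-networks"
    product   = "f5xc_customer_edge"
    name      = "f5xccebyol"
  }

  # cloud-init generated by the skeleton; it embeds the SMSv2 registration token
  # created in Step 2 (volterra_token.smsv2-token), following F5's documented format.
  custom_data = base64encode(local.cloud_init)
}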
Deployment

Make all the relevant changes in variables.tf. Don't forget to export your P12 password as an environment variable (see Step 2, API certificate and Terraform variables)! Then run:

terraform init
terraform plan
terraform apply

Should everything be correct at each step, you should get a CE object in the F5XC console, under Multi-Cloud Network Connect --> Manage --> Site Management --> Secure Mesh Sites v2.
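As a closing note on the Azure flow: the one-time marketplace terms acceptance from Step 3 can also be kept in Terraform instead of the az CLI, so that a fresh subscription works with the same skeleton. A sketch, assuming the azurerm provider is configured as shown earlier; whether you want Terraform to own this one-time, subscription-wide setting is a design choice.

# One-time acceptance of the F5XC CE marketplace terms for the current subscription.
# Equivalent to: az vm image terms accept --publisher f5-networks --offer f5xc_customer_edge --plan f5xccebyol
resource "azurerm_marketplace_agreement" "f5xc_ce" {
  publisher = "f5-networks"
  offer     = "f5xc_customer_edge"
  plan      = "f5xccebyol"
}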
VIPTest: Rapid Application Testing for F5 Environments

VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part One

Introduction

Achieving high availability for application delivery while maintaining operational flexibility is a fundamental challenge for modern enterprises. When deploying F5 Distributed Cloud Customer Edge (CE) nodes in private data centers, on-premises environments or, in some cases, public cloud environments, the choice of how traffic reaches these nodes significantly impacts both service resilience and operational agility.

This article explores how Border Gateway Protocol (BGP) combined with Equal-Cost Multi-Path (ECMP) routing provides an elegant solution for two critical operational requirements:
- High availability of traffic for load balancers running on Customer Edge nodes
- Easier maintenance and upgrades of CE nodes without service disruption

By leveraging dynamic routing protocols instead of static configurations, you gain the ability to gracefully remove individual CE nodes from the traffic path, perform maintenance or upgrades, and seamlessly reintroduce them, all without impacting your application delivery services.

Understanding BGP and ECMP Benefits for Customer Edge Deployments

Why BGP and ECMP? Traditional approaches to high availability often rely on protocols like VRRP, which create Active/Standby topologies. While functional, this model leaves standby nodes idle and creates potential bottlenecks on the active node. BGP with ECMP fundamentally changes this paradigm.

The Power of ECMP

Equal-Cost Multi-Path routing allows your network infrastructure to distribute traffic across multiple CE nodes simultaneously. When each CE node advertises the same VIP prefix via BGP, your upstream router learns multiple equal-cost paths and distributes traffic across all available nodes. This creates a true Active/Active topology where:
- All CE nodes actively process traffic
- Load is distributed across the entire set of CEs
- Failure of any single node automatically redistributes traffic to the remaining nodes
- No manual intervention is required for failover

Key Benefits

Benefit | Description
Active/Active/Active | All nodes handle traffic simultaneously, maximizing resource utilization
Automatic Failover | When a CE stops advertising its VIP, traffic automatically shifts to the remaining nodes
Graceful Maintenance | Withdraw BGP advertisements to drain traffic before maintenance
Horizontal Scaling | Add new CE nodes and they automatically join the traffic distribution

Understanding Customer Edge VIP Architecture

F5 Distributed Cloud Customer Edge nodes support a flexible VIP architecture that integrates seamlessly with BGP. Understanding how VIPs work is essential for proper BGP configuration.

The Global VIP

Each Customer Edge site can be configured with a Global VIP, a single IP address that serves as the default listener for all load balancers instantiated on that CE. Key characteristics:
- Configured at the CE site level in the F5 Distributed Cloud Console
- Acts as the default VIP for any load balancer that doesn't have a dedicated VIP configured
- Advertised as a /32 prefix in the routing table
- To know: the Global VIP is NOT generated in the CE's routing table until at least one load balancer is configured on that CE

This last point is particularly important: if you configure a Global VIP but haven't deployed any load balancers, the VIP won't be advertised via BGP. This prevents advertising unreachable services.

For this article, we are going to use 192.168.100.0/24 as the VIP subnet for all the examples.
Load Balancer Dedicated VIPs

Individual load balancers can be configured with their own Dedicated VIP, separate from the Global VIP. When a dedicated VIP is configured:
- The load balancer responds only to its dedicated VIP
- The load balancer does not respond to the Global VIP
- The dedicated VIP is also advertised as a /32 prefix
- Multiple load balancers can have different dedicated VIPs on the same CE

This flexibility allows you to:
- Separate different applications on different VIPs
- Implement different routing policies per application
- Maintain granular control over traffic distribution

VIP Summary

VIP Type | Scope | Prefix Length | When Advertised
Global VIP | Per CE | /32 | When at least one LB is configured on the CE
Dedicated VIP | Per load balancer | /32 | When the specific LB is configured

BGP Filtering Best Practices

Proper BGP filtering is essential for security and operational stability. This section covers the recommended filtering policies for both the upstream network device (firewall/router) and the Customer Edge nodes.

Design Principles

The filtering strategy follows the principle of explicit allow, implicit deny:
- Only advertise what is necessary
- Only accept what is expected
- Use prefix lists with appropriate matching for /32 routes

Upstream Device Configuration (Firewall/Router)

The device peering with your CE nodes should implement strict filtering.

Inbound policy on the firewall/router: the firewall/router should accept only the CE VIP prefixes. In our example, all VIPs fall within 192.168.100.0/24.

Why "or longer" (le 32)? Since VIPs are advertised as /32 prefixes, you need to match prefixes more specific than /24. The le 32 (less than or equal to 32) or "or longer" modifier ensures your filter matches the actual /32 routes while still using a manageable prefix range.

Outbound policy on the firewall/router: by default, the firewall/router should not advertise any prefixes to the CE nodes.

Customer Edge Configuration

The CE nodes should implement complementary filtering.

Outbound policy: CEs should advertise only their VIP prefixes. Since all VIPs on Customer Edge nodes are /32 addresses, your prefix filters must also follow the "or longer" approach.

Inbound policy: CEs should not accept any prefixes from the upstream firewall/router.

Filtering Summary Table

Device | Direction | Policy | Prefix Match
Firewall/Router | Inbound (from CE) | Accept VIP range | 192.168.100.0/24 le 32
Firewall/Router | Outbound (to CE) | Deny all | N/A
CE | Outbound (to Router) | Advertise VIPs only | 192.168.100.0/24 le 32
CE | Inbound (from Router) | Deny all | N/A

Graceful CE Isolation for Maintenance

One of the most powerful benefits of using BGP is the ability to gracefully remove a CE node from the traffic path for maintenance, upgrades, or troubleshooting. This section explains how to isolate a CE by manipulating its BGP route advertisements.

The Maintenance Challenge

When you need to perform maintenance on a CE node (OS upgrade, software update, reboot, troubleshooting), you want to:
- Stop new traffic from reaching the node
- Allow existing connections to complete gracefully
- Perform your maintenance tasks
- Reintroduce the node to the traffic pool

With VRRP, this can require manual failover procedures. With BGP, you simply stop advertising the VIP routes.

Isolation Process Overview

Step 1: Configure BGP Route Filtering on the CE

To isolate a CE, you need to apply a BGP policy that prevents the VIP prefixes from being advertised or received.

Where to Apply the Policy?
There are two possible approaches to stop a CE from receiving traffic:
1. On the BGP peer (firewall/router): configure an inbound filter on the upstream device to reject routes from the specific CE you want to isolate.
2. On the Customer Edge itself: configure an outbound export policy on the CE to stop advertising its VIP prefixes.

We recommend the F5 Distributed Cloud approach (option 2) for several reasons:

Consideration | Firewall/Router Approach | F5 Distributed Cloud Approach
Automation | Requires separate automation for network devices | Can be performed in the F5 XC Console or fully automated through an API/Terraform infrastructure-as-code approach
Team ownership | Requires coordination with the network team | The CE team has full autonomy
Consistency | Configuration syntax varies by vendor | Single, consistent interface
Audit trail | Spread across multiple systems | Centralized in the F5 XC Console

In many organizations, the team responsible for managing the Customer Edge nodes is different from the team managing the network infrastructure (firewalls, routers). By implementing isolation policies on the CE side, you eliminate cross-team dependencies and enable self-service maintenance operations.

Applying the Filter

The filter is configured through the F5 Distributed Cloud Console on the specific CE site. The filter configuration will:
- Match the VIP prefix range (192.168.100.0/24 or longer)
- Set the action to Deny
- Apply to the outbound direction (export policy)

Once applied, the CE stops advertising its VIP /32 routes to its BGP peers.

Step 2: Perform Maintenance

With the CE isolated from the traffic path, you can safely:
- Reboot the CE node
- Perform OS upgrades
- Apply software updates

Existing long-lived connections to the isolated CE will eventually time out, while new connections are automatically directed to the remaining CEs.

Step 3: Reintroduce the CE into the data path

After maintenance is complete:
- Remove or modify the BGP export filter to allow VIP advertisement
- The CE will begin advertising its VIP /32 routes again
- The upstream firewall/router will add the CE back to its ECMP paths
- Traffic will automatically start flowing to the restored CE

Isolation Benefits Summary

Aspect | Benefit
Zero-touch failover | Traffic automatically shifts to the remaining CEs
Controlled maintenance windows | Isolate at your convenience
No application impact | Users experience no disruption
Reversible | Simply re-enable route advertisement to restore
Per-node granularity | Isolate individual nodes without affecting others

Rolling Upgrade Strategy

Using this isolation technique, you can implement rolling upgrades across your CEs.

Rolling upgrade sequence:
- Step 1: Isolate CE1 → Upgrade CE1 → Put CE1 back in the data path
- Step 2: Isolate CE2 → Upgrade CE2 → Put CE2 back in the data path

Throughout this process:
- At least one CE is always handling traffic
- No service interruption occurs
- Each CE is validated before moving to the next

Conclusion

BGP with ECMP provides a robust, flexible foundation for high-availability F5 Distributed Cloud Customer Edge deployments.
By leveraging dynamic routing protocols:
- Traffic is distributed across all active CE nodes, maximizing resource utilization
- Failover is automatic when a CE becomes unavailable
- Maintenance is graceful through controlled route withdrawal
- Scaling is seamless, as new CEs automatically join the traffic distribution once they are BGP-peered

The combination of proper BGP filtering (accepting only VIP prefixes, advertising only what's necessary) and the ability to isolate individual CEs through route manipulation gives you complete operational control over your application delivery infrastructure. Whether you're performing routine maintenance, emergency troubleshooting, or rolling out upgrades, BGP-based CE deployments ensure your applications remain available and your operations remain smooth.
Thinking Outside the Box: Rewriting Web Pages with F5 Distributed Cloud (XC)

This article demonstrates how to dynamically rewrite web page content, such as updating links or replacing text, by using native features in F5 Distributed Cloud (XC). It provides a creative workaround that leverages JavaScript injection to modify pages on the fly, avoiding the need for a separate proxy like NGINX or BIG-IP.
Extending F5 ADSP: Multi-Tailnet Egress

Tailscale tailnets make private networking simple, secure, and efficient. They're quick to establish, easy to operate, and provide strong identity and network-level protection through zero-trust WireGuard mesh networking. However, while tailnets are secure, applications inside these environments still need enterprise-grade application security, especially when exposed beyond the mesh. This is where F5 Distributed Cloud (XC) App Stack comes in. As F5 XC's Kubernetes-native platform, App Stack integrates directly with Tailscale to extend F5 ADSP into tailnets. The result is that applications inside tailnets gain the same enterprise-grade security, performance, and operational consistency as in traditional environments, while also taking full advantage of Tailscale networking.
Experience the power of F5 NGINX One with feature demos

Introduction

Introducing F5 NGINX One, a comprehensive solution designed to significantly enhance business operations through improved reliability and performance. At the core of NGINX One is our data plane, which is built on our world-class, lightweight, and high-performance NGINX software. This foundation provides robust traffic management solutions that are essential for modern digital businesses. These solutions include API Gateway, Content Caching, Load Balancing, and Policy Enforcement.

NGINX One includes a user-friendly, SaaS-based NGINX One Console that provides essential telemetry and oversees operations without requiring custom development or infrastructure changes. This visibility empowers teams to promptly address customer experience, security vulnerabilities, network performance, and compliance concerns. NGINX One's deployment across various environments empowers businesses to enhance their operations with improved reliability and performance. It is a versatile tool for strengthening operational efficiency, security posture, and overall digital experience.

Simplifying Application Delivery and Management

NGINX One has several promising features on the horizon. Let's highlight three key features: Monitor Certificates and CVEs, Edit and Update Configurations, and Config Sync Groups. Let's delve into these in detail.

Monitor Certificates and CVEs

One of NGINX One's standout features is its ability to monitor Common Vulnerabilities and Exposures (CVEs) and certificate status. This functionality is crucial for maintaining application security integrity in a continually evolving threat landscape. The CVE and certificate monitoring capability of NGINX One enables teams to:
- Prioritize Remediation Efforts: With an accurate and up-to-date database of CVEs and a comprehensive certificate monitoring system, NGINX One assists teams in prioritizing vulnerabilities and certificate issues according to their severity, guaranteeing that essential security concerns are addressed without delay.
- Maintain Compliance: Continuous monitoring for CVEs and certificates ensures that applications comply with security standards and regulations, which is crucial for industries subject to stringent compliance mandates.

Edit and Update Configurations

This feature empowers users to efficiently edit configurations and perform updates directly within the NGINX One Console interface. With configuration editing, you can:
- Make Configuration Changes: Quickly adapt to changing application demands by modifying configurations, ensuring optimal performance and security.
- Simplify Management: Eliminate the need to SSH directly into each instance to edit or update configurations.
- Reduce Errors: The intuitive interface minimizes potential errors in configuration changes, enhancing reliability by offering helpful recommendations.
- Enhance Automation with the NGINX One SaaS Console: Integrates seamlessly into CI/CD and GitOps workflows, including GitHub, through a comprehensive set of APIs.

Config Sync Groups

The Config Sync Group feature is invaluable for environments running multiple NGINX instances. This feature ensures consistent configurations across all instances, enhancing application reliability and reducing administrative overhead. The Config Sync Group capability offers:
- Automated Synchronization: Configurations are seamlessly synchronized across NGINX instances, guaranteeing that all applications operate with the most current and secure settings.
When a configuration sync group already has a defined configuration, it will be automatically pushed to instances as they join.
- Scalability Support: Organizations can easily incorporate new NGINX instances without compromising configuration integrity as their infrastructure expands.
- Minimized Configuration Drift: This feature is crucial for maintaining consistency across environments and preventing potential application errors or vulnerabilities caused by configuration discrepancies.

Conclusion

NGINX One Cloud Console redefines digital monitoring and management by combining all the NGINX core capabilities and use cases. This all-encompassing platform is equipped with sophisticated features to simplify user interaction, drastically cut operational overhead and expenses, bolster security protocols, and broaden operational adaptability. Read our announcement blog for more details on the launch. To explore the platform's capabilities and see it in action, we invite you to tune in to our webinar on September 25th. This is a great opportunity to witness firsthand how NGINX One can revolutionize your digital monitoring and management strategies.
In this edition of "How I Did it", we will explore how F5 Distributed Cloud Services (XC) enables seamless application extension and migration from an on-premises environment to Nutanix NC2 clusters.1.2KViews4likes0CommentsEquinix and F5 Distributed Cloud Services: Business Partner Application Exchanges
Equinix and F5 Distributed Cloud Services: Business Partner Application Exchanges

As organizations adopt hybrid and multicloud architectures, one of the challenges they face is how to securely connect their partners to specific applications while maintaining control over cost and limiting complexity. Unfortunately, traditional private connectivity models tend to struggle with complex setups, slow onboarding, and rigid policies that make it hard to adapt to changing business needs. F5 Distributed Cloud Services on Equinix Network Edge provides a solution that simplifies the partner connectivity process, enhances security with integrated WAF and API protection, and enables consistent policy enforcement across hybrid and multicloud environments. This integration allows businesses to modernize their connectivity strategies, ensuring faster access to applications while maintaining robust security and compliance.

Key Benefits

The benefits of using Distributed Cloud Services with Equinix Network Edge include:
- Seamless Delivery: Deploy apps close to partners for faster access.
- API & App Security: Protect data with integrated security features.
- Hybrid Cloud Support: Enforce consistent policies in multi-cloud setups.
- Compliance Readiness: Meet data protection regulations with built-in security features.
- Proven Integration: F5 + Equinix connectivity is optimized for performance and security.

Before: Traditional Private Connectivity Challenges

Many organizations still rely on traditional private connectivity models that are complex, rigid, and difficult to scale. In a traditional architecture using Equinix, setting up infrastructure is complex and time-consuming. For every connection, an engineer must manually configure circuits through Equinix Fabric, set up BGP routing, apply load balancing, and define firewall rules. These steps are repeated for each partner or application, which adds a lot of overhead and slows down the onboarding process.

Each DMZ is managed separately with its own set of WAFs, routers, firewalls, and load balancers. This makes the environment harder to maintain and scale. If something changes, such as moving an app to a different region or giving a new partner access, it often requires redoing the configuration from scratch. This rigid approach limits how fast a business can respond to new needs. Manual setups also increase the risk of mistakes. Missing or misconfigured firewall rules can accidentally expose sensitive applications, creating security and compliance risks. Overall, this traditional model is slow, inflexible, and difficult to manage as environments grow and change.

After: F5 Distributed Cloud Services with Equinix

Deploying F5 Distributed Cloud Customer Edge (CE) software on Equinix Network Edge addresses these pain points with a modern, simplified model, enabling the creation of secure business partner app exchanges. By integrating Distributed Cloud Services with Equinix, connecting partners to internal applications is faster and simpler. Instead of manually configuring each connection, Distributed Cloud Services automates the process through a centralized management console.

Deploying a CE is straightforward and can be done in minutes. From the Distributed Cloud Console, open "Multi-Cloud Network Connect" and create a "Secure Mesh Site" where you can select Equinix as a Provider. Next, open the Equinix Console and deploy the CE image. This can be done through the Equinix Marketplace, where you can select the F5 Distributed Cloud Services offering and deploy it to your desired location.
A CE can replace the need for multiple components like routers, firewalls, and load balancers. It handles BGP routing, traffic inspection through a built-in WAF, and load balancing. All of this is managed through a single web interface. In this case, the CE connects directly to the Arcadia application in the customer's data center using at least two IPsec tunnels. BGP peering is quickly established with partner environments, allowing dynamic route exchange without manual setup of static routes. Adding a new partner is as simple as configuring another BGP session and applying the correct policy from the central Distributed Cloud Console.

Instead of opening up large network subnets, security is enforced at Layer 7, and this app-aware connectivity is inherently zero trust. Each partner only sees and connects to the exact application they're supposed to, without accessing anything else. Policies are reusable and consistent, so they can be applied across multiple partners with no duplication. The built-in observability gives real-time visibility into traffic and security events. DevOps, NetOps, and SecOps teams can monitor everything from the Distributed Cloud Console, reducing troubleshooting time and improving incident response. This setup avoids the delays and complexity of traditional connectivity methods, while making the entire process more secure and easier to operate.

Simplified Partner Onboarding with Segments

The integration of F5 and Equinix allows for simplified partner onboarding using Network Segments. This approach enables organizations to create logical groupings of partners, each with its own set of access rules and policies, all managed centrally. With Distributed Cloud Services and Equinix, onboarding multiple partners is fast, secure, and easy to manage. Instead of creating separate configurations for each partner, a single centralized service policy is used to control access. Different partner groups can be assigned to segments with specific rules, which are all managed from the Distributed Cloud Console. This means one unified policy can control access across many Network Segments, reducing complexity and speeding up the onboarding process.

To configure a Segment, you simply attach an interface to a CE and assign it to a specific segment. Each segment can have its own set of policies, such as which applications are accessible, what security measures are in place, and how traffic is routed. Each partner tier gets access only to the applications allowed by the policy. In this example, Gold partners might get access to more services than Silver partners. Security policies are enforced at Layer 7, so partners interact only with the allowed applications. There is no low-level network access and no direct IP-level reachability. WAF, load balancing, and API protection are also controlled centrally, ensuring consistent security for all partners. BGP routing through Equinix Fabric makes it simple to connect multiple partner networks quickly, with minimal configuration steps. This approach scales much better than traditional setups and keeps the environment organized, secure, and transparent.

Scalable and Secure Connectivity

F5 Distributed Cloud Services makes it simple to expand application connectivity and security across multiple regions using Equinix Network Edge. CE nodes can be quickly deployed at any Equinix location from the Equinix Marketplace.
This allows teams to extend app delivery closer to end users and partners, reducing latency and improving performance without building new infrastructure from scratch. Distributed Cloud Services allows you to organize your CE nodes into a "Virtual Site". This Virtual Site can span multiple Equinix locations, enabling you to manage all your CE nodes as a single entity. When you need to add a new region, you can deploy a new CE node in that location and all configurations are automatically applied from the associated Virtual Site.

Once a new CE is deployed, existing application and security policies can be automatically replicated to the new site. This standardized approach ensures that all regions follow the same configurations for routing, load balancing, WAF protection, and Layer 7 access control. Policies for different partner tiers are centrally managed and applied consistently across all locations. Built-in observability gives full visibility into traffic flows, segment performance, and app access from every site, all from the Distributed Cloud Console. Operations teams can monitor and troubleshoot with a unified view, without needing to log into each region separately. This centralized control greatly reduces operational overhead and allows the business to scale out quickly while maintaining security and compliance.

Service Policy Management

When scaling out to multiple regions, centralized management of service policies becomes crucial. Distributed Cloud Services allows you to define service policies that can be applied across all CE nodes in a Virtual Site. This means you can create a single policy that governs how applications are accessed, secured, and monitored, regardless of where they are deployed. For example, you can define a service policy that adds a specific HTTP header to all incoming requests for a particular segment. This can be useful for tracking, logging, or enforcing security measures. Another example is setting up a policy that rate-limits API calls from partners to prevent abuse. This policy can be applied across all CE nodes in the Virtual Site, ensuring that all partners are subject to the same rate limits without needing to configure each node individually. The policy works at the L7 level, meaning it passes only HTTP traffic and blocks any non-HTTP traffic. This ensures that only legitimate web requests are processed, enhancing security and reducing the risk of attacks.

Distributed Cloud Services provides different types of dashboards to monitor the performance and security of your applications across all regions. This allows you to monitor security incidents, such as WAF alerts or API abuse, from a single dashboard. The Distributed Cloud Console provides detailed logs with information about each request, including the source IP, HTTP method, response status, and any applied policies. If a request is blocked by a WAF or security policy, the logs will show the reason for the block, making it easier to troubleshoot issues and maintain compliance.

The centralized management of service policies and observability features in Distributed Cloud Services allows organizations to save costs and time when managing their hybrid and multi-cloud environments. By applying consistent policies across all regions, businesses can reduce the need for manual configurations and minimize the risk of misconfigurations. This not only enhances security but also simplifies operations, allowing teams to focus on delivering value rather than managing complex network setups.
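For teams that manage these sites as code, the Virtual Site grouping described above can also be declared with the volterra Terraform provider. The following is a minimal sketch, assuming the Equinix CE sites carry a hypothetical "site-group = equinix-edge" label; the label convention, names and namespace are placeholders, not something prescribed by F5 or Equinix.

# Group all Equinix-hosted CE sites into one Virtual Site so that policies and
# load balancer advertisements can target every location at once.
resource "volterra_virtual_site" "equinix_edge" {
  name      = "equinix-edge"
  namespace = "shared"

  site_type = "CUSTOMER_EDGE"

  site_selector {
    # Any CE site labeled site-group=equinix-edge becomes a member automatically,
    # so adding a new Equinix metro only requires labeling the new site.
    expressions = ["site-group in (equinix-edge)"]
  }
}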
Offload Services to Equinix Network Edge

For organizations that require edge compute capabilities, Distributed Cloud Services provides a Virtual Kubernetes cluster (vK8s) that can be deployed on Equinix Network Edge in combination with F5 Distributed Cloud Regional Edge (RE) nodes. This solution allows you to run containerized applications in a distributed manner, close to your partners and end users, to reduce latency. For example, you can deploy frontend services closer to your partners while your backend services remain in your data center or in a cloud provider. The more services you move to the edge, the more you can benefit from reduced latency and improved performance.

You can use vK8s like a regular Kubernetes cluster, deploying applications, managing resources, and scaling as needed. The F5 Distributed Cloud Console provides a CLI and web interface to manage your vK8s clusters, making it easy to deploy and manage applications across multiple regions.

Demos

Example use-case part 1 - F5 Distributed Cloud & Equinix: Business Partner App Exchange for Edge Services
Video link TBD

Example use-case part 2 - Go beyond the network with Zero Trust Application Access from F5 and Equinix
Video link TBD

Standalone Setup, Configuration, Walkthrough, & Tutorial

Conclusion

F5 Distributed Cloud on Equinix Network Edge transforms how organizations connect partners and applications. With its centralized management, automated connectivity, and built-in security features, it becomes a solid foundation for modern hybrid and multicloud environments. This integration simplifies partner onboarding, enhances security, and enables consistent policy enforcement across regions. Learn more about how F5 Distributed Cloud Services and Equinix can help your organization increase agility while reducing complexity and avoiding the pitfalls of traditional private connectivity models.

Additional Resources

F5 & Equinix Partnership: https://www.f5.com/partners/technology-alliances/equinix
F5 Community Technical Article: Building a secure Application DMZ
F5 Blogs:
- F5 and Equinix Simplify Secure Deployment of Distributed Apps
- F5 and Equinix unite to simplify secure multicloud application delivery
- Extranets aren't dead; they just need an upgrade
- Multicloud chaos ends at the Equinix Edge with F5 Distributed Cloud CE
Leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge, Part Two
Introduction

This is the second part of our series on leveraging BGP and ECMP for F5 Distributed Cloud Customer Edge deployments. In Part One, we explored the high-level concepts, architecture decisions, and design principles that make BGP and ECMP such a powerful combination for Customer Edge high availability and maintenance operations.

This article provides step-by-step implementation guidance, including:
- High-level and low-level architecture diagrams
- Complete BGP peering and routing policy configuration in the F5 Distributed Cloud Console
- Practical configuration examples for Fortinet FortiGate and Palo Alto Networks firewalls

By the end of this article, you'll have everything you need to implement BGP-based high availability for your Customer Edge deployment.

Architecture Overview

Before diving into configuration, let's establish a clear picture of the architecture we're implementing. We'll examine this from two perspectives: a high-level logical view and a detailed low-level view showing specific IP addressing and AS numbers.

High-Level Architecture

The high-level architecture illustrates the fundamental traffic flow and BGP relationships in our deployment.

Key components:

Component | Role
Internet | External connectivity to the network
Next-Generation Firewall | Acts as the BGP peer and performs ECMP distribution to Customer Edge nodes
Customer Edge Virtual Site | Two or more CE nodes advertising identical VIP prefixes via BGP

The architecture follows a straightforward principle: the upstream firewall establishes BGP peering with each CE node. Each CE advertises its VIP addresses as /32 routes. The firewall, seeing multiple equal-cost paths to the same destination, distributes incoming traffic across all available CE nodes using ECMP.

Low-Level Architecture with IP Addressing

The low-level diagram provides the specific details needed for implementation, including IP addresses and AS numbers.

Network details:

Component | IP Address | Role
Firewall (Inside) | 10.154.4.119/24 | BGP Peer, ECMP Router
CE1 (Outside) | 10.154.4.160/24 | Customer Edge Node 1
CE2 (Outside) | 10.154.4.33/24 | Customer Edge Node 2
Global VIP | 192.168.100.10/32 | Load Balancer VIP

BGP configuration:

Parameter | Firewall | Customer Edge
AS Number | 65001 | 65002
Router ID | 10.154.4.119 | Auto-assigned based on interface IP
Advertised Prefix | None | 192.168.100.0/24 le 32

This configuration uses eBGP (External BGP) between the firewall and CE nodes, with different AS numbers for each. The CE nodes share the same AS number (65002), which is the standard approach for multi-node CE deployments advertising the same VIP prefixes.

Configuring BGP in F5 Distributed Cloud Console

The F5 Distributed Cloud Console provides a centralized interface for configuring BGP peering and routing policies on your Customer Edge nodes. This section walks you through the complete configuration process.

Step 1: Configure the BGP peering

Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies

Click on Add BGP Peer, then add the following information:
- Object name
- Site where to apply this BGP configuration
- ASN
- Router ID

Here is an example of the required parameters. Then click on Peers --> Add Item and fill in the relevant fields as shown below, adapting the parameters to your requirements.
Step 2: Configure the BGP routing policies

Go to: Multi-Cloud Network Connect --> Manage --> Networking --> External Connectivity --> BGP Peers & Policies --> BGP Routing Policies

Click on Add BGP Routing Policy. Add a name for your BGP routing policy object and click on Configure to add the rules. Click on Add Item to add a rule. Here we are going to allow the /32 prefixes from our VIP subnet (192.168.100.0/24). Save the BGP Routing Policy.

Repeat the operation to create another BGP routing policy with exactly the same parameters, except for the Action Type, which should be of type Deny.

Now we have two BGP routing policies:
- One to allow the VIP prefixes (for normal operations)
- One to deny the VIP prefixes (for maintenance mode)

We still need to add a third and final BGP routing policy, in order to deny any prefixes on the CE. For that, create a third BGP routing policy with this match.

Step 3: Apply the BGP routing policies

To apply the BGP routing policies in your BGP peer object, edit the Peer and:
- Enable the BGP routing policy
- Apply the BGP routing policy objects created before for Inbound and Outbound

Fortinet FortiGate Configuration

FortiGate firewalls are widely deployed as network security appliances and support robust BGP capabilities. This section provides the minimum configuration for establishing BGP peering with Customer Edge nodes and enabling ECMP load distribution.

Step 1: Configure the Router ID and AS Number

Configure the basic BGP settings:

config router bgp
    set as 65001
    set router-id 10.154.4.119
    set ebgp-multipath enable

Step 2: Configure BGP Neighbors

Add each CE node as a BGP neighbor:

    config neighbor
        edit "10.154.4.160"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
        edit "10.154.4.33"
            set remote-as 65002
            set route-map-in "ACCEPT-CE-VIPS"
            set route-map-out "DENY-ALL"
            set soft-reconfiguration enable
        next
    end
end

Step 3: Create Prefix List for VIP Range

Define the prefix list that matches the CE VIP range:

config router prefix-list
    edit "CE-VIP-PREFIXES"
        config rule
            edit 1
                set prefix 192.168.100.0 255.255.255.0
                set ge 32
                set le 32
            next
        end
    next
end

Important: the ge 32 and le 32 parameters ensure we only match /32 prefixes within the 192.168.100.0/24 range, which is exactly what CE nodes advertise for their VIPs.
Step 4: Create Route Maps

Configure route maps to implement the filtering policies.

Inbound route map (accept VIP prefixes):

config router route-map
    edit "ACCEPT-CE-VIPS"
        config rule
            edit 1
                set match-ip-address "CE-VIP-PREFIXES"
            next
        end
    next
end

Outbound route map (deny all advertisements):

config router route-map
    edit "DENY-ALL"
        config rule
            edit 1
                set action deny
            next
        end
    next
end

Step 5: Verify BGP Configuration

After applying the configuration, verify the BGP sessions and routes.

Check BGP neighbor status:

get router info bgp summary

VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.154.4.33     4 65002    2092    2365        0    0    0 00:05:33        1
10.154.4.160    4 65002    2074    2346        0    0    0 00:14:14        1

Total number of neighbors 2

Verify ECMP routes:

get router info routing-table bgp

Routing table for VRF=0
B    192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:00:11, [1/0]
                       [20/255] via 10.154.4.33 (recursive is directly connected, port2), 00:00:11, [1/0]

Palo Alto Networks Configuration

Palo Alto Networks firewalls provide enterprise-grade security with comprehensive routing capabilities. This section covers the minimum BGP configuration for peering with Customer Edge nodes.

Note: this part assumes that the Palo Alto firewall is configured in the new "Advanced Routing Engine" mode, and we will use the logical router named "default".

Step 1: Configure ECMP parameters

set network logical-router default vrf default ecmp enable yes
set network logical-router default vrf default ecmp max-path 4
set network logical-router default vrf default ecmp algorithm ip-hash

Step 2: Configure objects, IPs and firewall rules for BGP peering

set address CE1 ip-netmask 10.154.4.160/32
set address CE2 ip-netmask 10.154.4.33/32
set address-group BGP_PEERS static [ CE1 CE2 ]
set address LOCAL_BGP_IP ip-netmask 10.154.4.119/32
set rulebase security rules ALLOW_BGP from service
set rulebase security rules ALLOW_BGP to service
set rulebase security rules ALLOW_BGP source LOCAL_BGP_IP
set rulebase security rules ALLOW_BGP destination BGP_PEERS
set rulebase security rules ALLOW_BGP application bgp
set rulebase security rules ALLOW_BGP service application-default
set rulebase security rules ALLOW_BGP action allow

Step 3: Palo Alto Configuration Summary (CLI Format)

set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry network 192.168.100.0/24
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list ALLOWED_PREFIXES type ipv4 ipv4-entry 1 action permit
set network routing-profile filters prefix-list ALLOWED_PREFIXES description "Allow only m32 inside 192.168.100.0m24"
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry network 0.0.0.0/0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry greater-than-or-equal 0
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 prefix entry less-than-or-equal 32
set network routing-profile filters prefix-list DENY_ALL type ipv4 ipv4-entry 1 action deny
set network routing-profile filters prefix-list DENY_ALL description "Deny all prefixes"
prefixes" set network routing-profile bgp filtering-profile FILTER_INBOUND ipv4 unicast inbound-network-filters prefix-list ALLOWED_PREFIXES set network routing-profile bgp filtering-profile FILTER_OUTBOUND ipv4 unicast inbound-network-filters prefix-list DENY_ALL set network logical-router default vrf default bgp router-id 10.154.4.119 set network logical-router default vrf default bgp local-as 65001 set network logical-router default vrf default bgp install-route yes set network logical-router default vrf default bgp enable yes set network logical-router default vrf default bgp peer-group BGP_PEERS type ebgp set network logical-router default vrf default bgp peer-group BGP_PEERS address-family ipv4 ipv4-unicast-default set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_INBOUND set network logical-router default vrf default bgp peer-group BGP_PEERS filtering-profile ipv4 FILTER_OUTBOUND set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-as 65002 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address interface ethernet1/2 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 local-address ip svc-intf-ip set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE1 peer-address ip 10.154.4.160 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-as 65002 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address interface ethernet1/2 set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 local-address ip svc-intf-ip set network logical-router default vrf default bgp peer-group BGP_PEERS peer CE2 peer-address ip 10.154.4.33 Step 4: Verify BGP Configuration After committing the configuration, verify the BGP sessions and routes: Check BGP neighbor status: run show advanced-routing bgp peer status logical-router default Logical Router: default ============== Peer Name: CE2 BGP State: Established, up for 00:01:55 Peer Name: CE1 BGP State: Established, up for 00:00:44 Verify ECMP routes: run show advanced-routing route logical-router default Logical Router: default ========================== flags: A:active, E:ecmp, R:recursive, Oi:ospf intra-area, Oo:ospf inter-area, O1:ospf ext 1, O2:ospf ext 2 destination protocol nexthop distance metric flag tag age interface 0.0.0.0/0 static 10.154.1.1 10 10 A 01:47:33 ethernet1/1 10.154.1.0/24 connected 0 0 A 01:47:37 ethernet1/1 10.154.1.99/32 local 0 0 A 01:47:37 ethernet1/1 10.154.4.0/24 connected 0 0 A 01:47:37 ethernet1/2 10.154.4.119/32 local 0 0 A 01:47:37 ethernet1/2 192.168.100.10/32 bgp 10.154.4.33 20 255 A E 00:01:03 ethernet1/2 192.168.100.10/32 bgp 10.154.4.160 20 255 A E 00:01:03 ethernet1/2 total route shown: 7 Implementing CE Isolation for Maintenance As discussed in Part One, one of the key advantages of BGP-based deployments is the ability to gracefully isolate CE nodes for maintenance. Here’s how to implement this in practice. Isolation via F5 Distributed Cloud Console To isolate a CE node from receiving traffic, in your BGP peer object, edit the Peer and: Change the Outbound BGP routing policy from the one that is allowing the VIP prefixes to the one that is denying the VIP prefixes The CE will stop advertising its VIP routes, and within seconds (based on BGP timers), the upstream firewall will remove this CE from its ECMP paths. 
Verification During Maintenance

On the firewall, verify the route withdrawal (in this case we are using a FortiGate firewall):

get router info bgp summary

VRF 0 BGP router identifier 10.154.4.119, local AS number 65001
BGP table version is 4
1 BGP AS-PATH entries
0 BGP community entries

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.154.4.33     4 65002    2070    2345        0    0    0 00:04:05        0
10.154.4.160    4 65002    2057    2326        0    0    0 00:12:46        1

Total number of neighbors 2

We are no longer receiving any prefixes from the 10.154.4.33 peer.

get router info routing-table bgp

Routing table for VRF=0
B    192.168.100.10/32 [20/255] via 10.154.4.160 (recursive is directly connected, port2), 00:06:34, [1/0]

And we now have only one path.

Restoring the CE in the data path

After maintenance is complete:
- Return to the BGP Peer configuration in the F5XC Console
- Restore the original export policy (permit VIP prefixes)
- Save the configuration
- On the upstream firewall, confirm that the CE prefixes are received again and that the ECMP paths are restored

Conclusion

This article has provided the complete implementation details for deploying BGP and ECMP with F5 Distributed Cloud Customer Edge nodes. You now have:
- A clear understanding of the architecture at both high and low levels
- Step-by-step instructions for configuring BGP in the F5 Distributed Cloud Console
- Ready-to-use configurations for both Fortinet FortiGate and Palo Alto Networks firewalls
- Practical guidance for implementing graceful CE isolation for maintenance

By combining the concepts from the first article with the practical configurations in this article, you can build a robust, highly available application delivery infrastructure that maximizes resource utilization, provides automatic failover, and enables zero-downtime maintenance operations. The BGP-based approach transforms your Customer Edge deployment from a traditional Active/Standby model into a fully active topology where every node contributes to handling traffic, and any node can be gracefully removed for maintenance without impacting your users.