Distributed Cloud
F5 XC and how to keep the Source IP from changing with customer edges!
The best solution will always be for the application to stop tracking users based on something as primitive as an IP address. Sometimes the issue is in a Load Balancer or ADC sitting behind the XC RE: if persistence on that ADC is based on the source IP address, it should be changed; on BIG-IP, for example, to Cookie or Universal persistence, or to SSL session persistence if the Load Balancer is doing no decryption. Because an XC Regional Edge has many IP addresses from which it can connect to the origin servers, adding a CE in front of legacy apps is a good option to keep the source IP from changing across the same client's HTTP requests during a session/transaction. Before going through this article, I recommend reading the links below:

- F5 Distributed Cloud – CE High Availability Options: A Comparative Exploration | DevCentral
- F5 Distributed Cloud - Customer Edge | F5 Distributed Cloud Technical Knowledge
- Create Two Node HA Infrastructure for Load Balancing Using Virtual Sites with Customer Edges | F5 Distributed Cloud Technical Knowledge

RE to CE cluster of 3 nodes

The new SNAT prefix option under the origin pool ensures that, no matter which CE connects to the origin pool, the origin sees the same IP address. Be careful: if you configure more than a single /32 IP, the client may again get a different IP address each time. A single IP, in turn, may cause ephemeral port exhaustion ("inet port exhaustion", as it is called on F5 BIG-IP) if there are too many connections to the origin server, so keep that in mind; the SNAT option was added primarily for this use case. There is an older option called "LB source IP persistence", but it is better not to use it, as it was not as optimized and clean as this one.

RE to 2 CE nodes in a virtual site

The same SNAT pool option is not allowed for a virtual site made of two standalone CEs. Here we can use the ring hash algorithm instead. Why does this work? As Kayvan explained to me, the hashing of the origin takes the CE name into account, so the same origin under two different CEs gets the same ring hash, and the same client source IP is always sent to the same CE to reach the Origin Server. This will not work for a single 3-node CE cluster, as all three nodes share the same name. I have seen 503 errors when ring hash is enabled under the HTTP LB, so enable it only under the XC route object and the origin pool attached to it!

CE hosted HTTP LB

In XC with CEs you can do HA with a 3-node CE cluster using Layer 2 HA based on VRRP and ARP, or Layer 3 persistence based on BGP, which works with a 3-node CE cluster or with 2 CEs in a virtual site, together with BGP's control options like weight, AS-path prepend, or local preference at the router level. If a CE can't reach the origin servers in the origin pool, it should stop advertising the HTTP LB IP address through BGP. For those options, Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment is a great example: it shows ECMP BGP, but with the BGP attributes you can easily select one CE to be active and processing connections, so that just one IP address is seen by the origin server. When a CE receives traffic, by default it prefers to send it to the origin itself, as "Local Preferred" is enabled under the origin pool by default. In clouds like AWS/Azure, a cloud-native LB is simply added in front of the 3-node CE cluster, and the solution there is as simple as configuring persistence on that LB.
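As an illustration of that default, here is a minimal Terraform sketch (volterraedge/volterra provider) of an origin pool anchored to a CE with "Local Preferred" endpoint selection. The site name, namespace, and origin IP are hypothetical, and the attribute shapes are assumptions to verify against the provider documentation; the SNAT prefix and ring hash options discussed above are configured in the Console/API and are not shown here.

```hcl
# Sketch only: names and IPs are hypothetical, and attribute shapes should
# be checked against the volterra provider docs for your version.
resource "volterra_origin_pool" "legacy-app" {
  name      = "legacy-app-pool"
  namespace = "my-namespace"

  origin_servers {
    private_ip {
      ip = "10.0.10.20"
      site_locator {
        site {
          name      = "my-ce-site"
          namespace = "system"
        }
      }
      inside_network = true # assumption: reach the origin via the CE inside interface
    }
  }

  no_tls                 = true
  port                   = 8080
  endpoint_selection     = "LOCAL_PREFERRED" # the CE receiving traffic connects to the origin itself
  loadbalancer_algorithm = "LB_OVERRIDE"
}
```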
F5 Distributed Cloud for Global Layer 3 Virtual Network Implementation

Introduction

As organizations expand their infrastructure across multiple cloud providers and on-site locations, the need for seamless network connectivity becomes paramount. F5 Distributed Cloud provides a powerful solution for connecting distributed sites while maintaining network isolation and control. This article walks through implementing a global Layer 3 Virtual Network using segments with Secure Mesh Sites v2 (SMSv2) Customer Edges. It demonstrates connectivity between private data centers and AWS VPCs. We'll explore the configuration steps and BGP peering setup.

The Challenge

Organizations need to connect multiple isolated environments (private data centers and cloud VPCs) while maintaining:
- Network segmentation and isolation
- Dynamic routing capabilities
- Consistent connectivity across heterogeneous environments
- Simple management through a unified control plane

Solution Architecture

Our implementation consists of three distinct sites:
- Private Site: Running a Customer Edge (CE) in KVM with BGP peering to the local router for subnet exposure
- AWS VPC Site 1: Hosting a CE within the VPC
- AWS VPC Site 2: Another CE deployment with complete isolation (no VPC peering with Site 1)

All sites utilize SMSv2 Customer Edges with dual-NIC configurations, connected through F5 Distributed Cloud's global network fabric.

Figure 1: Global implementation diagram showing all IP subnets across the three sites with CE deployments and network segments

Technical Deep Dive

Before diving into the configuration, it's crucial to understand what segments are and how they function within F5 Distributed Cloud.

Segments

In F5 Distributed Cloud, segments can be considered the equivalent of Layer 3 VRFs in traditional networking. Just as VRFs create separate routing table instances in conventional routers, segments provide:
- Routing isolation: Each segment maintains its own routing table, ensuring traffic separation
- Multi-tenancy support: Different segments can have overlapping IP address spaces without conflict
- Security boundaries: Traffic between segments requires explicit policy configuration
- Simplified network management: Logical separation of different network domains or applications

Key Segment Characteristics

Interface Binding Requirements:
- Segments must be explicitly attached to CE interfaces
- Each interface can be part of only one segment
- This one-to-one mapping ensures clear traffic demarcation and prevents routing ambiguity

Route Advertisement and Limitations:

Supported route types:
- Connected Routes: Routes for subnets directly configured on the segment interface are automatically advertised
- BGP Learned Routes: Routes received via BGP peering on the segment interface are propagated to other sites in the same segment

Current limitations:
- No Static Route Support: Static routes cannot currently be advertised through segment interfaces. This is an important consideration when planning your routing architecture.
- Workaround: Use BGP to advertise routes that would traditionally be static

Traffic Flow:
- Traffic entering a CE interface flows within the assigned segment
- Inter-segment communication requires a special configuration
- Routes learned on one segment remain isolated from other segments unless explicitly shared
- Only connected and BGP-learned routes are exchanged between sites within a segment

Use Cases:
- Production/Development Separation: Different segments for prod and dev environments
- Multi-tenant Deployments: Isolated segments per customer or business unit
- Compliance Requirements: Segmented networks for PCI, HIPAA, or other regulated traffic

This architectural approach provides the flexibility of traditional VRF implementations while leveraging F5 Distributed Cloud's global network capabilities.

Customer Edge Interface Architecture

Understanding CE Interface Requirements

F5 Distributed Cloud Customer Edges require careful interface planning to function correctly, especially in SMSv2 deployments with segments. Understanding the interface architecture is crucial for successful implementations.

Interface Capacity and Requirements
- Minimum requirement: Each CE must be deployed with at least two physical interfaces
- Maximum capacity: CEs support up to eight physical interfaces
- VLAN support: Sub-interfaces can be created on top of physical interfaces

Interface Types and Roles

Customer Edge interfaces serve distinct purposes within the F5 Distributed Cloud architecture:

1. Site Local Outside (SLO) Interface

The SLO interface is the "management" and control plane interface.

Primary functions:
- Zero-touch provisioning of the Customer Edge
- Establishing VPN tunnels for control plane communication with the F5 XC Global Controller
- Management traffic and orchestration commands
- Health monitoring and telemetry data transmission

Requirements:
- Must have Internet access to reach F5's global infrastructure
- Should be considered the "management interface" of the CE
- Configured on the first interface (eth0/ens3)
- Cannot be used for segment assignment

2. Site Local Inside (SLI) and Segment Interfaces

The remaining interfaces can be configured for data plane traffic:
- Site Local Inside (SLI): Used for local network connectivity without segment assignment
- Segment Interfaces: Dedicated to specific network segments (VRF-like isolation). Each interface can belong to only one segment, supports BGP peering within the segment context, and is used for segmented connectivity.

Interface Planning Considerations

When designing your CE deployment:

Two-interface minimum deployment:
- Interface 1: SLO for management and control plane
- Interface 2: Segment or SLI for data plane traffic

Multi-segment deployments:
- Require additional interfaces (one per segment plus SLO)
- Example: 4 segments need 5 interfaces (1 SLO + 4 segment interfaces)

Cloud deployments (see the Terraform sketch after this section):
- Ensure cloud instance types support the required number of network interfaces
- Remember to disable source/destination checks on all interfaces
- Consider network interface limits when planning for scale

Routing considerations for segments:
- Plan for BGP peering if you need to advertise routes beyond connected subnets
- Static routes cannot be advertised through segment interfaces yet
- Each segment interface will only advertise its directly connected subnet and routes learned via BGP on that interface
- Design your IP addressing scheme accordingly

This interface architecture ensures proper separation between management/control plane traffic and data plane traffic, while providing the flexibility needed for complex network topologies.
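As a concrete illustration of the two-interface minimum on Azure, here is a minimal Terraform sketch of SLO and SLI NICs for a CE VM. All names, subnets, and the resource group are hypothetical and assumed to be defined elsewhere; IP forwarding is enabled as the Azure equivalent of disabling source/destination checks (argument name per azurerm provider 4.x; older 3.x releases call it enable_ip_forwarding).

```hcl
# Hypothetical names/subnets; assumes an existing resource group, VNet,
# subnets, and a public IP defined elsewhere in the configuration.
resource "azurerm_network_interface" "slo" {
  name                  = "f5xc-ce-slo-nic"
  location              = azurerm_resource_group.ce.location
  resource_group_name   = azurerm_resource_group.ce.name
  ip_forwarding_enabled = true # Azure equivalent of disabling src/dst checks

  ip_configuration {
    name                          = "slo"
    subnet_id                     = azurerm_subnet.outside.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.slo.id
  }
}

resource "azurerm_network_interface" "sli" {
  name                  = "f5xc-ce-sli-nic"
  location              = azurerm_resource_group.ce.location
  resource_group_name   = azurerm_resource_group.ce.name
  ip_forwarding_enabled = true

  ip_configuration {
    name                          = "sli"
    subnet_id                     = azurerm_subnet.inside.id
    private_ip_address_allocation = "Dynamic"
  }
}
```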
Prerequisites

Before beginning the implementation, ensure you have:
- An F5 Distributed Cloud account with appropriate permissions
- Three deployed Customer Edge nodes (SMSv2 sites)
- A basic understanding of BGP configuration (if implementing BGP peering)

Step-by-Step Configuration

Step 1: Create the Network Segment

1. Navigate to Multi-Cloud Network Connect → Manage → Networking → Segments
2. Click "Add Segment"
3. Configure your segment with appropriate naming and network policies
4. Define the segment scope based on your requirements
5. Save the configuration

Figure 2: Segment creation

The segment acts as a logical network overlay that spans all participating sites, similar to extending a VRF across multiple locations in traditional MPLS networks.

Step 2: Assign Segments to CE Interfaces

1. Navigate to Multi-Cloud Network Connect → Manage → Site Management → Secure Mesh Sites v2
2. For each Customer Edge:
   - Select the CE and edit its configuration
   - Navigate to the node interface configuration
   - Modify the interface settings: select the appropriate interface (typically the second NIC, not the SLO interface), assign the created segment to this interface, and configure the interface mode as required
   - Ensure the SLO interface remains dedicated to management/control plane
   - Apply the changes

Figure 3: Node interface configuration showing segment assignment to the appropriate interface

Important: Remember that:
- The SLO interface (typically eth0/ens3) should not be used for segment assignment
- Each data plane interface can belong to only one segment
- Plan your interface allocation carefully based on your traffic segmentation requirements

Repeat this process for all participating CEs. Once complete, all sites will be connected through the assigned segment.

Figure 4: Overview of configured interfaces with segment assignments across all CE nodes

Step 3: Configure BGP Peering (Optional)

For sites requiring dynamic routing with local infrastructure:

1. Navigate to the CE's BGP configuration
2. Select the correct interface tied to the segment (e.g., "ens4")
3. Configure BGP parameters: local AS number, peer AS number, peer IP address, and network advertisements
4. Apply the configuration

Figure 5: BGP peering configuration showing interface selection tied to the segment

BGP peering enables automatic route exchange between your CE and local network infrastructure, with routes learned via BGP contained within the assigned segment's routing domain. The same peering can also be declared through the volterra Terraform provider, as in the hedged sketch below.
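A minimal sketch of what that peering might look like in Terraform. The resource and attribute names here are assumptions drawn from the volterraedge/volterra provider, and the site name, ASNs, and peer address are hypothetical; check the provider documentation for the exact schema of the BGP object in your version.

```hcl
# All names and numbers are illustrative; the volterra_bgp schema should be
# verified against the provider docs before use.
resource "volterra_bgp" "segment_peering" {
  name      = "dc-ce-bgp"
  namespace = "system"

  # Assumption: the BGP object is scoped to a site via a "where" block.
  where {
    site {
      ref {
        name      = "dc-ce-site"
        namespace = "system"
      }
    }
  }

  bgp_parameters {
    asn = 64512 # local AS number
  }

  peers {
    external {
      asn     = 64513            # peer AS number
      address = "192.168.10.254" # peer IP on the segment interface subnet
    }
  }
}
```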
Important Note on Route Advertisement:
- Segment interfaces only advertise connected routes (interface subnets) and BGP-learned routes
- Static routes are not currently supported for advertisement through segments
- If you need to advertise additional routes beyond the connected subnet, BGP peering is the only available method
- This makes BGP configuration essential for most production deployments where multiple subnets need to be accessible

Verifying Route Tables

To confirm proper route propagation:
1. Navigate to Multi-Cloud Network Connect → Overview → Infrastructure
2. Select your site name
3. Click on CE Routes
4. Apply filters as needed

Figure 6: CE Routes selection interface for viewing routing information

You should observe:
- Routes from remote sites appearing in the routing table
- Correct next-hop information pointing to remote CE IPs
- BGP-learned routes (if BGP is configured and Site Survivability is enabled)
- Routes properly isolated within their respective segments
- Only connected and BGP routes present (no static routes)

Figure 7: Route table showing routes received from other sites with next-hop information

Conclusion

F5 Distributed Cloud's Global Layer 3 Virtual Network with segments provides a robust solution for connecting distributed infrastructure across multiple environments. By leveraging segments as VRF-like constructs, organizations can achieve network isolation, multi-tenancy, and simplified management across their global infrastructure.

Key takeaways:
- Always use dual-NIC configurations for SMSv2 sites (minimum one SLO + one data plane interface)
- Understand the critical role of the SLO interface for management and control plane
- Plan interface allocation carefully; CEs support up to 8 physical interfaces plus VLAN sub-interfaces
- Understand segments as Layer 3 VRF equivalents for proper design
- Remember the one-to-one mapping between interfaces and segments
- Be aware that segments only advertise connected and BGP-learned routes (no static route support currently)
- Use BGP peering to advertise additional subnets beyond connected routes
- Disable source/destination checks for cloud-based CEs

As F5 Distributed Cloud continues to evolve, some of these considerations may change. Always refer to the latest documentation and test thoroughly in your environment.
Secure Extranet with Equinix Fabric and F5 Distributed Cloud

Why: The Challenge of Building a Secure Extranet

Establishing a secure extranet that spans multiple clouds, partners, and enterprise locations is inherently complex. Organizations face several persistent challenges:
- Technology Fragmentation: Different clouds, vendors, and networking stacks introduce inconsistency and integration friction.
- Endpoint Proliferation: Each new partner or cloud region adds more endpoints to secure and manage.
- Configuration Drift: Manual or siloed configurations across environments increase the risk of misalignment and security gaps.
- Security Exposure: Without centralized control, enforcing consistent policies across environments is difficult, increasing the attack surface.
- Operational Overhead: Managing disparate systems and connections strains NetOps, DevOps, and SecOps teams.

These challenges make it difficult to scale securely and efficiently, especially when onboarding new partners or deploying applications globally.

What: A Unified, Secure, and Scalable Extranet Solution

The joint solution from F5 and Equinix addresses these challenges by combining:
- F5® Distributed Cloud Customer Edge (CE): A virtualized network and security node deployed via Equinix Network Edge.
- Equinix Fabric®: A software-defined interconnection platform that provides private, high-performance connectivity between clouds, partners, and enterprise locations.

Together, they create a strategic point of control at the edge of your enterprise network. This enables secure, scalable, and policy-driven connectivity across hybrid and multi-cloud environments.

This solution:
- Simplifies deployment by eliminating physical infrastructure dependencies.
- Centralizes policy enforcement across all connected environments.
- Accelerates partner onboarding with pre-integrated, software-defined connectors.
- Reduces risk by isolating traffic and enforcing consistent security policies.

How: Architectural Overview

At the heart of the architecture is the F5 Distributed Cloud CE, deployed as a virtual network function (VNF) on Equinix Network Edge. This CE:
- Acts as a gateway node for each location (cloud, data center, or partner site).
- Connects to other CEs via F5's global private backbone, forming a secure service mesh.
- Integrates with F5 Distributed Cloud Console for centralized orchestration, visibility, and policy management.

The CE node(s) are interconnected to partners, vendors, etc. using Equinix Fabric, which provides:
- Private, low-latency interconnects to major cloud providers (AWS, Azure, GCP, OCI).
- Software-defined routing via Fabric Cloud Router.
- Tier-1 internet access for hybrid workloads.

This architecture enables a hub-and-spoke or full-mesh extranet topology, depending on business needs.

Key Tenets of the Solution

1. Strategic Point of Control: The CE becomes the enforcement point for traffic inspection, segmentation, and policy enforcement across all clouds and partners.
2. Unified Management: F5 Distributed Cloud Console provides a single pane of glass for managing networking, security, and application delivery policies.
3. Zero-Trust Connectivity: Built-in support for mutual TLS, IPsec, and SSL tunnels ensures encrypted, authenticated communication between nodes.
4. Rapid Partner Onboarding: Equinix's Fabric and F5 CE connectors allow new partners to be onboarded in minutes, not weeks.
5. Operational Efficiency: Automation hooks (GitOps, Terraform, APIs) reduce manual effort and configuration drift. Private interconnects and regional CE deployments help meet regulatory requirements.
Additional Links
- F5 and Equinix Partnership
- The Business Partner Exchange - An F5 Distributed Cloud Services Demonstration
- Equinix Fabric Overview
- Additional Equinix and F5 partner information
How to deploy an F5XC SMSv2 site with the help of automation

To deploy an F5XC Customer Edge (CE) in SMSv2 mode with the help of automation, it is necessary to follow the three main steps below:
1. Verify the prerequisites at the technical architecture level for the environment in which the CE will be deployed (public cloud or datacenter/private cloud)
2. Create the necessary objects at the F5XC platform level
3. Deploy the CE instance in the target environment

We will provide more details for all the steps, as well as the simplest Terraform skeleton code to deploy an F5XC CE in the main cloud environments (AWS, GCP and Azure).

Step 1: verification of architecture prerequisites

To be deployed, a CE must have an interface (which is and will always be its default interface) that has Internet access. This access is necessary to perform the installation steps and provide the "control plane" part to the CE. This interface is referred to as "Site Local Outside" or SLO. The Internet access can be provided in several ways:

"Cloud provider" type site:
- Public IP address directly on the interface
- Private IP address on the interface and use of a NAT Gateway as default route
- Private IP address on the interface and use of a security appliance (firewall type, for example) as default route
- Private IP address on the interface and use of an explicit proxy

Datacenter or "private cloud" type site:
- Private IP address on the interface and use of a security appliance (firewall type, for example) or router as default route
- Private IP address on the interface and use of an explicit proxy
- Public IP addresses on the interface and "direct" routing to the Internet

It is highly recommended (not to say required) to add at least a second interface during the site's first deployment, because depending on the infrastructure (for example, GCP) it is not possible to add network interfaces after the creation of the VM. Even on platforms where adding a network interface is possible, a reboot of the F5XC CE is needed. An F5XC SMSv2 CE can have up to eight interfaces overall. Additional interfaces are (most of the time) used as "Site Local Inside" (SLI) interfaces or "Segment interfaces" (that specific part will be covered in another article).

Basic CE flow matrix:

| Interface | Direction and protocols | Use case / purpose |
|---|---|---|
| SLO | Egress: TCP 53 (DNS), TCP 443 (HTTPS), UDP 53 (DNS), UDP 123 (NTP), UDP 4500 (IPSEC) | Registration, software download and upgrade, VPN tunnels towards F5XC infrastructure for control plane |
| SLO | Ingress: none | RE / CE use case; CE to CE use case by using F5 ADN |
| SLO | Ingress: UDP 4500 | Site Mesh Group for direct CE to CE secure connectivity over the SLO interface (no usage of F5 ADN) |
| SLO | Ingress: TCP 80 (HTTP), TCP 443 (HTTPS) | HTTP/HTTPS load balancer on the CE for WAAP use cases |
| SLI | Egress | Depends on the use case / application, but if the security constraints permit it, no restriction |
| SLI | Ingress | Depends on the use case / application, but if the security constraints permit it, no restriction |

For advanced details regarding the IPs and domains used for registration / software upgrade and for tunnel establishment towards the F5XC infrastructure, please refer to: https://docs.cloud.f5.com/docs-v2/platform/reference/network-cloud-ref#new-secure-mesh-v2-sites

Step 2: creation of necessary objects at the F5XC platform level

This step will be performed by the Terraform script by:
- Creating an SMSv2 token
- Creating an F5XC site of SMSv2 type

API certificate and terraform variables

First, it is necessary to create an API certificate.
Please follow the instructions in our official documentation, depending on the type of API certificate you want to create and use (user credential or service credential):
- https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-my-credentials
- https://docs.cloud.f5.com/docs-v2/administration/how-tos/user-mgmt/Credentials#generate-api-certificate-for-service-credentials

In the Terraform variables, these are the ones you need to modify. The "location of the api key" should be the full path where your API P12 file is stored.

```hcl
variable "f5xc_api_p12_file" {
  type        = string
  description = "F5XC tenant api key"
  default     = "<location of the api key>"
}
```

If your F5XC console URL is https://mycompany.console.ves.volterra.io, then the value for f5xc_api_url will be https://mycompany.console.ves.volterra.io/api.

```hcl
variable "f5xc_api_url" {
  type    = string
  default = "https://<tenant name>.console.ves.volterra.io/api"
}
```

When using Terraform, you will also need to export the P12 certificate password as an environment variable:

```bash
export VES_P12_PASSWORD=<password of P12 cert>
```

Creation of the SMSv2 token

This is achieved with the following Terraform code and with the "type = 1" parameter.

```hcl
#
# F5XC objects creation
#
resource "volterra_token" "smsv2-token" {
  depends_on = [volterra_securemesh_site_v2.site]
  name       = "${var.f5xc-ce-site-name}-token"
  namespace  = "system"
  type       = 1
  site_name  = volterra_securemesh_site_v2.site.name
}
```

Creation of the F5XC SMSv2 site

This is achieved with the following Terraform code (example for GCP). This is where you configure all the options you want applied at site creation.

```hcl
resource "volterra_securemesh_site_v2" "site" {
  name                    = format("%s-%s", var.f5xc-ce-site-name, random_id.suffix.hex)
  namespace               = "system"
  block_all_services      = false
  logs_streaming_disabled = true
  enable_ha               = false

  labels = {
    "ves.io/provider" = "ves-io-GCP"
  }

  re_select {
    geo_proximity = true
  }

  gcp {
    not_managed {}
  }
}
```

For instance, if you want to use a corporate proxy and have the CE tunnels pass through the proxy, the following should be added:

```hcl
custom_proxy {
  enable_re_tunnel = true
  proxy_ip_address = "10.154.32.254"
  proxy_port       = 8080
}
```

And if you want to force CE-to-RE connectivity over SSL, the following should be added:

```hcl
tunnel_type = "SITE_TO_SITE_TUNNEL_SSL"
```

Step 3: creation of the CE instance in the target environment

This step will be performed by the Terraform script by:
- Generating a cloud-init file
- Creating the F5XC site instance in the environment, based on the marketplace images or the available F5XC images

How to list F5XC available images in Azure:

```bash
az vm image list --all --publisher f5-networks --offer f5xc_customer_edge --sku f5xccebyol --output table | sort -k4 -V
```

Then check in the output for the entry with the highest version.
```
Architecture    Offer                Publisher    Sku           Urn                                                    Version
--------------  -------------------  -----------  ------------  -----------------------------------------------------  ---------
x64             f5xc_customer_edge   f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:9.2025.17    9.2025.17
x64             f5xc_customer_edge   f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.1    2024.40.1
x64             f5xc_customer_edge   f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.40.2    2024.40.2
x64             f5xc_customer_edge   f5-networks  f5xccebyol    f5-networks:f5xc_customer_edge:f5xccebyol:2024.44.1    2024.44.1
x64             f5xc_customer_edge   f5-networks  f5xccebyol_2  f5-networks:f5xc_customer_edge:f5xccebyol_2:2024.44.2  2024.44.2
```

We are going to re-use some of these parameters in the Terraform script, to instruct the Terraform code which image it should use.

```hcl
source_image_reference {
  publisher = "f5-networks"
  offer     = "f5xc_customer_edge"
  sku       = "f5xccebyol"
  version   = "9.2025.17"
}
```

Also, for Azure, it is necessary to accept the legal terms of the F5XC CE image. This needs to be performed only once, by running the following commands.

Select the Azure subscription in which you are planning to deploy the F5XC CE:

```bash
az account set -s <subscription-id>
```

Accept the terms and conditions for the F5XC CE for this subscription:

```bash
az vm image terms accept --publisher f5-networks --offer f5xc_customer_edge --plan f5xccebyol
```

How to list F5XC available images in GCP:

```bash
gcloud compute images list --project=f5-7626-networks-public --filter="name~'f5xc-ce'" --sort-by=~creationTimestamp --format="table(name,creationTimestamp)"
```

Then check in the output for the entry with the highest version.

```
NAME                        CREATION_TIMESTAMP
f5xc-ce-crt-20250701-0123   2025-07-09T02:15:08.352-07:00
f5xc-cecrt-20250701-0099-9  2025-07-02T01:32:40.154-07:00
f5xc-ce-202505151709081     2025-06-25T22:31:23.295-07:00
```

How to list F5XC available images in AWS:

```bash
aws ec2 describe-images \
  --region eu-west-3 \
  --filters "Name=name,Values=*f5xc-ce*" \
  --query "reverse(sort_by(Images, &CreationDate))[*].{ImageId:ImageId,Name:Name,CreationDate:CreationDate}" \
  --output table
```

Then check in the output for the AMI with the latest creation date.

Also, for AWS, it is necessary to accept the legal terms of the F5XC CE image. This needs to be performed only once. Go to this page in your AWS Console, then select "View purchase options" and then select "Subscribe".

Putting everything together: global overview

We are going to use Azure as the target environment to deploy the F5XC CE. The CE will be deployed with two NICs, the SLO being in a public subnet with a public IP attached to the NIC. We assume that all the prerequisites from Step 1 are met.

A Terraform skeleton for Azure is available here: https://github.com/veysph/Prod-TF/

It is not intended to be the perfect thing, just an example of the minimum basic things needed to deploy an F5XC SMSv2 CE with automation. Changes and enhancements based on the different needs you might have are more than welcome. It is really intended to be flexible and not too strict.

Structure of the terraform directory:
- provider.tf contains everything related to the needed providers
- variables.tf contains all the variables used in the terraform files
- f5xc_sites.tf contains everything related to the F5XC objects creation
- main.tf contains everything needed to start the F5XC CE in the target environment (hedged sketches of provider.tf and the cloud-init rendering in main.tf follow below)
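To make the skeleton's moving parts concrete, here is a minimal sketch of what provider.tf can look like, using the volterraedge/volterra provider together with the variables defined earlier (the azurerm block is an assumption for the Azure example):

```hcl
terraform {
  required_providers {
    volterra = {
      source = "volterraedge/volterra"
    }
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "volterra" {
  api_p12_file = var.f5xc_api_p12_file
  url          = var.f5xc_api_url
  # The P12 password is read from the VES_P12_PASSWORD environment variable.
}

provider "azurerm" {
  features {}
}
```

And a hedged sketch of the cloud-init part of main.tf: the SMSv2 registration token created above is rendered into the instance user data. The /etc/vpm/user_data path and token format are assumptions based on current CE documentation; verify them for your CE software version.

```hcl
locals {
  # Assumption: SMSv2 CEs read their registration token from /etc/vpm/user_data.
  ce_user_data = <<-EOT
    #cloud-config
    write_files:
      - path: /etc/vpm/user_data
        permissions: "0644"
        owner: root
        content: |
          token: ${volterra_token.smsv2-token.id}
  EOT
}

# Passed to the VM, e.g. custom_data = base64encode(local.ce_user_data)
# on an azurerm_linux_virtual_machine.
```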
Deployment

Make all the relevant changes in variables.tf. Don't forget to export your P12 password as an environment variable (see Step 2, API certificate and terraform variables)! Then run:

```bash
terraform init
terraform plan
terraform apply
```

Should everything be correct at each step, you should get a CE object in the F5XC console, under Multi-Cloud Network Connect → Manage → Site Management → Secure Mesh Sites v2.
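For completeness, here is a hedged sketch of the Azure VM resource in main.tf that consumes the image reference and user data shown above. All names are hypothetical, the NIC references assume SLO/SLI interfaces defined elsewhere in the configuration, and the marketplace plan block is an assumption for BYOL marketplace images.

```hcl
resource "azurerm_linux_virtual_machine" "ce" {
  name                = "f5xc-ce-01"
  location            = azurerm_resource_group.ce.location
  resource_group_name = azurerm_resource_group.ce.name
  size                = "Standard_D8s_v3" # sizing assumption; check CE requirements
  admin_username      = "cloud-user"

  network_interface_ids = [
    azurerm_network_interface.slo.id, # first NIC becomes the SLO
    azurerm_network_interface.sli.id,
  ]

  admin_ssh_key {
    username   = "cloud-user"
    public_key = file("~/.ssh/id_rsa.pub")
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "f5-networks"
    offer     = "f5xc_customer_edge"
    sku       = "f5xccebyol"
    version   = "9.2025.17"
  }

  # Assumption: marketplace images require a matching plan block.
  plan {
    publisher = "f5-networks"
    product   = "f5xc_customer_edge"
    name      = "f5xccebyol"
  }

  # Registration token rendered via cloud-init (see the locals sketch above).
  custom_data = base64encode(local.ce_user_data)
}
```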
Extending F5 ADSP: Multi-Tailnet Egress

Tailscale tailnets make private networking simple, secure, and efficient. They're quick to establish, easy to operate, and provide strong identity and network-level protection through zero-trust WireGuard mesh networking. However, while tailnets are secure, applications inside these environments still need enterprise-grade application security, especially when exposed beyond the mesh. This is where F5 Distributed Cloud (XC) App Stack comes in. As F5 XC's Kubernetes-native platform, App Stack integrates directly with Tailscale to extend F5 ADSP into tailnets. The result is that applications inside tailnets gain the same enterprise-grade security, performance, and operational consistency as in traditional environments, while also taking full advantage of Tailscale networking.
Streamlining Dev Workflows: A Lightweight Self-Service Solution for Bypassing Bot Defense Safely

Automate the update of an F5 Distributed Cloud IP prefix set that's already wired to a service policy with the "Skip Bot Defense" option set. An approved developer hits a simple, secret endpoint; the system detects their current public IP and updates the designated IP set with a /32. Bot Defense is skipped for that IP on dev/test traffic immediately. No tickets. No console spelunking. No risky, long-lived exemptions.

At a glance:
- Self-service: Developers add their current IP in seconds.
- Tight scope: Changes apply only to the dev/test services attached to that policy.
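A minimal Terraform sketch of the underlying idea, for illustration only: the caller's current public IP is detected and written into an IP prefix set as a /32. The checkip URL is one possible lookup service, and the volterra_ip_prefix_set resource and attribute names are assumptions to verify against the provider docs; the article's actual solution uses a self-service endpoint rather than Terraform.

```hcl
# Detect the caller's current public IP (hashicorp/http data source, v3+).
data "http" "dev_ip" {
  url = "https://checkip.amazonaws.com"
}

# Assumption: an ip_prefix_set object holding the /32 exemptions; this is
# the set already referenced by the "Skip Bot Defense" service policy.
resource "volterra_ip_prefix_set" "dev_bypass" {
  name      = "dev-bot-defense-bypass"
  namespace = "shared"

  prefix = ["${chomp(data.http.dev_ip.response_body)}/32"]
}
```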
F5 API Security: Discovery and Protection

Introduction

APIs are everywhere, accounting for around 83% of all internet traffic today, with API calls growing nearly 300% faster than overall web traffic. Last year, the F5 Office of the CTO estimated that the number of APIs being deployed to production could reach between 500 million and over a billion by 2030. At the time, the notion of over a billion APIs in the wild was overwhelming, made even more concerning by estimates indicating that a significant portion were unmanaged or, in some cases, entirely undocumented. Now, in the era of AI-driven development and automation, that estimate of over a billion APIs may prove to be a significant understatement. According to recent research by IDC on API Sprawl and AI Enablement, "Organizations with GenAI enhanced applications/services in production have roughly 5x more APIs than organizations not yet investing significantly in GenAI". That all makes for a very large and complicated attack surface, and complexity is the enemy of security.

Discovery, Monitoring, and Protection

So, how do we begin securing such a large and complex attack surface? It requires a continuous approach that blends visibility, management, and enforcement. This includes multi-lens Discovery and Learning to detect unknown or shadow APIs, determine authentication status, identify sensitive data, and generate accurate OpenAPI schemas. It also involves Monitoring to establish baselines for endpoint parameters, behaviors, and characteristics, enabling the detection of anomalies. Finally, we must Protect by blocking suspicious requests, applying rate limiting, and enforcing schema validation to prevent misuse. The API Security capabilities of the F5 product portfolio are essential for providing that continuous, defense-in-depth approach to protecting your APIs from DevTime to Runtime.

F5 API Security Essentials

Additional Resources
- F5 API Security Article Series: Out of the Shadows: API Discovery; Beyond Rest: Protecting GraphQL
- Deploy F5 Distributed Cloud API Discovery and Security: F5 Distributed Cloud WAAP Terraform Examples GitHub Repo
- Deploy F5 Hybrid Architectures API Discovery and Security: F5 Distributed Cloud Hybrid Security Architectures GitHub Repo
- F5 Distributed Cloud Documentation: F5 Distributed Cloud Terraform Provider Documentation; F5 Distributed Cloud Services API Documentation
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.916Views5likes2Comments