Implementing Risk-Based Actions with AI-Powered WAF: Customer Policy Paths
Why Custom policy is where risk-based actions matter most

The default policy is straightforward: it applies a broad mix of signatures, threat campaigns, and violations, with "Enhance with AI" as an optional add-on. Custom policies are where customers can accidentally recreate the same problems Risk Scoring is designed to solve, usually by combining:

- Overly broad or noisy signature selection (especially low-accuracy signatures)
- Aggressive enforcement (blocking Medium too early)
- Disabling or excluding key signatures, unintentionally reducing ML invocation

So the rest of this post is a tight, configuration-oriented walkthrough of the Custom path.

Custom policy: configuration walkthrough (decision points to operational outcomes)

Baseline:

1. Navigate to the Custom controls: LB Config → Web Application Firewall
2. Create/edit the WAF object (Metadata Name, etc.)
3. Set Security Policy = Custom
4. Choose Signature Selection by Accuracy
5. Optionally enable Enhance with AI (Risk Scoring)
6. If enabled, optionally configure Action by Risk Score (risk-based enforcement)

Step 1: Signature Selection by Accuracy (choose your baseline level)

Accuracy indicates susceptibility to false positives:

- Low: high likelihood of false positives
- Medium: some likelihood of false positives
- High: low likelihood of false positives

Note: this setting is foundational. It determines which signatures are active, and therefore the quality and volume of detection signals that feed into downstream risk evaluation. Operationally, High accuracy tends to support faster, safer enforcement; Medium/Low accuracy can expand coverage but increases the chance you'll need exceptions, investigations, or staged rollout discipline.

Step 2: Enhance with AI (turn on Risk Scoring)

Enhance with AI = On enables AI-powered risk scoring and assigns each request a High/Medium/Low risk score using layered signals. Two implementation details affect customer expectations:

- ML invocation depends on enabled signatures firing in the specified injection/execution categories.
- If teams disable or exclude those signatures, they may reduce when the model runs, changing the practical behavior of risk evaluation.

Step 3: Action by Risk Score (map risk levels to enforcement)

When Action by Risk Score is enabled:

- By default, high-risk requests are blocked.
- Users can choose whether Medium-risk requests are blocked (via dropdown).

This is the primary knob that determines how quickly a team moves from "safe enforcement" to "broad enforcement."

Recommended rollout path: Day 0 → Day 7 → Steady state

This is the most common and safest operational progression for customers.

Day 0 (safe enforcement baseline)
- Custom → Signature Selection by Accuracy = High (or High + Medium if you need broader coverage immediately)
- Enhance with AI = On
- Action by Risk Score = High
Outcome: gets to blocking quickly while minimizing availability risk. High is blocked. This is the "prove safety while stopping obvious bad" posture.

Day 7 (controlled expansion)
- Keep Custom + Enhance with AI + Action by Risk Score
- Optionally widen Signature Selection from High to High + Medium if coverage is insufficient
- Enhance with AI = On
- Action by Risk Score = High + Medium
Outcome: expands detection inputs without immediately expanding enforcement. Teams focus on what's landing in Medium and whether exclusions/disabled signatures are reducing ML invocation in key categories.

Steady state (mature enforcement)
- Custom → widen Signature Selection from High + Medium to High + Medium + Low (the broadest set)
- Enhance with AI = On
- Action by Risk Score = High + Medium
Outcome: risk outcomes become the enforcement interface: broad, consistent blocking across apps and APIs with reduced per-app tuning and fewer signature-level decisions. A staged-rollout sketch follows below.
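To make the progression concrete, here is a minimal sketch that models each stage as data and lints it for the Day 0 pitfall called out in the pitfalls section below. The field names are illustrative and do not mirror the F5 Distributed Cloud configuration schema.

```python
# Minimal sketch: model each rollout stage as data and lint it for the
# false-positive pitfall discussed in this post. Field names are illustrative
# and do not mirror the F5 Distributed Cloud configuration schema.

def lint(stage: str, accuracy: set[str], block_risk: set[str]) -> list[str]:
    """Flag combinations that tend to recreate false-positive outages."""
    warnings = []
    if stage == "day0" and "Medium" in block_risk and "Low" in accuracy:
        warnings.append("blocking Medium on Day 0 with low-accuracy signatures")
    if not accuracy:
        warnings.append("no signatures selected, so risk scoring has no signal")
    return warnings

STAGES = [
    # (stage, signature accuracy levels enabled, risk levels blocked)
    ("day0",   {"High"},                  {"High"}),
    ("day7",   {"High", "Medium"},        {"High", "Medium"}),
    ("steady", {"High", "Medium", "Low"}, {"High", "Medium"}),
    ("day0",   {"High", "Medium", "Low"}, {"High", "Medium"}),  # misconfigured
]

for stage, accuracy, block_risk in STAGES:
    for w in lint(stage, accuracy, block_risk):
        print(f"WARN {stage}: {w}")
```

Only the deliberately misconfigured last entry fires a warning; the three recommended stages pass cleanly.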
Common Pitfalls

- Avoid blocking Medium on Day 0 when including low-accuracy signatures; this is the fastest way to recreate false-positive outages.
- If you disable or exclude signatures in the key injection/execution categories, you can reduce ML invocation and change risk evaluation behavior.

Summary

Custom policies traditionally scale poorly because every app ends up with bespoke signature decisions and exception handling. Risk Scoring is designed to invert that: keep signatures as key signals, but standardize enforcement via risk outcomes. If you implement Custom with the Day 0 → Day 7 → Steady state progression above, you get a predictable path from "block safely now" to "enforce broadly later" without returning to signature-by-signature tuning as your primary operating model.

F5 Distributed Cloud – Why You Should Never Block Regional Edge IPs on Your Firewall
Introduction

A common mistake when onboarding a public-facing application onto F5 Distributed Cloud (XC) is to restrict which source IP addresses can reach the origin server. Network and security teams, following a traditional "deny all / allow what you need" approach, sometimes allow only a handful of F5 XC Regional Edge IPs through their firewall, or worse, block RE IPs entirely because they see unfamiliar traffic hitting the origin from IP ranges they don't recognize. This article explains why this is fundamentally incompatible with how F5 Distributed Cloud works, and what the consequences are.

Understanding the Distributed Architecture

When you expose an application through F5 Distributed Cloud, the platform advertises your application's FQDN via an Anycast IP address across all Regional Edges worldwide. As of the latest updates, this means your application is reachable through multiple REs across the Americas, Europe, and Asia-Pacific. Each RE acts as an independent proxy and point of presence. End users are routed to the closest RE based on BGP peering and network proximity. This is the core of F5 XC's distributed model: there is no single centralized proxy.

How Health Checks Work: Each RE Monitors Independently

This is the critical point that is often misunderstood. When you configure a Health Check and an Origin Pool with your application's public IP, every Regional Edge independently performs its own health check against your origin server. Each RE uses its own local internet breakout to reach your application; health check traffic does not traverse the F5 Global Network. This means:

- If you have an origin server with a public IP, and your Origin Pool is configured with "Public IP" (the default), then all REs will send health-check probes to your origin.
- Each RE maintains its own independent view of your origin's health status.
- On the F5 XC console, you will see the same origin IP listed multiple times, once per RE, each with its own health status.

The source IPs of these health checks come from the RE subnet ranges published in the official F5 documentation: F5 Distributed Cloud IP Address and Domain Reference.

What Happens When You Block Some RE IPs

Suppose you allow only a few RE IP ranges (for example, only European REs) but block the rest. Here is what happens:

- REs whose IPs are allowed will successfully complete health checks, and your origin will appear as UP from those locations.
- REs whose IPs are blocked will see health check failures, and your origin will be marked as DOWN from those locations.

The immediate and most visible consequence is on the F5 XC console itself. Because a majority of REs report the origin as DOWN, the console will display a degraded application health status, showing poor availability and performance metrics. This gives a misleading picture of your application's actual state: your origin is perfectly healthy, but the console reflects a largely unhealthy deployment simply because most REs cannot reach it through the firewall. This can trigger unnecessary troubleshooting, false alerts, and erode confidence in the platform's monitoring data.

Now, when an end user connects through a blocked RE (for example, a user in Asia hitting a Singapore RE), the platform behavior depends on the Endpoint Selection policy configured in your Origin Pool (simulated in the sketch below):

- Local Endpoints Only: traffic is dropped. The user gets an error. No fallback.
- Local Endpoints Preferred (default): traffic is forwarded via the F5 Global Network to an RE that has the origin marked as UP. This adds some latency.
- All Endpoints: same as Local Preferred; traffic is rerouted to a healthy RE over the Global Network. This can add major latency if the responding RE is far away from the origin.
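A toy model of that decision logic, assuming a per-RE health map (the RE names, policy strings, and fallback selection are illustrative; this is not how the platform is implemented internally):

```python
# Toy model of Origin Pool endpoint selection when the local RE sees the
# origin as DOWN. RE names, policy strings, and the health map are
# illustrative only; this is not the platform's actual routing logic.

ORIGIN_HEALTH = {  # which REs can reach the origin through the firewall
    "paris": True, "frankfurt": True,     # allowed European RE ranges
    "singapore": False, "tokyo": False,   # blocked RE ranges
}

def route_request(ingress_re: str, policy: str) -> str:
    if ORIGIN_HEALTH.get(ingress_re):
        return f"egress locally from {ingress_re} (lowest latency)"
    if policy == "local_only":
        return "DROP: local RE sees origin DOWN and no fallback is allowed"
    # local_preferred / all_endpoints: hairpin over the F5 Global Network
    healthy = [re for re, up in ORIGIN_HEALTH.items() if up]
    if not healthy:
        return "DROP: no RE has a healthy view of the origin"
    return f"forward via Global Network to {healthy[0]} (added latency)"

print(route_request("singapore", "local_only"))       # outage for Asian users
print(route_request("singapore", "local_preferred"))  # reachable, but hairpinned
```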
In the Local Endpoints Only case, users connecting through blocked REs will experience a complete outage for your application, even though the origin is healthy and reachable. In the Local Preferred or All Endpoints cases, the platform will attempt to reroute traffic through the F5 Global Network to an RE that has a healthy view of the origin. While the application will still be reachable, this introduces several problems:

- Increased latency: traffic must travel from the ingress RE to a remote egress RE over the internal F5 XC fabric before reaching your origin, instead of egressing locally to the internet.
- Suboptimal routing: a user in Tokyo may end up having their traffic routed through Paris because only European REs can reach the origin, defeating the purpose of a globally distributed edge.
- Reduced resilience: you've effectively reduced the number of egress points that can serve traffic, creating bottlenecks and potential single points of failure.

The Correct Default Approach: Allowlist All RE IP Ranges

The official F5 documentation is clear on this point: you should allowlist all F5 Distributed Cloud RE subnet ranges on your origin firewall. The published IP ranges are organized by region (Americas, Europe, Asia) and are available on the official F5 Distributed Cloud documentation page. Ideally, your origin firewall should be configured to only allow the F5 Distributed Cloud subnets for your application's listening port. This ensures that:

- All RE health checks succeed, giving the platform an accurate and complete view of your origin's health.
- Traffic egresses locally from the closest RE, providing the lowest latency path to your users.
- Only traffic routed through F5 XC can reach your origin, preventing attackers from bypassing the F5 XC security stack (WAAP, DDoS, Bot Protection, etc.) by hitting the origin directly.

What If You Want to Limit Which REs Perform Health Checks?

If you have a legitimate reason to reduce the number of REs performing health checks (for example, to reduce health check traffic on the origin, or because your application is regionally scoped), F5 XC provides a built-in mechanism for this. Instead of using "Public IP" in the Origin Pool member configuration, select "IP Address of Origin Server on Given Sites" and then assign a Virtual Site that includes only the REs you want. For example, you could create a Virtual Site that includes only European REs, reducing your health check sources from all worldwide REs down to just the ones in that region. A small audit for comparing firewall rules against the published ranges is sketched below.
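A minimal sketch of such an audit, using only the standard library. Both CIDR lists are placeholders; pull the real RE ranges from the F5 Distributed Cloud IP Address and Domain Reference page.

```python
# Minimal audit sketch: check whether every published RE subnet is covered by
# the firewall's allow rules. Both CIDR lists below are placeholders; use the
# real ranges from the F5 Distributed Cloud IP Address and Domain Reference.
import ipaddress

RE_SUBNETS = ["203.0.113.0/25", "198.51.100.0/25"]   # placeholder RE ranges
FIREWALL_ALLOW = ["203.0.113.0/24"]                   # placeholder allow rules

allow_nets = [ipaddress.ip_network(c) for c in FIREWALL_ALLOW]
for cidr in RE_SUBNETS:
    re_net = ipaddress.ip_network(cidr)
    covered = any(re_net.subnet_of(allowed) for allowed in allow_nets)
    status = "allowed" if covered else "BLOCKED -> health checks from this RE will fail"
    print(f"{cidr}: {status}")
```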
Conclusion

F5 Distributed Cloud is architected as a fully distributed system. Health monitoring is not performed from a central location; it is performed independently by every Regional Edge. This design is what enables the platform to provide low-latency, resilient application delivery worldwide. Blocking RE IPs on your origin firewall fundamentally breaks this distributed health monitoring model: it causes health checks to fail, triggers suboptimal traffic routing, and potentially increases latency. The correct and recommended approach is to allowlist all F5 Distributed Cloud RE IP ranges on your origin firewall, and use the platform's built-in Virtual Site mechanism if you need to control which REs perform health checks.

Single-click CDN Experience for F5 Distributed Cloud Load Balancers
Fundamentals

The modern CDN has evolved well beyond cache and serve. Today's platforms are intelligent edge fabrics that combine performance optimization, layered security, multicloud routing, and even workload execution at the edge. Few products embody this evolution more completely than F5 Distributed Cloud CDN, and this post explores both why CDNs matter and what sets F5's newest approach apart.

At its core, a CDN is a globally distributed system of edge servers, called PoPs or Regional Edges (REs), that cache content and handle user requests on behalf of the origin server. When a user requests a resource, DNS resolution routes them to the nearest PoP. If the resource is cached there (a "cache hit"), it's returned immediately. If not (a "cache miss"), the PoP fetches it from the origin, stores it, and returns it to the user.

The speed improvement isn't just perceptual. Reduced Round-Trip Time (RTT) correlates directly with business outcomes: search rankings, checkout completion, and ad viewability all improve with lower latency. CDNs don't just make things faster; they make digital businesses more competitive. To put the difference in concrete terms, consider how a typical 200 KB page might deliver across different scenarios (a back-of-the-envelope model follows the use-case list below).

Platform deep dive

Traditional CDNs optimize for one thing: getting cached bytes to users fast. Distributed Cloud CDN starts there but doesn't stop; it's engineered as a unified platform where content delivery, application security, multicloud connectivity, and edge compute converge under a single operational surface.

F5's approach is architecturally distinct. Most CDNs are standalone services that organizations integrate with separate security tools, load balancers, and observability stacks. The operational overhead of stitching these together and keeping policies consistent across them is substantial. F5 takes a different approach: CDN is one capability within the broader Distributed Cloud Platform, meaning it inherits the platform's DNS, load balancing, WAF, observability, and multicloud networking services. The practical result, noted by enterprise users, is that WAF rules, DDoS policies, and CDN configurations all live in the same console. There's no context switching between vendors, no policy drift between your security tool and your delivery tool, and no blind spots at the handoff between them.

In the newest product update, anyone already using a Distributed Cloud Load Balancer can enable CDN acceleration with a single click: no rearchitecting, no new deployments. Built-in cacheability insights estimate performance improvement and cost savings before activation, so teams can make informed decisions without guesswork.

Target use cases: where F5 Distributed Cloud CDN fits best

There are three primary use-case families for enabling an integrated CDN:

- Secure apps everywhere (WAAP + CDN): organizations that need comprehensive web app and API protection, with WAF, DDoS, bot defense, and content delivery unified under a single policy plane and management console.
- Modern digital experiences: dynamic, personalized applications spanning multiple public clouds, edge locations, and on-premises infrastructure that need consistent delivery regardless of where origin workloads live.
- Multicloud and edge initiatives: enterprises migrating workloads across cloud providers or deploying edge compute who need a platform that bridges delivery, security, and service mesh without re-platforming for each environment.
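As a rough illustration of the cache hit/miss economics mentioned above, here is a back-of-the-envelope model. All RTT and bandwidth figures are invented for illustration, not measurements of any platform, and TCP/TLS setup and congestion control are deliberately ignored.

```python
# Back-of-the-envelope delivery model for a 200 KB page. All numbers are
# illustrative assumptions, not measurements: 20 ms RTT to a nearby PoP,
# 120 ms RTT to a distant origin, and a simple one-RTT-then-transfer model.
PAGE_BYTES = 200 * 1024
BANDWIDTH_BPS = 50e6 / 8          # 50 Mbit/s last mile, in bytes/sec

def delivery_ms(rtt_ms: float, extra_origin_rtt_ms: float = 0.0) -> float:
    transfer_ms = PAGE_BYTES / BANDWIDTH_BPS * 1000
    return rtt_ms + extra_origin_rtt_ms + transfer_ms

print(f"direct to origin : {delivery_ms(120):6.1f} ms")
print(f"PoP cache hit    : {delivery_ms(20):6.1f} ms")
print(f"PoP cache miss   : {delivery_ms(20, extra_origin_rtt_ms=120):6.1f} ms")
```

Even this crude model shows why hit rate dominates: the cache hit roughly triples effective speed, while a miss is slightly worse than going direct.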
Visibility & Control: you can't optimize what you can't see

F5's Distributed Cloud Platform ships with unified observability that spans delivery performance and security posture. Real-time dashboards expose traffic patterns, cache efficiency metrics, origin health, and security event timelines, all from the same interface used to configure policies.

Cache efficiency isn't a static attribute either. Distributed Cloud CDN provides granular control over cache keys, TTL values, and path- or header-based caching rules, enabling teams to optimize hit rates for specific content types and access patterns. Cacheability insights indicate which web apps are candidates for acceleration.

For security operations, the edge generates rich telemetry: request rates, blocked attack types, geographic traffic distribution, and bot classification outcomes. This feeds into the same observability layer as performance data, giving teams a single pane of glass rather than separate dashboards for CDN and security. The recently announced F5 Insight capability extends this further, bringing OpenTelemetry-powered observability across BIG-IP, NGINX, and Distributed Cloud Services, consolidating performance and security intelligence across an organization's entire F5 footprint into actionable, unified visibility.

Demo Walkthrough

Final thoughts

A CDN is no longer an optimization. It's table stakes for any organization serving digital experiences to a geographically distributed audience. The question isn't whether to deploy one, but which platform best aligns with the complexity of your architecture and the ambition of your security posture. For organizations operating at the intersection of multicloud delivery, API-driven applications, and enterprise security requirements, Distributed Cloud CDN represents a compelling architectural choice: a platform that treats performance and security not as separate concerns to be stitched together, but as integrated properties of the same edge fabric. The bytes will always need to get from somewhere to your users. F5 makes that journey faster, safer, and smarter.

Additional Resources

- Product information: https://www.f5.com/products/distributed-cloud-services/cdn
- Technical documentation: https://docs.cloud.f5.com/docs-v2/content-delivery-network/how-to/cdn-mgmt/conf-cache-lb
- Feature announcement blog: https://www.f5.com/company/blog/f5-distributed-cloud-cdn-faster-apps-one-click-enablement-lower-costs
Design for resiliency and protect against cloud outages with F5 DNS and application monitoring
How to reduce DNS recovery time and know when a provider, region, or control plane is having a bad day.

Why DNS resiliency matters

Major outages happen more often than many architectures assume. The most painful part is frequently not the incident itself, but the operational loss of control that comes from tightly coupling critical functions (like DNS) to a single platform or provider. When that platform is impaired, workarounds become limited and recovery slows.

Design principle: fail safely and recover fast

A useful way to frame resiliency is that failures will occur, so the architecture should prioritize rapid, low-risk recovery. That typically means eliminating single points of dependency, automating failover where practical, and ensuring you can change traffic direction even when one control plane is degraded.

The DNS failure mode: what breaks and how long it takes

When authoritative DNS is hosted with a single vendor, a DNS incident can translate into recovery times on the order of 30 minutes to 3 hours (depending on the failure domain, TTLs, and operational procedures). With an automated, multi-provider design, recovery can be reduced dramatically, down to roughly 60 seconds in some scenarios.

Solution overview

This article describes an end-to-end resiliency pattern that combines (1) multi-provider authoritative DNS, using F5 BIG-IP DNS (commonly deployed on-prem or in IaaS) with F5 Distributed Cloud DNS as an additional authoritative provider, and (2) application assurance via F5 Distributed Cloud Synthetic Monitoring. The DNS design helps keep applications reachable during cloud-service impairments or regional failures by enabling automated failover and preserving the ability to shift control when a dependency is degraded. Synthetic DNS/HTTP checks then continuously validate external reachability and performance, so you can detect issues early and triage faster when incidents occur.

What you get from multi-provider authoritative DNS

- Higher availability: a second authoritative provider reduces the blast radius of a single-vendor outage.
- Lower query latency: globally distributed anycast networks can shorten resolver-to-authoritative RTT for many users.
- Built-in DDoS resistance: distributed networks can absorb and disperse volumetric attacks more effectively than a small on-prem footprint.
- Elastic capacity: the service can scale during traffic spikes without pre-provisioning appliances for peak usage.
- Better visibility: per-query metrics and synthetic checks help validate reachability from multiple regions.

Example: improving availability and latency for Acme Bank

Acme Bank, whose name has been changed for the purposes of this article, struggled with higher DNS latency and periodic downtime when their on-prem DNS appliances failed. They also had to plan for peak capacity in advance to handle traffic spikes, an approach that can be expensive and still leave gaps when demand exceeds forecasts. By adding Distributed Cloud DNS as an additional authoritative DNS provider alongside BIG-IP DNS, Acme Bank extended DNS serving closer to end users on a globally distributed network. This improved DNS availability and reduced query latency, while providing a platform that can scale to meet demand.

Reference architecture (high-level)

At a minimum, you are operating two authoritative DNS providers for the same zone:

- Primary authoritative: BIG-IP DNS serving the zone (often integrated with existing on-prem or cloud-adjacent infrastructure).
- Secondary/additional authoritative: Distributed Cloud DNS hosting the same zone data (via zone transfer and/or secondary zone configuration).
- Delegation: your registrar/parent zone publishes NS records so recursive resolvers can reach either provider (a quick consistency check across both providers is sketched below).
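One practical way to verify that both providers are serving the same zone data is to compare SOA serials directly against each provider's name servers. A minimal sketch using the dnspython library (the zone name and name server IPs are placeholders):

```python
# Minimal drift check: ask each authoritative provider for the zone's SOA
# serial and flag a mismatch. Zone and name server IPs are placeholders.
# Requires: pip install dnspython
import dns.resolver

ZONE = "example.com"
PROVIDERS = {
    "bigip-dns": "192.0.2.53",     # placeholder BIG-IP DNS listener
    "xc-dns": "198.51.100.53",     # placeholder Distributed Cloud DNS NS
}

serials = {}
for name, ns_ip in PROVIDERS.items():
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [ns_ip]          # query this provider directly
    soa = r.resolve(ZONE, "SOA")[0]
    serials[name] = soa.serial
    print(f"{name}: serial {soa.serial}")

if len(set(serials.values())) > 1:
    print("WARNING: providers disagree; check zone transfer (AXFR/IXFR) health")
```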
Configuration walkthrough

Step 1: Enable zone transfers from BIG-IP DNS

Configure BIG-IP DNS to allow zone transfers (AXFR/IXFR) to the Distributed Cloud DNS name servers for the zones you want to protect. Validate transfers and ensure TSIG and IP-based allowlists (as applicable) are in place to prevent unauthorized replication.

Step 2: Add the zone as secondary in Distributed Cloud DNS

Add your domain as a secondary DNS zone in Distributed Cloud DNS and point it to BIG-IP DNS for transfers. Once the initial transfer completes, verify the zone is online and that records (including SOA/NS) match expectations. Use the console to inspect zone content and confirm refresh/retry timers align with your operational goals.

Step 3: Update delegation at the registrar (planned cutover)

Update the domain delegation at your DNS registrar/parent zone to publish the desired authoritative name servers (for example, shifting primary delegation from BIG-IP DNS to Distributed Cloud DNS, or publishing both sets, depending on your strategy). Plan for propagation by lowering TTLs ahead of time when feasible, and document a rollback procedure (e.g., reverting NS to the previous set) before making changes.

Monitoring and app assurance with synthetic checks

Once secondary DNS is active, use DNS and HTTP synthetic monitoring from multiple geographies to validate end-to-end reachability. Track query success rate, response codes, and latency, and alert on anomalies that indicate partial outages (e.g., a single region failing, increased NXDOMAIN/SERVFAIL rates, or unexpected record changes).

Application assurance (synthetic monitoring)

Even with resilient DNS, application incidents still happen, and the worst-case operational pattern is learning about them from users first. Synthetic monitoring helps you detect externally visible failures early (often before customer reports), so response starts with evidence rather than guesswork. F5 Distributed Cloud Synthetic Monitoring continuously simulates DNS lookups and HTTP requests to validate the external health and performance of your applications. Over time, you can establish a baseline for availability and latency, i.e., "what normal looks like," which makes deviations easier to detect and triage.

- Global vantage points: run checks from multiple regions to avoid a single-location "false negative."
- Multiple providers: compare results across providers to separate internet-path issues from app/origin issues.
- Actionable alerts: alert on latency spikes, elevated error rates (e.g., HTTP 5xx), and DNS resolution failures.
- Fast drill-down: pivot from an alert to region-level breakdowns, timelines, and event tables to isolate where the failure is occurring.

Example triage workflow: an alert flags a critical payroll application. In the console, you can correlate a single-region degradation (for example, West US) with a sharp increase in HTTP latency and a burst of HTTP 500 responses. A regional timing breakdown can further indicate whether time is being spent in network connect, TLS negotiation, or server processing, helping you route the incident to the correct owning team (e.g., origin/app servers for that region) without hours of cross-team war-room triage. The practical outcome is reduced mean time to detect (MTTD) and faster "mean time to innocence" by quickly narrowing down which component is failing and which team should engage. A bare-bones do-it-yourself probe is sketched below.
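For teams that want a quick, scriptable sanity check alongside the platform's managed probes, a bare-bones HTTP synthetic check might look like the sketch below. The URL and thresholds are placeholders, and a real deployment would run this from several regions rather than one host.

```python
# Bare-bones synthetic HTTP probe: measure status and latency, classify the
# result against simple thresholds. URL and thresholds are placeholders.
# Requires: pip install requests
import requests

URL = "https://payroll.example.com/healthz"   # placeholder endpoint
LATENCY_WARN_S = 1.0

def probe(url: str) -> str:
    try:
        r = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        return f"FAIL: {exc.__class__.__name__}"  # DNS, connect, TLS, timeout...
    latency = r.elapsed.total_seconds()
    if r.status_code >= 500:
        return f"FAIL: HTTP {r.status_code} in {latency:.2f}s"
    if latency > LATENCY_WARN_S:
        return f"WARN: slow response ({latency:.2f}s)"
    return f"OK: HTTP {r.status_code} in {latency:.2f}s"

print(probe(URL))
```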
Video Demonstration

The following video reviews each of the challenges described in this article and shows how F5 solves them by providing cloud resiliency with DNS services and app assurance with synthetic monitoring.

Conclusion

DNS is a critical dependency, and a common amplification point during outages, so a multi-provider authoritative DNS design (BIG-IP DNS plus Distributed Cloud DNS) helps preserve reachability and control when a vendor, region, or control plane is degraded. But resiliency is strongest when DNS failover is paired with application assurance: synthetic DNS/HTTP checks provide early, external detection and rapid triage signals that shorten both MTTD and time to mitigation. Together, DNS resiliency and app assurance form an end-to-end resiliency solution, keeping users routed to healthy endpoints while simultaneously proving what is (and isn't) failing, so teams can respond faster with less guesswork. Next, validate your zone-transfer security model, define failover/runbook procedures, instrument synthetic checks and alert thresholds, and test delegation changes in a lower environment before production cutover.

Additional Resources

- F5 DNS Products
- Distributed Cloud Synthetic Monitoring

Related Technical Articles

- Accelerate Your Initiatives: Secure & Scale Hybrid Cloud Apps on F5 BIG-IP & Distributed Cloud DNS
- The Power of &: F5 Hybrid DNS solution
- Use F5 Distributed Cloud to control Primary and Secondary DNS
- Using F5 Distributed Cloud DNS Load Balancer health checks and DNS observability
- Demo Guide: F5 Distributed Cloud DNS (SaaS Console)
Integrating External Connectors in Distributed Cloud: IPSec, BGP, & Routing Policy with AWS & Cisco
Introduction

As multi-cloud architectures continue to grow, organizations increasingly need consistent, secure, and efficient connectivity between disparate environments. Whether linking private data centers, cloud VPCs, third-party virtual routers, enterprise SD-WAN domains, or partner networks, hybrid connectivity must be reliable, automated, and operationally simple to manage. In this technical article, we'll explore F5's new external segment connector, specifically designed for edge networks. We'll focus on the setup process and connectivity testing, and explore the benefits of this solution with a robust example deployment.

External Connectors bridge Customer Edge (CE) sites with third-party edge devices such as Cisco CSR and 8000v routers, using standards-based IPSec VPN and BGP. This simplifies multi-cloud and hybrid routing in complex environments, and can also be used to integrate enterprise SD-WAN routing domains and to securely connect to partner networks. This article provides an overview of building IPSec and BGP connections between an F5 CE instance in AWS and a Cisco 8000v router to connect VPC A to VPC B without using VPC peering or a Transit Gateway (TGW). We'll then share an example of applying BGP routing policy for inbound route control.

Solution: External Connectors

At a high level, the goal of the solution is to:

1. Establish an IPSec VPN between an F5 CE site and a Cisco 8000v router.
2. Bring up BGP peering over the IPSec tunnel.
3. Apply and validate routing policy for inbound route filtering.

This example topology has a CE in AWS VPC A located on the right, with two interfaces: Site Local Outside (SLO) and Site Local Inside (SLI). There is a workload behind the CE for end-to-end connectivity tests. The third-party device is a Cisco 8000v router that lives on AWS in VPC B. This device also has two interfaces, and there is a virtual machine behind the Cisco router. To summarize, this includes:

- A CE AWS site in VPC A, with SLO and SLI interfaces and a workload behind it.
- A Cisco 8000v router in VPC B, with GigabitEthernet1 and GigabitEthernet2, plus a VM behind it.
- Traffic between the two VPCs must traverse a public IP path due to the absence of VPC peering or a TGW with attachments.

This solution uses a streamlined IPSec configuration: F5 CEs support pre-built IKEv2 Phase 1 and Phase 2 profiles, drastically reducing the setup time for standard IPSec tunnels. While administrators retain the freedom to define custom profiles, the default templates accelerate configuration and limit the risk of mismatch-related failures. With consistent multi-cloud routing, running BGP directly over IPSec, the CEs ensure dynamic route exchange across hybrid environments, replacing static routing with scalable, distributed control.

Built-in observability features accelerate change validation and incident resolution. CEs support deep diagnostic tools, including:

- Tunnel and BGP status dashboards
- Node-level status granularity
- CLI tools for BGP (show ip bgp, summaries, advertised routes)
- Route tables filtered by protocol source
- Real-time tunnel throughput metrics

Administrators can now enforce consistent inbound and outbound routing behavior across distributed sites. New BGP routing policies allow fine-grained control, including IP prefix-lists, community tags, AS-path matching, and actions such as allow, deny, MED, and local-preference.

Demo Highlights

1. Establish IPSec VPN Connectivity

Utilize the pre-created default IKE Phase 1 and Phase 2 profiles for streamlined configuration. Both CE and Cisco configurations rely on correctly matching the following:

- IKEv2 Phase 1 settings
- IKEv2 Phase 2 transform sets
- Diffie-Hellman groups
- Encryption algorithms (AES-GCM-256, AES-GCM-192, AES-GCM-128)
- Pre-shared keys
- Local/remote IKE IDs
- Tunnel source/destination IPs
- BGP peer addresses

CE sites use the tunnel source interface (ens50 in the demo) and assign internal tunnel IPs (172.16.0.X/24). The remote gateway IP (44.212.3.180) represents the Cisco router's public Elastic IP. On the Cisco side, the tunnel interface uses the corresponding internal tunnel address and applies the IPSec profile. Correct IKE ID matching is critical (a small cross-check sketch follows below), and with these elements aligned, Phase 1 and Phase 2 negotiations complete successfully:

- CE local ID = Cisco remote ID
- CE remote ID = Cisco local ID
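A minimal pre-flight sketch for cross-checking the two sides before bringing the tunnel up. All values are placeholders, and the field names are illustrative rather than any vendor's configuration schema:

```python
# Minimal pre-flight sketch: cross-check the CE and Cisco IKE settings
# described above before bringing the tunnel up. Values are placeholders.
CE = {
    "phase1": "ikev2-default", "phase2": "aes-gcm-256",
    "psk": "s3cret", "local_id": "ce.example.net", "remote_id": "c8kv.example.net",
}
CISCO = {
    "phase1": "ikev2-default", "phase2": "aes-gcm-256",
    "psk": "s3cret", "local_id": "c8kv.example.net", "remote_id": "ce.example.net",
}

def preflight(ce: dict, cisco: dict) -> list[str]:
    problems = []
    # These parameters must match exactly on both peers.
    for key in ("phase1", "phase2", "psk"):
        if ce[key] != cisco[key]:
            problems.append(f"{key} mismatch: {ce[key]!r} vs {cisco[key]!r}")
    # IKE IDs must match crosswise: CE local <-> Cisco remote, and vice versa.
    if ce["local_id"] != cisco["remote_id"] or ce["remote_id"] != cisco["local_id"]:
        problems.append("IKE IDs are not crossed correctly")
    return problems

print(preflight(CE, CISCO) or "all IKE parameters line up")
```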
2. BGP Configuration: routing policy use case

A significant part of this solution is the use of a BGP routing policy for inbound filtering. With the ability to match specific prefixes and apply route filtering actions, this feature enables sophisticated traffic management strategies. Importantly, the demo illustrates the importance of having an allow rule to ensure desired prefixes remain accessible.

Configuration on the CE:

- Peer type: External
- Remote AS: 65001
- Peer interface: External Connector
- IPv4 unicast enabled
- No authentication used in the demo
- Passive mode disabled (CE actively initiates sessions)

Configuration on the Cisco:

- router bgp 65001
- Neighbor = CE tunnel IP
- IPv4 address family activated
- A few sample networks advertised

Once configured, the CE dashboard shows:

- Tunnel state: UP
- BGP state: Established
- Per-node health status (important for multi-node sites)

Use the CE site CLI commands show ip bgp neighbors and show ip bgp summary to confirm learned prefixes.

3. Routing Policy: Inbound Route Filtering

Our solution implements the following simple inbound filter:

- First rule: match exact prefix 10.222.120.0/24; action: deny
- Second rule: match any prefix (0.0.0.0/0 ge 0); action: allow

Rule ordering is critical: deny-then-allow is correct, while with allow-then-deny the deny rule is shadowed. After applying the policy to the BGP peer in the inbound direction, CE routing tables show only the permitted routes. If rule #2 is omitted, all routes disappear, an important operational lesson (illustrated in the sketch below).
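To see why ordering matters, here is a toy first-match evaluator for the policy above. It is a generic illustration of first-match route-filter semantics, not the CE's actual policy engine:

```python
# Toy first-match route filter mirroring the inbound policy above. This is a
# generic illustration of first-match semantics, not the CE's policy engine.
import ipaddress

POLICY = [
    ("deny",  "10.222.120.0/24", 24, 24),  # match this exact prefix
    ("allow", "0.0.0.0/0",        0, 32),  # match any prefix (ge 0)
]

def evaluate(prefix: str) -> str:
    net = ipaddress.ip_network(prefix)
    for action, match, ge, le in POLICY:
        match_net = ipaddress.ip_network(match)
        if net.subnet_of(match_net) and ge <= net.prefixlen <= le:
            return action
    return "deny"  # implicit deny when no rule matches

for p in ("10.222.120.0/24", "10.222.121.0/24", "192.0.2.0/24"):
    print(p, "->", evaluate(p))
# Swap the two POLICY entries and every prefix hits the allow rule first:
# the deny rule is shadowed and 10.222.120.0/24 leaks through.
```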
Video Demonstration

F5 ADSP Value Proposition: Delivering Intent-Based Connectivity

F5's Application Delivery and Security Platform (ADSP) stands out by combining quick deployment, high configurability, and robust security features. By leveraging external connectors, users experience enhanced network delivery and protection, ensuring their infrastructure efficiently supports dynamic business applications. In the context of hybrid-edge routing and IPSec/BGP integration, ADSP provides key delivery-focused advantages. The platform's ability to integrate and manage traffic across complex network environments solidifies F5's role as a leader in secure cloud networking solutions.

Key Takeaways

1. Consistent Application Delivery Across Hybrid Architectures

ADSP abstracts underlying differences between environments (public cloud, private cloud, on-prem networks), ensuring applications are reachable, secure, and responsive regardless of where components live.

2. Automated, Policy-Driven Network Behavior

With intent-based configuration and centralized policy definition, delivery engineers can:

- Push consistent routing policies to multiple CE sites
- Automate IPSec and BGP deployment workflows
- Ensure predictable route propagation and traffic paths

3. High-Performance, Distributed Data Plane

By deploying CE nodes close to workloads and connecting them via the ADSP fabric, organizations achieve:

- Lower latencies
- Resilient multi-node routing
- Efficient east-west and north-south traffic delivery

4. Integrated Observability for Delivery Teams

ADSP offers operational visibility aligned with delivery outcomes: tunnel throughput, per-node health, BGP routing changes, and endpoint reachability. This supports rapid validation and troubleshooting of app delivery pipelines.

5. Extensible Connectivity to Third-Party Edges

The External Connector capability extends ADSP's delivery fabric to Cisco routers, firewalls, non-F5 VPN endpoints, carrier devices, and third-party cloud network appliances. This ensures that app delivery services follow workloads, no matter where they move.

Conclusion

This solution illustrates how Distributed Cloud CE External Connectors streamline hybrid connectivity using industry-standard IPSec and BGP, with the added power of intuitive configuration, deep visibility, and flexible routing policy. The same approach can be used in enterprise SD-WAN integrations and for securely connecting to partner networks, with consistent routing policy and operational tooling across domains. By combining this capability with the broader F5 ADSP platform, organizations gain a consistent, automated, and delivery-focused approach to connecting, securing, and scaling applications across distributed cloud architectures.

Additional Resources

- Product information: https://f5.com/hybrid-multicloud-management
- Product documentation: https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-tos/networking/external-connectors
VMware VKS integration with F5 BIG-IP and CIS
Introduction

vSphere Kubernetes Service (VKS) is the Kubernetes runtime built directly into VMware Cloud Foundation (VCF). With CNCF-certified Kubernetes, VKS enables platform engineers to deploy and manage Kubernetes clusters while leveraging a comprehensive set of cloud services in VCF. Cloud admins benefit from support for N-2 Kubernetes versions, enterprise-grade security, and simplified lifecycle management for modern apps adoption.

As with other Kubernetes platforms, the integration with BIG-IP is done through the Container Ingress Services (CIS) component, which is hosted in the Kubernetes platform and configures the BIG-IP using the Kubernetes API. Under the hood, it uses the F5 AS3 declarative API. Note from the picture that BIG-IP integration with VKS is not limited to BIG-IP's load balancing capabilities; most BIG-IP features can be configured using this integration. These features include:

- Advanced TLS encryption, including safe key storage with Hardware Security Module (HSM) or Network & Cloud HSM support.
- Advanced WAF, L7 bot, and API protection.
- L3-L4 high-performance firewall with IPS for protocol conformance.
- Behavioral DDoS protection with cloud scrubbing support.
- Visibility into TLS traffic for inspection with 3rd-party solutions.
- Identity-aware ingress with federated SSO and integration with leading MFAs.
- AI inference and agentic support thanks to JSON and MCP protocol support.

Planning the deployment of CIS for VMware VKS

The installation of CIS in VMware VKS is performed through the standard Helm charts facility. The platform owner needs to determine beforehand:

- Whether the deployment is hosted on a vSphere (VDS) network or an NSX network. On an NSX network, VKS doesn't currently allow placing the load balancers in the same segment as the VKS cluster. No special considerations apply when hosting BIG-IP in a vSphere (VDS) network.
- Whether this is a single-cluster or a multi-cluster deployment. When using the multi-cluster option in clusterIP mode (only possible with Calico in VKS), the POD networks of the clusters cannot have overlapping prefixes.
- Which Kubernetes networking (CNI) is to be used. CIS supports both VKS-supported CNIs: Antrea (default) and Calico. From the CIS point of view, the CNI is only relevant when sending traffic directly to the PODs. See next.
- What integration with the CNI is desired between the BIG-IP and VKS, described below.

NodePort mode. This is done by making applications discoverable using Services of type NodePort. From the BIG-IP, the traffic is sent to the Nodes' IPs, where it is redistributed to the POD depending on the Traffic Policies of the Service. This is CNI agnostic; any CNI can be used.

Direct-to-POD mode. This is done by making applications discoverable using Services of type ClusterIP. Note that the CIS integration with Antrea uses Antrea's nodePortLocal mechanism, which requires an additional annotation in the Service declaration (a sketch follows below); see the CIS VKS page in F5 CloudDocs for details. The Antrea nodePortLocal mechanism allows sending the traffic directly to the POD without actually using the POD IP address. This is especially relevant for NSX because it allows access to the PODs without redistributing the POD IPs across the NSX network, which is not allowed. When using vSphere (VDS) networking, either Antrea's nodePortLocal or clusterIP with Calico can be used.
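For reference, a ClusterIP Service with Antrea's NodePortLocal enabled might look like the sketch below, built as a Python dict and emitted as JSON (kubectl accepts JSON manifests). The annotation key follows Antrea's NodePortLocal documentation as best I recall it; verify it, and any CIS-specific labels your deployment needs, against the CIS VKS page before use.

```python
# Sketch of a ClusterIP Service with Antrea NodePortLocal enabled, emitted as
# JSON (kubectl apply -f accepts JSON). The annotation key is taken from the
# Antrea NodePortLocal docs as recalled here; verify against F5 CloudDocs.
import json

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "my-app",  # placeholder application name
        "annotations": {
            "nodeportlocal.antrea.io/enabled": "true",  # enables NPL for this Service
        },
    },
    "spec": {
        "type": "ClusterIP",
        "selector": {"app": "my-app"},
        "ports": [{"port": 80, "targetPort": 8080, "protocol": "TCP"}],
    },
}

print(json.dumps(service, indent=2))  # pipe into: kubectl apply -f -
```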
Another (less common) way is the use of hostNetwork POD networking, which requires privileges for the application PODs or ingress controllers. Network-wise, this behaves similarly to nodePortLocal, but without the automatic allocation of ports.

Finally, the platform owner must decide whether the deployment is single-tier or two-tier. A single-tier deployment is one where the BIG-IP sends the traffic directly to the application PODs. This has a simpler traffic flow and easier persistence and end-to-end monitoring. A two-tier deployment sends the traffic to an ingress controller POD instead of the application PODs. This ingress controller could be Contour, NGINX Gateway Fabric, Istio, or an API gateway. This type of deployment offers the ultimate scalability and provides additional segregation between the BIG-IPs (typically owned by NetOps) and the Kubernetes cluster (typically owned by DevOps).

Once CIS is deployed, applications can be published using either the Kubernetes standard Ingress resource or F5's Custom Resources. The latter is the recommended way because it exposes most of the BIG-IP's capabilities. Details on the Ingress resource and F5 custom annotations can be found here. Details on the F5 CRDs can be found here. Please note that at the time of this writing, Antrea nodePortLocal doesn't support the TransportServer CRD; please consult your F5 representative for its availability. Detailed instructions on how to deploy CIS for VKS can be found on the CIS VKS page in F5 CloudDocs.

Application-aware MultiCluster support

MultiCluster allows exposing applications that are hosted in multiple VKS clusters and publishing them on a single VIP. BIG-IP and CIS are in charge of:

- Discovering where the PODs of the applications are hosted. Note that a given application doesn't need to be available in all clusters.
- Upon receiving a request for a given application, deciding to which cluster and Node/POD the request has to be sent. This decision is based on the weight of each cluster, the application availability, and the load balancing algorithm being applied.

Single-tier and two-tier architectures are possible, as are NodePort and ClusterIP modes. Note that at the time of this writing, Antrea in ClusterIP mode (nodePortLocal) is not yet supported; please consult your F5 representative for availability of this feature. Recall that in multi-cluster clusterIP mode the clusters' POD networks must not overlap; a quick check is sketched below.
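A quick way to verify the no-overlap prerequisite across clusters, with placeholder CIDRs:

```python
# Quick check that per-cluster POD networks do not overlap (a prerequisite
# for multi-cluster clusterIP mode as described above). CIDRs are placeholders.
import ipaddress
from itertools import combinations

POD_CIDRS = {
    "vks-cluster-a": "10.244.0.0/16",
    "vks-cluster-b": "10.245.0.0/16",
    "vks-cluster-c": "10.244.128.0/17",   # deliberately overlaps cluster-a
}

for (name_a, cidr_a), (name_b, cidr_b) in combinations(POD_CIDRS.items(), 2):
    if ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b)):
        print(f"OVERLAP: {name_a} ({cidr_a}) and {name_b} ({cidr_b})")
```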
Considerations for NSX

Load balancers cannot be placed in the same VPC segment where the VMware VKS cluster is. They can be placed in a separate VPC segment of the same VPC gateway, as shown in the next diagram. In this arrangement, the BIG-IP can be configured either in 1NIC mode or as a regular deployment, in which case the MGMT interface is typically configured through an infrastructure VLAN instead of an NSX segment. The data segment is only required to have enough prefixes to host the self-IPs of the BIG-IP units. The prefixes of the VIPs might not belong to the data segment's subnet; these additional prefixes have to be configured as static routes in the VPC gateway, and route redistribution for them must be enabled. Given that the load balancers are not in line with the traffic flow towards the VKS cluster, SNAT is required. When using SNAT pools, their prefixes can optionally be configured as additional prefixes of the data segment, like the VIPs.

Specifically for Calico, clusterIP mode cannot be used in NSX because this would require the BIG-IP to be in the same VPC segment as VMware VKS. Note also that BGP multi-hop is not feasible either, because it would require the POD cluster network prefixes to be redistributed through NSX, which is not possible.

Conclusion and final remarks

F5 BIG-IP provides unmatched deployment options and features for VMware VKS, including:

- Support for all VKS-supported CNIs, which allows sending traffic directly instead of using hostNetwork (which implies a security risk) or the common NodePort, which can incur an additional kube-proxy indirection.
- Both 1-tier and 2-tier arrangements (or both types simultaneously).
- Application-aware VIPs across multiple VMware VKS clusters via F5's Container Ingress Services, a unique feature in the industry.
- Securing applications with the wide range of L3-L7 security features provided by BIG-IP, including Advanced WAF and Application Access.

To complete the circle, this integration also provides IP address management (IPAM), which gives DevOps teams great flexibility. All of this is available regardless of the form factor of the BIG-IP (Virtual Edition, appliance, or chassis), allowing great scalability and multi-tenancy options. In NSX deployments, the recommended form factor is Virtual Edition in order to connect to the NSX segments. We look forward to hearing your experience and feedback on this article.

How I Did It - "Decoupling Access Points from CloudVision AGNI with a BIG-IP RADIUS Proxy"
Modern network access control platforms like Arista CloudVision AGNI are designed to be centralized policy engines, not edge-facing protocol endpoints. Introducing a BIG-IP as a RADIUS proxy between external access points (APs) and AGNI aligns the architecture with that design intent while solving several real-world operational, security, and scalability challenges.

Leverage BIG-IP 17.1 Distributed Cloud Services to Integrate F5 Distributed Cloud Bot Defense
Introduction:

The F5 Distributed Cloud (XC) Bot Defense protects web and mobile properties from automated attacks by identifying and mitigating malicious bots. Bot Defense uses JavaScript and API calls to collect telemetry and mitigate malicious users. F5 Distributed Cloud (XC) Bot Defense is available in Standard and Enterprise service levels. In both service levels, Bot Defense is available for traffic from web, web scraping, and mobile; web scraping is only applicable to web endpoints. This article will show you how to configure and use F5 Distributed Cloud Bot Defense (XC Bot Defense) on BIG-IP version 17.1 and above, and how to monitor the solution on F5 Distributed Cloud Console (XC Console).

Prerequisites:

- A valid XC Console account. If you don't have an account, visit Create a Distributed Cloud Console Account.
- An Organization plan. If you don't have an Organization plan, upgrade your plan.

Getting Started:

Log in to F5 XC Console:

- If XC Bot Defense isn't enabled, a Bot Defense landing page appears. Select Request Service to enable XC Bot Defense.
- If XC Bot Defense is enabled, you will see the tiles. Select Bot Defense.

Verify you are in the correct Namespace. If your Namespace does not have any Protected Applications, you will see the following page. Click Add Protected Application.

When you select a Namespace that has been configured with Protected Applications, you will see this page. Scroll down to Manage, click Applications, then click Add Application.

The Protected Application page is presented. Enter:

- Name
- Labels
- Description

Select the Application Region (US in this example) and the Connector Type (BIG-IP iApp for this demo; Cloudfront and Custom are other available connectors). Scroll to the bottom and click Save and Exit.

That will take you back to the Protected Applications page. Verify your Application is listed with all the metadata you supplied. Click the three ellipses to the right, scroll down into the highlighted area, and click and copy each of App ID, Tenant ID, and API Key. Copy and save each value to a location where you can access it in the next steps (one option is environment variables, sketched after this section). That completes the configuration on F5 XC Console.

Log in to your BIG-IP.

You will notice that in version 17.1 and above you have a new selection along the left pane called Distributed Cloud Services. Expand it and you will see all the latest integrations F5 provides:

- Application Traffic Insight
- Bot Defense
- Client-Side Defense
- Account Protection & Authentication Intelligence
- Cloud Services

This article, as stated before, will focus on Bot Defense. Look for future articles that will focus on the other integrations.

On the Main tab, click Distributed Cloud Services > Bot Defense > Bot Profiles and select Create. This brings up the General Properties page, where you will enter required and optional information. Mandatory items have a blue line on the edge. Supply:

- Name
- Application ID (from the previous steps)
- Tenant ID (from the previous steps)
- API Hostname (Web is filled in for you)
- API Key (from the previous steps)

In the JS Injection Configuration section, the BIG-IP Handles JS Injections field is checked by default; if you uncheck the field, follow the note given in the Web UI.

Protected Endpoint(s) - Web: supply either the URI or IP of the host application, along with the path and method you are protecting on the protected endpoint. In the following image, I have selected Advanced to show more detail of what is available. Again, mandatory fields have a blue indicator, here the Protection Pool and SSL Profile. Click Finished when complete.
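As noted above when copying the App ID, Tenant ID, and API Key, avoid leaving them in plain notes or hardcoding them in scripts. A minimal sketch of reading them from environment variables (the variable names are illustrative, not an F5 convention):

```python
# Minimal sketch: keep the three identifiers copied from the XC Console in
# environment variables rather than hardcoding them. Names are illustrative.
import os

def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise SystemExit(f"missing required environment variable: {name}")
    return value

XC_APP_ID = require_env("XC_APP_ID")
XC_TENANT_ID = require_env("XC_TENANT_ID")
XC_API_KEY = require_env("XC_API_KEY")   # treat as a secret; never log it

print(f"loaded credentials for app {XC_APP_ID} (tenant {XC_TENANT_ID})")
```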
One final step completes the setup. Go to the Main tab, Local Traffic > Virtual Servers > Virtual Server List, and select the Virtual Server you are going to apply the Bot Defense profile to. Click Distributed Cloud Services on the top banner. Under Service Settings > Bot Defense, set to Enable, then select the Bot Defense profile you created in the steps above. Then click Update.

You have now successfully integrated BIG-IP Distributed Cloud Services on version 17.1 with F5 Distributed Cloud Bot Defense. One final visual is the dashboard for F5 Distributed Cloud Bot Defense. This is where you will observe and monitor bots and the actions taken against bots and your protected applications.

F5 XC Bot Defense on BIG-IP 17.1 Demo:

Conclusion:

I hope you were able to benefit from this tutorial. I was able to show how quickly and easily you can configure F5 Distributed Cloud Bot Defense on BIG-IP v17.1 using the built-in Distributed Cloud Services integration.

Related Links:

- https://www.f5.com/cloud
- https://www.f5.com/cloud/products/bot-defense
- BIG-IP Bot Defense on 14.x-16.x

Where SASE Ends and ADSP Begins, The Dual-Plane Zero Trust Model
Introduction

Zero Trust Architecture (ZTA) mandates "never trust, always verify": explicit policy enforcement across every user, device, network, application, and data flow, regardless of location. The challenge is that ZTA isn't a single product. It's a model that requires enforcement at multiple planes. Two converged platforms cover those planes: SASE at the access edge, and F5 ADSP at the application edge. This article explains what each platform does, where the boundary sits, and why both are necessary.

Two Planes, One Architecture

SASE and F5 ADSP are both converged networking and security platforms. Both deploy across hardware, software, and SaaS. Both serve NetOps, SecOps, and PlatformOps through unified consoles. But they enforce ZTA at different layers, and at different scales. SASE secures the user/access plane: it governs who reaches the network and under what conditions, using ZTNA (Zero Trust Network Access), SWG, CASB, and DLP. F5 ADSP secures the application plane: it governs what authenticated sessions can actually do once traffic arrives, using WAAP, bot management, API security, and ZTAA (Zero Trust Application Access). The NIST SP 800-207 distinction is useful here: SASE houses the Policy Decision Point for network access; ADSP houses the Policy Enforcement Point at the application layer. Neither alone satisfies the full ZTA model.

The Forward/Reverse Proxy Split

The architectural difference comes down to proxy direction. SASE is a forward proxy: employee traffic terminates at an SSE PoP, where identity and device posture are checked before content is retrieved on the user's behalf. SD-WAN steers traffic intelligently across MPLS, broadband, 5G, or satellite based on real-time path quality. SSE enforces CASB, RBI, and DLP policies before delivery. F5 ADSP is a reverse proxy: traffic destined for an application terminates at ADSP first, where L4-7 inspection, load balancing, and policy enforcement happen before the request reaches the backend. ADSP understands application protocols, session behavior, and traffic patterns, enabling health monitoring, TLS termination, connection multiplexing, and granular authorization across BIG-IP (hardware, virtual, cloud), NGINX, BIG-IP Next for Kubernetes (BNK), and BIG-IP CNE.

The scale difference matters: ADSP handles consumer-facing traffic at orders of magnitude higher volume than SASE handles employee access. This is why full platform convergence only makes sense at SMB scale; enterprise organizations operate them as distinct, specialized systems owned by different teams.

ZTA Principles Mapped to Each Platform

ZTA requires continuous policy evaluation, not just at initial authentication, but throughout every session. The list below maps NIST SP 800-207 principles to how each platform implements them.

- Verify explicitly. SASE: identity and device posture evaluated per session at the SSE PoP. F5 ADSP: L7 authorization per request: token validation, API key checks, behavioral scoring.
- Least privilege. SASE: ZTNA grants per-application, per-session access, with no implicit lateral movement. F5 ADSP: the API gateway enforces method/endpoint/scope, with no over-permissive routes.
- Assume breach. SASE: CASB and DLP monitor post-access behavior, with continuous posture re-evaluation. F5 ADSP: WAF and bot mitigation inspect every payload; micro-segmentation at service boundaries.
- Continuous validation. SASE: real-time endpoint compliance; access revoked on posture drift. F5 ADSP: ML behavioral baselines detect anomalous request patterns mid-session.

Use Case Breakdown

Secure Remote Access. SASE enforces ZTNA, validating identity, MFA, and endpoint compliance before granting access. F5 ADSP picks up from there, enforcing L7 authorization continuity: token inspection, API gateway policy, and traffic steering to protected backends. A compromised identity that passes ZTNA still faces ADSP's per-request behavioral inspection.

Web Application and API Protection (WAAP). SASE pre-filters known malicious IPs and provides initial TLS inspection, reducing volumetric noise. F5 ADSP delivers full-spectrum WAAP in-path, running signature, ML, and behavioral WAF models simultaneously, where application context is fully visible. SASE cannot inspect REST API schemas, GraphQL mutation intent, or session-layer business logic. ADSP can.

Bot Management. SASE blocks bot C2 communications and applies rate limits at the network edge. F5 ADSP handles what gets through: JavaScript telemetry challenges, ML-based device fingerprinting, and human-behavior scoring that distinguishes legitimate automation (CI/CD, partner APIs) from credential stuffing and scraping, regardless of source IP reputation.

AI Security. SASE applies CASB and DLP policies to block sensitive data uploads to external AI services and discover shadow AI usage across the workforce. F5 ADSP protects custom AI inference endpoints: prompt injection filtering, per-model rate limiting, request schema validation, and encrypted traffic inspection.

The Handoff Gap, and How to Close It

The most common zero trust failure in hybrid architectures isn't within either platform. It's the handoff between them. ZTNA grants access, but session context (identity claims, device posture score, risk level) doesn't automatically propagate to the application plane. The fix is explicit context propagation: SASE injects headers carrying identity and posture signals; ADSP policy engines consume them for L7 authorization decisions. This closes the gap between "who is allowed to connect" and "what that specific session is permitted to do" (a sketch of the consuming side follows below).
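To illustrate the pattern (not any vendor's actual header names), an application-plane policy hook might consume propagated context like this. The header names and thresholds are invented for the example; real deployments would use whatever header contract the SASE and ADSP teams agree on, plus a signature or mTLS check so the headers cannot be spoofed by clients.

```python
# Illustrative sketch of the context-propagation pattern described above.
# Header names and thresholds are invented; they are not a product contract.
RISKY_METHODS = {"POST", "PUT", "DELETE"}

def authorize(headers: dict, method: str, path: str) -> tuple[bool, str]:
    posture = int(headers.get("x-device-posture-score", "0"))  # hypothetical header
    identity = headers.get("x-authenticated-user")              # hypothetical header
    if not identity:
        return False, "no propagated identity; deny"
    if method in RISKY_METHODS and posture < 70:
        return False, f"{identity}: posture {posture} too low for {method} {path}"
    return True, f"{identity}: allowed {method} {path} (posture {posture})"

print(authorize({"x-authenticated-user": "alice", "x-device-posture-score": "85"},
                "POST", "/api/payments"))
print(authorize({"x-authenticated-user": "bob", "x-device-posture-score": "40"},
                "DELETE", "/api/records/7"))
```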
Conclusion

SASE and F5 ADSP are not competing platforms. They are complementary enforcement planes. SASE answers: can this user reach the application? ADSP answers: what can this session do once it arrives? Organizations that deploy only one leave systematic gaps. Together, with explicit context propagation at the handoff, they deliver the end-to-end zero trust coverage that NIST SP 800-207 actually requires.

Related Content

- Why SASE and ADSP are complementary platforms