The Blind Spot in Cloud WAF Architectures: Shared IPs and the Origin Bypass Problem
Cloud WAFs are a widely adopted security control, but they carry a structural trust assumption that most operators never examine: whitelisting a vendor's IP ranges grants access not just to your WAF instance, but to every tenant on that platform. This article examines how that assumption can be exploited, why IP-based ownership validation cannot solve it, and what mitigations, including Zero Trust-aligned architectures like F5 Distributed Cloud Customer Edge, actually close the gap.

When you deploy a Cloud WAF, whether it's F5, Imperva, Cloudflare, or any similar service, you're trusting it to stand between the internet and your origin server. You configure your DNS to point to the WAF, tighten your firewall to only accept traffic from the WAF's published IP ranges, and consider yourself protected. The traffic gets inspected, filtered, and forwarded. Job done. Except there's a subtle but serious flaw baked into this architecture that is widely overlooked, and it stems from a property that is fundamental to how Cloud WAFs work: shared egress IPs.

How Cloud WAF Proxying Actually Works

When a Cloud WAF forwards traffic to your origin, it does so from a pool of IP addresses that the vendor owns and publishes. These ranges are shared across all customers of that vendor. In fact, every major Cloud WAF provider explicitly instructs you to whitelist their entire published IP range as part of the standard onboarding process. This is not a misconfiguration on your part; it is the vendor-recommended setup. Your firewall rule, "allow traffic from this vendor's IP ranges", doesn't mean "allow traffic from my WAF instance." It means "allow traffic from anyone who also happens to be a customer of that vendor." That distinction matters enormously. An attacker who is also a customer of the same WAF vendor can point their own WAF configuration at your origin IP.
When they do, their traffic, potentially malicious and definitely uninspected by your WAF policy, arrives at your server from the very IP ranges you've whitelisted. Your firewall lets it through. Your WAF policy, which applies only to traffic routed through your tenant configuration, never sees it. Your server is now reachable by anyone on the same WAF platform, even with a $20/month account at some vendors.

Why This Matters More Than It Seems

This isn't a theoretical edge case. The attack surface is:

Broad: Every Cloud WAF customer on a given platform could potentially reach any other customer's origin.
Silent: The origin server receives the traffic without any obvious indication it bypassed the WAF policy.
Persistent: It doesn't require exploiting a vulnerability; it exploits an intentional architectural property.

The classic goal of origin hardening, hiding your real IP and only allowing WAF traffic, is partially undermined the moment you realize that "WAF traffic" and "your WAF traffic" are not the same thing.

The Missing Validation: Outbound Origin Ownership

What makes this problem interesting is the asymmetry in how Cloud WAFs handle trust. On the inbound side, WAF vendors have robust validation. Before a vendor will proxy traffic for your domain, you must prove you own it, typically via a DNS TXT record or HTTP challenge, the same mechanisms used in TLS certificate issuance. No proof, no proxying. On the outbound side, the connection from the WAF to your origin, there is essentially no equivalent validation. Any tenant can point their configuration at any IP address. The WAF will dutifully forward traffic to it. The origin has no way, at the network layer, to distinguish "traffic from my WAF tenant" from "traffic from another tenant who decided to target my server." An obvious question is: why not apply the same ownership verification to origin IPs that vendors already apply to domains?
The answer is that not all IP addresses map cleanly to ownership the way domain names do. In practice, IPs are frequently shared, multiple services behind a load balancer or shared hosting infrastructure, and in cloud environments they rotate constantly as instances scale up and down or elastic IPs are reassigned. Unlike a domain name, an IP address is not a stable, exclusive identifier of a single operator. That said, the picture is not entirely bleak for enterprises. Organizations that own their own address space do have a stable and exclusive relationship with their IPs. In these cases, ownership validation is technically feasible: a vendor could verify control via a well-known URL served at that IP, or via a PTR record in the reverse DNS zone, both of which require actual control of the address space. This would mirror the HTTP-01 and DNS-01 challenge models already used in certificate issuance. For the broader market, however, where IPs are leased from cloud providers and rotate frequently, this approach does not hold. The asymmetry therefore reflects a genuine structural limitation of IP-based identity for the general case, even if partial solutions exist for enterprises with dedicated address space.

Mitigations Available Today

Since origin IP ownership validation isn't yet a standard feature across Cloud WAF platforms, the burden falls on origin operators. There are several practical mitigations, ranging from straightforward to architecturally robust.

Shared Secret Header Authentication

The most common approach is having the WAF inject a secret header on all forwarded requests (for example, X-Origin-Token), which your origin validates on every request. Traffic missing the header, or presenting the wrong value, is rejected. Most Cloud WAF vendors support this through custom header injection rules. The weakness is operational: the secret must be kept out of logs, rotated periodically, and is only as strong as your header validation implementation.
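A minimal sketch of the origin-side check, assuming Python. The header name X-Origin-Token and the secret value are illustrative, not a vendor default; whatever pair you choose must match the custom header injection rule configured on the WAF side.

```python
import hmac

# Hypothetical shared secret; the same value must be configured in the
# WAF's custom header injection rule. Keep it out of logs and rotate it.
EXPECTED_TOKEN = "replace-with-a-long-random-value"

def is_from_my_waf(headers: dict) -> bool:
    # Reject requests missing the injected header or carrying the wrong
    # value; compare_digest avoids leaking the secret via timing.
    supplied = headers.get("X-Origin-Token", "")
    return hmac.compare_digest(supplied, EXPECTED_TOKEN)
```

During secret rotation you would briefly accept both the old and new values, so the WAF-side rule can be updated without dropping legitimate traffic.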
Mutual TLS Between WAF and Origin

Several vendors support presenting a client certificate when connecting to your origin, allowing your server to cryptographically verify that the connection came from your WAF vendor's infrastructure, not just the right IP range. This is stronger than a shared secret because it's not a value that can be accidentally leaked in a log file.

Private Tunneling (Eliminating the Public IP Entirely)

The most architecturally sound solution is to remove your origin from the public internet entirely. An outbound-only encrypted tunnel from your origin to the WAF's edge means your server never needs a publicly routable IP. There is no firewall rule to configure, no IP range to whitelist, and the shared-IP problem becomes entirely irrelevant because there is no exposed surface to exploit. This approach is increasingly the recommended baseline for new deployments, not just a hardening option.

Host Header Validation at the Origin

Your origin should always reject requests where the Host header doesn't match your expected domain. However, this provides no real protection against the shared-IP bypass, as an attacker can trivially rewrite the Host header in their own WAF configuration to match your domain before forwarding traffic to your origin. It remains good hygiene, but should not be counted as a mitigation for this specific threat.

A Note on F5 Distributed Cloud Customer Edge

F5 Distributed Cloud takes a different architectural approach that sidesteps this problem structurally. Rather than relying on shared cloud-based egress IPs, F5 XC allows you to deploy one or more Customer Edge (CE) nodes, dedicated infrastructure that runs within your environment or network perimeter. Traffic flows from the F5 global network through an encrypted tunnel directly to your CE node, rather than from a shared IP pool to a publicly exposed origin.
Because the CE node is yours, deployed in your environment and associated exclusively with your tenant, the concept of another tenant "reaching your origin from a shared IP" simply doesn't apply. The origin isn't exposed to a shared egress pool in the first place. This design is also a natural fit for Zero Trust architectures: the origin never implicitly trusts any network-level connection, and access is gated by tenant identity rather than IP address. It's an architectural answer to an architectural problem, rather than a mitigation layered on top of a fundamentally shared infrastructure model.

Conclusion

The Cloud WAF shared-IP bypass is a genuine blind spot that deserves more attention than it typically receives. The root cause is an asymmetry in how trust is established: vendors carefully validate that you own your domain before proxying it, but apply no equivalent validation to origin IP ownership. Any tenant on the same platform can route traffic to your origin. The good news is that practical mitigations exist; mTLS, secret headers, and private tunneling cover most production scenarios. The better news is that some architectures, like F5 XC with Customer Edge, eliminate the exposure at the design level rather than patching around it. If your current posture is "I've whitelisted the WAF vendor's IP ranges," it's worth asking: which WAF customers, exactly, have you let in?

This article was written by the author and formatted with the assistance of AI.

F5 Distributed Cloud (XC) Custom Routes: Capabilities, Limitations, and Key Design Considerations
This article explores how Custom Routes work in F5 Distributed Cloud (XC), why they differ architecturally from standard Load Balancer routes, and what to watch out for in real-world deployments, covering backend abstraction, Endpoint/Cluster dependencies, and critical TLS trust and Root CA requirements.

XC Distributed Cloud and how to keep the Source IP from changing with customer edges (CE)!
Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

Old applications sometimes do not accept the client IP address changing during a session/connection. How can we make certain the IP stays the same for a client? The best fix is always for the application to stop tracking users based on something as primitive as an IP address. Sometimes the issue is in the Load Balancer or ADC behind the XC RE: if persistence there is based on the source IP address, it should be changed, for example on BIG-IP to Cookie or Universal persistence, or to SSL session-based persistence if the Load Balancer is doing no decryption and is just operating at the TCP/UDP layer.

As an XC Regional Edge (RE) has many IP addresses it can use to connect to the origin servers, adding a CE for the legacy apps is a good option to keep the source IP from changing for the same client's HTTP requests during the session/transaction. Before going through this article I recommend reading the links below:

F5 Distributed Cloud – CE High Availability Options: A Comparative Exploration | DevCentral
F5 Distributed Cloud - Customer Edge | F5 Distributed Cloud Technical Knowledge
Create Two Node HA Infrastructure for Load Balancing Using Virtual Sites with Customer Edges | F5 Distributed Cloud Technical Knowledge

RE to CE cluster of 3 nodes

The new SNAT prefix option under the origin pool allows the same IP address to be seen by the origin, no matter which CE connects to the origin pool. Be careful: if you have more than a single IP with /32, then again the client may get a different IP address each time. Too many connections to the origin server may cause "inet port exhaustion" (that is what it is called on F5 BIG-IP), so be careful, as the SNAT option was added primarily for this use case. There was an older option called "LB source IP persistence", but better not to use it, as it was not as optimized and clean as this one.
RE to 2 CE nodes in a virtual site

The same SNAT pool option is not allowed for a virtual site made of 2 standalone CEs. For this we can use the ring hash algorithm. Why does this work? Well, as Kayvan explained to me, the hashing of the origin takes the CE name into account, so the same origin under 2 different CEs will get the same ring hash, and the same source IP address will be sent to the same CE to access the Origin Server. This will not work for a single 3-node CE cluster, as all 3 nodes have the same name. I have seen 503 errors when ring hash is enabled under the HTTP LB, so enable it only under the XC route object and the origin pool attached to it!

CE hosted HTTP LB with Advertise policy

In XC with CEs you can do HA with a 3-node CE cluster using Layer 2 HA based on VRRP and ARP, or Layer 3 persistence based on BGP, which works with a 3-node CE cluster or 2 CEs in a virtual site, together with control options like weight, AS prepend, or local preference at the router level. For Layer 2 I will just mention that you need to allow 224.0.0.18 for VRRP if you are migrating from BIG-IP HA, and that XC selects 1 CE to hold the active IP, which is seen in the XC logs; at the moment this selection for some reason can't be controlled. If a CE can't reach the origin servers in the origin pool, it should stop advertising the HTTP LB IP address through BGP. For these options, Deploying F5 Distributed Cloud (XC) Services in Cisco ACI - Layer Three Attached Deployment is a great example, as it shows ECMP BGP; with the BGP attributes you can easily select one CE to be active and processing connections, so that just one IP address is seen by the origin server. When a CE gets traffic, by default it prefers to send it to the origin, as "Local Preferred" is enabled under the origin pool by default. In clouds like AWS/Azure, a cloud-native LB is simply added in front of the 3-CE cluster, and the solution there is simple: just modify the LB to have persistence.
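The ring-hash behaviour described above can be illustrated with a toy sketch. This is not XC's actual implementation; the point is only that a deterministic hash of a stable key selects the same member every time, on whichever node computes it, so the origin keeps seeing one consistent source.

```python
import hashlib

# Toy illustration: a deterministic hash of a stable key always picks
# the same member of the list, regardless of which node evaluates it.
def ring_pick(key: str, members: list) -> str:
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return members[digest % len(members)]

# Hypothetical standalone CE names in a 2-node virtual site.
ces = ["ce-node-a", "ce-node-b"]
chosen = ring_pick("origin-pool-1", ces)  # same result on every call
```

Because both CEs compute the same digest for the same origin, the same CE (and therefore the same source IP) ends up handling the origin-side connection.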
Public Clouds do not support ARP, so forget about Layer 2 and play with the native LB that load balances between the CEs 😉

CE on Public Cloud (AWS/Azure/GCP)

When deploying on a public cloud, the CE can be deployed in two ways. One is through the XC GUI, adding the AWS credentials, but this way you do not have much freedom, to be honest, as you can't deploy 2 CEs, make a virtual site out of them, and add a cloud LB in front of them; it will always be a 3-CE cluster with a preconfigured cloud LB that uses all 3 CEs! Using the newer "clickops" method is much better: https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/deploy-site-aws-clickops or using Terraform in manual mode with aws as the provider (not XC/volterra, as that is the same as the XC GUI deployment): https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/deploy-aws-site-terraform This way you can make the Cloud LB use just one CE or have some client persistence, or, if traffic comes from RE to CE, implement the 2-CE-node virtual site! There is no Layer 2 ARP support, as I mentioned, in public cloud with a 3-node cluster, but there are NAT policies https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-tos/networking/nat-policies though I haven't tried them myself to comment on them. Hope you enjoyed this article!

F5 XC Distributed Cloud HTTP Header/Cookie manipulations and using the client ip/user headers
1. F5 XC distributed cloud HTTP Header manipulations

In F5 XC Distributed Cloud, some client information is saved to variables that can be inserted into HTTP headers, similar to how F5 BIG-IP saves some data that can afterwards be used in an iRule or Local Traffic Policy. By default XC will insert an XFF header with the client IP address, but what if the end servers want an HTTP header with another name to contain the real client IP? Under the HTTP load balancer, under "Other Options" and then "More Options", the "Header Options" can be found. There the predefined variables can be used for this job, as in the example below where $[client_address] is used. A list of the predefined variables for F5 XC: https://docs.cloud.f5.com/docs/how-to/advanced-security/configure-http-header-processing There is a $[user] variable, and maybe in the future, if F5 XC does the authentication of the users, this option will insert the user in a proxy chaining scenario, but for now I think it just manipulates data in the XAU (X-Authenticated-User) HTTP header.

2. Matching of the real client ip HTTP headers

You can also match an XFF header if it is inserted by a proxy device before the F5 XC nodes, for security bypass/blocking or for logging in F5 XC. For user logging from the XFF, under "Common Security Controls" create a "User Identification Policy". You can also match a regex for the IP address, in case there are multiple IP addresses in the XFF header (there could have been many proxy devices in the data path) and we want to see if just one is present. For security bypass or blocking based on XFF, under "Common Security Controls" create "Trusted Client Rules" or "Client Blocking Rules". Also, if you have a "User Identification Policy" then you can just use the "User Identifier", but it can't use regex in this case.
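As a quick illustration of the kind of regex match described above, a pattern for one specific address still matches when the XFF header carries a whole proxy chain. The addresses 203.0.113.7 and 10.0.0.2 below are illustrative upstream proxies, not values from the article.

```python
import re

# Match the single address 1.1.1.1, even inside a multi-hop XFF header.
pattern = re.compile(r"1\.1\.1\.1")

chain = "203.0.113.7, 1.1.1.1, 10.0.0.2"  # XFF with several proxy hops
match = pattern.search(chain)             # found despite the other IPs
```

The dots are escaped so that, for example, "1.1.1.1" is matched literally rather than `.` matching any character.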
I have written a separate article about User Identification: F5 XC Session tracking and logging with User Identification Policy | DevCentral

To match a regex value in the header that is just a single IP address, even when the header has many IP addresses, use the regex (1\.1\.1\.1) as an example to match address 1.1.1.1. To use the client IP address as the source IP address to the backend Origin Servers in the TCP packet after going through F5 XC (similar to removing the SNAT pool or Automap in F5 BIG-IP), use the option below. In the same way, the XAU (X-Authenticated-User) HTTP header can be used in a proxy chaining topology when there is a proxy before the F5 XC that has added this header.

Edit: Keep in mind that in some cases in XC the regex, for example (1\.1\.1\.1), should be written without parentheses as 1\.1\.1\.1, so test it, as this could be something new; I have seen it in service policy regex matches when making a new custom signature that was not in the WAAP WAF XC policy. I could make a separate article for this 🙂

XC can even send the client certificate attributes to the backend server if client-side mTLS is enabled, but that is configured on the cert tab.

3. F5 XC distributed cloud HTTP Cookie manipulations

Now you can overwrite the XC cookie, keeping the value but modifying the tags, and this is a big thing, as before it was not possible. Combined with cookies this becomes very powerful, as you can match on the User-Agent header, for example for Mozilla, and change the flags if there is a bug with that browser, etc. The feature changes cookies returned in the Response Set-Cookie header from the origin server, as it should.

F5 XC vk8s open source nginx deployment on RE
Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

Short Description

This is an example of an F5 XC virtual kubernetes (vk8s) workload on Regional Edges for rewriting URL requests and the response body.

Problem solved by this Code Snippet

The XC Distributed Cloud rewrite option under the XC routes is sometimes limited in dynamically replacing a specific string, for example replacing the string "ne" with "da" no matter where in the URL the string is located:

location ~ .*ne.* {
    rewrite ^(.*)ne(.*) $1da$2;
}

Other than that, in XC there is no default option to replace a string in the payload like the rewrite profile in F5 LTM or the iRule stream option:

sub_filter 'Example' 'NIKI';
sub_filter_types *;
sub_filter_once off;

Open source NGINX can also be used to return a custom error based on the server error:

error_page 404 /custom_404.html;
location = /custom_404.html {
    return 404 'gangnam style!';
    internal;
}

Now, with proxy protocol support in XC, NGINX can see the real client IP even for non-HTTP traffic that does not have XFF HTTP headers.
log_format niki '$proxy_protocol_addr - $remote_addr - $remote_user [$time_local] - $proxy_add_x_forwarded_for'
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';
#limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;
server {
    listen 8080 proxy_protocol;
    server_name localhost;

A cool feature that can be used for implementing HTTP redirection protection, similar to AWAF/ASM "K04211103: Configuring HTTP redirection protection", as NGINX by default can rewrite redirects (module ngx_http_proxy_module); all disallowed redirects can be sent to an XC route that has a custom response, and combining this with the map option you can implement if/else logic:

map $host $public_base_url {
    default "";
    site1.com https://public-site1.com;
    site2.com https://public-site2.com;
}

location / {
    proxy_pass http://backend;
    # Rewrite Location headers from backend
    proxy_redirect http://internal.example.com/ $public_base_url/;
}

How to use this Code Snippet

Read the description readme file in the GitHub link and modify the nginx default.conf file as per your needs.

Code Snippet Meta Information

Version: 1.25.4 Nginx
Coding Language: nginx config

Full Code Snippet

https://github.com/Nikoolayy1/xc_nginx/tree/main

F5 XC CE Debug commands through GUI cloud console and API
Why is this feature important and helpful?

With this capability, if the IPSEC/SSL tunnels are up from the Customer Edge (CE) to the Regional Edge (RE), there is no need to log into the CE when troubleshooting is needed. This is possible for Secure Mesh (SM) and Secure Mesh V2 (SMv2) CE deployments. As XC CEs are actually SDN-based ADC/proxy devices, the option to execute commands from the SDN controller, which is the XC cloud, seems a logical next step.

Using the XC GUI to send SiteCLI debug commands

The first example is sending the "netstat" command to "master-3" of a 3-node CE cluster. This is done under Home > Multi-Cloud Network Connect > Overview > Infrastructure > Sites, finding the site where you want to trigger the commands. In the VPM logs it is possible to see the command that was sent, in API format, by searching for it or for logs starting with "debug", so as to automate this task. If you capture and review the full log, you will see not only the API URL endpoint but also the POST body data that needs to be added. The VPM logs, which can also be seen from the web console and API, are the best place to start investigating issues.

XC Commands reference:
Node Serviceability Commands Reference | F5 Distributed Cloud Technical Knowledge
Troubleshooting Guidelines for Customer Edge Site | F5 Distributed Cloud Technical Knowledge
Troubleshooting Guide for Secure Mesh Site v2 Deployment | F5 Distributed Cloud Technical Knowledge

Using the XC API to send SiteCLI debug commands

The same commands can be sent using the XC API; the commands can first be tested and reviewed using the API doc and developer portals. The API documentation even has examples of how to run these commands with vesctl, the XC shell client that can be installed on any computer, or with curl. Postman can also be used instead of curl, but the best option to test commands through the API is the developer portal.
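A minimal sketch of building such an API call from a script. The URL path and the body fields below are assumptions modelled on the ves.io.schema.operate.debug exec API; confirm the exact shape against the API documentation or by capturing the call from the VPM logs as described above. Only the Authorization scheme ("APIToken") is the standard XC service-credential format.

```python
import json
import urllib.request

# NOTE: endpoint path and body fields are assumptions; verify them in
# your tenant's API docs or the VPM logs before relying on this.
def build_debug_exec_request(tenant, site, node, command, api_token):
    url = (f"https://{tenant}.console.ves.volterra.io"
           f"/api/operate/namespaces/system/sites/{site}/debug/exec")
    body = json.dumps({"node": node, "command": command}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"APIToken {api_token}")
    req.add_header("Content-Type", "application/json")
    return req  # send with urllib.request.urlopen(req) when ready
```

This mirrors what the developer portal generates, so you can compare the request it produces against the portal's "try it" output before automating anything.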
Postman can also be used by the "old school" people 😉

Link reference:
F5 Distributed Cloud Services API for ves.io.schema.operate.debug | F5 Distributed Cloud Technical Knowledge
F5 Distributed Cloud Dev Portal
ves-io-schema-operate-debug-CustomPublicAPI-Exec | F5 Distributed Cloud Technical Knowledge

Summary

The option to trigger commands through the XC GUI or even the API is really useful if, for example, there is a need to periodically monitor a CPU or memory jump with commands like "execcli check-mem" or "execcli top", or even to automate a tcpdump with "execcli vifdump xxxx". The use cases for this functionality really are endless.

Export Requests or Security Analytics from F5 Distributed Cloud
Wrote this code and thought I would share. You will need Python3 installed, and may need to use "pip" to install the "requests" package. Parameters can be displayed using the "-h" argument. A valid API Token is required for access to your tenant. One required filter is the Load Balancer name, and additional filters can be added to further confine the output. Times are in UTC, just like the API requires, and as displayed in the JSON event view in the GUI. Log entries are written to the specified file in JSON format, as it comes from the API.

Example execution:

python3 xc-log-api-extract.py test-api.json security my-tenant-name my-namespace my-api-token my-load-balancer-name 2025-01-13T17:15:00.000Z 2025-01-14T17:15:00.000Z

Here is the help page:

python3 xc-log-api-extract.py -h
usage: xc-log-api-extract.py [-h] [-srcip SRCIP] [-action ACTION] [-asorg ASORG] [-asnumber ASNUMBER] [-policy POLICY]
                             outputfilename {access,security} tenant namespace apitoken loadbalancername starttime endtime

Python program to extract XC logs

positional arguments:
  outputfilename       File to write JSON log messages to
  {access,security}    logtype to query
  tenant               Tenant name
  namespace            Namespace in tenant
  apitoken             API Token to use for accessing log data, created in Administration/IAM/Service Credentials, type "API Token"
  loadbalancername     Load Balancer name to filter on (required)
  starttime            yyyy-mm-ddThh:mm:ss.sssZ
  endtime              yyyy-mm-ddThh:mm:ss.sssZ

options:
  -h, --help           show this help message and exit
  -srcip SRCIP         Optional filter by Source IP
  -action ACTION       Optional filter by action (allow, block)
  -asorg ASORG         Optional filter by as_org
  -asnumber ASNUMBER   Optional filter by as_number
  -policy POLICY       Optional filter by policy_hits.policy_hits.policy

DeVon Jarvis, v1.2 2025/01/21

Enjoy!
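If you are scripting around this tool, a small helper for producing the required UTC timestamps can save some typo-hunting. A sketch, assuming Python 3; the format string matches the starttime/endtime shape the script's arguments expect.

```python
from datetime import datetime, timedelta, timezone

def utc_window(hours_back):
    """Return (starttime, endtime) strings in the yyyy-mm-ddThh:mm:ss.sssZ
    form, with the window ending at 'now' in UTC."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours_back)
    def fmt(t):
        # strftime has no millisecond directive, so derive it from microseconds
        return t.strftime("%Y-%m-%dT%H:%M:%S.") + "%03dZ" % (t.microsecond // 1000)
    return fmt(start), fmt(end)

start, end = utc_window(24)  # a 24-hour window ending now
```

The two values can then be passed straight through as the starttime and endtime positional arguments.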