F5 ASM/AWAF – violations logged but no learning suggestions generated
Hey everyone, running into a strange behavior with F5 ASM and hoping someone has seen this before.

Setup:
- Explicit/closed parameter list (only allowed parameters defined; everything else triggers a violation)
- "Illegal Parameter" violation has Learn, Alarm, and Block all enabled
- Parameter learning mode is set to Always
- Violations are appearing correctly in the event logs
- No blocked IP addresses or exceptions

The Problem: Despite all of the above, no learning suggestions are being generated for the illegal parameter violations, except one on the Traffic Learning page.

What I noticed: After digging through the logs, I found a pattern:
- The one request that triggered only the illegal parameter violation (with a valid URL) → learning suggestion WAS generated
- Requests that triggered illegal parameter + illegal URL or illegal file type simultaneously → no learning suggestion generated

The vast majority of my traffic falls into the second category, which is why the suggestions page looks empty.

My question: Is there any documented behavior in ASM/AWAF where requests triggering multiple severe violations (illegal URL + illegal file type + illegal parameter together) are suppressed from generating learning suggestions? Or is something else going on here? Has anyone run into this and found a workaround other than manually adding parameters from the event log?

Thanks in advance.

The Blind Spot in Cloud WAF Architectures: Shared IPs and the Origin Bypass Problem
Cloud WAFs are a widely adopted security control, but they carry a structural trust assumption that most operators never examine: whitelisting a vendor's IP ranges grants access not just to your WAF instance, but to every tenant on that platform. This article examines how that assumption can be exploited, why IP-based ownership validation cannot solve it, and what mitigations, including Zero Trust-aligned architectures like F5 Distributed Cloud Customer Edge, actually close the gap.

When you deploy a Cloud WAF, whether it's F5, Imperva, Cloudflare, or any similar service, you're trusting it to stand between the internet and your origin server. You configure your DNS to point to the WAF, tighten your firewall to only accept traffic from the WAF's published IP ranges, and consider yourself protected. The traffic gets inspected, filtered, and forwarded. Job done. Except there's a subtle but serious flaw baked into this architecture that is widely overlooked, and it stems from a property that is fundamental to how Cloud WAFs work: shared egress IPs.

How Cloud WAF Proxying Actually Works

When a Cloud WAF forwards traffic to your origin, it does so from a pool of IP addresses that the vendor owns and publishes. These ranges are shared across all customers of that vendor. In fact, every major Cloud WAF provider explicitly instructs you to whitelist their entire published IP range as part of the standard onboarding process. This is not a misconfiguration on your part; it is the vendor-recommended setup.

Your firewall rule, "allow traffic from this vendor's IP ranges", doesn't mean "allow traffic from my WAF instance." It means "allow traffic from anyone who also happens to be a customer of that vendor." That distinction matters enormously. An attacker who is also a customer of the same WAF vendor can point their own WAF configuration at your origin IP.
When they do, their traffic, potentially malicious and definitely uninspected by your WAF policy, arrives at your server from the very IP ranges you've whitelisted. Your firewall lets it through. Your WAF policy, which applies only to traffic routed through your tenant configuration, never sees it. Your server is now reachable by anyone on the same WAF platform, in some cases for as little as a $20/month account.

Why This Matters More Than It Seems

This isn't a theoretical edge case. The attack surface is:

- Broad: Every Cloud WAF customer on a given platform could potentially reach any other customer's origin.
- Silent: The origin server receives the traffic without any obvious indication it bypassed the WAF policy.
- Persistent: It doesn't require exploiting a vulnerability; it exploits an intentional architectural property.

The classic goal of origin hardening, hiding your real IP and only allowing WAF traffic, is partially undermined the moment you realize that "WAF traffic" and "your WAF traffic" are not the same thing.

The Missing Validation: Outbound Origin Ownership

What makes this problem interesting is the asymmetry in how Cloud WAFs handle trust. On the inbound side, WAF vendors have robust validation. Before a vendor will proxy traffic for your domain, you must prove you own it, typically via a DNS TXT record or HTTP challenge, the same mechanisms used in TLS certificate issuance. No proof, no proxying.

On the outbound side, the connection from the WAF to your origin, there is essentially no equivalent validation. Any tenant can point their configuration at any IP address. The WAF will dutifully forward traffic to it. The origin has no way, at the network layer, to distinguish "traffic from my WAF tenant" from "traffic from another tenant who decided to target my server."

An obvious question is: why not apply the same ownership verification to origin IPs that vendors already apply to domains?
The answer is that not all IP addresses map cleanly to ownership the way domain names do. In practice, IPs are frequently shared, with multiple services behind a load balancer or shared hosting infrastructure, and in cloud environments they rotate constantly as instances scale up and down or elastic IPs are reassigned. Unlike a domain name, an IP address is not a stable, exclusive identifier of a single operator.

That said, the picture is not entirely bleak for enterprises. Organizations that own their own address space do have a stable and exclusive relationship with their IPs. In these cases, ownership validation is technically feasible: a vendor could verify control via a well-known URL served at that IP, or via a PTR record in the reverse DNS zone, both of which require actual control of the address space. This would mirror the HTTP-01 and DNS-01 challenge models already used in certificate issuance. For the broader market, however, where IPs are leased from cloud providers and rotate frequently, this approach does not hold. The asymmetry therefore reflects a genuine structural limitation of IP-based identity for the general case, even if partial solutions exist for enterprises with dedicated address space.

Mitigations Available Today

Since origin IP ownership validation isn't yet a standard feature across Cloud WAF platforms, the burden falls on origin operators. There are several practical mitigations, ranging from straightforward to architecturally robust.

Shared Secret Header Authentication

The most common approach is having the WAF inject a secret header on all forwarded requests (for example, X-Origin-Token), which your origin validates on every request. Traffic missing the header, or presenting the wrong value, is rejected. Most Cloud WAF vendors support this through custom header injection rules. The weakness is operational: the secret must be kept out of logs, rotated periodically, and is only as strong as your header validation implementation.
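At the origin, that validation can be as small as a constant-time header check. A minimal sketch in Python, assuming the header name and token value shown in the text (both are placeholders; a real deployment loads the secret from a secrets manager and rotates it):

```python
import hmac

# Placeholder secret for illustration only; never hard-code this in
# production, and keep it out of access logs.
ORIGIN_TOKEN = "example-secret"

def is_from_my_waf(headers: dict) -> bool:
    """Return True only if the request carries the expected secret header."""
    presented = headers.get("X-Origin-Token", "")
    # hmac.compare_digest avoids the timing side channel that a plain
    # string comparison would leak to an attacker probing the origin.
    return hmac.compare_digest(presented, ORIGIN_TOKEN)

# Traffic forwarded by your WAF tenant passes the check...
print(is_from_my_waf({"X-Origin-Token": "example-secret"}))  # True
# ...while traffic arriving from the same shared egress IPs via another
# tenant's configuration does not.
print(is_from_my_waf({}))  # False
```

The constant-time comparison is the one non-obvious design choice: a naive `==` lets an attacker recover the token byte by byte through response-timing differences.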
Mutual TLS Between WAF and Origin

Several vendors support presenting a client certificate when connecting to your origin, allowing your server to cryptographically verify that the connection came from your WAF vendor's infrastructure, not just the right IP range. This is stronger than a shared secret because it's not a value that can be accidentally leaked in a log file.

Private Tunneling (Eliminating the Public IP Entirely)

The most architecturally sound solution is to remove your origin from the public internet entirely. An outbound-only encrypted tunnel from your origin to the WAF's edge means your server never needs a publicly routable IP. There is no firewall rule to configure, no IP range to whitelist, and the shared-IP problem becomes entirely irrelevant because there is no exposed surface to exploit. This approach is increasingly the recommended baseline for new deployments, not just a hardening option.

Host Header Validation at the Origin

Your origin should always reject requests where the Host header doesn't match your expected domain. However, this provides no real protection against the shared-IP bypass, as an attacker can trivially rewrite the Host header in their own WAF configuration to match your domain before forwarding traffic to your origin. It remains good hygiene, but should not be counted as a mitigation for this specific threat.

A Note on F5 Distributed Cloud Customer Edge

F5 Distributed Cloud takes a different architectural approach that sidesteps this problem structurally. Rather than relying on shared cloud-based egress IPs, F5 XC allows you to deploy one or more Customer Edge (CE) nodes, dedicated infrastructure that runs within your environment or network perimeter. Traffic flows from the F5 global network through an encrypted tunnel directly to your CE node, rather than from a shared IP pool to a publicly exposed origin.
Because the CE node is yours, deployed in your environment and associated exclusively with your tenant, the concept of another tenant "reaching your origin from a shared IP" simply doesn't apply. The origin isn't exposed to a shared egress pool in the first place. This design is also a natural fit for Zero Trust architectures: the origin never implicitly trusts any network-level connection, and access is gated by tenant identity rather than IP address. It's an architectural answer to an architectural problem, rather than a mitigation layered on top of a fundamentally shared infrastructure model.

Conclusion

The Cloud WAF shared-IP bypass is a genuine blind spot that deserves more attention than it typically receives. The root cause is an asymmetry in how trust is established: vendors carefully validate that you own your domain before proxying it, but apply no equivalent validation to origin IP ownership. Any tenant on the same platform can route traffic to your origin.

The good news is that practical mitigations exist: mTLS, secret headers, and private tunneling cover most production scenarios. The better news is that some architectures, like F5 XC with Customer Edge, eliminate the exposure at the design level rather than patching around it. If your current posture is "I've whitelisted the WAF vendor's IP ranges," it's worth asking: which WAF customers, exactly, have you let in?

This article was written by the author and formatted with the assistance of AI.

Implementing Risk-Based Actions with AI-Powered WAF: Customer Policy Paths
Why Custom policy is where risk-based actions matter most

The default policy is straightforward: it applies a broad mix of signatures, threat campaigns, and violations; "Enhance with AI" is an optional add-on. Custom policies are where customers can accidentally recreate the same problems Risk Scoring is designed to solve, usually by combining:

- Overly broad/noisy signature selection (especially low-accuracy signatures)
- Aggressive enforcement (blocking Medium too early)
- Disabling/excluding key signatures and unintentionally reducing ML invocation

So the rest of this blog is a tight, configuration-oriented walkthrough of the Custom path.

Custom policy: configuration walkthrough (decision points → operational outcomes)

Baseline: Navigate to the Custom controls

1. LB Config → Web Application Firewall
2. Create/edit the WAF object (Metadata `Name`, etc.)
3. Set Security Policy = Custom
4. Choose Signature Selection by Accuracy
5. Optionally enable Enhance with AI (Risk Scoring)
6. If enabled, optionally configure Action by Risk Score (risk-based enforcement)

Step 1: Signature Selection by Accuracy (choose your baseline level)

Accuracy indicates susceptibility to false positives:

- Low: high likelihood of false positives
- Medium: some likelihood of false positives
- High: low likelihood of false positives

Note: This setting is foundational: it determines which signatures are active, and therefore the quality and volume of detection signals that feed into downstream risk evaluation.

Operationally:

- High accuracy tends to support faster, safer enforcement.
- Medium/Low accuracy can expand coverage but increases the chance you'll need exceptions, investigations, or staged rollout discipline.

Step 2: Enhance with AI (turn on Risk Scoring)

Enhance with AI = On enables AI-powered risk scoring and assigns each request a High/Medium/Low risk score using layered signals.
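Conceptually, the risk-based enforcement this policy exposes reduces to a small decision function: High is always blocked, and blocking Medium is the operator's choice. A hypothetical sketch of that mapping (an illustration of the concept, not the product's implementation):

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def action_for(risk: Risk, block_medium: bool = False) -> str:
    """Map a request's risk score to an enforcement action.

    High-risk requests are always blocked; blocking Medium-risk requests
    is a configuration choice (the dropdown); everything else is allowed.
    """
    if risk is Risk.HIGH:
        return "block"
    if risk is Risk.MEDIUM and block_medium:
        return "block"
    return "allow"

# Day 0 posture: block High only.
print(action_for(Risk.MEDIUM))                     # allow
# Steady-state posture: block High + Medium.
print(action_for(Risk.MEDIUM, block_medium=True))  # block
```

The single boolean knob is the point: rollout maturity changes one setting, not dozens of per-signature decisions.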
An implementation detail worth making explicit because it affects customer expectations: ML invocation depends on enabled signatures firing in the specified injection/execution categories. If teams disable or exclude those signatures, they may reduce when the model runs, changing the practical behavior of risk evaluation.

Step 3: Action by Risk Score (map risk levels to enforcement)

When Action by Risk Score is enabled:

- By default, high-risk requests are blocked
- Users can choose whether Medium-risk requests are blocked (via dropdown)

This is the primary knob that determines how quickly a user decides to move from "safe enforcement" to "broad enforcement."

Recommended rollout path: Day 0 → Day 7 → Steady state

This is the most common and safest operational progression for customers.

Day 0 (safe enforcement baseline)

- Custom → Signature Selection by Accuracy = High (or High + Medium if you need broader coverage immediately)
- Enhance with AI = On
- Action by Risk Score = High

Outcome: Gets to blocking quickly while minimizing availability risk. High is blocked. This is the "prove safety while stopping obvious bad" posture.

Day 7 (controlled expansion)

- Keep Custom + Enhance with AI + Action by Risk Score
- Optionally widen Signature Selection from High → High + Medium if coverage is insufficient
- Enhance with AI = On
- Action by Risk Score = High

Outcome: Expands detection inputs without immediately expanding enforcement. Teams focus on what's landing in Medium and whether exclusions/disabled signatures are reducing ML invocation in key categories.

Steady state (mature enforcement)

- Custom → widen Signature Selection from High + Medium → High + Medium + Low (the broadest set)
- Enhance with AI = On
- Action by Risk Score = High + Medium

Outcome: Risk outcomes become the enforcement interface.
Broad, consistent blocking across apps/APIs with reduced per-app tuning and fewer signature-level decisions.

Common Pitfalls

- Avoid Block Medium on Day 0 when including low-accuracy signatures; this is the fastest way to recreate false-positive outages.
- If you disable/exclude signatures in the key injection/execution categories, you can reduce ML invocation and change risk evaluation behavior.

Summary

Custom policies traditionally scale poorly because every app ends up with bespoke signature decisions and exception handling. Risk Scoring is designed to invert that: keep signatures as key signals but standardize enforcement via risk outcomes. If you implement Custom with the Day 0 → Day 7 → Steady state progression above, you get a predictable path from "block safely now" to "enforce broadly later" without returning to signature-by-signature tuning as your primary operating model.

SSL Forward Proxy, iRules and Client Hello
Hi all,

I am seeing odd behaviour using SSL forward proxy (SSLO). My intention is to use the Client Hello (SNI) to influence SSL profile selection. I have two SSL profiles set up; let's call them A and B.

For trusted connections (i.e. cert issuers in the SSL CA bundle) I am unable to extract the SNI from the initial Client Hello using the CLIENTSSL_CLIENTHELLO event and [SSL::extensions -type 0]. These are sent to profile A based on SNI. I have pcaps showing the Client Hello arriving at the F5. I assume this may have something to do with the 'verified handshake' functionality. It appears the test client browser keeps attempting the connection and I see inconsistent results (some connections are reset, some succeed). In the iRule logs it's apparent the SNI does eventually become available in the CLIENTSSL_CLIENTHELLO event.

For untrusted/self-signed certs this doesn't appear to happen; these are sent to profile B (identical to A for testing purposes). So my assumption is that the F5 is doing some kind of SNI processing (comparing to CNs in the trust store?) and then connecting to the server for the 'verified handshake' before releasing the SNI into the CLIENTSSL_CLIENTHELLO event?

I have seen an iRule that effectively disables SSL and then parses the raw Client Hello for the SNI. I expect this may work, as it would intercept the raw Client Hello so the F5 cannot interfere or do any server-side preamble, but I'd rather do this within the realms of defined events if possible... :-)

Any suggestions or comments welcome! Thanks

Automating F5 Application Delivery and Security Platform Deployments
The F5 ADSP Architecture Automation Project

The F5 ADSP reduces the complexity of modern applications by integrating operations, traffic management, performance optimization, and security controls into a single platform with multiple deployment options. This series outlines practical steps anyone can take to put these ideas into practice using the F5 ADSP Architectures GitHub repo. Each article highlights different deployment examples, which can be run locally or integrated into CI/CD pipelines following DevSecOps practices. The repository is community-supported and provides reference code that can be used for demos, workshops, or as a stepping stone for your own F5 ADSP deployments. If you find any bugs or have any enhancement requests, open an issue, or better yet, contribute.

The F5 Application Delivery and Security Platform (F5 ADSP)

The F5 ADSP addresses four core areas: how you operate day to day, how you deploy at scale, how you secure against evolving threats, and how you deliver reliably across environments. Each comes with its own challenges, but together they define the foundation for keeping systems fast, stable, and safe. Each architecture deployment example is designed to cover at least two of the four core areas: xOps, Deployment, Delivery, and Security. This ensures the examples demonstrate how multiple components of the platform work together in practice.

DevSecOps: Integrating security into the software delivery lifecycle is a necessary part of building and maintaining secure applications. This project incorporates DevSecOps practices by using supported APIs and tooling, with each use case including a GitHub repository containing IaC code, CI/CD integration examples, and telemetry options.
Demo: Use-Case 1: F5 Distributed Cloud WAF and BIG-IP Advanced WAF

Resources: F5 Application Delivery and Security Platform GitHub Repo and Automation Guide

ADSP Architecture Article Series:
- Automating F5 Application Delivery and Security Platform Deployments (Intro)
- F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
- F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
- F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
- F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
- Minimizing Security Complexity: Managing Distributed WAF Policies
syslog over tcp and define management IP as source
Hello,

I used the following method to add a syslog server IP with a TCP port. Can anyone help me define the source IP (the management IP) used to send logs to the syslog server?

https://support.f5.com/csp/article/K13080

Configuring the BIG-IP system to log to the remote syslog server using TCP protocol

Impact of procedure: Performing the following procedure should not have a negative impact on your system.

1. Log in to tmsh by typing the following command: tmsh
2. To log to the remote syslog server using the TCP protocol, use the following command syntax:

modify /sys syslog include "destination remote_server {tcp(\"<remote syslog server IP>\" port (514));};filter f_alllogs {level (debug...emerg);};log {source(local);filter(f_alllogs);destination(remote_server);};"

For example, to log to the remote syslog server 172.28.68.42, type the following command:

modify /sys syslog include "destination remote_server {tcp(\"172.28.68.42\" port (514));};filter f_alllogs {level (debug...emerg);};log {source(local);filter(f_alllogs);destination(remote_server);};"

Multiple DNS resolvers for root forward zone "."
I have a requirement for two sets of LTM services with different DNS requirements. The primary red secure service uses an internal DNS service, but traffic can also be routed to the Internet. The second blue service uses a partner Internet gateway. This has all worked with both services using the blue DNS resolver, until recently when one of the cloud apps needed to use 'microsoft.com' services. Because the blue gateway uses public DNS to validate FQDNs, and Microsoft frequently rolls (like every 5 mins) the public IP addresses in DNS responses, we think the blue gateway is caching different IP addresses to the red DNS server, and so when the blue gateway validates the destination IP it can sometimes drop traffic.

ssh: Common Criteria mode initialized
I set up a new F5 and I am trying to SSH to an existing F5, but from the new F5 I get "ssh: Common Criteria mode initialized". I ran the command "tmsh list sys db security.commoncriteria" and it is set to false on both F5s. I checked the sshd properties and both F5s have the following:

description none
fips-cipher-version 2
inactivity-timeout 6000
include "Ciphers aes256-ctr,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes128-ctr KexAlgorithms ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521 MACs hmac-sha2-256-etm@openssh.com,hmac-sha2-256,hmac-sha2-512-etm@openssh.com,hmac-sha2-512"
log-level info
login enabled
port 22

What am I missing?
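When two hardened boxes refuse to negotiate, one quick sanity check is whether their restricted algorithm lists actually intersect in every category (ciphers, key exchange, MACs). A small sketch, seeded with the lists from the sshd `include` stanza above; `box_b` is a placeholder you would replace with the second box's actual values:

```python
# Algorithm lists copied from the sshd `include` line above.
box_a = {
    "ciphers": {"aes256-ctr", "aes256-gcm@openssh.com",
                "aes128-gcm@openssh.com", "aes128-ctr"},
    "kex": {"ecdh-sha2-nistp256", "ecdh-sha2-nistp384", "ecdh-sha2-nistp521"},
    "macs": {"hmac-sha2-256-etm@openssh.com", "hmac-sha2-256",
             "hmac-sha2-512-etm@openssh.com", "hmac-sha2-512"},
}
# Placeholder: paste the other box's lists here; identical for illustration.
box_b = dict(box_a)

def negotiable(a: dict, b: dict) -> dict:
    """Return the per-category intersection of the two configurations.

    An empty set in any category means the SSH handshake cannot agree
    on an algorithm there, regardless of what the error message says.
    """
    return {cat: a[cat] & b[cat] for cat in a}

for cat, shared in negotiable(box_a, box_b).items():
    print(cat, "OK" if shared else "NO OVERLAP", sorted(shared))
```

If every category shows an overlap, the failure is likely elsewhere (host key algorithms, FIPS mode, or client-side restrictions) rather than the cipher lists themselves.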