HTTP Monitor to Check USER-COUNT from Ivanti Node – Regex Issues
Hi everyone, I'm trying to configure an HTTP health monitor on an F5 LTM to check a value returned by an external Ivanti (Pulse Secure) node. The goal is to parse the value of the USER-COUNT field from the HTML response and confirm it is at or below 3000 users (based on our license limit). If the value exceeds that threshold, the monitor should mark the node as DOWN.

The Ivanti node returns a page that looks like this:

```html
<!DOCTYPE html ... >
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US">
<head>
<title>Cluster HealthCheck</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
</head>
<body>
<h1>Health check details:</h1>
CPU-UTILIZATION=1;
<br>SWAP-UTILIZATION=0;
<br>DISK-UTILIZATION=24;
<br>SSL-CONNECTION-COUNT=1;
<br>PLATFORM-LIMIT=25000;
<br>MAXIMUM-LICENSED-USER-COUNT=0;
<br>USER-COUNT=200;
<br>MAX-LICENSED-USERS-REACHED=NO;
<br>CLUSTER-NAME=CARU-LAB;
<br>VPN-TUNNEL-COUNT=0;
<br>
</body>
</html>
```

I'm trying to match the USER-COUNT value using the recv string in the monitor, like this:

```
recv "USER-COUNT=([0-9]{1,3}|[1-2][0-9]{3}|3000);"
```

I've also tried many variations. The issue: even when the page returns USER-COUNT=5000;, the monitor still reports the node as UP, when it should be DOWN. The regex seems to match something it shouldn't.

What I need: a working recv regex that matches USER-COUNT values from 0 to 3000 (inclusive) but fails when the value exceeds that limit. Has anyone successfully implemented this kind of monitor with a numeric threshold check using recv? Is there a reliable pattern that avoids partial matches within larger numbers? Thanks in advance for any insight or working example.
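One likely culprit worth noting: the health page also contains MAXIMUM-LICENSED-USER-COUNT=0;, and an unanchored recv pattern matches the USER-COUNT=0; substring inside that field name, so the monitor can pass no matter what the real USER-COUNT value is. Since HTTP monitor recv strings are evaluated as regular expressions, a hedged sketch of a fix is to anchor the pattern to the <br> tag that precedes the real field:

```
recv "<br>USER-COUNT=([0-9]{1,3}|[12][0-9]{3}|3000);"
```

Here [0-9]{1,3} covers 0–999, [12][0-9]{3} covers 1000–2999, and 3000 is matched literally; with the <br> anchor, USER-COUNT=5000; no longer matches anywhere on the page, so the monitor would mark the node DOWN. Treat this as a sketch to verify against your page, not a drop-in answer.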
Server reporting requests coming from port 80

I have a site using F5 to provide CAC authentication. It's a PHP server, and I see these values in the $_SERVER data:

```
$_SERVER['SERVER_PROTOCOL'] = HTTP/1.1
$_SERVER['SERVER_PORT']     = 80
```

As a user, when I navigate to the site I type HTTPS into the browser, but the PHP server still sees the request coming in on port 80. I'm assuming the connection between the user and the F5 proxy is over HTTPS, but what is the connection between the F5 and my server? Is that supposed to be HTTPS? I guess what I'm wondering is: should I be concerned and look into this deeper?
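What's described here is consistent with SSL offloading: the BIG-IP terminates TLS on the virtual server and forwards plain HTTP to the pool member on port 80, so the back end correctly reports port 80. If the application needs to know the original scheme, a common pattern is to have the BIG-IP tag offloaded traffic with a header. A minimal iRule sketch (the header name is a widely used convention, not something mandated by the platform):

```tcl
when HTTP_REQUEST {
    # Applied on the client-facing HTTPS virtual server so the back end
    # can tell offloaded-TLS traffic apart from plain HTTP
    HTTP::header insert X-Forwarded-Proto "https"
}
```

The PHP side can then check $_SERVER['HTTP_X_FORWARDED_PROTO'] instead of SERVER_PORT. Whether plain HTTP between the BIG-IP and the server is acceptable depends on how trusted that network segment is; re-encrypting with a server-SSL profile is the alternative.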
High CPU utilization (100%)

I observed high CPU utilization (100%) on an F5 device; resource provisioning is ASM (nominal). I checked the client-side and server-side throughput and both are normal, but I found that management-interface throughput is very high, and I noticed this has been happening during the same time period for the last 30 days. What could be the reason for this spike? Many thanks in advance for your time and consideration.
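A recurring spike that correlates with management-interface throughput often points at something polling or exporting data over the mgmt port on a schedule (SNMP collectors, remote logging or analytics exports, scheduled reports or backups), though that is a hedge rather than a diagnosis. A few first checks from the CLI:

```
# per-CPU utilization on the BIG-IP
tmsh show sys cpu

# throughput and CPU summary tables
tmsh show sys performance all-stats

# from the advanced (bash) shell during the spike window:
# identify which process is consuming the CPU
top
```

Matching the busy process and the spike window against any scheduled jobs or external pollers usually narrows this down quickly.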
HSTS is not working

Hi there, we have an iRule configured on a VIP that responds with a maintenance page when a user accesses a wrong URL; on that page HSTS is not working, but if we access the right URL then HSTS works. We have enabled HSTS in the HTTP profile, and that profile is attached to the same VIP as the iRule. Is there any way to enable HSTS on the maintenance page, or any remediation for this issue?

```tcl
# Fragment as posted; the event wrapper is added here for readability, and
# the DEBUG / uri_ext variable setup is assumed to exist elsewhere in the rule
when HTTP_REQUEST {
    if { $DEBUG } { log local0. "TEST - Source IP address: [IP::client_addr]" }
    switch -glob $uri_ext {
        "/httpfoo*"  { set uri_int [string map {"/httpfoo" "/adapter_plain"} $uri_ext] }
        "/httptest*" { set uri_int [string map {"/httptest" "/adapter_plain"} $uri_ext] }
        default {
            HTTP::respond 200 content [ifile get ifile_service_unavailable_html]
            set OK 0
        }
    }
}
```

Many thanks in advance.
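The observed behavior is consistent with how HTTP::respond works: the HTTP profile's HSTS setting inserts the header on responses passing through from the server, while HTTP::respond generates the response on the BIG-IP itself, so the profile never touches it. A hedged fix is to add the header explicitly in the respond call (the max-age value below is an example, not a recommendation):

```tcl
HTTP::respond 200 content [ifile get ifile_service_unavailable_html] \
    "Strict-Transport-Security" "max-age=31536000; includeSubDomains"
```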
Telemetry Streaming: getting HTTP statistics via SNMP

Hi F5 community, I am looking to get HTTP statistics (total count, and broken down by response code) for Telemetry Streaming via SNMP, which seems to be the most viable option:

```
F5-BIGIP-LOCAL-MIB::ltmHttpProfileStat
OID: .1.3.6.1.4.1.3375.2.2.6.7.6
```

However, the stats don't seem to come out correct at all: I do see deltas happening, but they don't match the traffic rate I expect to see. Furthermore, I have run tests where I start a load-testing tool (vegeta) to fire concurrent HTTP requests, for which I do see logs from the virtual server, but no matching increment in the above SNMP OID entries on any of the profiles configured. What am I doing wrong? Does something need to be enabled on the HTTP profile in use to collect these stats? Best, Owayss
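For reference, a quick way to watch those counters from a collector host (the community string and address are placeholders; this assumes SNMP access is allowed from that host on the BIG-IP):

```
# walk the HTTP profile statistics table by numeric OID,
# so no F5 MIB files are needed on the collector
snmpwalk -v2c -c public 192.0.2.1 .1.3.6.1.4.1.3375.2.2.6.7.6
```

One thing worth verifying, offered as a hedge rather than a diagnosis: ltmHttpProfileStat holds per-HTTP-profile counters, so they only increment for traffic that actually passes through that HTTP profile. Test traffic hitting a virtual server without the profile attached (for example a FastL4 virtual) will not move these counters at all.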
(HTTP) Redirection via Arbitrary Host Header

Does that title sound familiar to you? It is something we see in support cases, quite often when a customer has had a PCI audit or penetration test conducted against their web properties. It sounds alarming, but it often has a very simple cause, and protecting against it is often also quite simple!

What is the Host header?

If we go way back to the earliest webservers and HTTP/1.0, RFC1945 didn't include a specification for a Host header. Instead, it was assumed that the host (IP address) receiving the request was the only intended destination, and that the server was only serving a single website. Obviously, it became apparent to the architects of the modern world-wide web (Tim Berners-Lee and all the others named in the HTTP RFCs) that more flexibility was required: specifically, the ability for a single target IP address to host more than one website under more than one domain (OK, there's more to it than that – the role of proxies is also important here, but irrelevant to our current discussion). To enable that, the "Host:" header was added in RFC2616, the HTTP/1.1 specification document, allowing a single server to understand which "virtual host" an incoming request is destined for and, through that, serve multiple domains on one system. There are two ways to satisfy that requirement of HTTP/1.1:

1. By sending a "Host:" header along with the request, specifying the desired target (see fig. 1.1)
2. By sending an "Absolute URI" rather than a relative one, with the URI containing the hostname (see fig. 1.2)

(See Section 19.6.1.1 of RFC2616 for more information.)

```
GET /index.html HTTP/1.1<CRLF>
Host: www.example.com<CRLF>
<CRLF>
```
Fig 1.1: An example HTTP/1.1 request with Host header

```
GET http://www.example.com/index.html HTTP/1.1<CRLF>
<CRLF>
```
Fig 1.2: An example HTTP/1.1 request with Absolute URI

What could go wrong?

Quite a lot of things, it turns out! There are all sorts of potential problems – many or most of which are now, thankfully, fixed in all of the common webserver and proxy software available today – but still, we must be wary of things like:

Host header confusion: If a request includes both a Host: header and an Absolute URI, which is used (the RFC is clear here), and do all systems in the request path agree?

Server-Side Request Forgery (SSRF) attacks: By including special characters (like @) in a URI, can we coerce a proxy to forward a request which has been modified in an unexpected fashion?

Password reset attacks: An attacker might be able to abuse the password reset functionality on a legitimate website by manipulating the Host header, causing the website to send a manipulated, malicious password reset link to the victim's user account contact details, thereby tricking the victim into visiting a phishing website rather than the legitimate site.

Web cache poisoning attacks: This is a large and complex topic and relates to much more than just the Host header, but a system which trusts a manipulated Host header may make cache poisoning easier for an attacker to perform.

Malicious redirects: Finally, we arrive at the topic which started this whole article: malicious redirects to an arbitrary destination. Let's dive into that one more deeply than the others…

Redirection via Arbitrary Host Header

Let's be honest for a moment – the real problem here isn't that you can cause the target system to generate a redirect to an injected host.
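As a concrete illustration, the finding a scanner flags usually boils down to an exchange like this, where the target echoes whatever Host value it was given straight back in a Location header (addresses and host names here are placeholders):

```
$ curl -i http://203.0.113.10/ -H "Host: attacker.example"
HTTP/1.1 302 Found
Location: https://attacker.example/
```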
That's perhaps not ideal, but it doesn't describe any kind of vulnerability; an attacker can't manipulate the Host header on a victim's system (without having already compromised the victim's system in some way) and can't have the reflected, malicious Host header sent to anyone but themselves…

…Unless they can. In the real world, utilizing such a flaw means carrying out one of the other kinds of attack I mentioned earlier; perhaps you can trigger the server to send a redirect (a 302 response with a Location: header) to your arbitrary malicious destination and cause that response to be cached by an intermediate proxy, to be subsequently served to other users? Now you've poisoned a web cache, and anyone you send to the legitimate site via a phishing attack will ultimately be redirected to your malicious domain. Alternatively, the over-trust in the Host header, shown by its use in the response's Location header, might just be a pointer to an attacker, letting the attacker know that they should try to get the vulnerable system to emit the malicious host in other content, like a password reset email.

So, what am I saying? I'm saying that the "Redirection via Arbitrary Host Header Manipulation" result we commonly see in vulnerability scans is not, in and of itself, necessarily something to be alarmed about. An attacker being able to send a manipulated redirect back to themselves is next to useless, but it's a pointer indicating a system might be vulnerable to other attacks that a scanner can't easily determine in an automated fashion. Unfortunately for us, it's also often a PCI audit failure, even if the application architecture isn't vulnerable in a meaningful way.

How do we fix it?

In part, that depends on why you're seeing the problem in the first place, so let's examine some common scenarios:

iRules

It's quite common to redirect from HTTP to HTTPS using an iRule – there's even a built-in iRule on BIG-IP called _sys_https_redirect for that purpose – and without any other checks, the following kind of rule will generate a redirect for whatever host name was received (in other words, you'll get dinged for "Redirection via Arbitrary Host Header Manipulation" on your audit):

```tcl
when HTTP_REQUEST {
    HTTP::redirect https://[getfield [HTTP::host] ":" 1][HTTP::uri]
}
```

You could fix this by hard-coding the redirect response, of course, and having a single iRule per target application; that is the most secure option, assuming each virtual server only handles traffic for one application. Something like this (note that [HTTP::uri] already begins with a slash, so no separator is needed):

```tcl
when HTTP_REQUEST {
    HTTP::redirect https://www.example.com[HTTP::uri]
}
```

If you need to support multiple applications per virtual server, then your next-best option is to use a Data Group to define the valid allowed hostnames and only redirect if the incoming Host header matches one of the hosts in the data group. There's an excellent answer for this by Kai Wilke, here: https://community.f5.com/discussions/technicalforum/handling-www-with-host-name-redirects-in-irule/27048/replies/27050

BIG-IP Local Traffic Policies

It is also quite common to use Local Traffic Policies to redirect HTTP requests, for example to perform an HTTP-to-HTTPS redirect in a more performant way than an iRule.
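A rough tmsh sketch of such a host-restricted redirect policy follows (host names are placeholders, and the exact condition/action operands vary a little between TMOS versions, so treat this as the shape of the configuration rather than copy-paste material):

```
# create in the Drafts folder, then publish; the catch-all handling of
# non-matching hosts (reset/drop) is omitted and version-dependent
tmsh create ltm policy Drafts/https_redirect strategy first-match \
    rules add { redirect_known_host { \
        conditions add { 0 { http-host host values { www.example.com } } } \
        actions add { 0 { http-reply redirect location "https://www.example.com/" } } } }
tmsh publish ltm policy Drafts/https_redirect
```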
As that sketch suggests, you can still achieve safety here by using the same techniques as for iRules: define the redirect rule to act only when expected host names are received, and drop all other traffic.

BIG-IP Advanced WAF (ASM)

To make preventing this kind of vulnerability incredibly easy, BIG-IP Advanced WAF has a feature called "HTTP redirection protection" which can be configured and enabled on any ASM policy. Configuring it is quite straightforward and is described in K04211103: Configuring HTTP redirection protection; just remember to make sure you have enabled blocking for the policy and enabled Block for the "Illegal redirection attempt" violation under Policy Building->Learning and Blocking Settings!

NGINX

For NGINX, you just need to be careful when setting up any redirects and use a hard-coded host element rather than taking the resulting hostname from the incoming (potentially attacker-supplied) Host header. In other words, don't do this:

```nginx
location / {
    return 302 https://$host$request_uri;
}
```

Do this instead:

```nginx
location / {
    return 302 https://example.com$request_uri;
}
```

Something else to point out here – it's very common for administrators to use '$uri' when constructing redirects, but doing so can open you up to header injection and/or response splitting; be sure to use '$request_uri' instead, whenever possible.

That's all for now!

That's all I'm going to cover in this article – there are other ways you can be vulnerable to open redirects (for example, if you take an HTTP parameter and use that to construct a subsequent redirect) which aren't covered here and are a much broader topic. For this article, I chose to concentrate only on the exact report we see across so many PCI audits and vulnerability scans. I will say, though, that BIG-IP Advanced WAF's HTTP redirect protection will protect you against many, if not all, of the other ways you can be vulnerable, because that protection applies to the redirect itself, i.e., to the HTTP response, rather than the request. For that reason (and many, many others), I'd strongly recommend investigating BIG-IP Advanced WAF if you don't already use it! As always, feel free to leave any comments or questions below and I'll try to get back to everyone, and thanks for reading this far!
Help with iRule Proxy

Hi team, I'm working on an iRule where I need to replace the path /admin with the root / and forward the request to the appropriate pool. However, I'm encountering issues with the rule, and it doesn't work as expected. Here's the first version I implemented:

```tcl
when HTTP_REQUEST {
    if { [string tolower [HTTP::host]] equals "test.com" and [HTTP::path] starts_with "/admin" } {
        HTTP::path [string map -nocase {"/admin" "/"} [HTTP::path]]
        pool POOL-A
        #log local0.info "Client Address --> [IP::client_addr] | Path: [HTTP::path] | Pool: POOL-A"
    } else {
        pool POOL-B
        #log local0.info "Client Address --> [IP::client_addr] | Path: [HTTP::path] | Pool: POOL-B"
    }
}
```

After some research, I saw that HTTP::path might need to be changed to HTTP::uri. I tried this version:

```tcl
when HTTP_REQUEST {
    # Log the original URI for debugging
    log local0. "Original URI: [HTTP::uri]"

    # Check if the URI starts with "/admin"
    if { [HTTP::uri] starts_with "/admin" } {
        # Modify the URI by replacing "/admin" with "/"
        set new_uri [string map {"/admin" "/"} [HTTP::uri]]
        HTTP::uri $new_uri

        # Log the modified URI for debugging
        log local0. "Modified URI: [HTTP::uri]"

        # Forward the request to the appropriate pool
        pool POOL-A
    } else {
        # Log default traffic for debugging
        log local0. "Default traffic - URI: [HTTP::uri], Pool: POOL-B"

        # Forward to the default pool
        pool POOL-B
    }
}
```

Issue: neither version works. When I test requests to /admin, either the path replacement does not happen as expected, or the replaced path does not let me reach any subfolder below the root "/" (e.g. /help), and on those objects we get a 404 Not Found error. Could someone point out what I might be missing, or any best practices for this kind of path manipulation? Thanks!
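One likely cause, offered as a hedge: string map keeps the rest of the string after each replacement, so /admin/help becomes //help (the substituted / plus the remaining /help), and most back ends treat that double-slash path as nonexistent, hence the 404s on subfolders. A sketch that strips only the leading /admin prefix instead (pool names and host are taken from the question; the edge case of paths like /administrator is left to taste):

```tcl
when HTTP_REQUEST {
    if { [string tolower [HTTP::host]] equals "test.com" and [HTTP::path] starts_with "/admin" } {
        # Drop the 6-character "/admin" prefix, keeping the remainder of the path
        set new_path [string range [HTTP::path] 6 end]
        # A bare /admin request maps to the root
        if { $new_path eq "" } { set new_path "/" }
        HTTP::path $new_path
        pool POOL-A
    } else {
        pool POOL-B
    }
}
```

With this, /admin/help is rewritten to /help and /admin to /, which matches the stated goal.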
Port Translation & HTTPS -> HTTP

System information: F5 BIG-IP r2600, version 17.1.1.1, build 0.0.2.

Hello everyone, we would like to implement the following scenario with the F5 BIG-IP: I call https://server.domain.com on port 443, and the BIG-IP should then forward to http://server.domain.com on port 55000. Is this even possible? How did you solve it?

Configuration: for port translation, we entered port 443 on the virtual server and gave the pool member port 55000. For HTTPS to HTTP we used the following iRule (comments translated from German):

```tcl
when HTTP_REQUEST {
    # Extract the host and URI from the HTTPS request
    set host [HTTP::host]
    set uri [HTTP::uri]

    # Redirect the request to the HTTP version of the same URL
    HTTP::respond 301 Location "http://$host$uri"
    log "iRule_HTTP, HTTPS request was redirected to HTTP: $host$uri, ClientIP: [IP::client_addr], ClientPort: [TCP::client_port]"
}
```

Is the iRule log entry generated before the port translation? The wrong port shows up in the logs. Best regards
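Two hedged observations on this setup. First, [TCP::client_port] returns the client's ephemeral source port, not the virtual server or pool member port, which would explain unexpected port values in the log. Second, the 301 sends the browser itself off to http://..., which is probably not the intent; terminating TLS on the BIG-IP and forwarding plain HTTP to port 55000 needs no iRule at all, since a standard SSL-offload virtual server with port translation does exactly this. A minimal tmsh sketch (addresses and object names are placeholders; it assumes a client-ssl profile carrying the server certificate already exists):

```
# pool member listens on plain HTTP port 55000
tmsh create ltm pool pool_server_55000 members add { 10.0.0.10:55000 } monitor tcp

# HTTPS virtual server: client-side TLS is terminated, the server side is
# plain HTTP, and destination port 443 is translated to the member port
tmsh create ltm virtual vs_server_443 destination 192.0.2.10:443 ip-protocol tcp \
    profiles add { http clientssl } pool pool_server_55000 \
    source-address-translation { type automap }
```

Port translation to the pool member port is the default virtual server behavior, so with this shape the redirect iRule (and its log question) falls away entirely.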