HTTP Request Throttle by IP and UserAgent
Problem this snippet solves:
This is a modification of Kirk Bauer's iRule found here: https://devcentral.f5.com/codeshare/http-request-throttle
The modification rate-limits individual IP addresses and User-Agents, applied not to specific URIs but to all virtual servers covered by the rule, with a check for X-Forwarded-For, etc.
The ultimate goal is to be able to slow down individual IP addresses or bots that we do not necessarily want to block, but that we do not want to continue eating up resources at their current rate.
Note: I am strongly considering rewriting this to use rateclass instead of a table to dictate how many requests can be made during a period of time. I feel that using rateclass will not only clean up the code a little, but will also use built-in rate-limiting functionality instead of our own.
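For reference, a rate class created under Rate Shaping can be attached to matching connections from an iRule with the `rateclass` command. This is only a sketch of that direction, assuming a rate class named `slow_bots` has already been created on the BIG-IP (the name is a placeholder); note that `rateclass` shapes connection throughput rather than counting and rejecting requests, so the throttling semantics would change:

```tcl
when HTTP_REQUEST {
    # Hypothetical sketch: apply a pre-created rate class named "slow_bots"
    # to connections from listed IPs, instead of counting requests in a table.
    # rateclass shapes bandwidth for the connection; it does not reject
    # requests once a count threshold is exceeded.
    if { [class match [IP::client_addr] equals sec_RateLimitedIPs] } {
        rateclass slow_bots
    }
}
```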
Code :
when RULE_INIT {
    # This is a variation of https://devcentral.f5.com/s/articles/http-request-throttle by Kirk Bauer

    # This defines the maximum requests to be served within the timing
    # interval defined by the static::timeout variable below.
    #log local0. "RateLimit Rule Init"
    set static::maxReqs 4;

    # Timer interval in seconds within which only static::maxReqs requests are allowed.
    # (i.e.: 10 req per 2 sec == 5 req per sec)
    # If this timer expires, it means that the limit was not reached for this
    # interval and the request counting starts over.
    # Making this timeout large increases memory usage. Making it too small
    # negatively affects performance.
    set static::timeout 8;
}

when HTTP_REQUEST {
    if { [HTTP::header exists X-Forwarded-For] } {
        set client_IP_addr [getfield [lindex [HTTP::header values X-Forwarded-For] 0] "," 1]
    } else {
        set client_IP_addr [IP::client_addr]
    }
    set client_UserAgent [string tolower [HTTP::header "User-Agent"]]
    #log local0. "$client_UserAgent"

    if { not ([class match $client_IP_addr equals sec_whitelist]) } {
        if { [class match $client_IP_addr equals sec_RateLimitedIPs] } {
            set getIPcount [table lookup -notouch $client_IP_addr]
            if { $getIPcount equals "" } {
                table set $client_IP_addr "1" $static::timeout $static::timeout
            } else {
                if { $getIPcount < $static::maxReqs } {
                    table incr -notouch $client_IP_addr
                    #log local0. "$client_IP_addr IP Count: $getIPcount, uri: [HTTP::uri]"
                } else {
                    reject
                    #log local0. "$client_IP_addr Rejected: $getIPcount"
                }
            }
        }

        # Rate limit is based on User-Agent in the context of the source IP,
        # so the two are combined for the tracked value.
        set client_UserAgent [class match -name $client_UserAgent contains sec_RateLimitedUserAgents]
        #log local0. "$client_UserAgent$client_IP_addr"

        if { [string length $client_UserAgent] > 0 } {
            set getUserAgentCount [table lookup -notouch $client_UserAgent$client_IP_addr]
            #log local0. "[class match $client_UserAgent contains sec_RateLimitedUserAgents]"
            if { $getUserAgentCount equals "" } {
                table set $client_UserAgent$client_IP_addr "1" $static::timeout $static::timeout
            } else {
                if { $getUserAgentCount < $static::maxReqs } {
                    table incr -notouch $client_UserAgent$client_IP_addr
                    #log local0. "$client_UserAgent UserAgent Count: $getUserAgentCount"
                } else {
                    reject
                    #log local0. "$client_UserAgent Rejected: $getUserAgentCount - [HTTP::host]"
                }
            }
        }
    }
}
- ldesfosses (Cirrus)
Just a quick note: this is a great iRule, and I've used it internally, but a single attacker (no need for many attackers) can generate a new random IP for each request and put it in the X-Forwarded-For header. The same goes for the User-Agent header: he can generate a new one for each request.
The attacker can then create a lot of entries in the table with a relatively simple loop and crowd out almost all of the legitimate users' chances of executing a request.
It's then a DoS situation. If it happens, the only solution is to blacklist the IP.
I don't think it's avoidable using iRules alone; I just wanted to point it out.
- Greg_33932 (Nimbostratus)
This looks like exactly what we asked F5 Professional Services about 8 months ago to make for us. They wanted more than that business unit was willing to spend on having the iRule made, so it never went forward. Is this working out for you so far? Looks like a sound iRule. I'm pinging the app team to see if they want to give it a try in QA and throw a load test at it to see how it does. Thanks for sharing! I'll check out the rateclass modification; if it pans out, I'll share that change here.
- Dazzla_20011 (Nimbostratus)
Hi,
We've tried applying the iRule to a virtual server but got the following errors:
- oedo808_68685 (Altostratus)
@Dazzla, that error is because you need to create the data group sec_RateLimitedUserAgents first. You also need a sec_RateLimitedIPs data group. These can be renamed in the iRule if you want to call them something else.
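For anyone hitting the same error: the three data groups the iRule references can be created from tmsh roughly like this (a sketch only; the example records are placeholders to substitute with your own addresses and User-Agent substrings):

```shell
# Placeholder records only -- replace with real values.
tmsh create ltm data-group internal sec_whitelist type ip records add { 10.0.0.0/8 { } }
tmsh create ltm data-group internal sec_RateLimitedIPs type ip records add { 198.51.100.25 { } }
tmsh create ltm data-group internal sec_RateLimitedUserAgents type string records add { "badbot" { } }
```

The string records are matched with `class match ... contains` against a lowercased User-Agent, so the substrings should be lowercase.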
- oedo808_68685 (Altostratus)
@Greg, this is working out pretty decent and is still in use.
- Dazzla_20011 (Nimbostratus)
@Greg, thanks. I'm probably being thick, but what we're looking for is to rate limit all requests except those from certain IP addresses (such as internal IPs), which should not be rate limited. It looks as though this specifies a list of IPs to rate limit?
Also, I'm not fully understanding the purpose of sec_RateLimitedUserAgents and why we would use it.
Wouldn't we also need an additional data group, sec_whitelist? What does that do?
Came across this because our partner was asking £11K to create an iRule to limit requests to a service to 10 requests in a 2-minute window from each source IP address. If I get this working it will go down very well.
As you can guess, I have very limited experience with iRules.
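For the 10-requests-per-2-minutes case described above, only the two static variables in RULE_INIT should need changing (an untested sketch; note that a 120-second timeout keeps table entries alive longer and therefore uses more memory, as the original comments warn):

```tcl
when RULE_INIT {
    # Allow 10 requests per 120-second window, per tracked key.
    set static::maxReqs 10
    set static::timeout 120
}
```

IPs listed in the sec_whitelist data group are skipped by the rule entirely, which covers the internal-addresses exemption.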
Any help much appreciated.
Many Thanks
- cbolduc (Nimbostratus)
Thanks for making this iRule. I changed the HTTP response from "403 Forbidden" to "429 Too Many Requests" because my site was getting flooded with requests from Googlebots.
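A sketch of that kind of change: replacing `reject` with `HTTP::respond` lets the BIG-IP return an explicit 429 with a Retry-After hint, which well-behaved crawlers such as Googlebot honor (the body text and header choices here are illustrative, not from the original rule):

```tcl
# Instead of: reject
HTTP::respond 429 content "Too Many Requests" \
    "Retry-After" $static::timeout "Connection" "close"
```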