connection limiting
4 Topics

How to limit access by time?
Dear community, I need to handle requests for a particular domain in a different way. I usually apply a few simple conditions; for example, requests must arrive with the xpto.com Host header to be forwarded to the pool. I use a BIG-IP LTM 13.0.0. Now, for a particular domain, if it is requested more than 50 times in 10 minutes by the same IP, that IP should be blocked for 30 minutes. From what I have been researching, I believe the FLOW_INIT event can help with what I need, but I still cannot reach my goal. Below is a simple example of what I use to test:

    when HTTP_REQUEST {
        if { [HTTP::host] equals "drop.test:8080" } {
            switch -glob [HTTP::uri] {
                "/test/*" {
                    log local0. "/test/ - accept - source: [IP::remote_addr] - uri: [HTTP::host][HTTP::uri]"
                    HTTP::respond 200 content "Test ok!"
                }
                "/drop/*" {
                    log local0. "/drop/ - accept - source: [IP::remote_addr] - uri: [HTTP::host][HTTP::uri]"
                    HTTP::respond 200 content "Drop ok!"
                }
                default {
                    log local0. "reject - source: [IP::remote_addr] - uri: [HTTP::host][HTTP::uri]"
                    reject
                }
            }
        }
    }
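One way to approach the rate-and-block requirement is the session table rather than FLOW_INIT, since the rule already needs the HTTP Host value to identify the domain. The sketch below is a minimal, untested outline along those lines: it reuses the drop.test:8080 host check from the example above, and the subtable names (req_count, blocked) and the 600 s / 1800 s values are illustrative assumptions, not a verified configuration.

    when HTTP_REQUEST {
        if { [HTTP::host] equals "drop.test:8080" } {
            set client [IP::client_addr]

            # Client is currently blocked: reset the connection and stop.
            if { [table lookup -notouch -subtable blocked $client] ne "" } {
                reject
                return
            }

            # Count this request; on the first hit, pin the counter's lifetime
            # to 600 s (10 minutes) so the window does not extend on every request.
            set count [table incr -subtable req_count $client]
            if { $count == 1 } {
                table timeout -subtable req_count $client indefinite
                table lifetime -subtable req_count $client 600
            }

            # More than 50 requests inside the window: block the source for 1800 s (30 minutes).
            if { $count > 50 } {
                table set -subtable blocked $client 1 indefinite 1800
                table delete -subtable req_count $client
                reject
            }
        }
    }

Because the block entry is stored with an indefinite idle timeout and an 1800 s lifetime, the -notouch lookup at the top does not extend the block; it simply expires 30 minutes after it was set.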
LTM :: iRule to Limit Source Addresses

So... here's a weird one, and I understand it's not optimal... but say there is a crisis and management sends down the proclamation: "Only allow X sources at a time access to the server pool until the admins can fix XYZ on the systems involved." So we rush to figure out a way to do so and come up with the iRule below. Other than blasting the table full of addresses (such as a resource-exhaustion DDoS against the F5), are there any other caveats I might not be thinking about here?

    when CLIENT_ACCEPTED {
        set hsl [HSL::open -proto UDP -pool syslog-servers.pool]
    }

    when HTTP_REQUEST {
        set source_ip [IP::client_addr]
        set ip_limit 2000

        # Delete all IPs
        table delete -subtable conns -all

        if { [table lookup -notouch -subtable conns $source_ip] != 1 } {
            # Source IP doesn't exist in table, add to table
            table add -subtable conns $source_ip 1 900
        } else {
            # Source IP is in the table, actively involved, renew the timer
            table lookup -subtable conns $source_ip
        }

        if { [table keys -subtable conns -count] <= $ip_limit } {
            # The current IP count is less than allotted, allow pool access
            pool $pool_name  ;# above variable acquired in prior logic
        } else {
            # The IP count has been reached. Do not provide pool access.
            HSL::send $hsl ":: Source IP limit ($ip_limit) hit for pool, redirecting to maintenance page."
            call maintenance_page.irule::display_page
        }
    }
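For reference, a tightened variant along the same lines is sketched below. It is an untested outline, not a drop-in fix: it keeps the conns subtable, the 900 s idle timeout, and the $pool_name / maintenance-page logic from the rule above, but drops the per-request table delete -subtable conns -all (as written, that line empties the subtable on every request, so the count can never accumulate) and only performs the table keys -count scan when the source is not already tracked.

    when CLIENT_ACCEPTED {
        set hsl [HSL::open -proto UDP -pool syslog-servers.pool]
    }

    when HTTP_REQUEST {
        set source_ip [IP::client_addr]
        set ip_limit 2000

        if { [table lookup -subtable conns $source_ip] ne "" } {
            # Known source: the lookup above already renewed its 900 s idle timer.
            pool $pool_name  ;# $pool_name acquired in prior logic, as in the original rule
            return
        }

        # New source: admit it only while the table is still under the limit.
        if { [table keys -subtable conns -count] < $ip_limit } {
            table add -subtable conns $source_ip 1 900
            pool $pool_name
        } else {
            HSL::send $hsl ":: Source IP limit ($ip_limit) hit for pool, redirecting to maintenance page."
            call maintenance_page.irule::display_page
        }
    }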
Only send connections to server2 if connection limit reached on server1

I have 4 HTTP web caching servers in a pool, and we want to try to increase the cache hit rate (the likelihood that a server has already cached the requested page) by focusing all connections on one server until it reaches a high number of connections and new connections need to be shared with other servers in the pool instead. E.g. I want to send all connections to server1 only, until server1 reaches a connection limit I have set (say 500, TBC). Then, only while server1 is at its configured limit, I want new connections to start going to server2. Server2 will have a limit set as well, say 500 again, and if we get to a point where server1 and server2 have both reached their limit, it should start sending to server3, then server4 if needed. Perhaps server4 could have no limit set, so that the pool as a whole never hits its limit; it would be better to respond slowly than to stop responding completely. Could this be done with a combination of Priority Group Activation and Connection Limits? E.g. Priority Group Activation = Less than 1 available member:
* Server1 = connection limit 500 + priority group 4
* Server2 = connection limit 500 + priority group 3
* Server3 = connection limit 500 + priority group 2
* Server4 = connection limit 0 (no limit) + priority group 1
If so, great, but if there is a better way to do it, with an iRule perhaps, let me know. Thanks in advance.
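For reference, the member layout described above could be expressed in tmsh roughly as follows. This is an illustrative sketch only: the pool name and member addresses are placeholders, and min-active-members 1 corresponds to the "Less than 1" Priority Group Activation setting.

    # From the BIG-IP bash prompt; pool name and addresses are placeholders.
    tmsh create ltm pool web_cache_pool \
        min-active-members 1 \
        members add { \
            192.0.2.11:80 { priority-group 4 connection-limit 500 } \
            192.0.2.12:80 { priority-group 3 connection-limit 500 } \
            192.0.2.13:80 { priority-group 2 connection-limit 500 } \
            192.0.2.14:80 { priority-group 1 connection-limit 0 } \
        }

Whether a member that has hit its connection limit is treated as unavailable for priority-group-activation purposes in your software version is worth verifying in a test before relying on this for spillover.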
OneConnect & Pool Connection Limits

Hi all, we are facing a very strange issue on 11.4.1 (latest HF) with OneConnect, pool connection limits, and request queuing all enabled. Under high load (testing), everything runs as expected (the connection limit on the pool is honored, and the right number of connections wait in the request queue). Once we take down one of the two pool members and re-enable it, we can see that the member only takes a lower number of connections (it looks like Pool Connection Limit - (Pool Connection Limit / number of TMMs)). The limit is never reached again; furthermore, it looks like some stale connections are left in the request queue, leading to connection resets. I am aware of the known bug (Bug 402510), but I assume it was fixed in 11.4.1. Is anyone else experiencing the same strange behaviour? Can someone confirm whether the listed bug was fixed in our release, or in 11.5.1, which we plan to upgrade to soon? Thx
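While reproducing this, the TMM count and the per-member connection counts can be checked from the command line; a short sketch follows, with app_pool as a placeholder pool name.

    # From the BIG-IP shell; "app_pool" is a placeholder pool name.
    tmsh show sys tmm-info                 # number of running TMMs (relevant to the per-TMM split above)
    tmsh show ltm pool app_pool members    # current connections per member after re-enabling it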