Forum Discussion
Chris_Phillips
Jan 13, 2012 · Nimbostratus
periodic summary logging with table and after?
Howdy folks,
I'm trying to basically update the per-URI limiting rule given by Hoolio here http://devcentral.f5.com/Community/...fault.aspx to the modern age with tables and the after command...
Chris_Phillips
Nimbostratus
OK, so here's my rule at present
# Name : _cp_t1_soap_request_limit_irule
#
# Special Notes : the static::prefix value MUST be different in each copy of the rule.
#
# Description :
#   This iRule is used to limit access to specific URIs as defined
#   in the associated Data Group List. This list needs to be called
#   ${static::prefix}_dgl and be a list of URIs in the format of:
#       <uri> := <connection limit>/<timeout in seconds>
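#   For example, a hypothetical record (illustrative values, not from the
#   original post) allowing up to 10 requests to /ws/getQuote per 60 second
#   window would look like:
#       "/ws/getQuote" := "10/60"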
#   Only parameters in the RULE_INIT section should be modified
#   between different implementations of the same version number.
#
# Current Version : 0.A 13/01/2012
#
# Change Control
#   0.A - First Draft for functional testing
#
# Changes Author (Version, Date, Name, Company)
# =============================================
# 0.A - 2012 Jan 13 - Chris Phillips - ASPIRE
priority 120

when RULE_INIT {
    # set data prefix for subtables and data group lists
    set static::prefix "_cp_soap_uri_request_limits"
    # 0 = send a 503 (web service), 1 = send a 302 (website)
    set static::redirect 0
    set static::debug 0
}

when HTTP_REQUEST {
    # search uri dgl for uri to limit against
    set limit_element [class match -element -- [HTTP::uri] starts_with ${static::prefix}_dgl]
    if { $limit_element ne "" } {
        # limit_uri    = base uri to track against from the soap_limit_class dgl
        # conn_limit   = maximum number of sessions, extracting value from the soap_limit_dgl
        # conn_timeout = timeout period for connection tracking table
        scan $limit_element {%[^ ] %d/%d} limit_uri conn_limit conn_timeout

        # set the subtable to the limited URI
        set tbl "${static::prefix}_${limit_uri}_${conn_limit}_${conn_timeout}"

        # get the total "current" HTTP connections
        set conns [table keys -subtable $tbl -count]

        # count the number of rows in the subtable for the specific URI;
        # if greater or equal to the connection limit then block the request
        if { $conns < $conn_limit } {
            table set -subtable $tbl -- "[IP::client_addr]:[TCP::client_port]_[clock clicks -milliseconds]" $conns indef $conn_timeout
            set result "accepted"
        } else {
            set rejects [table incr "rejected_count_$tbl"]
            table delete rej_log_monitor
            log local0. "after info = [after info]"
            log local0. "after info lookup = [after info [table lookup rej_log_monitor]]"
            if { [after info] ne "" } {
                set monitor_id [after info [table lookup rej_log_monitor]]
            }
            if { [info exists monitor_id] } {
                # do nothing, don't log right now
                log local0. "have rej_log_monitor = [table lookup rej_log_monitor], not logging reject"
                log local0. [after info]
            } else {
                log local0. "no rej_log_monitor setting to log reject"
                table set rej_log_monitor [after 5000 {
                    log local0. "TEST!"
                    log local0. "Alert! [HTTP::host]$limit_uri exceeded $conn_limit hits per $conn_timeout second(s). Rejected $rejects in the last $static::request_window_timeout"
                    after cancel [table lookup rej_log_monitor]
                    table delete "rejected_count_$tbl"
                    table delete "rej_log_monitor"
                }] 10
                log local0. "have rej_log_monitor = [table lookup rej_log_monitor], ready to log soon"
                log local0. [after info [table lookup rej_log_monitor]]
            }
            if { $static::redirect } {
                HTTP::redirect "/busy?rate"
                set result "redirected"
            } else {
                HTTP::respond 503 content "Service Unavailable"
                set result "rejected"
            }
        }
        if { $static::debug >= 1 } {
            log local0. "$result [HTTP::uri]: uri=$limit_uri; conns=$conns; limit=$conn_limit; timeout=$conn_timeout table=$tbl\n\n"
            # dump ENTIRE subtable to logs!!
            if { $static::debug >= 2 } {
                foreach key [table keys -subtable $tbl] {
                    log local0. "$key [table lookup -subtable $tbl $key] [table lifetime -subtable $tbl $key] [table lifetime -subtable $tbl -remaining $key]\n"
                }
            }
        }
    }
}
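To make the data flow concrete, suppose the data group held the hypothetical record from the header comments, "/ws/getQuote" := "10/60" (illustrative values, not from the original post). class match -element then returns the string "/ws/getQuote 10/60", and the scan pattern {%[^ ] %d/%d} yields:

    limit_uri    = /ws/getQuote
    conn_limit   = 10
    conn_timeout = 60
    tbl          = _cp_soap_uri_request_limits_/ws/getQuote_10_60

Each accepted request adds one row to that subtable with a 60 second lifetime, so [table keys -subtable $tbl -count] approximates the number of requests accepted in the last 60 seconds, and the 11th request inside that window is rejected.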
And I'm pretty happy with it now, but I'd just like to add summary logging of rejected requests when, and only when, there are rejections to log.
Chris_Phillips
Jan 21, 2014 · Nimbostratus
Not to my satisfaction, no. What I ended up doing was to use the fact that it was a high-volume service to do a check on each request to see if a 60 second expiring table entry still existed. If the value wasn't there, then it was time to go through the periodic routine. Not great, but it's working fairly well in our production environment.
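A minimal sketch of that approach, reusing the $tbl, $limit_uri, $conn_limit, $conn_timeout and rejected_count_$tbl names from the rule above; the rej_log_marker key and the 60 second window are illustrative assumptions, not the actual production code:

    # placed inside the existing HTTP_REQUEST handler after the limit check,
    # so $tbl and friends are already set and the reject path has already run
    #   set rejects [table incr "rejected_count_$tbl"]

    # on every request to a limited URI, check a fixed-lifetime marker entry;
    # once it has expired, this request performs the "periodic" summary routine
    if { [table lookup -notouch "rej_log_marker_$tbl"] eq "" } {

        # re-arm the marker: indefinite idle timeout, fixed 60 second lifetime,
        # so the summary runs at most once per minute per limited URI
        table set "rej_log_marker_$tbl" 1 indef 60

        # only log when there actually were rejections in the last window
        set rejected [table lookup "rejected_count_$tbl"]
        if { $rejected ne "" } {
            log local0. "Alert! [HTTP::host]$limit_uri exceeded $conn_limit hits per $conn_timeout second(s). $rejected request(s) rejected in roughly the last 60 seconds."
            table delete "rejected_count_$tbl"
        }
    }

The fixed lifetime (rather than an idle timeout) is what makes the "check on every request" trick work on a busy service: the marker reliably disappears once a minute no matter how often it is looked at, and the next request through does the logging.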