Log Tcp And Http Request Response Info
Problem this snippet solves: This iRule logs a line for each of the following events:
- a new TCP connection is established with a client
- the HTTP headers of an HTTP request are received from the client
- the HTTP headers of an HTTP response are received from the pool member
- the TCP connection with a client is closed

Code :

# Here is a sample of the log output for a single TCP connection with three HTTP requests:
#   New TCP connection from 192.168.99.210:2675 to 192.168.101.41:80
#   Client 192.168.99.210:2675 -> test_http_vip/test0.html?parameter=val (request)
#   Client 192.168.99.210:2675 -> test_http_vip/test0.html?parameter=val (response) - pool info http_pool 192.168.101.45 80 - status: 200 (request/response delta: 0ms)
#   Client 192.168.99.210:2675 -> test_http_vip/test1.html?parameter=val (request)
#   Client 192.168.99.210:2675 -> test_http_vip/test1.html?parameter=val (response) - pool info http_pool 192.168.101.45 80 - status: 200 (request/response delta: 0ms)
#   Client 192.168.99.210:2675 -> test_http_vip/test2.html?parameter=val (request)
#   Client 192.168.99.210:2675 -> test_http_vip/test2.html?parameter=val (response) - pool info http_pool 192.168.101.45 80 - status: 200 (request/response delta: 1ms)
#   Closed TCP connection from 192.168.99.210:2675 to 192.168.101.41:80 (open for: 1078ms)

when CLIENT_ACCEPTED {
    # Get time for start of TCP connection in milliseconds
    set tcp_start_time [clock clicks -milliseconds]

    # Log the start of a new TCP connection
    log local0. "New TCP connection from [IP::client_addr]:[TCP::client_port] to [IP::local_addr]:[TCP::local_port]"
}
when HTTP_REQUEST {
    # Get time for start of HTTP request
    set http_request_time [clock clicks -milliseconds]

    # Log the start of a new HTTP request
    set LogString "Client [IP::client_addr]:[TCP::client_port] -> [HTTP::host][HTTP::uri]"
    log local0. "$LogString (request)"
}
when LB_SELECTED {
    log local0. "Client [IP::client_addr]:[TCP::client_port]: Selected [LB::server]"
}
when LB_FAILED {
    log local0. "Client [IP::client_addr]:[TCP::client_port]: Failed to [LB::server]"
}
when SERVER_CONNECTED {
    log local0. "Client [IP::client_addr]:[TCP::client_port]: Connected to [IP::server_addr]:[TCP::server_port]"
}
when HTTP_RESPONSE {
    # Received the response headers from the server. Log the pool name, IP and port, status and time delta
    log local0. "$LogString (response) - pool info: [LB::server] - status: [HTTP::status] (request/response delta: [expr {[clock clicks -milliseconds] - $http_request_time}] ms)"
}
when CLIENT_CLOSED {
    # Log the end time of the TCP connection
    log local0. "Closed TCP connection from [IP::client_addr]:[TCP::client_port] to [IP::local_addr]:[TCP::local_port] (open for: [expr {[clock clicks -milliseconds] - $tcp_start_time}] ms)"
}
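On a busy virtual server, writing a line per event to local0 puts real pressure on the local syslog. The same information can be shipped off-box with High Speed Logging instead; here is a hedged sketch of just the request line, where pool_syslog is an assumed pool of remote syslog servers (not something this snippet defines):

when CLIENT_ACCEPTED {
    # pool_syslog is an assumed HSL pool name - create it to point at your syslog receivers
    set hsl [HSL::open -proto UDP -pool pool_syslog]
}
when HTTP_REQUEST {
    # 134 = local0.info in syslog priority encoding
    HSL::send $hsl "<134> Client [IP::client_addr]:[TCP::client_port] -> [HTTP::host][HTTP::uri] (request)\n"
}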
iRule to decode WebSocket negotiation and frames

Problem this snippet solves: WebSocket establishes a socket via an HTTP upgrade, and once the socket is established the subsequent messages are not HTTP but WebSocket frames. There might be a situation where you want to dump the WebSocket negotiation and frames into the log for troubleshooting purposes. This iRule dumps the negotiation, the WebSocket frame header fields, and the payload (text data only). The WebSocket frame format looks as below.

RFC 6455 - 5.2. Base Framing Protocol

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len |    Extended payload length    |
|I|S|S|S|  (4)  |A|     (7)     |             (16/64)           |
|N|V|V|V|       |S|             |   (if payload len==126/127)   |
| |1|2|3|       |K|             |                               |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
|     Extended payload length continued, if payload len == 127  |
+ - - - - - - - - - - - - - - - +-------------------------------+
|                               |Masking-key, if MASK set to 1  |
+-------------------------------+-------------------------------+
| Masking-key (continued)       |          Payload Data         |
+-------------------------------- - - - - - - - - - - - - - - - +
:                     Payload Data continued ...                :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|                     Payload Data continued ...                |
+---------------------------------------------------------------+

The WebSocket frame is quite simple. The first 2 bytes are always present. The Extended payload length exists only when Payload len is set to 126 (16-bit extended length) or 127 (64-bit extended length). The Masking-key exists only when the MASK bit is set to 1.

FIN : Set to 1 on the last frame. If the payload is fragmented, the last frame has FIN = 1 and the other fragments have FIN = 0.
RSV : Set to 0 when no extension is used.
opcode : Tells you whether the frame is a data frame (text or binary) or a control frame.
  %x0 : continuation frame
  %x1 : text frame
  %x2 : binary frame
  %x3-7 : reserved for future use
  %x8 : connection close
  %x9 : ping
  %xA : pong
  %xB-F : reserved for future use
MASK : When the browser sends data, this bit MUST be set to 1, meaning the data is masked with the Masking-key. When the server sends data, this bit MUST NOT be set to 1.
Payload len :
  0 - 125 : this field is the payload length.
  126 : the 16-bit Extended payload length carries the actual payload length (maximum data size 65535 bytes).
  127 : the 64-bit Extended payload length (MSB must be 0) carries the actual payload length (maximum data size 9223372036854775807 bytes).
Extended payload length : Present only when Payload len is 126.
Extended payload length continued : Present only when Payload len is 127.
Masking-key : Used to mask the data. Masking exists to avoid proxy cache poisoning: a non-compliant HTTP proxy could otherwise cache WebSocket data. Present when the MASK bit is 1, absent when it is 0. The client sets this key and it must be unpredictable. (A small unmasking sketch appears after the iRule code below.)
Payload : Payload from client to server is masked using the Masking-key.

How to use this snippet: Here I am sending the text data "DEAD BEEF" in a WebSocket frame via WebSocket_cURL.py (https://github.com/jussmen/WebSocket_cURL). The HTTP request and response (negotiation) look like below.
$ python WebSocket_cURL.py 10.10.148.101 80 -s "DEAD BEEF"

GET /ws HTTP/1.1
Host: 10.10.148.101
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: n5twxG/tNPf8h3po+pNrPA==
User-Agent: IE

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: y9WDs+d4zDl+qvQ7H17KpnP0EhI=

This is how the iRule dumps the negotiation and the subsequent WebSocket frames in /var/log/ltm:

<ws_request>: =============================================
<ws_request>: Client 10.10.1.2:47427 -> 10.10.148.101/ws (request)
<ws_request>: Host: 10.10.148.101
<ws_request>: Connection: Upgrade
<ws_request>: Upgrade: websocket
<ws_request>: Sec-WebSocket-Version: 13
<ws_request>: Sec-WebSocket-Key: n5twxG/tNPf8h3po+pNrPA==
<ws_request>: User-Agent: IE
<ws_request>: =============================================
<ws_response>: =============================================
<ws_response>: Client 10.10.1.2:47427 -> 10.10.148.101/ws (response)
<ws_response>: Upgrade: websocket
<ws_response>: Connection: Upgrade
<ws_response>: Sec-WebSocket-Accept: y9WDs+d4zDl+qvQ7H17KpnP0EhI=
<ws_response>: =============================================
<ws_server_frame>: =============================================
<ws_server_frame>: FIN bit : 1
<ws_server_frame>: MASK bit : 1
<ws_server_frame>: MASK : 0
<ws_server_frame>: Type : Text - 1
<ws_server_frame>: =============================================
<ws_server_data>: =============================================
<ws_server_data>: The server says: 'Hello'. Connection was accepted.
<ws_server_data>: =============================================
<ws_client_frame>: =============================================
<ws_client_frame>: FIN bit : 1
<ws_client_frame>: MASK bit : 1
<ws_client_frame>: MASK : 1178944834
<ws_client_frame>: Type : Text - 1
<ws_client_frame>: =============================================
<ws_client_data>: =============================================
<ws_client_data>: DEAD BEEF
<ws_client_data>: =============================================
<ws_server_frame>: =============================================
<ws_server_frame>: FIN bit : 1
<ws_server_frame>: MASK bit : 1
<ws_server_frame>: MASK : 0
<ws_server_frame>: Type : Text - 1
<ws_server_frame>: =============================================
<ws_server_data>: =============================================
<ws_server_data>: The server says: DEAD BEEF back at you
<ws_server_data>: =============================================
<ws_client_frame>: =============================================
<ws_client_frame>: FIN bit : 1
<ws_client_frame>: MASK bit : 1
<ws_client_frame>: MASK : 1178944834
<ws_client_frame>: Type : Connection close - 8
<ws_client_frame>: =============================================
<ws_server_frame>: =============================================
<ws_server_frame>: FIN bit : 1
<ws_server_frame>: MASK bit : 1
<ws_server_frame>: MASK : 0
<ws_server_frame>: Type : Connection close - 8
<ws_server_frame>: =============================================

Code :

when WS_REQUEST {
    # Copied from : https://devcentral.f5.com/s/articles/log-http-headers
    set LogString "Client [IP::client_addr]:[TCP::client_port] -> [HTTP::host][HTTP::uri]"
    log local0. "============================================="
    log local0. "$LogString (request)"
    foreach aHeader [HTTP::header names] {
        log local0. "$aHeader: [HTTP::header value $aHeader]"
    }
    log local0. "============================================="
}
when WS_RESPONSE {
    # Copied from : https://devcentral.f5.com/s/articles/log-http-headers
    log local0. "============================================="
    log local0. "$LogString (response)"
    foreach aHeader [HTTP::header names] {
        log local0. "$aHeader: [HTTP::header value $aHeader]"
    }
    log local0. "============================================="
}
when WS_CLIENT_FRAME {
    log local0. "============================================="
    log local0. "FIN bit : [WS::frame eom]"
    log local0. "MASK bit : [WS::frame orig_masked]"
    if { [WS::frame orig_masked] eq 0 } {
        log local0. "Not masked. Client frame MUST be masked."
    }
    if { [WS::frame orig_masked] eq 1 } {
        log local0. "MASK : [WS::frame mask]"
    }
    switch -glob [WS::frame type] {
        "0" { log local0. "Type : Continuation frame - 0" }
        "1" {
            log local0. "Type : Text - 1"
            WS::collect frame
        }
        "2" { log local0. "Type : Binary - 2" }
        "3" - "4" - "5" - "6" - "7" { log local0. "Type : Reserved type (3-7) - [WS::frame type]" }
        "8" { log local0. "Type : Connection close - 8" }
        "9" { log local0. "Type : ping - 9" }
        "10" { log local0. "Type : pong - 10" }
        "11" - "12" - "13" - "14" - "15" { log local0. "Type : Reserved type (11-15) - [WS::frame type]" }
    }
    log local0. "============================================="
}
when WS_SERVER_FRAME {
    log local0. "============================================="
    log local0. "FIN bit : [WS::frame eom]"
    log local0. "MASK bit : [WS::frame orig_masked]"
    if { [WS::frame orig_masked] eq 1 } {
        log local0. "MASK : [WS::frame mask]"
    }
    switch -glob [WS::frame type] {
        "0" { log local0. "Type : Continuation frame - 0" }
        "1" {
            log local0. "Type : Text - 1"
            WS::collect frame
        }
        "2" { log local0. "Type : Binary - 2" }
        "3" - "4" - "5" - "6" - "7" { log local0. "Type : Reserved type (3-7) - [WS::frame type]" }
        "8" { log local0. "Type : Connection close - 8" }
        "9" { log local0. "Type : ping - 9" }
        "10" { log local0. "Type : pong - 10" }
        "11" - "12" - "13" - "14" - "15" { log local0. "Type : Reserved type (11-15) - [WS::frame type]" }
    }
    log local0. "============================================="
}
#when WS_CLIENT_FRAME_DONE {
#    log local0. "WS_CLIENT_FRAME_DONE"
#}
#when WS_SERVER_FRAME_DONE {
#    log local0. "WS_SERVER_FRAME_DONE"
#}
when WS_CLIENT_DATA {
    log local0. "============================================="
    log local0. "[WS::payload]"
    log local0. "============================================="
    WS::release
}
when WS_SERVER_DATA {
    log local0. "============================================="
    log local0. "[WS::payload]"
    log local0. "============================================="
    WS::release
}

Tested this on version: 12.0
Link Tracking

Problem this snippet solves: This iRule application will track URIs that come in through your virtual server and store them in a table with the number of times they have been referenced. The data stored can then be viewed by requesting the "/linkadmin" URL, which generates a table of the URIs and their view counts.

Code :

when HTTP_REQUEST {
    set TABLE_LINKDATA "LINK_TRACKING_[virtual name]"
    switch [string tolower [HTTP::uri]] {
        "/linkadmin" {
            # Build an HTML report: a Clear Data link plus a URI/Views table.
            # The markup here is a minimal layout - adjust the report to taste.
            set content "<html><body><a href='/linkcleardata'>Clear Data</a><table border='1'><tr><th>URI</th><th>Views</th></tr>"
            foreach key [table keys -subtable $TABLE_LINKDATA] {
                append content "<tr><td>$key</td><td>[table lookup -subtable $TABLE_LINKDATA $key]</td></tr>"
            }
            append content "</table></body></html>"
            HTTP::respond 200 Content $content
        }
        "/linkcleardata" {
            table delete -subtable $TABLE_LINKDATA -all
            HTTP::redirect "http://[HTTP::host]/linkadmin"
        }
        default {
            if { [table incr -subtable $TABLE_LINKDATA -mustexist [HTTP::uri]] eq "" } {
                table set -subtable $TABLE_LINKDATA [HTTP::uri] 1 indefinite indefinite
            }
        }
    }
}
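Note that /linkadmin and /linkcleardata are reachable by anyone who can reach the virtual server. If that matters, a small guard can go at the top of the HTTP_REQUEST event, before the switch; the 10.0.0.0/8 management range here is only an example:

# Hedged sketch: restrict the admin pages to an assumed management range
if { (([string tolower [HTTP::uri]] eq "/linkadmin") || ([string tolower [HTTP::uri]] eq "/linkcleardata")) && !([IP::addr [IP::client_addr] equals 10.0.0.0/8]) } {
    HTTP::respond 403 content "Forbidden"
    return
}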
Heatmaps Part1

Problem this snippet solves: I’ve been tinkering with a way to allow iRules to make that easier, and to allow those interested parties to see some usage statistics in a visually interesting, real-time manner, without adding much heavy lifting for you or your application. The idea is simple: create a heat map view of your HTTP requests, mapped to locations around the United States (to start with). This will give you an idea which areas are most heavily utilizing your application, in a very easy-on-the-eyes fashion. Best of all, of course, is that we’re going to generate this 100% with iRules. For the full write-up check out the article. (Note: There is an update to this iRule using the newer GeoCharts.)

Code: (as gist)

when HTTP_REQUEST {
    if {[HTTP::uri] starts_with "/heatmap"} {
        set chld ""
        set chd ""
        foreach state [table keys -subtable states] {
            append chld $state
            append chd "[table lookup -subtable states $state],"
        }
        set chd [string trimright $chd ","]
        HTTP::respond 200 content "<HTML><center><font size=5>Here is your site's usage by state:</font><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=usa&chd=t:$chd&chld=$chld&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'><a href='/resetmap'>Reset Map</a></center></HTML>"
    } elseif {[HTTP::uri] starts_with "/resetmap"} {
        foreach state [table keys -subtable states] {
            table delete -subtable states $state
        }
        HTTP::respond 200 Content "<HTML><center>Table Cleared. <a href='/heatmap'>Return to Map</a></HTML>"
    } else {
        set loc [whereis [IP::client_addr] abbrev]
        if {$loc eq ""} {
            set ip [expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }]
            set loc [whereis $ip abbrev]
        }
        if {[table incr -subtable states -mustexist $loc] eq ""} {
            table set -subtable states $loc 1 indefinite indefinite
        }
    }
}
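To make the chart request concrete: if the states subtable holds, say, WA with 12 hits and TX with 5, the loop builds chd=t:12,5 and chld=WATX, so the embedded image ends up pointing at a URL along these lines (values purely illustrative):

http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm=usa&chd=t:12,5&chld=WATX&chco=f5f5f5,edf0d4,6c9642,365e24,13390a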
Heatmaps Part4

Problem this snippet solves: The final installment of the HeatMaps tech tip series: here is the completed project. At this stage you can view the entire world map, zoom to different regions, sort by URI, and you even get a readout of the actual numbers driving the maps, so you can tell exactly how many requests per region you're getting to each URI in question. Full tech tip here.

Code :

when RULE_INIT {
    ## Configure static portions of the HTML response for the heatmap pages
    set static::resp1 "<HTML><table width='100%' border='0'><tr><td></td><td><center><font size=5>Here is your site's usage:\
        </font></td></tr><tr><td align='center' rowspan='5'><b><u>Connections per Region:</u></b>"
    set static::resp2 "</td><td><center><img src='http://chart.apis.google.com/chart?cht=t&chd=&chs=440x220&chtm="
    set static::resp3 "&chco=f5f5f5,edf0d4,6c9642,365e24,13390a' border='0'></center></td></tr><td><center>Zoom to region:\
        <a href='/asia'>Asia</a> | <a href='/africa'>Africa</a> | <a href='/europe'>Europe</a> | <a href='/middle_east'>Middle East</a> | \
        <a href='/south_america'>South America</a> | <a href='/usa'>United States</a> | <a href='/heatmap'>World</a></td></tr><tr><td><center>"
    set static::resp4 "<tr><td><center><a href='/resetmap'>Reset All Counters</a></center></td></tr><tr></tr></HTML>"
}
when HTTP_REQUEST timing on {
    switch -glob [string tolower [HTTP::uri]] {
        "/asia*" -
        "/africa*" -
        "/europe*" -
        "/middle_east*" -
        "/south_america*" -
        "/usa*" -
        "/world*" -
        "/heatmap*" {
            set chld ""
            set chd ""
            set zoom ""
            set zoomURL ""
            set regions ""
            set urlTotal 0
            set regionTotal 0

            ## Split apart the zoom region from the filter URL in the request
            set zoom [getfield [string map {"/" "" "heatmap" "world"} [HTTP::uri]] "?" 1]
            set zoomURL [getfield [string map {"/" "" "heatmap" "world"} [HTTP::uri]] "?" 2]

            ## Get a list of all states or countries, applying the URL filter where necessary
            ## and retrieve the associated count of requests from that area to that URL
            ## First step through the mytables table, which is a pointer table referencing all subtables with counter values in them
            foreach mysub [table keys -subtable mytables] {
                ## Next determine whether to search state or country tables
                if {$zoom eq "usa"} {
                    if {$mysub starts_with "state:"} {
                        ## For each state sub table step through each key, which will be a URL, and count the requests to that URL.
                        ## This is also where URL filtering is applied if applicable
                        foreach myurl [table keys -subtable $mysub] {
                            if {$zoomURL ne ""} {
                                if {$myurl eq $zoomURL} {
                                    append chld "[getfield $mysub ":" 2]"
                                    append chd "[table lookup -subtable $mysub $myurl],"
                                    set urlTotal [table lookup -subtable $mysub $myurl]
                                }
                            } else {
                                append chld "[getfield $mysub ":" 2]"
                                append chd "[table lookup -subtable $mysub $myurl],"
                                set urlTotal [table lookup -subtable $mysub $myurl]
                            }
                            set regionTotal [expr $regionTotal + $urlTotal]
                            set urlTotal 0
                        }
                        append regions "[getfield $mysub ":" 2] : $regionTotal"
                        set regionTotal 0
                    }
                }
            }

            ## Send back the pre-formatted response, set in RULE_INIT, combined with the map zoom, list of areas, and request count
            set chd [string trimright $chd ","]

            ## First loop through the trackingurls class to get a list of all URLs to be tracked and format HTML around them for links
            set filters ""
            foreach mytrackingurl [class names trackingurls] {
                append filters "<a href='${zoom}?${mytrackingurl}'>${mytrackingurl}</a> | "
            }
            set filters [string trimright $filters " | "]

            ## Combine the above generated HTML with the static HTML in RULE_INIT and respond to the client
            HTTP::respond 200 content "${static::resp1}${regions}${static::resp2}${zoom}&chd=t:${chd}&chld=${chld}${static::resp3} \
                Filter by URL: <a href='/$zoom'>All URLs</a> | $filters\
                $static::resp4"
        }
        "/resetmap" {
            foreach pointertable [table keys -subtable mytables] {
                foreach entry [table keys -subtable $pointertable] {
                    table delete -subtable $pointertable $entry
                }
            }
            foreach pointerentry [table keys -subtable mytables] {
                table delete -subtable mytables $pointerentry
            }
            HTTP::respond 200 Content "<HTML><center>Table Cleared. <a href='/heatmap'>Return to Map</a></HTML>"
        }
        default {
            ## Look up country & state locations
            set cloc [whereis [IP::client_addr] country]
            set sloc [whereis [IP::client_addr] abbrev]

            ## If the IP doesn't resolve to anything, pick a random IP (useful for testing on private networks)
            if {($cloc eq "") and ($sloc eq "")} {
                set ip [expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }].[expr { int(rand()*255) }]
                set cloc [whereis $ip country]
                set sloc [whereis $ip abbrev]
                if {($cloc eq "") or ($sloc eq "")} {
                    set cloc "US"
                    set sloc "WA"
                }
            }

            ## Strip slashes from URI to allow easy queries
            set friendlyURL [string map {/ ""} [HTTP::uri]]

            ## Create a new table named country:location or state:location
            if {[table incr -subtable country:$cloc -mustexist $friendlyURL] eq ""} {
                table set -subtable country:$cloc $friendlyURL 1 indefinite indefinite
            }
            ## Update the mytables pointer table with the new country or state table name
            if {[table incr -subtable mytables -mustexist country:$cloc] eq ""} {
                table set -subtable mytables country:$cloc 1 indefinite indefinite
            }
            ## Same as above for states, not countries.
            if {$cloc eq "US"} {
                if {[table incr -subtable state:$sloc -mustexist $friendlyURL] eq ""} {
                    table set -subtable state:$sloc $friendlyURL 1 indefinite indefinite
                }
                if {[table incr -subtable mytables -mustexist state:$sloc] eq ""} {
                    table set -subtable mytables state:$sloc 1 indefinite indefinite
                }
            }
            HTTP::respond 200 Content "Added - Country: $cloc State: $sloc"
        }
    }
}
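The filter links are built from a data group named trackingurls, which this iRule expects to already exist; its entries are the URIs you want to be able to filter on (matching the slash-stripped friendlyURL keys). A minimal example in the same bigip.conf class syntax used elsewhere in this collection, with made-up entries:

# Snippet in bigip.conf (entry names are examples only)
class trackingurls {
   "index.html"
   "products"
   "support"
}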
Performance Logging iRule (Rule_http_log)

Problem this snippet solves: Here's a logging iRule. You'll need an HSL syslog pool to log to. Various bits gathered from other posts on DevCentral. Sharing in case there is interest. Make sure your rsyslogd is set up to use the newer syslog format like RFC-5424, including milliseconds and timezone info. Includes Country (co) and logs individual request times for each request on an HTTP/1.1 connection.

To configure F5 logging to use milliseconds and timezone, disable logging in the GUI and use tmsh edit sys syslog and something like:

include "
    # short hostnames
    options { use_fqdn(no); };
    # Remote syslog in RFC5424 - Tim Riker <Tim@Rikers.org>
    destination remotesyslog { syslog(\"10.1.2.3\" transport(\"udp\") port(51443) ts_format(iso)); };
    log { source(s_syslog_pipe); destination(remotesyslog); };
"

Uses upvar and proc. Tested on 11.6 - 15.1. This tracks connection info in a table and then copies that down to the per-request log() array to handle reporting on HTTP/2. This version works around a BIG-IP bug where HTTP::version does not report 2 or higher for HTTP/2 and later requests. With HTTP/2 profiles, subsequent requests using the same connection can generate this error in the logs if HTTP::respond, HTTP::redirect, or HTTP::retry is called from an earlier iRule. Reorder your iRules to avoid this.

<HTTP_REQUEST> - No HTTP header is cached - ERR_NOT_SUPPORTED (line 1) invoked from within "HTTP::method"

How to use this snippet: Add this iRule to whatever virtual hosts you desire. I always add it as the first rule. If you have a rule that sets headers you want to track, you may want this after the rule that sets headers.

Interesting Splunk queries can be created like:
index=* perflog | timechart avg(cpu_5sec) by host limit=10 - to show load across multiple F5s.
index=* perflog | timechart max(upstream_time) by http_host limit=10 - to show long request times by http_host.

Any other iRule may add things to the log() array and those will get added to the single HSL output. If you create a dg_http_log datagroup, that will be used to filter what gets logged.

Tested on version: 13.0

# Rule_http_log
# http logging - Tim Riker <Tim@Rikers.org>
# bits taken from this post:
# https://devcentral.f5.com/questions/irule-for-getting-total-response-time-server-response-time-and-server-connection-time
# iRule performance tracking
# https://devcentral.f5.com/questions/Timing-iRules
timing on
# timing is on by default in 11.5.0+ to see stats:
# tmsh show ltm rule Rule_http_log
#
# if the dg_http_log datagroup exists then vips or hosts/paths in dg_http_log that start with
# "NONE" no logging (really anything other than empty)
# "INFO" normal logging
# "FINE" full request and response headers and CLIENT_CLOSED
#
# upstream_time := 15000 in the datagroup to log all requests over 15 seconds
#
# example:
# "/Common/vs_www.example.com_HTTPS" := "FINE" - logged including CLIENT_CLOSED
# "www.example.com/" := "INFO" - logged
# "www.example.com/somepath" := "FINE" - full headers
# "www.example.com/otherpath" := "NONE" - not logged
when RULE_INIT {
    # hostname up to first dot
    set static::hostname [getfield [info hostname] "." 1]
}
# not calling /Common/proc:hsllog as this logs when the request occurred
# instead of the time it calls hsllog at the end of the request
proc hsllog {time mylog} {
    upvar 1 $mylog log
    # https://tools.ietf.org/html/rfc5424 <local0.info>version rfc-3339time host procid msgid structured_data log
    # should be able to use a "Z" here instead of "+00:00" but our splunk logs don't handle that
    # 134 = local0.info
    set output "<134>1 [clock format [string range $time 0 end-3] -gmt 1 -format %Y-%m-%dT%H:%M:%S.[string range $time end-2 end]+00:00] ${static::hostname} httplog [TMM::cmp_group].[TMM::cmp_unit] - -"
    foreach key [lsort [array names log]] {
        if { ($log($key) matches_regex {[\" ;,:]}) } {
            append output " $key=\"[string map {\" "|"} $log($key)]\""
        } else {
            append output " $key=$log($key)"
        }
    }
    # avoid marking virtual server up when hsl pool is up
    # https://support.f5.com/csp/article/K14505
    set hsl pool_syslog
    HSL::send [HSL::open -proto UDP -pool $hsl] $output
}
when CLIENT_ACCEPTED {
    # calculate and track milliseconds
    # is this / 1000 guaranteed to be clock seconds? TCL docs say no, but it looks like on f5 it is.
    set tcp_start_time [clock clicks -milliseconds]
    set log(loglevel) 0
    if { [class exists dg_http_log] } {
        # virtual name entries need to be full path, ie: /Common/vs_www.example.com_HTTP
        switch -- [string range [class match -value -- [virtual name] equals dg_http_log] 0 3] {
            "FINE" { set log(loglevel) 2 }
            "INFO" { set log(loglevel) 1 }
            default { set log(loglevel) 0 }
        }
    }
    table set -subtable [IP::client_addr]:[TCP::client_port] loglevel $log(loglevel)
    table set -subtable [IP::client_addr]:[TCP::client_port] tmm "[TMM::cmp_group].[TMM::cmp_unit]"
    table set -subtable [IP::client_addr]:[TCP::client_port] client_addr [IP::client_addr]
    table set -subtable [IP::client_addr]:[TCP::client_port] client_port [TCP::client_port]
    table set -subtable [IP::client_addr]:[TCP::client_port] cpu_5sec [cpu usage 5secs]
    table set -subtable [IP::client_addr]:[TCP::client_port] virtual_name [virtual name]
    set co [whereis [IP::client_addr] country]
    if { $co eq "" } {
        set co unknown
    }
    table set -subtable [IP::client_addr]:[TCP::client_port] co $co
}
when HTTP_REQUEST {
    set http_request_time [clock clicks -milliseconds]
    set keys [table keys -subtable [IP::client_addr]:[TCP::client_port]]
    foreach key $keys {
        set log($key) "[table lookup -subtable "[IP::client_addr]:[TCP::client_port]" "$key"]"
    }
    if {[HTTP::has_responded]} {
        # The rule should come BEFORE any rules that do things like redirects
        set log(http_has_responded) [HTTP::has_responded]
        set log(loglevel) 1
        set log(event) HTTP_REQUEST
        call hsllog $http_request_time log
        return
    }
    if { [class exists dg_http_log] } {
        set logsetting [class match -value -- [HTTP::host][HTTP::uri] starts_with dg_http_log]
        if { $logsetting ne "" } {
            # override log(loglevel) if we found something
            switch -- [string range $logsetting 0 3] {
                "FINE" { set log(loglevel) 2 }
                "INFO" { set log(loglevel) 1 }
                default { set log(loglevel) 0 }
            }
        }
    }
    set log(http_host) [HTTP::host]
    set log(http_uri) [HTTP::uri]
    set log(http_method) [HTTP::method]
    # request_num might not be accurate for HTTP2
    set log(request_num) [HTTP::request_num]
    set log(request_size) [string length [HTTP::request]]
    # BUG http2 reported as http1 in pre 16.x
    # https://cdn.f5.com/product/bugtracker/ID842053.html
    set log(http_version) [HTTP::version]
    if { [catch \[HTTP2::version\] result] == 1 } {
        if { $result contains "Operation not supported" } {
            #log local0. "HTTP version is: [HTTP::version]"
        } else {
            set h2ver [eval "\HTTP2::version"]
            # we might have http2 support, but not be http2
            if { $h2ver != 0 } {
                set log(http_version) $h2ver
            }
        }
    }
    #log local0. "http_version = $log(http_version)"
    if { $log(loglevel) > 1 } {
        foreach {header} [HTTP::header names] {
            set log(req-$header) [HTTP::header $header]
        }
    } else {
        foreach {header} {"connection" "content-length" "keep-alive" "last-modified" "policy-cn" "referer" "transfer-encoding" "user-agent" "x-forwarded-for" "x-forwarded-proto" "x-forwarded-scheme"} {
            if { [HTTP::header exists $header] } {
                set log(req-$header) [HTTP::header $header]
            }
        }
    }
}
when LB_SELECTED {
    set lb_selected_time [clock clicks -milliseconds]
    set log(server_addr) [LB::server addr]
    set log(server_port) [LB::server port]
    set log(pool) [LB::server pool]
}
when SERVER_CONNECTED {
    set log(connection_time) [expr {[clock clicks -milliseconds] - $lb_selected_time}]
    set log(snat_addr) [IP::local_addr]
    set log(snat_port) [TCP::local_port]
}
when LB_FAILED {
    set log(event_info) [event info]
}
when HTTP_REJECT {
    set log(http_reject) [HTTP::reject_reason]
}
when HTTP_REQUEST_SEND {
    set http_request_send_time [clock clicks -milliseconds]
}
when HTTP_RESPONSE {
    set log(upstream_time) [expr {[clock clicks -milliseconds] - $http_request_send_time}]
    set log(http_status) [HTTP::status]
    if { $log(loglevel) > 1 } {
        foreach {header} [HTTP::header names] {
            set log(res-$header) [HTTP::header $header]
        }
    } else {
        foreach {header} {"cache-control" "connection" "content-encoding" "content-length" "content-type" "content-security-policy" "keep-alive" "last-modified" "location" "server" "www-authenticate"} {
            if { [HTTP::header exists $header] } {
                set log(res-$header) [HTTP::header $header]
            }
        }
    }
    # if logging is off, but upstream_time is over threshold in datagroup, log anyway
    if { ($log(loglevel) < 1) && [class exists dg_http_log] } {
        set log_upstream_time [class match -value -- upstream_time equals dg_http_log]
        if {$log_upstream_time ne "" && $log(upstream_time) >= $log_upstream_time} {
            set log(over_upstream_time) $log_upstream_time
            set log(loglevel) 1
        }
    }
}
when HTTP_RESPONSE_RELEASE {
    if { [info exists http_request_time] } {
        set log(http_time) "[expr {[clock clicks -milliseconds] - $http_request_time}]"
        # push http_time into table so CLIENT_CLOSED can see it in HTTP/2
        table set -subtable [IP::client_addr]:[TCP::client_port] http_time $log(http_time)
    } else {
        set http_request_time [clock clicks -milliseconds]
    }
    set log(event) HTTP_RESPONSE_RELEASE
    if { $log(loglevel) > 0 } {
        call hsllog $http_request_time log
    }
}
when HTTP_DISABLED {
    set log(http_passthrough_reason) [HTTP::passthrough_reason]
}
when CLIENT_CLOSED {
    # grab log() values from table
    set keys [table keys -subtable [IP::client_addr]:[TCP::client_port]]
    foreach key $keys {
        set log($key) "[table lookup -subtable "[IP::client_addr]:[TCP::client_port]" "$key"]"
    }
    set log(tcp_time) "[expr {[clock clicks -milliseconds] - $tcp_start_time}]"
    set log(event) CLIENT_CLOSED
    # http_time didn't get set, log here (HTTP_RESPONSE_RELEASE never called, catch redirects, aborted connections)
    if { not ([info exists log(http_time)]) } {
        if { [info exists http_request_time] } {
            # called HTTP_REQUEST but not HTTP_RESPONSE_RELEASE using HTTP 1.0 or 1.1
            set log(http_time) "[expr {[clock clicks -milliseconds] - $http_request_time}]"
        }
        call hsllog $tcp_start_time log
    } elseif { $log(loglevel) > 1 } {
        call hsllog $tcp_start_time log
    }
    # clean out table when client disconnects
    table delete -subtable [IP::client_addr]:[TCP::client_port] -all
}
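For reference, a dg_http_log data group matching the examples above could be created with tmsh along these lines; the hostnames and the 15000 ms threshold are just the sample values from this description:

create ltm data-group internal dg_http_log type string records add { "www.example.com/" { data "INFO" } "www.example.com/somepath" { data "FINE" } "upstream_time" { data "15000" } }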
FTP Session Logging

Problem this snippet solves: This iRule logs FTP connections and username information. By default, connection mapping from client through BIG-IP to server is logged as well as the username entered by the client. Optionally you can log the entire FTP session by uncommenting the log message in CLIENT_DATA.

Code :

# This iRule logs FTP connections and username information.
# By default connection mapping from client through BIG-IP to server is logged
# as well as the username entered by the client. Optionally you can log the
# entire FTP session by uncommenting the log message in CLIENT_DATA.

when CLIENT_ACCEPTED {
    set vip [IP::local_addr]:[TCP::local_port]
    set user "unknown"
}
when CLIENT_DATA {
    # uncomment for full session logging
    #log local0. "[IP::client_addr]:[TCP::client_port]: collected payload ([TCP::payload length]): [TCP::payload]"

    # check if payload contains the string we want to replace
    if { [TCP::payload] contains "USER" } {
        # use a regular expression to save the user name
        ## regex modified by arkashik
        regexp "USER \(\[a-zA-Z0-9_-]+)" [TCP::payload] all user
        # log connection mapping from client through BIG-IP to server
        log local0. "FTP connection from $client. Mapped to $inside -> $node, user $user"
        TCP::release
        TCP::collect
    } else {
        TCP::release
        TCP::collect
    }
}
when SERVER_CONNECTED {
    set client "[IP::client_addr]:[TCP::client_port]"
    set node "[IP::server_addr]:[TCP::server_port]"
    set inside "[serverside {IP::local_addr}]:[serverside {TCP::local_port}]"
    TCP::collect
}
when SERVER_DATA {
    TCP::release
    clientside { TCP::collect }
}
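To also record whether the login actually succeeded, the server side can keep collecting and watch for the FTP 230 reply (user logged in). This is only a hedged sketch of how the SERVER_DATA event above could be extended; it assumes $user and $client have already been set by the other events:

when SERVER_DATA {
    # Sketch only: FTP reply 230 means the USER/PASS exchange succeeded
    if { [TCP::payload] starts_with "230" } {
        log local0. "FTP login successful for user $user from $client"
    }
    TCP::release
    # keep watching server replies for the rest of the control session
    TCP::collect
    # start client-side collection only once
    if { ![info exists client_collecting] } {
        set client_collecting 1
        clientside { TCP::collect }
    }
}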
Log large HTTP payloads in chunks locally and remotely

Problem this snippet solves: Log HTTP POST request payloads remotely via High Speed Logging (HSL) to a syslog server and locally.

Code :

# Log POST request payloads remotely via HSL to a syslog server and locally.
# Based on Steve Hillier's example and the HTTP::collect wiki page
# https://devcentral.f5.com/s/wiki/iRules.http__collect.ashx
# Note that although any size payload can theoretically be collected, the maximum size of a Tcl variable in v9 and v10 is 4MB
# with a smaller functional maximum after charset expansion of approximately 1Mb.
# In v11, the maximum variable size was increased to 32Mb.

when RULE_INIT {
    # Log debug to /var/log/ltm? 1=yes, 0=no
    set static::payload_dbg 1

    # Limit payload collection to 5Mb
    set static::max_collect_len 5242880

    # HSL pool name
    set static::hsl_pool "my_hsl_tcp_pool"

    # Max characters to log locally (must be less than 1024 bytes)
    # https://devcentral.f5.com/s/wiki/iRules.log.ashx
    set static::max_chars 900
}
when HTTP_REQUEST {
    # Only collect POST request payloads
    if {[HTTP::method] equals "POST"}{
        if {$static::payload_dbg}{log local0. "POST request"}

        # Open HSL connection
        set hsl [HSL::open -proto TCP -pool $static::hsl_pool]

        # Get the content length so we can request the data to be processed in the HTTP_REQUEST_DATA event.
        if {[HTTP::header exists "Content-Length"]}{
            set content_length [HTTP::header "Content-Length"]
        } else {
            set content_length 0
        }
        # content_length of 0 indicates chunked data (of unknown size)
        if {$content_length > 0 && $content_length < $static::max_collect_len}{
            set collect_length $content_length
        } else {
            set collect_length $static::max_collect_len
        }
        if {$static::payload_dbg}{log local0. "Content-Length: $content_length, Collect length: $collect_length"}

        # Collect the request payload so the HTTP_REQUEST_DATA event fires
        HTTP::collect $collect_length
    }
}
when HTTP_REQUEST_DATA {
    # Log the bytes collected
    if {$static::payload_dbg}{log local0. "Collected [HTTP::payload length] bytes"}

    # Send all the collected payload to the remote syslog server
    HSL::send $hsl "<190>[HTTP::payload]\n"

    # Log the payload locally
    if {[HTTP::payload length] < $static::max_chars}{
        log local0. "Payload=[HTTP::payload]"
    } else {
        # Initialize variables
        set remaining [HTTP::payload]
        set count 1
        set bytes_logged 0

        # Loop through and log each max_chars-sized chunk of the payload
        while {[string length $remaining] > $static::max_chars}{
            # Get the current chunk to log (subtract 1 from the end as string range is 0 indexed)
            set current [string range $remaining 0 [expr {$static::max_chars - 1}]]
            log local0. "chunk $count=$current"

            # Drop the chunk just logged from the front of the remaining payload
            set remaining [string range $remaining $static::max_chars end]
            incr count
            incr bytes_logged [string length $current]
            log local0. "remaining bytes=[string length $remaining], \$count=$count, \$bytes_logged=$bytes_logged"
        }
        if {[string length $remaining]}{
            log local0. "chunk $count=$remaining"
            incr bytes_logged [string length $remaining]
        }
        log local0. "Logged $count chunks for a total of $bytes_logged bytes"
    }
}
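The rule sends the collected payload to the pool named in static::hsl_pool, so that pool must exist before the iRule is attached to a virtual server. A minimal tmsh example; the member address and port are placeholders for your own syslog receiver:

create ltm pool my_hsl_tcp_pool members add { 192.0.2.50:514 } monitor tcp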
Log Http Headers

Problem this snippet solves: This simple rule logs all HTTP headers in requests and responses to /var/log/ltm. This can be helpful in troubleshooting.

Code :

when HTTP_REQUEST {
    set LogString "Client [IP::client_addr]:[TCP::client_port] -> [HTTP::host][HTTP::uri]"
    log local0. "============================================="
    log local0. "$LogString (request)"
    foreach aHeader [HTTP::header names] {
        log local0. "$aHeader: [HTTP::header value $aHeader]"
    }
    log local0. "============================================="
}
when HTTP_RESPONSE {
    log local0. "============================================="
    log local0. "$LogString (response) - status: [HTTP::status]"
    foreach aHeader [HTTP::header names] {
        log local0. "$aHeader: [HTTP::header value $aHeader]"
    }
    log local0. "============================================="
}

# Sample output:
Rule log_http_headers_rule : =============================================
Rule log_http_headers_rule : Client 192.168.99.32:2950 -> webmail.example.com/exchange/Aaron/Inbox/?Cmd=contents (request)
Rule log_http_headers_rule : Host: webmail
Rule log_http_headers_rule : User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.9)
Rule log_http_headers_rule : Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,im
Rule log_http_headers_rule : Accept-Language: en-us,en;q=0.5
Rule log_http_headers_rule : Accept-Encoding: gzip,deflate
Rule log_http_headers_rule : Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Rule log_http_headers_rule : Keep-Alive: 300
Rule log_http_headers_rule : Connection: keep-alive
Rule log_http_headers_rule : Referer: https://webmail.example.com/exchange/
Rule log_http_headers_rule : X-Forwarded-For: 192.168.99.32
Rule log_http_headers_rule : Front-End-Https: On
Rule log_http_headers_rule : =============================================
Rule log_http_headers_rule : =============================================
Rule log_http_headers_rule : Client 192.168.99.32:2950 -> webmail.example.com/exchange/Aaron/Inbox/?Cmd=contents (response) - status: 200
Rule log_http_headers_rule : Date: Tue, 06 Nov 2007 16
Rule log_http_headers_rule : Server: Microsoft-IIS/6.0
Rule log_http_headers_rule : X-Powered-By: ASP.NET
Rule log_http_headers_rule : Content-Type: text/html
Rule log_http_headers_rule : Content-Length: 55446
Rule log_http_headers_rule : MS-WebStorage: 6.5.7638
Rule log_http_headers_rule : Cache-Control: no-cache
Rule log_http_headers_rule : =============================================
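This rule logs every request and response on the virtual server, which can flood /var/log/ltm on a busy system. One low-effort way to scope it while troubleshooting is to wrap the logging in a client-address check; the 192.168.99.0/24 subnet below is only an example:

when HTTP_REQUEST {
    # Only log for clients in an example troubleshooting subnet
    if { [IP::addr [IP::client_addr] equals 192.168.99.0/24] } {
        log local0. "Client [IP::client_addr]:[TCP::client_port] -> [HTTP::host][HTTP::uri] (request)"
        foreach aHeader [HTTP::header names] {
            log local0. "$aHeader: [HTTP::header value $aHeader]"
        }
    }
}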
Command Performance

Problem this snippet solves: The article Ten Steps to iRules Optimization illustrates some ways to optimize your iRules. I took a look at the control statements and built a little iRule that will test those assertions and generate performance graphs using Google Charts to present the findings.

How to use this snippet:

Dependencies
This iRule relies on external class files for the test of the "class match" command. The class names should be in the form "calc_xxx", where xxx is the list size you want to test. Include xxx entries with values from 0 to xxx-1. For a list size of 10, the class should look like this:

# Snippet in bigip.conf
class calc_10 { "0" "1" "2" "3" "4" "5" "6" "7" "8" "9" }

I used perl to generate larger classes of size 100, 1000, 5000, and 10000 for my tests.

Usage
Assign the iRule to a virtual server and then browse to the URL http://virtualserver/calccommands. I've included query string arguments to override the default test parameters as follows:

ls=nnn - List Size. You will need a class defined titled calc_10 for a value of ls=10.
i=nnn - Number of iterations. This is how many times the test is performed for each list size.
gw=nnn - Graph Width (default value of 300)
gh=nnn - Graph Height (default value of 200)
ym=nnn - Graph Y Max value (default 500)

An example usage is: http://virtualserver/calccommands?ls=1000&i=500. This will work on a list size of 1000 with 500 iterations per test.

Code :

when HTTP_REQUEST {
    #--------------------------------------------------------------------------
    # read in parameters
    #--------------------------------------------------------------------------
    set listsize [URI::query [HTTP::uri] "ls"];
    set iterations [URI::query [HTTP::uri] "i"];
    set graphwidth [URI::query [HTTP::uri] "gw"];
    set graphheight [URI::query [HTTP::uri] "gh"];
    set ymax [URI::query [HTTP::uri] "ym"];

    #--------------------------------------------------------------------------
    # set defaults
    #--------------------------------------------------------------------------
    if { ("" == $iterations) || ($iterations > 10000) } { set iterations 500; }
    if { "" == $listsize } { set listsize 5000; }
    if { "" == $graphwidth } { set graphwidth 300; }
    if { "" == $graphheight } { set graphheight 200; }
    if { "" == $ymax } { set ymax 500; }
    set modulus [expr $listsize / 5];
    set autosize 0;

    #--------------------------------------------------------------------------
    # build lookup list
    #--------------------------------------------------------------------------
    set matchlist "0";
    for {set i 1} {$i < $listsize} {incr i} {
        lappend matchlist "$i";
    }

    set luri [string tolower [HTTP::path]]
    switch -glob $luri {
        "/calccommands" {
            #----------------------------------------------------------------------
            # check for existence of class file. If it doesn't exist
            # print out a nice error message. Otherwise, generate a page of
            # embedded graphs that route back to this iRule for processing
            # (the HTML markup below is a minimal page layout; adjust to taste)
            #----------------------------------------------------------------------
            if { [catch { class match "1" equals calc_$listsize } ] } {
                # error
                set content "<html><body>BIG-IP Version $static::tcl_platform(tmmVersion)<br/>"
                append content "ERROR: class file 'calc_$listsize' not found"
                append content "</body></html>"
            } else {
                # Build the html and send requests back in for the graphs...
                set content "<html><body>BIG-IP Version $static::tcl_platform(tmmVersion)<br/>"
                append content "List Size: ${listsize}<br/>"
                set c 0;
                foreach item $matchlist {
                    set mod [expr $c % $modulus];
                    if { $mod == 0 } {
                        # embed a graph for this item; the image request routes back into this iRule
                        append content "<img src='/calccommands/$item' border='0' />";
                    }
                    incr c;
                }
                append content "</body></html>";
            }
            HTTP::respond 200 content $content;
        }
        "/calccommands/*" {
            #----------------------------------------------------------------------
            # Time various commands (switch, switch -glob, if/elseif, matchclass,
            # class match) and generate redirect to a Google Bar Chart
            #----------------------------------------------------------------------
            set item [getfield $luri "/" 3]
            set labels "|"
            set values ""

            #----------------------------------------------------------------------
            # Switch
            #----------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "switch $item {"
            foreach i $matchlist {
                append expression "\"$i\" { } ";
            }
            append expression " } "
            append expression " } \n"
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "s|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #----------------------------------------------------------------------
            # Switch -glob
            #----------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "switch -glob $item {"
            foreach i $matchlist {
                append expression "\"$i\" { } ";
            }
            append expression " } "
            append expression " } \n"
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "s-g|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #----------------------------------------------------------------------
            # If/Elseif
            #----------------------------------------------------------------------
            set z 0;
            set y 0;
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            foreach i $matchlist {
                if { $z > 0 } { append expression "else"; }
                append expression "if { $item eq \"$i\" } { } ";
                incr z;
            }
            append expression " } \n";
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "If|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #----------------------------------------------------------------------
            # Matchclass on list
            #----------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "if { \[matchclass $item equals \$matchlist \] } { }"
            append expression " } \n";
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "mc|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #----------------------------------------------------------------------
            # class match (with class)
            #----------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "if { \[class match $item equals calc_$listsize \] } { }"
            append expression " } \n";
            append expression "set t2 \[clock clicks -milliseconds\]";
            log local0. $expression;
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "c|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #----------------------------------------------------------------------
            # build redirect for the google chart and issue a redirect
            #----------------------------------------------------------------------
            set mod [expr $item % 10]
            set newuri "http://${mod}.chart.apis.google.com/chart?chxl=0:${labels}&chxr=1,0,${ymax}&chxt=x,y"
            append newuri "&chbh=a&chs=${graphwidth}x${graphheight}&cht=bvg&chco=A2C180&chds=0,${ymax}&chd=t:${values}"
            append newuri "&chdl=(in+ms)&chtt=Perf+(${iterations}-${item}/${listsize})&chg=0,2&chm=D,0000FF,0,0,3,1"
            HTTP::redirect $newuri;
        }
    }
}