hsl (34 Topics)

Intermediate iRules: High Speed Logging - Spray Those Log Statements!

High Speed Logging has been around since version 10.1 and has been integral to many projects over the past few years. Prior to HSL's introduction, remote logging was configured entirely in syslog, or could be handled in iRules by specifying a destination in the log statement. One enhancement HSL brought to that scenario was the ability to configure a pool of servers as the destination, so that the log messages were sure to arrive somewhere (OK, for TCP they were sure to arrive!). A drawback with either the log or the HSL::send command, however, is that the message only hits one destination. A workaround for that problem is to use as many commands as necessary to hit all your destinations, but that's not very efficient. Enter the publisher. Beginning in version 11.3, a new option to the HSL::open command allows you to send data to a log publisher instead of only to a pool. This lets you spray that data to as many servers as you like. In my test setup, I used alias interfaces on a Linux virtual machine as the destinations and created a pool for each to be added to the publisher:

ltm pool lp1 {
    members {
        192.168.101.20:514 {
            address 192.168.101.20
        }
    }
}
ltm pool lp2 {
    members {
        192.168.101.21:514 {
            address 192.168.101.21
        }
    }
}
ltm pool lp3 {
    members {
        192.168.101.22:514 {
            address 192.168.101.22
        }
    }
}

Once I have the pools defined, I create the log destinations:

sys log-config destination remote-high-speed-log lp1 {
    pool-name lp1
    protocol udp
}
sys log-config destination remote-high-speed-log lp2 {
    pool-name lp2
    protocol udp
}
sys log-config destination remote-high-speed-log lp3 {
    pool-name lp3
    protocol udp
}

Finally, I create the publisher for use in the iRules:

sys log-config publisher lpAll {
    destinations {
        lp1
        lp2
        lp3
    }
}

That's all the background magic required to get to the iRule showing off the -publisher option in HSL::open:

ltm rule testrule {
    when CLIENT_ACCEPTED {
        set lpAll [HSL::open -publisher /Common/lpAll]
    }
    when HTTP_REQUEST {
        HSL::send $lpAll "<190> [IP::client_addr]:[TCP::client_port]-[IP::local_addr]:[TCP::local_port]; [HTTP::host][HTTP::uri]"
    }
}

Finally, some visual evidence for the skeptics out there: all three destinations got the message, and the message arrived as formatted. So now, armed with this new option (as of version 11.3), go forth and code!
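
For contrast, here is a minimal sketch (not from the article itself) of the pre-11.3 workaround mentioned above, where each destination pool needs its own handle and its own send:

when CLIENT_ACCEPTED {
    # One handle per destination pool; the publisher replaces all of this
    set hsl1 [HSL::open -proto UDP -pool lp1]
    set hsl2 [HSL::open -proto UDP -pool lp2]
    set hsl3 [HSL::open -proto UDP -pool lp3]
}
when HTTP_REQUEST {
    set msg "<190> [IP::client_addr]:[TCP::client_port]-[IP::local_addr]:[TCP::local_port]; [HTTP::host][HTTP::uri]"
    # Every message has to be sent once per destination
    HSL::send $hsl1 $msg
    HSL::send $hsl2 $msg
    HSL::send $hsl3 $msg
}

With the publisher, the single handle in testrule above covers all three pools.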

Most efficient methods for Connection logging?

Does anyone have real-world experience with logging connections at a high rate? If so, which methods are you using to collect and transmit the data? We have a requirement to log all connections going through our F5 devices: things like the client/server-side IPs/ports, as well as HTTP details for HTTP VIPs and DNS details from our GTMs. It's the White House M-21-31 mandate, if anyone is familiar with it. I've used Request Logging profiles and various iRules with HSL to collect this type of data before, but I've never been too concerned about overhead because I would only apply them as needed, like when troubleshooting an issue with a VIP. Our busiest appliance pushes around 150k conn/sec and 5k HTTP req/sec, so I now have to consider the most efficient methods to avoid any kind of impact to traffic flows. I've done some lab testing with several different methods, but I can't do any meaningful load tests in that environment. Below are some of my opinions based on my lab testing so far.

Data Collection

- AVR - I like that this single feature can meet all the requirements for collecting TCP, HTTP, and DNS data. It would also be relatively easy to perform audits to ensure the VIPs have the necessary Analytics profiles, as we can manage it from the AVR profiles themselves. My main concern is the overhead that results from the traffic analysis. I assume it has to maintain a large database where it stores all the analyzed data, even if we just ship it off to Splunk. Even the data shipped off to Splunk includes several different logs for each connection (each with a different 'Entity').
- Request Logging profile - This is fairly flexible and should have low overhead, since the F5 doesn't need to analyze any of the data like AVR does. This only collects HTTP data, so we still need another solution to collect details for non-HTTP VIPs. It would be a pain to audit since we don't use any kind of deployment templates or automation.
- iRule - This provides a lot of flexibility and it is capable of collecting all the necessary data (a minimal sketch follows after this list), but I don't know how well the performance overhead compares to AVR. This would also be a pain to audit due to the lack of deployment templates and automation.

Data Transmission

- HSL UDP syslog - I imagine this is the most efficient method to send events, but it's likely only a matter of time before we are required to use TCP/TLS.
- Telemetry Streaming - This is the more modern method and it offers some interesting features like the System Poller, which could eventually allow us to move away from SNMP polling. We would need a workaround for our GTM-only devices because they cannot run a TS listener.
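
For context on the iRule option above, here is a minimal sketch (not from the thread) of collecting the client- and server-side tuples and shipping them over HSL; the publisher name /Common/conn-log-publisher is an assumption:

when CLIENT_ACCEPTED {
    # Open one HSL handle per connection against an assumed log publisher
    set hsl [HSL::open -publisher /Common/conn-log-publisher]
}
when SERVER_CONNECTED {
    # Both the client-side and server-side tuples are available in this event
    HSL::send $hsl "<134> client=[clientside {IP::client_addr}]:[clientside {TCP::client_port}] vip=[clientside {IP::local_addr}]:[clientside {TCP::local_port}] node=[IP::server_addr]:[TCP::server_port]"
}

Whether this beats AVR or Request Logging at 150k conn/sec is exactly the open question in this thread; the sketch only shows the shape of the iRule approach.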

Device Fingerprinting for mobile devices

We have published a Microsoft Exchange server behind ASM+APM policies. I am using BotDefense to generate Device IDs for the connecting clients, inserting the value of the "device_id" variable into an HTTP header, and then passing it to APM. APM extracts the "device_id" value from the HTTP header and sets it as an APM session variable:

ACCESS::session data set "session.custom.device_id" "$device_id"

This APM session variable is then called in APM iRules to perform the required logic. So far, the logic and traffic flow are working as expected. Now, here comes the problem part: if the client is not a web browser, the whole logic fails, because BotDefense generates Device IDs based on a JavaScript challenge. In the case of Microsoft Exchange, the client could be a mobile device using the native email app of the phone (not the mobile browser), like the mail app of Android or iOS devices. How can we perform device fingerprinting and generate Device IDs in such a case? All mobile devices use the standard Exchange protocol "Microsoft ActiveSync", which is based on HTTPS. When traffic hits my virtual server from a mobile client, BIG-IP can detect it and differentiate it from other web browser traffic, because all requests coming from mobile devices have this particular URI string in the HTTP request: "/Microsoft-Server-ActiveSync". But because it is not a web browser, the JavaScript challenge is not performed and no Device ID is generated. My question is: how can we perform fingerprinting and generate Device IDs for mobile devices (not mobile browsers)? Here is my ASM iRule which is handling the Device IDs and fingerprinting:

when RULE_INIT {
    set static::TPS_Value 1
    # set as 1 - send request logs, set 0 if no request logs should be sent
    set static::debug 1
    set static::PBD_debug 1
    # list of botdefense actions you want to get request log on
    set static::Logged_PBD_actions "tcp_rst browser_challenge internal_bigip_response captcha_challenge"
    set static::host_header "Host: webmail.company.com"
}
when HTTP_REQUEST {
    set hsl [HSL::open -proto TCP -pool ASM_Log_Pool2]
    set http_request [HTTP::request]
    #HSL::send $hsl $http_request
}
when BOTDEFENSE_REQUEST {
    # for demo purposes, make the challenge valid from the first request - make sure you go to the default
    if {[HTTP::uri] equals "/"} {
        BOTDEFENSE::cs_allowed true
    }
    # Mandate the device_id attribute extraction
    BOTDEFENSE::cs_attribute device_id enable
}
when BOTDEFENSE_ACTION {
    set device_id [BOTDEFENSE::device_id]
    if {$static::debug > 0} { log "reason is, [BOTDEFENSE::reason], action is [BOTDEFENSE::action], botdefense device_id is: $device_id" }
    if {$static::Logged_PBD_actions contains [BOTDEFENSE::action]} {
        set botdefense_action [BOTDEFENSE::action]
        set botdefense_reason [BOTDEFENSE::reason]
        set PBD_header [concat Host: webmail.company.com\r\nPbd_Action: $botdefense_action\r\nPbd_reason: $botdefense_reason]
        set asm_http_requet_log [string map -nocase [list $static::host_header $PBD_header] $http_request]
        if {($static::PBD_debug > 0) && ([info exists asm_http_requet_log])} {
            HSL::send $hsl $asm_http_requet_log
        }
    }
    log "action is [BOTDEFENSE::action], reason is: [BOTDEFENSE::reason] cs_allowed is: [BOTDEFENSE::cs_allowed]"
    if {([BOTDEFENSE::action] eq "tcp_rst") && [BOTDEFENSE::cs_allowed] eq 0} {
        set res [BOTDEFENSE::action custom_response { sorry i am blocking you, try to restart the session } 200]
        if {$res eq "ok"} {
            set botdefense_responded 1
        }
    }
    #if {[BOTDEFENSE::action] eq "captcha_challenge"} {
    #    set res [BOTDEFENSE::action allow]
    #    log "captcha challange with res $res"
    #    if {$res eq "ok"} {
    #        log "bypass allow"
    #    }
    #}
}
when ASM_REQUEST_DONE {
    if {$static::debug > 0} { log "http uri is [HTTP::uri]" }
    virtual Hackazone_APM_virt
}
when HTTP_REQUEST_SEND {
    clientside {
        # Need to force the host header replacement and HTTP:: commands into the clientside context
        # as the HTTP_REQUEST_SEND event is in the serverside context
        if {$static::debug > 0} { log "device id is: $device_id" }
        HTTP::header insert "device_id" "$device_id"
        #if { $suspicious_browser eq "1" } {
        #    HTTP::header insert "suspicious_browser" "1"
        #    log "sending suspicious_browser header"
        #}
        #log "after the change [HTTP::request]"
    }
}
when HTTP_RESPONSE_RELEASE {
    if {[info exists botdefense_responded]} {
        HTTP::header insert "X-TS-BP-Action" "2"
    }
}

Many thanks.
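
Not a full answer to the fingerprinting question, but one workaround sometimes used for native ActiveSync clients is to branch on the ActiveSync URI and derive a fallback identifier from the request itself instead of from the JavaScript-based Device ID. The sketch below assumes the client supplies a DeviceId query parameter (ActiveSync clients normally include one) and that reusing the same device_id header downstream is acceptable:

when HTTP_REQUEST {
    if { [string tolower [HTTP::uri]] contains "/microsoft-server-activesync" } {
        # The JavaScript challenge never runs for native mail apps, so fall back
        # to the DeviceId query parameter carried by ActiveSync requests
        set fallback_id [URI::query [HTTP::uri] DeviceId]
        if { $fallback_id ne "" } {
            HTTP::header insert "device_id" "AS-$fallback_id"
        }
    }
}

This is not browser fingerprinting; it simply gives APM a stable per-device value to key its logic on for clients that can never answer the challenge.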

alertd high cpu usage

Hello, so we tried taxing the HSL logging of our 12.1.2 cluster with several (4) simple curl loops:

while true; do curl http://virtual-ip/; done

The virtual IP had a simple iRule that logged [HTTP::request] several times. Our logging is BSD syslog to HSL, to a Logstash pool. During our testing, we saw alertd rising to 100% CPU and maxing out there. CPU usage on the dashboard increased as well.

sys log-config destination remote-high-speed-log elk-hsl-destination {
    pool-name syslog-pool
}
sys log-config destination remote-syslog rsyslog-to-hsl-elk {
    remote-high-speed-log elk-hsl-destination
}
sys log-config filter elk-hsl-filter {
    level info
    publisher elk-hsl-publisher
}
sys log-config publisher elk-hsl-publisher {
    destinations {
        rsyslog-to-hsl-elk { }
    }
}

Any idea how we can combat this? We would like to use HSL to reduce CPU consumption, but this seems like a lot of fuss for simple logging. Ideas? Thanks!
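
One thing that may be worth comparing (a sketch, not a confirmed fix for the alertd behaviour): sending straight from the iRule with HSL::open against the publisher already defined above, so the messages never pass through the local syslog facility that the log command uses:

when CLIENT_ACCEPTED {
    # One HSL handle per connection, pointed at the existing publisher
    set hsl [HSL::open -publisher /Common/elk-hsl-publisher]
}
when HTTP_REQUEST {
    # <134> corresponds to local0.info in BSD syslog priority notation
    HSL::send $hsl "<134> [IP::client_addr] [HTTP::method] [HTTP::host][HTTP::uri]"
}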

HSL logging logs strange URI

Hi! I wrote a logging iRule to log cache response headers with HSL. It works fine generally, but some URIs are pretty strange. Here is the rule:

when HTTP_REQUEST priority 999 {
    set host [string tolower [HTTP::host]]
    set uri [HTTP::uri]
    set hsl [HSL::open -proto UDP -pool syslog-514_pool]
}
when HTTP_RESPONSE {
    if { [HTTP::header exists "Cache-Control"] } {
        set CacheControl [HTTP::header "Cache-Control"]
    } else {
        set CacheControl " "
    }
    if { [HTTP::header exists "Expires"] } {
        set Expires [HTTP::header "Expires"]
    } else {
        set Expires " "
    }
    HSL::send $hsl "[string map [list "\t \t" "\t-\t"]\
        "<165>\t\
        $Expires\t\
        $CacheControl\t\
        $uri\t\
        $host\t\
        "]\n"
}

Here is one of the lines that looks strange (note that this particular response did not have an Expires header):

Date:       2014-04-11 08:08:36
Log Level:  Local4.Notice
Source IP:  10.0.0.1
Expires:    (empty)
Max-Age:    max-age=60, must-revalidate, public
URI:        https://ourwebsite.com/directory/service.svc?query=string
Host:       ourwebsite.com

We can't see the strange URIs in the IIS logs. Could it be that the BIG-IP rejects the request because it is malformed? I am out of ideas. /Patrik
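
One possible explanation (an assumption, not something confirmed in the thread): clients sending absolute-form request lines (GET https://host/path HTTP/1.1, the way a proxy client would), in which case HTTP::uri contains the full URL rather than just the path. A small addition to the request event can flag those so they stand out in the HSL output:

when HTTP_REQUEST priority 999 {
    set uri [HTTP::uri]
    # Absolute-form requests carry the scheme and host in the request-target
    if { [string match -nocase "http*://*" $uri] } {
        set uri_form "absolute"
    } else {
        set uri_form "origin"
    }
}

IIS may log such requests differently (or reject them), which would also explain why they don't show up in the IIS logs.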

Sending HSL data in json format

Just wanted to know if data can be sent via HSL in JSON format, as below:

HSL::send $hsl "{ "Attacker_IP":$remoteip, "Destination_IP":[IP::local_addr], "User-Agent":$useragent, "ISP":$isp, "Country":$country, "Original_Domain":[HTTP::host], "Original_URI":[HTTP::uri], "Fully_decoded_URI":$decodedUri, "Timestamp":$timestamp, "XFF_Header":[HTTP::header X-Forwarded-For]}"

Is there some other way to achieve this?
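
For what it's worth, a minimal sketch of one common approach (not taken from the accepted answer): escape the inner double quotes so TCL keeps the outer string intact. Only a few of the fields from the question are shown, the $hsl handle is assumed to have been opened earlier with HSL::open, and header/URI values are not JSON-escaped here:

when HTTP_REQUEST {
    # Backslash-escaped quotes keep the JSON keys and values inside one TCL string
    set payload "{\"Attacker_IP\":\"[IP::client_addr]\",\"Destination_IP\":\"[IP::local_addr]\",\"Original_Domain\":\"[HTTP::host]\",\"Original_URI\":\"[HTTP::uri]\",\"XFF_Header\":\"[HTTP::header X-Forwarded-For]\"}"
    HSL::send $hsl $payload
}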

Log iRule Name with HSL

Is it possible to log the name of the iRule and event that an HSL log message is being emitted from? Using the log local0.info format shows this information.

Log local message example:

log 10.7.29.232 local0.info "locallog Sent Favorite Icon!"

Aug 17 14:39:15 BIGIP-1-NON-PROD tmm[2185]: Rule /Common/SND-IHS-Access-Rule : locallog Sent Favorite Icon!

HSL formatted syslog publisher example:

HSL::send $hslsyslog "HSLSYSLOG Sent Favorite Icon!"

Aug 17 14:39:15 localhost tmm[2185] HSLSYSLOG Sent Favorite Icon!
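
HSL::send writes exactly the bytes it is handed, so there is no automatic "Rule <name>:" prefix the way the local tmm logger adds one. The usual workaround (the same idea the proc threads further down this page use) is to carry the rule name yourself; the publisher path below is an assumption:

when RULE_INIT {
    # No automatic prefix with HSL, so keep the rule name in a static variable
    set static::rule_name "SND-IHS-Access-Rule"
}
when HTTP_REQUEST {
    set hslsyslog [HSL::open -publisher /Common/syslog_publisher]
    HSL::send $hslsyslog "<134> [info hostname] Rule /Common/$static::rule_name <HTTP_REQUEST>: HSLSYSLOG Sent Favorite Icon!"
}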

HSL for https redirects

Hello (DevCentral) world! I'm running 11.5.1 and I'm trying to use HSL to log whenever the _sys_https_redirect iRule does a redirect:

when HTTP_REQUEST {
    HTTP::redirect https://[getfield [HTTP::host] ":" 1][HTTP::uri]
}

I pulled the HSL iRule below off of DevCentral. I added logging to see where it was failing, and from the logs I can see that the CLIENT_ACCEPTED and HTTP_REQUEST portions of the iRule are triggered, but not HTTP_RESPONSE. Any ideas why?

# iRule source for remote logging using HSL
# From: W3C Extended Log File Examples (IIS 6.0)
# http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/ffdd7079-47be-4277-921f-7a3a6e610dcb.mspx?mfr=true
# Fields: date time c-ip cs-username s-ip cs-method cs-uri-stem cs-uri-query sc-status sc-bytes cs-bytes time-taken cs-version cs(User-Agent) cs(Cookie) cs(Referrer)

when CLIENT_ACCEPTED {
    # Open a new high speed logging connection to the syslog pool
    set hsl [HSL::open -proto UDP -pool test.syslog.pool]
    log local0. "client_accepted hit"
}
when HTTP_REQUEST priority 999 {
    # Save request variables that are not accessible in HTTP_RESPONSE, like the URI, request method, etc
    set req_start [clock clicks -milliseconds]
    set cs_username [HTTP::username]
    set cs_uri_stem [HTTP::path]
    set cs_uri_query [HTTP::query]
    set cs_bytes [HTTP::header Content-Length]
    set ua [HTTP::header User-Agent]
    set cookies [HTTP::header values Cookie]
    set referer [HTTP::header Referer]
    log local0. "http_request hit"
}
when HTTP_RESPONSE {
    # Send the syslog message with a syslog priority of 134 (local0.info)
    # See the HSL wiki page for details on the facilities:
    # https://devcentral.f5.com/wiki/iRules.HSL__send.ashx
    # Replace null values with a hyphen:
    # use string map to replace a "tab space tab" with "tab hyphen tab"
    log local0. "http_response hit"
    HSL::send $hsl "[string map [list "\t \t" "\t-\t"]\
        "<134>\t\
        [info hostname]\t\
        [IP::local_addr]\t\
        [clock format [clock seconds] -format "%d/%m/%Y %H:%M:%S %z"]\t\
        [IP::client_addr]\t\
        $cs_username\t\
        [clientside {IP::local_addr}]\t\
        $cs_uri_stem\t\
        $cs_uri_query\t\
        [HTTP::status]\t\
        [HTTP::header Content-Length]\t\
        [expr {[clock clicks -milliseconds] - $req_start}]\t\
        [HTTP::version]\t\
        \"$ua\"\t\
        $cookies\t\
        $referer\
        "]\n"
}
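
A likely explanation (worth verifying): HTTP::redirect answers the client directly from the BIG-IP, so no server-side response ever comes back and HTTP_RESPONSE never fires for those requests. A minimal sketch that logs the redirect at request time instead, reusing the pool name from the iRule above:

when CLIENT_ACCEPTED {
    set hsl [HSL::open -proto UDP -pool test.syslog.pool]
}
when HTTP_REQUEST {
    set host_no_port [getfield [HTTP::host] ":" 1]
    set location "https://${host_no_port}[HTTP::uri]"
    # The redirect is generated locally, so log it before sending it
    HSL::send $hsl "<134> [IP::client_addr] redirected [HTTP::host][HTTP::uri] to $location"
    HTTP::redirect $location
}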

Is there any known impact of HSL with PROC support?

Hi All, we are using HSL for remote logging, but when all the HSL servers are down we do local logging. Below is the existing iRule:

when RULE_INIT {
    upvar 0 tcl_platform static::tcl_platform
    set static::log_publisher "/Common/HSLPublisher"
    log local0.info "iRule Initialization"
    set static::logging 0
    set static::hsllogging 1
    set static::rule_name "testRule1"
    set static::hsl_pool "pool_hsl_logging"
}
when CLIENT_ACCEPTED {
    set CLIENT_ACCEPTED_DEBUG "$static::tcl_platform(machine) debug tmm[TMM::cmp_unit]: Rule $static::rule_name CLIENT_ACCEPTED:"
    set CLIENT_ACCEPTED_INFO "$static::tcl_platform(machine) info tmm[TMM::cmp_unit]: Rule $static::rule_name CLIENT_ACCEPTED:"
    set hsl [HSL::open -publisher $static::log_publisher]
    if { $static::logging == 1 } {
        if { $static::hsllogging } {
            if { [active_members $static::hsl_pool] < 1 } {
                log local0.debug "CALL_FLOW Client \[[IP::client_addr]:[TCP::client_port]\]==>F5 \[[IP::local_addr]_[TCP::local_port]\]"
            } else {
                HSL::send $hsl "$CLIENT_ACCEPTED_DEBUG CALL_FLOW Client \[[IP::client_addr]:[TCP::client_port]\]==>F5 \[[IP::local_addr]_[TCP::local_port]\]"
            }
        } else {
            log local0.debug "CALL_FLOW Client \[[IP::client_addr]:[TCP::client_port]\]==>F5 \[[IP::local_addr]_[TCP::local_port]\]"
        }
    }
}
when CLIENT_CLOSED {
    set CLIENT_CLOSED_DEBUG "$static::tcl_platform(machine) debug tmm[TMM::cmp_unit]: Rule $static::rule_name CLIENT_CLOSED:"
    if { $static::logging == 1 } {
        if { $static::hsllogging } {
            if { [active_members $static::hsl_pool] < 1 } {
                log local0.debug "CALL_FLOW Client \[[IP::client_addr]:[TCP::client_port]\]==>F5 \[[IP::local_addr]_[TCP::local_port]\]"
            } else {
                HSL::send $hsl "$CLIENT_CLOSED_DEBUG CALL_FLOW Client \[[IP::client_addr]:[TCP::client_port]\]==>F5 \[[IP::local_addr]_[TCP::local_port]\]"
            }
        } else {
            log local0.debug "CALL_FLOW Client \[[IP::client_addr]:[TCP::client_port]\]==>F5 \[[IP::local_addr]_[TCP::local_port]\]"
        }
    }
}

For every single log line we have to write 12 lines of code, so instead we are planning to use a proc. Is there any known impact of using a proc with HSL? We will have a separate proc for each iRule; below is a snippet:

proc test_hsl { log_str hsl_log_str hsl_enable hsl_handler } {
    if { ([active_members $static::hsl_pool] > 0) && ($hsl_enable) } {
        HSL::send $hsl_handler $hsl_log_str
    } else {
        log local0.info "$log_str"
    }
}
when RULE_INIT {
    upvar 0 tcl_platform static::tcl_platform
    set static::log_publisher "/Common/HSLPublisher"
    log local0.info "iRule Initialization"
    set static::logging 0
    set static::hsllogging 1
    set static::rule_name "testRule1"
    set static::hsl_pool "pool_hsl_logging"
}
when CLIENT_ACCEPTED {
    set CLIENT_ACCEPTED_DEBUG "$static::tcl_platform(machine) debug tmm[TMM::cmp_unit]: Rule $static::rule_name CLIENT_ACCEPTED:"
    set CLIENT_ACCEPTED_INFO "$static::tcl_platform(machine) info tmm[TMM::cmp_unit]: Rule $static::rule_name CLIENT_ACCEPTED:"
    set hsl [HSL::open -publisher $static::log_publisher]
    if { $static::logging == 1 } {
        call test_hsl "$logStr" "$CLIENT_ACCEPTED_INFO $logStr" $static::hsllogging $hsl
    }
}
when CLIENT_CLOSED {
    set CLIENT_CLOSED_DEBUG "$static::tcl_platform(machine) debug tmm[TMM::cmp_unit]: Rule $static::rule_name CLIENT_CLOSED:"
    if { $static::logging == 1 } {
        call test_hsl "$logStr" "$CLIENT_CLOSED_INFO $logStr" $static::hsllogging $hsl
    }
}

Known limitation of HSL with proc

Hi, we have iRules in which we log using HSL. If the log servers are down, we log locally. We want to convert the HSL logging to a proc so that, instead of writing that many lines of code just for logging, we can use a proc. Is there any known limitation of using HSL with a proc?

when CLIENT_ACCEPTED {
    set hsl [HSL::open -publisher $static::log_publisher]
    if { $log_hsl == 1 } {
        if { [active_members hsl_pool] < 1 } {
            log local0.info "Accepted client conn [IP::client_addr]:[TCP::client_port]"
        } else {
            HSL::send $hsl "$CLIENT_ACCEPTED_INFO Accepted client conn [IP::client_addr]:[TCP::client_port]"
        }
    } else {
        log local0.info "Accepted client conn [IP::client_addr]:[TCP::client_port]"
    }
}

Thanks
Syed Nazir
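
As a consolidated sketch of the pattern both of these proc threads are circling (not taken from an accepted answer): pass the open handle into the proc and keep the active_members fallback inside it. The pool and publisher names are taken from the snippets above:

proc hsl_or_local { hsl_handle msg } {
    # Fall back to local syslog when no HSL pool member is available
    if { [active_members pool_hsl_logging] > 0 } {
        HSL::send $hsl_handle "<134> $msg"
    } else {
        log local0.info $msg
    }
}
when CLIENT_ACCEPTED {
    set hsl [HSL::open -publisher /Common/HSLPublisher]
    call hsl_or_local $hsl "Accepted client conn [IP::client_addr]:[TCP::client_port]"
}

This assumes HSL::send behaves the same when invoked from a proc via call, which is the very question being asked in these threads, so test it before relying on it.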