logging
67 Topics

Log client source IP when connecting to TCP Virtual by iRule

Hi all, I received a request to log the client IP when connecting to a virtual IP. We already do this on an HTTP virtual server, but now it's for an SMTP relay using plain TCP, so we can't attach the same iRule:

    when HTTP_REQUEST {
        if { [info exists logged] && $logged == 1 } {
            # Do nothing. Already logged for this connection
        } else {
            set logged 1
            log "ClientIP Information, from [IP::remote_addr] to vip [IP::local_addr] Cipher [SSL::cipher name]:[SSL::cipher version]:[SSL::cipher bits] User-Agent:[HTTP::header "User-Agent"]"
        }
    }

I tried to find something similar for plain TCP but was not able to, which is why I'm checking in with you. Does anyone know how we can achieve this (iRule or another method)?
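
A minimal sketch of one way to do this on a plain TCP virtual server: the CLIENT_ACCEPTED event fires once per TCP connection, so the log statement can move there and the per-connection flag is no longer needed. The SSL and HTTP fields are dropped since they do not apply to a plain TCP listener.

    when CLIENT_ACCEPTED {
        # Fires once per TCP connection; no HTTP or SSL events are required
        log local0. "ClientIP Information, from [IP::client_addr]:[TCP::client_port] to vip [IP::local_addr]:[TCP::local_port]"
    }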

remote SYSlog setup

I'm in the process of setting up remote syslog on my BIG-IPs. My understanding from the documentation is that it's a simple task. A few questions: if I want to dedicate a specific drive on a server with multiple drives, is there a way to set that up? (This syslog is for system logs.) Do I still need to add a publisher profile on each virtual server? Do I get logs both locally and remotely, or does that have to be configured? Last but not least: if local logs stop showing after adding the remote syslog server, how can I set it up so that I get logs both locally and remotely? Thanks!
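
As a rough sketch (it does not answer the drive question): the commonly documented way to forward system logs is to add a remote server to the syslog configuration from tmsh, which leaves local logging under /var/log in place while also sending a copy to the remote host. The server name and IP below are placeholders.

    # Placeholder name/IP: forward system syslog to a remote collector while keeping local logs
    tmsh modify /sys syslog remote-servers add { remote-collector { host 192.0.2.10 remote-port 514 } }
    tmsh save /sys config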

APM - How to configure logging of snat addresses for network access and app tunnels

Hello everyone, we are using BIG-IP Access Policy Manager to enable administrative access to systems via App Tunnel and Network Access resources. For security reasons, we need to be able to map requests logged on backend resources/systems (e.g. in SSH audit logs) to the session or user accessing that backend resource via App Tunnel or Network Access in APM. Currently, the following request information is logged.

Network Access:

    May 17 14:42:00 tmm0 tmm[22565]: 01580002:5: /APM/ap_rmgw:Common:c1237463: allow ACL: #app_tunnel_/APM/Some_App-Tunnel@c1237463:15 packet: tcp 192.168.12.18:58680 -> 10.0.0.1:22

App Tunnels:

    May 17 14:41:10 tmm1 tmm1[22565]: 01580002:5: /APM/ap_rmgw:Common:c6787463: allow ACL: #app_tunnel_/APM/Some_App-Tunnel@c6787463:0 packet: tcp 89.229.152.144:63252 -> 10.0.0.1:2

For Network Access requests, an IP address from the lease pool configured in the Network Access resource is logged as the client IP. For App Tunnel requests, the public IP of the client accessing APM is logged as the client IP. In our setup, both types of request are NATed by APM before hitting the target system (through a SNAT pool in the case of a Network Access request, through the active appliance's backend IP in the case of App Tunnels). Therefore, the APM self IPs (SNAT pool/appliance backend) are what gets logged on the target host, so we cannot correlate logs in APM with logs on the target systems.

Is there any way to log the SNAT/NAT addresses and ports used to access target systems through APM? I've tried using ACCESS_ACL_ALLOWED in an iRule to log additional information, but unfortunately this event only seems to trigger on Portal Access resources, not when using App Tunnels or Network Access resources. Thank you, Fabian
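
A minimal sketch of one possible workaround, assuming an iRule can be attached where it sees the server-side flows (whether that is feasible for Network Access and App Tunnel traffic in a given APM setup is an assumption): in SERVER_CONNECTED the local address and port of the server-side flow are the translated (SNAT) source the backend will see, so logging both tuples gives something to correlate against the backend logs.

    when SERVER_CONNECTED {
        # Client-side tuple (as seen by APM) mapped to the translated server-side tuple (as seen by the backend)
        log local0. "client [IP::client_addr]:[TCP::client_port] -> snat [IP::local_addr]:[TCP::local_port] -> server [IP::server_addr]:[TCP::server_port]"
    }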

Big-IQ HA logs

Hi, we have a BIG-IQ HA setup with 4 servers: two DCDs and two main nodes, one active and the other standby. When a failover happens, for whatever reason, I'm assuming some information about it is written to a log file. Does anyone know where that info is logged? Is it on the main servers, or would it be on the DCDs? And in which file? Thanks in advance.

Disable ASM illegal HTTP status response logging

Hello, my ASM policy setting looks like this: [screenshot of the violation settings]

Why do I still get Application Request Log entries where the only violation is "Illegal HTTP status in response"? I am fine with the blocking of disallowed HTTP status codes, but I was expecting the unchecked Alarm box to prevent these log entries. Do I have to define a special logging profile for this? It is set to "log illegal requests". Thank you

Logging Variables

I have an iRule which performs the following:

1. Reads the contents of the XML through an XML profile
2. Sets the variable 'id' to $XML::values($i)
3. If the value equals an entry in the data group, sends the traffic to pool_A
4. Else, sends the traffic to pool_B
5. Logs the variable 'id' and the pool member the traffic was sent to

This is the iRule:

    when XML_CONTENT_BASED_ROUTING {
        for {set i 0} { $i < $XML::count } {incr i} {
            set id $XML::values($i)
            if { [matchclass $XML::values($i) equals DataGroup_by_Org] } {
                pool pool_A
            } else {
                pool pool_B
            }
        }
    }
    when LB_SELECTED {
        log local0. "3189: orgName $id sent to [LB::server addr]"
    }

I am having an issue with the logging portion. When I look at the log entry, the variable can't be read. This is the entry in the logs:

    Dec 15 14:39:09 local/tmm1 err tmm1[21886]: 01220001:3: TCL error: Routing_by_Org - can't read "id": no such variable while executing "log local0. "3189: orgName $id sent to [LB::server addr]""
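
One possible explanation (an assumption, since it depends on the traffic): LB_SELECTED can fire for flows where XML_CONTENT_BASED_ROUTING never set id, for example a request with no parsed XML values, so the log statement references a variable that was never created. A defensive sketch:

    when LB_SELECTED {
        # Only reference $id if the XML routing event actually set it
        if { [info exists id] } {
            log local0. "3189: orgName $id sent to [LB::server addr]"
        } else {
            log local0. "3189: orgName unknown (no XML value parsed) sent to [LB::server addr]"
        }
    }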

Logging/Audit Binary Execution?

Hey everyone, we're looking to enable logging of binary execution or CLI history, much like we can do in Linux using auditd. I've read about support engineers using auditd for troubleshooting purposes, and while we can certainly enable auditd rules to capture binary executions in the auditd logs, I haven't seen any mention of using this on a consistent basis. I'm sure some folks are asking, "Why?", but in our testing we found that it is possible for an attacker to copy nmap to the device and from there start scanning the network. We'd also like to log if/when someone launches, say, tcpdump. I've been playing with this a bit and I can't find anything that logs which binaries are being run from the CLI, except when we enable specific auditd rules that capture this. This brings up the question of log storage on the device, since I've seen a number of posts regarding volumes running out of space due to audit logs growing uncontrolled. Any advice/discussion/help is certainly appreciated!
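
Not F5-specific, but as a sketch of the kind of rule being described: standard auditd syntax can tag every execve() call so CLI-launched binaries show up in the audit log (the key name is arbitrary, and the log-growth concern above still applies).

    # Record every program execution (64-bit and 32-bit syscall ABIs); key name is arbitrary
    -a always,exit -F arch=b64 -S execve -k binary_exec
    -a always,exit -F arch=b32 -S execve -k binary_exec

Matching events could then be pulled with ausearch -k binary_exec.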

Adding the body of requests/responses to the data being logged to Splunk via iRule.

Hi all, we are presently using the iRule below to log request/response data to Splunk. I'd like to add the body of the requests to our Splunk logging. I tried to use HTTP::payload as part of HTTP_REQUEST, but the iRule no longer functions when I place it there. When I add HTTP_REQUEST_DATA to the iRule to cater for HTTP::payload, I break the app - I expect this is down to my implementation of HTTP_REQUEST_DATA. Is there an easy way to add logging of the request and response bodies to what is sent to Splunk? Thanks in advance.

    when CLIENT_ACCEPTED {
        set client_address [IP::client_addr]
        set vip [IP::local_addr]
    }
    when HTTP_REQUEST {
        set http_host [HTTP::host]:[TCP::local_port]
        set http_uri [HTTP::uri]
        set http_url $http_host$http_uri
        set http_method [HTTP::method]
        set http_version [HTTP::version]
        set http_user_agent [HTTP::header "User-Agent"]
        set http_content_type [HTTP::header "Content-Type"]
        set http_referrer [HTTP::header "Referer"]
        set tcp_start_time [clock clicks -milliseconds]
        set req_start_time [clock format [clock seconds] -format "%Y/%m/%d %H:%M:%S"]
        set cookie [HTTP::cookie names]
        set user [HTTP::username]
        set virtual_server [LB::server]
        if { [HTTP::header Content-Length] > 0 } then {
            set req_length [HTTP::header "Content-Length"]
        } else {
            set req_length 0
        }
    }
    when HTTP_RESPONSE {
        set res_start_time [clock format [clock seconds] -format "%Y/%m/%d %H:%M:%S"]
        set node [IP::server_addr]
        set node_port [TCP::server_port]
        set http_status [HTTP::status]
        set req_elapsed_time [expr {[clock clicks -milliseconds] - $tcp_start_time}]
        if { [HTTP::header Content-Length] > 0 } then {
            set res_length [HTTP::header "Content-Length"]
        } else {
            set res_length 0
        }
        set hsl [HSL::open -proto TCP -pool p-remote-logging]
        HSL::send $hsl "<190>,f5_irule=Splunk-iRule-HTTP,src_ip=$client_address,vip=$vip,http_method=$http_method,http_host=$http_host,http_uri=$http_uri,http_url=$http_url,http_version=$http_version,http_user_agent=\"$http_user_agent\",http_content_type=$http_content_type,http_referrer=\"$http_referrer\",req_start_time=$req_start_time,cookie=\"$cookie\",user=$user,virtual_server=\"$virtual_server\",bytes_in=$req_length,res_start_time=$res_start_time,node=$node,node_port=$node_port,http_status=$http_status,req_elapsed_time=$req_elapsed_time,bytes_out=$res_length\r\n"
    }
    when LB_FAILED {
        set hsl [HSL::open -proto TCP -pool p-remote-logging]
        HSL::send $hsl "<190>,f5_irule=Splunk-iRule-LB_FAILED,src_ip=$client_address,vip=$vip,http_method=$http_method,http_host=$http_host,http_uri=$http_uri,http_url=$http_url,http_version=$http_version,http_user_agent=\"$http_user_agent\",http_content_type=$http_content_type,http_referrer=\"$http_referrer\",req_start_time=$req_start_time,cookie=\"$cookie\",user=$user,virtual_server=\"$virtual_server\",bytes_in=$req_length\r\n"
    }
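
A minimal sketch of the usual collect/release pattern for getting at the request body, assuming the captured body would then be appended to the existing HSL string (the 1 MB cap and the req_body variable name are arbitrary choices, not part of the original iRule):

    when HTTP_REQUEST {
        # ...existing variable collection stays as-is...
        set req_body ""
        # Only buffer a bounded amount of payload so large uploads are not held in memory
        if { [HTTP::header exists "Content-Length"] && [HTTP::header "Content-Length"] > 0 } {
            set collect_len [HTTP::header "Content-Length"]
            if { $collect_len > 1048576 } { set collect_len 1048576 }
            HTTP::collect $collect_len
        }
    }
    when HTTP_REQUEST_DATA {
        # HTTP::payload is only populated once HTTP::collect has buffered the data
        set req_body [HTTP::payload]
        HTTP::release
    }

In HTTP_RESPONSE, $req_body could then be added to the HSL::send string (escaped as needed); the same idea applies to the response body using HTTP::collect in HTTP_RESPONSE and HTTP_RESPONSE_DATA.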

Most efficient methods for Connection logging?

Does anyone have real-world experience with logging connections at a high rate? If so, which methods are you using to collect and transmit the data? We have a requirement to log all connections going through our F5 devices: things like the client/server-side IPs/ports, as well as HTTP details for HTTP VIPs and DNS details from our GTMs. It's the White House M-21-31 mandate, if anyone is familiar with it. I've used Request Logging profiles and various iRules with HSL to collect this type of data before, but I've never been too concerned about overhead because I would only apply them as needed, like when troubleshooting an issue with a VIP. Our busiest appliance pushes around 150k conn/sec and 5k HTTP req/sec, so I now have to consider the most efficient methods to avoid any kind of impact to traffic flows. I've done some lab testing with several different methods, but I can't do any meaningful load tests in that environment. Below are some of my opinions based on my lab testing so far.

Data Collection

AVR - I like that this single feature can meet all the requirements for collecting TCP, HTTP, and DNS data. It would also be relatively easy to audit that the VIPs have the necessary Analytics profiles, since we can manage that from the AVR profiles themselves. My main concern is the overhead that results from the traffic analysis. I assume it has to maintain a large database where it stores all the analyzed data even if we just ship it off to Splunk. Even the data shipped off to Splunk includes several different logs for each connection (each with a different 'Entity').

Request Logging profile - This is fairly flexible and should have low overhead, since the F5 doesn't need to analyze any of the data the way AVR does. It only collects HTTP data, so we still need another solution to collect details for non-HTTP VIPs. It would be a pain to audit since we don't use any kind of deployment templates or automation.

iRule - This provides a lot of flexibility and is capable of collecting all the necessary data, but I don't know how its performance overhead compares to AVR. It would also be a pain to audit due to the lack of deployment templates and automation.

Data Transmission

HSL UDP syslog - I imagine this is the most efficient method to send events, but it's likely only a matter of time before we are required to use TCP/TLS.

Telemetry Streaming - This is the more modern method and it offers some interesting features like the System Poller, which could eventually allow us to move away from SNMP polling. We would need a workaround for our GTM-only devices because they cannot run a TS listener.
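
For the iRule option, a rough sketch of what per-connection logging over HSL might look like, assuming a UDP log pool named pool_hsl_syslog (a placeholder) and one event per connection at teardown to keep overhead low:

    when CLIENT_ACCEPTED {
        # Open the HSL handle once per connection; UDP keeps per-event cost low
        set hsl [HSL::open -proto UDP -pool pool_hsl_syslog]
    }
    when SERVER_CONNECTED {
        # Capture the server-side tuple chosen by load balancing
        set server_tuple "[IP::server_addr]:[TCP::server_port]"
    }
    when CLIENT_CLOSED {
        # One event per connection at teardown: client-side and server-side tuples
        if { ![info exists server_tuple] } { set server_tuple "none" }
        HSL::send $hsl "<134> conn_log client=[IP::client_addr]:[TCP::client_port] vip=[IP::local_addr]:[TCP::local_port] server=$server_tuple\r\n"
    }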