Forum Discussion
Log unique Client Addresses per (Hour|Day|Etc)
Is there a way to suppress logging for an IP that I've already logged in the current hour? Or day? Alternatively, I could interrogate the URL for a landing page and only log then... but that seems hokey.
This data gets pushed to our syslog server, where it's reported on. I just don't see any reason to log every single GET request from the same IP each hour.
when HTTP_REQUEST {
    # Capture the requested URL and the client/peer addresses
    set url [HTTP::host][HTTP::uri]
    set sender [IP::client_addr]
    set remote [IP::remote_addr]

    # Look up geolocation details for the client address
    set country [whereis $sender country]
    set state [whereis $sender state]
    set city [whereis $sender city]
    set zip [whereis $sender zip]
    set isp [whereis $sender isp]
    set latitude [whereis $sender latitude]
    set longitude [whereis $sender longitude]

    # Log every request locally ($city is collected but not included below)
    log local0.info "GeoClientAddress=$sender GeoRequesting=$url GeoRemoteAddress=$remote GeoCountry=$country GeoStateRegion=$state GeoPostalCode=$zip GeoISP=$isp GeoLatitude=$latitude GeoLongitude=$longitude"
}
Thanks.
- Hamish
If you're on v10, you can use a subtable (the table command in iRules) to track IP addresses: use the IP address as the key and a count as the value. Then, after 60 minutes (use the after command), you can send the counts to a remote host (HSL in v10, or a sideband connection in v11). A sketch of the subtable approach is below. - chester_16314
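A rough sketch of that approach, applied to the original goal of suppressing duplicate entries rather than batching counts (untested; the subtable name unique_clients and the one-hour window are arbitrary choices, and the geo fields are trimmed for brevity):

when HTTP_REQUEST {
    set sender [IP::client_addr]

    # Have we already logged this client address within the last hour?
    # table lookup returns an empty string when the key is absent or expired.
    if { [table lookup -notouch -subtable unique_clients $sender] eq "" } {
        # Remember the address for 3600 seconds (timeout and lifetime),
        # so the entry ages out on its own after an hour.
        table set -subtable unique_clients $sender 1 3600 3600

        # Log once per unique client address per hour.
        log local0.info "GeoClientAddress=$sender GeoRequesting=[HTTP::host][HTTP::uri] GeoCountry=[whereis $sender country]"
    }
}

Memory usage is bounded by the number of unique client addresses seen in an hour, since each entry expires after its lifetime.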
We are on v10. Is tracking this data in a table a good idea? I've heard some negative comments about memory consumption. - hoolio
If this is just for reporting, I'd consider using High Speed Logging to send a log message to a remote pool of syslog servers for every request and/or response. The CPU/memory/I/O overhead for HSL is quite low compared with using a subtable and logging locally. In informal testing on an 8900 I saw up to 200k TCP or UDP messages of 1000 bytes sent with roughly a 10% CPU increase and nearly no additional memory usage; the additional CPU was much lower at lower message rates. A sketch of an HSL-based rule is below. - chester_16314
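For reference, a minimal HSL variant of the original rule (a sketch, not tested; syslog_pool is a placeholder for whatever pool of remote log servers you define, and only a couple of the geo fields are shown):

when CLIENT_ACCEPTED {
    # Open a high speed logging handle to the remote syslog pool.
    # syslog_pool is a placeholder pool name.
    set hsl [HSL::open -proto UDP -pool syslog_pool]
}

when HTTP_REQUEST {
    set sender [IP::client_addr]

    # Same style of log line as before, but sent off-box via HSL.
    # The <134> prefix is a syslog priority (facility local0, severity info).
    HSL::send $hsl "<134> GeoClientAddress=$sender GeoRequesting=[HTTP::host][HTTP::uri] GeoCountry=[whereis $sender country] GeoISP=[whereis $sender isp]\n"
}

This can be combined with the subtable check above so that only the first request from each address in a given hour is sent.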
Thanks Aaron. How does logging through HSL alleviate my need for a table, though? Or maybe it doesn't, and you're only talking about the actual log function? - hoolio
I'm thinking it's better, in terms of LTM resource usage, to log everything externally using HSL and parse out the stats you want off-box rather than on the LTM. - Hamish
I originally started to do that with the LDAP work I did in the iRule referenced above. Unfortunately it doesn't scale very well, and it isn't very reliable. - hoolio
Using TCP gives you a stream of data (no loss), but the buffers are tiny. When they fill up (i.e. the receiver can't keep up), the LTM opens a NEW connection to the logging pool... Then, when the pool is busy and still can't keep up, you get an endless spiral where the number of HSL connections quickly brings everything to its knees (I had it up to thousands of connections :). Which rapidly gets messy, as it can take a LONG time to process all those streams, and which uses up all the memory you were trying to avoid by not using tables in the first place.