BIG-IP Logging and Reporting Toolkit – part three
In the first couple of installments of this series we talked about what we’re trying to accomplish, the vendor options available to us, their offerings, and some strengths and weaknesses of each. In this installment, we’re actually going to roll up our sleeves and get to work. We’ll look at how to get things working in a couple of different use cases, including scripts, screenshots and config goodies. Before we get too far ahead of ourselves, though, first things first – get the BIG-IP to send messages to syslog.
- Logging & Reporting Toolkit - Part 1
- Logging & Reporting Toolkit - Part 2
- Logging & Reporting Toolkit - Part 3
- Logging & Reporting Toolkit - Part 4
- BIG-IP v9
syslog {
remote server 10.10.200.30
}
- BIG-IP v10
syslog {
remote server {
splunk {
host 10.11.100.30
}
}
}
This will send all syslog messages from the BIG-IP to the Splunk server – both BIG-IP system messages and any messages from iRules. If you’re interested in having iRules log to the Splunk server directly, you can use HSL statements or log statements with a destination host defined. For example, RULE_INIT sets ::SplunkHost to “10.10.200.30”, and then in the iRules event you’re interested in you assemble $log_message and send it to the log with log $::SplunkHost $log_message . A good practice is to also record the message locally on something like local0, in case it doesn’t make it to the Splunk host.
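As a minimal sketch of that pattern, the iRule could look something like this (the variable name, event choice and log format here are illustrative, not taken from the w3c-client-logging rule itself):

when RULE_INIT {
# Address of the Splunk server (assumption: reachable on the syslog port)
set ::SplunkHost "10.10.200.30"
}
when HTTP_REQUEST {
set log_message "client=[IP::client_addr] uri=[HTTP::uri]"
# Send to the Splunk host, and also record locally on local0
# in case the message doesn't make it to the remote server
log $::SplunkHost $log_message
log local0. $log_message
}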
For Splunk to receive messages from log statements you have to create a Data Input on udp:514. To cover both HSL and log statements, I’d recommend creating tcp:514 and udp:514 data inputs in Splunk. http://www.splunk.com/base/Documentation/4.0.2/Admin/Monitornetworkports covers this.
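If you’d rather configure the listeners in a file than in the web UI, the equivalent inputs.conf stanzas look roughly like this (the sourcetype names are assumptions – adjust for your environment):

[udp://514]
sourcetype = udp:514
connection_host = ip

[tcp://514]
sourcetype = tcp:514
connection_host = ip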
We’ll get to the scripts part in a bit, first…
W3C offload case
Now that BIG-IP is set up to send messages to Splunk and Splunk is set up to listen for them, let’s see what it looks like when put together. Open the Splunk Search application and enter ‘w3c-client-logging sourcetype=udp:514’ into the search bar. Here’s one of the things that makes Splunk really easy to work with: it recognized the key-value pairings in the log message without any configuration needed on my part. Next, I opened the Pick fields box, selected user_agent and added it to the list of fields I’m interested in; it now shows up alongside the log message, and I can build a report on it by clicking on the arrow.
The engineer in us wants to use technical terms to accurately convey the precise information we want to distribute. Splunk makes it easy to bridge the gap from technical terms to terms that are meaningful to non-engineers. For example, a BIG-IP admin knows what this iRule is and what it’s called (in this case w3c-client-logging) – but those could be foreign concepts to folks in the Creative Services department who only want to know what browsers people are using to access a website. So, let’s employ some natural language too. The w3c-client-logging rule records a message when an HTTP transaction – a request and a response – completes. So, let’s call it what it is.
On your Splunk system open up the $SPLUNKHOME/etc/system/local/eventtypes.conf file and add this:
[httpTransaction]
search = "Rule w3c-client-logging"
You might need to restart Splunk for this change to take effect.
Now, let’s go back to the search console and try out our new event type.
This is a basic usage of event types in Splunk; you can learn more here: http://www.splunk.com/base/Documentation/4.0.2/Admin/Eventtypesconf . With transforms.conf and props.conf you can also effectively rename the attributes, so lb_server could be called webServer instead.
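As a hedged sketch of that rename, a search-time extraction can pull the value of lb_server into a new field name (the stanza and class names here are illustrative, and the props.conf stanza assumes the udp:514 sourcetype from earlier):

In transforms.conf:

[rename_lb_server]
REGEX = lb_server=(\S+)
FORMAT = webServer::$1

In props.conf:

[udp:514]
REPORT-webserver = rename_lb_server

After a restart, searches and reports can refer to webServer instead of lb_server.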
Now that we have a custom event based off our search string, all we have to do is click the dropdown arrow next to userAgent (in this case) and select the report option from the dropdown. Here's the output we'd see:
Heh – lookit’ that; nagios is the most frequent visitor…
Network Event Data Stream Case
Now that we've seen the W3C example, let's take a look at another example that's much richer: a comma-delimited format with no keys, just values, which changes things considerably. Let’s look at the Network Event Data Stream specification and see how it’s been implemented as an iRule.
iRule - http://devcentral.f5.com/s/wiki/default.aspx/iRules/NEDSRule.html
Doc – http://devcentral.f5.com/s/downloads/techtips/NedsF5v1.doc
Since this is an information-rich data source conveyed from the BIG-IP to the Splunk server as comma-separated values, it takes a few more simple steps for Splunk to be able to extract the fields, just like it did for the key-value pairs.
Open up $SPLUNKHOME/etc/system/local/transforms.conf and insert this:
[extract_neds.f5.conn.start.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","IngressInterface","Protocol","DiffServ","TTL","PolicyName","Direction"
[extract_neds.f5.conn.end.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","PktsIn","PktsOut","BytesIn","BytesOut"
[extract_neds.f5.http.req.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","Request","Host","URI","UserName","UserAgent"
[extract_neds.f5.http.resp.v1_csv]
DELIMS = ","
FIELDS = "EventID","Device","Flow","DateTimeSecs","Reply","ResponseCode","ContentType","ContentLength","LoadBalanceTarget","ServerFlow"
This names each list of information we’re interested in, indicates that the fields in each message are comma-delimited, and assigns the field names. You can rename the fields to whatever is appropriate for your environment.
Save the file. Next, open $SPLUNKHOME/etc/system/local/props.conf and insert this:
[eventtype::F5connectionStartEvent]
REPORT-extract = extract_neds.f5.conn.start.v1_csv
[eventtype::F5connectionEndEvent]
REPORT-extract = extract_neds.f5.conn.end.v1_csv
[eventtype::F5httpRequestEvent]
REPORT-extract = extract_neds.f5.http.req.v1_csv
[eventtype::F5httpResponseEvent]
REPORT-extract = extract_neds.f5.http.resp.v1_csv
This instructs the Splunk system to extract the information from the named fields.
Save the file. Next, open $SPLUNKHOME/etc/system/local/eventtypes.conf and insert this (the ‘sourcetype=udp:514’ part is optional – set it up for your environment or omit the search term):
[F5connectionStartEvent]
search = neds.f5.conn.start.v1 sourcetype=udp:514
[F5connectionEndEvent]
search = neds.f5.conn.end.v1 sourcetype=udp:514
[F5httpRequestEvent]
search = neds.f5.http.req.v1 sourcetype=udp:514
[F5httpResponseEvent]
search = neds.f5.http.resp.v1 sourcetype=udp:514
Lastly, this defines the event types that the extractions configured above are applied to.
Save the file, and restart Splunkd. There are a few processes you can restart to avoid a complete Splunkd restart, but my environment is a lab so I just restarted the whole thing. While Splunkd is restarting, you should attach the NEDS iRule to a BIG-IP virtual server you want to receive data from and send some traffic through the VIP so your Splunk server will get some data.
Now let’s navigate back to the Search app in the web UI. In the search bar, enter eventtype=F5connectionEndEvent . I opened the Pick fields box and selected BytesIn, BytesOut, Device, PktsIn and PktsOut.
As another way to use Splunk search to report on traffic transiting a BIG-IP, enter eventtype=F5connectionEndEvent | timechart avg(PktsOut) avg(BytesOut) into the search bar. This generates a table listing the average number of packets and the average number of bytes transmitted from the VIP for each default 10s time period.
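You can also steer the bucketing and grouping yourself. These variations are illustrative sketches using timechart’s span argument and a by clause, not searches from the original setup:

eventtype=F5connectionEndEvent | timechart span=1m avg(PktsOut) avg(BytesOut)
eventtype=F5connectionEndEvent | timechart span=1m sum(BytesOut) by Device

The first averages over one-minute buckets instead of the default; the second breaks the byte counts out per reporting device.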
I mentioned at the top of this post that there was more to come about the script input. F5 recently added dashboards for WAN Optimization and Access Policy management. One thing I wish the dashboards provided is a historic view of the data, so I can see how my infrastructure changes over time as I upgrade applications and add more traffic to my network. Full disclosure: this BIG-IP interface isn’t a supported interface for anything other than the dashboard. Using BIG-IP 10.1 with a full WAN Optimization license, Perl and Splunk, here’s how I did it.
1) Place this script (http://devcentral.f5.com/s/downloads/techtips/text-dashboard-log.pl) somewhere on your Splunk system and mark it executable – I put mine in $SPLUNKHOME/bin/scripts
2) Ensure you have the proper Perl modules installed for the script to work
3) Add BIG-IP logon data to lines 58 and 59 – the user must be able to access the dashboard
4) Configure a data input for the Splunk server to get the dashboard data. My Splunk system retrieves a data set every 2 minutes. I’ve added in 2 collectors, one for each side of my WOM iSession tunnel.
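I created my data inputs through the web UI, but the equivalent inputs.conf stanza for a scripted input looks roughly like this (the path, interval and sourcetype here reflect my lab setup as described above – yours will differ):

[script://$SPLUNK_HOME/bin/scripts/text-dashboard-log.pl]
interval = 120
sourcetype = BIG-IP_Acceleration_Dashboard

The interval of 120 seconds matches the two-minute collection cadence mentioned in step 4; add one stanza per collector.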
After getting all this set up and letting it run for a while, navigate back to your Search console and you should see a new sourcetype show up called BIG-IP_Acceleration_Dashboard.
Clicking on the BIG-IP_Acceleration_Dashboard sourcetype displays the log entries sent to the Splunk system. Splunk recognizes the key-value pairings and has automatically extracted the data and created entries in the Pick fields list.
That’s a lot of data! Basically it’s the contents of the endpoint_isession_stat table and the endpoint data – you can get this on the CLI via ‘tmctl endpoint_isession_stat’ and ‘b endpoint remote’ . Now I can easily see that from basically March 8 until now my WOM tunnels were only down for about 4 minutes. Another interesting report I’ve built from here is the efficacy of adaptive compression for the data transiting my simulated WAN, by charting lzo_out_uses, deflate_out_uses and null_out_uses over time.
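A sketch of that compression report as a search (the field names follow the dashboard output described above; the chart shape will depend on your traffic mix):

sourcetype=BIG-IP_Acceleration_Dashboard | timechart avg(lzo_out_uses) avg(deflate_out_uses) avg(null_out_uses)

Plotting the three series together shows which codec adaptive compression is actually choosing over time.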
Last, but certainly not least – there’s the Splunk for F5 Networks application available via http://www.splunk.com/wiki/Apps:Splunk_for_F5. You should definitely install it if you’re an ASM or PSM user.