How I did it - "Visualizing Data with F5 TS and Splunk"
The new Splunk Add-on for F5 BIG-IP includes several objects (modular inputs, CIM knowledge, etc.) that work to “normalize” incoming BIG-IP data for use with other Splunk apps, such as Splunk Enterprise Security and the Splunk App for PCI Compliance.
The add-on includes a mechanism for pulling network traffic data, system logs, system settings, performance metrics, and traffic statistics from the F5 BIG-IP platform using F5’s iControl API (see below).
But what I'm really excited about is that the add-on now integrates with F5 Telemetry Streaming (TS). With TS I can declaratively aggregate, normalize, and push JSON-formatted BIG-IP statistics and events to a variety of third-party analytics vendors.
For the remainder of this article, we’ll take a look at how I integrate F5 TS with Splunk Enterprise. I’ll be working with an existing BIG-IP deployment as well as a newly deployed Splunk Enterprise instance. As an added bonus (and since it’s part of the article’s title), I’ll import a couple of custom dashboards (see below) to visualize our newly ingested telemetry data.
Oh! As an "extra" added bonus, here is a link to a video walkthrough of this solution.
Installing the Splunk Add-on for F5 BIG-IP and Splunk CIM
Installing the Splunk F5 add-on is very simple. Additionally, to make use of the add-on, I’ll need to install Splunk’s Common Information Model (CIM) add-on.
1. From the top of the Splunk search page, I select ‘Apps’ → ‘Find More Apps’.
2. I browse for “CIM” and select the Splunk Common Information Model add-on.
3. I accept the license agreement, provide my Splunk account login credentials and select ‘Login and Install’.
4. I’ll repeat steps 2-3 to install the Splunk Add-on for F5 BIG-IP.
Set Up the Splunk HTTP Event Collector
To receive incoming telemetry data into my Splunk Enterprise environment over HTTP/HTTPS, I need to create an HTTP Event Collector.
1. From the UI I select ‘Settings’ → ‘Data Inputs’. I select ‘HTTP Event Collector’ from the input list.
2. Prior to creating a new event collector token, I must first enable token access for my Splunk environment. On the ‘HTTP Event Collector’ page, I select ‘Global Settings’. I set ‘All Tokens’ to enabled, set the default index and incoming port, and ensure SSL is enabled. I click ‘Save’ to exit.
3. I select ‘New Token’, provide a name for the new collector, and select ‘Next’.
4. On the ‘Input Settings’ tab I’ll select my allowed index(es) and select ‘Review’ then ‘Submit’.
5. Once the token is created, I copy it for use with my F5 TS configuration. A quick way to verify the collector is working is shown below.
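Before moving on, it can be worth confirming that the collector and token actually accept events. Below is a minimal curl sketch; the hostname and port are placeholders (8088 is Splunk’s default HEC port), and the token is the one generated above.

curl -k https://<splunk-host>:8088/services/collector/event \
  -H "Authorization: Splunk <your-HEC-token>" \
  -d '{"event": "F5 TS HEC test", "sourcetype": "manual"}'

A {"text":"Success","code":0} response indicates the collector is ready to receive telemetry.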
Configure Telemetry Streaming
With my Splunk environment ready to receive telemetry data, I now turn my attention to configuring the BIG-IP for telemetry streaming. Fortunately, with F5’s Automation Toolchain, configuring the BIG-IP is quite simple.
1. I’ll use Postman to POST an AS3 declaration to configure telemetry resources (telemetry listener, log publisher, logging profiles, etc.).
The above AS3 declaration (available here) deploys the required BIG-IP objects for pushing event data to a third-party vendor. Notably, it creates four (4) logging profiles I’ll attach to my application’s virtual server.
2. Still using Postman, I POST my TS declaration (sample). I will need to provide my Splunk HTTP Event Collector endpoint address/port as well as the token generated previously.
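If you prefer the command line to Postman, the TS POST looks roughly like the sketch below. This is a minimal sketch only, showing just a listener and a Splunk consumer; the linked sample declaration is the authoritative reference. The BIG-IP address, credentials, Splunk host, and token are placeholders.

curl -sku admin:<password> -H "Content-Type: application/json" -X POST \
  https://<big-ip-mgmt-address>/mgmt/shared/telemetry/declare \
  -d '{
    "class": "Telemetry",
    "My_Listener": {
      "class": "Telemetry_Listener",
      "port": 6514
    },
    "My_Splunk_Consumer": {
      "class": "Telemetry_Consumer",
      "type": "Splunk",
      "host": "<splunk-host-or-ip>",
      "protocol": "https",
      "port": 8088,
      "passphrase": {
        "cipherText": "<HEC token>"
      }
    }
  }'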
Associate Logging Profiles to Virtual Server
The final step in configuring the BIG-IP for telemetry streaming is associating the logging profiles I just created with my existing virtual server. In addition to system telemetry, these logging profiles, when assigned to a virtual server, will send LTM, AVR, and ASM telemetry.
1. From the BIG-IP management UI, I select ‘Local Traffic’ → ‘Virtual Servers’ → <virtual>.
2. Under ‘Configuration’ I select ‘Advanced’, scroll down, and select the HTTP, TCP, and request logging profiles previously created. I select ‘Update’ at the bottom of the page to save my changes.
3. From the top of the virtual server page, I select ‘Security’ → ‘Policies’. From the policy settings page, I can see that there is an existing WAF policy associated with my application. To enable ASM logging, I select the previously created ASM logging profile from the available logging profiles and select ‘Update’ to save my changes.
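For those who prefer tmsh over the management UI, the same association can be sketched from the CLI. The virtual server and profile names below are placeholders; substitute the names created by the AS3 declaration, and note that if a default HTTP or TCP profile is already attached it may need to be replaced rather than added.

# Attach the telemetry logging profiles to the virtual server (placeholder names)
tmsh modify ltm virtual <my_app_vs> profiles add { <telemetry_http_log_profile> }
# Attach the ASM security logging profile (placeholder name)
tmsh modify ltm virtual <my_app_vs> security-log-profiles add { <telemetry_asm_log_profile> }
tmsh save sys config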
With the configuration process complete, I should now start seeing event data in my Splunk environment.
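A quick sanity check from the Splunk search page is to count incoming events by category; categories such as systemInfo and AVR should start to appear as data arrives. The index name below is a placeholder for whichever index the HEC token writes to.

index=<your_index> | stats count by telemetryEventCategory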
Import Dashboards
“Ok, so I have event data streaming into my Splunk environment; now what?”
Since I have installed the Splunk F5 add-on, I can integrate my “normalized” data with other data sources to populate various Splunk applications like Splunk Enterprise Security and the Splunk App for PCI Compliance. Likewise, I can use dashboards to visualize my telemetry data as well as monitor BIG-IP resources/processes. To finish up, I’ll use the following steps to create custom dashboards visualizing BIG-IP metrics and Advanced WAF (formerly ASM) attack information.
1. From the Splunk Search page, I navigate to the Dashboards page by selecting ‘Dashboards’.
2. I select ‘Create New Dashboard’ from the Dashboards page.
3. I provide a name for the new dashboard and select ‘Create Dashboard’. The dashboard name (the ID will remain unchanged) will be updated in the next step, where I replace the newly created dashboard’s XML source with one of the community-supported dashboard XML files here.
4. On the ‘Edit Dashboard’ screen I select ‘Source’ to edit the dashboard XML. I replace the existing XML data with the contents of the ‘advWafInsights.xml’ file. I select ‘Save’ to install the new dashboard.
5. I’ll repeat steps 1-4 using ‘bigipSystemMetrics.xml’ to install the BIG-IP metrics dashboard.
Additional Links
- Greg_Coward (Employee)
Hello @abyanfaishal
I believe the fix noted in the article you mentioned - https://my.f5.com/manage/s/article/K05413010 (see the commands below) - will address the issue. Unfortunately, I don't know of a way to avoid allowing the loopback address.
Command:
tmsh modify sys db tmm.tcl.rule.node.allow_loopback_addresses value true
tmsh save sys config
Thanks,
Greg
- abyanfaishal (Nimbostratus)
I found the root cause already. There is a functional code change in 15.1.6.1 and up that blocks iRules using node with a loopback IP, per https://my.f5.com/manage/s/article/K05413010. I knew about it earlier, but I still didn't have the guts to make the change since it will open me up to a vulnerability. Is there any chance you guys could help me fix this, or find another way around to get a similar result to the iRule?
The iRule is:
ltm rule telemetry_local_rule {
    when CLIENT_ACCEPTED {
        node 127.0.0.1 6514
    }
}
Thank you
- MichaelOLeary (Employee)
Hello abyanfaishal
Did you use AS3 to set up your listener? I.e., is this really the name of it:
/Common/Shared/telemetry_publisher
In my case, I had not used AS3 to set this up, so mine was called "/Common/telemetry_publisher", and I had to edit my TMSH command to reflect that. See my comment on this article dated 10 Apr 2023.
Does that help?
Mike O'Leary
- abyanfaishal (Nimbostratus)
Hi MichaelOLeary and Greg_Coward
I found a problem where Splunk still doesn't receive any telemetry events for AVR (telemetryEventCategory=AVR). I have AVR provisioned and am able to see HTTP Analytics under Statistics, and I have already associated the log profiles with my VS. In Splunk I am able to see events for SystemInfo and syslog, meaning that the telemetry is able to reach Splunk.
I have also run the specific TMSH command - modify analytics global-settings { external-logging-publisher /Common/Shared/telemetry_publisher offbox-protocol hsl use-offbox enabled } - and get:
# tmsh list analytics global-settings all-properties
analytics global-settings {
avrd-debug-mode disabled
avrd-interval 300
disable-all-internal-logging disabled
ecm-address any6
ecm-port 0
enable-bigiq-configuration disabled
external-logging-publisher /Common/Shared/telemetry_publisher
offbox-protocol hsl
offbox-tcp-addresses none
offbox-tcp-port 0
partition Common
source-id none
tenant-id default
trigger-configuration-update disabled
use-ecm disabled
use-hsl disabled
use-offbox enabled
}
Is there anything I'm missing?
- MichaelOLeary (Employee)
Greg_Coward and WildWeasel, one thing I find worth pointing out is that the TMSH command above includes the name of the VIP "/Common/Shared/telemetry_publisher" - however, in my case I have set this up before without using AS3, so the VIP name was "/Common/telemetry_publisher" (or whatever I may have used instead of telemetry_publisher). So, you may need to edit that TMSH command accordingly.
I make this point in case you copy/paste the TMSH command to export AVR stats and your VIP name is not the same as Greg's. I made this mistake myself and it took me a while to realize it 😀
- Greg_Coward (Employee)
Hello,
For AVR, you need to apply a TMSH command to point the AVR configuration to the log publisher. The specific TMSH command is - modify analytics global-settings { external-logging-publisher /Common/Shared/telemetry_publisher offbox-protocol hsl use-offbox enabled }. You can find additional information related to this at: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/avr.html#modifying-avr-configuration-to-use-the-log-publisher
Hope this helps,
Greg
- WildWeasel (Cirrus)
FIRST and most important, I want to thank you, Greg, for putting this together. VERY helpful, and I know we all appreciate how thorough your documentation and instructions are. Very helpful in getting this up ourselves.
Now to an issue I was hoping you could give me a push in the right direction on. I get the basic info but not the stats on anything looking for telemetryEventCategory=AVR, and yes, AVR is provisioned and TS is confirmed to be pushing data. I followed your instructions to a tee. We did have BIG-IQ but removed it from these boxes and confirmed BIG-IQ is not enabled.
tmsh list analytics global-settings all-properties
analytics global-settings {
avrd-debug-mode disabled
avrd-interval 300
disable-all-internal-logging disabled
ecm-address any6
ecm-port 443
enable-bigiq-configuration disabled
external-logging-publisher /Common/Shared/telemetry_publisher
offbox-protocol hsl
offbox-tcp-addresses none
offbox-tcp-port 443
partition Common
source-id none
tenant-id default
trigger-configuration-update disabled
use-ecm disabled
use-hsl disabled
use-offbox enabled
}
Appreciate any direction you can offer.
- Greg_Coward (Employee)
Hey Michael,
Thanks for the feedback/updates. I have updated the article links as well as the dashboards themselves. Since the original publishing there have been updates to the TS output that broke the dashboard configurations.
- MichaelOLeary (Employee)
Hi Greg,
A few more notes from following your article:
1) I was forced to add a version attribute to the "form" element in the XML for both dashboards. I think this is due to an update in the Splunk Cloud platform I used. An easy addition for anyone who knows XML, but perhaps you could test this out again and, if the platform has been updated, update your XML?
2) The first time I followed your instructions, I had not created an index called f5_index. That is not part of your instructions but is required by your XML, so I had blank dashboards. Again for the sake of a dummy like me who follows instructions to the letter, perhaps you could include instructions?
3) Finding the CIM add-on was hard for me (your screenshot helped a lot!). Perhaps more have been added since you wrote this article and perhaps there's an easier way to make sure folks select the correct one.
4) When updating your JSON file for TS configuration, I used the public IP address that I got by pinging the DNS name of my Splunk Cloud instance. I have no idea if that's the right way to do it (probably not; I would expect TLS validation to fail), so if you have advice for the right way to do it, it may help real customers (this is just a PoC I'm doing).
That's all my notes for now. I am still trying to get my dashboards populated with data but I believe I have traffic from BIG-IP arriving in Splunk Cloud now. Thanks so much for this guide.
- MichaelOLeary (Employee)
Hi Greg,
Looks like a broken link in the article:
https://github.com/f5devcentral/analytics-vendor-dashboards/blob/main/splunk/splunk_as3__declaration.json is the link in the article but it looks like https://github.com/f5devcentral/analytics-vendor-dashboards/blob/main/splunk/F5%20BIG-IP/splunk_as3__declaration.json is the updated link.
And, thanks again for writing this!