telemetry streaming
15 Topics

AST and Telemetry Streaming
Hi! I am quite pleased with the Application Study Tool, but there is one thing I keep wondering about. The OpenTelemetry component (otel-collector) currently pulls data from the F5 using the iControl REST API; however, there is also Telemetry Streaming available. Is it possible to use Telemetry Streaming to get the data into Prometheus?
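Telemetry Streaming ships a Prometheus pull consumer, so one possible path is to skip the REST polling and let Prometheus scrape the BIG-IP directly. A minimal sketch, closely following the TS documentation's pull-consumer pattern; the object names here are arbitrary:

{
    "class": "Telemetry",
    "My_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 0
    },
    "My_System": {
        "class": "Telemetry_System",
        "enable": true,
        "systemPoller": ["My_Poller"]
    },
    "metrics": {
        "class": "Telemetry_Pull_Consumer",
        "type": "Prometheus",
        "systemPoller": "My_Poller"
    }
}

Prometheus would then scrape https://<big-ip>/mgmt/shared/telemetry/pullconsumer/metrics (the last path segment is the consumer's name); authentication against the iControl REST endpoint still has to be handled in the scrape config.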
F5 Open telemetry issue

Hi all, we have an issue with TS on F5. We installed the TS package and set up the declaration, but when we check the F5 url/telemetry we don't see the attributes we should see. In the attachment you can see the declaration we used and posted to the F5. We expected to see everything, but we only see some basic status, not e.g. F5 pool active members or F5 pool availability. We don't see any errors in /var/log/restnoded/restnoded.log, and when we check the URL localhost/mgmt/shared/telemetry/pullconsumer/metrics we see nothing useful. Any help would be appreciated.
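One thing worth ruling out in a situation like this is an includeData/excludeData action on the system poller filtering the pool data away. A hedged sketch of a poller that explicitly requests pool statistics; My_System is a placeholder name, and the location keys follow the top-level keys of the TS system poller output:

{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60,
            "actions": [
                {
                    "enable": true,
                    "includeData": {},
                    "locations": {
                        "system": true,
                        "pools": true,
                        "virtualServers": true
                    }
                }
            ]
        }
    }
}

If pool members and availability still don't show up with this (or with no actions block at all, which collects everything), the gap is more likely in the consumer or the endpoint being checked than in the poller.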
High Speed Logging vs Telemetry Streaming (Logging to SIEM)

My goal is to send web application traffic logs from my virtual servers to an external SIEM. It looks like there are quite a few ways to approach this, so I want to check with the community to see what works best for you. Ideally, this would be a high-volume configuration with logging enabled for 400+ public virtual servers. At a minimum, I would like to collect the client IP, user agent, URI path, virtual IP, virtual server name, pool, server name, and server-side response code.

I reviewed the overview here: Getting Started with iRules: Logging & Comments | DevCentral

It is clear that High Speed Logging (HSL) would be the preferred approach to ensure the resource and capacity burden is placed on the TMM (data plane) rather than the control/management plane, and to avoid writing to disk on the F5 BIG-IP host. I could write to syslog servers and forward these logs to my SIEM. HSL seems to be straightforward to configure, with a sample iRule looking like:

when CLIENT_ACCEPTED {
    set vs [IP::local_addr]:[TCP::local_port]
    # Open HSL connections with configured pools
    set hsl_pool_1 [HSL::open -proto UDP -pool Pool_Syslog_1]
    set hsl_pool_2 [HSL::open -proto UDP -pool Pool_Syslog_2]
}
when SERVER_CONNECTED {
    set client [IP::client_addr]:[TCP::client_port]
    set srv [IP::remote_addr]:[TCP::remote_port]
    set log_message "<134>Client: $client connected to $vs and routed to server $srv at [clock format [clock seconds] -format \"%Y-%m-%d %H:%M:%S\"]"
    # Send logs to both HSL pools
    HSL::send $hsl_pool_1 $log_message
    HSL::send $hsl_pool_2 $log_message
}

However, when I searched the DevCentral forums for references to SIEM logging, I found that most examples used Telemetry Streaming and AS3 for configuration. An official KB can be found here: Configure Azure sentinel or other telemetry consumer integration with BIG-IP, and there is an f5devcentral GitHub repository with configuration declarations for this approach for multiple SIEM vendors (e.g., analytics-vendor-dashboards/elastic at main · f5devcentral/analytics-vendor-dashboards for Elastic).

For a use case like mine that involves high-volume logging on BIG-IP, do you know whether HSL or Telemetry Streaming would be best to minimize the impact on BIG-IP?
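For the field list above, a Request Logging profile declared through AS3 (the Traffic_Log_Profile class used in the Sentinel write-up further down this page) keeps the formatting work on the data plane without a hand-written iRule. A hedged sketch: the ${User-agent} header reference and the response-side settings are assumptions to verify against your TMOS version's Request Logging parameter list, and "telemetry" is assumed to be an existing log pool:

{
    "telemetry_traffic_log_profile": {
        "class": "Traffic_Log_Profile",
        "requestSettings": {
            "requestEnabled": true,
            "requestProtocol": "mds-udp",
            "requestPool": { "use": "telemetry" },
            "requestTemplate": "hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",user_agent=\"${User-agent}\",http_uri=\"$HTTP_URI\",virtual_ip=\"$VIRTUAL_IP\",virtual_name=\"$VIRTUAL_NAME\""
        },
        "responseSettings": {
            "responseEnabled": true,
            "responseProtocol": "mds-udp",
            "responsePool": { "use": "telemetry" },
            "responseTemplate": "server_ip=\"$SERVER_IP\",response_code=\"$HTTP_STATCODE\",event_timestamp=\"$DATE_HTTP\""
        }
    }
}

Pool and member names are not in the standard parameter list on every version, so those two fields may still need an iRule or AVR.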
Telemetry Streaming: getting HTTP statistics via SNMP

Hi F5 community, I am looking to get HTTP statistics metrics (total count, and broken down by response code) via SNMP, which seems to be the most viable option:

F5-BIGIP-LOCAL-MIB::ltmHttpProfileStat
OID: .1.3.6.1.4.1.3375.2.2.6.7.6

However, the stats don't seem to come out correct at all: I do see deltas happening, but they don't match the traffic rate I expect to see. Furthermore, I have done some tests where I start a load-testing tool (vegeta) to fire concurrent HTTP requests, for which I do see the logs from the virtual server, but no matching increment in the above SNMP OID entries for any of the configured profiles.

What am I doing wrong? Does something need to be enabled on the HTTP profile in use to collect those stats?

Best,
Owayss
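Given the Telemetry Streaming context of the question, a hedged cross-check is to pull the same HTTP profile counters through a TS system poller instead of SNMP; the httpProfiles location here is the same one used in the Sentinel article further down this page, and the object names are placeholders:

{
    "class": "Telemetry",
    "Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 60,
        "actions": [
            {
                "enable": true,
                "includeData": {},
                "locations": {
                    "system": true,
                    "httpProfiles": true
                }
            }
        ]
    },
    "Pull_Consumer": {
        "class": "Telemetry_Pull_Consumer",
        "type": "default",
        "systemPoller": ["Poller"]
    }
}

If a GET to /mgmt/shared/telemetry/pullconsumer/Pull_Consumer shows the counters moving under the vegeta load while the OID stays flat, the problem is on the SNMP side rather than in the profile's stat collection.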
Troubleshooting F5 WAF Log Shipping to Microsoft Sentinel SIEM: Issues Isolating ASM Logs

We have an issue with shipping logs from F5 WAF to Microsoft Sentinel SIEM. The issue is peculiar: we do not want to ship F5Telemetry_system_CL logs or F5Telemetry_LTM_CL logs, only F5Telemetry_ASM_CL logs. We have simplified the command to the most basic one. At first it was working and we managed to ship LTM and ASM logs, but when we tried to granulate for just LTM logs, nothing is being sent. I will include the commands we used at the bottom. Any help would be appreciated, as well as some guidance on differentiating between ASM and LTM logs.

Command working for LTM and ASM, but not system logs:

curl -ku <username>:<password> -H 'Content-Type: application/json' https://192.0.0.0/mgmt/shared/telemetry/declare --data-raw \
'{
    "class": "Telemetry",
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "Pull_Consumer": {
        "class": "Telemetry_Pull_Consumer",
        "type": "default",
        "systemPoller": ["Poller"]
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "secret",
        "passphrase": {
            "cipherText": "secret"
        }
    }
}'

New command was accepted, but nothing is being sent:

curl -ku <username>:<password> -H 'Content-Type: application/json' https://192.0.0.0/mgmt/shared/telemetry/declare --data-raw \
'{
    "class": "Telemetry",
    "controls": {
        "class": "Controls",
        "logLevel": "info",
        "debug": false
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "secret",
        "passphrase": {
            "cipherText": "secret"
        }
    }
}'
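For anyone mapping tables to sources: F5Telemetry_system_CL rows come from a Telemetry_System_Poller, F5Telemetry_LTM_CL from an LTM request-logging profile pointed at the listener, and F5Telemetry_ASM_CL from the ASM security logging profile. So a hedged way to ship only ASM events is a declaration with just a listener and the Azure consumer and no poller, with only the ASM logging profile forwarding to port 6514 (workspace values redacted as in the original):

{
    "class": "Telemetry",
    "ASM_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "secret",
        "passphrase": {
            "cipherText": "secret"
        }
    }
}

With no poller declared, nothing should land in F5Telemetry_system_CL, and as long as no LTM request-logging profile targets the listener, F5Telemetry_LTM_CL should stay empty as well.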
Telemetry Streaming Question

Hi, I'm after a little assistance. I have installed the Telemetry Streaming extension to use OpenTelemetry as an endpoint. I've been able to achieve this, and my receiver is receiving the metrics. The declaration I am POST-ing is as follows:

{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60
        }
    },
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "OpenTelemetry_Exporter",
        "host": "10.1.2.3",
        "port": 4317,
        "metricsPath": "/v1/metrics",
        "convertBooleansToMetrics": true,
        "enable": true,
        "trace": false,
        "allowSelfSignedCert": true,
        "exporter": "protobuf",
        "protocol": "https"
    }
}

This is successful, as I say, but the received metrics don't include a hostname (which is important, as there are some profile duplications across the 40+ devices looked after). Is there a way to insert the hostname into the messages exported from the systems? Thanks in advance.
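One hedged angle: TS poller actions include setTag alongside includeData/excludeData, so a per-device tag could be stamped onto the polled data before it reaches the consumer. Whether the OpenTelemetry_Exporter surfaces that tag as a metric label can depend on the TS version, so treat this as a sketch to test rather than a confirmed fix; the hostname value is a placeholder you would template per device:

{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60,
            "actions": [
                {
                    "enable": true,
                    "setTag": {
                        "hostname": "bigip-dc1-01.example.com"
                    }
                }
            ]
        }
    }
}

The poller output also carries system.hostname by default, so the other thread to pull on is why that field is being dropped on export.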
Telemetry Streaming - Re-starting restnoded

Hello Everyone, I have had a dilemma ever since I set up Telemetry Streaming. I noticed that the restnoded daemon is restarting (on some days more frequently than others), but I can't get a handle on the root cause or how to solve it. I have been keeping a close eye on /var/log/restnoded/restnoded.log but couldn't pinpoint what could cause the restnoded daemon to restart.

Regards,
Sarah.
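A first hedged step is to raise the TS logging verbosity with the Controls class (the same block shown in the Sentinel thread above), so restnoded.log captures more detail around each restart:

{
    "class": "Telemetry",
    "controls": {
        "class": "Controls",
        "logLevel": "debug",
        "debug": true
    }
}

Restnoded restarts on busy systems are often resource-related, so correlating the restart timestamps with memory pressure and with the size of the TS workload (poller interval, number of consumers, listener event volume) is also worth a look.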
Most efficient methods for Connection logging?

Does anyone have real-world experience with logging connections at a high rate? If so, which methods are you using to collect and transmit the data? We have a requirement to log all connections going through our F5 devices: things like the client/server-side IPs/ports, as well as HTTP details for HTTP VIPs and DNS details from our GTMs. It's the White House M-21-31 mandate, if anyone is familiar with it. I've used Request Logging profiles and various iRules with HSL to collect this type of data before, but I've never been too concerned about overhead because I would only apply them as needed, like when troubleshooting an issue with a VIP. Our busiest appliance pushes around 150k conn/sec and 5k HTTP req/sec, so I now have to consider the most efficient methods to avoid any kind of impact to traffic flows. I've done some lab testing with several different methods, but I can't do any meaningful load tests in that environment. Below are some of my opinions based on my lab testing so far.

Data Collection

AVR - I like that this single feature can meet all the requirements for collecting TCP, HTTP, and DNS data. It would also be relatively easy to perform audits to ensure the VIPs have the necessary Analytics profiles, as we can manage it from the AVR profiles themselves. My main concern is the overhead that results from the traffic analysis. I assume it has to maintain a large database where it stores all the analyzed data, even if we just ship it off to Splunk. Even the data shipped off to Splunk includes several different logs for each connection (each with a different 'Entity').

Request Logging Profile - This is fairly flexible and should have low overhead, since the F5 doesn't need to analyze any of the data like AVR does. This only collects HTTP data, so we would still need another solution to collect details for non-HTTP VIPs. It would be a pain to audit, since we don't use any kind of deployment templates or automation.

iRule - This provides a lot of flexibility, and it is capable of collecting all the necessary data, but I don't know how well its performance overhead compares to AVR. This would also be a pain to audit due to the lack of deployment templates and automation.

Data Transmission

HSL UDP Syslog - I imagine this is the most efficient method to send events, but it's likely only a matter of time before we are required to use TCP/TLS.

Telemetry Streaming - This is the more modern method, and it offers some interesting features like the System Poller, which could eventually allow us to move away from SNMP polling. We would need a workaround for our GTM-only devices, because they cannot run a TS listener.
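If Request Logging wins out for the HTTP VIPs, the HSL plumbing it points at can be declared once in AS3 and referenced from all 400+ virtuals, which also closes the audit gap. A hedged sketch that mirrors the Log_Destination pattern in the Sentinel article below; it belongs inside an AS3 application (for example Common/Shared), and the pool member address is a placeholder:

{
    "Syslog_Pool": {
        "class": "Pool",
        "members": [
            {
                "enable": true,
                "serverAddresses": ["10.0.0.50"],
                "servicePort": 514
            }
        ],
        "monitors": [
            { "bigip": "/Common/udp" }
        ]
    },
    "HSL_Destination": {
        "class": "Log_Destination",
        "type": "remote-high-speed-log",
        "protocol": "udp",
        "pool": { "use": "Syslog_Pool" }
    },
    "HSL_Publisher": {
        "class": "Log_Publisher",
        "destinations": [
            { "use": "HSL_Destination" }
        ]
    }
}

Switching protocol to tcp later (for the anticipated TCP/TLS requirement) is a one-line change here rather than a per-VIP edit.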
Deploying BIG-IP Telemetry Streaming with Azure Sentinel as its consumer.

AZURE SENTINEL and BIG-IP ...with Telemetry Streaming!

This work was completed as a collaboration of Remo Mattei (r.mattei@f5.com) and Bill Wester (b.wester@f5.com); feel free to email us if you have questions.

One of the things I have discovered recently is how neat it is to be able to leverage Azure's new Sentinel to receive and display telemetry data from F5 BIG-IP devices. The devices don't even have to be in Azure; you could have dedicated hardware BIG-IPs and still send statistics and logs to Sentinel via Telemetry Streaming. Let us explore how to bring all of the moving pieces together into a single cohesive implementation.

Telemetry Streaming is a way for you to forward events and statistics from the BIG-IP system to your preferred data consumer and visualization application. You can do all of this by POSTing a single JSON declaration to a declarative REST API endpoint. Telemetry Streaming uses a declarative model, meaning you provide a JSON declaration rather than a set of imperative commands. More info can be found here: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/userguide/about-telemetry.html

BIG-IP allows you to send logs to several external providers. Splunk, a well known one, is among the most widely used. However, the new Azure Sentinel, a cloud solution, is something many customers can take advantage of. This section will help you understand how to set up BIG-IP to get the logs to Azure Sentinel.

Setup BIG-IP

First of all, this is broken into two parts. One covers the logs of the BIG-IP System Metrics (what OS, which modules are installed, and so on); the second covers the ASM module. The two have a few things in common: they use the TS RPM file, which is added to the BIG-IP, and the declaration, which tells the BIG-IP where to send the stream of data. Sending data related to BIG-IP System Metrics requires AVR to be provisioned on the device. ASM is not required, but we use it here as an example of how to enable another module. Here is a screenshot from Azure which shows the required modules. One more important thing: ASM will also need AFM enabled, otherwise you will not get logs in Azure.

ASM

Once the required modules are enabled, it will show System Metrics.

Common components that you must install for this to work

First you need Telemetry Streaming. The TS RPM can be found on GitHub: https://github.com/F5Networks/f5-telemetry-streaming/releases/

You can use Visual Studio Code to install the RPM, or your favorite way. Here are some screenshots from VS Code, using the F5 plugin. NOTE: in order to use VS Code to push AS3, DO, etc., you must install the F5 plugin.
Use the command palette (on Mac it's Command+Shift+P); here you can search for RPM by just typing it in the box.

Select AS3 and make sure to install both AS3 and TS.

Select the version (the latest is probably best here).

The Telemetry Streaming declaration looks like this:

{
    "class": "Telemetry",
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 60,
        "enable": true,
        "trace": false,
        "allowSelfSignedCert": false,
        "host": "localhost",
        "port": 8100,
        "protocol": "http",
        "actions": [
            {
                "enable": true,
                "includeData": {},
                "locations": {
                    "system": true,
                    "virtualServers": true,
                    "httpProfiles": true,
                    "clientSslProfiles": true,
                    "serverSslProfiles": true
                }
            }
        ]
    },
    "Pull_Consumer": {
        "class": "Telemetry_Pull_Consumer",
        "type": "default",
        "systemPoller": [
            "Poller"
        ]
    },
    "Azure_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "workspaceId": "workspaceID",
        "passphrase": {
            "cipherText": "primkey"
        }
    },
    "schemaVersion": "1.12.0"
}

NOTE: You will need to get the workspaceID and the primary key. You can use the Azure CLI for that:

az monitor log-analytics workspace list --out table

CustomerId  Location  Name  ProvisioningState  PublicNetworkAccessForIngestion  PublicNetworkAccessForQuery  ResourceGroup  RetentionInDays
------------------------------------  -------------  ----------------------------------------------------------  -------------------  ---------------------------------  -----------------------------  -------------------------  -----------------
a05d4bfb-27c8-49a6-96e2-351d2dc78c61  eastus  adrianLA  Succeeded  Enabled  Enabled  adrian_rg_01  7
63be43ed-b3f5-4e9f-bc92-226bb3393d11  eastus  DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-EUS  Succeeded  Enabled  Enabled  defaultresourcegroup-eus  30
2ccbd35a-dfdf-4a5e-ab5f-1d5314f52e4b  southeastasia  DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-SEA  Succeeded  Enabled  Enabled  defaultresourcegroup-sea  30
9436f742-069a-4e29-aac0-e1258f7b1f87  westus2  calalangakslog  Succeeded  Enabled  Enabled  calalang-rg  30
ac071b51-f0c6-43b6-8bef-16b9197fde0f  westus2  edgar-log  Succeeded  Enabled  Enabled  defaultresourcegroup-eus  31
555ae8d5-75bc-4058-becf-df510c09f8d3  westus2  DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-WUS2  Succeeded  Enabled  Enabled  defaultresourcegroup-wus2  30
f633bdb1-d560-43cd-a664-cc7a93ed8781  westus2  edgar-log-analytics  Succeeded  Enabled  Enabled  edgar-rg  30
9334eb7c-16fc-4db9-a84f-5824a7177ccb  centralus  DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-CUS  Succeeded  Enabled  Enabled  defaultresourcegroup-cus  30
091c2cf3-853d-4297-9001-41d2109c28ec  westus  DefaultWorkspace-77c6ebef-d849-4527-a355-742d8d7d3fdc-WUS  Succeeded  Enabled  Enabled  defaultresourcegroup-wus  30
52471748-d9c7-46ba-9f9f-72ed8e92a201  westus  remo-analytics  Succeeded  Enabled  Enabled  remo-telemetry  30
bc8e90ca-f59c-4fbf-a28b-213fe1cfcfda  westus  wester-log  Succeeded  Enabled  Enabled  wester_rg  30

Here you can see the name of the resource group; then run the following command:

az monitor log-analytics workspace get-shared-keys --resource-group wester_rg --workspace-name wester-log

which will print out the primary key. The workspace ID is the CustomerId from the table above.

To install this declaration you can use Postman, curl, or Visual Studio Code; we used Visual Studio Code. Copy the text into a new VS Code tab, make sure it's in JSON format, and then use the command palette to post it.

Verify by using the TS version at the bottom of VS Code; it will execute a GET to the BIG-IP that is connected.
ASM

In order to use ASM, you will need to configure a VIP with the IP 255.255.255.254 and port 6514, as well as an iRule. This can be done with an AS3 declaration or TMSH.

Sample AS3 declaration:

{
    "class": "ADC",
    "schemaVersion": "3.10.0",
    "remark": "Example depicting creation of BIG-IP module log profiles",
    "Common": {
        "Shared": {
            "class": "Application",
            "template": "shared",
            "telemetry_local_rule": {
                "remark": "Only required when TS is a local listener",
                "class": "iRule",
                "iRule": "when CLIENT_ACCEPTED {\n node 127.0.0.1 6514\n}"
            },
            "telemetry_local": {
                "remark": "Only required when TS is a local listener",
                "class": "Service_TCP",
                "virtualAddresses": [
                    "255.255.255.254"
                ],
                "virtualPort": 6514,
                "iRules": [
                    "telemetry_local_rule"
                ]
            },
            "telemetry": {
                "class": "Pool",
                "members": [
                    {
                        "enable": true,
                        "serverAddresses": [
                            "255.255.255.254"
                        ],
                        "servicePort": 6514
                    }
                ],
                "monitors": [
                    {
                        "bigip": "/Common/tcp"
                    }
                ]
            },
            "telemetry_hsl": {
                "class": "Log_Destination",
                "type": "remote-high-speed-log",
                "protocol": "tcp",
                "pool": {
                    "use": "telemetry"
                }
            },
            "telemetry_formatted": {
                "class": "Log_Destination",
                "type": "splunk",
                "forwardTo": {
                    "use": "telemetry_hsl"
                }
            },
            "telemetry_publisher": {
                "class": "Log_Publisher",
                "destinations": [
                    {
                        "use": "telemetry_formatted"
                    }
                ]
            },
            "telemetry_traffic_log_profile": {
                "class": "Traffic_Log_Profile",
                "requestSettings": {
                    "requestEnabled": true,
                    "requestProtocol": "mds-tcp",
                    "requestPool": {
                        "use": "telemetry"
                    },
                    "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\""
                }
            },
            "telemetry_security_log_profile": {
                "class": "Security_Log_Profile",
                "application": {
                    "localStorage": false,
                    "remoteStorage": "splunk",
                    "protocol": "tcp",
                    "servers": [
                        {
                            "address": "255.255.255.254",
                            "port": "6514"
                        }
                    ],
                    "storageFilter": {
                        "requestType": "illegal-including-staged-signatures"
                    }
                },
                "network": {
                    "publisher": {
                        "use": "telemetry_publisher"
                    },
                    "logRuleMatchAccepts": false,
                    "logRuleMatchRejects": true,
                    "logRuleMatchDrops": true,
                    "logIpErrors": true,
                    "logTcpErrors": true,
                    "logTcpEvents": true
                }
            }
        }
    }
}

To post an AS3 declaration like the one above, use Visual Studio Code: open the command menu and select F5 Post an AS3 Declaration from the tab where you have pasted the code.

OUTPUT from the declaration above:

iRule used:

Assign the Telemetry Policy to the Virtual Service by selecting the option in the advanced menu.

Once you have the modules installed and have configured the appropriate settings, like above, you will see data coming into Azure Sentinel. Here is an example:

ASM

System Metrics

For System Metrics to work, you will need to have AVR installed; you do not need an AS3 declaration or an iRule. Once you have AVR installed and have pushed the declaration to the BIG-IP, execute the following command on your BIG-IP:

tmsh modify analytics global-settings { offbox-protocol tcp offbox-tcp-addresses add { 127.0.0.1 } offbox-tcp-port 6514 use-offbox enabled }
tmsh save /sys config

Check the logs on your BIG-IP:

less /var/log/restnoded/restnoded.log

You will see something like:

Fri, 18 Sep 2020 06:36:04 GMT - info: [telemetry] Starting system poller Poller::Poller. Interval = 60 sec.
Fri, 18 Sep 2020 06:36:04 GMT - info: [telemetry] 1 consumer plug-in(s) loaded

Next you will need to go into the Azure Portal, where you can find a nice pre-defined Sentinel workbook to view and start working with. You will select the template, fill out the correct workspace from the dropdown, then select the correct hostname from the dropdown, and you will start to see data showing up.

Azure Sentinel displaying the workbook

As you enable more modules, they will show up in Azure Sentinel as well. You can also add, modify, or enhance the workbook to show more data that the BIG-IP sends to Sentinel.

Remo and I hope you found this article helpful and enjoy using BIG-IPs with Sentinel!
Does the Telemetry Streaming generic HTTP push consumer forward events to cloud SIEM platforms?

We have set up a Generic_HTTP consumer in Telemetry Streaming to forward events to a cloud platform, but the events are not forwarded. We are observing these logs in /var/log/restnoded/restnoded.log. Does the Generic_HTTP consumer support forwarding Telemetry Streaming data to cloud SIEM platforms? Is there any other configuration we need to follow? Also, does it support only the HTTP protocol, or both HTTP and HTTPS?
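The Generic_HTTP consumer supports both http and https through its protocol property. A hedged sketch of an HTTPS push; the host, path, and Authorization header are placeholders for whatever your SIEM's ingestion API expects:

{
    "class": "Telemetry",
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "SIEM_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "ingest.example-siem.com",
        "protocol": "https",
        "port": 443,
        "path": "/services/collector/event",
        "method": "POST",
        "headers": [
            {
                "name": "Authorization",
                "value": "Bearer your-token-here"
            }
        ],
        "allowSelfSignedCert": false
    }
}

Generic_HTTP pushes the payload as-is, so if the SIEM requires a specific envelope or content type, the endpoint may reject the events without an obvious error on the TS side; raising logLevel to debug in a Controls block (as in the restnoded thread above) is the quickest way to see the response codes.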