Telemetry Streaming
01 - Visualization of F5 BIG-IP metrics on Grafana using Prometheus and Telemetry Streaming service
This user guide covers the configuration and deployment of the Telemetry Streaming service on an F5 BIG-IP device, scraping those metrics with Prometheus, and finally visualizing them in Grafana. You can select the relevant metrics scraped by Prometheus and visualize them on Grafana, which is demonstrated later in the guide.

Note: More detailed steps along with configuration images can be found at: https://nishalrai.com.np/2022/08/18/visualization-of-f5-big-ip-metrics-on-grafana-using-prometheus-and-telemetry-streaming-service/

This guide is heavily based on the work performed by Michael O'Leary, which you can view here. The purpose of this guide is to provide a somewhat more elaborate walkthrough for both learning and deployment, and to address the issues you may run into during deployment.

Telemetry Streaming (TS) is an iControl LX extension delivered as a TMOS-independent RPM file. It can declaratively aggregate, normalize and forward statistics and events from the BIG-IP to a consumer application: you post a single TS JSON declaration to TS's declarative REST API endpoint. Additional information about Telemetry Streaming can be found here.

Prometheus is an open-source monitoring solution that stores time-series data such as metrics, while Grafana visualizes the data stored in Prometheus and also supports a wide range of other sources.

A short note on the architecture for this deployment scenario: the F5 BIG-IP system is in standalone mode with a management IP of 172.20.100.173, and both the Prometheus and Grafana services run on the same host with an IP address of 192.168.180.191, where Prometheus listens on its default port 9090 and Grafana listens on port 5000.

The deployment guide is broadly divided into the following sections, and you can jump to the required step if you have already completed the previous configuration successfully:

Section I: Download and install Telemetry Streaming
Section II: Telemetry Streaming declaration on the F5 BIG-IP device
Section III: Configuration of Prometheus
Section IV: Configuration of Grafana using Prometheus as a data source

Section I: Download and install Telemetry Streaming

We first need to download and install the Telemetry Streaming package on the F5 BIG-IP device. The Telemetry Streaming package is an RPM file that can be installed either through the GUI or with the curl command on the CLI of the F5 BIG-IP device. In this guide, we download the RPM and then upload it to the BIG-IP using the iControl/iApp LX framework; the alternative CLI method can be found here, and a hedged sketch of it is shown at the end of this section.

First, download the RPM file. You can find the latest Telemetry Streaming RPM on the F5 Telemetry Streaming GitHub page, which can be found here.

After downloading the file, access your F5 BIG-IP GUI with an account that has admin privileges, then follow these steps:

Go to iApps module > Package Management LX > Import > Browse to the downloaded location > Select
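If you prefer the CLI route mentioned above, the sketch below uploads the RPM through the iControl REST file-transfer endpoint and then triggers the install. Treat it as a sketch only: the file name/version and the credentials are assumptions, so check the Telemetry Streaming documentation for the current release before copying it.

    FN=f5-telemetry-1.35.0-1.noarch.rpm        # assumed file name; use the version you downloaded
    CREDS=admin:password                       # BIG-IP credentials with administrator rights
    IP=172.20.100.173                          # BIG-IP management IP used in this guide
    LEN=$(wc -c $FN | awk '{print $1}')

    # Upload the RPM to /var/config/rest/downloads on the BIG-IP
    curl -ku $CREDS https://$IP/mgmt/shared/file-transfer/uploads/$FN \
      -H "Content-Type: application/octet-stream" \
      -H "Content-Range: 0-$((LEN - 1))/$LEN" \
      -H "Content-Length: $LEN" \
      --data-binary @$FN

    # Ask the package-management task endpoint to install the uploaded RPM
    curl -ku $CREDS https://$IP/mgmt/shared/iapp/package-management-tasks \
      -H "Content-Type: application/json" \
      -d "{\"operation\":\"INSTALL\",\"packageFilePath\":\"/var/config/rest/downloads/$FN\"}"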
Section II: Telemetry Streaming declaration on the F5 BIG-IP device

Once the F5 Telemetry Streaming package has been downloaded and installed, we need to send a Telemetry Streaming declaration to configure a Telemetry Streaming pull consumer target.

Before we jump into this configuration, we can create a new user with an administrator role on the F5 BIG-IP device; alternatively, you can simply continue with the default admin user for the remaining configuration. To create a new user:

- Go to System > Users > User List
- Click on the Create button
- Enter the new user's name and password
- Select the administrator role, then add it
- Click on the Finished button

Since we are using Prometheus in this guide, the Telemetry Streaming pull consumer target will be Prometheus, which runs on 192.168.180.191 (service port 9090, as noted in the introduction).

We can use either Postman or the curl command on the CLI of the F5 BIG-IP device to configure a Telemetry Streaming pull consumer target.

Configuration using the Postman application

Follow these steps to configure the Telemetry Streaming consumer target using Postman:

Step I: Open Postman and create a new tab.

Step II: Select the GET method and paste the following URL:
https://<big-ip-management-ip-address>/mgmt/shared/telemetry/declare

Step III: Open the Auth tab and fill in the credentials.
Use the credentials you use to log into the F5 BIG-IP (in this case, the recently created user).

Step IV: Select the Body option.
Change the method to POST, then select the raw sub-option and the JSON data format. Paste the Telemetry Streaming declaration into the body section and click the Send button.

{
    "class": "Telemetry",
    "My_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 0
    },
    "My_System": {
        "class": "Telemetry_System",
        "enable": "true",
        "systemPoller": [
            "My_Poller"
        ]
    },
    "metrics": {
        "class": "Telemetry_Pull_Consumer",
        "type": "Prometheus",
        "systemPoller": "My_Poller"
    }
}

Step V: Verify the response shows a success status.
Send a GET request to https://<big-ip-management-ip-address>/mgmt/shared/telemetry/declare

Step VI: Verify the available metrics.
Create a new tab in Postman:
- In the URL section, use https://<big-ip-management-ip-address>/mgmt/shared/telemetry/pullconsumer/metrics
- In the authorization section, use the same credentials as before.
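If the declaration took effect, the pull-consumer endpoint returns plain-text metrics in the Prometheus exposition format, one metric per line with optional HELP/TYPE comments. The snippet below is illustrative only; the actual metric names, labels and values depend on your BIG-IP version and on what the system poller collects.

    # HELP f5_system_cpu system.cpu
    # TYPE f5_system_cpu gauge
    f5_system_cpu 4
    # HELP f5_system_memory system.memory
    # TYPE f5_system_memory gauge
    f5_system_memory 39
    # HELP f5_clientSideTraffic_bitsIn telemetry.clientSideTraffic.bitsIn
    # TYPE f5_clientSideTraffic_bitsIn gauge
    f5_clientSideTraffic_bitsIn 1.2e+07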
02 - Visualization of F5 BIG-IP metrics on Grafana using Prometheus and Telemetry Streaming service

Configuration using the CLI of the F5 BIG-IP device

The steps to configure the Telemetry Streaming consumer target using the CLI of the F5 BIG-IP device are as follows. Access the F5 BIG-IP CLI terminal with either the default admin credentials or the new user created in the section above, then execute the following commands. In the username and password fields, use either the default admin credentials or the new user with administrator privileges.

curl -u username:password -k https://localhost/mgmt/shared/telemetry/declare

Note: -k / --insecure tells curl to skip verification of the server certificate. Without it, the connection fails because the BIG-IP management interface presents a self-signed certificate by default; alternatively, you can trust the appropriate CA bundle instead of using -k.

Change into the /tmp directory and create a file called ts-config.json (vi is used here as the editor):

cd /tmp
vi ts-config.json

Paste the Telemetry Streaming declaration, then save the file and exit vi.

{
    "class": "Telemetry",
    "My_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 0
    },
    "My_System": {
        "class": "Telemetry_System",
        "enable": "true",
        "systemPoller": [
            "My_Poller"
        ]
    },
    "metrics": {
        "class": "Telemetry_Pull_Consumer",
        "type": "Prometheus",
        "systemPoller": "My_Poller"
    }
}

Then execute the following command from the same /tmp directory, replacing the username and password with your F5 BIG-IP credentials that have administrator privileges:

curl -X POST -u username:password -k https://localhost/mgmt/shared/telemetry/declare -d @ts-config.json -H "content-type:application/json"

To verify the available metrics:

curl -u username:password -k https://localhost/mgmt/shared/telemetry/pullconsumer/metrics

Section III: Configuration of Prometheus

Once the Telemetry Streaming service has been configured successfully and the metrics are available on the pull-consumer path, we need to configure Prometheus to scrape the metrics from that path. The steps to configure Prometheus are as follows.

Note: In this demonstration, both Grafana and Prometheus are installed on the same host with different service ports, as mentioned earlier. CentOS 7 is used as the OS for this host, so the commands for the status checks below may differ on other distributions.

First, check the status of Prometheus:

sudo systemctl status prometheus.service

View the current working directory, change into /etc/prometheus, and list its contents:

pwd
cd /etc/prometheus
ls -al

Edit prometheus.yml and add a scrape job for Telemetry Streaming, using the BIG-IP management IP and credentials:

global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'TelemetryStreaming'
    scrape_timeout: 30s
    scrape_interval: 30s
    scheme: https
    tls_config:
      insecure_skip_verify: true
    metrics_path: '/mgmt/shared/telemetry/pullconsumer/metrics'
    basic_auth:
      username: 'F5-BIG-IP-username'
      password: 'F5-BIG-IP-password'
    static_configs:
      - targets: ['BIGIP-managementIP:443']

Then restart the Prometheus service and check its status:

sudo systemctl restart prometheus.service
sudo systemctl status prometheus.service

Note: If the configuration is correct, the Prometheus service will show as active (running); if not, the service will fail to start.
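Before and after the restart, the checks below can save some guesswork. promtool ships with Prometheus, and /api/v1/targets and /api/v1/query are standard Prometheus HTTP API endpoints; the IP and port are the ones used in this guide, so adjust them for your environment.

    # Validate the edited configuration before restarting
    promtool check config /etc/prometheus/prometheus.yml

    # After the restart, confirm the TelemetryStreaming target is healthy
    curl -s http://192.168.180.191:9090/api/v1/targets | grep -o '"health":"[a-z]*"'

    # "up" should return 1 for the BIG-IP target
    curl -sG http://192.168.180.191:9090/api/v1/query --data-urlencode 'query=up{job="TelemetryStreaming"}'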
To further verify that the instance has been discovered by Prometheus:
- Go to http://<prometheus-ip>:<service-port>
- Click on the Status option and select the Targets option

Section IV: Configuration of Grafana using Prometheus as a data source

In this section, we connect Prometheus as a data source in Grafana (a hedged API-based alternative to the UI steps is sketched at the end of this section):
- Add Prometheus as a data source in Grafana.
- Once the data source has been configured successfully, create a new dashboard and select Prometheus as the data source.
- Select the relevant metrics and change the refresh interval as required.
- Save and apply the panel.
- Save the dashboard and view the metrics on the Grafana dashboard.

A possible issue that can arise during the configuration: if you use the default TS declaration from the official Telemetry Streaming documentation site, you may fail to see the available metrics at the mentioned link, https://<f5-management-ip>/mgmt/shared/telemetry/pullconsumer/metrics. Make sure the declaration you post defines a Telemetry_Pull_Consumer of type Prometheus that references the system poller, as in the declaration used in this guide.
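As an alternative to clicking through the Grafana UI for the data-source step above, the data source can also be created through Grafana's standard /api/datasources endpoint. The sketch below assumes Grafana on 192.168.180.191:5000 with the default admin credentials; both are assumptions, so adjust them to your environment.

    curl -s -X POST http://admin:admin@192.168.180.191:5000/api/datasources \
      -H 'Content-Type: application/json' \
      -d '{
            "name": "Prometheus",
            "type": "prometheus",
            "url": "http://192.168.180.191:9090",
            "access": "proxy",
            "isDefault": true
          }'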
Telemetry Streaming - Re-starting restnoded

Hello Everyone,

I have had a dilemma ever since I set up Telemetry Streaming. I noticed that the restnoded daemon is restarting (some days more frequently than others), but I can't pin down the root cause or how to solve it. I have been keeping a close eye on /var/log/restnoded/restnoded.log but couldn't pinpoint what could cause the restnoded daemon to restart.

Regards,
Sarah.
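A hedged starting point for narrowing this down, assuming standard TMOS log locations: the exact log messages vary by version, so treat these as illustrative checks rather than a definitive diagnosis.

    # Current state of the daemon and recent restart messages
    bigstart status restnoded
    grep -i restnoded /var/log/ltm | tail -20

    # Look for memory pressure or unhandled errors around the restart times
    grep -iE 'memory|uncaught|exception|error' /var/log/restnoded/restnoded.log* | tail -40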
Most efficient methods for Connection logging?

Does anyone have real-world experience with logging connections at a high rate? If so, which methods are you using to collect and transmit the data?

We have a requirement to log all connections going through our F5 devices: the client-side and server-side IPs/ports, as well as HTTP details for HTTP VIPs and DNS details from our GTMs. It's the White House M-21-31 mandate, if anyone is familiar with it.

I've used Request Logging profiles and various iRules with HSL to collect this type of data before, but I've never been too concerned about overhead because I would only apply them as needed, such as when troubleshooting an issue with a VIP. Our busiest appliance pushes around 150k connections/sec and 5k HTTP requests/sec, so I now have to consider the most efficient methods to avoid any kind of impact to traffic flows. I've done some lab testing with several different methods, but I can't do any meaningful load tests in that environment. Below are some of my opinions based on my lab testing so far.

Data Collection

AVR - I like that this single feature can meet all the requirements for collecting TCP, HTTP, and DNS data. It would also be relatively easy to perform audits to ensure the VIPs have the necessary Analytics profiles, since we can manage this from the AVR profiles themselves. My main concern is the overhead that results from the traffic analysis. I assume it has to maintain a large database where it stores all the analyzed data even if we just ship it off to Splunk. Even the data shipped off to Splunk includes several different logs for each connection (each with a different 'Entity').

Request Logging profile - This is fairly flexible and should have low overhead, since the F5 doesn't need to analyze any of the data the way AVR does. It only collects HTTP data, though, so we would still need another solution to collect details for non-HTTP VIPs. It would be a pain to audit since we don't use any kind of deployment templates or automation.

iRule - This provides a lot of flexibility and is capable of collecting all the necessary data, but I don't know how its performance overhead compares to AVR. This would also be a pain to audit due to the lack of deployment templates and automation.

Data Transmission

HSL UDP Syslog - I imagine this is the most efficient method to send events, but it's likely only a matter of time before we are required to use TCP/TLS.

Telemetry Streaming - This is the more modern method, and it offers some interesting features like the System Poller, which could eventually allow us to move away from SNMP polling. We would need a workaround for our GTM-only devices because they cannot run a TS listener.
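For reference, a minimal sketch of the Telemetry Streaming side of that last option: an event listener plus a push consumer. The Splunk host, port and token are placeholders, and the consumer fields should be checked against the TS consumer reference for your version; the LTM logging configuration that actually sends events to the listener is not shown here.

    {
        "class": "Telemetry",
        "Event_Listener": {
            "class": "Telemetry_Listener",
            "port": 6514
        },
        "Splunk_Consumer": {
            "class": "Telemetry_Consumer",
            "type": "Splunk",
            "host": "splunk-hec.example.com",
            "protocol": "https",
            "port": 8088,
            "passphrase": {
                "cipherText": "splunk-hec-token-placeholder"
            }
        }
    }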
Telemetry Streaming Question

Hi, I'm after a little assistance. I have installed the Telemetry Streaming extension to use OpenTelemetry as an endpoint. I've been able to achieve this and my receiver is receiving the metrics. The declaration I am POST-ing is as follows:

{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60
        }
    },
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "OpenTelemetry_Exporter",
        "host": "10.1.2.3",
        "port": 4317,
        "metricsPath": "/v1/metrics",
        "convertBooleansToMetrics": true,
        "enable": true,
        "trace": false,
        "allowSelfSignedCert": true,
        "exporter": "protobuf",
        "protocol": "https"
    }
}

This is successful, as I say, but the received metrics don't include a hostname (which is important, as there are some profile duplications across the 40+ devices looked after). Is there a way to insert the hostname into the messages exported from the systems?

Thanks in advance.
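One possible approach, sketched under the assumption that the system poller's data-modification actions (setTag) work as described in the Telemetry Streaming data modification documentation: stamp a static tag onto the polled data, with one value per device. Whether that tag then surfaces as a label on the exported OpenTelemetry metrics should be verified on your TS version, and the hostname value below is a placeholder.

    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60,
            "actions": [
                {
                    "enable": true,
                    "setTag": {
                        "hostname": "bigip-01.example.com"
                    }
                }
            ]
        }
    }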
Telemetry Streaming: getting HTTP statistics via SNMP

Hi F5 community,

I am looking to get HTTP statistics metrics (total count, and broken down by response code) from Telemetry Streaming via SNMP (which seems to be the most viable option).

F5-BIGIP-LOCAL-MIB::ltmHttpProfileStat OID: .1.3.6.1.4.1.3375.2.2.6.7.6

However, the stats don't seem to come out correct at all: I do see deltas happening, but they don't match the traffic rate I expect to see at all. Furthermore, I have done some tests where I start a load-testing tool (vegeta) to fire concurrent HTTP requests; I do see the logs from the virtual server, but no matching increment in the above SNMP OID entries on any of the profiles configured.

What am I doing wrong? Does something need to be enabled on the HTTP profile in use to collect those stats?

Best,
Owayss
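For reference, the kind of poll described above might look like the sketch below. The community string, SNMP version and management IP are placeholders, and the tmsh command is simply a way to cross-check the SNMP counters against the on-box statistics for the same HTTP profile.

    # Walk the HTTP profile statistics subtree mentioned above
    snmpwalk -v2c -c public <bigip-management-ip> .1.3.6.1.4.1.3375.2.2.6.7.6

    # Cross-check against the statistics shown on the BIG-IP itself
    tmsh show ltm profile http <profile-name>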