Grafana
Introducing the F5 Application Study Tool (AST)
In the ever-evolving world of application delivery and security, gaining actionable insights into your infrastructure and applications has become more critical than ever. The Application Study Tool (AST) is designed to help technical teams and administrators leverage the power of open-source telemetry and visualization tools to enhance their monitoring, diagnostics, and analysis workflows.

F5 Distributed Cloud Telemetry (Logs) - Loki
Scope

This article walks through the process of integrating log data from F5 Distributed Cloud’s (F5 XC) Global Log Receiver (GLR) with Grafana Loki. By the end, you'll have a working log pipeline where logs sent from F5 XC can be visualized and explored through Grafana.

Introduction

Observability is a critical part of managing modern applications and infrastructure. F5 XC offers the GLR as a centralized system to stream logs from across distributed services. Grafana Loki, part of the Grafana observability stack, is a powerful and efficient tool for aggregating and querying logs. To improve observability, you can forward logs from F5 XC into Loki for centralized log analysis and visualization. This article shows you how to implement a lightweight Python webhook that bridges the F5 XC GLR with Grafana Loki. The webhook acts as a log ingestion and transformation service, enabling logs to flow seamlessly into Loki for real-time exploration via Grafana.

Prerequisites

Access to an F5 Distributed Cloud (XC) SaaS tenant with GLR set up
A VM with Python3 installed
A running Loki instance (if not, see the "Configuring Loki and Grafana" section below)
A running Grafana instance (if not, see the "Configuring Loki and Grafana" section below)

Note – In this demo, a single AWS VM hosts everything: the Python3 webhook (port 5000), plus Loki (port 3100) and Grafana (port 3000) running as Docker containers.

Architecture Overview

F5 XC GLR → Python Webhook → Loki → Grafana

F5 XC GLR Configuration

Follow the steps in the linked documentation to set up and configure the Global Log Receiver (GLR): F5 XC GLR

Building the Python Webhook

To send log data from the F5 Distributed Cloud Global Log Receiver (GLR) to Grafana Loki, we use a lightweight Python webhook implemented with the Flask framework. This webhook acts as a simple transformation and relay service. It receives raw log entries from F5 XC, repackages them in the structure Loki expects, and pushes them to a Loki instance running on the same virtual machine.

Key Functions of the Webhook

Listens for Log Data: The webhook exposes an endpoint (/glr-webhook) on port 5000 that accepts HTTP POST requests from the GLR. Each request can contain one or more newline-separated log entries.
Parses and Structures the Logs: Incoming logs are expected to be JSON-formatted. The webhook parses each line individually and assigns a consistent timestamp (in nanoseconds, as required by Loki).
Formats the Payload for Loki: The logs are then wrapped in a structure that conforms to Loki’s push API format. This includes organizing them into a stream, which can be labeled (e.g., with a job name like f5-glr) to make logs easier to query and group in Grafana.
Pushes Logs to Loki: Once formatted, the webhook sends the payload to the Loki HTTP API using a standard POST request. If the request is successful, Loki returns a 204 No Content status.
Handles Errors Gracefully: The webhook includes basic error handling for malformed JSON, network issues, and other unexpected failures, returning appropriate HTTP responses.

Running the Webhook

python3 webhook.py > python.log 2>&1 &

This command runs webhook.py with Python3 in the background and redirects all standard output and error messages to python.log for easier debugging.
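The article does not include the webhook source itself, so the following is a minimal sketch of the relay described above. It is illustrative only: the /glr-webhook route, port 5000, the job=f5-glr stream label and the Loki address match this demo's setup, but the parsing and error handling are simplified compared to a production webhook.

# webhook.py - minimal sketch of a GLR-to-Loki relay (illustrative, not the original code)
import json
import time

import requests
from flask import Flask, request

app = Flask(__name__)
LOKI_URL = "http://localhost:3100/loki/api/v1/push"  # assumption: Loki runs on the same VM

@app.route("/glr-webhook", methods=["POST"])
def glr_webhook():
    ts = str(time.time_ns())  # Loki expects nanosecond timestamps as strings
    values = []
    for line in request.get_data(as_text=True).splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            # keep each GLR entry as a JSON string inside the Loki stream
            values.append([ts, json.dumps(json.loads(line))])
        except json.JSONDecodeError:
            return "malformed JSON", 400
    payload = {"streams": [{"stream": {"job": "f5-glr"}, "values": values}]}
    resp = requests.post(LOKI_URL, json=payload, timeout=10)
    return ("", 204) if resp.status_code == 204 else (resp.text, 502)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)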
Configuring Loki and Grafana

docker run -d --name=loki -p 3100:3100 grafana/loki:latest
docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

Loki and Grafana run as Docker containers in the same VM; the private IP and port of the Loki container are used as the data source URL in the Grafana configuration. Once Loki is configured under Grafana Data sources, follow these steps:

Navigate to the Explore menu
Select “Loki” in the data source picker
Choose the appropriate label and value, in this case label=job and value=f5-glr
Select the desired time range and click “Run query”
Observe that logs are displayed according to the “Log Type” selected in the F5 XC GLR configuration

Note: Some requests need to be generated for logs to be visible in Grafana, depending on the Log Type selected.

Conclusion

F5 Distributed Cloud's (F5 XC) Global Log Receiver (GLR) unlocks real-time observability by integrating with open-source tools like Grafana Loki. This reflects F5 XC's commitment to open source, enabling seamless log management with minimal overhead. A customizable Python webhook ensures adaptability to evolving needs. Centralized logs in Loki, visualized in Grafana, empower teams with actionable insights, accelerating troubleshooting and optimization. F5 XC GLR's flexibility future-proofs observability strategies. This integration showcases F5’s dedication to interoperability and empowering customers with community-driven solutions.
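If logs do not show up in Explore, it can help to query Loki's HTTP API directly from the VM to confirm whether the webhook is actually pushing data and to rule out the Grafana data source configuration. This is a sketch; it assumes Loki is reachable on localhost:3100 and that the webhook labels its stream job=f5-glr as described above (the same {job="f5-glr"} selector is what you type into Grafana Explore).

# Ask Loki for the last hour of entries carrying the f5-glr job label
curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={job="f5-glr"}' \
  --data-urlencode "start=$(date -d '-1 hour' +%s)000000000" \
  --data-urlencode "end=$(date +%s)000000000"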
F5 Distributed Cloud Telemetry (Metrics) - Prometheus

Scope

This article walks through the process of collecting metrics from F5 Distributed Cloud’s (XC) Service Graph API and exposing them in a format that Prometheus can scrape. Prometheus then scrapes these metrics, which can be visualized in Grafana.

Introduction

Metrics are essential for gaining real-time insight into service performance and behaviour. F5 Distributed Cloud (XC) provides a Service Graph API that captures service-to-service communication data across your infrastructure. Prometheus, a leading open-source monitoring system, can scrape and store time-series metrics — and when paired with Grafana, offers powerful visualization capabilities. This article shows how to integrate a custom Python-based exporter that transforms Service Graph API data into Prometheus-compatible metrics. These metrics are then scraped by Prometheus and visualized in Grafana, all running in Docker for easy deployment.

Prerequisites

Access to an F5 Distributed Cloud (XC) SaaS tenant
A VM with Python3 installed
A running Prometheus instance (if not, see the "Configuring Prometheus" section below)
A running Grafana instance (if not, see the "Configuring Grafana" section below)

Note – In this demo, a single AWS VM hosts everything: the Python exporter (port 8888), plus Prometheus (host port 9090) and Grafana (port 3000) running as Docker containers.

Architecture Overview

F5 XC API → Python Exporter → Prometheus → Grafana

Building the Python Exporter

To collect metrics from the F5 Distributed Cloud (XC) Service Graph API and expose them in a format Prometheus understands, we created a lightweight Python exporter using Flask. This exporter acts as a transformation layer — it fetches service graph data, parses it, and exposes it through a /metrics endpoint that Prometheus can scrape.

Code Link -> exporter.py

Key Functions of the Exporter

Uses the XC-Provided .p12 File for Authentication: To authenticate API requests to F5 Distributed Cloud (XC), the exporter uses a client certificate packaged in a .p12 file. This file must be manually downloaded from the F5 XC console (steps) and stored on the VM where the Python script runs. The script expects the full path to the .p12 file and its associated password to be specified in the configuration section.
Fetches Service Graph Metrics: The script pulls service-level metrics such as request rates, error rates, throughput, and latency from the XC API. It supports both aggregated and individual load balancer views.
Processes and Structures the Data: The exporter parses the raw API response to extract the latest metric values and converts them into the Prometheus exposition format. Each metric is labelled (e.g., by vhost and direction) for flexibility in Grafana queries.
Exposes a /metrics Endpoint: A Flask web server runs on port 8888, serving the /metrics endpoint. Prometheus periodically scrapes this endpoint to ingest the latest metrics.
Handles Multiple Metric Types: Traffic metrics and health scores are handled and formatted individually. Each metric includes a descriptive name, type declaration, and optional labels for fine-grained monitoring and visualization.

Running the Exporter

python3 exporter.py > python.log 2>&1 &

This command runs exporter.py with Python3 in the background and redirects all standard output and error messages to python.log for easier debugging.
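The full exporter is linked above (exporter.py); the snippet below is only a simplified sketch of the /metrics pattern it follows: a Flask endpoint that renders values pulled from the XC API in the Prometheus exposition format. The fetch_service_graph() helper and its sample values are placeholders rather than the actual code, and the .p12 certificate authentication is omitted for brevity; the metric name and the vhost/direction labels follow the ones referenced in this article.

# exporter_sketch.py - illustrative /metrics endpoint (not the linked exporter.py)
from flask import Flask, Response

app = Flask(__name__)

def fetch_service_graph():
    # Placeholder: the real exporter calls the XC Service Graph API here,
    # authenticating with the .p12 client certificate, and returns per-vhost values.
    return [{"vhost": "demo-lb", "direction": "downstream", "request_rate": 12.5}]

@app.route("/metrics")
def metrics():
    lines = [
        "# HELP f5xc_downstream_http_request_rate HTTP request rate per vhost",
        "# TYPE f5xc_downstream_http_request_rate gauge",
    ]
    for item in fetch_service_graph():
        lines.append(
            'f5xc_downstream_http_request_rate{vhost="%s",direction="%s"} %s'
            % (item["vhost"], item["direction"], item["request_rate"])
        )
    return Response("\n".join(lines) + "\n", mimetype="text/plain")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8888)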
Configuring Prometheus

docker run -d --name=prometheus --network=host -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:latest

Prometheus runs as a Docker container in host network mode (port 9090) with the configuration below (prometheus.yml), scraping the /metrics endpoint exposed by the Python Flask exporter on port 8888 every 60 seconds. A minimal example of this file is sketched at the end of this article.

Configuring Grafana

docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

The private IP and port (9090) of the Prometheus Docker instance are used as the data source in the Grafana configuration. Once Prometheus is configured under Grafana Data sources, follow these steps:

Navigate to the Explore menu
Select “Prometheus” in the data source picker
Choose the appropriate metric, in this case “f5xc_downstream_http_request_rate”
Select the desired time range and click “Run query”
Observe that the metrics graph is displayed

Note: Some requests need to be generated for metrics to be visible in Grafana. A broader, high-level view of all metrics can be accessed by navigating to “Drilldown” and selecting “Metrics”, providing a comprehensive snapshot across services.

Conclusion

F5 Distributed Cloud’s (F5 XC) Service Graph API provides deep visibility into service-to-service communication, and when paired with Prometheus and Grafana, it enables powerful, real-time monitoring without vendor lock-in. This integration highlights F5 XC’s alignment with open-source ecosystems, allowing users to build flexible and scalable observability pipelines. The custom Python exporter bridges the gap between the XC API and Prometheus, offering a lightweight and adaptable solution for transforming and exposing metrics. With Grafana dashboards on top, teams can gain instant insight into service health and performance. This open approach empowers operations teams to respond faster, optimize more effectively, and evolve their observability practices with confidence and control.
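As a reference for the prometheus.yml mentioned above, a minimal sketch matching this setup might look like the following. The job name and the exporter address are placeholders (the exporter runs on port 8888 of the same VM in this demo); the 60-second interval comes from the article.

# prometheus.yml - minimal sketch for scraping the XC exporter
global:
  scrape_interval: 60s
scrape_configs:
  - job_name: 'f5xc-service-graph'         # placeholder job name
    metrics_path: /metrics
    static_configs:
      - targets: ['<exporter-vm-ip>:8888']  # assumption: exporter reachable from Prometheus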
01 - Visualization of F5 BIG-IP metrics on Grafana using Prometheus and Telemetry Streaming service

This user guide covers the configuration and deployment of the Telemetry Streaming service on an F5 BIG-IP device; the exposed metrics are scraped by Prometheus and finally visualized in Grafana. You can select the relevant metrics scraped by Prometheus and visualize them in Grafana, which will be demonstrated later in the guide.

Note: More detailed steps along with configuration images can be found at: https://nishalrai.com.np/2022/08/18/visualization-of-f5-big-ip-metrics-on-grafana-using-prometheus-and-telemetry-streaming-service/

This guide is heavily based on the work performed by Michael O'Leary, which can be viewed here. The purpose of this guide is to document a more elaborate walkthrough for both learning and deployment, and also to address the possible issues that could be faced during the process of deployment.

Telemetry Streaming (TS) is an iControl LX extension delivered as a TMOS-independent RPM file, with the ability to declaratively aggregate, normalize and forward statistics and events from the BIG-IP to a consumer application by posting a single TS JSON declaration to TS’s declarative REST API endpoint. Additional information about Telemetry Streaming can be found here.

Prometheus is an open-source monitoring solution that stores time-series data such as metrics, whereas Grafana visualizes the data stored in Prometheus and also supports a wide range of other sources.

A brief note on the architecture of this deployment scenario: the F5 BIG-IP system is in standalone mode with a management IP of 172.20.100.173, and both Prometheus and Grafana run on the same host with an IP address of 192.168.180.191, where Prometheus uses its default service port 9090 and Grafana uses service port 5000.

The whole deployment guide is broadly divided into the following sections, and you can jump to the required step if you have completed the previous configuration successfully:

Section I: Download and install Telemetry Streaming
Section II: Telemetry Streaming declaration on the F5 BIG-IP device
Section III: Configuration of Prometheus
Section IV: Configuration on Grafana using Prometheus as a data source

Section I: Download and install Telemetry Streaming

We first need to download and install the Telemetry Streaming package on the F5 BIG-IP device. The Telemetry Streaming package is an RPM file that can be downloaded and installed through the GUI or with a curl command on the CLI of the F5 BIG-IP device. In this guide, we will download the RPM and then upload it to the BIG-IP using the iControl/iApp LX framework. An alternative method can be found here.

First, download the RPM file; the latest Telemetry Streaming RPM can be found on the F5 Telemetry site on GitHub. The GitHub page to download Telemetry Streaming is here. After downloading the file, access your F5 BIG-IP GUI with an account that has admin privileges and follow these steps:

Go to iApps module > Package Management LX > Import > Browse to the downloaded location > Select

Section II: Telemetry Streaming declaration on the F5 BIG-IP device

Once the download and installation of the F5 Telemetry Streaming package is complete, we need to send a Telemetry Streaming declaration to configure a Telemetry Streaming pull consumer target.
Before we jump into this configuration, we will create a new user with an administrator role on the F5 BIG-IP device (alternatively, you can continue using the default admin user for the rest of the configuration). We can create a new user with the following steps:

Go to System > Users > User List
Click on the Create button
Enter the new user’s name and password
Select the administrator role, then add it
Click on the Finished button

Since we are using Prometheus in this guide, the Telemetry Streaming consumer target will be Prometheus, which is hosted on 192.168.180.191:5000.

We can use either Postman or a curl command on the CLI of the F5 BIG-IP device to configure the Telemetry Streaming pull consumer target.

Configuration using the Postman application

Follow these steps to configure the Telemetry Streaming consumer target using the Postman application.

Step I: Open Postman and create a new tab.
Step II: Select the GET method and paste the following link: https://<big-ip-management-ip-address>/mgmt/shared/telemetry/declare
Step III: Open the Auth field and fill in the credentials. Use the credentials used to log into the F5 BIG-IP (in this case, the newly created user).
Step IV: Select the Body option, change the method to POST, then select the raw sub-option and the JSON data format. Paste the Telemetry Streaming declaration into the body section and click the Send button.

{
  "class": "Telemetry",
  "My_Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 0
  },
  "My_System": {
    "class": "Telemetry_System",
    "enable": "true",
    "systemPoller": [
      "My_Poller"
    ]
  },
  "metrics": {
    "class": "Telemetry_Pull_Consumer",
    "type": "Prometheus",
    "systemPoller": "My_Poller"
  }
}

Step V: Verify that the response shows a success status by sending a GET request to https://<big-ip-management-ip-address>/mgmt/shared/telemetry/declare
Step VI: Verify the available metrics. Create a new tab in Postman:
- In the URL section: https://<big-ip-management-ip-address>/mgmt/shared/telemetry/pullconsumer/metrics
- In the authorization section, use the same credentials as before

02 - Visualization of F5 BIG-IP metrics on Grafana using Prometheus and Telemetry Streaming service
Configuration using the CLI of the F5 BIG-IP device

The steps to configure the Telemetry Streaming consumer target using the CLI of the F5 BIG-IP device are discussed below. Once you have accessed your F5 BIG-IP device's CLI terminal, use either your default admin credentials or the new user you created in the section above, then execute the following commands. In the username and password fields, enter credentials that have administrator privileges.

curl -u username:password -k https://localhost/mgmt/shared/telemetry/declare

Note: By default, curl verifies server certificates against the installed CA certificate bundle, so any connection it considers "insecure" (such as one to a self-signed certificate) will fail unless -k, --insecure is used.

Change into the /tmp directory and create a file called ts-config.json; here the vi editor is used.

cd /tmp
vi ts-config.json

Paste the Telemetry Streaming declaration, then save the file and exit the vi editor.

{
  "class": "Telemetry",
  "My_Poller": {
    "class": "Telemetry_System_Poller",
    "interval": 0
  },
  "My_System": {
    "class": "Telemetry_System",
    "enable": "true",
    "systemPoller": [
      "My_Poller"
    ]
  },
  "metrics": {
    "class": "Telemetry_Pull_Consumer",
    "type": "Prometheus",
    "systemPoller": "My_Poller"
  }
}

Then, from the same /tmp directory, execute the following command, replacing the username and password with F5 BIG-IP credentials that have administrator privileges.

curl -X POST -u username:password -k https://localhost/mgmt/shared/telemetry/declare -d @ts-config.json -H "content-type:application/json"

To verify the available metrics:

curl -u username:password -k https://localhost/mgmt/shared/telemetry/pullconsumer/metrics

Section III: Configuration of Prometheus

Once the Telemetry Streaming service has been successfully configured and the metrics are available at the path above, we need to configure Prometheus to scrape the metrics data from that path. The steps to configure Prometheus are as follows.

Note: In this demonstration, both Grafana and Prometheus are installed on the same host on different service ports, as mentioned earlier. CentOS 7 is used as the OS for this host machine; other distributions may use a different syntax for the following status checks.

First, check the status of Prometheus:

sudo systemctl status prometheus.service

View the current working directory and change into /etc/prometheus:

pwd
cd /etc/prometheus
ls -al

Edit the prometheus.yml file in this directory and add a scrape job for Telemetry Streaming:

global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'TelemetryStreaming'
    scrape_timeout: 30s
    scrape_interval: 30s
    scheme: https
    tls_config:
      insecure_skip_verify: true
    metrics_path: '/mgmt/shared/telemetry/pullconsumer/metrics'
    basic_auth:
      username: 'F5-BIG-IP-username'
      password: 'F5-BIG-IP-password'
    static_configs:
      - targets: ['BIGIP-managementIP:443']

Then restart the Prometheus service and check its status:

sudo systemctl restart prometheus.service
sudo systemctl status prometheus.service

Note: If the configuration is correct, the Prometheus service will be enabled; otherwise, the status of the Prometheus service will show as disabled.
To further verify that the instance has been discovered by Prometheus:

- Go to http://<prometheus-ip>:<service-port>
- Click on the Status option and select the Targets option

Section IV: Configuration on Grafana using Prometheus as a data source

In this section, we need to connect Prometheus as a data source in Grafana. Once the data source has been successfully configured in Grafana, create a new dashboard, select Prometheus as the data source, then select the relevant metrics and change the refresh interval as required. Save and apply the panel. Then save the dashboard and view the metrics on the Grafana dashboard.

A possible issue that can arise during the configuration: if you use the default TS declaration from the official Telemetry Streaming documentation website, you may fail to view the available metrics at the following link: https://<f5-management-ip>/mgmt/shared/telemetry/pullconsumer/metrics
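As an alternative to adding the Prometheus data source through the Grafana UI described in Section IV, Grafana (5.0 and later) can also pick up data sources from a provisioning file. The snippet below follows Grafana's documented data source provisioning format; the file path assumes a default package installation, and the URL uses this guide's Prometheus address, so adjust both to your environment.

# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.180.191:9090   # Prometheus host and port used in this guide
    isDefault: true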
Automation Toolchain - Telemetry Streaming - Grafana StatsD Graphite

Introduction

This article explains how to use the Telemetry Streaming (TS) component of the Automation Toolchain (ATC) for integration with Grafana through StatsD and Graphite. For more information on the push consumers supported by the F5 Networks ATC, in particular the TS component, please refer to the official documentation on CloudDocs here.

BIG-IP Configuration

In order to configure the TS component of the ATC correctly for integration with Grafana, we need to post the following JSON blob to your BIG-IP TS API endpoint at https://<BIG-IP-ADDRESS>:8443/mgmt/shared/telemetry/declare

{
  "class": "Telemetry",
  "MyTelemetrySystem": {
    "class": "Telemetry_System",
    "allowSelfSignedCert": true,
    "systemPoller": {
      "interval": 60
    }
  },
  "GraphiteConsumer": {
    "class": "Telemetry_Consumer",
    "type": "Graphite",
    "host": "10.0.0.55",
    "protocol": "http",
    "port": 80
  },
  "StatsdConsumer": {
    "class": "Telemetry_Consumer",
    "type": "Statsd",
    "host": "10.0.0.55",
    "protocol": "udp",
    "port": 8125
  },
  "MyTelemetryListener": {
    "class": "Telemetry_Listener",
    "port": 6514
  }
}

The above four JSON stanzas are the following:

A Telemetry System class, which sets up the system poller. More information here.
Two Push Consumer classes, which push the metrics or data externally, in this case to Graphite and StatsD. More information here.
A Telemetry Listener class, which sets up an event listener (both TCP and UDP protocols) that can accept events in a specific format and process them. More information here.

Note that in this example, Graphite and StatsD are running on the same host, because we used a Docker container to host them as follows:

# docker run -d \
  --name graphite \
  --restart=always \
  -p 80:80 \
  -p 2003-2004:2003-2004 \
  -p 2023-2024:2023-2024 \
  -p 8125:8125/udp \
  -p 8126:8126 \
  graphiteapp/graphite-statsd

Telemetry data

Let's have a look at the TS telemetry data being produced and sent over. StatsD is used for metrics, while Graphite is used for events.

StatsD metrics

StatsD supports three main metric types: gauges, timers and counters. The TS StatsD integration uses gauges. We can use netcat to look at the format of these gauge-based metrics:

# echo "gauges" | nc 10.0.0.55 8126
{
  'statsd.timestamp_lag': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsIn': 297895992,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsOut': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsIn': 248764520,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsOut': 134973160,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsIn': 62854192,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsOut': 229153456,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsIn': 62432120,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsOut': 228977008,
  ...

We can also see the same gauge metrics inside the Graphite admin UI. The structure and path of this telemetry data are important when you create your own dashboards.

Graphite events

As mentioned earlier, the TS Graphite integration uses events to send the data to Graphite.
You can observe those events by going to the /events endpoint on your Graphite admin UI, where the details of each event can be inspected.

Grafana

In order to use and display the data now collected in Graphite, one needs to set up Graphite as a data source and import a Grafana dashboard that uses this data.

Graphite data source

Let's add Graphite as a data source.

Grafana BIG-IP TS dashboard

Let's import an example dashboard that uses the available data. This sample dashboard is also available in the Grafana dashboard collection online here.

This sample dashboard makes use of dashboard variables, so users can filter on parameters like Device (which BIG-IP), Tenant (which BIG-IP partition), Application, Virtual Server and Pool. For the sake of demonstration, there is also a filter for Profile. For more information and screenshots of the dashboard itself, refer to the Grafana website where the dashboard is downloadable.

The dashboard contains separate rows for:

application health status: 4xx and 5xx responses (you can add slow responses as well, as an exercise)
device system statistics: CPU, memory, TMM traffic in/out, interface traffic in/out
virtual server traffic in/out and server connections
pool traffic in/out and server connections
members traffic in/out and server connections
profile details statistics

The variable queries used for this dashboard are based on the structure of the metric paths you will find in the Graphite admin UI (an illustrative sketch follows below).

Conclusion

In this article we have demonstrated how the F5 Automation Toolchain, and in particular its Telemetry Streaming component, is a perfect match for integration into popular DevOps telemetry solutions. For a fully automated scenario demonstrating the use of Declarative Onboarding (DO), Application Services 3 (AS3) and Telemetry Streaming (TS) with automated Grafana integration, you can refer to the following GitHub repo.
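The exact variable definitions live in the downloadable dashboard, so the lines below are only an illustrative reconstruction based on the f5telemetry metric paths shown in the StatsD output earlier. The Virtual Server and Pool paths, and the segments that would carry Tenant and Application, depend on how your AS3 declarations name those objects, so verify them against your own Graphite metric tree before using them as Grafana template variable queries.

Device:         f5telemetry.*
Virtual Server: f5telemetry.$Device.virtualServers.*
Pool:           f5telemetry.$Device.pools.*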
Behavioral DDOS Grafana Dashboard using BIG-IP APIs

From the desk of Pavel Borovsky (March 15, 2017)

The F5 L7 Behavioral DDoS feature provides APIs to monitor and debug the detection and mitigation process in real time. To provide an example of how to use the API, we developed a Grafana plugin that utilizes the API and shows real-time data on DoS attacks.

How to install the dashboard on Grafana:

Install Grafana 4.1
Install the following panels:
  grafana-piechart-panel https://grafana.net/plugins/grafana-piechart-panel
  grafana-worldmap-panel https://grafana.net/plugins/grafana-worldmap-panel
  mtanda-histogram-panel https://grafana.net/plugins/mtanda-histogram-panel
Install the admdb plugin:
  copy data\plugins\grafana-admdb-datasource
  copy public\dashboards*.json
  enable dashboards
Configure the dashboard (see the example defaults.ini attached): edit conf/defaults.ini and modify these lines:
  [dashboards.json]
  enabled = true
  path = public/dashboards
Enable admdb on the BIG-IP:
  tmsh modify sys db adm.cloud.host value local
Add a data source to Grafana using the web interface as in the following screenshot.

Download the Grafana bados dashboard here.
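If you prefer the command line over the plugin pages linked above, the panel plugins can typically be installed with grafana-cli; the plugin IDs below are taken from the URLs in the list, so verify them against your Grafana version before relying on them.

grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install grafana-worldmap-panel
grafana-cli plugins install mtanda-histogram-panel
# restart Grafana so it picks up the newly installed panels
sudo systemctl restart grafana-server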
DataBase not found in DataSource error in Grafana Dashboard for F5 DDoS

Hello folks, I followed the procedure to install the F5 app in Grafana, which is located at https://github.com/f5devcentral/f5-bados-app. However, I am getting an error that says "DataBase not found in DataSource" every time I try to create a data source in the Grafana admin website. I am putting the word "default" in the ADMdb details Database field. I don't know where to manually create or copy the database. Thanks, JM