Grafana
Introducing the F5 Application Study Tool (AST)
In the ever-evolving world of application delivery and security, gaining actionable insights into your infrastructure and applications has become more critical than ever. The Application Study Tool (AST) is designed to help technical teams and administrators leverage the power of open-source telemetry and visualization tools to enhance their monitoring, diagnostics, and analysis workflows.
Behavioral DDOS Grafana Dashboard using BIG-IP APIs

From the desk of Pavel Borovsky (March 15, 2017)

The F5 L7 Behavioral DDoS feature provides APIs to monitor and debug the detection and mitigation process in real time. To show how these APIs can be used, we developed a Grafana plugin that consumes them and displays real-time data on DoS attacks.

How to install the dashboard on Grafana:

1. Install Grafana 4.1.
2. Install the following panels:
   - grafana-piechart-panel: https://grafana.net/plugins/grafana-piechart-panel
   - grafana-worldmap-panel: https://grafana.net/plugins/grafana-worldmap-panel
   - mtanda-histogram-panel: https://grafana.net/plugins/mtanda-histogram-panel
3. Install the admdb plugin:
   - copy data\plugins\grafana-admdb-datasource
   - copy public\dashboards*.json
   - enable dashboards
4. Configure the dashboard (see the example defaults.ini attached). Edit conf/defaults.ini and modify these lines:

   [dashboards.json]
   enabled = true
   path = public/dashboards

5. Enable admdb on the BIG-IP:

   tmsh modify sys db adm.cloud.host value local

6. Add the data source to Grafana using the web interface as in the following screenshot.

Download the Grafana BADoS dashboard here.
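As a quick check before opening the dashboard, you can confirm that the admdb setting from step 5 took effect on the BIG-IP by listing the database variable that was just modified:

tmsh list sys db adm.cloud.host

If the output shows the value "local", the BIG-IP is configured as described above and the Grafana admdb data source should be able to pull the Behavioral DoS data.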
Automation Toolchain - Telemetry Streaming - Grafana StatsD Graphite

Introduction

This article explains how to use the Telemetry Streaming (TS) component of the Automation Toolchain (ATC) for integration with Grafana through StatsD and Graphite. For more information on the Push Consumers supported by the F5 Networks ATC, and the TS component in particular, please refer to the official documentation on CloudDocs here.

BIG-IP Configuration

To configure the TS component of the ATC for integration with Grafana, post the following JSON declaration to your BIG-IP TS API endpoint at https://<BIG-IP-ADDRESS>:8443/mgmt/shared/telemetry/declare

{
    "class": "Telemetry",
    "MyTelemetrySystem": {
        "class": "Telemetry_System",
        "allowSelfSignedCert": true,
        "systemPoller": {
            "interval": 60
        }
    },
    "GraphiteConsumer": {
        "class": "Telemetry_Consumer",
        "type": "Graphite",
        "host": "10.0.0.55",
        "protocol": "http",
        "port": 80
    },
    "StatsdConsumer": {
        "class": "Telemetry_Consumer",
        "type": "Statsd",
        "host": "10.0.0.55",
        "protocol": "udp",
        "port": 8125
    },
    "MyTelemetryListener": {
        "class": "Telemetry_Listener",
        "port": 6514
    }
}

The declaration above contains four stanzas:

- A Telemetry System class, which sets up the system poller. More information here.
- Two Push Consumer classes, which push the metrics or data externally, in this case to Graphite and StatsD. More information here.
- A Telemetry Listener class, which sets up an Event Listener (both TCP and UDP protocols) that can accept events in a specific format and process them. More information here.

Note that in this example, Graphite and StatsD run on the same host, because we used a single Docker container to host both:

# docker run -d \
    --name graphite \
    --restart=always \
    -p 80:80 \
    -p 2003-2004:2003-2004 \
    -p 2023-2024:2023-2024 \
    -p 8125:8125/udp \
    -p 8126:8126 \
    graphiteapp/graphite-statsd

Telemetry data

Let's have a look at the TS telemetry data being produced and sent over: StatsD is used for metrics, Graphite is used for events.

StatsD metrics

StatsD supports three main metric types: gauges, timers and counters. The TS StatsD integration uses gauges. We can use netcat against the StatsD admin interface to inspect the format of these gauge-based metrics:

# echo "gauges" | nc 10.0.0.55 8126
{ 'statsd.timestamp_lag': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsIn': 297895992,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsOut': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsIn': 248764520,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsOut': 134973160,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsIn': 62854192,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsOut': 229153456,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsIn': 62432120,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsOut': 228977008,
  ...

We can also see the same gauge metrics inside the Graphite admin UI. The structure and path of this telemetry data is important when you create your own dashboards.
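If you prefer to script this check rather than use netcat, the same StatsD admin interface can be queried from Python. The sketch below is a minimal example, not part of the TS integration itself; it reuses the demo values from this article (StatsD admin interface on 10.0.0.55, port 8126) and simply prints the raw gauge dump:

import socket

STATSD_ADMIN = ("10.0.0.55", 8126)  # demo values used in this article

def dump_gauges():
    # Open a TCP connection to the StatsD admin interface and request the gauges
    with socket.create_connection(STATSD_ADMIN, timeout=5) as conn:
        conn.sendall(b"gauges\n")
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
            # The admin interface typically terminates each response with "END"
            if b"END" in data:
                break
        print(b"".join(chunks).decode())

if __name__ == "__main__":
    dump_gauges()

The output matches what the netcat command above returns, so the same metric paths can be used when building Grafana dashboard queries.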
Graphite events

As mentioned earlier, the TS Graphite integration uses events to send the data to Graphite. You can observe those events by going to the /events endpoint on your Graphite admin UI. The details of such an event are as follows.

Grafana

To use and display the data now collected in Graphite, you need to set up Graphite as a data source and import a Grafana dashboard that uses this data.

Graphite data source

Let's add Graphite as a data source.

Grafana BIG-IP TS dashboard

Let's import an example dashboard that uses the available data. This sample dashboard is also available in the Grafana dashboard collection online here.

The sample dashboard makes use of dashboard variables, so users can filter on parameters like Device (which BIG-IP), Tenant (which BIG-IP partition), Application, Virtual Server and Pool. For the sake of demonstration, there is also a filter for Profile. For more information and screenshots on the dashboard itself, refer to the Grafana website where the dashboard is downloadable.

The dashboard contains separate rows for:

- application health status: 4xx and 5xx responses (you can add slow responses as well, as an exercise)
- device system statistics: CPU, memory, TMM traffic in/out, interface traffic in/out
- virtual server traffic in/out and server connections
- pool traffic in/out and server connections
- members traffic in/out and server connections
- profile details statistics

The variable queries used for this dashboard are based on the structure of the metrics data you will find in the Graphite admin UI.

Conclusion

In this article we have demonstrated how the F5 Automation Toolchain, and in particular its Telemetry Streaming component, is a perfect match for integration with popular DevOps telemetry solutions. For a fully automated scenario, demonstrating the usage of Declarative Onboarding (DO), Application Services 3 (AS3) and Telemetry Streaming (TS) with automated Grafana integration, you can refer to the following GitHub repo.
F5 Distributed Cloud Telemetry (Metrics) - Prometheus

Scope

This article walks through the process of collecting metrics from F5 Distributed Cloud's (XC) Service Graph API and exposing them in a format that Prometheus can scrape. Prometheus then scrapes these metrics, which can be visualized in Grafana.

Introduction

Metrics are essential for gaining real-time insight into service performance and behaviour. F5 Distributed Cloud (XC) provides a Service Graph API that captures service-to-service communication data across your infrastructure. Prometheus, a leading open-source monitoring system, can scrape and store time-series metrics and, when paired with Grafana, offers powerful visualization capabilities. This article shows how to integrate a custom Python-based exporter that transforms Service Graph API data into Prometheus-compatible metrics. These metrics are then scraped by Prometheus and visualized in Grafana, all running in Docker for easy deployment.

Prerequisites

- Access to an F5 Distributed Cloud (XC) SaaS tenant
- VM with Python3 installed
- Running Prometheus instance (if not, check the "Configuring Prometheus" section below)
- Running Grafana instance (if not, check the "Configuring Grafana" section below)

Note: In this demo, a single AWS VM with Python installed runs the exporter (port 8888) together with Prometheus (host port 9090) and Grafana (port 3000) as Docker instances.

Architecture Overview

F5 XC API → Python Exporter → Prometheus → Grafana

Building the Python Exporter

To collect metrics from the F5 Distributed Cloud (XC) Service Graph API and expose them in a format Prometheus understands, we created a lightweight Python exporter using Flask. This exporter acts as a transformation layer: it fetches service graph data, parses it, and exposes it through a /metrics endpoint that Prometheus can scrape.

Code Link -> exporter.py

Key Functions of the Exporter

- Uses XC-Provided .p12 File for Authentication: To authenticate API requests to F5 Distributed Cloud (XC), the exporter uses a client certificate packaged in a .p12 file. This file must be manually downloaded from the F5 XC console (steps) and stored on the VM where the Python script runs. The script expects the full path to the .p12 file and its associated password to be specified in the configuration section.
- Fetches Service Graph Metrics: The script pulls service-level metrics such as request rates, error rates, throughput, and latency from the XC API. It supports both aggregated and individual load balancer views.
- Processes and Structures the Data: The exporter parses the raw API response to extract the latest metric values and converts them into the Prometheus exposition format. Each metric is labelled (e.g., by vhost and direction) for flexibility in Grafana queries.
- Exposes a /metrics Endpoint: A Flask web server runs on port 8888, serving the /metrics endpoint. Prometheus periodically scrapes this endpoint to ingest the latest metrics. (A minimal sketch of such an endpoint is shown after the run command below.)
- Handles Multiple Metric Types: Traffic metrics and health scores are handled and formatted individually. Each metric includes a descriptive name, type declaration, and optional labels for fine-grained monitoring and visualization.

Running the Exporter

python3 exporter.py > python.log 2>&1 &

This command runs exporter.py with Python3 in the background and redirects all standard output and error messages to python.log for easier debugging.
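For illustration only, here is a minimal sketch of what a Flask-based /metrics endpoint can look like. It is not the actual exporter.py linked above: the fetch_service_graph_metrics() stub and its sample values are placeholders standing in for the real XC API call and parsing logic, while the metric name and labels follow the ones described in this article.

from flask import Flask, Response

app = Flask(__name__)

def fetch_service_graph_metrics():
    # Placeholder for the real logic: call the XC Service Graph API using the
    # .p12 client certificate, parse the response, and return label/value pairs.
    return [
        {"vhost": "example-lb", "direction": "downstream", "value": 12.5},
    ]

@app.route("/metrics")
def metrics():
    lines = [
        "# HELP f5xc_downstream_http_request_rate HTTP request rate from the XC Service Graph API",
        "# TYPE f5xc_downstream_http_request_rate gauge",
    ]
    for m in fetch_service_graph_metrics():
        lines.append(
            'f5xc_downstream_http_request_rate{{vhost="{vhost}",direction="{direction}"}} {value}'.format(**m)
        )
    # Prometheus expects plain text in the exposition format
    return Response("\n".join(lines) + "\n", mimetype="text/plain")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8888)

Prometheus can then scrape http://<vm-ip>:8888/metrics directly; the real exporter linked above follows the same pattern but builds the metric list from live Service Graph data.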
Configuring Prometheus

docker run -d --name=prometheus --network=host -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:latest

Prometheus runs as a Docker instance in host network mode (port 9090) with a prometheus.yml configuration that scrapes the /metrics endpoint exposed by the Python Flask exporter on port 8888 every 60 seconds.

Configuring Grafana

docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

The private IP of the Prometheus Docker instance, along with its port (9090), is used as the data source in the Grafana configuration. Once Prometheus is configured under Grafana Data sources, follow the steps below:

- Navigate to the Explore menu
- Select "Prometheus" in the data source picker
- Choose the appropriate metric, in this case "f5xc_downstream_http_request_rate"
- Select the desired time range and click "Run query"
- Observe that the metrics graph is displayed

Note: Some requests need to be generated for metrics to be visible in Grafana. A broader, high-level view of all metrics can be accessed by navigating to "Drilldown" and selecting "Metrics", providing a comprehensive snapshot across services.
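Outside Grafana, you can also sanity-check that the exporter's metric is actually being scraped by querying Prometheus's HTTP API directly. The sketch below is a minimal example that assumes the demo setup from this article (Prometheus listening on port 9090 on the same VM); the metric name matches the one used in the Grafana steps above.

import requests

PROMETHEUS = "http://localhost:9090"  # demo assumption: Prometheus on the same VM

def check_metric(name):
    # Instant query against Prometheus's HTTP API
    resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": name}, timeout=10)
    resp.raise_for_status()
    results = resp.json().get("data", {}).get("result", [])
    if not results:
        print(f"No samples found for {name} yet; generate some traffic and retry.")
    for series in results:
        # Each result carries its label set and the latest (timestamp, value) sample
        print(series["metric"], series["value"])

if __name__ == "__main__":
    check_metric("f5xc_downstream_http_request_rate")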
Conclusion

F5 Distributed Cloud's (F5 XC) Service Graph API provides deep visibility into service-to-service communication, and when paired with Prometheus and Grafana, it enables powerful, real-time monitoring without vendor lock-in. This integration highlights F5 XC's alignment with open-source ecosystems, allowing users to build flexible and scalable observability pipelines. The custom Python exporter bridges the gap between the XC API and Prometheus, offering a lightweight and adaptable solution for transforming and exposing metrics. With Grafana dashboards on top, teams can gain instant insight into service health and performance. This open approach empowers operations teams to respond faster, optimize more effectively, and evolve their observability practices with confidence and control.

F5 Distributed Cloud Telemetry (Logs) - Loki

Scope

This article walks through the process of integrating log data from F5 Distributed Cloud's (F5 XC) Global Log Receiver (GLR) with Grafana Loki. By the end, you'll have a working log pipeline where logs sent from F5 XC can be visualized and explored through Grafana.

Introduction

Observability is a critical part of managing modern applications and infrastructure. F5 XC offers the GLR as a centralized system to stream logs from across distributed services. Grafana Loki, part of the Grafana observability stack, is a powerful and efficient tool for aggregating and querying logs. To improve observability, you can forward logs from F5 XC into Loki for centralized log analysis and visualization. This article shows you how to implement a lightweight Python webhook that bridges F5 XC GLR with Grafana Loki. The webhook acts as a log ingestion and transformation service, enabling logs to flow seamlessly into Loki for real-time exploration via Grafana.

Prerequisites

- Access to an F5 Distributed Cloud (XC) SaaS tenant with GLR set up
- VM with Python3 installed
- Running Loki instance (if not, check the "Configuring Loki and Grafana" section below)
- Running Grafana instance (if not, check the "Configuring Loki and Grafana" section below)

Note: In this demo, a single AWS VM with Python3 installed runs the webhook (port 5000) together with Loki (port 3100) and Grafana (port 3000) as Docker instances.

Architecture Overview

F5 XC GLR → Python Webhook → Loki → Grafana

F5 XC GLR Configuration

Follow the steps mentioned below to set up and configure the Global Log Receiver (GLR): F5 XC GLR

Building the Python Webhook

To send the log data from the F5 Distributed Cloud Global Log Receiver (GLR) to Grafana Loki, we used a lightweight Python webhook implemented with the Flask framework. This webhook acts as a simple transformation and relay service. It receives raw log entries from F5 XC, repackages them in the structure Loki expects, and pushes them to a Loki instance running on the same virtual machine (a minimal illustrative sketch is included after the run command below).

Key Functions of the Webhook

- Listens for Log Data: The webhook exposes an endpoint (/glr-webhook) on port 5000 that accepts HTTP POST requests from the GLR. Each request can contain one or more newline-separated log entries.
- Parses and Structures the Logs: Incoming logs are expected to be JSON-formatted. The webhook parses each line individually and assigns a consistent timestamp (in nanoseconds, as required by Loki).
- Formats the Payload for Loki: The logs are then wrapped in a structure that conforms to Loki's push API format. This includes organizing them into a stream, which can be labeled (e.g., with a job name like f5-glr) to make logs easier to query and group in Grafana.
- Pushes Logs to Loki: Once formatted, the webhook sends the payload to the Loki HTTP API using a standard POST request. If the request is successful, Loki returns a 204 No Content status.
- Handles Errors Gracefully: The webhook includes basic error handling for malformed JSON, network issues, or unexpected failures, returning appropriate HTTP responses.

Running the Webhook

python3 webhook.py > python.log 2>&1 &

This command runs webhook.py with Python3 in the background and redirects all standard output and error messages to python.log for easier debugging.
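For illustration, here is a minimal sketch of what such a webhook can look like. It is not the production webhook.py used in this demo: the Loki address and the f5-glr label follow the demo values described in this article, and the error handling is deliberately reduced to the basics.

import json
import time

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

LOKI_URL = "http://localhost:3100/loki/api/v1/push"  # demo value: Loki on the same VM

@app.route("/glr-webhook", methods=["POST"])
def glr_webhook():
    # GLR sends one or more newline-separated JSON log entries per request
    raw_lines = request.get_data(as_text=True).strip().splitlines()
    timestamp_ns = str(time.time_ns())  # Loki expects nanosecond timestamps as strings

    values = []
    for line in raw_lines:
        if not line:
            continue
        try:
            json.loads(line)  # validate that the entry is well-formed JSON
        except json.JSONDecodeError:
            return jsonify({"error": "malformed JSON log entry"}), 400
        values.append([timestamp_ns, line])

    payload = {"streams": [{"stream": {"job": "f5-glr"}, "values": values}]}
    try:
        resp = requests.post(LOKI_URL, json=payload, timeout=10)
    except requests.RequestException as exc:
        return jsonify({"error": str(exc)}), 502

    # Loki returns 204 No Content on a successful push
    return ("", resp.status_code)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Logs pushed this way appear in Grafana under the job=f5-glr label, matching the query used in the steps that follow.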
Configuring Loki and Grafana

docker run -d --name=loki -p 3100:3100 grafana/loki:latest
docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

Loki and Grafana run as Docker instances on the same VM; the private IP of the Loki Docker instance, along with its port, is used as the data source in the Grafana configuration. Once Loki is configured under Grafana Data sources, follow the steps below:

- Navigate to the Explore menu
- Select "Loki" in the data source picker
- Choose the appropriate label and value, in this case label=job and value=f5-glr
- Select the desired time range and click "Run query"
- Observe that logs are displayed based on the "Log Type" selected in the F5 XC GLR configuration

Note: Some requests need to be generated for logs to be visible in Grafana, based on the Log Type selected.

Conclusion

F5 Distributed Cloud's (F5 XC) Global Log Receiver (GLR) unlocks real-time observability by integrating with open-source tools like Grafana Loki. This reflects F5 XC's commitment to open source, enabling seamless log management with minimal overhead. A customizable Python webhook ensures adaptability to evolving needs. Logs centralized in Loki and visualized in Grafana empower teams with actionable insights, accelerating troubleshooting and optimization. F5 XC GLR's flexibility future-proofs observability strategies. This integration showcases F5's dedication to interoperability and empowering customers with community-driven solutions.