Telemetry
Getting started with F5 Distributed Cloud (XC) Telemetry
Introduction: This is an introductory article of the F5 Distributed Cloud (XC) telemetry series covering the basics. Going forward, there will be more articles focusing on exporting and visualizing logs and metrics from the XC platform to telemetry tools like the ELK Stack, Loki, Prometheus, Grafana, etc.

What is Telemetry? Telemetry refers to the process of collecting and transmitting various kinds of data from remote systems to a central receiving entity for monitoring, analyzing, and improving the performance, reliability, and security of those systems. Telemetry data involves:

- Metrics: Quantitative data like request rates, error rates, request/response throughputs, etc., collected at regular intervals over a period of time.
- Logs: Textual time- and event-based records generated by applications, like request logs, security logs, etc.
- Traces: Information regarding the journey/flow of requests across multiple services in a distributed system.
- Alerts: Alerts use telemetry data to set limits and send real-time notifications, allowing organizations to act quickly if their systems don't behave as expected. This makes alerts a critical pillar of observability.

Overview: The F5 Distributed Cloud platform is designed to meet the needs of today's modern and distributed applications. It allows for delivery, security, and observability across multiple clouds, hybrid clouds, and edge environments, and this creates telemetry data that can be seen in XC's own dashboards. But there may be times when customers want to collect their application's telemetry data from different platforms into their own SIEM systems. To fulfill this kind of requirement, XC provides the Global Log Receiver (GLR), which sends XC logs to the customer's log collection systems. In addition, XC exposes an API containing metrics data that can be fetched by exporter scripts and parsed and processed into a format that telemetry tools can understand.

As shown in the above diagram, there are a few steps involved before raw telemetry data can be presented in dashboards: data collection, storage, and processing from the remote systems. Only then is the telemetry data sent to visualization tools for real-time monitoring and observability. To achieve this, there are several telemetry tools available, like Prometheus (used for collecting, storing, and analyzing metrics), the ELK Stack, Grafana, etc. We have covered a brief description of a few such tools below.

F5 XC Global Log Receiver: F5 XC Global Log Receiver facilitates sending XC logs (request, audit, security event, and DNS request logs) to an external log collection system. The sent logs include all system and application logs of the F5 XC tenant. Global Log Receiver supports sending the logs to the following log collection systems:

- AWS CloudWatch
- AWS S3
- HTTP Receiver
- Azure Blob Storage
- Azure Event Hubs
- Datadog
- GCP Bucket
- Generic HTTP or HTTPS server
- IBM QRadar
- Kafka
- NewRelic
- Splunk
- SumoLogic

More information on how to set up and configure XC GLR can be found in this document.

Observability/Monitoring Tools:

Note: Below is a brief description of a few monitoring tools commonly used by organizations.

Prometheus: Prometheus is an open-source monitoring and alerting tool designed for collecting, storing, and analyzing time-series data (metrics) from modern, cloud-native, and distributed systems. It scrapes metrics from targets via HTTP endpoints, stores them in its optimized time-series database, and allows querying using the powerful PromQL language. Prometheus integrates seamlessly with tools like Grafana for visualization and includes Alertmanager for real-time alerting. It can also be integrated with Kubernetes and can help in continuously discovering and monitoring services in remote systems.

Loki: Loki is a lightweight, open-source log aggregation tool designed for storing and querying logs from remote systems. Unlike traditional log management systems, Loki does not index the log content; instead, it assigns labels to each log stream, which makes it more efficient. It is designed to process logs alongside metrics and is often paired with Prometheus. Logs can be queried using LogQL, a PromQL-like language. It is best suited for debugging and monitoring logs in cloud-native or containerized environments like Kubernetes.

Grafana: Grafana is an open-source visualization and analytics platform for creating real-time dashboards from diverse data sets. It integrates with tools like Prometheus, Loki, Elasticsearch, and more. Grafana enables users to visualize trends, monitor performance, and set up alerts using a highly customizable interface.

ELK Stack: The ELK Stack (Elasticsearch, Logstash, Kibana) is a powerful open-source solution for log management, search, and analytics. Elasticsearch handles storing, indexing, and querying data. Logstash ingests, parses, and transforms logs from various sources. Kibana provides an interactive interface for visualizing data and building dashboards.

Conclusion: Telemetry turns system data into actionable insights, enabling real-time visibility, early detection of issues, and performance tuning, thereby ensuring system reliability, security, stability, and efficiency. In this article, we've explored some of the foundational building blocks and essential tools that will set the stage for the topics we'll cover in the upcoming articles of this series!

Related Articles:
- F5 Distributed Cloud Telemetry (Logs) - ELK Stack
- F5 Distributed Cloud Telemetry (Metrics) - ELK Stack
- F5 Distributed Cloud Telemetry (Logs) - Loki
- F5 Distributed Cloud Telemetry (Metrics) - Prometheus

References:
- XC Global Log Receiver
- Prometheus
- ELK Stack
- Loki

F5 Distributed Cloud Telemetry (Metrics) - ELK Stack
Since we are looking into exporting metrics data to the ELK Stack using a Python script, let's first get a high-level overview of the components involved.

Metrics are numerical values that provide actionable insights into the performance, health, and behavior of systems or applications over time, allowing teams to monitor and improve the reliability, stability, and performance of modern distributed systems.

The ELK Stack (Elasticsearch, Logstash, and Kibana) is a powerful open-source platform. It enables organizations to collect, process, store, and visualize telemetry data such as logs, metrics, and traces from remote systems in real time.

F5 Distributed Cloud Telemetry (Logs) - Loki
Scope

This article walks through the process of integrating log data from F5 Distributed Cloud's (F5 XC) Global Log Receiver (GLR) with Grafana Loki. By the end, you'll have a working log pipeline where logs sent from F5 XC can be visualized and explored through Grafana.

Introduction

Observability is a critical part of managing modern applications and infrastructure. F5 XC offers the GLR as a centralized system to stream logs from across distributed services. Grafana Loki, part of the Grafana observability stack, is a powerful and efficient tool for aggregating and querying logs. To improve observability, you can forward logs from F5 XC into Loki for centralized log analysis and visualization.

This article shows you how to implement a lightweight Python webhook that bridges F5 XC GLR with Grafana Loki. The webhook acts as a log ingestion and transformation service, enabling logs to flow seamlessly into Loki for real-time exploration via Grafana.

Prerequisites

- Access to an F5 Distributed Cloud (XC) SaaS tenant with GLR set up
- VM with Python 3 installed
- Running Loki instance (if not, check the "Configuring Loki and Grafana" section below)
- Running Grafana instance (if not, check the "Configuring Loki and Grafana" section below)

Note: In this demo, an AWS VM with Python 3 installed runs the webhook (port 5000), while Loki (port 3100) and Grafana (port 3000) run as Docker instances, all on the same VM.

Architecture Overview

F5 XC GLR → Python Webhook → Loki → Grafana

F5 XC GLR Configuration

Follow the steps in the following guide to set up and configure the Global Log Receiver (GLR): F5 XC GLR

Building the Python Webhook

To send the log data from the F5 Distributed Cloud Global Log Receiver (GLR) to Grafana Loki, we used a lightweight Python webhook implemented with the Flask framework. This webhook acts as a simple transformation and relay service. It receives raw log entries from F5 XC, repackages them in the structure Loki expects, and pushes them to a Loki instance running on the same virtual machine. A minimal sketch of such a webhook is included at the end of this section.

Key Functions of the Webhook

- Listens for Log Data: The webhook exposes an endpoint (/glr-webhook) on port 5000 that accepts HTTP POST requests from the GLR. Each request can contain one or more newline-separated log entries.
- Parses and Structures the Logs: Incoming logs are expected to be JSON-formatted. The webhook parses each line individually and assigns a consistent timestamp (in nanoseconds, as required by Loki).
- Formats the Payload for Loki: The logs are then wrapped in a structure that conforms to Loki's push API format. This includes organizing them into a stream, which can be labeled (e.g., with a job name like f5-glr) to make logs easier to query and group in Grafana.
- Pushes Logs to Loki: Once formatted, the webhook sends the payload to the Loki HTTP API using a standard POST request. If the request is successful, Loki returns a 204 No Content status.
- Handles Errors Gracefully: The webhook includes basic error handling for malformed JSON, network issues, or unexpected failures, returning appropriate HTTP responses.

Running the Webhook

python3 webhook.py > python.log 2>&1 &

This command runs webhook.py using Python 3 in the background and redirects all standard output and error messages to python.log for easier debugging.
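For reference, a minimal sketch of such a webhook is shown below. This is not the exact webhook.py used in the demo; it simply illustrates the behaviour described above, and the Loki URL and the f5-glr job label are assumptions based on this article's single-VM setup.

# webhook.py - illustrative sketch only; adjust LOKI_URL and the stream label to your environment
import json
import time

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
LOKI_URL = "http://localhost:3100/loki/api/v1/push"  # assumption: Loki runs on the same VM

@app.route("/glr-webhook", methods=["POST"])
def glr_webhook():
    ts_ns = str(time.time_ns())  # Loki expects nanosecond timestamps
    values = []
    for line in request.get_data(as_text=True).splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            # each GLR entry is expected to be a JSON object on its own line
            values.append([ts_ns, json.dumps(json.loads(line))])
        except ValueError:
            return jsonify({"error": "malformed JSON entry"}), 400
    payload = {"streams": [{"stream": {"job": "f5-glr"}, "values": values}]}
    try:
        resp = requests.post(LOKI_URL, json=payload, timeout=10)
        resp.raise_for_status()  # Loki answers 204 No Content on success
    except requests.RequestException as exc:
        return jsonify({"error": str(exc)}), 502
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)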
Configuring Loki and Grafana

docker run -d --name=loki -p 3100:3100 grafana/loki:latest
docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

Loki and Grafana run as Docker instances on the same VM. The private IP of the Loki Docker instance, along with its port, is used as the data source in the Grafana configuration. Once Loki is configured under Grafana Data sources, follow the steps below:

- Navigate to the Explore menu
- Select "Loki" in the data source picker
- Choose the appropriate label and value, in this case label=job and value=f5-glr
- Select the desired time range and click "Run query"
- Observe that logs are displayed based on the "Log Type" selected in the F5 XC GLR configuration

Note: Some requests need to be generated for logs to become visible in Grafana, depending on the Log Type selected.

Conclusion

F5 Distributed Cloud's (F5 XC) Global Log Receiver (GLR) unlocks real-time observability by integrating with open-source tools like Grafana Loki. This reflects F5 XC's commitment to open source, enabling seamless log management with minimal overhead. A customizable Python webhook ensures adaptability to evolving needs. Centralized logs in Loki, visualized in Grafana, empower teams with actionable insights, accelerating troubleshooting and optimization. F5 XC GLR's flexibility future-proofs observability strategies. This integration showcases F5's dedication to interoperability and to empowering customers with community-driven solutions.

F5 Distributed Cloud Telemetry (Metrics) - Prometheus
Scope

This article walks through the process of collecting metrics from F5 Distributed Cloud's (XC) Service Graph API and exposing them in a format that Prometheus can scrape. Prometheus then scrapes these metrics, which can be visualized in Grafana.

Introduction

Metrics are essential for gaining real-time insight into service performance and behaviour. F5 Distributed Cloud (XC) provides a Service Graph API that captures service-to-service communication data across your infrastructure. Prometheus, a leading open-source monitoring system, can scrape and store time-series metrics and, when paired with Grafana, offers powerful visualization capabilities. This article shows how to integrate a custom Python-based exporter that transforms Service Graph API data into Prometheus-compatible metrics. These metrics are then scraped by Prometheus and visualized in Grafana, all running in Docker for easy deployment.

Prerequisites

- Access to an F5 Distributed Cloud (XC) SaaS tenant
- VM with Python 3 installed
- Running Prometheus instance (if not, check the "Configuring Prometheus" section below)
- Running Grafana instance (if not, check the "Configuring Grafana" section below)

Note: In this demo, an AWS VM with Python installed runs the exporter (port 8888), while Prometheus (host port 9090) and Grafana (port 3000) run as Docker instances, all on the same VM.

Architecture Overview

F5 XC API → Python Exporter → Prometheus → Grafana

Building the Python Exporter

To collect metrics from the F5 Distributed Cloud (XC) Service Graph API and expose them in a format Prometheus understands, we created a lightweight Python exporter using Flask. This exporter acts as a transformation layer: it fetches service graph data, parses it, and exposes it through a /metrics endpoint that Prometheus can scrape.

Code Link -> exporter.py (an abbreviated sketch of the /metrics endpoint is included after this section)

Key Functions of the Exporter

- Uses XC-Provided .p12 File for Authentication: To authenticate API requests to F5 Distributed Cloud (XC), the exporter uses a client certificate packaged in a .p12 file. This file must be manually downloaded from the F5 XC console (steps) and stored on the VM where the Python script runs. The script expects the full path to the .p12 file and its associated password to be specified in the configuration section.
- Fetches Service Graph Metrics: The script pulls service-level metrics such as request rates, error rates, throughput, and latency from the XC API. It supports both aggregated and individual load balancer views.
- Processes and Structures the Data: The exporter parses the raw API response to extract the latest metric values and converts them into the Prometheus exposition format. Each metric is labelled (e.g., by vhost and direction) for flexibility in Grafana queries.
- Exposes a /metrics Endpoint: A Flask web server runs on port 8888, serving the /metrics endpoint. Prometheus periodically scrapes this endpoint to ingest the latest metrics.
- Handles Multiple Metric Types: Traffic metrics and health scores are handled and formatted individually. Each metric includes a descriptive name, type declaration, and optional labels for fine-grained monitoring and visualization.

Running the Exporter

python3 exporter.py > python.log 2>&1 &

This command runs exporter.py using Python 3 in the background and redirects all standard output and error messages to python.log for easier debugging.
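The complete exporter is available at the code link above. The abbreviated sketch below only illustrates how a /metrics endpoint can render values in the Prometheus exposition format; the fetch_service_graph_metrics() helper, the sample label values, and the gauge type are illustrative assumptions standing in for the real Service Graph API call (which authenticates with the tenant .p12 certificate).

# Sketch of the /metrics endpoint only - not the full exporter.py
from flask import Flask, Response

app = Flask(__name__)

def fetch_service_graph_metrics():
    # Hypothetical placeholder: the real script queries the XC Service Graph API
    # here and returns the latest value per metric, vhost and direction.
    return [
        {"name": "f5xc_downstream_http_request_rate",
         "labels": {"vhost": "demo-lb", "direction": "downstream"},
         "value": 12.5},
    ]

@app.route("/metrics")
def metrics():
    lines = []
    for m in fetch_service_graph_metrics():
        labels = ",".join(f'{k}="{v}"' for k, v in m["labels"].items())
        lines.append(f"# TYPE {m['name']} gauge")
        lines.append(f"{m['name']}{{{labels}}} {m['value']}")
    # Prometheus scrapes this plain-text exposition format
    return Response("\n".join(lines) + "\n", mimetype="text/plain")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8888)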
Configuring Prometheus

docker run -d --name=prometheus --network=host -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:latest

Prometheus runs as a Docker instance in host network mode (port 9090) with the configuration below (prometheus.yml), scraping the /metrics endpoint exposed by the Python Flask exporter on port 8888 every 60 seconds.

Configuring Grafana

docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

The private IP of the Prometheus Docker instance, along with its port (9090), is used as the data source in the Grafana configuration. Once Prometheus is configured under Grafana Data sources, follow the steps below:

- Navigate to the Explore menu
- Select "Prometheus" in the data source picker
- Choose the appropriate metric, in this case "f5xc_downstream_http_request_rate"
- Select the desired time range and click "Run query"
- Observe that the metrics graph is displayed

Note: Some requests need to be generated for metrics to become visible in Grafana. A broader, high-level view of all metrics can be accessed by navigating to "Drilldown" and selecting "Metrics", providing a comprehensive snapshot across services.

Conclusion

F5 Distributed Cloud's (F5 XC) Service Graph API provides deep visibility into service-to-service communication, and when paired with Prometheus and Grafana, it enables powerful, real-time monitoring without vendor lock-in. This integration highlights F5 XC's alignment with open-source ecosystems, allowing users to build flexible and scalable observability pipelines. The custom Python exporter bridges the gap between the XC API and Prometheus, offering a lightweight and adaptable solution for transforming and exposing metrics. With Grafana dashboards on top, teams can gain instant insight into service health and performance. This open approach empowers operations teams to respond faster, optimize more effectively, and evolve their observability practices with confidence and control.

F5 Distributed Cloud Telemetry (Logs) - ELK Stack
Introduction: This article is a part of the F5 Distributed Cloud (F5 XC) telemetry series. Here we will discuss how we can export logs from the XC console to the ELK Stack using XC's GLR (Global Log Receiver).

F5 Distributed Cloud GLR (Global Log Receiver): Global Log Receiver is a feature provided by Distributed Cloud. It enables customers to send their logs from the F5 Distributed Cloud (F5 XC) console dashboards to their respective centralized SIEM tools like ELK. Global Log Receiver supports the following log collection systems:

- AWS CloudWatch
- AWS S3
- Azure Blob Storage
- Azure Event Hubs
- Datadog
- GCP Bucket
- Generic HTTP or HTTPS server
- IBM QRadar
- Kafka
- NewRelic
- Splunk
- SumoLogic

As of now, Global Log Receiver supports sending request (access) logs, DNS request logs, security events, and audit logs of all HTTP load balancers and sites.

ELK Stack: The ELK Stack is a popular and powerful open-source suite of tools used for centralized log aggregation, analysis, and visualization. "ELK" stands for Elasticsearch, Logstash, and Kibana. Together, these tools collect, process, and visualize machine-generated data, helping organizations gain insights into their systems.

Components of the ELK Stack:

- Elasticsearch: Elasticsearch is a highly scalable, distributed RESTful search and analytics engine that serves as the core backend of the ELK stack. It is the central data store where all log data is indexed and stored. It is designed to search and analyze large volumes of structured or unstructured data, such as logs and metrics, quickly and in near real time.
- Logstash: Logstash is a data ingestion and processing tool that collects data (logs or events) from various sources, transforms it, and sends it to Elasticsearch (or other destinations). It acts as a data collection pipeline with configurable input, output, and filter blocks.
- Kibana: Kibana is the visualization layer of the ELK stack. It provides a powerful interface for exploring, visualizing, and analyzing data (logs or events) stored in Elasticsearch with the help of charts, graphs, and maps. It helps organizations monitor the health, performance, and behavior of applications and make data-driven decisions.

Architecture Diagram: For this demo, we have configured GLR to export logs from a namespace to Logstash listening on port 8080. Logstash receives and processes the logs and sends them to Elasticsearch, where the logs are indexed and stored to enable real-time search and queries. Finally, Kibana retrieves the logs from Elasticsearch and presents them through interactive dashboards.

Demonstration: To bring the setup up, we will first deploy the ELK stack in the Docker environment.

ELK deployment and configurations:

Step 1: Clone the repository using the command: git clone https://github.com/deviantony/docker-elk.git

Step 2: Update ./docker-elk/docker-compose.yml by adding the HTTP receiver port 8080 under the logstash section, as shown in the screenshot below.

Step 3: Update the ./docker-elk/logstash/pipeline/logstash.conf file (an illustrative pipeline example is shown after these deployment steps).

Step 4: Now, run the command docker-compose up setup, followed by the command docker-compose up

Step 5: Check the status of the ELK stack containers with the command: docker ps

Step 6: Once the ELK stack is up and running, you can access the ELK GUI at http://<public-ip>:5601 using the default username/password (elastic/changeme).
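As a reference for Step 3, a minimal Logstash pipeline for this demo could look like the example below. This is an assumption rather than the exact configuration used: the HTTP input port matches the receiver port exposed in Step 2, while the Elasticsearch credentials and index naming depend on your docker-elk version and should be adjusted accordingly.

input {
  http {
    port => 8080        # receives logs pushed by the F5 XC Global Log Receiver
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "logs-%{+YYYY.MM.dd}"   # keeps the logs-* pattern used in Kibana below
    user => "elastic"
    password => "changeme"           # placeholder; use your docker-elk credentials
  }
}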
F5 XC GLR configurations:

Step 1: Login to the XC console. From the home page, select the Multi-Cloud Network Connect service or the Shared Configuration service.
- Multi-Cloud Network Connect service: Select Manage > Log Management > Global Log Receiver
- Shared Configuration service: Select Manage > Global Log Receiver
Select Add Global Log Receiver.
Note: If the Multi-Cloud Network Connect path (Manage > Log Management > Global Log Receiver) is used, Log Message Selection can only be set to the current namespace.

Step 2: Enter a name in the Metadata section. Optionally, set labels and add a description. From the Log Type menu, select Request Logs, Security Events, Audit Logs, or DNS Request Logs. Request Logs are set by default. For this demo, we have selected Security Events.

Step 3: In the case of the Multi-Cloud Network Connect service, select an option from the Log Message Selection menu. For this demo, we have set it to Select logs in specific namespaces.

Step 4: From the Receiver Configuration drop-down menu, select a receiver. For this demo, we have set it to HTTP Receiver and provided an HTTP URI (the public IP of the ELK stack along with the receiver port we set in the Logstash configuration, i.e. 8080).

Step 5: Optionally, configure advanced settings. Click Save and Exit.

Step 6: Finally, inspect your connection by clicking on the Test Connection button as shown in the screenshots below, and verify that logs are collected in the receiver (access the ELK GUI at http://<ELK instance public IP>:5601 and navigate to Home > Analytics > Discover, adding logs-* as a data view filter).

Verification:

Step 1: Monitor the security event logs of the load balancers deployed in the specified namespace from the XC console. Select the WAAP service and your namespace, then navigate to Overview > Security, select the LB, and click on the Security Analytics tab.

Step 2: Access the ELK GUI at http://<ELK instance public IP>:5601 and navigate to Home > Analytics > Discover, adding logs-* as a data view filter. You will notice the logs have been exported to ELK.

Step 3: Optionally, navigate to Home > Analytics > Dashboards and click Create visualization to generate a customized visualization dashboard for your collected logs.

Conclusion: F5 XC already has a built-in observability dashboard providing real-time visualization to monitor, analyze, and troubleshoot applications and infrastructure across multi-cloud and edge environments. This helps organizations boost efficiency, reduce downtime, and ensure system reliability. With the help of XC's GLR feature, XC also integrates seamlessly with other SIEM tools, like the ELK Stack, for customers who prefer to consolidate telemetry data from multiple platforms into their centralized SIEM systems.

References:
- XC Global Log Receiver
- Docker-elk
- ELK Stack DevCentral Article

Automation Toolchain - Telemetry Streaming - Grafana StatsD Graphite
Introduction

This article explains how to use the Telemetry Streaming (TS) component of the Automation Toolchain (ATC) for integration with Grafana through StatsD and Graphite. For more information on the push consumers supported by the F5 Networks ATC, and in particular the TS component, please refer to the official documentation on CloudDocs here.

BIG-IP Configuration

In order to configure the TS component of the ATC correctly for integration with Grafana, we need to POST the following JSON blob to the BIG-IP TS API endpoint at https://<BIG-IP-ADDRESS>:8443/mgmt/shared/telemetry/declare

{
    "class": "Telemetry",
    "MyTelemetrySystem": {
        "class": "Telemetry_System",
        "allowSelfSignedCert": true,
        "systemPoller": {
            "interval": 60
        }
    },
    "GraphiteConsumer": {
        "class": "Telemetry_Consumer",
        "type": "Graphite",
        "host": "10.0.0.55",
        "protocol": "http",
        "port": 80
    },
    "StatsdConsumer": {
        "class": "Telemetry_Consumer",
        "type": "Statsd",
        "host": "10.0.0.55",
        "protocol": "udp",
        "port": 8125
    },
    "MyTelemetryListener": {
        "class": "Telemetry_Listener",
        "port": 6514
    }
}

The above four JSON stanzas are the following:

- A Telemetry System class, which sets up the system poller. More information here.
- Two push consumer classes, which push the metrics or data externally, in this case to Graphite and StatsD. More information here.
- A Telemetry Listener class, which sets up an event listener (both TCP and UDP protocols) that can accept events in a specific format and process them. More information here.

Note that in this example, Graphite and StatsD are running on the same host, because we used a Docker container to host them, as follows:

# docker run -d \
  --name graphite \
  --restart=always \
  -p 80:80 \
  -p 2003-2004:2003-2004 \
  -p 2023-2024:2023-2024 \
  -p 8125:8125/udp \
  -p 8126:8126 \
  graphiteapp/graphite-statsd

Telemetry data

Let's have a look at the TS telemetry data being produced and sent over to Graphite. StatsD is used for metrics; Graphite is used for events.

StatsD metrics

StatsD supports three main metric types: gauges, timers, and counters. The TS StatsD integration uses gauges. We can use netcat to have a look at the format of these gauge-based metrics:

# echo "gauges" | nc 10.0.0.55 8126
{ 'statsd.timestamp_lag': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsIn': 297895992,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsOut': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsIn': 248764520,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsOut': 134973160,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsIn': 62854192,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsOut': 229153456,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsIn': 62432120,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsOut': 228977008,
  ...

We can also see the same gauge metrics inside the Graphite admin UI. The structure and path of this telemetry data is important when you create your own dashboards.
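As a quick sanity check outside of Grafana, the Graphite render API can also return any of these gauge series directly. For example (the host and metric path below are taken from this demo; adjust them to your own environment):

# curl "http://10.0.0.55/render?target=f5telemetry.*.system.tmmTraffic.clientSideTraffic-bitsIn&from=-10min&format=json"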
Graphite events

As mentioned earlier, the TS Graphite integration uses events to send the data to Graphite. You can observe those events by going to the /events endpoint on your Graphite admin UI. The details of such an event are as follows.

Grafana

In order to be able to use and display the data now collected in Graphite, one needs to set up Graphite as a data source and import a Grafana dashboard that uses this data.

Graphite data source

Let's add Graphite as a data source.

Grafana BIG-IP TS dashboard

Let's import an example dashboard that uses the data available. This sample dashboard is also available in the Grafana dashboard collection online here.

This sample dashboard makes use of dashboard variables, so users can filter on parameters like Device (which BIG-IP), Tenant (which BIG-IP partition), Application, Virtual Server, and Pool. For the sake of demonstration, there is also a filter for Profile. For more information and screenshots of the dashboard itself, refer to the Grafana website where the dashboard is downloadable.

The dashboard contains separate rows for:
- application health status: 4xx and 5xx responses (you can add slow responses as well, as an exercise)
- device system statistics: CPU, memory, TMM traffic in/out, interface traffic in/out
- virtual server traffic in/out and server connections
- pool traffic in/out and server connections
- pool members traffic in/out and server connections
- profile details statistics

The variable queries used for this dashboard are as follows, based on the structure of the metrics data you will find in the Graphite admin UI.

Conclusion

In this article we have demonstrated how the F5 Automation Toolchain, and in particular its Telemetry Streaming component, is a perfect match for integration into popular DevOps telemetry solutions. For a fully automated scenario demonstrating the usage of Declarative Onboarding (DO), Application Services 3 (AS3), and Telemetry Streaming (TS) with automated Grafana integration, you can refer to the following GitHub repo.

Telemetry streaming - One click deploy using Ansible
In this article we will focus on using Ansible to enable and install Telemetry Streaming (TS) and its associated dependencies.

Telemetry streaming

The F5 BIG-IP is a full proxy architecture, which essentially means that the BIG-IP LTM completely understands the end-to-end connection, enabling it to be an endpoint and originator of client and server side connections. This empowers the BIG-IP to have traffic statistics from the client to the BIG-IP and from the BIG-IP to the server, giving the user the entire view of their network statistics. To gain meaningful insight, you must be able to gather your data and statistics (telemetry) into a useful place. Telemetry Streaming is an extension designed to declaratively aggregate, normalize, and forward statistics and events from the BIG-IP to a consumer application. You can learn more about telemetry streaming here, but let's get to Ansible.

Enable and Install using Ansible

The Ansible playbook below performs the following tasks:
- Grab the latest Application Services 3 (AS3) and Telemetry Streaming (TS) versions
- Download the AS3 and TS packages and install them on the BIG-IP using a role
- Deploy the AS3 and TS declarations on the BIG-IP using a role from Ansible Galaxy
- If AVR logs are needed for TS, provision the BIG-IP AVR module and configure AVR to point to TS

Prerequisites

- Supported on BIG-IP 14.1+ versions
- If AVR is required to be configured, make sure there is enough memory for the module to be enabled along with all the other BIG-IP modules that are provisioned in your environment
- The TS data is being pushed to Azure Log Analytics (modify it to use your own consumer). If Azure logs are being used, then update your TS JSON file with the correct workspace ID and shared key
- Ansible is installed on the host from where the scripts are run
- The following files are present in the directory:
  - Variable file (vars.yml)
  - TS poller and listener setup (ts_poller_and_listener_setup.declaration.json)
  - Declare logging profile (as3_ts_setup_declaration.json)
  - Ansible playbook (ts_workflow.yml)

Get started

Download the following roles from Ansible Galaxy.

ansible-galaxy install f5devcentral.f5app_services_package --force

This role performs a series of steps needed to download and install RPM packages on the BIG-IP that are a part of the F5 automation toolchain. Read through the prerequisites for the role before installing it.

ansible-galaxy install f5devcentral.atc_deploy --force

This role deploys the declaration using the RPM package installed above. Read through the prerequisites for the role before installing it.

By default, roles get installed into the /etc/ansible/roles directory.

Next, copy the below contents into a file named vars.yml. Change the variable file to reflect your environment.

# BIG-IP MGMT address and username/password
f5app_services_package_server: "xxx.xxx.xxx.xxx"
f5app_services_package_server_port: "443"
f5app_services_package_user: "*****"
f5app_services_package_password: "*****"
f5app_services_package_validate_certs: "false"
f5app_services_package_transport: "rest"

# URI from where latest RPM version and package will be downloaded
ts_uri: "https://github.com/F5Networks/f5-telemetry-streaming/releases"
as3_uri: "https://github.com/F5Networks/f5-appsvcs-extension/releases"

# If AVR module logs needed then set to 'yes' else leave it as 'no'
avr_needed: "no"

# Virtual servers in your environment to assign the logging profiles (If AVR set to 'yes')
virtual_servers:
  - "vs1"
  - "vs2"

Next, copy the below contents into a file named ts_poller_and_listener_setup.declaration.json.
{ "class": "Telemetry", "controls": { "class": "Controls", "logLevel": "debug" }, "My_Poller": { "class": "Telemetry_System_Poller", "interval": 60 }, "My_Consumer": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "workspaceId": "<<workspace-id>>", "passphrase": { "cipherText": "<<sharedkey>>" }, "useManagedIdentity": false, "region": "eastus" } } Next copy the below contents into a file named as3_ts_setup_declaration.json { "class": "ADC", "schemaVersion": "3.10.0", "remark": "Example depicting creation of BIG-IP module log profiles", "Common": { "Shared": { "class": "Application", "template": "shared", "telemetry_local_rule": { "remark": "Only required when TS is a local listener", "class": "iRule", "iRule": "when CLIENT_ACCEPTED {\n node 127.0.0.1 6514\n}" }, "telemetry_local": { "remark": "Only required when TS is a local listener", "class": "Service_TCP", "virtualAddresses": [ "255.255.255.254" ], "virtualPort": 6514, "iRules": [ "telemetry_local_rule" ] }, "telemetry": { "class": "Pool", "members": [ { "enable": true, "serverAddresses": [ "255.255.255.254" ], "servicePort": 6514 } ], "monitors": [ { "bigip": "/Common/tcp" } ] }, "telemetry_hsl": { "class": "Log_Destination", "type": "remote-high-speed-log", "protocol": "tcp", "pool": { "use": "telemetry" } }, "telemetry_formatted": { "class": "Log_Destination", "type": "splunk", "forwardTo": { "use": "telemetry_hsl" } }, "telemetry_publisher": { "class": "Log_Publisher", "destinations": [ { "use": "telemetry_formatted" } ] }, "telemetry_traffic_log_profile": { "class": "Traffic_Log_Profile", "requestSettings": { "requestEnabled": true, "requestProtocol": "mds-tcp", "requestPool": { "use": "telemetry" }, "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\"" } } } } } NOTE: To better understand the above declarations check out our clouddocs page: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/telemetry-system.html Next copy the below contents into a file named ts_workflow.yml - name: Telemetry streaming setup hosts: localhost connection: local any_errors_fatal: true vars_files: vars.yml tasks: - name: Get latest AS3 RPM name action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release: "{{as3_output.stdout_lines[0]}}" - name: Get latest AS3 RPM tag action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release_tag: "{{as3_output.stdout_lines[0]}}" - name: Get latest TS RPM name action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release: "{{ts_output.stdout_lines[0]}}" - name: Get latest TS RPM tag action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release_tag: "{{ts_output.stdout_lines[0]}}" - name: Download and Install AS3 and TS RPM ackages to BIG-IP using role include_role: name: f5devcentral.f5app_services_package vars: f5app_services_package_url: "{{item.uri}}/download/{{item.release_tag}}/{{item.release}}?raw=true" 
f5app_services_package_path: "/tmp/{{item.release}}" loop: - {uri: "{{as3_uri}}", release_tag: "{{as3_release_tag}}", release: "{{as3_release}}"} - {uri: "{{ts_uri}}", release_tag: "{{ts_release_tag}}", release: "{{ts_release}}"} - name: Deploy AS3 and TS declaration on the BIG-IP using role include_role: name: f5devcentral.atc_deploy vars: atc_method: POST atc_declaration: "{{ lookup('template', item.file) }}" atc_delay: 10 atc_retries: 15 atc_service: "{{item.service}}" provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" loop: - {service: "AS3", file: "as3_ts_setup_declaration.json"} - {service: "Telemetry", file: "ts_poller_and_listener_setup_declaration.json"} #If AVR logs need to be enabled - name: Provision BIG-IP with AVR bigip_provision: provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" module: "avr" level: "nominal" when: avr_needed == "yes" - name: Enable AVR logs using tmsh commands bigip_command: commands: - modify analytics global-settings { offbox-protocol tcp offbox-tcp-addresses add { 127.0.0.1 } offbox-tcp-port 6514 use-offbox enabled } - create ltm profile analytics telemetry-http-analytics { collect-geo enabled collect-http-timing-metrics enabled collect-ip enabled collect-max-tps-and-throughput enabled collect-methods enabled collect-page-load-time enabled collect-response-codes enabled collect-subnets enabled collect-url enabled collect-user-agent enabled collect-user-sessions enabled publish-irule-statistics enabled } - create ltm profile tcp-analytics telemetry-tcp-analytics { collect-city enabled collect-continent enabled collect-country enabled collect-nexthop enabled collect-post-code enabled collect-region enabled collect-remote-host-ip enabled collect-remote-host-subnet enabled collected-by-server-side enabled } provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" when: avr_needed == "yes" - name: Assign TCP and HTTP profiles to virtual servers bigip_virtual_server: provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" name: "{{item}}" profiles: - http - telemetry-http-analytics - telemetry-tcp-analytics loop: "{{virtual_servers}}" when: avr_needed == "yes" Now execute the playbook: ansible-playbook ts_workflow.yml Verify Login to the BIG-IP UI Go to menu iApps->Package Management LX. 
  Both the f5-telemetry and f5-appsvcs RPMs should be present.
- Login to the BIG-IP CLI and check the restjavad logs at /var/log for any TS errors.
- Login to the consumer the logs are being sent to and make sure it is receiving them.

Conclusion

The Telemetry Streaming (TS) extension is very powerful and is capable of sending much more information than described above. Take a look at the complete list of logs as well as consumer applications supported by TS over on CloudDocs: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/using-ts.html