F5 XC GLR
F5 Distributed Cloud Telemetry (Logs) - Loki
Scope

This article walks through the process of integrating log data from F5 Distributed Cloud's (F5 XC) Global Log Receiver (GLR) with Grafana Loki. By the end, you'll have a working log pipeline where logs sent from F5 XC can be visualized and explored through Grafana.

Introduction

Observability is a critical part of managing modern applications and infrastructure. F5 XC offers the GLR as a centralized system to stream logs from across distributed services. Grafana Loki, part of the Grafana observability stack, is a powerful and efficient tool for aggregating and querying logs. To improve observability, you can forward logs from F5 XC into Loki for centralized log analysis and visualization.

This article shows you how to implement a lightweight Python webhook that bridges F5 XC GLR with Grafana Loki. The webhook acts as a log ingestion and transformation service, enabling logs to flow seamlessly into Loki for real-time exploration via Grafana.

Prerequisites

- Access to an F5 Distributed Cloud (F5 XC) SaaS tenant with GLR set up
- A VM with Python 3 installed
- A running Loki instance (if not, see the "Configuring Loki and Grafana" section below)
- A running Grafana instance (if not, see the "Configuring Loki and Grafana" section below)

Note: In this demo, a single AWS VM with Python 3 installed hosts everything: the webhook (port 5000), plus Loki (port 3100) and Grafana (port 3000) running as Docker containers.

Architecture Overview

F5 XC GLR → Python Webhook → Loki → Grafana

F5 XC GLR Configuration

Follow the steps in the F5 XC GLR documentation to set up and configure the Global Log Receiver (GLR).

Building the Python Webhook

To send log data from the F5 Distributed Cloud Global Log Receiver (GLR) to Grafana Loki, we used a lightweight Python webhook implemented with the Flask framework. This webhook acts as a simple transformation and relay service: it receives raw log entries from F5 XC, repackages them in the structure Loki expects, and pushes them to a Loki instance running on the same virtual machine. A minimal sketch of such a webhook is included at the end of this section.

Key Functions of the Webhook

- Listens for log data: The webhook exposes an endpoint (/glr-webhook) on port 5000 that accepts HTTP POST requests from the GLR. Each request can contain one or more newline-separated log entries.
- Parses and structures the logs: Incoming logs are expected to be JSON-formatted. The webhook parses each line individually and assigns a consistent timestamp (in nanoseconds, as required by Loki).
- Formats the payload for Loki: The logs are wrapped in a structure that conforms to Loki's push API format. This includes organizing them into a stream, which can be labeled (e.g., with a job name like f5-glr) to make logs easier to query and group in Grafana.
- Pushes logs to Loki: Once formatted, the webhook sends the payload to the Loki HTTP API using a standard POST request. If the request is successful, Loki returns a 204 No Content status.
- Handles errors gracefully: The webhook includes basic error handling for malformed JSON, network issues, and unexpected failures, returning appropriate HTTP responses.

Running the Webhook

python3 webhook.py > python.log 2>&1 &

This command runs webhook.py with Python 3 in the background and redirects all standard output and error messages to python.log for easier debugging.
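For reference, here is a minimal sketch of what such a webhook could look like. It follows the behavior described above (the /glr-webhook endpoint on port 5000, nanosecond timestamps, and the job=f5-glr stream label), but the Loki URL, error handling, and exact payload handling are assumptions for this demo and should be adapted to your environment. It only needs Flask and requests installed (pip install flask requests).

# webhook.py - minimal sketch of the GLR-to-Loki relay described above.
# Assumes Loki is reachable on localhost:3100 and labels all logs with job="f5-glr".
import json
import time

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

LOKI_PUSH_URL = "http://localhost:3100/loki/api/v1/push"  # adjust to your Loki instance
STREAM_LABELS = {"job": "f5-glr"}                         # label queried later in Grafana


@app.route("/glr-webhook", methods=["POST"])
def glr_webhook():
    raw = request.get_data(as_text=True)
    ts_ns = str(time.time_ns())  # Loki expects nanosecond timestamps as strings

    values = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)  # GLR sends newline-separated JSON log entries
        except json.JSONDecodeError:
            return jsonify({"error": "malformed JSON log entry"}), 400
        values.append([ts_ns, json.dumps(entry)])

    if not values:
        return jsonify({"status": "no log entries in request"}), 200

    # Wrap the entries in Loki's push API structure: one stream with our labels.
    payload = {"streams": [{"stream": STREAM_LABELS, "values": values}]}

    try:
        resp = requests.post(LOKI_PUSH_URL, json=payload, timeout=10)
    except requests.RequestException as exc:
        return jsonify({"error": f"failed to reach Loki: {exc}"}), 502

    if resp.status_code != 204:  # Loki returns 204 No Content on success
        return jsonify({"error": f"Loki returned {resp.status_code}"}), 502
    return jsonify({"status": "ok", "entries": len(values)}), 200


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)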
Configuring Loki and Grafana

docker run -d --name=loki -p 3100:3100 grafana/loki:latest
docker run -d --name=grafana -p 3000:3000 grafana/grafana:latest

Loki and Grafana run as Docker containers on the same VM; the private IP of the Loki container, along with its port, is used as the data source URL in the Grafana configuration. Once Loki is configured under Grafana Data sources, follow the steps below:

- Navigate to the Explore menu
- Select "Loki" in the data source picker
- Choose the appropriate label and value, in this case label=job and value=f5-glr
- Select the desired time range and click "Run query"
- Observe that logs are displayed based on the "Log Type" selected in the F5 XC GLR configuration

Note: Some requests need to be generated for logs to be visible in Grafana, depending on the Log Type selected.
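If you want to confirm that log lines are actually reaching Loki independently of Grafana, you can also query Loki's HTTP API directly. The short script below is an optional sanity check; it assumes Loki is reachable on localhost:3100 and uses the same job=f5-glr label selector as the Grafana Explore query above.

# check_loki.py - optional check that Loki has received logs labeled job="f5-glr".
# The Loki address and label below match this demo's setup; adjust for your environment.
import time

import requests

LOKI_QUERY_URL = "http://localhost:3100/loki/api/v1/query_range"

end_ns = time.time_ns()
start_ns = end_ns - 3600 * 10**9  # look back one hour

params = {
    "query": '{job="f5-glr"}',  # same label selector used in Grafana Explore
    "start": str(start_ns),     # Loki accepts nanosecond Unix timestamps
    "end": str(end_ns),
    "limit": 10,
}

resp = requests.get(LOKI_QUERY_URL, params=params, timeout=10)
resp.raise_for_status()

streams = resp.json().get("data", {}).get("result", [])
total = sum(len(stream.get("values", [])) for stream in streams)
print(f"Found {total} recent log line(s) for job=f5-glr")

for stream in streams:
    for ts, line in stream.get("values", []):
        print(ts, line[:120])  # print a short preview of each entry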
Conclusion

F5 Distributed Cloud's (F5 XC) Global Log Receiver (GLR) unlocks real-time observability by integrating with open-source tools like Grafana Loki. This reflects F5 XC's commitment to open source, enabling seamless log management with minimal overhead. A customizable Python webhook ensures adaptability to evolving needs. Logs centralized in Loki and visualized in Grafana empower teams with actionable insights, accelerating troubleshooting and optimization. F5 XC GLR's flexibility future-proofs observability strategies, and this integration showcases F5's dedication to interoperability and to empowering customers with community-driven solutions.

F5 Distributed Cloud Telemetry (Logs) - ELK Stack

Introduction:

This article is part of the F5 Distributed Cloud (F5 XC) telemetry series. Here we discuss how to export logs from the XC console to the ELK Stack using XC's Global Log Receiver (GLR).

F5 Distributed Cloud GLR (Global Log Receiver):

Global Log Receiver is a feature provided by Distributed Cloud. It enables customers to send logs from the F5 Distributed Cloud (F5 XC) console dashboards to their centralized SIEM tools, such as ELK. The Global Log Receiver supports the following log collection systems:

- AWS CloudWatch
- AWS S3
- Azure Blob Storage
- Azure Event Hubs
- Datadog
- GCP Bucket
- Generic HTTP or HTTPS server
- IBM QRadar
- Kafka
- NewRelic
- Splunk
- SumoLogic

As of now, the Global Log Receiver supports sending request (access) logs, DNS request logs, security events, and audit logs of all HTTP load balancers and sites.

ELK Stack:

The ELK Stack is a popular and powerful open-source suite of tools used for centralized log aggregation, analysis, and visualization. "ELK" stands for:

- Elasticsearch
- Logstash
- Kibana

Together, these tools collect, process, and visualize machine-generated data, helping organizations gain insights into their systems.

Components of the ELK Stack:

Elasticsearch: Elasticsearch is a highly scalable, distributed RESTful search and analytics engine that serves as the core backend of the ELK stack. It is the central data store where all logs are indexed and stored, and it is designed to search and analyze large volumes of structured or unstructured data, such as logs and metrics, quickly and in near real time.

Logstash: Logstash is a data ingestion and processing tool that collects data (logs or events) from various sources, transforms it, and sends it to Elasticsearch (or other destinations). It acts as a data collection pipeline with configurable input, filter, and output blocks.

Kibana: Kibana is the visualization layer of the ELK stack. It provides a powerful interface for exploring, visualizing, and analyzing data (logs or events) stored in Elasticsearch through charts, graphs, and maps. It helps organizations monitor the health, performance, and behavior of applications and make data-driven decisions.

Architecture Diagram:

For this demo, we have configured GLR to export logs from a namespace to Logstash listening on port 8080. Logstash receives and processes the logs and sends them to Elasticsearch, where they are indexed and stored to enable real-time search and queries. Finally, Kibana retrieves the logs from Elasticsearch and presents them through interactive dashboards.

Demonstration:

To bring the setup up, we first deploy the ELK stack in a Docker environment.

ELK deployment and configurations:

Step 1: Clone the repository: git clone https://github.com/deviantony/docker-elk.git

Step 2: Update ./docker-elk/docker-compose.yml by adding the HTTP receiver port 8080 under the logstash section, as shown in the screenshot below.

Step 3: Update the ./docker-elk/logstash/pipeline/logstash.conf file.

Step 4: Now run: docker-compose up setup, followed by: docker-compose up

Step 5: Check the status of the ELK stack containers with: docker ps

Step 6: Once the ELK stack is up and running, you can access the ELK GUI at http://<public-ip>:5601 using the default username/password (elastic/changeme).
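Before configuring the F5 XC side, it can be useful to confirm that the Logstash HTTP input is reachable and that events flow through to Elasticsearch. The small script below is an optional check for this demo; the host, port (8080, as added to docker-compose.yml above), and the sample event fields are assumptions and do not reflect the real GLR log schema.

# test_logstash.py - optional check that the Logstash HTTP input on port 8080 accepts events.
# Host, port, and sample fields are demo assumptions, not the actual GLR log format.
import json

import requests

LOGSTASH_URL = "http://localhost:8080"  # HTTP input port added to docker-compose.yml above

sample_event = {
    "source": "manual-test",
    "msg": "hello from the ELK pipeline test",
}

resp = requests.post(
    LOGSTASH_URL,
    data=json.dumps(sample_event),
    headers={"Content-Type": "application/json"},
    timeout=10,
)
print("Logstash responded with HTTP", resp.status_code)
# A 2xx response means Logstash accepted the event; once Elasticsearch indexes it,
# it should be visible in Kibana (this demo uses a logs-* data view in Discover).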
F5 XC GLR configurations:

Step 1: Log in to the XC console. From the home page, select either the Multi-Cloud Network Connect service or the Shared Configuration service.

- Multi-Cloud Network Connect service: Select Manage > Log Management > Global Log Receiver.
- Shared Configuration service: Select Manage > Global Log Receiver.

Then select Add Global Log Receiver.

Note: If you use the Multi-Cloud Network Connect path (Manage > Log Management > Global Log Receiver), Log Message Selection can only be set to the current namespace.

Step 2: Enter a name in the Metadata section. Optionally, set labels and add a description. From the Log Type menu, select Request Logs, Security Events, Audit Logs, or DNS Request Logs. Request logs are selected by default. For this demo, we have selected Security Events.

Step 3: In the case of the Multi-Cloud Network Connect service, choose an option from the Log Message Selection menu. For this demo, we have set it to "Select logs in specific namespaces".

Step 4: From the Receiver Configuration drop-down menu, select a receiver. For this demo, we have set it to HTTP Receiver and provided an HTTP URI (the public IP of the ELK stack along with the receiver port set in the Logstash configuration, i.e. 8080).

Step 5: Optionally, configure advanced settings. Click Save and Exit.

Step 6: Finally, inspect your connection by clicking the Test Connection button as shown in the screenshots below, and verify that logs are collected in the receiver (access the ELK GUI at http://<ELK instance public IP>:5601, navigate to Home > Analytics > Discover, and add logs-* as a data view filter).

Verification:

Step 1: Monitor the security event logs of the load balancers deployed in the specified namespace from the XC console. Select the WAAP service and your namespace, then navigate to Overview > Security, select the LB, and click on the Security Analytics tab.

Step 2: Access the ELK GUI at http://<ELK instance public IP>:5601, navigate to Home > Analytics > Discover, and add logs-* as a data view filter. You will notice that the logs have been exported to ELK.

Step 3: Optionally, navigate to Home > Analytics > Dashboards and click Create visualization to build a customized dashboard for your collected logs.

Conclusion:

F5 XC already has a built-in observability dashboard providing real-time visualization to monitor, analyze, and troubleshoot applications and infrastructure across multi-cloud and edge environments. This helps organizations boost efficiency, reduce downtime, and ensure system reliability. With the help of XC's GLR feature, XC also integrates seamlessly with other SIEM tools, such as the ELK stack, for customers who prefer to consolidate telemetry data from multiple platforms into their centralized SIEM systems.

References:

- XC Global Log Receiver
- Docker-elk
- ELK Stack
- DevCentral Article