Automation Toolchain - Telemetry Streaming - Grafana StatsD Graphite

Introduction

This article explains how to use the Telemetry Streaming (TS) component of the F5 Automation Toolchain (ATC) to integrate with Grafana through StatsD and Graphite. For more information on the push consumers supported by the F5 Networks ATC, and the TS component in particular, please refer to the official documentation on CloudDocs here.

BIG-IP Configuration

To configure the TS component of the ATC correctly for integration with Grafana, we need to post the following JSON declaration to the TS API endpoint of your BIG-IP at https://<BIG-IP-ADDRESS>:8443/mgmt/shared/telemetry/declare

```json
{
    "class": "Telemetry",
    "MyTelemetrySystem": {
        "class": "Telemetry_System",
        "allowSelfSignedCert": true,
        "systemPoller": {
            "interval": 60
        }
    },
    "GraphiteConsumer": {
        "class": "Telemetry_Consumer",
        "type": "Graphite",
        "host": "10.0.0.55",
        "protocol": "http",
        "port": 80
    },
    "StatsdConsumer": {
        "class": "Telemetry_Consumer",
        "type": "Statsd",
        "host": "10.0.0.55",
        "protocol": "udp",
        "port": 8125
    },
    "MyTelemetryListener": {
        "class": "Telemetry_Listener",
        "port": 6514
    }
}
```
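If you are working from the command line, the declaration can be posted with curl. This is a minimal sketch; the file name ts_grafana.json and the admin credentials are placeholders, not from the original article:

```
# POST the TS declaration to the BIG-IP (-k because of the self-signed certificate)
curl -sku admin:<password> \
  -H "Content-Type: application/json" \
  -X POST https://<BIG-IP-ADDRESS>:8443/mgmt/shared/telemetry/declare \
  -d @ts_grafana.json
```

TS should echo the accepted declaration back with a success message, and a GET against the same endpoint returns the currently active declaration.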
The declaration above consists of four JSON stanzas:

- A Telemetry System class, which sets up the system poller. More information here.
- Two Telemetry (push) Consumer classes, which push the metrics or data externally, in this case to Graphite and StatsD. More information here.
- A Telemetry Listener class, which sets up an event listener (both TCP and UDP protocols) that can accept events in a specific format and process them. More information here.

Note that in this example Graphite and StatsD are running on the same host, because we used a Docker container to host them, started as follows:

```
# docker run -d \
 --name graphite \
 --restart=always \
 -p 80:80 \
 -p 2003-2004:2003-2004 \
 -p 2023-2024:2023-2024 \
 -p 8125:8125/udp \
 -p 8126:8126 \
 graphiteapp/graphite-statsd
```

Telemetry data

Let's have a look at the TS telemetry data being produced and sent over: StatsD is used for the metrics, Graphite for the events.

StatsD metrics

StatsD supports three main metric types: gauges, timers and counters. The TS StatsD integration uses gauges. We can use netcat to have a look at the format of these gauge-based metrics:

```
# echo "gauges" | nc 10.0.0.55 8126
{ 'statsd.timestamp_lag': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsIn': 297895992,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.1-0.counters-bitsOut': 0,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsIn': 248764520,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.networkInterfaces.mgmt.counters-bitsOut': 134973160,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsIn': 62854192,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsOut': 229153456,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsIn': 62432120,
  'f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.serverSideTraffic-bitsOut': 228977008,
  ...
```

We can also see the same gauge metrics inside the Graphite admin UI. The structure and path of this telemetry data is important when you create your own dashboards.
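Those gauge paths can also be pulled straight from Graphite's render API, which is handy when deciding which series to put on a dashboard panel. A minimal sketch, assuming the Graphite container above and the hostname shown in the gauge output:

```
# Fetch the last hour of client-side TMM traffic as JSON from the Graphite render API
curl "http://10.0.0.55/render?target=f5telemetry.ip-10-0-0-130-eu-west-1-compute-internal.system.tmmTraffic.clientSideTraffic-bitsIn&from=-1h&format=json"
```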
Graphite events

As mentioned earlier, the TS Graphite integration uses events to send the data to Graphite. You can observe those events by going to the /events endpoint of your Graphite admin UI, and clicking an individual event shows its details.

Grafana

To use and display the data now collected in Graphite, you need to set up Graphite as a data source in Grafana and import a Grafana dashboard that uses this data.

Graphite data source

Let's add Graphite as a data source.
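Adding the data source can also be scripted against Grafana's HTTP API instead of clicking through the UI. This is a sketch only: the Grafana host and admin credentials are placeholders, while the Graphite URL is the container used earlier.

```
# Create the Graphite data source via the Grafana HTTP API (default port 3000 assumed)
curl -s -u admin:<grafana-password> \
  -H "Content-Type: application/json" \
  -X POST http://<GRAFANA-HOST>:3000/api/datasources \
  -d '{"name": "Graphite", "type": "graphite", "access": "proxy", "url": "http://10.0.0.55", "isDefault": true}'
```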
Grafana BIG-IP TS dashboard

Let's import an example dashboard that uses the data now available. This sample dashboard is also available in the Grafana dashboard collection online here.

The sample dashboard makes use of dashboard variables, so users can filter on parameters like Device (which BIG-IP), Tenant (which BIG-IP partition), Application, Virtual Server and Pool. For the sake of demonstration, there is also a filter for Profile. For more information and screenshots of the dashboard itself, refer to the Grafana website where the dashboard can be downloaded.

The dashboard contains separate rows for:

- application health status: 4xx and 5xx responses (you can add slow responses as well, as an exercise)
- device system statistics: CPU, memory, TMM traffic in/out, interface traffic in/out
- virtual server traffic in/out and server connections
- pool traffic in/out and server connections
- pool members traffic in/out and server connections
- profile detail statistics

The variable queries used for this dashboard are based on the structure of the metrics data you will find in the Graphite admin UI.

Conclusion

In this article we have demonstrated how the F5 Automation Toolchain, and in particular its Telemetry Streaming component, is a perfect match for integration with popular DevOps telemetry solutions. For a fully automated scenario demonstrating the usage of Declarative Onboarding (DO), Application Services 3 (AS3) and Telemetry Streaming (TS) with automated Grafana integration, you can refer to the following GitHub repo.

Telemetry streaming - One click deploy using Ansible

In this article we will focus on using Ansible to enable and install Telemetry Streaming (TS) and its associated dependencies.

Telemetry streaming

The F5 BIG-IP is a full proxy architecture, which essentially means that the BIG-IP LTM completely understands the end-to-end connection, enabling it to be an endpoint and originator of client- and server-side connections. This gives the BIG-IP traffic statistics from the client to the BIG-IP and from the BIG-IP to the server, giving the user an end-to-end view of their network statistics. To gain meaningful insight, you must be able to gather your data and statistics (telemetry) into a useful place. Telemetry Streaming is an extension designed to declaratively aggregate, normalize, and forward statistics and events from the BIG-IP to a consumer application. You can learn more about Telemetry Streaming here, but let's get to Ansible.

Enable and Install using Ansible

The Ansible playbook below performs the following tasks:

- Grab the latest Application Services 3 (AS3) and Telemetry Streaming (TS) versions
- Download the AS3 and TS packages and install them on the BIG-IP using a role
- Deploy AS3 and TS declarations on the BIG-IP using a role from Ansible Galaxy
- If AVR logs are needed for TS, provision the BIG-IP AVR module and configure AVR to point to TS

Prerequisites

- Supported on BIG-IP 14.1+ versions
- If AVR is required to be configured, make sure there is enough memory for the module to be enabled along with all the other BIG-IP modules that are provisioned in your environment
- The TS data is being pushed to Azure Log Analytics (modify it to use your own consumer). If Azure logs are being used, change your TS JSON file with the correct workspace ID and shared key
- Ansible is installed on the host from where the scripts are run
- The following files are present in the working directory:
  - Variable file (vars.yml)
  - TS poller and listener setup (ts_poller_and_listener_setup_declaration.json)
  - Declare logging profile (as3_ts_setup_declaration.json)
  - Ansible playbook (ts_workflow.yml)

Get started

Download the following roles from Ansible Galaxy.

```
ansible-galaxy install f5devcentral.f5app_services_package --force
```

This role performs a series of steps needed to download and install RPM packages on the BIG-IP that are part of the F5 automation toolchain. Read through the prerequisites for the role before installing it.

```
ansible-galaxy install f5devcentral.atc_deploy --force
```

This role deploys the declaration using the RPM package installed above. Read through the prerequisites for the role before installing it.

By default, roles get installed into the /etc/ansible/roles directory.

Next copy the below contents into a file named vars.yml and change the variables to reflect your environment:

```yaml
# BIG-IP MGMT address and username/password
f5app_services_package_server: "xxx.xxx.xxx.xxx"
f5app_services_package_server_port: "443"
f5app_services_package_user: "*****"
f5app_services_package_password: "*****"
f5app_services_package_validate_certs: "false"
f5app_services_package_transport: "rest"

# URI from where the latest RPM version and package will be downloaded
ts_uri: "https://github.com/F5Networks/f5-telemetry-streaming/releases"
as3_uri: "https://github.com/F5Networks/f5-appsvcs-extension/releases"

# If AVR module logs are needed then set to 'yes', else leave it as 'no'
avr_needed: "no"

# Virtual servers in your environment to assign the logging profiles (if AVR is set to 'yes')
virtual_servers:
  - "vs1"
  - "vs2"
```
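The TS declaration in the next step sends data to Azure Log Analytics, so it needs the workspace ID and shared key mentioned in the prerequisites. If you have the Azure CLI available, one way to look them up is sketched below; the resource group and workspace names are placeholders:

```
# Workspace ID (the "customerId" property of the Log Analytics workspace)
az monitor log-analytics workspace show \
  --resource-group <resource-group> --workspace-name <workspace-name> \
  --query customerId -o tsv

# Primary shared key for the same workspace
az monitor log-analytics workspace get-shared-keys \
  --resource-group <resource-group> --workspace-name <workspace-name> \
  --query primarySharedKey -o tsv
```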
{ "class": "Telemetry", "controls": { "class": "Controls", "logLevel": "debug" }, "My_Poller": { "class": "Telemetry_System_Poller", "interval": 60 }, "My_Consumer": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "workspaceId": "<<workspace-id>>", "passphrase": { "cipherText": "<<sharedkey>>" }, "useManagedIdentity": false, "region": "eastus" } } Next copy the below contents into a file named as3_ts_setup_declaration.json { "class": "ADC", "schemaVersion": "3.10.0", "remark": "Example depicting creation of BIG-IP module log profiles", "Common": { "Shared": { "class": "Application", "template": "shared", "telemetry_local_rule": { "remark": "Only required when TS is a local listener", "class": "iRule", "iRule": "when CLIENT_ACCEPTED {\n node 127.0.0.1 6514\n}" }, "telemetry_local": { "remark": "Only required when TS is a local listener", "class": "Service_TCP", "virtualAddresses": [ "255.255.255.254" ], "virtualPort": 6514, "iRules": [ "telemetry_local_rule" ] }, "telemetry": { "class": "Pool", "members": [ { "enable": true, "serverAddresses": [ "255.255.255.254" ], "servicePort": 6514 } ], "monitors": [ { "bigip": "/Common/tcp" } ] }, "telemetry_hsl": { "class": "Log_Destination", "type": "remote-high-speed-log", "protocol": "tcp", "pool": { "use": "telemetry" } }, "telemetry_formatted": { "class": "Log_Destination", "type": "splunk", "forwardTo": { "use": "telemetry_hsl" } }, "telemetry_publisher": { "class": "Log_Publisher", "destinations": [ { "use": "telemetry_formatted" } ] }, "telemetry_traffic_log_profile": { "class": "Traffic_Log_Profile", "requestSettings": { "requestEnabled": true, "requestProtocol": "mds-tcp", "requestPool": { "use": "telemetry" }, "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\"" } } } } } NOTE: To better understand the above declarations check out our clouddocs page: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/telemetry-system.html Next copy the below contents into a file named ts_workflow.yml - name: Telemetry streaming setup hosts: localhost connection: local any_errors_fatal: true vars_files: vars.yml tasks: - name: Get latest AS3 RPM name action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release: "{{as3_output.stdout_lines[0]}}" - name: Get latest AS3 RPM tag action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release_tag: "{{as3_output.stdout_lines[0]}}" - name: Get latest TS RPM name action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release: "{{ts_output.stdout_lines[0]}}" - name: Get latest TS RPM tag action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release_tag: "{{ts_output.stdout_lines[0]}}" - name: Download and Install AS3 and TS RPM ackages to BIG-IP using role include_role: name: f5devcentral.f5app_services_package vars: f5app_services_package_url: "{{item.uri}}/download/{{item.release_tag}}/{{item.release}}?raw=true" 
f5app_services_package_path: "/tmp/{{item.release}}" loop: - {uri: "{{as3_uri}}", release_tag: "{{as3_release_tag}}", release: "{{as3_release}}"} - {uri: "{{ts_uri}}", release_tag: "{{ts_release_tag}}", release: "{{ts_release}}"} - name: Deploy AS3 and TS declaration on the BIG-IP using role include_role: name: f5devcentral.atc_deploy vars: atc_method: POST atc_declaration: "{{ lookup('template', item.file) }}" atc_delay: 10 atc_retries: 15 atc_service: "{{item.service}}" provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" loop: - {service: "AS3", file: "as3_ts_setup_declaration.json"} - {service: "Telemetry", file: "ts_poller_and_listener_setup_declaration.json"} #If AVR logs need to be enabled - name: Provision BIG-IP with AVR bigip_provision: provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" module: "avr" level: "nominal" when: avr_needed == "yes" - name: Enable AVR logs using tmsh commands bigip_command: commands: - modify analytics global-settings { offbox-protocol tcp offbox-tcp-addresses add { 127.0.0.1 } offbox-tcp-port 6514 use-offbox enabled } - create ltm profile analytics telemetry-http-analytics { collect-geo enabled collect-http-timing-metrics enabled collect-ip enabled collect-max-tps-and-throughput enabled collect-methods enabled collect-page-load-time enabled collect-response-codes enabled collect-subnets enabled collect-url enabled collect-user-agent enabled collect-user-sessions enabled publish-irule-statistics enabled } - create ltm profile tcp-analytics telemetry-tcp-analytics { collect-city enabled collect-continent enabled collect-country enabled collect-nexthop enabled collect-post-code enabled collect-region enabled collect-remote-host-ip enabled collect-remote-host-subnet enabled collected-by-server-side enabled } provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" when: avr_needed == "yes" - name: Assign TCP and HTTP profiles to virtual servers bigip_virtual_server: provider: server: "{{ f5app_services_package_server }}" server_port: "{{ f5app_services_package_server_port }}" user: "{{ f5app_services_package_user }}" password: "{{ f5app_services_package_password }}" validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}" transport: "{{ f5app_services_package_transport }}" name: "{{item}}" profiles: - http - telemetry-http-analytics - telemetry-tcp-analytics loop: "{{virtual_servers}}" when: avr_needed == "yes" Now execute the playbook: ansible-playbook ts_workflow.yml Verify Login to the BIG-IP UI Go to menu iApps->Package Management LX. 
Conclusion

The Telemetry Streaming (TS) extension is very powerful and is capable of sending much more information than described above. Take a look at the complete list of logs as well as the consumer applications supported by TS over on CloudDocs: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/using-ts.html