Application Study Tool: Bring your own Prometheus
The Application Study Tool (AST) from F5 is a powerful utility for monitoring and observing your BIG-IP ecosystem. It provides valuable insights into the performance of your BIG-IP, the applications it delivers, potential threats, and traffic patterns.
The default installation includes its own instance of Prometheus, the time-series database where the Application Study Tool stores the metrics it collects. However, some customers prefer to use their existing Prometheus instances, which are already serving as databases for other applications.
Their reasons vary. In some cases, they want to leverage a dedicated team of Prometheus specialists to maintain and optimize their own custom configuration of this technology. In other cases, they find it easier to consolidate metric data from a variety of sources into one database. And, in still other cases, they want to leverage the enterprise Prometheus licenses they have already purchased. There are still more reasons beyond these. Whatever the reason, AST can accommodate a bring-your-own-Prometheus deployment with very little effort.
In this guide, we will discuss options for using your own Prometheus instance with the Application Study Tool. Please note that not all options or configurations will be covered, but hopefully, this blog provides enough guidance to get you started.
If you somehow ended up here but really just wanted to use your own instance of Grafana, please see my other blog, Displaying Application Study Tool (AST) Dashboards in Your Own Grafana Instance.
Alternate Prometheus Instance
Note: The following steps assume you already have an alternate Prometheus instance running. If you need to spin up an instance to test out this functionality, see the steps for doing this at the end of this section.
The most common Prometheus deployment request from Application Study Tool users is to swap out the AST Prometheus instance with their own instance. It can be running as a container, as a native executable, or as a remotely hosted cloud instance. You just need reachability, and you need to know the hostname and port number. (The default port number is 9090, but that can be changed.)
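A quick way to confirm reachability from the host running AST is to hit the Prometheus health endpoint from the command line. This is just a sanity check; substitute your own hostname and port (the IP below is the example instance used later in this post).

# From the AST host, confirm the remote Prometheus answers on its HTTP port
curl -sf http://13.83.83.136:9090/-/healthy && echo reachable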
As long as the deployment uses the basic Prometheus configuration, swapping the default instance for your own is straightforward. First, change the endpoint in the ./services/otel_collector/defaults/bigip-scraper-config.yaml file to point to your alternate Prometheus endpoint.
Under exporters.otlphttp/metrics-local, change:
endpoint: http://prometheus:9090/api/v1/otlp
to
endpoint: http://13.83.83.136:9090/api/v1/otlp
(My external Prometheus instance was running on 13.83.83.136.)
The exporters section of the file will now look like this:
exporters:
  otlphttp/metrics-local:
    endpoint: http://13.83.83.136:9090/api/v1/otlp
  otlp/f5-datafabric:
    endpoint: us.edge.df.f5.com:443
    headers:
      # Requires Sensor ID and Token to authenticate.
      Authorization: "kovacs ${env:SENSOR_ID} ${env:SENSOR_SECRET_TOKEN}"
      X-F5-OTEL: "GRPC"
    tls:
      insecure: false
      ca_file: /etc/ssl/certs/ca-certificates.pem
  debug/bigip:
    verbosity: basic
    sampling_initial: 5
    sampling_thereafter: 200
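One requirement worth calling out: the Otel Collector pushes metrics to Prometheus over OTLP, so the alternate instance must have OTLP ingestion enabled. On a self-managed Prometheus 2.x server this is the otlp-write-receiver feature flag; managed services or newer versions may expose it differently, so check the documentation for your instance. A minimal sketch of the relevant startup flags, assuming a stock Prometheus 2.x binary:

# Example only: enable the OTLP write receiver on a self-managed Prometheus 2.x server
prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --enable-feature=otlp-write-receiver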
Then restart the tool:
sudo docker compose down
sudo docker compose up
(For the container pros reading this, you can instead just restart the Otel Collector container and leave everything else running.)
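For example, assuming the collector service in AST's docker-compose.yaml is named otel-collector (adjust the name to match your compose file), something like this restarts just the collector:

# Restart only the collector; Prometheus and Grafana keep running
sudo docker compose restart otel-collector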
Note that the AST Grafana dashboards will no longer show new data unless they are reconfigured to point to this other Prometheus instance.
Verify this was successful by browsing to the dashboard of the alternate Prometheus instance (http://[hostname]:9090 if left as the default). Enter “f5” in the expression bar, select one of the metrics from the list that appears as you type, and click the "Execute" button on the right.
Verify that metrics appear in the results panel. If they do, metrics are now flowing successfully to the new Prometheus instance.
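If you prefer the command line to the GUI, you can also ask the Prometheus HTTP API for its known metric names and filter for the AST ones. This assumes the AST metric names contain "f5", as the expression-bar autocomplete suggests:

# List metric names known to the new Prometheus instance and filter for AST metrics
curl -s 'http://13.83.83.136:9090/api/v1/label/__name__/values' | tr ',' '\n' | grep f5 | head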
Steps to quickly spin up a Prometheus instance for testing purposes:
Launch a new Prometheus container. If it is running on the same host as the rest of the AST containers, be sure to set it to run on a port other than 9090 to avoid conflicts. In this example, I used port 9091. Because the Otel Collector pushes metrics over OTLP, enable the OTLP write receiver here as well (see the note above).
$ docker run -d --name=prometheus2 -p 9091:9090 prom/prometheus:v2.54.1 --config.file=/etc/prometheus/prometheus.yml --enable-feature=otlp-write-receiver
Verify it is running by issuing the command “docker ps” and then browsing to http://[hostname]:9091 to check that the Prometheus GUI comes up.
Multiple Prometheus Instances
If you don’t want to replace the default AST Prometheus instance, you can add your own instance as a second database while the original continues to collect data. Follow these steps to make the Otel Collector send data to multiple Prometheus instances at the same time.
First, edit services/otel_collector/defaults/bigip-scraper-config.yaml. Add a second exporter to the exporters section. In this example, I called it "metrics-remote", but you can choose any valid name for it. Take note of this name as you will need to reference it in the next step.
receivers: ${file:/etc/otel-collector-config/receivers.yaml}

processors:
  batch/local:
  batch/f5-datafabric:
    send_batch_max_size: 8192
  # Only export data to f5 (if enabled) every 300s
  interval/f5-datafabric:
    interval: 300s
  # Apply the following transformations to metrics bound for F5 Datafabric
  attributes/f5-datafabric:
    actions:
      - key: dataType
        action: upsert
        value: bigip-ast-metric

exporters:
  otlphttp/metrics-local:
    endpoint: http://prometheus:9090/api/v1/otlp
  otlphttp/metrics-remote:
    endpoint: http://192.168.0.97:9090/api/v1/otlp
  otlp/f5-datafabric:
    endpoint: us.edge.df.f5.com:443
    headers:
      # Requires Sensor ID and Token to authenticate.
      Authorization: "kovacs ${env:SENSOR_ID} ${env:SENSOR_SECRET_TOKEN}"
      X-F5-OTEL: "GRPC"
    tls:
      insecure: false
      ca_file: /etc/ssl/certs/ca-certificates.pem
  debug/bigip:
    verbosity: basic
    sampling_initial: 5
    sampling_thereafter: 200

service:
  # Changed in upstream otel collector, default only responds on localhost
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: '0.0.0.0'
                port: 8888
  pipelines: ${file:/etc/otel-collector-config/pipelines.yaml}
Next, edit services/otel_collector/pipelines.yaml within your Application Study Tool directory. Add a second exporter to the exporters section. This will reference the exporter you defined in the services/otel_collector/defaults/bigip-scraper-config.yaml file above. Be sure to use the same name. In this example, I use otlphttp/metrics-remote.
metrics/f5-datafabric:
  exporters:
    - otlp/f5-datafabric
    - debug/bigip
  processors:
    - interval/f5-datafabric
    - attributes/f5-datafabric
    - batch/f5-datafabric
  receivers:
    - bigip/1
metrics/local:
  exporters:
    - otlphttp/metrics-local
    - otlphttp/metrics-remote
    - debug/bigip
  processors:
    - batch/local
  receivers:
    - bigip/1
    - bigip/2
    - bigip/3
    - bigip/4
Now you are ready to restart Docker Compose.
sudo docker compose down
sudo docker compose up
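After the containers come back up, it is worth tailing the collector logs for export failures, again assuming the collector service is named otel-collector in your compose file:

# Watch for export errors, e.g. connection refused or 404s on /api/v1/otlp
sudo docker compose logs -f otel-collector | grep -i error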
Once AST is back up, verify each Prometheus instance by navigating to its GUI (http://[hostname]:9090, or http://[hostname]:9091 if you followed the steps above for deploying a second instance on the same host). Again, type “f5” in the expression bar, select one of the metrics from the list that appears as you type, and click the "Execute" button on the right.
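The same API check used earlier works from the command line as well; a quick loop over both instances confirms each is receiving AST metrics (adjust the hostname and ports to match your environment):

# Count how many AST metric names each Prometheus instance knows about
for port in 9090 9091; do
  echo "port $port:"
  curl -s "http://localhost:$port/api/v1/label/__name__/values" | tr ',' '\n' | grep -c f5
done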
You can further verify your original AST stack is still running correctly by opening the Grafana dashboard (http://[hostname]:3000 is the default endpoint) and ensuring data is still flowing in its dashboards.
Adding AST Dashboards to Your External Grafana Instance
Whether you are moving these metrics over to your own Prometheus instance or you are adding your instance as a secondary database, you probably now need to consume this data from a separate application. If you are using Grafana for this, you can follow the steps in my previous blog, Displaying Application Study Tool (AST) Dashboards in Your Own Grafana Instance, to import the AST dashboards into your Grafana instance and view these metrics.
Take particular note of the Export a Dashboard from AST, Connect the New Grafana Instance to the AST Prometheus Instance, and Import the Dashboard into the New Grafana Instance sections. They walk you through recording the dashboard settings, pointing the data source URL at your Prometheus instance, and setting up the new dashboards in your own Grafana instance.
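If your Grafana instance is managed with provisioning files rather than through the UI, the data source step can also be expressed as a small provisioning file. This is a minimal sketch, not taken from the other blog; the data source name and file path are up to you, and the URL should point at whichever Prometheus instance holds your AST metrics.

# Example provisioning file, e.g. /etc/grafana/provisioning/datasources/ast-prometheus.yaml
apiVersion: 1
datasources:
  - name: AST Prometheus
    type: prometheus
    access: proxy
    url: http://13.83.83.136:9090
    isDefault: false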
Caveats
This blog has covered two basic scenarios: using an alternate Prometheus instance instead of the default, and using a second instance in parallel with the default instance. It assumes the configuration of the second instance is similar to the default instance that is included as part of AST. This is a very basic configuration. Some settings outside of this standard configuration (for example, authentication requirements, using a non-default port, or configuring Prometheus to pull data, instead of receiving pushed data, from the Otel Collector) will require additional configuration outside the scope of this guide.
Given the large number of configuration options available in Prometheus, there are many other caveats missing from this guide. However, I hope that this has helped to get you started in customizing AST to use your own Prometheus instance.
If you have gotten value from using your own Prometheus instance, feel free to post your use case and what you did in the comments below, as many of our readers will find it valuable.