Application observability (Open Telemetry Tracing)
Hello, do you, or your customers, need BIG-IP to deliver OTEL tracing? As far as I know it won't be implemented in BIG-IP classic, but I've opened an RFE asking for implementation of OpenTelemetry (distributed) tracing on BIG-IP Next: RFE (Bug alias 1621853) [RFE] Implement OTEL traces. If you need it, don't hesitate to open a support case and link that RFE ID; that will give it more weight for prioritization.
Telemetry streaming to Elasticsearch

Hi all,

I want to send ASM logging to Elasticsearch, and I am following a couple of threads, like this one from Greg. What I understand is that I need to send an AS3 declaration and a TS declaration, but a couple of things are not entirely clear to me:

1. Can I remove the iRule, Service_TCP, Pool, Log_Destination, Log_Publisher and Traffic_Log_profile declarations from the AS3 declaration JSON? In the example, telemetry_asm_security_log_profile does not seem to depend on these.

2. In the AS3 declaration JSON an IP address is specified, 255.255.255.254 (perhaps just an example, since it is a subnet mask), and the TS declaration has another, 172.16.60.194. How is the IP in the servers section of the AS3 declaration related to the one in the consumer part of the TS declaration?

3. In telemetry_asm_security_log_profile the field remoteStorage is set to splunk. According to the reference guide (Reference Guide: security-log-profile-application-object), the allowed values are "remote", "splunk", "arcsight" and "bigiq". I would opt for just "remote". Is that the correct choice?

Regards, Hans
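For reference, a minimal TS declaration pointing a consumer at Elasticsearch might look like the sketch below. This is a hedged example, not a verified answer: the property names (type "ElasticSearch", "index", "apiVersion") should be double-checked against the TS consumer reference documentation, and the host, port and index values are placeholders.

```json
{
    "class": "Telemetry",
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "My_Elasticsearch": {
        "class": "Telemetry_Consumer",
        "type": "ElasticSearch",
        "host": "elasticsearch.example.com",
        "port": 9200,
        "protocol": "https",
        "index": "f5-asm",
        "apiVersion": "7"
    }
}
```

The listener port here (6514) is where the ASM logging profile from the AS3 side would send events, which is how the two declarations relate to each other.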
Streaming Telemetry Errors to Kafka

Has anyone seen errors like the following in the restnoded.log file?

Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry.f5telemetry_default::My_System::SystemPoller_1] Error: EndpointLoader.loadEndpoint: provisioning: Error: Bad status code: 500 Server Error for http://localhost:8100/mgmt/tm/sys/provision
Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry.f5telemetry_default::My_System::SystemPoller_1] Error: EndpointLoader.loadEndpoint: bashDisabled: Error: Bad status code: 500 Server Error for http://localhost:8100/mgmt/tm/sys/db/systemauth.disablebash
Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry.f5telemetry_default::My_System::SystemPoller_1] Error: SystemStats._loadData: provisioning (undefined): Error: Bad status code: 500 Server Error for http://localhost:8100/mgmt/tm/sys/provision
Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry.f5telemetry_default::My_System::SystemPoller_1] Error: SystemStats._loadData: bashDisabled (undefined): Error: Bad status code: 500 Server Error for http://localhost:8100/mgmt/tm/sys/db/systemauth.disablebash
Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry.f5telemetry_default::My_System::SystemPoller_1] Error: SystemStats._processProperty: provisioning (provisioning::items): Error: Bad status code: 500 Server Error for http://localhost:8100/mgmt/tm/sys/provision
Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry.f5telemetry_default::My_System::SystemPoller_1] Error: SystemStats._processProperty: bashDisabled (bashDisabled::value): Error: Bad status code: 500 Server Error for http://localhost:8100/mgmt/tm/sys/db/systemauth.disablebash
Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry.f5telemetry_default::My_System::SystemPoller_1] Bad status code: 500 Server Error for http://localhost:8100/mgmt/tm/sys/provision

I pushed a JSON file similar to the following (a few fields redacted with variables):

{
    "class": "Telemetry",
    "schemaVersion": "1.33.0",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60,
            "enable": true
        },
        "enable": true,
        "host": "localhost",
        "port": 8100,
        "protocol": "http",
        "allowSelfSignedCert": false
    },
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514,
        "enable": true
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Kafka",
        "topic": "myTopic",
        "host": "myHost",
        "protocol": "binaryTcpTls",
        "port": 9093,
        "allowSelfSignedCert": false,
        "enable": true
    }
}

What could be causing this? I tried turning logging up to debug and trace, but didn't have much luck: debug didn't show much more, and I could not locate the trace file.

Thanks in advance, Josh
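Not a root-cause fix, but when restnoded.log fills up with repeats like this, it can help to tally exactly which endpoints and status codes recur before opening a support case. A small sketch, assuming only that the log lines match the "Bad status code: NNN ... for URL" shape shown above:

```python
import re
from collections import Counter

def failing_endpoints(log_lines):
    """Tally (status code, endpoint URL) pairs from Telemetry Streaming
    poller errors, collapsing the noisy repeats into a short summary."""
    pattern = re.compile(r"Bad status code: (\d+).* for (\S+)")
    counts = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            counts[(match.group(1), match.group(2))] += 1
    return counts

sample = [
    "Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry...SystemPoller_1] "
    "Error: Bad status code: 500 Server Error for "
    "http://localhost:8100/mgmt/tm/sys/provision",
    "Fri, 13 Oct 2023 12:45:34 GMT - severe: [telemetry...SystemPoller_1] "
    "Error: Bad status code: 500 Server Error for "
    "http://localhost:8100/mgmt/tm/sys/db/systemauth.disablebash",
]
for (code, url), n in failing_endpoints(sample).items():
    print(code, url, n)
```

Running this over the real log (e.g. lines read from /var/log/restnoded/restnoded.log) shows at a glance whether the failures are confined to a couple of endpoints, which is useful detail for a case.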
F5 Telemetry to Node Exporter

Hello,

I want to stream F5 Telemetry to node_exporter, because node_exporter is integrated with Oracle Cloud. However, the node_exporter config accepts only HTTP URLs, while, as we know, the F5 endpoint is HTTPS and also requires a username/password. I have tested the endpoint successfully in Postman. Is there any workaround for this?
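One workaround people use in situations like this (a hedged sketch, not an F5-documented feature) is a small relay script run on a schedule: it pulls the HTTPS endpoint with basic auth and writes the response to a file that node_exporter's textfile collector can expose. This assumes the metrics are exposed via the TS Prometheus pull consumer, so the body is already in Prometheus text format; the URL, credentials and output path below are all placeholders.

```python
import base64
import ssl
import urllib.request

def basic_auth_request(url, user, password):
    """Build a request carrying an HTTP Basic Authorization header,
    since node_exporter itself cannot attach credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": "Basic " + token})

def pull_metrics(url, user, password, out_path, verify_tls=True):
    """Fetch the HTTPS endpoint and write the body to a .prom file
    for node_exporter's textfile collector (hypothetical path)."""
    ctx = ssl.create_default_context()
    if not verify_tls:  # lab use only, e.g. self-signed device certs
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    req = basic_auth_request(url, user, password)
    with urllib.request.urlopen(req, context=ctx) as resp:
        body = resp.read()
    with open(out_path, "wb") as f:
        f.write(body)

# Example (placeholder values), run from cron:
# pull_metrics("https://bigip.example/mgmt/shared/telemetry/pullconsumer/My_Prometheus",
#              "prometheus", "secret", "/var/lib/node_exporter/textfile/f5.prom")
```

Whether Oracle Cloud's node_exporter integration picks up textfile-collector metrics is something I can't confirm, so treat this as an avenue to test rather than a known-good recipe.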
Prometheus and basic auth

Dear all,

I have set up telemetry streaming so that a remote Prometheus server can scrape metrics. I used this advice to use a guest account for "basic auth" from Prometheus: https://devcentral.f5.com/s/articles/icontrol-rest-fine-grained-role-based-access-control-30773

Here is the Prometheus scrape_configs entry:

  - job_name: bigip
    honor_timestamps: true
    scrape_interval: 10s
    scrape_timeout: 10s
    metrics_path: /mgmt/shared/telemetry/pullconsumer/My_Prometheus
    scheme: https
    basic_auth:
      username: prometheus
      password: <secret>
    tls_config:
      ca_file: /etc/ssl/certs/ca.crt
      cert_file: /etc/ssl/certs/prometheus.crt
      key_file: /etc/ssl/certs/prometheus.key
      insecure_skip_verify: false
    static_configs:
      - targets:
          - lb5

My problem is excessive warning messages in the logs:

Dec 13 16:33:37 lb5 warning httpd[13888]: [warn] [client XXXX] AUTHCACHE Error processing cookie 7BA470C4F1E2F722E1685046756D1F1A70621E38 - Cookie user mismatch

The problem is clearly identified (K11140735), but changing the PAM idle timeout is not a solution, as Prometheus scrapes every 10s, which is too low for a usual web UI idle timeout. I was wondering if there is a fix or another way to do it? Using an F5 token is not a solution, as Prometheus does not seem to support it in its scrape_config section (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config).

Thanks for your help ;-)
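Since the per-scrape basic auth is what generates the cookie churn, one possible avenue (an untested sketch, with all names as placeholders except the documented /mgmt/shared/authn/login endpoint) is a small local relay between Prometheus and the BIG-IP: it logs in once, caches the X-F5-Auth-Token, refreshes it shortly before expiry, and forwards scrapes with the token header instead of credentials. The token-handling core could look like this; the HTTP-proxy part is omitted, and I can't confirm this sidesteps K11140735 in practice.

```python
import json
import time
import urllib.request

class TokenCache:
    """Cache an iControl REST auth token, refreshing a little before expiry
    (a 1200s token lifetime is assumed here; adjust to your system)."""

    def __init__(self, lifetime=1200.0, margin=60.0):
        self.lifetime = lifetime
        self.margin = margin
        self.token = None
        self.obtained = 0.0

    def needs_refresh(self, now=None):
        now = time.time() if now is None else now
        return self.token is None or (now - self.obtained) > (self.lifetime - self.margin)

    def store(self, token, now=None):
        self.token = token
        self.obtained = time.time() if now is None else now

def fetch_token(base_url, user, password):
    """Log in via /mgmt/shared/authn/login and return the token string
    to send in the X-F5-Auth-Token header on scrape requests."""
    body = json.dumps({"username": user, "password": password,
                       "loginProviderName": "tmos"}).encode()
    req = urllib.request.Request(base_url + "/mgmt/shared/authn/login", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]["token"]
```

Prometheus would then scrape the relay over plain HTTP on localhost, so no basic auth hits httpd on the BIG-IP at all.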
Need help extracting fields from Telemetry streamed JSON in Splunk

Hi,

We are streaming telemetry data from an F5 into Splunk, specifically client-side bits in and out for all virtual servers. That part is working. In Splunk, however, I cannot extract or reference the field to get to the metrics. I've tried various arrangements of spath, such as:

index="f5_ltm" | spath "virtualServers{}.clientSide.bitsIn" output=n | table n

but none yield any data from the search. Any help or pointers would be greatly appreciated. Thank you.
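One thing worth checking first is whether the event structure actually matches the spath path. As I understand Telemetry Streaming system-poller output (an assumption to verify against your raw events), virtualServers is a JSON object keyed by virtual-server name, not an array, so the array notation virtualServers{} would never match. A quick Python walk over a sample payload shaped that way illustrates the difference; the VS names and the dotted stat key are illustrative:

```python
import json

# Sample shaped the way TS system-poller output is understood here:
# an object keyed by VS name, with dotted stat names like "clientside.bitsIn".
sample = json.loads("""
{
  "virtualServers": {
    "/Common/vs_app1": {"clientside.bitsIn": 1024, "clientside.bitsOut": 2048},
    "/Common/vs_app2": {"clientside.bitsIn": 512, "clientside.bitsOut": 256}
  }
}
""")

def bits_in_per_vs(payload):
    """Walk the name-keyed object; an array path like virtualServers{}
    would find nothing in data shaped like this."""
    return {name: stats.get("clientside.bitsIn")
            for name, stats in payload.get("virtualServers", {}).items()}

print(bits_in_per_vs(sample))
```

If your raw events look like this, the spath path would need the exact virtual-server name in it (or a different extraction approach), rather than {} array notation, and note the lowercase "clientside" spelling versus "clientSide" in the search above.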