BIG-IP Telemetry Streaming to Azure

There are a few other articles on DevCentral related to this topic, but they contain some inconsistencies and don't cover everything, so I thought I'd set out all the steps in a single guide - covering logging of DNS, AFM, ASM, and LTM.

Steps

The first important point is that you have to use the REST API to configure Telemetry Streaming - there isn't a way to provision it using TMSH or the GUI. The configuration is done by POSTing a JSON declaration to BIG-IP Telemetry Streaming's declarative REST API endpoint. For Azure, the details are here:

https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/setting-up-consumer.html#microsoft-azure-log-analytics

I like to use AS3 where possible, so I provide AS3 code snippets, but I'll also show the config in the GUI.

The steps are:

  1. Download and install AS3 and Telemetry Streaming
  2. Create the Azure workspace (Sentinel or Log Analytics)
  3. Send the TS declaration
  4. Send the base AS3 declaration
  5. Add AFM logs
  6. Add ASM logs
  7. Add DNS logs
  8. Add LTM logs

This article is a result of using TS in Azure for nearly 3 years, over which time I've gained a good understanding of how it works. However, there are updates and changes all the time, so I'd welcome any feedback if any part of the article is incorrect or out of date.

Download and install AS3 and Telemetry Streaming

To create the configuration needed for streaming to Sentinel, you first need to download the iControl LX plug-ins. These are available as RPM files in the F5 GitHub repositories for AS3 and Telemetry Streaming.

Links are:

  • AS3: https://github.com/F5Networks/f5-appsvcs-extension
  • Telemetry Streaming: https://github.com/F5Networks/f5-telemetry-streaming

On the right-hand side of each GitHub page you'll see a link to the latest release:

- it's the RPM file you need (usually the biggest file!).

I download the files to my PC and then import them using the GUI under iApps / Package Management LX:

Some key points:

  • Your BIG-IP needs to be running at least version 13.1
  • To send a declaration to TS or AS3, your account must have the Administrator role - it's usually recommended to use the 'admin' username.
  • I use a REST API client to send declarations. I use Insomnia, but Postman is another popular alternative - or you can use curl, as in the sketch below.
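
Once the RPMs are imported, a quick way to confirm that both extensions are installed and responding is to query their info endpoints (a minimal curl sketch - the management address and credentials are placeholders):

# Check the installed Telemetry Streaming version
curl -sku admin:<password> https://<big-ip-mgmt>/mgmt/shared/telemetry/info

# Check the installed AS3 version
curl -sku admin:<password> https://<big-ip-mgmt>/mgmt/shared/appsvcs/info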

Setting up Azure Workspace

You can create logging to Azure from the BIG-IP using a pre-built F5 connector available as part of Azure Sentinel. Alternatively, you can just set up a Log Analytics Workspace and stream the logs into it. I'll explain both methods:

Using Azure Sentinel

To create a Sentinel instance, you first need to create a Log Analytics Workspace and then add Sentinel to it. If there are no workspaces defined in the region where you are adding Sentinel, the ARM template will prompt you to add one. Once the workspace is created, you can add Sentinel to it:

Once created you need the workspace credentials to allow the BIG-IP to connect and send data.

Azure Workspace Credentials

To be able to send logs into the Azure workspace, you need two important pieces of data: the "Log Analytics Workspace ID" and the "Primary key".

F5 provide a data connector for Sentinel which is an easy way to get this information. On the Sentinel page select the 'Content Management' / 'Content Hub' blade (1), search for 'f5' and select the 'F5 Advanced WAF Integration via Telemetry Streaming' connector (2), then click on the 'Install' button (3):

Once installed, on the blade menu, select "Configuration" and "Data connectors". You should see a connector called "F5 BIG-IP". If you select this, and then click "Open connector page":

This will then tell you the Workspace ID and the Primary Key you need (in section "Configuration").

The connector is a handy tool within Sentinel as it monitors and shows you the status of the telemetry coming into Azure, and it feeds the two workbooks which were also added as part of the Content Hub installation in the previous step. We will see this working later...

Using Log Analytics only

Sentinel is a SIEM solution which 'sits' on top of Log Analytics. If you don't need Sentinel's features, then BIG-IP Telemetry Streaming works fine with just a Log Analytics Workspace. Create the workspace from the Azure portal, ideally in the same region as the BIG-IP devices to avoid cross-region data transfer costs when sending data to the workspace, especially if you are using network isolation.

In the Azure Portal search bar, type "Log Analytics workspaces" and click + Create. All that's needed is a name and a region. Once created, navigate to "Settings" and "Agents". In the section "Log Analytics agent instructions" you will see the Workspace ID and the Primary Key you need for the TS declaration:
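
If you prefer the command line, the same can be done with the Azure CLI (a sketch - the resource group, workspace name and region below are placeholders):

az monitor log-analytics workspace create \
    --resource-group rg-telemetry \
    --workspace-name law-bigip-telemetry \
    --location uksouth

# The Workspace ID needed by TS is the workspace's customerId
az monitor log-analytics workspace show \
    --resource-group rg-telemetry \
    --workspace-name law-bigip-telemetry \
    --query customerId -o tsv

# The Primary key
az monitor log-analytics workspace get-shared-keys \
    --resource-group rg-telemetry \
    --workspace-name law-bigip-telemetry \
    --query primarySharedKey -o tsv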

 

Using MSI

Telemetry Streaming v1.11 added support for sending data to Azure with an Azure Managed Service Identity (MSI). An MSI is a great way of maintaining secure access between Azure objects by leveraging Entra ID (formerly Azure AD) to grant access without needing keys. The Primary Workspace key may be regenerated at some point (for example as part of a customer's key rotation policy), and if this happens TS will stop working because Azure will reject the incoming telemetry connection from the BIG-IP.

To use an MSI, create it in the Azure Portal and assign it to the Virtual Machine running the BIG-IP (Security / Identity). I would recommend creating a user-assigned MSI rather than a system-assigned one: a system-assigned identity is restricted to a single resource and only exists for the lifetime of that resource, whereas a user-assigned MSI can be assigned to multiple BIG-IP machines.

Once created, assign the "Log Analytics Contributor" role to the identity on the Log Analytics Workspace (via the "Access control (IAM)" blade on the LAW).
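
The equivalent with the Azure CLI looks roughly like this (a sketch - the resource group, identity, VM and workspace names are placeholders):

# Create a user-assigned managed identity
az identity create --resource-group rg-telemetry --name msi-bigip-telemetry

# Assign it to the VM running the BIG-IP
az vm identity assign --resource-group rg-bigip --name bigip-vm-01 \
    --identities $(az identity show --resource-group rg-telemetry --name msi-bigip-telemetry --query id -o tsv)

# Grant it "Log Analytics Contributor" on the Log Analytics Workspace
az role assignment create \
    --assignee $(az identity show --resource-group rg-telemetry --name msi-bigip-telemetry --query principalId -o tsv) \
    --role "Log Analytics Contributor" \
    --scope $(az monitor log-analytics workspace show --resource-group rg-telemetry --workspace-name law-bigip-telemetry --query id -o tsv)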

 

Send TS declaration

We can now send the TS declaration. The endpoint you need to reach is:

POST   https://{{BIG_IP_DEVICE_IP}}/mgmt/shared/telemetry/declare
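
For example, with curl from a workstation (with the declaration given later in this article saved as ts_declaration.json - the management address and credentials are placeholders):

curl -sku admin:<password> -H "Content-Type: application/json" \
    -X POST https://<big-ip-mgmt>/mgmt/shared/telemetry/declare \
    -d @ts_declaration.json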

Before I give the declaration, there are a few issues I found using TS in Azure which I need to explain...

System Logging Issue

The telemetry logs from the BIG-IP are stored as what are known as "Custom Logs" in the LAW. These are explained in more detail at the end of this article, but the most important thing about them is that they have a limit of 500 columns per Log Type. This was originally causing issues as BIG-IP was creating a set of columns for every property of each named item, and the 500 limit was very soon reached.

F5 had already spotted this issue and fixed it in v1.24 with a "format" option of "propertyBased" on the Consumer class (ref: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/setting-up-consumer.html#additions-to-the-azure-log-analytics-consumer)

However, I found that when ASM is enabled on the BIG-IP, each signature update download was creating a set of additional columns in the System log which eventually took it over the 500 limit again:

This has now been fixed by F5 in TS v1.37.0 with a new "format" value of "propertyBasedV2". This allows asmAttackSignatures to go under the Log Type F5Telemetry_asmAttackSignatures instead of F5Telemetry_system.

If you see any logs either not appearing, or logs stopping on Azure, you can check for this error with the following Kusto query:

Operation | where OperationCategory contains "Ingestion"

 

AVR Logging Issue

When ASM is enabled, it automatically enables AVR as well. This creates an AVR log which has a LOT of data in it. I've noticed that the AVR log is both excessive and can also exceed the 500 column limit due to the mishmash of data in it. Therefore, in my declaration I have made use of the excludeData option in the TS Listener to remove some of the log sources - the column/field 'Entity_s' identifies the source of the data:

  • DNS_Offbox_All - this generates a log for every DNS request which is made. If your BIG-IP is acting as a DNS Cache for your environment, this very quickly becomes a massive log.
  • ProcessCpuUtil - again, this creates a load of additional columns which record the CPU utilisation of every running process on the device. Might be useful to some... not to me!
  • TcpStat - this logs TCP events against each TCP profile on the BIG-IP (whether used or not) every 5 minutes. If you have a busy device, they quickly flood the log.
  • IruleEvents - shows data associated with iRules (times triggered, event counts..etc). I had no use for this data. I use iRules but did not need statistics on how many times an iRule was used.
  • ACL_STAGE and ACL_FORCE - these seemed to be pointless logs related to AFM but not really giving any information which isn't already in the AFM logs. It was duplicated data of no value.
  • There were also a number of other AVR logs which did not seem to create any meaningful data for me. These were:
    • ServerHealth, GtmWideip, BOT DEFENSE EVENT, InterfaceHealth, AsmBypassInfo, FwNatTransSrc, FwNatTransDest
    • I therefore have excluded these log types.

This is not an exhaustive list of entity types in the AVR logs, but hopefully omitting these will (a) reduce your log sizes and (b) prevent the 500 column issue. If you want to analyse what different types (entities) of logs are in the AVR log, the following Kusto query can be run:

F5Telemetry_AVR_CL | summarize count() by Entity_s

This will show the amount of AVR logs by each Entity type (source). You can then run a query for a specific type, analyse the content, and decide whether to filter it or not.
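
For example, to eyeball the content of one entity type before deciding whether to exclude it:

F5Telemetry_AVR_CL
| where Entity_s == "TcpStat"
| take 20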

Declaration

Ok - after all that, here is my declaration. It contains the following:

  • A Telemetry System class - this is needed to generate the device system logging which goes into a log called "F5Telemetry_system_CL"
  • A System Poller - this collects system data at a set interval (60 seconds is a reasonable setting here which produces logs which are not too large but still give good granularity of data).
    • The System Poller also allows us to filter logs using excludeData. We exclude the following:
      • asmAttackSignatures - as explained above, these should no longer appear in System logs, but this is just to make sure!
      • diskLatency - this is a large set of columns storing the disk stats. As we are using VMs in Azure, this info is available within the Azure IaaS service, so I did not see any point in collecting it again at the VM level, especially as the latency is a function of the selected machine type in Azure.
      • location - this is just the SNMP location, a waste of a column name.
      • description - this is just the SNMP description, a waste of a column name.
  • A Telemetry Listener - this is used to listen for and collect event logs it receives on the specified port from configured BIG-IP system services, including LTM, ASM, AFM and AVR.
  • A Telemetry Push Consumer - this is used to push the collected data to Azure. It is here we use the workspace ID and the primary key we collected in the steps above.
{
    "class": "Telemetry",
    "controls": {
        "class": "Controls",
        "logLevel": "info",
        "debug": false
    },
    "telemetry-system-azure": {
        "class": "Telemetry_System",
        "trace": false,
        "allowSelfSignedCert": true,
        "host": "localhost",
        "port": 8100,
        "protocol": "http",
        "systemPoller": [
            "telemetry-systemPoller-azure"
        ]
    },
    "telemetry-systemPoller-azure": {
        "class": "Telemetry_System_Poller",
        "interval": 60,
        "actions": [
            {
                "excludeData": {},
                "locations": {
                    "system": {
                        "asmAttackSignatures": true,
                        "diskLatency": true,
                        "tmstats": true,
                        "location": true,
                        "description": true
                    }
                }
            }
        ]
    },
    "telemetry-listener-azure": {
        "class": "Telemetry_Listener",
        "port": 6514,
        "enable": true,
        "trace": false,
        "match": "",
        "actions": [
            {
                "setTag": {
                    "tenant": "`T`",
                    "application": "`A`"
                },
                "enable": true
            },
            {
                "excludeData": {},
                "ifAnyMatch": [
                    {
                        "Entity": "DNS_Offbox_All"
                    },
                    {
                        "Entity": "ProcessCpuUtil"
                    },
                    {
                        "Entity": "TcpStat"
                    },
                    {
                        "Entity": "IruleEvents"
                    },
                    {
                        "Entity": "ACL_STAGE"
                    },
                    {
                        "Entity": "ACL_FORCE"
                    },
                    {
                        "Entity": "ServerHealth"
                    },
                    {
                        "Entity": "GtmWideip"
                    },
                    {
                        "Entity": "BOT DEFENSE EVENT"
                    },
                    {
                        "Entity": "InterfaceHealth"
                    },
                    {
                        "Entity": "AsmBypassInfo"
                    },
                    {
                        "Entity": "FwNatTransSrc"
                    },
                    {
                        "Entity": "FwNatTransDest"
                    }
                ],
                "locations": {
                    "^.*$": true
                }
            }
        ]
    },
    "telemetry-pushConsumer-azure": {
        "class": "Telemetry_Consumer",
        "type": "Azure_Log_Analytics",
        "format": "propertyBasedV2",
        "trace": false,
        "workspaceId": "{{LOG_ANALYTICS_WORKSPACE_ID}}",
        "passphrase": {
            "cipherText": "{{LOG_ANALYTICS_PRIMARY_KEY}}"
        },
        "useManagedIdentity": false
    }
}

Note: If you are using a managed identity (MSI), the consumer changes to this:

"telemetry-pushConsumer-azure": {
	"class": "Telemetry_Consumer",
	"type": "Azure_Log_Analytics",
	"format": "propertyBasedV2",
	"trace": false,
	"useManagedIdentity": true
}

Look for a "200 OK" response coming back to the REST client.

Telemetry Streaming logs to /var/log/restnoded/restnoded.log on the BIG-IP, which will show errors if there are connectivity problems from the BIG-IP to the LAW.
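
At this point a couple of quick checks are useful (the GET simply returns whatever declaration is currently active):

# Confirm the declaration currently active on the BIG-IP
curl -sku admin:<password> https://<big-ip-mgmt>/mgmt/shared/telemetry/declare

# Watch for consumer or connectivity errors
tail -f /var/log/restnoded/restnoded.log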

 

Adding Non-System Logs

To add logs from the security managers on BIG-IP (AFM, ASM, etc.) you need to create a few AS3 resources to handle the internal routing of logs from the various managers into the telemetry listener we just created above. The resources are:

    • a Log Publisher for the security log profile to link to.
    • a Log Destination formatter with a format type of "splunk", which forwards to a high-speed logging (HSL) destination.
    • a Log Destination of type remote high-speed log (HSL) pointing at a pool which maps to an internal address on TCP port 6514.
    • an LTM Pool which uses a local address.
    • a TCP Virtual Server (vIP) on tcp/6514 with the local address.
    • an iRule for the vIP to remap traffic onto the loopback address (where it will be picked up by the TS listener).
"irule-telemetryLocalRule": {
	"class": "iRule",
	"remark": "Telemetry Streaming",
	"iRule": {
		"base64": "d2hlbiBDTElFTlRfQUNDRVBURUQgcHJpb3JpdHkgNTAwIHsNCiAgbm9kZSAxMjcuMC4wLjEgNjUxNA0KfQ=="
	}
},
"logDestination-telemetryHsl": {
	"class": "Log_Destination",
	"type": "remote-high-speed-log",
	"protocol": "tcp",
	"pool": {
		"use": "pool-telemetry"
	}
},
"logDestination-telemetry": {
	"class": "Log_Destination",
	"type": "splunk",
	"forwardTo": {
		"use": "logDestination-telemetryHsl"
	}
},
"logPublisher-telemetry": {
	"class": "Log_Publisher",
	"destinations": [
		{
			"use": "logDestination-telemetry"
		}
	]
},
"pool-telemetry": {
	"class": "Pool",
	"remark": "Telemetry Streaming to Azure Sentinel",
	"monitors": [
	],
	"members": [
		{
			"serverAddresses": [
				"255.255.255.254"
			],
			"adminState": "enable",
			"servicePort": 6514
		}
	]
},
"vip-telemetryLocal": {
	"class": "Service_TCP",
	"virtualAddresses": [
		"255.255.255.254"
	],
	"iRules": [
		"irule-telemetryLocalRule"
	],
	"pool": "pool-telemetry",
	"remark": "Telemetry Streaming",
	"addressStatus": true,
	"virtualPort": 6514
}

The iRule is base64 encoded in the AS3 declaration above but is just this:

when CLIENT_ACCEPTED priority 500 {
    node 127.0.0.1 6514
}

Now you have a Log Publisher which routes to a local pool mapping to the loopback of the BIG-IP. The TS Listener will then pick this up (notice that the port in the TS declaration object "telemetry-listener-azure" matches the port used by the high-speed logging destination pool - 6514).
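
If you want to sanity-check this plumbing, you can confirm from the BIG-IP bash prompt that something is bound to the listener port (a quick check, assuming the port of 6514 used throughout this article):

# The TS event listener should show as listening on port 6514
netstat -lnp | grep 6514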

 

Loopback Issue

When the virtual server above is created, tmm errors are observed which prevent logging via the Telemetry virtual server iRule, because tmm rejects remapping connections to the loopback address. The following log is seen in /var/log/ltm:

testf5 err tmm1[6506]: 01220001:3: TCL error: /Common/Shared/irule-telemetryLocalRule - disallow self or loopback connection (line 1)TCL error (line 1) (line 1) invoked from within "node 127.0.0.1 6514"

Ref: After an upgrade, iRules using the loopback address may fail and log TCL errors (f5.com)

To fix this, change the following db value:

tmsh modify sys db tmm.tcl.rule.node.allow_loopback_addresses value true
tmsh save sys config

 

Adding AFM logs

The Advanced Firewall Manager allows firewall policy to be defined at a number of points (called "contexts") in the flow of traffic through the F5. A global policy can be applied, or a policy can be added at the Self-IP, Route Domain, or Virtual Server level.

What is important to realize is that there is a pre-built Security Logging Profile for all policies operating at the 'Global' context - called global-network. If your policy is applied as a global policy, you have to change this profile to get logging into Azure.

The profile is here under Security / Event Logs / Logging Profiles:

Click on the 'global-network' profile and in the "Network Firewall" tab set the publisher to the one you have built above. You can also decide what to log - at the least you should log any policy drops or rejects:

For any AFM policies added at any other context, you can create your own logging profile.

The logs produced go into the Azure custom log: F5Telemetry_AFM_CL. A log is produced for every firewall event with the column "action_s" recording the rule match action (Accept, Drop or Reject).
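
As a quick example, the following Kusto query summarises blocked traffic by policy and rule (a sketch - acl_policy_name_s and acl_rule_name_s are the column names I would expect the AFM events to create alongside action_s):

F5Telemetry_AFM_CL
| where action_s in ("Drop", "Reject")
| summarize Hits = count() by acl_policy_name_s, acl_rule_name_s, action_s
| order by Hits desc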

 

Adding ASM, DDoS and IDPS logs

Logging for the Application Security Manager (ASM), Protocol Inspection (IDPS) and DoS Protection features is all done via a Security Logging Profile which is then assigned to the virtual server.

"security-loggingProfile": {
	"class": "Security_Log_Profile",
	"application": {
		"localStorage": false,
		"remoteStorage": "splunk",
		"protocol": "tcp",
		"servers": [
			{
				"address": "127.0.0.1",
				"port": "6514"
			}
		],
		"storageFilter": {
			"requestType": "all"
		}
	},
	"network": {
		"publisher": {
			"use": "logPublisher-telemetry"
		},
		"logRuleMatchAccepts": false,
		"logRuleMatchRejects": true,
		"logRuleMatchDrops": true,
		"logIpErrors": true,
		"logTcpErrors": true,
		"logTcpEvents": true
	},
	"dosApplication": {
		"remotePublisher": {
			"use": "logPublisher-telemetry"
		}
	},
	"dosNetwork": {
		"publisher": {
			"use": "logPublisher-telemetry"
		}
	},
	"protocolDnsDos": {
		"publisher": {
			"use": "logPublisher-telemetry"
		}
	},
	"protocolInspection": {
		"publisher": {
			"use": "logPublisher-telemetry"
		},
		"logPacketPayloadEnabled": true
	}
},

In the example above we are enabling logging for ASM in the "application" property. An important configuration here is the servers setting: ASM logging only works if the address used is 127.0.0.1 and the port is tcp/6514. In the GUI it looks like this:

We have also enabled logging for DoS and IDS/IPS (Protocol Inspection). This is more straightforward as it just references the Log Publisher we created earlier:

 

To assign the various Security features to the virtual server, we use the Security Policy tab and as we mentioned, this is also where we assign the Security Log Profile we created earlier:

An example AS3 code snippet for an HTTPS virtual server matching what you see in the GUI above is shown below:

"vip-testapi": {
	"class": "Service_HTTPS",
	"virtualAddresses": [
		"172.16.255.254"
	],
	"shareAddresses": false,
	"profileHTTP": {
		"use": "http"
	},
	"remark": "Test API",
	"addressStatus": true,
	"allowVlans": [
		"vlan001"
	],
	"virtualPort": 443,
	"redirect80": false,
	"snat": "auto",
	"policyWAF": {
		"use": "policy-test"
	},
	"profileDOS": {
		"use": "dos"
	},
	"profileProtocolInspection": {
		"use": "protocol_inspection_http"
	},
	"securityLogProfiles": [
		{
			"bigip": "security-loggingProfile"
		}
	]
}

The securityLogProfiles property references the logging profile we created above. Note that an "Application Security Policy" (property: policyWAF) can only be enabled when the virtual server is of type Service_HTTP or Service_HTTPS and has an HTTP profile assigned (property: profileHTTP).

The logs output from the various security managers end up in the following tables:

    • Advanced Firewall Manager (AFM)
F5Telemetry_AFM_CL
| where isnotempty(acl_policy_name_s)

    • Application Security Manager (ASM)
F5Telemetry_ASM_CL

    • DoS Protection
F5Telemetry_AVR_CL
| where Entity_s contains "DosVisibility" or Entity_s contains "AfmDosStat"

    • Protocol Inspection
F5Telemetry_AFM_CL
| where isnotempty(insp_id_s)

 

Adding DNS logs

If you are using the BIG-IP as a DNS (formerly GTM) for GSLB Wide IP load balancing, you will probably want to see the GSLB requests logged in Azure. I found a couple of issues with this... Firstly, the DNS logging profile does not support the "splunk" format which the log destination needs to use for the AFM logging. If you create a separate log destination with the "syslog" format, this creates a separate log in Azure called "F5Telemetry_event_CL" which just dumps the raw data into a "data_s" column like this:

Therefore, what I have done is create a GTM iRule which can be added to the GSLB Listener and used to generate request/response DNS logs into the F5Telemetry_LTM_CL log:

when DNS_REQUEST priority 50 {
    set hostname [info hostname]
    set ldns [IP::client_addr]
    set vs_name [virtual name]
    set q_name [DNS::question name]
    set q_type [DNS::question type]
    set now [clock seconds]
    set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}]
    if { $q_type == "A" or  $q_type == "AAAA" } {
        set hsl_reqlog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"]
        HSL::send $hsl_reqlog "event_source=\"dns_request_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"\",event_timestamp=\"$ts\"\n"
        unset -- hsl_reqlog
    }
    unset -- ldns vs_name q_name q_type now ts
   }
   
when DNS_RESPONSE priority 50 {

    set hostname [info hostname]
    set ldns [IP::client_addr]
    set vs_name [virtual name]
    set q_name [DNS::question name]
    set q_type [DNS::question type]
    set q_answer [DNS::answer]
    set now [clock seconds]
    set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}]

    if { $q_type == "A" or  $q_type == "AAAA" } {
        set hsl_reslog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"]
        HSL::send $hsl_reslog "event_source=\"dns_response_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"$q_answer\",event_timestamp=\"$ts\"\n"
        unset -- hsl_reslog
    }
    unset -- ldns vs_name q_name q_type q_answer now ts
   }

Just add this to the GTM Listener and ensure you don't have the DNS logging profile enabled in the DNS profile:

Here are the logs (nicely formatted!):

The event_source_s column is set to "dns_request_logging" and "dns_response_logging" to distinguish them from the LTM request logs in this log.
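
A simple query to pull just the DNS events back out (assuming the _s column names Log Analytics generates from the iRule fields above):

F5Telemetry_LTM_CL
| where event_source_s in ("dns_request_logging", "dns_response_logging")
| project TimeGenerated, hostname_s, client_ip_s, virtual_name_s, dns_query_name_s, dns_query_type_s, dns_query_answer_s
| order by TimeGenerated desc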

 

Adding LTM logs

LTM logs in Sentinel are sent to the custom log F5Telemetry_LTM_CL and are the output of the Request Logging service in BIG-IP. This creates a log for every HTTP request (and optionally the response) made through a Virtual Server which has an HTTP profile applied and which also includes a Request Logging profile.

Request Logging uses High Speed Logging (HSL) to send the logs directly out of TMM. We already set up an HSL pool in our base AS3 declaration, so we can use that. The request log is very flexible in what you can record and the available fields are detailed here:

Reference: Configuring request logging using the Request Logging profile (f5.com)

I find a useful field is $TIME_USECS which is added to the Microtimestamp column. This is useful as it can be used to tie together the request with the response when troubleshooting.

Here is the AS3 code snippet for adding a Request Logging Profile:

"profile-ltmRequestLog": {
	"class": "Traffic_Log_Profile",
	"requestSettings": {
		"requestEnabled": true,
		"requestProtocol": "mds-tcp",
		"requestPool": {
			"use": "pool-telemetry"
		},
		"requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",dest_ip=\"$VIRTUAL_IP\",dest_port=\"$VIRTUAL_PORT\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",Microtimestamp=\"$TIME_USECS\""
	},
	"responseSettings": {
		"responseEnabled": true,
		"responseProtocol": "mds-tcp",
		"responsePool": {
			"use": "pool-telemetry"
		},
		"responseTemplate": "event_source=\"response_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",http_statcode=\"$HTTP_STATCODE\",http_status=\"$HTTP_STATUS\",Microtimestamp=\"$TIME_USECS\",response_ms=\"$RESPONSE_MSECS\""
	}
}

Note that it references the High Speed Logging pool we created earlier.

If you want to add the template in the BIG-IP GUI, below is the formatted text to add to the template field. Make sure 'Request Logging' and 'Response Logging' are enabled, the HSL Protocol is TCP, and the Pool Name is the pool we created earlier (called 'pool-telemetry' in my example):

Request Settings / Template:
event_source="request_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",dest_ip="$VIRTUAL_IP",dest_port="$VIRTUAL_PORT",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",Microtimestamp="$TIME_USECS"

Response Settings / Template:
event_source="response_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",http_statcode="$HTTP_STATCODE",http_status="$HTTP_STATUS",Microtimestamp="$TIME_USECS",response_ms="$RESPONSE_MSECS"

Sending syslog to Azure

Some errors on the system may not show up in the standard telemetry logging tables - in particular, TLS errors due to certificate issues (reported by the pkcs11d daemon) do not generate logs. To aid reporting, we can redirect syslog messages of a particular level (e.g. warning and above) to the localhost on port 6514 - they are then picked up by the Telemetry Listener and pushed out to Azure Log Analytics.

(tmos)# edit /sys syslog all-properties
    • this opens up the settings in the vi editor.
    • in the edited section remove the line:
      •  include none

    • replace with below:
include "
filter f_remote_loghost {
level(warning..emerg);
};

destination d_remote_loghost {
udp(\"127.0.0.1\" port(6514));
};

log {
source(s_syslog_pipe);
filter(f_remote_loghost);
destination(d_remote_loghost);
};
"
    • then write-quit vi (type ':wq')
    • you should get the prompt:
         Save changes? (y/n/e)
    • select 'y'
    • finally save the config:
          (tmos)# save /sys config

This will create a new custom log called F5Telemetry_syslog_CL which contains the syslog messages. The messages are sent in raw format, so they need a bit of Kusto manipulation. The following KQL extracts the data into columns holding the reporting process/daemon, the severity, the hostname, and the log text:

F5Telemetry_syslog_CL
| extend processName = extract(@'([\w-]+)\[\d+\]:', 1, data_s)
| extend message_s = extract(@'\[\d+\]: (.*)', 1, data_s)
| extend severity = extract(@'(\w+)\s[\w-]+\[\d+\]', 1, data_s)
| extend severity_s = replace_strings(
        severity,
        dynamic(['err', 'emerg']), // Lookup strings
        dynamic(['error', 'emergency']) // Replacements
        )
| project
   TimeGenerated,
   Severity = severity_s,
   Process = processName,
   ['Log Message'] = message_s,
   Hostname = tostring(split(hostname_s, ".")[0]) 
| order by TimeGenerated desc

The output looks like this:

 

The Azure Log Collector API

Telemetry Streaming acts as a client of the Azure HTTP Data Collector API, using its REST interface to send formatted log data. All data in Log Analytics is stored as records with a particular record type. TS formats the data as multiple JSON records with appropriate headers to direct the data into specific logs, and an individual record is created for each record in the request payload.

The data sent into the Azure Monitor HTTP Data Collector API via Telemetry Streaming gets a record Type equal to the specified Log-Type value with _CL appended. For example, the Telemetry System Poller creates logs with a Log-Type of "F5Telemetry_system", which all end up in a custom log in the Log Analytics Workspace called F5Telemetry_system_CL.

Reference: https://learn.microsoft.com/en-us/previous-versions/azure/azure-monitor/logs/data-collector-api

Note: Please be aware that the API has been deprecated and will no longer be functional as of 14/09/2026. Hopefully TS will be updated accordingly.

Published Dec 11, 2024
Version 1.0