F5 XC Session tracking with User Identification Policy
With F5 AWAF/ASM there is a feature called session tracking that allows tracking and blocking users who commit too many violations, not only based on IP address but also on things like the BIG-IP AWAF/ASM session cookie. What about F5 XC Distributed Cloud? Well, now we will answer that question 😉

Why tracking on IP addresses is sometimes not enough

XC has a feature called Malicious Users that allows blocking users if they generate too many service policy, WAF, bot or other violations. By default users are tracked based on source IP addresses, but what happens if there are proxies or NAT devices in front of the XC Cloud? Then the traffic of many users will come from a single IP address, and when this IP address is blocked many users will get blocked, not just the one that committed the violation. Now that we have answered this question, let's see what options we have.

Reference: AI/ML detection of Malicious Users using F5 Distributed Cloud WAAP

Trusted Client IP header

This option is useful when the real client IP addresses are in something like an XFF header that the proxy in front of F5 XC adds. By enabling this option, XC will automatically use this header instead of the IP packet to get the client IP address and enforce Rate Limiting, Malicious Users blocking and so on. Even in the XC logs the IP address from the header will now be shown as the source IP, and if there is no such header the IP address in the packet will be used as a backup.

Reference:
How to setup a Client IP as the Source IP on the HTTP Load Balancer headers? – F5 Distributed Cloud Services (zendesk.com)
Overview of Trusted Client IP Headers in F5 Distributed Cloud Platform

User Identification Policies

The second, more versatile feature is the XC User Identification Policy, which by default is set to "Client IP" - the client IP from the IP packet or, if "Trusted Client IP header" is configured, the IP address from the configured header. When customized, the feature allows the use of TLS fingerprints, HTTP headers like the "Authorization" header and more options to track users, enforce rate limiters on them or, if Malicious Users is enabled, block them based on the configured identifier when they generate too many WAF violations, and much more. User identification will fail over to the IP address in the packet if the source user can't be identified, but multiple identification rules can be configured and evaluated one after another, so the packet IP address is only used when none of the identification rules can be matched! If the backend upstream origin server's application cookie is used for user identification and the XC WAF App Firewall is enabled, you can also use Cookie Protection to protect the cookie from being sent from another IP address!

A nice option is that you can search the request and security logs using the "user id". By default in XC you can't search the logs using the Authorization header (used for example by OAuth and API traffic) or a session cookie like JSESSIONID or PHPSESSID, but this way you can! The demo Juice Shop app at https://demo.owasp-juice.shop/ can be used for such testing!

References
Lab 3: Malicious Users (f5.com)
Malicious Users | F5 Distributed Cloud Technical Knowledge
Configuring user session tracking (f5.com)
How to configure Cookie Protection – F5 Distributed Cloud Services (zendesk.com)
Configure Rate Limiting per User | F5 Distributed Cloud Technical Knowledge
F5 XC CE Debug commands through GUI cloud console and API

Why is this feature important and helpful?

With this capability, as long as the IPSEC/SSL tunnels are up from the Customer Edge (CE) to the Regional Edge (RE), there is no need to log into the CE when troubleshooting is needed. This is possible for Secure Mesh (SM) and Secure Mesh V2 (SMv2) CE deployments. As XC CEs are actually SDN-based ADC/proxy devices, the option to execute commands from the SDN controller - the XC cloud - seems a logical next step.

Using the XC GUI to send SiteCLI debug commands

The first example is sending the "netstat" command to "master-3" of a 3-node CE cluster. This is done under Home > Multi-Cloud Network Connect > Overview > Infrastructure > Sites, then finding the site where you want to trigger the commands.

In the VPM logs it is possible to see the command that was sent, in API format, by searching for it or for logs starting with "debug" - useful if you want to automate this task. If you capture and review the full log, you will see not only the API URL endpoint but also the POST body data that needs to be added. The VPM logs, which can also be viewed from the web console and the API, are the best place to start investigating issues.

XC Commands reference:
Node Serviceability Commands Reference | F5 Distributed Cloud Technical Knowledge
Troubleshooting Guidelines for Customer Edge Site | F5 Distributed Cloud Technical Knowledge
Troubleshooting Guide for Secure Mesh Site v2 Deployment | F5 Distributed Cloud Technical Knowledge

Using the XC API to send SiteCLI debug commands

The same commands can be sent using the XC API, and the commands can first be tested and reviewed using the API documentation and the developer portal. The API documentation even has examples of how to run these commands with vesctl (the XC shell client that can be installed on any computer) or with curl. Postman can also be used instead of curl, but the best option to test commands through the API is the developer portal. Postman can still be used by the "old school" people 😉

Link reference:
F5 Distributed Cloud Services API for ves.io.schema.operate.debug | F5 Distributed Cloud Technical Knowledge
F5 Distributed Cloud Dev Portal
ves-io-schema-operate-debug-CustomPublicAPI-Exec | F5 Distributed Cloud Technical Knowledge

Summary

The option to trigger commands through the XC GUI or even the API is really useful if, for example, there is a need to periodically monitor CPU or memory jumps with commands like "execcli check-mem" or "execcli top", or even to automate tcpdump with "execcli vifdump xxxx". The use cases for this functionality really are endless.
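To illustrate the API approach in script form, here is a minimal Python sketch of what such an automated call could look like. The tenant name, site name and especially the URL path and request body below are placeholders/assumptions - copy the real "exec" endpoint and body schema from the API reference above or from the VPM "debug" log entry described earlier.

import requests

tenant = "my-tenant"            # assumption: your tenant short name
api_token = "<XC_API_TOKEN>"    # created under Administration > IAM > Service Credentials
site = "ce-site-1"              # assumption: the CE site name
node = "master-3"

# Placeholder path and body: take the exact values from the ves.io.schema.operate.debug
# API documentation or from the VPM "debug" log line - they are not reproduced here.
url = f"https://{tenant}.console.ves.volterra.io/api/operate/debug/..."
payload = {"node": node, "command": "netstat"}

resp = requests.post(url, json=payload, headers={"Authorization": f"APIToken {api_token}"})
print(resp.status_code, resp.text)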
Export Requests or Security Analytics from F5 Distributed Cloud

Wrote this code and thought I would share. You will need Python3 installed, and may need to use "pip" to install the "requests" package.

- Parameters can be displayed using the "-h" argument.
- A valid API Token is required for access to your tenant.
- One required filter is the Load Balancer name, and additional filters can be added to further confine the output.
- Times are in UTC, just as the API requires and as displayed in the JSON event view in the GUI.
- Log entries are written to the specified file in JSON format, as it comes from the API.

Example execution:

python3 xc-log-api-extract.py test-api.json security my-tenant-name my-namespace my-api-token my-load-balancer-name 2025-01-13T17:15:00.000Z 2025-01-14T17:15:00.000Z

Here is the help page:

python3 xc-log-api-extract.py -h
usage: xc-log-api-extract.py [-h] [-srcip SRCIP] [-action ACTION] [-asorg ASORG] [-asnumber ASNUMBER] [-policy POLICY]
                             outputfilename {access,security} tenant namespace apitoken loadbalancername starttime endtime

Python program to extract XC logs

positional arguments:
  outputfilename       File to write JSON log messages to
  {access,security}    logtype to query
  tenant               Tenant name
  namespace            Namespace in tenant
  apitoken             API Token to use for accessing log data, created in Administration/IAM/Service Credentials, type "API Token"
  loadbalancername     Load Balancer name to filter on (required)
  starttime            yyyy-mm-mmThh:mm:ss.sssZ
  endtime              yyyy-mm-mmThh:mm:ss.sssZ

options:
  -h, --help           show this help message and exit
  -srcip SRCIP         Optional filter by Source IP
  -action ACTION       Optional filter by action (allow, block)
  -asorg ASORG         Optional filter by as_org
  -asnumber ASNUMBER   Optional filter by as_number
  -policy POLICY       Optional filter by policy_hits.policy_hits.policy

DeVon Jarvis, v1.2 2025/01/21

Enjoy!
DeVon Jarvis
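As a quick follow-on, here is a small sketch for summarising the exported file, assuming the script writes one JSON object per line and that the security events carry an "action" field (adjust the filename and field names to match your actual export):

import json
from collections import Counter

actions = Counter()
with open("test-api.json") as f:          # the outputfilename used above
    for line in f:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        actions[event.get("action", "unknown")] += 1   # assumed field name

for action, count in actions.most_common():
    print(f"{action}: {count}")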
F5 XC Distributed Cloud HTTP Header manipulations and matching of the client ip/user HTTP headers

1. F5 XC Distributed Cloud HTTP Header manipulations

In F5 XC Distributed Cloud some client information is saved to variables that can be inserted into HTTP headers, similar to how F5 BIG-IP saves some data that can then be used in an iRule or Local Traffic Policy. By default XC will insert an XFF header with the client IP address, but what if the end servers want an HTTP header with another name to contain the real client IP?

Under the HTTP load balancer, under "Other Options" and then "More Options", the "Header Options" can be found. The predefined variables can then be used for this job - in the example below $[client_address] is used.

A list of the predefined variables for F5 XC: https://docs.cloud.f5.com/docs/how-to/advanced-security/configure-http-header-processing

There is a $[user] variable, and maybe in the future, if F5 XC does the authentication of the users, this option will insert the user in a proxy chaining scenario, but for now I think it just manipulates data in the XAU (X-Authenticated-User) HTTP header.

2. Matching of the real client ip HTTP headers

You can also match an XFF header if it is inserted by a proxy device in front of the F5 XC nodes, for security bypass/blocking or for logging in F5 XC.

For user logging from the XFF: under "Common Security Controls" create a "User Identification Policy". You can also match a regex against the IP address - this is useful when there are multiple IP addresses in the XFF header, because there could have been many proxy devices in the data path and we want to see if just one of them is present.

For security bypass or blocking based on XFF: under "Common Security Controls" create "Trusted Client Rules" or "Client Blocking Rules". If you have a "User Identification Policy" you can just use the "User Identifier", but it can't use regex in this case. To match a regex value in the header that is just a single IP address, even when the header has many IP addresses, use the regex (1\.1\.1\.1) as an example to match the address 1.1.1.1.

To use the client IP address as the source IP address in the TCP packet towards the backend Origin Servers after going through F5 XC (similar to removing the SNAT pool or Automap in F5 BIG-IP), use the option below:

In the same way, the XAU (X-Authenticated-User) HTTP header can be used in a proxy chaining topology, when there is a proxy before F5 XC that has added this header.

Edit: Keep in mind that in some cases the XC regex, for example (1\.1\.1\.1), should be written without the parentheses as 1\.1\.1\.1, so test it - this could be something new, and I have seen it in service policy regex matches when making a new custom signature that was not in the WAAP WAF XC policy. I could make a separate article for this 🙂

XC can even send the client certificate attributes to the backend server if client-side mTLS is enabled, but that is configured on the cert tab.
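A plain-regex illustration of the matching described above (this only demonstrates the pattern itself with Python's re module - the XC matcher is configured in the GUI and, as the edit note says, may want the pattern without parentheses):

import re

xff = "10.0.0.5, 1.1.1.1, 203.0.113.7"      # an XFF chain with several proxy hops
print(bool(re.search(r"1\.1\.1\.1", xff)))                    # True - the address is somewhere in the chain
print(bool(re.search(r"1\.1\.1\.1", "10.0.0.5, 2.2.2.2")))    # False - no match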
BIG-IP Telemetry Streaming to Azure

Steps

The first important point is that you have to use the REST API for configuring Telemetry Streaming - there isn't a way to provision it using TMSH or the GUI. The way it is done is by POSTing a JSON declaration to BIG-IP Telemetry Streaming's declarative REST API endpoint. For Azure, the details are here: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/setting-up-consumer.html#microsoft-azure-log-analytics

I like to use AS3 where possible so I provide the AS3 code snippets, but I'll also show the config in the GUI as well. The steps are:

- Download and install AS3 and Telemetry Streaming
- Create Azure Sentinel workspace
- Send TS declaration
- base AS3 declaration
- adding AFM logs
- adding ASM logs
- adding LTM logs

This article is a result of using TS in Azure for nearly 3 years, over which time I've gained a good understanding of how it works. However, there are updates and changes all the time, so I'd welcome any feedback if any part of the article is incorrect or out of date.

Download and install AS3 and Telemetry Streaming

To create the configuration needed for streaming to Sentinel, you first need to download the iControl LX plug-ins. These are available in the F5 github repositories for AS3 and Telemetry Streaming as RPM files. Links are:

Telemetry Streaming: F5Networks/f5-telemetry-streaming: F5 Telemetry Streaming (github.com)
AS3: F5Networks/f5-appsvcs-extension: F5 Application Services 3 Extension (github.com)

On the right hand side of the github page you'll see a link to the latest release - it's the RPM file you need (usually the biggest file!). I download the files to my PC and then import them using the GUI in: iApps / Package Management LX.

Some key points:
- Your BIG-IP needs to be on at least 13.1 to send a declaration to TS and AS3.
- Your account must have the Administrator role - it's usually recommended to use the 'admin' username.
- I use a REST API client to send declarations. I use Insomnia but Postman is another popular alternative.

Setting up Azure Workspace

You can create logging to Azure from the BIG-IP using a pre-built F5 connector available as part of Azure Sentinel. Alternatively, you can just set up a Log Analytics Workspace and stream the logs into it. I'll explain both methods.

Using Azure Sentinel

To create a Sentinel instance, you need to first create a Log Analytics Workspace and then add Sentinel to the workspace. If there are no workspaces defined in the region where you are adding Sentinel, the ARM template will prompt you to add one. Once created, you can add Sentinel to it.

Once created, you need the workspace credentials to allow the BIG-IP to connect and send data.

Azure Workspace Credentials

To be able to send logs into the Azure workspace, you need 2 important pieces of data - firstly the "Log Analytics Workspace ID", and then the "Primary key". F5 provide a data connector for Sentinel which is an easy way to get this information. On the Sentinel page select the 'Content Management' / 'Content Hub' blade, search for 'f5' and then select the 'F5 Advanced WAF Integration via Telemetry Streaming' connector. Click on the 'Install' button.

Once installed, on the blade menu select "Configuration" and "Data connectors". You should see a connector called "F5 BIG-IP". If you select this and then click "Open connector page", this will tell you the Workspace ID and the Primary Key you need (in the section "Configuration").
The connector is a handy tool within Sentinel as it monitors and shows you the status of the telemetry coming into Azure, which is needed for the 2 workbooks that were also added as part of the Content Hub installation you did in the previous step. We will see this working later...

Using Log Analytics only

Sentinel is a SIEM solution which 'sits' on top of Log Analytics. If you don't need Sentinel's features, then BIG-IP Telemetry Streaming works fine with just a Log Analytics Workspace. Create the workspace from the Azure portal, ideally in the same region as the BIG-IP devices to avoid inter-VLAN costs in sending data to the workspace if you are using network isolation. In the Azure Portal search bar type "Log Analytics workspaces" and + Create. All that is needed is a name and a region.

Once created, navigate to "Settings" and "Agents". In the section "Log Analytics agent instructions" you will see the Workspace ID and the Primary Key you need for the TS declaration.

Using MSI

Telemetry Streaming v1.11 added support for sending data to Azure with an Azure Managed Service Identity (MSI). An MSI is a great way of maintaining secure access between Azure objects by leveraging Entra ID (formerly Azure AD) to grant access without needing keys. The Primary Workspace key may be regenerated at some point (this may be part of the customer's key rotation policies) and if this happens, TS will stop, as Azure will reject the incoming telemetry connection from the BIG-IP.

To use the MSI, create it in the Azure Portal and assign it to the Virtual Machine running the BIG-IP (Security/Identity). I would recommend creating a user-assigned MSI rather than a system-assigned one. The system identity is restricted to a single resource and only for the lifetime of that resource; a user-assigned MSI can be assigned to multiple BIG-IP machines. Once created, assign the following role on the Log Analytics Workspace (in the "Access Control" blade of the LAW): "Log Analytics Contributor".
If you see any logs either not appearing, or logs stopping in Azure, you can check for this error with the following Kusto query:

Operation
| where OperationCategory contains "Ingestion"

AVR Logging Issue

When ASM is enabled, it automatically enables AVR as well. This creates an AVR log which has a LOT of data in it. I've noticed that the AVR log is both excessive and can also exceed the 500 column limit due to the mishmash of data in it. Therefore, in my declaration I have made use of the excludeData option in the TS Listener to remove some of the log sources - the column/field 'Entity_s' identifies the source of the data:

- DNS_Offbox_All - this generates a log for every DNS request which is made. If your BIG-IP is acting as a DNS cache for your environment, this very quickly becomes a massive log.
- ProcessCpuUtil - again, this creates a load of additional columns which record the CPU utilisation of every running process on the device. Might be useful to some... not to me!
- TcpStat - this logs TCP events against each TCP profile on the BIG-IP (whether used or not) every 5 minutes. If you have a busy device, they quickly flood the log.
- IruleEvents - shows data associated with iRules (times triggered, event counts, etc.). I had no use for this data. I use iRules but did not need statistics on how many times an iRule was used.
- ACL_STAGE and ACL_FORCE - these seemed to be pointless logs related to AFM, not really giving any information which isn't already in the AFM logs. It was duplicated data of no value.

There were also a number of other AVR logs which did not seem to create any meaningful data for me. These were: ServerHealth, GtmWideip, BOT DEFENSE EVENT, InterfaceHealth, AsmBypassInfo, FwNatTransSrc, FwNatTransDest. I have therefore excluded these log types. This is not an exhaustive list of entity types in the AVR logs, but hopefully omitting these will (a) reduce your log sizes and (b) prevent the 500 column issue.

If you want to analyse what different types (entities) of logs are in the AVR log, the following Kusto query can be run:

F5Telemetry_AVR_CL
| summarize count() by Entity_s

This will show the number of AVR logs for each Entity type (source). You can then run a query for a specific type, analyse the content, and decide whether to filter it or not.

Declaration

OK - after all that, here is my declaration. It contains the following:

A Telemetry System class - this is needed to generate the device system logging which goes into a log called "F5Telemetry_system_CL".

A System Poller - this collects system data at a set interval (60 seconds is a reasonable setting here which produces logs which are not too large but with good granularity of data). The System Poller also allows us to filter logs using excludeData. We exclude the following: asmAttackSignatures - as explained above, these should no longer appear in System logs, but this is just to make sure; diskLatency - this is a large set of columns storing the disk stats, and as we are using VMs in Azure this info is available within the Azure IaaS service, so I did not see any point in collecting it again at the VM level, especially as the latency is a function of the selected machine type in Azure; location - this is just the SNMP location, a waste of a column name; description - this is just the SNMP description, a waste of a column name.

A Telemetry Listener - this is used to listen to and collect event logs it receives on the specified port from configured BIG-IP system services, including LTM, ASM, AFM and AVR.
A Telemetry Push Consumer - this is used to push the collected data to Azure. It is here we use the workspace ID and the primary key we collected in the above steps. { "class": "Telemetry", "controls": { "class": "Controls", "logLevel": "info", "debug": false }, "telemetry-system-azure": { "class": "Telemetry_System", "trace": false, "allowSelfSignedCert": true, "host": "localhost", "port": 8100, "protocol": "http", "systemPoller": [ "telemetry-systemPoller-azure" ] }, "telemetry-systemPoller-azure": { "class": "Telemetry_System_Poller", "interval": 60, "actions": [ { "excludeData": {}, "locations": { "system": { "asmAttackSignatures": true, "diskLatency": true, "tmstats": true, "location": true, "description": true } } } ] }, "telemetry-listener-azure": { "class": "Telemetry_Listener", "port": 6514, "enable": true, "trace": false, "match": "", "actions": [ { "setTag": { "tenant": "`T`", "application": "`A`" }, "enable": true }, { "excludeData": {}, "ifAnyMatch": [ { "Entity": "DNS_Offbox_All" }, { "Entity": "ProcessCpuUtil" }, { "Entity": "TcpStat" }, { "Entity": "IruleEvents" }, { "Entity": "ACL_STAGE" }, { "Entity": "ACL_FORCE" }, { "Entity": "ServerHealth" }, { "Entity": "GtmWideip" }, { "Entity": "BOT DEFENSE EVENT" }, { "Entity": "InterfaceHealth" }, { "Entity": "AsmBypassInfo" }, { "Entity": "FwNatTransSrc" }, { "Entity": "FwNatTransDest" } ], "locations": { "^.*$": true } } ] }, "telemetry-pushConsumer-azure": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "format": "propertyBasedV2", "trace": false, "workspaceId": "{{LOG_ANALYTICS_WORKSPACE_ID}}", "passphrase": { "cipherText": "{{LOG_ANALYTICS_PRIMARY_KEY}}" }, "useManagedIdentity": false } } Note: If you are using an MSI managed identity, the consumer changes to this: "telemetry-pushConsumer-azure": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "format": "propertyBasedV2", "trace": false, "useManagedIdentity": true } You need to look for a "200 OK" response to come back from the REST client. The logs for Telemetry go into: /var/log/restnoded/restnoded.log and will alert if there are errors in connectivity from the BIG-IP into LAW. Adding Non-System Logs To add logs from the Security managers on BIG-IP (AFM, ASM ..etc) you need to create a few AS3 resources to handle the internal routing of logs from the various managers into the telemetry listener just created above. The resources are: a Log Publisher for the security log profile to link to. a Log Destination formatter to a high speed link (HSL) pool, with a format type of "splunk" a Log Destination HSL to a pool which maps to an internal address using TCP port 6514 an LTM Pool which uses a local address. a TCP Virtual Server (vIP) on tcp/6514 with the local address. an iRule for the vIP to remap traffic onto the loopback address (where it will be picked up by the TS listener. 
"irule-telemetryLocalRule": { "class": "iRule", "remark": "Telemetry Streaming", "iRule": { "base64": "d2hlbiBDTElFTlRfQUNDRVBURUQgcHJpb3JpdHkgNTAwIHsNCiAgbm9kZSAxMjcuMC4wLjEgNjUxNA0KfQ==" } }, "logDestination-telemetryHsl": { "class": "Log_Destination", "type": "remote-high-speed-log", "protocol": "tcp", "pool": { "use": "pool-telemetry" } }, "logDestination-telemetry": { "class": "Log_Destination", "type": "splunk", "forwardTo": { "use": "logDestination-telemetryHsl" } }, "logPublisher-telemetry": { "class": "Log_Publisher", "destinations": [ { "use": "logDestination-telemetry" } ] }, "pool-telemetry": { "class": "Pool", "remark": "Telemetry Streaming to Azure Sentinel", "monitors": [ ], "members": [ { "serverAddresses": [ "255.255.255.254" ], "adminState": "enable", "servicePort": 6514 } ] }, "vip-telemetryLocal": { "class": "Service_TCP", "virtualAddresses": [ "255.255.255.254" ], "iRules": [ "irule-telemetryLocalRule" ], "pool": "pool-telemetry", "remark": "Telemetry Streaming", "addressStatus": true, "virtualPort": 6514 } The iRule is base64 encoded in the AS3 declaration above but is just this: when CLIENT_ACCEPTED priority 500 { node 127.0.0.1 6514 } Now you have a Log Publisher which routes to a local pool mapping to the loopback of the BIG-IP. The TS Listener will then pick this up (notice the port in the TS Declaration object "telemetry-listener-azure" matches the Log High Speed Logging Destination pool (6514)). Loopback Issue When creating the virtual server above, tmm errors are observed which prevent logging via the Telemetry virtual server iRule as it rejects remapping to the loopback. The following log is seen in /var/log/ltm : testf5 err tmm1[6506]: 01220001:3: TCL error: /Common/Shared/irule-telemetryLocalRule - disallow self or loopback connection (line 1)TCL error (line 1) (line 1) invoked from within "node 127.0.0.1 6514" Ref: After an upgrade, iRules using the loopback address may fail and log TCL errors (f5.com) To fix this, change the following db value: tmsh modify sys db tmm.tcl.rule.node.allow_loopback_addresses value true tmsh save sys config Adding AFM logs The Advanced Firewall Manager allows firewall policy to be defined at a number of points (called "contexts") in the flow of traffic through the F5. A global policy can be applied, or a policy can be added at the Self-IP, Route Domain, or Virtual Server level. What is important to realize is that there is a pre-built Security Logging Profile for all policies operating at the 'Global' context - called global-network. If your policy is applied as a global policy, you have to change this profile to get logging into Azure. The profile is here under Security / Event Logs / Logging Profiles: Click on the 'global-network' profile and in the "Network Firewall" tab set the publisher to the one you have built above. You can also decide what to log - at the least you should log any policy drops or rejects: For any AFM policies added at any other context, you can create your own logging profile The logs produced go into the Azure custom log: F5Telemetry_AFM_CL. A log is produced for every firewall event with the column "action_s" recording the rule match action (Accept, Drop or Reject). Adding ASM, DDoS and IDPS logs Logging for the Application Security Manager (ASM), Protocol Inspection (IDPS) and DoS Protection features are all via a Security Logging Profile which is then assigned to the virtual server. 
"security-loggingProfile": { "class": "Security_Log_Profile", "application": { "localStorage": false, "remoteStorage": "splunk", "protocol": "tcp", "servers": [ { "address": "127.0.0.1", "port": "6514" } ], "storageFilter": { "requestType": "all" } }, "network": { "publisher": { "use": "logPublisher-telemetry" }, "logRuleMatchAccepts": false, "logRuleMatchRejects": true, "logRuleMatchDrops": true, "logIpErrors": true, "logTcpErrors": true, "logTcpEvents": true }, "dosApplication": { "remotePublisher": { "use": "logPublisher-telemetry" } }, "dosNetwork": { "publisher": { "use": "logPublisher-telemetry" } }, "protocolDnsDos": { "publisher": { "use": "logPublisher-telemetry" } }, "protocolInspection": { "publisher": { "use": "logPublisher-telemetry" }, "logPacketPayloadEnabled": true } }, In the example above we are enabling logging for the ASM in the "application" property. An important configuration here is server setting. ASM logging only works if the address used here is 127.0.0.1 and port tcp/6514. In the GUI it looks like this: We have also enabled logging for DoS and IDS/IPS (Protocol Inspection). This is more straightforward as it just references the Log Publisher we created earlier: To assign the various Security features to the virtual server, we use the Security Policy tab and as we mentioned, this is also where we assign the Security Log Profile we created earlier: An example AS3 code snippet for a HTTP virtual server matching what you see in the GUI above is shown below: "vip-testapi": { "class": "Service_HTTPS", "virtualAddresses": [ "172.16.255.254" ], "shareAddresses": false, "profileHTTP": { "use": "http" }, "remark": "Test API", "addressStatus": true, "allowVlans": [ "vlan001" ], "virtualPort": 443, "redirect80": false, "snat": "auto", "policyWAF": { "use": "policy-test" }, "profileDOS": { "use": "dos" }, "profileProtocolInspection": { "use": "protocol_inspection_http" }, "securityLogProfiles": [ { "bigip": "security-loggingProfile" } ] } The securityLogProfiles property references the logging profile we created above. Note that an "Application Security Policy" (property: policyWAF) can only be enabled when the virtual server is of type: Service_HTTP or Service_HTTPS and has a HTTP profile assigned (property: profileHTTP). The outputted logs from the various security managers end up in the following logs: Advanced Firewall Manager (AFM) F5Telemetry_AFM_CL | where isnotempty(acl_policy_name_s) Application Security Manager (ASM) F5Telemetry_ASM_CL DoS Protection F5Telemetry_AVR_CL | where Entity_s contains "DosVisibility" or Entity_s contains "AfmDosStat" Protocol Inspection F5Telemetry_AFM_CL | where isnotempty(insp_id_s) Adding DNS logs If you are using the BIG-IP as an DNS (formally GTM) for GSLB Wide IP load balancing, you will probably want to see the GSLB requests logged in Azure. I found a couple of issues with this... Firstly, the DNS logging profile does not support the "splunk" format which the log destination needs to be for the AFM logging. 
If you create a separate log destination for "syslog" format, this creates a separate log in Azure called "F5Telemetry_event_CL" which just dumps the raw data in a "data_s" column like this: Therefore, what I have done is created an GTM iRule which can be added to the GSLB Listener and used to generate request/response DNS logs into the F5Telemetry_LTM_CL log: when DNS_REQUEST priority 50 { set hostname [info hostname] set ldns [IP::client_addr] set vs_name [virtual name] set q_name [DNS::question name] set q_type [DNS::question type] set now [clock seconds] set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}] if { $q_type == "A" or $q_type == "AAAA" } { set hsl_reqlog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"] HSL::send $hsl_reqlog "event_source=\"dns_request_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"\",event_timestamp=\"$ts\"\n" unset hsl_reqlog -- } unset ldns vs_name q_name q_type now ts -- } when DNS_RESPONSE priority 50 { set hostname [info hostname] set ldns [IP::client_addr] set vs_name [virtual name] set q_name [DNS::question name] set q_type [DNS::question type] set q_answer [DNS::answer] set now [clock seconds] set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}] if { $q_type == "A" or $q_type == "AAAA" } { set hsl_reslog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"] HSL::send $hsl_reslog "event_source=\"dns_response_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"$q_answer\",event_timestamp=\"$ts\"\n" unset hsl_reslog -- } unset ldns vs_name q_name q_type q_answer now ts -- } Just add this to the GTM Listener and ensure you don't have the DNS logging profile enabled in the DNS profile: Here are the logs (nicely formatted!): The event_source_s column is set to "dns_request_logging" and "dns_response_logging" to distinguish them from the LTM request logs in this log. Adding LTM logs LTM logs in Sentinel are sent to the custom log F5Telemetry_LTM_CL and are the output from the Request Logging service in BIG-IP. This creates a log for every HTTP request (and optionally the response) which is made through a Virtual Server which has a HTTP profile applied and which also includes a Request Logging profile. Request Logging uses the High Speed Log (HSL) to send the logs directly out of TMM. We already setup a HSL log destination in our base AS3 declaration so we can use this. The request log is very flexible in what you want to record and fields are detailed here: Reference: Configuring request logging using the Request Logging profile (f5.com) I find a useful field is $TIME_USECS which is added to the Microtimestamp column. This is useful as it can be used to tie together the request with the response when troubleshooting. 
Here is the AS3 code snippet for adding a Request Logging Profile: "profile-ltmRequestLog": { "class": "Traffic_Log_Profile", "requestSettings": { "requestEnabled": true, "requestProtocol": "mds-tcp", "requestPool": { "use": "pool-telemetry" }, "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",dest_ip=\"$VIRTUAL_IP\",dest_port=\"$VIRTUAL_PORT\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",Microtimestamp=\"$TIME_USECS\"" }, "responseSettings": { "responseEnabled": true, "responseProtocol": "mds-tcp", "responsePool": { "use": "pool-telemetry" }, "responseTemplate": "event_source=\"response_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",http_statcode=\"$HTTP_STATCODE\",http_status=\"$HTTP_STATUS\",Microtimestamp=\"$TIME_USECS\",response_ms=\"$RESPONSE_MSECS\"" } } Note that it references the High Speed Logging pool we created earlier. If you want to add the template in the BIG-IP GUI, below is the formatted text to add to the template field. Make sure 'Request Logging' and 'Response Logging' is enabled, the HSL Protocol is TCP, and the Pool Name is the pool we created earlier (called 'pool-telemetry' in my example): Request Settings / Template: event_source="request_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",dest_ip="$VIRTUAL_IP",dest_port="$VIRTUAL_PORT",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",Microtimestamp="$TIME_USECS" Response Settings / Template: event_source="response_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",http_statcode="$HTTP_STATCODE",http_status="$HTTP_STATUS",Microtimestamp="$TIME_USECS",response_ms="$RESPONSE_MSECS" Sending syslog to Azure Some errors on the system may not show up in the standard telemetry logging tables - in particular TLS errors due to certificate issues (reported by the pkcs11d daemon) do not generate logs. To aid reporting, we can redirect syslog for any logs of a particular level (e.g. warning and above) and push them to the localhost on port 6514 - they are then picked up by the Telemetry System listener and pushed out to Azure Log Analytics. (tmos)# edit /sys syslog all-properties this opens up the settings in the vi editor. in the edited section remove the line: include none replace with below: include " filter f_remote_loghost { level(warning..emerg); }; destination d_remote_loghost { udp(\"127.0.0.1\" port(6514)); }; log { source(s_syslog_pipe); filter(f_remote_loghost); destination(d_remote_loghost); }; " then write-quit vi (type ':wq') you should get the prompt: Save changes? (y/n/e) select 'y' finally save the config: (tmos)# save /sys config This will create a new custom log called F5Telemetry_syslog_CL which contains the syslog message. The messages are send in raw format, so need a bit of kusto manipulation. 
The following KQL extracts the data into columns to hold the reporting process/daemon, the severity, the hostname, and the log text:

F5Telemetry_syslog_CL
| extend processName = extract(@'([\w-]+)\[\d+\]:', 1, data_s)
| extend message_s = extract(@'\[\d+\]: (.*)', 1, data_s)
| extend severity = extract(@'(\w+)\s[\w-]+\[\d+\]', 1, data_s)
| extend severity_s = replace_strings(
    severity,
    dynamic(['err', 'emerg']),          // Lookup strings
    dynamic(['error', 'emergency'])     // Replacements
  )
| project TimeGenerated, Severity = severity_s, Process = processName, ['Log Message'] = message_s, Hostname = tostring(split(hostname_s, ".")[0])
| order by TimeGenerated desc

The output looks like this:

The Azure Log Collector API

Telemetry Streaming acts as a client of the Azure HTTP Data Collector API and uses that REST API to send formatted log data. All data in Log Analytics is stored as a record with a particular record type. TS formats the data as multiple records in JSON format with the appropriate headers to direct the data into specific logs; an individual record is created for each record in the request payload.

The data sent into the Azure Monitor HTTP Data Collector API by Telemetry Streaming goes into a record type equal to the LogType value specified, with _CL appended by Azure. For example, the Telemetry System class creates logs with a LogType of "F5Telemetry_system", so all of its records end up in a custom log in the Log Analytics Workspace called F5Telemetry_system_CL.

Reference: https://learn.microsoft.com/en-us/previous-versions/azure/azure-monitor/logs/data-collector-api

Note: Please be aware that the API has been deprecated and will no longer be functional as of 14/09/2026. Hopefully TS will be updated accordingly.
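To make the mechanics concrete, here is a minimal Python sketch of a Data Collector API client doing what TS does under the hood - building the HMAC-SHA256 "SharedKey" signature and POSTing a record. The workspace ID, key and sample record are placeholders; the request format follows the Microsoft documentation linked above.

import base64, datetime, hashlib, hmac, json
import requests

workspace_id = "<LOG_ANALYTICS_WORKSPACE_ID>"
shared_key = "<PRIMARY_KEY>"                      # base64-encoded workspace key
log_type = "F5Telemetry_system"                   # Azure appends "_CL" to this

body = json.dumps([{"hostname": "bigip1.example.local", "cpu": 12}])   # sample record
rfc1123_date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")

# String-to-sign defined by the HTTP Data Collector API documentation
string_to_sign = (f"POST\n{len(body.encode('utf-8'))}\napplication/json\n"
                  f"x-ms-date:{rfc1123_date}\n/api/logs")
signature = base64.b64encode(
    hmac.new(base64.b64decode(shared_key),
             string_to_sign.encode("utf-8"),
             hashlib.sha256).digest()
).decode()

resp = requests.post(
    f"https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Log-Type": log_type,
        "x-ms-date": rfc1123_date,
        "Authorization": f"SharedKey {workspace_id}:{signature}",
    },
)
print(resp.status_code)   # 200 means the records were accepted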
The sooner the better: Web App Scanning Without Internet Exposure

In the fast-paced world of app development, security often takes a backseat to feature delivery and tight deadlines. Many organizations rely on external teams to perform penetration testing on their web applications and APIs, but this typically happens after the app has been live for some time and is driven by compliance or regulatory requirements. Waiting until this stage can leave vulnerabilities unaddressed during critical early phases, potentially leading to costly fixes, reputational damage, or even breaches. Early-stage application security testing is key to building a strong foundation and mitigating risks before they escalate.

Wouldn't it be cool if there was a way you could scan your apps in a proactive, automated way while they are still in beta? Since you are reading this article here in the F5 community, you probably already know that F5 Distributed Cloud Web App Scanning allows you to dynamically and continuously scan your external attack surface to uncover exposed web apps and APIs. We all know that exposing apps at an early stage of their development to the internet is risky, because they may contain unfinished or untested code that could lead to unintended data leaks, privacy violations, or other risks. Therefore you want to keep access to your beta-stage apps restricted.

Scanning but not exposing your apps

At this point in time XC Web App Scanning can only scan apps that are exposed on the internet. But with some configuration tweaks, you can ensure that only WAS has access to your apps. I want to show a real-world example of how you can restrict access to your application solely to the XC WAS scan engine.

Let's take a look at the beta-stage application we aim to perform penetration testing on. It is hosted on an EC2 instance in AWS. Of course we don't plan to expose our application directly to the internet without a Web Application Firewall, so F5 Distributed Cloud Web App & API Protection (WAAP) will be positioned as a cloud proxy in front of our app. Therefore we must make sure only traffic from F5 Distributed Cloud Services has access to our app. Next we want to make sure that only the scan engine of F5 Distributed Cloud Web App Scanning can reach our app - again, we want to block the rest of the internet from accessing it. A picture says more than words; we want to achieve something like this:

How to set it up

Let's take a look at how we can satisfy our requirements.

... in AWS

In AWS, Security Groups are used to control which traffic is allowed to reach and leave the resources they are associated with. Since our application is hosted on an EC2 instance, the Security Group controls the ingress and egress traffic for the instance. One can think of it like a virtual packet filter firewall. An inbound Security Group rule usually specifies a protocol, a port and a source IP address range in CIDR notation.

We want to allow access only from F5 Distributed Cloud Services to our EC2 instance. Creating hundreds of ingress rules inside a Security Group did not seem very efficient to me, hence I used a customer-managed prefix list and added all F5 Regional Edges. Prefix lists are configured in the VPC section of AWS. The IPv4 address list of all F5 Regional Edges is available here: Public IPv4 Subnet Ranges for F5 Regional Edges

After you have created your prefix list, you can use it in a Security Group. This way we met our first goal: only F5 Regional Edges can reach our app.
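If you prefer to script this part instead of clicking through the VPC console, here is a rough boto3 sketch of creating the customer-managed prefix list. The region and the local file holding the published F5 Regional Edge IPv4 ranges (one CIDR per line) are assumptions:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")        # assumed region

with open("f5-regional-edges-ipv4.txt") as f:              # assumed local copy of the published ranges
    cidrs = [line.strip() for line in f if line.strip()]

response = ec2.create_managed_prefix_list(
    PrefixListName="f5-xc-regional-edges-ipv4",
    AddressFamily="IPv4",
    MaxEntries=len(cidrs) + 20,                            # head-room for future additions
    Entries=[{"Cidr": c, "Description": "F5 XC RE"} for c in cidrs],
)
print(response["PrefixList"]["PrefixListId"])              # pl-xxxx, reference this in the Security Group rule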
... in XC

In F5 Distributed Cloud a similar kind of access control can be achieved by using Service Policies. Service Policies are a mechanism to control and enforce fine-grained access control for applications deployed in XC. I created a Service Policy that allows access only from the list of ephemeral IP addresses associated with XC Web App Scanning, while blocking all other traffic.

First create a Service Policy and in the Rules section select Allowed Sources. In the XC Console, Service Policies are created under Security > Service Policies > Service Policies. Then add the IP ranges to the IPv4 Prefix List. The list of all IP addresses associated with XC Web App Scanning is available here: Use Known IPs in Web App Scanning

The Service Policy is then applied in the Common Security Controls section of an HTTP Load Balancer configuration.

Conclusion

By combining AWS Security Groups and XC Service Policies, I can ensure that my beta app (or beta API) is accessible exclusively to the scan engine of XC Web App Scanning, while blocking access from malicious actors on the internet.
Microsoft 365 IP Steering python Script

Hello! Hola! I have created a small and rudimentary script that generates a datagroup with MS 365 IPv4 and v6 addresses to be used by an iRule or policy.

There are other scripts that solve this same issue, but they were either:
- based on iRulesLX, which forces you to enable iRulesLX only for this, and made me run into issues when upgrading (the memory table got filled with nonsense), or
- based on the XML version of the list, which MS changed to a JSON file.

This script is a super simple bash script that calls another super simple python file, and a couple of helper files.

The biggest To Dos are:
- Add a more secure approach to password usage. Right now, it is stored in a parameters file locked away with permissions. There should be a better way.
- Add support for URLs.

You can find the contents here: https://github.com/teoiovine-novared/fetch-office365/tree/main

I appreciate advice, (constructive) criticism and questions all the same! Thank you for your time.
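For readers who just want to see the general idea without pulling the repo, here is a standalone sketch (not the author's script) that pulls the current ranges from the Microsoft 365 endpoints web service and prints the IPv4 CIDRs, assuming the documented JSON layout where each entry may carry an "ips" list:

import json, uuid, urllib.request

url = "https://endpoints.office.com/endpoints/worldwide?clientrequestid=" + str(uuid.uuid4())
with urllib.request.urlopen(url) as resp:
    entries = json.load(resp)

v4, v6 = set(), set()
for entry in entries:
    for cidr in entry.get("ips", []):
        (v6 if ":" in cidr else v4).add(cidr)

# Print one CIDR per line, ready to paste into a datagroup or feed into further tooling
for cidr in sorted(v4):
    print(cidr)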
F5 XC vk8s open source nginx deployment on RE

Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'.

Short Description

This is an example of an F5 XC virtual kubernetes (vk8s) workload on Regional Edges for rewriting URL requests and the response body.

Problem solved by this Code Snippet

The XC Distributed Cloud rewrite option under the XC routes is sometimes limited in dynamically replacing a specific string - for example, replacing the string "ne" with "da" no matter where in the URL the string is located:

location ~ .*ne.* {
    rewrite ^(.*)ne(.*) $1da$2;
}

Other than that, in XC there is no default option to replace a string in the payload like the rewrite profile in F5 LTM or the iRule stream option:

sub_filter 'Example' 'NIKI';
sub_filter_types *;
sub_filter_once off;

Open source NGINX can also be used to return a custom error based on the server error as well:

error_page 404 /custom_404.html;
location = /custom_404.html {
    return 404 'gangnam style!';
    internal;
}

Now with proxy protocol support in XC, NGINX can see the real client IP even for non-HTTP traffic that does not have XFF HTTP headers:

log_format niki '$proxy_protocol_addr - $remote_addr - $remote_user [$time_local] - $proxy_add_x_forwarded_for'
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';

#limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;
server {
    listen 8080 proxy_protocol;
    server_name localhost;

How to use this Code Snippet

Read the description readme file in the github link and modify the nginx default.conf file as per your needs.

Code Snippet Meta Information

Version: 1.25.4 Nginx
Coding Language: nginx config

Full Code Snippet

https://github.com/Nikoolayy1/xc_nginx/tree/main
F5 XC vk8s workload with Open Source Nginx

I have shared the code in the link below under the DevCentral code share: F5 XC vk8s open source nginx deployment on RE | DevCentral

Here I will describe the basic steps for creating a workload object, which is the F5 XC custom kubernetes object that creates kubernetes deployments, pods and ClusterIP-type services in the background. The free unprivileged nginx image: nginxinc/docker-nginx-unprivileged: Unprivileged NGINX Dockerfiles (github.com)

Create a virtual site that groups your Regional Edges and Customer Edges. After that create the vk8s virtual kubernetes and relate it to the virtual site. Note: keep in mind the limitations of kubernetes deployments on Regional Edges mentioned in Create Virtual K8s (vK8s) Object | F5 Distributed Cloud Tech Docs.

First create the workload object and select type service, which can be related to a Regional Edge virtual site or a Customer Edge virtual site. After that select the container image that will be loaded from a public repository like github or a private repo.

You will need to configure an advertise policy that will expose the pod/container with a kubernetes ClusterIP service. If you are deploying test containers, you will not need to advertise the container.

To trigger commands at container start, you may need to use /bin/bash -c -- and an argument. Note: this is not related to this workload deployment, it is just an example.

Select to overwrite the default config file for the open source unprivileged nginx with a file mount. Note: the volume name shouldn't have a dot, as it will cause issues.

For the image options select a repository with no rate limit, as otherwise you will see the error under the events for the pod. You can also configure the command and parameters to push to the container that will run on boot up.

You can use an empty dir volume on the virtual kubernetes on the Regional Edges for volume mounts like the log directory or the Nginx cache zone, but the unprivileged Nginx by default exports the logs to the XC GUI, so there is no need. Note: this is not related to this workload deployment, it is just an example.

The logs and events can be seen under the pod dashboard, and the container/pod can even be accessed. Note: for some workloads, to see the logs from the XC GUI you will need to direct the output to stderr, but not for nginx.

After that you can reference the auto-created kubernetes ClusterIP service in an origin pool, using the workload name and the XC namespace (for example niki-nginx.default). Note: use the same virtual site where the workload was attached and the same port as in the advertise cluster config.

Deployments and ClusterIP services can be created directly without a workload, but it is better to use the workload option.

When you modify the config of the nginx, you are actually modifying a configmap that the XC workload has created in the background and mounted as a volume in the deployment, but you will need to trigger a deployment recreation, which as of now is not supported from the XC GUI. From the GUI you can scale the workload to 0 pod instances and then back to 1, but a better solution is to use kubectl. You can log into the virtual kubernetes like any other k8s environment using a cert and then run the command "kubectl rollout restart deployment/niki-nginx". Just download the SSL/TLS cert. You can automate the entire process using the XC API and then use normal kubernetes automation to run the restart command: F5 Distributed Cloud Services API for ves.io.schema.views.workload | F5 Distributed Cloud API Docs!
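Since the rollout restart is the step you will repeat most often, here is a rough Python sketch of automating it with the official kubernetes client, assuming you have downloaded the vK8s kubeconfig/cert from the XC console (the kubeconfig filename is an assumption; the deployment and namespace names come from the example above):

import datetime
from kubernetes import client, config

config.load_kube_config(config_file="ves_vk8s_kubeconfig.yaml")   # assumed filename of the downloaded kubeconfig
apps = client.AppsV1Api()

# Equivalent of "kubectl rollout restart deployment/niki-nginx" - kubectl does the same
# by stamping this annotation on the pod template, which forces a new rollout.
patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/restartedAt": datetime.datetime.utcnow().isoformat() + "Z"
                }
            }
        }
    }
}
apps.patch_namespaced_deployment(name="niki-nginx", namespace="default", body=patch)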
F5 XC has added proxy_protocol support, so the nginx container can now see the real client IP addresses without XFF HTTP headers, even for non-HTTP services like SMTP that nginx supports - this way XC can now act as a layer 7 proxy for email/SMTP traffic 😉. You just need to add the "proxy_protocol" directive and log the variable "$proxy_protocol_addr".

Related resources:

For nginx Plus deployments with advanced functions like SAML or OpenID Connect (OIDC), or the advanced functions of the Nginx Plus dynamic modules like njs that allows JavaScript scripting (similar to F5 BIG-IP or BIG-IP Next TCL-based iRules), see:

Enable SAML SP on F5 XC Application
Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Dynamic Modules | NGINX Documentation
njs scripting language (nginx.org)
Accepting the PROXY Protocol | NGINX Documentation