Home Lab Server Build Using an Intel NUC and Free VMware ESXi 7
If you're like me, despite having cheap or even free access to cloud compute, you still want a bit of compute in a home lab. I can create and destroy to my heart's content, and things can get weird and messy - and it's nobody's problem but my own.

For the past 10 years, my home lab has consisted of a couple of 2U Dell R710 servers. They were beefy in specs, but they are very loud and consume a relatively large amount of power and space. They have served me really well over the years, but it is finally time to upgrade. I ordered an Intel NUC last year; it should be able to handle the workload I'm running on my Dell servers with room to spare. Due to supply chain issues, it took a few months, but it finally arrived. I was extremely surprised at how small these are. I knew they were small, but I did not expect it to fit in the palm of my hand!

I threw on VMware ESXi 7 for the hypervisor, but I wanted to document the build for anyone who is putting together a similar setup, as I encountered a couple of issues during my installation. Here is my complete parts list:

Intel NUC11TNKV7
2x Kingston 32GB DDR4 3200MHz SODIMM
1TB Samsung 970 EVO NVMe

I did document this in a video, but this article also serves as a companion to it since there are a lot of commands involved.

I immediately found out that because the network card on the NUC does not have a compatible driver included on the ESXi 7 image, I had to create an ISO with the Community Network Driver (Fling). The steps are documented here: https://www.virten.net/2021/11/vmware-esxi-7-0-update-3-on-intel-nuc/ - however I also came across my own nuances, which I'm noting below.

First, download the ESXi Offline Bundle and the Fling Community Network Driver and place them in a temporary folder. You need to install the vmware.powercli and vmware.imagebuilder modules from the PowerShell command line:

install-module -name vmware.powercli
install-module -name vmware.imagebuilder

HOWEVER, the vmware.powercli and vmware.imagebuilder modules are not supported on PowerShell v6 and above, which meant I could not run these commands on my Mac. Luckily, I had a Windows box kicking around with PowerShell v5.

I was also getting an error when trying to download the VMware.imagebuilder module. As it turns out, my version of PowerShell must have been using TLS 1.0/1.1. These instructions configured TLS 1.2: https://docs.microsoft.com/en-us/powershell/scripting/gallery/installing-psget?view=powershell-7.2

[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

After all that, I was able to proceed with building the image. The steps were pretty close to what is in the Virten article, however the version of ESXi they used was pulled and replaced. I ended up with a different build, which is reflected in the file names I used.
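For reference, here is the prerequisite sequence consolidated into a single block, assuming a Windows PowerShell 5.x session. The Set-ExecutionPolicy line is only needed if you hit the script-execution error noted further down.

# Force TLS 1.2 so the PowerShell Gallery downloads succeed
[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

# Install the PowerCLI and ImageBuilder modules
Install-Module -Name VMware.PowerCLI
Install-Module -Name VMware.ImageBuilder

# Only if module loading is blocked by the execution policy (see the note below)
Set-ExecutionPolicy -ExecutionPolicy AllSigned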
Add-EsxSoftwareDepot .\VMware-ESXi-7.0U3c-19193900-depot.zip
Add-EsxSoftwareDepot .\Net-Community-Driver_1.2.2.0-1vmw.700.1.0.15843807_18835109.zip
New-EsxImageProfile -CloneProfile "ESXi-7.0U3c-19193900-standard" -name "ESXi-7.0U3c-19193900-NUC" -Vendor "buulam"
Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0U3c-19193900-NUC" -SoftwarePackage "net-community"
Export-ESXImageProfile -ImageProfile "ESXi-7.0U3c-19193900-NUC" -ExportToISO -filepath ESXi-7.0U3c-19193900-NUC.iso

Note: If you encounter the following error: "windowspowershell\modules\vmware.vimautomation.sdk\12.5.0.19093564\vmware.vimautomation.sdk.psm1 cannot be loaded because running scripts is disabled on this system", you may need to enter the following command:

Set-ExecutionPolicy -ExecutionPolicy AllSigned

Credit to Pawan Jheeta for this find!

Now that I have an ISO image with the Fling Community Network Driver, it was time to create the bootable USB installer. I have a Mac, and here are the steps I used to create the USB flash drive: https://virtuallywired.io/2020/08/01/create-a-bootable-esxi-7-usb-installer-on-macos/. I did not encounter any issues with these steps, so please refer to the linked article to follow them. In case you are running Windows, this appears to be a good guide for creating the USB flash drive: https://www.virten.net/2014/12/howto-create-a-bootable-esxi-installer-usb-flash-drive/

Once you have the bootable USB flash drive created, you can insert it into the Intel NUC and begin your ESXi installation. The remaining steps I will leave to be explained in my video. I accepted all the defaults except for configuring a static IP address for the management address.

I hope this helps some of you out, and if there are any questions, please reply to this thread. I'd also love to hear about your home labs!

F5 XC Distributed Cloud HTTP Header manipulations and matching of the client ip/user HTTP headers
1. F5 XC Distributed Cloud HTTP header manipulations

In F5 XC Distributed Cloud, some client information is saved to variables that can be inserted into HTTP headers, similar to how F5 BIG-IP saves data that can then be used in an iRule or Local Traffic Policy. By default, XC will insert an XFF header with the client IP address, but what if the end servers want an HTTP header with a different name to contain the real client IP? Under the HTTP load balancer, under "Other Options" and then "More Options", the "Header Options" can be found. The predefined variables can then be used for this job; in the example below, $[client_address] is used.

A list of the predefined variables for F5 XC: https://docs.cloud.f5.com/docs/how-to/advanced-security/configure-http-header-processing

There is a $[user] variable, and maybe in the future, if F5 XC does the authentication of the users, this option will insert the user in a proxy chaining scenario, but for now I think it just manipulates data in the XAU (X-Authenticated-User) HTTP header.

2. Matching of the real client IP HTTP headers

You can also match an XFF header if it is inserted by a proxy device before the F5 XC nodes, for security bypass/blocking or for logging in F5 XC.

For user logging from the XFF: under "Common Security Controls" create a "User Identification Policy". You can also match a regex against the IP address; this is for cases where there are multiple IP addresses in the XFF header because there have been many proxy devices in the data path and we want to see if just one of them is present.

For security bypass or blocking based on XFF: under "Common Security Controls" create "Trusted Client Rules" or "Client Blocking Rules". If you have a "User Identification Policy", then you can just use the "User Identifier", but it can't use regex in this case. To match a regex value in the header that is just a single IP address, even when the header has many IP addresses, use the regex (1\.1\.1\.1) as an example to match the address 1.1.1.1.

To use the client IP address as the source IP address towards the backend Origin Servers in the TCP packet after going through F5 XC (similar to removing the SNAT pool or Automap in F5 BIG-IP), use the option below:

In the same way, the XAU (X-Authenticated-User) HTTP header can be used in a proxy chaining topology, when a proxy before the F5 XC has added this header.

Edit: Keep in mind that in some cases the XC regex, for example (1\.1\.1\.1), should be written without the parentheses as 1\.1\.1\.1, so test it, as this could be something new. I have seen it in service policy regex matches when making a new custom signature that was not in the WAAP WAF XC policy. I could make a separate article for this 🙂

XC can even send the client certificate attributes to the backend server if client-side mTLS is enabled, but that is configured on the cert tab.

What is the Lightning Network?
When I'm thinking of up-and-coming technologies in terms of how they'd fit into my everyday life, I often forget that there are things I assume for myself that aren't necessarily true for others. One of these things is the ease with which I can transact with people and businesses. I can move Canadian dollars to other Canadians for free and instantly. I can exchange money for goods and services from a merchant with just a tap of my phone or bank card. But this is simply not the case in third world countries where banking systems are not as mature or trusted.

Blockchain technology has enabled a number of disruptive use cases, over and above enabling something like Bitcoin. What we're seeing now are use cases that enable anybody with internet connectivity to execute transactions with others in a direct manner. A use case that builds on this idea is payment exchange in third world countries, and this is built on the Lightning Network.

The Lightning Network is a layer 2 payment protocol. It is built on top of the Bitcoin network, but instead of waiting up to 10 minutes for transactions to settle, this side-chain or layer 2 network can transact instantly. It's capable of making large and small transactions, so it has use cases that can serve C2C, B2C and B2B.

Imagine yourself travelling through Vietnam. You bought lunch. It was $2 USD. You don't have the benefit of tap-to-pay like in your home country. You have some local currency, but you'd prefer not to keep breaking up your larger bills. Constantly converting the currency in your head to keep track of your holiday spending is taking away from the fun of your vacation. It's also harder and harder as the larger bills get broken down into smaller ones. But if you can pay through the Lightning Network, you settle the transaction in Bitcoin and know exactly how much you've spent.

Or let's say you're a student in North America. Your parents are back home overseas and they need to send over some money for the year's tuition. Money transfer agents can help you move the money, but at a cost. With the Lightning Network, the money can be moved immediately and for little cost.

Or let's say you're a business that needs to wire funds to your supplier. Normally, you'd go to the bank, fill out a wire transfer, hope you got all the numbers right and then wait 5 days for the money to show up in the supplier's account. With the Lightning Network, that transaction can happen immediately and can be tracked electronically to show it was received.

The market is flooded with a lot of blockchain-based projects that are still finding their way, but I am confident that the Lightning Network is something that's going to take off in certain parts of the world. I was able to arrange an interview with Albert Buu, the Founder and CEO of Neutronpay, a Lightning Network Service Provider (LSP), and got to deep dive into his insights on this emerging use case!

BIG-IP Telemetry Streaming to Azure
Steps

The first important point is that you have to use the REST API for configuring Telemetry Streaming - there isn't a way to provision it using TMSH or the GUI. The way it is done is by POSTing a JSON declaration to BIG-IP Telemetry Streaming's declarative REST API endpoint. For Azure, the details are here: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/setting-up-consumer.html#microsoft-azure-log-analytics

I like to use AS3 where possible so I provide the AS3 code snippets, but I'll also show the config in the GUI as well. The steps are:

Download and install AS3 and Telemetry Streaming
Create the Azure Sentinel workspace
Send the TS declaration
Base AS3 declaration
Adding AFM logs
Adding ASM logs
Adding LTM logs

This article is the result of using TS in Azure for nearly 3 years, over which time I've gained a good understanding of how it works. However, there are updates and changes all the time, so I'd welcome any feedback if any part of the article is incorrect or out of date.

Download and install AS3 and Telemetry Streaming

To create the configuration needed for streaming to Sentinel, you first need to download the iControl LX plug-ins. These are available in the F5 GitHub repositories for AS3 and Telemetry Streaming as RPM files. The links are:

Telemetry Streaming: F5Networks/f5-telemetry-streaming: F5 Telemetry Streaming (github.com)
AS3: F5Networks/f5-appsvcs-extension: F5 Application Services 3 Extension (github.com)

On the right-hand side of the GitHub page you'll see a link to the latest release - it's the RPM file you need (usually the biggest file!). I download the files to my PC and then import them using the GUI in iApps / Package Management LX.

Some key points:

Your BIG-IP needs to be on at least 13.1 to send a declaration to TS and AS3.
Your account must have the Administrator role - it's usually recommended to use the 'admin' username.
I use a REST API client to send declarations. I use Insomnia, but Postman is another popular alternative.

Setting up the Azure Workspace

You can create logging to Azure from the BIG-IP using a pre-built F5 connector available as part of Azure Sentinel. Alternatively, you can just set up a Log Analytics Workspace and stream the logs into it. I'll explain both methods:

Using Azure Sentinel

To create a Sentinel instance, you need to first create a Log Analytics Workspace and then add Sentinel to the workspace. If there are no workspaces defined in the region you are adding Sentinel to, the ARM template will prompt you to add one. Once the workspace is created, you can add Sentinel to it.

Once created, you need the workspace credentials to allow the BIG-IP to connect and send data.

Azure Workspace Credentials

To be able to send logs into the Azure workspace, you need 2 important pieces of data - firstly the "Log Analytics Workspace ID", and then the "Primary key". F5 provide a data connector for Sentinel which is an easy way to get this information. On the Sentinel page select the 'Content Management' / 'Content Hub' blade, search for 'f5' and then select the 'F5 Advanced WAF Integration via Telemetry Streaming' connector. Click on the 'Install' button.

Once installed, on the blade menu select "Configuration" and "Data connectors". You should see a connector called "F5 BIG-IP". If you select this and then click "Open connector page", it will tell you the Workspace ID and the Primary Key you need (in the "Configuration" section).
The connector is a handy tool within Sentinel as it monitors and shows you the status of the telemetry coming into Azure that is needed for the 2 workbooks which were also added as part of the Content Hub installation you did in the previous step. We will see this working later...

Using Log Analytics only

Sentinel is a SIEM solution which 'sits' on top of Log Analytics. If you don't need Sentinel's features, then BIG-IP Telemetry Streaming works fine with just a Log Analytics Workspace. Create the workspace from the Azure portal, ideally in the same region as the BIG-IP devices to avoid inter-region traffic costs when sending data to the workspace if you are using network isolation.

In the Azure Portal search bar type "Log Analytics workspaces" and + Create. All that is needed is a name and a region. Once created, navigate to "Settings" and "Agents". In the section "Log Analytics agent instructions" you will see the Workspace ID and the Primary Key you need for the TS declaration.

Using MSI

Telemetry Streaming v1.11 added support for sending data to Azure with an Azure Managed Service Identity (MSI). An MSI is a great way of maintaining secure access between Azure objects by leveraging Entra ID (formerly Azure AD) to grant access without needing keys. The Primary Workspace key may be regenerated at some point (this may be part of the customer's key rotation policies), and if this happens, TS will stop working as Azure will reject the incoming telemetry connection from the BIG-IP.

To use the MSI, create it in the Azure Portal and assign it to the virtual machine running the BIG-IP (Security/Identity). I would recommend creating a user-assigned MSI rather than a system-assigned one. The system identity is restricted to a single resource and only exists for the lifetime of that resource; a user-assigned MSI can be assigned to multiple BIG-IP machines. Once created, assign the following role on the Log Analytics Workspace (in the "Access Control" blade of the LAW): "Log Analytics Contributor".

Send the TS declaration

We can now send the TS declaration. The endpoint you need to reach is:

POST https://{{BIG_IP_DEVICE_IP}}/mgmt/shared/telemetry/declare

Before I give the declaration, there are a few issues I found using TS in Azure which I need to explain...

System Logging Issue

The telemetry logs from the BIG-IP create what are known as "Custom Logs" in the LAW. These are explained in more detail at the end of this article, but the most important thing about them is that they have a limit of 500 columns for each Log Type. This was originally causing issues as the BIG-IP was creating a set of columns for all the properties of each named item, and very soon the 500-column limit was reached. F5 had already spotted this issue and fixed it in v1.24 with an option "format" with the value "propertyBased" on the Consumer class (ref: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/setting-up-consumer.html#additions-to-the-azure-log-analytics-consumer).

However, I found that when ASM is enabled on the BIG-IP, each signature update download was creating a set of additional columns in the System log, which eventually took it over the 500 limit again. This has now been fixed by F5 in TS v1.37.0 with a new "format" value of "propertyBasedV2". This puts asmAttackSignatures under the Log Type F5Telemetry_asmAttackSignatures instead of F5Telemetry_system.
If you see any logs either not appearing or stopping in Azure, you can check for this error with the following Kusto query:

Operation
| where OperationCategory contains "Ingestion"

AVR Logging Issue

When ASM is enabled, it automatically enables AVR as well. This creates an AVR log which has a LOT of data in it. I've noticed that the AVR log is both excessive and can also exceed the 500-column limit due to the mishmash of data in it. Therefore, in my declaration I have made use of the excludeData option in the TS Listener to remove some of the log sources - the column/field 'Entity_s' identifies the source of the data:

DNS_Offbox_All - this generates a log for every DNS request which is made. If your BIG-IP is acting as a DNS cache for your environment, this very quickly becomes a massive log.
ProcessCpuUtil - again, this creates a load of additional columns which record the CPU utilisation of every running process on the device. Might be useful to some... not to me!
TcpStat - this logs TCP events against each TCP profile on the BIG-IP (whether used or not) every 5 minutes. If you have a busy device, they quickly flood the log.
IruleEvents - shows data associated with iRules (times triggered, event counts, etc.). I had no use for this data. I use iRules but did not need statistics on how many times an iRule was used.
ACL_STAGE and ACL_FORCE - these seemed to be pointless logs related to AFM, not really giving any information which isn't already in the AFM logs. It was duplicated data of no value.

There were also a number of other AVR logs which did not seem to create any meaningful data for me. These were: ServerHealth, GtmWideip, BOT DEFENSE EVENT, InterfaceHealth, AsmBypassInfo, FwNatTransSrc, FwNatTransDest.

I have therefore excluded these log types. This is not an exhaustive list of entity types in the AVR logs, but hopefully omitting these will (a) reduce your log sizes and (b) prevent the 500-column issue. If you want to analyse what different types (entities) of logs are in the AVR log, the following Kusto query can be run:

F5Telemetry_AVR_CL
| summarize count() by Entity_s

This will show the number of AVR logs for each Entity type (source). You can then run a query for a specific type, analyse the content, and decide whether to filter it or not.

Declaration

OK - after all that, here is my declaration. It contains the following:

A Telemetry System class - this is needed to generate the device system logging, which goes into a log called "F5Telemetry_system_CL".
A System Poller - this collects system data at a set interval (60 seconds is a reasonable setting here, producing logs which are not too large but with good granularity of data). The System Poller also allows us to filter logs using excludeData. We exclude the following:
  asmAttackSignatures - as explained above, these should no longer appear in System logs, but this is just to make sure!
  diskLatency - this is a large set of columns storing the disk stats. As we are using VMs in Azure, this info is available within the Azure IaaS service, so I did not see any point in collecting it again at the VM level, especially as the latency is a function of the selected machine type in Azure.
  location - this is just the SNMP location, a waste of a column name.
  description - this is just the SNMP description, a waste of a column name.
A Telemetry Listener - this is used to listen for and collect event logs it receives on the specified port from configured BIG-IP system services, including LTM, ASM, AFM and AVR.
A Telemetry Push Consumer - this is used to push the collected data to Azure. It is here we use the workspace ID and the primary key we collected in the above steps.

{
  "class": "Telemetry",
  "controls": {
    "class": "Controls",
    "logLevel": "info",
    "debug": false
  },
  "telemetry-system-azure": {
    "class": "Telemetry_System",
    "trace": false,
    "allowSelfSignedCert": true,
    "host": "localhost",
    "port": 8100,
    "protocol": "http",
    "systemPoller": [
      "telemetry-systemPoller-azure"
    ]
  },
  "telemetry-systemPoller-azure": {
    "class": "Telemetry_System_Poller",
    "interval": 60,
    "actions": [
      {
        "excludeData": {},
        "locations": {
          "system": {
            "asmAttackSignatures": true,
            "diskLatency": true,
            "tmstats": true,
            "location": true,
            "description": true
          }
        }
      }
    ]
  },
  "telemetry-listener-azure": {
    "class": "Telemetry_Listener",
    "port": 6514,
    "enable": true,
    "trace": false,
    "match": "",
    "actions": [
      {
        "setTag": {
          "tenant": "`T`",
          "application": "`A`"
        },
        "enable": true
      },
      {
        "excludeData": {},
        "ifAnyMatch": [
          { "Entity": "DNS_Offbox_All" },
          { "Entity": "ProcessCpuUtil" },
          { "Entity": "TcpStat" },
          { "Entity": "IruleEvents" },
          { "Entity": "ACL_STAGE" },
          { "Entity": "ACL_FORCE" },
          { "Entity": "ServerHealth" },
          { "Entity": "GtmWideip" },
          { "Entity": "BOT DEFENSE EVENT" },
          { "Entity": "InterfaceHealth" },
          { "Entity": "AsmBypassInfo" },
          { "Entity": "FwNatTransSrc" },
          { "Entity": "FwNatTransDest" }
        ],
        "locations": {
          "^.*$": true
        }
      }
    ]
  },
  "telemetry-pushConsumer-azure": {
    "class": "Telemetry_Consumer",
    "type": "Azure_Log_Analytics",
    "format": "propertyBasedV2",
    "trace": false,
    "workspaceId": "{{LOG_ANALYTICS_WORKSPACE_ID}}",
    "passphrase": {
      "cipherText": "{{LOG_ANALYTICS_PRIMARY_KEY}}"
    },
    "useManagedIdentity": false
  }
}

Note: If you are using an MSI managed identity, the consumer changes to this:

"telemetry-pushConsumer-azure": {
  "class": "Telemetry_Consumer",
  "type": "Azure_Log_Analytics",
  "format": "propertyBasedV2",
  "trace": false,
  "useManagedIdentity": true
}

You need to look for a "200 OK" response to come back from the REST client. The logs for Telemetry go into /var/log/restnoded/restnoded.log and will alert if there are errors in connectivity from the BIG-IP into LAW.
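If you would rather script the POST than use a GUI REST client, a minimal curl sketch looks like this - assuming the declaration above is saved as ts_declaration.json and you are authenticating with the admin account (the -k flag is there because the management interface typically presents a self-signed certificate). As noted above, you are looking for a 200 OK response.

# Push the Telemetry Streaming declaration to the BIG-IP
curl -sk -u admin:'<password>' \
  -H "Content-Type: application/json" \
  -X POST "https://<BIG_IP_DEVICE_IP>/mgmt/shared/telemetry/declare" \
  --data @ts_declaration.json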
"irule-telemetryLocalRule": { "class": "iRule", "remark": "Telemetry Streaming", "iRule": { "base64": "d2hlbiBDTElFTlRfQUNDRVBURUQgcHJpb3JpdHkgNTAwIHsNCiAgbm9kZSAxMjcuMC4wLjEgNjUxNA0KfQ==" } }, "logDestination-telemetryHsl": { "class": "Log_Destination", "type": "remote-high-speed-log", "protocol": "tcp", "pool": { "use": "pool-telemetry" } }, "logDestination-telemetry": { "class": "Log_Destination", "type": "splunk", "forwardTo": { "use": "logDestination-telemetryHsl" } }, "logPublisher-telemetry": { "class": "Log_Publisher", "destinations": [ { "use": "logDestination-telemetry" } ] }, "pool-telemetry": { "class": "Pool", "remark": "Telemetry Streaming to Azure Sentinel", "monitors": [ ], "members": [ { "serverAddresses": [ "255.255.255.254" ], "adminState": "enable", "servicePort": 6514 } ] }, "vip-telemetryLocal": { "class": "Service_TCP", "virtualAddresses": [ "255.255.255.254" ], "iRules": [ "irule-telemetryLocalRule" ], "pool": "pool-telemetry", "remark": "Telemetry Streaming", "addressStatus": true, "virtualPort": 6514 } The iRule is base64 encoded in the AS3 declaration above but is just this: when CLIENT_ACCEPTED priority 500 { node 127.0.0.1 6514 } Now you have a Log Publisher which routes to a local pool mapping to the loopback of the BIG-IP. The TS Listener will then pick this up (notice the port in the TS Declaration object "telemetry-listener-azure" matches the Log High Speed Logging Destination pool (6514)). Loopback Issue When creating the virtual server above, tmm errors are observed which prevent logging via the Telemetry virtual server iRule as it rejects remapping to the loopback. The following log is seen in /var/log/ltm : testf5 err tmm1[6506]: 01220001:3: TCL error: /Common/Shared/irule-telemetryLocalRule - disallow self or loopback connection (line 1)TCL error (line 1) (line 1) invoked from within "node 127.0.0.1 6514" Ref: After an upgrade, iRules using the loopback address may fail and log TCL errors (f5.com) To fix this, change the following db value: tmsh modify sys db tmm.tcl.rule.node.allow_loopback_addresses value true tmsh save sys config Adding AFM logs The Advanced Firewall Manager allows firewall policy to be defined at a number of points (called "contexts") in the flow of traffic through the F5. A global policy can be applied, or a policy can be added at the Self-IP, Route Domain, or Virtual Server level. What is important to realize is that there is a pre-built Security Logging Profile for all policies operating at the 'Global' context - called global-network. If your policy is applied as a global policy, you have to change this profile to get logging into Azure. The profile is here under Security / Event Logs / Logging Profiles: Click on the 'global-network' profile and in the "Network Firewall" tab set the publisher to the one you have built above. You can also decide what to log - at the least you should log any policy drops or rejects: For any AFM policies added at any other context, you can create your own logging profile The logs produced go into the Azure custom log: F5Telemetry_AFM_CL. A log is produced for every firewall event with the column "action_s" recording the rule match action (Accept, Drop or Reject). Adding ASM, DDoS and IDPS logs Logging for the Application Security Manager (ASM), Protocol Inspection (IDPS) and DoS Protection features are all via a Security Logging Profile which is then assigned to the virtual server. 
"security-loggingProfile": { "class": "Security_Log_Profile", "application": { "localStorage": false, "remoteStorage": "splunk", "protocol": "tcp", "servers": [ { "address": "127.0.0.1", "port": "6514" } ], "storageFilter": { "requestType": "all" } }, "network": { "publisher": { "use": "logPublisher-telemetry" }, "logRuleMatchAccepts": false, "logRuleMatchRejects": true, "logRuleMatchDrops": true, "logIpErrors": true, "logTcpErrors": true, "logTcpEvents": true }, "dosApplication": { "remotePublisher": { "use": "logPublisher-telemetry" } }, "dosNetwork": { "publisher": { "use": "logPublisher-telemetry" } }, "protocolDnsDos": { "publisher": { "use": "logPublisher-telemetry" } }, "protocolInspection": { "publisher": { "use": "logPublisher-telemetry" }, "logPacketPayloadEnabled": true } }, In the example above we are enabling logging for the ASM in the "application" property. An important configuration here is server setting. ASM logging only works if the address used here is 127.0.0.1 and port tcp/6514. In the GUI it looks like this: We have also enabled logging for DoS and IDS/IPS (Protocol Inspection). This is more straightforward as it just references the Log Publisher we created earlier: To assign the various Security features to the virtual server, we use the Security Policy tab and as we mentioned, this is also where we assign the Security Log Profile we created earlier: An example AS3 code snippet for a HTTP virtual server matching what you see in the GUI above is shown below: "vip-testapi": { "class": "Service_HTTPS", "virtualAddresses": [ "172.16.255.254" ], "shareAddresses": false, "profileHTTP": { "use": "http" }, "remark": "Test API", "addressStatus": true, "allowVlans": [ "vlan001" ], "virtualPort": 443, "redirect80": false, "snat": "auto", "policyWAF": { "use": "policy-test" }, "profileDOS": { "use": "dos" }, "profileProtocolInspection": { "use": "protocol_inspection_http" }, "securityLogProfiles": [ { "bigip": "security-loggingProfile" } ] } The securityLogProfiles property references the logging profile we created above. Note that an "Application Security Policy" (property: policyWAF) can only be enabled when the virtual server is of type: Service_HTTP or Service_HTTPS and has a HTTP profile assigned (property: profileHTTP). The outputted logs from the various security managers end up in the following logs: Advanced Firewall Manager (AFM) F5Telemetry_AFM_CL | where isnotempty(acl_policy_name_s) Application Security Manager (ASM) F5Telemetry_ASM_CL DoS Protection F5Telemetry_AVR_CL | where Entity_s contains "DosVisibility" or Entity_s contains "AfmDosStat" Protocol Inspection F5Telemetry_AFM_CL | where isnotempty(insp_id_s) Adding DNS logs If you are using the BIG-IP as an DNS (formally GTM) for GSLB Wide IP load balancing, you will probably want to see the GSLB requests logged in Azure. I found a couple of issues with this... Firstly, the DNS logging profile does not support the "splunk" format which the log destination needs to be for the AFM logging. 
Adding DNS logs

If you are using the BIG-IP as a DNS (formerly GTM) for GSLB Wide IP load balancing, you will probably want to see the GSLB requests logged in Azure. I found a couple of issues with this...

Firstly, the DNS logging profile does not support the "splunk" format which the log destination needs to be for the AFM logging. If you create a separate log destination with the "syslog" format, this creates a separate log in Azure called "F5Telemetry_event_CL" which just dumps the raw data into a "data_s" column.

Therefore, what I have done is create a GTM iRule which can be added to the GSLB Listener and used to generate request/response DNS logs into the F5Telemetry_LTM_CL log:

when DNS_REQUEST priority 50 {
    set hostname [info hostname]
    set ldns [IP::client_addr]
    set vs_name [virtual name]
    set q_name [DNS::question name]
    set q_type [DNS::question type]
    set now [clock seconds]
    set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}]
    if { $q_type == "A" or $q_type == "AAAA" } {
        set hsl_reqlog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"]
        HSL::send $hsl_reqlog "event_source=\"dns_request_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"\",event_timestamp=\"$ts\"\n"
        unset -- hsl_reqlog
    }
    unset -- ldns vs_name q_name q_type now ts
}
when DNS_RESPONSE priority 50 {
    set hostname [info hostname]
    set ldns [IP::client_addr]
    set vs_name [virtual name]
    set q_name [DNS::question name]
    set q_type [DNS::question type]
    set q_answer [DNS::answer]
    set now [clock seconds]
    set ts [clock format $now -format {%a, %d %b %Y %H:%M:%S %Z}]
    if { $q_type == "A" or $q_type == "AAAA" } {
        set hsl_reslog [HSL::open -proto TCP -pool "/Common/Shared/pool-telemetry"]
        HSL::send $hsl_reslog "event_source=\"dns_response_logging\",hostname=\"$hostname\",client_ip=\"$ldns\",server_ip=\"\",http_method=\"\",http_uri=\"\",virtual_name=\"$vs_name\",dns_query_name=\"$q_name\",dns_query_type=\"$q_type\",dns_query_answer=\"$q_answer\",event_timestamp=\"$ts\"\n"
        unset -- hsl_reslog
    }
    unset -- ldns vs_name q_name q_type q_answer now ts
}

Just add this to the GTM Listener and ensure you don't have the DNS logging profile enabled in the DNS profile.

Here are the logs (nicely formatted!): the event_source_s column is set to "dns_request_logging" and "dns_response_logging" to distinguish these entries from the LTM request logs in the same log.
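To pull just the GSLB answers out of F5Telemetry_LTM_CL, a query along these lines works. The column names here are an assumption derived from the iRule template fields above plus the _s suffix Log Analytics adds to string fields, so verify them against your own workspace before relying on them.

// DNS responses logged by the GTM iRule (column names assumed from the template fields)
F5Telemetry_LTM_CL
| where event_source_s == "dns_response_logging"
| project TimeGenerated, client_ip_s, virtual_name_s, dns_query_name_s, dns_query_type_s, dns_query_answer_s
| order by TimeGenerated desc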
Adding LTM logs

LTM logs in Sentinel are sent to the custom log F5Telemetry_LTM_CL and are the output of the Request Logging service in BIG-IP. This creates a log for every HTTP request (and optionally the response) made through a virtual server which has an HTTP profile applied and which also includes a Request Logging profile. Request Logging uses High Speed Logging (HSL) to send the logs directly out of TMM. We already set up an HSL log destination in our base AS3 declaration, so we can use this.

The request log is very flexible in what you can record, and the fields are detailed here: Configuring request logging using the Request Logging profile (f5.com). I find $TIME_USECS a useful field; it is added to the Microtimestamp column and can be used to tie together the request with the response when troubleshooting.

Here is the AS3 code snippet for adding a Request Logging profile:

"profile-ltmRequestLog": {
  "class": "Traffic_Log_Profile",
  "requestSettings": {
    "requestEnabled": true,
    "requestProtocol": "mds-tcp",
    "requestPool": {
      "use": "pool-telemetry"
    },
    "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",dest_ip=\"$VIRTUAL_IP\",dest_port=\"$VIRTUAL_PORT\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",Microtimestamp=\"$TIME_USECS\""
  },
  "responseSettings": {
    "responseEnabled": true,
    "responseProtocol": "mds-tcp",
    "responsePool": {
      "use": "pool-telemetry"
    },
    "responseTemplate": "event_source=\"response_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\",http_statcode=\"$HTTP_STATCODE\",http_status=\"$HTTP_STATUS\",Microtimestamp=\"$TIME_USECS\",response_ms=\"$RESPONSE_MSECS\""
  }
}

Note that it references the High Speed Logging pool we created earlier. If you want to add the template in the BIG-IP GUI, below is the formatted text to add to the template fields. Make sure 'Request Logging' and 'Response Logging' are enabled, the HSL Protocol is TCP, and the Pool Name is the pool we created earlier (called 'pool-telemetry' in my example).

Request Settings / Template:

event_source="request_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",dest_ip="$VIRTUAL_IP",dest_port="$VIRTUAL_PORT",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",Microtimestamp="$TIME_USECS"

Response Settings / Template:

event_source="response_logging",hostname="$BIGIP_HOSTNAME",client_ip="$CLIENT_IP",server_ip="$SERVER_IP",http_method="$HTTP_METHOD",http_uri="$HTTP_URI",virtual_name="$VIRTUAL_NAME",event_timestamp="$DATE_HTTP",http_statcode="$HTTP_STATCODE",http_status="$HTTP_STATUS",Microtimestamp="$TIME_USECS",response_ms="$RESPONSE_MSECS"

Sending syslog to Azure

Some errors on the system may not show up in the standard telemetry logging tables - in particular, TLS errors due to certificate issues (reported by the pkcs11d daemon) do not generate logs. To aid reporting, we can redirect syslog for any logs of a particular level (e.g. warning and above) and push them to the localhost on port 6514; they are then picked up by the Telemetry System listener and pushed out to Azure Log Analytics.

(tmos)# edit /sys syslog all-properties

This opens up the settings in the vi editor. In the edited section remove the line:

include none

and replace it with:

include "
filter f_remote_loghost { level(warning..emerg); };
destination d_remote_loghost { udp(\"127.0.0.1\" port(6514)); };
log { source(s_syslog_pipe); filter(f_remote_loghost); destination(d_remote_loghost); };
"

Then write-quit vi (type ':wq'). You should get the prompt "Save changes? (y/n/e)" - select 'y'. Finally, save the config:

(tmos)# save /sys config

This will create a new custom log called F5Telemetry_syslog_CL which contains the syslog messages. The messages are sent in raw format, so they need a bit of Kusto manipulation.
The following KQL extracts the data into columns holding the reporting process/daemon, the severity, the hostname, and the log text:

F5Telemetry_syslog_CL
| extend processName = extract(@'([\w-]+)\[\d+\]:', 1, data_s)
| extend message_s = extract(@'\[\d+\]: (.*)', 1, data_s)
| extend severity = extract(@'(\w+)\s[\w-]+\[\d+\]', 1, data_s)
| extend severity_s = replace_strings(
    severity,
    dynamic(['err', 'emerg']),      // Lookup strings
    dynamic(['error', 'emergency']) // Replacements
  )
| project TimeGenerated, Severity = severity_s, Process = processName, ['Log Message'] = message_s, Hostname = tostring(split(hostname_s, ".")[0])
| order by TimeGenerated desc

The output looks like this:

The Azure Log Collector API

Telemetry Streaming leverages the Azure HTTP Data Collector API as a client and uses the exposed REST API to send formatted log data. All data in Log Analytics is stored as a record with a particular record type. TS formats the data as multiple records in JSON format, with appropriate headers to direct the data into specific logs. An individual record is created for each record in the request payload. The data sent into the Azure Monitor HTTP Data Collector API via Telemetry Streaming is placed into records whose Type is equal to the specified LogType value with _CL appended. For example, the Telemetry System listener creates logs with a LogType of "F5Telemetry_system", which outputs all records into a custom log in the Log Analytics Workspace called F5Telemetry_system_CL.

Reference: https://learn.microsoft.com/en-us/previous-versions/azure/azure-monitor/logs/data-collector-api

Note: Please be aware that the API has been deprecated and will no longer be functional as of 14/09/2026. Hopefully TS will be updated accordingly.

F5 XC vk8s workload with Open Source Nginx
I have shared the code in the link below under the DevCentral code share: F5 XC vk8s open source nginx deployment on RE | DevCentral

Here I will describe the basic steps for creating a workload object. This is an F5 XC custom Kubernetes object that creates Kubernetes deployments, pods and ClusterIP-type services in the background. The free unprivileged nginx image is here: nginxinc/docker-nginx-unprivileged: Unprivileged NGINX Dockerfiles (github.com)

Create a virtual site that groups your Regional Edges and Customer Edges. After that, create the vK8s virtual Kubernetes object and relate it to the virtual site. Note: keep in mind the limitations of Kubernetes deployments on Regional Edges mentioned in Create Virtual K8s (vK8s) Object | F5 Distributed Cloud Tech Docs.

First create the workload object and select type service, which can be related to a Regional Edge virtual site or a Customer Edge virtual site. After that, select the container image that will be loaded from a public repository like GitHub or a private repo.

You will need to configure an advertise policy that will expose the pod/container with a Kubernetes ClusterIP service. If you are deploying test containers, you will not need to advertise the container. To trigger commands at container start, you may need to use /bin/bash -c -- and an argument. Note: this is not related to this workload deployment; it is just an example.

Select to overwrite the default config file for the open source unprivileged nginx with a file mount. Note: the volume name shouldn't have a dot, as it will cause issues.

For the image options, select a repository with no rate limit, as otherwise you will see an error under the events for the pod. You can also configure a command and parameters to push to the container that will run on boot up.

You can use emptyDir volumes on the virtual Kubernetes on the Regional Edges for volume mounts like the log directory or the nginx cache zone, but the unprivileged nginx exports the logs to the XC GUI by default, so there is no need. Note: this is not related to this workload deployment; it is just an example.

The logs and events can be seen under the pod dashboard, and the container/pod can even be accessed. Note: for some workloads, to see the logs from the XC GUI you will need to direct the output to stderr, but this is not needed for nginx.

After that, you can reference the auto-created Kubernetes ClusterIP service in an origin pool, using the workload name and the XC namespace (for example niki-nginx.default). Note: use the same virtual site where the workload was attached and the same port as in the advertise cluster config.

Deployments and ClusterIP services can be created directly without a workload, but it is better to use the workload option. When you modify the config of the nginx, you are actually modifying a ConfigMap that the XC workload has created in the background and mounted as a volume in the deployment, but you will need to trigger a deployment recreation, which as of now is not supported by the XC GUI. From the GUI you can scale the workload to 0 pod instances and then back to 1, but a better solution is to use kubectl.

You can log into the virtual Kubernetes like any other k8s environment using a cert, and then you can run the command "kubectl rollout restart deployment/niki-nginx". Just download the SSL/TLS cert. You can automate the entire process using the XC API, and then you can use normal Kubernetes automation to run the restart command: F5 Distributed Cloud Services API for ves.io.schema.views.workload | F5 Distributed Cloud API Docs
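As a rough illustration, the restart flow from a workstation looks something like this once you have downloaded the vK8s kubeconfig/cert from the XC console (the kubeconfig file name here is just a placeholder):

# Point kubectl at the vK8s kubeconfig downloaded from the XC console
export KUBECONFIG=./ves-vk8s-kubeconfig.yaml

# Confirm the workload's pods are running (the workload in this article lives in the default namespace)
kubectl get pods -n default

# Recreate the pods so the updated ConfigMap is picked up by the deployment
kubectl rollout restart deployment/niki-nginx -n default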
F5 XC has added proxy_protocol support, so the nginx container can now work directly with the real client IP addresses without XFF HTTP headers, or with non-HTTP services like SMTP that nginx supports. In this way XC can now act as a layer 7 proxy for email/SMTP traffic 😉. You just need to add the "proxy_protocol" directive and log the variable "$proxy_protocol_addr" - see the config sketch after the related resources below.

Related resources:

For nginx Plus deployments with advanced functions like SAML or OpenID Connect (OIDC), or the advanced functions of the Nginx Plus dynamic modules like njs that allow JavaScript scripting (similar to F5 BIG-IP or BIG-IP Next TCL-based iRules), see:

Enable SAML SP on F5 XC Application
Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Dynamic Modules | NGINX Documentation
njs scripting language (nginx.org)
Accepting the PROXY Protocol | NGINX Documentation
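As mentioned above, here is a minimal nginx config sketch for accepting the PROXY protocol and logging the real client address. The directives go inside the http {} context of the mounted nginx.conf; the port and log path are examples, so adapt them to the unprivileged image's defaults.

# Log the client address carried in the PROXY protocol header
log_format proxied '$proxy_protocol_addr - [$time_local] "$request" $status';

server {
    # Accept PROXY protocol on the listener that F5 XC sends traffic to
    listen 8080 proxy_protocol;

    access_log /tmp/access.log proxied;

    location / {
        # Echo the real client address back, just to verify the setup
        return 200 "client was $proxy_protocol_addr\n";
    }
}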