F5 XC Session tracking with User Identification Policy

F5 AWAF/ASM has a feature called session tracking that can track and block users who generate too many violations, based not only on the source IP address but also on identifiers such as the BIG-IP AWAF/ASM session cookie. What about F5 XC Distributed Cloud? That is the question this article answers 😉

Why is tracking on IP addresses sometimes not enough? XC has a feature called Malicious Users that blocks users who generate too many service policy, WAF, bot or other violations. By default users are tracked by source IP address, but what happens when there are proxies or NAT devices in front of the XC Cloud? Then traffic for many users arrives from a single IP address, and when that address is blocked, many users get blocked, not just the one who caused the violations. With that question answered, let's look at the options we have. Reference: AI/ML detection of Malicious Users using F5 Distributed Cloud WAAP

Trusted Client IP header
This option is useful when the real client IP addresses are carried in something like an XFF header that a proxy in front of F5 XC adds. When this option is enabled, XC automatically uses the header instead of the IP packet to determine the client IP address and to enforce Rate Limiting, Malicious Users blocking and so on. The IP address from the header is also shown as the source IP in the XC logs, and if no such header is present, the IP address from the packet is used as a fallback. Reference: How to setup a Client IP as the Source IP on the HTTP Load Balancer headers? – F5 Distributed Cloud Services (zendesk.com); Overview of Trusted Client IP Headers in F5 Distributed Cloud Platform

User Identification Policies
The second, more versatile feature is the XC User Identification Policies. The default identifier is "Client IP", which is the client IP from the IP packet or, if a "Trusted Client IP header" is configured, the IP address from that header. When customized, the feature can use TLS fingerprints, HTTP headers like the "Authorization" header and other options to track users, enforce rate limiters on them or, if Malicious Users is enabled, block them based on the configured identifier when they generate too many WAF violations, and much more. User identification falls back to the IP address in the packet if the source user cannot be identified, and multiple identification rules can be configured and evaluated one after another, so the fallback to the packet IP address only happens when no identification rule matches! If the application cookie of the backend/upstream origin server is used for user identification and the XC WAF App Firewall is enabled, you can also use Cookie Protection to prevent the cookie from being sent from another IP address! The demo Juice Shop app at https://demo.owasp-juice.shop/ can be used for such testing! A quick command-line illustration follows the reference list below.

References
Lab 3: Malicious Users (f5.com)
Malicious Users | F5 Distributed Cloud Technical Knowledge
Configuring user session tracking (f5.com)
How to configure Cookie Protection – F5 Distributed Cloud Services (zendesk.com)
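One way to see what a non-IP identifier changes is to send requests from the same machine with different Authorization headers; if the User Identification Policy is configured to use that header, each value is tracked as a separate user even though the source IP is the same. This is only a minimal sketch - the hostname and token values below are placeholders:

# Same source IP, but two different "users" when the Authorization header is the configured identifier
curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer token-user-a" https://app.example.com/
curl -s -o /dev/null -w "%{http_code}\n" -H "Authorization: Bearer token-user-b" https://app.example.com/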
F5 XC vk8s workload with Open Source Nginx

I have shared the code in the link below under DevCentral code share: F5 XC vk8s open source nginx deployment on RE | DevCentral. Here I will describe the basic steps for creating a workload object, the F5 XC custom Kubernetes object that creates Kubernetes deployments, pods and ClusterIP-type services in the background. The free unprivileged nginx image is nginxinc/docker-nginx-unprivileged: Unprivileged NGINX Dockerfiles (github.com).

Create a virtual site that groups your Regional Edges and Customer Edges. After that, create the vK8s virtual Kubernetes object and relate it to the virtual site. Note: keep in mind the limitations of Kubernetes deployments on Regional Edges mentioned in Create Virtual K8s (vK8s) Object | F5 Distributed Cloud Tech Docs.

First create the workload object and select type "service", which can be related to a Regional Edge virtual site or a Customer Edge virtual site. Then select the container image, which will be loaded from a public repository like GitHub or from a private repo. You will need to configure an advertise policy that exposes the pod/container with a Kubernetes ClusterIP service. If you are deploying test containers, you will not need to advertise the container. To trigger commands at container start, you may need to use /bin/bash -c -- and an argument. Note: this is not needed for this workload deployment; it is just an example.

Select to overwrite the default config file of the open source unprivileged nginx with a file mount. Note: the volume name should not contain a dot, as that will cause issues. For the image options, select a repository with no rate limit, otherwise you will see an error under the events for the pod. You can also configure a command and parameters to pass to the container that will run on boot-up. You can use an emptyDir volume on the virtual Kubernetes on the Regional Edges for mounts like the log directory or the nginx cache zone, but the unprivileged nginx by default exports its logs to the XC GUI, so there is no need. Note: this is not needed for this workload deployment; it is just an example.

The logs and events can be seen under the pod dashboard, and the container/pod can even be accessed. Note: for some workloads you will need to direct the output to stderr to see the logs in the XC GUI, but not for nginx.

After that you can reference the auto-created Kubernetes ClusterIP service in an origin pool, using the workload name and the XC namespace (for example niki-nginx.default). Note: use the same virtual site where the workload was attached and the same port as in the advertise cluster config. Deployments and ClusterIP services can also be created directly without a workload, but it is better to use the workload option.

When you modify the nginx config, you are actually modifying a ConfigMap that the XC workload has created in the background and mounted as a volume in the deployment, but you will need to trigger a recreation of the deployment, which as of now is not supported in the XC GUI. From the GUI you can scale the workload to 0 pod instances and then back to 1, but a better solution is to use kubectl. You can log into the virtual Kubernetes like any other k8s environment using a cert and then run the command "kubectl rollout restart deployment/niki-nginx" - just download the SSL/TLS cert. You can automate the entire process using the XC API and then use normal Kubernetes automation to run the restart command: F5 Distributed Cloud Services API for ves.io.schema.views.workload | F5 Distributed Cloud API Docs! A short kubectl sketch of this restart workflow is shown below.
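A minimal sketch of that restart workflow, assuming the vK8s kubeconfig has already been downloaded from the XC console (the file name below is a placeholder) and the workload is named niki-nginx in the default namespace:

# Point kubectl at the downloaded vK8s credentials
export KUBECONFIG=~/Downloads/ves-vk8s-kubeconfig.yaml
# Confirm the workload pods are running
kubectl get pods -n default
# Recreate the pods so the updated ConfigMap is picked up
kubectl rollout restart deployment/niki-nginx -n default
# Watch the rollout finish
kubectl rollout status deployment/niki-nginx -n default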
F5 XC has added proxy_protocol support, so the nginx container can now work directly with the real client IP addresses without XFF HTTP headers, and even with non-HTTP services that nginx supports, like SMTP - this way XC can now act as a layer 7 proxy for email/SMTP traffic 😉. You just need to add the "proxy_protocol" directive and log the variable "$proxy_protocol_addr". A minimal config sketch is shown after the related resources below.

Related resources:
For NGINX Plus deployments with advanced functions like SAML or OpenID Connect (OIDC), or the advanced functions of the NGINX Plus dynamic modules like njs, which allows JavaScript scripting (similar to F5 BIG-IP or BIG-IP Next TCL-based iRules), see:
Enable SAML SP on F5 XC Application Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Dynamic Modules | NGINX Documentation
njs scripting language (nginx.org)
Accepting the PROXY Protocol | NGINX Documentation
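As a rough illustration, this is the kind of minimal config that could be mounted over the default nginx config of the unprivileged image. It is only a sketch based on that image's defaults (listen port 8080, pid file in /tmp) and it assumes the XC side is actually configured to send the PROXY protocol to this origin; adjust paths and ports to your deployment:

worker_processes 1;
pid /tmp/nginx.pid;
error_log /dev/stderr notice;
events { worker_connections 1024; }
http {
    # $proxy_protocol_addr carries the real client IP taken from the PROXY protocol preamble
    log_format proxied '$proxy_protocol_addr - [$time_local] "$request" $status';
    access_log /dev/stdout proxied;
    server {
        # proxy_protocol on the listener tells nginx to expect the PROXY protocol header from XC
        listen 8080 proxy_protocol;
        location / {
            return 200 "real client: $proxy_protocol_addr";
        }
    }
}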
F5 XC Distributed Cloud HTTP Header manipulations and matching of the client ip/user HTTP headers

1. F5 XC Distributed Cloud HTTP header manipulations
In F5 XC Distributed Cloud, some client information is saved to variables that can be inserted into HTTP headers, similar to how F5 BIG-IP saves data that can afterwards be used in an iRule or a Local Traffic Policy. By default XC inserts an XFF header with the client IP address, but what if the end servers want an HTTP header with another name to carry the real client IP? Under the HTTP load balancer, under "Other Options" and then "More Options", the "Header Options" can be found. There the predefined variables can be used for this job; in the example below, $[client_address] is used. A list of the predefined variables for F5 XC: https://docs.cloud.f5.com/docs/how-to/advanced-security/configure-http-header-processing. There is also a $[user] variable, and maybe in the future, if F5 XC does the authentication of the users itself, this option will insert the user in a proxy chaining scenario, but for now I think it just manipulates data in the XAU (X-Authenticated-User) HTTP header.

2. Matching of the real client IP HTTP headers
You can also match an XFF header, if it is inserted by a proxy device before the F5 XC nodes, for security bypass/blocking or for logging in F5 XC.

For user logging from the XFF header, under "Common Security Controls" create a "User Identification Policy". You can also match a regex against the IP address; this is for the case where there are multiple IP addresses in the XFF header, because there could have been many proxy devices in the data path and we want to see if a particular one is present.

For security bypass or blocking based on the XFF header, under "Common Security Controls" create "Trusted Client Rules" or "Client Blocking Rules". Also, if you have a "User Identification Policy", you can just use the "User Identifier", but it cannot use a regex in this case. To match a regex value in the header that is just a single IP address, even when the header has many IP addresses, use a regex such as (1\.1\.1\.1) to match the address 1.1.1.1 (a command-line illustration of this case is included at the end of this article).

To use the client IP address as the source IP address towards the backend origin servers in the TCP packet after going through F5 XC (similar to removing the SNAT pool or Automap in F5 BIG-IP), use the option below. In the same way, the XAU (X-Authenticated-User) HTTP header can be used in a proxy chaining topology, when there is a proxy before F5 XC that has added this header.

Edit: Keep in mind that in some cases the XC regex, for example (1\.1\.1\.1), should be written without the parentheses as 1\.1\.1\.1, so test it, as this could be something new; I have seen it in service policy regex matches when making a new custom signature that was not in the WAAP WAF XC policy. I could make a separate article for this 🙂
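As a rough illustration of the multiple-address case, a request can be sent from the command line with an XFF header that already contains several proxy hops; the hostname and addresses below are placeholders, and the header is only evaluated this way when the matching rules described above are configured in XC:

# XFF with several proxy hops; a regex like 1\.1\.1\.1 still matches the one trusted address among them
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "X-Forwarded-For: 198.51.100.7, 1.1.1.1, 10.20.30.40" \
  https://app.example.com/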
What is the Lightning Network?

When I'm thinking of up-and-coming technologies in terms of how they'd fit into my everyday life, I often forget that there are things I assume for myself that aren't necessarily true for others. One of these things is the ease with which I can transact with people and businesses. I can move Canadian dollars to other Canadians for free and instantly. I can exchange money for goods and services from a merchant with just a tap of my phone or bank card. But this is simply not the case in third world countries where banking systems are not as mature or trusted.

Blockchain technology has enabled a number of disruptive use cases, over and above enabling something like Bitcoin. Now what we're seeing are use cases that enable anybody with internet connectivity to execute transactions with others in a direct manner. A use case that builds on this idea is payment exchange in third world countries, and this is built on the Lightning Network.

The Lightning Network is a layer 2 payment protocol. It is built on top of the Bitcoin network, but instead of waiting up to 10 minutes for transactions to settle, this side-chain or layer 2 network can transact instantly. It's capable of making large and small transactions, so it has use cases that can serve C2C, B2C and B2B.

Imagine yourself travelling through Vietnam. You bought lunch. It was $2 USD. You don't have the benefit of tap-to-pay like in your home country. You have some local currency, but you'd prefer not to keep breaking up your larger bills. Constantly converting the currency in your head to keep track of your holiday spending is taking away from the fun of your vacation, and it gets harder and harder as the larger bills get broken down into smaller ones. But if you can pay through the Lightning Network, you settle the transaction in Bitcoin and know exactly how much you've spent.

Or let's say you're a student in North America. Your parents are back home overseas and they need to send over some money for the year's tuition. Money transfer agents can help you move the money, but at a cost. With the Lightning Network, the money can be moved immediately and for little cost.

Or let's say you're a business that needs to wire funds to your supplier. Normally, you'd go to the bank, fill out a wire transfer, hope you got all the numbers right and then wait 5 days for the money to show up in the supplier's account. With the Lightning Network, that transaction can happen immediately and it can be tracked electronically to show it was received.

The market is flooded with a lot of blockchain-based projects that are still finding their way, but I am confident that the Lightning Network is something that's going to take off in certain parts of the world. I was able to arrange an interview with Albert Buu, the Founder and CEO of Neutronpay, a Lightning Network Service Provider (LSP), and got to deep dive into his insights on this emerging use case!
Home Lab Server Build Using an Intel NUC and Free VMware ESXi 7

If you're like me, despite having cheap or even free access to cloud compute, you still want to have a bit of compute in a home lab. I can create and destroy to my heart's content. Things can get weird and messy - and it's nobody's problem but my own. For the past 10 years, my home lab has consisted of a couple of 2U Dell R710 servers. They were beefy in specs, but they are very loud and consume a relatively large amount of power and space. They have served me really well over the years, but it is finally time to upgrade.

I ordered an Intel NUC last year. It should be able to handle the workload I'm running on my Dell servers with room to spare. Due to supply chain issues, it took a few months, but it finally arrived. I was extremely surprised at how small these are. I knew they were small, but I did not expect it to fit in the palm of my hand! I threw on VMware ESXi 7 for the hypervisor, but I wanted to document the build for anyone who is building up a similar setup, as I encountered a couple of issues during my installation.

Here is my complete parts list:
Intel NUC11TNKV7
2x Kingston 32GB DDR4 3200MHz SODIMM
1TB Samsung 970 EVO NVMe

I did document this in a video, but this article also serves as a companion to that since there are a lot of commands involved.

I immediately found out that because the network card on the NUC does not have a compatible driver included on the ESXi 7 image, I had to create an ISO with the Community Network Driver (Fling). The steps are documented here: https://www.virten.net/2021/11/vmware-esxi-7-0-update-3-on-intel-nuc/ - however, I also came across my own nuances, which I'm noting below.

First, download the ESXi Offline Bundle and the Fling Community Network Driver and place them in a temporary folder. You need to install the vmware.powercli and vmware.imagebuilder modules from the PowerShell command line:

install-module -name vmware.powercli
install-module -name vmware.imagebuilder

HOWEVER, the vmware.powercli and vmware.imagebuilder modules are not supported on PowerShell v6 and above, which meant I could not run these commands on my Mac. Luckily, I had a Windows box kicking around with PowerShell v5. I was also getting an error in trying to download the VMware.imagebuilder plugin. As it turns out, my version of PowerShell must have been using TLS 1.0/1.1. These instructions configure TLS 1.2: https://docs.microsoft.com/en-us/powershell/scripting/gallery/installing-psget?view=powershell-7.2

[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12

After all that, I was able to proceed with building the image. The steps were pretty close to what is in the Virten article; however, the version of ESXi they used was pulled and replaced. I ended up with a different build, which is reflected in the file names I used.
# Load the ESXi offline bundle and the Fling community network driver depots into the session
Add-EsxSoftwareDepot .\VMware-ESXi-7.0U3c-19193900-depot.zip
Add-EsxSoftwareDepot .\Net-Community-Driver_1.2.2.0-1vmw.700.1.0.15843807_18835109.zip
# Clone the standard image profile into a new profile for the NUC
New-EsxImageProfile -CloneProfile "ESXi-7.0U3c-19193900-standard" -name "ESXi-7.0U3c-19193900-NUC" -Vendor "buulam"
# Add the community network driver package to the new profile
Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0U3c-19193900-NUC" -SoftwarePackage "net-community"
# Export the customized profile as a bootable ISO
Export-ESXImageProfile -ImageProfile "ESXi-7.0U3c-19193900-NUC" -ExportToISO -filepath ESXi-7.0U3c-19193900-NUC.iso

Note: If you encounter the following error: "windowspowershell\modules\vmware.vimautomation.sdk\12.5.0.19093564\vmware.vimautomation.sdk.psm1 cannot be loaded because running scripts is disabled on this system", you may need to enter the following command:

Set-ExecutionPolicy -ExecutionPolicy AllSigned

Credit to Pawan Jheeta for this find!

Now that I have an ISO image with the Fling Community Network Driver, it was time to create the bootable USB installer. I have a Mac, and here are the steps I used to create the USB flash drive: https://virtuallywired.io/2020/08/01/create-a-bootable-esxi-7-usb-installer-on-macos/. I did not encounter any issues with these steps, so please refer to the linked article to follow them. In case you are running Windows, this appears to be a good guide for creating the USB flash drive: https://www.virten.net/2014/12/howto-create-a-bootable-esxi-installer-usb-flash-drive/

Once you have the bootable USB flash drive created, you can insert it into the Intel NUC and begin your ESXi installation. The remaining steps I will leave to be explained in my video. I accepted all the defaults except for configuring a static IP address for the management interface. I hope this helps some of you out, and if there are any questions, please reply to this thread. I'd also love to hear about your home labs!