Security
F5 XC Session tracking with User Identification Policy
With F5 AWAF/ASM there is a feature called session tracking that allows tracking and blocking users who trigger too many violations, based not only on the IP address but also on identifiers like the BIG-IP AWAF/ASM session cookie. What about F5 XC Distributed Cloud? Well, now we will answer that question 😉

Why is tracking on IP addresses sometimes not enough? XC has a feature called Malicious Users that allows blocking users if they generate too many service policy, WAF, bot, or other violations. By default, users are tracked based on source IP addresses, but what happens if there are proxies or NAT devices in front of XC Cloud? Then traffic for many users will come from a single IP address, and when that IP address is blocked, many users will get blocked, not just the one that caused the violation. Now that we have answered this question, let's see what options we have.

Reference: AI/ML detection of Malicious Users using F5 Distributed Cloud WAAP

Trusted Client IP header
This option is useful when the real client IP addresses are carried in something like an XFF header that a proxy in front of F5 XC adds. With this option enabled, XC automatically uses this header instead of the IP packet to determine the client IP address and to enforce rate limiting, Malicious Users blocking, and so on. The XC logs will also show the IP address from the header as the source IP, and if there is no such header, the IP address from the packet is used as a fallback.

Reference: How to setup a Client IP as the Source IP on the HTTP Load Balancer headers? – F5 Distributed Cloud Services (zendesk.com)
Overview of Trusted Client IP Headers in F5 Distributed Cloud Platform

User Identification Policies
The second, more versatile feature is the XC user identification policy, which defaults to "Client IP": the client IP from the IP packet or, if "Trusted Client IP header" is configured, the IP address from the configured header.
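The fallback behavior described above — use the trusted client IP header when present, otherwise the packet source IP — can be sketched in a few lines of generic code. This is an illustration of the concept, not XC internals; the header name is whatever you configure as the trusted header:

```python
def effective_client_ip(headers, packet_src_ip, trusted_header="X-Forwarded-For"):
    """Return the address a proxy-aware edge would treat as the client IP.

    Prefer the configured trusted header; fall back to the packet
    source IP when the header is absent (the behavior described above).
    """
    value = headers.get(trusted_header, "").strip()
    if not value:
        return packet_src_ip
    # XFF may carry a chain "client, proxy1, proxy2" - the left-most
    # entry is the original client.
    return value.split(",")[0].strip()
```

For example, `effective_client_ip({"X-Forwarded-For": "198.51.100.7, 10.0.0.1"}, "192.0.2.1")` yields the header value `198.51.100.7`, while an empty header dictionary falls back to `192.0.2.1`.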
When customized, the feature allows the use of TLS fingerprints, HTTP headers like the "Authorization" header, and other options to track users and enforce rate limiters on them, or, if Malicious Users is enabled, to block them based on the configured identifier when they generate too many WAF violations, and much more. User identification fails over to the IP address in the packet if it cannot identify the source user, and multiple identification rules can be configured and evaluated one after another, so the fallback to the packet IP address happens only when no identification rule can be matched! If the backend origin server's application cookie is used for user identification and the XC WAF App Firewall is enabled, you can also use Cookie Protection to prevent the cookie from being sent from another IP address! The demo Juice Shop app at https://demo.owasp-juice.shop/ can be used for such testing!

References
Lab 3: Malicious Users (f5.com)
Malicious Users | F5 Distributed Cloud Technical Knowledge
Configuring user session tracking (f5.com)
How to configure Cookie Protection – F5 Distributed Cloud Services (zendesk.com)

F5 XC vk8s workload with Open Source Nginx
I have shared the code in the link below under the DevCentral code share: F5 XC vk8s open source nginx deployment on RE | DevCentral

Here I will describe the basic steps for creating a workload object, which is an F5 XC custom Kubernetes object that creates Kubernetes deployments, pods, and ClusterIP-type services in the background. The free unprivileged nginx image: nginxinc/docker-nginx-unprivileged: Unprivileged NGINX Dockerfiles (github.com)

Create a virtual site that groups your Regional Edges and Customer Edges. After that, create the vk8s virtual Kubernetes object and relate it to the virtual site. "Note": Keep in mind the limitations of Kubernetes deployments on Regional Edges mentioned in Create Virtual K8s (vK8s) Object | F5 Distributed Cloud Tech Docs.

First create the workload object and select type "service", which can be related to a Regional Edge virtual site or a Customer Edge virtual site. Then select the container image that will be loaded from a public repository like GitHub or a private repo. You will need to configure an advertise policy that exposes the pod/container with a Kubernetes ClusterIP service. If you are deploying test containers, you will not need to advertise the container. To trigger commands at container start, you may need to use /bin/bash -c -- and an argument. "Note": This is not related to this workload deployment; it is just an example.

Select to overwrite the default config file for the open source unprivileged nginx with a file mount. "Note": The volume name shouldn't contain a dot, as it will cause issues. For the image options, select a repository with no rate limit, as otherwise you will see an error under the events for the pod. You can also configure a command and parameters to pass to the container that will run on boot up.
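As described above, the workload object generates a deployment and a ClusterIP service behind the scenes. Conceptually, the result is similar to the manifests below; the names, labels, and port are illustrative (the actual generated objects carry XC-specific metadata), and the unprivileged nginx image listens on 8080 by default:

```yaml
# Illustrative sketch of what the vk8s workload creates in the background
apiVersion: apps/v1
kind: Deployment
metadata:
  name: niki-nginx
spec:
  replicas: 1
  selector:
    matchLabels: { app: niki-nginx }
  template:
    metadata:
      labels: { app: niki-nginx }
    spec:
      containers:
        - name: nginx
          image: nginxinc/nginx-unprivileged
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: niki-nginx
spec:
  type: ClusterIP
  selector: { app: niki-nginx }
  ports:
    - port: 8080
      targetPort: 8080
```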
You can use emptyDir volumes on the virtual Kubernetes on the Regional Edges for volume mounts like the log directory or the nginx cache zone, but the unprivileged nginx by default exports the logs to the XC GUI, so there is no need. "Note": This is not related to this workload deployment; it is just an example.

The logs and events can be seen under the pod dashboard, and the container/pod can even be accessed. "Note": For some workloads, to see the logs from the XC GUI you will need to direct the output to stderr, but not for nginx.

After that, you can reference the auto-created Kubernetes ClusterIP service in an origin pool, using the workload name and the XC namespace (for example niki-nginx.default). "Note": Use the same virtual site where the workload was attached and the same port as in the advertise cluster config. Deployments and ClusterIP services can be created directly without a workload, but it is better to use the workload option.

When you modify the nginx config, you are actually modifying a ConfigMap that the XC workload has created in the background and mounted as a volume in the deployment, but you will need to trigger a deployment recreation, which as of now is not supported by the XC GUI. From the GUI you can scale the workload to 0 pod instances and then back to 1, but a better solution is to use kubectl. You can log into the virtual Kubernetes like any other k8s environment using a cert and then run the command "kubectl rollout restart deployment/niki-nginx". Just download the SSL/TLS cert. You can automate the entire process using the XC API and then use normal Kubernetes automation to run the restart command: F5 Distributed Cloud Services API for ves.io.schema.views.workload | F5 Distributed Cloud API Docs!
F5 XC has added proxy_protocol support, and now the nginx container can work directly with the real client IP addresses without XFF HTTP headers, or even with non-HTTP services like SMTP that nginx supports; this way XC can now act as a layer 7 proxy for email/SMTP traffic 😉. You just need to add the "proxy_protocol" directive and log the variable "$proxy_protocol_addr".

Related resources: For nginx Plus deployments with advanced functions like SAML or OpenID Connect (OIDC), or the advanced functions of the nginx Plus dynamic modules like njs that allow JavaScript scripting (similar to F5 BIG-IP or BIG-IP Next TCL-based iRules), see:
Enable SAML SP on F5 XC Application
Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Dynamic Modules | NGINX Documentation
njs scripting language (nginx.org)
Accepting the PROXY Protocol | NGINX Documentation

F5 ICAP over SSL/TLS (Secure ICAP) with F5 ASM/AWAF Antivirus Protection feature
As mentioned in the article https://my.f5.com/manage/s/article/K17964220 (K17964220: Is it possible to activate antivirus checking using ICAP over SSL?), for ICAP over SSL/TLS the F5 LTM option with Adapt Request/Response profiles needs to be used, but there is a potential workaround. A virtual server can be created that has a server-side SSL profile and listens for unencrypted ICAP traffic on the client side. The virtual server can have any kind of IP address, as it can be configured to listen on a VLAN that exists only on the F5 system and is not attached to any interface or trunk; the purpose is for the F5 AWAF module to internally forward the antivirus ICAP traffic to the virtual server, which will encrypt it and send it to the pool of real ICAP servers. As there is no official statement from F5 about this option, better test whether it works correctly on your TMOS version: check the virtual server statistics and take tcpdumps to see the traffic being sent to the F5 pool members! This also allows the use of multiple ICAP servers in a pool, not just one, and maybe some iRules, but for better iRule support the LTM ADAPT profiles option seems the way to go.

This may solve the issue with ICAP over SSL, but the antivirus protection in the F5 AWAF/ASM module still has some other limitations, like the ones mentioned below, that could force the use of the LTM Adapt profiles:

Large file limit, as F5 Antivirus Protection can only send files no bigger than 20 MB to the ICAP servers, as mentioned in https://my.f5.com/manage/s/article/K12984.
Base64-encoded files are not sent to the ICAP servers, as mentioned in https://my.f5.com/manage/s/article/K47008252.

If the ICAP server is down, users will get an F5 support ID blocking page, as there is no way to configure a bypass when the ICAP servers are down, unlike with the LTM Adapt profiles.

F5 Adapt profiles support iRule events and commands like "ADAPT_REQUEST_HEADERS" that allow you to return different response pages based on the HTTP headers the ICAP server sends to the F5 device. For more information see https://clouddocs.f5.com/api/irules/ADAPT.html and https://community.f5.com/t5/technical-forum/is-it-possible-to-insert-http-payload-in-an-icap-reply-or-to/td-p/299959

If you decide to use the LTM Adapt profiles because of the AWAF antivirus protection limitations I mentioned, also configure some iRules or local traffic policies that limit the traffic sent to the ICAP servers for scanning, for example only POST requests for the URL where customers upload files, etc. You can see examples at the link below (the example is for SSL Orchestrator, but in the background SSLO uses Adapt profiles for its ICAP service): https://clouddocs.f5.com/sslo-deployment-guide/sslo-08/chapter4/page4.5.html

Prevent BIG-IP Edge Client VPN Driver to roll back (or forward) during PPP/RAS errors
If you (like some of my customers) want to have the BIG-IP Edge Client packaged and distributed as a software package within your corporate infrastructure, and therefore have switched off automatic component updates in your connectivity profiles, you might still get the covpn64.sys file upgraded or downgraded to the same version as the one installed on the BIG-IP APM server.

Background
We discovered that on some Windows clients the covpn64.sys file got a newer/older timestamp, and we started to investigate what caused this. The conclusion was that sometimes after hibernation or sleep, the Edge Client is unable to open the VPN interface and therefore tries to reinstall the driver. However, instead of using a local copy of the CAB file where the covpn64.sys file resides, it downloads it from the APM server regardless of whether the version on the server and client match. In normal circumstances, when you have automatic upgrades on the clients, this might not be a problem; however, when you need full control over which version is used on each connected client, this behavior can be a bit of a problem.

Removing the Installer Component?
Now you might be thinking: hey… Why don't you just remove the Component Installer module from the Edge Client so you won't have this issue? The simple answer is that the Component Installer module is not only used to install/upgrade the client. In fact, it also seems to be used when performing the Machine Check Info from the Access Policy when authenticating the user. So removing the Component Installer module results in other issues.

The Solution/Workaround
The solution I came up with is to store each version of the urxvpn.cab file in an iFile and then use an iRule to deliver the correct version whenever a client tries to fetch the file for reinstallation.

What's needed?
In order to make this work we need to:

- Grab a copy of urxvpn.cab from each version of the client
- Create an iFile for each of these versions
- Install the iRule
- Attach the iRule to the virtual server that is running the Access Policy

Fetching the file from the apmclients ISOs
For every version of the APM client that is available within your organization, a corresponding iFile needs to be created. To create the iFiles automatically you can do the following on the APM server:

1. Log in to the CLI console with SSH
2. Make sure you are in bash by typing bash
3. Create temporary directories:

mkdir /tmp/apm-urxvpn
mkdir /tmp/apm-iso

4. Run the following (still in bash, not TMSH) on the BIG-IP APM server to automatically extract the urxvpn.cab file from each installed image and save the files in the folder /tmp/apm-urxvpn:

for c in /shared/apm/images/apmclients-*
do
  version="$(echo "$c" | awk -F. \
    '{gsub(".*apmclients-","");printf "%04d.%04d.%04d.%04d", $1, $2, $3, $4}')" && \
  (mount -o ro $c /tmp/apm-iso
  cp /tmp/apm-iso/sam/www/webtop/public/download/urxvpn.cab \
    /tmp/apm-urxvpn/URXVPN.CAB-$version
  umount /tmp/apm-iso)
done

5. Check the files copied:

ls -al /tmp/apm-urxvpn

6. Import each file either with tmsh or with the GUI. We will cover how to import with tmsh below.
If you prefer to do it with the GUI, more information about how to do it can be found in K13423.

You can use the following script to automatically import all files:

cd /tmp/apm-urxvpn
for f in URXVPN.CAB-*
do
  printf "create sys file ifile $f source-path file:$(pwd)/$f\ncreate ltm ifile $f file-name $f\n" | tmsh
done

Save the new configuration:

tmsh -c "save sys config"

Time to create the iRule:

when CLIENT_ACCEPTED {
    ACCESS::restrict_irule_events disable
}
when HTTP_REQUEST {
    set uri [HTTP::uri]
    set ua [HTTP::header "User-Agent"]
    if {$uri starts_with "/vdesk" || $uri starts_with "/pre"} {
        set version ""
        regexp -- {EdgeClient/(\d{4}\.\d{4}\.\d{4}\.\d{4})} $ua var version
        if {$version != ""} {
            table set -subtable vpn_client_ip_to_versions [IP::client_addr] $version 86400 86400
        } else {
            log local0.debug "Unable to parse version from: $ua for IP: [IP::client_addr] URI: $uri"
        }
    } elseif {$uri == "/public/download/urxvpn.cab"} {
        set version ""
        regexp -- {EdgeClient/(\d{4}\.\d{4}\.\d{4}\.\d{4})} $ua var version
        if {$version == ""} {
            log local0.warning "Unable to parse version from: $ua, will search session table"
            set version [table lookup -subtable vpn_client_ip_to_versions [IP::client_addr]]
            log local0.warning "Version in table: $version"
        }
        if {$version == ""} {
            log local0.warning "Unable to find version in session table"
            HTTP::respond 404 content "Missing version in request" "Content-Type" "text/plain"
        } else {
            set out ""
            catch {
                set out [ifile get "/Common/URXVPN.CAB-$version"]
            }
            if {$out == ""} {
                log local0.error "Didn't find urxvpn.cab file for Edge Client version: $version"
                HTTP::respond 404 content "Unable to find requested file for version $version\n" "Content-Type" "text/plain"
            } else {
                HTTP::respond 200 content $out "Content-Type" "application/vnd.ms-cab-compressed"
            }
        }
    }
}

Add the iRule to the APM Virtual Server

Known Limitations
If multiple clients with different versions of the Edge Client are behind the same IP address, they might download the wrong version.
This is due to the fact that the client doesn't present its version when the request for the file urxvpn.cab reaches the iRule. This is why the iRule tries to store IP addresses based on the source IP address of other requests related to the VPN. More information about this problem can be found in K000132735.

Demystifying Time-based OTP
This article is written as an extensive explanation of how a Time-based OTP algorithm works, with some guidelines on how to implement it on your F5.

What is a TOTP?
TOTP (aka Time-based OTP) is a way to use a code that changes every 30 seconds instead of using a static password.

REF - https://en.wikipedia.org/wiki/Time-based_one-time_password
REF - https://datatracker.ietf.org/doc/html/rfc6238

So, in summary, every user has one associated secret that is shared between them and a third entity (F5). With this secret, it is possible to generate a 6-digit code that changes every 30 seconds, as Google and other vendors do. Take into account that most vendors use the same algorithm, so working with Google Authenticator is the same as using any other 6-digit TOTP (Microsoft Authenticator, FortiToken Mobile, etc.).

How to implement TOTP in production?
TOTP is composed of 3 steps:
Generation of the secret
Distribution of the secret
Validation of the secret

How a secret is generated?
You can generate the code in many ways, but your goal is to get a 16-digit word (base32) for each user. Below we show how to get this secret using TCL commands.

# Generate a random number as seed
set num [expr rand()]
# OUTPUT: 0.586026769404

# Generate a hash of this seed
set num_hash [md5 $num]
# OUTPUT: (binary data)

# Encode this hash using base64
set num_b64 [b64encode $num_hash]
# OUTPUT: Cc+e4ES9yJRXXN28+o5V5A==

# Take only the first 10 digits of this previous code (10 digits x 8 bits = 80 bits)
set secret_raw [string range $num_b64 0 9]
# OUTPUT: Cc+e4ES9yJ

# Encode the previous code using base32 (80 bits / 5 bits per word = 16 words)
set secret_b32 [call b32encode $secret_raw]
# OUTPUT: INRSWZJUIVJTS6KK

BTW, this is how a Base32 dictionary works, I mean, the equivalence between words and bits:
00000 - A | 01000 - I | 10000 - Q | 11000 - Y
00001 - B | 01001 - J | 10001 - R | 11001 - Z
00010 - C | 01010 - K | 10010 - S | 11010 - 2
00011 - D | 01011 - L | 10011 - T | 11011 - 3
00100 - E | 01100 - M | 10100 - U | 11100 - 4
00101 - F | 01101 - N | 10101 - V | 11101 - 5
00110 - G | 01110 - O | 10110 - W | 11110 - 6
00111 - H | 01111 - P | 10111 - X | 11111 - 7

Padding (when fewer than 5 bits remain at the end of the input):
0000 - A=== | 0001 - C=== | 0010 - E=== | 0011 - G===
0100 - I=== | 0101 - K=== | 0110 - M=== | 0111 - O===
1000 - Q=== | 1001 - S=== | 1010 - U=== | 1011 - W===
1100 - Y=== | 1101 - 2=== | 1110 - 4=== | 1111 - 6===
000 - A====== | 001 - E====== | 010 - I====== | 011 - M======
100 - Q====== | 101 - U====== | 110 - Y====== | 111 - 4======
00 - A= | 01 - I= | 10 - Q= | 11 - Y=
0 - A==== | 1 - Q====

REF - https://datatracker.ietf.org/doc/html/rfc4648#page-8

If you are interested, there are other iRules to generate base32 codes. Here are some examples:
https://community.f5.com/t5/crowdsrc/tcl-procedures-for-base32-encoding-decoding/ta-p/286602
https://community.f5.com/t5/technical-articles/base32-encoding-and-decoding-with-irules/ta-p/277299

How a secret is distributed?
Most of the time, the secret is distributed using QR codes, because it's an easy way to distribute it to regular users. Google Authenticator and the other vendors use this scheme:

# EXAMPLE:
otpauth://totp/ACME:john@acme.com?secret=INRSWZJUIVJTS6KK
## WHERE:
ACME - Company
john@acme.com - User Account
secret=INRSWZJUIVJTS6KK - Secret

REF - https://github.com/google/google-authenticator/wiki/Key-Uri-Format

So, the best plan is to inject this previous sentence into a QR code. Here is an example: https://rootprojects.org/authenticator/

With the example above, it is clear how a user can get the secret on their smartphone, but take into account that both entities (user and F5) have to know the secret in order to perform those authentications. Later on, we will show you some tips to store the key from the F5 perspective.

How a secret is validated?
When both the user and the F5 know the secret, they can authenticate using a TOTP.
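Both the secret generation and the validation algorithm described in this article can be condensed into a short, generic Python sketch. This mirrors RFC 6238 (HMAC-SHA1 over the 30-second counter, dynamic truncation, modulo 10^6) using only the standard library; it is a reference sketch, not F5 code:

```python
import base64
import hashlib
import hmac
import os
import struct

def generate_secret():
    # 10 random bytes -> 16 base32 characters, the same sizing as the
    # TCL walk-through in this article
    return base64.b32encode(os.urandom(10)).decode()

def totp(secret_b32, unix_time, step=30, digits=6):
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(unix_time) // step)   # 64-bit big-endian
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (the ASCII string "12345678901234567890" base32-encoded) and time 59, `totp` returns "287082", matching the published 6-digit test vector. Note that indexing the raw digest by bytes here is equivalent to the hex-string trick in the TCL steps (last hex digit times 2).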
Next below, we are showing the steps required to generate a time-based code from the secret.

# We start knowing the secret (base32)
set secret_b32 "INRSWZJUIVJTS6KK"

# Decode the secret from base32 (back to the 10-byte secret)
set secret_raw [call b32decode $secret_b32]
# OUTPUT: Cc+e4ES9yJ

# ----------------------------------
# There are other ways to decode base32, here is another example
set secret_binary [string map -nocase $static::b32_to_binary $secret_b32]
# OUTPUT: 01000011 01100011 00101011 01100101 00110100 01000101 01010011 00111001 01111001 01001010
set secret_raw [binary format B80 $secret_binary]
# OUTPUT: Cc+e4ES9yJ
# ----------------------------------

# Get a UNIX timestamp and divide it by 30 (to get gaps of 30 seconds)
set clock [expr { [clock seconds] / 30 } ]
# OUTPUT: 53704892

# Translate the previous code into binary (64-bit big-endian)
set clock_raw [binary format W* $clock]
# OUTPUT: 00000000 00000000 00000000 00000000 00000011 00110011 01111000 10111100

# Sign the clock value using the secret value, i.e. HMAC-SHA1(secret, clock)
set hmac_raw [CRYPTO::sign -alg hmac-sha1 -key $secret_raw $clock_raw]
# OUTPUT: (binary data)

# Translate the previous code to hexadecimal
binary scan $hmac_raw H* hmac
# OUTPUT: 1cd9f262e063b9b4cd13ac7b8dfc8a7329801733

# Take the last hex digit of this code ("3" in this case)
set last_char [string index $hmac end]
# OUTPUT: 3

# Multiply the last value by 2 to get an offset into the hex string,
# selecting one of 16 possible 4-byte words, as shown below
# (note that the last two hex digits are always ignored)
set offset [expr { "0x$last_char" * 2 } ]
# OUTPUT: 6

# Example (each row shows the word selected for that last digit):
# 0: 1cd9f262 e063b9b4cd13ac7b8dfc8a7329801733
# 1: 1c d9f262e0 63b9b4cd13ac7b8dfc8a7329801733
# 2: 1cd9 f262e063 b9b4cd13ac7b8dfc8a7329801733
# 3: [1cd9f2 62e063b9 b4cd13ac7b8dfc8a7329801733] <- This word is selected (last digit = '3')
# 4: 1cd9f262 e063b9b4 cd13ac7b8dfc8a7329801733
# 5: 1cd9f262e0 63b9b4cd 13ac7b8dfc8a7329801733
# 6: 1cd9f262e063 b9b4cd13 ac7b8dfc8a7329801733
# 7: 1cd9f262e063b9 b4cd13ac 7b8dfc8a7329801733
# 8: 1cd9f262e063b9b4 cd13ac7b 8dfc8a7329801733
# 9: 1cd9f262e063b9b4cd 13ac7b8d fc8a7329801733
# a: 1cd9f262e063b9b4cd13 ac7b8dfc 8a7329801733
# b: 1cd9f262e063b9b4cd13ac 7b8dfc8a 7329801733
# c: 1cd9f262e063b9b4cd13ac7b 8dfc8a73 29801733
# d: 1cd9f262e063b9b4cd13ac7b8d fc8a7329 801733
# e: 1cd9f262e063b9b4cd13ac7b8dfc 8a732980 1733
# f: 1cd9f262e063b9b4cd13ac7b8dfc8a 73298017 33

# Get the word from the string based on the offset (see example above)
set word [string range $hmac $offset [expr { $offset + 7 } ]]
# OUTPUT: 62e063b9

# Translate the previous code to base10 (masking the sign bit)
set us_word [expr { "0x$word" & 0x7FFFFFFF } ]
# OUTPUT: 1658872761 (62e063b9)

# Apply a modulus 1000000 to get a 6-digit number [000000 - 999999]
set token [format %06d [expr { $us_word % 1000000 } ]]
# OUTPUT: 872761

# The previous value is the token the user should use during authentication.
# This value changes every 30 seconds.

There are many iRules you can use to validate your user input codes. Here are some examples:
https://community.f5.com/t5/crowdsrc/google-authenticator-verification-irule-tmos-v11-1-optimized/ta-p/286672
https://community.f5.com/t5/crowdsrc/apm-google-authenticator-http-api/ta-p/287952
https://community.f5.com/t5/crowdsrc/google-authenticator-token-verification-irule-for-apm/ta-p/277510

How a secret is stored?
At this point, the user knows their secret (they already got their QR code with the secret), but the F5 still doesn't know how to get the secret to check whether the TOTP provided by the user is correct. There are several options:

Store a key pair of "user-secret" in a data group. It is really simple to implement, but not secure in a production environment because the secrets are stored in cleartext.
Store a key pair of "user-encrypted(secret)" in a data group.
That solves the problem of storing the secrets in cleartext, but it's not scalable. As Stan_PIRON_F5 pointed out here, there is a way to store those secrets in AD fields in an encrypted way that could suit a production environment. Below we describe those steps, using PowerShell scripts that should be run on the Windows Server where the AD resides.

1. Generate a symmetric key to encrypt the secrets.

function Create-AesKey($KeySize) {
    $AesManaged = New-Object "System.Security.Cryptography.AesManaged"
    $AesManaged.KeySize = $KeySize
    $AesManaged.GenerateKey()
    [System.Convert]::ToBase64String($AesManaged.Key)
}
$size = $Args[0]
$key = Create-AesKey $size
Write-Output $key

Input: .\CreateKey.ps1 256
Output: pnnqLfua6Mk/Oh3xqWV/6NTLd0r0aYaO4je3irwDbng=

2. Store each user secret in the 'pager' field of the AD.

function Encrypt-Data($AesKey, $Data) {
    $Data = [System.Text.Encoding]::UTF8.GetBytes($Data)
    $AesManaged = New-Object "System.Security.Cryptography.AesManaged"
    $AesManaged.Mode = [System.Security.Cryptography.CipherMode]::CBC
    $AesManaged.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7
    $AesManaged.BlockSize = 128
    $AesManaged.KeySize = 256
    $AesManaged.Key = [System.Convert]::FromBase64String($AesKey)
    $Encryptor = $AesManaged.CreateEncryptor()
    $EncryptedData = $Encryptor.TransformFinalBlock($Data, 0, $Data.Length);
    [byte[]] $EncryptedData = $AesManaged.IV + $EncryptedData
    $AesManaged.Dispose()
    [System.Convert]::ToBase64String($EncryptedData)
}
$username = $Args[0]
$encryptKey = "pnnqLfua6Mk/Oh3xqWV/6NTLd0r0aYaO4je3irwDbng="
[String]$userkey = ""
1..16 | % { $userkey += $(Get-Random -InputObject A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,2,3,4,5,6,7) }
$encrypted = Encrypt-Data $encryptKey $userkey
Write-Output "Key: $userkey ; Encrypted: $encrypted"
Set-AdUser -Identity $username -replace @{"pager"="$encrypted"}

Input: .\EncryptData.ps1 myuser INRSWZJUIVJTS6KK
Output: Key: INRSWZJUIVJTS6KK ; Encrypted:
i6GoODygXJ05vG2xWcatNjrl1NubA1xHEZpMTzOlsdx52oeEp1a4891CdM5/aCMg

3. Validate that the secret was stored correctly.

function Decrypt-Data($AesKey, $Data) {
    $Data = [System.Convert]::FromBase64String($Data)
    $AesManaged = New-Object "System.Security.Cryptography.AesManaged"
    $AesManaged.Mode = [System.Security.Cryptography.CipherMode]::CBC
    $AesManaged.Padding = [System.Security.Cryptography.PaddingMode]::PKCS7
    $AesManaged.BlockSize = 128
    $AesManaged.KeySize = 256
    $AesManaged.IV = $Data[0..15]
    $AesManaged.Key = [System.Convert]::FromBase64String($AesKey)
    $Decryptor = $AesManaged.CreateDecryptor();
    $DecryptedData = $Decryptor.TransformFinalBlock($Data, 16, $Data.Length - 16);
    $aesManaged.Dispose()
    [System.Text.Encoding]::UTF8.GetString($DecryptedData)
}
$encryptKey = "pnnqLfua6Mk/Oh3xqWV/6NTLd0r0aYaO4je3irwDbng="
$userkey = $Args[0]
$decrypted = Decrypt-Data $encryptKey $userkey
Write-Output "Key: $decrypted"

Input: .\DecryptData.ps1 i6GoODygXJ05vG2xWcatNjrl1NubA1xHEZpMTzOlsdx52oeEp1a4891CdM5/aCMg
Output: INRSWZJUIVJTS6KK

How to generate a QR code?
There are many ways to generate a QR code from a secret word.

1. Google has an API to generate QR codes; it still works but is deprecated.
REF - https://developers.google.com/chart/infographics/docs/qr_codes

## EXAMPLE:
https://chart.googleapis.com/chart?cht=qr&chs=200x200&chld=M|0&chl=otpauth://totp/myser@mydomain.com?secret=AAAAAAAAAAAAAAAA
## WHERE:
cht=qr - QR Code
chs=200x200 - Sizing
chld=M|0 - Redundancy 'M' and Margin '0'
chl=otpauth://totp/myser@mydomain.com?secret=AAAAAAAAAAAAAAAA - Message

2. Similar to Google, there are other APIs to generate these QR codes, but as with the Google API, using them is a bad decision because you are sending your secret to an external entity.
REF - https://quickchart.io/documentation/#qr

## EXAMPLE:
https://quickchart.io/qr?size=200&ecLevel=M&margin=1&text=otpauth://totp/myser@mydomain.com?secret=AAAAAAAAAAAAAAAA
## WHERE:
size=200 - Sizing
ecLevel=M - Redundancy 'M'
margin=1 - Margin '1'
text=otpauth://totp/myser@mydomain.com?secret=AAAAAAAAAAAAAAAA - Message

3. The best way to implement this in a production environment is to set up a dedicated server to generate the QR codes. There are many options on the internet; here is an example:
REF - https://github.com/edent/QR-Generator-PHP

Requirements:
yum install php php-mysql php-fpm
yum install php-gd

F5 AFM/Edge Firewall and the difference between Edge Firewalls and Next-generation Firewalls (NGFW)
Next-generation firewalls (NGFW) have a lot of features, like policies based on AD users and AD groups, dynamic user quarantine, and application/service and virus/spyware/vulnerability signatures (default or custom) that allow traffic only from specific applications and scan it for viruses and other types of malware. A long time ago I also did not know the difference between the F5 AFM and an NGFW (I even asked a question on the forum: https://community.f5.com/t5/technical-forum/to-make-the-f5-afm-like-a-full-ngfw-is-there-plans-the-f5-afm-to/td-p/207685), so after some time I understood the difference, and I have made this post to clear things up 😉

NGFWs truly provide a lot of nice options, but they are lacking when deployed at internet service providers, mobile operators, the edge of big corporate networks, or private scrubbing centers, as they don't have good DDoS protections or CGNAT functions. NGFWs do have NAT capabilities, but in most cases those capabilities are limited to basic source PAT, destination NAT, or static NAT. Also, at the edge of the network the firewall device should have high throughput, and there is no need for it to work with AD users/AD groups, user/group redistribution between firewalls, or specific applications/services used only by one company; in the case of an ISP or mobile operator, the firewall should protect many customers with advanced DoS/DDoS options, and it should be able to do NAT that is easily traceable in the logs, showing which public IP was allocated to which source IP (a great feature for mobile or internet providers, combined with F5 PEM for user monetization and tracking). The edge firewall may also need to fail over to a scrubbing center if a DDoS attack becomes too big, so this function is nice to have, as is an IP intelligence feed list to block attacks even before any deep inspection, based purely on the source or destination IP address.
This is where the F5 AFM comes into the picture, not as a replacement for NGFWs but as a complementary device that sits at the edge of the network and filters the traffic before the customer's NGFWs do the more fine-grained checks. Sometimes AFM is deployed as a server firewall together with F5 LTM/APM/AWAF behind the NGFWs, for example to filter a DDoS attack that the scrubbing center did not block because it was too small and directed at a specific destination; most scrubbing centers block only really high-volume attacks that can bring down the entire data center (and most scrubbing centers can't look into the SSL data like F5 Silverline can). AFM can now also work with subscriber data at the ISP/mobile operator level, and from what I have seen, NGFWs are limited in this field, as they are made for internal enterprise use, where AD groups and AD users are needed, not subscriber data.

The F5 AFM capabilities that I have not seen in most NGFWs are:

The DoS-based protections in AFM have the option to be fully automatic and to adjust their thresholds based on machine learning (ML), so there is no need for someone to constantly modify the DoS thresholds as with other DoS protection products. The DDoS protection also has dynamic signatures: a signature of the DDoS traffic is automatically generated so that only the attackers are blocked. By default, the DDoS protection thresholds under "Security > DoS Protection > Device Protection" are enforced unless a more specific DoS profile is attached to the virtual server. The F5 AFM can be combined with the F5 Advanced WAF/ASM for full layer 3/4/7 DDoS protection, and there is a device named F5 DDoS Hybrid Defender that combines the layer 3/4 and layer 7 protections and is configured with a Guided Configuration wizard. The F5 AFM has DDoS protections not only for TCP, UDP, and ICMP traffic but also for the HTTP, DNS, and SIP protocols.
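To illustrate the general idea of automatically learned thresholds, here is a conceptual sketch of a moving-baseline detector: learn a smoothed baseline of a traffic rate and derive the detection threshold from it. This is purely illustrative (a simple exponential moving average), not F5's actual ML algorithm, and all parameter names are hypothetical:

```python
def update_threshold(baseline, observed_pps, alpha=0.1, multiplier=2.0, floor=100.0):
    """Update a learned traffic baseline and the threshold derived from it.

    baseline     - current smoothed estimate of normal packets/sec
    observed_pps - latest measured rate
    alpha        - smoothing factor (higher = adapts faster)
    multiplier   - how far above baseline counts as an attack
    floor        - minimum threshold, so quiet services aren't over-sensitive
    """
    baseline = (1 - alpha) * baseline + alpha * observed_pps
    threshold = max(multiplier * baseline, floor)
    return baseline, threshold
```

With a steady rate of 100 pps the threshold settles at 200 pps; a sudden burst of 1000 pps raises the baseline only gradually, so the burst stays well above the threshold and would be flagged.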
There are great community articles about the DDoS features and their configuration that I will share:
https://community.f5.com/t5/technical-articles/explanation-of-f5-ddos-threshold-modes/ta-p/286884
https://community.f5.com/t5/technical-articles/ddos-mitigation-with-big-ip-afm/ta-p/281234
This link is also helpful: https://support.f5.com/csp/article/K49869231

The AFM can redirect the traffic to a scrubbing center if an attack becomes too big, and this may save some money by using the scrubbing center only when the DDoS really is too large. If BGP is used, the AFM relies on the F5 ZebOS routing module, which is like a mini router inside the F5.

The previous F5 product Carrier-Grade NAT has now been migrated into the AFM, which allows you to use not only source NAT, destination NAT or static NAT but also NAT features like PBA, Deterministic NAT or PCP. The AFM can also respond to ARP requests for translated source IP addresses (Proxy ARP) or integrate with the ZebOS routing module to advertise the translated addresses.

Port Block Allocation (PBA) mode is a translation mode option that reduces CGNAT logging by logging only the allocation and release of each block of ports. When a subscriber first establishes a network connection, the BIG-IP system reserves a block of ports on a single IP address for that subscriber and releases the block when no more connections are using it, which reduces the logging overhead.

Deterministic mode is an option for assigning the translation address and port based on the client address/port and destination address/port. It uses reversible mapping to reduce logging, while maintaining the ability to discover the translated IP address for troubleshooting and compliance with regulations.
Deterministic mode also provides an option to configure backup members, and there is even a tool, dnatutil, to see the mapping of a client IP address.

Port Control Protocol (PCP) is a computer networking protocol that allows hosts on IPv4 or IPv6 networks to control how incoming IPv4 or IPv6 packets are translated and forwarded by an upstream router that performs network address translation (NAT) or packet filtering. By allowing hosts to create explicit port-forwarding rules, handling of the network traffic can easily be configured to make hosts behind NATs or firewalls reachable from the rest of the Internet (so they can also act as network servers), which is a requirement for many applications.

As logging the user NAT translations is mandatory, this can generate a lot of logs for service providers, but with Deterministic NAT and PBA the needed log space is reduced as much as possible while still keeping the required log information.

The AFM now supports some of the options of F5 PEM for Traffic Intelligence (application discovery and subscriber discovery, as in an NGFW) and security rules based on subscribers discovered by RADIUS or DHCP sniffing or by iRules. NGFWs have AD users and AD groups, but service and mobile providers work with IMEI phone codes, not AD groups/users.
https://community.f5.com/t5/technical-articles/traffic-intelligence-in-afm-through-categories/ta-p/295310

Another really wonderful feature is IP Intelligence, which protects you from bad source or destination IP addresses; you can also feed the AFM custom lists generated by your own threat intelligence platform. The AFM and Advanced WAF/ASM can automatically place IP addresses in a shun list that is blocked by IP Intelligence, as the IP Intelligence checks happen before the ASM and even the AFM in the traffic path!
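To make the PBA and Deterministic NAT ideas above concrete, here is a minimal Python sketch (my own simplified model, not the BIG-IP algorithm): each subscriber deterministically owns one port block on one public IP, so a public ip:port seen in an abuse report can be reversed back to the subscriber from the configuration alone, similar in spirit to what dnatutil does, with no per-connection logging.

```python
def det_map(client_ip, client_port, nat_ips, block_size=64, port_base=1024):
    """Deterministically map a client to a public IP and a port inside
    its reserved block; no per-connection state or logging is needed."""
    host = int(client_ip.split(".")[-1])          # simplification: index by last octet
    nat_ip = nat_ips[host % len(nat_ips)]
    block_start = port_base + (host // len(nat_ips)) * block_size
    return nat_ip, block_start + (client_port % block_size)

def det_reverse(nat_ip, public_port, nat_ips, block_size=64, port_base=1024):
    """Recover the subscriber that owns a public ip:port offline, which is
    what makes deterministic mode traceable for compliance."""
    ip_index = nat_ips.index(nat_ip)
    block_index = (public_port - port_base) // block_size
    host = block_index * len(nat_ips) + ip_index
    return f"10.0.0.{host}"

nat_ips = ["203.0.113.1", "203.0.113.2"]
pub = det_map("10.0.0.5", 51000, nat_ips)
print(pub, "->", det_reverse(*pub, nat_ips))   # maps back to 10.0.0.5
```

The same block-ownership idea is what lets PBA mode log only two events per subscriber (block allocation and release) instead of one event per connection.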
There is a nice community video about this feature: https://community.f5.com/t5/technical-articles/the-power-of-ip-intelligence-ipi/ta-p/300528

The AFM also has Port Misuse policies and Protocol Inspection profiles, which are similar to the NGFW Applications/Services: they allow only the correct protocol on a port, not just the port number, and add IPS/Antivirus signatures. The F5 AFM Protocol Inspection is based on Snort, so you can not only block attacks but also allow traffic based on the payload, for example by writing a custom signature that grants access to a certain server only if the Referer header has a certain value. By default it comes with many signatures and protocol RFC-compliance checks. The F5 AFM Protocol Inspection can also be used as a more fine-grained form of custom application control than the Port Misuse policies, for example by creating a custom signature to block a specific User-Agent HTTP header!

One of the best features that the F5 Protocol Inspection IPS has, compared even to NGFW products, is the ability to place new signatures in staging (for example after a new signature set is downloaded) and to monitor how many times the signatures are triggered during the staging period before enforcing them. For more information I suggest checking the links below:
https://support.f5.com/csp/article/K00322533
https://f5-agility-labs-firewall.readthedocs.io/en/latest/class2/module3/lab4.html
https://support.f5.com/csp/article/K25265787

The F5 AFM is also a great edge firewall for many protocols like DNS, SSH and SIP, not only HTTP. Similarly to the AWAF/ASM, the F5 AFM can work in a transparent bridged mode thanks to VLAN groups, wildcard virtual servers and Proxy ARP, where it is invisible to the end users (https://support.f5.com/csp/article/K15099). Do not forget that the AFM sits before any other module except IP Intelligence, and decide whether it will work in a firewall or ADC mode (https://support.f5.com/csp/article/K92958047).
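Since the Protocol Inspection custom signatures mentioned above use Snort-style syntax, a custom signature to drop requests carrying a specific User-Agent could look roughly like the sketch below. The values and sid are made up for illustration, and the exact keyword set supported by AFM custom signatures should be checked against the F5 documentation.

```
alert tcp any any -> any 80 (
    msg:"custom: block BadBot User-Agent";
    content:"User-Agent: BadBot"; nocase;
    sid:1000001; rev:1;
)
```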
Also the order of the rules is important (Global context policies/rules > Route Domain context > Virtual Server/Self IP > Management). You can even use DNS FQDN names in the security policy rules if needed, trace any issues related to security rules and DoS with the Packet Tester tool, and with Timer policies allow long-lived connections that do not generate traffic through the firewall! In newer versions the Management IP can use AFM rules even without AFM being provisioned (https://support.f5.com/csp/article/K46122561), isn't that nice 😀 !

F5 supports vWire and VLAN groups, so the F5 AFM or F5 DHD (DDoS Hybrid Defender) can be placed not only as a layer 3 firewall but also in transparent/invisible layer 2 mode or, in the case of Virtual Wire, layer 1 mode. The F5 AFM operations guide is truly a nice resource to review: https://support.f5.com/csp/article/K38201755
At WorldTech IT, our specialty is Always-On emergency support, which means that when things go bump in the night on F5 devices, we're the ones who wake up and investigate. We've seen our fair share of headless entities, killer bugs, gremlins, possessions, zombies, and daemons, but our scariest hack story started like any other day.
Brute Force Protection for a single parameter

This can be achieved with the help of ASM Data Guard and Session Tracking.

1. Log all requests and responses to record valid and invalid OTP requests/responses. This is just to record the traffic; after recording, you should remove the Log All Requests profile from the virtual server.
2. From an invalid OTP response, identify a unique response string, e.g. "FAILED" or "Mobile number not registered".
3. Configure this unique response as a Data Guard custom pattern so that the firewall will track sessions based on it.
4. Add the URL which sends the OTP parameter to Data Guard > Protection Enforcement > Enforced URLs.
5. Now go to Session Tracking, enable Session Awareness and Track Violations and Perform Actions, and set the violation detection period to 60 seconds. You can change this time as recommended by your security team.
6. In Session Tracking, go to Delay Blocking and set the Session threshold to 3 violations. This means 3 violations in 60 seconds will still be tolerated (not blocked); blocking starts above this threshold.
7. Set the IP Address threshold to 20, which means an IP will be blocked after 20 violations.
8. In Associated Violations, select "Data Guard: Information leakage detected".
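The thresholds in steps 5-7 can be modeled with a small sketch (my own toy model, not ASM internals): violations are counted per session and per source IP inside a sliding 60-second window, and blocking starts only once a counter exceeds its configured threshold.

```python
from collections import defaultdict, deque

class ViolationTracker:
    """Toy model of the session tracking above: up to `session_limit`
    violations per session in `window` seconds are tolerated, and an
    IP is blocked once it exceeds `ip_limit` violations."""
    def __init__(self, window=60, session_limit=3, ip_limit=20):
        self.window, self.session_limit, self.ip_limit = window, session_limit, ip_limit
        self.by_session = defaultdict(deque)
        self.by_ip = defaultdict(deque)

    def _count(self, dq, now):
        while dq and now - dq[0] > self.window:   # drop entries outside the window
            dq.popleft()
        return len(dq)

    def record(self, session, ip, now):
        """Record one Data Guard violation and return the resulting action."""
        self.by_session[session].append(now)
        self.by_ip[ip].append(now)
        if self._count(self.by_ip[ip], now) > self.ip_limit:
            return "block-ip"
        if self._count(self.by_session[session], now) > self.session_limit:
            return "block-session"
        return "allow"

t = ViolationTracker()
actions = [t.record("s1", "198.51.100.7", sec) for sec in (0, 10, 20, 30)]
print(actions)   # the 4th violation inside 60 seconds crosses the threshold of 3
```

Note that a violation outside the 60-second window ages out and no longer counts toward the session threshold, which matches the "3 violations in 60 seconds" wording above.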
Here are the steps to receive a certificate expiry email alert.

Step 1: Update the /config/user_alert.conf file with:

alert CERTIFICATE_EXPIRED "Certificate (.*) expired" {
  snmptrap OID=".1.3.6.1.4.1.3375.2.4.0.300";
  email toaddress="xyz@domain.com"
  fromaddress="Certificate_Expiry_Alert"
  body="Certificate Expired on BigIP"
}

alert CERTIFICATE_WILL_EXPIRE "Certificate (.*) will expire" {
  snmptrap OID=".1.3.6.1.4.1.3375.2.4.0.301";
  email toaddress="xyz@domain.com"
  fromaddress="Certificate_Expiry_Alert"
  body="Certificate will Expire on BigIP"
}

Step 2: Update /etc/ssmtp/ssmtp.conf with the mail relay:

mailhub=mail.domain.com

To set this via tmsh, execute:

tmsh modify sys outbound-smtp mailhub mail.domain.com

Verify it was updated correctly:

cat /etc/ssmtp/ssmtp.conf

Step 3: Test email delivery:

echo "Subject: Smtp test mail" | sendmail -vs xyz@domain.com

Make sure you can telnet to mail.domain.com on port 25 from the BIG-IP.

Step 4: Create the script file:

vi Cert_Expiry_Alert.sh

and put the following command in it (running the certificate check logs the "Certificate ... expired/will expire" messages that trigger the alerts from Step 1):

tmsh run sys crypto check-cert

Step 5: Make the script executable:

chmod +x Cert_Expiry_Alert.sh

Step 6: Add a cron entry:

crontab -e
30 13 * * * /usr/bin/bash /var/tmp/Cert_Expiry_Alert.sh

Here 30 is the minute and 13 the hour, so this cron job runs daily at 13:30.

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  *  user-name  command to be executed

More details about cron are available at K33730915.

This solution has been tested on version 16.
For the general order of the modules in F5: Packet Filter > AFM > iRule FLOW_INIT event > LTM (or GTM/DNS) > APM > ASM. In the AFM there is also DoS protection at layers 3/4 that runs before the AFM rules (just as the ASM DoS checks run before the ASM policy). For the AFM DoS there is general device DoS and virtual-server-specific DoS; the general Device DoS takes precedence, but it has higher default thresholds, which is why during an attack the virtual server DoS will in most cases trigger first. Device DoS is present even without the AFM module, but with the AFM module it can actually be controlled and configured (not just left at the default values). The AFM rules themselves have a context order (https://techdocs.f5.com/kb/en-us/products/big-ip-afm/manuals/product/network-firewall-policies-implementations-13-1-0/2.html). To see which part of the AFM is blocking, use the Packet Tracer tool: https://clouddocs.f5.com/training/community/firewall/html/class1/module2/module2.html

If needed, you can still place the ASM in front of the APM by following:
https://support.f5.com/csp/article/K54217479
https://support.f5.com/csp/article/K13315545

Other F5 precedences include the GTM/DNS order: https://support.f5.com/csp/article/K14510

The local traffic object and VIP order for the LTM:
https://support.f5.com/csp/article/K9038
https://support.f5.com/csp/article/K14800

The F5 iRule event order: https://devcentral.f5.com/s/question/0D51T00006i7X94/irule-event-order-http

The picture of the F5 order is from the old F5 401 study guide:

In the newer TMOS versions the Bot Defense is separated from the DoS protection, and as my tests confirmed, first the ASM DoS is activated, then the Bot Defense, and after that the ASM policy; this is not always written clearly in the F5 documentation, but it is the case. In the older versions, too, first the DoS filtered requests and then the Bot Defense further filtered the traffic before the ASM policy.
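As a toy illustration of the first-match ordering described above (the stage names here are my own shorthand, not product identifiers), the processing chain can be modeled as a simple ordered scan where the first stage to block a request wins:

```python
# Toy model of the BIG-IP processing order described above.
# Each stage may block a request; the first stage to block wins,
# which is why a drop at the AFM is never seen by the ASM.
PIPELINE = [
    "packet-filter",
    "device-dos",      # AFM layer 3/4 DoS, before the AFM rules
    "afm-rules",
    "ltm",
    "apm",
    "asm-dos",
    "bot-defense",
    "asm-policy",
]

def process(request, blocked_stages):
    """Return ('blocked', stage) for the first blocking stage, else ('allowed', None)."""
    for stage in PIPELINE:
        if stage in blocked_stages(request):
            return ("blocked", stage)
    return ("allowed", None)

# A request that would violate both the AFM rules and the ASM policy
# is reported as blocked by the AFM, because the AFM runs first.
print(process({"src": "1.2.3.4"}, lambda r: {"afm-rules", "asm-policy"}))
# -> ('blocked', 'afm-rules')
```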
As of now the Bot protection also generates a support ID, so if you are blocked and see a support ID but cannot find anything when searching the security policy event logs, also search for the support ID under the Bot Defense request logs; I found this out the hard way. The Bot Defense can in some cases also create dynamic signatures for the DoS protection, in order to stop the traffic at the DoS checks, but I have not yet seen this done in practice.
https://clouddocs.f5.com/training/community/ddos/html/class7/bados/module4.html

For testing web DDoS attacks, JMeter is a great free tool, and for bigger commercial tests there is a cloud platform named RedWolf, but JMeter will do just fine in most cases.