F5 XC Distributed Cloud HTTP Header/Cookie manipulations and using the client ip/user headers
1. F5 XC Distributed Cloud HTTP Header manipulations

In the F5 XC Distributed Cloud some client information is saved to variables that can be inserted in HTTP headers, similar to how F5 BIG-IP saves some data that can later be used in an iRule or Local Traffic Policy. By default XC will insert an XFF header with the client IP address, but what if the end servers want an HTTP header with another name to contain the real client IP? Under the HTTP load balancer, under "Other Options" and then "More Options", the "Header Options" can be found. There the predefined variables can be used for this job; in the example below, $[client_address] is used.

A list of the predefined variables for F5 XC: https://docs.cloud.f5.com/docs/how-to/advanced-security/configure-http-header-processing

There is a $[user] variable, and maybe in the future, if F5 XC does the authentication of the users, this option will insert the user in a proxy chaining scenario, but for now I think that this just manipulates data in the XAU (X-Authenticated-User) HTTP header.

2. Matching of the real client ip HTTP headers

You can also match an XFF header if it is inserted by a proxy device before the F5 XC nodes, for security bypass/blocking or for logging in the F5 XC.

For user logging from the XFF, under "Common Security Controls" create a "User Identification Policy". You can also match a regex against the IP address; this is for the case where there are multiple IP addresses in the XFF header, as there could have been many proxy devices in the data path, and we want to see if just one of them is present.

For security bypass or blocking based on XFF, under "Common Security Controls" create "Trusted Client Rules" or "Client Blocking Rules". Also, if you have a "User Identification Policy" then you can just use the "User Identifier", but it can't use regex in this case. I have made a separate article about User Identification: F5 XC Session tracking and logging with User Identification Policy | DevCentral

To match a regex value in the header that is just a single IP address, even when the header has many IP addresses, use the regex (1\.1\.1\.1) as an example to match address 1.1.1.1.

To use the client IP address as a source IP address towards the backend Origin Servers in the TCP packet after going through the F5 XC (similar to removing the SNAT pool or Automap in F5 BIG-IP), use the option below:

The same way, the XAU (X-Authenticated-User) HTTP header can be used in a proxy chaining topology, when there is a proxy before the F5 XC that has added this header.

Edit: Keep in mind that in some cases the XC regex, for example (1\.1\.1\.1), should be written without () as 1\.1\.1\.1, so test it, as this could be something new. I have seen it in service policy regex matches, when making a new custom signature that was not in the WAAP WAF XC policy. I could make a separate article for this.

XC can even send the client certificate attributes to the backend server if client-side mTLS is enabled, but that is configured in the cert tab.

3. F5 XC Distributed Cloud HTTP Cookie manipulations

Now you can overwrite the XC cookie by keeping the value but modifying the tags, and this is a big thing, as before this was not possible. When combined with cookies this becomes a very powerful feature, as you can match on the User-Agent header and, for Mozilla for example, change the flags if there is a bug with the browser, etc. The feature changes cookies returned in the Response Set-Cookie header from the origin server, as it should.
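For reference, a header insertion like the one described in section 1 looks roughly like this when the HTTP load balancer is expressed as JSON through the XC API. The header name X-Real-IP is just an example, and the exact field layout should be verified against the current XC load balancer schema:

"request_headers_to_add": [
  {
    "name": "X-Real-IP",
    "value": "$[client_address]",
    "append": false
  }
]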
TLS Server Name Indication

Problem this snippet solves:

Extensions to TLS encryption protocols after TLS v1.0 have added support for passing the desired servername as part of the initial encryption negotiation. This functionality makes it possible to use different SSL certificates with a single IP address by changing the server's response based on this field. This process is called Server Name Indication (http://en.wikipedia.org/wiki/Server_Name_Indication). It is not supported on all browsers, but has a high level of support among widely-used browsers. Only use this functionality if you know the bulk of the browsers accessing your site support SNI - the fact that IE on Windows XP does not support it precludes the wide use of this functionality for most sites, but only for now. As older browsers begin to die off, SNI will be a good weapon in your arsenal of virtual hosting tools. You can test if your browser supports SNI by clicking here: https://alice.sni.velox.ch/

Supported Browsers:
* Internet Explorer 7 or later, on Windows Vista or higher
* Mozilla Firefox 2.0 or later
* Opera 8.0 or later (the TLS 1.1 protocol must be enabled)
* Opera Mobile at least version 10.1 beta on Android
* Google Chrome (Vista or higher. XP on Chrome 6 or newer)
* Safari 2.1 or later (Mac OS X 10.5.6 or higher and Windows Vista or higher)
* MobileSafari in Apple iOS 4.0 or later
* Windows Phone 7
* MicroB on Maemo

Unsupported Browsers:
* Konqueror/KDE in any version
* Internet Explorer (any version) on Windows XP
* Safari on Windows XP
* wget
* BlackBerry Browser
* Windows Mobile up to 6.5
* Android default browser (targeted for Honeycomb, but won't be fixed until the next version for phone users as Honeycomb will be reserved for tablets only)
* Oracle Java JSSE

Note: The iRule listed here is only supported on v10 and above.
Note: Support for SNI was added in 11.1.0. See SOL13452 for more information.

How to use this snippet:

Create a string-type datagroup called "tls_servername". Each hostname that needs to be supported on the VIP must be input along with its matching clientssl profile. For example, for the site "testsite.site.com" with a ClientSSL profile named "clientssl_testsite", you should add the following values to the datagroup:

String: testsite.site.com
Value: clientssl_testsite

If you wish to switch pool context at the time the servername is detected in TLS, then you need to create a string-type datagroup called "tls_servername_pool". You will input each hostname to be supported by the VIP and the pool to direct the traffic towards. For the site "testsite.site.com" to be directed to the pool "testsite_pool_80", add the following to the datagroup:

String: testsite.site.com
Value: testsite_pool_80

Apply the iRule below to a chosen VIP. When applied, this iRule will detect if an SNI field is present and dynamically switch the SSL profile and pool to use the configured certificate.

Important: The VIP must have a clientSSL profile AND a default pool set. If you don't set this, the iRule will likely break. There is also no real error handling for incorrect/inaccurate entries in the datagroup lists -- if you enter a bad value, it'll fail. This allows you to support multiple certificates and multiple pools per VS IP address.

when CLIENT_ACCEPTED {
    if { [PROFILE::exists clientssl] } {
        # We have a clientssl profile attached to this VIP but we need
        # to find an SNI record in the client handshake. To do so, we'll
        # disable SSL processing and collect the initial TCP payload.
        set default_tls_pool [LB::server pool]
        set detect_handshake 1
        SSL::disable
        TCP::collect
    } else {
        # No clientssl profile means we're not going to work.
        log local0. "This iRule is applied to a VS that has no clientssl profile."
        set detect_handshake 0
    }
}

when CLIENT_DATA {
    if { ($detect_handshake) } {
        # If we're in a handshake detection, look for an SSL/TLS header.
        binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen
        # TLS is the only thing we want to process because it's the only
        # version that allows the servername extension to be present. When we
        # find a supported TLS version, we'll check to make sure we're getting
        # only a Client Hello transaction -- those are the only ones we can pull
        # the servername from prior to connection establishment.
        switch $tls_version {
            "769" -
            "770" -
            "771" {
                if { ($tls_xacttype == 22) } {
                    binary scan [TCP::payload] @5c tls_action
                    if { not (($tls_action == 1) && ([TCP::payload length] > $tls_recordlen)) } {
                        set detect_handshake 0
                    }
                }
            }
            default {
                set detect_handshake 0
            }
        }

        if { ($detect_handshake) } {
            # If we made it this far, we're still processing a TLS client hello.
            #
            # Skip the TLS header (43 bytes in) and process the record body. For TLS/1.0 we
            # expect this to contain only the session ID, cipher list, and compression
            # list. All but the cipher list will be null since we're handling a new transaction
            # (client hello) here. We have to determine how far out to parse the initial record
            # so we can find the TLS extensions if they exist.
            set record_offset 43
            binary scan [TCP::payload] @${record_offset}c tls_sessidlen
            set record_offset [expr {$record_offset + 1 + $tls_sessidlen}]
            binary scan [TCP::payload] @${record_offset}S tls_ciphlen
            set record_offset [expr {$record_offset + 2 + $tls_ciphlen}]
            binary scan [TCP::payload] @${record_offset}c tls_complen
            set record_offset [expr {$record_offset + 1 + $tls_complen}]
            # If we're in TLS and we've not parsed all the payload in the record
            # at this point, then we have TLS extensions to process. We will detect
            # the TLS extension package and parse each record individually.
            if { ([TCP::payload length] >= $record_offset) } {
                binary scan [TCP::payload] @${record_offset}S tls_extenlen
                set record_offset [expr {$record_offset + 2}]
                binary scan [TCP::payload] @${record_offset}a* tls_extensions
                # Loop through the TLS extension data looking for a type 00 extension
                # record. This is the IANA code for server_name in the TLS transaction.
                for { set x 0 } { $x < $tls_extenlen } { incr x 4 } {
                    set start [expr {$x}]
                    binary scan $tls_extensions @${start}SS etype elen
                    if { ($etype == "00") } {
                        # A servername record is present. Pull this value out of the packet data
                        # and save it for later use. We start 9 bytes into the record to bypass
                        # type, length, and SNI encoding header (which is itself 5 bytes long), and
                        # capture the servername text (minus the header).
                        set grabstart [expr {$start + 9}]
                        set grabend [expr {$elen - 5}]
                        binary scan $tls_extensions @${grabstart}A${grabend} tls_servername
                        set start [expr {$start + $elen}]
                    } else {
                        # Bypass all other TLS extensions.
                        set start [expr {$start + $elen}]
                    }
                    set x $start
                }
                # Check to see whether we got a servername indication from TLS. If so,
                # make the appropriate changes.
                if { ([info exists tls_servername] ) } {
                    # Look for a matching servername in the Data Group and pool.
                    set ssl_profile [class match -value [string tolower $tls_servername] equals tls_servername]
                    set tls_pool [class match -value [string tolower $tls_servername] equals tls_servername_pool]
                    if { $ssl_profile == "" } {
                        # No match, so we allow this to fall through to the "default"
                        # clientssl profile.
                        SSL::enable
                    } else {
                        # A match was found in the Data Group, so we will change the SSL
                        # profile to the one we found. Hide this activity from the iRules
                        # parser.
                        set ssl_profile_enable "SSL::profile $ssl_profile"
                        catch { eval $ssl_profile_enable }
                        if { not ($tls_pool == "") } {
                            pool $tls_pool
                        } else {
                            pool $default_tls_pool
                        }
                        SSL::enable
                    }
                } else {
                    # No match because no SNI field was present. Fall through to the
                    # "default" SSL profile.
                    SSL::enable
                }
            } else {
                # We're not in a handshake. Keep on using the currently set SSL profile
                # for this transaction.
                SSL::enable
            }
            # Hold down any further processing and release the TCP session further
            # down the event loop.
            set detect_handshake 0
            TCP::release
        } else {
            # We've not been able to match an SNI field to an SSL profile. We will
            # fall back to the "default" SSL profile selected (this might lead to
            # certificate validation errors on non SNI-capable browsers).
            set detect_handshake 0
            SSL::enable
            TCP::release
        }
    }
}
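The two datagroups from the "How to use" section can also be created from the command line; a quick tmsh sketch with the illustrative names from this snippet:

tmsh create ltm data-group internal tls_servername type string records add { testsite.site.com { data clientssl_testsite } }
tmsh create ltm data-group internal tls_servername_pool type string records add { testsite.site.com { data testsite_pool_80 } }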
The WAF Dilemma

We are always facing the dilemma of "Security vs Usability" in the world of security. This becomes painfully obvious once you start implementing a WAF. I have now implemented a wide range of WAF security policies, both BIG-IP AWAF and NAP, and two application functions/features always stand out: file upload and wiki editors. The core problem with the two scenarios is that they are about handling unstructured data. No matter how hard you try to tune the policy, you will have an endless amount of false positives interrupting the end users. If we don't handle this problem correctly, we will be forced (aka demanded by the business) to disable the WAF policy. And that is a lose-lose situation.

What I have constructed is a way to minimize this problem by differentiating between authenticated and unauthenticated end users. In most situations we can have a higher level of trust in traffic that is authenticated and thus tune down the security. My design is very binary: if you are authenticated the WAF is turned off, if not it is on. This might not be good enough for you, but this is only an example of how to go about the core problem. You can fine-tune the solution to be more granular based on the information available, like switching the security policy or taking other mitigating actions. Just remember that having a simple WAF is always better than not having any at all.

You can find the details, configuration and code here: NGINX App Protect with Authentication | Wiki

As always, feedback is much appreciated!
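To illustrate the binary idea in NGINX App Protect terms, here is a minimal sketch that simply treats the presence of a session cookie as "authenticated" and skips enforcement for that traffic. This is deliberately simplified compared to the linked solution (which validates the session properly): the cookie name, URI prefix, and upstream address are assumptions for illustration only.

upstream backend {
    server 192.0.2.10:80;  # placeholder origin server
}

server {
    listen 80;

    location / {
        app_protect_enable on;  # unauthenticated traffic is inspected
        app_protect_policy_file "/etc/app_protect/conf/NginxDefaultPolicy.json";
        # Requests carrying the (assumed) session cookie are re-routed
        # to the uninspected internal location below.
        if ($cookie_session) {
            rewrite ^ /__authed$uri last;
        }
        proxy_pass http://backend;
    }

    location /__authed/ {
        internal;                # not reachable directly from the outside
        app_protect_enable off;  # "trusted" authenticated traffic bypasses the WAF
        rewrite ^/__authed(.*)$ $1 break;
        proxy_pass http://backend;
    }
}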
Trigger js challenge/Captcha for ip reputation/ip intelligence categories

Problem solved by this Code Snippet

Because some ISPs or cloud providers do not monitor their users, client IP addresses often get marked as "spam sources" or "windows exploits", and as the IP addresses are dynamic, a legitimate user can end up with such an address later on. Because of this, these categories are often disabled in the IP intelligence profile or under the ASM/AWAF policy. This usually happens in public clouds that do not monitor what their users do: the IP gets marked as bad, then a day or two later another, legitimate user gets that IP address, and this causes the issue. For many of my clients I had to disable the IP reputation/IP intelligence category "spam sources" and in some cases "windows exploits", so having a javascript/captcha check seems a nice compromise. To still make use of these categories, the users coming from those IP addresses can be forced to solve captcha checks or at least be checked for javascript support!

How to use this Code Snippet

1. Have AWAF/ASM and IP intelligence licensed.
2. Add an AWAF/ASM policy with the iRule support option (by default not enabled under the policy) and/or a Bot profile under the virtual server.
3. Optionally add an IP intelligence profile, or enable IP intelligence under the WAF policy without the categories that cause a lot of false positives.
4. Add the iRule and, if needed, modify the categories for which it triggers.
5. Do not forget to first create the data group used in the code (or delete that part of the code), and to uncomment the Bot part of the code if you plan to do a js check and not captcha (and maybe comment out the captcha part)!

Code Snippet Meta Information

Version: 17.1.3
Coding Language: TCL

Code

You can find the code and further documentation in my GitHub repository: reputation-javascript-captcha-challlenge/ at main · Nikoolayy1/reputation-javascript-captcha-challlenge

when HTTP_REQUEST {
    # Take the ip address for ip reputation/intelligence check from the XFF header if it
    # comes from the whitelisted source ip addresses in data group "client_ip_class"
    if { [HTTP::header exists "X-Forwarded-For"] && [class match [IP::client_addr] equals "/Common/client_ip_class"] } {
        set trueIP [HTTP::header "X-Forwarded-For"]
    } else {
        set trueIP [IP::client_addr]
    }

    # Check if IP reputation is triggered and it is containing "Spam Sources"
    if { ([llength [IP::reputation $trueIP]] != 0) && ([IP::reputation $trueIP] contains "Spam Sources") } {
        log local0. "The category is [IP::reputation $trueIP] from [IP::client_addr]"
        # Set the variable 1 or boolean true as to trigger ASM captcha or bot defense javascript
        set js_ch 1
    } else {
        set js_ch 0
    }

    # Custom response page just for testing if there is no real backend origin server
    if {!$js_ch} {
        HTTP::respond 200 content {
            <html>
            <head>
            <title>Apology Page</title>
            </head>
            <body>
            We are sorry, but the site you are looking for is temporarily out of service<br>
            If you feel you have reached this page in error, please try again.
            </body>
            </html>
        }
    }
}

# when BOTDEFENSE_ACTION {
#     # Trigger bot defense action javascript check for Spam Sources
#     if {$js_ch && (not ([BOTDEFENSE::reason] starts_with "passed browser challenge")) && ([BOTDEFENSE::action] eq "allow") } {
#         BOTDEFENSE::action browser_challenge
#     }
# }

when ASM_REQUEST_DONE {
    # Trigger ASM captcha check only for users coming from Spam sources that have not
    # already passed the captcha check (don't have the captcha cookie)
    if {$js_ch && [ASM::captcha_status] ne "correct"} {
        set res [ASM::captcha]
        if {$res ne "ok"} {
            log local0. "Cannot send captcha_challenge: \"$res\""
"Cannot send captcha_challenge: \"$res\"" } } } Extra References: BOTDEFENSE::action ASM::captcha ASM::captcha_status280Views1like1CommentF5 ASM/AWAF Preventing unauthorized users accessing admin pathβ using iRule script
F5 ASM/AWAF Preventing unauthorized users accessing admin path using iRule script

The code below uses the new BIG-IP iRule commands "[ASM::is_authenticated]" and "[ASM::username]", and the code is simple enough: if you are authenticated but not admin, you will not get access to the URL path "/about.php", and this is logged in the /var/log/asm logs because of "log local3.". At the end of the article I have shown how, with APM, you can restrict specific URLs by AD group, but then the authentication is moved to the APM, while in the AWAF iRule example the authentication is done on the origin web server and the AWAF just handles the URL authorization.

when ASM_REQUEST_DONE {
    if { [ASM::is_authenticated] && [HTTP::path] equals "/about.php" } {
        log local3. "This request was sent by user [ASM::username]."
        if {[ASM::username] equals "admin"} {
            log local3. "The admin has logged!"
            return
        } else {
            drop
        }
    }
}

Github link: Nikoolayy1/F5_AWAF-ASM-ADMIN-Access: F5 BIG-IP iRule code for limiting user access to urls!

The harder part is that you need to do several prerequisites that I will explain here:

1. Enable iRule support in the ASM policy.
2. Configure a login page and optionally login enforcement (if "/about.php" is not blocked by the origin server from being accessed before login, this is a needed step!).
3. Enable session tracking by login page.
4. Attach the iRule.
5. Test and see.

Example logs:

cat /var/log/asm
.........
Jun 25 03:59:33 bigip1.com info tmm2[11400]: Rule /Common/f5-asm-allow-admin <ASM_REQUEST_DONE>: This request was sent by user admin.
Jun 25 03:59:33 bigip1.com info tmm2[11400]: Rule /Common/f5-asm-allow-admin <ASM_REQUEST_DONE>: The admin has logged!
[root@bigip1:Active:Standalone] config #

The DVWA app was used for this demo; it is old but gold, and there are many F5 demos on how to configure login enforcement for it! Here is a youtube video for assistance: BIG-IP AWAF Demo 32 - Use Login Page Enforcement with F5 BIG-IP Adv WAF (formerly ASM)

Extra links (there is also a new event "ASM_RESPONSE_LOGIN"):

ASM::username
ASM::is_authenticated
https://clouddocs.f5.com/api/irules/ASM.html

AD group url enforcement:

If you want to control access to URLs based on AD groups, I suggest looking at the F5 APM/Access module, which will take care of the authentication, and with a Layer 7 ACL each AD group can be limited in what it has access to. APM and AWAF can work together: with a layered virtual server, AWAF can be placed before the APM (by default it is after it), and then to get the username you need to use the login page feature and not the "Use APM username and Session ID" feature in the AWAF policy.

Configuring Access Control Lists
https://my.f5.com/manage/s/article/K00363504
https://my.f5.com/manage/s/article/K03113285
https://my.f5.com/manage/s/article/K54217479

Example APM profile of type LTM+APM and the APM policy for anyone interested, where the APM uses AD to authenticate the users and query for group data, and the members of the guest group have an ACL assigned that limits their access.

Summary: This will probably be available in F5 NEXT as well, with many more cool features!
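As a variation on the iRule above, the allowed usernames could be kept in a data group instead of being hardcoded; this sketch assumes a string data group named admin_users_dg that you create yourself, so both the name and the approach are illustrative:

when ASM_REQUEST_DONE {
    if { [ASM::is_authenticated] && [HTTP::path] equals "/about.php" } {
        # Allow only users listed in the (hypothetical) admin_users_dg data group
        if { [class match [ASM::username] equals admin_users_dg] } {
            log local3. "Admin user [ASM::username] allowed."
            return
        } else {
            drop
        }
    }
}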
NGINX App Protect v5 Signature Notifications

When working with NAP (NGINX App Protect) you don't have an easy way of knowing when any of the signatures are updated. As an old BIG-IP guy I find that rather strange; there you have built-in automatic updates and notifications. Unfortunately there aren't any APIs you can probe, which would have been the best way of doing it. Hopefully they will come one day. However, "friction" and "hard" will not keep me from finding a solution!

I have previously made a solution for NAPv4 and have been mentally gearing up for a NAPv5 version. The reason for the delay lies in the different ways NAPv4 and NAPv5 are designed. Where NAPv4 is one module loaded in NGINX, NAPv5 is completely detached from NGINX (well, almost: you still need to load a small module to get the traffic from NGINX to NAP) and only works with containers.

NAPv5 has moved the signature "storage" from the actual host it is running on (e.g. an installed package) to the policy. This has the consequence that finding a valid "source of truth" for the latest signature versions is not as simple as building a new image and seeing which versions got installed. There are very good reasons for this design that I will come back to later.

When you fire up NAPv5 you get three containers for the data plane (NGINX, waf-enforcer and waf-config-mgr) and one for the "control plane" (waf-compiler). For this solution the "control plane" is the useful one. It isn't really a control plane, but it gives a nice picture of how it is detached from the actual processing of traffic.

When you update your signatures you are actually doing it through the waf-compiler. The waf-compiler is a container hosting the actual signature databases, and every time a new version is released you need to rebuild this container, compile your policies into a new version, and reload NGINX. And this is what I take advantage of when I look for signature updates. It has the upside that you only need the waf-compiler to get the information you need. My solution will take care of the entire process and make sure that you are always running with the latest signatures.

Back to the reason why the split of functions is a very good thing. When you build a new version of the NGINX image and deploy it into production, NAP needs to compile the policies as they load. During the compilation NGINX is not moving any traffic! This becomes an annoying problem even when you have a low number of policies. I have installations where it takes 5 to 10 minutes from deployment of the new image until it starts moving traffic. That is a crazy long time when you are used to working with micro-services and expect everything to flip within seconds. If you have your NAPv4 hooked up to a NGINX Instance Manager (NIM) the problem is somewhat mitigated, as NIM will compile the policies before sending them to the gateways. NIM is not a nimble piece of software, so it doesn't always fit into the environment.

And now here is my hack to the notification problem:

The solution consists of two bash scripts and one html template. The template is used when sending a notification mail. I wanted it to be pretty and that was easiest with html. Strictly speaking you could do with just a simple text-based mail. Save all three in the same directory.

The main script is called "waf_policy_auto_compile.sh" and is the one you put into crontab. The main script will build a new waf-compiler image and compile a test policy. The outcome of that is information about which versions are the newest.
It will then extract the versions from an old policy and simply see if any of the versions differ. For this to work you need to have an uncompiled policy (you can just use the default one) and a compiled version of it ready beforehand. When a diff has been identified, the notification logic is executed and a second script is called: "compile_waf_policies.sh". It basically just trawls through the directory of your policies and logging profiles and compiles a new version of them all. It is not necessary to recompile the logging profiles, so this will probably change in the next version. As the compilation completes, the main script will nudge NGINX to reload, thus implementing all the new versions.

You can run "waf_policy_auto_compile.sh" with a verbose flag (-v) and a debug flag (-d). The verbose flag is intended to be used when you run it on a terminal and want the information displayed there. Debug is, well, for debug.

The construction of the scripts is based on my own needs, but they should be easy to adjust for any need. I will be happy for any feedback, so please don't hold back!

version_report_template.html:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>WAF Policy Version Report</title>
<style>
    body { font-family: system-ui, sans-serif; }
    .ok { color: #28a745; font-weight: bold; }
    .warn { color: #f0ad4e; font-weight: bold; }
    .section { margin-bottom: 1.2em; }
    .label { font-weight: bold; }
</style>
</head>
<body>
<h2>WAF Policy Version Report</h2>
<div class="section">
    <div class="label">Attack Signatures:</div>
    <div>Current: <span>{{ATTACK_OLD}}</span></div>
    <div>New: <span>{{ATTACK_NEW}}</span></div>
    <div>Status: <span class="{{ATTACK_CLASS}}">{{ATTACK_STATUS}}</span></div>
</div>
<div class="section">
    <div class="label">Bot Signatures:</div>
    <div>Current: <span>{{BOT_OLD}}</span></div>
    <div>New: <span>{{BOT_NEW}}</span></div>
    <div>Status: <span class="{{BOT_CLASS}}">{{BOT_STATUS}}</span></div>
</div>
<div class="section">
    <div class="label">Threat Campaigns:</div>
    <div>Current: <span>{{THREAT_OLD}}</span></div>
    <div>New: <span>{{THREAT_NEW}}</span></div>
    <div>Status: <span class="{{THREAT_CLASS}}">{{THREAT_STATUS}}</span></div>
</div>
<div>Run completed: {{RUN_DATETIME}}</div>
</body>
</html>

compile_waf_policies.sh:

#!/bin/bash
# ==============================================================================
# Script Name: compile_waf_policies.sh
#
# Description:
#   Compiles:
#     1. WAF policy JSON files from the 'policies' directory
#     2. WAF logging JSON files from the 'logging' directory
#   using the 'waf-compiler-latest:custom' Docker image. Output goes to
#   '/opt/napv5/app_protect_etc_config' where NGINX and waf-config-mgr
#   can reach them.
#
# Requirements:
#   - Docker installed and accessible
#   - Docker image 'waf-compiler-latest:custom' present locally
#
# Usage:
#   ./compile_waf_policies.sh
# ==============================================================================

set -euo pipefail
IFS=$'\n\t'
SECONDS=0  # Track total execution time

# ========================
# CONFIGURABLE VARIABLES
# ========================
BASE_DIR="/root/napv5/waf-compiler"
OUTPUT_DIR="/opt/napv5/app_protect_etc_config"
POLICY_INPUT_DIR="$BASE_DIR/policies"
POLICY_OUTPUT_DIR="$OUTPUT_DIR"
LOGGING_INPUT_DIR="$BASE_DIR/logging"
LOGGING_OUTPUT_DIR="$OUTPUT_DIR"
GLOBAL_SETTINGS="$BASE_DIR/global_settings.json"
DOCKER_IMAGE="waf-compiler-latest:custom"

# ========================
# VALIDATION
# ========================
echo "Validating paths..."
[[ -d "$POLICY_INPUT_DIR" ]] || { echo "β Error: Policy input directory '$POLICY_INPUT_DIR' does not exist."; exit 1; } [[ -f "$GLOBAL_SETTINGS" ]] || { echo "β Error: Global settings file '$GLOBAL_SETTINGS' not found."; exit 1; } mkdir -p "$POLICY_OUTPUT_DIR" mkdir -p "$LOGGING_OUTPUT_DIR" # ======================== # POLICY COMPILATION # ======================== echo "π¦ Compiling WAF policies from: $POLICY_INPUT_DIR" for POLICY_FILE in "$POLICY_INPUT_DIR"/*.json; do [[ -f "$POLICY_FILE" ]] || continue BASENAME=$(basename "$POLICY_FILE" .json) OUTPUT_FILE="$POLICY_OUTPUT_DIR/${BASENAME}.tgz" echo "βοΈ [Policy] Compiling $(basename "$POLICY_FILE") -> $(basename "$OUTPUT_FILE")" docker run --rm \ -v "$POLICY_INPUT_DIR":"$POLICY_INPUT_DIR" \ -v "$POLICY_OUTPUT_DIR":"$POLICY_OUTPUT_DIR" \ -v "$(dirname "$GLOBAL_SETTINGS")":"$(dirname "$GLOBAL_SETTINGS")" \ "$DOCKER_IMAGE" \ -g "$GLOBAL_SETTINGS" \ -p "$POLICY_FILE" \ -o "$OUTPUT_FILE" done # ======================== # LOGGING COMPILATION # ======================== echo "π Compiling WAF logging configs from: $LOGGING_INPUT_DIR" if [[ -d "$LOGGING_INPUT_DIR" ]]; then for LOG_FILE in "$LOGGING_INPUT_DIR"/*.json; do [[ -f "$LOG_FILE" ]] || continue BASENAME=$(basename "$LOG_FILE" .json) OUTPUT_FILE="$LOGGING_OUTPUT_DIR/${BASENAME}.tgz" echo "βοΈ [Logging] Compiling $(basename "$LOG_FILE") -> $(basename "$OUTPUT_FILE")" docker run --rm \ -v "$LOGGING_INPUT_DIR":"$LOGGING_INPUT_DIR" \ -v "$LOGGING_OUTPUT_DIR":"$LOGGING_OUTPUT_DIR" \ "$DOCKER_IMAGE" \ -l "$LOG_FILE" \ -o "$OUTPUT_FILE" done else echo "β οΈ Skipping logging config compilation: directory '$LOGGING_INPUT_DIR' does not exist." fi # ======================== # COMPLETION MESSAGE # ======================== RUNTIME=$SECONDS printf "\nβ Compilation complete.\n" echo " - Policies output: $POLICY_OUTPUT_DIR" echo " - Logging output: $LOGGING_OUTPUT_DIR" echo printf "β±οΈ Total time taken: %02d minutes %02d seconds\n" $((RUNTIME / 60)) $((RUNTIME % 60)) echo waf_policy_auto_compile.sh: #!/bin/bash ############################################################################### # waf_policy_auto_compile.sh # # - Only prints colorized summary output to terminal if -v/--verbose is used # - Mails a styled HTML report using a template, substituting version numbers/status/colors # - Debug output (step_log) only to syslog if -d/--debug is used # - Otherwise: completely silent except for errors # - All main blocks are modularized in functions ############################################################################### set -euo pipefail IFS=$'\n\t' # ===== CONFIGURABLE VARIABLES ===== WORKROOT="/root/napv5" WORKDIR="$WORKROOT/waf-compiler" DOCKERFILE="$WORKDIR/Dockerfile" BUNDLE_DIR="$WORKDIR/test" NEW_BUNDLE="$BUNDLE_DIR/test_new.tgz" OLD_BUNDLE="$BUNDLE_DIR/test_old.tgz" NEW_META="$BUNDLE_DIR/test_new_meta.json" COMPILER_IMAGE="waf-compiler-latest:custom" EMAIL_RECIPIENT="example@example.com" EMAIL_SUBJECT="WAF Compiler Update Notification" NGINX_RELOAD_CMD="docker exec nginx-plus nginx -s reload" HTML_TEMPLATE="$WORKDIR/version_report_template.html" HTML_REPORT="$WORKDIR/version_report.html" VERBOSE=0 DEBUG=0 # ===== DEBUG AND ERROR LOGGING ===== exec 2> >(tee -a /tmp/waf_policy_auto_compile_error.log | /usr/bin/logger -t waf_policy_auto_compile_error) step_log() { if [ "$DEBUG" -eq 1 ]; then echo "DEBUG: $1" | /usr/bin/logger -t waf_policy_auto_compile fi } # ===== ARGUMENT PARSING ===== while [[ $# -gt 0 ]]; do case "$1" in -v|--verbose) VERBOSE=1 shift ;; -d|--debug) DEBUG=1 echo "Debug log can 
            shift
            ;;
        -*)
            echo "Unknown option: $1" >&2
            exit 1
            ;;
        *)
            shift
            ;;
    esac
done

# ----- LOG INITIAL ENVIRONMENT IF DEBUG -----
step_log "waf_policy_auto_compile starting (PID $$)"
step_log "Script PATH: $PATH"
step_log "which docker: $(which docker 2>/dev/null)"
step_log "which jq: $(which jq 2>/dev/null)"

# ===== COLOR DEFINITIONS =====
color_reset="\033[0m"
color_green="\033[1;32m"
color_yellow="\033[1;33m"

# ===== LOGGING FUNCTIONS =====
log() {
    # Only log to terminal if VERBOSE is enabled
    if [ "$VERBOSE" -eq 1 ]; then
        echo "[$(date --iso-8601=seconds)] $*"
    fi
}

# ===== HTML REPORT GENERATOR =====
generate_html_report() {
    local attack_old="$1"
    local attack_new="$2"
    local attack_status="$3"
    local attack_class="$4"
    local bot_old="$5"
    local bot_new="$6"
    local bot_status="$7"
    local bot_class="$8"
    local threat_old="$9"
    local threat_new="${10}"
    local threat_status="${11}"
    local threat_class="${12}"
    local datetime
    datetime=$(date --iso-8601=seconds)

    cp "$HTML_TEMPLATE" "$HTML_REPORT"
    sed -i "s|{{ATTACK_OLD}}|$attack_old|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_NEW}}|$attack_new|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_STATUS}}|$attack_status|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_CLASS}}|$attack_class|g" "$HTML_REPORT"
    sed -i "s|{{BOT_OLD}}|$bot_old|g" "$HTML_REPORT"
    sed -i "s|{{BOT_NEW}}|$bot_new|g" "$HTML_REPORT"
    sed -i "s|{{BOT_STATUS}}|$bot_status|g" "$HTML_REPORT"
    sed -i "s|{{BOT_CLASS}}|$bot_class|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_OLD}}|$threat_old|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_NEW}}|$threat_new|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_STATUS}}|$threat_status|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_CLASS}}|$threat_class|g" "$HTML_REPORT"
    sed -i "s|{{RUN_DATETIME}}|$datetime|g" "$HTML_REPORT"
}

# ===== BUILD COMPILER IMAGE =====
build_compiler() {
    step_log "about to build_compiler"
    docker build --no-cache --platform linux/amd64 \
        --secret id=nginx-crt,src="$WORKROOT/nginx-repo.crt" \
        --secret id=nginx-key,src="$WORKROOT/nginx-repo.key" \
        -t "$COMPILER_IMAGE" \
        -f "$DOCKERFILE" "$WORKDIR" > "$WORKDIR/waf_compiler_build.log" 2>&1 || {
        echo "ERROR: docker build failed. Dumping build log:" | /usr/bin/logger -t waf_policy_auto_compile_error
        cat "$WORKDIR/waf_compiler_build.log" | /usr/bin/logger -t waf_policy_auto_compile_error
        exit 1
    }
    step_log "after build_compiler"
}

# ===== COMPILE TEST POLICY =====
compile_test_policy() {
    step_log "about to compile_test_policy"
    docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
        -p /bundle/test.json -o /bundle/test_new.tgz > "$NEW_META"
    step_log "after compile_test_policy"
    if [ -f "$NEW_META" ]; then
        step_log "$(cat "$NEW_META")"
    else
        step_log "NEW_META does not exist"
    fi
}

# ===== CHECK OLD_BUNDLE =====
check_old_bundle() {
    step_log "about to check OLD_BUNDLE"
    if [ -f "$OLD_BUNDLE" ]; then
        step_log "$(ls -l "$OLD_BUNDLE")"
    else
        step_log "OLD_BUNDLE does not exist"
    fi
}

# ===== GET NEW VERSIONS FUNCTION =====
get_new_versions() {
    jq -r '
    {
        "attack": .attack_signatures_package.version,
        "bot": .bot_signatures_package.version,
        "threat": .threat_campaigns_package.version
    }' "$NEW_META"
}

# ===== VERSION EXTRACTION FROM OLD BUNDLE =====
extract_bundle_versions() {
    docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
        -dump -bundle "/bundle/test_old.tgz"
}

extract_versions_from_dump() {
    extract_bundle_versions | awk '
    BEGIN { print "{" }
    /attack-signatures:/ { in_attack=1; next }
    /bot-signatures:/ { in_bot=1; next }
    /threat-campaigns:/ { in_threat=1; next }
    in_attack && /version:/ {
        gsub("version: ", "")
        printf "\"attack\":\"%s\",\n", $1
        in_attack=0
    }
    in_bot && /version:/ {
        gsub("version: ", "")
        printf "\"bot\":\"%s\",\n", $1
        in_bot=0
    }
    in_threat && /version:/ {
        gsub("version: ", "")
        printf "\"threat\":\"%s\"\n", $1
        in_threat=0
    }
    END { print "}" }
    '
}

get_old_versions() {
    extract_versions_from_dump
}

# ===== GET & PRINT VERSIONS =====
get_versions() {
    step_log "about to get_new_versions"
    new_versions=$(get_new_versions)
    step_log "new_versions: $new_versions"
    step_log "after get_new_versions"
    step_log "about to get_old_versions"
    old_versions=$(get_old_versions)
    step_log "old_versions: $old_versions"
    step_log "after get_old_versions"
}

# ===== VERSION COMPARISON =====
compare_versions() {
    step_log "compare_versions start"
    attack_old=$(echo "$old_versions" | jq -r .attack)
    attack_new=$(echo "$new_versions" | jq -r .attack)
    bot_old=$(echo "$old_versions" | jq -r .bot)
    bot_new=$(echo "$new_versions" | jq -r .bot)
    threat_old=$(echo "$old_versions" | jq -r .threat)
    threat_new=$(echo "$new_versions" | jq -r .threat)

    attack_status=$([[ "$attack_old" != "$attack_new" ]] && echo "Updated" || echo "No Change")
    bot_status=$([[ "$bot_old" != "$bot_new" ]] && echo "Updated" || echo "No Change")
    threat_status=$([[ "$threat_old" != "$threat_new" ]] && echo "Updated" || echo "No Change")

    attack_class=$([[ "$attack_status" == "Updated" ]] && echo "warn" || echo "ok")
    bot_class=$([[ "$bot_status" == "Updated" ]] && echo "warn" || echo "ok")
    threat_class=$([[ "$threat_status" == "Updated" ]] && echo "warn" || echo "ok")

    echo "Attack:$attack_status Bot:$bot_status Threat:$threat_status" > "$WORKDIR/status_flags.txt"

    [[ "$attack_status" == "Updated" ]] && attack_status_colored="${color_yellow}*** Updated ***${color_reset}" || attack_status_colored="${color_green}No Change${color_reset}"
    [[ "$bot_status" == "Updated" ]] && bot_status_colored="${color_yellow}*** Updated ***${color_reset}" || bot_status_colored="${color_green}No Change${color_reset}"
    [[ "$threat_status" == "Updated" ]] && threat_status_colored="${color_yellow}*** Updated ***${color_reset}" || threat_status_colored="${color_green}No Change${color_reset}"

    {
        echo -e "Version comparison for container \033[1mNAPv5\033[0m:\n"
"Version comparison for container \033[1mNAPv5\033[0m:\n" echo -e "Attack Signatures:" echo -e " Current Version: $attack_old" echo -e " New Version: $attack_new" echo -e " Status: $attack_status_colored\n" echo -e "Threat Campaigns:" echo -e " Current Version: $threat_old" echo -e " New Version: $threat_new" echo -e " Status: $threat_status_colored\n" echo -e "Bot Signatures:" echo -e " Current Version: $bot_old" echo -e " New Version: $bot_new" echo -e " Status: $bot_status_colored" } > "$WORKDIR/version_report.ansi" sed 's/\x1B\[[0-9;]*[mK]//g' "$WORKDIR/version_report.ansi" > "$WORKDIR/version_report.txt" step_log "Calling log_versions_syslog" log_versions_syslog "$attack_old" "$attack_new" "$attack_status" "$attack_class" \ "$bot_old" "$bot_new" "$bot_status" "$bot_class" \ "$threat_old" "$threat_new" "$threat_status" "$threat_class" step_log "compare_versions finished" } # ===== SYSLOG VERSION LOGGING and HTML REPORT GEN ===== log_versions_syslog() { # Args: # 1-attack_old 2-attack_new 3-attack_status 4-attack_class # 5-bot_old 6-bot_new 7-bot_status 8-bot_class # 9-threat_old 10-threat_new 11-threat_status 12-threat_class local msg msg="AttackSig (current: $1, latest: $2), BotSig (current: $5, latest: $6), ThreatCamp (current: $9, latest: $10)" /usr/bin/logger -t waf_policy_auto_compile "$msg" # Also print to terminal if VERBOSE is enabled if [ "$VERBOSE" -eq 1 ]; then echo "$msg" fi # Always (re)generate HTML for the mail at this point generate_html_report "$@" } # ===== RESPONSE ACTIONS ===== compile_all_policies() { log "Change detected β compiling all policies..." if [ "$VERBOSE" -eq 1 ]; then "$WORKDIR/compile_waf_policies.sh" else "$WORKDIR/compile_waf_policies.sh" > /dev/null 2>&1 fi } reload_nginx() { log "Reloading NGINX..." eval "$NGINX_RELOAD_CMD" } rotate_bundles() { log "Archiving new test bundle as old..." mv "$NEW_BUNDLE" "$OLD_BUNDLE" rm -f "$NEW_META" } send_report_email() { local html_report="$1" mail -s "$EMAIL_SUBJECT" -a "Content-Type: text/html" "$EMAIL_RECIPIENT" < "$html_report" } # ===== MAIN LOGIC ===== main() { build_compiler compile_test_policy check_old_bundle get_versions compare_versions if [[ "$VERBOSE" -eq 1 ]]; then cat "$WORKDIR/version_report.ansi" fi if grep -q "Updated" "$WORKDIR/status_flags.txt"; then if [[ "$VERBOSE" -eq 1 ]]; then echo "Detected updates. Recompiling policies, reloading NGINX, sending report." fi compile_all_policies reload_nginx rotate_bundles send_report_email "$HTML_REPORT" else log "No changes detected β nothing to do." fi log "Done." } main "$@" And should be it.206Views2likes4CommentsSeamless Upgrade of F5 BIG-IP SSL Orchestrator (SSLO) L3 HA Cluster Without Downtime
Seamless Upgrade of F5 BIG-IP SSL Orchestrator (SSLO) L3 HA Cluster Without Downtime

Introduction

Upgrading the SSL Orchestrator (SSLO) in a high availability (HA) deployment can be a daunting task, especially in the BFSI sector, where even minimal downtime can impact mission-critical services such as net banking and mobile banking. While F5's official knowledge article K64003555 provides detailed steps for SSLO upgrades, it typically involves a 30-45 minute service outage. This may not be acceptable in many production environments. This article provides a practical, tested method to upgrade SSLO (version 15.x.x and above) in an HA cluster without any service disruption.

Step-by-Step: Zero-Downtime Upgrade Procedure

1. Convert Full SSLO Topologies to LTM VIPs

SSLO topologies consist of tightly coupled components: virtual servers, access profiles, per-request policies, service chains, inspection services, and iRules. These objects are auto-generated and logically bound. Manual changes or upgrades can cause them to break or behave unpredictably. To avoid this, the first step is to recreate your SSLO traffic flow using native LTM virtual servers.

Use duplicate IPs (or source IP-based routing) to create new LTM VIPs.
Attach the corresponding SSLO access policies, profiles, and iRules to the LTM VIPs manually.
Test traffic flow through these LTM VIPs to ensure they replicate the behavior of your existing SSLO topologies.

Note: In my setup, I used the existing VIP IPs by limiting the source address to my own public IP. I then changed the source matching rules for controlled traffic diversion during testing.

2. Detach SSLO Profiles/Policies from VIPs on the Standby Device

Once the LTM VIPs are functional and validated, detach the SSLO-related profiles from them on the standby device before starting the upgrade. Use the following example command:

modify ltm virtual myapp.example.com \
    per-flow-request-access-policy none \
    profiles delete { sslo_myapp_topology.app/sslo_myapp_topology_accessProfile } \
    rules none

Once detached, the device will act as a standard LTM unit - safe for upgrades without SSLO-related interruptions.

3. Upgrade Standby Device to New TMOS Version

Continue to upgrade the standby SSLO device to your desired TMOS version (e.g., from 16.x to 17.x). Follow the standard BIG-IP upgrade process: F5 KB: Upgrade Process

Important: Do not access SSLO > Configuration in the GUI before both devices are upgraded. This may trigger an SSLO RPM package upgrade prematurely.

4. Validate Post-Upgrade Configuration

After the upgrade:
Confirm all access policies and profiles are present via Access > Profiles / Policies.
Verify that the LTM VIPs created earlier are intact and function as expected.

5. Re-Attach SSLO Policies and Profiles

Once validation is complete, re-attach the access profiles, policies, and rules to the respective LTM VIPs. You can automate this using TMSH commands, like the example below:

modify ltm virtual myapp.example.com \
    per-flow-request-access-policy ssloP_IDS_IPS_WAF.app/ssloP_IDS_IPS_WAF_per_req_policy \
    profiles add { sslo_appname.app/sslo_appname_accessProfile { } } \
    rules { sslo_appname.app/sslo_appname-min_in_t ssloS_WAF.app/ssloS_WAF-port_remap }

6. Failover to the Upgraded Device

Perform a manual failover to the upgraded standby device. Monitor application behavior and test traffic flow thoroughly.

7. Repeat for the Second Device

Now, repeat Steps 2-6 for the second (now standby) SSLO device. This will complete the upgrade process for the full HA pair.
8. Trigger SSLO GUI Upgrade

Once both devices are upgraded and traffic is tested from both nodes, access SSLO > Configuration via the GUI. This will initiate the SSLO RPM package upgrade safely.

9. Monitor Upgrade Status

Use the following REST API calls for monitoring upgrade progress and sync status:

restcurl -u admin: shared/gossip | grep status
restcurl -u admin: tm/cm/device
restcurl -u admin: tm/cm/device | jq '.items[] | {devicename: .name, ConfigSyncIP: .configsyncIp, UnicastAddress: .unicastAddress}'

References

K64003555 - SSL Orchestrator Upgrade Process
K84554955 - BIG-IP General Upgrade Steps

Conclusion

Upgrading an F5 SSLO HA cluster without downtime is possible with proper planning and by converting topologies into LTM-native VIPs. This approach ensures continuous traffic flow and minimizes the risk associated with auto-generated SSLO objects during upgrades.
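For step 6, the manual failover can be triggered from the GUI or, as a quick sketch, from tmsh on the currently active device:

tmsh run sys failover standby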
NGINX Plus Request body Rate Limit with the NJS module and javascript

The nginx njs module allows javascript to process requests and responses inside nginx, similar to what nodejs does on the backend. The module is dynamic for NGINX Plus (https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/), while for the community nginx it needs to be compiled. The code and nginx configuration are also present at: https://github.com/Nikoolayy1/nginx_njs_request_body_limit/tree/main

I have used the example rate limiter from https://github.com/nginx/njs-examples and https://clouddocs.f5.com/training/community/nginx/html/class3/class3.html and modified the rate limit example to be based on the request body. It works as expected. The "r.internalRedirect('@app-backend');" internal redirect is needed, as nginx by default does not populate or save the request body, and this is why the request needs to pass twice through the nginx proxy for the body variable to be properly populated!

The nginx plus rootless container is a great option for F5 XC RE, where root containers are not accepted; for Nginx on XC RE I have made another article at F5 XC vk8s open source nginx deployment on RE | DevCentral

NJS "main" file code:

const defaultResponse = "0";
const user = 'username';
const pass = 'username';

function ratelimit(r) {
    switch (r.method) {
    case 'POST':
        var body = r.requestText;
        r.log(`body: ${body}`);
        if (r.headersIn['Content-Type'] != 'application/x-www-form-urlencoded' || !body.length) {
            r.internalRedirect('@app-backend');
            return;
        }
        var result_user = body.includes(user);
        var result_pass = body.includes(pass);
        if (!result_user) {
            r.internalRedirect('@app-backend');
            return;
        }
        const zone = r.variables['rl_zone_name'];
        const kv = zone && ngx.shared && ngx.shared[zone];
        if (!kv) {
            r.log(`ratelimit: ${zone} js_shared_dict_zone not found`);
            r.internalRedirect('@app-backend');
            return;
        }
        const key = r.variables['rl_key'] || r.variables['remote_addr'];
        const window = Number(r.variables['rl_windows_ms']) || 60000;
        const limit = Number(r.variables['rl_limit']) || 10;
        const now = Date.now();
        let requestData = kv.get(key);
        if (requestData === undefined || requestData.length === 0) {
            requestData = { timestamp: now, count: 1 };
            kv.set(key, JSON.stringify(requestData));
            r.internalRedirect('@app-backend');
            return;
        }
        try {
            requestData = JSON.parse(requestData);
        } catch (e) {
            requestData = { timestamp: now, count: 1 };
            kv.set(key, JSON.stringify(requestData));
            r.internalRedirect('@app-backend');
            return;
        }
        if (!requestData) {
            requestData = { timestamp: now, count: 1 };
            kv.set(key, JSON.stringify(requestData));
            r.internalRedirect('@app-backend');
            return;
        }
        if (now - requestData.timestamp >= window) {
            requestData.timestamp = now;
            requestData.count = 1;
        } else {
            requestData.count++;
        }
        const elapsed = now - requestData.timestamp;
        r.log(`limit: ${limit} window: ${window} elapsed: ${elapsed} count: ${requestData.count} timestamp: ${requestData.timestamp}`);
        let retryAfter = 0;
        if (requestData.count > limit) {
            retryAfter = 1;
        }
        kv.set(key, JSON.stringify(requestData));
        if (retryAfter) {
            r.return(401, "Unauthorized\n");
            return;
        }
    default:
        r.internalRedirect('@app-backend');
        return;
    }
}

// Only ratelimit is defined in this snippet; the other helpers (sub, header,
// parseRequestBody, log) live in the full file in the GitHub repository.
export default {ratelimit};

Nginx nginx.conf file:

server {
    listen 80 default_server;
    server_name localhost;

    access_log /var/log/nginx/host.access.log main;

    js_var $rl_zone_name kv;      # shared dict zone name; required variable
    js_var $rl_windows_ms 30000;  # optional window in milliseconds; default 1 minute window if not set
    js_var $rl_limit 3;           # optional limit for the window; default 10 requests if not set
    js_var $rl_key $remote_addr;  # rate limit key; default remote_addr if not set
    js_set $rl_result main.ratelimit;  # call ratelimit function that returns retry-after value if limit is exceeded

    root /var/www/html;
    index index.html;
    include /etc/nginx/mime.types;

    error_log /var/log/nginx/host.error_log debug;

    if ($target) {
        return 401;
    }

    location / {
        js_content main.ratelimit;
    }

    location @app-backend {
        internal;
        proxy_pass http://backend;
    }

    location /backend {
        internal;
        proxy_set_header Host httpforever.com;
        proxy_pass http://backend/;
    }
}

Summary:

There is another example of how to populate the request body variable needed by the njs module using the "mirror" option, shown at https://www.f5.com/company/blog/nginx/deploying-nginx-plus-as-an-api-gateway-part-2-protecting-backend-services, but it did not work for me, so I used the "internal" option with "r.internalRedirect(uri)" (https://nginx.org/en/docs/njs/reference.html).

The nginx njs feature r.subrequest can be used to populate response headers and body, but it is mainly for logging and not for rate limiting, and I think making a real HTTP subrequest using javascript is not optimal and will not scale well, so I do not recommend this option, as rate limiters are best left request based. Also, I saw a strange bug where the subrequest changes the Content-Type header of the response, and I had to use "js_header_filter" to change the response header back.

Nginx App Protect has the BD process from F5 BIG-IP AWAF/ASM, which has DOS protections that can monitor the server's response latency dynamically and make auto thresholds!
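The server block above also assumes some http-level njs plumbing that is not shown in the snippet; a minimal sketch of what that could look like (the file path, zone size, and upstream address are assumptions, not taken from the repository):

http {
    js_path "/etc/nginx/njs/";                  # where main.js is stored (assumed path)
    js_import main from main.js;                # loads the NJS "main" file shown above
    js_shared_dict_zone zone=kv:1m timeout=60s; # shared dict referenced by $rl_zone_name

    upstream backend {
        server 192.0.2.10:80;                   # placeholder origin server
    }

    # ... the server block shown above goes here ...
}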