security
TLS Server Name Indication
Problem this snippet solves: Extensions to TLS encryption protocols after TLS v1.0 have added support for passing the desired servername as part of the initial encryption negotiation. This functionality makes it possible to use different SSL certificates with a single IP address by changing the server's response based on this field. This process is called Server Name Indication (http://en.wikipedia.org/wiki/Server_Name_Indication). It is not supported on all browsers, but has a high level of support among widely-used browsers. Only use this functionality if you know the bulk of the browsers accessing your site support SNI - the fact that IE on Windows XP does not precludes the wide use of this functionality for most sites, but only for now. As older browsers begin to die off, SNI will be a good weapon in your arsenal of virtual hosting tools. You can test if your browser supports SNI by clicking here: https://alice.sni.velox.ch/ Supported Browsers: * Internet Explorer 7 or later, on Windows Vista or higher * Mozilla Firefox 2.0 or later * Opera 8.0 or later (the TLS 1.1 protocol must be enabled) * Opera Mobile at least version 10.1 beta on Android * Google Chrome (Vista or higher. XP on Chrome 6 or newer) * Safari 2.1 or later (Mac OS X 10.5.6 or higher and Windows Vista or higher) * MobileSafari in Apple iOS 4.0 or later (8) * Windows Phone 7 * MicroB on Maemo Unsupported Browsers: * Konqueror/KDE in any version * Internet Explorer (any version) on Windows XP * Safari on Windows XP * wget * BlackBerry Browser * Windows Mobile up to 6.5 * Android default browser (Targeted for Honeycomb but won't be fixed until next version for phone users as Honeycomb will be reserved to tablets only) * Oracle Java JSSE Note: The iRule listed here is only supported on v10 and above. Note: Support for SNI was added in 11.1.0. See SOL13452 for more information. How to use this snippet: Create a string-type datagroup to be called "tls_servername". Each hostname that needs to be supported on the VIP must be input along with its matching clientssl profile. For example, for the site "testsite.site.com" with a ClientSSL profile named "clientssl_testsite", you should add the following values to the datagroup. String: testsite.site.com Value: clientssl_testsite If you wish to switch pool context at the time the servername is detected in TLS, then you need to create a string-type datagroup called "tls_servername_pool". You will input each hostname to be supported by the VIP and the pool to direct the traffic towards. For the site "testsite.site.com" to be directed to the pool "testsite_pool_80", add the following to the datagroup: String: testsite.site.com Value: testsite_pool_80 Apply the iRule below to a chosen VIP. When applied, this iRule will detect if an SNI field is present and dynamically switch the SSL profile and pool to use the configured certificate. Important: The VIP must have a clientSSL profile AND a default pool set. If you don't set this, the iRule will likely break. There is also no real errorhandling for incorrect/inaccurate entries in the datagroup lists -- if you enter a bad value, it'll fail. This allows you to support multiple certificates and multiple pools per VS IP address. when CLIENT_ACCEPTED { if { [PROFILE::exists clientssl] } { # We have a clientssl profile attached to this VIP but we need # to find an SNI record in the client handshake. To do so, we'll # disable SSL processing and collect the initial TCP payload. 
set default_tls_pool [LB::server pool] set detect_handshake 1 SSL::disable TCP::collect } else { # No clientssl profile means we're not going to work. log local0. "This iRule is applied to a VS that has no clientssl profile." set detect_handshake 0 } } when CLIENT_DATA { if { ($detect_handshake) } { # If we're in a handshake detection, look for an SSL/TLS header. binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen # TLS is the only thing we want to process because it's the only # version that allows the servername extension to be present. When we # find a supported TLS version, we'll check to make sure we're getting # only a Client Hello transaction -- those are the only ones we can pull # the servername from prior to connection establishment. switch $tls_version { "769" - "770" - "771" { if { ($tls_xacttype == 22) } { binary scan [TCP::payload] @5c tls_action if { not (($tls_action == 1) && ([TCP::payload length] > $tls_recordlen)) } { set detect_handshake 0 } } } default { set detect_handshake 0 } } if { ($detect_handshake) } { # If we made it this far, we're still processing a TLS client hello. # # Skip the TLS header (43 bytes in) and process the record body. For TLS/1.0 we # expect this to contain only the session ID, cipher list, and compression # list. All but the cipher list will be null since we're handling a new transaction # (client hello) here. We have to determine how far out to parse the initial record # so we can find the TLS extensions if they exist. set record_offset 43 binary scan [TCP::payload] @${record_offset}c tls_sessidlen set record_offset [expr {$record_offset + 1 + $tls_sessidlen}] binary scan [TCP::payload] @${record_offset}S tls_ciphlen set record_offset [expr {$record_offset + 2 + $tls_ciphlen}] binary scan [TCP::payload] @${record_offset}c tls_complen set record_offset [expr {$record_offset + 1 + $tls_complen}] # If we're in TLS and we've not parsed all the payload in the record # at this point, then we have TLS extensions to process. We will detect # the TLS extension package and parse each record individually. if { ([TCP::payload length] >= $record_offset) } { binary scan [TCP::payload] @${record_offset}S tls_extenlen set record_offset [expr {$record_offset + 2}] binary scan [TCP::payload] @${record_offset}a* tls_extensions # Loop through the TLS extension data looking for a type 00 extension # record. This is the IANA code for server_name in the TLS transaction. for { set x 0 } { $x < $tls_extenlen } { incr x 4 } { set start [expr {$x}] binary scan $tls_extensions @${start}SS etype elen if { ($etype == "00") } { # A servername record is present. Pull this value out of the packet data # and save it for later use. We start 9 bytes into the record to bypass # type, length, and SNI encoding header (which is itself 5 bytes long), and # capture the servername text (minus the header). set grabstart [expr {$start + 9}] set grabend [expr {$elen - 5}] binary scan $tls_extensions @${grabstart}A${grabend} tls_servername set start [expr {$start + $elen}] } else { # Bypass all other TLS extensions. set start [expr {$start + $elen}] } set x $start } # Check to see whether we got a servername indication from TLS. If so, # make the appropriate changes. if { ([info exists tls_servername] ) } { # Look for a matching servername in the Data Group and pool. 
set ssl_profile [class match -value [string tolower $tls_servername] equals tls_servername] set tls_pool [class match -value [string tolower $tls_servername] equals tls_servername_pool] if { $ssl_profile == "" } { # No match, so we allow this to fall through to the "default" # clientssl profile. SSL::enable } else { # A match was found in the Data Group, so we will change the SSL # profile to the one we found. Hide this activity from the iRules # parser. set ssl_profile_enable "SSL::profile $ssl_profile" catch { eval $ssl_profile_enable } if { not ($tls_pool == "") } { pool $tls_pool } else { pool $default_tls_pool } SSL::enable } } else { # No match because no SNI field was present. Fall through to the # "default" SSL profile. SSL::enable } } else { # We're not in a handshake. Keep on using the currently set SSL profile # for this transaction. SSL::enable } # Hold down any further processing and release the TCP session further # down the event loop. set detect_handshake 0 TCP::release } else { # We've not been able to match an SNI field to an SSL profile. We will # fall back to the "default" SSL profile selected (this might lead to # certificate validation errors on non SNI-capable browsers. set detect_handshake 0 SSL::enable TCP::release } } }1.2KViews0likes7CommentsThe WAF Dilemma
We are always facing the "security vs. usability" dilemma in the world of security, and it becomes painfully obvious once you start implementing a WAF. I have now implemented a wide range of WAF security policies, both BIG-IP AWAF and NAP, and two application functions/features always stand out: file upload and wiki editors. The core problem in both scenarios is that they handle unstructured data. No matter how hard you try to tune the policy, you will have an endless stream of false positives interrupting the end users. If we don't handle this problem correctly we will be forced (read: demanded by the business) to disable the WAF policy, and that is a lose-lose situation.

What I have constructed is a way to minimize this problem by differentiating between authenticated and unauthenticated end users. In most situations we can place a higher level of trust in authenticated traffic and therefore tune down the security. My design is deliberately binary: if you are authenticated the WAF is turned off, if not it is on. This might not be good enough for you, but it is an example of how to approach the core problem. You can fine-tune the solution to be more granular based on the information available, for example by switching the security policy or taking other mitigating actions. Just remember that having a simple WAF is always better than not having one at all.

You can find the details, configuration and code here: NGINX App Protect with Authentication | Wiki

As always, feedback is much appreciated!
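To make the binary idea more concrete, here is a minimal sketch of one way to express it in NGINX with App Protect (this is not the full solution from the wiki). The cookie name "authsession", the policy path, the hostname and the upstream are placeholders for the example; how you detect "authenticated" depends entirely on your own auth scheme.

upstream app_backend {
    server 10.0.0.10:8080;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    location / {
        # Unauthenticated traffic gets full enforcement.
        app_protect_enable on;
        app_protect_policy_file /etc/app_protect/conf/app_policy.tgz;

        # Any non-empty session cookie is treated as "authenticated" and the
        # request is re-dispatched to the trusted location below.
        if ($cookie_authsession != "") {
            rewrite ^ /trusted$uri last;
        }
        proxy_pass http://app_backend;
    }

    location /trusted/ {
        # Authenticated traffic: WAF disabled (or swap in a relaxed policy instead).
        app_protect_enable off;
        rewrite ^/trusted(.*)$ $1 break;
        proxy_pass http://app_backend;
    }
}

A less drastic variant of the same pattern keeps app_protect_enable on in both locations and only swaps in a transparent or relaxed policy for the authenticated path, so you keep the audit trail without the false-positive noise.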
Trigger js challenge/Captcha for ip reputation/ip intelligence categories

Problem solved by this Code Snippet
Because some ISPs and cloud providers do not monitor their users, client IP addresses often get marked as "Spam Sources" or "Windows Exploits". Since those addresses are dynamic, a legitimate user can end up with one of them later, so these categories are often disabled in the IP Intelligence profile or under the ASM/AWAF policy. This usually happens in public clouds that do not monitor what their users do: an IP gets marked as bad, and a day or two later another, good user inherits that address and runs into the block. For many of my clients I had to disable the "Spam Sources" (and in some cases "Windows Exploits") category, so triggering a JavaScript/captcha check instead seems like a nice compromise. To still make use of these categories, users coming from those IP addresses can be forced to solve a captcha, or at least be checked for JavaScript support!

How to use this Code Snippet
Have AWAF/ASM and IP Intelligence licensed.
Add an AWAF/ASM policy with the iRule support option (not enabled by default under the policy) and/or a Bot profile under the virtual server.
Optionally add an IP Intelligence profile, or enable IP Intelligence under the WAF policy without the categories that cause a lot of false positives.
Add the iRule and, if needed, modify the categories for which it triggers.
Do not forget to first create the data group used in the code (or delete that part of the code), and to uncomment the Bot part of the code if you plan to do a JS check instead of a captcha (and perhaps comment out the captcha part)!

Code Snippet Meta Information
Version: 17.1.3
Coding Language: TCL

Code
You can find the code and further documentation in my GitHub repository: reputation-javascript-captcha-challlenge/ at main · Nikoolayy1/reputation-javascript-captcha-challlenge

when HTTP_REQUEST {

    # Take the IP address for the IP reputation/intelligence check from the XFF header
    # if it comes from the whitelisted source IP addresses in data group "client_ip_class"
    if { [HTTP::header exists "X-Forwarded-For"] && [class match [IP::client_addr] equals "/Common/client_ip_class"] } {
        set trueIP [HTTP::header "X-Forwarded-For"]
    } else {
        set trueIP [IP::client_addr]
    }

    # Check if IP reputation is triggered and it contains "Spam Sources"
    if { ([llength [IP::reputation $trueIP]] != 0) && ([IP::reputation $trueIP] contains "Spam Sources") } {
        log local0. "The category is [IP::reputation $trueIP] from [IP::client_addr]"
        # Set the variable to 1 (boolean true) to trigger the ASM captcha or bot defense javascript
        set js_ch 1
    } else {
        set js_ch 0
    }

    # Custom response page, just for testing when there is no real backend origin server
    if {!$js_ch} {
        HTTP::respond 200 content {
            <html>
            <head>
            <title>Apology Page</title>
            </head>
            <body>
            We are sorry, but the site you are looking for is temporarily out of service<br>
            If you feel you have reached this page in error, please try again.
            </body>
            </html>
        }
    }
}

# when BOTDEFENSE_ACTION {
#     # Trigger bot defense action javascript check for Spam Sources
#     if {$js_ch && (not ([BOTDEFENSE::reason] starts_with "passed browser challenge")) && ([BOTDEFENSE::action] eq "allow") } {
#         BOTDEFENSE::action browser_challenge
#     }
# }

when ASM_REQUEST_DONE {
    # Trigger the ASM captcha check only for users coming from Spam Sources that have
    # not already passed the captcha check (i.e. they don't have the captcha cookie)
    if {$js_ch && [ASM::captcha_status] ne "correct"} {
        set res [ASM::captcha]
        if {$res ne "ok"} {
            log local0. "Cannot send captcha_challenge: \"$res\""
        }
    }
}

Extra References:
BOTDEFENSE::action
ASM::captcha
ASM::captcha_status
F5 ASM/AWAF Preventing unauthorized users accessing admin path using iRule script

The code below uses the newer BIG-IP iRule commands [ASM::is_authenticated] and [ASM::username]. The logic is simple: if you are authenticated but not the admin user, you do not get access to the URL path /about.php, and the decision is logged to /var/log/asm because of the "log local3." statements. At the end of the article I show how APM can enforce AD-group restrictions for specific URLs, but in that case authentication moves to APM, whereas in the AWAF iRule example the authentication stays on the origin web server and AWAF only handles the URL authorization.

when ASM_REQUEST_DONE {
    if { [ASM::is_authenticated] && [HTTP::path] equals "/about.php" } {
        log local3. "This request was sent by user [ASM::username]."
        if {[ASM::username] equals "admin"} {
            log local3. "The admin has logged!"
            return
        } else {
            drop
        }
    }
}

Github link: Nikoolayy1/F5_AWAF-ASM-ADMIN-Access: F5 BIG-IP iRule code for limiting user access to URLs!

The harder part is the prerequisites, which I will explain here:
Enable iRule support in the ASM policy.
Configure a login page and optionally login enforcement (if /about.php is not already blocked by the origin server before login, this step is needed!).
Enable session tracking by login page.
Attach the iRule.
Test and see.

Example logs:

cat /var/log/asm
.........
Jun 25 03:59:33 bigip1.com info tmm2[11400]: Rule /Common/f5-asm-allow-admin <ASM_REQUEST_DONE>: This request was sent by user admin.
Jun 25 03:59:33 bigip1.com info tmm2[11400]: Rule /Common/f5-asm-allow-admin <ASM_REQUEST_DONE>: The admin has logged!
[root@bigip1:Active:Standalone] config #

The DVWA app, old but gold, was used for this demo, and there are many F5 demos on how to configure login enforcement for it! Here is a YouTube video for assistance: BIG-IP AWAF Demo 32 - Use Login Page Enforcement with F5 BIG-IP Adv WAF (formerly ASM)

Extra links (there is also a new event "ASM_RESPONSE_LOGIN"):
ASM::username
ASM::is_authenticated
https://clouddocs.f5.com/api/irules/ASM.html

AD group URL enforcement:
If you want to control access to URLs based on AD groups, I suggest looking at the F5 APM/Access module, which will take care of the authentication; with Layer 7 ACLs each AD group can be limited in what it has access to. APM and AWAF can work together: with a layered virtual server AWAF can sit before APM (by default it is after it), and then to get the username you need to use the login page feature and not the "Use APM username and Session ID" feature in the AWAF policy.
Configuring Access Control Lists
https://my.f5.com/manage/s/article/K00363504
https://my.f5.com/manage/s/article/K03113285
https://my.f5.com/manage/s/article/K54217479

Here is an example APM profile of type LTM+APM and the APM policy, for anyone interested, where APM uses AD to authenticate the users and query for group data, and the members of the guest group have an ACL assigned that limits their access.

Summary: This will probably be seen in F5 NEXT as well, with many more cool features!
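As a closing note on the iRule itself: if a single hard-coded username is too restrictive, a string data group makes the allowed list easier to maintain. The sketch below assumes a data group called "allowed_admins" (with lowercase usernames as records) that you create yourself; it is not part of the original snippet or repository.

when ASM_REQUEST_DONE {
    if { [ASM::is_authenticated] && [HTTP::path] equals "/about.php" } {
        # Permit any account listed in the assumed "allowed_admins" string data group.
        if { [class match [string tolower [ASM::username]] equals allowed_admins] } {
            log local3. "Authorized admin [ASM::username] accessed [HTTP::path]."
            return
        }
        drop
    }
}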
NGINX App Protect v5 Signature Notifications

When working with NAP (NGINX App Protect) you don't have an easy way of knowing when any of the signatures are updated. As an old BIG-IP guy I find that rather strange; there you have built-in automatic updates and notifications. Unfortunately there aren't any APIs you can probe, which would have been the best way of doing it. Hopefully it will come one day. However, "friction" and "hard" will not keep me from finding a solution.

I have previously made a solution for NAPv4 and have been trying to get going on a NAPv5 version. The reason for the delay is the different way NAPv4 and NAPv5 are designed. Where NAPv4 is one module loaded in NGINX, NAPv5 is completely detached from NGINX (well, almost: you still need to load a small module to get the traffic from NGINX to NAP) and only works with containers. NAPv5 has also moved the signature "storage" from the actual host it is running on (e.g. an installed package) to the policy. The consequence is that finding a valid "source of truth" for the latest signature versions is not as simple as building a new image and seeing which versions got installed. There are very good reasons for this design that I will come back to later.

When you fire up NAPv5 you get three containers for the data plane (NGINX, waf-enforcer and waf-config-mgr) and one for the "control plane" (waf-compiler). For this solution the "control plane" is the useful one. It isn't really a control plane, but it gives a nice picture of how it is detached from the actual processing of traffic. When you update your signatures you are actually doing it through the waf-compiler. The waf-compiler is a container hosting the actual signature databases, and every time a new version is released you need to rebuild this container, compile your policies into a new version and reload NGINX. This is what I take advantage of when I look for signature updates, and it has the upside that you only need the waf-compiler to get the information you need. My solution takes care of the entire process and makes sure that you are always running with the latest signatures.

Back to the reason why the split of functions is a very good thing. When you build a new version of the NGINX image and deploy it into production, NAP needs to compile the policies as they load, and during the compilation NGINX is not moving any traffic! This becomes an annoying problem even when you have a low number of policies. I have installations where it takes 5 to 10 minutes from deployment of the new image until it starts moving traffic. That is a crazy long time when you are used to working with micro-services and expect everything to flip within seconds. If you have your NAPv4 hooked up to NGINX Instance Manager (NIM) the problem is somewhat mitigated, as NIM will compile the policies before sending them to the gateways, but NIM is not a nimble piece of software so it doesn't always fit into the environment.

And now here is my hack to the notification problem: the solution consists of two bash scripts and one HTML template. The template is used when sending a notification mail; I wanted it to be pretty and that was easiest with HTML, but strictly speaking you could do with just a simple text-based mail. Save all three in the same directory. The main script is called "waf_policy_auto_compile.sh" and is the one you put into crontab. The main script will build a new waf-compiler image and compile a test policy. The outcome of that is information about what versions are the newest.
It will then extract versions from an old policy and simply see if any of the versions differ. For this to work you need to have an uncompiled policy (you can just use the default one) and a compiled version of it ready beforehand. When a diff has been identified the notification logic is executed and a second script is called: "compile_waf_policies.sh". It basically just trawls through the directory of you policies and logging profiles and compiles a new version of them all. It is not necessary to recompile the logging profiles, so this will probably change in the next version. As the compilation completes the main script will nudge NGINX to reload thus implement all the new versions. You can run "waf_policy_auto_compile.sh" with a verbose flag (-v) and a debug flag (-d). The verbose flag is intended to be used when you run it on a terminal and want the information displayed there. Debug is, well, for debug π The construction of the scripts are based on my own needs but they should be easy to adjust for any need. I will be happy for any feedback, so please don't hold back π version_report_template.html: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>WAF Policy Version Report</title> <style> body { font-family: system-ui, sans-serif; } .ok { color: #28a745; font-weight: bold; } .warn { color: #f0ad4e; font-weight: bold; } .section { margin-bottom: 1.2em; } .label { font-weight: bold; } </style> </head> <body> <h2>WAF Policy Version Report</h2> <div class="section"> <div class="label">Attack Signatures:</div> <div>Current: <span>{{ATTACK_OLD}}</span></div> <div>New: <span>{{ATTACK_NEW}}</span></div> <div>Status: <span class="{{ATTACK_CLASS}}">{{ATTACK_STATUS}}</span></div> </div> <div class="section"> <div class="label">Bot Signatures:</div> <div>Current: <span>{{BOT_OLD}}</span></div> <div>New: <span>{{BOT_NEW}}</span></div> <div>Status: <span class="{{BOT_CLASS}}">{{BOT_STATUS}}</span></div> </div> <div class="section"> <div class="label">Threat Campaigns:</div> <div>Current: <span>{{THREAT_OLD}}</span></div> <div>New: <span>{{THREAT_NEW}}</span></div> <div>Status: <span class="{{THREAT_CLASS}}">{{THREAT_STATUS}}</span></div> </div> <div>Run completed: {{RUN_DATETIME}}</div> </body> </html> compile_waf_policies.sh: #!/bin/bash # ============================================================================== # Script Name: compile_waf_policies.sh # # Description: # Compiles: # 1. WAF policy JSON files from the 'policies' directory # 2. WAF logging JSON files from the 'logging' directory # using the 'waf-compiler-latest:custom' Docker image. Output goes to # '/opt/napv5/app_protect_etc_config' where NGINX and waf-config-mgr # can reach them. # # Requirements: # - Docker installed and accessible # - Docker image 'waf-compiler-latest:custom' present locally # # Usage: # ./compile_waf_policies.sh # ============================================================================== set -euo pipefail IFS=$'\n\t' SECONDS=0 # Track total execution time # ======================== # CONFIGURABLE VARIABLES # ======================== BASE_DIR="/root/napv5/waf-compiler" OUTPUT_DIR="/opt/napv5/app_protect_etc_config" POLICY_INPUT_DIR="$BASE_DIR/policies" POLICY_OUTPUT_DIR="$OUTPUT_DIR" LOGGING_INPUT_DIR="$BASE_DIR/logging" LOGGING_OUTPUT_DIR="$OUTPUT_DIR" GLOBAL_SETTINGS="$BASE_DIR/global_settings.json" DOCKER_IMAGE="waf-compiler-latest:custom" # ======================== # VALIDATION # ======================== echo "π§ Validating paths..." 
[[ -d "$POLICY_INPUT_DIR" ]] || { echo "β Error: Policy input directory '$POLICY_INPUT_DIR' does not exist."; exit 1; } [[ -f "$GLOBAL_SETTINGS" ]] || { echo "β Error: Global settings file '$GLOBAL_SETTINGS' not found."; exit 1; } mkdir -p "$POLICY_OUTPUT_DIR" mkdir -p "$LOGGING_OUTPUT_DIR" # ======================== # POLICY COMPILATION # ======================== echo "π¦ Compiling WAF policies from: $POLICY_INPUT_DIR" for POLICY_FILE in "$POLICY_INPUT_DIR"/*.json; do [[ -f "$POLICY_FILE" ]] || continue BASENAME=$(basename "$POLICY_FILE" .json) OUTPUT_FILE="$POLICY_OUTPUT_DIR/${BASENAME}.tgz" echo "βοΈ [Policy] Compiling $(basename "$POLICY_FILE") -> $(basename "$OUTPUT_FILE")" docker run --rm \ -v "$POLICY_INPUT_DIR":"$POLICY_INPUT_DIR" \ -v "$POLICY_OUTPUT_DIR":"$POLICY_OUTPUT_DIR" \ -v "$(dirname "$GLOBAL_SETTINGS")":"$(dirname "$GLOBAL_SETTINGS")" \ "$DOCKER_IMAGE" \ -g "$GLOBAL_SETTINGS" \ -p "$POLICY_FILE" \ -o "$OUTPUT_FILE" done # ======================== # LOGGING COMPILATION # ======================== echo "π Compiling WAF logging configs from: $LOGGING_INPUT_DIR" if [[ -d "$LOGGING_INPUT_DIR" ]]; then for LOG_FILE in "$LOGGING_INPUT_DIR"/*.json; do [[ -f "$LOG_FILE" ]] || continue BASENAME=$(basename "$LOG_FILE" .json) OUTPUT_FILE="$LOGGING_OUTPUT_DIR/${BASENAME}.tgz" echo "βοΈ [Logging] Compiling $(basename "$LOG_FILE") -> $(basename "$OUTPUT_FILE")" docker run --rm \ -v "$LOGGING_INPUT_DIR":"$LOGGING_INPUT_DIR" \ -v "$LOGGING_OUTPUT_DIR":"$LOGGING_OUTPUT_DIR" \ "$DOCKER_IMAGE" \ -l "$LOG_FILE" \ -o "$OUTPUT_FILE" done else echo "β οΈ Skipping logging config compilation: directory '$LOGGING_INPUT_DIR' does not exist." fi # ======================== # COMPLETION MESSAGE # ======================== RUNTIME=$SECONDS printf "\nβ Compilation complete.\n" echo " - Policies output: $POLICY_OUTPUT_DIR" echo " - Logging output: $LOGGING_OUTPUT_DIR" echo printf "β±οΈ Total time taken: %02d minutes %02d seconds\n" $((RUNTIME / 60)) $((RUNTIME % 60)) echo waf_policy_auto_compile.sh: #!/bin/bash ############################################################################### # waf_policy_auto_compile.sh # # - Only prints colorized summary output to terminal if -v/--verbose is used # - Mails a styled HTML report using a template, substituting version numbers/status/colors # - Debug output (step_log) only to syslog if -d/--debug is used # - Otherwise: completely silent except for errors # - All main blocks are modularized in functions ############################################################################### set -euo pipefail IFS=$'\n\t' # ===== CONFIGURABLE VARIABLES ===== WORKROOT="/root/napv5" WORKDIR="$WORKROOT/waf-compiler" DOCKERFILE="$WORKDIR/Dockerfile" BUNDLE_DIR="$WORKDIR/test" NEW_BUNDLE="$BUNDLE_DIR/test_new.tgz" OLD_BUNDLE="$BUNDLE_DIR/test_old.tgz" NEW_META="$BUNDLE_DIR/test_new_meta.json" COMPILER_IMAGE="waf-compiler-latest:custom" EMAIL_RECIPIENT="example@example.com" EMAIL_SUBJECT="WAF Compiler Update Notification" NGINX_RELOAD_CMD="docker exec nginx-plus nginx -s reload" HTML_TEMPLATE="$WORKDIR/version_report_template.html" HTML_REPORT="$WORKDIR/version_report.html" VERBOSE=0 DEBUG=0 # ===== DEBUG AND ERROR LOGGING ===== exec 2> >(tee -a /tmp/waf_policy_auto_compile_error.log | /usr/bin/logger -t waf_policy_auto_compile_error) step_log() { if [ "$DEBUG" -eq 1 ]; then echo "DEBUG: $1" | /usr/bin/logger -t waf_policy_auto_compile fi } # ===== ARGUMENT PARSING ===== while [[ $# -gt 0 ]]; do case "$1" in -v|--verbose) VERBOSE=1 shift ;; -d|--debug) DEBUG=1 echo "Debug log can 
be found in the syslog..." shift ;; -*) echo "Unknown option: $1" >&2 exit 1 ;; *) shift ;; esac done # ----- LOG INITIAL ENVIRONMENT IF DEBUG ----- step_log "waf_policy_auto_compile starting (PID $$)" step_log "Script PATH: $PATH" step_log "which docker: $(which docker 2>/dev/null)" step_log "which jq: $(which jq 2>/dev/null)" # ===== COLOR DEFINITIONS ===== color_reset="\033[0m" color_green="\033[1;32m" color_yellow="\033[1;33m" # ===== LOGGING FUNCTIONS ===== log() { # Only log to terminal if VERBOSE is enabled if [ "$VERBOSE" -eq 1 ]; then echo "[$(date --iso-8601=seconds)] $*" fi } # ===== HTML REPORT GENERATOR ===== generate_html_report() { local attack_old="$1" local attack_new="$2" local attack_status="$3" local attack_class="$4" local bot_old="$5" local bot_new="$6" local bot_status="$7" local bot_class="$8" local threat_old="$9" local threat_new="${10}" local threat_status="${11}" local threat_class="${12}" local datetime datetime=$(date --iso-8601=seconds) cp "$HTML_TEMPLATE" "$HTML_REPORT" sed -i "s|{{ATTACK_OLD}}|$attack_old|g" "$HTML_REPORT" sed -i "s|{{ATTACK_NEW}}|$attack_new|g" "$HTML_REPORT" sed -i "s|{{ATTACK_STATUS}}|$attack_status|g" "$HTML_REPORT" sed -i "s|{{ATTACK_CLASS}}|$attack_class|g" "$HTML_REPORT" sed -i "s|{{BOT_OLD}}|$bot_old|g" "$HTML_REPORT" sed -i "s|{{BOT_NEW}}|$bot_new|g" "$HTML_REPORT" sed -i "s|{{BOT_STATUS}}|$bot_status|g" "$HTML_REPORT" sed -i "s|{{BOT_CLASS}}|$bot_class|g" "$HTML_REPORT" sed -i "s|{{THREAT_OLD}}|$threat_old|g" "$HTML_REPORT" sed -i "s|{{THREAT_NEW}}|$threat_new|g" "$HTML_REPORT" sed -i "s|{{THREAT_STATUS}}|$threat_status|g" "$HTML_REPORT" sed -i "s|{{THREAT_CLASS}}|$threat_class|g" "$HTML_REPORT" sed -i "s|{{RUN_DATETIME}}|$datetime|g" "$HTML_REPORT" } # ===== BUILD COMPILER IMAGE ===== build_compiler() { step_log "about to build_compiler" docker build --no-cache --platform linux/amd64 \ --secret id=nginx-crt,src="$WORKROOT/nginx-repo.crt" \ --secret id=nginx-key,src="$WORKROOT/nginx-repo.key" \ -t "$COMPILER_IMAGE" \ -f "$DOCKERFILE" "$WORKDIR" > "$WORKDIR/waf_compiler_build.log" 2>&1 || { echo "ERROR: docker build failed. 
Dumping build log:" | /usr/bin/logger -t waf_policy_auto_compile_error cat "$WORKDIR/waf_compiler_build.log" | /usr/bin/logger -t waf_policy_auto_compile_error exit 1 } step_log "after build_compiler" } # ===== COMPILE TEST POLICY ===== compile_test_policy() { step_log "about to compile_test_policy" docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \ -p /bundle/test.json -o /bundle/test_new.tgz > "$NEW_META" step_log "after compile_test_policy" if [ -f "$NEW_META" ]; then step_log "$(cat "$NEW_META")" else step_log "NEW_META does not exist" fi } # ===== CHECK OLD_BUNDLE ===== check_old_bundle() { step_log "about to check OLD_BUNDLE" if [ -f "$OLD_BUNDLE" ]; then step_log "$(ls -l "$OLD_BUNDLE")" else step_log "OLD_BUNDLE does not exist" fi } # ===== GET NEW VERSIONS FUNCTION ===== get_new_versions() { jq -r ' { "attack": .attack_signatures_package.version, "bot": .bot_signatures_package.version, "threat": .threat_campaigns_package.version }' "$NEW_META" } # ===== VERSION EXTRACTION FROM OLD BUNDLE ===== extract_bundle_versions() { docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \ -dump -bundle "/bundle/test_old.tgz" } extract_versions_from_dump() { extract_bundle_versions | awk ' BEGIN { print "{" } /attack-signatures:/ { in_attack=1; next } /bot-signatures:/ { in_bot=1; next } /threat-campaigns:/ { in_threat=1; next } in_attack && /version:/ { gsub("version: ", "") printf "\"attack\":\"%s\",\n", $1 in_attack=0 } in_bot && /version:/ { gsub("version: ", "") printf "\"bot\":\"%s\",\n", $1 in_bot=0 } in_threat && /version:/ { gsub("version: ", "") printf "\"threat\":\"%s\"\n", $1 in_threat=0 } END { print "}" } ' } get_old_versions() { extract_versions_from_dump } # ===== GET & PRINT VERSIONS ===== get_versions() { step_log "about to get_new_versions" new_versions=$(get_new_versions) step_log "new_versions: $new_versions" step_log "after get_new_versions" step_log "about to get_old_versions" old_versions=$(get_old_versions) step_log "old_versions: $old_versions" step_log "after get_old_versions" } # ===== VERSION COMPARISON ===== compare_versions() { step_log "compare_versions start" attack_old=$(echo "$old_versions" | jq -r .attack) attack_new=$(echo "$new_versions" | jq -r .attack) bot_old=$(echo "$old_versions" | jq -r .bot) bot_new=$(echo "$new_versions" | jq -r .bot) threat_old=$(echo "$old_versions" | jq -r .threat) threat_new=$(echo "$new_versions" | jq -r .threat) attack_status=$([[ "$attack_old" != "$attack_new" ]] && echo "Updated" || echo "No Change") bot_status=$([[ "$bot_old" != "$bot_new" ]] && echo "Updated" || echo "No Change") threat_status=$([[ "$threat_old" != "$threat_new" ]] && echo "Updated" || echo "No Change") attack_class=$([[ "$attack_status" == "Updated" ]] && echo "warn" || echo "ok") bot_class=$([[ "$bot_status" == "Updated" ]] && echo "warn" || echo "ok") threat_class=$([[ "$threat_status" == "Updated" ]] && echo "warn" || echo "ok") echo "Attack:$attack_status Bot:$bot_status Threat:$threat_status" > "$WORKDIR/status_flags.txt" [[ "$attack_status" == "Updated" ]] && attack_status_colored="${color_yellow}*** Updated ***${color_reset}" || attack_status_colored="${color_green}No Change${color_reset}" [[ "$bot_status" == "Updated" ]] && bot_status_colored="${color_yellow}*** Updated ***${color_reset}" || bot_status_colored="${color_green}No Change${color_reset}" [[ "$threat_status" == "Updated" ]] && threat_status_colored="${color_yellow}*** Updated ***${color_reset}" || threat_status_colored="${color_green}No Change${color_reset}" { echo -e 
"Version comparison for container \033[1mNAPv5\033[0m:\n" echo -e "Attack Signatures:" echo -e " Current Version: $attack_old" echo -e " New Version: $attack_new" echo -e " Status: $attack_status_colored\n" echo -e "Threat Campaigns:" echo -e " Current Version: $threat_old" echo -e " New Version: $threat_new" echo -e " Status: $threat_status_colored\n" echo -e "Bot Signatures:" echo -e " Current Version: $bot_old" echo -e " New Version: $bot_new" echo -e " Status: $bot_status_colored" } > "$WORKDIR/version_report.ansi" sed 's/\x1B\[[0-9;]*[mK]//g' "$WORKDIR/version_report.ansi" > "$WORKDIR/version_report.txt" step_log "Calling log_versions_syslog" log_versions_syslog "$attack_old" "$attack_new" "$attack_status" "$attack_class" \ "$bot_old" "$bot_new" "$bot_status" "$bot_class" \ "$threat_old" "$threat_new" "$threat_status" "$threat_class" step_log "compare_versions finished" } # ===== SYSLOG VERSION LOGGING and HTML REPORT GEN ===== log_versions_syslog() { # Args: # 1-attack_old 2-attack_new 3-attack_status 4-attack_class # 5-bot_old 6-bot_new 7-bot_status 8-bot_class # 9-threat_old 10-threat_new 11-threat_status 12-threat_class local msg msg="AttackSig (current: $1, latest: $2), BotSig (current: $5, latest: $6), ThreatCamp (current: $9, latest: $10)" /usr/bin/logger -t waf_policy_auto_compile "$msg" # Also print to terminal if VERBOSE is enabled if [ "$VERBOSE" -eq 1 ]; then echo "$msg" fi # Always (re)generate HTML for the mail at this point generate_html_report "$@" } # ===== RESPONSE ACTIONS ===== compile_all_policies() { log "Change detected β compiling all policies..." if [ "$VERBOSE" -eq 1 ]; then "$WORKDIR/compile_waf_policies.sh" else "$WORKDIR/compile_waf_policies.sh" > /dev/null 2>&1 fi } reload_nginx() { log "Reloading NGINX..." eval "$NGINX_RELOAD_CMD" } rotate_bundles() { log "Archiving new test bundle as old..." mv "$NEW_BUNDLE" "$OLD_BUNDLE" rm -f "$NEW_META" } send_report_email() { local html_report="$1" mail -s "$EMAIL_SUBJECT" -a "Content-Type: text/html" "$EMAIL_RECIPIENT" < "$html_report" } # ===== MAIN LOGIC ===== main() { build_compiler compile_test_policy check_old_bundle get_versions compare_versions if [[ "$VERBOSE" -eq 1 ]]; then cat "$WORKDIR/version_report.ansi" fi if grep -q "Updated" "$WORKDIR/status_flags.txt"; then if [[ "$VERBOSE" -eq 1 ]]; then echo "Detected updates. Recompiling policies, reloading NGINX, sending report." fi compile_all_policies reload_nginx rotate_bundles send_report_email "$HTML_REPORT" else log "No changes detected β nothing to do." fi log "Done." } main "$@" And should be it.206Views2likes4CommentsNGINX Plus Request body Rate Limit with the NJS module and javascript
The nginx njs module allows javascript to process the code or as they call it on the backend nodejs. The module is dynamic for Nginx Plus https://docs.nginx.com/nginx/admin-guide/dynamic-modules/dynamic-modules/ while for the community nginx it needs to be compiled. The code and nginx configuration are also present at: https://github.com/Nikoolayy1/nginx_njs_request_body_limit/tree/main I have used the example rate limiter from https://github.com/nginx/njs-examples and https://clouddocs.f5.com/training/community/nginx/html/class3/class3.html and modified rate limit example to be based on the request body. It works as expected. The "r.internalRedirect('@app-backend');" internal redirect is needed as nginx by default does not populate or save the request body and this is why the request needs to pass 2 times in nginx proxy for the body variable to be properly populated! The nginx plus rootless container is a great option for F5 XC RE where root containers are not accepted and for Nginx on XC RE I have made another article at F5 XC vk8s open source nginx deployment on RE | DevCentral NJS "main" file code: const defaultResponse = "0"; const user = 'username'; const pass = 'username'; function ratelimit(r) { switch (r.method) { case 'POST': var body = r.requestText; r.log(`body: ${body}`); if (r.headersIn['Content-Type'] != 'application/x-www-form-urlencoded' || !body.length) { r.internalRedirect('@app-backend'); return; } var result_user = body.includes(user); var result_pass = body.includes(pass); if (!result_user) { r.internalRedirect('@app-backend'); return; } const zone = r.variables['rl_zone_name']; const kv = zone && ngx.shared && ngx.shared[zone]; if (!kv) { r.log(`ratelimit: ${zone} js_shared_dict_zone not found`); r.internalRedirect('@app-backend'); return; } const key = r.variables['rl_key'] || r.variables['remote_addr']; const window = Number(r.variables['rl_windows_ms']) || 60000; const limit = Number(r.variables['rl_limit']) || 10; const now = Date.now(); let requestData = kv.get(key); if (requestData === undefined || requestData.length === 0) { requestData = { timestamp: now, count: 1 } kv.set(key, JSON.stringify(requestData)); r.internalRedirect('@app-backend'); return; } try { requestData = JSON.parse(requestData); } catch (e) { requestData = { timestamp: now, count: 1 } kv.set(key, JSON.stringify(requestData)); r.internalRedirect('@app-backend'); return; } if (!requestData) { requestData = { timestamp: now, count: 1 } kv.set(key, JSON.stringify(requestData)); r.internalRedirect('@app-backend'); return; } if (now - requestData.timestamp >= window) { requestData.timestamp = now; requestData.count = 1; } else { requestData.count++; } const elapsed = now - requestData.timestamp; r.log(`limit: ${limit} window: ${window} elapsed: ${elapsed} count: ${requestData.count} timestamp: ${requestData.timestamp}`) let retryAfter = 0; if (requestData.count > limit) { retryAfter = 1; } kv.set(key, JSON.stringify(requestData)); if (retryAfter) { r.return(401, "Unauthorized\n"); return; } default: r.internalRedirect('@app-backend'); return; } } export default {sub, header, ratelimit, parseRequestBody, log}; Nginx nginx.conf file: server { listen 80 default_server; server_name localhost; access_log /var/log/nginx/host.access.log main; js_var $rl_zone_name kv; # shared dict zone name; requred variable js_var $rl_windows_ms 30000; # optional window in miliseconds; default 1 minute window if not set js_var $rl_limit 3; # optional limit for the window; default 10 requests if not set js_var $rl_key 
$remote_addr; # rate limit key; default remote_addr if not set js_set $rl_result main.ratelimit; # call ratelimit function that returns retry-after value if limit is exceeded root /var/www/html; index index.html; include /etc/nginx/mime.types; error_log /var/log/nginx/host.error_log debug; if ($target) { return 401; } location / { js_content main.ratelimit; } location @app-backend { internal; proxy_pass http://backend; } location /backend { internal; proxy_set_header Host httpforever.com; proxy_pass http://backend/; } Summary: There is another example how to populate the internal request body variable using that is needed by the njs module using the " mirror " option, shown at https://www.f5.com/company/blog/nginx/deploying-nginx-plus-as-an-api-gateway-part-2-protecting-backend-services but it did not work for me, so I used the " internal " option with "r.internalRedirect(uri)" https://nginx.org/en/docs/njs/reference.html Nginx njs feature r.subrequest can be used to populate response headers and body but mainly it is for logging and not for rate limiting and I think making a real http subrequest using javascript is not optimal and will not scale well, so I will not recommend this option as rate limiters are best left to be request based. Also I saw strange bug that the subrequest changes the content type header of the response and I had use "js_header_filter" to again change the response header.Nginx App Protect has the BD process from F5 BIG-IP AWAF/ASM that has DOS protections that can monitor the Server's response latency dynamically and make auto thresholds!76Views1like0CommentsF5 BIG-IP and ENTRUST nShield HSM SSL key/cert auto synchronization between HA peers with iCall
Code version:
The code was tested on 15.1.8.

Main Article:
For more information about the RFS and the client agent I suggest reading the vendor's documentation:
https://nshielddocs.entrust.com/security-world-docs/v13.3/connect-ug-nix/intro.html

Useful F5 links for F5 and nShield integration for GTM and LTM:
https://my.f5.com/manage/s/article/K000135349
https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-system-and-nshield-hsm-implementation/setting-up-t...

The nShield architecture includes a component called the Remote File System (RFS) that stores and manages the encrypted key files. The RFS can be installed on the BIG-IP system or on another server on your network. Basically the HSM agent/client is installed on the F5 device's Linux host system, and the F5 devices are also the RFS servers. The RFS commands are documented below; when installed on the BIG-IP, the HSM agent and RFS commands are available for use:
https://nshielddocs.entrust.com/security-world-docs/utilities/rfs-sync.html

The issue I solved with the iCall script is that when you create a new HSM key for a BIG-IP HA pair, you must run the command "rfs-sync --update" on all standby BIG-IP devices (the devices where the cert/key were not created or changed) to update the local Thales encrypted file object cache. Without this action, SSL traffic using this key will fail when BIG-IP fails over to one of the unsynced standby devices. When you create the key and cert on the active F5 device, "rfs-sync --commit" and "rfs-sync --update" run automatically on it, but "rfs-sync --update" does not run on the standby devices. The iCall script is triggered on the standby devices when you run the normal config sync: it matches an event called "HA_EVENT", configured in the custom alerts file (user_alert.conf), and runs the full command with the path "/opt/nfast/bin/rfs-sync --update" to check if there was an update on the RFS.

I suggest reading the links below that explain iCall (one is from JRahm) and the HA logs; the last one is mine, from before I learned proper article formatting, and it also shows how to run scripts not only with iCall but also on HA events and so on.

Run tcpdump on event | DevCentral
What is iCall? | DevCentral
https://my.f5.com/manage/s/article/K34291400
https://my.f5.com/manage/s/article/K3727
https://my.f5.com/manage/s/article/K11127
Knowledge sharing: Ways to trigger and schedule scripts on the F5 BIG-IP devices. | DevCentral

tmsh list sys icall

sys icall handler triggered ha-handler {
    script ha-script
    subscriptions {
        ha-subscription {
            event-name HA_EVENT
        }
    }
}
sys icall script ha-script {
    app-service none
    definition {
        exec /bin/bash -c "logger -p local0.notice 'yes'"
        exec /bin/bash -c "/opt/nfast/bin/rfs-sync --update"
    }
    description none
    events none
}

cat /config/user_alert.conf

alert HA_EVENT "(.*)Sync of device group(.*)" {
    snmptrap OID=".1.3.6.1.4.1.3375.2.4.0.500"
}

You can use tmsh::log "yes" to log to /var/log/ltm, as shown in: iCall script to validate Virtual Server and node with same IP addresses

Testing:
This can be tested even without an HSM (I don't have one at home) by using "logger -p" to inject the logs. I have added "yes" in the logs as a test.

logger -p local0.notice "010714a0:5: Sync of device group /Common/Failover"

cat /var/log/ltm
............
Jul 1 08:34:07 bigip1.com notice root[23762]: yes
Jul 1 08:34:53 bigip1.com notice root[23940]: 010714a0:5: Sync of device group /Common/Failover

Using a Linux bash script:
In some versions you can trigger a bash or sh script with advanced logic inside from the iCall script, for example /bin/sh -c "/var/tmp/ha_script", but I saw issues on 17.1.x triggering a bash script from iCall that in previous versions I solved by adding "HOME=/root <linux bash command>". In "What is iCall" it is shown that you can do "if else" or "for" loops inside the iCall script, but I find it easier to use bash for advanced logic. The good thing is that, as with everything F5, there is usually more than one way to do things, and in this case you can actually trigger a bash script straight from the log messages in user_alert.conf!

What is iCall? | DevCentral
iCall script triggers error need ${HOME} to run | DevCentral
Running a command or custom script based on a syslog message

cat /var/tmp/ha_script

#!/bin/bash
logger -p local0.notice "yes"
/opt/nfast/bin/rfs-sync --update

cat /config/user_alert.conf

alert HA_EVENT "(.*)Sync of device group(.*)" {
    exec command="/var/tmp/ha_script"
}

Extra Notes:
Whether to use /opt/nfast/bin/rfs-sync --update or just rfs-sync --update in the iCall script depends in some cases on the version. In the release notes I saw a new bug, https://cdn.f5.com/product/bugtracker/ID1429897.html, solved in the latest 17.1.x versions, where if the RFS is on the BIG-IP, 'rfs-sync -c' also needs to be run on the F5 device that created the key/cert after they are created. The 'rfs-sync -c' can be automated the same way I have shown. My iCall script also works for BIG-IPs that use an external RFS: after the key/cert are created and committed from an F5 device (usually the active one), an HA config sync needs to be started, which will trigger 'rfs-sync -u' on the other F5 devices. Another nice option, if you are using something like Ansible, is to trigger the RFS update command on all F5 devices in a cluster, as F5 supports bash commands through the API, not only the CLI.

Example:
curl -sku admin:XXX https://XXXX/mgmt/tm/util/bash -H "Content-Type: application/json" -X POST -d '{"command":"run", "utilCmdArgs":"-c \"/opt/nfast/bin/rfs-sync --update\""}'

https://clouddocs.f5.com/products/orchestration/ansible/devel/f5_bigip/modules_2_0/bigip_command_mod...

SSH Brute Force Protection with SSH Proxy
This script is designed to add brute force protection for SSH services on a virtual server. It makes use of the Protocol Security feature called SSH Proxy which makes it possible to look into the ssh session and see the actual logon results. Documentation for setting up SSH Proxy can be found here: https://techdocs.f5.com/en-us/bigip-14-1-0/big-ip-network-firewall-policies-and-implementations-14-1-0/ssh-proxy-security.html The script basically looks in the ltm log for these lines: Jun 8 11:40:57 f5-01 info tmm[12656]: 23003164 "Jun 08 2025 11:40:57","ssh_serverside_auth_fail","10.1.0.2","10.10.9.64","49862","22","4092","TCP","root","Password authentication failure" and counts the number of failed attempts for each IP. When a threshold has been reached that IP goes into a AFM address-list. This address-list needs to be part of a rule and policy, and attached to the SSH VS. This way be can block access for that source. The name of the address-list is specified in a variable in the script, so you can adjust it easily for your needs. You should use iCall to run the script periodically: sys icall script ssh_bf { app-service none definition { exec /shared/scripts/ban_failed_ssh.sh > /var/log/ssh_log.log } description none events none } sys icall handler periodic ssh_bf { interval 300 script ssh_bf } Make sure that the location of the script matches the exec statement in the iCall script, and that the script is executable. If you are having an HA, you also need to make sure you have it on both units. You can consume the above config by running this command: tmsh load sys config merge from-terminal and simply paste it in and on an empty line hit CTRL-d. Next you need to have a logging profile configured with these settings: It is important you have that particular publisher set as it is sending the failures to the ltm log. You should be aware of the limitations of logging locally before you make use of this solution. If it is a very busy unit the disk might not be happy with some extra writing. In a perfect world you could log these failures to a remote logging server, but then you need to have the counting logic running elsewhere and have the script use the API to ingest the IPs. Or make use of a IP Intelligence feed which can load the IPs from a webserver. This is however out of scope for this solution, but I might make a follow up article on this if time allows it. With the AFM policy on the SSH VS and the logging profile attached, you now only need to save this script to your favorite script location on the BigIP: #!/bin/bash # # ban_failed_ssh.sh - Detect and ban IPs with repeated SSH login failures from /var/log/ltm # # This script uses AFM address lists and maintains a state file with banned IPs and expiry timestamps. # Author: Thomas Domingo Dahlmann # # --- Configuration --- LOG_FILE="/var/log/ltm" STATE_FILE="/var/tmp/ssh_ban_state.csv" AFM_ADDR_LIST="/Common/ssh_banned_ips" DUMMY_IP="192.0.2.254" BAN_THRESHOLD=3 BAN_WINDOW_SECONDS=300 # 5 minutes BAN_DURATION_SECONDS=1800 # 1 hour TMP_LOG="/var/tmp/ssh_ban_tmp.log" DEBUG=1 # Set to 1 for verbose debug logging, 0 to disable # --- Logging helper --- log_debug() { [[ "$DEBUG" -eq 1 ]] && echo "[DEBUG] $(date '+%F %T') - $*" >&2 } # --- Ensure state file and dummy IP exists --- ensure_state_file() { touch "$STATE_FILE" if ! 
grep -q "^$DUMMY_IP," "$STATE_FILE"; then echo "$DUMMY_IP,9999999999" >> "$STATE_FILE" log_debug "Adding dummy IP $DUMMY_IP to state file and AFM list" tmsh modify security firewall address-list "$AFM_ADDR_LIST" { addresses add { "$DUMMY_IP" } } 2>/dev/null else log_debug "Dummy IP $DUMMY_IP already present in state" fi } # --- Remove expired IPs from state and AFM --- cleanup_expired_bans() { local now now=$(date +%s) local new_state=() log_debug "Cleaning up expired bans at epoch time $now" while IFS=',' read -r ip expiry; do [[ -z "$ip" || -z "$expiry" ]] && continue if [[ "$ip" == "$DUMMY_IP" ]]; then new_state+=("$ip,9999999999") continue fi if (( expiry > now )); then log_debug "Keeping active ban for $ip (expires at $expiry)" new_state+=("$ip,$expiry") else log_debug "Unbanning expired IP $ip (expired at $expiry)" tmsh modify security firewall address-list "$AFM_ADDR_LIST" { addresses delete { "$ip" } } 2>/dev/null fi done < "$STATE_FILE" printf "%s\n" "${new_state[@]}" > "$STATE_FILE" log_debug "State file rewritten with ${#new_state[@]} active entries" } # --- Add or update ban for IP --- ban_ip() { local ip=$1 local expiry=$(( $(date +%s) + BAN_DURATION_SECONDS )) log_debug "Initiating ban for IP $ip until $(date -d "@$expiry") [$expiry]" # Remove old entry grep -v "^$ip," "$STATE_FILE" > "${STATE_FILE}.tmp" && mv "${STATE_FILE}.tmp" "$STATE_FILE" log_debug "Removed previous entry for $ip (if any)" # Add new ban echo "$ip,$expiry" >> "$STATE_FILE" log_debug "Added $ip,$expiry to $STATE_FILE" # Add to AFM list tmsh modify security firewall address-list "$AFM_ADDR_LIST" { addresses add { "$ip" } } 2>/dev/null log_debug "Added IP $ip to AFM address list $AFM_ADDR_LIST" } # --- Check recent log entries for violations --- check_for_new_violations() { local now start_epoch now=$(date +%s) start_epoch=$(( now - BAN_WINDOW_SECONDS )) log_debug "Checking for new violations between $start_epoch and $now" > "$TMP_LOG" awk -v threshold="$start_epoch" ' match($0, /"([A-Z][a-z]{2} [ 0-9][0-9] [0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2})","ssh_.*auth_fail","([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)"/, m) { cmd = "date -d \"" m[1] "\" +%s" cmd | getline epoch close(cmd) if (epoch >= threshold) { print epoch, m[2] } } ' "$LOG_FILE" > "$TMP_LOG" log_debug "Parsed entries written to $TMP_LOG:" [[ "$DEBUG" -eq 1 ]] && cat "$TMP_LOG" >&2 awk '{count[$2]++} END { for (ip in count) if (count[ip] >= '"$BAN_THRESHOLD"') print ip }' "$TMP_LOG" | while read -r ip; do log_debug "Found IP with threshold violations: $ip" if grep -q "^$ip," "$STATE_FILE"; then log_debug "IP $ip already in state file, skipping ban" else ban_ip "$ip" fi done } # --- Main --- log_debug "==== SSH Ban Monitor Started ====" ensure_state_file cleanup_expired_bans check_for_new_violations log_debug "==== SSH Ban Monitor Finished ====" Happy hunting π74Views1like0CommentsAutomating F5 Licensing - without direct internet access
Hello DevCentral Community!

I'm excited to share a project I've been working on recently: **Automating F5 BIG-IP VE Licensing** without needing direct internet access!

The project covers:
Retrieving a dossier automatically via the iControl REST API.
Interacting with F5 licensing servers through proxies or offline.
Re-activating licenses post-upgrade using custom scripts.
Full Python 3 support (moving away from BigSuds/Python 2 limitations).

The idea is to help users who need to automate the licensing process, especially for secure or offline environments.

I'll be sharing:
Scripts
Use cases
Lessons learned
Tips for real-world deployments

If you're interested in automating your BIG-IP licensing process, feel free to follow along! Feedback, ideas, or collaboration is most welcome!

import requests
import json
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

class BigIPLicenseManager:
    def __init__(self, host, username, password, registration_key):
        self.host = host
        self.username = username
        self.password = password
        self.registration_key = registration_key
        self.base_url = f"https://{self.host}/mgmt/tm/sys/license"
        self.headers = {'Content-Type': 'application/json'}

    def get_dossier(self):
        payload = {
            "command": "install",
            "registrationKey": self.registration_key
        }
        response = requests.post(
            self.base_url,
            auth=(self.username, self.password),
            headers=self.headers,
            json=payload,
            verify=False
        )
        if response.status_code == 200:
            data = response.json()
            dossier = data.get('dossier')
            if dossier:
                print("[+] Dossier retrieved successfully.")
                return dossier
            else:
                print("[-] No dossier found in response.")
                return None
        else:
            print(f"[-] Failed to retrieve dossier: {response.text}")
            return None

    def install_license(self, license_text):
        payload = {
            "command": "install",
            "licenseText": license_text
        }
        response = requests.post(
            self.base_url,
            auth=(self.username, self.password),
            headers=self.headers,
            json=payload,
            verify=False
        )
        if response.status_code == 200:
            print("[+] License installed successfully.")
        else:
            print(f"[-] Failed to install license: {response.text}")

if __name__ == "__main__":
    # Define your BIG-IP credentials and registration key here
    bigip_host = "192.168.1.245"
    bigip_username = "admin"
    bigip_password = "admin"
    registration_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"

    manager = BigIPLicenseManager(
        bigip_host,
        bigip_username,
        bigip_password,
        registration_key
    )

    dossier = manager.get_dossier()

    if dossier:
        # Print the dossier to manually activate it via activate.f5.com
        print("\n[!] Submit the following dossier to F5 activation server:")
        print(dossier)

        # After getting the license text (offline or from a licensing server)
        license_text = input("\nPaste the license text here:\n")
        manager.install_license(license_text.strip())
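For a quick test without Python, the same dossier call can be made with curl. This mirrors the get_dossier() method above and uses the same placeholder host, credentials and registration key; jq is only used here to pull the dossier field out of the JSON response.

# Sketch: fetch the dossier with curl against the same iControl REST endpoint
curl -sk -u admin:admin -H "Content-Type: application/json" \
  -X POST https://192.168.1.245/mgmt/tm/sys/license \
  -d '{"command":"install","registrationKey":"AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"}' \
  | jq -r '.dossier'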
Bash script to check if you have a SIP profile attached to a Virtual Server (CVE-2023-22842)

Code is community submitted, community supported, and recognized as "Use At Your Own Risk".

Short Description
Generate a CSV report to know which virtual servers have a SIP profile attached.

Problem solved by this Code Snippet
This is to shorten the time to investigate for CVE-2023-22842, for example.

How to use this Code Snippet
On the bash prompt:
# vi sip_profile.sh
Paste the script (see and copy below)
Save the file :wq!
Change permissions:
# chmod 777 sip_profile.sh
Run the script:
# ./sip_profile.sh
The output will be a CSV file under /var/tmp/sip_profile_map_to_virtual.csv
To check the output:
[admin@bigip:Active:Standalone] ~ # cat /var/tmp/sip_profile_map_to_virtual.csv
Virtual Server, SIP Profile
VS_SIP,sip
VS_MRP,sip_mrp

Code Snippet Meta Information
Full Code Snippet

#!/bin/bash
# Write the CSV header
echo "Virtual Server, SIP Profile" > /var/tmp/sip_profile_map_to_virtual.csv
# Collect the names of all configured SIP profiles
profile_names=`tmsh list ltm profile sip one-line | awk -F" " '{print $4}'`
for x in ${profile_names}
do
    # Find every virtual server whose one-line config references this profile
    virtual_name=`tmsh list ltm virtual one-line | grep -w $x | awk -F" " '{print $3}'`
    if [ "${virtual_name}" != "" ]
    then
        for y in ${virtual_name}
        do
            echo "$y,$x" >> /var/tmp/sip_profile_map_to_virtual.csv
        done
    fi
done
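If you need the same report from several BIG-IPs, a small wrapper can collect the CSVs centrally. This is only a sketch: the host names are placeholders and it assumes an account that lands in a bash shell over SSH (for example root, where your security policy permits it).

for bigip in bigip-a.example.com bigip-b.example.com; do
    # Copy the script over, run it, then pull back the resulting CSV
    scp sip_profile.sh "root@${bigip}:/var/tmp/sip_profile.sh"
    ssh "root@${bigip}" 'bash /var/tmp/sip_profile.sh'
    scp "root@${bigip}:/var/tmp/sip_profile_map_to_virtual.csv" "./${bigip}_sip_profiles.csv"
done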