NGINX WAF
NGINX App Protect v5 Signature Notifications
When working with NAP (NGINX App Protect) you don't have an easy way of knowing when any of the signatures are updated. As an old BIG-IP guy I find that rather strange, since there you have built-in automatic updates and notifications. Unfortunately there aren't any APIs you can probe, which would have been the best way of doing it. Hopefully that will come one day. However, "friction" and "hard" will not keep me from finding a solution 😆

I have previously made a solution for NAPv4 and have been trying to get going on a NAPv5 version. The reason for the delay is the different way NAPv4 and NAPv5 are designed. Where NAPv4 is one module loaded in NGINX, NAPv5 is almost completely detached from NGINX (you still need to load a small module to get the traffic from NGINX to NAP) and only works with containers. NAPv5 has moved the signature "storage" from the actual host it is running on (e.g. an installed package) to the policy. The consequence is that finding a valid "source of truth" for the latest signature versions is not as simple as building a new image and seeing which versions got installed. There are very good reasons for this design that I will come back to later.

When you fire up NAPv5 you get three containers for the data plane (NGINX, waf-enforcer and waf-config-mgr) and one for the "control plane" (waf-compiler). For this solution the "control plane" is the useful one. It isn't really a control plane, but the name gives a nice picture of how it is detached from the actual processing of traffic. When you update your signatures you are actually doing it through the waf-compiler. The waf-compiler is a container hosting the actual signature databases, and every time a new version is released you need to rebuild this container, compile your policies into a new version, and reload NGINX. This is what I take advantage of when I look for signature updates. It has the upside that you only need the waf-compiler to get the information you need. My solution takes care of the entire process and makes sure that you are always running with the latest signatures.

Back to the reason why the split of functions is a very good thing. When you build a new version of the NGINX image and deploy it into production, NAP needs to compile the policies as they load. During the compilation NGINX is not moving any traffic! This becomes an annoying problem even when you have a low number of policies. I have installations where it takes 5 to 10 minutes from deployment of the new image until it starts moving traffic. That is a crazy long time when you are used to working with micro-services and expect everything to flip within seconds. If you have your NAPv4 hooked up to an NGINX Instance Manager (NIM) the problem is somewhat mitigated, as NIM will compile the policies before sending them to the gateways. NIM is not a nimble piece of software, though, so it doesn't always fit into the environment.

And now here is my hack to the notification problem: The solution consists of two bash scripts and one HTML template. The template is used when sending a notification mail. I wanted it to be pretty and that was easiest with HTML; strictly speaking you could do with just a simple text-based mail. Save all three in the same directory. The main script is called "waf_policy_auto_compile.sh" and is the one you put into crontab. The main script will build a new waf-compiler image and compile a test policy. The outcome of that is information about what versions are the newest.
It will then extract the versions from an old policy and simply see if any of the versions differ. For this to work you need to have an uncompiled policy (you can just use the default one) and a compiled version of it ready beforehand. When a diff has been identified, the notification logic is executed and a second script is called: "compile_waf_policies.sh". It basically just trawls through the directory of your policies and logging profiles and compiles a new version of them all. It is not necessary to recompile the logging profiles, so this will probably change in the next version. As the compilation completes, the main script will nudge NGINX to reload, thus implementing all the new versions.

You can run "waf_policy_auto_compile.sh" with a verbose flag (-v) and a debug flag (-d). The verbose flag is intended to be used when you run it on a terminal and want the information displayed there. Debug is, well, for debug 😝

The construction of the scripts is based on my own needs, but they should be easy to adjust for any need. I will be happy for any feedback, so please don't hold back 😄

version_report_template.html:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>WAF Policy Version Report</title>
<style>
  body { font-family: system-ui, sans-serif; }
  .ok { color: #28a745; font-weight: bold; }
  .warn { color: #f0ad4e; font-weight: bold; }
  .section { margin-bottom: 1.2em; }
  .label { font-weight: bold; }
</style>
</head>
<body>
<h2>WAF Policy Version Report</h2>
<div class="section">
  <div class="label">Attack Signatures:</div>
  <div>Current: <span>{{ATTACK_OLD}}</span></div>
  <div>New: <span>{{ATTACK_NEW}}</span></div>
  <div>Status: <span class="{{ATTACK_CLASS}}">{{ATTACK_STATUS}}</span></div>
</div>
<div class="section">
  <div class="label">Bot Signatures:</div>
  <div>Current: <span>{{BOT_OLD}}</span></div>
  <div>New: <span>{{BOT_NEW}}</span></div>
  <div>Status: <span class="{{BOT_CLASS}}">{{BOT_STATUS}}</span></div>
</div>
<div class="section">
  <div class="label">Threat Campaigns:</div>
  <div>Current: <span>{{THREAT_OLD}}</span></div>
  <div>New: <span>{{THREAT_NEW}}</span></div>
  <div>Status: <span class="{{THREAT_CLASS}}">{{THREAT_STATUS}}</span></div>
</div>
<div>Run completed: {{RUN_DATETIME}}</div>
</body>
</html>

compile_waf_policies.sh:

#!/bin/bash
# ==============================================================================
# Script Name: compile_waf_policies.sh
#
# Description:
#   Compiles:
#     1. WAF policy JSON files from the 'policies' directory
#     2. WAF logging JSON files from the 'logging' directory
#   using the 'waf-compiler-latest:custom' Docker image. Output goes to
#   '/opt/napv5/app_protect_etc_config' where NGINX and waf-config-mgr
#   can reach them.
#
# Requirements:
#   - Docker installed and accessible
#   - Docker image 'waf-compiler-latest:custom' present locally
#
# Usage:
#   ./compile_waf_policies.sh
# ==============================================================================

set -euo pipefail
IFS=$'\n\t'
SECONDS=0  # Track total execution time

# ========================
# CONFIGURABLE VARIABLES
# ========================
BASE_DIR="/root/napv5/waf-compiler"
OUTPUT_DIR="/opt/napv5/app_protect_etc_config"
POLICY_INPUT_DIR="$BASE_DIR/policies"
POLICY_OUTPUT_DIR="$OUTPUT_DIR"
LOGGING_INPUT_DIR="$BASE_DIR/logging"
LOGGING_OUTPUT_DIR="$OUTPUT_DIR"
GLOBAL_SETTINGS="$BASE_DIR/global_settings.json"
DOCKER_IMAGE="waf-compiler-latest:custom"

# ========================
# VALIDATION
# ========================
echo "🔧 Validating paths..."
[[ -d "$POLICY_INPUT_DIR" ]] || { echo "❌ Error: Policy input directory '$POLICY_INPUT_DIR' does not exist."; exit 1; }
[[ -f "$GLOBAL_SETTINGS" ]] || { echo "❌ Error: Global settings file '$GLOBAL_SETTINGS' not found."; exit 1; }
mkdir -p "$POLICY_OUTPUT_DIR"
mkdir -p "$LOGGING_OUTPUT_DIR"

# ========================
# POLICY COMPILATION
# ========================
echo "📦 Compiling WAF policies from: $POLICY_INPUT_DIR"
for POLICY_FILE in "$POLICY_INPUT_DIR"/*.json; do
  [[ -f "$POLICY_FILE" ]] || continue
  BASENAME=$(basename "$POLICY_FILE" .json)
  OUTPUT_FILE="$POLICY_OUTPUT_DIR/${BASENAME}.tgz"
  echo "⚙️ [Policy] Compiling $(basename "$POLICY_FILE") -> $(basename "$OUTPUT_FILE")"
  docker run --rm \
    -v "$POLICY_INPUT_DIR":"$POLICY_INPUT_DIR" \
    -v "$POLICY_OUTPUT_DIR":"$POLICY_OUTPUT_DIR" \
    -v "$(dirname "$GLOBAL_SETTINGS")":"$(dirname "$GLOBAL_SETTINGS")" \
    "$DOCKER_IMAGE" \
    -g "$GLOBAL_SETTINGS" \
    -p "$POLICY_FILE" \
    -o "$OUTPUT_FILE"
done

# ========================
# LOGGING COMPILATION
# ========================
echo "📝 Compiling WAF logging configs from: $LOGGING_INPUT_DIR"
if [[ -d "$LOGGING_INPUT_DIR" ]]; then
  for LOG_FILE in "$LOGGING_INPUT_DIR"/*.json; do
    [[ -f "$LOG_FILE" ]] || continue
    BASENAME=$(basename "$LOG_FILE" .json)
    OUTPUT_FILE="$LOGGING_OUTPUT_DIR/${BASENAME}.tgz"
    echo "⚙️ [Logging] Compiling $(basename "$LOG_FILE") -> $(basename "$OUTPUT_FILE")"
    docker run --rm \
      -v "$LOGGING_INPUT_DIR":"$LOGGING_INPUT_DIR" \
      -v "$LOGGING_OUTPUT_DIR":"$LOGGING_OUTPUT_DIR" \
      "$DOCKER_IMAGE" \
      -l "$LOG_FILE" \
      -o "$OUTPUT_FILE"
  done
else
  echo "⚠️ Skipping logging config compilation: directory '$LOGGING_INPUT_DIR' does not exist."
fi

# ========================
# COMPLETION MESSAGE
# ========================
RUNTIME=$SECONDS
printf "\n✅ Compilation complete.\n"
echo " - Policies output: $POLICY_OUTPUT_DIR"
echo " - Logging output: $LOGGING_OUTPUT_DIR"
echo
printf "⏱️ Total time taken: %02d minutes %02d seconds\n" $((RUNTIME / 60)) $((RUNTIME % 60))
echo

waf_policy_auto_compile.sh:

#!/bin/bash
###############################################################################
# waf_policy_auto_compile.sh
#
# - Only prints colorized summary output to terminal if -v/--verbose is used
# - Mails a styled HTML report using a template, substituting version numbers/status/colors
# - Debug output (step_log) only to syslog if -d/--debug is used
# - Otherwise: completely silent except for errors
# - All main blocks are modularized in functions
###############################################################################

set -euo pipefail
IFS=$'\n\t'

# ===== CONFIGURABLE VARIABLES =====
WORKROOT="/root/napv5"
WORKDIR="$WORKROOT/waf-compiler"
DOCKERFILE="$WORKDIR/Dockerfile"
BUNDLE_DIR="$WORKDIR/test"
NEW_BUNDLE="$BUNDLE_DIR/test_new.tgz"
OLD_BUNDLE="$BUNDLE_DIR/test_old.tgz"
NEW_META="$BUNDLE_DIR/test_new_meta.json"
COMPILER_IMAGE="waf-compiler-latest:custom"
EMAIL_RECIPIENT="example@example.com"
EMAIL_SUBJECT="WAF Compiler Update Notification"
NGINX_RELOAD_CMD="docker exec nginx-plus nginx -s reload"
HTML_TEMPLATE="$WORKDIR/version_report_template.html"
HTML_REPORT="$WORKDIR/version_report.html"
VERBOSE=0
DEBUG=0

# ===== DEBUG AND ERROR LOGGING =====
exec 2> >(tee -a /tmp/waf_policy_auto_compile_error.log | /usr/bin/logger -t waf_policy_auto_compile_error)

step_log() {
  if [ "$DEBUG" -eq 1 ]; then
    echo "DEBUG: $1" | /usr/bin/logger -t waf_policy_auto_compile
  fi
}

# ===== ARGUMENT PARSING =====
while [[ $# -gt 0 ]]; do
  case "$1" in
    -v|--verbose)
      VERBOSE=1
      shift
      ;;
    -d|--debug)
      DEBUG=1
      echo "Debug log can be found in the syslog..."
      shift
      ;;
    -*)
      echo "Unknown option: $1" >&2
      exit 1
      ;;
    *)
      shift
      ;;
  esac
done

# ----- LOG INITIAL ENVIRONMENT IF DEBUG -----
step_log "waf_policy_auto_compile starting (PID $$)"
step_log "Script PATH: $PATH"
step_log "which docker: $(which docker 2>/dev/null)"
step_log "which jq: $(which jq 2>/dev/null)"

# ===== COLOR DEFINITIONS =====
color_reset="\033[0m"
color_green="\033[1;32m"
color_yellow="\033[1;33m"

# ===== LOGGING FUNCTIONS =====
log() {
  # Only log to terminal if VERBOSE is enabled
  if [ "$VERBOSE" -eq 1 ]; then
    echo "[$(date --iso-8601=seconds)] $*"
  fi
}

# ===== HTML REPORT GENERATOR =====
generate_html_report() {
  local attack_old="$1"
  local attack_new="$2"
  local attack_status="$3"
  local attack_class="$4"
  local bot_old="$5"
  local bot_new="$6"
  local bot_status="$7"
  local bot_class="$8"
  local threat_old="$9"
  local threat_new="${10}"
  local threat_status="${11}"
  local threat_class="${12}"
  local datetime
  datetime=$(date --iso-8601=seconds)
  cp "$HTML_TEMPLATE" "$HTML_REPORT"
  sed -i "s|{{ATTACK_OLD}}|$attack_old|g" "$HTML_REPORT"
  sed -i "s|{{ATTACK_NEW}}|$attack_new|g" "$HTML_REPORT"
  sed -i "s|{{ATTACK_STATUS}}|$attack_status|g" "$HTML_REPORT"
  sed -i "s|{{ATTACK_CLASS}}|$attack_class|g" "$HTML_REPORT"
  sed -i "s|{{BOT_OLD}}|$bot_old|g" "$HTML_REPORT"
  sed -i "s|{{BOT_NEW}}|$bot_new|g" "$HTML_REPORT"
  sed -i "s|{{BOT_STATUS}}|$bot_status|g" "$HTML_REPORT"
  sed -i "s|{{BOT_CLASS}}|$bot_class|g" "$HTML_REPORT"
  sed -i "s|{{THREAT_OLD}}|$threat_old|g" "$HTML_REPORT"
  sed -i "s|{{THREAT_NEW}}|$threat_new|g" "$HTML_REPORT"
  sed -i "s|{{THREAT_STATUS}}|$threat_status|g" "$HTML_REPORT"
  sed -i "s|{{THREAT_CLASS}}|$threat_class|g" "$HTML_REPORT"
  sed -i "s|{{RUN_DATETIME}}|$datetime|g" "$HTML_REPORT"
}

# ===== BUILD COMPILER IMAGE =====
build_compiler() {
  step_log "about to build_compiler"
  docker build --no-cache --platform linux/amd64 \
    --secret id=nginx-crt,src="$WORKROOT/nginx-repo.crt" \
    --secret id=nginx-key,src="$WORKROOT/nginx-repo.key" \
    -t "$COMPILER_IMAGE" \
    -f "$DOCKERFILE" "$WORKDIR" > "$WORKDIR/waf_compiler_build.log" 2>&1 || {
      echo "ERROR: docker build failed. Dumping build log:" | /usr/bin/logger -t waf_policy_auto_compile_error
      cat "$WORKDIR/waf_compiler_build.log" | /usr/bin/logger -t waf_policy_auto_compile_error
      exit 1
    }
  step_log "after build_compiler"
}

# ===== COMPILE TEST POLICY =====
compile_test_policy() {
  step_log "about to compile_test_policy"
  docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
    -p /bundle/test.json -o /bundle/test_new.tgz > "$NEW_META"
  step_log "after compile_test_policy"
  if [ -f "$NEW_META" ]; then
    step_log "$(cat "$NEW_META")"
  else
    step_log "NEW_META does not exist"
  fi
}

# ===== CHECK OLD_BUNDLE =====
check_old_bundle() {
  step_log "about to check OLD_BUNDLE"
  if [ -f "$OLD_BUNDLE" ]; then
    step_log "$(ls -l "$OLD_BUNDLE")"
  else
    step_log "OLD_BUNDLE does not exist"
  fi
}

# ===== GET NEW VERSIONS FUNCTION =====
get_new_versions() {
  jq -r '
  {
    "attack": .attack_signatures_package.version,
    "bot": .bot_signatures_package.version,
    "threat": .threat_campaigns_package.version
  }' "$NEW_META"
}

# ===== VERSION EXTRACTION FROM OLD BUNDLE =====
extract_bundle_versions() {
  docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
    -dump -bundle "/bundle/test_old.tgz"
}

extract_versions_from_dump() {
  extract_bundle_versions | awk '
    BEGIN { print "{" }
    /attack-signatures:/ { in_attack=1; next }
    /bot-signatures:/ { in_bot=1; next }
    /threat-campaigns:/ { in_threat=1; next }
    in_attack && /version:/ {
      gsub("version: ", "")
      printf "\"attack\":\"%s\",\n", $1
      in_attack=0
    }
    in_bot && /version:/ {
      gsub("version: ", "")
      printf "\"bot\":\"%s\",\n", $1
      in_bot=0
    }
    in_threat && /version:/ {
      gsub("version: ", "")
      printf "\"threat\":\"%s\"\n", $1
      in_threat=0
    }
    END { print "}" }
  '
}

get_old_versions() {
  extract_versions_from_dump
}

# ===== GET & PRINT VERSIONS =====
get_versions() {
  step_log "about to get_new_versions"
  new_versions=$(get_new_versions)
  step_log "new_versions: $new_versions"
  step_log "after get_new_versions"
  step_log "about to get_old_versions"
  old_versions=$(get_old_versions)
  step_log "old_versions: $old_versions"
  step_log "after get_old_versions"
}

# ===== VERSION COMPARISON =====
compare_versions() {
  step_log "compare_versions start"
  attack_old=$(echo "$old_versions" | jq -r .attack)
  attack_new=$(echo "$new_versions" | jq -r .attack)
  bot_old=$(echo "$old_versions" | jq -r .bot)
  bot_new=$(echo "$new_versions" | jq -r .bot)
  threat_old=$(echo "$old_versions" | jq -r .threat)
  threat_new=$(echo "$new_versions" | jq -r .threat)

  attack_status=$([[ "$attack_old" != "$attack_new" ]] && echo "Updated" || echo "No Change")
  bot_status=$([[ "$bot_old" != "$bot_new" ]] && echo "Updated" || echo "No Change")
  threat_status=$([[ "$threat_old" != "$threat_new" ]] && echo "Updated" || echo "No Change")

  attack_class=$([[ "$attack_status" == "Updated" ]] && echo "warn" || echo "ok")
  bot_class=$([[ "$bot_status" == "Updated" ]] && echo "warn" || echo "ok")
  threat_class=$([[ "$threat_status" == "Updated" ]] && echo "warn" || echo "ok")

  echo "Attack:$attack_status Bot:$bot_status Threat:$threat_status" > "$WORKDIR/status_flags.txt"

  [[ "$attack_status" == "Updated" ]] && attack_status_colored="${color_yellow}*** Updated ***${color_reset}" || attack_status_colored="${color_green}No Change${color_reset}"
  [[ "$bot_status" == "Updated" ]] && bot_status_colored="${color_yellow}*** Updated ***${color_reset}" || bot_status_colored="${color_green}No Change${color_reset}"
  [[ "$threat_status" == "Updated" ]] && threat_status_colored="${color_yellow}*** Updated ***${color_reset}" || threat_status_colored="${color_green}No Change${color_reset}"

  {
    echo -e "Version comparison for container \033[1mNAPv5\033[0m:\n"
    echo -e "Attack Signatures:"
    echo -e " Current Version: $attack_old"
    echo -e " New Version: $attack_new"
    echo -e " Status: $attack_status_colored\n"
    echo -e "Threat Campaigns:"
    echo -e " Current Version: $threat_old"
    echo -e " New Version: $threat_new"
    echo -e " Status: $threat_status_colored\n"
    echo -e "Bot Signatures:"
    echo -e " Current Version: $bot_old"
    echo -e " New Version: $bot_new"
    echo -e " Status: $bot_status_colored"
  } > "$WORKDIR/version_report.ansi"

  sed 's/\x1B\[[0-9;]*[mK]//g' "$WORKDIR/version_report.ansi" > "$WORKDIR/version_report.txt"

  step_log "Calling log_versions_syslog"
  log_versions_syslog "$attack_old" "$attack_new" "$attack_status" "$attack_class" \
    "$bot_old" "$bot_new" "$bot_status" "$bot_class" \
    "$threat_old" "$threat_new" "$threat_status" "$threat_class"
  step_log "compare_versions finished"
}

# ===== SYSLOG VERSION LOGGING and HTML REPORT GEN =====
log_versions_syslog() {
  # Args:
  #  1-attack_old  2-attack_new  3-attack_status  4-attack_class
  #  5-bot_old     6-bot_new     7-bot_status     8-bot_class
  #  9-threat_old 10-threat_new 11-threat_status 12-threat_class
  local msg
  msg="AttackSig (current: $1, latest: $2), BotSig (current: $5, latest: $6), ThreatCamp (current: $9, latest: ${10})"
  /usr/bin/logger -t waf_policy_auto_compile "$msg"
  # Also print to terminal if VERBOSE is enabled
  if [ "$VERBOSE" -eq 1 ]; then
    echo "$msg"
  fi
  # Always (re)generate HTML for the mail at this point
  generate_html_report "$@"
}

# ===== RESPONSE ACTIONS =====
compile_all_policies() {
  log "Change detected – compiling all policies..."
  if [ "$VERBOSE" -eq 1 ]; then
    "$WORKDIR/compile_waf_policies.sh"
  else
    "$WORKDIR/compile_waf_policies.sh" > /dev/null 2>&1
  fi
}

reload_nginx() {
  log "Reloading NGINX..."
  eval "$NGINX_RELOAD_CMD"
}

rotate_bundles() {
  log "Archiving new test bundle as old..."
  mv "$NEW_BUNDLE" "$OLD_BUNDLE"
  rm -f "$NEW_META"
}

send_report_email() {
  local html_report="$1"
  mail -s "$EMAIL_SUBJECT" -a "Content-Type: text/html" "$EMAIL_RECIPIENT" < "$html_report"
}

# ===== MAIN LOGIC =====
main() {
  build_compiler
  compile_test_policy
  check_old_bundle
  get_versions
  compare_versions

  if [[ "$VERBOSE" -eq 1 ]]; then
    cat "$WORKDIR/version_report.ansi"
  fi

  if grep -q "Updated" "$WORKDIR/status_flags.txt"; then
    if [[ "$VERBOSE" -eq 1 ]]; then
      echo "Detected updates. Recompiling policies, reloading NGINX, sending report."
    fi
    compile_all_policies
    reload_nginx
    rotate_bundles
    send_report_email "$HTML_REPORT"
  else
    log "No changes detected – nothing to do."
  fi

  log "Done."
}

main "$@"

And that should be it.
F5 NGINX One Console July features

Introduction
We are very excited to announce the new set of F5 NGINX One Console features that were released in July:
• F5 NGINX App Protect WAF Policy Orchestration
• F5 NGINX Ingress Controller Fleet Visibility

The NGINX One Console is a central management service in the F5 Distributed Cloud that makes it easier to deploy, manage, monitor, and operate NGINX. It is available to all NGINX and F5 Distributed Cloud subscribers and is included as a part of their subscription. If you have any questions on how to get access, please reach out to me.

Workshops
Additionally, we are offering complimentary workshops on how to use the NGINX One Console in August and September (North American time zones). These include a presentation, hands-on labs, and demos with live support and guidance.
• July 30 - F5 Test Drive Labs - F5 NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
• Aug 19 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1511189742199
• Sept 9 - F5 Test Drive Labs - NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
• Sept 10 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1393597239859?aff=oddtdtcreator

NGINX App Protect WAF Policy Orchestration
You can now centrally manage NGINX App Protect WAF in the NGINX One Console. Easily create and edit WAF policies in the NGINX One Console, compare changes, and publish to instances and configuration sync groups. Both App Protect v4 and v5 are supported! Find the guides on how to secure F5 NGINX with NGINX App Protect and the NGINX One Console here: https://docs.nginx.com/nginx-one/nap-integration/

NGINX Ingress Controller Fleet Visibility
NGINX Ingress Controller deployments can now be monitored in the NGINX One Console. See all versions of NGINX Ingress Controller deployments, what underlying instances make up the control pods, and see the underlying NGINX configuration. Coming later this quarter: CVE monitoring for NGINX Ingress Controller and F5 NGINX Gateway Fabric inclusion. For details, see how you can connect Ingress Controller to NGINX One Console: https://docs.nginx.com/nginx-one/k8s/add-nic/

Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

The NGINX Impact in F5’s Application Delivery & Security Platform
NGINX One Console is part of F5’s Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability. It is used for applications deployed across cloud, hybrid, and edge architectures. The NGINX One Console is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX’s capabilities. NGINX One brings together the features of NGINX Plus, NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. As a cornerstone of the NGINX One package, NGINX One Console extends the capabilities of open-source NGINX. It adds features designed specifically for enterprise-grade performance, scalability, and security.
Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

Mitigating OWASP Web Application Security Top 10 risks using F5 NGINX App Protect
The OWASP Web Application Security Top 10 outlines the most critical security risks to web applications, serving as a global standard for understanding and mitigating vulnerabilities. Based on data from over 500,000 real-world applications, the list highlights prevalent security issues. The 2021 edition introduces new categories such as "Insecure Design" and "Software and Data Integrity Failures", emphasizing secure design principles and proactive security throughout the software development lifecycle. For more information please visit: OWASP Web Application Security Top 10 - 2021

F5 products provide controls to secure applications against these risks. F5 NGINX App Protect offers security controls using both positive and negative security models to protect applications from OWASP Top 10 risks. The positive security model combines validated user sessions, user input, and application response, while the negative security model uses attack signatures to detect and block OWASP Top 10 application security threats. This guide outlines how to implement effective protection based on the specific needs of your application.

Note - The OWASP Web Application Security Top 10 risks listed below are tested on both F5 NGINX App Protect versions 4.x and 5.x.

A01:2021-Broken Access Control
Problem statement: As the risk name suggests, Broken Access Control refers to failures in access control mechanisms that lead to a vulnerable application. In this demonstration, the application is susceptible to "Directory/Path Traversal" via the URL, which allows unauthorized access to sensitive information stored on the server.
Solution: F5 NGINX App Protect WAF (Web Application Firewall) offers an inherent solution to the "Directory/Path Traversal" vulnerability discussed, through its "app_protect_default_policy" bundle. This policy, which is active by default when App Protect is enabled in the NGINX configuration, helps prevent Directory/Path Traversal attacks by validating the values provided to the "page" key in the URL. The attack request is recorded in the security log, indicating that the attack type is Predictable Resource Location, Path Traversal. The request was blocked, and the signatures responsible for detecting the attack are also visible.
Note: The security log shown in the image below is not the default log configuration but has been customized by following the instructions provided in the link.

A02:2021-Cryptographic Failures
Problem statement: Earlier this attack was known as "Sensitive Data Exposure", focusing on cryptographic failures that often result in the exposure of sensitive data. The "Juice Shop" demo application, as demonstrated below, is vulnerable to sensitive information disclosure due to the insecure storage of data, which is displayed in plain text to end users.
Solution: F5 NGINX App Protect WAF provides a best-in-class "Data Guard" policy, which can block as well as mask (based on policy configuration) sensitive information displayed to end users. After applying the policy to mask the sensitive data, the information that was previously visible (Fig. 2.1) is now masked. The attack request is recorded in the security log, indicating that the dataguard_mask policy was triggered and the request was alerted.
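For reference, Data Guard is switched on in the App Protect policy itself. The JSON below is a minimal sketch rather than the exact policy used in this demonstration: the policy name is made up, and the list of masked data types should be checked against the NGINX App Protect WAF policy reference for your version.

{
    "policy": {
        "name": "dataguard_mask_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking",
        "data-guard": {
            "enabled": true,
            "maskData": true,
            "creditCardNumbers": true,
            "usSocialSecurityNumbers": true
        }
    }
}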
Fig. 2.4 – Request captured in NGINX App Protect security log

A03:2021-Injection
Problem statement: An injection vulnerability arises when an application fails to properly handle user-supplied data, sending it to an interpreter (e.g., a database or operating system) as part of a query or command. Without proper validation, filtering, or sanitization, attackers can inject malicious code, leading to unauthorized access, data breaches, privilege escalation, or system compromise. For example, the DVWA demo application shown below lacks input validation, making it vulnerable to SQL injection attacks that can compromise confidential data.
Solution: F5 NGINX App Protect WAF has a robust set of attack signatures which are pre-bundled in the default policy. The SQL injection vulnerability discussed above can be prevented by enabling App Protect, which has around 1000+ signatures related to a variety of injection attacks. The attack request is recorded in the security log, indicating that the attack type is SQL-Injection. The request was blocked, and the signatures responsible for detecting the attack are also visible.

A04:2021-Insecure Design
Problem statement: The growing reliance on web applications exposes them to security risks, with insecure design being a key concern. For example, a retail chain’s e-commerce website lacks protection against bots used by scalpers to buy high-end video cards in bulk for resale. This causes negative publicity and frustrates genuine customers. Implementing anti-bot measures and domain logic rules can help block fraudulent transactions, with F5 NGINX App Protect providing effective protection against such attacks.
Solution: Secure design is an ongoing process that continuously evaluates threats, ensures robust code, and integrates threat modeling into development. It involves constant validation, accurate flow analysis, and thorough documentation. By using F5 NGINX App Protect WAF, which includes bot defense, web applications can effectively prevent bot-driven attacks, identifying and blocking them early to protect against fraudulent transactions. The attack request is recorded in the security log, indicating that the attack type is Non-browser Client. The request was blocked, with the violation "VIOL_BOT_CLIENT".
Note: The security log shown in the image below is the default log configuration.
Request captured in NGINX App Protect security log

A05:2021-Security Misconfiguration
Problem statement: Security misconfiguration occurs when security settings are improperly configured, exposing web applications to various threats. One such vulnerability is Cross-Site Request Forgery (CSRF), where attackers trick authenticated users into making unauthorized requests. Without proper protection mechanisms, attackers can exploit this misconfiguration to perform malicious actions on behalf of the user. The demonstration using WebGoat below shows how an improperly configured application becomes vulnerable to CSRF, allowing attackers to carry out unauthorized actions. Execute the above malicious script by copying the file path and pasting it into a new tab of the authenticated WebGoat browser session. The script will automatically load the malicious code and redirect to the vulnerable page.
Solution: F5 NGINX App Protect WAF provides comprehensive protection against CSRF attacks. Users can configure the CSRF policy based on their requirements by following the configuration settings here. In this demonstration, the default CSRF policy is used to block the attack.
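As an illustration, CSRF protection is also driven by the policy. The snippet below is a minimal sketch with an assumed policy name; the csrf-protection section (and the optional csrf-urls list for per-URL enforcement) should be validated against the NGINX App Protect WAF configuration guide before use.

{
    "policy": {
        "name": "csrf_blocking_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking",
        "csrf-protection": {
            "enabled": true
        }
    }
}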
Default CSRF policy used to block CSRF attacks
The security log captures the attack request, identifying the type of attack as CSRF. The request was successfully blocked, and the violation "CSRF attack detected" is also visible.

A06:2021-Vulnerable and Outdated Components
Problem statement: The Vulnerable and Outdated Components risk arises when a web application uses third-party libraries or software with known security vulnerabilities that are not updated. Additionally, vulnerable pages like "phpmyadmin.php" that expose sensitive details—such as application versions, user credentials, and database information—further increase the risk. Attackers can use this information to exploit known vulnerabilities or gain unauthorized access, leading to potential data breaches or system compromise.
Solution: The vulnerability discussed above can be mitigated using F5 NGINX App Protect WAF attack signatures, which include specific signature IDs for various vulnerabilities. These signature IDs can be incorporated into the policy file to block attacks. For instance, signature ID 200000014 can be used to block access to the phpmyadmin.php page. Attack signatures can be found here. The attack request is recorded in the security log, indicating that the attack type is Predictable Resource Location. The request was blocked, and the signatures responsible for detecting the "/phpmyadmin/ page" attack are also visible.

A07:2021-Identification and Authentication Failures
Problem statement: Effective authentication and secure session management are crucial in preventing authentication-related vulnerabilities in daily tasks. Applications with weak authentication mechanisms are vulnerable to automated attacks, such as credential stuffing, where attackers use wordlists to perform spray attacks, allowing attackers to determine whether specific credentials are valid and thus increasing the risk of brute-force and other automated attacks. Brute force attacks are attempts to break into secured areas of a web application by trying exhaustive, systematic username/password combinations to discover legitimate authentication credentials.
Solution: To prevent brute force attacks, F5 NGINX App Protect WAF monitors IP addresses, usernames, and the number of failed login attempts beyond a maximum threshold. When brute force patterns are detected, the F5 NGINX App Protect WAF policy either triggers an alarm or blocks the attack once the failed login attempts reach the maximum threshold for a specific username or from a specific IP address.
Note – Brute force attack prevention is supported starting from versions v4.13 and v5.5.
The security log captures the attack request, identifying the type of attack as Brute Force Attack. The request was successfully blocked, and the "VIOL_BRUTE_FORCE" violation is also visible.

A08:2021-Software and Data Integrity Failures
Problem statement: Added as a new entry in the OWASP Top 10 2021, software and data integrity failures, particularly in the context of insecure deserialization, occur when an application deserializes untrusted data without proper validation or security checks. This vulnerability allows attackers to modify or inject malicious data into the deserialization process, potentially leading to remote code execution, privilege escalation, or data manipulation. In this demonstration, a serialized PHP command O:18:"PHPObjectInjection":1:{s:6:"inject";s:18:"system ('ps -ef');";} is passed in the URL to retrieve the running processes.
Solution: F5 NGINX App Protect WAF can prevent Serialization Injection PHP attacks by leveraging its default policy bundle, which includes an extensive set of signatures specifically designed to address deserialization vulnerabilities. The security log captures the attack request, identifying the type of attack. The request was successfully blocked, and the signatures used to detect the 'PHP Short Object Serialization Injection' attack are also visible.

A09:2021-Security Logging and Monitoring Failures
Problem statement: Security logging and monitoring failures occur when critical application activities such as logins, transactions, and user actions are not adequately logged or monitored. This lack of visibility makes it difficult to detect and respond to security breaches, attack attempts, or suspicious user behavior. Without proper logging and monitoring, attackers can exploit vulnerabilities without detection, potentially leading to data loss, revenue impact, or reputational damage. Insufficient logging also hinders the ability to escalate and mitigate security incidents effectively, making the application more vulnerable to exploitation.
Solution: F5 NGINX App Protect WAF provides different options to track logging details of applications for end-to-end visibility of every request, from both a security and a performance perspective. Users can change configurations as per their requirements and can also configure different logging mechanisms with different levels. Check the links below for more details on logging:
Version 4 and earlier
Version 5

A10:2021-Server-Side Request Forgery
Problem statement: Server-Side Request Forgery (SSRF) occurs when a web application fetches a remote resource without properly validating the user-supplied URL. This vulnerability allows attackers to manipulate the application into sending malicious requests to internal systems or external resources, bypassing security measures like firewalls or VPNs. SSRF attacks can expose sensitive internal data or resources that are not meant to be publicly accessible, making them a significant security risk, especially with modern cloud architectures. In this demonstration, patient health records, which should be accessible only within the network, can be retrieved publicly through SSRF.
Solution: Server-Side Request Forgery (SSRF) attacks can be prevented by utilizing the default policy bundle of F5 NGINX App Protect WAF, which includes a comprehensive set of signatures designed to detect and mitigate SSRF vulnerabilities. By enabling App Protect, you gain strong defense against SSRF attacks as well as other prevalent security threats, thanks to the default policy's pre-configured signatures that cover a wide range of attack vectors. The security log captures the attack request, identifying the type of attack. The request was successfully blocked, and the signatures used to detect the 'SSRF' attack are also visible.
Request captured in NGINX App Protect security log

Conclusion: Protecting applications from attacks is simple with F5 NGINX App Protect WAF, a high-performance, lightweight, and platform-agnostic solution that supports diverse deployment options, from edge load balancers to Kubernetes clusters. By leveraging its advanced security controls, organizations can effectively mitigate the OWASP Web Application Security Top 10 risks, ensuring robust protection across distributed architectures and hybrid environments.
Ultimately, F5 NGINX App Protect helps strengthen overall security, providing comprehensive defense for modern applications.

References:
F5 NGINX App Protect WAF
OWASP Top 10 - 2021
F5 NGINX App Protect WAF Documentation
F5 Attack Signatures

What is Message Queue Telemetry Transport (MQTT)? How to secure MQTT?
MQTT is a messaging protocol broadly used in IoT and connected services; it is very lightweight and reliable even over poor-quality networks. It is designed to be lightweight so it can run on constrained devices, but even in its latest version, MQTTv5, the attack surface is very large.

Mitigate OWASP LLM Security Risk: Sensitive Information Disclosure Using F5 NGINX App Protect
This short WAF security article covers the critical security gaps present in current generative AI applications, emphasizing the urgent need for robust protection measures in LLM deployments. It also demonstrates how F5 NGINX App Protect v5 offers an effective solution to mitigate the OWASP LLM Top 10 risks.

Global Live Webinar (10/16): Unlocking WAF and API Security with NGINX App Protect
Unlocking WAF and API Security with NGINX App Protect
This webinar event is open to all F5 users regardless of geographic location.

Date: Wednesday, October 16, 2024
Time: 10:00am PT | 1:00pm ET
Speakers: Navpreet Gill - Senior Product Marketing Manager, F5 NGINX; Aviv Dahan - Senior Product Manager, F5 NGINX

What's the webinar about?
APIs are pivotal in today’s digital landscape, driving innovation and efficiency by enabling seamless interactions between services and platforms. However, this increased reliance on APIs has expanded the attack surface, exposing vulnerabilities that traditional security measures struggle to defend against. Join our webinar to discover how F5 NGINX App Protect v5.0 offers consistent, enterprise-grade security for APIs and services with tailored visibility, control, and protection across on-premises, cloud, and hybrid environments, all while ensuring zero impact on the data plane through the separation of the control plane.

What you'll learn:
• Defend diverse services, AI technologies, platforms, and applications in cloud environments, including container-based orchestrations, with extended support beyond K8s and K3s.
• Enhance security with tailored visibility and control, ensuring zero impact on the data plane through the separation of the control plane.
• Leverage lightweight, high-performance WAF, DoS, and API security.
• Watch a demo showcasing how NGINX App Protect secures services using APIs, including its deployment on NIC and NAPaaS on Azure.

Register Today, Click Here
Note: If you can't make the live webinar (10/16), please still register and we will send you the link to the on-demand recording after the session.

Dashboards for NGINX App Protect
Introduction
NGINX App Protect is a new-generation WAF from F5 built in accordance with the UNIX philosophy: it does one thing well, and everything else comes from integrations. NGINX App Protect is extremely good at HTTP traffic security. It inherits a powerful WAF engine from BIG-IP, and a light footprint and high performance from NGINX. NGINX App Protect therefore brings fine-grained security to all kinds of insertion points where NGINX is used, whether on-premises or cloud-based. That makes NGINX App Protect a powerful and flexible security tool, but one without any visualization capabilities, which are essential for a good security product. As mentioned above, everything besides the primary functionality comes from integrations.

In order to introduce visualization capabilities, I've developed an integration between NGINX App Protect and the ELK stack (Elasticsearch, Logstash, Kibana), one of the most widely adopted stacks for log collection and visualization. Based on logs from NGINX App Protect, ELK generates dashboards to clearly visualize what the WAF is doing.

Overview Dashboard
Currently, there are two dashboards available. The "Overview" dashboard provides a high-level view of the current situation and also lets you discover historical trends. You are free to select any time period of interest and filter data simply by clicking on values right on the dashboard. The table at the bottom of the dashboard lists all requests within the time frame and lets you see exactly what a request looked like.

False Positives Dashboard
Another useful dashboard, called "False Positives", helps to identify false positives and adjust the WAF policy based on the findings. For example, the chart below shows the number of unique IPs that hit a signature. Under normal conditions, when traffic is mostly legitimate, the "per signature" graphs should fluctuate around zero because legitimate users are not supposed to hit any of the signatures. Therefore, if there is a spike and the number of unique IPs that hit a signature is close to the total number of sources, there is likely a false positive and the policy needs to be adjusted.

Conclusion
This is an open-source and community-driven project. The more people contribute, the better it becomes. Feel free to use it for your projects and contribute code or ideas directly to the repo. The plan is to make these dashboards suitable for all kinds of F5 WAF flavors, including AWAF and EAP. This should be simple because it only requires a Logstash pipeline adjustment to unify the log format stored in the Elasticsearch index. If you have a project for AWAF or EAP going on and would like to use the dashboards, please feel free to develop and create a pull request with an adjusted Logstash pipeline that normalizes logs from other WAFs.

GitHub repo: https://github.com/464d41/f5-waf-elk-dashboards
Feel free to reach out to me with questions. Good luck!

Implementing BIG-IP WAF logging and visibility with ELK
Scope
This technical article is useful for BIG-IP users familiar with web application security and the implementation and use of the Elastic Stack. This includes application security professionals, infrastructure management operators and SecDevOps/DevSecOps practitioners. The focus is exclusively on WAF logs. Firewall, Bot, or DoS mitigation logging into the Elastic Stack is the subject of a future article.

Introduction
This article focuses on the configuration required to send Web Application Firewall (WAF) logs from the BIG-IP Advanced WAF (or BIG-IP ASM) module to an Elastic Stack (a.k.a. Elasticsearch-Logstash-Kibana or ELK). First, the article goes over the configuration of the BIG-IP. It is configured with a security policy and a logging profile attached to the virtual server that is being protected. This can be configured via the BIG-IP user interface (TMUI) or through the BIG-IP declarative interface (AS3). The configuration of the Elastic Stack is discussed next, including the configuration of filters adapted to processing BIG-IP WAF logs. Finally, the article provides some initial guidance on the metrics that can be taken into consideration for visibility. It discusses the use of dashboards and provides some recommendations with regards to the potentially useful visualizations.

Pre-requisites and Initial Premise
For the purposes of this article and to follow the steps outlined below, the user will need to have at least one BIG-IP Adv. WAF running TMOS version 15.1 or above (note that this may work with previous versions but has not been tested). The target BIG-IP is already configured with:
A virtual server
A WAF policy
An operational Elastic Stack is also required. The administrator will need to have configuration and administrative privileges on both the BIG-IP and the Elastic Stack infrastructure. They will also need to be familiar with the network topology linking the BIG-IP with the Elasticsearch cluster/infrastructure. It is assumed that you want to use your Elastic Stack (ELK) logging infrastructure to gain visibility into BIG-IP WAF events.

Logging Profile Configuration
An essential part of getting WAF logs to the proper destination(s) is the Logging Profile. The following goes over the configuration of the Logging Profile that sends data to the Elastic Stack.
Overview of the steps:
1. Create Logging Profile
2. Associate Logging Profile with the Virtual Server

After following the procedure below, log lines sent from the BIG-IP are, on the wire, comma-separated key/value pairs that look something like the sample below:

Aug 25 03:07:19 localhost.localdomain ASM:unit_hostname="bigip1",management_ip_address="192.168.41.200",management_ip_address_2="N/A",http_class_name="/Common/log_to_elk_policy",web_application_name="/Common/log_to_elk_policy",policy_name="/Common/log_to_elk_policy",policy_apply_date="2020-08-10 06:50:39",violations="HTTP protocol compliance failed",support_id="5666478231990524056",request_status="blocked",response_code="0",ip_client="10.43.0.86",route_domain="0",method="GET",protocol="HTTP",query_string="name='",x_forwarded_for_header_value="N/A",sig_ids="N/A",sig_names="N/A",date_time="2020-08-25 03:07:19",severity="Error",attack_type="Non-browser Client,HTTP Parser Attack",geo_location="N/A",ip_address_intelligence="N/A",username="N/A",session_id="0",src_port="39348",dest_port="80",dest_ip="10.43.0.201",sub_violations="HTTP protocol compliance failed:Bad HTTP version",virus_name="N/A",violation_rating="5",websocket_direction="N/A",websocket_message_type="N/A",device_id="N/A",staged_sig_ids="",staged_sig_names="",threat_campaign_names="N/A",staged_threat_campaign_names="N/A",blocking_exception_reason="N/A",captcha_result="not_received",microservice="N/A",tap_event_id="N/A",tap_vid="N/A",vs_name="/Common/adv_waf_vs",sig_cves="N/A",staged_sig_cves="N/A",uri="/random",fragment="",request="GET /random?name=' or 1 = 1' HTTP/1.1\r\n",response="Response logging disabled"

Please choose one of the methods below. The configuration can be done through the web-based user interface (TMUI), the command line interface (TMSH), directly with a declarative AS3 REST API call, or with the BIG-IP native REST API. This last option is not discussed herein.

TMUI Steps:
Create Profile
1. Connect to the BIG-IP web UI and log in with administrative rights
2. Navigate to Security >> Event Logs >> Logging Profiles
3. Select "Create"
4. Fill out the configuration fields as follows:
   - Profile Name (mandatory)
   - Enable Application Security
   - Set Storage Destination to Remote Storage
   - Set Logging Format to Key-Value Pairs (Splunk)
   - In the Server Addresses field, enter an IP Address and Port then click on Add as shown below:
5. Click on Create

Add Logging Profile to virtual server with the policy
1. Select the target virtual server and click on the Security tab (Local Traffic >> Virtual Servers : Virtual Server List >> [target virtual server])
2. Highlight the Log Profile from the Available column and put it in the Selected column as shown in the example below (log profile is "log_all_to_elk")
3. Click on Update

At this time the BIG-IP will forward logs to the Elastic Stack.

TMSH Steps:
Create profile
1. SSH into the BIG-IP command line interface (CLI)
2. From the tmsh prompt enter the following:

create security log profile [name_of_profile] application add { [name_of_profile] { logger-type remote remote-storage splunk servers add { [IP_address_for_ELK]:[TCP_Port_for_ELK] { } } } }

For example:

create security log profile dc_show_creation_elk application add { dc_show_creation_elk { logger-type remote remote-storage splunk servers add { 10.45.0.79:5244 { } } } }

3. Ensure that the changes are saved:

save sys config partitions all

Add Logging Profile to virtual server with the policy
1. From the tmsh prompt (assuming you are still logged in), enter the following:

modify ltm virtual [VS_name] security-log-profiles add { [name_of_profile] }

For example:

modify ltm virtual adv_waf_vs security-log-profiles add { dc_show_creation_elk }

2. Ensure that the changes are saved:

save sys config partitions all

At this time the BIG-IP sends logs to the Elastic Stack.

AS3
Application Services 3 (AS3) is a BIG-IP configuration API endpoint that allows the user to create an application from the ground up. For more information on F5’s AS3, refer to link. In order to attach a security policy to a virtual server, the AS3 declaration can either refer to a policy present on the BIG-IP or refer to a policy stored in XML format and available via HTTP to the BIG-IP (ref. link). The logging profile can be created and associated to the virtual server directly as part of the AS3 declaration. For more information on the creation of a WAF logging profile, refer to the documentation found here.

The following is an example of a part of an AS3 declaration that creates a security log profile that can be used to log to the Elastic Stack:

"secLogRemote": {
  "class": "Security_Log_Profile",
  "application": {
    "localStorage": false,
    "maxEntryLength": "10k",
    "protocol": "tcp",
    "remoteStorage": "splunk",
    "reportAnomaliesEnabled": true,
    "servers": [
      {
        "address": "10.45.0.79",
        "port": "5244"
      }
    ]
  }
}

In the sample above, the ELK stack IP address is 10.45.0.79 and it listens on port 5244 for BIG-IP WAF logs. Note that the log format used in this instance is "Splunk". There are no declared filters and thus only the illegal requests will get logged to the Elastic Stack. A sample AS3 declaration can be found here.

ELK Configuration
The Elastic Stack configuration consists of creating a new input on Logstash. This is achieved by adding an input/filter/output configuration to the Logstash configuration file. Optionally, the Logstash administrator might want to create a separate pipeline – for more information, refer to this link.
The following is a Logstash configuration known to work with WAF logs coming from BIG-IP:

input {
  syslog {
    port => 5244
  }
}
filter {
  grok {
    match => {
      "message" => [
        "attack_type=\"%{DATA:attack_type}\"",
        ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
        ",date_time=\"%{DATA:date_time}\"",
        ",dest_port=\"%{DATA:dest_port}\"",
        ",ip_client=\"%{DATA:ip_client}\"",
        ",is_truncated=\"%{DATA:is_truncated}\"",
        ",method=\"%{DATA:method}\"",
        ",policy_name=\"%{DATA:policy_name}\"",
        ",protocol=\"%{DATA:protocol}\"",
        ",request_status=\"%{DATA:request_status}\"",
        ",response_code=\"%{DATA:response_code}\"",
        ",severity=\"%{DATA:severity}\"",
        ",sig_cves=\"%{DATA:sig_cves}\"",
        ",sig_ids=\"%{DATA:sig_ids}\"",
        ",sig_names=\"%{DATA:sig_names}\"",
        ",sig_set_names=\"%{DATA:sig_set_names}\"",
        ",src_port=\"%{DATA:src_port}\"",
        ",sub_violations=\"%{DATA:sub_violations}\"",
        ",support_id=\"%{DATA:support_id}\"",
        "unit_hostname=\"%{DATA:unit_hostname}\"",
        ",uri=\"%{DATA:uri}\"",
        ",violation_rating=\"%{DATA:violation_rating}\"",
        ",vs_name=\"%{DATA:vs_name}\"",
        ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\"",
        ",outcome=\"%{DATA:outcome}\"",
        ",outcome_reason=\"%{DATA:outcome_reason}\"",
        ",violations=\"%{DATA:violations}\"",
        ",violation_details=\"%{DATA:violation_details}\"",
        ",request=\"%{DATA:request}\""
      ]
    }
    break_on_match => false
  }
  mutate {
    split => { "attack_type" => "," }
    split => { "sig_ids" => "," }
    split => { "sig_names" => "," }
    split => { "sig_cves" => "," }
    split => { "staged_sig_ids" => "," }
    split => { "staged_sig_names" => "," }
    split => { "staged_sig_cves" => "," }
    split => { "sig_set_names" => "," }
    split => { "threat_campaign_names" => "," }
    split => { "staged_threat_campaign_names" => "," }
    split => { "violations" => "," }
    split => { "sub_violations" => "," }
  }
  if [x_forwarded_for_header_value] != "N/A" {
    mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}" } }
  } else {
    mutate { add_field => { "source_host" => "%{ip_client}" } }
  }
  geoip {
    source => "source_host"
  }
}
output {
  elasticsearch {
    hosts => ['localhost:9200']
    index => "big_ip-waf-logs-%{+YYY.MM.dd}"
  }
}

After adding the configuration above to the Logstash parameters, you will need to restart the Logstash instance to take the new configuration into account. The sample above is also available here.

The Elastic Stack is now ready to process the incoming logs. You can start sending traffic to your policy and start seeing logs populating the Elastic Stack. If you are looking for a test tool to generate traffic to your virtual server, F5 provides a simple WAF tester tool that can be found here.

At this point, you can start creating dashboards on the Elastic Stack that will satisfy your operational needs with the following overall steps:
· Ensure that the log index is being created (Stack Management >> Index Management)
· Create a Kibana Index Pattern (Stack Management >> Index patterns)
· You can now peruse the logs from the Kibana discover menu (Discover)
· And start creating visualizations that will be included in your dashboards (Dashboards >> Editing Simple WAF Dashboard)

A complete Elastic Stack configuration can be found here – note that this can be used with both BIG-IP WAF and NGINX App Protect.

Conclusion
You can now leverage the widely available Elastic Stack to log and visualize BIG-IP WAF logs.
From a dashboard perspective, it may be useful to track the following metrics (a sample query for one of them is sketched after this list):
- Request rate
- Response codes
- The distribution of requests in terms of clean, blocked, or alerted status
- The top talkers making requests
- The top URLs being accessed
- Top violator source IPs

An example dashboard might look like the following: [example dashboard screenshot]
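As a starting point for the "top talkers" and "top violator source IP" visualizations, the same data can be queried directly with a terms aggregation against the index created by the Logstash output above. The query below is a sketch: the index pattern matches the output section shown earlier, and the source_host.keyword field assumes Elasticsearch's default dynamic mappings, so adjust both to your environment.

curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/big_ip-waf-logs-*/_search?pretty' -d '{
    "size": 0,
    "aggs": {
      "top_violator_ips": {
        "terms": { "field": "source_host.keyword", "size": 10 }
      }
    }
  }'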