devops
NGINX App Protect v5 Signature Notifications
When working with NAP (NGINX App Protect) you don't have an easy way of knowing when any of the signatures are updated. As an old BIG-IP guy I find that rather strange: there you have built-in automatic updates and notifications. Unfortunately there aren't any APIs you can probe, which would have been the best way of doing it. Hopefully that will come one day. However, "friction" and "hard" will not keep me from finding a solution.

I have previously made a solution for NAPv4 and have been working my way mentally towards a NAPv5 version. The reason for the delay lies in the different ways NAPv4 and NAPv5 are designed. Where NAPv4 is one module loaded in NGINX, NAPv5 is completely detached from NGINX (well, almost: you still need to load a small module to get the traffic from NGINX to NAP) and only works with containers. NAPv5 has moved the signature "storage" from the actual host it is running on (e.g. an installed package) to the policy. The consequence is that finding a valid "source of truth" for the latest signature versions is not as simple as building a new image and seeing which versions got installed. There are very good reasons for this design that I will come back to later.

When you fire up NAPv5 you get three containers for the data plane (NGINX, waf-enforcer and waf-config-mgr) and one for the "control plane" (waf-compiler). For this solution the "control plane" is the useful one. It isn't really a control plane, but it gives a nice picture of how it is detached from the actual processing of traffic. When you update your signatures you are actually doing it through the waf-compiler. The waf-compiler is a container hosting the actual signature databases, and every time a new version is released you need to rebuild this container, compile your policies into a new version and reload NGINX. And this is what I take advantage of when I look for signature updates. It has the upside that you only need the waf-compiler to get the information you need. My solution will take care of the entire process and make sure that you are always running with the latest signatures.

Back to the reason why the split of functions is a very good thing. When you build a new version of the NGINX image and deploy it into production, NAP needs to compile the policies as they load. During the compilation NGINX is not moving any traffic! This becomes an annoying problem even when you have a low number of policies. I have installations where it takes 5 to 10 minutes from deployment of the new image until it starts moving traffic. That is a crazy long time when you are used to working with micro-services and expect everything to flip within seconds. If you have your NAPv4 hooked up to an NGINX Instance Manager (NIM) the problem is somewhat mitigated, as NIM will compile the policies before sending them to the gateways. But NIM is not a nimble piece of software, so it doesn't always fit into the environment.

And now here is my hack to the notification problem. The solution consists of two bash scripts and one HTML template. The template is used when sending a notification mail. I wanted it to be pretty and that was easiest with HTML; strictly speaking you could do with just a simple text-based mail. Save all three in the same directory. The main script is called "waf_policy_auto_compile.sh" and is the one you put into crontab. The main script will build a new waf-compiler image and compile a test policy. The outcome of that is information about what versions are the newest.
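To see what the compiler reports, you can run the same check by hand. A minimal sketch, assuming the image tag and test policy paths that the scripts below use:

# Compile a test policy; the compiler writes JSON metadata (package versions) to stdout
docker run --rm -v /root/napv5/waf-compiler/test:/bundle waf-compiler-latest:custom \
    -p /bundle/test.json -o /bundle/test_new.tgz > test_new_meta.json

# Pull the signature versions out of the metadata
jq '{attack: .attack_signatures_package.version, bot: .bot_signatures_package.version, threat: .threat_campaigns_package.version}' test_new_meta.json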
It will then extract versions from an old policy and simply see if any of the versions differ. For this to work you need to have an uncompiled policy (you can just use the default one) and a compiled version of it ready beforehand. When a diff has been identified the notification logic is executed and a second script is called: "compile_waf_policies.sh". It basically just trawls through the directory of your policies and logging profiles and compiles a new version of them all. It is not necessary to recompile the logging profiles, so this will probably change in the next version. As the compilation completes, the main script will nudge NGINX to reload, thus implementing all the new versions.

You can run "waf_policy_auto_compile.sh" with a verbose flag (-v) and a debug flag (-d). The verbose flag is intended to be used when you run it on a terminal and want the information displayed there. Debug is, well, for debug. The construction of the scripts is based on my own needs, but they should be easy to adjust for any need. I will be happy for any feedback, so please don't hold back!

version_report_template.html:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>WAF Policy Version Report</title>
  <style>
    body { font-family: system-ui, sans-serif; }
    .ok { color: #28a745; font-weight: bold; }
    .warn { color: #f0ad4e; font-weight: bold; }
    .section { margin-bottom: 1.2em; }
    .label { font-weight: bold; }
  </style>
</head>
<body>
  <h2>WAF Policy Version Report</h2>
  <div class="section">
    <div class="label">Attack Signatures:</div>
    <div>Current: <span>{{ATTACK_OLD}}</span></div>
    <div>New: <span>{{ATTACK_NEW}}</span></div>
    <div>Status: <span class="{{ATTACK_CLASS}}">{{ATTACK_STATUS}}</span></div>
  </div>
  <div class="section">
    <div class="label">Bot Signatures:</div>
    <div>Current: <span>{{BOT_OLD}}</span></div>
    <div>New: <span>{{BOT_NEW}}</span></div>
    <div>Status: <span class="{{BOT_CLASS}}">{{BOT_STATUS}}</span></div>
  </div>
  <div class="section">
    <div class="label">Threat Campaigns:</div>
    <div>Current: <span>{{THREAT_OLD}}</span></div>
    <div>New: <span>{{THREAT_NEW}}</span></div>
    <div>Status: <span class="{{THREAT_CLASS}}">{{THREAT_STATUS}}</span></div>
  </div>
  <div>Run completed: {{RUN_DATETIME}}</div>
</body>
</html>

compile_waf_policies.sh:

#!/bin/bash
# ==============================================================================
# Script Name: compile_waf_policies.sh
#
# Description:
#   Compiles:
#     1. WAF policy JSON files from the 'policies' directory
#     2. WAF logging JSON files from the 'logging' directory
#   using the 'waf-compiler-latest:custom' Docker image. Output goes to
#   '/opt/napv5/app_protect_etc_config' where NGINX and waf-config-mgr
#   can reach them.
#
# Requirements:
#   - Docker installed and accessible
#   - Docker image 'waf-compiler-latest:custom' present locally
#
# Usage:
#   ./compile_waf_policies.sh
# ==============================================================================

set -euo pipefail
IFS=$'\n\t'
SECONDS=0  # Track total execution time

# ========================
# CONFIGURABLE VARIABLES
# ========================
BASE_DIR="/root/napv5/waf-compiler"
OUTPUT_DIR="/opt/napv5/app_protect_etc_config"
POLICY_INPUT_DIR="$BASE_DIR/policies"
POLICY_OUTPUT_DIR="$OUTPUT_DIR"
LOGGING_INPUT_DIR="$BASE_DIR/logging"
LOGGING_OUTPUT_DIR="$OUTPUT_DIR"
GLOBAL_SETTINGS="$BASE_DIR/global_settings.json"
DOCKER_IMAGE="waf-compiler-latest:custom"

# ========================
# VALIDATION
# ========================
echo "Validating paths..."
[[ -d "$POLICY_INPUT_DIR" ]] || { echo "β Error: Policy input directory '$POLICY_INPUT_DIR' does not exist."; exit 1; } [[ -f "$GLOBAL_SETTINGS" ]] || { echo "β Error: Global settings file '$GLOBAL_SETTINGS' not found."; exit 1; } mkdir -p "$POLICY_OUTPUT_DIR" mkdir -p "$LOGGING_OUTPUT_DIR" # ======================== # POLICY COMPILATION # ======================== echo "π¦ Compiling WAF policies from: $POLICY_INPUT_DIR" for POLICY_FILE in "$POLICY_INPUT_DIR"/*.json; do [[ -f "$POLICY_FILE" ]] || continue BASENAME=$(basename "$POLICY_FILE" .json) OUTPUT_FILE="$POLICY_OUTPUT_DIR/${BASENAME}.tgz" echo "βοΈ [Policy] Compiling $(basename "$POLICY_FILE") -> $(basename "$OUTPUT_FILE")" docker run --rm \ -v "$POLICY_INPUT_DIR":"$POLICY_INPUT_DIR" \ -v "$POLICY_OUTPUT_DIR":"$POLICY_OUTPUT_DIR" \ -v "$(dirname "$GLOBAL_SETTINGS")":"$(dirname "$GLOBAL_SETTINGS")" \ "$DOCKER_IMAGE" \ -g "$GLOBAL_SETTINGS" \ -p "$POLICY_FILE" \ -o "$OUTPUT_FILE" done # ======================== # LOGGING COMPILATION # ======================== echo "π Compiling WAF logging configs from: $LOGGING_INPUT_DIR" if [[ -d "$LOGGING_INPUT_DIR" ]]; then for LOG_FILE in "$LOGGING_INPUT_DIR"/*.json; do [[ -f "$LOG_FILE" ]] || continue BASENAME=$(basename "$LOG_FILE" .json) OUTPUT_FILE="$LOGGING_OUTPUT_DIR/${BASENAME}.tgz" echo "βοΈ [Logging] Compiling $(basename "$LOG_FILE") -> $(basename "$OUTPUT_FILE")" docker run --rm \ -v "$LOGGING_INPUT_DIR":"$LOGGING_INPUT_DIR" \ -v "$LOGGING_OUTPUT_DIR":"$LOGGING_OUTPUT_DIR" \ "$DOCKER_IMAGE" \ -l "$LOG_FILE" \ -o "$OUTPUT_FILE" done else echo "β οΈ Skipping logging config compilation: directory '$LOGGING_INPUT_DIR' does not exist." fi # ======================== # COMPLETION MESSAGE # ======================== RUNTIME=$SECONDS printf "\nβ Compilation complete.\n" echo " - Policies output: $POLICY_OUTPUT_DIR" echo " - Logging output: $LOGGING_OUTPUT_DIR" echo printf "β±οΈ Total time taken: %02d minutes %02d seconds\n" $((RUNTIME / 60)) $((RUNTIME % 60)) echo waf_policy_auto_compile.sh: #!/bin/bash ############################################################################### # waf_policy_auto_compile.sh # # - Only prints colorized summary output to terminal if -v/--verbose is used # - Mails a styled HTML report using a template, substituting version numbers/status/colors # - Debug output (step_log) only to syslog if -d/--debug is used # - Otherwise: completely silent except for errors # - All main blocks are modularized in functions ############################################################################### set -euo pipefail IFS=$'\n\t' # ===== CONFIGURABLE VARIABLES ===== WORKROOT="/root/napv5" WORKDIR="$WORKROOT/waf-compiler" DOCKERFILE="$WORKDIR/Dockerfile" BUNDLE_DIR="$WORKDIR/test" NEW_BUNDLE="$BUNDLE_DIR/test_new.tgz" OLD_BUNDLE="$BUNDLE_DIR/test_old.tgz" NEW_META="$BUNDLE_DIR/test_new_meta.json" COMPILER_IMAGE="waf-compiler-latest:custom" EMAIL_RECIPIENT="example@example.com" EMAIL_SUBJECT="WAF Compiler Update Notification" NGINX_RELOAD_CMD="docker exec nginx-plus nginx -s reload" HTML_TEMPLATE="$WORKDIR/version_report_template.html" HTML_REPORT="$WORKDIR/version_report.html" VERBOSE=0 DEBUG=0 # ===== DEBUG AND ERROR LOGGING ===== exec 2> >(tee -a /tmp/waf_policy_auto_compile_error.log | /usr/bin/logger -t waf_policy_auto_compile_error) step_log() { if [ "$DEBUG" -eq 1 ]; then echo "DEBUG: $1" | /usr/bin/logger -t waf_policy_auto_compile fi } # ===== ARGUMENT PARSING ===== while [[ $# -gt 0 ]]; do case "$1" in -v|--verbose) VERBOSE=1 shift ;; -d|--debug) DEBUG=1 echo "Debug log can 
            shift
            ;;
        -*)
            echo "Unknown option: $1" >&2
            exit 1
            ;;
        *)
            shift
            ;;
    esac
done

# ----- LOG INITIAL ENVIRONMENT IF DEBUG -----
step_log "waf_policy_auto_compile starting (PID $$)"
step_log "Script PATH: $PATH"
step_log "which docker: $(which docker 2>/dev/null)"
step_log "which jq: $(which jq 2>/dev/null)"

# ===== COLOR DEFINITIONS =====
color_reset="\033[0m"
color_green="\033[1;32m"
color_yellow="\033[1;33m"

# ===== LOGGING FUNCTIONS =====
log() {
    # Only log to terminal if VERBOSE is enabled
    if [ "$VERBOSE" -eq 1 ]; then
        echo "[$(date --iso-8601=seconds)] $*"
    fi
}

# ===== HTML REPORT GENERATOR =====
generate_html_report() {
    local attack_old="$1"
    local attack_new="$2"
    local attack_status="$3"
    local attack_class="$4"
    local bot_old="$5"
    local bot_new="$6"
    local bot_status="$7"
    local bot_class="$8"
    local threat_old="$9"
    local threat_new="${10}"
    local threat_status="${11}"
    local threat_class="${12}"
    local datetime
    datetime=$(date --iso-8601=seconds)
    cp "$HTML_TEMPLATE" "$HTML_REPORT"
    sed -i "s|{{ATTACK_OLD}}|$attack_old|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_NEW}}|$attack_new|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_STATUS}}|$attack_status|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_CLASS}}|$attack_class|g" "$HTML_REPORT"
    sed -i "s|{{BOT_OLD}}|$bot_old|g" "$HTML_REPORT"
    sed -i "s|{{BOT_NEW}}|$bot_new|g" "$HTML_REPORT"
    sed -i "s|{{BOT_STATUS}}|$bot_status|g" "$HTML_REPORT"
    sed -i "s|{{BOT_CLASS}}|$bot_class|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_OLD}}|$threat_old|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_NEW}}|$threat_new|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_STATUS}}|$threat_status|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_CLASS}}|$threat_class|g" "$HTML_REPORT"
    sed -i "s|{{RUN_DATETIME}}|$datetime|g" "$HTML_REPORT"
}

# ===== BUILD COMPILER IMAGE =====
build_compiler() {
    step_log "about to build_compiler"
    docker build --no-cache --platform linux/amd64 \
        --secret id=nginx-crt,src="$WORKROOT/nginx-repo.crt" \
        --secret id=nginx-key,src="$WORKROOT/nginx-repo.key" \
        -t "$COMPILER_IMAGE" \
        -f "$DOCKERFILE" "$WORKDIR" > "$WORKDIR/waf_compiler_build.log" 2>&1 || {
            echo "ERROR: docker build failed. Dumping build log:" | /usr/bin/logger -t waf_policy_auto_compile_error
            cat "$WORKDIR/waf_compiler_build.log" | /usr/bin/logger -t waf_policy_auto_compile_error
            exit 1
        }
    step_log "after build_compiler"
}

# ===== COMPILE TEST POLICY =====
compile_test_policy() {
    step_log "about to compile_test_policy"
    docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
        -p /bundle/test.json -o /bundle/test_new.tgz > "$NEW_META"
    step_log "after compile_test_policy"
    if [ -f "$NEW_META" ]; then
        step_log "$(cat "$NEW_META")"
    else
        step_log "NEW_META does not exist"
    fi
}

# ===== CHECK OLD_BUNDLE =====
check_old_bundle() {
    step_log "about to check OLD_BUNDLE"
    if [ -f "$OLD_BUNDLE" ]; then
        step_log "$(ls -l "$OLD_BUNDLE")"
    else
        step_log "OLD_BUNDLE does not exist"
    fi
}

# ===== GET NEW VERSIONS FUNCTION =====
get_new_versions() {
    jq -r '
    {
        "attack": .attack_signatures_package.version,
        "bot": .bot_signatures_package.version,
        "threat": .threat_campaigns_package.version
    }' "$NEW_META"
}

# ===== VERSION EXTRACTION FROM OLD BUNDLE =====
extract_bundle_versions() {
    docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
        -dump -bundle "/bundle/test_old.tgz"
}

extract_versions_from_dump() {
    extract_bundle_versions | awk '
    BEGIN { print "{" }
    /attack-signatures:/ { in_attack=1; next }
    /bot-signatures:/ { in_bot=1; next }
    /threat-campaigns:/ { in_threat=1; next }
    in_attack && /version:/ {
        gsub("version: ", "")
        printf "\"attack\":\"%s\",\n", $1
        in_attack=0
    }
    in_bot && /version:/ {
        gsub("version: ", "")
        printf "\"bot\":\"%s\",\n", $1
        in_bot=0
    }
    in_threat && /version:/ {
        gsub("version: ", "")
        printf "\"threat\":\"%s\"\n", $1
        in_threat=0
    }
    END { print "}" }
    '
}

get_old_versions() {
    extract_versions_from_dump
}

# ===== GET & PRINT VERSIONS =====
get_versions() {
    step_log "about to get_new_versions"
    new_versions=$(get_new_versions)
    step_log "new_versions: $new_versions"
    step_log "after get_new_versions"
    step_log "about to get_old_versions"
    old_versions=$(get_old_versions)
    step_log "old_versions: $old_versions"
    step_log "after get_old_versions"
}

# ===== VERSION COMPARISON =====
compare_versions() {
    step_log "compare_versions start"
    attack_old=$(echo "$old_versions" | jq -r .attack)
    attack_new=$(echo "$new_versions" | jq -r .attack)
    bot_old=$(echo "$old_versions" | jq -r .bot)
    bot_new=$(echo "$new_versions" | jq -r .bot)
    threat_old=$(echo "$old_versions" | jq -r .threat)
    threat_new=$(echo "$new_versions" | jq -r .threat)

    attack_status=$([[ "$attack_old" != "$attack_new" ]] && echo "Updated" || echo "No Change")
    bot_status=$([[ "$bot_old" != "$bot_new" ]] && echo "Updated" || echo "No Change")
    threat_status=$([[ "$threat_old" != "$threat_new" ]] && echo "Updated" || echo "No Change")

    attack_class=$([[ "$attack_status" == "Updated" ]] && echo "warn" || echo "ok")
    bot_class=$([[ "$bot_status" == "Updated" ]] && echo "warn" || echo "ok")
    threat_class=$([[ "$threat_status" == "Updated" ]] && echo "warn" || echo "ok")

    echo "Attack:$attack_status Bot:$bot_status Threat:$threat_status" > "$WORKDIR/status_flags.txt"

    [[ "$attack_status" == "Updated" ]] && attack_status_colored="${color_yellow}*** Updated ***${color_reset}" || attack_status_colored="${color_green}No Change${color_reset}"
    [[ "$bot_status" == "Updated" ]] && bot_status_colored="${color_yellow}*** Updated ***${color_reset}" || bot_status_colored="${color_green}No Change${color_reset}"
    [[ "$threat_status" == "Updated" ]] && threat_status_colored="${color_yellow}*** Updated ***${color_reset}" || threat_status_colored="${color_green}No Change${color_reset}"

    {
"Version comparison for container \033[1mNAPv5\033[0m:\n" echo -e "Attack Signatures:" echo -e " Current Version: $attack_old" echo -e " New Version: $attack_new" echo -e " Status: $attack_status_colored\n" echo -e "Threat Campaigns:" echo -e " Current Version: $threat_old" echo -e " New Version: $threat_new" echo -e " Status: $threat_status_colored\n" echo -e "Bot Signatures:" echo -e " Current Version: $bot_old" echo -e " New Version: $bot_new" echo -e " Status: $bot_status_colored" } > "$WORKDIR/version_report.ansi" sed 's/\x1B\[[0-9;]*[mK]//g' "$WORKDIR/version_report.ansi" > "$WORKDIR/version_report.txt" step_log "Calling log_versions_syslog" log_versions_syslog "$attack_old" "$attack_new" "$attack_status" "$attack_class" \ "$bot_old" "$bot_new" "$bot_status" "$bot_class" \ "$threat_old" "$threat_new" "$threat_status" "$threat_class" step_log "compare_versions finished" } # ===== SYSLOG VERSION LOGGING and HTML REPORT GEN ===== log_versions_syslog() { # Args: # 1-attack_old 2-attack_new 3-attack_status 4-attack_class # 5-bot_old 6-bot_new 7-bot_status 8-bot_class # 9-threat_old 10-threat_new 11-threat_status 12-threat_class local msg msg="AttackSig (current: $1, latest: $2), BotSig (current: $5, latest: $6), ThreatCamp (current: $9, latest: $10)" /usr/bin/logger -t waf_policy_auto_compile "$msg" # Also print to terminal if VERBOSE is enabled if [ "$VERBOSE" -eq 1 ]; then echo "$msg" fi # Always (re)generate HTML for the mail at this point generate_html_report "$@" } # ===== RESPONSE ACTIONS ===== compile_all_policies() { log "Change detected β compiling all policies..." if [ "$VERBOSE" -eq 1 ]; then "$WORKDIR/compile_waf_policies.sh" else "$WORKDIR/compile_waf_policies.sh" > /dev/null 2>&1 fi } reload_nginx() { log "Reloading NGINX..." eval "$NGINX_RELOAD_CMD" } rotate_bundles() { log "Archiving new test bundle as old..." mv "$NEW_BUNDLE" "$OLD_BUNDLE" rm -f "$NEW_META" } send_report_email() { local html_report="$1" mail -s "$EMAIL_SUBJECT" -a "Content-Type: text/html" "$EMAIL_RECIPIENT" < "$html_report" } # ===== MAIN LOGIC ===== main() { build_compiler compile_test_policy check_old_bundle get_versions compare_versions if [[ "$VERBOSE" -eq 1 ]]; then cat "$WORKDIR/version_report.ansi" fi if grep -q "Updated" "$WORKDIR/status_flags.txt"; then if [[ "$VERBOSE" -eq 1 ]]; then echo "Detected updates. Recompiling policies, reloading NGINX, sending report." fi compile_all_policies reload_nginx rotate_bundles send_report_email "$HTML_REPORT" else log "No changes detected β nothing to do." fi log "Done." } main "$@" And should be it.40Views1like1CommentModify UCS Archive so it doesn't backup epsec images
Modify UCS Archive so it doesn't backup epsec images

Problem this snippet solves:
Currently, if you have APM installed, the UCS archive process also backs up the epsec images. I have written a bash script which modifies the UCS archive process so that it does not include these images, and which also modifies the bigip.conf that is archived so that it does not contain references to them. By default, APM has its own epsec image in /var/sam/images, so when your UCS archive is loaded onto a new system, or a rebuilt system, it will just use the default epsec image for that system. This means that if you have uploaded a new epsec image to fix an issue, you will need to ensure that this is done on any system you restore the UCS archive to.

How to use this snippet:
Just save the bash script to a file like /shared/bin/modify_ucs.sh, then run the script:

# sh /shared/bin/modify_ucs.sh

The script modifies /usr/libdata/configsync/cs.dat and creates two files, config_save_pre and config_save_post, in the same folder. It also creates a backup of cs.dat as cs.YYYY_MM_DD_HH_MM.bak. The /usr filesystem is mounted RO, so I remount it RW to do this.

To remove the changes:

mount -o remount,rw /usr
cd /usr/libdata/configsync/
mv -f `ls -1t cs.dat.[0-9][0-9][0-9][0-9]*.bak|head -1` cs.dat
rm -f config_save_p[or][se]*
mount -o remount,ro /usr

This modification does not survive an upgrade, so you will need to run the script again after any upgrade. If you are running a cron job to create a daily/weekly backup, you can just call this script before you run the tmsh save sys ucs command (see the sketch below), as it checks whether the modification has already been done.
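If you schedule your backups from cron, an entry along these lines would run the modification right before the save (illustrative; the schedule and UCS filename are examples):

0 2 * * 0 sh /shared/bin/modify_ucs.sh && tmsh save sys ucs /var/local/ucs/weekly_backup.ucs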
Ultimate irule debug - Capture and investigate

Problem this snippet solves:
I decided to share this iRule for different reasons. When I help our community on DevCentral, I regularly see people making recurring requests:

How do I capture the request headers?
How do I capture the response headers?
How do I check the information in the POST request?
How do I check the response data (body)?
Which cipher/protocol is used (SSL/TLS)?
I set up client certificate authentication but I do not know if it works and if I pass my certificate auth.
I want to retrieve information from my authentication certificate (subject, issuer, ...).
My certificate authentication does not work and I get an error; what do I have to do?
I have latency when dealing with my request; where does the latency come from (F5, server, ...)?
I set up SSO (Kerberos delegation, JSON POST, form SSO).
I do not feel that my request is sent to the backend (or the Kerberos token).
Does F5 add information or modify the request/response?
Which pool member has been selected?
My VS doesn't answer (where does the problem come from)?
...

Instead of having an iRule for each request, why not consolidate everything and provide one compact iRule? This iRule can help you greatly during your investigations and allows you to capture the items described below.

How to use this snippet:
You have an array that allows you to activate the desired logs (1 to activate and 0 to disable), as described below:

array set app_arrway_referer {
    client_dest_ip_port 1
    client_cert 1
    http_request 1
    http_request_release 1
    http_request_payload 0
    http_lb_selected 1
    http_response 0
    http_response_release 0
    http_response_payload 0
    http_time_process 0
}

The posted logs will be preceded by a UID which will allow you to follow your request/response from the beginning to the end of its processing. You can, for example, grep the log for the UID to follow the complete process (request/response). The UID is generated in the following way:

set uid [string range [AES::key 256] 15 23]

client_dest_ip_port: this section will allow you to see the source IP/port and destination IP/port.

<CLIENT_ACCEPTED>: ----------- client_dest_ip_port -----------
<CLIENT_ACCEPTED>: uid: 382951fe9 - Client IP Src: 10.20.30.4:60419
<CLIENT_ACCEPTED>: uid: 382951fe9 - Client IP Dest:192.168.30.45:443
<CLIENT_ACCEPTED>: ----------- client_dest_ip_port -----------

client_cert: this section will allow you to check the result code for peer certificate verification (and also whether you have provided a certificate for authentication). Moreover, you will be able to recover the information of your authentication certificate (issuer, subject, ...). If the authentication certificate you provide is not valid, an error message will be returned (e.g. certificate chain too long, invalid CA certificate, ...).
All errors are listed in the link below:
https://devcentral.f5.com/wiki/iRules.SSL__verify_result.ashx

<HTTP_REQUEST>: ----------- client_cert -----------
<HTTP_REQUEST>: uid: 382951fe9 - cert number: 0
<HTTP_REQUEST>: uid: 382951fe9 - subject: OU=myOu, CN=youssef
<HTTP_REQUEST>: uid: 382951fe9 - Issuer Info: DC=com, DC=domain, CN=MobIssuer
<HTTP_REQUEST>: uid: 382951fe9 - cert serial: 22:00:30:5c:de:dd:ec:23:6e:b5:e6:77:bj:01:00:00:22:3c:dc
<HTTP_REQUEST>: ----------- client_cert -----------

OR

<HTTP_REQUEST>: ----------- client_cert -----------
<HTTP_REQUEST>: uid: 382951fe9 - No client certificate provided
<HTTP_REQUEST>: ----------- client_cert -----------

http_request: this section allows you to retrieve the complete client HTTP request headers (that is, the method, URI, version, and all headers). I also added the protocol, the cipher and the name of the VS used.

<HTTP_REQUEST>: ----------- http_request -----------
<HTTP_REQUEST>: uid: 382951fe9 - protocol: https
<HTTP_REQUEST>: uid: 382951fe9 - cipher name: ECDHE-RSA-AES128-GCM-SHA256
<HTTP_REQUEST>: uid: 382951fe9 - cipher version: TLSv1.2
<HTTP_REQUEST>: uid: 382951fe9 - VS Name: /Common/vs-myapp-443
<HTTP_REQUEST>: uid: 382951fe9 - Request: POST myapp.mydomain.com/browser-management/users/552462/playlist/play/api
<HTTP_REQUEST>: uid: 382951fe9 - Host: myapp.mydomain.com
<HTTP_REQUEST>: uid: 382951fe9 - Connection: keep-alive
<HTTP_REQUEST>: uid: 382951fe9 - Content-Length: 290
<HTTP_REQUEST>: uid: 382951fe9 - Accept: application/json, text/javascript, */*; q=0.01
<HTTP_REQUEST>: uid: 382951fe9 - X-Requested-With: XMLHttpRequest
<HTTP_REQUEST>: uid: 382951fe9 - User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36
<HTTP_REQUEST>: uid: 382951fe9 - Referer: https://myapp.mydomain.com/
<HTTP_REQUEST>: uid: 382951fe9 - Accept-Encoding: gzip, deflate, sdch, br
<HTTP_REQUEST>: uid: 382951fe9 - Accept-Language: en-US,en;q=0.8
<HTTP_REQUEST>: uid: 382951fe9 - Cookie: RLT=SKjpfdkFDKjkufd976HJhldds=; secureauth=true; STT="LKJSDKJpjslkdjslkjKJSHjfdskjhoLHkjh78dshjhd980szKJH"; ASP.SessionId=dsliulpoiukj908798dsjkh
<HTTP_REQUEST>: uid: 382951fe9 - X-Forwarded-For: 10.10.10.22
<HTTP_REQUEST>: ----------- http_request -----------

http_request_release: this section is triggered when the system is about to release HTTP data on the serverside of the connection. This event fires after modules process the HTTP request, so it allows you to check the request after F5 processing. Suppose you have APM set up with Kerberos SSO: you will be able to see the Kerberos token inserted by F5.
Or the XFF header inserted by the HTTP profile...

<HTTP_REQUEST_RELEASE>: ----------- http_request_release -----------
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - VS Name: /Common/vs-myapp-443
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Request: GET myapp.mydomain.com/browser-management/users/552462/playlist/play/api
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Host: myapp.mydomain.com
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Connection: keep-alive
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Accept: application/json, text/javascript, */*; q=0.01
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - X-Requested-With: XMLHttpRequest
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Referer: https://myapp.mydomain.com/
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Accept-Encoding: gzip, deflate, sdch, br
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Accept-Language: en-US,en;q=0.8
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Cookie: RLT=SKjpfdkFDKjkufd976HJhldds=; secureauth=true; STT="LKJSDKJpjslkdjslkjKJSHjfdskjhoLHkjh78dshjhd980szKJH"; ASP.SessionId=dsliulpoiukj908798dsjkh
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - X-Forwarded-For: 10.10.10.22
<HTTP_REQUEST_RELEASE>: ----------- http_request_release -----------

http_request_payload: this section will allow you to retrieve the HTTP request body.

<HTTP_REQUEST>: ----------- http_request_payload -----------
<HTTP_REQUEST>: uid: 382951fe9 - Content-Length header null in request (if GET, or POST without content)
<HTTP_REQUEST>: ----------- http_request_payload -----------

or

<HTTP_REQUEST>: ----------- http_request_payload -----------
<HTTP_REQUEST>: uid: 382951fe9 - post payload: { id: 24, retrive: 'identity', service: 'IT'}
<HTTP_REQUEST>: ----------- http_request_payload -----------

http_lb_selected: this section will allow you to see which pool member has been selected. Once the pool member has been selected, you will not see these logs again until another load-balancing decision is made. If you want to see the selected pool member for each request, you can find this information in "http_response".

<HTTP_REQUEST>: ----------- http_lb_selected -----------
<LB_SELECTED>: uid: 382951fe9 - pool member IP: /Common/pool-name 10.22.33.54 443
<HTTP_REQUEST>: ----------- http_lb_selected -----------

http_response: this section will allow you to retrieve the response status and header lines from the server response. You can also see which pool member has been selected.

<HTTP_RESPONSE>: ----------- http_response -----------
<HTTP_RESPONSE>: uid: 382951fe9 - status: 200
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - pool member IP: /Common/pool-name 10.22.33.54 443
<HTTP_RESPONSE>: uid: 382951fe9 - Cache-Control: no-cache
<HTTP_RESPONSE>: uid: 382951fe9 - Pragma: no-cache
<HTTP_RESPONSE>: uid: 382951fe9 - Content-Type: application/json; charset=utf-8
<HTTP_RESPONSE>: uid: 382951fe9 - Expires: -1
<HTTP_RESPONSE>: uid: 382951fe9 - Server: Microsoft-IIS/8.5
<HTTP_RESPONSE>: uid: 382951fe9 - X-Powered-By: ASP.NET
<HTTP_RESPONSE>: uid: 382951fe9 - Date: Fri, 28 Oct 2018 06:46:59 GMT
<HTTP_RESPONSE>: uid: 382951fe9 - Content-Length: 302
<HTTP_RESPONSE>: ----------- http_response -----------

http_response_release: this section is triggered when the system is about to release HTTP data on the clientside of the connection. This event fires after modules process the HTTP response, so you can make sure that the response has not been altered by F5 processing.
You can also see which pool member has been selected.

<HTTP_RESPONSE_RELEASE>: ----------- http_response_release -----------
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - status: 200
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - pool member IP: /Common/pool-name 10.22.33.54 443
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Cache-Control: no-cache
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Pragma: no-cache
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Content-Type: application/json; charset=utf-8
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Expires: -1
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Server: Microsoft-IIS/8.5
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - X-Powered-By: ASP.NET
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Date: Fri, 28 Oct 2018 06:46:59 GMT
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Content-Length: 302
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Strict-Transport-Security: max-age=16070400; includeSubDomains
<HTTP_RESPONSE_RELEASE>: ----------- http_response_release -----------

http_response_payload: this section will collect an amount of HTTP body data that you specify.

<HTTP_RESPONSE_DATA>: ----------- http_response_payload -----------
<HTTP_RESPONSE_DATA>: uid: 382951fe9 - Response (Body) payload: { "username" : "youssef", "genre" : "unknown", "validation-factors" : { "validationFactors" : [ { "name" : "remote_address", "value" : "127.0.0.1" } ] }}
<HTTP_RESPONSE_DATA>: ----------- http_response_payload -----------

http_time_process: this part will give you information that can be useful for pinpointing latency problems. It is clearly not precise, and F5 offers other tools for that, but you will be able to quickly see which element takes the most time to process. You will see how long F5 takes to process the request and the response, and how long the backend server takes to respond.

<HTTP_RESPONSE_RELEASE>: ----------- http_time_process -----------
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Time to request (F5 request time) = 5 (ms)
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Time to response (F5 response time) = 0 (ms)
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Time to server (server backend process time) = 4 (ms)
<HTTP_RESPONSE_RELEASE>: ----------- http_time_process -----------

Code :

when CLIENT_ACCEPTED {
    # set a unique id for the transaction
    set uid [string range [AES::key 256] 15 23]
    # set what you want to retrieve: 0 or 1
    array set app_arrway_referer {
        client_dest_ip_port 1
        client_cert 1
        http_request 1
        http_request_release 1
        http_request_payload 1
        http_lb_selected 1
        http_response 1
        http_response_release 1
        http_response_payload 1
        http_time_process 1
    }
    if {$app_arrway_referer(client_dest_ip_port)} {
        log local0. " ----------- client_dest_ip_port ----------- "
        clientside {
            log local0. "uid: $uid - Client IP Src: [IP::client_addr]:[TCP::client_port]"
        }
        log local0. "uid: $uid - Client IP Dest:[IP::local_addr]:[TCP::local_port]"
        log local0. " ----------- client_dest_ip_port ----------- "
        log local0. " "
    }
}

when HTTP_REQUEST {
    set http_request_time [clock clicks -milliseconds]
    # Triggered when the system receives a certificate message from the client. The message may contain zero or more certificates.
    if {$app_arrway_referer(client_cert)} {
        log local0. " ----------- client_cert ----------- "
        # SSL::cert count - Returns the total number of certificates that the peer has offered.
        if {[SSL::cert count] > 0}{
            # Check if there was no error in validating the client cert against LTM's server cert
            if { [SSL::verify_result] == 0 }{
                for {set i 0} {$i < [SSL::cert count]} {incr i}{
                    log local0. "uid: $uid - cert number: $i"
                    log local0. "uid: $uid - subject: [X509::subject [SSL::cert $i]]"
                    log local0. "uid: $uid - Issuer Info: [X509::issuer [SSL::cert $i]]"
                    log local0. "uid: $uid - cert serial: [X509::serial_number [SSL::cert $i]]"
                }
            } else {
                # https://devcentral.f5.com/s/wiki/iRules.SSL__verify_result.ashx (OpenSSL verify result codes)
                log local0. "uid: $uid - Cert Info: [X509::verify_cert_error_string [SSL::verify_result]]"
            }
        } else {
            log local0. "uid: $uid - No client certificate provided"
        }
        log local0. " ----------- client_cert ----------- "
        log local0. " "
    }
    if {$app_arrway_referer(http_request)} {
        log local0. " ----------- http_request ----------- "
        if { [PROFILE::exists clientssl] == 1 } {
            log local0. "uid: $uid - protocol: https"
            log local0. "uid: $uid - cipher name: [SSL::cipher name]"
            log local0. "uid: $uid - cipher version: [SSL::cipher version]"
        }
        log local0. "uid: $uid - VS Name: [virtual]"
        log local0. "uid: $uid - Request: [HTTP::method] [HTTP::host][HTTP::uri]"
        foreach aHeader [HTTP::header names] {
            log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]"
        }
        log local0. " ----------- http_request ----------- "
        log local0. " "
    }
    set collect_length_request [HTTP::header value "Content-Length"]
    set contentlength 1
    if {$app_arrway_referer(http_request_payload)} {
        if { [catch {
            if { $collect_length_request > 0 && $collect_length_request < 1048577 } {
                set collect_length $collect_length_request
            } else {
                set collect_length 1048576
            }
            if { $collect_length > 0 } {
                HTTP::collect $collect_length
                set contentlength 1
            }
        }] } {
            # no DATA in POST Request
            log local0. " ----------- http_request_payload ----------- "
            log local0. "uid: $uid - Content-Length header null in request"
            log local0. " ----------- http_request_payload ----------- "
            log local0. " "
            set contentlength 0
        }
    }
}

when HTTP_REQUEST_DATA {
    if {$app_arrway_referer(http_request_payload)} {
        log local0. " ----------- http_request_payload ----------- "
        if {$contentlength} {
            set postpayload [HTTP::payload]
            log local0. "uid: $uid - post payload: $postpayload"
            #HTTP::release
        }
        log local0. " ----------- http_request_payload ----------- "
        log local0. " "
    }
}

when HTTP_REQUEST_RELEASE {
    if {$app_arrway_referer(http_request_release)} {
        log local0. " ----------- http_request_release ----------- "
        if { [PROFILE::exists clientssl] == 1 } {
            log local0. "uid: $uid - cipher protocol: https"
            log local0. "uid: $uid - cipher name: [SSL::cipher name]"
            log local0. "uid: $uid - cipher version: [SSL::cipher version]"
        }
        log local0. "uid: $uid - VS Name: [virtual]"
        log local0. "uid: $uid - Request: [HTTP::method] [HTTP::host][HTTP::uri]"
        foreach aHeader [HTTP::header names] {
            log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]"
        }
        log local0. " ----------- http_request_release ----------- "
        log local0. " "
    }
    set http_request_time_release [clock clicks -milliseconds]
}

when LB_SELECTED {
    if {$app_arrway_referer(http_lb_selected)} {
        log local0. " ----------- http_lb_selected ----------- "
        log local0. "uid: $uid - pool member IP: [LB::server]"
        log local0. " ----------- http_lb_selected ----------- "
        log local0. " "
    }
}

when HTTP_RESPONSE {
    set http_response_time [clock clicks -milliseconds]
    set content_length [HTTP::header "Content-Length"]
    if {$app_arrway_referer(http_response)} {
" ----------- http_response ----------- " log local0. "uid: $uid - status: [HTTP::status]" log local0. "uid: $uid - pool member IP: [LB::server]" foreach aHeader [HTTP::header names] { log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]" } log local0. " ----------- http_response ----------- " log local0. " " } if {$app_arrway_referer(http_response_payload)} { if { $content_length > 0 && $content_length < 1048577 } { set collect_length $content_length } else { set collect_length 1048576 } if { $collect_length > 0 } { HTTP::collect $collect_length } } } when HTTP_RESPONSE_DATA { if {$app_arrway_referer(http_response_payload)} { log local0. " ----------- http_response_payload ----------- " set payload [HTTP::payload] log local0. "uid: $uid - Response (Body) payload: $payload" log local0. " ----------- http_response_payload ----------- " log local0. " " } } when HTTP_RESPONSE_RELEASE { set http_response_time_release [clock clicks -milliseconds] if {$app_arrway_referer(http_response_release)} { log local0. " ----------- http_response_release ----------- " log local0. "uid: $uid - status: [HTTP::status]" log local0. "uid: $uid - pool member IP: [LB::server]" foreach aHeader [HTTP::header names] { log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]" } log local0. " ----------- http_response_release ----------- " log local0. " " } if {$app_arrway_referer(http_time_process)} { log local0. " ----------- http_time_process ----------- " log local0.info "uid: $uid - Time to request (F5 request time) = [expr $http_request_time - $http_request_time_release] (ms)" log local0.info "uid: $uid - Time to response (F5 response time) = [expr $http_response_time - $http_response_time_release] (ms)" log local0.info "uid: $uid - Time to server (server backend process time) = [expr $http_request_time_release - $http_response_time] (ms)" log local0. " ----------- http_time_process ----------- " log local0. " " } } Tested this on version: 13.05.4KViews6likes12CommentsF5 MCP(Model Context Protocol) Server
F5 MCP (Model Context Protocol) Server

This project is an MCP (Model Context Protocol) server designed to interact with F5 devices using the iControl REST API. It provides a set of tools to manage F5 objects such as virtual servers (VIPs), pools, iRules, and profiles. The server is implemented using the FastMCP framework and exposes functionalities for creating, updating, listing, and deleting F5 objects.
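To give an idea of the shape such a server takes, here is a minimal sketch of one tool using the MCP Python SDK's FastMCP. The tool, device address and credentials are illustrative assumptions, not this project's actual code:

from mcp.server.fastmcp import FastMCP
import requests

mcp = FastMCP("f5-bigip")  # hypothetical server name

@mcp.tool()
def list_pools(host: str, user: str, password: str) -> list[str]:
    """List LTM pool names on a BIG-IP via iControl REST."""
    r = requests.get(
        f"https://{host}/mgmt/tm/ltm/pool",
        auth=(user, password),
        verify=False,  # lab only; verify TLS properly in production
    )
    r.raise_for_status()
    return [item["fullPath"] for item in r.json().get("items", [])]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default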
BIG-IP Report

Problem this snippet solves:

Overview
This is a script which will generate a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. This information is used to relay information to NOC and developers, giving them insight into where things are located and the ability to plan patching and deploys. I also use it myself as a quick way to get information, or to gather data used as a foundation for RFCs, e.g. a list of all external virtual servers without compression profiles.

The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It's easy to set up and use and only requires auditor (read-only) permissions on your devices.

Demo/Preview
Interactive demo: http://loadbalancing.se/bigipreportdemo/

Screen shots
The main report:
The device overview:
Certificate details:

How to use this snippet:

Installation instructions

BigipReport REST
This is the only branch we've been updating since the middle of 2020, and it supports 12.x and upwards (maybe even 11.6).
Downloads: https://loadbalancing.se/downloads/bigipreport-v5.7.13.zip
Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/
Docker support: https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/
Kubernetes support: https://loadbalancing.se/2021/04/16/bigipreport-on-kubernetes/

BIG-IP Report (Legacy)
Older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5).
BIG-IP Report (only download this if you have v10 devices): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip
iControl Snapin: https://loadbalancing.se/downloads/f5-icontrol.zip
Documentation and Installation Instructions: https://loadbalancing.se/bigip-report/
Upgrade instructions

Protect the report using APM and active directory
Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/

Got issues/problems/feedback?
Still have issues? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it's always appreciated.

---
Join us on Discord: https://discord.gg/7JJvPMYahA

Code : BigIP Report
Tested this on version: 12, 13, 14, 15, 16
F5 iApp Automated Backup

Problem this snippet solves:
This is now available on GitHub! Please look on GitHub for the latest version, and submit any bugs or questions as an "Issue" on GitHub:

(Note: DevCentral admin update - Daniel's project appears abandoned so it's been forked and updated to the link below. @damnski on github added some SFTP code that has been merged in as well.)

https://github.com/f5devcentral/f5-automated-backup-iapp

Intro
Building on the significant work of Thomas Schockaert (and several other DevCentralites) I enhanced many aspects I needed for my own purposes, updated many things I noticed requested on the forums, and added additional documentation and clarification. As you may see in several of my comments on the original posts, I iterated through several 2.2.x versions and am now releasing v3.0.0. Below is the breakdown! Also, I have done quite a bit of testing (mostly on v13.1.0.1 lately) and I doubt I've caught everything, especially with all of the changes. Please post any questions or issues in the comments. Cheers! Daniel Tavernier (tabernarious)

Related posts:
Git Repository for f5-automated-backup-iapp (https://github.com/tabernarious/f5-automated-backup-iapp)
https://community.f5.com/t5/technical-articles/f5-automated-backups-the-right-way/ta-p/288454
https://community.f5.com/t5/crowdsrc/complete-f5-automated-backup-solution/ta-p/288701
https://community.f5.com/t5/crowdsrc/complete-f5-automated-backup-solution-2/ta-p/274252
https://community.f5.com/t5/technical-forum/automated-backup-solution/m-p/24551
https://community.f5.com/t5/crowdsrc/tkb-p/CrowdSRC

v3.2.1 (20201210)
Merged v3.1.11 and v3.2.0 for explicit SFTP support (separate from SCP).
Tweaked the SCP and SFTP upload directory handling; detailed instructions are in the iApp.
Tested on 13.1.3.4 and 14.1.3

v3.1.11 (20201210)
Better handling of UCS passphrases, and notes about characters to avoid.
I successfully tested this exact passphrase in the 13.1.3.4 CLI (surrounded with single quotes) and GUI (as-is): `~!@#$%^*()aB1-_=+[{]}:./?
I successfully tested this exact passphrase in 14.1.3 (square-braces and curly-braces would not work): `~!@#$%^*()aB1-_=+:./?
Though there may be situations these could work, avoid these characters (separated by spaces): " ' & | ; < > \ [ ] { } ,
Moved changelog and notes from the template to CHANGELOG.md and README.md.
Replaced all tabs (\t) with four spaces.

v3.1.10 (20201209)
Added SMB Version and SMB Security options to support v14+ and newer versions of Microsoft Windows and Windows Server.
Tested SMB/CIFS on 13.1.3.4 and 14.1.3 against Windows Server 2019 using "2.0" and "ntlmsspi"

v3.1.0
Removed "app-service none" from iCall objects. The iCall objects are now created as part of the Application Service (iApp) and are properly cleaned up if the iApp is redeployed or deleted.
Reasonably tested on 11.5.4 HF2 (SMB worked fine using "mount -t cifs") and altered requires-bigip-version-min to match.
Fixed the error "script did not successfully complete: (can't read "::destination_parameters__protocol_enable": no such variable" by encompassing most of the "implementation" in a block that first checks $::backup_schedule__frequency_select for "Disable".
Added default value to "filename format".
Changed UCS default value for $backup_file_name_extension to ".ucs" and added $fname_noext.
Removed old SFTP sections and references (now handled through SCP/SFTP).
Adjusted logging: added "sleep 1" to ensure proper logging; added $backup_directory to log message.
Adjusted some help messages.
New v3.0.0 features:
Supports multiple instances! (Deploy multiple copies of the iApp to save backups to different places, or perhaps to keep daily backups locally and send weekly backups to a network drive.)
Fully ConfigSync compatible! (Encrypted values now in $script instead of a local file.)
Long passwords supported! (Using "-A" with openssl, which reads/writes base64 encoded strings as a single line.)
Added $script error checking for all remote backup types! (Using 'catch' to prevent tcl errors when $script aborts.)
Backup files are cleaned up after any $script errors due to new error checking.
Added logging! (Run logs are sent to '/var/log/ltm' via the logger command, which is compatible with BIG-IP Remote Logging configuration (syslog). Run logs AND errors are sent to '/var/tmp/scriptd.out'. Errors may include plain-text passwords, which should not be in /var/log/ltm or syslog.)
Added custom cipher option for SCP! (In case BIG-IP and the destination server are not cipher-compatible out of the box.)
Added StrictHostKeyChecking=no option. (This is insecure and should only be used for testing--lots of warnings.)
Combined SCP and SFTP because they both use SCP to perform the remote copy. (Easier to maintain!)

Original v1.x.x and v2.x.x features kept (copied from an original post):
It allows you to choose between UCS or SCF as backup types (whilst providing ample warnings about SCF not being a very good restore option due to its incompleteness in some cases).
It allows you to provide a passphrase for the UCS archives (the standard GUI also does this, so the iApp should too).
It allows you to not include the private keys (same thing: the standard GUI does it, so the iApp does it too).
It allows you to set a backup schedule for every X minutes/hours/days/weeks/months, or a custom selection of days in the week.
It allows you to set the exact time, minute of the hour, day of the week or day of the month when the backup should be performed (depending on the usefulness with regard to the schedule type).
It allows you to transfer the backup files to external devices using 4 different protocols, next to providing local storage on the device itself:
SCP (username/private key without password)
SFTP (username/private key without password)
FTP (username/password)
SMB (now using TMOS v12.x.x compatible 'mount -t cifs', with username/password)
Local Storage (/var/local/ucs or /var/local/scf)
It stores all passwords and private keys in a secure fashion: encrypted by the master key of the unit (f5mku), rendering it safe to store the backups, including the credentials, off-box.
It has a configurable automatic pruning function for the Local Storage option, so the disk doesn't fill up (i.e.
keep last X backup files).
It allows you to configure the filename using the date/time wildcards from the tcl [clock] command, as well as providing a variable to include the hostname.
It requires only the WebGUI to establish the configuration you desire.
It allows you to disable the processes for automated backup without having to remove the Application Service or losing any previously entered settings.
For the external shellscripts it automatically generates, the credentials are stored in encrypted form, using the master key (a sketch of this technique follows after this section).
It allows you to no longer be required to make modifications on the linux command line to get your automated backups running after an RMA or restore operation.
It cleans up after itself, which means there are no extraneous shellscripts or status files lingering around after the scripts execute.

How to use this snippet:
Find and download the latest iApp template on GitHub (e.g. "f5.automated_backup.v3.2.1.tmpl.tcl").
Import the text file as an iApp Template in the BIG-IP GUI.
Create an Application Service using the imported Template.
Answer the questions (paying close attention to the help sections).
Check /var/tmp/scriptd.out for general logs and errors.

Tested this on version: 16.0
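The master-key handling mentioned in the feature list can be illustrated with a short sketch. This is not the iApp's actual code, just the general technique (the variable names are hypothetical):

# Encrypt a credential with the unit's master key (f5mku -K prints it);
# -a -A keep the base64 ciphertext on a single line, so long passwords survive
secret_enc=$(echo -n "$secret" | openssl enc -aes-256-cbc -salt -a -A -pass "pass:$(f5mku -K)")

# Decrypt it again inside a generated shell script
secret=$(echo -n "$secret_enc" | openssl enc -d -aes-256-cbc -a -A -pass "pass:$(f5mku -K)")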
Trigger js challenge/Captcha for ip reputation/ip intelligence categories

Problem solved by this Code Snippet
Because some ISPs or cloud providers do not monitor their users, client IP addresses often get marked as "spam sources" or "windows exploits". As the IP addresses are dynamic and a legitimate user may be using them later, these categories are often disabled in the IP Intelligence profile or under the ASM/AWAF policy. To still make use of these categories, users coming from those IP addresses can be forced to solve CAPTCHA checks, or at least be checked for JavaScript support!

How to use this Code Snippet
Have AWAF/ASM and IP Intelligence licensed.
Add an AWAF/ASM policy with the iRule support option (not enabled by default under the policy) or/and a Bot profile under the virtual server.
Optionally add an IP Intelligence profile, or enable IP Intelligence under the WAF policy without the categories that cause a lot of false positives.
Add the iRule and, if needed, modify the categories it triggers on.
Do not forget to first create the data group used in the code (a tmsh sketch follows at the end of this post), or delete that part of the code; and uncomment the Bot part of the code (and maybe comment the captcha part) if you plan to do a JS check rather than a captcha!

Code Snippet Meta Information
Version: 17.1.3
Coding Language: TCL

Code
You can find the code and further documentation in my GitHub repository: reputation-javascript-captcha-challlenge/ at main · Nikoolayy1/reputation-javascript-captcha-challlenge

when HTTP_REQUEST {
    # Take the ip address for ip reputation/intelligence check from the XFF header if it comes from the whitelisted source ip addresses in data group "client_ip_class"
    if { [HTTP::header exists "X-Forwarded-For"] && [class match [IP::client_addr] equals "/Common/client_ip_class"] } {
        set trueIP [HTTP::header "X-Forwarded-For"]
    } else {
        set trueIP [IP::client_addr]
    }

    # Check if IP reputation is triggered and it contains "Spam Sources"
    if { ([llength [IP::reputation $trueIP]] != 0) && ([IP::reputation $trueIP] contains "Spam Sources") }{
        log local0. "The category is [IP::reputation $trueIP] from [IP::client_addr]"
        # Set the variable to 1 (boolean true) to trigger ASM captcha or bot defense javascript
        set js_ch 1
    } else {
        set js_ch 0
    }

    # Custom response page, just for testing if there is no real backend origin server
    if {!$js_ch} {
        HTTP::respond 200 content {
            <html>
            <head>
            <title>Apology Page</title>
            </head>
            <body>
            We are sorry, but the site you are looking for is temporarily out of service<br>
            If you feel you have reached this page in error, please try again.
            </body>
            </html>
        }
    }
}

# when BOTDEFENSE_ACTION {
#     # Trigger bot defense action javascript check for Spam Sources
#     if {$js_ch && (not ([BOTDEFENSE::reason] starts_with "passed browser challenge")) && ([BOTDEFENSE::action] eq "allow") }{
#         BOTDEFENSE::action browser_challenge
#     }
# }

when ASM_REQUEST_DONE {
    # Trigger ASM captcha check only for users coming from Spam Sources that have not already passed the captcha check (don't have the captcha cookie)
    if {$js_ch && [ASM::captcha_status] ne "correct"} {
        set res [ASM::captcha]
        if {$res ne "ok"} {
            log local0. "Cannot send captcha_challenge: \"$res\""
        }
    }
}

Extra References:
BOTDEFENSE::action
ASM::captcha
ASM::captcha_status
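The "client_ip_class" data group referenced in the code (and in the step list above) can be created from tmsh. An illustrative sketch, with an example network:

tmsh create ltm data-group internal client_ip_class type ip records add { 10.0.0.0/8 { } }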
List of F5 iControl REST API Endpoints

As I could not find a complete API reference for the F5 iControl REST API, I created a list myself. This list is not (yet) complete. I created it with the help of an API crawler and manually added some endpoints extracted from the F5 documentation. It would be great if this list got more complete over time. Feel free to fork the repository and create pull requests for additions and corrections. Any help and feedback is very welcome!

At the moment it is a simple plain-text file. My future plans are:
Complete the list of endpoints
Publish an OpenAPI 3 file

You can find the list in my public GitHub repository. Happy RESTing!
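As a quick way to explore any endpoint on the list, you can query it directly with curl. An illustrative call (hostname and credentials are placeholders):

curl -sku admin:admin https://bigip.example.com/mgmt/tm/ltm/virtual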
GTM type A and AAAA wideip NetworkMap to generate a json with python f5-sdk

Code is community submitted, community supported, and recognized as "Use At Your Own Risk".

Short Description
GTM type A and AAAA NetworkMap to a JSON file with the Python f5-sdk; the code supports checking AS3 wideips. Tested on BIG-IP VE v14.1.5 and v16.1.2; the code should work on BIG-IP v12+.

Important: GTM/LTM server names cannot contain the characters ":", "\" or "/", and GTM server virtual server names (VS names) also cannot contain ":" or "\", because I use fullPath.split(':') to read the GTM server name and virtual server name (a correct example is "fullPath":"/Common/DC-2-GTM-ipv4:/Common/vs_cmcc_99_22"); otherwise it will raise HTTP 404. Below is an example of the failing format:

GTM server name: ZSCTEST:DC-1-LTM-ZSC-ipv4
GTM server virtual server name (VS name): test:vs
"name":"ZSCTEST\\:DC-1-LTM-ZSC-ipv4:test:vs","partition":"Common","fullPath":"/Common/ZSCTEST\\:DC-1-LTM-ZSC-ipv4:test:vs"

Problem solved by this Code Snippet
Collect GTM type A and AAAA data and generate a JSON file.

How to use this Code Snippet
Firstly, install the Python f5-sdk:

pip install f5-sdk

Secondly, modify the following IP, account and password to match your BIG-IP GTM device:

mgmt = ManagementRoot('192.168.5.109', 'admin', 'xt32112300')

The Python f5-sdk sends GET requests to GTM in the form ~partition~name, but the format of the GET request for an AS3-published wideip should be ~partition~Folder~name, so when retrieving an AS3-published wideip the constructed URL will report HTTP 404. Reading the error source code in site-packages\icontrol\session.py, there is a function _validate_name_partition_subpath(element):
def _validate_name_partition_subpath(element):
    # '/' and '~' are illegal characters in most cases, however there are
    # few exceptions (GTM Regions endpoint being one of them where the
    # validation of name should not apply.
    """
    if '~' in element:
        error_message =\
            "instance names and partitions cannot contain '~', but it's: %s"\
            % element
        raise InvalidInstanceNameOrFolder(error_message)
    """

The check for the character '~' in the name breaks the AS3 construct name=i.subPath + '~' + i.name, so delete the '~' check or comment it out with """ """ as shown above; the code will then support AS3 wideip checks.

Finally, if the code runs without errors, it will generate a "F5-GTM-Wideip-XXX(date format)-NetworkMap.json" file in your local working directory.

Code Snippet Meta Information
Version: 1.0
Coding Language: python

Full Code Snippet

from f5.bigip import ManagementRoot
import json
import time

wideip_NetworkMap = {}
mgmt = ManagementRoot('192.168.5.109', 'admin', 'xt32112300')
gtm_wideip = []

"""
author: xuwen
email: 1099061067@qq.com
date: 2022/12/16
"""

# GTM A Wideip
for i in mgmt.tm.gtm.wideips.a_s.get_collection():
    try:
        type_A_wideip = mgmt.tm.gtm.wideips.a_s.a.load(name=i.subPath + '~' + i.name if hasattr(i, 'subPath') else i.name, partition=i.partition)
    except Exception as e:
        print('type A widip name {} error msg is '.format(i.name) + str(e))
    else:
        gtm_A_wideip = {}
        type_A_wideip_name = i.name
        type_A_wideip_partition = i.partition
        if hasattr(type_A_wideip, 'aliases'):
            gtm_A_wideip['aliases'] = type_A_wideip.aliases
        if hasattr(type_A_wideip, 'rules'):
            gtm_A_wideip['iRules'] = type_A_wideip.rules
        if hasattr(type_A_wideip, 'enabled'):
            gtm_A_wideip['enabled'] = True
        else:
            gtm_A_wideip['disabled'] = True
        if hasattr(type_A_wideip, 'subPath'):
            gtm_A_wideip['subPath'] = type_A_wideip.subPath
        gtm_A_wideip.update(name=type_A_wideip_name, partition=type_A_wideip_partition, wideip_type='A',
                            poolLbMode=type_A_wideip.poolLbMode, persistence=type_A_wideip.persistence,
                            lastResortPool=type_A_wideip.lastResortPool, fullPath=type_A_wideip.fullPath)
        # print(gtm_A_wideip)
        if hasattr(type_A_wideip, 'pools'):
            gtm_A_wideip['pools'] = []
            for pool_name in type_A_wideip.pools:
                gtm_A_pool = {}
                # gtm_A_pool_name = pool_name['name']
                gtm_A_pool['name'] = pool_name['name']
                gtm_A_pool['partition'] = pool_name['partition']
                gtm_A_pool['type'] = 'A'
                gtm_A_pool['order'] = pool_name['order']
                gtm_A_pool['ratio'] = pool_name['ratio']
                if 'subPath' in pool_name.keys():
                    gtm_A_pool['subPath'] = pool_name['subPath']
                    gslb_A_pool = mgmt.tm.gtm.pools.a_s.a.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition'])
                    # gslb_A_pool = mgmt.tm.gtm.pools.a_s.a.load(name=pool_name['subPath'] + '~' + pool_name['name'] if 'subPath' in pool_name.keys() else pool_name['name'], partition=pool_name['partition'])
                else:
                    gslb_A_pool = mgmt.tm.gtm.pools.a_s.a.load(name=pool_name['name'], partition=pool_name['partition'])
                gtm_A_pool['fullPath'] = gslb_A_pool.fullPath
                gtm_A_pool['ttl'] = gslb_A_pool.ttl
                gtm_A_pool['loadBalancingMode'] = gslb_A_pool.loadBalancingMode
                gtm_A_pool['alternateMode'] = gslb_A_pool.alternateMode
                gtm_A_pool['fallbackMode'] = gslb_A_pool.fallbackMode
                gtm_A_pool['fallbackIp'] = gslb_A_pool.fallbackIp
                gtm_A_pool['Members'] = []
                # gslb_pool_members_vs_name_list = [str(mem.raw) for mem in gslb_A_pool.members_s.get_collection()]
                gslb_pool_members_vs_fullPath_list = [(mem.memberOrder, mem.fullPath, mem.ratio) for mem in gslb_A_pool.members_s.get_collection()]
                for pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_vs_fullPath_list:
                    # print(pool_member_fullPath)
                    # "fullPath":"/Common/DC-2-GTM-ipv4:/Common/vs_cmcc_99_22"
                    gtm_server_name = pool_member_fullPath.split(':')[0]
                    gtm_pool_members_member_name = pool_member_fullPath.split(':')[1]
                    dc_gtm_virtualserver = mgmt.tm.gtm.servers.server.load(name=gtm_server_name.split('/')[2], partition=gtm_server_name.split('/')[1])
                    virtualservers_virtualserver = dc_gtm_virtualserver.virtual_servers_s.virtual_server.load(name=gtm_pool_members_member_name)
virtualserver_destination = virtualservers_virtualserver.destination virtualserver_Member_Address = virtualserver_destination.split(':')[0] virtualserver_Service_Port = virtualserver_destination.split(':')[1] gtm_A_pool['Members'].append({ 'Member': gtm_pool_members_member_name, 'Member Order': pool_memberOrder, 'ratio': pool_member_ratio, 'Member Address': virtualserver_Member_Address, 'Service Port': virtualserver_Service_Port, 'Translation Address': virtualservers_virtualserver.translationAddress, 'Translation Service Port': virtualservers_virtualserver.translationPort }) gtm_A_wideip['pools'].append(gtm_A_pool) if hasattr(type_A_wideip, 'poolsCname'): gtm_A_wideip['poolsCname'] = [] for pool_name in type_A_wideip.poolsCname: gtm_A_cnamepool = {} # gtm_A_pool_name = pool_name['name'] gtm_A_cnamepool['name'] = pool_name['name'] gtm_A_cnamepool['partition'] = pool_name['partition'] gtm_A_cnamepool['type'] = 'CNAME' gtm_A_cnamepool['order'] = pool_name['order'] gtm_A_cnamepool['ratio'] = pool_name['ratio'] if 'subPath' in pool_name.keys(): gtm_A_cnamepool['subPath'] = pool_name['subPath'] gslb_A_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition']) else: gslb_A_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['name'], partition=pool_name['partition']) gtm_A_cnamepool['fullPath'] = gslb_A_cnamepool.fullPath gtm_A_cnamepool['ttl'] = gslb_A_cnamepool.ttl gtm_A_cnamepool['loadBalancingMode'] = gslb_A_cnamepool.loadBalancingMode gtm_A_cnamepool['alternateMode'] = gslb_A_cnamepool.alternateMode gtm_A_cnamepool['fallbackMode'] = gslb_A_cnamepool.fallbackMode gtm_A_cnamepool['Members'] = [] gslb_pool_members_domainname_fullPath_list = [(mem.name, mem.memberOrder, mem.fullPath, mem.ratio) for mem in gslb_A_cnamepool.members_s.get_collection()] for pool_member_name, pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_domainname_fullPath_list: gtm_A_cnamepool['Members'].append({ 'Member': pool_member_name, 'Member Order': pool_memberOrder, 'ratio': pool_member_ratio, 'fullPath': pool_member_fullPath }) gtm_A_wideip['poolsCname'].append(gtm_A_cnamepool) # print(gtm_A_wideip) gtm_wideip.append(gtm_A_wideip) # GTM AAAA Wideip for i in mgmt.tm.gtm.wideips.aaaas.get_collection(): try: type_AAAA_wideip = mgmt.tm.gtm.wideips.aaaas.aaaa.load(name=i.subPath + '~' + i.name if hasattr(i, 'subPath') else i.name, partition=i.partition) except Exception as e: print('type AAAA widip name {} error msg is '.format(i.name) + str(e)) else: gtm_AAAA_wideip = {} type_AAAA_wideip_name = i.name type_AAAA_wideip_partition = i.partition if hasattr(type_AAAA_wideip, 'aliases'): gtm_AAAA_wideip['aliases'] = type_AAAA_wideip.aliases if hasattr(type_AAAA_wideip, 'rules'): gtm_AAAA_wideip['iRules'] = type_AAAA_wideip.rules if hasattr(type_AAAA_wideip, 'enabled'): gtm_AAAA_wideip['enabled'] = True else: gtm_AAAA_wideip['disabled'] = True if hasattr(type_AAAA_wideip, 'subPath'): gtm_AAAA_wideip['subPath'] = type_AAAA_wideip.subPath gtm_AAAA_wideip.update(name=type_AAAA_wideip_name, partition=type_AAAA_wideip_partition, wideip_type='AAAA', poolLbMode=type_AAAA_wideip.poolLbMode, persistence=type_AAAA_wideip.persistence, lastResortPool=type_AAAA_wideip.lastResortPool, fullPath=type_AAAA_wideip.fullPath) if hasattr(type_AAAA_wideip, 'pools'): gtm_AAAA_wideip['pools'] = [] for pool_name in type_AAAA_wideip.pools: gtm_AAAA_pool = {} # gtm_A_pool_name = pool_name['name'] gtm_AAAA_pool['name'] = pool_name['name'] 
gtm_AAAA_pool['partition'] = pool_name['partition'] gtm_AAAA_pool['type'] = 'AAAA' gtm_AAAA_pool['order'] = pool_name['order'] gtm_AAAA_pool['ratio'] = pool_name['ratio'] if 'subPath' in pool_name.keys(): gtm_AAAA_pool['subPath'] = pool_name['subPath'] gslb_AAAA_pool = mgmt.tm.gtm.pools.aaaas.aaaa.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition']) else: gslb_AAAA_pool = mgmt.tm.gtm.pools.aaaas.aaaa.load(name=pool_name['name'], partition=pool_name['partition']) gtm_AAAA_pool['fullPath'] = gslb_AAAA_pool.fullPath gtm_AAAA_pool['ttl'] = gslb_AAAA_pool.ttl gtm_AAAA_pool['loadBalancingMode'] = gslb_AAAA_pool.loadBalancingMode gtm_AAAA_pool['alternateMode'] = gslb_AAAA_pool.alternateMode gtm_AAAA_pool['fallbackMode'] = gslb_AAAA_pool.fallbackMode gtm_AAAA_pool['fallbackIp'] = gslb_AAAA_pool.fallbackIp gtm_AAAA_pool['Members'] = [] # gslb_pool_members_vs_name_list = [str(mem.raw) for mem in gslb_A_pool.members_s.get_collection()] gslb_pool_members_vs_fullPath_list = [(mem.memberOrder, mem.fullPath, mem.ratio) for mem in gslb_AAAA_pool.members_s.get_collection()] for pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_vs_fullPath_list: # print(pool_member_fullPath) # "fullPath":"/Common/DC-2-GTM-ipv4:/Common/vs_cmcc_99_22" gtm_server_name = pool_member_fullPath.split(':')[0] gtm_pool_members_member_name = pool_member_fullPath.split(':')[1] dc_gtm_virtualserver = mgmt.tm.gtm.servers.server.load(name=gtm_server_name.split('/')[2], partition=gtm_server_name.split('/')[1]) virtualservers_virtualserver = dc_gtm_virtualserver.virtual_servers_s.virtual_server.load( name=gtm_pool_members_member_name ) virtualserver_destination = virtualservers_virtualserver.destination virtualserver_Member_Address = virtualserver_destination.split('.')[0] virtualserver_Service_Port = virtualserver_destination.split('.')[1] gtm_AAAA_pool['Members'].append({ 'Member': gtm_pool_members_member_name, 'Member Order': pool_memberOrder, 'ratio': pool_member_ratio, 'Member Address': virtualserver_Member_Address, 'Service Port': virtualserver_Service_Port, 'Translation Address': virtualservers_virtualserver.translationAddress, 'Translation Service Port': virtualservers_virtualserver.translationPort }) gtm_AAAA_wideip['pools'].append(gtm_AAAA_pool) if hasattr(type_AAAA_wideip, 'poolsCname'): gtm_AAAA_wideip['poolsCname'] = [] for pool_name in type_AAAA_wideip.poolsCname: gtm_AAAA_cnamepool = {} gtm_AAAA_cnamepool['name'] = pool_name['name'] gtm_AAAA_cnamepool['partition'] = pool_name['partition'] gtm_AAAA_cnamepool['type'] = 'CNAME' gtm_AAAA_cnamepool['order'] = pool_name['order'] gtm_AAAA_cnamepool['ratio'] = pool_name['ratio'] if 'subPath' in pool_name.keys(): gtm_AAAA_cnamepool['subPath'] = pool_name['subPath'] gslb_AAAA_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition']) else: gslb_AAAA_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['name'], partition=pool_name['partition']) gtm_AAAA_cnamepool['fullPath'] = gslb_AAAA_cnamepool.fullPath gtm_AAAA_cnamepool['ttl'] = gslb_AAAA_cnamepool.ttl gtm_AAAA_cnamepool['loadBalancingMode'] = gslb_AAAA_cnamepool.loadBalancingMode gtm_AAAA_cnamepool['alternateMode'] = gslb_AAAA_cnamepool.alternateMode gtm_AAAA_cnamepool['fallbackMode'] = gslb_AAAA_cnamepool.fallbackMode gtm_AAAA_cnamepool['Members'] = [] gslb_pool_members_domainname_fullPath_list = [(mem.name, mem.memberOrder, mem.fullPath, mem.ratio) for mem in 
gslb_AAAA_cnamepool.members_s.get_collection()] for pool_member_name, pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_domainname_fullPath_list: gtm_AAAA_cnamepool['Members'].append({ 'Member': pool_member_name, 'Member Order': pool_memberOrder, 'ratio': pool_member_ratio, 'fullPath': pool_member_fullPath }) gtm_AAAA_wideip['poolsCname'].append(gtm_AAAA_cnamepool) # print(gtm_AAAA_wideip) gtm_wideip.append(gtm_AAAA_wideip) gtm_networkmap = {} gtm_networkmap.update(wideips=gtm_wideip) print(gtm_networkmap) with open(r"./F5-GTM-Wideip-{}-NetworkMap.json".format(time.strftime("%Y-%m-%d", time.localtime())), "w") as f: f.write(json.dumps(gtm_networkmap, indent=4, ensure_ascii=False))2.3KViews2likes7CommentsLogstash pipeline tester
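For orientation, here is a trimmed, hypothetical example of the structure the script writes to the NetworkMap file. The keys match what the code above produces; every name and address is made up:

```json
{
    "wideips": [
        {
            "name": "www.example.com",
            "partition": "Common",
            "wideip_type": "A",
            "poolLbMode": "round-robin",
            "fullPath": "/Common/www.example.com",
            "enabled": true,
            "pools": [
                {
                    "name": "pool_example",
                    "partition": "Common",
                    "type": "A",
                    "order": 0,
                    "ratio": 1,
                    "loadBalancingMode": "round-robin",
                    "Members": [
                        {
                            "Member": "/Common/vs_example",
                            "Member Order": 0,
                            "ratio": 1,
                            "Member Address": "10.1.1.10",
                            "Service Port": "80"
                        }
                    ]
                }
            ]
        }
    ]
}
```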
Code is community submitted, community supported, and recognized as “Use At Your Own Risk”.

Short Description

A tool that makes developing Logstash pipelines much, much easier.

Problem solved by this Code Snippet

Oh, the problem... Have you ever tried to write a Logstash pipeline? Did you suffer hair loss and splitting migraines? So did I. Presenting the Logstash pipeline tester, which gives you a web interface where you can paste raw logs, send them to the included Logstash instance, and see the result directly in the interface. The included Logstash instance is also configured to automatically reload once it detects a config change.

How to use this Code Snippet

TL;DR: Don't do this, read the manual or check out the video below.

Still here? Ok then!

1. Install Docker
2. Clone the repo
3. Run these commands in the repo root folder:

```
sudo docker-compose build   # Skip sudo if running Windows
sudo docker-compose up      # Skip sudo if running Windows
```

4. Go to http://localhost:8080 on your PC/Mac
5. Pick a pipeline and send data
6. Edit the pipeline (see the sample pipeline sketch at the end of this snippet)
7. Send data
8. Rinse, repeat

Version info

v1.0.27: Dependency updates, jest test retries and more since 1.0.0
https://github.com/epacke/logstash-pipeline-tester/releases/tag/v1.0.29

Video on how to get started: https://youtu.be/Q3IQeXWoqLQ

Please note that I accidentally started the interface on port 3000 in the video, while the shipped version uses port 8080. It took me roughly 5 hours and more retakes than I can count to make this video, so that mistake will be preserved for the internet to laugh at.

The manual: https://loadbalancing.se/2020/03/11/logstash-testing-tool/

Code Snippet Meta Information

Version: Check GitHub
Coding Language: NodeJS, TypeScript + React

Full Code Snippet

https://github.com/epacke/logstash-pipeline-tester
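To give a feel for what you edit in step 6, here is a minimal, hypothetical pipeline of the kind the tool reloads: a plain Logstash config with a grok filter. The port, pattern, and field names are made up for illustration and are not taken from the repo:

```
input {
  tcp {
    # The tool feeds the raw log lines pasted into the web interface
    # to an input such as this one (port number is illustrative).
    port => 5046
  }
}

filter {
  grok {
    # Parse a made-up "<timestamp> <level> <message>" line format.
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  # The tester shows the resulting events in the interface; stdout with
  # rubydebug is also handy when tailing the Logstash container directly.
  stdout { codec => rubydebug }
}
```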