F5 DNS/GTM External Monitor (EAV) with SNI support and response code check
I have used this monitor for XC Distributed Cloud, as the HTTP LBs there by default share the same tenant IP address and SNI support is therefore needed. You can order dedicated public IP addresses for each HTTP LB and enable the "Default Load Balancer" option (https://my.f5.com/manage/s/article/K000152902), but it will cost you extra 😉

The script is a modified version of "External https health monitor for SNI-enabled pool", extended to handle response codes and to set the SNI globally for the entire pool and its members. If you are uploading from a Windows machine, see "External monitor fails to run", as you could hit that bug. This can be needed on F5 DNS/GTM versions below 16.1, which do not support SNI in HTTPS monitors.

The only mandatory variable is "SNI", which should be set in the external monitor config that references this uploaded bash script. The "URI" variable defaults to "/", the port argument ("$2") defaults to 443 when empty, and the expected response code defaults to 200.

#!/bin/sh
# External monitoring script for checking HTTP status code
# $1 = IP (::ffff:nnn.nnn.nnn.nnn notation or hostname)
# $2 = port (optional; defaults to 443 if not provided)

# Default SNI to IP if not explicitly provided
node_ip=$(echo "$1" | sed 's/::ffff://')   # Remove IPv6 compatibility prefix
SNI=${SNI:-"$node_ip"}                     # Assign sanitized IP to SNI

# Default variables
MON_NAME=${MON_NAME:-"MyExtMon$$"}
pidfile="/var/run/$MON_NAME.$1..$2.pid"    # PID file path
DEBUG=${DEBUG:-0}                          # Enable debugging if set to 1
EXPECTED_STATUS=${EXPECTED_STATUS:-200}    # Default HTTP status code to 200
URI=${URI:-"/"}                            # Default URI
DEFAULT_PORT=443                           # Default port (used if $2 is unset)

# Set port to default if $2 is not provided
if [ -z "${2}" ]; then
    PORT=${DEFAULT_PORT}
else
    PORT=${2}
fi

# Kill old process if pidfile exists
if [ -f "$pidfile" ]; then
    kill -9 -$(cat "$pidfile") > /dev/null 2>&1
fi
echo "$$" > "$pidfile"

# Perform the HTTP(S) request via single curl (fetch status code only)
status_code=$(curl -s -o /dev/null -w '%{http_code}' --connect-timeout 5 --resolve "${SNI}:${PORT}:${node_ip}" "https://${SNI}:${PORT}${URI}")

# Cleanup
rm -f "$pidfile" > /dev/null 2>&1

# Output server status based on HTTP status code match
if [ "$status_code" -eq "$EXPECTED_STATUS" ]; then
    echo "up"
else
    echo "down"
fi

# Debugging
if [ "$DEBUG" -eq 1 ]; then
    echo "Debugging on..."
    echo "SNI=${SNI}"
    echo "URI=${URI}"
    echo "IP=${node_ip}"
    echo "PORT=${PORT}"
    echo "MON_NAME=${MON_NAME}"
    echo "STATUS_CODE=${status_code}"
    echo "EXPECTED_STATUS=${EXPECTED_STATUS}"
    echo "curl -s -o /dev/null -w '%{http_code}' --connect-timeout 5 --resolve ${SNI}:${PORT}:${node_ip} https://${SNI}:${PORT}${URI}"
fi
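For reference, a hedged sketch of how the script might be referenced from a monitor definition once uploaded; the monitor name, script path, interval/timeout, and SNI value are illustrative placeholders, not a verified configuration, and the variables are passed as the usual external-monitor user-defined variables:

# Hypothetical bigip_gtm.conf-style stanza; adjust names and paths to your system.
gtm monitor external mon_https_sni {
    defaults-from external
    interval 30
    timeout 120
    run /Common/https_sni_check.sh
    user-defined SNI app.example.com
    user-defined EXPECTED_STATUS 200
    user-defined URI /
}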
F5 Velos/rSeries/F5OS code for automating config backup with the new RESTCONF API and Ansible

On the new F5OS devices a RESTCONF-based API is used that allows everything to be done via the API. You can even send an API command to make the F5 export the configuration file over an outbound HTTPS/SCP connection, which is extra security for me. F5 has released Ansible collections for Velos, but some things are still not possible with the collection. With Ansible the URI module can be used to do the things I am doing with Postman, as even the HTTP headers can be added in the URI module. Some may use Python, but personally I like Ansible more (look at the end of this article for the Ansible example) 🙂

https://clouddocs.f5.com/products/orchestration/ansible/devel/velos/velos.html
https://clouddocs.f5.com/products/orchestration/ansible/devel/f5os/f5os.html
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/uri_module.html

This code allows the automation of configuration backups for F5 Velos/rSeries using the new API. To get started with the F5OS API I recommend going through the DevCentral article https://community.f5.com/t5/technical-articles/exploring-f5os-automation-features-on-velos/ta-p/295318. The Velos Postman collections are at https://clouddocs.f5.com/api/velos-api/velos-api-workflows.html, and the Velos API documentation can be found at F5OS/F5OS-C OpenAPI Documentation.

The F5OS API supports Basic and Bearer token authentication, but it is much better to use Basic auth just to retrieve the token, as shown in the examples below.

1. Generate a Bearer token in Postman. This is from the F5 Postman collection.

Endpoint: https://{{Chassis1_System_Controller_IP}}:8888/restconf/data/openconfig-system:system/aaa
Body: 'No body'

2. Create a config backup (it is no longer called a UCS but a database configuration backup in F5OS).

Endpoint: :8888/restconf/data/openconfig-system:system/f5-database:database/f5-database:config-backup
Body:

{
    "f5-database:name": "api-backup",
    "f5-database:overwrite": "true"
}

Note! For rSeries "f5-database:overwrite": "true" may need to be removed, as 1.3.1 does not support choosing whether to overwrite an existing backup or not.

3a. Download the config backup as 'root' with SCP from '/var/confd/configs/'; see, for example, "Back up and restore the F5OS-C configuration on a VELOS system".

3b. Make the F5 send the backup over HTTPS to the backup server with the new file transfer utility, which can be triggered with API commands so that the F5 starts the file transfer.

Endpoint: :8888/restconf/data/f5-utils-file-transfer:file/export
Body:

{
    "f5-utils-file-transfer:username": "test",
    "f5-utils-file-transfer:password": "test",
    "f5-utils-file-transfer:local-file": "configs/api-backup",
    "f5-utils-file-transfer:remote-url": "https://1.1.1.1/file"
}

In some versions the variable "insecure": "true" can't be set, so the web server may need a valid, not self-signed, SSL certificate.

3c. Export the backup with SCP/SFTP initiated from the F5 device with an API command. This is something that will be possible in the future; as of now it still is not. I tried to follow the API documentation, but sometimes I get errors about a missing element ''known-hosts'' (this file should be created with the API call below; maybe the workaround is to go to Linux with a root account and create the file, but I still have not found where to create it). Another error is an unknown element ''remote-host'', which should exist, so either it is a bug or the documentation has some mistakes, but as this is a new feature it will work eventually.
As a note, you need to add the fingerprints of the Velos or rSeries remote hosts to start the SCP connection, as an extra security step, and this is really nice 😀

Endpoint: /restconf/data/f5-utils-file-transfer:file/known-hosts
Body:

{
    "f5-utils-file-transfer:known-host": [
        {
            "remote-host": "string",
            "config": {
                "remote-host": "string",
                "key-type": "rsa",
                "fingerprint": "string"
            },
            "state": {
                "remote-host": "string",
                "key-type": "rsa",
                "fingerprint": "string"
            }
        }
    ]
}

Now with F5OS, when accessing the GUI you can use Fiddler or F12 (the devtools) to see the RESTCONF commands that are used, and then use them in Postman/Ansible/Python etc.

EDIT:

4. Using the Ansible URI module with F5OS for Basic auth, token generation and config backup

Here is an example that does the same tasks using the Ansible URI module. The URI module allows us to make our own API requests when there is no built-in module, and it even supports basic and form-based authentication. The token can be saved and used as a variable in the next requests that generate the backup; the backup can then be transferred with SCP triggered by a cron job, or another URI module task can be written that uses the file transfer utility.

Ansible playbook using a Jinja2 template as the JSON body:

root@niki1:/home/niki/ansible# cat f5os_backup.yml
---
- name: F5OS_BACKUP
  hosts: lb
  connection: local
  gather_facts: false
  vars:
    Chassis_IP: X.X.X.X
    backup_name: api3_backup
  tasks:
    - name: Create a Basic request
      ansible.builtin.uri:
        url: https://{{ Chassis_IP }}:8888/restconf/data/openconfig-system:system/aaa
        user: xxx
        password: xxx
        method: GET
        force_basic_auth: yes
        status_code: 200
        body_format: json
        validate_certs: false
        headers:
          Content-Type: application/yang-data+json
          X-Auth-Token: rctoken
        return_content: yes
      register: result

    - name: Save the token to a fact variable
      set_fact:
        metatoken: "{{ result.x_auth_token }}"

    - name: Create Backup
      ansible.builtin.uri:
        url: https://{{ Chassis_IP }}:8888/restconf/data/openconfig-system:system/f5-database:database/f5-database:config-backup
        method: POST
        status_code: 200
        body_format: json
        validate_certs: false
        body: "{{ lookup('ansible.builtin.template','f5os.json') }}"
        headers:
          Content-Type: application/yang-data+json
          X-Auth-Token: "{{ metatoken }}"

f5os.json template:

{
    "f5-database:name": "{{ backup_name }}",
    "f5-database:overwrite": "true"
}

Edit: Now there is an F5 Ansible collection for this 🙂
https://clouddocs.f5.com/products/orchestration/ansible/devel/f5os/modules_3_0/f5os_config_backup_module.html

As of F5OS 1.8, ":8888/restconf" can be replaced with ":443/api".
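The same two calls can also be reproduced from a shell; a minimal curl sketch, where the controller IP, credentials, and backup name are placeholders (the token comes back in the X-Auth-Token response header, as the playbook above relies on):

# Fetch a token via Basic auth, then create a backup with it.
TOKEN=$(curl -sk -u admin:admin -D - -o /dev/null \
    "https://10.1.1.10:8888/restconf/data/openconfig-system:system/aaa" \
    | awk 'tolower($1) == "x-auth-token:" {print $2}' | tr -d '\r')

curl -sk -X POST \
    -H "X-Auth-Token: ${TOKEN}" \
    -H "Content-Type: application/yang-data+json" \
    -d '{"f5-database:name": "api-backup", "f5-database:overwrite": "true"}' \
    "https://10.1.1.10:8888/restconf/data/openconfig-system:system/f5-database:database/f5-database:config-backup"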
TLS Server Name Indication

Problem this snippet solves:

Extensions to TLS encryption protocols after TLS v1.0 have added support for passing the desired servername as part of the initial encryption negotiation. This functionality makes it possible to use different SSL certificates with a single IP address by changing the server's response based on this field. This process is called Server Name Indication (http://en.wikipedia.org/wiki/Server_Name_Indication). It is not supported by all browsers, but has a high level of support among widely-used browsers. Only use this functionality if you know the bulk of the browsers accessing your site support SNI -- the fact that IE on Windows XP does not precludes the wide use of this functionality for most sites, but only for now. As older browsers begin to die off, SNI will be a good weapon in your arsenal of virtual hosting tools. You can test if your browser supports SNI by clicking here: https://alice.sni.velox.ch/

Supported Browsers:
* Internet Explorer 7 or later, on Windows Vista or higher
* Mozilla Firefox 2.0 or later
* Opera 8.0 or later (the TLS 1.1 protocol must be enabled)
* Opera Mobile at least version 10.1 beta on Android
* Google Chrome (Vista or higher. XP on Chrome 6 or newer)
* Safari 2.1 or later (Mac OS X 10.5.6 or higher and Windows Vista or higher)
* MobileSafari in Apple iOS 4.0 or later
* Windows Phone 7
* MicroB on Maemo

Unsupported Browsers:
* Konqueror/KDE in any version
* Internet Explorer (any version) on Windows XP
* Safari on Windows XP
* wget
* BlackBerry Browser
* Windows Mobile up to 6.5
* Android default browser (Targeted for Honeycomb but won't be fixed until next version for phone users as Honeycomb will be reserved to tablets only)
* Oracle Java JSSE

Note: The iRule listed here is only supported on v10 and above.
Note: Support for SNI was added in 11.1.0. See SOL13452 for more information.

How to use this snippet:

Create a string-type datagroup called "tls_servername". Each hostname that needs to be supported on the VIP must be input along with its matching clientssl profile. For example, for the site "testsite.site.com" with a ClientSSL profile named "clientssl_testsite", you should add the following values to the datagroup:

String: testsite.site.com
Value: clientssl_testsite

If you wish to switch pool context when the servername is detected in TLS, create a string-type datagroup called "tls_servername_pool". Input each hostname to be supported by the VIP along with the pool to direct the traffic towards. For the site "testsite.site.com" to be directed to the pool "testsite_pool_80", add the following to the datagroup:

String: testsite.site.com
Value: testsite_pool_80

Apply the iRule below to a chosen VIP. When applied, this iRule will detect if an SNI field is present and dynamically switch the SSL profile and pool to use the configured certificate.

Important: The VIP must have a clientSSL profile AND a default pool set. If you don't set this, the iRule will likely break. There is also no real error handling for incorrect/inaccurate entries in the datagroup lists -- if you enter a bad value, it'll fail. This allows you to support multiple certificates and multiple pools per VS IP address.
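Before applying the iRule, both data groups can also be created from tmsh; a hedged sketch using the example values above (the record syntax may vary slightly by TMOS version):

# Example values from the text above; adjust hostnames, profile, and pool.
tmsh create ltm data-group internal tls_servername type string records add { testsite.site.com { data clientssl_testsite } }
tmsh create ltm data-group internal tls_servername_pool type string records add { testsite.site.com { data testsite_pool_80 } }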
when CLIENT_ACCEPTED {
    if { [PROFILE::exists clientssl] } {
        # We have a clientssl profile attached to this VIP but we need
        # to find an SNI record in the client handshake. To do so, we'll
        # disable SSL processing and collect the initial TCP payload.
        set default_tls_pool [LB::server pool]
        set detect_handshake 1
        SSL::disable
        TCP::collect
    } else {
        # No clientssl profile means we're not going to work.
        log local0. "This iRule is applied to a VS that has no clientssl profile."
        set detect_handshake 0
    }
}

when CLIENT_DATA {
    if { ($detect_handshake) } {
        # If we're in a handshake detection, look for an SSL/TLS header.
        binary scan [TCP::payload] cSS tls_xacttype tls_version tls_recordlen

        # TLS is the only thing we want to process because it's the only
        # version that allows the servername extension to be present. When we
        # find a supported TLS version, we'll check to make sure we're getting
        # only a Client Hello transaction -- those are the only ones we can pull
        # the servername from prior to connection establishment.
        switch $tls_version {
            "769" -
            "770" -
            "771" {
                if { ($tls_xacttype == 22) } {
                    binary scan [TCP::payload] @5c tls_action
                    if { not (($tls_action == 1) && ([TCP::payload length] > $tls_recordlen)) } {
                        set detect_handshake 0
                    }
                }
            }
            default {
                set detect_handshake 0
            }
        }

        if { ($detect_handshake) } {
            # If we made it this far, we're still processing a TLS client hello.
            #
            # Skip the TLS header (43 bytes in) and process the record body. For TLS/1.0 we
            # expect this to contain only the session ID, cipher list, and compression
            # list. All but the cipher list will be null since we're handling a new transaction
            # (client hello) here. We have to determine how far out to parse the initial record
            # so we can find the TLS extensions if they exist.
            set record_offset 43
            binary scan [TCP::payload] @${record_offset}c tls_sessidlen
            set record_offset [expr {$record_offset + 1 + $tls_sessidlen}]
            binary scan [TCP::payload] @${record_offset}S tls_ciphlen
            set record_offset [expr {$record_offset + 2 + $tls_ciphlen}]
            binary scan [TCP::payload] @${record_offset}c tls_complen
            set record_offset [expr {$record_offset + 1 + $tls_complen}]

            # If we're in TLS and we've not parsed all the payload in the record
            # at this point, then we have TLS extensions to process. We will detect
            # the TLS extension package and parse each record individually.
            if { ([TCP::payload length] >= $record_offset) } {
                binary scan [TCP::payload] @${record_offset}S tls_extenlen
                set record_offset [expr {$record_offset + 2}]
                binary scan [TCP::payload] @${record_offset}a* tls_extensions

                # Loop through the TLS extension data looking for a type 00 extension
                # record. This is the IANA code for server_name in the TLS transaction.
                for { set x 0 } { $x < $tls_extenlen } { incr x 4 } {
                    set start [expr {$x}]
                    binary scan $tls_extensions @${start}SS etype elen
                    if { ($etype == "00") } {
                        # A servername record is present. Pull this value out of the packet data
                        # and save it for later use. We start 9 bytes into the record to bypass
                        # type, length, and SNI encoding header (which is itself 5 bytes long), and
                        # capture the servername text (minus the header).
                        set grabstart [expr {$start + 9}]
                        set grabend [expr {$elen - 5}]
                        binary scan $tls_extensions @${grabstart}A${grabend} tls_servername
                        set start [expr {$start + $elen}]
                    } else {
                        # Bypass all other TLS extensions.
                        set start [expr {$start + $elen}]
                    }
                    set x $start
                }

                # Check to see whether we got a servername indication from TLS. If so,
                # make the appropriate changes.
                if { ([info exists tls_servername] ) } {
                    # Look for a matching servername in the Data Group and pool.
                    set ssl_profile [class match -value [string tolower $tls_servername] equals tls_servername]
                    set tls_pool [class match -value [string tolower $tls_servername] equals tls_servername_pool]
                    if { $ssl_profile == "" } {
                        # No match, so we allow this to fall through to the "default"
                        # clientssl profile.
                        SSL::enable
                    } else {
                        # A match was found in the Data Group, so we will change the SSL
                        # profile to the one we found. Hide this activity from the iRules
                        # parser.
                        set ssl_profile_enable "SSL::profile $ssl_profile"
                        catch { eval $ssl_profile_enable }
                        if { not ($tls_pool == "") } {
                            pool $tls_pool
                        } else {
                            pool $default_tls_pool
                        }
                        SSL::enable
                    }
                } else {
                    # No match because no SNI field was present. Fall through to the
                    # "default" SSL profile.
                    SSL::enable
                }
            } else {
                # We're not in a handshake. Keep on using the currently set SSL profile
                # for this transaction.
                SSL::enable
            }
            # Hold down any further processing and release the TCP session further
            # down the event loop.
            set detect_handshake 0
            TCP::release
        } else {
            # We've not been able to match an SNI field to an SSL profile. We will
            # fall back to the "default" SSL profile selected (this might lead to
            # certificate validation errors on non SNI-capable browsers).
            set detect_handshake 0
            SSL::enable
            TCP::release
        }
    }
}
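A quick way to verify the behaviour from a client is to request different servernames and compare the returned certificates; a small sketch where the VS address and hostname are placeholders:

# 192.0.2.10 stands in for the virtual server address
openssl s_client -connect 192.0.2.10:443 -servername testsite.site.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer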
Command Performance

Problem this snippet solves:

The article "Ten Steps to iRules Optimization" illustrates some ways to optimize your iRules. I took a look at the control statements and built a little iRule that tests those assertions and generates performance graphs, using Google Charts to present the findings.

How to use this snippet:

Dependencies

This iRule relies on external class files for the test of the "class match" command. The class names should be in the form "calc_xxx", where xxx is the list size you want to test, and should contain xxx entries with values from 0 to xxx-1. For a list size of 10, the class should look like this:

# Snippet in bigip.conf
class calc_10 {
    "0"
    "1"
    "2"
    "3"
    "4"
    "5"
    "6"
    "7"
    "8"
    "9"
}

I used perl to generate larger classes of size 100, 1000, 5000, and 10000 for my tests.
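If perl isn't handy, a small bash sketch can emit the same class stanza; the size is a parameter and the output is meant to be pasted into bigip.conf:

# Prints a calc_<size> class stanza; 1000 is just an example size
size=1000
echo "class calc_${size} {"
for ((i = 0; i < size; i++)); do
    echo "    \"$i\""
done
echo "}"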
Usage

Assign the iRule to a virtual server and then browse to the URL http://virtualserver/calccommands. I've included query string arguments to override the default test parameters as follows:

ls=nnn - List Size. You will need a class named calc_10 for a value of ls=10.
i=nnn - Number of iterations. This is how many times the test is performed for each list size.
gw=nnn - Graph Width (default value of 300)
gh=nnn - Graph Height (default value of 200)
ym=nnn - Graph Y Max value (default 500)

An example usage is http://virtualserver/calccommands?ls=1000&i=500. This will work on a list size of 1000 with 500 iterations per test.

Code:

when HTTP_REQUEST {
    #--------------------------------------------------------------------------
    # read in parameters
    #--------------------------------------------------------------------------
    set listsize [URI::query [HTTP::uri] "ls"];
    set iterations [URI::query [HTTP::uri] "i"];
    set graphwidth [URI::query [HTTP::uri] "gw"];
    set graphheight [URI::query [HTTP::uri] "gh"];
    set ymax [URI::query [HTTP::uri] "ym"];

    #--------------------------------------------------------------------------
    # set defaults
    #--------------------------------------------------------------------------
    if { ("" == $iterations) || ($iterations > 10000) } { set iterations 500; }
    if { "" == $listsize } { set listsize 5000; }
    if { "" == $graphwidth } { set graphwidth 300; }
    if { "" == $graphheight } { set graphheight 200; }
    if { "" == $ymax } { set ymax 500; }

    set modulus [expr $listsize / 5];
    set autosize 0;

    #--------------------------------------------------------------------------
    # build lookup list
    #--------------------------------------------------------------------------
    set matchlist "0";
    for {set i 1} {$i < $listsize} {incr i} {
        lappend matchlist "$i";
    }

    set luri [string tolower [HTTP::path]]
    switch -glob $luri {
        "/calccommands" {
            #------------------------------------------------------------------
            # check for existence of class file. If it doesn't exist
            # print out a nice error message. Otherwise, generate a page of
            # embedded graphs that route back to this iRule for processing
            #------------------------------------------------------------------
            if { [catch { class match "1" equals calc_$listsize } ] } {
                # error
                set content "<CENTER>BIG-IP Version $static::tcl_platform(tmmVersion)"
                append content "<H1><FONT color="red">ERROR: class file 'calc_$listsize' not found</FONT></H1>";
                append content "";
            } else {
                # Build the html and send requests back in for the graphs...
                set content "<CENTER>BIG-IP Version $static::tcl_platform(tmmVersion)"
                append content "<P>List Size: ${listsize}</P><P></P><HR size="3" width="75%" /><P>"
                set c 0;
                foreach item $matchlist {
                    set mod [expr $c % $modulus];
                    if { $mod == 0 } {
                        append content "<IMG src="$luri/$item"
                        append content "?ls=${listsize}&i=${iterations}&gw=${graphwidth}&gh=${graphheight}&ym=${ymax}" />";
                    }
                    incr c;
                }
                append content "</P></CENTER>";
            }
            HTTP::respond 200 content $content;
        }
        "/calccommands/*" {
            #------------------------------------------------------------------
            # Time various commands (switch, switch -glob, if/elseif, matchclass,
            # class match) and generate redirect to a Google Bar Chart
            #------------------------------------------------------------------
            set item [getfield $luri "/" 3]
            set labels "|"
            set values ""

            #------------------------------------------------------------------
            # Switch
            #------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "switch $item {"
            foreach i $matchlist { append expression "\"$i\" { } "; }
            append expression " } "
            append expression " } \n"
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "s|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #------------------------------------------------------------------
            # Switch -glob
            #------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "switch -glob $item {"
            foreach i $matchlist { append expression "\"$i\" { } "; }
            append expression " } "
            append expression " } \n"
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "s-g|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #------------------------------------------------------------------
            # If/Elseif
            #------------------------------------------------------------------
            set z 0;
            set y 0;
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            foreach i $matchlist {
                if { $z > 0 } { append expression "else"; }
                append expression "if { $item eq \"$i\" } { } ";
                incr z;
            }
            append expression " } \n";
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "If|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #------------------------------------------------------------------
            # Matchclass on list
            #------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "if { \[matchclass $item equals \$matchlist \] } { }"
            append expression " } \n";
            append expression "set t2 \[clock clicks -milliseconds\]";
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "mc|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #------------------------------------------------------------------
            # class match (with class)
            #------------------------------------------------------------------
            set expression "set t1 \[clock clicks -milliseconds\]; \n"
            append expression "for { set y 0 } { \$y < $iterations } { incr y } { "
            append expression "if { \[class match $item equals calc_$listsize \] } { }"
            append expression " } \n";
            append expression "set t2 \[clock clicks -milliseconds\]";
            log local0. $expression;
            eval $expression;
            set duration [expr {$t2 - $t1}]
            if { [expr {$duration < 0}] } { log local0. "NEGATIVE TIME ($item, matchclass: $t1 -> $t2"; }
            append labels "c|";
            if { $values ne "" } { append values ","; }
            append values "$duration";
            if { $autosize && ($duration > $ymax) } { set ymax $duration }

            #------------------------------------------------------------------
            # build redirect for the google chart and issue a redirect
            #------------------------------------------------------------------
            set mod [expr $item % 10]
            set newuri "http://${mod}.chart.apis.google.com/chart?chxl=0:${labels}&chxr=1,0,${ymax}&chxt=x,y"
            append newuri "&chbh=a&chs=${graphwidth}x${graphheight}&cht=bvg&chco=A2C180&chds=0,${ymax}&chd=t:${values}"
            append newuri "&chdl=(in+ms)&chtt=Perf+(${iterations}-${item}/${listsize})&chg=0,2&chm=D,0000FF,0,0,3,1"
            HTTP::redirect $newuri;
        }
    }
}
Trigger js challenge/Captcha for ip reputation/ip intelligence categories

Problem solved by this Code Snippet

Because some ISPs or cloud providers do not monitor their users, client IP addresses are often marked as "spam sources" or "windows exploits", and as the IP addresses are dynamic, a legitimate user can later end up using such an address. For that reason these categories are often disabled in the IP Intelligence profile or under the ASM/AWAF policy. This usually happens in public clouds that do not monitor what their users do: an IP gets marked as bad, then a day or two later another, legitimate user has that IP address, and this causes the issue. For many of my clients I had to disable the IP reputation/IP intelligence category "spam sources", and in some cases "windows exploits", so having javascript/captcha checks seems a nice compromise 😎 To still make use of these categories, the users coming from those IP addresses can be forced to solve captcha checks, or at least be checked for javascript support!

How to use this Code Snippet

1. Have AWAF/ASM and IP Intelligence licensed.
2. Add an AWAF/ASM policy with the iRule support option (not enabled by default under the policy) and/or a Bot profile under the virtual server.
3. Optionally add an IP Intelligence profile, or enable IP Intelligence under the WAF policy, without the categories that cause a lot of false positives.
4. Add the iRule and, if needed, modify the categories for which it triggers.
5. Do not forget to first create the data group used in the code (a sketch follows at the end of this snippet) or delete that part of the code, and to uncomment the Bot part of the code if you plan to do a JS check rather than captcha (and maybe comment out the captcha part)!

Code Snippet Meta Information

Version: 17.1.3
Coding Language: TCL

Code

You can find the code and further documentation in my GitHub repository: reputation-javascript-captcha-challlenge/ at main · Nikoolayy1/reputation-javascript-captcha-challlenge

when HTTP_REQUEST {
    # Take the ip address for ip reputation/intelligence check from the XFF header
    # if it comes from the whitelisted source ip addresses in data group "client_ip_class"
    if { [HTTP::header exists "X-Forwarded-For"] && [class match [IP::client_addr] equals "/Common/client_ip_class"] } {
        set trueIP [HTTP::header "X-Forwarded-For"]
    } else {
        set trueIP [IP::client_addr]
    }

    # Check if IP reputation is triggered and it is containing "Spam Sources"
    if { ([llength [IP::reputation $trueIP]] != 0) && ([IP::reputation $trueIP] contains "Spam Sources") }{
        log local0. "The category is [IP::reputation $trueIP] from [IP::client_addr]"
        # Set the variable to 1 (boolean true) to trigger ASM captcha or bot defense javascript
        set js_ch 1
    } else {
        set js_ch 0
    }

    # Custom response page just for testing if there is no real backend origin server for testing
    if {!$js_ch} {
        HTTP::respond 200 content {
            <html>
            <head>
            <title>Apology Page</title>
            </head>
            <body>
            We are sorry, but the site you are looking for is temporarily out of service<br>
            If you feel you have reached this page in error, please try again.
            </body>
            </html>
        }
    }
}

# when BOTDEFENSE_ACTION {
#     # Trigger bot defense action javascript check for Spam Sources
#     if {$js_ch && (not ([BOTDEFENSE::reason] starts_with "passed browser challenge")) && ([BOTDEFENSE::action] eq "allow") }{
#         BOTDEFENSE::action browser_challenge
#     }
# }

when ASM_REQUEST_DONE {
    # Trigger ASM captcha check only for users coming from Spam Sources that have
    # not already passed the captcha check (don't have the captcha cookie)
    if {$js_ch && [ASM::captcha_status] ne "correct"} {
        set res [ASM::captcha]
        if {$res ne "ok"} {
            log local0. "Cannot send captcha_challenge: \"$res\""
        }
    }
}

Extra References:

BOTDEFENSE::action
ASM::captcha
ASM::captcha_status
NGINX App Protect v5 Signature Notifications

When working with NAP (NGINX App Protect) you don't have an easy way of knowing when any of the signatures are updated. As an old BIG-IP guy I find that rather strange: there you have built-in automatic updates and notifications. Unfortunately there aren't any APIs you can probe, which would have been the best way of doing it. Hopefully that will come one day. However, "friction" and "hard" will not keep me from finding a solution 😆

I have previously made a solution for NAPv4, and I had to gear myself up mentally to get going on a NAPv5 version. The reason for the delay is the different way NAPv4 and NAPv5 are designed. Where NAPv4 is one module loaded in NGINX, NAPv5 is completely detached from NGINX (well, almost: you still need to load a small module to get the traffic from NGINX to NAP) and only works with containers.

NAPv5 has moved the signature "storage" from the actual host it is running on (e.g. an installed package) to the policy. This has the consequence that finding a valid "source of truth" for the latest signature versions is not as simple as building a new image and seeing which versions got installed. There are very good reasons for this design that I will come back to later.

When you fire up NAPv5 you get three containers for the data plane (NGINX, waf-enforcer and waf-config-mgr) and one for the "control plane" (waf-compiler). For this solution the "control plane" is the useful one. It isn't really a control plane, but it gives a nice picture of how it is detached from the actual processing of traffic.

When you update your signatures you are actually doing it through the waf-compiler. The waf-compiler is a container hosting the actual signature databases, and every time a new version is released you need to rebuild this container, compile your policies into a new version, and reload NGINX. And this is what I take advantage of when I look for signature updates. It has the upside that you only need the waf-compiler to get the information you need. My solution takes care of the entire process and makes sure that you are always running with the latest signatures.

Back to the reason why the split of functions is a very good thing. When you build a new version of the NGINX image and deploy it into production, NAP needs to compile the policies as they load. During the compilation NGINX is not moving any traffic! This becomes an annoying problem even when you have a low number of policies. I have installations where it takes 5 to 10 minutes from deployment of the new image until it starts moving traffic. That is a crazy long time when you are used to working with micro-services and expect everything to flip within seconds. If you have your NAPv4 hooked up to a NGINX Instance Manager (NIM), the problem is somewhat mitigated, as NIM will compile the policies before sending them to the gateways. NIM is not a nimble piece of software, though, so it doesn't always fit into the environment.

And now here is my hack to the notification problem:

The solution consists of two bash scripts and one HTML template. The template is used when sending a notification mail. I wanted it to be pretty, and that was easiest with HTML; strictly speaking you could do with just a simple text-based mail. Save all three in the same directory.

The main script is called "waf_policy_auto_compile.sh" and is the one you put into crontab. The main script will build a new waf-compiler image and compile a test policy. The outcome of that is information about what versions are the newest.
It will then extract the versions from an old policy and simply see if any of the versions differ. For this to work you need to have an uncompiled policy (you can just use the default one) and a compiled version of it ready beforehand.

When a diff has been identified, the notification logic is executed and a second script is called: "compile_waf_policies.sh". It basically just trawls through the directory of your policies and logging profiles and compiles a new version of them all. It is not necessary to recompile the logging profiles, so this will probably change in the next version. As the compilation completes, the main script will nudge NGINX to reload and thus implement all the new versions.

You can run "waf_policy_auto_compile.sh" with a verbose flag (-v) and a debug flag (-d). The verbose flag is intended to be used when you run it on a terminal and want the information displayed there. Debug is, well, for debug 😝

The construction of the scripts is based on my own needs, but they should be easy to adjust for any need. I will be happy for any feedback, so please don't hold back 😄

version_report_template.html:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>WAF Policy Version Report</title>
<style>
    body { font-family: system-ui, sans-serif; }
    .ok { color: #28a745; font-weight: bold; }
    .warn { color: #f0ad4e; font-weight: bold; }
    .section { margin-bottom: 1.2em; }
    .label { font-weight: bold; }
</style>
</head>
<body>
<h2>WAF Policy Version Report</h2>
<div class="section">
    <div class="label">Attack Signatures:</div>
    <div>Current: <span>{{ATTACK_OLD}}</span></div>
    <div>New: <span>{{ATTACK_NEW}}</span></div>
    <div>Status: <span class="{{ATTACK_CLASS}}">{{ATTACK_STATUS}}</span></div>
</div>
<div class="section">
    <div class="label">Bot Signatures:</div>
    <div>Current: <span>{{BOT_OLD}}</span></div>
    <div>New: <span>{{BOT_NEW}}</span></div>
    <div>Status: <span class="{{BOT_CLASS}}">{{BOT_STATUS}}</span></div>
</div>
<div class="section">
    <div class="label">Threat Campaigns:</div>
    <div>Current: <span>{{THREAT_OLD}}</span></div>
    <div>New: <span>{{THREAT_NEW}}</span></div>
    <div>Status: <span class="{{THREAT_CLASS}}">{{THREAT_STATUS}}</span></div>
</div>
<div>Run completed: {{RUN_DATETIME}}</div>
</body>
</html>

compile_waf_policies.sh:

#!/bin/bash
# ==============================================================================
# Script Name: compile_waf_policies.sh
#
# Description:
#   Compiles:
#     1. WAF policy JSON files from the 'policies' directory
#     2. WAF logging JSON files from the 'logging' directory
#   using the 'waf-compiler-latest:custom' Docker image. Output goes to
#   '/opt/napv5/app_protect_etc_config' where NGINX and waf-config-mgr
#   can reach them.
#
# Requirements:
#   - Docker installed and accessible
#   - Docker image 'waf-compiler-latest:custom' present locally
#
# Usage:
#   ./compile_waf_policies.sh
# ==============================================================================

set -euo pipefail
IFS=$'\n\t'
SECONDS=0  # Track total execution time

# ========================
# CONFIGURABLE VARIABLES
# ========================
BASE_DIR="/root/napv5/waf-compiler"
OUTPUT_DIR="/opt/napv5/app_protect_etc_config"
POLICY_INPUT_DIR="$BASE_DIR/policies"
POLICY_OUTPUT_DIR="$OUTPUT_DIR"
LOGGING_INPUT_DIR="$BASE_DIR/logging"
LOGGING_OUTPUT_DIR="$OUTPUT_DIR"
GLOBAL_SETTINGS="$BASE_DIR/global_settings.json"
DOCKER_IMAGE="waf-compiler-latest:custom"

# ========================
# VALIDATION
# ========================
echo "🔧 Validating paths..."
[[ -d "$POLICY_INPUT_DIR" ]] || { echo "❌ Error: Policy input directory '$POLICY_INPUT_DIR' does not exist."; exit 1; }
[[ -f "$GLOBAL_SETTINGS" ]] || { echo "❌ Error: Global settings file '$GLOBAL_SETTINGS' not found."; exit 1; }

mkdir -p "$POLICY_OUTPUT_DIR"
mkdir -p "$LOGGING_OUTPUT_DIR"

# ========================
# POLICY COMPILATION
# ========================
echo "📦 Compiling WAF policies from: $POLICY_INPUT_DIR"
for POLICY_FILE in "$POLICY_INPUT_DIR"/*.json; do
    [[ -f "$POLICY_FILE" ]] || continue
    BASENAME=$(basename "$POLICY_FILE" .json)
    OUTPUT_FILE="$POLICY_OUTPUT_DIR/${BASENAME}.tgz"

    echo "⚙️ [Policy] Compiling $(basename "$POLICY_FILE") -> $(basename "$OUTPUT_FILE")"
    docker run --rm \
        -v "$POLICY_INPUT_DIR":"$POLICY_INPUT_DIR" \
        -v "$POLICY_OUTPUT_DIR":"$POLICY_OUTPUT_DIR" \
        -v "$(dirname "$GLOBAL_SETTINGS")":"$(dirname "$GLOBAL_SETTINGS")" \
        "$DOCKER_IMAGE" \
        -g "$GLOBAL_SETTINGS" \
        -p "$POLICY_FILE" \
        -o "$OUTPUT_FILE"
done

# ========================
# LOGGING COMPILATION
# ========================
echo "📝 Compiling WAF logging configs from: $LOGGING_INPUT_DIR"
if [[ -d "$LOGGING_INPUT_DIR" ]]; then
    for LOG_FILE in "$LOGGING_INPUT_DIR"/*.json; do
        [[ -f "$LOG_FILE" ]] || continue
        BASENAME=$(basename "$LOG_FILE" .json)
        OUTPUT_FILE="$LOGGING_OUTPUT_DIR/${BASENAME}.tgz"

        echo "⚙️ [Logging] Compiling $(basename "$LOG_FILE") -> $(basename "$OUTPUT_FILE")"
        docker run --rm \
            -v "$LOGGING_INPUT_DIR":"$LOGGING_INPUT_DIR" \
            -v "$LOGGING_OUTPUT_DIR":"$LOGGING_OUTPUT_DIR" \
            "$DOCKER_IMAGE" \
            -l "$LOG_FILE" \
            -o "$OUTPUT_FILE"
    done
else
    echo "⚠️ Skipping logging config compilation: directory '$LOGGING_INPUT_DIR' does not exist."
fi

# ========================
# COMPLETION MESSAGE
# ========================
RUNTIME=$SECONDS
printf "\n✅ Compilation complete.\n"
echo " - Policies output: $POLICY_OUTPUT_DIR"
echo " - Logging output: $LOGGING_OUTPUT_DIR"
echo
printf "⏱️ Total time taken: %02d minutes %02d seconds\n" $((RUNTIME / 60)) $((RUNTIME % 60))
echo

waf_policy_auto_compile.sh:

#!/bin/bash
###############################################################################
# waf_policy_auto_compile.sh
#
# - Only prints colorized summary output to terminal if -v/--verbose is used
# - Mails a styled HTML report using a template, substituting version numbers/status/colors
# - Debug output (step_log) only to syslog if -d/--debug is used
# - Otherwise: completely silent except for errors
# - All main blocks are modularized in functions
###############################################################################

set -euo pipefail
IFS=$'\n\t'

# ===== CONFIGURABLE VARIABLES =====
WORKROOT="/root/napv5"
WORKDIR="$WORKROOT/waf-compiler"
DOCKERFILE="$WORKDIR/Dockerfile"
BUNDLE_DIR="$WORKDIR/test"
NEW_BUNDLE="$BUNDLE_DIR/test_new.tgz"
OLD_BUNDLE="$BUNDLE_DIR/test_old.tgz"
NEW_META="$BUNDLE_DIR/test_new_meta.json"
COMPILER_IMAGE="waf-compiler-latest:custom"
EMAIL_RECIPIENT="example@example.com"
EMAIL_SUBJECT="WAF Compiler Update Notification"
NGINX_RELOAD_CMD="docker exec nginx-plus nginx -s reload"
HTML_TEMPLATE="$WORKDIR/version_report_template.html"
HTML_REPORT="$WORKDIR/version_report.html"
VERBOSE=0
DEBUG=0

# ===== DEBUG AND ERROR LOGGING =====
exec 2> >(tee -a /tmp/waf_policy_auto_compile_error.log | /usr/bin/logger -t waf_policy_auto_compile_error)

step_log() {
    if [ "$DEBUG" -eq 1 ]; then
        echo "DEBUG: $1" | /usr/bin/logger -t waf_policy_auto_compile
    fi
}

# ===== ARGUMENT PARSING =====
while [[ $# -gt 0 ]]; do
    case "$1" in
        -v|--verbose)
            VERBOSE=1
            shift
            ;;
        -d|--debug)
            DEBUG=1
            echo "Debug log can be found in the syslog..."
            shift
            ;;
        -*)
            echo "Unknown option: $1" >&2
            exit 1
            ;;
        *)
            shift
            ;;
    esac
done

# ----- LOG INITIAL ENVIRONMENT IF DEBUG -----
step_log "waf_policy_auto_compile starting (PID $$)"
step_log "Script PATH: $PATH"
step_log "which docker: $(which docker 2>/dev/null)"
step_log "which jq: $(which jq 2>/dev/null)"

# ===== COLOR DEFINITIONS =====
color_reset="\033[0m"
color_green="\033[1;32m"
color_yellow="\033[1;33m"

# ===== LOGGING FUNCTIONS =====
log() {
    # Only log to terminal if VERBOSE is enabled
    if [ "$VERBOSE" -eq 1 ]; then
        echo "[$(date --iso-8601=seconds)] $*"
    fi
}

# ===== HTML REPORT GENERATOR =====
generate_html_report() {
    local attack_old="$1"
    local attack_new="$2"
    local attack_status="$3"
    local attack_class="$4"
    local bot_old="$5"
    local bot_new="$6"
    local bot_status="$7"
    local bot_class="$8"
    local threat_old="$9"
    local threat_new="${10}"
    local threat_status="${11}"
    local threat_class="${12}"
    local datetime
    datetime=$(date --iso-8601=seconds)

    cp "$HTML_TEMPLATE" "$HTML_REPORT"
    sed -i "s|{{ATTACK_OLD}}|$attack_old|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_NEW}}|$attack_new|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_STATUS}}|$attack_status|g" "$HTML_REPORT"
    sed -i "s|{{ATTACK_CLASS}}|$attack_class|g" "$HTML_REPORT"
    sed -i "s|{{BOT_OLD}}|$bot_old|g" "$HTML_REPORT"
    sed -i "s|{{BOT_NEW}}|$bot_new|g" "$HTML_REPORT"
    sed -i "s|{{BOT_STATUS}}|$bot_status|g" "$HTML_REPORT"
    sed -i "s|{{BOT_CLASS}}|$bot_class|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_OLD}}|$threat_old|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_NEW}}|$threat_new|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_STATUS}}|$threat_status|g" "$HTML_REPORT"
    sed -i "s|{{THREAT_CLASS}}|$threat_class|g" "$HTML_REPORT"
    sed -i "s|{{RUN_DATETIME}}|$datetime|g" "$HTML_REPORT"
}

# ===== BUILD COMPILER IMAGE =====
build_compiler() {
    step_log "about to build_compiler"
    docker build --no-cache --platform linux/amd64 \
        --secret id=nginx-crt,src="$WORKROOT/nginx-repo.crt" \
        --secret id=nginx-key,src="$WORKROOT/nginx-repo.key" \
        -t "$COMPILER_IMAGE" \
        -f "$DOCKERFILE" "$WORKDIR" > "$WORKDIR/waf_compiler_build.log" 2>&1 || {
        echo "ERROR: docker build failed. Dumping build log:" | /usr/bin/logger -t waf_policy_auto_compile_error
        cat "$WORKDIR/waf_compiler_build.log" | /usr/bin/logger -t waf_policy_auto_compile_error
        exit 1
    }
    step_log "after build_compiler"
}

# ===== COMPILE TEST POLICY =====
compile_test_policy() {
    step_log "about to compile_test_policy"
    docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
        -p /bundle/test.json -o /bundle/test_new.tgz > "$NEW_META"
    step_log "after compile_test_policy"
    if [ -f "$NEW_META" ]; then
        step_log "$(cat "$NEW_META")"
    else
        step_log "NEW_META does not exist"
    fi
}

# ===== CHECK OLD_BUNDLE =====
check_old_bundle() {
    step_log "about to check OLD_BUNDLE"
    if [ -f "$OLD_BUNDLE" ]; then
        step_log "$(ls -l "$OLD_BUNDLE")"
    else
        step_log "OLD_BUNDLE does not exist"
    fi
}

# ===== GET NEW VERSIONS FUNCTION =====
get_new_versions() {
    jq -r '
    {
        "attack": .attack_signatures_package.version,
        "bot": .bot_signatures_package.version,
        "threat": .threat_campaigns_package.version
    }' "$NEW_META"
}

# ===== VERSION EXTRACTION FROM OLD BUNDLE =====
extract_bundle_versions() {
    docker run --rm -v "$BUNDLE_DIR:/bundle" "$COMPILER_IMAGE" \
        -dump -bundle "/bundle/test_old.tgz"
}

extract_versions_from_dump() {
    extract_bundle_versions | awk '
    BEGIN { print "{" }
    /attack-signatures:/ { in_attack=1; next }
    /bot-signatures:/ { in_bot=1; next }
    /threat-campaigns:/ { in_threat=1; next }
    in_attack && /version:/ {
        gsub("version: ", "")
        printf "\"attack\":\"%s\",\n", $1
        in_attack=0
    }
    in_bot && /version:/ {
        gsub("version: ", "")
        printf "\"bot\":\"%s\",\n", $1
        in_bot=0
    }
    in_threat && /version:/ {
        gsub("version: ", "")
        printf "\"threat\":\"%s\"\n", $1
        in_threat=0
    }
    END { print "}" }
    '
}

get_old_versions() {
    extract_versions_from_dump
}

# ===== GET & PRINT VERSIONS =====
get_versions() {
    step_log "about to get_new_versions"
    new_versions=$(get_new_versions)
    step_log "new_versions: $new_versions"
    step_log "after get_new_versions"
    step_log "about to get_old_versions"
    old_versions=$(get_old_versions)
    step_log "old_versions: $old_versions"
    step_log "after get_old_versions"
}

# ===== VERSION COMPARISON =====
compare_versions() {
    step_log "compare_versions start"
    attack_old=$(echo "$old_versions" | jq -r .attack)
    attack_new=$(echo "$new_versions" | jq -r .attack)
    bot_old=$(echo "$old_versions" | jq -r .bot)
    bot_new=$(echo "$new_versions" | jq -r .bot)
    threat_old=$(echo "$old_versions" | jq -r .threat)
    threat_new=$(echo "$new_versions" | jq -r .threat)

    attack_status=$([[ "$attack_old" != "$attack_new" ]] && echo "Updated" || echo "No Change")
    bot_status=$([[ "$bot_old" != "$bot_new" ]] && echo "Updated" || echo "No Change")
    threat_status=$([[ "$threat_old" != "$threat_new" ]] && echo "Updated" || echo "No Change")

    attack_class=$([[ "$attack_status" == "Updated" ]] && echo "warn" || echo "ok")
    bot_class=$([[ "$bot_status" == "Updated" ]] && echo "warn" || echo "ok")
    threat_class=$([[ "$threat_status" == "Updated" ]] && echo "warn" || echo "ok")

    echo "Attack:$attack_status Bot:$bot_status Threat:$threat_status" > "$WORKDIR/status_flags.txt"

    [[ "$attack_status" == "Updated" ]] && attack_status_colored="${color_yellow}*** Updated ***${color_reset}" || attack_status_colored="${color_green}No Change${color_reset}"
    [[ "$bot_status" == "Updated" ]] && bot_status_colored="${color_yellow}*** Updated ***${color_reset}" || bot_status_colored="${color_green}No Change${color_reset}"
    [[ "$threat_status" == "Updated" ]] && threat_status_colored="${color_yellow}*** Updated ***${color_reset}" || threat_status_colored="${color_green}No Change${color_reset}"

    {
        echo -e "Version comparison for container \033[1mNAPv5\033[0m:\n"
        echo -e "Attack Signatures:"
        echo -e "  Current Version: $attack_old"
        echo -e "  New Version: $attack_new"
        echo -e "  Status: $attack_status_colored\n"
        echo -e "Threat Campaigns:"
        echo -e "  Current Version: $threat_old"
        echo -e "  New Version: $threat_new"
        echo -e "  Status: $threat_status_colored\n"
        echo -e "Bot Signatures:"
        echo -e "  Current Version: $bot_old"
        echo -e "  New Version: $bot_new"
        echo -e "  Status: $bot_status_colored"
    } > "$WORKDIR/version_report.ansi"

    sed 's/\x1B\[[0-9;]*[mK]//g' "$WORKDIR/version_report.ansi" > "$WORKDIR/version_report.txt"

    step_log "Calling log_versions_syslog"
    log_versions_syslog "$attack_old" "$attack_new" "$attack_status" "$attack_class" \
        "$bot_old" "$bot_new" "$bot_status" "$bot_class" \
        "$threat_old" "$threat_new" "$threat_status" "$threat_class"
    step_log "compare_versions finished"
}

# ===== SYSLOG VERSION LOGGING and HTML REPORT GEN =====
log_versions_syslog() {
    # Args:
    #  1-attack_old  2-attack_new  3-attack_status  4-attack_class
    #  5-bot_old     6-bot_new     7-bot_status     8-bot_class
    #  9-threat_old 10-threat_new 11-threat_status 12-threat_class
    local msg
    msg="AttackSig (current: $1, latest: $2), BotSig (current: $5, latest: $6), ThreatCamp (current: $9, latest: ${10})"
    /usr/bin/logger -t waf_policy_auto_compile "$msg"
    # Also print to terminal if VERBOSE is enabled
    if [ "$VERBOSE" -eq 1 ]; then
        echo "$msg"
    fi
    # Always (re)generate HTML for the mail at this point
    generate_html_report "$@"
}

# ===== RESPONSE ACTIONS =====
compile_all_policies() {
    log "Change detected – compiling all policies..."
    if [ "$VERBOSE" -eq 1 ]; then
        "$WORKDIR/compile_waf_policies.sh"
    else
        "$WORKDIR/compile_waf_policies.sh" > /dev/null 2>&1
    fi
}

reload_nginx() {
    log "Reloading NGINX..."
    eval "$NGINX_RELOAD_CMD"
}

rotate_bundles() {
    log "Archiving new test bundle as old..."
    mv "$NEW_BUNDLE" "$OLD_BUNDLE"
    rm -f "$NEW_META"
}

send_report_email() {
    local html_report="$1"
    mail -s "$EMAIL_SUBJECT" -a "Content-Type: text/html" "$EMAIL_RECIPIENT" < "$html_report"
}

# ===== MAIN LOGIC =====
main() {
    build_compiler
    compile_test_policy
    check_old_bundle
    get_versions
    compare_versions

    if [[ "$VERBOSE" -eq 1 ]]; then
        cat "$WORKDIR/version_report.ansi"
    fi

    if grep -q "Updated" "$WORKDIR/status_flags.txt"; then
        if [[ "$VERBOSE" -eq 1 ]]; then
            echo "Detected updates. Recompiling policies, reloading NGINX, sending report."
        fi
        compile_all_policies
        reload_nginx
        rotate_bundles
        send_report_email "$HTML_REPORT"
    else
        log "No changes detected – nothing to do."
    fi

    log "Done."
}

main "$@"

And that should be it.
Modify UCS Archive so it doesn't backup epsec images

Problem this snippet solves:

Currently, if you have APM installed, the UCS archive process also backs up the epsec images. I have written a bash script which modifies the UCS archive process so that it does not include these images in the UCS archive, and which also modifies the bigip.conf that is archived so that it does not contain references to these images. By default, APM has its own epsec image in /var/sam/images, so when your UCS archive is loaded onto a new or rebuilt system, that system will just use its default epsec image. This means that if you have uploaded a new epsec image to fix an issue, you will need to ensure that this is done on any system you restore the UCS archive to.

How to use this snippet:

Just save the bash script to a file like /shared/bin/modify_ucs.sh and then run the script:

# sh /shared/bin/modify_ucs.sh

The script modifies /usr/libdata/configsync/cs.dat and creates two files, config_save_pre and config_save_post, in the same folder. It also creates a backup of cs.dat as cs.YYYY_MM_DD_HH_MM.bak. The /usr filesystem is mounted read-only, so I remount it read-write to do this.

To remove the changes:

mount -o remount,rw /usr
cd /usr/libdata/configsync/
mv -f `ls -1t cs.dat.[0-9][0-9][0-9][0-9]*.bak|head -1` cs.dat
rm -f config_save_p[or][se]*
mount -o remount,ro /usr

This modification does not survive an upgrade, so you will need to run the script again after any upgrade. If you are running a cron job to create a daily/weekly backup, you can just call this script before you run the tmsh save sys ucs command, as it checks whether the modification has already been done.

Code : 704545
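Regarding the cron job idea above, a hedged example entry; the schedule, paths, and UCS file name are placeholders, and note that % signs must be escaped in a crontab:

# Weekly: patch cs.dat first, then save a dated UCS archive
0 2 * * 0 sh /shared/bin/modify_ucs.sh && tmsh save sys ucs /var/local/ucs/backup_$(date +\%Y\%m\%d).ucs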
Ultimate irule debug - Capture and investigate

Problem this snippet solves:

I decided to share this iRule for several reasons. When I help our community on DevCentral, I regularly see people making recurring requests:

* How do I capture the request headers?
* How do I capture the response headers?
* How do I check the information in the POST request?
* How do I check response data (body)?
* Which cipher/protocol is used (SSL/TLS)?
* I set up client certificate authentication, but I do not know if it works and if my certificate passes authentication.
* I want to retrieve information from my authentication certificate (subject, issuer, ...).
* My authentication by certificate does not work and I get an error; what do I have to do?
* I have latencies when dealing with my request. Where does the latency come from (F5, server, ...)?
* I set up SSO (Kerberos delegation, JSON POST, form SSO).
* I do not feel that my request is sent to the backend (or the Kerberos token).
* Does F5 add information or modify the request/response?
* Which pool member has been selected?
* My VS doesn't answer (where does the problem come from)?
* ...

Instead of having an iRule for each request, why not consolidate everything and provide a compact iRule? This iRule can help you greatly during your investigations and allows you to capture these different items.

How to use this snippet:

You have a function that allows you to activate the desired logs (1 to activate and 0 to disable), as described below:

array set app_arrway_referer {
    client_dest_ip_port 1
    client_cert 1
    http_request 1
    http_request_release 1
    http_request_payload 0
    http_lb_selected 1
    http_response 0
    http_response_release 0
    http_response_payload 0
    http_time_process 0
}

The posted logs will be preceded by a UID, which allows you to follow your request/response from the beginning to the end of the process. You can, for example, grep the log for the UID to follow the complete process (request/response). The UID is generated in the following way:

set uid [string range [AES::key 256] 15 23]
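For example, to follow one transaction end-to-end you can grep the LTM log for its UID; a small sketch using the UID from the sample logs below, assuming default local0 logging to /var/log/ltm:

# Follow a single transaction through all enabled log sections
grep 382951fe9 /var/log/ltm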
client_dest_ip_port: This section allows you to see the source IP/port and destination IP/port.

<CLIENT_ACCEPTED>: ----------- client_dest_ip_port -----------
<CLIENT_ACCEPTED>: uid: 382951fe9 - Client IP Src: 10.20.30.4:60419
<CLIENT_ACCEPTED>: uid: 382951fe9 - Client IP Dest:192.168.30.45:443
<CLIENT_ACCEPTED>: ----------- client_dest_ip_port -----------

client_cert: This section allows you to check the result code for peer certificate verification (and also whether a client certificate was provided). Moreover, you will be able to retrieve the information of your authentication certificate (issuer, subject, ...). If the authentication certificate you provide is not valid, an error message will be returned (e.g. certificate chain too long, invalid CA certificate, ...). All errors are listed in the link below:

https://devcentral.f5.com/wiki/iRules.SSL__verify_result.ashx

<HTTP_REQUEST>: ----------- client_cert -----------
<HTTP_REQUEST>: uid: 382951fe9 - cert number: 0
<HTTP_REQUEST>: uid: 382951fe9 - subject: OU=myOu, CN=youssef
<HTTP_REQUEST>: uid: 382951fe9 - Issuer Info: DC=com, DC=domain, CN=MobIssuer
<HTTP_REQUEST>: uid: 382951fe9 - cert serial: 22:00:30:5c:de:dd:ec:23:6e:b5:e6:77:bj:01:00:00:22:3c:dc
<HTTP_REQUEST>: ----------- client_cert -----------

OR

<HTTP_REQUEST>: ----------- client_cert -----------
<HTTP_REQUEST>: uid: 382951fe9 - No client certificate provided
<HTTP_REQUEST>: ----------- client_cert -----------

http_request: This section allows you to retrieve the complete client HTTP request headers (that is, the method, URI, version, and all headers). I also added the protocol, the cipher, and the name of the VS used.

<HTTP_REQUEST>: ----------- http_request -----------
<HTTP_REQUEST>: uid: 382951fe9 - protocol: https
<HTTP_REQUEST>: uid: 382951fe9 - cipher name: ECDHE-RSA-AES128-GCM-SHA256
<HTTP_REQUEST>: uid: 382951fe9 - cipher version: TLSv1.2
<HTTP_REQUEST>: uid: 382951fe9 - VS Name: /Common/vs-myapp-443
<HTTP_REQUEST>: uid: 382951fe9 - Request: POST myapp.mydomain.com/browser-management/users/552462/playlist/play/api
<HTTP_REQUEST>: uid: 382951fe9 - Host: myapp.mydomain.com
<HTTP_REQUEST>: uid: 382951fe9 - Connection: keep-alive
<HTTP_REQUEST>: uid: 382951fe9 - Content-Length: 290
<HTTP_REQUEST>: uid: 382951fe9 - Accept: application/json, text/javascript, */*; q=0.01
<HTTP_REQUEST>: uid: 382951fe9 - X-Requested-With: XMLHttpRequest
<HTTP_REQUEST>: uid: 382951fe9 - User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36
<HTTP_REQUEST>: uid: 382951fe9 - Referer: https://myapp.mydomain.com/
<HTTP_REQUEST>: uid: 382951fe9 - Accept-Encoding: gzip, deflate, sdch, br
<HTTP_REQUEST>: uid: 382951fe9 - Accept-Language: en-US,en;q=0.8
<HTTP_REQUEST>: uid: 382951fe9 - Cookie: RLT=SKjpfdkFDKjkufd976HJhldds=; secureauth=true; STT="LKJSDKJpjslkdjslkjKJSHjfdskjhoLHkjh78dshjhd980szKJH"; ASP.SessionId=dsliulpoiukj908798dsjkh
<HTTP_REQUEST>: uid: 382951fe9 - X-Forwarded-For: 10.10.10.22
<HTTP_REQUEST>: ----------- http_request -----------

http_request_release: This section is triggered when the system is about to release HTTP data on the serverside of the connection. The event fires after modules process the HTTP request, so it allows you to check the request after F5 processing. Suppose you have set up APM with Kerberos SSO: you will be able to see the Kerberos token inserted by F5, or the XFF inserted by the HTTP profile, and so on.

<HTTP_REQUEST_RELEASE>: ----------- http_request_release -----------
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - VS Name: /Common/vs-myapp-443
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Request: GET myapp.mydomain.com/browser-management/users/552462/playlist/play/api
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Host: myapp.mydomain.com
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Connection: keep-alive
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Accept: application/json, text/javascript, */*; q=0.01
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - X-Requested-With: XMLHttpRequest
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Referer: https://myapp.mydomain.com/
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Accept-Encoding: gzip, deflate, sdch, br
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Accept-Language: en-US,en;q=0.8
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - Cookie: RLT=SKjpfdkFDKjkufd976HJhldds=; secureauth=true; STT="LKJSDKJpjslkdjslkjKJSHjfdskjhoLHkjh78dshjhd980szKJH"; ASP.SessionId=dsliulpoiukj908798dsjkh
<HTTP_REQUEST_RELEASE>: uid: 382951fe9 - X-Forwarded-For: 10.10.10.22
<HTTP_REQUEST_RELEASE>: ----------- http_request_release -----------

http_request_payload: This section allows you to retrieve the HTTP request body.
http_request_payload: this section lets you retrieve the HTTP request body.

<HTTP_REQUEST>: ----------- http_request_payload -----------
<HTTP_REQUEST>: uid: 382951fe9 - Content-Length header null in request (logged for a GET, or a POST without content)
<HTTP_REQUEST>: ----------- http_request_payload -----------

or

<HTTP_REQUEST>: ----------- http_request_payload -----------
<HTTP_REQUEST>: uid: 382951fe9 - post payload: { id: 24, retrive: 'identity', service: 'IT'}
<HTTP_REQUEST>: ----------- http_request_payload -----------

http_lb_selected: this section shows which pool member has been selected. Once a pool member has been selected, you will not see this log again until another load-balancing decision is made. If you want to see the selected pool member for every request, that information is also logged in http_response.

<LB_SELECTED>: ----------- http_lb_selected -----------
<LB_SELECTED>: uid: 382951fe9 - pool member IP: /Common/pool-name 10.22.33.54 443
<LB_SELECTED>: ----------- http_lb_selected -----------

http_response: this section lets you retrieve the status line and header lines of the server response. You can also see which pool member was selected.

<HTTP_RESPONSE>: ----------- http_response -----------
<HTTP_RESPONSE>: uid: 382951fe9 - status: 200
<HTTP_RESPONSE>: uid: 382951fe9 - pool member IP: /Common/pool-name 10.22.33.54 443
<HTTP_RESPONSE>: uid: 382951fe9 - Cache-Control: no-cache
<HTTP_RESPONSE>: uid: 382951fe9 - Pragma: no-cache
<HTTP_RESPONSE>: uid: 382951fe9 - Content-Type: application/json; charset=utf-8
<HTTP_RESPONSE>: uid: 382951fe9 - Expires: -1
<HTTP_RESPONSE>: uid: 382951fe9 - Server: Microsoft-IIS/8.5
<HTTP_RESPONSE>: uid: 382951fe9 - X-Powered-By: ASP.NET
<HTTP_RESPONSE>: uid: 382951fe9 - Date: Fri, 28 Oct 2018 06:46:59 GMT
<HTTP_RESPONSE>: uid: 382951fe9 - Content-Length: 302
<HTTP_RESPONSE>: ----------- http_response -----------

http_response_release: this section is triggered when the system is about to release HTTP data on the client side of the connection, after the modules have processed the HTTP response, so you can make sure the response has not been altered by F5 processing. You can also see which pool member was selected.

<HTTP_RESPONSE_RELEASE>: ----------- http_response_release -----------
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - status: 200
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - pool member IP: /Common/pool-name 10.22.33.54 443
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Cache-Control: no-cache
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Pragma: no-cache
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Content-Type: application/json; charset=utf-8
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Expires: -1
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Server: Microsoft-IIS/8.5
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - X-Powered-By: ASP.NET
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Date: Fri, 28 Oct 2018 06:46:59 GMT
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Content-Length: 302
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Strict-Transport-Security: max-age=16070400; includeSubDomains
<HTTP_RESPONSE_RELEASE>: ----------- http_response_release -----------
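The extra Strict-Transport-Security header in the sample above is a typical example of something added on the BIG-IP between HTTP_RESPONSE and HTTP_RESPONSE_RELEASE. Purely as an illustration (not part of the original snippet), such a header could be injected in HTTP_RESPONSE like this:

when HTTP_RESPONSE {
    # add HSTS only if the backend did not already set it
    if { ![HTTP::header exists "Strict-Transport-Security"] } {
        HTTP::header insert Strict-Transport-Security "max-age=16070400; includeSubDomains"
    }
}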
http_response_payload: this section collects and logs the amount of HTTP body data that you specify (capped at 1 MB in the code below).

<HTTP_RESPONSE_DATA>: ----------- http_response_payload -----------
<HTTP_RESPONSE_DATA>: uid: 382951fe9 - Response (Body) payload: { "username" : "youssef", "genre" : "unknown", "validation-factors" : { "validationFactors" : [ { "name" : "remote_address", "value" : "127.0.0.1" } ] }}
<HTTP_RESPONSE_DATA>: ----------- http_response_payload -----------

http_time_process: this section gives you information that can help you localize a latency problem. It is clearly not precise, and F5 offers other tools for that, but it lets you quickly see which element takes the most time: how long F5 takes to process the request, how long F5 takes to process the response, and how long the backend server takes to respond.

<HTTP_RESPONSE_RELEASE>: ----------- http_time_process -----------
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Time to request (F5 request time) = 5 (ms)
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Time to response (F5 response time) = 0 (ms)
<HTTP_RESPONSE_RELEASE>: uid: 382951fe9 - Time to server (server backend process time) = 4 (ms)
<HTTP_RESPONSE_RELEASE>: ----------- http_time_process -----------

Code :

when CLIENT_ACCEPTED {
    # set a unique id for the transaction
    set uid [string range [AES::key 256] 15 23]
    # choose which sections to log (1 to enable, 0 to disable)
    array set app_arrway_referer {
        client_dest_ip_port 1
        client_cert 1
        http_request 1
        http_request_release 1
        http_request_payload 1
        http_lb_selected 1
        http_response 1
        http_response_release 1
        http_response_payload 1
        http_time_process 1
    }

    if {$app_arrway_referer(client_dest_ip_port)} {
        log local0. " ----------- client_dest_ip_port ----------- "
        clientside { log local0. "uid: $uid - Client IP Src: [IP::client_addr]:[TCP::client_port]" }
        log local0. "uid: $uid - Client IP Dest:[IP::local_addr]:[TCP::local_port]"
        log local0. " ----------- client_dest_ip_port ----------- "
        log local0. " "
    }
}

when HTTP_REQUEST {
    set http_request_time [clock clicks -milliseconds]

    # Certificate information is available once the client has sent its
    # certificate message (it may contain zero or more certificates).
    if {$app_arrway_referer(client_cert)} {
        log local0. " ----------- client_cert ----------- "
        # SSL::cert count - returns the total number of certificates the peer has offered
        if { [SSL::cert count] > 0 } {
            # check that there was no error in validating the client cert
            if { [SSL::verify_result] == 0 } {
                for {set i 0} {$i < [SSL::cert count]} {incr i} {
                    log local0. "uid: $uid - cert number: $i"
                    log local0. "uid: $uid - subject: [X509::subject [SSL::cert $i]]"
                    log local0. "uid: $uid - Issuer Info: [X509::issuer [SSL::cert $i]]"
                    log local0. "uid: $uid - cert serial: [X509::serial_number [SSL::cert $i]]"
                }
            } else {
                # https://devcentral.f5.com/s/wiki/iRules.SSL__verify_result.ashx (OpenSSL verify result codes)
                log local0. "uid: $uid - Cert Info: [X509::verify_cert_error_string [SSL::verify_result]]"
            }
        } else {
            log local0. "uid: $uid - No client certificate provided"
        }
        log local0. " ----------- client_cert ----------- "
        log local0. " "
    }

    if {$app_arrway_referer(http_request)} {
        log local0. " ----------- http_request ----------- "
        if { [PROFILE::exists clientssl] == 1 } {
            log local0. "uid: $uid - protocol: https"
            log local0. "uid: $uid - cipher name: [SSL::cipher name]"
            log local0. "uid: $uid - cipher version: [SSL::cipher version]"
        }
        log local0. "uid: $uid - VS Name: [virtual]"
        log local0. "uid: $uid - Request: [HTTP::method] [HTTP::host][HTTP::uri]"
        foreach aHeader [HTTP::header names] {
            log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]"
        }
        log local0. " ----------- http_request ----------- "
        log local0. " "
    }

    set collect_length_request [HTTP::header value "Content-Length"]
    set contentlength 1
    if {$app_arrway_referer(http_request_payload)} {
        if { [catch {
            # cap the collected payload at 1 MB
            if { $collect_length_request > 0 && $collect_length_request < 1048577 } {
                set collect_length $collect_length_request
            } else {
                set collect_length 1048576
            }
            if { $collect_length > 0 } {
                HTTP::collect $collect_length
                set contentlength 1
            }
        }] } {
            # no data in the request (missing Content-Length header)
            log local0. " ----------- http_request_payload ----------- "
            log local0. "uid: $uid - Content-Length header null in request"
            log local0. " ----------- http_request_payload ----------- "
            log local0. " "
            set contentlength 0
        }
    }
}

when HTTP_REQUEST_DATA {
    if {$app_arrway_referer(http_request_payload)} {
        log local0. " ----------- http_request_payload ----------- "
        if {$contentlength} {
            set postpayload [HTTP::payload]
            log local0. "uid: $uid - post payload: $postpayload"
            #HTTP::release
        }
        log local0. " ----------- http_request_payload ----------- "
        log local0. " "
    }
}

when HTTP_REQUEST_RELEASE {
    if {$app_arrway_referer(http_request_release)} {
        log local0. " ----------- http_request_release ----------- "
        if { [PROFILE::exists clientssl] == 1 } {
            log local0. "uid: $uid - protocol: https"
            log local0. "uid: $uid - cipher name: [SSL::cipher name]"
            log local0. "uid: $uid - cipher version: [SSL::cipher version]"
        }
        log local0. "uid: $uid - VS Name: [virtual]"
        log local0. "uid: $uid - Request: [HTTP::method] [HTTP::host][HTTP::uri]"
        foreach aHeader [HTTP::header names] {
            log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]"
        }
        log local0. " ----------- http_request_release ----------- "
        log local0. " "
    }
    set http_request_time_release [clock clicks -milliseconds]
}

when LB_SELECTED {
    if {$app_arrway_referer(http_lb_selected)} {
        log local0. " ----------- http_lb_selected ----------- "
        log local0. "uid: $uid - pool member IP: [LB::server]"
        log local0. " ----------- http_lb_selected ----------- "
        log local0. " "
    }
}

when HTTP_RESPONSE {
    set http_response_time [clock clicks -milliseconds]
    set content_length [HTTP::header "Content-Length"]

    if {$app_arrway_referer(http_response)} {
        log local0. " ----------- http_response ----------- "
        log local0. "uid: $uid - status: [HTTP::status]"
        log local0. "uid: $uid - pool member IP: [LB::server]"
        foreach aHeader [HTTP::header names] {
            log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]"
        }
        log local0. " ----------- http_response ----------- "
        log local0. " "
    }

    if {$app_arrway_referer(http_response_payload)} {
        # guard against a missing Content-Length header (e.g. chunked responses)
        # and cap the collected payload at 1 MB
        if { $content_length ne "" && $content_length > 0 && $content_length < 1048577 } {
            set collect_length $content_length
        } else {
            set collect_length 1048576
        }
        if { $collect_length > 0 } {
            HTTP::collect $collect_length
        }
    }
}

when HTTP_RESPONSE_DATA {
    if {$app_arrway_referer(http_response_payload)} {
        log local0. " ----------- http_response_payload ----------- "
        set payload [HTTP::payload]
        log local0. "uid: $uid - Response (Body) payload: $payload"
        log local0. " ----------- http_response_payload ----------- "
        log local0. " "
    }
}

when HTTP_RESPONSE_RELEASE {
    set http_response_time_release [clock clicks -milliseconds]

    if {$app_arrway_referer(http_response_release)} {
        log local0. " ----------- http_response_release ----------- "
        log local0. "uid: $uid - status: [HTTP::status]"
        log local0. "uid: $uid - pool member IP: [LB::server]"
        foreach aHeader [HTTP::header names] {
            log local0. "uid: $uid - $aHeader: [HTTP::header value $aHeader]"
        }
        log local0. " ----------- http_response_release ----------- "
        log local0. " "
    }

    if {$app_arrway_referer(http_time_process)} {
        log local0. " ----------- http_time_process ----------- "
        # time spent on the F5 between receiving the request and releasing it to the server
        log local0.info "uid: $uid - Time to request (F5 request time) = [expr {$http_request_time_release - $http_request_time}] (ms)"
        # time spent on the F5 between receiving the response and releasing it to the client
        log local0.info "uid: $uid - Time to response (F5 response time) = [expr {$http_response_time_release - $http_response_time}] (ms)"
        # time between releasing the request to the server and receiving its response
        log local0.info "uid: $uid - Time to server (server backend process time) = [expr {$http_response_time - $http_request_time_release}] (ms)"
        log local0. " ----------- http_time_process ----------- "
        log local0. " "
    }
}

Tested this on version: 13.0
This project is an MCP (Model Context Protocol) server designed to interact with F5 devices using the iControl REST API. It provides a set of tools to manage F5 objects such as virtual servers (VIPs), pools, iRules, and profiles. The server is implemented with the FastMCP framework and exposes functionality for creating, updating, listing, and deleting F5 objects.
Problem this snippet solves:

Overview

This is a script that generates a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. The report is used to give NOC and developers insight into where things are located and to help them plan patching and deploys. I also use it myself as a quick way to get information or gather data as a foundation for RFCs, e.g. to get a list of all external virtual servers without compression profiles.

The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It is easy to set up and use and only requires auditor (read-only) permissions on your devices.

Demo/Preview

Interactive demo: http://loadbalancing.se/bigipreportdemo/

Screen shots

The main report:
The device overview:
Certificate details:

How to use this snippet:

Installation instructions

BigipReport REST

This is the only branch we have been updating since the middle of 2020, and it supports 12.x and upwards (maybe even 11.6).

Downloads: https://loadbalancing.se/downloads/bigipreport-v5.7.13.zip
Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/
Docker support: https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/
Kubernetes support: https://loadbalancing.se/2021/04/16/bigipreport-on-kubernetes/

BIG-IP Report (Legacy)

Older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5).

BIG-IP Report (only download this if you have v10 devices): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip
iControl Snapin: https://loadbalancing.se/downloads/f5-icontrol.zip
Documentation and Installation Instructions: https://loadbalancing.se/bigip-report/
Upgrade instructions

Protect the report using APM and Active Directory

Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/

Got issues/problems/feedback?

Still have issues? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it is always appreciated.

---

Join us on Discord: https://discord.gg/7JJvPMYahA

Code : BigIP Report

Tested this on version: 12, 13, 14, 15, 16