Super-NetOps
Decrypting tcpdumps in Wireshark without key files (such as when FIPS is in use)
Problem this snippet solves:
This procedure lets you decrypt a tcpdump taken on the F5 without needing access to the key file. Despite multiple F5 pages that claim to document this procedure, none of them worked for me. This solution includes the one working iRule I found, trimmed down to the essentials. The bash command is my own; it collects the LTM log lines generated by the iRule into a file containing everything Wireshark 3.x needs to decrypt the tcpdump.

How to use this snippet:
Upgrade Wireshark to version 3 or later, then apply this iRule to the virtual server targeted by the tcpdump:

rule sessionsecret {
    when CLIENTSSL_HANDSHAKE {
        log local0.debug "CLIENT_RANDOM [SSL::clientrandom] [SSL::sessionsecret]"
        log local0.debug "RSA Session-ID:[SSL::sessionid] Master-Key:[SSL::sessionsecret]"
    }
    when SERVERSSL_HANDSHAKE {
        log local0.debug "CLIENT_RANDOM [SSL::clientrandom] [SSL::sessionsecret]"
        log local0.debug "RSA Session-ID:[SSL::sessionid] Master-Key:[SSL::sessionsecret]"
    }
}

Run tcpdump on the F5 using all required hooks to capture both client and server traffic:

tcpdump -vvni 0.0:nnnp -s0 host <ip> -w /var/tmp/`date +%F-%H%M`.pcap

Conduct tests to reproduce the problem, then stop the tcpdump (Ctrl-C) and remove the iRule from the virtual server. Collect the log lines into a file:

cat /var/log/ltm | grep -oe "RSA Session.*$" -e "CLIENT_RANDOM.*$" > /var/tmp/pms

Copy the .pcap and pms files to the computer running Wireshark 3+. Reference the "pms" file in Wireshark > Preferences > Protocols > TLS > (Pre)-Master-Secret log filename (hence the pms file name). Ensure that the "F5 Ethernet trailer" and "f5ethtrailer" boxes are checked under Wireshark > Analyze > Enabled Protocols. Open the pcap file in Wireshark; it will be decrypted.

IMPORTANT TIP: Decrypting any large tcpdump brings a workstation to its knees, even to the point of running out of memory. A much better approach is to temporarily move the pms file aside, open the tcpdump in its default encrypted state, identify the problem areas using filters or the F5 TCP conversation view, and export them to a much smaller file. Then move the pms file back to the expected location and decrypt the smaller file quickly and without significant impact on CPU and memory.

Code:
Please refer to the "How to use this snippet" section above. This procedure was successfully tested in 12.1.2 with a full-proxy virtual server.

Tested this on version: 12.1
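Before loading the pms file into Wireshark, it can save time to confirm the collected log lines actually match the key-log format Wireshark expects. The short Python sketch below is a minimal check, assuming the iRule logged the randoms and secrets as hex strings (64 hex characters for the client random, 96 for the master secret); adjust the patterns if your log lines differ.

#!/usr/bin/env python3
"""Sanity-check a Wireshark (Pre)-Master-Secret log ("pms") file.

A minimal sketch. Assumes the file was produced by the grep command above and
that the iRule logged the secrets in hex, i.e. lines of the form:
    CLIENT_RANDOM <64 hex chars> <96 hex chars>
    RSA Session-ID:<hex> Master-Key:<96 hex chars>
"""
import re
import sys

CLIENT_RANDOM = re.compile(r"^CLIENT_RANDOM [0-9a-fA-F]{64} [0-9a-fA-F]{96}$")
RSA_SESSION = re.compile(r"^RSA Session-ID:[0-9a-fA-F]+ Master-Key:[0-9a-fA-F]{96}$")

def check(path):
    bad = 0
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            line = line.strip()
            if not line:
                continue
            if not (CLIENT_RANDOM.match(line) or RSA_SESSION.match(line)):
                bad += 1
                print(f"line {lineno}: does not look like a key log entry: {line[:60]}")
    print(f"{bad} suspicious line(s) found" if bad else "all lines look usable")

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "pms")

Run it against the pms file before copying it to the Wireshark workstation; any flagged lines can simply be deleted.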
iCall CRL update with Route Domains and Auto-Sync

Problem this snippet solves:
An iCall script to update a CRL file on the F5 BIG-IP when the HTTP request must be made from a specific route domain; it uses logger to write its output to the default LTM log location. The original version also updated an iFile copy of the CRL for use within an iRule, but I have removed that because it is a very special case (I may add another snippet later to detail that one). The important point is that the CRL file lives in a folder (or partition) linked to a Sync-Only device group with auto-sync enabled, e.g. CRL files are created and saved to /Common/crl/. This way the iCall script does not need to trigger any sort of sync, and the rest of the configuration can be left as manual sync.

Code:

sys icall handler periodic /Common/someCrl-CrlUpdate {
    arguments {
        { name rd value 2 }
        { name url value https://172.31.0.1/somepath/to/crlUpdateFile.crl }
        { name host value somecrl.CADomein.com }
        { name folder value tempCrlDirectory }
        { name sslCrl value /Common/crl/someCrlFile.crl }
    }
    interval 600
    script /Common/iCallCrlUpdate
}

sys icall script /Common/iCallCrlUpdate {
    app-service none
    definition {
        set logTag "iCallCrlUpdate"
        set logLevel "notice"

        # Getting handler provided arguments
        foreach arg { rd url host folder sslCrl } {
            set $arg $EVENT::context($arg)
        }

        # Create a directory to save files to disk
        set crlDir /var/tmp/$folder
        exec mkdir -p $crlDir
        exec /bin/logger -i -t $logTag -p local0.$logLevel "Running, CRL URL=$url, Host=$host, SSL CRL=$sslCrl, Directory=$crlDir, rd=$rd"

        # Download the CRL file via the provided route domain (rd) and url arguments and save it to the temporary directory
        set status [exec /usr/bin/rdexec $rd /usr/bin/curl-apd -s -o $crlDir/LatestCRL.crl -w %{http_code} -H Host:$host $url]

        if {$status == 200} {
            # Update the F5 SSL CRL file
            tmsh::modify sys file ssl-crl $sslCrl source-path file:$crlDir/LatestCRL.crl
            exec /bin/logger -t $logTag -p local0.$logLevel "F5 CRL files update complete."
        } else {
            exec /bin/logger -i -t $logTag -p local0.error "Command /usr/bin/rdexec $rd /usr/bin/curl-apd -s -o $crlDir/LatestCRL.crl -w '%{http_code}' -H Host:$host $url, failed with status=$status"
        }
    }
    description none
    events none
}

Tested this on version: 12.1
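If you want to sanity-check the CRL from a management host before (or after) the iCall script picks it up, a small Python sketch along these lines can confirm the file parses and is still current. The URL and Host header mirror the example handler arguments above (they are placeholders), and the sketch assumes the requests and cryptography packages are installed; it is not part of the iCall script itself.

#!/usr/bin/env python3
"""Fetch a CRL and report its thisUpdate/nextUpdate times.
A minimal sketch; the URL and Host header mirror the example handler arguments."""
import datetime
import requests
from cryptography import x509

CRL_URL = "https://172.31.0.1/somepath/to/crlUpdateFile.crl"
HOST_HEADER = "somecrl.CADomein.com"

# verify=False only because the example targets the CA by raw IP address
resp = requests.get(CRL_URL, headers={"Host": HOST_HEADER}, verify=False, timeout=10)
resp.raise_for_status()

data = resp.content
try:
    crl = x509.load_der_x509_crl(data)   # binary (DER) CRL
except ValueError:
    crl = x509.load_pem_x509_crl(data)   # fall back to PEM

print("Issuer:     ", crl.issuer.rfc4514_string())
print("This update:", crl.last_update)
print("Next update:", crl.next_update)
if crl.next_update and crl.next_update < datetime.datetime.utcnow():
    print("WARNING: CRL is already expired")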
Cisco ACE to F5 Conversion - Python 3

Problem this snippet solves:
The goal of this script is to automate migration from a Cisco ACE configuration to an F5 LTM configuration. It is now a little bit old and I have wanted to extend and update it for a while, but other projects have kept me busy, so I have made it public (the code can be found at https://gitlab.com/stratalabs/ace2f5-py). It has only been tested against 11.x and some 12.x configurations, and not all configuration items from a Cisco ACE configuration convert, but you normally get a warning/alert on items you will need to handle manually.

How to use this snippet:
Generate the F5 configuration:

python ace2f5.py -f <ACE configuration input file> [-o <F5 configuration output file>] [-n NOT FULLY IMPLEMENTED (will disable output, i.e. validation to screen only)]

If no output file is defined, output is written to the ACE configuration file name plus '.checking'. You can also run and stay in the Python CLI using the -i option, e.g.

python -i ace2f5.py -f <ACE configuration input file>

After manually checking the output file, run the following to generate a clean F5 TMOS configuration file with a .output extension. This .output file is the one to import into the F5 LTM.

python checking-output.py -f <ace2f5.py checking file>

Tested this on version: 11.6
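The repository linked above contains the full converter; the sketch below is only an illustration of the general approach such a tool takes (regex-driven parsing of ACE objects into TMOS stanzas), not code taken from ace2f5.py. The object names and addresses are made up for the example.

#!/usr/bin/env python3
"""Toy illustration of ACE -> F5 mapping: rserver/serverfarm to ltm pool.
Not the ace2f5.py code -- just a sketch of the parsing approach."""
import re

ACE_SNIPPET = """
rserver host WEB01
  ip address 10.1.1.11
  inservice
serverfarm host WEB_FARM
  rserver WEB01 80
    inservice
"""

# Collect rserver name -> IP address
rservers = dict(re.findall(r"rserver host (\S+)\n\s+ip address (\S+)", ACE_SNIPPET))

# Collect serverfarm name -> list of (rserver, port)
pools = {}
for farm, body in re.findall(r"serverfarm host (\S+)\n((?:\s+.+\n)*)", ACE_SNIPPET):
    pools[farm] = re.findall(r"rserver (\S+) (\d+)", body)

# Emit TMOS pool stanzas
for farm, members in pools.items():
    print(f"ltm pool {farm} {{")
    print("    members {")
    for name, port in members:
        print(f"        {rservers[name]}:{port} {{ }}")
    print("    }")
    print("}")

The real tool handles far more object types (class-maps, policy-maps, probes, VIPs) and flags anything it cannot translate, which is why the manual review of the '.checking' file matters.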
TMSH iRules stats output to CSV

Problem this snippet solves:
We needed to record the resource impact of iRules on the F5 system, so this script was written to convert the iRule stats output from TMSH into CSV format. If you clear the stats down and then capture them after testing, or after a set interval such as one hour of production traffic, you can identify the resource-hungry iRules and, as we did, optimise them.

How to use this snippet:
Usage: statformating.py -i <iRule Stats input txt file> -o <output CSV file>

This script takes the output of F5 iRule stats from the command `tmsh show ltm rule` and formats it into a CSV file to allow easy analysis of iRule resource impact. Clearing all iRule stats between runs is recommended. The following commands are examples used to clear and capture the iRule stats for a traffic test:

Clear all iRule stats from bash: tmsh reset-stats ltm rule "/*/*"
Run test traffic.
Capture iRule stats and save to a file from bash: tmsh show ltm rule "/*/*" > irulestats.txt
Copy off the irulestats.txt file and run it through this script: statformating.py -i irulestats.txt -o irulestats.csv

Code:

#!/usr/bin/python
'''
Usage: statformating.py -i <iRule Stats input txt file> -o <output CSV file>

This script takes the output of F5 iRule stats from the command `tmsh show ltm rule`
and formats it into a CSV file to allow for easy analysis of iRule resource impact.

Recommend clearing all iRule stats between runs. The following commands are examples
used to clear and capture the iRule stats for a traffic test:

Clear all iRule stats from bash:
    tmsh reset-stats ltm rule "/*/*"

Run test traffic.

Capture iRule stats and save to a file from bash:
    tmsh show ltm rule "/*/*" > irulestats.txt

Copy off the irulestats.txt file and run it against this script using:
    statformating.py -i irulestats.txt -o irulestats.csv
'''
import re
import os
import sys
import argparse


def iruleStatsFormat(inputFile, outputFile):
    '''
    iruleStatsFormat(_io.TextIOWrapper, _io.TextIOWrapper) -> NoneType

    Takes 'inputFile', a text file object of raw F5 iRule stats, for example:

        ------------------------------------------------------------------
        Ltm::Rule Event: /partitionname/http_header_rule:HTTP_REQUEST
        ------------------------------------------------------------------
        Priority                    4
        Executions
          Total                    22
          Failures                  0
          Aborts                    0
        CPU Cycles on Executing
          Average               42382
          Maximum              115095
          Minimum               14157 (raw)
        ------------------------------------------------------------------
        Ltm::Rule Event: /partitionname/http_dc_cookie_decrypt:CLIENT_ACCEPTED
        ------------------------------------------------------------------
        Priority                   13
        Executions
          Total                    52
          Failures                  0
          Aborts                    0
        CPU Cycles on Executing
          Average              139933
          Maximum              189777
          Minimum              110646 (raw)

    Formats it into CSV and writes to the 'outputFile' text file object, for example:

        partitionname,http_header_rule,HTTP_REQUEST,4,22,0,0,42382,115095,14157
        partitionname,http_dc_cookie_decrypt,CLIENT_ACCEPTED,13,52,0,0,139933,189777,110646
    '''
    print('\nReading \'{}\''.format(inputFile.name))
    iruleStats = inputFile.read()
    iruleStats = re.sub(r'[ ]{2,}', ' ', iruleStats)
    iruleStats = re.sub(r'\n\s\(raw\)\s{1,}', '', iruleStats)
    iruleStats = re.sub(r'[-]{2,}\n', '', iruleStats)
    iruleStats = re.sub(r'\n ', r'\n', iruleStats)
    iruleStats = re.sub(r'CPU Cycles on Executing\n', '', iruleStats)
    iruleStats = re.sub(r'Executions \n', '', iruleStats)
    iruleStats = re.sub(r'\nPriority (\d{1,})\nTotal (\d{1,})\nFailures (\d{1,})\nAborts (\d{1,})\nAverage (\d{1,})\nMaximum (\d{1,})\nMinimum (\d{1,})', r'\t\1\t\2\t\3\t\4\t\5\t\6\t\7', iruleStats)
    iruleStats = re.sub(r'Ltm::Rule Event: /(.*?)/(.*?):(.*?\t)', r'\1\t\2\t\3', iruleStats)
    iruleStats = re.sub(r'Ltm::Rule Event: (.*?):(.*?\t)', r'Common\t\1\t\2', iruleStats)
    iruleStats = re.sub(r'\n{2,}', r'\n', iruleStats)
    iruleStats = re.sub(r'\t', r',', iruleStats)
    print('Saving output csv to \'{}\''.format(outputFile.name))
    print(iruleStats, file=outputFile)


def validateFile(fileName):
    '''
    validateFile(str) -> str

    Takes a filename and checks whether the file already exists. If so, prompts the
    user to confirm or provide another filename.

    Returns the filename as a str.
    '''
    if fileName and os.path.isfile(fileName):
        if input('\nFile \'{}\' already exists do you want to overwrite? (y to overwrite) '.format(fileName)) != 'y':
            fileName = validateFile(input('Enter new output filename? '))
    elif not fileName:
        fileName = validateFile(input('Invalid output filename, enter new output filename? '))
    return fileName


if __name__ == '__main__':
    # Read cli arguments into argparse, -i input file, -o (optional) as output file
    parser = argparse.ArgumentParser()
    parser.add_argument('-i', dest='input', help='Raw F5 iRule stats input file', nargs='?', type=argparse.FileType('rt'), required=True)
    parser.add_argument('-o', dest='output', help='F5 iRule stats output file (CSV format)', nargs='?', type=argparse.FileType('wt'))
    args = parser.parse_args()

    # Check input file is valid and contains some data
    if args.input:
        # If no output file is provided, use the input filename with a 'csv' extension
        if not args.output:
            # Validate the output file to ensure we are not overwriting an existing file, then open it writeable
            args.output = open(validateFile(args.input.name[:-3] + 'csv'), 'wt')
        # Call the formatting function
        iruleStatsFormat(args.input, args.output)
    else:
        parser.print_help()

Tested this on version: 12.1
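Once you have the CSV, a few lines of pandas make it easy to rank rules by CPU cost. This is a minimal sketch, assuming pandas is installed and that the CSV has no header row, with columns in the order the script writes them (partition, rule, event, priority, total, failures, aborts, average, maximum, minimum); the filename is the example from above.

#!/usr/bin/env python3
"""Rank iRules by rough CPU cost from the CSV produced by statformating.py.
Assumes pandas is installed and the column order shown in the docstring above."""
import pandas as pd

cols = ["partition", "rule", "event", "priority",
        "total", "failures", "aborts", "average", "maximum", "minimum"]
df = pd.read_csv("irulestats.csv", names=cols, header=None)

# Rough total cost = executions * average CPU cycles per execution
df["total_cycles"] = df["total"] * df["average"]

top = (df.groupby(["partition", "rule"])["total_cycles"]
         .sum()
         .sort_values(ascending=False)
         .head(10))
print(top)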
Problem this snippet solves:
This is an example Postman collection for iControl REST which generates an authentication token and a transaction session to add a new data group. It makes only a single change, but you can add many changes to the transaction before validating and committing them.

Steps taken:
Get Auth Token - request a new authentication token and save it into the environment variable X-F5-Auth-Token
Extend Token Timeout - increase the timeout value of the auth token; not always needed, but useful if you are running the requests manually
Get New Transaction - request a new transaction session and save it into the environment variable Coordination-Id
POST new DG in Transaction - create a new data group
Get Transaction Commands - optional request to list all the commands, and their order, in the transaction
Commit Transaction - send the VALIDATING request to validate and commit the commands
Get DG test - optional request to fetch the data group and confirm it has been created

Find more information about iControl REST transactions here https://devcentral.f5.com/s/articles/demystifying-icontrol-rest-part-7-understanding-transactions-21404 and in the user guides https://clouddocs.f5.com/api/icontrol-rest/

How to use this snippet:
Download and install Postman (https://www.getpostman.com/downloads/). Save the JSON below to a file and import it as a new Postman collection (see https://learning.getpostman.com/docs/postman/collections/intro_to_collections/ and https://learning.getpostman.com/docs/postman/collections/data_formats/#importing-postman-data). Finally, set up a new environment (https://learning.getpostman.com/docs/postman/environments_and_globals/manage_environments/) within Postman and ensure it has the following elements:

hostIP - the management IP of the F5 BIG-IP system
hostName - the hostname of the F5 BIG-IP system
f5user - the username used to generate an authentication token
f5pass - the password used to generate an authentication token
X-F5-Auth-Token - leave blank; will auto-populate
Coordination-Id - leave blank; will auto-populate

Then you can run the Postman collection one request at a time or via Postman's Collection Runner (https://learning.getpostman.com/docs/postman/collection_runs/using_environments_in_collection_runs).
Code : { "info": { "_postman_id": "67195ea2-5ac0-4599-a650-5951b1bc1184", "name": "iControl Transaction Example", "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json" }, "item": [ { "name": "Get Auth Token", "event": [ { "listen": "test", "script": { "id": "6e3f6680-4199-4c4a-a210-272b4d2eef38", "exec": [ "tests[\"Status code is 200\"] = responseCode.code === 200;", "var jsonData = JSON.parse(responseBody);", "postman.setEnvironmentVariable(\"X-F5-Auth-Token\", jsonData.token.name);", "", "" ], "type": "text/javascript" } } ], "request": { "method": "POST", "header": [ { "key": "Host", "type": "text", "value": "{{hostName}}" }, { "key": "Content-Type", "value": "application/json" } ], "body": { "mode": "raw", "raw": "{\r\n\t\"username\":\"{{f5user}}\",\r\n\t\"password\":\"{{f5pass}}\",\r\n\t\"loginProviderName\": \"tmos\"\r\n}" }, "url": { "raw": "https://{{hostIP}}/mgmt/shared/authn/login", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "shared", "authn", "login" ] } }, "response": [] }, { "name": "Extend Token Timeout Copy", "event": [ { "listen": "test", "script": { "id": "3bcdcdc6-fcad-46db-b9c0-4d7a8e8e1a69", "exec": [ "var jsonData = JSON.parse(responseBody);", "tests[\"Status code is 200\"] = responseCode.code === 200;", "tests[\"Token has been set\"] = jsonData.timeout == 36000;", "tests[\"Token is valid\"] = jsonData.userName === postman.getEnvironmentVariable(\"f5user\");", "" ], "type": "text/javascript" } } ], "request": { "method": "PATCH", "header": [ { "key": "Host", "value": "{{hostName}}", "type": "text" }, { "key": "Content-Type", "value": "application/json" }, { "key": "X-F5-Auth-Token", "value": "{{X-F5-Auth-Token}}" } ], "body": { "mode": "raw", "raw": "{\n\t\"timeout\":\"36000\"\n}" }, "url": { "raw": "https://{{hostIP}}/mgmt/shared/authz/tokens/{{X-F5-Auth-Token}}", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "shared", "authz", "tokens", "{{X-F5-Auth-Token}}" ] } }, "response": [] }, { "name": "Get New Transaction", "event": [ { "listen": "test", "script": { "id": "cb847d93-2c3a-4990-8242-020d95532be6", "exec": [ "var jsonRsponse = JSON.parse(responseBody)", "pm.environment.set(\"Coordination-Id\", jsonRsponse.transId);", "", "" ], "type": "text/javascript" } } ], "request": { "auth": { "type": "noauth" }, "method": "POST", "header": [ { "key": "Host", "value": "{{hostName}}", "type": "text" }, { "key": "content-type", "value": "application/json", "type": "text" }, { "key": "X-F5-Auth-Token", "value": "{{X-F5-Auth-Token}}", "type": "text" } ], "body": { "mode": "raw", "raw": "{}" }, "url": { "raw": "https://{{hostIP}}/mgmt/tm/transaction/", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "tm", "transaction", "" ] } }, "response": [] }, { "name": "POST new DG in Transaction", "request": { "auth": { "type": "noauth" }, "method": "POST", "header": [ { "key": "Host", "value": "{{hostName}}", "type": "text" }, { "key": "content-type", "value": "application/json", "type": "text" }, { "key": "X-F5-Auth-Token", "value": "{{X-F5-Auth-Token}}", "type": "text" }, { "key": "X-F5-REST-Coordination-Id", "value": "{{Coordination-Id}}", "type": "text" } ], "body": { "mode": "raw", "raw": "{\n \"partition\": \"Common\",\n \"name\": \"url_filter_dg\",\n \"records\": [\n {\n \"name\": \"/data\",\n \"data\": \"Allow\"\n },\n {\n \"name\": \"/filter\",\n \"data\": \"Block\"\n },\n {\n \"name\": \"/hello\",\n \"data\": \"Black\"\n },\n {\n \"name\": \"/login\",\n \"data\": \"Allow\"\n }\n ],\n 
\"type\":\"string\"\n}" }, "url": { "raw": "https://{{hostIP}}/mgmt/tm/ltm/data-group/internal", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "tm", "ltm", "data-group", "internal" ] } }, "response": [] }, { "name": "PUT DG in Transaction", "request": { "auth": { "type": "basic", "basic": [ { "key": "password", "value": "admin", "type": "string" }, { "key": "username", "value": "admin", "type": "string" } ] }, "method": "PUT", "header": [ { "key": "Host", "value": "{{hostName}}", "type": "text" }, { "key": "content-type", "value": "application/json", "type": "text" }, { "key": "X-F5-REST-Coordination-Id", "value": "{{Coordination-Id}}", "type": "text" } ], "body": { "mode": "raw", "raw": "{\n \"records\": [\n {\n \"name\": \"/data\",\n \"data\": \"Allow\"\n },\n {\n \"name\": \"/filter\",\n \"data\": \"Block\"\n },\n {\n \"name\": \"/hello\",\n \"data\": \"Allow\"\n },\n {\n \"name\": \"/login\",\n \"data\": \"Allow\"\n }\n ]\n}" }, "url": { "raw": "https://{{hostIP}}/mgmt/tm/ltm/data-group/internal/~common~url_filter_dg", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "tm", "ltm", "data-group", "internal", "~common~url_filter_dg" ] } }, "response": [] }, { "name": "Get Transaction Commands", "event": [ { "listen": "test", "script": { "id": "cb847d93-2c3a-4990-8242-020d95532be6", "exec": [ "", "", "" ], "type": "text/javascript" } } ], "protocolProfileBehavior": { "disableBodyPruning": true }, "request": { "auth": { "type": "noauth" }, "method": "GET", "header": [ { "key": "Host", "value": "{{hostName}}", "type": "text" }, { "key": "content-type", "value": "application/json", "type": "text" }, { "key": "X-F5-Auth-Token", "value": "{{X-F5-Auth-Token}}", "type": "text" } ], "body": { "mode": "raw", "raw": "{}" }, "url": { "raw": "https://{{hostIP}}/mgmt/tm/transaction/{{Coordination-Id}}/commands", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "tm", "transaction", "{{Coordination-Id}}", "commands" ] } }, "response": [] }, { "name": "Commit Transaction", "event": [ { "listen": "test", "script": { "id": "8308b285-b26b-4ddf-8ea9-e4f420cccd42", "exec": [ "var jsonResponse = JSON.parse(responseBody)", "", "pm.test(\"Transaction status is COMPLETED\", function () {", "", " pm.expect(jsonResponse.state == \"COMPLETED\");", "});" ], "type": "text/javascript" } } ], "request": { "auth": { "type": "noauth" }, "method": "PATCH", "header": [ { "key": "Host", "value": "{{hostName}}", "type": "text" }, { "key": "content-type", "value": "application/json", "type": "text" }, { "key": "X-F5-Auth-Token", "value": "{{X-F5-Auth-Token}}", "type": "text" }, { "key": "X-F5-REST-Coordination-Id", "value": "1557741207510527", "type": "text", "disabled": true } ], "body": { "mode": "raw", "raw": "{ \"state\":\"VALIDATING\" }" }, "url": { "raw": "https://{{hostIP}}/mgmt/tm/transaction/{{Coordination-Id}}", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "tm", "transaction", "{{Coordination-Id}}" ] } }, "response": [] }, { "name": "Get DG test", "request": { "auth": { "type": "noauth" }, "method": "GET", "header": [ { "key": "Host", "value": "{{hostName}}", "type": "text" }, { "key": "X-F5-Auth-Token", "value": "{{X-F5-Auth-Token}}", "type": "text" } ], "body": { "mode": "raw", "raw": "" }, "url": { "raw": "https://{{hostIP}}/mgmt/tm/ltm/data-group/internal/~common~url_filter_dg", "protocol": "https", "host": [ "{{hostIP}}" ], "path": [ "mgmt", "tm", "ltm", "data-group", "internal", "~common~url_filter_dg" ] } }, "response": [] } ] } Tested this 
on version: No Version Found
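If you prefer to drive the same flow from a script rather than Postman, the sketch below follows the identical sequence (token, transaction, queued request, commit) using the Python requests library. The management address, credentials and data-group contents are placeholder values, and this is a minimal example without error handling or token cleanup.

#!/usr/bin/env python3
"""Token + transaction example against iControl REST, mirroring the Postman
collection above. Host, credentials and data-group values are placeholders."""
import requests

requests.packages.urllib3.disable_warnings()
BIGIP = "https://192.168.1.100"

s = requests.Session()
s.verify = False
s.headers.update({"Content-Type": "application/json"})

# 1. Get an auth token (the Postman tests above use token.name as the header value)
r = s.post(f"{BIGIP}/mgmt/shared/authn/login",
           json={"username": "admin", "password": "admin", "loginProviderName": "tmos"})
s.headers["X-F5-Auth-Token"] = r.json()["token"]["name"]

# 2. Open a transaction and note the coordination id (transId)
trans_id = s.post(f"{BIGIP}/mgmt/tm/transaction", json={}).json()["transId"]

# 3. Queue a change inside the transaction
s.post(f"{BIGIP}/mgmt/tm/ltm/data-group/internal",
       headers={"X-F5-REST-Coordination-Id": str(trans_id)},
       json={"name": "url_filter_dg", "partition": "Common", "type": "string",
             "records": [{"name": "/login", "data": "Allow"},
                         {"name": "/filter", "data": "Block"}]})

# 4. Validate and commit everything queued in the transaction
r = s.patch(f"{BIGIP}/mgmt/tm/transaction/{trans_id}", json={"state": "VALIDATING"})
print(r.json().get("state"))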
Upgrade BigIP using Ansible

Problem this snippet solves:
A simple, and possibly poor, Ansible playbook for upgrading devices. It allows separating devices into two "reboot groups" so that clusters can be restarted in a rolling fashion.

How to use this snippet:
Clone or download the repository, update the hosts.ini inventory file to your requirements, then run:

ansible-playbook -i hosts.ini upgrade.yaml

The playbook will identify a boot location to use from the first two on your BIG-IP system, upload and install the image, and then activate the boot location for each "reboot group" sequentially.

Tested this on version: No Version Found
Upload small files (certs/keys/etc) to BIG-IP via iControl REST (base64 encoded using cli script)

Problem this snippet solves:
I needed a simple, fast way to upload SSL certificates and keys to the BIG-IP. I did so by creating a cli script that accepts a filename, a base64-encoded copy of the file and the file type (ascii/binary), then writes the file out to disk.

How to use this snippet:

curl -sk -u username:password -X POST -H 'Content-Type: application/json' https://PRIMARY_F5/mgmt/tm/cli/script -d '{"command":"run","utilCmdArgs":"file_creation [filename] [filetype] [file_base64encoded]"}'

Code:

create cli script fileupload {
    proc script::init {} {
    }
    proc script::run {} {
        package require base64
        set filename [lindex $tmsh::argv 1]
        set filetype [lindex $tmsh::argv 2]
        set contents [lindex $tmsh::argv 3]
        if { [file exists $filename] == 0} {
            if { $filetype == "binary"} {
                set result [::base64::decode $contents]
                set finalfile [open $filename "w"]
                fconfigure $finalfile -encoding binary
                puts -nonewline $finalfile [::base64::decode $contents]
                close $finalfile
                return 0
            }
            if { $filetype == "ascii"} {
                set result [::base64::decode $contents]
                set finalfile [open $filename "w"]
                fconfigure $finalfile
                puts -nonewline $finalfile [::base64::decode $contents]
                close $finalfile
                return 0
            }
        }
        if { [file exists $filename] == 1} {
            puts "File already exists"
            error "File already exists"
            return 1
        }
    }
    proc script::help {} {
    }
    proc script::tabc {} {
    }
    total-signing-status not-all-signed
}

Tested this on version: 12.1
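To build the base64 payload and make the call without hand-editing a curl command, a small Python sketch like the one below can do it. It simply mirrors the curl example above; the host, credentials, local and remote file paths are placeholders, and the cli script from the Code section must already exist on the device.

#!/usr/bin/env python3
"""Base64-encode a local certificate and run the cli script via iControl REST,
mirroring the curl example above. Host, credentials and file paths are placeholders."""
import base64
import requests

requests.packages.urllib3.disable_warnings()

BIGIP = "https://PRIMARY_F5"
LOCAL_FILE = "mycert.crt"                       # file on the machine running this script
REMOTE_FILE = "/config/ssl/ssl.crt/mycert.crt"  # where the cli script should write it

with open(LOCAL_FILE, "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

payload = {"command": "run",
           "utilCmdArgs": f"file_creation {REMOTE_FILE} ascii {encoded}"}

r = requests.post(f"{BIGIP}/mgmt/tm/cli/script",
                  auth=("username", "password"),
                  json=payload, verify=False)
print(r.status_code, r.text[:200])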
Backing up Master Keys via iControl REST

Problem this snippet solves:
Having a backup copy of a chassis master key can be valuable during a recovery event. For organizations with large deployments, automating the process certainly helps.

How to use this snippet:
To back up the master key via the API, use the bash util command URI with the proper payload:

Sample URI: https://192.168.1.100/mgmt/tm/util/bash
Payload: {"command":"run","utilCmdArgs":"-c 'f5mku -K'"}
Response: {"kind":"tm:util:bash:runstate","command":"run","utilCmdArgs":"-c 'f5mku -K'","commandResult":"ZFLI5n83NuetlE9A+bYqwg==\n"}

The master key is the commandResult field, minus the trailing \n.

Code:

curl -sku admin:admin -H "content-type: application/json" -X POST https://192.168.1.100/mgmt/tm/util/bash -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'f5mku -K'\"}"

Tested this on version: 13.0
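For larger fleets, the same call can be looped over every device from a script. The sketch below is a minimal example using the Python requests library; the device list and credentials are placeholders, and the resulting master_keys.txt should of course be stored somewhere protected.

#!/usr/bin/env python3
"""Collect the master key from several BIG-IPs via /mgmt/tm/util/bash and save
them locally. Device list and credentials are placeholders for the example."""
import requests

requests.packages.urllib3.disable_warnings()

DEVICES = ["192.168.1.100", "192.168.1.101"]   # management addresses (placeholders)
AUTH = ("admin", "admin")
PAYLOAD = {"command": "run", "utilCmdArgs": "-c 'f5mku -K'"}

with open("master_keys.txt", "w") as out:
    for device in DEVICES:
        r = requests.post(f"https://{device}/mgmt/tm/util/bash",
                          auth=AUTH, json=PAYLOAD, verify=False, timeout=30)
        r.raise_for_status()
        # The key is returned in commandResult with a trailing newline
        key = r.json().get("commandResult", "").strip()
        out.write(f"{device} {key}\n")
        print(device, "->", key)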
iRule stats formatter

Problem this snippet solves:
When you have a load of iRule stats in text format from your F5 device and need to get them into a nicer format. The following Python 3 script takes a text file in this format:

------------------------------------------------------------------------------------------------------
Ltm::Rule Event: /web_testing/test_environment_rule:HTTP_RESPONSE
------------------------------------------------------------------------------------------------------
Priority                       12
Executions
  Total                  31686860
  Failures                      0
  Aborts                        0
CPU Cycles on Executing
  Average                  404058
  Maximum                10703959
  Minimum                  264201 (raw)
------------------------------------------------------------------------------------------------------
Ltm::Rule Event: /web_testing/test_environment_rule:HTTP_REQUEST
------------------------------------------------------------------------------------------------------
Priority                      899
Executions
  Total                  31686860
  Failures                      0
  Aborts                        0
CPU Cycles on Executing
  Average                  404058
  Maximum                10703959
  Minimum                  264201

and runs it through the Python script below to output a CSV file for further data manipulation.

How to use this snippet:
Python 3 script. To use, run the following (you can also add '--o' to define an output file; if omitted, the '.txt' extension is replaced with '.csv' by default):

python statformating.py --i my_irule_stats.txt

Output will be something like:

Opening 'my_irule_stats.txt'
Saving output csv to 'my_irule_stats.csv'

Usage/help output:

usage: statformating.py [-h] [--i INPUT] [--o OUTPUT]

optional arguments:
  -h, --help  show this help message and exit
  --i INPUT   iRule Stats File input file name
  --o OUTPUT  iRule Stats File output csv file name

Code:

import re
import os
import argparse


def iruleStatsFormat(inputFile, outputFile):
    print('Opening \'{}\''.format(inputFile))
    iruleStats = open(inputFile, 'rt').read()
    iruleStats = re.sub(r'[ ]{2,}', ' ', iruleStats)
    iruleStats = re.sub(r'\n\s\(raw\)\s{1,}', '', iruleStats)
    iruleStats = re.sub(r'[-]{2,}\n', '', iruleStats)
    iruleStats = re.sub(r'\n ', r'\n', iruleStats)
    iruleStats = re.sub(r'CPU Cycles on Executing\n', '', iruleStats)
    iruleStats = re.sub(r'Executions \n', '', iruleStats)
    iruleStats = re.sub(r'\nPriority (\d{1,})\nTotal (\d{1,})\nFailures (\d{1,})\nAborts (\d{1,})\nAverage (\d{1,})\nMaximum (\d{1,})\nMinimum (\d{1,})', r'\t\1\t\2\t\3\t\4\t\5\t\6\t\7', iruleStats)
    iruleStats = re.sub(r'Ltm::Rule Event: /(.*?)/(.*?):(.*?\t)', r'\1\t\2\t\3', iruleStats)
    iruleStats = re.sub(r'Ltm::Rule Event: (.*?):(.*?\t)', r'Common\t\1\t\2', iruleStats)
    iruleStats = re.sub(r'\n{2,}', r'\n', iruleStats)
    iruleStats = re.sub(r'\t', r',', iruleStats)
    print('Saving output csv to \'{}\''.format(outputFile))
    with open(outputFile, 'wt') as f:
        print(iruleStats, file=f)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--i", dest='input', help="iRule Stats File input file name", type=str)
    parser.add_argument("--o", dest='output', help="iRule Stats File output csv file name", type=str, default="")
    args = parser.parse_args()
    if args.input and os.path.isfile(args.input):
        if not args.output:
            args.output = args.input[:-3] + 'csv'
        iruleStatsFormat(args.input, args.output)
    else:
        parser.print_help()
Block IP Addresses With Data Group And Log Requests On ASM Event Log

Problem this snippet solves:
This is an iRule which will block IP addresses that are not allowed in your organization. Instead of adding each IP address under Security ›› Application Security : IP Addresses : IP Address Exceptions, you can create a data group and use a simple iRule to block hundreds of addresses, while raising a custom violation so the offending request is visible in the ASM event log.

First, create a data group under Local Traffic ›› iRules : Data Group List and add the IP addresses you want to block. If you have hundreds of IPs to block, you can do it in TMSH using this command:

tmsh modify ltm data-group internal <Data-Group-Name> records add { <IP-ADDRESS> }

Next, create the iRule below under Local Traffic ›› iRules : iRule List, replacing <Data-Group-Name> with the name of your data group.

Last, create the violation under Security ›› Options : Application Security : Advanced Configuration : Violations List: Create -> Name: Illegal_IP_Address -> Type: Access Violation -> Severity: Critical -> Update.

Don't forget to enable "Trigger ASM iRule Events" in Normal Mode.

How to use this snippet:

Code:

when HTTP_REQUEST {
    set reqBlock 0
    if { [class match [IP::remote_addr] equals <Data-Group-Name>] } {
        set reqBlock 1
        # log local0. "HTTP_REQUEST [IP::client_addr]"
    }
}
when ASM_REQUEST_DONE {
    if { $reqBlock == 1 } {
        ASM::raise "Illegal_IP_Address"
        # log local0. "ASM_REQUEST_DONE [IP::client_addr]"
    }
}

Tested this on version: 13.0
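If you have a long list of addresses to load, a short script can push them into the data group over iControl REST instead of typing tmsh commands one by one. The sketch below reads one IP per line from a text file, merges them with the existing records and PATCHes the data group; the host, credentials, file name and data-group name are placeholders.

#!/usr/bin/env python3
"""Bulk-load blocked IPs into an internal data group via iControl REST.
Host, credentials, file and data-group names are placeholders. Note that a
PATCH replaces the whole records list, so existing entries are merged first."""
import requests

requests.packages.urllib3.disable_warnings()

BIGIP = "https://192.168.1.100"
AUTH = ("admin", "admin")
DG = "~Common~block_ip_dg"          # placeholder data-group name, in path form
IP_FILE = "blocked_ips.txt"         # one IP address per line

url = f"{BIGIP}/mgmt/tm/ltm/data-group/internal/{DG}"

# Fetch existing records so the PATCH does not wipe them out
current = requests.get(url, auth=AUTH, verify=False).json().get("records", [])
names = {r["name"] for r in current}

with open(IP_FILE) as f:
    for ip in (line.strip() for line in f):
        if ip and ip not in names:
            current.append({"name": ip})
            names.add(ip)

r = requests.patch(url, auth=AUTH, verify=False, json={"records": current})
print(r.status_code, f"{len(current)} records now in the data group")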