List BIG-IP Next Instance Backups on Central Manager
In the Central Manager GUI, you can create and schedule BIG-IP Next instance backups, but outside of the listing shown there, you can't download the files from that view if you want to archive them for off-box requirements. Finding them on the Central Manager command line to download them via secure copy (scp) requires some kubernetes-fu knowhow, mainly interrogating the persistent volume claims and persistent volumes:

kubectl get pvc mbiq-local-storage-pv-claim -o yaml | grep volumeName
kubectl get pv <volumename result> -o yaml | grep "path: "

This script takes the guesswork out of all that and lets you focus on more important things. Example output:

admin@cm1:~$ ./lbu.sh
Backup path: /var/lib/rancher/k3s/storage/pvc-ae75faee-101e-49eb-89f7-b66542da1281_default_mbiq-local-storage-pv-claim/backup
total 3860
   4 drwxrwxrwx 2 root   root     4096 Mar  7 19:33 .
   4 drwxrwxrwx 7 root   root     4096 Feb  2 00:01 ..
1780 -rw-r--r-- 1 ubuntu lxd   1821728 Feb 28 18:40 3b9ef4d8-0f0b-453d-b350-c8720a30db16.2024-02-28.18-39-59.backup.tar.gz
 288 -rw-r--r-- 1 ubuntu lxd    292464 Feb 28 18:39 7bf4e3ac-e8a2-44a3-bead-08be6c590071.2024-02-28.18-39-15.backup.tar.gz
1784 -rw-r--r-- 1 ubuntu lxd   1825088 Mar  7 19:33 7bf4e3ac-e8a2-44a3-bead-08be6c590071.2024-03-07.19-32-56.backup.tar.gz

Script Source
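For reference, a minimal sketch of the same lookup the script automates, assuming the k3s hostPath storage layout and the PVC name `mbiq-local-storage-pv-claim` shown above (this is a hypothetical illustration, not the linked script itself):

```shell
#!/bin/bash
# Compose the backup directory from a persistent volume's hostPath;
# backups sit in the volume's backup/ subdirectory.
backup_dir() {
  printf '%s/backup\n' "${1%/}"
}

# Only query the cluster when kubectl is actually present (i.e. on Central Manager).
if command -v kubectl >/dev/null 2>&1; then
  pv=$(kubectl get pvc mbiq-local-storage-pv-claim -o jsonpath='{.spec.volumeName}')
  hostpath=$(kubectl get pv "$pv" -o jsonpath='{.spec.hostPath.path}')
  echo "Backup path: $(backup_dir "$hostpath")"
  ls -las "$(backup_dir "$hostpath")"
fi
```

The jsonpath output format avoids the `-o yaml | grep` round trip and returns just the field value.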
F5 Automation - TCL & Bash

Problem this snippet solves: This is a really simple way to automate CLI command execution on multiple F5 devices using Bash and TCL (Expect) scripting.

How to use this snippet: On a Linux machine that is used to connect to the F5 devices:

Create a directory: mkdir F5_Check

Within the F5_Check directory, create the following 3 files:
F5_Host.txt (this file contains the F5 management IP addresses)
F5_Bash_v1 (this is the bash script used to collect the username/password for F5 access)
F5_Bash_v1.exp is replaced below by F5_Out_v1.exp (this is the TCL script that executes the relevant commands on the F5s)

File content: F5_Out_v1.exp is provided as the code share below. This is the main TCL script that is utilized to execute CLI commands on multiple F5 devices.

File content: F5_Bash_v1

#!/bin/bash
# Collect the username and password for F5 access
echo -n "Enter the username "
read -s -e user
echo -ne '\n'
echo -n "Enter the password "
read -s -e password
echo -ne '\n'
# Feed the expect script a device list & the collected username & passwords
for device in `cat ~/F5_Check/F5_Host.txt`; do
    ./F5_Out_v1.exp $device $password $user
done

File content: F5_Host.txt. This contains the management IPs of the F5 devices. Example:

cat F5_Host.txt
10.12.12.200
10.12.12.201
10.12.12.202
10.12.12.203

Code :

#!/usr/bin/expect -f

# Set variables
set hostname [lindex $argv 0]
set password [lindex $argv 1]
set username [lindex $argv 2]

# Log results
log_file -a ~/F5_Check/F5LOG.log

# Announce which device we are working on and the time
send_user "\n"
send_user ">>>>> Working on $hostname @ [exec date] <<<<<\n"
send_user "\n"

# SSH access to device
spawn ssh $username@$hostname
expect {
    "no)? " {
        send "yes\n"
        expect "*assword: "
        sleep 1
        send "$password\r"
    }
    "*assword: " {
        sleep 1
        send "$password\r"
    }
}
expect "(tmos)#"
send "sys\n"
expect "(tmos.sys)#"
send "show software\n"
expect "#"
send "exit\n"
expect "#"
send "quit\n"
expect ":~\$"
exit

Tested this on version: 11.5
Use F5 LTM as HTTP Proxy

Problem this snippet solves: The LTM product can be used as an HTTP proxy for servers and PCs. This code shows the minimum requirements to configure the proxy feature without the SWG module (configuration taken from the Explicit Forward Proxy documentation) and without the explicit proxy iApp.

How to use this snippet: All these commands must be run in the bash shell.

CREATE HTTP PROXY VIRTUAL SERVER

Configure the variables used in the next commands. The variable HTTPBaseName is used to create:
Resolver object: RESOLVER_${HTTPBaseName}
HTTP profile: http_${HTTPBaseName}
Virtual server: VS_${HTTPBaseName}

HTTPBaseName="HTTP_FORWARD_PROXY"
VS_IP="192.168.2.80"
VS_PORT="8080"

Create a DNS resolver with your DNS server (1.1.1.1 is for demo purposes, using Cloudflare):

tmsh create net dns-resolver RESOLVER_${HTTPBaseName} { forward-zones replace-all-with { . { nameservers replace-all-with { 1.1.1.1:domain { } } } } route-domain 0 }

Create an HTTP profile of type explicit, using the DNS resolver. The parameter default-connect-handling allow enables HTTPS connections without SSL inspection:

tmsh create ltm profile http http_${HTTPBaseName} { defaults-from http-explicit explicit-proxy { default-connect-handling allow dns-resolver RESOLVER_${HTTPBaseName} } proxy-type explicit }

Create the HTTP proxy virtual server:

tmsh create ltm virtual VS_${HTTPBaseName} { destination ${VS_IP}:${VS_PORT} ip-protocol tcp mask 255.255.255.255 profiles replace-all-with { http_${HTTPBaseName} { } tcp } source 0.0.0.0/0 source-address-translation { type automap } translate-address enabled translate-port enabled }

ENABLE SSL FORWARD PROXY

This section is not required to forward HTTPS requests, only to enable SSL inspection of HTTPS requests. Note: the following configuration requires an SSL, Forward Proxy license.

Configure the variables used in the next commands. The variable SSLBaseName is used to create:
Certificate / key pair: ${SSLBaseName}
Client SSL profile: clientssl_${SSLBaseName}
Server SSL profile: serverssl_${SSLBaseName}
Virtual server: VS_${SSLBaseName}

SSLBaseName="SSL_FORWARD_PROXY"
dirname="/var/tmp"
CASubject="/C=FR/O=DEMO\ COMPANY/CN=SSL\ FORWARD\ PROXY\ CA"

Create a self-signed certificate for CA purposes (not available in the WebUI). Self-signed certificates created in the WebUI don't have the CA capability required for SSL forward proxy:

openssl genrsa -out ${dirname}/${SSLBaseName}.key 4094
openssl req -sha512 -new -x509 -days 3650 -key ${dirname}/${SSLBaseName}.key -out ${dirname}/${SSLBaseName}.crt -subj "${CASubject}"

Import the certificates into TMOS:

tmsh install sys crypto key ${SSLBaseName}.key from-local-file ${dirname}/${SSLBaseName}.key;
tmsh install sys crypto cert ${SSLBaseName}.crt from-local-file ${dirname}/${SSLBaseName}.crt;

After the CA certificate is imported, browse to it in the WebUI, retrieve it, and import it into the client browsers' trusted CAs.

Create the SSL profiles for SSL forward proxy:

tmsh create ltm profile client-ssl clientssl_${SSLBaseName} { cert-lookup-by-ipaddr-port disabled defaults-from clientssl mode enabled proxy-ca-cert ${SSLBaseName}.crt proxy-ca-key ${SSLBaseName}.key ssl-forward-proxy enabled }
tmsh create ltm profile server-ssl serverssl_${SSLBaseName} { defaults-from serverssl ssl-forward-proxy enabled }

Create the SSL forward proxy virtual server:

tmsh create ltm virtual VS_${SSLBaseName} { destination 0.0.0.0:https ip-protocol tcp profiles replace-all-with { clientssl_${SSLBaseName} { context clientside } serverssl_${SSLBaseName} { context serverside } http { } tcp { } } source 0.0.0.0/0 translate-address disabled translate-port disabled vlans replace-all-with { http-tunnel } vlans-enabled }

Change the HTTP explicit proxy default connect handling to deny:

tmsh modify ltm profile http http_${HTTPBaseName} explicit-proxy { default-connect-handling deny }

Note: These commands were tested in both 12.1 and 13.1 versions.

Code : No Code
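Once the virtual server is up, a quick way to exercise it from a client machine is curl's `-x` proxy flag. A small sketch, using the demo VS_IP:VS_PORT from the variables above (adjust for your deployment):

```shell
#!/bin/bash
# Demo proxy address from the tmsh commands above -- an assumption, not a fixed value.
PROXY="http://192.168.2.80:8080"

# Fetch a URL through the explicit proxy and print only the HTTP status code.
# An https:// URL exercises the CONNECT path (default-connect-handling allow).
check_via_proxy() {
  curl -skx "$PROXY" -o /dev/null -w '%{http_code}' "$1"
}

# Usage (requires the proxy virtual server to be reachable from this client):
# check_via_proxy http://example.com/      # plain HTTP through the proxy
# check_via_proxy https://example.com/     # CONNECT tunnel through the proxy
```

A 200 through the proxy for both URL forms is a reasonable smoke test that the resolver, profile, and virtual server are all wired together.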
BIGIP LTM Automated Pool Monitor Flap Troubleshooting Script in Bash

Problem this snippet solves: A bash script that collects data when an F5 BIG-IP LTM pool member monitor flaps over a period of time, to help determine the root cause of the BIG-IP monitor health check failure. The script monitors the LTM log; when a new pool-member-down event occurs, it performs the following functions:

1. Turns on LTM bigd debug;
2. Starts a tcpdump capture of the relevant traffic;
3. Turns off bigd debug and terminates the tcpdump process when the timer elapses (timer is configurable);
4. Generates a qkview (optional);
5. Tars up the full log files under the /var/log/ directory (optional).

Script has been tested on v11.x.

Code :

#!/usr/bin/bash
########## identify the log file that the script is monitoring
filename="/var/log/ltm"
########## identify the period of time that debug and tcpdump are running, please change it according to the needs
timer=60
########## IP address of the pool member that flaps
poolMemberIP="10.10.10.229"
########## self IP address the LTM uses to send health monitor traffic
ltmSelfip="10.10.10.248"
########## pool member service port number
poolMemberPort="443"
########## TMOS command to turn on bigd debug
turnonBigdDebug="tmsh modify sys db bigd.debug value enable"
########## TMOS command to turn off bigd debug
turnoffBigdDebug="tmsh modify sys db bigd.debug value disable"
########## BASH command to tar BIG-IP log files
tarLogs="tar -czpf /var/tmp/logfiles.tar.gz /var/log/*"

####### function file check: following code will check if /var/log/ltm exists on the system;
####### if it exists, the script keeps running and performs the subsequent functions
if [ -f $filename ]
then
    echo "/var/log/ltm exists and program is running to collect data when BIG-IP pool member flaps"
else
    ####### if it does not exist, the program is terminated and logs the following message
    echo "no /var/log/ltm file found and program is terminated"
    exit 0
fi
####### function file check ends

###### write timestamp to /var/log/ltm for tracking purposes
echo "$(date) monitoring the log" >> $filename

###### start to monitor /var/log/ltm for new events
tail -f -n 0 $filename | while read -r line
do
    ###### counter for pool down message appearances
    hit=$(echo "$line" | grep -c "$poolMemberIP:$poolMemberPort monitor status down")
    #echo $hit
    if [ "$hit" == "1" ]; then
        ###### display the pool down log event from /var/log/ltm
        echo $line
        ###### show timestamp of debug turning on
        echo "$(date) Turning on system bigd debug"
        ###### turn on bigd debug
        echo $($turnonBigdDebug)
        ###### turn on tcpdump capture
        echo $(tcpdump -ni 0.0:nnn -s0 -w /var/tmp/Monitor.pcap port $poolMemberPort and \(host $poolMemberIP and host $ltmSelfip\)) &
        ###### running timer
        sleep $timer
        ###### show timestamp of debug turning off
        echo "$(date) Turning off system bigd debug"
        ###### turn off bigd debug
        echo $($turnoffBigdDebug)
        ###### terminate tcpdump process
        echo $(killall tcpdump)
        ###### generate qkview, an optional function; enable it by removing the "#" sign
        #echo $(qkview)
        ###### tar log files, an optional function; enable it by removing the "#" sign
        #echo $($tarLogs)
        break
    #else
    #echo "Monitor in progress"
    fi
done

###### show message that the program has ended
echo "$(date) exiting from program"
###### exit from the program
exit 0

Tested this on version: 11.6
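The event detection is just a grep -c of each new log line against the member's "monitor status down" message. A self-contained illustration of that hit counter (the sample log line below is made up for the purpose of the example):

```shell
#!/bin/bash
# Demo values matching the script's variables above.
poolMemberIP="10.10.10.229"
poolMemberPort="443"

# A fabricated LTM log line for illustration -- real lines include more detail.
line="Pool /Common/demo_pool member /Common/10.10.10.229:443 monitor status down."

# grep -c counts matching lines: 1 when this is a down event for our member, else 0.
hit=$(echo "$line" | grep -c "$poolMemberIP:$poolMemberPort monitor status down")
echo "$hit"   # prints 1
```

When `tail -f` feeds the script lines one at a time, each line is tested exactly this way before the debug/capture machinery fires.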
Quick and dirty shell script to find unused certificates

Problem this snippet solves: This has been edited quite a bit since I first posted, so it's probably not as quick and dirty as it was before. This is in response to a question regarding removing unused certificates: https://devcentral.f5.com/questions/how-to-find-the-unused-ssl-certificates-63166

The following bash script will output any installed certificate names to a file, then iterate over each line. If the certificate is not referenced in bigip.conf, either in /config/ or within a partition folder, then it can be reasonably assumed it is not in use and can be safely deleted. The script will give you the option to delete any certs that are not in use and save a UCS archive (just in case). If there are any keys associated with the certificate, these will be deleted too. At the moment, the script will not look for keys without an equivalent cert, e.g. my-cert.key without my-cert.crt, so you may still end up with rogue keys. I'll look to get this updated eventually.

There is an array called ignoreCerts:

ignoreCerts=("f5-irule.crt" "ca-bundle.crt")

Here you can add certificates you may want to ignore. For example, f5-irule.crt is used to sign F5-provided iRules and bigip.conf does not reference it. Add any additional certs to this array to ensure they are not deleted.

Script can be downloaded directly from GitLab using the link below:
https://gitlab.com/stratalabs/f5-devcental/snippets/1863498/raw?inline=false

How to use this snippet: paste into vi, chmod +x myScript.sh, ./myScript.sh

Code :

#!/bin/sh
function buildInstalledCertsArray {
    tmsh save sys config partitions all
    tmsh list sys file ssl-cert | awk '/crt/ {print $4}' | sed '/^[[:space:]]*$/d' > /var/tmp/installedCerts.tmp
    # iterate over tmp file to create array of used certificates
    while read line; do
        for i in "${!ignoreCerts[@]}"; do
            if [[ $line = ${ignoreCerts[$i]} ]]; then
                ignore="true"
            else
                if [[ $ignore != "true" ]];then
                    ignore=""
                else
                    # do not add cert to array if already added
                    if [[ ! " ${instCertsArr[@]} " =~ " ${line} " ]]; then
                        instCertsArr+=("$line")
                    fi
                fi
            fi
        done
    done < /var/tmp/installedCerts.tmp
}

# NOTE: the buildDeleteCertsArray function and the start of buildDeleteKeysArray
# were lost when this post was formatted; only the fragment below survives.
# The complete script is available at the GitLab link above.
function buildDeleteKeysArray {
        ... /dev/null 2>&1)
        if ! [ -z "$hasKey" ];then
            deleteKeys+=("${cert%.*}.key")
        fi
    done
}

function deleteUnusedCerts {
    if [ ${#deleteCerts[@]} -eq 0 ]; then
        echo "-------------------------------------------------------------------------"
        echo "There are no unused certificates to delete, exiting"
        echo "-------------------------------------------------------------------------"
        exit 0
    else
        echo "-------------------------------------------------------------------------"
        echo "The following certs are not in use and can be deleted:"
        for cert in "${deleteCerts[@]}"; do
            echo " ${cert}"
        done
        echo "-------------------------------------------------------------------------"
        read -p "would you like to delete these unused certificates? (y/n)?" answer
        case ${answer:0:1} in
            y|Y )
                createUcsArchive
                echo "-------------------------------------------------------------------------"
                echo "deleting certs..."
                for cert in "${deleteCerts[@]}"; do
                    tmsh delete sys file ssl-cert $cert
                    echo " $cert"
                done
                if [ ${#deleteKeys[@]} -eq 0 ]; then
                    echo "-------------------------------------------------------------------------"
                    echo "no associated keys to delete, exiting"
                    exit 0
                else
                    echo "-------------------------------------------------------------------------"
                    echo "deleting keys..."
                    for key in "${deleteKeys[@]}"; do
                        tmsh delete sys file ssl-key $key
                        echo "$key"
                    done
                    exit 0
                fi
                ;;
            * )
                exit 0
                ;;
        esac
    fi
}

function createUcsArchive {
    echo
    today=`date +%Y-%m-%d.%H.%M.%S`
    echo "Creating UCS archive auto.${today}.ucs"
    tmsh save sys ucs ${today}.ucs
}

# initialise vars
instCertsArr=()
deleteCerts=()
# ignore certs defined here - f5-irule.crt is used to sign F5 iRules
ignoreCerts=("f5-irule.crt" "ca-bundle.crt")
# build installed certificates array - excluding certs to ignore
buildInstalledCertsArray
# check if installed certs are used in bigip.conf (including partitions) - ltm sys files are excluded from results
buildDeleteCertsArray
# build list of associated keys (not all certs will have keys)
buildDeleteKeysArray
# optionally delete unused certs
deleteUnusedCerts

Tested this on version: No Version Found
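The heart of the check is just a grep of bigip.conf for each installed certificate name. A self-contained illustration of that logic, using a stand-in config file rather than a real /config/bigip.conf:

```shell
#!/bin/bash
# Stand-in for /config/bigip.conf (plus partition bigip.conf files).
conf=$(mktemp)
cat > "$conf" <<'EOF'
ltm profile client-ssl /Common/clientssl_demo {
    cert my-site.crt
    key my-site.key
}
EOF

# A cert whose name never appears in the config is a deletion candidate.
unused=()
for cert in my-site.crt old-unused.crt; do
  grep -q "$cert" "$conf" || unused+=("$cert")
done
echo "unused: ${unused[*]}"   # prints: unused: old-unused.crt
rm -f "$conf"
```

On a real unit the candidate list would come from `tmsh list sys file ssl-cert` and the grep would run against every bigip.conf, as the script above does.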
Download a BIG-IP UCS archive with "curl".

Problem this snippet solves: Download a BIG-IP UCS archive using the program "curl" and verify the output file's signature. Tested on 13.1.1.

How to use this snippet: Edit the code to input the hostname of your F5 UI, admin credentials, source UCS file name (defaults to config.ucs), and the output file name.

Code :

#!/bin/bash
#
# Download a UCS archive (across a stable network) with curl.
#
#-------------------------------------------------------------------------
F5_HOST='myhost.example.com'
CREDENTIALS='admin:admin'
FINAL_FILE='/tmp/config.ucs'
ARCHIVE_NAME_ON_SERVER='config.ucs'
DEBUG=''
#-------------------------------------------------------------------------
#
# Get the md5 checksum for the archive.
#
#-------------------------------------------------------------------------
ARCHIVE_CHECKSUM=$(curl -sku $CREDENTIALS -X POST -H "Content-type: application/json" \
    -d "{\"command\":\"run\", \"utilCmdArgs\": \"-c '/usr/bin/md5sum /var/local/ucs/$ARCHIVE_NAME_ON_SERVER'\"}" \
    https://$F5_HOST/mgmt/tm/util/bash | awk -F':' '{print $NF}' | awk -F'"' '{ print $2 }' | awk '{print $1}')
[ -z "$ARCHIVE_CHECKSUM" ] && echo "Failed to get archive signature. Aborting." && exit 1
[ ! -z "$DEBUG" ] && echo "Archive checksum: $ARCHIVE_CHECKSUM"
#-------------------------------------------------------------------------
#
# Find out the size of the archive and the size of the data packet.
#
#-------------------------------------------------------------------------
Content_Range=$(curl -I -kv -u $CREDENTIALS -H 'Content-Type: application/json' -X GET "https://$F5_HOST/mgmt/shared/file-transfer/ucs-downloads/$ARCHIVE_NAME_ON_SERVER" 2>/dev/null | grep "Content-Range: " | cut -d ' ' -f 2)
FIRST_CONTENT_RANGE=$(echo -n $Content_Range | cut -d '/' -f 1 | tr -d '\r')
[ ! -z "$DEBUG" ] && echo -n "FIRST_CONTENT_RANGE: "
[ ! -z "$DEBUG" ] && echo $FIRST_CONTENT_RANGE
NUMBER_OF_LAST_BYTE=$(echo -n $FIRST_CONTENT_RANGE | cut -d '-' -f 2)
[ ! -z "$DEBUG" ] && echo -n "NUMBER_OF_LAST_BYTE: "
[ ! -z "$DEBUG" ] && echo $NUMBER_OF_LAST_BYTE
INITIAL_CONTENT_LENGTH=$NUMBER_OF_LAST_BYTE
CONTENT_LENGTH=$(($NUMBER_OF_LAST_BYTE+1))
[ ! -z "$DEBUG" ] && echo -n "CONTENT_LENGTH: "
[ ! -z "$DEBUG" ] && echo $CONTENT_LENGTH
DFILE_SIZE=$(echo -n $Content_Range | cut -d '/' -f 2 | tr -d '\r' )
[ ! -z "$DEBUG" ] && echo -n "DFILE_SIZE: "
[ ! -z "$DEBUG" ] && echo $DFILE_SIZE
LAST_END_BYTE=$((DFILE_SIZE-1))
CUMULATIVE_NO=0
[ ! -z "$DEBUG" ] && echo "CUMULATIVE_NO: $CUMULATIVE_NO"
SEQ=0
LAST=0
#-------------------------------------------------------------------------
#
# Clean up: Remove the previous output file.
#
#-------------------------------------------------------------------------
/bin/rm $FINAL_FILE 2>/dev/null
#-------------------------------------------------------------------------
#
# Get the archive file.
#
#-------------------------------------------------------------------------
while true
do
    if [ $LAST -gt 0 ]; then
        [ ! -z "$DEBUG" ] && echo 'End of run reached.'
        break
    fi
    if [ $SEQ -eq 0 ]; then
        NEXT_RANGE=$FIRST_CONTENT_RANGE
        CUMULATIVE_NO=$NUMBER_OF_LAST_BYTE
        CONTENT_LENGTH=$INITIAL_CONTENT_LENGTH
    else
        START_BYTE=$(($CUMULATIVE_NO+1))
        END_BYTE=$(($START_BYTE + $CONTENT_LENGTH))
        if [ $END_BYTE -gt $LAST_END_BYTE ]; then
            [ ! -z "$DEBUG" ] && echo "END_BYTE greater than LAST_END_BYTE: $END_BYTE:$LAST_END_BYTE"
            LAST=1
            let END_BYTE=$LAST_END_BYTE
            [ ! -z "$DEBUG" ] && echo "Getting the last data packet."
        fi
        NEXT_RANGE="${START_BYTE}-${END_BYTE}"
        CUMULATIVE_NO=$END_BYTE
    fi
    [ ! -z "$DEBUG" ] && echo "NEXT_RANGE: $NEXT_RANGE"
    let SEQ+=1
    [ ! -z "$DEBUG" ] && echo "SEQ: $SEQ"
    OUTPUT_FILE_NAME="/tmp/$$_downloaded_ucs_archive_file_part_$SEQ";
    curl -H "Content-Range: ${NEXT_RANGE}/${DFILE_SIZE}" -s -k -u $CREDENTIALS -H 'Content-Type: application/json' -X GET "https://$F5_HOST/mgmt/shared/file-transfer/ucs-downloads/$ARCHIVE_NAME_ON_SERVER" -o $OUTPUT_FILE_NAME
    cat $OUTPUT_FILE_NAME >> $FINAL_FILE
    /bin/rm $OUTPUT_FILE_NAME
    [ ! -z "$DEBUG" ] && echo "End of loop $SEQ"
done
#-------------------------------------------------------------------------
#
# Verify downloaded file.
#
#-------------------------------------------------------------------------
FINAL_FILE_CHECKSUM=$(/usr/bin/md5sum $FINAL_FILE | awk '{print $1}')
if [ "$FINAL_FILE_CHECKSUM" == "$ARCHIVE_CHECKSUM" ]; then
    echo "Download completed and verified."
else
    echo "Downloaded file has incorrect checksum."
    exit 1
fi
# END --------------------------------------------------------------------

Tested this on version: 13.0
BASH Script to make UCS and FTP off to remote Server

Problem this snippet solves: Automate a UCS backup and copy it via FTP to a remote FTP server.

How to use this snippet: Run as a script to save and ship off a UCS file, via FTP, from the BIG-IP device to a remote server.

Code :

#!/bin/bash
# set the date variable
TODAY=$(date +'%Y%m%d')
# Set FTP Remote Hostname or IP
FTPHOST="Your IP"
# FTP User name and password
USER='Your User'
PASSWORD='your password'
# ID Hostname for Backup File
host="$HOSTNAME"
# Used to identify the first 3 letters of the hostname which can be used
# to separate backups on the remote FTP server by site ID or device ID
folder=$(echo $HOSTNAME -s|cut -c 1-3)
# run the F5 bigpipe config builder
cd /var/local/ucs
tmsh save sys ucs /var/local/ucs/$host.ucs
# Rename the config.ucs and append the date to the end
NUM=0
until [ "$NUM" -eq 5 ]
do
    if [ -f /var/local/ucs/$host.ucs ]
    then
        mv $host.ucs $host-$TODAY.ucs ; break
    else
        sleep 5
    fi
    NUM=`expr "$NUM" + 1`
done
[[ ! -f /var/local/ucs/$host-$TODAY.ucs ]] && exit 1
# Open the FTP connection and move the file
ftp -inv $FTPHOST <

Tested this on version: 12.0
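The ftp command at the end of the script is cut off in the post above. The batch-mode heredoc it feeds typically looks something like the sketch below -- this is a guess at the intent, not the author's original, and the `cd $folder` line assumes the per-site directory mentioned in the comments already exists on the remote server:

```shell
#!/bin/bash
# Hypothetical completion of the truncated transfer block, wrapped in a
# function so the variables it depends on are explicit.
upload_ucs() {
  local FTPHOST=$1 USER=$2 PASSWORD=$3 folder=$4 file=$5
  ftp -inv "$FTPHOST" <<EOF
user $USER $PASSWORD
binary
cd $folder
put $file
bye
EOF
}

# Example call mirroring the script's variables:
# upload_ucs "$FTPHOST" "$USER" "$PASSWORD" "$folder" "$host-$TODAY.ucs"
```

`-i` suppresses interactive prompting, `-n` disables auto-login so the `user` command can supply credentials, and `binary` avoids corrupting the tar.gz-format UCS file.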
Windows File Share Monitor SMB CIFS

Problem this snippet solves: This external monitor performs a health check of a Windows file share using CIFS/Samba. There is an inbuilt SMB monitor for LTM; however, GTM does not (yet?) have this. See the comments in the script for details on how to implement it. Please post any questions about this monitor in the Advanced Design/Config forum.

Code :

#!/bin/bash
# Samba (CIFS) external monitor script
#
# Use smbget to perform a health check of an SMB/CIFS pool member IP address and port for LTM or GTM
#
# v0.3 - 2011-04-20 - Aaron Hooley - F5 Networks - hooleylists at gmail dot com - Initial version tested on 10.2.1 LTM and GTM
#
# Save this script as /usr/bin/monitors/smb_monitor.bash
# Make executable using chmod 755 /usr/bin/monitors/smb_monitor.bash
#
# Example LTM monitor which references this script:
#
#monitor smb_external_monitor {
#   defaults from external
#   DEBUG "1"
#   FILE "/share/test.txt"
#   PASSWORD "Test123!"
#   run "smb_monitor.bash"
#   SEARCH_STRING "got it"
#   USERNAME "aaron"
#}
#
# Example GTM monitor which references this script:
#monitor "smb_external_monitor" {
#   defaults from "external"
#   interval 10
#   timeout 40
#   probe_interval 1
#   probe_timeout 5
#   probe_num_probes 1
#   probe_num_successes 1
#   dest *:*
#   "SEARCH_STRING" "got it"
#   "DEBUG" "1"
#   run "smb_monitor.bash"
#   "USERNAME" "aaron"
#   "FILE" "/share/test.txt"
#   args ""
#   "PASSWORD" "Test123!"
#   partition "Common"
#}

# Log debug to local0.debug (/var/log/ltm)?
# Check if a variable named DEBUG exists from the monitor definition
# This can be set using a monitor variable DEBUG=0 or 1
if [ -n "$DEBUG" ]
then
    if [ $DEBUG -eq 1 ]
    then
        logger -p local0.debug "EAV `basename $0` (PID $$): Start of PID $$"
        logger -p local0.debug "EAV `basename $0` (PID $$): \$DEBUG: $DEBUG"
    fi
else
    # If the monitor config didn't specify debug, enable/disable it here
    DEBUG=0
    #logger -p local0.debug "EAV `basename $0` (PID $$): \$DEBUG: $DEBUG"
fi

# If user and pass are both not set, then use anonymous/guest access for the server
if [ "x$USERNAME" = "x" ] && [ "x$PASSWORD" = "x" ]
then
    GUEST_FLAG="--guest"
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): No username and no password specified, using guest access"; fi
else
    GUEST_FLAG=""
fi

# Check if a variable named USERNAME exists from the monitor definition
# This can be set using a monitor variable USERNAME=my_username
if [ -n "$USERNAME" ]
then
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Username: $USERNAME"; fi
    USERNAME="-u $USERNAME"
else
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): No username specified"; fi
    USERNAME=''
fi

# Check if a variable named PASSWORD exists from the monitor definition
# This can be set using a monitor variable PASSWORD=my_password
if [ -n "$PASSWORD" ]
then
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Password: $PASSWORD"; fi
    # Set the password flag
    PASSWORD="-p $PASSWORD"
else
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): No password specified"; fi
    PASSWORD=''
fi

# Check if a variable named FILE exists from the monitor definition
# This can be set using a monitor variable FILE=/path/to/file.txt
if [ -n "$FILE" ]
then
    # Replace \ with / for *nix paths
    FILE=${FILE//\\/\//replacement}
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Checking \$FILE: $FILE"; fi
else
    FILE="/"
    logger -p local0.notice "EAV `basename $0` (PID $$): \$FILE is not defined, checking smb://$IP/"
fi

# Remove IPv6/IPv4 compatibility prefix (LTM passes addresses in IPv6 format)
IP=`echo $1 | sed 's/::ffff://'`
# Save the port for use in the shell command. smbget doesn't seem to support a port other than 445.
PORT=$2
if [ "$PORT" != 445 ]
then
    logger -p local0.debug "EAV `basename $0` (PID $$): Port $PORT will be ignored. This monitor only supports port 445 due to smbget limitation."
fi

# Check if there is a prior instance of the monitor running
pidfile="/var/run/`basename $0`.$IP.$PORT.pid"
if [ -f $pidfile ]
then
    kill -9 `cat $pidfile` > /dev/null 2>&1
    logger -p local0.debug "EAV `basename $0` (PID $$): Exceeded monitor interval, needed to kill past check for ${IP}:${PORT} with PID `cat $pidfile`"
fi
# Add the current PID to the pidfile
echo "$$" > $pidfile

if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Running for ${IP}:${PORT}"; fi

# Send the request and check the response. If we have a string to search for, use grep to look for it.
# Check if a variable named SEARCH_STRING exists from the monitor definition
# This can be set using a monitor variable SEARCH_STRING=my_string
if [ -n "$SEARCH_STRING" ]
then
    SUCCESS_STATUS=0
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Checking ${IP}${FILE} for "$SEARCH_STRING" with status of $SUCCESS_STATUS using\
 smbget $USERNAME $PASSWORD $GUEST_FLAG --nonprompt --quiet --stdout smb://${IP}${FILE} | grep \"$SEARCH_STRING\" 1>/dev/null 2>/dev/null"; fi
    smbget $USERNAME $PASSWORD $GUEST_FLAG --nonprompt --quiet --stdout smb://${IP}${FILE} | grep $SEARCH_STRING 2>&1 > /dev/null
else
    SUCCESS_STATUS=1
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Checking ${IP}${FILE} with status of $SUCCESS_STATUS using\
 smbget $USERNAME $PASSWORD $GUEST_FLAG --nonprompt --quiet --stdout smb://${IP}${FILE} 1>/dev/null 2>/dev/null"; fi
    smbget $USERNAME $PASSWORD $GUEST_FLAG --nonprompt --quiet --stdout smb://${IP}${FILE} 1>/dev/null 2>/dev/null
fi

# Check if the command ran successfully
#
# For some reason, smbget returns a status of 1 for success which is the opposite of typical commands. See this page (or its cache) for details:
# http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6828364
# http://webcache.googleusercontent.com/search?q=cache:Ef3KgrvGnygJ:bugs.opensolaris.org/bugdatabase/view_bug.do%3Fbug_id%3D6828364+&cd=2&hl=en&ct=clnk&gl=us
#
# Note that any standard output will result in the script execution being stopped
# So do any cleanup before echoing to STDOUT
if [ $? -eq $SUCCESS_STATUS ]
then
    rm -f $pidfile
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Succeeded for ${IP}:${PORT}"; fi
    echo "UP"
else
    rm -f $pidfile
    if [ $DEBUG -eq 1 ]; then logger -p local0.debug "EAV `basename $0` (PID $$): Failed for ${IP}:${PORT}"; fi
fi
Super HTTP Monitor

Problem this snippet solves: Super HTTP supports GET and POST requests, HTTP 1.0 and 1.1, Host headers, User-Agent headers, HTTP and HTTPS. It supports cookies. It supports authentication (basic, digest, and ntlm). It supports checking through a proxy. Most notably it supports chains of HTTP requests with cookie preservation between them, which I think will be very useful for LTM and GTM, allowing you to validate end-to-end functionality. Note that this monitor will do just about whatever you need, but if you just want a simple HTTP monitor, try the built-in monitor first. Although this is fairly efficient (i.e. doesn't do more work than it needs to), it can never be anywhere near as efficient as the built-in monitor. Note that the native HTTP/S monitors now support NTLM / NTLMv2 authentication.

How to use this snippet: Create a new file containing the code below in /usr/bin/monitors on the LTM filesystem. Permissions on the file must be 700 or better, giving root rwx access to the file. See comments within the code for documentation.

Code :

#!/bin/bash
# (c) Copyright 2007 F5 Networks, Inc.
# Kirk Bauer
# Version 1.3, Aug 5, 2010

# Revision History
#   8/5/10: Version 1.3: Fixed problem with cookie parsing
#   12/16/09: Version 1.2: Fixed problem with NTLM health checks
#   3/11/07: Version 1.1: Added ability for multiple regexes in MATCH_REGEX
#   2/28/07: Version 1.0.1: Initial Release

# When defining an external monitor using this script, the argument
# field may contain the path for the request. In addition a large
# number of variables may be defined as described below.
#
# The argument field can contain an optional path for the request,
# if nothing is specified the default path of / is assumed. This
# is also where you can put query parameters. Some examples:
#   /index.asp
#   /verify_user.html?user=testuser
#
# This script can retrieve a chain of URLs. This is useful for two
# scenarios. The first is if you want to check a number of different
# pages on a site, you can use one custom monitor that checks all of
# them instead of defining a bunch of separate monitors.
#
# The other scenario is when you want to perform a test that requires
# more than one page in sequence, such as something that tracks state
# with cookies. In order to do a test login on some sites, for example,
# you must first go to one URL where you are assigned a cookie, then
# you must login through another URL with that cookie along with your
# username/password. This script automatically stores and sends cookies
# for chains of requests.
#
# The next section describes per-request variables. These are options
# that can be specified a number of times for a number of separate
# requests. In the most basic case, you must specify the URI_PATH for
# each request. The only exception is that the last path is taken from
# the "argument" string if URI_PATH is not specified. So, to do three
# requests in a row, specify:
#   URI_PATH_1=/path/to/request1
#   STATUS_CODE_1=200
#   URI_PATH_2=/path/to/request2
#   STATUS_CODE_2=200
#   URI_PATH=/path/to/request3
#   MATCH_REGEX="you are logged in"
#
# It is important to understand that there is always at least one
# request and that last request uses variables with no number appended
# to them. All other requests are done in numerical order before that
# last request. If you have more than 10 requests you need to use
# 2-digit numbers instead of 1 in the example above.
#
#############################################################
# Per-request Variables
# (names provided are for the last (possibly only) request,
# for other requests append a _# on the end as described
# above).
#############################################################
# Define the request:
#   URI_PATH: the full path you want to request, such as /index.html.
#     This is required for every request, except that it need not
#     be defined for the last (sometimes only) request if you specify
#     the path in the "argument" field. You may include a query string
#     at the end, like /index.html?test=true
#   QUERY_STRING: You may specify the GET query string here instead of
#     appending it to the URI_PATH. Example:
#       name1=value1&name2=value2
#   NODE_ADDR: The IP address to connect to. By default this will be
#     the pool member IP that is being checked. Can also be a hostname
#     if DNS resolution works on the BIG-IP.
#   NODE_PORT: The port to connect to. By default this will be the
#     port of the pool member being checked.
#   PROTOCOL: Either http or https. If not specified, assumed to be
#     http unless the port is 443.
#   POST_DATA: You may define post data to make this a POST request,
#     such as:
#       name1=value1&name2=value2
#   HOST_HEADER: The host header to send to the remote server. Default
#     will be the value of NODE_ADDR.
#   REFERER: The referer URL to send in the request (this variable is
#     misspelled just like the HTTP header is).
#
# Authentication options for each request:
#   USERNAME: provide this username to the webserver
#   PASSWORD: provide this password to the webserver
#   AUTHTYPE: "basic", "digest", or "ntlm" (default is basic)
#
# The following variables may be defined and determine what constitutes
# an "up" status. If none of these are specified, the script will return
# "up" only if the web server returns a status of 200 (OK). Any or all
# of these may be specified.
#   HTTPS_HOSTNAME: you may optionally specify the hostname that the
#     certificate should present for https checks.
#   STATUS_CODE: numerical status code to match
#   NOT_STATUS_CODE: numerical status code that shouldn't be matched
#   MATCH_REGEX: regular expression that should be matched in the headers
#     or body. OPTIONAL: multiple regexes to match may be specified using
#     the format:
#       MATCH_REGEX = &regex1&regex2&regex3
#     If using multiple regexes, you must start the string with & and
#     the regexes themselves cannot contain the & character
#   NOT_MATCH_REGEX: regular expression that should not be matched in the
#     headers or body.
#
#############################################################
# Cookies
#############################################################
# You can set any number of cookies by specifying one or more variables
# named COOKIE_Name. So if you set the variables COOKIE_country = usa
# and COOKIE_language = en, then the cookie string would be
# "Cookie: country=usa; language=en". These cookies will be sent for
# every request. If you are doing multiple requests then any cookies
# sent by the server will replace any existing cookie of the same name
# in future requests. This script does not consider domain or path
# but instead just sends all cookies for all requests.
#
#############################################################
# Global Variables
#############################################################
# HTTP/HTTPS Options (apply to all requests):
#   USER_AGENT: set to the user agent string you want to send.
#     Default is something similar to:
#       curl/7.15.3 (i686-redhat-linux-gnu) libcurl/7.15.3 OpenSSL/0.9.7i zlib/1.1.4
#   HTTP_VERSION: set to "1.0" or "1.1", defaults to 1.1.
#   SSL_VERSION: set to "tlsv1", "sslv2", or "sslv3".
#   CIPHERS: override SSL ciphers that can be used (see "man
#     ciphers"), default is "DEFAULT".
#
# Global Proxy Settings (optional):
#   PROXY_HOST: IP address of the proxy to use (or hostname if DNS resolution works)
#   PROXY_PORT: Port to connect to on the proxy (required if PROXY_HOST is specified)
#   PROXY_TYPE: "http", "socks4", or "socks5" (defaults to http)
#   PROXY_AUTHTYPE: "basic", "digest", or "ntlm" (basic is default if a username is specified)
#   PROXY_USERNAME: username to provide to the proxy
#   PROXY_PASSWORD: password to provide to the proxy
#
# Other Variables:
#   LOG_FAILURES: set to "1" to enable logging of failures which will
#     log monitor failures to /var/log/ltm (viewable in the GUI under
#     System -> Logs -> Local Traffic(tab)
#   LOG_COOKIES: set to "1" to log cookie activity to /var/log/ltm (also
#     logs each request as it is made).
#   DEBUG: set to "1" to create .output and .trace files in /var/run for
#     each request for debugging purposes.

SCRIPTNAME=${MON_TMPL_NAME:-$0}

# Collect arguments
global_node_ip=$(echo "$1" | sed 's/::ffff://')
global_port="${2:-80}"
[ -z "$URI_PATH" ] && URI_PATH="${3:-/}"

# Handle PID file
pidfile="/var/run/$SCRIPTNAME.$global_node_ip.$global_port.pid"
tmpfile="/var/run/$SCRIPTNAME.$global_node_ip.$global_port.tmp"
[ -f "$pidfile" ] && kill -9 $(cat $pidfile) >/dev/null 2>&1
rm -f "$pidfile" ; echo "$$" > "$pidfile"
rm -f "$tmpfile"

fail () {
    [ -n "$LOG_FAILURES" ] && [ -n "$*" ] && logger -p local0.notice "$SCRIPTNAME($global_node_ip:$global_port): $*"
    rm -f "$tmpfile"
    rm -f "$pidfile"
    exit 1
}

make_request () {
    # First argument is blank for last request or "_#" for others
    local id="$1"
    # Collect the arguments to use for this request, first start with ones
    # that have default values if not specified
    local node_ip="$global_node_ip"
    [ -n "$(eval echo \$NODE_ADDR$id)" ] && node_ip="$(eval echo \$NODE_ADDR$id)"
    local port="$global_port"
    [ -n "$(eval echo \$NODE_PORT$id)" ] && port="$(eval echo \$NODE_PORT$id)"
    local protocol="http"
    [ "$port" -eq "443" ] && protocol="https"
    [ -n "$(eval echo \$PROTOCOL$id)" ] &&
protocol="$(eval echo \$PROTOCOL$id)" # Now the rest come straight from the environment variables local authtype="$(eval echo \$AUTHTYPE$id)" local username="$(eval echo \$USERNAME$id)" local password="$(eval echo \$PASSWORD$id)" local host_header="$(eval echo \$HOST_HEADER$id)" local referer="$(eval echo \$REFERER$id)" local uri_path="$(eval echo \$URI_PATH$id)" local query_string="$(eval echo \$QUERY_STRING$id)" [ -n "$query_string" ] && query_string="?$query_string" local post_data="$(eval echo \$POST_DATA$id)" local https_hostname="$(eval echo \$HTTPS_HOSTNAME$id)" local status_code="$(eval echo \$STATUS_CODE$id)" local not_status_code="$(eval echo \$NOT_STATUS_CODE$id)" local match_regex="$(eval echo \$MATCH_REGEX$id)" local not_match_regex="$(eval echo \$NOT_MATCH_REGEX$id)" # Determine what we are checking for [ -z "$match_regex" ] && [ -z "$not_match_regex" ] && [ -z "$status_code" ] && [ -z "$not_status_code" ] && status_code=200 [ -n "$https_hostname" ] && [ "$protocol" == "https" ] && { # The cert will contain a hostname but curl is going by IP so it will fail but give us the hostname in the error local actual_ssl_hostname=$(curl $global_args --cacert '/config/ssl/ssl.crt/ca-bundle.crt' "$protocol://$node_ip:$port$uri_path$query_string" 2>&1 | sed -n "s/^.*SSL: certificate subject name '\(.*\)' does not match target host name.*$/\1/p") [ "$actual_ssl_hostname" == "$https_hostname" ] || fail "HTTPS Hostname '$actual_ssl_hostname' does not match HTTPS_HOSTNAME$id=$https_hostname" } # Determine argument string for curl local args="" [ -n "$host_header" ] && args="$args --header 'Host: $host_header'" [ -n "$referer" ] && args="$args --referer '$referer'" [ -n "$post_data" ] && args="$args --data '$post_data'" # IP used in URL will never match hostname in cert, use HTTPS_HOSTNAME to check separately [ "$protocol" == "https" ] && args="$args --insecure" [ -n "$DEBUG" ] && args="$args --trace-ascii '$tmpfile.trace$id'" [ -n "$username" ] && { # Specify 
authentication information args="$args --user '$username:$password'" [ "$authtype" == "digest" ] && args="$args --digest" [ "$authtype" == "ntlm" ] && args="$args --ntlm" } # Determine cookies to send, if any local cookie_str="" for i in ${!COOKIE_*} ; do cookie_name=$(echo $i | sed 's/^COOKIE_//') cookie_str="$cookie_str; $cookie_name=$(eval echo "$"$i)" done cookie_str="$(echo "$cookie_str" | sed 's/^; //')" [ -n "$LOG_COOKIES" ] && logger -p local0.notice "$SCRIPTNAME($global_node_ip:$global_port): $protocol://$node_ip:$port$uri_path$query_string: cookie string [$cookie_str]" [ -n "$cookie_str" ] && args="$args --cookie '$cookie_str'" # Make request eval curl -i $global_args $args "'$protocol://$node_ip:$port$uri_path$query_string'" >"$tmpfile" 2>/dev/null || fail "$protocol://$node_ip:$port$uri_path$query_string: Request failed: $!" [ -n "$DEBUG" ] && cp "$tmpfile" "$tmpfile.debug$id" # Validate Check Conditions [ -n "$status_code" ] || [ -n "$not_status_code" ] && { local actual_status_code=$(head -n 1 "$tmpfile" | sed "s/^HTTP\/.\.. \([0123456789][0123456789][0123456789]\) .*$/\1/") [ "$actual_status_code" -eq 401 ] && [ "$authtype" == "ntlm" ] && { # Skip past 401 Unauthorized response and look at second response code actual_status_code=$(grep '^HTTP/' "$tmpfile" | tail -n 1 | sed "s/^HTTP\/.\.. 
\([0123456789][0123456789][0123456789]\) .*$/\1/") } [ -n "$status_code" ] && [ "$actual_status_code" -ne "$status_code" ] && fail "$protocol://$node_ip:$port$uri_path$query_string: Status code ($actual_status_code) not what was expected (STATUS_CODE$id=$status_code)" [ -n "$not_status_code" ] && [ "$not_status_code" -eq "$status_code" ] && fail "$protocol://$node_ip:$port$uri_path$query_string: Status code ($actual_status_code) was what was not expected (NOT_STATUS_CODE$id=$not_status_code)" } [ -n "$match_regex" ] && { if echo "$match_regex" | grep -q '^&' ; then IFS="&" match_regex="$(echo "$match_regex" | sed 's/^&//')" for regex in $match_regex ; do egrep -q "$regex" "$tmpfile" || fail "$protocol://$node_ip:$port$uri_path$query_string: Did not find [MATCH_REGEX$id=$regex] in response" done unset IFS else egrep -q "$match_regex" "$tmpfile" || fail "$protocol://$node_ip:$port$uri_path$query_string: Did not find [MATCH_REGEX$id=$match_regex] in response" fi } [ -n "$not_match_regex" ] && egrep -q "$not_match_regex" "$tmpfile" && fail "$protocol://$node_ip:$port$uri_path$query_string: Found [NOT_MATCH_REGEX$id=$not_match_regex] in response" # Store cookies from response for next request (if any) [ -z "$id" ] && return `sed -n "s/^Set-Cookie: \([^=]\+\)=\([^;]\+\);.*$/export COOKIE_\1='\2';/ip" "$tmpfile"` } # Build global option string global_args="" [ "$HTTP_VERSION" == "1.0" ] && global_args="$global_args --http1.0" [ "$SSL_VERSION" == "tlsv1" ] && global_args="$global_args --tlsv1" [ "$SSL_VERSION" == "sslv2" ] && global_args="$global_args --sslv2" [ "$SSL_VERSION" == "sslv3" ] && global_args="$global_args --sslv3" [ -n "$USER_AGENT" ] && global_args="$global_args --user-agent '$USER_AGENT'" [ -n "$CIPHERS" ] && global_args="$global_args --ciphers '$CIPHERS'" [ -n "$PROXY_HOST" ] && [ -n "$PROXY_PORT" ] && { if [ "$PROXY_TYPE" == "socks4" ] ; then global_args="$global_args --socks4 '$PROXY_HOST:$PROXY_PORT'" elif [ "$PROXY_TYPE" == "socks5" ] ; then 
global_args="$global_args --socks5 '$PROXY_HOST:$PROXY_PORT'" else global_args="$global_args --proxy '$PROXY_HOST:$PROXY_PORT'" fi [ -n "$PROXY_USERNAME" ] && { global_args="$global_args --proxy-user '$PROXY_USERNAME:$PROXY_PASSWORD'" [ "$PROXY_AUTHTYPE" == "digest" ] && global_args="$global_args --proxy-digest" [ "$PROXY_AUTHTYPE" == "ntlm" ] && global_args="$global_args --proxy-ntlm" } } requests="$(echo ${!URI_PATH_*} | sort)" for request in $requests ; do id=$(echo $request | sed 's/^URI_PATH//') make_request "$id" done # Perform last request make_request "" # If we got here without calling fail() and exiting, status was good rm -f "$tmpfile" echo "up" rm -f "$pidfile" exit 0530Views0likes3CommentsBIG-IP Backup Script In Bash
Problem this snippet solves:

A script that backs up the F5 configuration, runs daily, and FTPs the backup to a defined remote server. You will need to change the ftphost, user, and password variables to reflect the FTP server you are connecting to, as well as change the directory where the script has "cd /F5/JAX1" to reflect the client's directory structure.

Cron job setup

To ensure a daily backup, save the script source below in /etc/cron.daily.

Code :

#!/bin/bash

# set the date variable
today=$(date +'%Y%m%d')

ftphost="ADD FTP IP HERE"
user="ADD FTP USERNAME HERE"
password="ADD FTP PASSWORD HERE"

# run the F5 bigpipe config builder
cd /
bigpipe config save /config.ucs

# Rename the config.ucs and append the date to the end
NUM=0
until [ "$NUM" -eq 5 ]
do
    if [ -f /config.ucs ]
    then
        mv config.ucs config-$today.ucs ; break
    else
        sleep 5
    fi
    NUM=`expr "$NUM" + 1`
done

[[ ! -f /config-$today.ucs ]] && exit 1

# Open the FTP connection and move the file
ftp -in <

738Views0likes3Comments
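The backup script's upload step is cut off at "ftp -in <" in the listing above (the heredoc after it did not survive the page formatting). As a rough sketch only — the exact commands are not in the source — a batch upload of the dated UCS typically looks like the following. The server details are placeholder assumptions, and /F5/JAX1 is the client directory mentioned in the snippet's description:

```shell
# Hypothetical reconstruction of the truncated "ftp -in <<EOF" upload --
# NOT the original author's code. ftphost/user/password mirror the
# variables defined at the top of the script; the values here are fake.
ftphost="192.0.2.10"
user="f5backup"
password="secret"
today=$(date +'%Y%m%d')

# Print the command batch that would be piped to "ftp -in <<EOF ... EOF":
cat <<EOF
open $ftphost
user $user $password
cd /F5/JAX1
put /config-$today.ucs
bye
EOF
```

The -i flag suppresses interactive prompting for multi-file transfers and -n disables auto-login, which is why the user and password are issued explicitly inside the heredoc.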