BigIP UCS Backup script; looking for some guidance on design
Greetings, I've begun work on a bash script intended to be run locally on each F5 appliance via a cron task. The criteria for this script are:

- Saves the UCS with encryption, using a {Hostname}-YYYY-MM-DD.ucs naming format.
- Uploads the generated UCS file to an SFTP server. Native SFTP commands are a MUST; SCP will not work due to its reliance on a command shell/login.
- Rolls over after X number of saved files in order to prevent storage exhaustion on the target SFTP server. I strongly doubt any form of deduplication will work with an encrypted UCS.
- Sends an email notification if the backup failed.

I've so far written a script that addresses the first three criteria and have been waiting for those to go through their paces in testing before adding the notification logic. The commands and logic being used have gotten more complex the further I've gotten into the script's development. This has led to some concerns about whether this is the best approach, given that the F5 BIG-IP systems are vendor appliances, and to the worry that there's a large possibility commands may stop working correctly after a major (X.) version update, requiring an overhaul of a fairly complex script.

I'm almost wondering if setting up an AWX/Tower host in our environment and then using the f5networks Ansible module for the majority of the heavy lifting, followed by some basic logic for file rotation, would be a better long-term approach. Ansible would also be a bit more flexible in that I wouldn't have to hardcode values that diverge between individual hosts into the script itself. It's not clear, however, whether the f5networks Ansible module supports SFTP, as I only see SCP referenced: https://my.f5.com/manage/s/article/K35454259

Advice and insight is much appreciated!

#!/bin/bash
# F5 backup script based on https://my.f5.com/manage/s/article/K000138297

# User-configurable Variables
UCS_DIR="/var/ucs"
REMOTE_USER="svc_f5backup"
REMOTE_HOST="myhost.contoso.local"
REMOTE_DIR="/data/f5/dev"
SSH_KEY="/shared/scripts/f5-backup/mykeys/f5user"
ENCRYPTION_PASSPHRASE='' # Blank out the value to not encrypt the UCS backup.
LOG_FILE="/var/log/backupscript.log"
MAX_FILES=45 # Maximum number of backup files to keep

# Dynamic Variables (do not edit)
HOSTNAME=$(/bin/hostname)
DATE=$(date +%Y-%m-%d)
UCS_FILE="${UCS_DIR}/${HOSTNAME}-${DATE}.ucs"

# Start logging
echo "$(date +'%Y-%m-%d %H:%M:%S') - Starting backup script." >> ${LOG_FILE}

# Save the UCS backup file
if [ -n "${ENCRYPTION_PASSPHRASE}" ]; then
    echo "Running the UCS save operation (encrypted)." >> ${LOG_FILE}
    tmsh save /sys ucs ${UCS_FILE} passphrase "${ENCRYPTION_PASSPHRASE}" >> ${LOG_FILE} 2>&1
else
    echo "Running the UCS save operation (not encrypted)." >> ${LOG_FILE}
    tmsh save /sys ucs ${UCS_FILE} >> ${LOG_FILE} 2>&1
fi

# Create a temporary batch file for SFTP commands
BATCH_FILE=$(mktemp)
echo "cd ${REMOTE_DIR}" > $BATCH_FILE
echo "put ${UCS_FILE}" >> $BATCH_FILE
echo "bye" >> $BATCH_FILE

# Log that the transfer is starting
echo "Starting SFTP transfer." >> ${LOG_FILE}

# Execute SFTP command and capture the output
transfer_command_output=$(sftp -b "$BATCH_FILE" -i "${SSH_KEY}" -oBatchMode=no "${REMOTE_USER}@${REMOTE_HOST}" 2>&1)
transfer_status=$?

# Extract the "Transferred:" line
transfer_summary=$(echo "$transfer_command_output" | grep "^Transferred: sent")

if [ $transfer_status -eq 0 ]; then
    if [ -n "$transfer_summary" ]; then
        echo "UCS file copied to the SFTP server successfully (remote:${REMOTE_HOST}:${REMOTE_DIR}/${UCS_FILE}). $transfer_summary" >> ${LOG_FILE}
    else
        echo "UCS file copied to the SFTP server successfully (remote:${REMOTE_HOST}:${REMOTE_DIR}/${UCS_FILE}). Please check the log for details." >> ${LOG_FILE}
    fi
else
    echo "$transfer_command_output" >> ${LOG_FILE}
    echo "UCS SFTP copy operation failed. Please read the log for details." >> ${LOG_FILE}
    rm -f $BATCH_FILE
    exit 1
fi

# Clean up the temporary batch file
rm -f $BATCH_FILE

# Rollover backup files if the number exceeds MAX_FILES
echo "Checking and maintaining the maximum number of backup files." >> ${LOG_FILE}

# Create a list of files to delete
sftp -i "${SSH_KEY}" -oBatchMode=no "${REMOTE_USER}@${REMOTE_HOST}" <<EOF > file_list.txt
cd ${REMOTE_DIR}
ls -1 ${HOSTNAME}-*.ucs
bye
EOF

# Filter out unwanted lines and sort the files alphanumerically
grep -v 'sftp>' file_list.txt | grep -v '^cd ' | sort > filtered_file_list.txt

# Determine files to delete
files_to_delete=$(head -n -${MAX_FILES} filtered_file_list.txt)

if [ -n "$files_to_delete" ]; then
    # Create a temporary batch file for SFTP cleanup commands
    CLEANUP_BATCH_FILE=$(mktemp)
    echo "cd ${REMOTE_DIR}" > $CLEANUP_BATCH_FILE
    for file in $files_to_delete; do
        echo "Deleting $file" >> ${LOG_FILE}
        echo "rm $file" >> $CLEANUP_BATCH_FILE
    done
    echo "bye" >> $CLEANUP_BATCH_FILE

    # Execute SFTP cleanup command and log the output
    cleanup_command_output=$(sftp -b "$CLEANUP_BATCH_FILE" -i "${SSH_KEY}" -oBatchMode=no "${REMOTE_USER}@${REMOTE_HOST}" 2>&1)
    echo "$cleanup_command_output" >> ${LOG_FILE}

    # Clean up the temporary batch file
    rm -f $CLEANUP_BATCH_FILE
else
    echo "No files to delete. Total files within limit." >> ${LOG_FILE}
fi

# Clean up the file lists
rm -f file_list.txt filtered_file_list.txt

# Delete the local copy of the UCS archive
tmsh delete /sys ucs ${UCS_FILE} >> ${LOG_FILE} 2>&1

echo "$(date +'%Y-%m-%d %H:%M:%S') - Backup script completed." >> ${LOG_FILE}
Issue with a simple Bash Script for adding an iRule to a list of Virtual Servers.
Hello Community, I am having an issue with a bash script for an F5 BIG-IP load balancer, intended to read and iterate over a .txt list of virtual server names, look up the partition for a given VS, and add an iRule to it. When running the script I am only hitting the outermost 'else' statement for being unable to find the partition and VS name. My script logic is based on F5 Support solution K41961653:

#!/bin/bash

# Prompt the user for the iRule name and read it into the 'new' variable
echo "Please enter the iRule name:"
read new

noneRules='rules none'

while IFS= read -r vs_name; do
    # Retrieve the partition and virtual server name
    full_vs_info=$(tmsh -c "cd /; list ltm virtual recursive" | grep "$vs_name" | grep -m1 "^ltm virtual")
    echo "Full VS Info Debug: $full_vs_info"

    # Extract the partition and virtual server name from the retrieved information
    if [[ $full_vs_info =~ ltm\ virtual\ (.+)/(.+) ]]; then
        partition="${BASH_REMATCH[1]}"
        vs_name="${BASH_REMATCH[2]}"

        # Format the tmsh command to include the partition
        rule=$(tmsh list ltm virtual /$partition/$vs_name rules | egrep -v "\{|\}" | xargs)

        if [[ "$rule" == "$noneRules" ]]; then
            tmsh modify ltm virtual /$partition/$vs_name rules { $new }
            echo "iRule $new was added to $vs_name in partition $partition"
        else
            # tmsh modify ltm virtual /$partition/$vs_name rules { $rule $new }
            echo "iRules $rule were conserved and added $new to $vs_name in partition $partition"
        fi
    else
        echo "Could not find partition and virtual server name for $vs_name"
    fi
done < /shared/tmp/test_list.txt

tmsh save sys config

As far as I was able to troubleshoot, the problem appears to be with line 11 of my script, where I attempt to assign the string "ltm virtual SomePartition/VS_Example.com {" to the full_vs_info variable using:

full_vs_info=$(tmsh -c "cd /; list ltm virtual recursive" | grep "$vs_name" | grep -m1 "^ltm virtual")

When I run the tmsh command [tmsh -c "cd /; list ltm virtual recursive" | grep "VS_Example.com" | grep -m1 "^ltm virtual"] on its own from the F5's bash shell, I get the output I expect:

"ltm virtual SomePartition/VS_Example.com {"

However, when I run the script with the debug echo, it only outputs "Full VS Info Debug:", and the script ends with "Could not find partition and virtual server name for $vs_name" and a sys config save.

I am attempting to run this on a BIG-IP, version 15.1.10.2, build 0.44.2. I am quite new to both bash scripting and F5 LBs. All feedback and criticism is highly appreciated! Thanks in advance!
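One thing worth checking at that point (an assumption, not something confirmed above): if test_list.txt was created or edited on Windows, every name read by the while loop carries a trailing carriage return, which makes the grep match nothing even though the same command works when typed by hand. A quick hedged check and fix:

# Show hidden CR characters (they appear as ^M) in the input file
cat -A /shared/tmp/test_list.txt | head

# Inside the loop, strip a trailing CR before using the name
vs_name="${vs_name%$'\r'}"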
/var/ directory running out of space
I'm getting a broadcast message on the bash CLI that warns me about the /var/ directory running out of room:

011d0004:3: Disk partition /var has only 14% free

There are four Tomcat files of identical size taking up almost a GB of space, but I don't know what purpose they serve.

-rw-r--r--. 1 tomcat tomcat 240192271 2017-11-05 19:51 1509911444508.upload
-rw-r--r--. 1 tomcat tomcat 240192271 2017-11-05 10:46 1509878765278.upload
-rw-r--r--. 1 tomcat tomcat 240192271 2017-11-05 10:39 1509878372606.upload
-rw-r--r--. 1 tomcat tomcat 240192271 2017-11-05 10:38 1509878267154.upload

Does anyone know what the purpose of these files is, and whether or not it's safe to remove them?
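A hedged sanity check before deleting anything, assuming lsof is available on the unit (the find paths are generic, since the post doesn't say exactly where the .upload files live):

# See which directories under /var are using the most space (sizes in MB)
du -xsm /var/* 2>/dev/null | sort -rn | head

# Locate the large .upload files and confirm nothing still holds them open
find /var -name '*.upload' -size +100M -exec ls -lh {} \;
find /var -name '*.upload' -exec lsof {} \; 2>/dev/null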
External monitor Realsec Cryptosec HSM
Hi all, I am trying to monitor an HSM appliance using the external monitor template provided on this link: link text

I modified this part in the template so it should send CCCCNC and expect the response 00000000; I am really not sure if this is the correct line. I uploaded the script and attached it to the pool, and the pool is available and actively sending monitor requests towards the pool members. However, looking at the payload in Wireshark (Follow TCP stream) there is no data being sent.

# Send the request and check the response
echo -n 'CCCCNC' | nc $IP $PORT | grep "00000000" 2>&1 > /dev/null

Someone over here with some bash scripting experience? Thanks in advance.
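A hedged observation, not something stated above: with F5 external (EAV) monitors the member is only marked up when the script prints something to stdout, so a line that sends all output to /dev/null can never mark it up regardless of what the HSM answers. A minimal sketch of how that check is usually written, assuming $1/$2 carry the member IP and port as in the standard EAV template:

#!/bin/bash
# Strip the IPv6-mapped prefix from the node IP, per the common EAV template (an assumption here)
IP=$(echo "$1" | sed 's/::ffff://')
PORT="$2"

# Print "up" only when the expected response string is seen
if echo -n 'CCCCNC' | nc "$IP" "$PORT" | grep -q "00000000"; then
    echo "up"
fi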
Combine and Modify Grep Output
Hi, I'm trying to write a script to output virtual server stats. Can anyone help me modify the output? Right now the grep output looks like this:

Ltm::Virtual Server: www.website.com.443
  Bits In    2.1G    0  -
  Bits Out   2.3G    0  -
Ltm::Virtual Server: www.website.com.80
  Bits In    1.4G    0  -
  Bits Out   740.8M  0  -

I need help making the output look like this:

www.website.com.443
Bits In 2.1G
Bits Out 2.3G
www.website.com.80
Bits In 1.4G
Bits Out 740.8M

Afterwards, I can diff files from different dates and figure out which VIPs are not being used.
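A hedged sketch of one way to get that shape, assuming the stats come from "tmsh show ltm virtual" and the field layout matches the sample above:

# Print each virtual server name, then just the label and value for Bits In/Out
tmsh show ltm virtual | awk '
    /^Ltm::Virtual Server:/ { print $NF }
    /Bits (In|Out)/         { print $1, $2, $3 }
'

Redirecting that to a dated file on each run should give outputs that diff cleanly between dates.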
Windows-File-Share-Monitor-SMB-CIFS
Hi, I am trying to use the: https://devcentral.f5.com/wiki/AdvDesignConfig.Windows-File-Share-Monitor-SMB-CIFS.ashx?lc=1

In the article the monitor for GTM is detailed as:

monitor "smb_external_monitor" {
   defaults from "external"
   interval 10
   timeout 40
   probe_interval 1
   probe_timeout 5
   probe_num_probes 1
   probe_num_successes 1
   dest *:*
   "SEARCH_STRING" "got it"
   "DEBUG" "1"
   run "smb_monitor.bash"
   "USERNAME" "aaron"
   "FILE" "/share/test.txt"
   args ""
   "PASSWORD" "Test123!"
   partition "Common"
}

My monitor is 11.5.1 so the tmsh syntax is a little different:

gtm monitor external /Common/smb_external_monitor {
    defaults-from /Common/external
    destination *:*
    interval 30
    probe-timeout 5
    run /Common/smb_monitor.bash
    timeout 120
    user-defined DEBUG 1
    user-defined FILE /F5GTM/F5GTMTST.txt
    user-defined PASSWORD ******
    user-defined SEARCH_STRING up
    user-defined USERNAME f5gtm
}

I have also tried manually setting the debug to 1 in the script as suggested. I get nothing in /var/log/ltm and the monitor is failing. Any ideas?

Thanks,
Ben
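Not from the article, just a hedged way to take GTM out of the equation: run the EAV script by hand with the same values the monitor would pass, since user-defined variables normally arrive as environment variables and the member IP/port come in as $1/$2. The on-disk path below and the member IP/port are placeholders:

# Manual test of the external monitor script outside of GTM (path and IP/port are assumptions)
DEBUG=1 USERNAME=f5gtm PASSWORD='<password>' \
FILE=/F5GTM/F5GTMTST.txt SEARCH_STRING=up \
bash /config/monitors/smb_monitor.bash 10.0.0.10 445

Any output on stdout is what the monitor would treat as up; silence means down, and with DEBUG set, the script's own logging (if it works as the article describes) should show where it stops.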
Let's Encrypt with Cloudflare DNS and F5 REST API
Hi all,

This is a follow-up to the now very old Let's Encrypt on a Big-IP article. It has served me, and others, well, but it is kind of locked to a specific environment and doesn't scale well. I had been circling it for some time but couldn't find the courage (aka time) to get started. However, due to some changes at my DNS provider (they were acquired and shut down), I finally took the plunge and moved my domains to a provider with an API, and that gave me the opportunity to build a more nimble solution.

To keep things simple I chose Cloudflare, as the community around it is enormous and it is easy to find examples and tools. That said, I don't think choosing another provider with an open API would be such a big deal. After playing around with different tools I realized that I didn't need them, as it ended up being much easier to just use curl. So if the other providers bear even a somewhat close resemblance, converting the scripts to fit shouldn't be such a big task.

There might be finer and more advanced solutions out there, but my goal was a solution with as few dependencies as possible, and if I could get that down to only Bash and curl it would be perfect. And that is what I ended up with 😎 Just put 5 files in the same directory, adjust the config to your environment, and BAM, you're good to go!! 😻 And if you need to run it somewhere else, just copy the directory over and continue like nothing was changed. That is what I call portability 😁

Find all the details here: Let's Encrypt with Cloudflare DNS and F5 REST API

Please just drop me a line if you have any questions or feedback, or if you find any bugs.
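For anyone curious what "just Bash and curl" looks like on the Cloudflare side, here is a hedged sketch of publishing a dns-01 challenge TXT record; the zone ID, API token, and record values are placeholders, not taken from the linked article:

# Create the _acme-challenge TXT record via the Cloudflare v4 API
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"type":"TXT","name":"_acme-challenge.example.com","content":"'"${ACME_TXT_VALUE}"'","ttl":120}'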
Ansible Module for bash against F5 LTM
Hi folks, I'm trying to find an Ansible module that will actually work for running bash against F5 LTMs. I've tried command, shell, and ansible.builtin.shell with no luck. Alternatively, an Ansible module that could execute a shell script already on the F5 LTMs would work as well. Here are a couple of examples of the bash commands I'm trying to execute:

tmsh save sys ucs lb1.ucs
scp /var/local/ucs/lb1.ucs admin@192.168.0.1:/var/local/ucs/
tmsh load sys ucs base.ucs
sleep 120
tmsh load sys ucs platform-migrate lb1.ucs
sleep 120
tmsh modify cm traffic-group traffic-group-1 ha-order none
tmsh modify cm device-group Employee_Sync_Failover devices none
tmsh delete cm trust-domain all
tmsh modify cm device lb1.fb configsync-ip none unicast-address none mirror-ip any6
tmsh delete net route all
tmsh delete net self all
tmsh delete net vlan all
tmsh modify sys global-settings mgmt-dhcp enabled
tmsh save sys ucs USE2-LBEMPL01A.ucs
cd /opt/aws/awscli-2.2.29/bin/dist
./aws s3 cp /var/local/ucs/lb2.ucs s3://f5-bubble-sync-fb5095-us-east-2/lb2/lb2.ucs
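Not confirmed anywhere above, but a hedged option: the command/shell modules assume the remote account lands in a normal POSIX shell, so they tend to break when the BIG-IP account's Terminal Access is tmsh; ansible.builtin.raw skips that machinery and simply runs the string over SSH. A sketch, assuming an inventory group named f5_ltm, SSH key auth, and an account set to Advanced shell (bash):

# Ad-hoc raw command against the LTMs (no module/Python plumbing on the remote side)
ansible f5_ltm -m ansible.builtin.raw -a "tmsh save sys ucs lb1.ucs"

# Or run a script that is already on the box (the path is a placeholder)
ansible f5_ltm -m ansible.builtin.raw -a "bash /shared/scripts/migrate.sh"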
Migration Test Script for LTM
Hello community! I have done quite a few F5-to-F5 migrations now, but I have always wondered about using a script to test application functionality after a cutover, beyond just checking that the VIPs, pools, and nodes are marked as available and running one-off curls or tcpdump/ssldump captures. Let's start with just LTM (although having something similar for GTM would be nice). I was thinking of a bash script or something similar that would:

1) Run curls against all VIPs and their respective ports
2) Collect only the relevant results so the output isn't so busy (perhaps the HTTP response code and 2-3 lines of the body)
3) Take the output files from both F5s (old and new) manually and compare the content in a diff editor (Notepad++, Beyond Compare, etc.)

In a lot of the migrations I have done, all the IP addresses stay the same (VIPs and pool members). It would be nice if the script were able to just match on the VIP name in the results, as most migrations keep the name the same even if the IP addresses do move. Any suggestions? Thanks!
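A hedged starting point for steps 1-3, assuming a plain-text vips.txt of "name ip port" entries has been exported from the config beforehand (the file name, field layout, and HTTPS-only probing are all assumptions):

#!/bin/bash
# Capture an HTTP status code for every VIP listed in vips.txt.
# Run once before and once after the cutover, then diff the two files.
out="results-$(date +%Y%m%d-%H%M%S).txt"

while read -r name ip port; do
    code=$(curl -ksm 10 -o /dev/null -w '%{http_code}' "https://${ip}:${port}/")
    echo "${name} ${code}"
done < vips.txt | sort > "${out}"

# Afterwards, compare the capture taken against the old unit with the new one:
#   diff results-<before>.txt results-<after>.txt

Keying the output on the VIP name rather than the IP keeps the diff meaningful even when addresses do move between the old and new pair.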
Bug (ID 775845) Workaround; REST API httpd restart
So this is less of a question and more of a post to help my fellow BIG-IP LTM administrators, since the solution I came up with is quite the hack, but it works for me; your mileage may vary, and of course, test in non-production environments.

Some background: I am an F5 administrator and an automation engineer. My main focus is automating much of my work as an administrator to take mundane and repetitive tasks out of my and my colleagues'/organization's workflow. So, when it came time to renew the device certificates for my F5 VMs and hosts, combined with the recent reduction in SSL certificate term lengths and the guidance to renew certs often, I set forth to automate the entire stack of processes required to renew device certificates (create the key/CSR, submit the CSR to the CA and obtain the cert, upload the cert to the F5, and restart the httpd service to read in the new certificates).

I was able to script everything using Python and REST API calls to the F5s and the InCommon CA to get the certificates created and put on the F5s. The problem I ran into was that the feature to restart the httpd service via a REST API call was broken (aka Bug ID 775845). I tried using the REST API call:

/tm/sys/service -X POST -d '{"name":"httpd", "command":"restart"}'

I also attempted to use the bash command call:

/mgmt/tm/util/bash -X POST -d '{"command": "run", "utilCmdArgs": "-c \"service httpd restart\""}'

Neither worked, as documented in this KB article: https://support.f5.com/csp/article/K13292945

So I needed a workaround, and my solution incorporates a bash script that basically preemptively kills off httpd and then restarts it (the same fix the KB shows). First, you need the following bash script (which is actually incorporated into the larger script, so one can ensure it is always present on the F5 VM or host that needs its httpd daemon restarted).

#!/bin/bash
# Pause, restart httpd
# Greg Jewett, 2021-08-26, jewettg@austin.utexas.edu
#
# Works around a known bug (Bug ID 775845) when using the REST API to restart the httpd service.
# The pause is to allow the REST API call to complete, as the script will be launched
# in the background and should return a successful exit code. This script provides an
# immediate fix to bring the environment back up, without manually restarting the
# httpd daemon on each VM or host.

service httpd status | logger -p local0.notice -t RST_HTTPD
logger -p local0.notice -t RST_HTTPD Waiting 2 seconds...
sleep 2s
logger -p local0.notice -t RST_HTTPD Restarting httpd daemon
thepids=`pgrep -d " " -f "/usr/sbin/httpd"`
echo "httpd pids are: $thepids"
for aPid in $thepids; do
    echo "Killing PID $aPid"
    kill -9 $aPid
done
service httpd start | logger -p local0.notice -t RST_HTTPD
service httpd status | logger -p local0.notice -t RST_HTTPD
logger -p local0.notice -t RST_HTTPD Done

NOTE: I am having to attach the rest of my solution via comments, as the platform wasn't allowing me to post a big chunk of text (>10k chars). See below.
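A hedged illustration (not the author's attached Python, and the script path, hostname, and credentials are placeholders) of how such a restart script can be kicked off through the same /mgmt/tm/util/bash endpoint so the REST call returns before httpd is killed:

# Launch the restart script in the background so the API call can complete
# before httpd goes down; path and credentials are placeholders.
curl -sk -u admin:'<password>' \
  -H "Content-Type: application/json" \
  -X POST https://bigip.example.com/mgmt/tm/util/bash \
  -d '{"command": "run", "utilCmdArgs": "-c \"nohup /shared/scripts/restart_httpd.sh > /dev/null 2>&1 &\""}'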