backup
Backup and synchronization - In case of a file created in bash
Hi,

Well, for once, I think I will ask something basic. I just have a doubt about it.

- When we work from the GUI, all save/sync operations are automatic on a clustered F5. That's OK.
- When we work in tmsh, we need to run a "tmsh save" to save what we did; the sync then copies the change to the second node. OK.

But what if I create a file (in my case, an SSH key file) in bash? If I create my file, of course it will be saved on disk. But will the F5 synchronize it automatically to the second node? I mean: it is not configuration. So how does it work in that case? I am in "manual with incremental sync", by the way (I suppose that enters into consideration too).

Sorry, my question must be very basic. But this cluster has a role and a configuration that are a little bit apart, and I do not want to make any mess on it, so I prefer asking stupid questions to taking any risk with it.

Best regards,
Christian
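In short, ConfigSync replicates BIG-IP configuration (and config-referenced objects such as certificates), not arbitrary files created in the bash shell, so an SSH key dropped onto one unit stays on that unit regardless of the sync mode. A minimal sketch of copying such a file to the peer by hand — the peer hostname and key path are placeholder assumptions:

```bash
# ConfigSync will not carry this file; copy it to the peer explicitly.
# "peer-f5.example.local" and the key path are placeholders for your environment.
scp /home/admin/.ssh/my_key admin@peer-f5.example.local:/home/admin/.ssh/my_key

# The cluster configuration itself is unaffected; verify as usual:
tmsh show cm sync-status
```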

VMWare Backups of active VEs?

We collect nightly UCS files of all of our BIG-IP VEs. If we need to restore from a UCS, it requires requesting the build of a new guest before we can apply the latest UCS backup (VMware is managed by a different team). Most of our organization's other VMs have snapshots taken that can be used for quick restoration in the case of failure. We do not have snapshots taken of our VEs because it is not recommended, per K000093184: since the snapshot 'freezes' or 'pauses' TMM, it prevents real-time access to the CPU; due to this, F5 does not support the snapshot process being used on a BIG-IP. Other than restoring from UCS files, are there any other recommended automated backup procedures for ACTIVE VMware BIG-IP VEs that full backups can be done from?
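Given that snapshots of a running VE are unsupported, the widely used pattern is to stay with UCS archives but schedule them on the guest itself and ship them off-box, so a restore only needs a stock VE of the matching version plus the latest archive. A minimal sketch — the backup host, user, and paths are assumptions:

```bash
#!/bin/bash
# Nightly UCS + off-box copy; run from cron or an iCall handler on the VE.
HOST=$(hostname)
UCS="/var/local/ucs/${HOST}-$(date +%F).ucs"

# UCS creation is consistent while TMM keeps passing traffic.
tmsh save /sys ucs "${UCS}"

# Placeholder destination; key-based auth assumed.
scp "${UCS}" backup@backuphost.example.local:/backups/f5/
```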

BigIP UCS Backup script; looking for some guidance on design

Greetings,

I've begun work on a bash script, intended to be run locally on each F5 appliance via a cron task. The criteria for this script are:

- Saves the UCS with encryption, using a {Hostname}-YYYY-MM-DD.ucs naming format.
- Uploads the generated UCS file to an SFTP server. Native SFTP commands are a MUST; SCP will not work due to its reliance on a command shell/login.
- Rolls over after X # of saved files in order to prevent storage exhaustion on the target SFTP server. (I strongly doubt any form of deduplication will work with an encrypted UCS.)
- Sends an email notification if the backup failed.

I've so far written a script that addresses the first 3 criteria and have been waiting for those to go through their paces in testing before adding the notification logic. The commands and logic being used have gotten more complex the further I've gotten into the script's development. This has led to some concerns about whether this is the best approach, given that the F5 BIG-IP systems are vendor appliances; I worry there's a large possibility commands may stop working correctly after a major version update, requiring an overhaul of a fairly complex script.

I'm almost wondering if setting up an AWX/Tower host in our environment and then using the f5networks Ansible module for the majority of the heavy lifting, followed by some basic logic for file rotation, would be a better long-term approach. Ansible would also be a bit more flexible, in that I wouldn't have to hardcode values that diverge between individual hosts into the script itself. It's not clear, however, whether the f5networks Ansible module supports SFTP, as I only see SCP referenced: https://my.f5.com/manage/s/article/K35454259

Advice and insight is much appreciated!

```bash
#!/bin/bash
# F5 backup script based on https://my.f5.com/manage/s/article/K000138297

# User-configurable Variables
UCS_DIR="/var/ucs"
REMOTE_USER="svc_f5backup"
REMOTE_HOST="myhost.contoso.local"
REMOTE_DIR="/data/f5/dev"
SSH_KEY="/shared/scripts/f5-backup/mykeys/f5user"
ENCRYPTION_PASSPHRASE=''  # Blank out the value to not encrypt the UCS backup.
LOG_FILE="/var/log/backupscript.log"
MAX_FILES=45              # Maximum number of backup files to keep

# Dynamic Variables (do not edit)
HOSTNAME=$(/bin/hostname)
DATE=$(date +%Y-%m-%d)
UCS_FILE="${UCS_DIR}/${HOSTNAME}-${DATE}.ucs"

# Start logging
echo "$(date +'%Y-%m-%d %H:%M:%S') - Starting backup script." >> ${LOG_FILE}

# Save the UCS backup file
if [ -n "${ENCRYPTION_PASSPHRASE}" ]; then
    echo "Running the UCS save operation (encrypted)." >> ${LOG_FILE}
    tmsh save /sys ucs ${UCS_FILE} passphrase "${ENCRYPTION_PASSPHRASE}" >> ${LOG_FILE} 2>&1
else
    echo "Running the UCS save operation (not encrypted)." >> ${LOG_FILE}
    tmsh save /sys ucs ${UCS_FILE} >> ${LOG_FILE} 2>&1
fi

# Create a temporary batch file for SFTP commands
BATCH_FILE=$(mktemp)
echo "cd ${REMOTE_DIR}" > $BATCH_FILE
echo "put ${UCS_FILE}" >> $BATCH_FILE
echo "bye" >> $BATCH_FILE

# Log that the transfer is starting
echo "Starting SFTP transfer." >> ${LOG_FILE}

# Execute SFTP command and capture the output
transfer_command_output=$(sftp -b "$BATCH_FILE" -i "${SSH_KEY}" -oBatchMode=no "${REMOTE_USER}@${REMOTE_HOST}" 2>&1)
transfer_status=$?

# Extract the "Transferred:" line
transfer_summary=$(echo "$transfer_command_output" | grep "^Transferred: sent")

if [ $transfer_status -eq 0 ]; then
    if [ -n "$transfer_summary" ]; then
        echo "UCS file copied to the SFTP server successfully (remote:${REMOTE_HOST}:${REMOTE_DIR}/${UCS_FILE}).
$transfer_summary" >> ${LOG_FILE}
    else
        echo "UCS file copied to the SFTP server successfully (remote:${REMOTE_HOST}:${REMOTE_DIR}/${UCS_FILE}). Please check the log for details." >> ${LOG_FILE}
    fi
else
    echo "$transfer_command_output" >> ${LOG_FILE}
    echo "UCS SFTP copy operation failed. Please read the log for details." >> ${LOG_FILE}
    rm -f $BATCH_FILE
    exit 1
fi

# Clean up the temporary batch file
rm -f $BATCH_FILE

# Rollover backup files if the number exceeds MAX_FILES
echo "Checking and maintaining the maximum number of backup files." >> ${LOG_FILE}

# Create a list of files to delete
sftp -i "${SSH_KEY}" -oBatchMode=no "${REMOTE_USER}@${REMOTE_HOST}" <<EOF > file_list.txt
cd ${REMOTE_DIR}
ls -1 ${HOSTNAME}-*.ucs
bye
EOF

# Filter out unwanted lines and sort the files alphanumerically
grep -v 'sftp>' file_list.txt | grep -v '^cd ' | sort > filtered_file_list.txt

# Determine files to delete
files_to_delete=$(head -n -${MAX_FILES} filtered_file_list.txt)

if [ -n "$files_to_delete" ]; then
    # Create a temporary batch file for SFTP cleanup commands
    CLEANUP_BATCH_FILE=$(mktemp)
    echo "cd ${REMOTE_DIR}" > $CLEANUP_BATCH_FILE
    for file in $files_to_delete; do
        echo "Deleting $file" >> ${LOG_FILE}
        echo "rm $file" >> $CLEANUP_BATCH_FILE
    done
    echo "bye" >> $CLEANUP_BATCH_FILE

    # Execute SFTP cleanup command and log the output
    cleanup_command_output=$(sftp -b "$CLEANUP_BATCH_FILE" -i "${SSH_KEY}" -oBatchMode=no "${REMOTE_USER}@${REMOTE_HOST}" 2>&1)
    echo "$cleanup_command_output" >> ${LOG_FILE}

    # Clean up the temporary batch file
    rm -f $CLEANUP_BATCH_FILE
else
    echo "No files to delete. Total files within limit." >> ${LOG_FILE}
fi

# Clean up the file lists
rm -f file_list.txt filtered_file_list.txt

# Delete the local copy of the UCS archive
tmsh delete /sys ucs ${UCS_FILE} >> ${LOG_FILE} 2>&1

echo "$(date +'%Y-%m-%d %H:%M:%S') - Backup script completed." >> ${LOG_FILE}
```
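On the fourth criterion (email on failure), a small helper along these lines could be wired into the script's existing failure branch. It assumes outbound mail delivery has already been configured on the BIG-IP (F5 documents ssmtp/postfix setup for locally generated email); the recipient address is a placeholder:

```bash
# Possible failure-notification helper for the script above.
# Assumes the unit can deliver mail (ssmtp/postfix configured per F5 docs);
# the recipient is a placeholder.
ALERT_RCPT="netops@contoso.local"

notify_failure() {
    local reason="$1"
    {
        echo "Subject: UCS backup FAILED on ${HOSTNAME}"
        echo ""
        echo "${reason}"
        echo "See ${LOG_FILE} on ${HOSTNAME} for details."
    } | /usr/sbin/sendmail "${ALERT_RCPT}"
}

# ...then, in the SFTP failure branch, before 'exit 1':
# notify_failure "UCS SFTP copy operation failed."
```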

API Calls to F5 limited to 1024 KB download

Hi,

I am interacting with the F5 API in order to download ASM policies for the purpose of automating the backups. The process works fine; however, policies larger than 1024 KB are cut off at 1024 KB. Initially I suspected there was a default limit on the curl request, but I have not been able to find information on how to increase this on the curl side. Is this a limitation of the F5 API or of the curl request? wget is not an option, as it is not natively supported on the F5 virtual appliance. My script lives on the appliance, downloads the relevant policies, and then pushes them to an SMB share. The only issue is that ASM policies larger than 1024 KB are being cut off at 1024 KB. The API calls are as per the documentation here: http://cdn.f5.com/websites/devcentral.f5.com/downloads/icontrol-rest-api-user-guide-13-0-0.pdf — specifically:

GET https://x.x.x.x/mgmt/tm/asm/policies
POST https://x.x.x.x/mgmt/tm/asm/tasks/export-policy
GET https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/$asmPolicy

Excluding the processing in my script, the API calls I make are shown below. I expect the issue resides in the download call. Is there a switch I can add to increase this limit?

```bash
curl -ku 'username:password' -X GET https://x.x.x.x/mgmt/tm/asm/policies | jq '.items[] | "pol_name:" + .name + ";api_id:" + .id' >> $wdir/asmDetails.txt

curl -ku 'username:password' -X POST https://x.x.x.x/mgmt/tm/asm/tasks/export-policy -H 'Content-Type: application/json' -d '{"filename":"'$asmPolicy'","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/'$asmIDs'"}}'

curl -ku 'username:password' -X GET https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/$asmPolicy > $wdir/asmBackup/"$folderName"/$number-$asmPolicy-$hostname-"$dateStamp".xml
```

Thanks
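The 1024 KB cutoff is a property of the iControl REST file-transfer worker, not of curl: it serves at most 1 MB per request and expects the client to fetch subsequent chunks via the Content-Range header. A sketch of that loop, reusing the credentials and variables from the post above — exact header handling can vary by version, so treat this as a starting point rather than a drop-in fix:

```bash
CHUNK=1048576                       # the file-transfer worker serves at most 1 MB per request
OUT="$wdir/asmBackup/${asmPolicy}"
HDRS=$(mktemp)

# First chunk: total size unknown, so request bytes 0..CHUNK-1 with "/0";
# the response's Content-Range header ("start-end/total") reveals the real size.
curl -sku 'username:password' -D "$HDRS" \
    -H "Content-Range: 0-$((CHUNK - 1))/0" \
    "https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/${asmPolicy}" > "$OUT"
total=$(sed -n 's/.*Content-Range: *[0-9]*-[0-9]*\/\([0-9]*\).*/\1/p' "$HDRS" | tr -d '\r')

# Remaining chunks, appended in order.
start=$CHUNK
while [ -n "$total" ] && [ "$start" -lt "$total" ]; do
    end=$((start + CHUNK - 1))
    [ "$end" -ge "$total" ] && end=$((total - 1))
    curl -sku 'username:password' \
        -H "Content-Range: ${start}-${end}/${total}" \
        "https://x.x.x.x/mgmt/tm/asm/file-transfer/downloads/${asmPolicy}" >> "$OUT"
    start=$((end + 1))
done
rm -f "$HDRS"
```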

platform migration carry over Geolocation data file and ASM signature data file

I am working on a platform migration from an i5600 to an i7600 by backing up the UCS file and restoring it on the i7600. I am wondering if the geolocation data file, ASM signatures, and bot signatures will be carried over as well. I recently restored the UCS file but see that the geolocation data file is from 2020, i.e. last year, which is causing customer complaints. When I ran geoip_lookup, it pointed to /usr/share/GeoIP/v2/F5GeoIP.dat, which means there is no geolocation data file under /shared/GeoIP/v2/F5GeoIP.dat and the default location is being used. What is the best way for me to compare the settings and configuration before and after the platform migration? I thought that a UCS backup and restore should cover all the settings, but I am still missing the geolocation data file.
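A quick way to compare the two units is to check where the geolocation lookup actually resolves and what is on disk, before and after the restore; a current database is installed on the new unit with the geolocation update package rather than carried in the UCS. A sketch using the documented utilities (the lookup IP is arbitrary):

```bash
# Run on both units and diff the output.
geoip_lookup 8.8.8.8                  # output shows which F5GeoIP.dat file was opened
ls -l /shared/GeoIP/v2/ 2>/dev/null   # user-installed databases (independent of the UCS)
ls -l /usr/share/GeoIP/v2/            # default databases shipped with the software image

# After downloading the current geolocation ZIP from downloads.f5.com,
# each RPM in it is installed with (filename is a placeholder):
# geoip_update_data -f /var/tmp/GeoIP-<version>.rpm
```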

Ansible F5 Backup

I've trawled the internet and DevCentral to find a suitable Ansible playbook to do the following: back up an F5 with the same filename each run, so that I can push it to our GitLab for version control. The Ansible modules seem to either generate a random filename, which isn't reusable in a playbook, or, if I specify the source, the current UCS does not get overwritten; and if I copy to the local filesystem with the same target name, the module appends the date and time to the file, which will not give any consistency to GitLab. This is what I have come up with so far; the code is in its most basic form, for testing only.

```yaml
- name: Clean the local backup directory
  file:   # module line lost in the post's formatting; 'file' assumed from the path/state arguments
    path: "{{ item }}"
    state: absent
  with_fileglob:
    - "/ansible/dailybackups/*"
  connection: local

- name: Clean the previous UCS file from F5
  bigip_ucs:
    state: absent
    ucs: "{{ inventory_hostname }}.ucs"
    provider:
      server: 1.1.1.1
      user: admin
      password: admin
      validate_certs: no
  delegate_to: localhost

- name: Save the running configuration of the BIG-IP
  bigip_ucs_fetch:
    backup: yes
    src: "{{ inventory_hostname }}.ucs"
    dest: "/ansible/dailybackups/{{ inventory_hostname }}.ucs"
    provider:
      server: 1.1.1.1
      user: admin
      password: admin
      validate_certs: no
  delegate_to: localhost
```

So to perform a repeatable function, I am forced to delete the file from the local filesystem, erase the current UCS file on the F5 that is used as the backup, and then back up the F5 and pull the file to the local filesystem. Surely there is a slicker way of doing what can be done on a Cisco device in 4 lines. (NB: I have excluded the Git function; these 3 plays are merely to pull a consistently named UCS file and store it on the local filesystem.)
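For what it's worth, if the goal is just a stable filename for Git, driving tmsh over SSH from the control node keeps it to a few lines, at the cost of bypassing the F5 modules. A sketch — the hostname and local repo path are placeholders, and tmsh appears to overwrite an existing archive of the same name, which is worth verifying on your version:

```bash
#!/bin/bash
# Stable-name UCS pull for version control; run from the Ansible/Git host.
# "bigip1.contoso.local" and the local path are placeholders.
F5=bigip1.contoso.local

ssh admin@${F5} "tmsh save /sys ucs /var/local/ucs/${F5}.ucs"   # same name every run
scp admin@${F5}:/var/local/ucs/${F5}.ucs /ansible/dailybackups/${F5}.ucs
```

One caveat on the design: a UCS is a compressed tarball whose member timestamps change on every save, so Git will record a new revision each run even when the configuration itself hasn't changed.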

Automated ASM Backup - working bash script now to automate or convert to iCall/tcl

Hi All,

I have put together a BASH script that, when run, performs a backup of the ASM policies and copies them to a remote location. The script runs great, and I have had it set as a cron job in my lab setup to automate the backups. Unfortunately, the business does not want a script running as a cron job on the F5. It has been suggested to me to use iCall. I have seen only limited information regarding iCall written in a way that someone who has never seen iCall could understand. This got me far enough to understand that iCall runs Tcl scripts, not bash scripts! The result being that if I were to use iCall, I would need to rewrite the script completely. I am looking for two options here: a means to automate running a bash script on the F5, OR detailed information on getting started with iCall — better yet, on converting bash to Tcl. To illustrate my issue, my bash script lives on the F5 and does the following:

1. Reads a counter value from a file.
2. Runs a curl command against the management interface and copies a list of ASM policy details to a txt file.
3. Greps the policy names from the original txt file to a new txt file.
4. Greps the policy IDs from the original txt file to a new txt file.
5. Sets a parameter with the current date and time as the value.
6. Makes a local directory using the date-and-time parameter as the folder name (this ensures a known backup date, and that you can re-run and get a new folder on the same day if required).
7. Uses curl POST and GET commands to get the policies from the F5.
8. Uses a curl upload-file command to copy the files to a remote SMB location.
9. Adjusts the counter.
10. Performs a cleanup of any files that were created locally.

If I switch over to using iCall, all of the above needs to be done with Tcl — I am not sure how much of that is supported. I have found that "echo" is replaced with "puts"; is there a "curl", "cat", etc. equivalent?

Thanks in advance
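A common answer to both halves of this question is to keep the bash script as-is and let iCall do nothing but schedule it: Tcl's exec can invoke bash (and therefore curl, cat, and the rest) directly, so no conversion is needed. A sketch — the object names, script path, and interval are assumptions:

```bash
# Create an iCall script whose Tcl body simply shells out to the existing
# bash script, then a periodic handler to run it (interval is in seconds;
# 86400 = daily).
tmsh create sys icall script asm_backup definition { exec /bin/bash /shared/scripts/asm-backup.sh }
tmsh create sys icall handler periodic asm_backup_daily script asm_backup interval 86400
tmsh save sys config
```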

iapps f5.automated_backup problem

Hello,

I'm using the iApp f5.automated_backup version 2.0.3 (I know it is an old one). The app is working great; I use it on a test and a prod cluster. But since a few days ago, the app on the prod cluster has stopped working: it doesn't run the backup on the configured schedule. When I try to check the configuration by going to the iApp and clicking "Reconfigure", the config never shows up; I get the message "Loading... Receiving configuration data from your device." indefinitely. Does anybody know what I can try to solve the problem? Is there a service I can restart or something like that? I suppose a full reboot could solve the problem, but as it is a production device, I'm trying to find another way.

Thanks for your help!
Lucas
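A pane stuck at "Loading... Receiving configuration data" usually points at the REST framework rather than the iApp itself, so a frequent first step short of a reboot is restarting the REST daemons. This generally does not touch TMM/traffic processing, but on a production unit do it in a change window and check HA state first — a sketch, not a guaranteed fix:

```bash
# Check HA state before touching anything on prod.
tmsh show cm sync-status

# Restart the iControl REST daemons; the GUI/REST API may be unavailable
# for a minute or two while they come back.
bigstart restart restjavad restnoded
```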

Adding Cron Jobs to the F5 - Is it OK? or should it be avoided?

Hi All,

I have created a backup script that resides on the F5 device, copies all ASM policies to XML, and then pushes them to a remote file share. I have planned to have this script run via a cron job on the F5 once per month. When attempting to get approval from the business to implement this on the production devices, concern was raised around setting a cron job on the F5s. I personally did not feel that this would be an issue. Can anyone shed some light on this? Are others setting cron jobs on the F5, or avoiding doing so for any reason in particular? If I want to schedule a script to run every month, is there a better alternative I could use on the F5?

Thank you.
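The alternative usually offered is an iCall periodic handler, which keeps the schedule inside the BIG-IP configuration (and therefore in the UCS) instead of in the crontab. Note that the interval is expressed in seconds, so "monthly" is approximate; the object names, script path, and first-occurrence value below are assumptions:

```bash
# Wrap the existing bash script in an iCall script, then schedule it.
tmsh create sys icall script asm_policy_export definition { exec /bin/bash /shared/scripts/asm-export.sh }

# ~30 days in seconds; first-occurrence pins the initial run
# (check 'tmsh help sys icall handler periodic' for the exact date format).
tmsh create sys icall handler periodic asm_export_monthly \
    script asm_policy_export interval 2592000 \
    first-occurrence 2024-01-01:02:00:00
tmsh save sys config
```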

User for ASM Automated Backup Script

Hi Guys,

I have a script that allows me to back up ASM policies in moments. The catch, however, is that this script requires credentials for a user with advanced shell access, and advanced shell access requires administrative privileges. As a result, this script creates a security issue even when properly stored and accessed, simply due to the hardcoded credentials in it. I am aiming to reduce the severity of this issue in one of two ways:

- Is it possible to have a user with read-only permissions in the portal and advanced shell access on the box? Or can I create an API-only user?
- Alternatively, is anyone aware of how I can swap the credentials out of my script, so that if the script were discovered, the credentials would not be exposed?

Thanks
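One way to get the credentials out of the script body is to keep them in a separate root-only file and exchange them for a short-lived iControl REST token; the REST calls then never carry the password, and when the script drives the API (rather than the shell), the account only needs a role sufficient for the ASM endpoints rather than full administrative/shell access. A sketch — the file path, username, and minimal role are assumptions to validate against your version:

```bash
#!/bin/bash
# Credentials live in a root-only file, not in the script.
# /shared/scripts/.backup_creds (chmod 600, owned by root) contains:
#   BACKUP_USER='asm_backup'
#   BACKUP_PASS='...'
. /shared/scripts/.backup_creds

# Exchange the credentials for a short-lived token.
TOKEN=$(curl -sk https://localhost/mgmt/shared/authn/login \
    -H 'Content-Type: application/json' \
    -d '{"username":"'"${BACKUP_USER}"'","password":"'"${BACKUP_PASS}"'","loginProviderName":"tm"}' \
    | sed -n 's/.*"token":"\([^"]*\)".*/\1/p' | head -1)

# Subsequent REST calls use the token instead of basic auth:
curl -sk -H "X-F5-Auth-Token: ${TOKEN}" https://localhost/mgmt/tm/asm/policies
```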