Namecheap and BIG-IP Integration via API
The script below is attached to an EAV (external) monitor, which is linked to a dummy pool. It monitors F5XC DNSaaS (currently the authoritative DNS) and checks whether it can resolve DNS queries. If it cannot, the script triggers an API call to Namecheap (our domain registrar) to change the nameservers back to the primary BIG-IP DNS. At the same time, the script updates the domain's NS records from F5XC to BIG-IP.

#!/bin/sh

# Define variables
pidfile="/var/run/$MONITOR_NAME.$1.$2.pid"
statusfile="/var/run/dns_status"
check_string="RESPONSE-OK"
# NAMECHEAP API USER
API_USER="sampleapiuser"
# NAMECHEAP APIKEY
API_KEY="<apikey>"
# NAMECHEAP ACCOUNT USERNAME
USERNAME="namecheapuser1"
# NAMECHEAP COMMAND TO CHANGE THE NAMESERVER
COMMAND="namecheap.domains.dns.setCustom"
# NAMECHEAP ALLOWED API CLIENT IP, SET TO THE BIG-IP IP
CLIENT_IP="13.213.88.106"
# SECOND LEVEL DOMAIN
SLD="f5sg"
# TOP LEVEL DOMAIN
TLD="com"
F5XC_NAMESERVERS="ns1.f5clouddns.com,ns2.f5clouddns.com"
BIGIP_NAMESERVERS="gtm1.f5sg.com,gtm2.f5sg.com"
# BIGIP ADMIN PASSWORD
ADMIN_PASS="XXXXXXX"

# Function to update registrar DNS to the F5XC nameservers
sendapi_xc() {
    #tmsh modify ltm virtual VS_APP2 enabled
    F5XC_API_URL="https://api.namecheap.com/xml.response?ApiUser=$API_USER&ApiKey=$API_KEY&UserName=$USERNAME&Command=$COMMAND&ClientIp=$CLIENT_IP&SLD=$SLD&TLD=$TLD&NameServers=$F5XC_NAMESERVERS"
    curl -X GET "$F5XC_API_URL" >/dev/null 2>&1
}

# Function to update registrar DNS to the BIG-IP nameservers
sendapi_bigip() {
    #tmsh modify ltm virtual VS_APP2 disabled
    BIGIP_API_URL="https://api.namecheap.com/xml.response?ApiUser=$API_USER&ApiKey=$API_KEY&UserName=$USERNAME&Command=$COMMAND&ClientIp=$CLIENT_IP&SLD=$SLD&TLD=$TLD&NameServers=$BIGIP_NAMESERVERS"
    curl -X GET "$BIGIP_API_URL" >/dev/null 2>&1
}

# Functions to manage zone records using the F5 iControl REST API
# Add the F5XC NS records to the zone
addzr_xc() {
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo arr external f5sg.com. f5sg.com. 50 NS ns1.f5clouddns.com. | zrsh'\"}" >/dev/null 2>&1
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo arr external f5sg.com. f5sg.com. 50 NS ns2.f5clouddns.com. | zrsh'\"}" >/dev/null 2>&1
}

# Delete the BIG-IP NS records from the zone
delzr_bip() {
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo drr external f5sg.com. f5sg.com. 50 NS gtm1.f5sg.com. | zrsh'\"}" >/dev/null 2>&1
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo drr external f5sg.com. f5sg.com. 50 NS gtm2.f5sg.com. | zrsh'\"}" >/dev/null 2>&1
}

# Add the BIG-IP NS records to the zone
addzr_bip() {
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo arr external f5sg.com. f5sg.com. 50 NS gtm1.f5sg.com. | zrsh'\"}" >/dev/null 2>&1
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo arr external f5sg.com. f5sg.com. 50 NS gtm2.f5sg.com. | zrsh'\"}" >/dev/null 2>&1
}

# Delete the F5XC NS records from the zone
delzr_xc() {
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo drr external f5sg.com. f5sg.com. 50 NS ns1.f5clouddns.com. | zrsh'\"}" >/dev/null 2>&1
    curl -sku admin:$ADMIN_PASS "https://127.0.0.1:8443/mgmt/tm/util/bash" -X POST -H "Content-Type: application/json" -d "{\"command\":\"run\",\"utilCmdArgs\":\"-c 'echo drr external f5sg.com. f5sg.com. 50 NS ns2.f5clouddns.com. | zrsh'\"}" >/dev/null 2>&1
}

# Manage the PID file to ensure only one instance of the script runs
if [ -f $pidfile ]; then
    kill -9 -`cat $pidfile` > /dev/null 2>&1
fi
echo "$$" > $pidfile

# Run the dig command and store the output in a variable
response=$(dig @ns1.f5clouddns.com f5sg.com TXT +short)

# Compare the response and take action
if echo "$response" | grep -q "$check_string"; then
    previous_status=$(cat "$statusfile" 2>/dev/null)
    if [ "$response" != "$previous_status" ]; then
        sendapi_xc
        addzr_xc
        delzr_bip
    fi
    echo "up"
    echo "$response" > "$statusfile"
else
    previous_status=$(cat "$statusfile" 2>/dev/null)
    if [ "$response" != "$previous_status" ]; then
        sendapi_bigip
        addzr_bip
        delzr_xc
    fi
    echo "$response" > "$statusfile"
fi

rm -f "$pidfile"
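For completeness, here is a rough sketch of how the script could be wired up as an external (EAV) monitor bound to a dummy pool on the BIG-IP. The file name, interval/timeout values, and dummy pool member are illustrative assumptions, and tmsh syntax may vary slightly between TMOS versions.

# Import the script (assumed to have been copied to /var/tmp/xc-dns-failover.sh) as an external monitor program
tmsh create sys file external-monitor xc-dns-failover.sh source-path file:/var/tmp/xc-dns-failover.sh
# Create the EAV monitor that runs the script on an interval
tmsh create ltm monitor external mon_xc_dns_failover run xc-dns-failover.sh interval 30 timeout 91
# Bind the monitor to a dummy pool so bigd schedules the check (member address is a placeholder)
tmsh create ltm pool pool_xc_dns_check monitor mon_xc_dns_failover members add { 192.0.2.10:53 }
tmsh save sys config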
BIG-IP Wide-IP to F5XC DNSLB converter
This is a conceptual sample script that converts BIG-IP Wide-IP records to F5XC DNSLB records. The bash script can be run from a cron job to check for configuration changes and synchronize them to F5XC. It uses the F5XC API to post and update the configuration; you need to obtain an API token from your F5XC tenant and replace the placeholder value in the POST commands in the script below. Note: since this is not a full-blown converter script, it is limited to handling only a single Wide-IP pool member. You need to configure a GTM pool to include the IP addresses that need to be load balanced. Check the main article for more details.

#!/bin/bash

# Get the list of wide IPs
wideip_output=$(tmsh list gtm wideip all-properties one-line)

# Get the list of pools
pool_output=$(tmsh list gtm pool a one-line all-properties)

# Declare associative arrays
declare -A wideip_list
declare -A current_wideip_info
declare -A zone_array
declare -A subdomain_info
declare -A a_record_per_zone
declare -A pool_list
declare -A membersip_array

# Unset variables
function unset_arrays {
    unset current_wideip_info name subdomain domain type aliases description status failure_rcode last_resort_pool load_balancing_decision_log metadata minimal_response partition persist_cidr_ipv4 persist_cidr_ipv6 persistence pool_lb_mode pools pool_cname topology_edns0 ttl_persistence poolnames poolnames_array zone_array subdomain_info a_record_per_zone dnslb_name pool_list membersip_array
}

# Print wide IP details
function print_wideip {
    for wideip in "${!wideip_list[@]}"; do
        echo "Wide IP: $wideip, Details: ${wideip_list[$wideip]}"
    done
}

# Create a DNS zone in F5XC
function create_zone {
    curl -X POST -H "Authorization: APIToken XXXXX" -H "Accept: application/json" -H "Access-Control-Allow-Origin: *" -H "x-volterra-apigw-tenant: cag-waap2023" -H "Content-Type: application/json" -d "{\"metadata\":{\"name\":\"$zone\",\"namespace\":\"system\"},\"spec\":{\"primary\":{\"allow_http_lb_managed_records\":true},\"default_soa_parameters\":{},\"dnssec_mode\":{},\"rr_set_group\":[],\"soa_parameters\":{\"refresh\":3600,\"expire\":0,\"retry\":60,\"negative_ttl\":0,\"ttl\":0}}}" https://cag-waap2023.console.ves.volterra.io/api/config/dns/namespaces/system/dns_zones
}

# Create a DNSLB object in F5XC
function create_dnslb {
    curl -X POST -H "Authorization: APIToken XXXXX" -H "Accept: application/json" -H "Access-Control-Allow-Origin: *" -H "x-volterra-apigw-tenant: cag-waap2023" -H "Content-Type: application/json" -d "{\"metadata\":{\"name\":\"$dnslbname\",\"namespace\":\"system\",\"labels\":{},\"annotations\":{},\"disable\":false},\"spec\":{\"record_type\":\"A\",\"rule_list\":{\"rules\":[{\"geo_location_set\":{\"tenant\":\"cag-waap2023-gwjvytud\",\"namespace\":\"system\",\"name\":\"geo-1\",\"kind\":\"geo_location_set\"},\"pool\":{\"tenant\":\"cag-waap2023-gwjvytud\",\"namespace\":\"system\",\"name\":\"$xcdnslbpoolname\",\"kind\":\"dns_lb_pool\"},\"score\":100}]},\"response_cache\":{\"disable\":{}}}}" https://cag-waap2023.console.ves.volterra.io/api/config/dns/namespaces/system/dns_load_balancers
}

# Loop through each line of the pool output
while IFS= read -r line; do
    pool_name=$(awk '{print $4}' <<< "$line")
    dnslbpool_name=$(echo "$pool_name" | sed 's/[^a-zA-Z0-9]/-/g; s/.*/\L&/')
    pool_type=$(awk '{print $3}' <<< "$line")
    lbmode=$(grep -o 'load-balancing-mode [^ ]*' <<< "$line" | awk '{print $2}')
    # Map the BIG-IP load-balancing mode to its F5XC equivalent
    if [[ "$lbmode" == "round-robin" ]]; then
        lbmode="ROUND_ROBIN"
    elif [[ "$lbmode" == "static-persistence" ]]; then
        lbmode="STATIC_PERSIST"
    elif [[ "$lbmode" == "global-availability" ]]; then
        lbmode="PRIORITY"
    elif [[ "$lbmode" == "ratio" ]]; then
        lbmode="RATIO_MEMBER"
    fi
    # Extract the members block
    #members=$(awk -F 'members {| }' '{print $2}' <<< "$line")
    members=$(echo "$line" | grep -o -P '(?<=members \{ ).*?(?=\} \})')
    membernames=$(echo "$members" | grep -oP '\S+(?=\s*{)')
    # Temporary array to hold member IP addresses
    declare -a temp_members_array
    temp_members_array=($(awk -F ':' '{print $2}' <<< "$membernames"))
    monitor=$(awk -F 'monitor ' '{print $2}' <<< "$line" | awk '{print $1}')
    ttl=$(awk '{print $2}' <<< "$(grep -o 'ttl [^ ]*' <<< "$line")")
    # Assign values to the associative array
    membersip_array["$dnslbpool_name"]="${temp_members_array[@]}"
    # Store extracted values in the array
    pool_list["$dnslbpool_name"]="pool_type: $pool_type, lbmode: $lbmode, monitor: $monitor, members: ${membersip_array["$dnslbpool_name"]}, ttl: $ttl"
done <<< "$pool_output"

# Loop through each pool in the pool_list
for dnslbpool_name in "${!pool_list[@]}"; do
    # Extract the individual values from the stored string
    ttl=$(awk -F 'ttl: ' '{print $2}' <<< "${pool_list[$dnslbpool_name]}")
    lbmode=$(awk -F 'lbmode: ' '{print $2}' <<< "${pool_list[$dnslbpool_name]}" | awk -F ',' '{print $1}')
    members=$(awk -F 'members: ' '{print $2}' <<< "${pool_list[$dnslbpool_name]}" | awk -F ',' '{print $1}')
    pool_type=$(awk -F 'pool_type: ' '{print $2}' <<< "${pool_list[$dnslbpool_name]}" | awk -F ',' '{print $1}')
    # Check if pool_type is "a"
    if [[ "$pool_type" == "a" ]]; then
        # Initialize an empty string to store the JSON strings
        members_string=""
        # Loop through each member IP of the current pool
        for ip in ${membersip_array["$dnslbpool_name"]}; do
            # Create a JSON string for each member and append it to the existing string
            members_string+="{\"ip_endpoint\":\"$ip\",\"ratio\":10,\"priority\":1},"
        done
        # Remove the trailing comma from the JSON string
        members_string="${members_string%,}"
        # Create the DNSLB pool in F5XC (replace the API token with your own)
        curl -X POST \
            -H "Authorization: APIToken XXXXX" \
            -H "Accept: application/json" \
            -H "Access-Control-Allow-Origin: *" \
            -H "x-volterra-apigw-tenant: cag-waap2023" \
            -H "Content-Type: application/json" \
            -d "{\"metadata\":{\"name\":\"$dnslbpool_name\",\"namespace\":\"system\"},\"spec\":{\"a_pool\":{\"members\":[$members_string],\"disable_health_check\":null,\"max_answers\":1},\"ttl\":\"$ttl\",\"load_balancing_mode\":\"$lbmode\"}}" \
            "https://cag-waap2023.console.ves.volterra.io/api/config/dns/namespaces/system/dns_lb_pools"
    fi
done

# Unset variables to free up memory
unset pool_list membersip_array

# Loop through each line of the wide IP output
while IFS= read -r line; do
    # Extract specific details using awk and sed based on the current line
    name=$(echo "$line" | awk '{print $4}')
    dnslb_name=$(echo "$name" | sed 's/\./-/g')
    subdomain=$(echo "$name" | cut -d'.' -f1)
    domain=$(echo "$name" | sed 's/^[^.]*\.//')
    type=$(echo "$line" | awk '{print $3}')
    aliases=$(echo "$line" | grep -o 'aliases [^}]*' | awk '{print $2}')
    description=$(echo "$line" | grep -o 'description [^ ]*' | sed 's/description //')
    status=$(echo "$line" | awk '{print $12}')
    failure_rcode=$(echo "$line" | grep -o 'failure-rcode [^ ]*' | sed 's/failure-rcode //')
    last_resort_pool=$(echo "$line" | grep -o 'last-resort-pool [^ ]*' | sed 's/last-resort-pool //')
    load_balancing_decision_log=$(echo "$line" | grep -o 'load-balancing-decision-log-verbosity [^ ]*' | sed 's/load-balancing-decision-log-verbosity //')
    metadata=$(echo "$line" | grep -o 'metadata [^ ]*' | sed 's/metadata //')
    minimal_response=$(echo "$line" | grep -o 'minimal-response [^ ]*' | sed 's/minimal-response //')
    partition=$(echo "$line" | grep -o 'partition [^ ]*' | sed 's/partition //')
    persist_cidr_ipv4=$(echo "$line" | grep -o 'persist-cidr-ipv4 [^ ]*' | sed 's/persist-cidr-ipv4 //')
    persist_cidr_ipv6=$(echo "$line" | grep -o 'persist-cidr-ipv6 [^ ]*' | sed 's/persist-cidr-ipv6 //')
    persistence=$(echo "$line" | grep -o ' persistence [^ ]*' | sed 's/persistence //')
    pool_lb_mode=$(echo "$line" | grep -o 'pool-lb-mode [^ ]*' | sed 's/pool-lb-mode //')
    pools=$(echo "$line" | grep -o -P '(?<=pools \{ ).*?(?=\} \})')
    pool_cname=$(echo "$line" | grep -o 'pools-cname [^ ]*' | sed 's/pools-cname //')
    topology_edns0=$(echo "$line" | grep -o 'topology-prefer-edns0-client-subnet [^ ]*' | sed 's/topology-prefer-edns0-client-subnet //')
    ttl_persistence=$(echo "$line" | grep -o 'ttl-persistence [^ ]*' | sed 's/ttl-persistence //')
    # Use grep to find the pool names before "{"
    poolnames=$(echo "$pools" | grep -oP '\S+(?=\s*{)' | sed 's/[^a-zA-Z0-9]/-/g; s/.*/\L&/')
    # Convert matches to an array
    readarray -t poolnames_array <<< "$poolnames"
    # Store extracted values in the associative array
    current_wideip_info=([Type]="$type" [Subdomain]="$subdomain" [Domain]="$domain" [Status]="$status" [DNSLB]="$dnslb_name" [Pools]="${poolnames_array[@]}" [Pool_LB_Mode]="$pool_lb_mode")
    # Assign wideip_info to wideip_list
    wideip_list["$name"]="${current_wideip_info[@]}"
    # Add subdomains to zone_array
    if [ -n "${zone_array[$domain]}" ]; then
        zone_array["$domain"]="${zone_array[$domain]},$subdomain"
    else
        zone_array["$domain"]=$subdomain
    fi
    # Store subdomain information in the subdomain_info array
    subdomain_info["$subdomain"]="${current_wideip_info[@]}"
    # If the subdomain is type "a", add it to the array for that zone
    if [ "$type" == "a" ]; then
        a_record_per_zone[$domain]="${a_record_per_zone[$domain]}${a_record_per_zone[$domain]:+,}$subdomain"
    fi
done <<< "$wideip_output"

# Create each zone in F5XC
for zone in "${!zone_array[@]}"; do
    create_zone
done

# Loop through each domain in a_record_per_zone and echo its A record subdomains
for domain in "${!a_record_per_zone[@]}"; do
    echo "Domain: $domain"
    echo "A Record Subdomains: ${a_record_per_zone[$domain]}"
    echo "--------------------------"
    # Initialize an empty string to store the JSON strings
    a_records_string=""
    # Loop through each record in the current zone
    for record in ${a_record_per_zone[$domain]//,/ }; do
        # Build the JSON string for each A record and append it to the existing string
        xcdnslbpoolname=$(echo ${wideip_list[$record.$domain]} | awk '{for (i=6; i<=(NF-1); i++) {printf "%s", $i; if (i < NF-1) printf " "}}')
        #echo "${a_record_per_zone[$domain]}"
        #echo "xcdnslbpoolname: $xcdnslbpoolname"
        # Check if xcdnslbpoolname has multiple strings
        if [[ $xcdnslbpoolname == *" "* ]]; then
            echo "Multiple strings found in xcdnslbpoolname"
            # Split xcdnslbpoolname into an array based on space
            IFS=' ' read -ra pool_names <<< "$xcdnslbpoolname"
            # Initialize an empty string to store the JSON strings
            pools_string=""
            # Loop through each pool name in the array
            for pool_name in "${pool_names[@]}"; do
                # Create a JSON string for each pool and append it to the existing string
                pools_string+="{\"geo_location_set\":{\"tenant\":\"cag-waap2023-gwjvytud\",\"namespace\":\"system\",\"name\":\"geo-1\",\"kind\":\"geo_location_set\"},\"pool\":{\"tenant\":\"cag-waap2023-gwjvytud\",\"namespace\":\"system\",\"name\":\"$pool_name\",\"kind\":\"dns_lb_pool\"},\"score\":100},"
            done
            # Remove the trailing comma from the JSON string
            pools_string="${pools_string%,}"
        else
            pools_string="{\"geo_location_set\":{\"tenant\":\"cag-waap2023-gwjvytud\",\"namespace\":\"system\",\"name\":\"geo-1\",\"kind\":\"geo_location_set\"},\"pool\":{\"tenant\":\"cag-waap2023-gwjvytud\",\"namespace\":\"system\",\"name\":\"$xcdnslbpoolname\",\"kind\":\"dns_lb_pool\"},\"score\":100}"
        fi
        dnslbname=$(echo "dnslb-$record-$domain" | sed 's/\./-/g')
        #create_dnslb
        curl -X POST -H "Authorization: APIToken XXXXX" -H "Accept: application/json" -H "Access-Control-Allow-Origin: *" -H "x-volterra-apigw-tenant: cag-waap2023" -H "Content-Type: application/json" -d "{\"metadata\":{\"name\":\"$dnslbname\",\"namespace\":\"system\",\"labels\":{},\"annotations\":{},\"disable\":false},\"spec\":{\"record_type\":\"A\",\"rule_list\":{\"rules\":[$pools_string]},\"response_cache\":{\"disable\":{}}}}" https://cag-waap2023.console.ves.volterra.io/api/config/dns/namespaces/system/dns_load_balancers
        a_records_string+="{\"ttl\":3600,\"lb_record\": {\"name\":\"$record\",\"value\":{\"namespace\": \"system\",\"name\":\"$dnslbname\"}}},"
    done
    # Remove the trailing comma from the JSON string
    a_records_string="${a_records_string%,}"
    # Print the final JSON string
    echo "$a_records_string"
    # Update the zone record
    curl -X PUT -H "Authorization: APIToken XXXXX" -H "Accept: application/json" -H "Access-Control-Allow-Origin: *" -H "x-volterra-apigw-tenant: cag-waap2023" -H "Content-Type: application/json" -d "{\"metadata\":{\"name\":\"$domain\",\"namespace\":\"system\"},\"spec\":{\"primary\":{\"allow_http_lb_managed_records\":true,\"default_rr_set_group\":[$a_records_string],\"default_soa_parameters\":{},\"dnssec_mode\":{},\"rr_set_group\":[],\"soa_parameters\":{\"refresh\":3600,\"expire\":0,\"retry\":60,\"negative_ttl\":0,\"ttl\":0}}}}" https://cag-waap2023.console.ves.volterra.io/api/config/dns/namespaces/system/dns_zones/$domain
done

unset_arrays
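If the converter is scheduled from cron, as the description suggests, an entry along the following lines could run it periodically. The script path, log file, and 30-minute interval are assumptions for illustration only.

# Run the converter every 30 minutes and keep a log of what was pushed to F5XC
# (root crontab entry; paths are illustrative)
*/30 * * * * /bin/bash /shared/scripts/wideip-to-xc.sh >> /var/log/wideip-to-xc.log 2>&1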
F5 Archiver Ansible Playbook

Problem this snippet solves:
Centralized, scheduled archiving (backups) of F5 BIG-IP devices is a pain; however, in the new world of Infrastructure as Code (IaC) and Super-NetOps, tools like Ansible can provide the answer. I have a playbook I have been working on to allow me to back up off box quickly. UCS files are saved to a folder named tmp under the local project folder; this can be changed by editing the following line in the f5Archiver.yml file:

dest: "tmp/{{ inventory_hostname }}-{{ date['stdout'] }}.ucs"

The playbook can be run from a laptop on demand, via a scheduler (like cron), or as part of a CI/CD pipeline.

How to use this snippet:
F5 Archiver Ansible Playbook
Gitlab: StrataLabs: AnsibleF5Archiver

Overview
This Ansible playbook takes a list of F5 devices from a hosts file located within the inventory directory, creates a UCS archive, and copies it locally into the 'tmp' directory.

Requirements
This Ansible playbook requires the following:
* ansible >= 2.5
* python module f5-sdk
* F5 BIG-IP running TMOS >= 12

Usage
Run using the ansible-playbook command, using the inventory -i option to use the inventory directory instead of the default inventory host file.
NOTE: The F5 username and password are not set in the playbook and so need to be passed in as extra variables using the --extra-vars option; the variables are f5User for the username and f5Pwd for the password. The examples below use the default admin:admin.

To check the playbook before using it, run the following commands:
ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml --syntax-check
ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml --check

Once happy, run the following to execute the playbook:
ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml

Tested this on version: 12.1
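To illustrate the scheduler option mentioned above, a nightly cron entry on the Ansible control machine might look like the sketch below. The project path and log file are assumptions; in practice you would also want to pull f5User/f5Pwd from a vault rather than hard-coding them on the command line.

# Nightly UCS archive at 01:00 (project path is illustrative)
0 1 * * * cd /opt/AnsibleF5Archiver && ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml >> /var/log/f5archiver.log 2>&1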
BIGIP LTM Automated Pool Monitor Flap Troubleshooting Script in Bash

Problem this snippet solves:
This bash script collects data when an F5 BIG-IP LTM pool member monitor flaps, to help determine the root cause of the BIG-IP monitor health check failure. The script watches the LTM log and, when a new pool-member-down event occurs, performs the following actions:
1. Turns on LTM bigd debug.
2. Starts a tcpdump capture of the relevant traffic.
3. Turns off bigd debug and terminates the tcpdump process when the timer elapses (the timer is configurable).
4. Generates a qkview (optional).
5. Tars up the full set of log files under /var/log/ (optional).
The script has been tested on v11.x.

Code :

#!/usr/bin/bash

########## log file that the script is monitoring
filename="/var/log/ltm"
########## period of time (seconds) that debug and tcpdump run; change it according to your needs
timer=60
########## IP address of the pool member that flaps
poolMemberIP="10.10.10.229"
########## self IP address the LTM uses to send health monitor traffic
ltmSelfip="10.10.10.248"
########## pool member service port number
poolMemberPort="443"
########## TMOS command to turn on bigd debug
turnonBigdDebug="tmsh modify sys db bigd.debug value enable"
########## TMOS command to turn off bigd debug
turnoffBigdDebug="tmsh modify sys db bigd.debug value disable"
########## BASH command to tar BIG-IP log files
tarLogs="tar -czpf /var/tmp/logfiles.tar.gz /var/log/*"

####### file check: the following code checks whether /var/log/ltm exists on the system;
####### if it exists, the script continues and collects data when a BIG-IP pool member flaps
if [ -f $filename ]
then
    echo "/var/log/ltm exists and program is running to collect data when BIG-IP pool member flaps"
else
    ####### if it does not exist, the program terminates and logs the following message
    echo "no /var/log/ltm file found and program is terminated"
    exit 0
fi
####### file check ends

###### write a timestamp to /var/log/ltm for tracking purposes
echo "$(date) monitoring the log" >> $filename

###### start to monitor /var/log/ltm for new events
tail -f -n 0 $filename | while read -r line
do
    ###### counter for pool member down messages
    hit=$(echo "$line" | grep -c "$poolMemberIP:$poolMemberPort monitor status down")
    #echo $hit
    if [ "$hit" == "1" ]; then
        ###### display the pool down log event from /var/log/ltm
        echo $line
        ###### show the timestamp when debug is turned on
        echo "$(date) Turning on system bigddebug"
        ###### turn on bigd debug
        echo $($turnonBigdDebug)
        ###### start the tcpdump capture
        echo $(tcpdump -ni 0.0:nnn -s0 -w /var/tmp/Monitor.pcap port $poolMemberPort and \(host $poolMemberIP and host $ltmSelfip\)) &
        ###### run the timer
        sleep $timer
        ###### show the timestamp when debug is turned off
        echo "$(date) Turning off system bigddebug"
        ###### turn off bigd debug
        echo $($turnoffBigdDebug)
        ###### terminate the tcpdump process
        echo $(killall tcpdump)
        ###### generate a qkview (optional; enable it by removing the "#" sign)
        #echo $(qkview)
        ###### tar the log files (optional; enable it by removing the "#" sign)
        #echo $($tarLogs)
        break
    #else
        #echo "Monitor in progress"
    fi
done

###### show message that the program has ended
echo "$(date) exiting from program"
###### exit from the program
exit 0

Tested this on version: 11.6
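A short sketch of how the script might be launched on the BIG-IP and its artefacts reviewed afterwards. The file names are assumptions, and bigd debug output is normally written to /var/log/bigdlog while the db variable is enabled.

# Make the script executable and run it in the background so it survives the SSH session
chmod +x /var/tmp/monitor-flap-debug.sh
nohup /var/tmp/monitor-flap-debug.sh > /var/tmp/monitor-flap-debug.out 2>&1 &

# After a flap has been caught: review the capture and the bigd debug output for the member
tcpdump -nr /var/tmp/Monitor.pcap | head
grep 10.10.10.229 /var/log/bigdlog | tail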
Python BigRest VS and Pool status

Problem this snippet solves:
This gets the status of all virtual servers and their pool members and writes it to a text file. If at least one pool member is up, the pool is marked as up as well.

How to use this snippet: Based on Python 3 and the BIGREST SDK.

Code :

# Import needed libraries
from bigrest.bigip import BIGIP
import getpass

# Replace the host name as needed
host = "xx.xx.xx.xx"
user = input('Username')
pw = getpass.getpass(prompt='Password:')

# Declare the output filename
out = "output.txt"

# Connect to the device
device = BIGIP(host, user, pw)

# Open the output file for writing
outf = open(out, 'w')

# Get the virtual server info
virtuals = device.load("/mgmt/tm/ltm/virtual")
for virtual in virtuals:
    # Get the status of the VS
    vstat = device.load("/mgmt/tm/ltm/virtual/" + virtual.properties["fullPath"].replace("/", "~") + "/stats")
    vss = list(vstat.properties['entries'].values())[0]['nestedStats']['entries']['status.availabilityState']['description']
    print("VS name is ", virtual.properties["fullPath"], vss)
    outf.write("VS name is " + virtual.properties["fullPath"] + '\t' + vss + '\n')
    # Get the pool name, if one exists
    try:
        pool = virtual.properties["pool"]
    except:
        pool = None
        print('Unassigned pool')
        outf.write("No pool assigned to this VS \n\n")
    if pool:
        # Get the pool member info and their status
        pool = virtual.properties["pool"]
        pooldetail = device.load("/mgmt/tm/ltm/pool/" + pool.replace('/', '~') + "/members")
        pstate = 'Down'
        print("Pool members are: ")
        outf.write("Pool members are: \n")
        for members in pooldetail:
            print(members.properties['fullPath'], members.properties['state'])
            outf.write(members.properties['fullPath'] + '\t' + members.properties['state'] + '\n')
            # Mark the pool as up if at least one member is up
            if pstate == 'Down':
                if members.properties['state'] == 'up':
                    pstate = 'up'
        print("Pool is ", pool, pstate, '\n')
        outf.write("Pool is " + pool + '\t' + pstate + '\n\n')
        outf.flush()

outf.close()

Tested this on version: 13.1
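For reference, the two device.load() calls above map onto ordinary iControl REST requests; the equivalent curl commands look roughly like this. The address, credentials, and virtual server name /Common/vs_example are placeholders.

# List virtual servers (equivalent of device.load("/mgmt/tm/ltm/virtual"))
curl -sku admin:admin https://xx.xx.xx.xx/mgmt/tm/ltm/virtual | python -m json.tool
# Availability stats for one virtual server (fullPath with "/" encoded as "~")
curl -sku admin:admin https://xx.xx.xx.xx/mgmt/tm/ltm/virtual/~Common~vs_example/stats | python -m json.tool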
Python BigREST ucs save and download

Problem this snippet solves:
This uses the BIGREST SDK to save and download a UCS file. Adding it here since I couldn't find a similar one.

How to use this snippet: Based on Python 3 and BigREST (https://bigrest.readthedocs.io)

Code :

# Import needed libraries
from bigrest.bigip import BIGIP
import getpass

# Replace the host name as needed
host = "xx.xx.xx.xx"
user = input('Username')
pw = getpass.getpass(prompt='Password:')

# Declare the UCS filename, if needed
ucsfile = "test.ucs"

# Connect to the device
device = BIGIP(host, user, pw)

data = {}
data["command"] = "save"
data["name"] = ucsfile

# You may get a timeout exception below even if the file has been created
task = device.task_start("/mgmt/tm/sys/ucs", data)
device.task_wait(task)
if device.task_completed(task):
    device.task_result(task)
    print("Backup has been completed.")
else:
    raise Exception()

# The below will download the file to the same folder as the script
device.download("/mgmt/shared/file-transfer/ucs-downloads/", ucsfile)

Tested this on version: 13.1
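The task_start() call above issues a standard iControl REST request under the hood; an equivalent curl sketch is shown below for reference (address and credentials are placeholders). Note that the ucs-downloads endpoint serves large files in ranged chunks, so a plain curl GET may only return the first portion, which is one reason to let device.download() handle the transfer.

# Ask the BIG-IP to save a UCS named test.ucs (equivalent of the task_start call above)
curl -sku admin:admin -H "Content-Type: application/json" \
     -X POST -d '{"command":"save","name":"test.ucs"}' \
     https://xx.xx.xx.xx/mgmt/tm/sys/ucs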
Ansible HA pair deployment using excel spreadsheet. No Ansible knowledge required

Problem this snippet solves:
No Ansible knowledge required. Just fill in the spreadsheet and run the playbook. Easily customisable if you want to get more complex.
Please see: https://github.com/bwearp/simple-ha-pair

How to use this snippet:
simple-ha-pair
Using Ansible and an xlsx spreadsheet to set up an HA pair. Tested on BIG-IP software version 12.1.2. The default admin password of admin has been used.
This project uses the xls_to_facts.py module by Matt Mullen https://github.com/mamullen13316/ansible_xls_to_facts

Requirements:

BIG-IP Requirements
The BIG-IP devices will need to have their management IP, netmask, and management gateway configured.
They will also need to be licensed and provisioned with ltm.
It is possible to both provision and license the devices with Ansible, but it is not within the remit of this project.
For additional information on Ansible and F5 Ansible modules, please see: http://clouddocs.f5.com/products/orchestration/ansible/devel/index.html

Ansible Control Machine Requirements
I am using CentOS; other OSes are available.
Note: It will be easiest to carry out the below as the root user.

You will need Python 2.7+
$ yum install python
You will need pip
$ curl 'https://bootstrap.pypa.io/get-pip.py' > get-pip.py && sudo python get-pip.py
You will need ansible 2.5+
$ pip install ansible
If 2.5+ is not yet available, which it wasn't at the time of writing, please install directly from git
$ yum install git
$ pip install --upgrade git+https://github.com/ansible/ansible.git
You will need to add a few other modules
$ pip install f5-sdk bigsuds netaddr deepdiff request objectpath openpyxl
You will need to create and copy a root ssh-key to BOTH the BIG-IP devices
$ ssh-keygen
Accept the defaults
$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@<bigip-management-ip>
Example:
$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.203
You will need to download the files using git - see above for git installation
$ git clone https://github.com/bwearp/simple-ha-pair/
$ cd simple-ha-pair

Executing the playbook
You will then need to edit the simple-ha-pair.xlsx file to your preferences, then execute the playbook as root:
$ ansible-playbook simple-ha-pair.yml

NOTES:
In the simple-ha-pair.xlsx spreadsheet:
- The HA VLAN must be called 'HA'
- The settings where yes/no are required must be yes/no and not YES/NO or Yes/No
- One device must have primary=yes and the other must have primary=no
I have added only Standard virtual servers with http, client & server ssl profiles, but hopefully it is pretty obvious from the simple-ha-pair.yml playbook how to add in others.
Trunks haven't been added. This is because you can't have trunks in VE and also there is no F5 Ansible module to add trunks. It could be done relatively easily using the bigip_command module, and hopefully the bigip_command examples in the simple-ha-pair.yml file will show that.
I haven't added in persistence settings, as this would require a dropdown list of some kind. It is simple enough to do. Automation does not sit well with complication.

To update if there are any changes, please cd to the same folder and run:
$ git pull
You will notice there is also a reset.yml playbook to reset the devices to factory defaults. To run the reset.yml playbook as root:
$ ansible-playbook reset.yml

Code : https://github.com/bwearp/simple-ha-pair/blob/master/simple-ha-pair.yml
Tested this on version: 12.1
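Before the first run, it may be worth confirming the control machine actually meets the version requirements listed above; a quick sanity check could look like this (module names are taken from the pip install line above, and the f5-sdk package is imported as f5).

# Confirm Ansible 2.5+ and that the required Python modules are importable
ansible --version | head -1
python -c "import f5, bigsuds, netaddr, deepdiff, objectpath, openpyxl; print('modules OK')"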
Automating BIG-IP deployments using Ansible

Problem this snippet solves:
Provides the opportunity to easily test deployment models and use cases of BIG-IP in AWS EC2. While AWS is used to provide a virtual compute and networking infrastructure, best practices shown here may be applicable to other public and private 'cloud' environments.
Shows how the lifecycle of BIG-IP services can be automated using open-source configuration management and orchestration tools, in conjunction with the APIs provided by the BIG-IP platform.

How to use this snippet: See README.md and /docs in the linked Github repository.

Code : https://github.com/F5Networks/aws-deployments/
Tested this on version: 11.6