BIG-IP
F5 BIG-IP Multi-Site Dashboard
A comprehensive real-time monitoring dashboard for F5 BIG-IP Application Delivery Controllers featuring multi-site support, DNS hostname resolution, member state tracking, and advanced filtering. The dashboard is a 170KB modular JavaScript application that runs entirely in your browser and is served directly from the F5's high-speed operational dataplane. One or more sites operate as Dashboard Front-Ends, serving the dashboard interface (HTML, JavaScript, CSS) via iFiles, while other sites operate as API Hosts, providing pool data through optimized JSON-based dashboard API calls. The result is unified visibility across multiple sites from a single interface, without requiring even a read-only account on any of the BIG-IPs: you can switch between locations and see consistent pool, member, and health status data with minimal latency and very little overhead.

Think of it as an extension of the F5 GUI: near real-time state tracking, DNS hostname resolution (if configured), advanced search/filtering, and the ability to see exactly what changed and when. It gives application and operations teams direct visibility into application pool state without waiting for answers from F5 engineers, eliminating the organizational bottleneck that slows down troubleshooting when every minute counts.

https://github.com/hauptem/F5-Multisite-Dashboard
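Under the hood, the split is simple: a Front-End site serves the static dashboard assets out of iFiles, and each API Host answers JSON status calls that the browser polls. As a rough illustration of that consumption model only — the endpoint path and response fields below are hypothetical placeholders, not the project's actual API (see the GitHub repository linked above for the real interface):

# Hypothetical sketch of polling an API-host virtual server for pool state.
# The URL path and JSON field names are placeholders, not the dashboard's
# documented API; consult the GitHub repository for the real endpoints.
import requests

API_HOST = "https://site-b.example.net"  # hypothetical API-host virtual server

resp = requests.get(API_HOST + "/dashboard/api/pools", timeout=5)
resp.raise_for_status()
for pool in resp.json():
    print(pool["name"], pool["status"])  # placeholder field names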
Automating F5 Licensing - without direct internet access

Hello DevCentral Community! I'm excited to share a project I've been working on recently: **Automating F5 BIG-IP VE Licensing** without needing direct internet access!

The project covers:

- Retrieving a dossier automatically via the iControl REST API.
- Interacting with F5 licensing servers through proxies or offline.
- Re-activating licenses post-upgrade using custom scripts.
- Full Python 3 support (moving away from BigSuds/Python 2 limitations). ✅

The idea is to help users who need to automate the licensing process, especially in secure or offline environments. I'll be sharing:

- Scripts
- Use cases
- Lessons learned
- Tips for real-world deployments

If you're interested in automating your BIG-IP licensing process, feel free to follow along! Feedback, ideas, or collaboration is most welcome! 🚀

import requests
import json
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


class BigIPLicenseManager:
    def __init__(self, host, username, password, registration_key):
        self.host = host
        self.username = username
        self.password = password
        self.registration_key = registration_key
        self.base_url = f"https://{self.host}/mgmt/tm/sys/license"
        self.headers = {'Content-Type': 'application/json'}

    def get_dossier(self):
        payload = {
            "command": "install",
            "registrationKey": self.registration_key
        }
        response = requests.post(
            self.base_url,
            auth=(self.username, self.password),
            headers=self.headers,
            json=payload,
            verify=False
        )
        if response.status_code == 200:
            data = response.json()
            dossier = data.get('dossier')
            if dossier:
                print("[+] Dossier retrieved successfully.")
                return dossier
            else:
                print("[-] No dossier found in response.")
                return None
        else:
            print(f"[-] Failed to retrieve dossier: {response.text}")
            return None

    def install_license(self, license_text):
        payload = {
            "command": "install",
            "licenseText": license_text
        }
        response = requests.post(
            self.base_url,
            auth=(self.username, self.password),
            headers=self.headers,
            json=payload,
            verify=False
        )
        if response.status_code == 200:
            print("[+] License installed successfully.")
        else:
            print(f"[-] Failed to install license: {response.text}")


if __name__ == "__main__":
    # Define your BIG-IP credentials and registration key here
    bigip_host = "192.168.1.245"
    bigip_username = "admin"
    bigip_password = "admin"
    registration_key = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"

    manager = BigIPLicenseManager(
        bigip_host,
        bigip_username,
        bigip_password,
        registration_key
    )

    dossier = manager.get_dossier()
    if dossier:
        # Print the dossier to manually activate it via activate.f5.com
        print("\n[!] Submit the following dossier to F5 activation server:")
        print(dossier)
        # After getting the license text (offline or from a licensing server)
        license_text = input("\nPaste the license text here:\n")
        manager.install_license(license_text.strip())
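Since one goal is interacting with the licensing servers through proxies, here is a minimal sketch of routing the activation call through a forward proxy with requests. The endpoint path, payload format, and proxy address are assumptions for illustration — the post only names activate.f5.com as the manual activation target, so consult F5's documentation for the actual activation API before relying on this:

# Hypothetical sketch: submit a dossier through a forward proxy.
# ACTIVATION_URL's path/payload and the proxy address are placeholders,
# not a documented F5 API.
import requests

ACTIVATION_URL = "https://activate.f5.com/"  # real path not shown in this post
PROXIES = {"https": "http://proxy.example.local:3128"}  # hypothetical proxy

def submit_dossier_via_proxy(dossier):
    """Send the dossier via the proxy and return the response body."""
    response = requests.post(
        ACTIVATION_URL,
        data={"dossier": dossier},
        proxies=PROXIES,
        timeout=30,
    )
    response.raise_for_status()
    return response.text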
Proxy Protocol v2 Initiator

Problem this snippet solves: Proxy Protocol v1 articles have already been posted on DevCentral, but no v2 iRule code was available. A customer wanted Proxy Protocol v2 support, so I wrote an iRule for v2. Proxy protocol for the BIG-IP (f5.com)

How to use this snippet: The back-end server must handle the PROXY header before any data exchange.

Code :

when CLIENT_ACCEPTED {
    # DEBUG On/Off
    set DEBUG 0

    # v2 signature : \r\n\r\n\0\r\nQUIT\n
    set v2_proxy_header "0d0a0d0a000d0a515549540a"
    # v2 version and command : 0x21 - version 2 & PROXY command
    set v2_ver_command "21"
    # v2 address family and transport protocol : 0x11 - AF_INET (IPv4) & TCP protocol
    set v2_af_tp "11"
    # v2 Address Size : 0x000C - 12 bytes for IPv4 + TCP
    set v2_address_length "000c"

    # Get TCP ports - 2 byte hexadecimal format
    set src_port [format "%04x" [TCP::client_port]]
    set dst_port [format "%04x" [TCP::local_port]]

    # Get Src Address and convert to 4 byte hexadecimal format
    foreach val [split [IP::client_addr] "."] {
        append src_addr [format "%02x" $val]
    }

    # Get Dst Address and convert to 4 byte hexadecimal format
    foreach val [split [IP::local_addr] "."] {
        append dst_addr [format "%02x" $val]
    }

    # Build proxy v2 data
    set proxy_data [binary format H* "${v2_proxy_header}${v2_ver_command}${v2_af_tp}${v2_address_length}${src_addr}${dst_addr}${src_port}${dst_port}"]

    if { $DEBUG } {
        binary scan $proxy_data H* proxy_dump
        log local0. "[IP::client_addr]:[TCP::client_port]_[IP::local_addr]:[TCP::local_port] - proxy_data dump : $proxy_dump"
    }
}

when SERVER_CONNECTED {
    TCP::respond $proxy_data
}
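For reference, the 28 bytes the iRule emits follow the standard proxy protocol v2 layout: a 12-byte signature, a version/command byte (0x21), an address-family/protocol byte (0x11 for IPv4 + TCP), a 2-byte address-block length (12), then source and destination addresses and ports in network byte order. A small Python sketch — separate from the iRule, handy for testing what the back-end should expect — that builds and parses the same header:

# Build/parse the proxy protocol v2 IPv4+TCP header the iRule above emits.
import socket
import struct

PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"  # hex 0d0a0d0a000d0a515549540a

def build_ppv2_ipv4_tcp(src_ip, dst_ip, src_port, dst_port):
    """Return the 28-byte v2 PROXY header for an IPv4/TCP connection."""
    return (
        PP2_SIGNATURE
        + struct.pack("!BBH", 0x21, 0x11, 12)  # ver/cmd, family/proto, addr len
        + socket.inet_aton(src_ip)
        + socket.inet_aton(dst_ip)
        + struct.pack("!HH", src_port, dst_port)
    )

def parse_ppv2_ipv4_tcp(data):
    """Unpack a v2 IPv4/TCP PROXY header into (src_ip, dst_ip, src_port, dst_port)."""
    assert data[:12] == PP2_SIGNATURE, "not a proxy protocol v2 header"
    ver_cmd, fam_proto, addr_len = struct.unpack("!BBH", data[12:16])
    assert ver_cmd == 0x21 and fam_proto == 0x11 and addr_len == 12
    src_ip = socket.inet_ntoa(data[16:20])
    dst_ip = socket.inet_ntoa(data[20:24])
    src_port, dst_port = struct.unpack("!HH", data[24:28])
    return src_ip, dst_ip, src_port, dst_port

# Round-trip check:
# parse_ppv2_ipv4_tcp(build_ppv2_ipv4_tcp("10.0.0.1", "10.0.0.2", 4321, 80))
# -> ("10.0.0.1", "10.0.0.2", 4321, 80)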
F5 iApp Automated Backup

Problem this snippet solves: This is now available on GitHub! Please look on GitHub for the latest version, and submit any bugs or questions as an "Issue" on GitHub. (Note: DevCentral admin update - Daniel's project appears abandoned, so it's been forked and updated at the link below. @damnski on github added some SFTP code that has been merged in as well.)

https://github.com/f5devcentral/f5-automated-backup-iapp

Intro

Building on the significant work of Thomas Schockaert (and several other DevCentralites) I enhanced many aspects I needed for my own purposes, updated many things I noticed requested on the forums, and added additional documentation and clarification. As you may see in several of my comments on the original posts, I iterated through several 2.2.x versions and am now releasing v3.0.0. Below is the breakdown! Also, I have done quite a bit of testing (mostly on v13.1.0.1 lately) and I doubt I've caught everything, especially with all of the changes. Please post any questions or issues in the comments. Cheers! Daniel Tavernier (tabernarious)

Related posts:

- Git Repository for f5-automated-backup-iapp (https://github.com/tabernarious/f5-automated-backup-iapp)
- https://community.f5.com/t5/technical-articles/f5-automated-backups-the-right-way/ta-p/288454
- https://community.f5.com/t5/crowdsrc/complete-f5-automated-backup-solution/ta-p/288701
- https://community.f5.com/t5/crowdsrc/complete-f5-automated-backup-solution-2/ta-p/274252
- https://community.f5.com/t5/technical-forum/automated-backup-solution/m-p/24551
- https://community.f5.com/t5/crowdsrc/tkb-p/CrowdSRC

v3.2.1 (20201210)

- Merged v3.1.11 and v3.2.0 for explicit SFTP support (separate from SCP).
- Tweaked the SCP and SFTP upload directory handling; detailed instructions are in the iApp.
- Tested on 13.1.3.4 and 14.1.3.

v3.1.11 (20201210)

- Better handling of UCS passphrases, and notes about characters to avoid.
- I successfully tested this exact passphrase in the 13.1.3.4 CLI (surrounded with single quotes) and GUI (as-is): `~!@#$%^*()aB1-_=+[{]}:./?
- I successfully tested this exact passphrase in 14.1.3 (square-braces and curly-braces would not work): `~!@#$%^*()aB1-_=+:./?
- Though there may be situations where these could work, avoid these characters (separated by spaces): " ' & | ; < > \ [ ] { } ,
- Moved changelog and notes from the template to CHANGELOG.md and README.md.
- Replaced all tabs (\t) with four spaces.

v3.1.10 (20201209)

- Added SMB Version and SMB Security options to support v14+ and newer versions of Microsoft Windows and Windows Server.
- Tested SMB/CIFS on 13.1.3.4 and 14.1.3 against Windows Server 2019 using "2.0" and "ntlmsspi".

v3.1.0

- Removed "app-service none" from iCall objects. The iCall objects are now created as part of the Application Service (iApp) and are properly cleaned up if the iApp is redeployed or deleted.
- Reasonably tested on 11.5.4 HF2 (SMB worked fine using "mount -t cifs") and altered requires-bigip-version-min to match.
- Fixed the error "script did not successfully complete: (can't read "::destination_parameters__protocol_enable": no such variable" by wrapping most of the implementation in a block that first checks $::backup_schedule__frequency_select for "Disable".
- Added a default value to "filename format".
- Changed the UCS default value for $backup_file_name_extension to ".ucs" and added $fname_noext.
- Removed old SFTP sections and references (now handled through SCP/SFTP).
- Adjusted logging: added "sleep 1" to ensure proper logging; added $backup_directory to log messages.
- Adjusted some help messages.

New v3.0.0 features:

- Supports multiple instances! (Deploy multiple copies of the iApp to save backups to different places, or perhaps to keep daily backups locally and send weekly backups to a network drive.)
- Fully ConfigSync compatible! (Encrypted values now in $script instead of a local file.)
- Long passwords supported! (Using "-A" with openssl, which reads/writes base64-encoded strings as a single line.)
- Added $script error checking for all remote backup types! (Using 'catch' to prevent tcl errors when $script aborts.)
- Backup files are cleaned up after any $script errors due to the new error checking.
- Added logging! (Run logs are sent to '/var/log/ltm' via the logger command, which is compatible with BIG-IP Remote Logging configuration (syslog). Run logs AND errors are sent to '/var/tmp/scriptd.out'. Errors may include plain-text passwords, which should not be in /var/log/ltm or syslog.)
- Added a custom cipher option for SCP! (In case BIG-IP and the destination server are not cipher-compatible out of the box.)
- Added a StrictHostKeyChecking=no option. (This is insecure and should only be used for testing--lots of warnings.)
- Combined SCP and SFTP because they both use SCP to perform the remote copy. (Easier to maintain!)

Original v1.x.x and v2.x.x features kept (copied from an original post):

- It allows you to choose between UCS or SCF as the backup type (whilst providing ample warnings about SCF not being a very good restore option due to its incompleteness in some cases).
- It allows you to provide a passphrase for the UCS archives (the standard GUI also does this, so the iApp should too).
- It allows you to not include the private keys (same thing: the standard GUI does it, so the iApp does it too).
- It allows you to set a backup schedule for every X minutes/hours/days/weeks/months, or a custom selection of days in the week.
- It allows you to set the exact time, minute of the hour, day of the week, or day of the month when the backup should be performed (depending on the usefulness with regard to the schedule type).
- It allows you to transfer the backup files to external devices using 4 different protocols, next to providing local storage on the device itself:
  - SCP (username/private key without password)
  - SFTP (username/private key without password)
  - FTP (username/password)
  - SMB (now using TMOS v12.x.x compatible 'mount -t cifs', with username/password)
  - Local Storage (/var/local/ucs or /var/local/scf)
- It stores all passwords and private keys in a secure fashion: encrypted by the master key of the unit (f5mku), rendering it safe to store the backups, including the credentials, off-box.
- It has a configurable automatic pruning function for the Local Storage option, so the disk doesn't fill up (i.e. keep the last X backup files).
- It allows you to configure the filename using the date/time wildcards from the tcl [clock] command, as well as providing a variable to include the hostname.
- It requires only the WebGUI to establish the configuration you desire.
- It allows you to disable the processes for automated backup without having to remove the Application Service or losing any previously entered settings.
- For the external shellscripts it automatically generates, the credentials are stored in encrypted form (using the master key).
- It means you are no longer required to make modifications on the linux command line to get your automated backups running after an RMA or restore operation.
- It cleans up after itself, which means there are no extraneous shellscripts or status files lingering around after the scripts execute.

How to use this snippet:

1. Find and download the latest iApp template on GitHub (e.g. "f5.automated_backup.v3.2.1.tmpl.tcl").
2. Import the text file as an iApp Template in the BIG-IP GUI.
3. Create an Application Service using the imported Template.
4. Answer the questions (paying close attention to the help sections).
5. Check /var/tmp/scriptd.out for general logs and errors.

Tested this on version: 16.0
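The v3.1.11 passphrase notes above translate directly into a quick pre-flight check. A small sketch (not part of the iApp itself) that flags the characters the changelog advises against:

# Flags UCS passphrase characters the v3.1.11 notes say to avoid.
AVOID = set('"\'&|;<>\\[]{},')

def check_ucs_passphrase(passphrase):
    """Return any characters in the passphrase that the iApp notes advise against."""
    return set(passphrase) & AVOID

# The changelog's 14.1.3-tested passphrase passes cleanly:
# check_ucs_passphrase('`~!@#$%^*()aB1-_=+:./?') -> set()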
Export Virtual Server Configuration in CSV - tmsh cli script

Problem this snippet solves: This is a simple cli script used to collect all virtual server names, their VIP details, pool names, pool members, profiles, iRules, and persistence settings, across all partitions. A hypothetical sample row is shown after the code versions below. You can customize the code to extract other available fields too; the same logic can be used to pull information from profile stats, certificates, etc.

Update: 5th Oct 2020 - Added pool member capture to the code. A Pool-Members column now follows Pool-Name. If a pool has no members, "field not present: members" is shown in the Pool-Members column. If no pool is bound to the VS at all, both Pool-Name and Pool-Members show "none".

Update: 21st Jan 2021 - Added logic to walk multiple partitions and collect their configs.

Update: 12th Feb 2021 - Added persistence to the sheet.

Update: 26th May 2021 - Added state and status to the sheet.

Update: 24th Oct 2023 - Added hostname, Pool-Status, Total-Conn and Current-Conn.

Note: This codeshare has multiple versions; use the latest version alone. The older versions are kept so end users can understand and compare the changes, which helps when modifying the script to their own requirements. Hope it helps.

How to use this snippet:

Log in to the LTM and create the script by running the command below, then paste the code provided in the snippet:

tmsh create cli script virtual-details

When you list it, it should look something like this:

[admin@labltm:Active:Standalone] ~ # tmsh list cli script virtual-details
cli script virtual-details {
    proc script::run {} {
        puts "Virtual Server,Destination,Pool-Name,Profiles,Rules"
        foreach { obj } [tmsh::get_config ltm virtual all-properties] {
            set profiles [tmsh::get_field_value $obj "profiles"]
            set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "]
            set profilelist [regsub -all "profiles " $remprof ""]
            puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],[tmsh::get_field_value $obj "pool"],$profilelist,[tmsh::get_field_value $obj "rules"]"
        }
    }
    total-signing-status not-all-signed
}
[admin@labltm:Active:Standalone] ~ #

And you can run the script like this:

tmsh run cli script virtual-details > /var/tmp/virtual-details.csv

And get the output from the saved file:

cat /var/tmp/virtual-details.csv

Old Codes:

cli script virtual-details {
    proc script::run {} {
        puts "Virtual Server,Destination,Pool-Name,Profiles,Rules"
        foreach { obj } [tmsh::get_config ltm virtual all-properties] {
            set profiles [tmsh::get_field_value $obj "profiles"]
            set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "]
            set profilelist [regsub -all "profiles " $remprof ""]
            puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],[tmsh::get_field_value $obj "pool"],$profilelist,[tmsh::get_field_value $obj "rules"]"
        }
    }
    total-signing-status not-all-signed
}

###===================================================
### 2.0
### UPDATED CODE BELOW
### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER
###===================================================

cli script virtual-details {
    proc script::run {} {
        puts "Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules"
        foreach { obj } [tmsh::get_config ltm virtual all-properties] {
            set poolname [tmsh::get_field_value $obj "pool"]
            set profiles [tmsh::get_field_value $obj "profiles"]
            set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "]
            set profilelist [regsub -all "profiles " $remprof ""]
            if { $poolname != "none" }{
                set poolconfig [tmsh::get_config /ltm pool $poolname]
                foreach poolinfo $poolconfig {
                    if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } {
                        set pool_member $err
                        puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]"
                    } else {
                        set pool_member ""
                        set member_name [tmsh::get_field_value $poolinfo "members" ]
                        foreach member $member_name {
                            append pool_member "[lindex $member 1] "
                        }
                        puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]"
                    }
                }
            } else {
                puts "[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"]"
            }
        }
    }
    total-signing-status not-all-signed
}

###===================================================
### Version 3.0
### UPDATED CODE BELOW FOR MULTIPLE PARTITION
### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER
###===================================================

cli script virtual-details {
    proc script::run {} {
        puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules"
        foreach all_partitions [tmsh::get_config auth partition] {
            set partition "[lindex [split $all_partitions " "] 2]"
            tmsh::cd /$partition
            foreach { obj } [tmsh::get_config ltm virtual all-properties] {
                set poolname [tmsh::get_field_value $obj "pool"]
                set profiles [tmsh::get_field_value $obj "profiles"]
                set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "]
                set profilelist [regsub -all "profiles " $remprof ""]
                if { $poolname != "none" }{
                    set poolconfig [tmsh::get_config /ltm pool $poolname]
                    foreach poolinfo $poolconfig {
                        if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } {
                            set pool_member $err
                            puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]"
                        } else {
                            set pool_member ""
                            set member_name [tmsh::get_field_value $poolinfo "members" ]
                            foreach member $member_name {
                                append pool_member "[lindex $member 1] "
                            }
                            puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"]"
                        }
                    }
                } else {
                    puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"]"
                }
            }
        }
    }
    total-signing-status not-all-signed
}

###===================================================
### Version 4.0
### UPDATED CODE BELOW FOR CAPTURING PERSISTENCE
### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER
###===================================================

cli script virtual-details {
    proc script::run {} {
        puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules,Persist"
        foreach all_partitions [tmsh::get_config auth partition] {
            set partition "[lindex [split $all_partitions " "] 2]"
            tmsh::cd /$partition
            foreach { obj } [tmsh::get_config ltm virtual all-properties] {
                set poolname [tmsh::get_field_value $obj "pool"]
                set profiles [tmsh::get_field_value $obj "profiles"]
                set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "]
                set profilelist [regsub -all "profiles " $remprof ""]
                set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1]
                if { $poolname != "none" }{
                    set poolconfig [tmsh::get_config /ltm pool $poolname]
                    foreach poolinfo $poolconfig {
                        if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } {
                            set pool_member $err
                            puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist"
                        } else {
                            set pool_member ""
                            set member_name [tmsh::get_field_value $poolinfo "members" ]
                            foreach member $member_name {
                                append pool_member "[lindex $member 1] "
                            }
                            puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist"
                        }
                    }
                } else {
                    puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist"
                }
            }
        }
    }
    total-signing-status not-all-signed
}

###===================================================
### 5.0
### UPDATED CODE BELOW
### DO NOT MIX ABOVE CODE & BELOW CODE TOGETHER
###===================================================

cli script virtual-details {
    proc script::run {} {
        puts "Partition,Virtual Server,Destination,Pool-Name,Pool-Members,Profiles,Rules,Persist,Status,State"
        foreach all_partitions [tmsh::get_config auth partition] {
            set partition "[lindex [split $all_partitions " "] 2]"
            tmsh::cd /$partition
            foreach { obj } [tmsh::get_config ltm virtual all-properties] {
                foreach { status } [tmsh::get_status ltm virtual [tmsh::get_name $obj]] {
                    set vipstatus [tmsh::get_field_value $status "status.availability-state"]
                    set vipstate [tmsh::get_field_value $status "status.enabled-state"]
                }
                set poolname [tmsh::get_field_value $obj "pool"]
                set profiles [tmsh::get_field_value $obj "profiles"]
                set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "]
                set profilelist [regsub -all "profiles " $remprof ""]
                set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1]
                if { $poolname != "none" }{
                    set poolconfig [tmsh::get_config /ltm pool $poolname]
                    foreach poolinfo $poolconfig {
                        if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } {
                            set pool_member $err
                            puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate"
                        } else {
                            set pool_member ""
                            set member_name [tmsh::get_field_value $poolinfo "members" ]
                            foreach member $member_name {
                                append pool_member "[lindex $member 1] "
                            }
                            puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate"
                        }
                    }
                } else {
                    puts "$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate"
                }
            }
        }
    }
    total-signing-status not-all-signed
}

Latest Code:

cli script virtual-details {
    proc script::run {} {
        set hostconf [tmsh::get_config /sys global-settings hostname]
        set hostname [tmsh::get_field_value [lindex $hostconf 0] hostname]
        puts "Hostname,Partition,Virtual Server,Destination,Pool-Name,Pool-Status,Pool-Members,Profiles,Rules,Persist,Status,State,Total-Conn,Current-Conn"
        foreach all_partitions [tmsh::get_config auth partition] {
            set partition "[lindex [split $all_partitions " "] 2]"
            tmsh::cd /$partition
            foreach { obj } [tmsh::get_config ltm virtual all-properties] {
                foreach { status } [tmsh::get_status ltm virtual [tmsh::get_name $obj]] {
                    set vipstatus [tmsh::get_field_value $status "status.availability-state"]
                    set vipstate [tmsh::get_field_value $status "status.enabled-state"]
                    set total_conn [tmsh::get_field_value $status "clientside.tot-conns"]
                    set curr_conn [tmsh::get_field_value $status "clientside.cur-conns"]
                }
                set poolname [tmsh::get_field_value $obj "pool"]
                set profiles [tmsh::get_field_value $obj "profiles"]
                set remprof [regsub -all {\n} [regsub -all " context" [join $profiles "\n"] "context"] " "]
                set profilelist [regsub -all "profiles " $remprof ""]
                set persist [lindex [lindex [tmsh::get_field_value $obj "persist"] 0] 1]
                if { $poolname != "none" }{
                    foreach { p_status } [tmsh::get_status ltm pool $poolname] {
                        set pool_status [tmsh::get_field_value $p_status "status.availability-state"]
                    }
                    set poolconfig [tmsh::get_config /ltm pool $poolname]
                    foreach poolinfo $poolconfig {
                        if { [catch { set member_name [tmsh::get_field_value $poolinfo "members" ]} err] } {
                            set pool_member $err
                            puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_status,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn"
                        } else {
                            set pool_member ""
                            set member_name [tmsh::get_field_value $poolinfo "members" ]
                            foreach member $member_name {
                                append pool_member "[lindex $member 1] "
                            }
                            puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,$pool_status,$pool_member,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn"
                        }
                    }
                } else {
                    puts "$hostname,$partition,[tmsh::get_name $obj],[tmsh::get_field_value $obj "destination"],$poolname,none,none,$profilelist,[tmsh::get_field_value $obj "rules"],$persist,$vipstatus,$vipstate,$total_conn,$curr_conn"
                }
            }
        }
    }
}

Tested this on version: 13.0
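For reference, a hypothetical sample of the CSV produced by the latest version — the header row comes straight from the script, but every field value below is invented purely to show the column layout:

Hostname,Partition,Virtual Server,Destination,Pool-Name,Pool-Status,Pool-Members,Profiles,Rules,Persist,Status,State,Total-Conn,Current-Conn
bigip1.example.com,Common,app1_vs,10.10.10.50:443,app1_pool,available,10.10.20.11:443 10.10.20.12:443,tcp clientssl http,app1_irule,cookie,available,enabled,123456,42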
TACACS+ External Monitor (Python)

Problem this snippet solves: This script is an external monitor for TACACS+ that simulates a TACACS+ client authenticating a test user, and marks the status of a pool member as up if the authentication is successful. If the connection is down/times out, or the authentication fails due to invalid account settings, the script marks the pool member status as down. This is heavily inspired by the Radius External Monitor (Python) by AlanTen.

How to use this snippet:

Prerequisite

This script uses the TACACS+ Python client by Ansible (tested on version 2.6).

1. Create the directory /config/eav/tacacs_plus on BIG-IP.
2. Copy all contents from the tacacs_plus package into /config/eav/tacacs_plus. You may also need to download six.py from https://raw.githubusercontent.com/benjaminp/six/master/six.py and place it in /config/eav/tacacs_plus.
3. Provision a test account on the TACACS+ server for the script to authenticate with.

Installation

On BIG-IP, import the code snippet below as an External Monitor Program File.

Monitor Configuration

Set up an External monitor with the imported file, and configure it with the following environment variables:

- KEY: TACACS+ server secret
- USER: Username for test account
- PASSWORD: Password for test account
- MOD_PATH: Path to location of Python package tacacs_plus, default: /config/eav
- TIMEOUT: Duration to wait for connectivity to the TACACS+ server to be established, default: 3

Troubleshooting

SSH to BIG-IP and run the script locally:

$ cd /config/filestore/files_d/Common_d/external_monitor_d/
# Get name of uploaded file, e.g.:
$ ls -la
...
-rwxr-xr-x. 1 tomcat tomcat 1883 2021-09-17 04:05 :Common:tacacs-monitor_39568_7
# Run the script with the corresponding variables
$ KEY=<my_tacacs_key> USER=<testuser> PASSWORD=<supersecure> python <external program file, e.g. :Common:tacacs-monitor_39568_7> <TACACS+ server IP> <TACACS+ server port>

Code :

#!/usr/bin/env python
#
# Filename  : tacacs_plus_mon.py
# Author    : Leon Seng
# Version   : 1.2
# Date      : 2021/09/21
# Python ver: 2.6+
# F5 version: 12.1+
#
# ========== Installation
# Import this script via GUI:
#   System > File Management > External Monitor Program File List > Import...
# Name it however you want.
# Get, modify and copy the following modules:
# ========== Required modules
# -- six --
# https://pypi.org/project/six/
# Copy six.py into /config/eav
#
# -- tacacs_plus --
# https://pypi.org/project/tacacs_plus/ | https://github.com/ansible/tacacs_plus
# Copy tacacs_plus directory into /config/eav
# ========== Environment Variables
# NODE_IP   - Supplied by F5 monitor as first argument
# NODE_PORT - Supplied by F5 monitor as second argument
# KEY       - TACACS+ server secret
# USER      - Username for test account
# PASSWORD  - Password for test account
# MOD_PATH  - Path to location of Python package tacacs_plus, default: /config/eav
# TIMEOUT   - Duration to wait for connectivity to TACACS server to be established, default: 3

import os
import socket
import sys

if os.environ.get('MOD_PATH'):
    sys.path.append(os.environ.get('MOD_PATH'))
else:
    sys.path.append('/config/eav')

# https://github.com/ansible/tacacs_plus
from tacacs_plus.client import TACACSClient

node_ip = sys.argv[1]
node_port = int(sys.argv[2])
key = os.environ.get("KEY")
user = os.environ.get("USER")
password = os.environ.get("PASSWORD")
timeout = int(os.environ.get("TIMEOUT", 3))

# Determine if node IP is IPv4 or IPv6
family = None
try:
    socket.inet_pton(socket.AF_INET, node_ip)
    family = socket.AF_INET
except socket.error:  # not a valid IPv4 address
    try:
        socket.inet_pton(socket.AF_INET6, node_ip)
        family = socket.AF_INET6
    except socket.error:
        sys.exit(1)

# Authenticate against TACACS server
client = TACACSClient(node_ip, node_port, key, timeout=timeout, family=family)
try:
    auth = client.authenticate(user, password)
    if auth.valid:
        print "up"
except socket.error:
    # EAV script marks node as DOWN when no output is present
    pass

Tested this on version: 12.1
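To see the pass/fail contract in action without touching the monitor configuration, you can drive the script the same way BIG-IP's EAV framework does: node IP and port as arguments, settings as environment variables, and the literal output "up" meaning healthy. A small harness sketch (it assumes the script and its modules are available on the machine running it; the file name, IP, port, and credentials are examples):

# EAV-style invocation: argv carries node IP/port, env carries settings.
# BIG-IP treats the "up" output as member-up and silence as member-down.
import os
import subprocess

env = dict(os.environ, KEY="s3cret", USER="testuser", PASSWORD="supersecure")
out = subprocess.check_output(
    ["python", "tacacs_plus_mon.py", "192.0.2.10", "49"],  # example IP/port
    env=env,
)
print("member UP" if out.strip() == b"up" else "member DOWN")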
Especial Load Balancing Active-Passive Scenario (I)

Problem this snippet solves: This code was written to solve this issue REF - https://devcentral.f5.com/s/feed/0D51T00006i7jWpSAI

Specification:

- 2 clusters with 2 nodes each.
- The two clusters operate as an active-passive pair.
- Nodes within each cluster are load-balanced round robin.
- When a cluster becomes active, it keeps that status even after the initially active cluster comes back up.
- Only one BIG-IP device.

There are many topics suggesting "Manual Resume" monitors to meet these requirements, but that approach requires manually restoring each node when it comes back online. My goal was an unattended virtual server. To achieve that, I use a combination of persistence and internal virtual server load balancing (VIP-targeting-VIP on the same device).

How to use this snippet: This scenario is composed of the following objects:

- 4 nodes (Node1, Node2, Node3, Node4)
- 1 additional node called "local_node" (which represents the VIP used in the VIP-targeting-VIP)
- 2 pools called "ClusterA_pool" and "ClusterB_pool" (which point to each pair of nodes)
- 1 additional pool called "MyPool" (which points to the two internal VIPs)
- 2 virtual servers called "ClusterA_vs" and "ClusterB_vs" (which use round robin to the pools of the same name)
- 1 virtual server called "MyVS" (which is the visible VS and points to "MyPool")

By the way, I use a "Slow Ramp Time" of 0 to reduce the failover time.

The following is an example configuration:

-----------------
ltm virtual MyVS {
    destination 10.130.40.150:http
    ip-protocol tcp
    mask 255.255.255.255
    persist {
        universal {
            default yes
        }
    }
    pool MyPool
    profiles {
        tcp { }
    }
    rules {
        MyRule
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 53
}
ltm virtual ClusterA_vs {
    destination 10.130.40.150:1001
    ip-protocol tcp
    mask 255.255.255.255
    pool ClusterA_pool
    profiles {
        tcp { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 54
}
ltm virtual ClusterB_vs {
    destination 10.130.40.150:1002
    ip-protocol tcp
    mask 255.255.255.255
    pool ClusterB_pool
    profiles {
        tcp { }
    }
    source 0.0.0.0/0
    translate-address enabled
    translate-port enabled
    vs-index 55
}
ltm pool ClusterA_pool {
    members {
        Node1:http {
            address 10.130.40.201
            session monitor-enabled
            state up
        }
        Node2:http {
            address 10.130.40.202
            session monitor-enabled
            state up
        }
    }
    monitor tcp
    slow-ramp-time 0
}
ltm pool ClusterB_pool {
    members {
        Node3:http {
            address 10.130.40.203
            session monitor-enabled
            state up
        }
        Node4:http {
            address 10.130.40.204
            session monitor-enabled
            state up
        }
    }
    monitor tcp
    slow-ramp-time 0
}
ltm node local_node {
    address 10.130.40.150
}
-----------------

Code :

when CLIENT_ACCEPTED {
    set initial 0
    set entry ""
}
when LB_SELECTED {
    incr initial
    # Check if a persistence entry exists
    catch { set entry [persist lookup uie [virtual name]] }
    # Load balancing selection based on persistence
    if { $entry eq "" } {
        set selection [LB::server port]
    } else {
        set selection [lindex [split $entry " "] 2]
        set status [LB::status pool MyPool member [LB::server addr] $selection]
        if { $status ne "up" } {
            catch { persist delete uie [virtual name] }
            set selection [LB::server port]
        }
    }
    # Add a new persistence entry
    catch { persist add uie [virtual name] }
    # Apply the selection; these numbers are the ports used by the VIP-targeting-VIP
    switch $selection {
        "1001" { LB::reselect virtual ClusterA_vs }
        "1002" { LB::reselect virtual ClusterB_vs }
    }
}

Tested this on version: 12.1
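The trick in the iRule is that the persistence key is the virtual server's own name, so every client shares a single record and the whole service sticks to whichever internal cluster VIP was last selected while healthy. A rough Python model of that decision flow (illustrative only — a dict stands in for the universal persistence table and plain strings stand in for LB::status):

# Rough model of the iRule's cluster-selection logic.
persistence = {}  # stands in for the universal persistence table

def pick_cluster(vs_name, lb_choice, cluster_status):
    """Return the internal VS port ("1001" or "1002") this connection should use."""
    entry = persistence.get(vs_name, "")
    if entry == "" or cluster_status.get(entry) != "up":
        # No sticky entry, or the sticky cluster is down: accept the fresh
        # load-balancing choice (the failover case).
        selection = lb_choice
    else:
        # Sticky cluster still up: keep it even if the other cluster has
        # recovered -- this is what prevents automatic failback.
        selection = entry
    persistence[vs_name] = selection
    return selection

# Failover walk-through:
status = {"1001": "up", "1002": "up"}
print(pick_cluster("MyVS", "1001", status))  # 1001 (ClusterA active)
status["1001"] = "down"
print(pick_cluster("MyVS", "1002", status))  # 1002 (failover to ClusterB)
status["1001"] = "up"
print(pick_cluster("MyVS", "1001", status))  # 1002 (ClusterB stays active)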
Automated backup F5 configuration to remote server

Problem this snippet solves: Hi, I made a simple script that automatically backs up the UCS and SCF files to a remote server. I read the great article about automated backups based on the iApp (https://devcentral.f5.com/codeshare/f5-iapp-automated-backup-1114), but I wondered whether there was a simpler way. I don't claim my script is better, only simpler.

This script is based on TFTP, so it isn't secure. What you have to do is:

1. Create the script file on every F5 and place it, for example, in the directory /var/tmp/. I named the file script_backup.sh.
2. Change the TFTP_SERVER IP address to your remote server.
3. Make the file executable: chmod 755 ./script_backup.sh
4. Add a line to the crontab to run this script every X time. Edit the crontab with crontab -e and add a line like the one below. Of course you can change the time when the script should start; this is only an example:

30 0 * * 6 /var/tmp/script_backup.sh

That's all. I hope you enjoy this script. I also wonder why F5 doesn't have a native mechanism for automatic backup to a remote server. It's the most basic function in other systems.

Code :

TFTP_SERVER=10.0.0.0
DATETIME="`date +%Y%m%d%H%M`"
OUT_DIR='/var/tmp'
FILE_UCS="f5_lan_${HOSTNAME}.ucs"
FILE_SCF="f5_lan_${HOSTNAME}.scf"
FILE_CERT="f5_lan_${HOSTNAME}.cert.tar"

cd ${OUT_DIR}

# Generate the backup artifacts: full UCS archive, single configuration
# file (SCF), and a tarball of the SSL certificate directory.
tmsh save /sys ucs "${OUT_DIR}/${FILE_UCS}"
tmsh save /sys config file "${OUT_DIR}/${FILE_SCF}" no-passphrase
tar -cf "${OUT_DIR}/${FILE_CERT}" /config/ssl

# Push the files to the TFTP server in binary mode.
tftp $TFTP_SERVER <<-END 1>&2
mode binary
put ${FILE_UCS}
put ${FILE_SCF}
put ${FILE_CERT}
quit
END
RTN_CODE=$?

# Clean up the local copies (tmsh also writes a companion .scf.tar file).
rm -f "${FILE_UCS}"
rm -f "${FILE_SCF}"
rm -f "${FILE_CERT}"
rm -f "${FILE_SCF}.tar"

exit $RTN_CODE
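Since the post itself flags TFTP as insecure, here is a hedged alternative sketch that pushes the same files over SFTP with the paramiko library. Note the assumptions: paramiko is not part of a stock BIG-IP install, so this is meant for a host where the module is available, and the server address, account, and key path are placeholders:

# SFTP upload sketch using paramiko (placeholder host/user/key values).
import paramiko

SFTP_HOST = "10.0.0.0"          # backup server (placeholder)
SFTP_USER = "backup"            # placeholder account
KEY_FILE = "/root/.ssh/id_rsa"  # placeholder private key path

def upload(files, remote_dir="/backups"):
    """Upload local files to remote_dir on the SFTP server."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys in production
    ssh.connect(SFTP_HOST, username=SFTP_USER, key_filename=KEY_FILE)
    sftp = ssh.open_sftp()
    try:
        for path in files:
            sftp.put(path, "%s/%s" % (remote_dir, path.split("/")[-1]))
    finally:
        sftp.close()
        ssh.close()

# upload(["/var/tmp/f5_lan_bigip1.ucs", "/var/tmp/f5_lan_bigip1.scf"])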