F5 MCP (Model Context Protocol) Server
This project is an MCP (Model Context Protocol) server designed to interact with F5 devices using the iControl REST API. It provides a set of tools to manage F5 objects such as virtual servers (VIPs), pools, iRules, and profiles. The server is implemented using the FastMCP framework and exposes functionality for creating, updating, listing, and deleting F5 objects.

BIG-IP Report
Problem this snippet solves: Overview This is a script which will generate a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. This information is used to relay information to NOC and developers to give them insight into where things are located and to be able to plan patching and deploys. I also use it myself as a quick way to get information or to gather data used as a foundation for RFCs, e.g. to get a list of all external virtual servers without compression profiles. The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It's easy to set up and use and only requires auditor (read-only) permissions on your devices. Demo/Preview Interactive demo http://loadbalancing.se/bigipreportdemo/ Screen shots The main report: The device overview: Certificate details: How to use this snippet: Installation instructions BigipReport REST This is the only branch we're updating since mid-2020 and it supports 12.x and upwards (maybe even 11.6).
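As an illustration of the kind of ad-hoc query mentioned above (all virtual servers without a compression profile), here is a sketch in Python over an assumed, simplified data model — the field names below are illustrative and not the report's actual schema:

```python
# Illustrative sketch only: the "name"/"profiles"/"type" fields are
# assumptions, not the report's actual data model.
def vips_without_compression(virtual_servers):
    """Return names of virtual servers that have no compression-type profile."""
    return [vs["name"] for vs in virtual_servers
            if not any(p.get("type") == "compression" for p in vs.get("profiles", []))]

vips = [
    {"name": "vs_app1", "profiles": [{"name": "http", "type": "http"}]},
    {"name": "vs_app2", "profiles": [{"name": "wan-optimized-compression", "type": "compression"}]},
]
print(vips_without_compression(vips))  # ['vs_app1']
```

Once the configuration is indexed into such a structure, most of the questions the report answers reduce to one-line filters like this.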
Downloads: https://loadbalancing.se/downloads/bigipreport-v5.7.13.zip Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/ Docker support https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/ Kubernetes support https://loadbalancing.se/2021/04/16/bigipreport-on-kubernetes/ BIG-IP Report (Legacy) Older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5). BIG-IP Report (only download this if you have v10 devices): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip iControl Snapin https://loadbalancing.se/downloads/f5-icontrol.zip Documentation and Installation Instructions https://loadbalancing.se/bigip-report/ Upgrade instructions Protect the report using APM and Active Directory Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/ Got issues/problems/feedback? Still have issues? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it's always appreciated. --- Join us on Discord: https://discord.gg/7JJvPMYahA Code : BigIP Report Tested this on version: 12, 13, 14, 15, 16

F5 iApp Automated Backup
Problem this snippet solves: This is now available on GitHub! Please look on GitHub for the latest version, and submit any bugs or questions as an "Issue" on GitHub: (Note: DevCentral admin update - Daniel's project appears abandoned so it's been forked and updated to the link below. @damnski on github added some SFTP code that has been merged in as well.) https://github.com/f5devcentral/f5-automated-backup-iapp Intro Building on the significant work of Thomas Schockaert (and several other DevCentralites) I enhanced many aspects I needed for my own purposes, updated many things I noticed requested on the forums, and added additional documentation and clarification. As you may see in several of my comments on the original posts, I iterated through several 2.2.x versions and am now releasing v3.0.0. Below is the breakdown! Also, I have done quite a bit of testing (mostly on v13.1.0.1 lately) and I doubt I've caught everything, especially with all of the changes. Please post any questions or issues in the comments. Cheers! Daniel Tavernier (tabernarious) Related posts: Git Repository for f5-automated-backup-iapp (https://github.com/tabernarious/f5-automated-backup-iapp) https://community.f5.com/t5/technical-articles/f5-automated-backups-the-right-way/ta-p/288454 https://community.f5.com/t5/crowdsrc/complete-f5-automated-backup-solution/ta-p/288701 https://community.f5.com/t5/crowdsrc/complete-f5-automated-backup-solution-2/ta-p/274252 https://community.f5.com/t5/technical-forum/automated-backup-solution/m-p/24551 https://community.f5.com/t5/crowdsrc/tkb-p/CrowdSRC v3.2.1 (20201210) Merged v3.1.11 and v3.2.0 for explicit SFTP support (separate from SCP). Tweaked the SCP and SFTP upload directory handling; detailed instructions are in the iApp. Tested on 13.1.3.4 and 14.1.3 v3.1.11 (20201210) Better handling of UCS passphrases, and notes about characters to avoid. 
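The "characters to avoid" note can be enforced with a small pre-check before a passphrase is used; this is a sketch only (the helper is not part of the iApp), with the character set taken from the avoid-list given below:

```python
# Characters the changelog entry below recommends avoiding in UCS
# passphrases. This helper is illustrative, not part of the iApp.
AVOID = set('"\'&|;<>\\[]{},')

def passphrase_ok(passphrase):
    """Return True if the passphrase contains none of the problematic characters."""
    return not (set(passphrase) & AVOID)

print(passphrase_ok("`~!@#$%^*()aB1-_=+:./?"))  # True
print(passphrase_ok("bad;pass"))                # False
```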
I successfully tested this exact passphrase in the 13.1.3.4 CLI (surrounded with single quote) and GUI (as-is): `~!@#$%^*()aB1-_=+[{]}:./? I successfully tested this exact passphrase in 14.1.3 (square-braces and curly-braces would not work): `~!@#$%^*()aB1-_=+:./? Though there may be situations these could work, avoid these characters (separated by spaces): " ' & | ; < > \ [ ] { } , Moved changelog and notes from the template to CHANGELOG.md and README.md. Replaced all tabs (\t) with four spaces. v3.1.10 (20201209) Added SMB Version and SMB Security options to support v14+ and newer versions of Microsoft Windows and Windows Server. Tested SMB/CIFS on 13.1.3.4 and 14.1.3 against Windows Server 2019 using "2.0" and "ntlmsspi" v3.1.0: Removed "app-service none" from iCall objects. The iCall objects are now created as part of the Application Service (iApp) and are properly cleaned up if the iApp is redeployed or deleted. Reasonably tested on 11.5.4 HF2 (SMB worked fine using "mount -t cifs") and altered requires-bigip-version-min to match. Fixing error regarding "script did not successfully complete: (can't read "::destination_parameters__protocol_enable": no such variable" by encompassing most of the "implementation" in a block that first checks $::backup_schedule__frequency_select for "Disable". Added default value to "filename format". Changed UCS default value for $backup_file_name_extension to ".ucs" and added $fname_noext. Removed old SFTP sections and references (now handled through SCP/SFTP). Adjusted logging: added "sleep 1" to ensure proper logging; added $backup_directory to log message. Adjusted some help messages. New v3.0.0 features: Supports multiple instances! (Deploy multiple copies of the iApp to save backups to different places or perhaps to keep daily backups locally and send weekly backups to a network drive.) Fully ConfigSync compatible! (Encrypted values now in $script instead of local file.) Long passwords supported! 
(Using "-A" with openssl which reads/writes base64 encoded strings as a single line.) Added $script error checking for all remote backup types! (Using 'catch' to prevent tcl errors when $script aborts.) Backup files are cleaned up after any $script errors due to new error checking. Added logging! (Run logs sent to '/var/log/ltm' via logger command which is compatible with BIG-IP Remote Logging configuration (syslog). Run logs AND errors sent to '/var/tmp/scriptd.out'. Errors may include plain-text passwords which should not be in /var/log/ltm or syslog.) Added custom cipher option for SCP! (In case BIG-IP and the destination server are not cipher-compatible out of the box.) Added StrictHostKeyChecking=no option. (This is insecure and should only be used for testing--lots of warnings.) Combined SCP and SFTP because they are both using SCP to perform the remote copy. (Easier to maintain!) Original v1.x.x and v2.x.x features kept (copied from an original post): It allows you to choose between both UCS or SCF as backup-types. 
(whilst providing ample warnings about SCF not being a very good restore-option due to the incompleteness in some cases) It allows you to provide a passphrase for the UCS archives (the standard GUI also does this, so the iApp should too) It allows you to not include the private keys (same thing: standard GUI does it, so the iApp does it too) It allows you to set a Backup Schedule for every X minutes/hours/days/weeks/months or a custom selection of days in the week It allows you to set the exact time, minute of the hour, day of the week or day of the month when the backup should be performed (depending on the usefulness with regards to the schedule type) It allows you to transfer the backup files to external devices using 4 different protocols, next to providing local storage on the device itself SCP (username/private key without password) SFTP (username/private key without password) FTP (username/password) SMB (now using TMOS v12.x.x compatible 'mount -t cifs', with username/password) Local Storage (/var/local/ucs or /var/local/scf) It stores all passwords and private keys in a secure fashion: encrypted by the master key of the unit (f5mku), rendering it safe to store the backups, including the credentials off-box It has a configurable automatic pruning function for the Local Storage option, so the disk doesn't fill up (i.e. 
keep last X backup files) It allows you to configure the filename using the date/time wildcards from the tcl [clock] command, as well as providing a variable to include the hostname It requires only the WebGUI to establish the configuration you desire It allows you to disable the processes for automated backup, without you having to remove the Application Service or losing any previously entered settings For the external shellscripts it automatically generates, the credentials are stored in encrypted form (using the master key) It allows you to no longer be required to make modifications on the linux command line to get your automated backups running after an RMA or restore operation It cleans up after itself, which means there are no extraneous shellscripts or status files lingering around after the scripts execute How to use this snippet: Find and download the latest iApp template on GitHub (e.g. "f5.automated_backup.v3.2.1.tmpl.tcl"). Import the text file as an iApp Template in the BIG-IP GUI. Create an Application Service using the imported Template. Answer the questions (paying close attention to the help sections). Check /var/tmp/scriptd.out for general logs and errors. Tested this on version: 16.0

Trigger js challenge/Captcha for ip reputation/ip intelligence categories
Problem solved by this Code Snippet Because some ISPs and cloud providers do not monitor their users, client IP addresses are often marked as "spam sources" or "windows exploits", and since these addresses are dynamic, a legitimate user can later end up with one of them; such categories are therefore often disabled in the IP Intelligence profile or under the ASM/AWAF policy. To still make use of these categories, users coming from those IP addresses can be forced to solve captcha checks or at least be checked for JavaScript support! How to use this Code Snippet Have AWAF/ASM and IP Intelligence licensed Add an AWAF/ASM policy with the iRule support option (not enabled by default under the policy) and/or a Bot profile under the Virtual Server Optionally add an IP Intelligence profile, or enable IP Intelligence under the WAF policy without the categories that cause a lot of false positives Add the iRule and, if needed, modify the categories for which it triggers Do not forget to first create the data group used in the code (or delete that part of the code), and to uncomment the Bot part of the code if you plan to do a JS check instead of captcha (in that case also comment out the captcha part)!
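The source-IP selection the iRule performs can be expressed as follows — a Python sketch of the logic only (the real check uses the BIG-IP data group and the iRule `class match` command; the network below is a placeholder):

```python
import ipaddress

# Networks trusted to set X-Forwarded-For; stands in for the
# "/Common/client_ip_class" data group referenced by the iRule.
TRUSTED_XFF_SOURCES = [ipaddress.ip_network("10.10.0.0/16")]

def true_ip(client_addr, xff_header=None):
    """Use the XFF header only when the connecting peer is a trusted proxy."""
    client = ipaddress.ip_address(client_addr)
    if xff_header and any(client in net for net in TRUSTED_XFF_SOURCES):
        return xff_header
    return client_addr

print(true_ip("10.10.1.5", "203.0.113.7"))    # 203.0.113.7 (trusted proxy)
print(true_ip("198.51.100.9", "203.0.113.7")) # 198.51.100.9 (untrusted)
```

Trusting XFF only from known proxies matters here: otherwise any client could spoof a clean address and bypass the reputation check.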
Code Snippet Meta Information Version: 17.1.3 Coding Language: TCL Code You can find the code and further documentation in my GitHub repository: reputation-javascript-captcha-challlenge/ at main · Nikoolayy1/reputation-javascript-captcha-challlenge

when HTTP_REQUEST {
    # Take the ip address for the ip reputation/intelligence check from the XFF header
    # if it comes from the whitelisted source ip addresses in data group "client_ip_class"
    if { [HTTP::header exists "X-Forwarded-For"] && [class match [IP::client_addr] equals "/Common/client_ip_class"] } {
        set trueIP [HTTP::header "X-Forwarded-For"]
    } else {
        set trueIP [IP::client_addr]
    }
    # Check if IP reputation is triggered and contains "Spam Sources"
    if { ([llength [IP::reputation $trueIP]] != 0) && ([IP::reputation $trueIP] contains "Spam Sources") }{
        log local0. "The category is [IP::reputation $trueIP] from [IP::client_addr]"
        # Set the variable to 1 (boolean true) to trigger the ASM captcha or bot defense javascript
        set js_ch 1
    } else {
        set js_ch 0
    }
    # Custom response page, just for testing if there is no real backend origin server
    if {!$js_ch} {
        HTTP::respond 200 content {
            <html>
                <head>
                    <title>Apology Page</title>
                </head>
                <body>
                    We are sorry, but the site you are looking for is temporarily out of service<br>
                    If you feel you have reached this page in error, please try again.
                </body>
            </html>
        }
    }
}

# when BOTDEFENSE_ACTION {
#     # Trigger bot defense action javascript check for Spam Sources
#     if {$js_ch && (not ([BOTDEFENSE::reason] starts_with "passed browser challenge")) && ([BOTDEFENSE::action] eq "allow") }{
#         BOTDEFENSE::action browser_challenge
#     }
# }

when ASM_REQUEST_DONE {
    # Trigger the ASM captcha check only for users coming from Spam Sources that have
    # not already passed the captcha check (don't have the captcha cookie)
    if {$js_ch && [ASM::captcha_status] ne "correct"} {
        set res [ASM::captcha]
        if {$res ne "ok"} {
            log local0. "Cannot send captcha_challenge: \"$res\""
        }
    }
}

Extra References: BOTDEFENSE::action ASM::captcha ASM::captcha_status

List of F5 iControl REST API Endpoints
As I could not find a complete API Reference for the F5 iControl REST API, I created a list myself. This list is not (yet) complete. I created it with the help of an API crawler and manually added some endpoints extracted from the F5 documentation. It would be great if this list becomes more complete over time. Feel free to fork this repository and create pull requests for additions and corrections. Any help and feedback is very welcome! At the moment it is a simple plain text file. My future plans are: Complete the list of endpoints Publish an OpenAPI 3 file You can find the list in my public GitHub repository. Happy RESTing!

GTM type A and AAAA wideip NetworkMap to generate a json with python f5-sdk
Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'. Short Description GTM type A and AAAA NetworkMap to a JSON with the Python f5-sdk; the code supports checking AS3 wideips. Tested on BIG-IP VE V14.1.5 and V16.1.2; the code should work on BIG-IP V12+. Important: GTM/LTM server names cannot contain the characters ":", "\" or "/", and GTM server virtual server names (VS NAME) also cannot contain ":" or "\", because I use fullPath.split(':') to read the GTM server name and the virtual server name (a correct example is "fullPath":"/Common/DC-2-GTM-ipv4:/Common/vs_cmcc_99_22"); otherwise it will raise HTTP 404. Below is an example of the problematic format: GTM Server name ZSCTEST:DC-1-LTM-ZSC-ipv4 GTM Server Virtual Server Name(VS NAME) test:vs "name":"ZSCTEST\\:DC-1-LTM-ZSC-ipv4:test:vs","partition":"Common","fullPath":"/Common/ZSCTEST\\:DC-1-LTM-ZSC-ipv4:test:vs" Problem solved by this Code Snippet Collect GTM type A and AAAA data and generate a JSON file How to use this Code Snippet Firstly, install the Python f5-sdk: pip install f5-sdk Secondly, modify the following IP, account and password to match your BIG-IP GTM device: mgmt = ManagementRoot('192.168.5.109', 'admin', 'xt32112300') The Python f5-sdk sends a GET request to GTM in the form ~partition~name, but a GET request for an AS3-published wideip should be in the form ~partition~Folder~name, so the URL constructed when retrieving an AS3-published wideip will report HTTP 404. After reading the error source code (site-packages\icontrol\session.py), there is a function _validate_name_partition_subpath(element):
# '/' and '~' are illegal characters in most cases, however there are
# few exceptions (GTM Regions endpoint being one of them where the
# validation of name should not apply.
""" if '~' in element: error_message =\ "instance names and partitions cannot contain '~', but it's: %s"\ % element raise InvalidInstanceNameOrFolder(error_message) """ the determination of whether the name carries the character ~ will cause the structure of AS3 name=i.subPath + '~' + i.name doesn't work. so, delete the judgment of ~ or use """ """ notes code will support AS3 wideip check Finally, if the code runs no error, it will generate a "F5-GTM-Wideip-XXX(date format)-NetworkMap.json" file in your local working directory Code Snippet Meta Information Version: 1.0 Coding Language: python Full Code Snippet from f5.bigip import ManagementRoot import json import time wideip_NetworkMap = {} mgmt = ManagementRoot('192.168.5.109', 'admin', 'xt32112300') gtm_wideip = [] """ author: xuwen email: 1099061067@qq.com date: 2022/12/16 """ # GTM A Wideip for i in mgmt.tm.gtm.wideips.a_s.get_collection(): try: type_A_wideip = mgmt.tm.gtm.wideips.a_s.a.load(name=i.subPath + '~' + i.name if hasattr(i, 'subPath') else i.name, partition=i.partition) except Exception as e: print('type A widip name {} error msg is '.format(i.name) + str(e)) else: gtm_A_wideip = {} type_A_wideip_name = i.name type_A_wideip_partition = i.partition if hasattr(type_A_wideip, 'aliases'): gtm_A_wideip['aliases'] = type_A_wideip.aliases if hasattr(type_A_wideip, 'rules'): gtm_A_wideip['iRules'] = type_A_wideip.rules if hasattr(type_A_wideip, 'enabled'): gtm_A_wideip['enabled'] = True else: gtm_A_wideip['disabled'] = True if hasattr(type_A_wideip, 'subPath'): gtm_A_wideip['subPath'] = type_A_wideip.subPath gtm_A_wideip.update(name=type_A_wideip_name, partition=type_A_wideip_partition, wideip_type='A', poolLbMode=type_A_wideip.poolLbMode, persistence=type_A_wideip.persistence, lastResortPool=type_A_wideip.lastResortPool, fullPath=type_A_wideip.fullPath) # print(gtm_A_wideip) if hasattr(type_A_wideip, 'pools'): gtm_A_wideip['pools'] = [] for pool_name in type_A_wideip.pools: gtm_A_pool = {} # 
gtm_A_pool_name = pool_name['name'] gtm_A_pool['name'] = pool_name['name'] gtm_A_pool['partition'] = pool_name['partition'] gtm_A_pool['type'] = 'A' gtm_A_pool['order'] = pool_name['order'] gtm_A_pool['ratio'] = pool_name['ratio'] if 'subPath' in pool_name.keys(): gtm_A_pool['subPath'] = pool_name['subPath'] gslb_A_pool = mgmt.tm.gtm.pools.a_s.a.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition']) # gslb_A_pool = mgmt.tm.gtm.pools.a_s.a.load(name=pool_name['subPath'] + '~' + pool_name['name'] if 'subPath' in pool_name.keys() else pool_name['name'], partition=pool_name['partition']) else: gslb_A_pool = mgmt.tm.gtm.pools.a_s.a.load(name=pool_name['name'], partition=pool_name['partition']) gtm_A_pool['fullPath'] = gslb_A_pool.fullPath gtm_A_pool['ttl'] = gslb_A_pool.ttl gtm_A_pool['loadBalancingMode'] = gslb_A_pool.loadBalancingMode gtm_A_pool['alternateMode'] = gslb_A_pool.alternateMode gtm_A_pool['fallbackMode'] = gslb_A_pool.fallbackMode gtm_A_pool['fallbackIp'] = gslb_A_pool.fallbackIp gtm_A_pool['Members'] = [] # gslb_pool_members_vs_name_list = [str(mem.raw) for mem in gslb_A_pool.members_s.get_collection()] gslb_pool_members_vs_fullPath_list = [(mem.memberOrder, mem.fullPath, mem.ratio) for mem in gslb_A_pool.members_s.get_collection()] for pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_vs_fullPath_list: # print(pool_member_fullPath) # "fullPath":"/Common/DC-2-GTM-ipv4:/Common/vs_cmcc_99_22" gtm_server_name = pool_member_fullPath.split(':')[0] gtm_pool_members_member_name = pool_member_fullPath.split(':')[1] dc_gtm_virtualserver = mgmt.tm.gtm.servers.server.load(name=gtm_server_name.split('/')[2], partition=gtm_server_name.split('/')[1]) virtualservers_virtualserver = dc_gtm_virtualserver.virtual_servers_s.virtual_server.load( name=gtm_pool_members_member_name ) virtualserver_destination = virtualservers_virtualserver.destination virtualserver_Member_Address = 
virtualserver_destination.split(':')[0] virtualserver_Service_Port = virtualserver_destination.split(':')[1] gtm_A_pool['Members'].append({ 'Member': gtm_pool_members_member_name, 'Member Order': pool_memberOrder, 'ratio': pool_member_ratio, 'Member Address': virtualserver_Member_Address, 'Service Port': virtualserver_Service_Port, 'Translation Address': virtualservers_virtualserver.translationAddress, 'Translation Service Port': virtualservers_virtualserver.translationPort }) gtm_A_wideip['pools'].append(gtm_A_pool) if hasattr(type_A_wideip, 'poolsCname'): gtm_A_wideip['poolsCname'] = [] for pool_name in type_A_wideip.poolsCname: gtm_A_cnamepool = {} # gtm_A_pool_name = pool_name['name'] gtm_A_cnamepool['name'] = pool_name['name'] gtm_A_cnamepool['partition'] = pool_name['partition'] gtm_A_cnamepool['type'] = 'CNAME' gtm_A_cnamepool['order'] = pool_name['order'] gtm_A_cnamepool['ratio'] = pool_name['ratio'] if 'subPath' in pool_name.keys(): gtm_A_cnamepool['subPath'] = pool_name['subPath'] gslb_A_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition']) else: gslb_A_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['name'], partition=pool_name['partition']) gtm_A_cnamepool['fullPath'] = gslb_A_cnamepool.fullPath gtm_A_cnamepool['ttl'] = gslb_A_cnamepool.ttl gtm_A_cnamepool['loadBalancingMode'] = gslb_A_cnamepool.loadBalancingMode gtm_A_cnamepool['alternateMode'] = gslb_A_cnamepool.alternateMode gtm_A_cnamepool['fallbackMode'] = gslb_A_cnamepool.fallbackMode gtm_A_cnamepool['Members'] = [] gslb_pool_members_domainname_fullPath_list = [(mem.name, mem.memberOrder, mem.fullPath, mem.ratio) for mem in gslb_A_cnamepool.members_s.get_collection()] for pool_member_name, pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_domainname_fullPath_list: gtm_A_cnamepool['Members'].append({ 'Member': pool_member_name, 'Member Order': pool_memberOrder, 'ratio': 
pool_member_ratio, 'fullPath': pool_member_fullPath }) gtm_A_wideip['poolsCname'].append(gtm_A_cnamepool) # print(gtm_A_wideip) gtm_wideip.append(gtm_A_wideip) # GTM AAAA Wideip for i in mgmt.tm.gtm.wideips.aaaas.get_collection(): try: type_AAAA_wideip = mgmt.tm.gtm.wideips.aaaas.aaaa.load(name=i.subPath + '~' + i.name if hasattr(i, 'subPath') else i.name, partition=i.partition) except Exception as e: print('type AAAA widip name {} error msg is '.format(i.name) + str(e)) else: gtm_AAAA_wideip = {} type_AAAA_wideip_name = i.name type_AAAA_wideip_partition = i.partition if hasattr(type_AAAA_wideip, 'aliases'): gtm_AAAA_wideip['aliases'] = type_AAAA_wideip.aliases if hasattr(type_AAAA_wideip, 'rules'): gtm_AAAA_wideip['iRules'] = type_AAAA_wideip.rules if hasattr(type_AAAA_wideip, 'enabled'): gtm_AAAA_wideip['enabled'] = True else: gtm_AAAA_wideip['disabled'] = True if hasattr(type_AAAA_wideip, 'subPath'): gtm_AAAA_wideip['subPath'] = type_AAAA_wideip.subPath gtm_AAAA_wideip.update(name=type_AAAA_wideip_name, partition=type_AAAA_wideip_partition, wideip_type='AAAA', poolLbMode=type_AAAA_wideip.poolLbMode, persistence=type_AAAA_wideip.persistence, lastResortPool=type_AAAA_wideip.lastResortPool, fullPath=type_AAAA_wideip.fullPath) if hasattr(type_AAAA_wideip, 'pools'): gtm_AAAA_wideip['pools'] = [] for pool_name in type_AAAA_wideip.pools: gtm_AAAA_pool = {} # gtm_A_pool_name = pool_name['name'] gtm_AAAA_pool['name'] = pool_name['name'] gtm_AAAA_pool['partition'] = pool_name['partition'] gtm_AAAA_pool['type'] = 'AAAA' gtm_AAAA_pool['order'] = pool_name['order'] gtm_AAAA_pool['ratio'] = pool_name['ratio'] if 'subPath' in pool_name.keys(): gtm_AAAA_pool['subPath'] = pool_name['subPath'] gslb_AAAA_pool = mgmt.tm.gtm.pools.aaaas.aaaa.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition']) else: gslb_AAAA_pool = mgmt.tm.gtm.pools.aaaas.aaaa.load(name=pool_name['name'], partition=pool_name['partition']) gtm_AAAA_pool['fullPath'] = 
gslb_AAAA_pool.fullPath gtm_AAAA_pool['ttl'] = gslb_AAAA_pool.ttl gtm_AAAA_pool['loadBalancingMode'] = gslb_AAAA_pool.loadBalancingMode gtm_AAAA_pool['alternateMode'] = gslb_AAAA_pool.alternateMode gtm_AAAA_pool['fallbackMode'] = gslb_AAAA_pool.fallbackMode gtm_AAAA_pool['fallbackIp'] = gslb_AAAA_pool.fallbackIp gtm_AAAA_pool['Members'] = [] # gslb_pool_members_vs_name_list = [str(mem.raw) for mem in gslb_A_pool.members_s.get_collection()] gslb_pool_members_vs_fullPath_list = [(mem.memberOrder, mem.fullPath, mem.ratio) for mem in gslb_AAAA_pool.members_s.get_collection()] for pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_vs_fullPath_list: # print(pool_member_fullPath) # "fullPath":"/Common/DC-2-GTM-ipv4:/Common/vs_cmcc_99_22" gtm_server_name = pool_member_fullPath.split(':')[0] gtm_pool_members_member_name = pool_member_fullPath.split(':')[1] dc_gtm_virtualserver = mgmt.tm.gtm.servers.server.load(name=gtm_server_name.split('/')[2], partition=gtm_server_name.split('/')[1]) virtualservers_virtualserver = dc_gtm_virtualserver.virtual_servers_s.virtual_server.load( name=gtm_pool_members_member_name ) virtualserver_destination = virtualservers_virtualserver.destination virtualserver_Member_Address = virtualserver_destination.split('.')[0] virtualserver_Service_Port = virtualserver_destination.split('.')[1] gtm_AAAA_pool['Members'].append({ 'Member': gtm_pool_members_member_name, 'Member Order': pool_memberOrder, 'ratio': pool_member_ratio, 'Member Address': virtualserver_Member_Address, 'Service Port': virtualserver_Service_Port, 'Translation Address': virtualservers_virtualserver.translationAddress, 'Translation Service Port': virtualservers_virtualserver.translationPort }) gtm_AAAA_wideip['pools'].append(gtm_AAAA_pool) if hasattr(type_AAAA_wideip, 'poolsCname'): gtm_AAAA_wideip['poolsCname'] = [] for pool_name in type_AAAA_wideip.poolsCname: gtm_AAAA_cnamepool = {} gtm_AAAA_cnamepool['name'] = pool_name['name'] 
gtm_AAAA_cnamepool['partition'] = pool_name['partition'] gtm_AAAA_cnamepool['type'] = 'CNAME' gtm_AAAA_cnamepool['order'] = pool_name['order'] gtm_AAAA_cnamepool['ratio'] = pool_name['ratio'] if 'subPath' in pool_name.keys(): gtm_AAAA_cnamepool['subPath'] = pool_name['subPath'] gslb_AAAA_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['subPath'] + '~' + pool_name['name'], partition=pool_name['partition']) else: gslb_AAAA_cnamepool = mgmt.tm.gtm.pools.cnames.cname.load(name=pool_name['name'], partition=pool_name['partition']) gtm_AAAA_cnamepool['fullPath'] = gslb_AAAA_cnamepool.fullPath gtm_AAAA_cnamepool['ttl'] = gslb_AAAA_cnamepool.ttl gtm_AAAA_cnamepool['loadBalancingMode'] = gslb_AAAA_cnamepool.loadBalancingMode gtm_AAAA_cnamepool['alternateMode'] = gslb_AAAA_cnamepool.alternateMode gtm_AAAA_cnamepool['fallbackMode'] = gslb_AAAA_cnamepool.fallbackMode gtm_AAAA_cnamepool['Members'] = [] gslb_pool_members_domainname_fullPath_list = [(mem.name, mem.memberOrder, mem.fullPath, mem.ratio) for mem in gslb_AAAA_cnamepool.members_s.get_collection()] for pool_member_name, pool_memberOrder, pool_member_fullPath, pool_member_ratio in gslb_pool_members_domainname_fullPath_list: gtm_AAAA_cnamepool['Members'].append({ 'Member': pool_member_name, 'Member Order': pool_memberOrder, 'ratio': pool_member_ratio, 'fullPath': pool_member_fullPath }) gtm_AAAA_wideip['poolsCname'].append(gtm_AAAA_cnamepool) # print(gtm_AAAA_wideip) gtm_wideip.append(gtm_AAAA_wideip) gtm_networkmap = {} gtm_networkmap.update(wideips=gtm_wideip) print(gtm_networkmap) with open(r"./F5-GTM-Wideip-{}-NetworkMap.json".format(time.strftime("%Y-%m-%d", time.localtime())), "w") as f: f.write(json.dumps(gtm_networkmap, indent=4, ensure_ascii=False))2.3KViews2likes7CommentsLogstash pipeline tester
Code is community submitted, community supported, and recognized as 'Use At Your Own Risk'. Short Description A tool that makes developing logstash pipelines much, much easier. Problem solved by this Code Snippet Oh. The problem... Have you ever tried to write a logstash pipeline? Did you suffer hair loss and splitting migraines? So did I. Presenting logstash pipeline tester, which gives you a web interface where you can paste raw logs, send them to the included logstash instance and see the result directly in the interface. The included logstash instance is also configured to automatically reload once it detects a config change. How to use this Code Snippet TLDR; Don't do this, read the manual or check out the video below Still here? Ok then! 🙂 Install docker Clone the repo Run these commands in the repo root folder:
sudo docker-compose build # Skip sudo if running Windows
sudo docker compose up # Skip sudo if running Windows
Go to http://localhost:8080 on your PC/Mac Pick a pipeline and send data Edit the pipeline Send data Rinse, repeat Version info v1.0.27: Dependency updates, jest test retries and more since 1.0.0 https://github.com/epacke/logstash-pipeline-tester/releases/tag/v1.0.29 Video on how to get started: https://youtu.be/Q3IQeXWoqLQ Please note that I accidentally started the interface on port 3000 in the video while the shipped version uses port 8080. It took me roughly 5 hours and more retakes than I can count to make this video, so that mistake will be preserved for the internet to laugh at. 🙂 The manual: https://loadbalancing.se/2020/03/11/logstash-testing-tool/ Code Snippet Meta Information Version: Check GitHub Coding Language: NodeJS, Typescript + React Full Code Snippet https://github.com/epacke/logstash-pipeline-tester

Serverside SNI injection iRule
Problem this snippet solves: Hi Folks, the iRule below can be used to inject a TLS SNI extension on the server side based on e.g. HOST header values. The iRule is useful if your pool servers depend on valid SNI records and you don't want to configure a dedicated Server SSL Profile for each single web application. Cheers, Kai How to use this snippet: Attach the iRule to the Virtual Server where you need to insert a TLS SNI extension Tweak the $sni_value variable within the HTTP_REQUEST event to meet your requirements, or move it to a different event as needed. Make sure you've cleared the "Server Name" option in your Server SSL Profile. Code :

when HTTP_REQUEST {
    # Set the SNI value (e.g. HTTP::host)
    set sni_value [getfield [HTTP::host] ":" 1]
}
when SERVERSSL_CLIENTHELLO_SEND {
    # SNI extension record as defined in RFC 3546/3.1
    #
    # - TLS Extension Type        = int16( 0 = SNI )
    # - TLS Extension Length      = int16( $sni_length + 5 byte )
    # - SNI Record Length         = int16( $sni_length + 3 byte)
    # - SNI Record Type           = int8( 0 = HOST )
    # - SNI Record Value Length   = int16( $sni_length )
    # - SNI Record Value          = str( $sni_value )
    #
    # Calculate the length of the SNI value, compute the SNI Record / TLS
    # extension fields and add the result to the SERVERSSL_CLIENTHELLO
    SSL::extensions insert [binary format SSScSa* 0 [expr { [set sni_length [string length $sni_value]] + 5 }] [expr { $sni_length + 3 }] 0 $sni_length $sni_value]
}

Tested this on version: 12.0

Request Client Certificate And Pass To Application
Problem this snippet solves: We are using BigIP to dynamically request a client certificate. This example differs from the others available in that it actually passes the x509 certificate to the server for processing using a custom http header. The sequence of event listeners required to accomplish this feat is: HTTP_REQUEST, which invokes CLIENTSSL_HANDSHAKE, which is followed by HTTP_REQUEST_SEND The reason is that CLIENTSSL_HANDSHAKE occurs after HTTP_REQUEST event is processed entirely, but HTTP_REQUEST_SEND occurs after it. The certificate appears in PEM encoding and is slightly mangled; you need to emit newlines to get back into proper PEM format: -----BEGIN CERTIFICATE------ Mabcdefghj... -----END CERTIFICATE----- This certificate can be converted to DER encoding by jettisoning the BEGIN and END markers and doing base64 decode on the string. Code : # Initialize the variables on new client tcp session. when CLIENT_ACCEPTED { set collecting 0 set renegtried 0 } # Runs for each new http request when HTTP_REQUEST { # /_hst name and ?_hst=1 parameter triggers client cert renegotiation if { $renegtried == 0 and [SSL::cert count] == 0 and ([HTTP::uri] matches_regex {^[^?]*/_hst(\?|/|$)} or [HTTP::uri] matches_regex {[?&]_hst=1(&|$)}) } { # Collecting means buffering the request. The collection goes on # until SSL::renegotiate occurs, which happens after the HTTP # request has been received. The maximum data buffered by collect # is 1-4 MB. HTTP::collect set collecting 1 SSL::cert mode request SSL::renegotiate } } # After a handshake, we log that we have tried it. This is to prevent # constant attempts to renegotiate the SSL session. I'm not sure of this # feature; this may in fact be a mistake, but we can change it at any time. # It is transparent if we do: the connections only work slower. It would, # however, make BigIP detect inserted smartcards immediately. Right answer # depends on the way the feature is used by applications. 
when CLIENTSSL_HANDSHAKE {
    if { $collecting == 1 } {
        set renegtried 1
        # Release allows the request processing to occur normally from this
        # point forwards. The next event to fire is HTTP_REQUEST_SEND.
        HTTP::release
    }
}

# Inject headers based on earlier renegotiations, if any.
when HTTP_REQUEST_SEND {
    clientside {
        # Security: reject any user-submitted headers by our magic names.
        HTTP::header remove "X-ENV-SSL_CLIENT_CERTIFICATE"
        HTTP::header remove "X-ENV-SSL_CLIENT_CERTIFICATE_FAILED"
        # If a certificate is available, send it. Otherwise, send a header
        # indicating a failure, if we have already attempted a renegotiate.
        if { [SSL::cert count] > 0 } {
            HTTP::header insert "X-ENV-SSL_CLIENT_CERTIFICATE" [X509::whole [SSL::cert 0]]
        } elseif { $renegtried == 1 } {
            # This header has some debug value: if the FAILED header is not
            # present, BigIP is probably not configured to do client certs
            # at all.
            HTTP::header insert "X-ENV-SSL_CLIENT_CERTIFICATE_FAILED" "true"
        }
    }
}

HTTP request cloning
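Picking up the PEM-to-DER note from the client-certificate snippet above: dropping the BEGIN/END markers and base64-decoding the body is all the conversion requires. Here is a small Python sketch of that step; the function name `pem_to_der` is illustrative only, not part of any F5 tooling.

```python
import base64

def pem_to_der(pem: str) -> bytes:
    # Keep only the base64 body: drop the -----BEGIN/END----- marker
    # lines, join the rest, and decode back to DER bytes.
    body = "".join(
        line.strip()
        for line in pem.splitlines()
        if line.strip() and not line.strip().startswith("-----")
    )
    return base64.b64decode(body)

der = pem_to_der(
    "-----BEGIN CERTIFICATE-----\nTWFiY2RlZmdoag==\n-----END CERTIFICATE-----"
)
```

On a real certificate, the resulting bytes can be fed directly to any DER-based X.509 parser on the application side.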
Problem this snippet solves: These iRules send a copy of HTTP request headers and payloads to one or more pool members. These are the current iRule versions of the example from Colin's article.

Code :

###########
# First Rule #
###########

rule http_request_clone_one_pool {
    # Clone HTTP requests to one clone pool
    when RULE_INIT {
        # Log debug locally to /var/log/ltm? 1=yes, 0=no
        set static::hsl_debug 1
        # Pool name to clone requests to
        set static::hsl_pool "my_syslog_pool"
    }
    when CLIENT_ACCEPTED {
        if { [active_members $static::hsl_pool] == 0 } {
            log "[IP::client_addr]:[TCP::client_port]: [virtual name] $static::hsl_pool down, not logging"
            set bypass 1
            return
        } else {
            set bypass 0
        }
        # Open a new HSL connection if one is not available
        set hsl [HSL::open -proto TCP -pool $static::hsl_pool]
        if { $static::hsl_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: New hsl handle: $hsl" }
    }
    when HTTP_REQUEST {
        # If the HSL pool is down, do not run more code here
        if { $bypass } { return }
        # Insert an XFF header if one is not inserted already
        # so the client IP can be tracked for the duplicated traffic
        HTTP::header insert X-Forwarded-For [IP::client_addr]
        # Check for POST requests
        if { [HTTP::method] eq "POST" } {
            # Check for Content-Length between 1b and 1Mb
            if { [HTTP::header Content-Length] >= 1 and [HTTP::header Content-Length] < 1048576 } {
                HTTP::collect [HTTP::header Content-Length]
            } elseif { [HTTP::header Content-Length] == 0 } {
                # POST with 0 content-length, so just send the headers
                HSL::send $hsl "[HTTP::request]\n"
                if { $static::hsl_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]" }
            }
        } else {
            # Request with no payload, so send just the HTTP headers to the clone pool
            HSL::send $hsl "[HTTP::request]\n"
            if { $static::hsl_debug } { log local0. \
                "[IP::client_addr]:[TCP::client_port]: Sending [HTTP::request]" }
        }
    }
    when HTTP_REQUEST_DATA {
        # The parser does not allow HTTP::request in this event, but it works
        set request_cmd "HTTP::request"
        if { $static::hsl_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
            sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total" }
        HSL::send $hsl "[eval $request_cmd][HTTP::payload]\n"
    }
}

#############
# Second Rule #
#############

rule http_request_clone_xnum_pools {
    # Clone HTTP requests to X clone pools
    when RULE_INIT {
        # Set up an array of pool names to clone the traffic to.
        # Each pool should be one server that will get a copy of each HTTP request
        set static::clone_pools(0) http_clone_pool1
        set static::clone_pools(1) http_clone_pool2
        set static::clone_pools(2) http_clone_pool3
        set static::clone_pools(3) http_clone_pool4
        # Log debug messages to /var/log/ltm? 0=no, 1=yes
        set static::clone_debug 1
        set static::pool_count [array size static::clone_pools]
        for {set i 0} {$i < $static::pool_count} {incr i} {
            log local0. "Configured for cloning to pool $static::clone_pools($i)"
        }
    }
    when CLIENT_ACCEPTED {
        # Open a new HSL connection to each clone pool if one is not available
        for {set i 0} {$i < $static::pool_count} {incr i} {
            set hsl($i) [HSL::open -proto TCP -pool $static::clone_pools($i)]
            if { $static::clone_debug } { log local0. \
                "[IP::client_addr]:[TCP::client_port]: hsl handle ($i) for $static::clone_pools($i): $hsl($i)" }
        }
    }
    when HTTP_REQUEST {
        # Insert an XFF header if one is not inserted already
        # so the client IP can be tracked for the duplicated traffic
        HTTP::header insert X-Forwarded-For [IP::client_addr]
        # Check for POST requests
        if { [HTTP::method] eq "POST" } {
            # Check for Content-Length between 1b and 1Mb
            if { [HTTP::header Content-Length] >= 1 and [HTTP::header Content-Length] < 1048576 } {
                HTTP::collect [HTTP::header Content-Length]
            } elseif { [HTTP::header Content-Length] == 0 } {
                # POST with 0 content-length, so just send the headers
                for {set i 0} {$i < $static::pool_count} {incr i} {
                    HSL::send $hsl($i) "[HTTP::request]\n"
                    if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending to $static::clone_pools($i), request: [HTTP::request]" }
                }
            }
        } else {
            # Request with no payload, so send just the HTTP headers to the clone pool
            for {set i 0} {$i < $static::pool_count} {incr i} {
                HSL::send $hsl($i) [HTTP::request]
                if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Sending to $static::clone_pools($i), request: [HTTP::request]" }
            }
        }
    }
    when HTTP_REQUEST_DATA {
        # The parser does not allow HTTP::request in this event, but it works
        set request_cmd "HTTP::request"
        for {set i 0} {$i < $static::pool_count} {incr i} {
            if { $static::clone_debug } { log local0. "[IP::client_addr]:[TCP::client_port]: Collected [HTTP::payload length] bytes,\
                sending [expr {[string length [eval $request_cmd]] + [HTTP::payload length]}] bytes total\
                to $static::clone_pools($i), request: [eval $request_cmd][HTTP::payload]" }
            HSL::send $hsl($i) "[eval $request_cmd][HTTP::payload]\n"
        }
    }
}
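To make explicit what each HSL::send in the rules above puts on the wire — the original request head plus any collected POST payload — here is a short Python sketch. It assumes, as the iRules do, that the request head is the request line and headers followed by a blank line; `build_clone_payload` is a made-up illustrative helper, not part of the iRule API.

```python
def build_clone_payload(method: str, uri: str, headers: dict, body: bytes = b"") -> bytes:
    """Assemble the byte string a clone pool member receives: the
    request line and headers (what HTTP::request returns), a blank
    line, then the collected POST body (HTTP::payload), if any."""
    head = f"{method} {uri} HTTP/1.1\r\n"
    head += "".join(f"{name}: {value}\r\n" for name, value in headers.items())
    head += "\r\n"
    return head.encode("ascii") + body

wire = build_clone_payload(
    "POST", "/app", {"Host": "example.com", "Content-Length": "3"}, b"abc"
)
```

The total length of `wire` is exactly what the HTTP_REQUEST_DATA debug log computes: the string length of the request head plus the payload length.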