Radware config translation
Problem this snippet solves:

This is a simple Python script that translates a Radware text configuration file and outputs a BIG-IP config snippet and certificate files.

```
$ ./rad2f5
Usage: rad2f5 <filename> [partition]

Example: ./rad2f5 radwareconfig.txt MyPartition
Using partition /MyPartition/
IP routes:46
Non-floating IP interfaces:26
Floating IP interfaces:19
Monitors:349
Virtual Servers:430
Pools:152
Certificates: 31
SSL profiles: 17
WARNING! Found L7 Regular Expression at "appdirector l7 farm-selection method-table setCreate l7Rule-Redirect"
Layer 7 rules:12
Layer 7 policies:4
!! Copy all certificate files to BIG-IP /var/tmp and use load_certs.sh to load !!

# Configuration
#--------------------------------------#
ltm policy /MyPartition/l7rule-Redirect {
    controls { forwarding }
    requires { http }
    rules {
        Redirect {
            actions {
                0 {
                    forward
                    select
                    pool SORRYPOOL
                }
            }
            ordinal 1
        }
    }
}
net route /MyPartition/10.20.30.40_32 {
    network 10.20.30.40/32
    gateway 10.1.1.1
}
```

How to use this snippet:

The translation covers routes and non-floating self-IPs, monitors, pools, virtual servers, certificates, SSL profiles and some layer 7 rules.

To install, either download the file attached here, extract it and run it, or use pip:

```
pip install rad2f5
```

To load the configuration onto the F5 device:

1. Output to a file, eg ./rad2f5 filename > newcfg.txt
2. Remove the statistics header from the file with a text editor.
3. Upload the file to /var/tmp on the BIG-IP and test loading with tmsh load sys config merge file /var/tmp/<filename> verify
4. Fix any issues and load with tmsh load sys config merge file /var/tmp/<filename>

The script also outputs the certificates and keys. Upload these to the BIG-IP /var/tmp directory, then run check_certs.sh and load_certs.sh to check and load the certs and keys.

Works with Python v2 and v3. If you want something to be added, message me with details of the text and I will try to add it.
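The core of the translation is scanning each Radware command with a regular expression and splitting its `-xx value` options into a dict (the script's `parse_options` helper). A simplified, stand-alone sketch of that idea:

```python
import re

def parse_options(options):
    # Split a Radware option string like "-pl 27 -v 123 -pa 172.28.1.234"
    # into {'pl': '27', 'v': '123', 'pa': '172.28.1.234'}.
    # Simplified version of the parse_options helper in the full script below.
    output = {}
    # Unquoted values, eg -b 12345
    for m in re.finditer(r'-(\w{1,3}) (\S+)', options):
        output[m.group(1)] = m.group(2)
    # Quoted values, eg -u "User Name", override the partial unquoted match
    for m in re.finditer(r'-(\w{1,3}) (".+")', options):
        output[m.group(1)] = m.group(2)
    return output

print(parse_options('-pl 27 -v 123 -pa 172.28.1.234'))
# {'pl': '27', 'v': '123', 'pa': '172.28.1.234'}
```

Every config section below (routes, pools, virtual servers, and so on) is then just a `re.finditer` over the file plus a lookup into this dict.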
Feel free to add or change anything.

Code :

```python
#!/usr/bin/env python
# v1.1 14/3/2018 Deal with lack of text in monitors ( line 157 )
# v2 3/1/2019 Updated after changes suggested by Avinash Piare
# v3 6/11/2019 Updated to deal with \\r\n at the end of the line
# v3.1 6/11/2019 Minor changes to handle VS names with spaces
# 3.4 18/4/2020 Fixed minor issues
# Peter White 6/11/2019
# usage: ./rad2f5 <filename> [partition]
import sys
import re
import textwrap
import ipaddress

print("Running with Python v" + sys.version)

# Set debug to True to output the workings of the script ie the arrays etc
debug = False

def parse_options(options):
    # Function to split options and return a dict containing option and value
    # Example: -pl 27 -v 123 -pa 172.28.1.234
    # Return: { 'pl':'27', 'v':'123', 'pa':'172.28.1.234' }
    output = dict()
    # Deal with non-quotes eg -b 12345
    for r in re.finditer(r'-(\w{1,3}) (\S+)', options):
        output[r.group(1)] = r.group(2)
    # Deal with quotes eg -u "User Name"
    for v in re.finditer(r'-(\w{1,3}) (".+")', options):
        output[v.group(1)] = v.group(2)
    return output

def splitvars(vars):
    # Split a string on | and then =, return a dict of ABC=XYZ
    # eg HDR=User-Agent|TKN=Googlebot-Video/1.0|TMATCH=EQ|
    # Return is { 'HDR':'User-Agent', 'TKN':'Googlebot-Video/1.0', 'TMATCH':'EQ' }
    vlist = vars.split("|")
    o = {}
    for v in vlist:
        if v == '':
            continue
        w = v.split("=")
        if len(w) > 1:
            o[w[0]] = w[1]
        else:
            o[w[0]] = ''
    return o

def array_add(a, d):
    # Function to safely add to an array
    if not len(a):
        a = [d]
    else:
        a.append(d)
    return a

def setup_pools(name):
    global pools
    if not name in pools:
        pools[name] = {'destinations': [], 'member-disabled': [], 'member-ids': [], 'priority-group': []}
    else:
        if not 'destinations' in pools[name]:
            pools[name]['destinations'] = []
        if not 'member-disabled' in pools[name]:
            pools[name]['member-disabled'] = []
        if not 'member-ids' in pools[name]:
            pools[name]['member-ids'] = []
        if not 'priority-group' in pools[name]:
            pools[name]['priority-group'] = []

def getVsFromPool(poolname):
    # This retrieves the names of virtual servers that use a specific pool
    global farm2vs
    if poolname in farm2vs:
        return farm2vs[poolname]
    else:
        return False

def is_ipv6(ip):
    if ':' in ip:
        return True
    else:
        return False

# Check there is an argument given
if not len(sys.argv) > 1:
    exit("Usage: rad2f5 <filename> [partition]")
if len(sys.argv) > 2:
    # Second command-line variable is the partition
    partition = "/" + sys.argv[2] + "/"
    print("Using partition " + partition)
else:
    partition = "/Common/"

# Check input file
fh = open(sys.argv[1], "r")
if not fh:
    exit("Cannot open file " + sys.argv[1])
rawfile = fh.read()

# v2
# Remove any instances of slash at the end of line
# eg health-monitoring check create\
# HC_DNS -id 5 -m DNS -p 53 -a \
# HOST=ns.domain.com|ADDR=1.1.1.1| -d 2.2.2.2
#file = re.sub('^security certificate table\\\\\n','',rawfile)
file = re.sub('\\\\\n', '', rawfile)
file = re.sub('\\\\\r\n', '', file)

################ LACP trunks ##########################
#
#net linkaggr trunks set T-1 -lap G-13,G-14 -lam Manual -lat Fast -law 3 -laa 32767
#############################################################
trunks = {}
for r in re.finditer(r'net linkaggr trunks set (\S+) (.+)', file):
    name = r.group(1)
    options = parse_options(r.group(2))
    if 'lap' in options:
        # Manage interfaces
        ints = options['lap'].split(',')
        trunks[name] = {'interfaces': ints}
print("LACP trunks:" + str(len(trunks)))
if debug:
    print('DEBUG LACP trunks: ' + str(trunks))

################ IP routes ##########################
#############################################################
routes = {}
for r in re.finditer(r'net route table create (\S+) (\S+) (\S+) (\S+)', file):
    if r.group(1) == '0.0.0.0':
        name = 'default'
    else:
        name = r.group(1) + "_" + r.group(2)
    routes[name] = {'network': r.group(1), 'mask': r.group(2), 'gateway': r.group(3), 'interface': r.group(4)}
print("IP routes:" + str(len(routes)))
if debug:
    print('DEBUG routes: ' + str(routes))

################ Non-floating self-IPs ###############
#############################################################
selfIpNonFloating = {}
for s in re.finditer(r'net ip-interface create (\S+) (\S+) (.+)', file):
    output = {'interface': s.group(2)}
    opts = parse_options(s.group(3))
    if 'pl' in opts:
        output['mask'] = opts['pl']
    if 'v' in opts:
        output['vlan'] = opts['v']
    if 'pa' in opts:
        output['peerAddress'] = opts['pa']
    selfIpNonFloating[s.group(1)] = output
print("Non-floating IP interfaces:" + str(len(selfIpNonFloating)))
if debug:
    print("DEBUG non-floating IPs: " + str(selfIpNonFloating))

################ Floating self-IPs ###################
#############################################################
selfIpFloating = {}
for s in re.finditer(r'redundancy vrrp virtual-routers create (\S+) (\S+) (\S+) (.+)', file):
    output = {'version': s.group(1), 'interface': s.group(3), 'ipAddresses': []}
    opts = parse_options(s.group(4))
    #if 'as' in opts: # Enabled - assume all are enabled
    if 'pip' in opts:
        output['peerIpAddress'] = opts['pip']
    selfIpFloating[s.group(2)] = output
# Retrieve IP addresses for Floating self-IPs
for s in re.finditer(r'redundancy vrrp associated-ip create (\S+) (\S+) (\S+) (\S+)', file):
    selfIpFloating[s.group(2)]['ipAddresses'] = array_add(selfIpFloating[s.group(2)]['ipAddresses'], s.group(4))
print("Floating IP interfaces:" + str(len(selfIpFloating)))
if debug:
    print("DEBUG Floating IPs: " + str(selfIpFloating))

################ Monitors ###########################
#
# -m XYZ is the monitor type, may or may not be present ( not present for icmp type )
# -id is the monitor ID, -p is the port, -d is the destination
#############################################################
monitors = {}
for m in re.finditer(r'health-monitoring check create (\S+) (.+)', file):
    output = {'name': m.group(1)}
    opts = parse_options(m.group(2))
    if 'id' in opts:
        id = opts['id']
    else:
        print("No ID for monitor " + m.group(1))
        continue
    if 'm' in opts:
        output['type'] = opts['m']
    else:
        output['type'] = 'icmp'
    if 'a' in opts:
        output['text'] = opts['a']
    else:
        # Added in v2
        output['text'] = ''
    if 'p' in opts:
        output['port'] = opts['p']
    else:
        output['port'] = ''
    monitors[id] = output
print("Monitors:" + str(len(monitors)))
if debug:
    print("DEBUG Monitors:" + str(monitors))

################ Virtual Servers #####################
#
# 0 = name, 1 = IP, 2 = protocol, 3 = port, 4 = source, 5 = options
# -ta = type, -ht = http policy, -fn = pool name, -sl = ssl profile name, -ipt = translation?
# -po = policy name
#############################################################
farm2vs = {}
virtuals = {}
snatPools = {}
for v in re.finditer(r'appdirector l4-policy table create (\S+) (\S+) (\S+) (\S+) (\S+) (.*)', file):
    # Puke if VS has quotes
    if v.group(1).startswith('"'):
        name = v.group(1).strip('"').replace(' ', '_') + v.group(2).strip('"').replace(' ', '_')
        address, protocol, port = v.group(3), v.group(4), v.group(5)
        source = v.group(6).split(' ')[0]
        options = ' '.join(v.group(6).split(' ')[1:])
        print("Name " + name + " has quotes, address: " + address + " protocol: " + protocol)
    else:
        name, address, protocol, port, source, options = v.group(1), v.group(2), v.group(3), v.group(4), v.group(5), v.group(6)
    opts = parse_options(options)
    # Get rid of ICMP virtual servers
    if protocol == 'ICMP':
        continue
    if port == 'Any':
        port = '0'
    output = {'source': source, 'destination': address + ":" + port, 'protocol': protocol, 'port': port}
    if 'fn' in opts:
        output['pool'] = opts['fn']
        if not opts['fn'] in farm2vs:
            farm2vs[opts['fn']] = []
        farm2vs[opts['fn']].append(name)
    output['port'] = port
    if 'po' in opts:
        output['policy'] = opts['po']
    if 'ipt' in opts:
        output['snat'] = 'snatpool_' + opts['ipt']
        if not 'snatpool_' + opts['ipt'] in snatPools:
            # Create snatpool
            snatPools['snatpool_' + opts['ipt']] = {'members': [opts['ipt']]}
    # Set the correct profiles
    profiles = []
    # Layer 4
    if protocol == "TCP" or protocol == "UDP":
        profiles.append(protocol.lower())
    else:
        profiles.append('ipother')
    # http
    if port == '80' or port == '8080':
        profiles.append('http')
        profiles.append('oneconnect')
    # ftp
    if port == '21':
        profiles.append('ftp')
    # RADIUS
    if port == '1812' or port == '1813':
        profiles.append('radius')
    # SSL
    if 'sl' in opts:
        profiles.append(opts['sl'])
        profiles.append('http')
        profiles.append('oneconnect')
    output['profiles'] = profiles
    virtuals[name] = output
print("Virtual Servers:" + str(len(virtuals)))
if debug:
    print("DEBUG Virtual Servers:" + str(virtuals))
    #print("DEBUG Farm to VS Mapping:" + str(farm2vs))

################ Pools ###############################
#
# Pools config options are distributed across multiple tables
# This sets the global config such as load balancing algorithm
# -dm sets the distribution method. cyclic = Round Robin, Fewest Number of Users = least conn,
#     Weighted Cyclic = ratio, Response Time = fastest
# -as is the admin state
# -cm is the checking method ie monitor
# -at is the activation time
#############################################################
pools = {}
for p in re.finditer(r'appdirector farm table setCreate (\S+) (.*)', file):
    name = p.group(1)
    setup_pools(name)
    opts = parse_options(p.group(2))
    output = {}
    # Admin state
    output['poolDisabled'] = False
    if 'as' in opts:
        if opts['as'] == 'Disabled':
            output['poolDisabled'] = True
    # Deal with distribution methods
    method = 'round-robin'
    if 'dm' in opts:
        if opts['dm'] == '"Fewest Number of Users"':
            method = 'least-conn'
        elif opts['dm'] == 'Hashing':
            method = 'hash'
        else:
            method = 'round-robin'
    output['lbMethod'] = method
    if 'at' in opts:
        output['slowRamp'] = opts['at']
    pools[name] = output

# This sets the pool members
# 0 = name, 1 = node, 2 = node address, 3 = node port, 4 = ?
# -id = id, -sd ?, -as admin state eg Disable, -om operation mode eg Backup ( fallback server ),
# -rt backup server address 0.0.0.0
for p in re.finditer(r'appdirector farm server table create (\S+) (\S+) (\S+) (\S+) (\S+) (.*)', file):
    name, node, node_address, node_port, node_hostname = p.group(1), p.group(2), p.group(3), p.group(4), p.group(5)
    opts = parse_options(p.group(6))
    setup_pools(name)
    if node_port == 'None':
        node_port = '0'
    pools[name]['destinations'] = array_add(pools[name]['destinations'], node_address + ":" + node_port)
    # Manage monitor
    if node_port == '80' or node_port == '8080':
        monitor = 'http'
    elif node_port == '443':
        monitor = 'https'
    elif node_port == '53':
        monitor = 'dns'
    elif node_port == '21':
        monitor = 'ftp'
    elif node_port == '0':
        monitor = 'gateway-icmp'
    else:
        monitor = 'tcp'
    pools[name]['monitor'] = monitor
    # Retrieve pool member ID
    if 'id' in opts:
        pools[name]['member-ids'] = array_add(pools[name]['member-ids'], opts['id'])
    else:
        print("ID not found for pool " + name)
    # Check if member is disabled
    if 'as' in opts and opts['as'] == 'Disabled':
        # This pool member is disabled
        pools[name]['member-disabled'] = array_add(pools[name]['member-disabled'], True)
    else:
        pools[name]['member-disabled'] = array_add(pools[name]['member-disabled'], False)
    # Check if member is backup ie Priority Groups
    if 'om' in opts and opts['om'] == 'Backup':
        pools[name]['priority-group'] = array_add(pools[name]['priority-group'], 20)
    else:
        pools[name]['priority-group'] = array_add(pools[name]['priority-group'], 0)
print("Pools:" + str(len(pools)))
if debug:
    print("DEBUG pools: " + str(pools))

################ SSL certificates ###################
#
#Name: certificate_name \
#Type: certificate \
#-----BEGIN CERTIFICATE----- \
#MIIEcyFCA7agAwIAAgISESFWs9QGF
#############################################################
certs = {}
for r in re.finditer(r'Name: (\S+) Type: (\S+) (-----BEGIN CERTIFICATE-----.+?-----END CERTIFICATE-----)', file):
    name, type, text = r.group(1), r.group(2), r.group(3)
    certs[name] = {'type': type, 'text': text.replace(' \\\r\n', '')}
print("SSL Certificates: " + str(len(certs)))
if debug:
    print("DEBUG certs: " + str(certs))

################ SSL Keys ###################
#
#Name: New_Root_Cert Type: key Passphrase: -----BEGIN RSA PRIVATE KEY----- Proc-Type: 4,ENCRYPTED DEK-Info: [key] [key] -----END RSA PRIVATE KEY----- \
#############################################################
keys = {}
# Keys with passphrase
for r in re.finditer(r'Name: (\S+) Type: key Passphrase: (\S+) (-----BEGIN RSA PRIVATE KEY-----.+?-----END RSA PRIVATE KEY-----)', file):
    name, passphrase, text = r.group(1), r.group(2), r.group(3)
    keys[name] = {'passphrase': passphrase, 'text': text.replace(' \\\r\n', '')}
# Keys without passphrase
for r in re.finditer(r'Name: (\S+) Type: key (-----BEGIN RSA PRIVATE KEY-----.+?-----END RSA PRIVATE KEY-----)', file):
    name, text = r.group(1), r.group(2)
    keys[name] = {'text': text.replace(' \\\r\n', '')}
print("SSL Keys: " + str(len(keys)))
if debug:
    print("DEBUG SSL keys: " + str(keys))

################ SSL profiles ########################
#
# -c is cert, -u is ciphers, -t is chain cert, -fv - versions
##############################################################
sslProfiles = {}
for s in re.finditer(r'^appdirector l4-policy ssl-policy create (\S+) (.+)', file.replace('\\\r\n', ''), re.MULTILINE):
    name = s.group(1)
    output = {}
    opts = parse_options(s.group(2))
    if 'c' in opts:
        # Certificate
        output['certificate'] = opts['c']
    if 't' in opts:
        # Chain cert
        output['chaincert'] = opts['t']
    if 'fv' in opts:
        # TLS Version
        output['version'] = opts['fv']
    if 'u' in opts:
        # User-defined Ciphers
        output['cipher'] = opts['u']
    if 'lp' in opts:
        # ?
        output['lp'] = opts['lp']
    if 'pa' in opts:
        # ?
        output['pa'] = opts['pa']
    if 'cb' in opts:
        # ?
        output['cb'] = opts['cb']
    if 'i' in opts:
        # Backend SSL Cipher -> Values: Low, Medium, High, User Defined
        output['i'] = opts['i']
    if 'bs' in opts:
        # Backend SSL in use ie serverSSL
        output['serverssl'] = opts['bs']
    sslProfiles[name] = output
print("SSL profiles: " + str(len(sslProfiles)))
if debug:
    print("DEBUG SSL Profiles: " + str(sslProfiles))

################ SNAT pools ##########################
#
# Note that this only works for addresses in the same /24 subnet
#############################################################
for s in re.finditer(r'appdirector nat client address-range create (\S+) (\S+)', file):
    name = s.group(1)
    if sys.version.startswith('2'):
        start = ipaddress.ip_address(unicode(name, 'utf_8'))
        end = ipaddress.ip_address(unicode(s.group(2), 'utf_8'))
    else:
        start = ipaddress.ip_address(name)
        end = ipaddress.ip_address(s.group(2))
    current = start
    ipAddresses = []
    while current <= end:
        ipAddresses.append(str(current))
        current += 1
    snatPools['snatpool_' + name] = {'members': ipAddresses}
print("SNAT Pools:" + str(len(snatPools)))
if debug:
    print("DEBUG SNAT Pools: " + str(snatPools))

####################################################################
################ Layer 7 functions ##########################
####################################################################
l7rules = {}
for r in re.finditer(r'appdirector l7 farm-selection method-table setCreate (\S+) (.+)', file):
    name = r.group(1)
    l7rules[name] = {'match': [], 'action': []}
    opts = parse_options(r.group(2))
    if 'cm' in opts and opts['cm'] == '"Header Field"' and 'ma' in opts:
        # This is a rule to insert a header
        params = splitvars(opts['ma'])
        if 'HDR' in params and 'TKN' in params:
            if params['TKN'] == "$Client_IP":
                params['TKN'] = '[IP::client_addr]'
            l7rules[name]['action'].append("http-header\ninsert\nname " + params['HDR'] + "\nvalue " + params['TKN'])
    if 'cm' in opts and opts['cm'] == 'URL' and 'ma' in opts:
        # This does a match on Host and URL
        params = splitvars(opts['ma'])
        if 'HN' in params:
            l7rules[name]['match'].append("http-host\nhost\nvalues { " + params['HN'] + " }")
        if 'P' in params:
            l7rules[name]['match'].append("http-uri\npath\nvalues { " + params['P'] + " }")
    if 'cm' in opts and opts['cm'] == '"Regular Expression"' and 'ma' in opts:
        params = splitvars(opts['ma'])
        if 'EXP' in params and params['EXP'] == '.':
            # This is a regex which matches everything
            print("REGEX: " + name)
        else:
            print("WARNING! Found L7 Regular Expression at \"appdirector l7 farm-selection method-table setCreate " + name + "\". Manually set the match in output config.")

# Note that there can be multiple entries ie multiple rules per policy
l7policies = {}
for p in re.finditer(r'appdirector l7 farm-selection policy-table setCreate (\S+) (\d+) (.+)', file):
    name = p.group(1)
    precedence = p.group(2)
    opts = parse_options(p.group(3))
    if 'fn' in opts:
        farm = opts['fn']
    else:
        farm = ''
    if 'm1' in opts:
        rule = opts['m1']
    if 'pa' in opts:
        # Retain HTTP Persistency (PRSST) - If the argument is ON (or undefined), AppDirector maintains HTTP 1.1
        # HTTP Redirect To (RDR) - Performs HTTP redirection to the specified name or IP address.
        # HTTPS Redirect To (RDRS) - AppDirector redirects the HTTP request to the specified name or IP address and modifies the request to a HTTPS request.
        # Redirection Code (RDRC) - Defines the code to be used for redirection.
        # RDRC=PRMN stands for Permanent I assume, as in HTTP 301
        # SIP Redirect To (RDRSIP) - Performs SIP redirection to the specified name or IP address.
        params = splitvars(opts['pa'])
        if 'RDR' in params:
            url = 'http://' + params['RDR']
        elif 'RDRS' in params:
            url = 'https://' + params['RDRS']
        else:
            url = ''
    if not name in l7policies:
        l7policies[name] = []
    if rule in l7rules:
        if farm != '':
            l7rules[rule]['action'].append("forward\nselect\npool " + farm)
        elif url != '':
            l7rules[rule]['action'].append("http-redirect\nhost " + url)
    l7policies[name].append({'precedence': precedence, 'farm': farm, 'rule': rule})
print("Layer 7 rules:" + str(len(l7rules)))
print("Layer 7 policies:" + str(len(l7policies)))
if debug:
    print("DEBUG L7 rules: " + str(l7rules))
    print("DEBUG L7 policies: " + str(l7policies))

#
# We have retrieved the required configuration from the input file
# Now start outputting the config

################# output SSL certificates import script ######################
########################################################################
if len(certs):
    print("-- Creating SSL certs and keys --")
    with open("load_certs.sh", 'w') as loadScript:
        loadScript.write("#!/bin/bash\n# Script to load SSL certs and keys from /var/tmp\n")
        ##### Manage Certificates ########
        for cert in certs:
            # Write certs to load_certs.sh
            if certs[cert]['type'] == 'certificate' or certs[cert]['type'] == 'interm':
                loadScript.write("tmsh install sys crypto cert " + partition + cert + ".crt from-local-file /var/tmp/" + cert + ".crt\n")
                # Create the certificate files
                with open(cert + ".crt", "w") as certFile:
                    for m in re.finditer(r'(-----BEGIN CERTIFICATE-----)\s?(.+)\s?(-----END CERTIFICATE-----)', certs[cert]['text']):
                        certFile.write(m.group(1) + "\n")
                        certFile.write(textwrap.fill(m.group(2), 64) + "\n")
                        certFile.write(m.group(3) + "\n")
                print("Created SSL certificate file " + cert + ".crt")
        ##### Manage Keys #################
        for key in keys:
            # Write keys to load_certs.sh
            loadScript.write("tmsh install sys crypto key " + partition + key + ".key")
            if 'passphrase' in keys[key]:
                loadScript.write(" passphrase " + keys[key]['passphrase'])
            loadScript.write(" from-local-file /var/tmp/" + key + ".key\n")
            # Create the key files
            with open(key + ".key", "w") as keyFile:
                for n in re.finditer(r'(-----BEGIN RSA PRIVATE KEY-----)(.+)(-----END RSA PRIVATE KEY-----)', keys[key]['text']):
                    keyFile.write(n.group(1) + "\n")
                    if 'Proc-Type:' in n.group(2) and 'DEK-Info:' in n.group(2):
                        # File is encrypted, separate the first lines
                        for o in re.finditer(r'(Proc-Type: \S+) (DEK-Info: \S+) (.+)', n.group(2)):
                            keyFile.write(o.group(1) + "\n")
                            keyFile.write(o.group(2) + "\n\n")
                            keyFile.write(textwrap.fill(o.group(3), 64) + "\n")
                    else:
                        # File is not encrypted, output as it is
                        keyFile.write(textwrap.fill(n.group(2), 64) + "\n")
                    keyFile.write(n.group(3) + "\n")
            print("Created SSL key file " + key + ".key")
    print("-- Finished creating SSL certs and keys --")
    print("!! Copy all .crt and .key SSL files to BIG-IP /var/tmp and use load_certs.sh to load !!")

#######################################################################
################# output SSL certificates checking script ######################
########################################################################
with open("check_certs.sh", 'w') as checkScript:
    checkScript.write("#!/bin/bash\n# Script to check SSL certs and keys\n")
    checkScript.write("\n# Check SSL Certs\n")
    for cert in certs:
        if certs[cert]['type'] == 'certificate' or certs[cert]['type'] == 'interm':
            checkScript.write("openssl x509 -in " + cert + ".crt -noout || echo \"Error with file " + cert + ".crt\"\n")
    checkScript.write("\n# Check SSL Keys\n")
    for key in keys:
        checkScript.write("openssl rsa -in " + key + ".key -check || echo \"Error with file " + key + ".key\"\n")
print("!! Run the check_certs.sh script to check the certificates are valid !!")

#######################################################################
print("\n\n\n\n\n\n# Configuration \n#--------------------------------------#")

################# output policy config #####################
#############################################################
#print("L7 rules: " + str(l7rules))
for i in l7policies.keys():
    output = "ltm policy " + partition + i + " {\n\tcontrols { forwarding }\n\trequires { http }\n\t"
    output += "rules {\n\t"
    ordinal = 1
    for j in l7policies[i]:
        output += "\t" + j['rule'] + " { \n"
        if len(l7rules[j['rule']]['match']):
            # Deal with conditions
            output += "\t\t\tconditions { \n"
            l = 0
            for k in l7rules[j['rule']]['match']:
                output += "\t\t\t\t" + str(l) + " { \n\t\t\t\t\t" + k.replace('\n', '\n\t\t\t\t\t')
                l += 1
                output += "\n\t\t\t\t }\n"
            output += "\n\t\t\t }\n"
        if len(l7rules[j['rule']]['action']):
            # Deal with actions
            output += "\t\t\tactions { \n"
            m = 0
            for n in l7rules[j['rule']]['action']:
                output += "\t\t\t\t" + str(m) + " { \n\t\t\t\t\t" + n.replace('\n', '\n\t\t\t\t\t')
                m += 1
                output += "\n\t\t\t\t }\n"
            output += "\n\t\t\t }\n"
        output += "\t\tordinal " + str(ordinal)
        output += "\n\t\t}\n\t"
        ordinal += 1
    output += "\n\t}\n}\n"
    print(output)

################# output LACP trunk config ##############
#############################################################
for trunk in trunks.keys():
    output = "net trunk " + partition + trunk + " {\n"
    if 'interfaces' in trunks[trunk]:
        output += "\tinterfaces {\n"
        for int in trunks[trunk]['interfaces']:
            output += "\t\t" + int + "\n"
        output += "\t}\n"
    output += "}\n"
    print(output)

################# output vlan config ##############
#############################################################
for ip in selfIpNonFloating.keys():
    output = ""
    if 'vlan' in selfIpNonFloating[ip]:
        vlanName = "VLAN-" + selfIpNonFloating[ip]['vlan']
        output += "net vlan " + partition + vlanName + " {\n"
        if 'interface' in selfIpNonFloating[ip]:
            output += "\tinterfaces {\n"
            output += "\t\t" + selfIpNonFloating[ip]['interface'] + " {\n"
            output += "\t\t\ttagged\n\t\t}\n"
            output += "\t}\n"
        output += "\ttag " + selfIpNonFloating[ip]['vlan'] + "\n}\n"
        print(output)

################# output non-floating self-ip config #######
#############################################################
for ip in selfIpNonFloating.keys():
    output = ""
    if 'mask' in selfIpNonFloating[ip]:
        mask = "/" + selfIpNonFloating[ip]['mask']
    else:
        if is_ipv6(ip):
            mask = "/64"
        else:
            mask = "/32"
    if 'vlan' in selfIpNonFloating[ip]:
        vlanName = "VLAN-" + selfIpNonFloating[ip]['vlan']
    else:
        continue
    output += "net self " + partition + "selfip_" + ip + " {\n\taddress " + ip + mask + "\n\t"
    output += "allow-service none\n\t"
    output += "traffic-group traffic-group-local-only\n"
    output += "\tvlan " + vlanName + "\n"
    output += "}\n"
    print(output)

################# output network route config ##############
#############################################################
for route in routes.keys():
    network, mask, gateway = routes[route]['network'], routes[route]['mask'], routes[route]['gateway']
    output = "net route " + partition + route + " {\n\tnetwork " + network + "/" + mask + "\n\t"
    output += "gateway " + gateway + "\n}"
    print(output)

################# output SSL profiles config ##########################
########################################################################
for s in sslProfiles:
    #print(str(sslProfiles[s]))
    output = "ltm profile client-ssl " + partition + s + " {\n"
    output += "\tcert-key-chain { " + s + " { cert "
    output += sslProfiles[s]['certificate'] + ".crt "
    output += " key "
    output += sslProfiles[s]['certificate'] + ".key "
    if 'chaincert' in sslProfiles[s]:
        output += "chain " + sslProfiles[s]['chaincert']
    output += " }\n\tdefaults-from /Common/clientssl\n}\n"
    print(output)
    if 'serverssl' in sslProfiles[s]:
        print("WARNING: VSs using Client SSL profile " + s + " should have Server SSL profile assigned")

################# output monitor config ####################
#############################################################
for m in monitors.keys():
    if monitors[m]['type'] == '"TCP Port"':
        output = "ltm monitor tcp " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/tcp\n}\n"
    if monitors[m]['type'] == '"UDP Port"':
        output = "ltm monitor udp " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/udp\n}\n"
    if monitors[m]['type'] == 'HTTP':
        '''
        https://webhelp.radware.com/AppDirector/v214/214Traffic%20Management%20and%20Application%20Acceleration.03.001.htm
        https://webhelp.radware.com/AppDirector/v214/HM_Checks_Table.htm
        Arguments:
        Host name - HOST=10.10.10.53
        path - PATH=/hc.htm
        HTTP Header
        HTTP method - MTD=G (G/H/P)
        send HTTP standard request or proxy request - PRX=N
        use of no-cache - NOCACHE=N
        text for search within a HTTP header and body, and an indication whether the text appears
        Username and Password for basic authentication
        NTLM authentication option - AUTH=B
        Up to four valid HTTP return codes - C1=200
        -a PATH=/hc.htm|C1=200|MEXIST=Y|MTD=G|PRX=N|NOCACHE=N|AUTH=B|
        '''
        output = "ltm monitor http " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/http\n"
        vars = splitvars(monitors[m]['text'])
        output += "\tsend \""
        if 'MTD' in vars:
            if vars['MTD'] == 'G':
                output += "GET "
            elif vars['MTD'] == 'P':
                output += "POST "
            elif vars['MTD'] == 'H':
                output += "HEAD "
            else:
                output += "GET "
        if 'PATH' in vars:
            output += vars['PATH'] + ' HTTP/1.0\\r\\n\\r\\n"\n'
        if 'C1' in vars:
            output += "\trecv \"^HTTP/1\\.. " + vars['C1'] + "\"\n"
        output += "}\n"
    if monitors[m]['type'] == 'HTTPS':
        output = "ltm monitor https " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/https\n"
        vars = splitvars(monitors[m]['text'])
        output += "\tsend \""
        if 'MTD' in vars:
            if vars['MTD'] == 'G':
                output += "GET "
            elif vars['MTD'] == 'P':
                output += "POST "
            elif vars['MTD'] == 'H':
                output += "HEAD "
            else:
                output += "GET "
        if 'PATH' in vars:
            output += vars['PATH'] + ' HTTP/1.0\\r\\n\\r\\n"\n'
        output += "}\n"
    if monitors[m]['type'] == 'LDAP':
        '''
        The Health Monitoring module enhances the health checks for LDAP servers by allowing
        searches to be performed in the LDAP server. Before Health Monitoring performs the search,
        it issues a Bind request command to the LDAP server. After performing the search, it closes
        the connection with the Unbind command. A successful search receives an answer from the
        server that includes a "searchResultEntry" message. An unsuccessful search receives an
        answer that only includes a "searchResultDone" message.
        Arguments:
        Username - A user with privileges to search the LDAP server.
        Password - The password of the user.
        Base Object - The location in the directory from which the LDAP search begins.
        Attribute Name - The attribute to look for. For example, CN - Common Name.
        Search Value - The value to search for.
        Search scope - baseObject, singleLevel, wholeSubtree
        Search Deref Aliases - neverDerefAliases, derefInSearching, derefFindingBaseObj, derefAlways
        USER=cn=healthcheck,dc=domain|PASS=1234|BASE=dc=domain|ATTR=cn|VAL=healthcheck|SCP=1|DEREF=3|
        '''
        output = "ltm monitor ldap " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/ldap\n"
        vars = splitvars(monitors[m]['text'])
        if 'BASE' in vars:
            output += "\tbase \"" + vars['BASE'] + "\"\n"
        if 'USER' in vars:
            output += "\tusername \"" + vars['USER'] + "\"\n"
        if 'PASS' in vars:
            output += "\tpassword \"" + vars['PASS'] + "\"\n"
        output += "}\n"
    if monitors[m]['type'] == 'icmp':
        output = "ltm monitor gateway-icmp " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/gateway-icmp\n}\n"
    if monitors[m]['type'] == 'DNS':
        output = "ltm monitor dns " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/dns\n"
        vars = splitvars(monitors[m]['text'])
        if 'HOST' in vars:
            output += "\tqname \"" + vars['HOST'] + "\"\n"
        if 'ADDR' in vars:
            output += "\trecv \"" + vars['ADDR'] + "\"\n"
        output += "}\n"
    if monitors[m]['type'] == '"Radius Accounting"':
        output = "ltm monitor radius-accounting " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/radius-accounting\n"
        vars = splitvars(monitors[m]['text'])
        if 'USER' in vars:
            output += "\tusername \"" + vars['USER'] + "\"\n"
        #if 'PASS' in vars:
        if 'SECRET' in vars:
            output += "\tsecret \"" + vars['SECRET'] + "\"\n"
        output += "}\n"
    if monitors[m]['type'] == '"Radius Authentication"':
        output = "ltm monitor radius " + partition + monitors[m]['name'] + " {\n\tdefaults-from /Common/radius\n"
        vars = splitvars(monitors[m]['text'])
        if 'USER' in vars:
            output += "\tusername \"" + vars['USER'] + "\"\n"
        #if 'PASS' in vars:
        if 'SECRET' in vars:
            output += "\tsecret \"" + vars['SECRET'] + "\"\n"
        output += "}\n"
    print(output)

################# output SNAT pool config ######################
#############################################################
for s in snatPools.keys():
    output = "ltm snatpool " + partition + s + " {\n\t"
    output += "members {\n"
    # run through pool range and add members as individual IP addresses below each other
    for member in snatPools[s]['members']:
        output += "\t " + member + "\n"
    output += "\t}"
    output += "\n}"
    print(output)

################# output pool config ######################
#############################################################
for p in pools.keys():
    output = "ltm pool " + partition + p + " {\n\t"
    if 'monitor' in pools[p]:
        output += "monitor min 1 of { " + pools[p]['monitor'] + " }\n\t"
    if 'lbMethod' in pools[p]:
        output += "load-balancing-mode " + pools[p]['lbMethod'] + "\n\t"
    output += "members {\n"
    if 'destinations' in pools[p] and len(pools[p]['destinations']):
        for i, v in enumerate(pools[p]['destinations']):
            d = re.sub(r':\d+$', '', v)
            output += "\t\t" + v + " {\n\t\t\taddress " + d + "\n"
            if pools[p]['member-disabled'][i]:
                output += "\t\t\tstate down\n"
            output += "\t\t\tpriority-group " + str(pools[p]['priority-group'][i]) + "\n"
            output += "\t\t}\n"
    output += "\t}\n}"
    print(output)

################# output virtual server config ######################
######################################################################
for v in virtuals:
    output = "ltm virtual " + partition + v + " {\n"
    output += "\tdestination " + virtuals[v]['destination'] + "\n"
    output += "\tip-protocol " + virtuals[v]['protocol'].lower() + "\n"
    if 'pool' in virtuals[v] and virtuals[v]['pool']:
        output += "\tpool " + virtuals[v]['pool'] + "\n"
    if 'profiles' in virtuals[v] and len(virtuals[v]['profiles']):
        output += "\tprofiles {\n"
        for p in virtuals[v]['profiles']:
            output += "\t\t" + p + " { }\n"
        output += "\t}\n"
    if 'snat' in virtuals[v]:
        output += "\tsource-address-translation {\n\t"
        if virtuals[v]['snat'] == 'automap':
            output += "\ttype automap\n\t"
        else:
            output += "\ttype snat\n\t"
            output += "\tpool " + virtuals[v]['snat'] + "\n\t"
        output += "}\n"
    if 'policy' in virtuals[v]:
        output += "\tpolicies {\n\t"
        output += "\t" + virtuals[v]['policy'] + " { }\n\t}\n"
    output += "}"
    print(output)
```

Tested this on version: 12.0

BIG-IP Report
Problem this snippet solves: Overview This is a script which will generate a report of the BIG-IP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. This information is used to relay information to NOC and developers, giving them insight into where things are located and letting them plan patching and deploys. I also use it myself as a quick way to get information or gather data used as a foundation for RFCs, e.g. getting a list of all external virtual servers without compression profiles. The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It's easy to set up and use and only requires auditor (read-only) permissions on your devices. Demo/Preview Interactive demo http://loadbalancing.se/bigipreportdemo/ Screen shots The main report: The device overview: Certificate details: How to use this snippet: Installation instructions BigipReport REST This is the only branch we've been updating since mid-2020, and it supports 12.x and upwards.
Downloads: https://loadbalancing.se/downloads/bigipreport-v5.7.16.zip Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/ Docker support https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/ Kubernetes support https://loadbalancing.se/2021/04/16/bigipreport-on-kubernetes/ BIG-IP Report (Legacy) Older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5) BIG-IP Report (only download this if you have v10 devices): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip iControl Snapin https://loadbalancing.se/downloads/f5-icontrol.zip Documentation and Installation Instructions https://loadbalancing.se/bigip-report/ Upgrade instructions Protect the report using APM and Active Directory Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/ Got issues/problems/feedback? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it's always appreciated. --- Join us on Discord: https://discord.gg/7JJvPMYahA Code : BigIP Report Tested this on version: 12, 13, 14, 15, 16

Complete MFA solution with GA stored in Active Directory
Problem this snippet solves: All modern business applications require Multi-Factor Authentication (MFA) for remote access by employees. There are many vendors on the market selling enterprise MFA solutions that can be used with F5 BIG-IP Access Policy Manager (APM). Those solutions are complex and let customers create flexible policies that decide when, and for whom, access to protected applications is authorised. But what about customers who have no need for complex enterprise solutions, or who do not have an adequate budget for such spending? How to use this snippet: For those customers I would like to present my One-Time Password (OTP) application, which requires BIG-IP LTM/APM/iRulesLX. The shared secret value is stored in Active Directory and the QR code is generated in the user's browser. All you need after implementing this application on your BIG-IP is to ask your users to get any OTP-compatible mobile application, like Google Authenticator or Microsoft Authenticator. Please see https://github.com/akhmarov/f5_otp/ for instructions UPDATE 1: New version now supports APM 15.1+ Modern Customization UPDATE 2: Added trusted device support UPDATE 3: Added multi-tenancy support Tested this on version: 15.1

DNS: The protocol nobody sees... until it becomes the architecture's biggest problem
In day-to-day operations, DNS is rarely at the center of the conversation. It is seldom the protagonist of roadmaps, nor the first point reviewed in a new design. It is often assumed to "already be there," working, resolving names without friction. And that is precisely what makes it dangerous. After years working with enterprise and Service Provider architectures, there is a constant that is hard to ignore: DNS is one of the most critical components of modern infrastructure and, at the same time, one of the most underestimated. When everything works, nobody asks about it. When something fails, nothing else matters. I have seen applications "up," stable links, firewalls all green... yet users unable to reach a service. The problem was not in the application or the network, but in that first step we all take for granted: resolving a name. DNS in real life: where theory meets operations On paper, DNS looks simple, but in production it is not. In real architectures, DNS actively participates in critical decisions such as:

Which site or region a user is directed to
Which backend receives a request
How load is distributed across multiple data centers
How quickly a service recovers from degradation or an outage

For Service Providers and large organizations, this becomes evident during unexpected events. An earthquake, high-impact news, a global sporting event, an urgent application update, an app going viral, a government announcement, or a health emergency can all spike digital traffic within minutes. In these scenarios, the first significant growth does not happen in HTTP or TLS, but in DNS queries. Millions of users open applications, refresh portals, and generate simultaneous resolutions toward the same domains.
If DNS is not prepared to absorb that spike:

The application never receives traffic
The load balancers never get a chance to participate
The user experience degrades immediately

This is where solutions like F5 BIG-IP DNS stop being a "name service" and become an active traffic-control component, able to make decisions based on:

The real state of services
Backend health
Availability and continuity policies

The AI boom and its direct impact on DNS The massive arrival of Artificial Intelligence has not only changed how applications are consumed; it has radically changed traffic patterns. Modern AI architectures introduce:

More decoupled microservices
More internal and external APIs
Ephemeral endpoints that appear and disappear
Dynamic, frequent resolutions

Every model call, every distributed inference, and every backend that scales automatically begins with a DNS query. But the impact goes far beyond theory. The 2025 data confirms it emphatically: according to Humansecurity, traffic generated by artificial intelligence systems nearly tripled in a single year, with 187% monthly growth between January and December. And within that universe, the fastest-growing segment was autonomous AI agents — bots that no longer just read the web, but interact with it: they browse, query APIs, and complete transactions. That type of traffic grew more than 7,800% year over year. What does this mean for DNS? Every time an AI agent resolves a query, looks up an endpoint, or scales a service, there is a DNS resolution behind it. And we are not talking about predictable patterns. AI traffic behaves differently from human traffic: it is more aggressive in bursts, less predictable in timing, and it grows at a rate eight times faster than human-generated traffic.
On top of this, AI crawlers — the systems that scan content for training, search, and real-time actions — have become one of the largest sources of automated traffic on the internet. During 2025, Google's crawler alone generated roughly 4.5% of all HTML requests to sites protected by the major security platforms. And "user-action" crawling (when a user asks a chatbot for something and it goes out to fetch information from the web) grew more than 15-fold in the same period. From a technical perspective, the implications are clear:

Sustained, accelerating growth in DNS query volume
Greater sensitivity to latency and jitter in resolution
Greater impact from partial or intermittent failures
Traffic patterns that are less predictable and harder to plan for

DNS stops being "static" infrastructure and becomes a dynamic component of the data plane, intimately tied to the performance of AI models and services. And anyone who is not sizing their DNS infrastructure with this variable in mind will feel the impact sooner rather than later. DNS as a critical point in global events Over the years, multiple events have demonstrated the impact of DNS at global scale. From massive outages of digital services to amplification attacks, the pattern is consistent: when DNS is affected, the impact multiplies. And you don't have to look far for examples. In October 2025 alone, two incidents made very clear what happens when DNS fails in real production environments. In the first, an error in the automated DNS management system of one of the major public cloud platforms made critical services unreachable in the largest region of its infrastructure. The servers were up, the databases intact... but DNS made them invisible.
The failure cascaded to more than a hundred services, affected thousands of companies in more than 60 countries, and generated millions of outage reports within hours. Messaging, gaming, and streaming platforms, financial services, and even government applications went offline. It was not an attack. It was a DNS automation error. Days later, a European telecommunications provider with millions of fixed-line customers experienced a DNS resolution failure that left a large part of its user base without internet access for several hours. Observed traffic dropped more than 75%. And the root cause was, once again, a DNS resolution problem — not a fiber cut, not a DDoS attack, not a power outage. F5 Labs has extensively documented how:

Volumetric attacks use DNS as a primary vector
DNS amplification remains an effective technique
Global events generate anomalous traffic patterns that expose design weaknesses

These scenarios do not distinguish by industry or size. They affect service providers, digital platforms, critical infrastructure, and enterprise applications alike. The difference is not whether they happen, but how prepared the architecture is to absorb them without collapsing. Observability first: understand before you react For a long time, DNS was a black box. It worked... or it didn't. Today, the conversation changes completely once real observability enters the equation. F5 Insight, a key component of the F5 ADSP platform, makes it possible to:

Analyze real query patterns
Identify atypical spikes and anomalous behavior
Correlate external events with DNS traffic
Understand the why behind the system's behavior

But F5 Insight goes beyond dashboards and metrics.
It incorporates artificial intelligence to provide proactive guidance, anomaly detection, and something that changes how you operate: the ability to interact with your operational data in natural language, through integration with LLMs and the Model Context Protocol (MCP). In practice, this means going from scanning dashboards to find out what happened, to receiving operational narratives that tell you what is happening, why, and what you should prioritize. For environments with a high density of DNS queries, where patterns change in minutes and spikes can be unpredictable, especially with the growth of AI traffic, that leap from reactive to proactive is not a luxury; it is an operational necessity. In high-QPS scenarios, the problem is not always capacity, but how the traffic behaves. Treating DNS as data, not just as a service, makes an enormous difference in operations and planning. Platform and architecture: when DNS is no longer "basic" When DNS becomes critical, the platform matters. VELOS-based architectures allow capacity, resilience, and growth to be decoupled from the application lifecycle, supporting massive DNS loads with isolation, high availability, and orderly scaling. rSeries, in turn, offers a more compact and efficient model, ideal for deployments where per-unit performance, latency, and operational efficiency are key. And when the application no longer lives in a single data center, F5 Distributed Cloud Services extends DNS capabilities to hybrid and multicloud architectures while maintaining:

Consistent policies
Unified visibility
Control from the edge to the backend

The point is not where DNS runs, but that it follows the application without losing control, resilience, or observability. Security integrated from the first packet DNS is also an attack surface. We have confirmed this again and again during global security events.
And the landscape is not getting simpler: during 2025, more than 100 million new domains were identified, a quarter of which were classified as malicious or suspicious. Attackers are registering unprecedented volumes of domains, using automation to mount massive campaigns that evade traditional defenses. Ignoring DNS from a security standpoint means leaving a door open. Integrating protection into the same delivery platform makes it possible to:

Mitigate attacks without introducing additional latency
Apply policies consistently
Maintain unified visibility of the traffic

Security, performance, and availability are not separate layers; they are parts of the same flow. We sometimes forget that DNS does not usually fail dramatically. It fails silently... and that is what makes it so dangerous. Investing in its design, observability, and security is not over-engineering. It is understanding that everything starts there. In a world where AI is tripling the volume of automated traffic year over year, where autonomous agents generate DNS resolutions at a rate that did not exist 18 months ago, and where a single DNS automation error can take down thousands of companies in more than 60 countries... DNS reclaims the place it always had in the architecture, even if many have forgotten it. The question is not whether DNS is critical. The question is whether your architecture is ready to treat it as such.

Context Cloak: Hiding PII from LLMs with F5 BIG-IP
The Story

As I dove deeper into the world of AI -- MCP servers, LLM orchestration, tool-calling models, agentic workflows -- one question kept nagging me: how do you use the power of LLMs to process sensitive data without actually exposing that data to the model?

Banks, healthcare providers, government agencies -- they all want to leverage AI for report generation, customer analysis, and workflow automation. But the data they need to process is full of PII: Social Security Numbers, account numbers, names, phone numbers. Sending that to an LLM (whether cloud-hosted or self-hosted) creates a security and compliance risk that most organizations can't accept.

I've spent years working with F5 technology, and when I learned that BIG-IP TMOS v21 added native support for the MCP protocol, the lightbulb went on. BIG-IP already sits in the data path between clients and servers. It already inspects, transforms, and enforces policy on HTTP traffic. What if it could transparently cloak PII before it reaches the LLM, and de-cloak it on the way back? That's Context Cloak.

The Problem

An analyst asks an LLM: "Generate a financial report for John Doe, SSN 078-05-1120, account 4532-1189-0042." The LLM now has real PII. Whether it's logged, cached, fine-tuned on, or exfiltrated -- that data is exposed.

Traditional approaches fall short:

Approach | What Happens | The Issue
Masking (****) | LLM can't see the data | Can't reason about what it can't see
Tokenization (<<SSN:001>>) | LLM sees placeholders | Works with larger models (14B+); smaller models may hallucinate
Do nothing | LLM sees real PII | Security and compliance violation

The Solution: Value Substitution

Context Cloak takes a different approach -- substitute real PII with realistic fake values:

John Doe --> Maria Garcia
078-05-1120 --> 523-50-6675
4532-1189-0042 --> 7865-4412-3375

The LLM sees what looks like real data and reasons about it naturally. It generates a perfect financial report for "Maria Garcia."
On the way back, BIG-IP swaps the fakes back to the real values. The user sees a report about John Doe. The LLM never knew John Doe existed. This is conceptually a substitution cipher -- every real value maps to a consistent fake within the session, and the mapping is reversed transparently.

When I was thinking about this concept, my mind kept coming back to James Veitch's TED talk about messing with email scammers. Veitch tells the scammer they need to use a code for security:

Lawyer --> Gummy Bear
Bank --> Cream Egg
Documents --> Jelly Beans
Western Union --> A Giant Gummy Lizard

The scammer actually uses the code. He writes back: "I am trying to raise the balance for the Gummy Bear so he can submit all the needed Fizzy Cola Bottle Jelly Beans to the Creme Egg... Send 1,500 pounds via a Giant Gummy Lizard."

The real transaction details -- the amounts, the urgency, the process -- all stayed intact. Only the sensitive terms were swapped. The scammer didn't even question it.

That idea stuck with me -- what if we could do the same thing to protect PII from LLMs? But rotate the candy -- so it's not a static code book, but a fresh set of substitutions every session.

Watch the talk: https://www.ted.com/talks/james_veitch_this_is_what_happens_when_you_reply_to_spam_email?t=280

Why BIG-IP?
F5 BIG-IP was the natural candidate:

Already in the data path -- BIG-IP is a reverse proxy that organizations already deploy
MCP protocol support -- TMOS v21 added native MCP awareness via iRules
iRules -- Tcl-based traffic manipulation for real-time HTTP payload inspection and rewriting
Subtables -- in-memory key-value storage perfect for session-scoped cloaking maps
iAppLX -- deployable application packages with REST APIs and web UIs
Trust boundary -- BIG-IP is already the enforcement point for SSL, WAF, and access control

How Context Cloak Works

1. An analyst asks a question in Open WebUI
2. Open WebUI calls MCP tools through the BIG-IP MCP Virtual Server
3. The MCP server queries Postgres and returns real customer data (name, SSN, accounts, transactions)
4. BIG-IP's MCP iRule scans the structured JSON response, extracts PII from known field names, generates deterministic fakes, and stores bidirectional mappings in a session-keyed subtable. The response passes through unmodified so tool chaining works.
5. Open WebUI receives real data and composes a prompt
6. When the prompt goes to the LLM through the BIG-IP Inference VS, the iRule uses [string map] to swap every real PII value with its fake counterpart
7. The LLM generates its response using fake data
8. BIG-IP intercepts the response and swaps fakes back to reals. The analyst sees a report about John Doe with his real SSN and account numbers.

Two Cloaking Modes

Context Cloak supports two modes, configurable per PII field:

Substitute Mode

Replaces PII with realistic fake values. Names come from a deterministic pool, numbers are digit-shifted, emails are derived. The LLM reasons about the data naturally because it looks real.

John Doe --> Maria Garcia (name pool)
078-05-1120 --> 523-50-6675 (digit shift +5)
4532-1189-0042 --> 7865-4412-3375 (digit shift +3)
john@email.com --> maria.g@example.net (derived)

Best for: fields the LLM needs to reason about naturally -- names in reports, account numbers in summaries.
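To make the substitute-mode mechanics concrete -- deterministic fakes plus a session-scoped bidirectional map -- here is a minimal Python sketch. This is an illustrative model only, not the actual iRule implementation: the name pool, the hash-based pool selection, and the CloakSession class are all hypothetical, though the +5 digit shift reproduces the article's SSN example.

```python
import hashlib

# Illustrative sketch only -- NOT the actual Context Cloak iRule logic.
# NAME_POOL and the hash-based pool pick are hypothetical; the +5 digit
# shift matches the article's SSN example (078-05-1120 -> 523-50-6675).

NAME_POOL = ["Maria Garcia", "Maria Thompson", "Alex Chen"]  # tiny demo pool

def digit_shift(value, shift):
    """Shift every digit by a fixed offset, preserving separators/format."""
    return "".join(str((int(c) + shift) % 10) if c.isdigit() else c for c in value)

class CloakSession:
    """Per-session bidirectional real<->fake map (the 'substitution cipher')."""
    def __init__(self):
        self.real_to_fake = {}
        self.fake_to_real = {}

    def cloak(self, field, real):
        if real in self.real_to_fake:          # consistent within the session
            return self.real_to_fake[real]
        if field == "full_name":               # deterministic pick from the pool
            idx = int(hashlib.sha256(real.encode()).hexdigest(), 16) % len(NAME_POOL)
            fake = NAME_POOL[idx]
        else:
            fake = digit_shift(real, 5)
        self.real_to_fake[real] = fake
        self.fake_to_real[fake] = real
        return fake

    def decloak(self, text):
        # Analogous to the iRule's [string map] on the LLM response
        for fake, real in self.fake_to_real.items():
            text = text.replace(fake, real)
        return text

session = CloakSession()
fake_ssn = session.cloak("ssn", "078-05-1120")
print(fake_ssn)                                     # 523-50-6675
print(session.decloak("SSN on file: " + fake_ssn))  # SSN on file: 078-05-1120
```

Because the map is rebuilt per session, the same real SSN gets a fresh fake each session (rotating the candy), while staying consistent within one conversation so multi-turn reasoning still works.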
Tokenize Mode

Replaces PII with structured placeholders:

078-05-1120 --> <<SSN:32.192.169.232:001>>
John Doe --> <<name:32.192.169.232:001>>
4532-1189-0042 --> <<digit_shift:32.192.169.232:001>>

A guidance prompt is automatically injected into the LLM request, instructing it to reproduce the tokens exactly as-is. Larger models (14B+ parameters) handle this reliably; smaller models (7B) may struggle.

Best for: defense-in-depth with F5 AI Guardrails. The tokens are intentionally distinctive -- if one leaks through de-cloaking, a guardrails policy can catch it.

Both modes can be mixed per-field in the same request.

The iAppLX Package

Context Cloak is packaged as an iAppLX extension -- a deployable application on BIG-IP with a REST API and web-based configuration UI. When deployed, it creates all required BIG-IP objects: data groups, iRules, HTTP profiles, SSL profiles, pools, monitors, and virtual servers.

The PII Field Configuration is the core of Context Cloak. The admin selects which JSON fields in MCP responses contain PII and chooses the cloaking mode per field:

Field | Aliases | Mode | Type / Label
full_name | customer_name | Substitute | Name Pool
ssn | | Tokenize | SSN
account_number | | Substitute | Digit Shift
phone | | Substitute | Phone
email | | Substitute | Email

The iRules are data-group-driven -- no PII field names are hardcoded. Change the data group via the GUI, and the cloaking behavior changes instantly. This means Context Cloak works with any MCP server, not just the financial demo.

Live Demo

Enough theory -- here's what it looks like in practice.
Step 1: Install the RPM

Installing Context Cloak via BIG-IP Package Management LX

Step 2: Configure and Deploy

Context Cloak GUI -- MCP server, LLM endpoint, PII fields, one-click deploy
Deployment output showing session config and saved configuration

Step 3: Verify Virtual Servers

BIG-IP Local Traffic showing MCP VS and Inference VS created by Context Cloak

Step 4: Baseline -- No Cloaking

Without Context Cloak: real PII flows directly to the LLM in cleartext

This is the "before" picture. The LLM sees everything: real names, real SSNs, real account numbers.

Demo 1: Substitute Mode -- SSN Lookup

Prompt: "Show me the SSN number for John Doe. Just display the number."

Substitute mode -- Open WebUI + Context Cloak GUI showing all fields as Substitute

Result: User sees real SSN 078-05-1120. LLM saw a digit-shifted fake.

Demo 2: Substitute Mode -- Account Lookup

Prompt: "What accounts are associated to John Doe?"

Left: Open WebUI with real data. Right: vLLM logs showing "Maria Garcia" with fake account numbers

What the LLM saw:

"customer_name": "Maria Garcia"
"account_number": "7865-4412-3375" (checking)
"account_number": "7865-4412-3322" (investment)
"account_number": "7865-4412-3376" (savings)

What the user saw:

Customer: John Doe
Checking: 4532-1189-0042 -- $45,230.18
Investment: 4532-1189-0099 -- $312,500.00
Savings: 4532-1189-0043 -- $128,750.00

Switching to Tokenize Mode

Changing PII fields from Substitute to Tokenize in the GUI

Demo 3: Mixed Mode -- Tokenized SSN

SSN set to Tokenize, name set to Substitute. Prompt: "Show me the SSN number for Jane Smith. Just display the number."

Mixed mode -- real SSN de-cloaked on left, <<SSN:...>> token visible in vLLM logs on right

What the LLM saw:

"customer_name": "Maria Thompson"
"ssn": "<<SSN:32.192.169.232:001>>"

What the user saw:

Jane Smith, SSN 219-09-9999

Both modes operating on the same customer record, in the same request.

Demo 4: Full Tokenize -- The Punchline

ALL fields set to Tokenize mode.
Prompt: "Show me the SSN and account information for Carlos Rivera. Display all the numbers."

Full tokenize -- every PII field as a token, all de-cloaked on return

What the LLM saw -- every PII field was a token:

"full_name": "<<name:32.192.169.232:001>>"
"ssn": "<<SSN:32.192.169.232:002>>"
"phone": "<<phone:32.192.169.232:002>>"
"email": "<<email:32.192.169.232:001>>"
"account_number": "<<digit_shift:32.192.169.232:002>>" (checking)
"account_number": "<<digit_shift:32.192.169.232:003>>" (investment)
"account_number": "<<digit_shift:32.192.169.232:004>>" (savings)

What the user saw -- all real data restored:

Name: Carlos Rivera
SSN: 323-45-6789
Checking: 6789-3345-0022 -- $89,120.45
Investment: 6789-3345-0024 -- $890,000.00
Savings: 6789-3345-0023 -- $245,000.00

And here's the best part. Qwen's last line in the response:

"Please note that the actual numerical values for the SSN and account numbers are masked due to privacy concerns."

The LLM genuinely believed it showed the user masked data. It apologized for the "privacy masking" -- not knowing that BIG-IP had already de-cloaked every token back to the real values. The user saw the full, real, unmasked report.

What's Next: F5 AI Guardrails Integration

Context Cloak's tokenize mode is designed to complement F5 AI Guardrails. The <<TYPE:ID:SEQ>> format is intentionally distinctive -- if any token leaks through de-cloaking, a guardrails policy can catch it as a pattern match violation.

The vision: Context Cloak as the first layer of defense (PII never reaches the LLM), AI Guardrails as the safety net (catches anything that slips through). Defense in depth for AI data protection.
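As a rough illustration of how such a safety net could pattern-match the token format, here is a small Python sketch. The <<TYPE:ID:SEQ>> format comes from the article; the regex and function below are my own hypothetical checker, not part of F5 AI Guardrails or the Context Cloak package.

```python
import re

# Hypothetical leak detector for Context Cloak tokens. The <<TYPE:ID:SEQ>>
# shape is from the article; this checker itself is illustrative only and
# is not an F5 AI Guardrails API.

TOKEN_PATTERN = re.compile(r"<<(\w+):([\d.]+):(\d{3})>>")

def find_leaked_tokens(response_text):
    """Return any cloaking tokens that survived de-cloaking."""
    return ["<<{}:{}:{}>>".format(*m.groups())
            for m in TOKEN_PATTERN.finditer(response_text)]

print(find_leaked_tokens("Your SSN is 078-05-1120."))                 # []
print(find_leaked_tokens("Your SSN is <<SSN:32.192.169.232:001>>."))  # ['<<SSN:32.192.169.232:001>>']
```

A non-empty result would indicate a de-cloaking miss, which is exactly the kind of distinctive signal a downstream guardrails policy could block or alert on.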
Other areas I'm exploring:

Hostname-based LLM routing -- BIG-IP as a model gateway with per-route cloaking policies
JSON profile integration -- native BIG-IP JSON DOM parsing instead of regex
Auto-discovery of MCP tool schemas for PII field detection
Centralized cloaking policy management across multiple BIG-IP instances

Try It Yourself

The complete project is open source: https://github.com/j2rsolutions/f5_mcp_context_cloak

The repository includes Terraform for AWS infrastructure, Kubernetes manifests, the iAppLX package (RPM available in Releases), iRules, sample financial data, a test script, comprehensive documentation, and a full demo walkthrough with GIFs (see docs/demo-evidence.md).

A Note on Production Readiness

I want to be clear: this is a lab proof-of-concept. I have not tested this in a production environment. The cloaking subtable stores PII in BIG-IP memory, the fake name pool is small (100 combinations), the SSL certificates are self-signed, and there's no authentication on the MCP server. There are edge cases around streaming responses, subtable TTL expiry, and LLM-derived values that need more work.

But the core concept is proven: BIG-IP can transparently cloak PII in LLM workflows using value substitution and tokenization, and the iAppLX packaging makes it deployable and configurable without touching iRule code.

I'd love to hear what the community thinks. Is this approach viable for your use cases? What PII types would you need to support? How would you handle the edge cases? What would it take to make this production-ready for your environment? Let me know in the comments -- and if you want to contribute, PRs are welcome!
Demo Environment

F5 BIG-IP VE v21.0.0.1 on AWS (m5.xlarge)
Qwen 2.5 14B Instruct AWQ on vLLM 0.8.5 (NVIDIA L4, 24GB VRAM)
MCP Server: FastMCP 1.26 + PostgreSQL 16 on Kubernetes (RKE2)
Open WebUI v0.8.10
Context Cloak iAppLX v0.2.0

References

Managing MCP in iRules -- Part 1
Managing MCP in iRules -- Part 2
Managing MCP in iRules -- Part 3
Model Context Protocol Specification
James Veitch: This is what happens when you reply to spam email (TED, skip to 4:40)

Embedding APM Protected Resources into Third-Party Sites
This is the story of a fight with CORS (Cross-Origin Resource Sharing) and the process of understanding how it actually works. APM is a powerful tool that provides solutions for both simple and complex use cases. However, I have historically struggled with one specific scenario: embedding an APM-protected application into a third-party site. Unfortunately, APM doesn't provide any mechanism to make this easy. Everything works when accessing the APM application directly. But when the same application was embedded in another site, it failed silently—nothing rendered, and the browser console filled with errors. Since I never had the opportunity to fully investigate the issue, alternative solutions were usually chosen. Recently, I worked with a customer who needed protection for a web application, and APM seemed like the right fit. The application was divided into public and private components, where only the private resources required authentication. This appeared straightforward, and I initially assumed that some iRule logic would be sufficient. But unbeknownst to me, the site was not always the entry point to the application, which meant that I now had the same old problem of serving content to a third-party site. Now I was forced to look into CORS for real and figure out how to handle this beast. I spent a fair amount of time getting a better understanding, and to my surprise it wasn't really all that difficult to fix; it looked like I just needed to inject a couple of headers into the response and then it would all be dandy. This is where the complexity began - specifically around iRules on a virtual server with APM enabled; I just couldn't get it to insert the headers the way I wanted. While there are multiple ways to solve this, I chose a layered virtual server design to keep the logic clean and predictable. This weird construct has been my go-to solution for some time now and has proven quite reliable.
With a layered VS you have two virtual servers which are glued together via the iRule command "virtual". This gives you the option to run modules, iRules, policies etc. independently of each other. One of the VS' is the entry point, and based on the logic you then forward traffic to the second one. In this setup the frontend virtual server - the one exposed to the users - has a single iRule responsible for:

Handling CORS preflight requests by returning the appropriate headers
Injecting CORS headers into responses when the origin is allowed
Forwarding all non-preflight requests to the APM virtual server

This separation ensures full control over HTTP processing without being constrained by APM behavior. On the inner APM virtual server, an access profile is attached which handles authentication. I usually set a non-reachable IP address on the inner VS to ensure that it can only be reached via the frontend VS.

Initially, I planned to use a per-request policy on the inner APM VS with a URL branch agent to determine whether a request should be authenticated or not. While this looked correct on paper, it failed in practice. The third-party site was requesting embedded resources (such as images), and its logic could not handle APM redirects (e.g., '/my.policy'). One possible workaround was to use "clientless mode" by inserting the special header:

HTTP::header insert "clientless-mode" 1

However, this introduces additional logic without providing real benefits, and I wasn't sure if the application would handle the session cookie correctly or just create a million new APM sessions. Instead, I implemented an iRule that performs a datagroup lookup. If a match is found, APM is bypassed entirely for that request. This approach is simpler to maintain and reduces load by not utilizing the APM module.

Below is a diagram illustrating the request flow and decision logic:

A user browses the 3rd-party site, which has links to our site (the Origin site).
- The browser retrieves the resources on our site. Should the browser decide to make a preflight request, the iRule verifies the Origin header against a datagroup and, if allowed, returns the CORS headers.
- Otherwise the traffic is forwarded to the inner APM VS.
- The iRule on the inner APM VS looks the request up in a datagroup; if it finds no match, the authentication process is executed.
- If it finds a match, it disables APM for that request.
- On the response from the backend, we check whether the origin is on the approved list and, if so, inject the CORS headers, which allows the browser to show the content.
- A user who browses the site directly bypasses any CORS logic.

Here is the iRule for the frontend VS:

    # =============================================================================
    # iRule: Outer virtual for redirect + CORS before APM
    # -----------------------------------------------------------------------------
    # Purpose:
    #   - Redirect "/" on portal.example.com to /portal/home/
    #   - Handle CORS preflight locally before APM
    #   - Forward all other traffic to the inner APM virtual
    #   - Inject CORS headers into normal responses
    #
    # Dependencies:
    #   - Attached to outer/public virtual server
    #   - Inner virtual server exists and is called via "virtual"
    #   - Internal string datagroup for allowed CORS origin hostnames
    #
    # Datagroups:
    #   - portal_example_com_allowed_cors_origins_dg
    #
    # Notes:
    #   - Datagroup must contain lowercase hostnames only
    #   - Empty datagroup = allow all origins
    #
    # =============================================================================

    when RULE_INIT {
        set static::portal_example_com_outer_cors_debug_enabled 1
    }

    when HTTP_REQUEST {
        # Initialize request-scoped state explicitly.
        set cors_origin ""
        set cors_origin_host ""
        set cors_is_allowed 0

        if { $static::portal_example_com_outer_cors_debug_enabled } {
            log local0. "debug: HTTP_REQUEST start - host=[HTTP::host] uri=[HTTP::uri] method=[HTTP::method]"
        }

        # ---------------------------------------------------------------------
        # Root redirect
        # ---------------------------------------------------------------------
        if { ([string tolower [getfield [HTTP::host] ":" 1]] eq "portal.example.com") && ([HTTP::uri] eq "/") } {
            if { $static::portal_example_com_outer_cors_debug_enabled } {
                log local0. "debug: root redirect matched"
            }
            HTTP::redirect "https://portal.example.com/portal/home/"
            return
        }

        # ---------------------------------------------------------------------
        # Origin evaluation (empty DG = allow all)
        # ---------------------------------------------------------------------
        if { [HTTP::header exists "Origin"] } {
            set cors_origin [HTTP::header "Origin"]
            set cors_origin_host [string tolower [URI::host $cors_origin]]

            if { $static::portal_example_com_outer_cors_debug_enabled } {
                log local0. "debug: origin='$cors_origin' parsed_host='$cors_origin_host'"
            }

            if { ([class size portal_example_com_allowed_cors_origins_dg] == 0) || ($cors_origin_host ne "" && [class match -- $cors_origin_host equals portal_example_com_allowed_cors_origins_dg]) } {
                set cors_is_allowed 1
                if { $static::portal_example_com_outer_cors_debug_enabled } {
                    log local0. "debug: origin allowed"
                }
            } else {
                if { $static::portal_example_com_outer_cors_debug_enabled } {
                    log local0. "debug: origin NOT allowed"
                }
            }
        } else {
            if { $static::portal_example_com_outer_cors_debug_enabled } {
                log local0. "debug: no Origin header"
            }
        }

        # ---------------------------------------------------------------------
        # Preflight handling
        # ---------------------------------------------------------------------
        if { $cors_is_allowed } {
            if { ([HTTP::method] eq "OPTIONS") && ([HTTP::header exists "Access-Control-Request-Method"]) } {
                if { $static::portal_example_com_outer_cors_debug_enabled } {
                    log local0. "debug: handling preflight locally"
                }
                HTTP::respond 200 noserver \
                    "Access-Control-Allow-Origin" $cors_origin \
                    "Access-Control-Allow-Methods" "GET, POST, OPTIONS" \
                    "Access-Control-Allow-Headers" [HTTP::header "Access-Control-Request-Headers"] \
                    "Access-Control-Max-Age" "86400" \
                    "Vary" "Origin"
                return
            }
        }

        if { $static::portal_example_com_outer_cors_debug_enabled } {
            log local0. "debug: forwarding to inner virtual"
        }

        # ---------------------------------------------------------------------
        # Forward to inner APM virtual
        # ---------------------------------------------------------------------
        virtual portal.example.com_https_vs
    }

    when HTTP_RESPONSE {
        if { $cors_is_allowed && $cors_origin ne "" } {
            if { $static::portal_example_com_outer_cors_debug_enabled } {
                log local0. "debug: injecting CORS headers"
            }
            HTTP::header replace "Access-Control-Allow-Origin" $cors_origin
            HTTP::header replace "Vary" "Origin"
        }
    }

Here is the iRule for the inner APM VS:

    # =============================================================================
    # iRule: Selective APM bypass for portal.example.com using datagroups
    # -----------------------------------------------------------------------------
    # Purpose:
    #   Disable APM only for explicitly public endpoint prefixes while preserving
    #   APM protection for everything else.
    #
    # Dependencies:
    #   - BIG-IP LTM
    #   - BIG-IP APM
    #   - Internal string datagroup for public path prefixes
    #
    # Datagroups:
    #   - gisportal_public_path_prefixes_dg
    #
    # Notes:
    #   - Matching is done against the raw request path derived from HTTP::uri.
    #   - Only the query string is stripped.
    #   - Datagroup entries are matched using starts_with behavior.
    #   - APM is disabled only when a request matches an explicit public prefix.
    #   - All other requests remain protected by APM.
    #   - Debug logging can be enabled in RULE_INIT by setting:
    #       static::portal_example_com_apm_bypass_debug 1
    #
    # =============================================================================

    when RULE_INIT {
        set static::portal_example_com_apm_bypass_debug 1
    }

    when HTTP_REQUEST {
        set normalized_host [string tolower [getfield [HTTP::host] ":" 1]]
        set normalized_uri [string tolower [HTTP::uri]]

        # ---------------------------------------------------------------------
        # Extract request path from raw URI
        # ---------------------------------------------------------------------
        set query_delimiter_index [string first "?" $normalized_uri]
        if { $query_delimiter_index >= 0 } {
            set normalized_path [string range $normalized_uri 0 [expr {$query_delimiter_index - 1}]]
        } else {
            set normalized_path $normalized_uri
        }

        if { $static::portal_example_com_apm_bypass_debug } {
            log local0. "debug: host='$normalized_host' uri='$normalized_uri' path='$normalized_path'"
        }

        if { $normalized_host ne "portal.example.com" } {
            if { $static::portal_example_com_apm_bypass_debug } {
                log local0. "debug: host did not match target host, skipping rule"
            }
            return
        }

        # ---------------------------------------------------------------------
        # PUBLIC
        # ---------------------------------------------------------------------
        set matched_public_prefix [class match -name -- $normalized_path starts_with gisportal_public_path_prefixes_dg]

        if { $static::portal_example_com_apm_bypass_debug } {
            if { $matched_public_prefix ne "" } {
                log local0. "debug: matched PUBLIC prefix '$matched_public_prefix' -> disabling APM"
            } else {
                log local0. "debug: no public prefix matched -> APM remains enabled"
            }
        }

        if { $matched_public_prefix ne "" } {
            ACCESS::disable
            return
        }
    }

I can recommend spending time on YouTube to get a better feeling for what CORS is about and why it is used. You might run into another problem with APM if you try to embed the entire application, with its APM authentication logic, into a frame in a third-party app. I have not addressed it in this example, but you could extend the iRule logic to inject CSP headers to make this possible. I might address this in another article.

I hope this example can help you fast-track past all the headaches I struggled with. If you have any feedback, just let me know. I don't know everything about CORS, and I'm sure there are areas that require special attention that I haven't addressed. Tell me so we can enrich the solution by sharing knowledge.

The State Of HTTP/2 Full Proxy With F5 LTM
In this article, I will attempt to summarize the known challenges of an HTTP/2 full proxy setup, point out possible solutions, and document known bugs and incompatibilities. Most major browsers had added HTTP/2 support by the end of 2015. However, I hardly ever see F5 LTM setups with HTTP/2 full proxy configured.

FTP Session Logging
Problem this snippet solves:

This iRule logs FTP connections and username information. By default connection mapping from client through BIG-IP to server is logged as well as the username entered by the client. Optionally you can log the entire FTP session by uncommenting the log message in CLIENT_DATA.

Code :

    # This iRule logs FTP connections and username information.
    # By default connection mapping from client through BIG-IP to server is logged
    # as well as the username entered by the client. Optionally you can log the
    # entire FTP session by uncommenting the log message in CLIENT_DATA.

    when CLIENT_ACCEPTED {
        set vip [IP::local_addr]:[TCP::local_port]
        set user "unknown"
    }

    when CLIENT_DATA {
        # uncomment for full session logging
        #log local0. "[IP::client_addr]:[TCP::client_port]: collected payload ([TCP::payload length]): [TCP::payload]"

        # check if payload contains the string we want to replace
        if { [TCP::payload] contains "USER" } {
            # use a regular expression to save the user name
            ## regex modified by arkashik
            regexp "USER \(\[a-zA-Z0-9_-]+)" [TCP::payload] all user
            # log connection mapping from client through BIG-IP to server
            log local0. "FTP connection from $client. Mapped to $inside -> $node, user $user"
            TCP::release
            TCP::collect
        } else {
            TCP::release
            TCP::collect
        }
    }

    when SERVER_CONNECTED {
        set client "[IP::client_addr]:[TCP::client_port]"
        set node "[IP::server_addr]:[TCP::server_port]"
        set inside "[serverside {IP::local_addr}]:[serverside {TCP::local_port}]"
        TCP::collect
    }

    when SERVER_DATA {
        TCP::release
        clientside { TCP::collect }
    }
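For quick offline testing of that username capture, here is a small Python sketch that mirrors the iRule's regex. The helper name and sample payloads are mine, not part of the snippet; the character class is copied from the iRule, so usernames containing dots, at-signs, etc. would need the pattern extended.

```python
import re

# Mirrors the iRule capture: "USER <name>" where <name> is [a-zA-Z0-9_-]+.
# Extend the character class if your FTP usernames use other characters.
USER_RE = re.compile(r"USER ([a-zA-Z0-9_-]+)")

def extract_user(payload: str) -> str:
    """Return the FTP username found in raw payload, or 'unknown' if absent."""
    match = USER_RE.search(payload)
    return match.group(1) if match else "unknown"

print(extract_user("USER alice\r\nPASS secret\r\n"))  # -> alice
print(extract_user("220 Welcome\r\n"))                # -> unknown
```

This makes it easy to sanity-check the pattern against captured payloads before changing the iRule on a live box.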
F5 Per-App AS3 Part 2 How to see if there are manual changes!
Code version: The code was tested on 17.1.5.3, AS3: 3.55

For more about AS3 and per-app AS3 see my previous code share Part 1 article: https://community.f5.com/kb/codeshare/f5-per-app-as3-part-1-how-share-tenant-specific-object/345072

First we will send a per-app AS3 declaration as shown below.

    {
      "id": "per-app-declaration",
      "schemaVersion": "3.55.0",
      "controls": {
        "class": "Controls",
        "logLevel": "debug",
        "trace": true,
        "traceResponse": true
      },
      "A2": {
        "class": "Application",
        "service": {
          "class": "Service_HTTP",
          "virtualAddresses": [ "10.0.3.10" ],
          "pool": "web2_pool"
        },
        "web2_pool": {
          "class": "Pool",
          "monitors": [ "http" ],
          "members": [
            {
              "servicePort": 80,
              "serverAddresses": [ "192.7.21.10", "192.7.21.11" ]
            }
          ]
        }
      }
    }

Then we will manually change something on the BIG-IP - for example the virtual server IP from 10.0.3.10 to 10.0.3.11 - and send the same declaration, but with "dryRun" set to true. This causes AS3 to validate the config without executing it, and with trace and traceResponse we get the difference 😎

    {
      "id": "per-app-declaration",
      "schemaVersion": "3.55.0",
      "controls": {
        "class": "Controls",
        "logLevel": "debug",
        "trace": true,
        "dryRun": true,
        "traceResponse": true
      },
      "A2": {
        "class": "Application",
        "service": {
          "class": "Service_HTTP",
          "virtualAddresses": [ "10.0.3.10" ],
          "pool": "web2_pool"
        },
        "web2_pool": {
          "class": "Pool",
          "monitors": [ "http" ],
          "members": [
            {
              "servicePort": 80,
              "serverAddresses": [ "192.7.21.10", "192.7.21.11" ]
            }
          ]
        }
      }
    }

Now we see that the IP has been changed from 10.0.3.10 to 10.0.3.11, and here we go - we have the difference! This can be added in CI/CD to always do a "dry-run" with the original declaration first, to see if there are manual changes, before executing the new AS3 declaration that could, for example, change the IP address to 10.0.3.12 using the official way.
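If you drive this from CI/CD, you don't have to keep a second copy of the declaration just for the dry-run step - you can derive it from the real one. A small Python sketch (the helper name is mine; it assumes the per-app layout shown above, with "controls" at the top level of the declaration):

```python
import copy
import json

def as_dry_run(declaration: dict) -> dict:
    """Return a deep copy of a per-app AS3 declaration with dryRun/trace enabled.

    The original dict is left untouched, so the same object can be POSTed
    for real once the dry-run diff has been reviewed.
    """
    dry = copy.deepcopy(declaration)
    controls = dry.setdefault("controls", {"class": "Controls"})
    controls["dryRun"] = True
    controls["trace"] = True
    controls["traceResponse"] = True
    return dry

# Minimal example declaration (trimmed for brevity).
decl = {
    "id": "per-app-declaration",
    "schemaVersion": "3.55.0",
    "controls": {"class": "Controls", "logLevel": "debug"},
    "A2": {"class": "Application"},
}

print(json.dumps(as_dry_run(decl)["controls"], sort_keys=True))
```

The pipeline would POST `as_dry_run(decl)`, inspect the returned "diff" section, and only then POST `decl` itself.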
Look at the JSON reply's "diff" section, which is visible thanks to the trace and traceResponse options; an automation can simply check this section and stop the new deployment if the manual changes need to be reviewed first.

For a basic AS3 declaration (not per-app), the "dry-run" is actually specified differently - F5 likes changing the naming, like Local Traffic policies to Endpoint Policies, or the naming of TLS profiles between GUI/TMSH and AS3 😅

    {
      "$schema": "https://raw.githubusercontent.com/F5Networks/f5-appsvcs-extension/refs/heads/main/schema/3.55.0/as3-schema.json",
      "class": "AS3",
      "action": "dry-run",
      "logLevel": "debug",
      "trace": true,
      "traceResponse": true,
      "persist": true,
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.55.0",
        "id": "BIG-IP-Example-Tenant",
        "Example": {
          "class": "Tenant",
          "Shared": {
            "class": "Application",
            "template": "shared",
            "Example_Response": {
              "remark": "Used for F5 response",
              "class": "iRule",
              "iRule": {
                "base64": "d2hlbiBIVFRQX1JFUVVFU1Qgew0KICAgSFRUUDo6cmVzcG9uZCAyMDAgY29udGVudCB7DQogICAgICA8aHRtbD4NCiAgICAgICAgIDxoZWFkPg0KICAgICAgICAgICAgPHRpdGxlPkFwb2xvZ3kgUGFnZTwvdGl0bGU+DQogICAgICAgICA8L2hlYWQ+DQogICAgICAgICA8Ym9keT4NCiAgICAgICAgICAgIFdlIGFyZSBzb3JyeSwgYnV0IHRoZSBzaXRlIHlvdSBhcmUgbG9va2luZyBmb3IgaXMgdGVtcG9yYXJpbHkgb3V0IG9mIHNlcnZpY2U8YnI+DQogICAgICAgICAgICBJZiB5b3UgZmVlbCB5b3UgaGF2ZSByZWFjaGVkIHRoaXMgcGFnZSBpbiBlcnJvciwgcGxlYXNlIHRyeSBhZ2Fpbi4NCiAgICAgICAgIDwvYm9keT4NCiAgICAgIDwvaHRtbD4NCiAgIH0NCn0="
              }
            }
          }
        }
      }
    }

This will not show if someone has manually added a VLAN, for example, as only changes to the apps that were deployed with AS3 will be seen. For those, you will get an error like the one below when you try to delete the partition.

"" 0107082a:3: All objects must be removed from a partition ""

https://my.f5.com/manage/s/article/K02718312
https://my.f5.com/manage/s/article/K000138638

Github link: https://github.com/Nikoolayy1/AS3-Per-App-Manual-Changes/tree/main
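The authoritative diff is the one AS3 returns in the traceResponse, but for a quick client-side preview before POSTing anything, you can also diff the previously deployed declaration against the candidate with Python's standard library. This is a rough local sketch (function and file labels are mine), not a replacement for the AS3 dry-run:

```python
import difflib
import json

def declaration_diff(old: dict, new: dict) -> str:
    """Unified diff of two AS3 declarations, pretty-printed for stable output."""
    old_lines = json.dumps(old, indent=2, sort_keys=True).splitlines(keepends=True)
    new_lines = json.dumps(new, indent=2, sort_keys=True).splitlines(keepends=True)
    return "".join(difflib.unified_diff(
        old_lines, new_lines, fromfile="deployed", tofile="candidate"))

# Trimmed declarations illustrating a virtual address change.
old = {"A2": {"service": {"virtualAddresses": ["10.0.3.10"]}}}
new = {"A2": {"service": {"virtualAddresses": ["10.0.3.11"]}}}

print(declaration_diff(old, new))
```

An empty string from `declaration_diff` means the candidate matches what you last deployed, so the dry-run diff against the live box would only show out-of-band (manual) changes.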