A taste of Troubleshooting Automation on F5 boxes using Python
Quick Intro

This article is like a getting started guide for those unfamiliar with troubleshooting automation in Python on BIG-IP. We're going to create a very simple script that checks the CPU usage of a given process and:

If CPU > 90%, it prints a message saying that CPU is high
Otherwise, it prints a message saying that CPU is OK

The idea of a simple script is so you don't get distracted by the script itself and can follow along with my thinking as I add more features. The following command prints out the current CPU usage of a given process (tmm.0 in this case):

Creating a function that returns CPU usage

Python has a dynamic prompt known as the REPL where we can test our code in real time, and all we need to do on BIG-IP is to type the keyword python:

Functions in Python abstract a more complex task and return a result. Functions are defined like this:

For example:

Before we create our function to return CPU usage from a particular daemon, we need to know how to execute Linux commands using Python:

Now, we can finally create our function that returns CPU usage of a given daemon by storing the above command's output in a variable and returning it:

Keep the above function handy!

Creating a loop to check if CPU is high

Now we're going to add the above function to a loop. Why? Because we want our script to flag to us when CPU usage is above 90%, remember? One solution is to use a while loop. If you don't know what a while loop is, here's a very simple example that shows how the loop keeps going until a certain condition we define is satisfied. In this case we set a variable number = 0 and while number is not yet 10, we keep increasing number by 1 and printing it:

PS: number += 1 is the same as number = number + 1

For our loop, we can just store the value from our previous function in a variable called actualusage and use an if/else clause to print out a message when CPU is high and a different one when it's not:

PS: while True would just run an infinite loop. The only way to leave such a loop would be to explicitly use the break keyword when a condition we specify is satisfied inside our loop.

Adding a 1 second break interval to the loop

Notice that the previous loop will probably eat up a lot of CPU resources, so we're better off waiting at least 1 second before each check using time.sleep as seen below:

Now our script will wait for 1 second between checks.

Bundling our script up together

Now let's bundle it up together and see what we've got so far: We're importing commands and time so we can use the commands.getoutput() and time.sleep() functions respectively. Notice I've added the path to the python executable in the first line of our script above. This allows us to execute our script using ./script.py rather than python script.py. Isn't that better? Let's test it: Lovely. Our script seems to be working fine. I pressed Ctrl+C to exit the loop.

Making our script accept the process name as an argument

Up to now, we've been passing the process name (tmm.0) directly to our script. Wouldn't it be better if we added the process name as an argument, like ./check-cpu-usage.py <process name>? The simplest way to add arguments in Python is to use sys.argv. Here's an example: Note that sys.argv[0] always returns the script's name and sys.argv[1] the first parameter we typed in. Yes, if we wanted to return a second parameter it would be sys.argv[2] and so on. The idea here is to store whatever we type in as an argument in a variable and pass that variable to the checkcpu() function: Now let's confirm it works: Much better, eh?
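Putting the pieces described above together, a minimal version of the script might look like the sketch below. This is a reconstruction for illustration only: the shell one-liner used to read the CPU value is an assumption (the original used its own command), and the commands module implies the Python 2 interpreter that ships on BIG-IP.

#!/usr/bin/env python
import commands
import sys
import time

def checkcpu(daemon):
    # assumption: read the process CPU percentage with ps; swap in whatever
    # one-liner you normally use to read CPU for the daemon
    output = commands.getoutput("ps -o %cpu= -C " + daemon)
    return float(output.split()[0]) if output else 0.0

process = sys.argv[1]

while True:
    actualusage = checkcpu(process)
    if actualusage > 90:
        print("CPU usage of %s is high: %s%%" % (process, actualusage))
    else:
        print("CPU usage of %s is OK: %s%%" % (process, actualusage))
    time.sleep(1)

Running it as ./check-cpu-usage.py tmm.0 would then print one line per second until you press Ctrl+C.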
Final test

Let's create a simple sillyscript.sh to eat up a lot of CPU cycles deliberately and finish up our test: If you're a beginner in Python's world, you can stop here and try the script with a few other processes and play with it. Otherwise, I'd suggest going through the Appendix sections.

Appendix 1: Making our script return an error when no argument is passed to it

Let's add the ability to return an error if we accidentally execute the script without any arguments. For this we can use a try and except clause: Basically, sys.argv[1] (our first argument) is copied to a variable named argument. Therefore, if we execute the script without any arguments, the except clause is triggered, printing an error message. Easy eh? Let's try: Now, we can incorporate that into our script:

Appendix 2: Making our script automatically stop after 5 seconds

We can also add a timer for the script to automatically finish after a certain amount of time. Here's how we do it: If we run the above code, we'll see nothing going on for 5 seconds and then the script will stop and print a message: We can do something similar to our script and make it exit after 5 seconds by storing the time our script started (in a start variable) and testing if the current time.time() is ever higher than start + 5 (notice I stored 5 in a variable named MAX_TIME_RUNNING): Now, let's confirm it works: The command prompt returned after 5 seconds and there was no need to press Ctrl+C. We could fine tune and improve our script even further, but this is enough to give you a taste of the Python programming language.
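For reference, here is a sketch of how the two appendix ideas could be folded into the script. Again, this is a reconstruction for illustration; the CPU one-liner and the message wording are assumptions rather than the original code.

#!/usr/bin/env python
import commands
import sys
import time

MAX_TIME_RUNNING = 5

def checkcpu(daemon):
    # assumption: same hypothetical ps one-liner as the earlier sketch
    output = commands.getoutput("ps -o %cpu= -C " + daemon)
    return float(output.split()[0]) if output else 0.0

# Appendix 1: exit politely if no argument was passed
try:
    argument = sys.argv[1]
except IndexError:
    print("Usage: ./check-cpu-usage.py <process name>")
    sys.exit(1)

# Appendix 2: stop automatically after MAX_TIME_RUNNING seconds
start = time.time()

while True:
    actualusage = checkcpu(argument)
    if actualusage > 90:
        print("CPU usage of %s is high: %s%%" % (argument, actualusage))
    else:
        print("CPU usage of %s is OK: %s%%" % (argument, actualusage))
    if time.time() > start + MAX_TIME_RUNNING:
        print("%s seconds elapsed, exiting." % MAX_TIME_RUNNING)
        break
    time.sleep(1)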
VIPTest: Rapid Application Testing for F5 Environments

VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
Demystifying iControl REST Part 5: Transferring Files

iControl REST. It's iControl SOAP's baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written on iControl REST, so the intent here isn't basic use, but rather to demystify some of the finer details of using the API. This article will cover the details of how to transfer files to/from the BIG-IP using iControl REST and the Python programming language. (Note: this functionality requires 12.0+.)

The REST File Transfer Worker

The file transfer worker allows a client to transfer files through a series of GET operations for downloads and POST operations for uploads. The Content-Range header is used for both as a means to chunk the content. For downloads and uploads, the worker listens on the following interfaces.

Description          | Method | URI                                           | File Location
Download a File      | GET    | /mgmt/cm/autodeploy/software-image-downloads/ | /shared/images/
Upload an Image File | POST   | /mgmt/cm/autodeploy/software-image-uploads/   | /shared/images/
Upload a File        | POST   | /mgmt/shared/file-transfer/uploads/           | /var/config/rest/downloads/
Download a QKView    | GET    | /mgmt/shared/file-transfer/qkview-downloads/  | /var/tmp/
Download a UCS       | GET    | /mgmt/shared/file-transfer/ucs-downloads/     | /var/local/ucs/
Upload ASM Policy    | POST   | /mgmt/tm/asm/file-transfer/uploads/           | /var/ts/var/rest/
Download ASM Policy  | GET    | /mgmt/tm/asm/file-transfer/downloads/         | /var/ts/var/rest/

Binary and text files are supported. The magic in the transfer is the Content-Range header, which has the following format:

Content-Range: start-end/filesize

Where start/end are the chunk's delimiters in the file and filesize is, well, the file size. Any file larger than 1M needs to be chunked with this header as that limit is enforced by the worker. This is done to avoid potential denial of service attacks and out of memory errors. There are benefits of chunking as well:

Accurate progress bars
Resuming interrupted downloads
Random access to file content possible

Uploading a File

The function is shown below. Note that whereas normally with the REST API the Content-Type is application/json, with file transfers that changes to application/octet-stream. The workflow for the function works like this (line numbers in parentheses):

Set the chunk size (3)
Set the Content-Type header (4-6)
Open the file (7)
Get the filename (apart from the path) from the absolute path (8)
If the extension is an .iso file (image), put it in /shared/images, otherwise it'll go in /var/config/rest/downloads (9-12)
Disable ssl warnings in requests (required with my version: 2.8.1. YMMV) (14)
Set the total file size for use with the Content-Range header (15)
Set the start variable to 0 (17)
Begin the loop to iterate through the file and upload in chunks (19)
Read data from the file and if there is no more data, break the loop (20-22)
Set the current bytes read; if less than the chunk size, then this is the last chunk, so set the end to the size from step 7. Otherwise, add the current bytes length to the start value and set that as the end (24-28)
Set the Content-Range header value and then add that to the header (30-31)
Make the POST request, uploading the content chunk (32-36)
Increment the start value by the current bytes content length (38)

def _upload(host, creds, fp):
    chunk_size = 512 * 1024
    headers = {
        'Content-Type': 'application/octet-stream'
    }
    fileobj = open(fp, 'rb')
    filename = os.path.basename(fp)
    if os.path.splitext(filename)[-1] == '.iso':
        uri = 'https://%s/mgmt/cm/autodeploy/software-image-uploads/%s' % (host, filename)
    else:
        uri = 'https://%s/mgmt/shared/file-transfer/uploads/%s' % (host, filename)

    requests.packages.urllib3.disable_warnings()

    size = os.path.getsize(fp)
    start = 0

    while True:
        file_slice = fileobj.read(chunk_size)
        if not file_slice:
            break

        current_bytes = len(file_slice)
        if current_bytes < chunk_size:
            end = size
        else:
            end = start + current_bytes

        content_range = "%s-%s/%s" % (start, end - 1, size)
        headers['Content-Range'] = content_range
        requests.post(uri,
                      auth=creds,
                      data=file_slice,
                      headers=headers,
                      verify=False)

        start += current_bytes

Downloading a File

Downloading is very similar but there are some differences. Here is the workflow that is different, followed by the code.

Note that the local path where the file will be downloaded to is given as part of the filename. The URI is set to the downloads worker. The only supported download directory at this time is /shared/images. (8)
Open the local file so received data can be written to it (11)
Make the request (22-26)
If the response code is 200 and if size is greater than 0, increment the current bytes and write the data to file, otherwise exit the loop (28-40)
Set the value of the returned Content-Range header to crange and, if initial size (0), set the file size to the size variable (42-46)
If the file is smaller than the chunk size, adjust the chunk size down to the total file size and continue (51-55)
Do the math to get ready to download the next chunk (57-62)

def _download(host, creds, fp):
    chunk_size = 512 * 1024
    headers = {
        'Content-Type': 'application/octet-stream'
    }
    filename = os.path.basename(fp)
    uri = 'https://%s/mgmt/cm/autodeploy/software-image-downloads/%s' % (host, filename)
    requests.packages.urllib3.disable_warnings()

    with open(fp, 'wb') as f:
        start = 0
        end = chunk_size - 1
        size = 0
        current_bytes = 0

        while True:
            content_range = "%s-%s/%s" % (start, end, size)
            headers['Content-Range'] = content_range
            #print headers
            resp = requests.get(uri,
                                auth=creds,
                                headers=headers,
                                verify=False,
                                stream=True)

            if resp.status_code == 200:
                # If the size is zero, then this is the first time through the
                # loop and we don't want to write data because we haven't yet
                # figured out the total size of the file.
                if size > 0:
                    current_bytes += chunk_size
                    for chunk in resp.iter_content(chunk_size):
                        f.write(chunk)

                # Once we've downloaded the entire file, we can break out of
                # the loop
                if end == size:
                    break

            crange = resp.headers['Content-Range']

            # Determine the total number of bytes to read
            if size == 0:
                size = int(crange.split('/')[-1]) - 1

                # If the file is smaller than the chunk size, BIG-IP will
                # return an HTTP 400. So adjust the chunk_size down to the
                # total file size...
                if chunk_size > size:
                    end = size

                # ...and pass on the rest of the code
                continue

            start += chunk_size

            if (current_bytes + chunk_size) > size:
                end = size
            else:
                end = start + chunk_size - 1
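For context, calling these helpers might look something like the sketch below. The host, credentials, and file paths are assumptions for illustration, and the functions above expect the os and requests modules to be imported at the top of the script.

import os
import requests

# _upload() and _download() defined as above

if __name__ == '__main__':
    host = '192.168.25.42'        # assumed management address
    creds = ('admin', 'admin')    # assumed credentials

    # lands in /shared/images/ because of the .iso extension
    _upload(host, creds, '/tmp/BIGIP-12.1.2.iso')

    # reads /shared/images/BIGIP-12.1.2.iso back down to /var/tmp locally
    _download(host, creds, '/var/tmp/BIGIP-12.1.2.iso')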
Now you know how to upload and download files. Let's do something with it!

A Use Case - Upload Cert & Key to BIG-IP and Create a Clientssl Profile!

This whole effort was sparked by a use case in Q&A, so I had to deliver the goods with more than just moving files around. The complete script is linked at the bottom, but there are a few steps required to get to a clientssl certificate:

Upload the key & certificate
Create the file object for key/cert
Create the clientssl profile

You know how to do step 1 now. Step 2 is to create the file object for the key and certificate. After a quick test to see which file is the certificate, you set both files, build the payload, then make the POST requests to bind the uploaded files to the file object.

def create_cert_obj(bigip, b_url, files):
    f1 = os.path.basename(files[0])
    f2 = os.path.basename(files[1])

    if f1.endswith('.crt'):
        certfilename = f1
        keyfilename = f2
    else:
        keyfilename = f1
        certfilename = f2

    certname = f1.split('.')[0]

    payload = {}
    payload['command'] = 'install'
    payload['name'] = certname

    # Map Cert to File Object
    payload['from-local-file'] = '/var/config/rest/downloads/%s' % certfilename
    bigip.post('%s/sys/crypto/cert' % b_url, json.dumps(payload))

    # Map Key to File Object
    payload['from-local-file'] = '/var/config/rest/downloads/%s' % keyfilename
    bigip.post('%s/sys/crypto/key' % b_url, json.dumps(payload))

    return certfilename, keyfilename

Notice we return the key/cert filenames so they can be used for step 3 to establish the clientssl profile. In this example, I name the file object and the clientssl profile after the certfilename (minus the extension), but you can alter this to allow the object names to be provided. To build the profile, just create the payload with the custom key/cert and make the POST request and you are done!

def create_ssl_profile(bigip, b_url, certname, keyname):
    payload = {}
    payload['name'] = certname.split('.')[0]
    payload['cert'] = certname
    payload['key'] = keyname

    bigip.post('%s/ltm/profile/client-ssl' % b_url, json.dumps(payload))

Much thanks to Tim Rupp who helped me get across the finish line with some counting and rest worker errors we were troubleshooting on the download function.

Get the Code

Upload a File
Download a File
Upload Cert/Key & Build a Clientssl Profile
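Tying the three steps together, the glue might look something like the sketch below. Here f5_host, f5_user, f5_password, and the file paths are assumptions, and bigip stands for whatever authenticated REST session object the full linked script builds; treat it as an illustration of the sequence rather than the script itself.

files = ['/path/to/mydomain.crt', '/path/to/mydomain.key']
b_url = 'https://%s/mgmt/tm' % f5_host

# Step 1: upload the key and certificate (they land in /var/config/rest/downloads)
for f in files:
    _upload(f5_host, (f5_user, f5_password), f)

# Step 2: bind the uploaded files to crypto cert/key file objects
certname, keyname = create_cert_obj(bigip, b_url, files)

# Step 3: build the clientssl profile referencing those objects
create_ssl_profile(bigip, b_url, certname, keyname)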
BIGREST - A Python SDK for F5 iControl REST API

This article is written by, and published on behalf of, DevCentral MVP Leonardo Souza.

---

Hello all, this is going to be my shortest article so far. As you probably know already, both BIG-IP and BIG-IQ have an iControl REST API. However, if you play with that very often, you will find yourself creating some scripts to perform some common tasks. If you put those scripts together, you kind of have an SDK that other people can use to simplify the use of the API. Almost all vendors these days have an SDK for their products, and the language of choice is mainly Python because of the language's simplicity.

As the article title says, I wrote BIGREST, a Python SDK to work with the iControl REST API. The SDK fully supports both BIG-IP and BIG-IQ. I wanted to advance my Python and iControl REST knowledge, so this was a useful way of doing that.

You may be wondering "Isn't there already a Python SDK for iControl REST?", so let me explain that part. I have used the existing SDK many times in the past, and it was very helpful. The existing Python SDK, the F5-SDK (https://github.com/F5Networks/f5-common-python), is limited, as it mainly supports BIG-IP, and the only supported BIG-IQ functionality is license pools. I wanted to help with the F5-SDK and extend it for BIG-IQ, so I looked into the code, but I decided the changes I wanted to make meant it made more sense to start from scratch. Some details about these differences are below.

HTTP paths

HTTP paths can be seen as just a tmsh command. In the following examples, the HTTP path is "/mgmt/tm/ltm/pool".

F5-SDK:
    mgmt.tm.ltm.pools.pool.create(name='mypool', partition='Common')
Python code for every HTTP path; requires more code.

BIGREST:
    device.create("/mgmt/tm/ltm/pool", {"name": "mypool", "partition": "Common"})
The user tells the SDK which HTTP path they want to use; there is less code to write, all current HTTP paths are supported, and new HTTP paths are automatically supported.

BIG-IQ and Python support

F5-SDK:
    Created to support the BIG-IP REST API.
    Supports Python 2 and Python 3; Python 2 was discontinued in 2020.

BIGREST:
    Created to support BIG-IP and BIG-IQ.
    Supports only Python 3, so the code can use new Python 3 functionality to make it simpler to write and read.

Method names

F5-SDK uses some names from the REST API, like collection. Example:
    mgmt.tm.ltm.pools.get_collection()

BIGREST tries to use only tmsh names. Example:
    device.load("/mgmt/tm/ltm/pool")

In this case, you load the objects into memory, and if you want, you save them afterwards. This is similar to loading the configuration from disk using tmsh and saving it back to disk afterwards.

I wrote very extensive documentation explaining how the SDK works, so you will find all the details there. For more information, including the link to the code and documentation, go to the code share: https://devcentral.f5.com/s/articles/BIGREST
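To make the comparison concrete, a minimal BIGREST session might look like the sketch below. The device address and credentials are assumptions, and session_verify=False simply skips certificate validation for a self-signed device certificate.

from bigrest.bigip import BIGIP

# assumed management address and credentials
device = BIGIP("192.168.1.245", "admin", "password", session_verify=False)

# equivalent of "tmsh list ltm pool" - returns a list of RESTObject instances
pools = device.load("/mgmt/tm/ltm/pool")
for pool in pools:
    print(pool.properties["name"])

# equivalent of "tmsh create ltm pool mypool"
device.create("/mgmt/tm/ltm/pool", {"name": "mypool", "partition": "Common"})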
Getting Started with the f5-common-python SDK

If you have dabbled with python and iControl over the years, you might be familiar with some of my other "Getting Started with…" articles on python libraries. I started my last, on Bigsuds, this way:

I imagine the progression for you, the reader, will be something like this in the first six- or seven-hundred milliseconds after reading the title: Oh cool! Wait, what? Don't we already have like two libraries for python? Really, a third library for python?

It's past time to update those numbers, as the fourth library in our python support evolution, the f5-common-python SDK, has been available since March of last year! I still love Bigsuds, but it only supports the iControl SOAP interface. The f5-common-python SDK is under continuous development in support of the iControl REST interface, and like Bigsuds, does a lot of the API heavy lifting for you so you can just focus on the logic of bending BIG-IP configuration to your will. Not all endpoints are supported yet, but please feel free to open an issue on the GitHub repo if there's something missing you need for your project. In this article, I'll cover the basics of installing the SDK and how to utilize the core functionality.

Installing the SDK

This section is going to be really short, as the SDK is uploaded to PyPI after each release, though you can clone the GitHub project and run the development branch with the latest features if you so desire. I'd recommend installing in a virtual environment to keep your system python uncluttered, but YMMV.

pip install f5-sdk

A simple one-liner and we're done! Moving on...

Instantiating BIG-IP

The first thing you'll want to do with your shiny new toy is authenticate to the BIG-IP. You can use basic or token authentication to do so. I disable the certificate security warnings on my test boxes, but the first two lines in the sample code below are not necessary if you are using valid certificates:

>>> import requests
>>> requests.packages.urllib3.disable_warnings()
>>> from f5.bigip import ManagementRoot
>>> # Basic Authentication
>>> b = ManagementRoot('ltm3.test.local', 'admin', 'admin')
>>> # Token Authentication
>>> b = ManagementRoot('ltm3.test.local', 'admin', 'admin', token=True)
>>> b.tmos_version
u'12.1.0'

The b object has credentials attached and various other attributes as well, such as the tmos_version attribute shown above. This is the root object you'll use (of course you don't have to call it b, you can call it plutoWillAlwaysBeAPlanetToMe if you want to, but that's a lot more typing) for all the modules you might interact with on the system.

Nomenclature

The method mappings are tied to the tmsh and REST URL ids. Consider the tmsh command tmsh list /ltm pool. In the URL, this would be https://ip/mgmt/tm/ltm/pool. For the SDK, at the collection level the command would be b.tm.ltm.pools. It's plural here because we are signifying the collection. If there is a collection already ending in an s, like the subcollection of a pool in members, it would be addressed as members_s. This will become clearer as we work through examples in later articles, but I wanted to provide a little guidance before moving on.

Working with Collections

There are two types of collections (well, three if you include subcollections, but we'll cover those in a later article): organizing collections and collections. An organizing collection is a superset of other collections. For example, the ltm or net module listing would be an organizing collection, whereas ltm/pool or net/vlan would be collections.
To retrieve either type, you use the get_collection method as shown below, with abbreviated output.

# The LTM Organizing Collection
>>> for x in b.tm.ltm.get_collection():
...     print x
...
{u'reference': {u'link': u'https://localhost/mgmt/tm/ltm/auth?ver=12.1.0'}}
{u'reference': {u'link': u'https://localhost/mgmt/tm/ltm/data-group?ver=12.1.0'}}
{u'reference': {u'link': u'https://localhost/mgmt/tm/ltm/dns?ver=12.1.0'}}

# The Net/Vlan Collection:
>>> vlans = b.tm.net.vlans.get_collection()
>>> for vlan in vlans:
...     print vlan.name
...
vlan10
vlan102
vlan103

Working with Named Resources

A named resource, like a pool, vip, or vlan, is a fully configurable object for which the CURDLE methods are supported. These methods are:

create()
update()
refresh()
delete()
load()
exists()

Let's work through all these methods with a pool object.

>>> b.tm.ltm.pools.pool.exists(name='mypool2017', partition='Common')
False
>>> p1 = b.tm.ltm.pools.pool.create(name='mypool2017', partition='Common')
>>> p2 = b.tm.ltm.pools.pool.load(name='mypool2017', partition='Common')
>>> p1.loadBalancingMode = 'least-connections-member'
>>> p1.update()
>>> assert p1.loadBalancingMode == p2.loadBalancingMode
Traceback (most recent call last):
  File "", line 1, in
AssertionError
>>> p2.refresh()
>>> assert p1.loadBalancingMode == p2.loadBalancingMode
>>> p1.delete()
>>> b.tm.ltm.pools.pool.exists(name='mypool2017', partition='Common')
False

Notice in line 1, I am looking to see if the pool called mypool2017 exists, to which I get a return value of False. So I can go ahead and create that pool as shown in line 3. In line 4, I load the same pool so I have two local python objects (p1, p2) that reference the same BIG-IP pool (mypool2017.) In line 5, I update the load balancing algorithm from the default of round robin to least connections member. But at this point, only the local python object has been updated. To update the BIG-IP, in line 6 I apply that method to the object. Now if I assert the LB algorithm between the local p1 and p2 python objects as shown in line 7, it fails, because we have updated p1, but p2 is still as it was when I initially loaded it. Refreshing p2 as shown in line 11 will update it (the local python object, not the BIG-IP pool.) Now I assert again in line 12, and it does not fail. As this was just an exercise, I delete the new pool (could be done on p1 or p2 since they reference the same BIG-IP object) in line 13, and a quick check to see if it exists in line 14 returns false.

The great thing is that even though the endpoints change from pool to virtual to rule and so on, the methods used for them do not.

Next Steps

This is just the tip of the iceberg! There is much more to cover, so come back for the next installment, where we'll cover unnamed resources and commands. If you can't wait, feel free to dig into the SDK documentation.
Generate the iRules Runtime Calculator Excel Spreadsheet with the Python SDK

This last week I noticed an internal request for the iRules Runtime Calculator Excel Spreadsheet that we've hosted here on DevCentral for many years. The spreadsheet requires the following input from the user:

The BIG-IP CPU "speed", which is found by multiplying the number of cores times the MHz value reported on the processor times 1,000,000. These values are found in the /proc/cpuinfo system file.
The min/avg/max CPU cycles for each event in the iRule the user is analyzing. This requires:
    The user to activate iRule timing by using the "timing on" command in the iRule. I usually do this on the first line if I'm looking at global performance. You can also enable it on specific events by using it in the event call like "when EVENT timing on { }".
    Showing the rule output to collect the cycles via "tmsh show ltm rule RULE field-fmt" and then copying that over to the spreadsheet.

This is all well and good, but to do this repeatedly while troubleshooting and/or optimizing is quite tedious. In this article, I'll show you how to auto-generate the spreadsheet using the F5 Python SDK and a handy little module called XlsxWriter.

Running Test Traffic to Populate the iRule Statistics

This is the simple part. In my lab, I just use the apachebench command line utility either directly from my Mac or from the Ubuntu shell on my Windows 10 box. I like to hit at least ten-thousand requests just to dial down the extremes of the max value, but you can go much longer than I did for these tests.

ab -n 5000 -c 5 -k http://192.168.102.50/

This makes 5000 requests with 5 concurrent requests and session keepalives enabled. This will result in (if resetting your iRule stats between runs) a total of 5000 requests in the HTTP events, but a far reduced number of CLIENT_ACCEPTED.

Interrogating the BIG-IP for iRule Information and Stats

First, let's look at examples of the spreadsheet requirements.

# CPU Info from /proc/cpuinfo
[root@ltm13:Active:Standalone] config # cat /proc/cpuinfo | grep ^processor
processor : 0
processor : 1
[root@ltm13:Active:Standalone] config # cat /proc/cpuinfo | grep MHz
cpu MHz : 3400.606
cpu MHz : 3400.606

One could use the iControl REST bash utility to cat that file, grep the important data, then parse it down into the necessary information. In fact, my first iteration of the script did exactly that and used regex to parse the important stuff. But one of my early testers had an issue with no access to the bash utility. So I had to find another way to eliminate that as a problem for others, and found that tmsh access to sys/hardware returned those values, albeit buried in a tangled web of nested stats. To get to a cpu speed for our spreadsheet, we need to access those stat values and then perform the calculation.
# Selflinks for cpu cores/speed, stats are deep nested
hw_sub1 = 'https://localhost/mgmt/tm/sys/hardware/hardware-version'
hw_sub2 = 'https://localhost/mgmt/tm/sys/hardware/hardware-version/cpus'
hw_sub3 = 'https://localhost/mgmt/tm/sys/hardware/hardwareVersion/cpus/versions'
hw_sub4_cores = 'https://localhost/mgmt/tm/sys/hardware/hardwareVersion/cpus/versions/1'
hw_sub4_speed = 'https://localhost/mgmt/tm/sys/hardware/hardwareVersion/cpus/versions/2'

# Grab the hardware info from BIG-IP
hw = obj.tm.sys.hardware.load()

# Grab the processor MHz value recorded for the processor
cpu_MHz = hw.entries\
    .get(hw_sub1).get('nestedStats').get('entries')\
    .get(hw_sub2).get('nestedStats').get('entries')\
    .get(hw_sub3).get('nestedStats').get('entries')\
    .get(hw_sub4_speed).get('nestedStats').get('entries')\
    ['version']['description']

# Grab the number of cores recorded for the system
cpu_cores = hw.entries\
    .get(hw_sub1).get('nestedStats').get('entries')\
    .get(hw_sub2).get('nestedStats').get('entries')\
    .get(hw_sub3).get('nestedStats').get('entries')\
    .get(hw_sub4_cores).get('nestedStats').get('entries')\
    ['version']['description']

# The cores value has text in addition to the count, isolate and store
cpu_cores = cpu_cores.split(' ')[0]

# Calculate the total CPU speed in Hz, not MHz
cpu_speed = float(cpu_MHz) * int(cpu_cores) * 1000000

Next we need an iRule applied to a virtual server so we can populate the spreadsheet with data, so this simple test iRule works nicely.

# iRule timing command examples
timing on
when CLIENT_ACCEPTED {
    log local0. "Timestamp: [clock clicks -milliseconds]"
}
when HTTP_REQUEST {
    log local0. "Timestamp: [clock clicks -milliseconds]"
}
when HTTP_RESPONSE {
    log local0. "Timestamp: [clock clicks -milliseconds]"
}

The iRule is available on the command line via "tmsh list ltm rule RULE". In the SDK, we load it like this:

# Grab the iRule
r1 = args.rule[0]
r = obj.tm.ltm.rules.rule.load(name=r1)

Finally, the statistics we are after for each event are the number of executions and the minimum, average, and maximum CPU cycles it took to complete the event.

# iRule min/avg/max cycles information
ltm rule-event event_order:HTTP_RESPONSE {
    aborts 0
    avg-cycles 43.0K
    event-type HTTP_RESPONSE
    failures 0
    max-cycles 853.9K
    min-cycles 6.5K
    name event_order
    priority 500
    total-executions 2.0K
}

You can see this information on the command line via "tmsh show ltm rule RULE". In the SDK, we take the rule object we've already loaded (r) and load the stats:

# Grab the iRule stats
rstats = r.stats.load()

From the SDK perspective, the BIG-IP work required is complete. The heavy lifting in this script is accessing the XlsxWriter module, which we'll cover next.

Working with XlsxWriter

This python module is amazing. It's powerful feature-wise, and it's one of the better documented projects I've worked with. Major tip of the hat to the author, John McNamara! The first thing we need to do is create a workbook.

Excel Workbooks

Given that you might want to track several runs of the same iRule and to make it easy to distinguish what workbook you need to open, the format for the name is iRulesRuntimeCalculator__iRuleName__timestamp. Also, we want to make the default window size large enough to see most of the data without having to resize it after opening.

# Get the current time
timestr = time.strftime("%Y%m%d-%H%M%S")

# Name the workbook iRuleRuntimeCalculator__<rulename>__<timestamp>
fname = 'iRulesRuntimeCalculator__{}__{}.xlsx'.format(r.name, timestr)
workbook = xlsxwriter.Workbook(fname)

# Set the initial Excel window size
workbook.set_size(1500, 1200)

We also want to include some different cell formatting options for titles, headers, percentages, etc. Those formats are below.

# iRule textbox formatting
textbox_options = {
    'width': 1200,
    'height': 1400,
    'font': {
        'color': 'white',
        'size': 16
    },
    'align': {
        'vertical': 'top'
    },
    'gradient': {
        'colors': ['#00205f', '#84358e']
    }
}

# Title Block formatting
title_format = workbook.add_format({
    'bold': 1,
    'border': 1,
    'align': 'center',
    'valign': 'vcenter',
})
title_format.set_font_size(20)
title_format.set_font_color('white')
title_format.set_bg_color('#00205f')

# Section Header Formatting
secthdr_format = workbook.add_format({
    'bold': 1,
    'border': 1,
    'align': 'center',
    'valign': 'vcenter',
})
secthdr_format.set_font_size(16)
secthdr_format.set_font_color('white')
secthdr_format.set_bg_color('#00205f')

# Table Data Formatting - BOLD for headers and total
tabledata_format = workbook.add_format({
    'bold': 1,
    'border': 1,
    'align': 'center',
    'valign': 'vcenter',
})
tabledata_format.set_font_size(14)

# Table Data Formatting - ints for rule data max requests
tabledata2_format = workbook.add_format({
    'bold': 1,
    'border': 1,
    'align': 'center',
    'valign': 'vcenter',
    'num_format': '0',
})
tabledata2_format.set_font_size(14)

# Table Data Formatting - percentages for rule data
tabledata3_format = workbook.add_format({
    'bold': 1,
    'border': 1,
    'align': 'center',
    'valign': 'vcenter',
    'num_format': '0.0000000000000%',
})
tabledata3_format.set_font_size(14)

Now that we have our workbook created and the formats we'll use to populate cells with data, let's move on to the worksheets.

Excel Worksheets

First, let's create the worksheets. We need two: one for the iRule performance data and one for the iRule itself.

worksheet1 = workbook.add_worksheet('iRule Stats')
worksheet2 = workbook.add_worksheet('iRule Contents')

Next, let's generate the header information for worksheet1 that you'll find in the original document linked at the top of this article.

worksheet1.set_column(1, 1, 30)
worksheet1.set_column(2, 2, 15)
worksheet1.set_column(3, 5, 25)
worksheet1.merge_range('B2:F2', 'iRules Runtime Calculator - {}'.format(r.name), title_format)
worksheet1.write_string(4, 1, 'BIG-IP version: {}, OS version: {} {}, '
                              'Python version: {}'.format(bigip_version, platform.system(),
                                                          platform.release(), platform.python_version()))
worksheet1.write_string(5, 1, 'For more details, see article "Intermediate iRules: '
                              'Evaluating Performance" on DevCentral:')
worksheet1.write_string(6, 1, 'https://devcentral.f5.com/s/articles/intermediate-irules-evaluating-performance-20433')
worksheet1.write_string(8, 1, 'Cycles/Sec', tabledata_format)
worksheet1.write_number(8, 3, cpu_speed, tabledata_format)

This is writing values to the specified cell/column location in the worksheet and applying the formatting we created above as appropriate. The result in the generated spreadsheet for this section of code is shown below. Providing the BIG-IP, OS, and Python versions will help provide context between iterations of testing as any or all of those things can change. (Darwin is Mac OS, I have no idea why. Drop a comment below if you know!)

The second section in this worksheet is the section that you have to fill out manually if you use the original document. But we'll take the data we collected from the BIG-IP in the earlier code samples and populate it instead.

rowval = 12
event_list = []
for sl in rstats.entries:
    raw_data = rstats.entries.get(sl).get('nestedStats').get('entries')
    event_name = raw_data['eventType']['description']
    event_list.append(event_name)
    executions = raw_data['totalExecutions']['value']
    min_cycles = raw_data['minCycles']['value']
    avg_cycles = raw_data['avgCycles']['value']
    max_cycles = raw_data['maxCycles']['value']
    worksheet1.write_row(rowval, 1, (str(event_name), int(executions), int(min_cycles),
                                     int(avg_cycles), int(max_cycles)), tabledata_format)
    rowval += 1
worksheet1.write_string(rowval, 1, 'Total', tabledata_format)
worksheet1.write_formula(rowval, 2, '=MAX({})'.format(xl_range(12,2,rowval-1,2)), tabledata_format)
worksheet1.write_formula(rowval, 3, '=SUM({})'.format(xl_range(12,3,rowval-1,3)), tabledata_format)
worksheet1.write_formula(rowval, 4, '=SUM({})'.format(xl_range(12,4,rowval-1,4)), tabledata_format)
worksheet1.write_formula(rowval, 5, '=SUM({})'.format(xl_range(12,5,rowval-1,5)), tabledata_format)

We have to track what row we're on, because you might have more or less than the three events I've chosen to use in this example iRule and not tracking would end badly otherwise. We do that here with the rowval variable. We also create a list variable called event_list to store the names of the events for populating the analysis tables. Next, we iterate through the events in the rule statistics object and write each event's data with the write_row method. Finally, after we've iterated through the events, we use the write_formula method to summarize the cycles from each event. This results in the following table in the spreadsheet:

What follows in this spreadsheet is a series of three analysis tables. I'll walk through the final section on expected requests/sec here but the entire rule is linked at the conclusion of this article.

# increment rowval again to start third analysis table
rowval += 3

# Populate the Max # of requests table based on the rule stats
worksheet1.merge_range('B{0}:F{0}'.format(rowval), 'Max Requests', secthdr_format)
worksheet1.write_row(rowval, 1, ('Event Name', '# of Requests', 'MIN', 'AVG', 'MAX'), tabledata_format)
rowval += 1
for event in event_list:
    worksheet1.write_string(rowval, 1, event, tabledata_format)
    worksheet1.write_formula(rowval, 2, '=C{}'.format(rowval - (3 + len(event_list))), tabledata_format)
    worksheet1.write_formula(rowval, 3, '=1/D{}'.format(rowval - (3 + len(event_list))), tabledata2_format)
    worksheet1.write_formula(rowval, 4, '=1/E{}'.format(rowval - (3 + len(event_list))), tabledata2_format)
    worksheet1.write_formula(rowval, 5, '=1/F{}'.format(rowval - (3 + len(event_list))), tabledata2_format)
    rowval += 1
worksheet1.write_string(rowval, 1, 'Total', tabledata_format)
worksheet1.write_formula(rowval, 2, '=MAX({})'.format(xl_range(rowval-len(event_list),2,rowval-1,2)), tabledata_format)
worksheet1.write_formula(rowval, 3, '=1/D{}'.format(rowval - (3 + len(event_list))), tabledata2_format)
worksheet1.write_formula(rowval, 4, '=1/E{}'.format(rowval - (3 + len(event_list))), tabledata2_format)
worksheet1.write_formula(rowval, 5, '=1/F{}'.format(rowval - (3 + len(event_list))), tabledata2_format)

Each section starts with incrementing the rowval variable to advance the cursor down the spreadsheet so as to provide a little space between sections. After writing the section header info, we iterate through the event list we stored and calculate the max number of requests (at the reported min/avg/max cycles per event), and then take an overall total based not on our calculated totals from this section, but on the totals in the previous section. This results in the following screenshot:

This wraps up the work in worksheet1. I did notice that on Windows, the formatting for integers instead of decimals doesn't seem to work. The final step for content is to write the iRule itself to worksheet2. This is a one liner thanks to our formatting options and the insert_textbox method.

worksheet2.insert_textbox('B2', r.apiAnonymous, textbox_options)

The screenshot of that handiwork is below.

Wrapping Up

Finally, let's print a notification to the console and then close the workbook and call it good!

print('\n\n\tHoly iRule perfomance analysis, Batman! Your mission file is {}\n\n'.format(fname))

# Close the workbook
workbook.close()

I didn't include the BIG-IP instantiation or some of the other little details in the snippets of code above, but they are all included in the codeshare entry. Usage:

(py35) mymac:scripts me$ python runtime_calc_v2.py ltm3.test.local admin event_order
Well hello admin, please enter your password:

Holy iRule perfomance analysis, Batman! Your mission file is iRulesRuntimeCalculator__event_order__20190910-181009.xlsx

Give it a shot and let me know what you think!
Parsing F5 BIG-IP LTM DNS profile statistics and extracting values with Python

Introduction

Hello there! Arvin here from the F5 SIRT. A little while ago, I published F5 BIG-IP Advanced Firewall Manager (AFM) DNS NXDOMAIN Query Attack Type Walkthrough part one and two, where I went through the process of reviewing BIG-IP LTM DNS profile statistics and used it to set BIG-IP AFM DNS NXDOMAIN Query attack type detection and mitigation thresholds with the goal of mitigating DNS NXDOMAIN floods. In this article, I continue to look at BIG-IP LTM DNS profile statistics and find ways of parsing them and extracting specific values of interest through Python.

Python for Network Engineers

Python has emerged as a go-to language for network engineers, providing a powerful and accessible toolset for managing and automating network tasks. Known for its simplicity and readability, Python enables network engineers to script routine operations, automate repetitive tasks, and interact with network devices through APIs. With extensive libraries and frameworks tailored to networking, Python empowers engineers to streamline configurations, troubleshoot issues, and enhance network efficiency. Its versatility makes it an invaluable asset for network automation, allowing engineers to adapt to evolving network requirements and efficiently manage complex infrastructures. Whether you're retrieving data, configuring devices, or optimizing network performance, Python simplifies the process for network engineers, making it an essential skill in the modern networking landscape.

The Tools

ChatGPT 3.5
The "Python for Network Engineers" intro came from ChatGPT 3.5 [:)]. Throughout this article, the Python coding "bumps" avoidance and approaches came from ChatGPT 3.5. Instead of googling, I asked ChatGPT "a lot" so I could get the Python scripts to produce the output I wanted. https://chat.openai.com/

Visual Studio Code
Using Visual Studio Code (VSCode) to build the scripts was very helpful, especially the tooltips / hints, which tell me about and help make sense of the available options for the modules used and describe the Python data structures.

Python 3.10
(From ChatGPT) Python 3.10, the latest version of the Python programming language, brings forth new features and optimizations that enhance the language's power and simplicity. With Python's commitment to readability and ease of use, version 3.10 introduces structural pattern matching, allowing developers to express complex logic more concisely. Other improvements include precise types, performance enhancements, and updates to syntax for cleaner code. Python 3.10 continues to be a versatile and accessible language, serving diverse needs from web development to data science and automation. Its vibrant community and extensive ecosystem of libraries make Python 3.10 a top choice for developers seeking both efficiency and clarity in their code.

Python Script - extract DNS A requests value from LTM DNS profile statistics iControl REST output

This Python script extracts the DNS A requests value from the LTM DNS profile statistics iControl REST output. Python has many modules that can be used to simplify tasks. iControl REST output is in json format, so as expected, I used the json module. I wanted to format the output data in csv format so the extracted data can later be used in other tools that consume csv formatted data, thus, I used the csv module. I also used the os, time/datetime and tabulate modules for working with the filesystem (I used a Windows machine to run Python and VSCode) and to write csv files.
Create variables with date and time information that will be used in formatting the csv file name, keep track of the "A record requests" value at script execution, and present a tabulated output of the captured time and data when the script is executed. I also used the BIGREST module to query/retrieve the "show ltm dns profile <DNS profile> statistics" output instead of getting the output from an iControl REST request sent through other methods.

https://bigrest.readthedocs.io/introduction.html
https://bigrest.readthedocs.io/bigip_show.html

Here is the sample script output.

Here is the sample CSV-formatted data in a csv file with the timestamp of the script run.

I created a github repository for the Python script and its sample script output and csv data, see https://github.com/arvfopa/scripts/tree/main

https://github.com/arvfopa/scripts/blob/main/extractAreq - Python Script "extractAreq"
https://github.com/arvfopa/scripts/blob/main/extractAreq_output - "extractAreq" output

Bumps along the way

BIGREST module

I initially encountered an error, 'certificate verify failed: self signed certificate', when only the IP address and credentials were provided to the BIGIP class of the bigrest.bigip Python module.

raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='IP address', port=443): Max retries exceeded with url: /mgmt/shared/echo-query (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1007)')))

This is fixed by setting the "session_verify" argument of the BIGIP class to "False", which disables SSL certificate validation.

device = BIGIP("<IP address>", "<username>", "<password>", session_verify=False)

https://bigrest.readthedocs.io/utils.html

I also received this error, "TypeError: Object of type RESTObject is not JSON serializable":

raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type RESTObject is not JSON serializable

I reread the BIGREST documentation and found that the output is a Python dictionary and can be printed in json format. I rechecked the script, removed the json related syntax and module, and the script runs fine and still gets the same output. I updated the script on github with the simplified changes.

https://bigrest.readthedocs.io/restobject.html

Here's a sample of the RESTObject properties dictionary values. Plenty of data can be extracted. For example, the "clientside.pktsIn" value, a virtual server statistic, can be observed should detection and mitigation thresholds need to be set for AFM, say, for the UDP protocol DoS attack type. This value can be monitored over time to understand how many packets a virtual server receives.
==============
{'clientside.bitsIn': {'value': 0},
 'clientside.bitsOut': {'value': 0},
 'clientside.curConns': {'value': 0},
 'clientside.evictedConns': {'value': 0},
 'clientside.maxConns': {'value': 0},
 'clientside.pktsIn': {'value': 0},
 'clientside.pktsOut': {'value': 0},
 'clientside.slowKilled': {'value': 0},
 'clientside.totConns': {'value': 0},
 'cmpEnableMode': {'description': 'all-cpus'},
 'cmpEnabled': {'description': 'enabled'},
 'csMaxConnDur': {'value': 0},
 'csMeanConnDur': {'value': 0},
 'csMinConnDur': {'value': 0},
 'destination': {'description': '10.73.125.137:53'},
 'ephemeral.bitsIn': {'value': 0},
 'ephemeral.bitsOut': {'value': 0},
 'ephemeral.curConns': {'value': 0},
 'ephemeral.evictedConns': {'value': 0},
==============

CSV filename issue

I encountered this error, "OSError: [Errno 22] Invalid argument: 'dns_stats_2023-12-07_18:01:11.csv'", which is related to the writing of the output csv file. I asked ChatGPT what this was about and was provided with this answer.

=======================
The error you're encountering, "[Errno 22] Invalid argument," typically suggests an issue with the filename or file path. In this case, it seems to be related to the colon (':') character in the filename. In some operating systems (like Windows), certain characters are not allowed in filenames, and ":" is one of them. Since you're including a timestamp in the filename, it's common to replace such characters with alternatives. You can modify the timestamp format to use underscores or hyphens instead of colons.
====================

The timestamp variable in the script stores the value of the formatted timestamp that will be used in the filename. It initially used a colon (:) as the hour/min/sec separator. It was changed to a dash (-) so it would not encounter this error.

timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")

Checking if the csv file exists

The function "write_to_csv" writes the time of collection (formatted_date) and the extracted value of the DNS A requests count (AReqsvalue). It is called every 10 seconds [time.sleep(10)] for a minute [end_time = time.time() + 60] and writes the output to a file in csv format. The "tabulate" function formats the output of the script. Getting the arrangement of the execution wrong would result in unexpected output. The "file_exists" check to write the "headers" was added to make sure that the "headers" are only written once.

"write_to_csv" function
========================
def write_to_csv(formatted_date, AReqsvalue):
    current_datetime = datetime.now()
    formatted_date = current_datetime.strftime("%Y-%m-%d %H:%M:%S")
    csv_filename = f"dns_stats_{timestamp}.csv"
    headers = ["Date", "DNS A requests"]
    stats = [[formatted_date, AReqsvalue]]
    file_exists = os.path.exists(csv_filename)
    print(tabulate(stats, headers, tablefmt="fancy_grid"))
    with open(csv_filename, mode='a', newline='') as file:
        writer = csv.writer(file)
        if not file_exists:
            writer.writerow(headers)
        writer.writerows(stats)

end_time = time.time() + 60
while time.time() < end_time:
    write_to_csv(formatted_date, AReqsvalue)
    time.sleep(10)
==========================

Using ChatGPT

In building this script, I used ChatGPT "a lot" and it helped me make more sense of the module options, errors and sample scripts. It has been a helpful tool. It tracks your conversation/questions and kind of understands the context/topic. "ChatGPT can make mistakes. Consider checking important information." is written at the bottom of the page.
The data I used in this article is data from a lab environment. That said, when using public AI/ML systems, we should ensure we do not send any sensitive or proprietary information. Organizations have rolled out their own privacy policies for using AI/ML systems; be sure to follow your own organization's policies.

Conclusion

Using Python to parse and extract values of interest from LTM profile statistics offers flexibility and hopefully simplifies observing and recording these data for further use. In particular, setting values for BIG-IP AFM DoS detection and mitigation thresholds will be easier if such data has been observed, as it is, in my opinion, the "pulse" of the traffic the BIG-IP processes. As noted in the sample json data output, we can see many statistics that can be reviewed and observed to make relevant configuration changes, for example, mitigating a connection spike by setting a VS connection/rate limit. We can look at the "Conns" values and use the observed values to set a connection limit. Example:

'clientside.curConns': {'value': 0},
'clientside.evictedConns': {'value': 0},
'clientside.maxConns': {'value': 0},
'clientside.totConns': {'value': 0}

That's it for now. I hope this article has been educational.

The F5 SIRT creates security-related content posted here in DevCentral, sharing the team's security mindset and knowledge. Feel free to view the articles that are tagged with the following:

F5 SIRT
series-F5SIRT-this-week-in-security
TWIS
Demystifying iControl REST Part 7 - Understanding Transactions

iControl REST. It's iControl SOAP's baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written about the rest interface, so the intent here isn't basic use, but rather to demystify some of the finer details of using the API. A few months ago, a question in Q&A from community member spirrello asked how to update a tcp profile on a virtual. He was using bigsuds, the python wrapper for the soap interface. For the rest interface on this particular object, this is easy; just use the put method and supply the payload mapping the updated profile. But for soap, this requires a transaction. There are some changes to BIG-IP via the rest interface, however, like updating an ssl cert or key, that likewise will require a transaction to accomplish. In this article, I'll show you how to use transactions with the rest interface.

The Fine Print

From the iControl REST user guide, the life cycle of a transaction progresses through three phases:

Creation - This phase occurs when the transaction is created using a POST command.
Modification - This phase occurs when commands are added to the transaction, or changes are made to the sequence of commands in the transaction.
Commit - This phase occurs when iControl REST runs the transaction.

To create a transaction, post to /tm/transaction

POST https://192.168.25.42/mgmt/tm/transaction
{}

Response:
{
    "transId":1389812351,
    "state":"STARTED",
    "timeoutSeconds":30,
    "kind":"tm:transactionstate",
    "selfLink":"https://localhost/mgmt/tm/transaction/1389812351?ver=11.5.0"
}

Note the transId, the state, and the timeoutSeconds. You'll need the transId to add or re-sequence commands within the transaction, and the transaction will expire after 30 seconds if no commands are added. You can list all transactions, or the details of a specific transaction, with a get request.

GET https://192.168.25.42/mgmt/tm/transaction
GET https://192.168.25.42/mgmt/tm/transaction/transId

To add a command to the transaction, you use the normal method uris, but include the X-F5-REST-Coordination-Id header. This example creates a pool with a single member.

POST https://192.168.25.42/mgmt/tm/ltm/pool
X-F5-REST-Coordination-Id:1389812351
{
    "name":"tcb-xact-pool",
    "members": [
        {"name":"192.168.25.32:80","description":"First pool for transactions"}
    ]
}

Not a great example because there is no need for a transaction here, but we'll roll with it! There are several other optional methods for interrogating the transaction itself, see the user guide for details. Now we can commit the transaction. To do that, you reference the transaction id in the URI, remove the X-F5-REST-Coordination-Id header, and use the patch method with payload key/value state: VALIDATING.

PATCH https://localhost/mgmt/tm/transaction/1389812351
{
    "state":"VALIDATING"
}

That's all there is to it! Now that you've seen the nitty gritty details, let's take a look at some code samples.

Roll Your Own

In this example, I need to update an ssl key and certificate. If you try to update the cert or the key alone, it will complain that they do not match, so you need to update both at the same time. Assuming you are writing all your code from scratch, this is all it takes in python. Note on line 21 I post with an empty payload, and then on line 23, I add the header with the transaction id. I make my modifications and then on line 31, I remove the header, and finally on line 32, I patch to the transaction id with the appropriate payload.
import json
import requests

btx = requests.session()
btx.auth = (f5_user, f5_password)
btx.verify = False
btx.headers.update({'Content-Type':'application/json'})

urlb = 'https://{0}/mgmt/tm'.format(f5_host)
domain = 'mydomain.local_sslobj'
chain = 'mychain_sslobj'

try:
    key = btx.get('{0}/sys/file/ssl-key/~Common~{1}'.format(urlb, domain))
    cert = btx.get('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, domain))
    chain = btx.get('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, 'chain'))

    if (key.status_code == 200) and (cert.status_code == 200) and (chain.status_code == 200):
        # use a transaction
        txid = btx.post('{0}/transaction'.format(urlb), json.dumps({})).json()['transId']
        # set the X-F5-REST-Coordination-Id header with the transaction id
        btx.headers.update({'X-F5-REST-Coordination-Id': txid})
        # make modifications
        modkey = btx.put('{0}/sys/file/ssl-key/~Common~{1}'.format(urlb, domain), json.dumps(keyparams))
        modcert = btx.put('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, domain), json.dumps(certparams))
        modchain = btx.put('{0}/sys/file/ssl-cert/~Common~{1}'.format(urlb, 'le-chain'), json.dumps(chainparams))
        # remove header and patch to commit the transaction
        del btx.headers['X-F5-REST-Coordination-Id']
        cresult = btx.patch('{0}/transaction/{1}'.format(urlb, txid), json.dumps({'state':'VALIDATING'})).json()

A Little Help from a Friend

The f5-common-python library was released a few months ago to relieve you of a lot of the busy work with building requests. This is great, especially for transactions. To simplify the above code just to the transaction steps, consider:

# use a transaction
txid = btx.post('{0}/transaction'.format(urlb), json.dumps({})).json()['transId']

# set the X-F5-REST-Coordination-Id header with the transaction id
btx.headers.update({'X-F5-REST-Coordination-Id': txid})

# do stuff here

# remove header and patch to commit the transaction
del btx.headers['X-F5-REST-Coordination-Id']
cresult = btx.patch('{0}/transaction/{1}'.format(urlb, txid), json.dumps({'state':'VALIDATING'})).json()

With the library, it's simplified to:

tx = b.tm.transactions.transaction
with TransactionContextManager(tx) as api:
    # do stuff here
    api.do_stuff

Yep, it's that simple. So if you haven't checked out the f5-common-python library, I highly suggest you do! I'll be writing about how to get started using it next week, and perhaps a follow up on how to contribute to it as well, so stay tuned!
Demystifying iControl REST Part 3 - How to pass query parameters and tmsh options

iControl REST. It's iControl SOAP's baby brother, introduced back in TMOS version 11.4 as an early access feature but released fully in version 11.5. Several articles on basic usage have been written on iControl REST (see the resources at the bottom of this article) so the intent here isn't basic use, but rather to demystify some of the finer details of using the API. The first article covered URI specifics, the second article discussed subcollections, and this third article will cover query parameters.

Query Parameter Definitions

F5 has documented a number of query parameters that can be passed into iControl REST calls in order to modify their behavior. The first set follows the OData (open data protocol) standard. The filter parameter also supports several operators.

$filter
$select
$skip
$top

Yes, the dollar sign is important and necessary on these parameters. The operators you can use with these parameters are below. Note that the eq operator can only be used with the filter.

eq - equal
ne - not equal
lt - less than
le - less than or equal
gt - greater than
ge - greater than or equal

Logical Operators:
and
or
not

Beyond the OData parameters, there are a few custom parameters as well.

expandSubcollections - allows you to get the subcollection data in the initial request for objects that have subcollections.
options - allows you to add arguments to the tmsh equivalent command. An example will be shown below.
ver - This is for the specific TMOS version. Setting this parameter guarantees consistent behavior through code upgrades. Please note that the JSON return data for a number of calls has changed between the initial release in 11.5.0 and the current release. No items have been removed, but key/value pairs in the output have been added.

Note the lack of a dollar sign on the custom parameters.

Example #1 - Filter

Now that we have the parameters and operators defined, let's take a look at some examples. First, we'll take a look at the $filter parameter. If you want to limit your results to a particular partition, your URL will look something like this:

https://172.16.44.128/mgmt/tm/ltm/pool?$filter=partition eq staging
https://172.16.44.128/mgmt/tm/ltm/pool?$filter=partition%20eq%20staging
https://172.16.44.128/mgmt/tm/ltm/pool?$filter=partition+eq+staging

As long as your client tool supports it, any of these formats will work, but the resulting selfLink reflects the latter format:

selfLink:"https://localhost/mgmt/tm/ltm/pool?$filter=partition+eq+staging

Example #2 - Select

I didn't post the return data from example 1 because it's a lot of data, even for a small set of returned results. Most of it is all the fields in a pool that are there and important, but default and not of as immediate importance as others. This is where the $select parameter comes in. If you just want to take a look at the name of the pool and, say, the load balancing mode, your URL will look like this (still filtering for the staging partition):

https://172.16.44.128/mgmt/tm/ltm/pool?$filter=partition+eq+staging&$select=name,loadBalancingMode

This results in a smaller subset of data limited to the fields we "selected":

items: [5]
0: {
    name: "sp1"
    loadBalancingMode: "round-robin"
}
1: {
    name: "sp2"
    loadBalancingMode: "round-robin"
}
2: {
    name: "sp3"
    loadBalancingMode: "round-robin"
}

Example #3 - Top & Skip

For larger sets of data, you can page through the objects in chunks with $top and $skip.
Example #3 - Top & Skip

For larger sets of data, you can page through the objects in chunks with $top and $skip. If $skip is not specified when $top is used, it behaves as though it were set to 0. Please note, however, that paging is restricted to collections and subcollections, so whereas this would work to page through the defined data groups, it would not work to page through the records of a data group. Let's add the $top parameter to our previous URL:

https://172.16.44.128/mgmt/tm/ltm/pool?$filter=partition eq staging&$select=name,loadBalancingMode&$top=2

#Results
{
  kind: "tm:ltm:pool:poolcollectionstate"
  selfLink: "https://localhost/mgmt/tm/ltm/pool?$filter=partition+eq+staging&$select=name%2CloadBalancingMode&$top=2&ver=12.0.0"
  currentItemCount: 2
  itemsPerPage: 2
  pageIndex: 1
  startIndex: 1
  totalItems: 5
  totalPages: 3
  items: [2]
    0: {
      name: "sp1"
      loadBalancingMode: "round-robin"
    }
    1: {
      name: "sp2"
      loadBalancingMode: "round-robin"
    }
  nextLink: "https://localhost/mgmt/tm/ltm/pool?$filter=partition+eq+staging&$select=name%2CloadBalancingMode&$top=2&$skip=2&ver=12.0.0"
}

So we got the same data back as before, only 2 items instead of the original 5, plus some additional fields that weren't there previously. Note that once $top is used, the key/value pairs "nextLink", "currentItemCount", and "totalItems" are added to the response. "nextLink" is the URI that will grab the next $top number of results from the query. If you parse this value and use it (once you have replaced localhost with your actual host information), you will not have to perform any paging calculations. "currentItemCount" tells you how many items have been returned in the current call; if this value is less than $top, you know you have reached the end of the items. "totalItems" tells you how many items would be returned if you did not page using $top.
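Here is a rough sketch, again with the requests library and placeholder credentials, of what a nextLink-driven paging loop might look like. The termination logic (stop when nextLink disappears or a short page comes back) is an assumption on my part rather than documented behavior, so test it against your own version.

# a minimal paging sketch, assuming the requests library and placeholder
# credentials; it follows nextLink until the collection is exhausted
import requests

HOST = '172.16.44.128'
TOP = 2

b = requests.Session()
b.auth = ('admin', 'admin')
b.verify = False

url = ('https://%s/mgmt/tm/ltm/pool'
       '?$filter=partition+eq+staging&$select=name&$top=%d' % (HOST, TOP))

names = []
while url:
    data = b.get(url).json()
    items = data.get('items', [])
    names.extend(item['name'] for item in items)
    next_link = data.get('nextLink')
    # assumed stop conditions: no nextLink, or a page shorter than $top
    if not next_link or len(items) < TOP:
        break
    # nextLink comes back with "localhost" as the host, so swap in the real one
    url = next_link.replace('localhost', HOST)

print(names)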
Example #4 - Options

For this next example, we'll start with tmsh. If you want to see the connections on the BIG-IP, you type "tmsh show sys conn" at the command line. This can be a very large set of data, however, so there are options on the command line to narrow it down, like cs-client-addr, cs-client-port, and so on. A narrowed-down request at the command line would look like "tmsh show sys conn cs-client-addr 10.0.0.1 cs-client-port 62223". To translate this kind of command to an API request, you need to use the options parameter:

https://172.16.44.128/mgmt/tm/sys/connection?options=cs-server-addr+192.168.102.50+cs-server-port+80

#Results
{
  kind: "tm:sys:connection:connectionstats"
  selfLink: "https://localhost/mgmt/tm/sys/connection?options=cs-server-addr+192.168.102.50+cs-server-port+80&ver=11.6.0"
  apiRawValues: {
    apiAnonymous: "Sys::Connections
192.168.102.5:57359  192.168.102.50:80  192.168.102.5:57359  192.168.103.11:8080  tcp  2  (tmm: 1)  none
Total records returned: 1
"
  }
}

Example #5 - Expanding Subcollections

For our final example, we'll use the expandSubcollections parameter. This is useful for querying objects like pools that have subcollections. Without the parameter, the pool data is returned with an indication that pool members is a subcollection, but the members themselves are not included. With the parameter, the pool members are returned along with the pool definition (only the first member is shown here for brevity):

https://172.16.44.128/mgmt/tm/ltm/pool/testpool?expandSubcollections=true

#Results
{
  kind: "tm:ltm:pool:poolstate"
  name: "testpool"
  fullPath: "testpool"
  generation: 1
  selfLink: "https://localhost/mgmt/tm/ltm/pool/testpool?expandSubcollections=true&ver=11.6.0"
  allowNat: "yes"
  allowSnat: "yes"
  ignorePersistedWeight: "disabled"
  ipTosToClient: "pass-through"
  ipTosToServer: "pass-through"
  linkQosToClient: "pass-through"
  linkQosToServer: "pass-through"
  loadBalancingMode: "round-robin"
  minActiveMembers: 0
  minUpMembers: 0
  minUpMembersAction: "failover"
  minUpMembersChecking: "disabled"
  queueDepthLimit: 0
  queueOnConnectionLimit: "disabled"
  queueTimeLimit: 0
  reselectTries: 0
  serviceDownAction: "none"
  slowRampTime: 10
  membersReference: {
    link: "https://localhost/mgmt/tm/ltm/pool/~Common~testpool/members?ver=11.6.0"
    isSubcollection: true
    items: [7]
      0: {
        kind: "tm:ltm:pool:members:membersstate"
        name: "192.168.103.10:80"
        partition: "Common"
        fullPath: "/Common/192.168.103.10:80"
        generation: 1
        selfLink: "https://localhost/mgmt/tm/ltm/pool/~Common~testpool/members/~Common~192.168.103.10:80?ver=11.6.0"
        address: "192.168.103.10"
        connectionLimit: 0
        dynamicRatio: 1
        ephemeral: "false"
        fqdn: {
          autopopulate: "disabled"
        }
      ...

Example #6 - Careful! Select Revisited

If you want to select just the name and address of the pool members from the previous example, it's not as simple as adding $select=name,address to that query. Remember the object/component/sub-component tiers from the earlier article: you need to specify the members subcollection in the URL, then attach your $select parameter.

https://172.16.44.128/mgmt/tm/ltm/pool/testpool/members?$select=name,address

#Results
{
  kind: "tm:ltm:pool:members:memberscollectionstate"
  selfLink: "https://localhost/mgmt/tm/ltm/pool/testpool/members?$select=name%2Caddress&ver=11.6.0"
  items: [7]
    0: {
      name: "192.168.103.10:80"
      address: "192.168.103.10"
    }
    1: {
      name: "192.168.103.10:8080"
      address: "192.168.103.10"
    }
    2: {
      name: "192.168.103.11:80"
      address: "192.168.103.11"
    }
  ...

A Whiteboard Wednesday Shout Out to iControl REST

I've summarized a lot of what was covered in this article in the following Whiteboard Wednesday video. Enjoy!
Demystifying iControl REST Part 2 - Understanding sub collections and how to use them

iControl REST. It's iControl SOAP's baby brother, introduced back in TMOS version 11.4 as an early access feature and released fully in version 11.5. Several articles on basic usage have been written on iControl REST (see the resources at the bottom of this article), so the intent here isn't basic use, but rather to demystify some of the finer details of using the API. The first article of this series covered the URI's role in the API. This second article covers how the URI path plays a role in how the API functions.

Working with Subcollections

When manipulating F5 configuration items with iControl REST, subcollections are a powerful tool. They allow you to manipulate specific items in a subcollection instead of having to manipulate the entire subcollection. Take the pool object, for example. If we just query the pool (testpool in this case), you'll notice the returned data does not list the pool members.

#query (via the Chrome advanced REST client using Authorization & Content-Type headers:)
https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool

#query (via curl:)
curl -k -u admin:admin https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool

#response:
{
  kind: "tm:ltm:pool:poolstate"
  name: "testpool"
  fullPath: "testpool"
  generation: 1
  selfLink: "https://localhost/mgmt/tm/ltm/pool/testpool?ver=11.6.0"
  allowNat: "yes"
  allowSnat: "yes"
  ignorePersistedWeight: "disabled"
  ipTosToClient: "pass-through"
  ipTosToServer: "pass-through"
  linkQosToClient: "pass-through"
  linkQosToServer: "pass-through"
  loadBalancingMode: "round-robin"
  minActiveMembers: 0
  minUpMembers: 0
  minUpMembersAction: "failover"
  minUpMembersChecking: "disabled"
  queueDepthLimit: 0
  queueOnConnectionLimit: "disabled"
  queueTimeLimit: 0
  reselectTries: 0
  serviceDownAction: "none"
  slowRampTime: 10
  membersReference: {
    link: "https://localhost/mgmt/tm/ltm/pool/~Common~testpool/members?ver=11.6.0"
    isSubcollection: true
  }
}

Notice the isSubcollection: true for the membersReference? This is an indicator that there is a subcollection for the members keyword. If you then query the members for that pool, you will get the subcollection.

#query (via the Chrome advanced REST client using Authorization & Content-Type headers:)
https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool/members

#query (via curl:)
curl -k -u admin:admin https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool/members

#response:
{
  kind: "tm:ltm:pool:members:memberscollectionstate"
  selfLink: "https://localhost/mgmt/tm/ltm/pool/testpool/members?ver=11.6.0"
  items: [4]
    0: {
      kind: "tm:ltm:pool:members:membersstate"
      name: "192.168.103.10:80"
      partition: "Common"
      fullPath: "/Common/192.168.103.10:80"
      generation: 1
      selfLink: "https://localhost/mgmt/tm/ltm/pool/testpool/members/~Common~192.168.103.10:80?ver=11.6.0"
      address: "192.168.103.10"
      connectionLimit: 0
      dynamicRatio: 1
      ephemeral: "false"
      fqdn: {
        autopopulate: "disabled"
      }
      inheritProfile: "enabled"
      logging: "disabled"
      monitor: "default"
      priorityGroup: 0
      rateLimit: "disabled"
      ratio: 1
      session: "user-enabled"
      state: "unchecked"
    }
}

You can see from the item count of four that there are four pool members, but I'm only showing one of them here for brevity.
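The same subcollection query is easy to script. Here is a minimal sketch with the requests library; the host, credentials, pool name, and the printed fields simply mirror the example above and are placeholders rather than anything new.

# a minimal sketch, assuming the requests library and placeholder credentials;
# it walks the members subcollection and prints a few fields per member
import requests

b = requests.Session()
b.auth = ('admin', 'admin')
b.verify = False   # lab box with a self-signed certificate

url = 'https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool/members'
for member in b.get(url).json().get('items', []):
    # name, address, and session all appear in the membersstate objects above
    print(member['name'], member['address'], member['session'])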
The pool members can be added, modified, or deleted at this level.

Add a pool member
URI: https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool/members
Method: POST
JSON: {"name": "192.168.103.12:80"}

Modify a pool member
URI: https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool/members/~Common~192.168.103.12:80
Method: PUT
JSON: {"name": "192.168.103.12:80", "connectionLimit": "50"}

Delete a pool member
URI: https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool/members/~Common~192.168.103.12:80
Method: DELETE
JSON: none (an error will trigger if you send any data)
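Those same three calls, expressed as a quick sketch with the requests library; the host and credentials are placeholders again, and the member address is the hypothetical one from the operations above.

# a minimal sketch of the add/modify/delete calls with the requests library
import json
import requests

b = requests.Session()
b.auth = ('admin', 'admin')
b.verify = False
b.headers.update({'Content-Type': 'application/json'})

members = 'https://172.16.44.128/mgmt/tm/ltm/pool/~Common~testpool/members'

# add a pool member
b.post(members, json.dumps({'name': '192.168.103.12:80'}))

# modify the new member's connection limit
b.put('%s/~Common~192.168.103.12:80' % members,
      json.dumps({'name': '192.168.103.12:80', 'connectionLimit': '50'}))

# delete the member -- no payload on a DELETE
b.delete('%s/~Common~192.168.103.12:80' % members)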
So for subcollections like pool members, adding and deleting is pretty straightforward. Unfortunately, not all lists of configuration items are treated as subcollections. For example, take the data group. You can see for the data group testdb below that the records are not a subcollection.

{
  kind: "tm:ltm:data-group:internal:internalstate"
  name: "testdb"
  fullPath: "testdb"
  generation: 1
  selfLink: "https://localhost/mgmt/tm/ltm/data-group/internal/testdb?ver=11.6.0"
  type: "string"
  records: [3]
    0: {
      name: "a"
      data: "one"
    }
    1: {
      name: "b"
      data: "two"
    }
    2: {
      name: "c"
    }
}

Because this is not a subcollection, any modification to the records of this object is treated as a complete replacement. Thus, this request to add a record (d) to the list:

URL: https://172.16.44.128/mgmt/tm/ltm/data-group/internal/testdb
Method: PUT
JSON: {"records": [{"name": "d"}]}

will result in a data group with only one entry (d)! If one wanted to add (d) to the data group, one would have to issue the same request above, but with complete JSON data representing the original records PLUS the new record. The same goes if you want to delete a record: you need to submit all the records in JSON format sans the one you wish to delete via the PUT request above.

Note: Whereas it's true that an update to the records attribute requires a full replacement, it IS possible to update individual records by using the options query parameter instead of updating the records attribute. For details, see the update section of this article.

Thus, to modify items that are not subcollections, one has to issue a GET and parse the existing items into a list, modify the list as desired (adding and/or deleting items), and then issue a PUT to the object URI with the modified list as the JSON data. A Python example of doing just that is shown below. This script grabs the records for the myNetworks data group, adds two new networks to it, then removes them to return the group to its original state.

__author__ = 'rahm'

import requests, json


def get_dg(rq, url, dg_details):
    dg = rq.get('%s/ltm/data-group/%s/%s' % (url, dg_details[0], dg_details[1])).json()
    return dg


def extend_dg(rq, url, dg_details, additional_records):
    # grab the current records, keep their names, append the new ones,
    # then PUT the complete list back (full replacement)
    dg = rq.get('%s/ltm/data-group/%s/%s' % (url, dg_details[0], dg_details[1])).json()
    current_records = dg['records']
    new_records = []
    for record in current_records:
        nr = [{'name': record['name']}]
        new_records.extend(nr)
    for record in additional_records:
        nr = [{'name': record}]
        new_records.extend(nr)
    payload = {}
    payload['records'] = new_records
    rq.put('%s/ltm/data-group/%s/%s' % (url, dg_details[0], dg_details[1]), json.dumps(payload))


def contract_dg(rq, url, dg_details, removal_records):
    # build the removal list, filter it out of the current records,
    # then PUT the remaining records back (full replacement)
    dg = rq.get('%s/ltm/data-group/%s/%s' % (url, dg_details[0], dg_details[1])).json()
    new_records = []
    for record in removal_records:
        nr = [{'name': record}]
        new_records.extend(nr)
    current_records = dg['records']
    new_records = [x for x in current_records if x not in new_records]
    payload = {}
    payload['records'] = new_records
    rq.put('%s/ltm/data-group/%s/%s' % (url, dg_details[0], dg_details[1]), json.dumps(payload))


if __name__ == "__main__":
    b = requests.session()
    b.auth = ('admin', 'admin')
    b.verify = False
    b.headers.update({'Content-Type': 'application/json'})
    b_url_base = 'https://172.16.44.128/mgmt/tm'
    dg_details = ['internal', 'myNetworks']
    net_changes = ['3.0.0.0/8', '4.0.0.0/8']

    print "\nExisting Records for %s Data-Group:\n\t%s" % (dg_details[1], get_dg(b, b_url_base, dg_details)['records'])
    extend_dg(b, b_url_base, dg_details, net_changes)
    print "\nUpdated Records for %s Data-Group:\n\t%s" % (dg_details[1], get_dg(b, b_url_base, dg_details)['records'])
    contract_dg(b, b_url_base, dg_details, net_changes)
    print "\nUpdated Records for %s Data-Group:\n\t%s" % (dg_details[1], get_dg(b, b_url_base, dg_details)['records'])

When running this against my lab BIG-IP, I get this output on my console:

Existing Records for myNetworks Data-Group:
	[{u'name': u'1.0.0.0/8'}, {u'name': u'2.0.0.0/8'}]

Updated Records for myNetworks Data-Group:
	[{u'name': u'1.0.0.0/8'}, {u'name': u'2.0.0.0/8'}, {u'name': u'3.0.0.0/8'}, {u'name': u'4.0.0.0/8'}]

Updated Records for myNetworks Data-Group:
	[{u'name': u'1.0.0.0/8'}, {u'name': u'2.0.0.0/8'}]

Process finished with exit code 0

Hopefully this has been helpful in showing the power of subcollections and the necessary steps to update objects like data groups that are not. Much thanks again to Pat Chang for the bulk of the content in this article. Next up: Query Parameters and Options.