BIG-IP Geolocation Updates – Part 6

Management of geolocation services within the BIG-IP requires updates to the geolocation database so that queried IP addresses are correctly characterized for service delivery and security enforcement.  Traditionally managed devices, where each device is individually logged into and manually configured, can benefit from a bit of automation without having to commit to an entire CI/CD pipeline and a change in operational behavior.  Additionally, a fully fledged CI/CD pipeline that embraces a full declarative model would also need a strategy for managing and performing the updates.  This could be done via BIG-IQ; however, many organizations prefer BIG-IQ to monitor rather than manage their devices, and so a different strategy is required.

This article series hopes to demonstrate some techniques and code that can work in either a classically managed fleet of devices or a fully automated environment.  If you have embraced BIG-IQ fully, this might not be relevant, but it is hopefully worth a cursory review depending on how you leverage BIG-IQ.

Assumptions and prerequisites

There are a few technology assumptions that will be imposed onto the reader that should be mentioned:

  1. The solution will be presented in Python, specifically 3.10.2, although some lower versions could be supported.  The 'walrus operator' ( := ) is used in a few places, which requires version 3.8 or greater.  Support for earlier versions would require some porting.
  2. Visual Studio Code was used to create and test all the code.  A modest level of expertise would be valuable, but likely not required by the reader.
  3. An understanding of BIG-IP is necessary and assumed.
  4. A cursory knowledge of the F5 Automation Toolchain is necessary, as some of the API calls to the BIG-IP will leverage it; however, this is NOT a declarative operation.
  5. GitHub is used to store the source for this article, and a basic understanding of retrieving code from a GitHub repository would be valuable.

References to the above technologies are provided here:

Lastly, an effort was made to make this code high-quality and resilient.  I ran the code base through pylint until it was clean, and it handles most, if not all, exceptional cases.  However, no formal QA or load testing was performed other than my own.  The code is presented as-is with no guarantees expressed or implied.  That being said, it is hoped that this is a robust and usable example, either as a script or slightly modified into a library and imported into the reader's project.

Credits and Acknowledgements

Mark_Menger , for his continued review and support in all things automation based.

Mark Hermsdorfer, who reviewed some of my initial revisions and showed me the proper way to get http chunking to work.  He also has an implementation on github that is referenced in the code base that you should look at. 

Article Series

DevCentral places a limit on the size of an article and, having learned from my previous submission, I will try to organize this series a bit more cleanly.  This is an overview of the items covered in each section:

Part 1 - Design and dependencies

  • Basic flow of a geolocation update
  • The imports list
  • The API library dictionary
  • The status_code_to_msg dictionary
  • Custom Exceptions
  • Method enumeration

Part 2 – Send_Request()

  • Function - send_request

Part 3 - Functions and Implementation 

  • Function – get_auth_token
  • Function – backup_geo_db
  • Function – get_geoip_version

Part 4 - Functions and Implementation Continued

  • Function – fix_md5_file

Part 5 - Functions and Implementation Continued

  • Function – upload_geolocation_update

Part 6 (This article) - Functions and Implementation Conclusion

  • Function – install_geolocation_update

Part 7 - Pulling it together

  • Function – compare_versions
  • Function – validate_file
  • Function – print_usage
  • Command Line script

Functions and Implementation Conclusion

In this part, we are going to finalize the main routines with install_geolocation_update.  This is an involved routine that needs to perform the installation, verify it, and then clean up after itself.


def install_geolocation_update(uri, token, zip_file):
    """Makes a temp directory, copies zip, unzips archive and installs each of the
    geolocation RPMs

    uri : str       Base URL to call api
    token : str     Valid access token for this API endpoint
    zip_file : str  Name of zip file to install

    True on success
    False on failure
    """

The routine takes a uri and token which will, as many times before, function as our API end point and authorization to the respective BIG-IP.  Lastly, it also takes zip_file which is the name of the package that we uploaded and want to install.  The routine will return True on success and False on failure.

    assert uri is not None
    assert token is not None

    tmp_folder = '/shared/tmp/geoupdate'
    rpmlist = []

    with requests.Session() as session:
        url = f"{uri}{library['bash']}"
        session.headers.update({'Content-Type': 'application/json'})
        session.headers.update({'X-F5-Auth-Token' : token})
        session.verify = False

        # Create a new directory in /shared/tmp/geoupdate
        data = {'command':'run'}
        data['utilCmdArgs'] = f"-c 'mkdir {tmp_folder}'"
        if (response:=send_request(url, Method.POST, session, json.dumps(data))) is None:
            print("Unable to create tmp folder for installation")
            return False

        # unzip the archive into the /shared/tmp/geoupdate directory
        data['utilCmdArgs'] = f"-c 'unzip -u /var/config/rest/downloads/{zip_file} -d {tmp_folder} -x README.txt'"
        if (response:=send_request(url, Method.POST, session, json.dumps(data))) is None:
            print("Error while trying to unzip archive")
            return False

The routine starts off by asserting some values; again, this is not meant as functional enforcement so much as an aid for debugging the code while authoring.  We then set up a temp folder path and an empty list that we will fill later from the response.  The well-known Session is created using a with statement, and within that context we set up our url and some headers and disable the verify switch.  All of this should be pretty standard by now.

Next, we create a temporary directory on the BIG-IP and, if this fails, return False to the caller.  Notice that the body is crafted this time by creating a dictionary and then using the json.dumps() routine to convert it into JSON.  Next, we set up a call to unzip the zip_file into that temp directory, building the JSON body the same way.  If this fails, we again return False to the caller; if it succeeds, the function continues.
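To see what actually goes over the wire, here is a minimal, standalone sketch of how that dictionary-plus-json.dumps() pattern produces the request body for the mkdir call (the folder path mirrors the one used in the function; no BIG-IP is contacted here):

```python
import json

tmp_folder = '/shared/tmp/geoupdate'

# Build the iControl REST bash payload as a plain dictionary,
# then serialize it into the JSON string that becomes the request body.
data = {'command': 'run'}
data['utilCmdArgs'] = f"-c 'mkdir {tmp_folder}'"

body = json.dumps(data)
print(body)
```

Building the body as a dictionary first keeps the quoting manageable; json.dumps() handles escaping the embedded single quotes in utilCmdArgs for us.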

        # Process the response and extract the names of the rpms
        for line in response.json()['commandResult'].splitlines():
            name = line.split()[1]

            if name.endswith('.rpm'):
                rpmlist.append(name)

        # For each rpm, run the update/installer
        for rpm in rpmlist:
            data['utilCmdArgs'] = f"-c 'geoip_update_data -f {rpm}'"
            if (response:=send_request(url, Method.POST, session, json.dumps(data))) is None:
                print(f"Error while trying to install rpm update {rpm}")
                return False

With the response in hand, we convert it first into json and then extract the value for the key 'commandResult'.  This will contain several lines of data that we want to parse.  What is happening here is that while the zip file is unarchiving, it echoes the files being extracted.  We capture this list of files and use it to feed our install process.  What is convenient here is that we don't care what files are in the archive.  More importantly, we shouldn't, because different updates may contain different databases and we cannot predict what will be in the archive from release to release.  This insulates us from those details and simplifies our process.

The for loop then feeds each line of 'commandResult', one at a time, into line, where we slice the name (the filename, actually) from each line.  If that name ends with .rpm, we know we have identified one of the rpm files that was unarchived and we collect it into rpmlist.  We don't care about txt files or anything else, and this neatly ignores them.  Once the for loop completes, we will have a list of the rpm files that were extracted into that temp directory.
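The parsing can be exercised offline.  Below is a self-contained sketch using a hypothetical 'commandResult' string that mimics unzip's per-file echo (the archive and rpm names are made up for illustration; real update packages will differ):

```python
# Hypothetical output from 'unzip -u', as it might appear in 'commandResult'.
command_result = (
    "Archive:  /var/config/rest/downloads/ip-geolocation-update.zip\n"
    "  inflating: /shared/tmp/geoupdate/geoip-data-Region2.rpm\n"
    "  inflating: /shared/tmp/geoupdate/geoip-data-ISP.rpm\n"
    "  inflating: /shared/tmp/geoupdate/version.txt\n"
)

rpmlist = []
for line in command_result.splitlines():
    parts = line.split()
    # Guard against short lines, then keep only the extracted .rpm paths.
    if len(parts) > 1 and (name := parts[1]).endswith('.rpm'):
        rpmlist.append(name)

print(rpmlist)
```

Note the extra length check before indexing; a blank or unexpected line in the real output would otherwise raise an IndexError, which is a cheap bit of defensiveness to carry into the full routine.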

Now, we walk that list one at a time and make an API call to run geoip_update_data on each of those rpms, which installs the update.  If this fails somewhere, we return False otherwise the for loop continues until all the rpms have been processed and the update is complete.

        # cleanup tmp folder
        data['utilCmdArgs'] = f"-c 'rm -rf {tmp_folder}'"
        if (response:=send_request(url, Method.POST, session, json.dumps(data))) is None:
            print(f"Error while trying to delete temp folder {tmp_folder}")

        # cleanup uploads
        data['utilCmdArgs'] = f"-c 'rm -f /var/config/rest/downloads/{zip_file}*'"
        if (response:=send_request(url, Method.POST, session, json.dumps(data))) is None:
            print("Error while trying to delete uploads in /var/config/rest/downloads")

        # cleanup backup folder
        data['utilCmdArgs'] = "-c 'rm -rf /shared/GeoIP_backup'"
        if (response:=send_request(url, Method.POST, session, json.dumps(data))) is None:
            print("Error while trying to delete geolocation backup, /shared/GeoIP_backup")

    return True

Presuming that the updates were executed successfully, we now move on to cleaning up after ourselves.  A comment on design before proceeding: it may be a better design to catch or raise a failure in the process and have the exception handler perform this cleanup.  The benefit is that if the update fails, the system is returned to the same state it was in BEFORE the update was attempted.  However, in doing so you also lose the means to go and evaluate why there was a failure.  This is a design decision and one you should review.  As this routine stands, a failure returns to the caller and the system is in an 'unknown' state.  A subsequent run of this script would likely be okay, but making a directory that already exists would fail.  So, part of your design needs to evaluate how a failure condition is mitigated.  For now, this is a simpler design to explain and teach, but consider these edge conditions as necessary for your own environment.
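One small mitigation for the rerun-after-failure case, if you keep the return-on-failure design, is to make the directory creation idempotent with mkdir's -p flag, which succeeds even when the directory already exists.  A sketch of the adjusted payload (again built offline, without contacting a BIG-IP):

```python
import json

tmp_folder = '/shared/tmp/geoupdate'

# 'mkdir -p' does not fail on an existing directory, so a rerun after a
# partial failure can safely pass through this step.
data = {'command': 'run', 'utilCmdArgs': f"-c 'mkdir -p {tmp_folder}'"}
body = json.dumps(data)
```

This does not clean up a half-finished prior run, but it removes the most likely hard stop on a retry.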

Cleanup is simple: deleting the temp folder, then removing the uploads, and finally removing the GeoIP backup folder.  There are arguments that the backup should be left in place and possibly versioned, but that is again a design decision best weighed in your own environment.  Finally, we return True, indicating a successful update.

Wrap up

This concludes part 6 of the series.  In part 7 we will write some final helper routines and then put together a script that pulls all of this together.  You can access the entire series here:

Updated May 12, 2022
Version 2.0
