How to get a F5 BIG-IP VE Developer Lab License
(applies to BIG-IP TMOS Edition)

To assist operational teams in improving their development for the BIG-IP platform, F5 offers a low-cost developer lab license. This license can be purchased from your authorized F5 vendor. If you do not have an F5 vendor and you are in either Canada or the US, you can purchase a lab license online:

CDW BIG-IP Virtual Edition Lab License
CDW Canada BIG-IP Virtual Edition Lab License

Once completed, the order is sent to F5 for fulfillment and your license will be delivered shortly after via e-mail. F5 is investigating ways to improve this process.

To download the BIG-IP Virtual Edition, log in to my.f5.com (a separate login from DevCentral) and navigate to the Downloads card under the Support Resources section of the page. Select BIG-IP from the product group family and then the current version of BIG-IP. You will be presented with a list of options; at the bottom, select the Virtual-Edition option that has the following description:

For VMware Fusion, Workstation, or ESX/i: Image fileset for VMware ESX/i Server
For Microsoft Hyper-V: Image fileset for Microsoft Hyper-V
For KVM RHEL/CentOS: Image fileset for KVM Red Hat Enterprise Linux/CentOS

Note: There are also 1 Slot versions of the above images, where a 2nd boot partition is not needed for in-place upgrades. These images include _1SLOT- in the image name instead of ALL.

The guides below will help get you started with F5 BIG-IP Virtual Edition development on VMware Fusion, AWS, Azure, VMware vCloud Director/ESX, or Microsoft Hyper-V. These guides follow standard practices for installing in production environments; performance recommendations can be relaxed for the lower-use, non-critical needs of development or lab environments. Similar to driving a tank, use your best judgement.
Deploying F5 BIG-IP Virtual Edition on VMware Fusion
Deploying F5 BIG-IP in Microsoft Azure for Developers
Deploying F5 BIG-IP in AWS for Developers
Deploying F5 BIG-IP in Windows Server Hyper-V for Developers
Deploying F5 BIG-IP in VMware vCloud Director and ESX for Developers

Note: F5 Support maintains authoritative Azure, AWS, Hyper-V, and ESX/vCloud installation documentation. VMware Fusion is not an official F5-supported hypervisor, so DevCentral publishes the Fusion guide with the help of our Field Systems Engineering teams.
BIG-IP Geolocation Updates – Part 7

Introduction

Management of geolocation services within the BIG-IP requires updates to the geolocation database so that the inquired IP addresses are correctly characterized for service delivery and security enforcement. Traditionally managed devices, where each device is individually logged into and manually configured, can benefit from a bit of automation without having to subscribe to an entire CI/CD pipeline and a change in operational behavior. Additionally, a fully fledged CI/CD pipeline that embraces a full declarative model would also need a strategy for managing and performing the updates. This could be done via BIG-IQ; however, many organizations prefer BIG-IQ to monitor rather than manage their devices, and so a different strategy is required. This article series hopes to demonstrate some techniques and code that can work in either a classically managed fleet of devices or a fully automated environment. If you have embraced BIG-IQ fully, this might not be relevant but is hopefully worth a cursory review depending on how you leverage BIG-IQ.

Assumptions and prerequisites

There are a few technology assumptions that will be imposed onto the reader that should be mentioned:

The solution is presented in Python, specifically 3.10.2, although some lower versions could be supported. The use of the 'walrus operator' ( := ) was made in a few places, which requires version 3.8 or greater. Support for earlier versions would require some porting.
Visual Studio Code was used to create and test all the code. A modest level of expertise would be valuable, but likely not required by the reader.
An understanding of BIG-IP is necessary and assumed.
A cursory knowledge of the F5 Automation Toolchain is necessary, as some of the API calls to the BIG-IP leverage their use; however, this is NOT a declarative operation.
GitHub is used to store the source for this article, and a basic understanding of retrieving code from a GitHub repository would be valuable.

References to the above technologies are provided here: Python 3.10.2, Visual Studio Code, F5 BIG-IP, F5 Automation and Orchestration, and the GitHub repository for this article.

Lastly, an effort was made to make this code high-quality and resilient. I ran the code base through pylint until it was clean, and it handles most if not all exceptional cases. However, no formal QA function or load testing was performed other than my own. The code is presented as-is with no guarantees expressed or implied. That being said, it is hoped that this is a robust and usable example, either as a script or slightly modified into a library and imported into the reader's project.

Credits and Acknowledgements

Mark_Menger, for his continued review and support in all things automation based.
Mark Hermsdorfer, who reviewed some of my initial revisions and showed me the proper way to get HTTP chunking to work. He also has an implementation on GitHub that is referenced in the code base that you should look at.

Article Series

DevCentral places a limit on the size of an article, and having learned from my previous submission I will try to organize this series a bit more cleanly.
This is an overview of the items covered in each section:

Part 1 - Design and dependencies: Basic flow of a geolocation update; The imports list; The API library dictionary; The status_code_to_msg dictionary; Custom Exceptions; Method enumeration
Part 2 - Send_Request(): Function - send_request
Part 3 - Functions and Implementation: Function - get_auth_token; Function - backup_geo_db; Function - get_geoip_version
Part 4 - Functions and Implementation Continued: Function - fix_md5_file
Part 5 - Functions and Implementation Continued: Function - upload_geolocation_update
Part 6 - Functions and Implementation Conclusion: Function - install_geolocation_update
Part 7 (This article) - Pulling it together: Function - compare_versions; Function - validate_file; Function - print_usage; Command Line script

Pulling it together

With the completion of the main functional routines, we are now ready to pull everything together. There are a few additional routines that will simplify the use of the library/script, so we will start there first.

compare_versions()

First up is a simple function that allows us to compare two version strings from our geolocation lookup tool to determine if indeed an update was successful and the database is in use.

def compare_versions(start, end):
    """
    Helper function to compare two geolocation db versions and output a message

    Parameters
    ----------
    start : str
        Beginning version string of geolocation db
    end : str
        Ending version string of geolocation db

    Returns
    -------
    0 on success
    1 on failure
    """

The routine takes start and end, both strings, that represent the two versions to compare. The function returns a simple 0 on success, meaning that the end string represents a later date than the start string. Otherwise, a 1 is returned for the other cases.
    print(f"Starting GeoIP Version: {start}\nEnding GeoIP Version: {end}")
    if int(start) < int(end):
        print("GeoIP DB updated!")
        return 0
    print("ERROR GeoIP DB NOT updated!")
    return 1

Looking at the body of the function, it first prints the starting and ending versions to the console and then checks whether start is less than end. Notice that the strings are cast to an int in both cases. If the comparison is true, it prints that the DB was updated and returns 0. Otherwise, it prints that the DB was not updated and returns 1.

validate_file()

def validate_file(path, file):
    """
    Verifies that the file exists and, if it is in the same directory, keeps
    the basename. If it is in a relative or different directory, returns the
    full path, resolving links and so on.

    Parameters
    ----------
    path : str
        Argument 0 from sys.argv: the passed current working directory and exe name
    file : str
        Name of the file to check

    Returns
    -------
    Corrected file with full path

    Raises
    ------
    FileNotFoundError if file doesn't exist
    """

The routine accepts a path and a file name as arguments. The path should be the path passed from sys.argv in most cases, although depending on how this is being integrated it may be from a working directory. The file name is the name of the file to check. The routine returns the corrected file name with its full path. If the file doesn't exist, it raises a FileNotFoundError.
    assert path is not None
    assert file is not None

    # unlikely to raise, but there could be an errno.xx for oddly linked CWDs
    cwd = os.path.dirname(os.path.realpath(path))

    # Verify the zip exists; if it's in the same directory, clean up the path
    if not os.path.exists(file):
        raise FileNotFoundError(f"Unable to find file {file}")

    # If cwd and file are in the same location, just use the basename for the file
    if cwd == os.path.dirname(os.path.realpath(file)):
        retval = os.path.basename(file)
    # otherwise use the full path (and resolve links) to the file
    else:
        retval = os.path.realpath(file)

    return retval

Moving on to the body of the function, it first asserts that the path and file arguments are not None. Next, we get the current working directory by running the path through realpath() to deal with any oddly linked directories, and then keep only the directory name. Next, we check whether the file exists; notice it doesn't matter where it is. If it cannot be found, we raise a FileNotFoundError and the exception leaves the routine. Next, we perform the same operation on the file as we did for the current working directory, saved in cwd, and compare them. If they are the same, then the file resides in the current working directory, and we just return the base name of the file (the filename with no path). Otherwise, we return the file with its real path, which handles relative paths and ensures we don't get odd issues when trying to access the file. We then return the result.

print_usage()

def print_usage():
    """
    Prints out the correct way to call the script
    """

This routine doesn't take any arguments and is only meant to simplify printing the usage for the script.
    print("Usage: geolocation-update.py <hostname/ip> <credentials> <zip> <md5>")
    print("\t<hostname/ip> is the resolvable address of the F5 device")
    print("\t<credentials> are the username and password formatted as username:password")
    print("\t<zip> is the name, and path if not in the same directory, to the geolocation zip package")
    print("\t<md5> is the name, and path if not in the same directory, to the geolocation zip md5 file")
    print("\nNOTE: You can omit the password and instead put it in an env variable named BIGIP_PASS")

There is not much to explain here, as it just prints usage to the console. Obviously, depending on how you integrate these routines, you may need to change this appropriately.

Command Line Script

Now we need a way to wrap all this together. Thus far, this code has been presented in a somewhat library-like fashion, although you would need to do some things to make it a module, of course. However, to illustrate how to use it all, we can set up this code so it can be executed standalone.

###############################################################################
# main() entry point if run from cmdline as script
###############################################################################
if __name__ == "__main__":

    # Disable/suppress warnings about unverified SSL:
    import urllib3
    requests.packages.urllib3.disable_warnings()
    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

For a standalone execution, we check if __name__ is equivalent to the string "__main__". This is a Pythonic "trick" which allows the interpreter to figure out if the module being run is the main program. If it is, it's basically being run as a script. Otherwise, __name__ will be set to the module's name and we know it's being imported into another piece of code. We import urllib3, which we need in a moment, and then disable some annoying warnings that would tell us we are potentially connecting to unsafe web sources.
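As an aside before the argument handling that follows: the manual sys.argv checks used by the script could also be expressed with the standard library's argparse module, which generates the usage text for free. This is only a sketch of that alternative; the attribute names below mirror the usage text but are my own, not part of the article's code base.

```python
import argparse

def build_parser():
    """Build a parser mirroring: geolocation-update.py <host> <credentials> <zip> <md5>."""
    parser = argparse.ArgumentParser(prog="geolocation-update.py")
    parser.add_argument("host", help="resolvable address of the F5 device")
    parser.add_argument("credentials",
                        help="username:password (or just username with BIGIP_PASS set)")
    parser.add_argument("zip_file", help="path to the geolocation zip package")
    parser.add_argument("md5_file", help="path to the geolocation zip md5 file")
    return parser

# argparse exits with a usage message on its own when arguments are missing
args = build_parser().parse_args(
    ["192.0.2.10", "admin:secret", "updates.zip", "updates.zip.md5"])
print(args.host)  # 192.0.2.10
```

With this approach the ValueError/print_usage() plumbing shown below becomes unnecessary, at the cost of a slightly different usage message.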
    try:
        if len(sys.argv) < 5:
            raise ValueError

        # Extract cmd line arguments and massage them accordingly
        g_path = sys.argv[0]
        g_bigip = f"https://{sys.argv[1]}"
        g_creds = sys.argv[2]
        g_zip_file = validate_file(g_path, sys.argv[3])
        g_md5_file = validate_file(g_path, sys.argv[4])

        # Handle username/password from creds or environment variable
        if ('BIGIP_PASS' in os.environ) and (os.environ['BIGIP_PASS'] is not None) and (not ":" in g_creds):
            g_username = g_creds
            g_password = os.environ['BIGIP_PASS']
        else:
            creds = g_creds.split(':', 1)
            g_username = creds[0]
            g_password = creds[1]

    except ValueError:
        print("Wrong number of arguments.")
        print_usage()
        sys.exit(-1)
    except FileNotFoundError as e:
        print(f"{e}. Exiting..")
        print_usage()
        sys.exit(-1)

In the first part of the script, we want to validate the command line arguments and massage a few things. We do a quick sanity check on the number of arguments passed; if there are fewer than 5, we know we are missing some data to run correctly. Next, we extract the command line arguments into some global variables. The variable g_bigip is formatted slightly to save us from prepending the protocol later on. Some better checking could be performed to ensure it's not already formatted that way. The username, and potentially the password, is put into g_creds, which we clean up in a moment. The last two, g_zip_file and g_md5_file, hold the respective file names after being processed by validate_file(). Next, we check whether the password for the passed username is in an environment variable. If there is no ":" in the string, meaning the caller did not pass <username>:<password> to us, and an environment variable is set, then we can set g_username and g_password from those. Otherwise, we extract g_username and g_password from g_creds and move forward. We handle the exceptional cases of not enough arguments and of validate_file raising FileNotFoundError, and exit in both cases.
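The credential-handling branch above is easy to get wrong, so it can help to isolate it into a small, testable helper. The following is a sketch of the same logic; the function name and the injectable env parameter are my own additions, not part of the article's script:

```python
import os

def resolve_credentials(creds, env=None):
    """Return (username, password) from a 'user:pass' string, or fall back
    to the BIGIP_PASS environment variable when only a username is given."""
    if env is None:
        env = os.environ
    # No colon means the caller passed only a username; look up the password
    if ":" not in creds and env.get("BIGIP_PASS"):
        return creds, env["BIGIP_PASS"]
    # Split on the first colon only, so passwords may themselves contain ':'
    username, _, password = creds.partition(":")
    return username, password

print(resolve_credentials("admin:secret"))                      # ('admin', 'secret')
print(resolve_credentials("admin", {"BIGIP_PASS": "fromenv"}))  # ('admin', 'fromenv')
```

Note that, like the original, the 'user:pass' form wins whenever a colon is present; unlike the original, a bare username with no environment variable yields an empty password instead of an IndexError.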
    # Get the access token
    print("Getting access token")
    if (g_token := get_auth_token(g_bigip, g_username, g_password)) is None:
        print("Problem getting access token, exiting")
        sys.exit(-1)

    # Attempt to backup existing db
    print("Backing up existing db")
    backup_geo_db(g_bigip, g_token)

    # Get starting date/version of geolocation db for comparison
    startVersion = get_geoip_version(g_bigip, g_token)

    # Upload geolocation update zip file
    print("Uploading geolocation updates")
    if False is upload_geolocation_update(g_bigip, g_token, g_zip_file, g_md5_file):
        print("Unable to upload zip and/or md5 file. Exiting.")
        sys.exit(-1)

    # Install geolocation update
    print("Installing geolocation updates")
    if False is install_geolocation_update(g_bigip, g_token, g_zip_file):
        print("Unable to install the geolocation updates. Exiting.")
        sys.exit(-1)

    # Get end date/version of geolocation db for comparison
    endVersion = get_geoip_version(g_bigip, g_token)

    sys.exit(compare_versions(startVersion, endVersion))

Finally, we construct what could be considered the "main loop" which, because of all the code we have written thus far, is quite pithy. First, we attempt to get an access token, which we will need for authorization going forward. Next, we attempt to back up the db on the BIG-IP. Notice we don't verify the result and simply go forward even if it fails; a more conservative approach would handle this differently. Next, we get the starting version so we can compare it after we process the update. Then, we upload the geolocation update files. If this fails, we catch it and exit with an error. Next, we install the geolocation update and again, if it fails, exit the script with an error. Lastly, we get the version string again and compare versions as we exit the routine. And that, finally, concludes the project.

Wrap up

This concludes part 7 of the series, and the conclusion of the series overall.
Hopefully, this provides a suitable framework for performing geolocation updates that you can either use as-is or incorporate into your toolset or CI/CD pipeline. It should pass a lint check and was vigorously tested by myself, but I would encourage a more rigorous and formal review for production purposes. Hopefully this has provided some insight and ideas for solving geolocation database maintenance in your environment.

You can access the entire series here: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7

How to tell nginx to use a forward proxy to reach a specific destination
Hello. I accidentally closed my previous post, so I am recreating this discussion because of the following problem I'm encountering.

Here is the situation:
I have multiple servers which are in a secure network zone.
I have another server where nginx is installed and used as a reverse proxy.
The nginx server has access to a remote destination (a GitLab server) through a forward proxy (Squid).

So the flow is the following: servers in the secure zone --> nginx server as reverse proxy --> Squid server as forward proxy --> an internal GitLab in another network zone.

Is it possible to tell nginx to use the Squid forward proxy to reach the GitLab server, please? For the moment, I have this configuration:

server {
    listen 443 ssl;
    server_name <ALIAS DNS OF NGINX SERVER>;
    ssl_certificate /etc/nginx/certs/mycert.crt;
    ssl_certificate_key /etc/nginx/certs/mykey.key;
    ssl_session_cache shared:SSL:1m;
    ssl_prefer_server_ciphers on;
    access_log /var/log/nginx/mylog.access.log;
    error_log /var/log/nginx/mylog.error.log debug;

    location / {
        proxy_pass https://the-gitlab-host:443;
    }
}

But it does not work. When I try to perform a git command from a server in the secure zone, it fails, and in the nginx logs I see a timeout, which is normal, because nginx does not use the Squid forward proxy to reach the GitLab server.

Thank you in advance for your help! Best regards.

Controlling a Pool Members Ratio and Priority Group with iControl
A Little Background

A question came in through the iControl forums about controlling a pool member's ratio and priority programmatically. The issue really involves how the APIs use multi-dimensional arrays, but I thought it would be a good opportunity to talk about ratio and priority groups for those that don't understand how they work. In the first part of this article, I'll talk a little about what pool members are and how their ratio and priorities apply to how traffic is assigned to them in a load balancing setup. The details in this article were based on BIG-IP version 11.1, but the concepts can apply to previous versions as well.

Load Balancing

In its very basic form, a load balancing setup involves a virtual IP address (referred to as a VIP) that virtualizes a set of backend servers. The idea is that if your application gets very popular, you don't want to have to rely on a single server to handle the traffic. A VIP contains an object called a "pool" which is essentially a collection of servers that it can distribute traffic to. The method of distributing traffic is referred to as a "Load Balancing Method". You may have heard the term "Round Robin" before. In this method, connections are passed one at a time from server to server. In most cases, though, this is not the best method due to characteristics of the application you are serving. Here is a list of the available load balancing methods in BIG-IP version 11.1.

Load Balancing Methods in BIG-IP version 11.1

Round Robin: Specifies that the system passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. This method works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.
Ratio (member): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine within the pool.

Least Connections (member): Specifies that the system passes a new connection to the node that has the least number of current connections in the pool. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Observed (member): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (member), in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (member): Uses the ranking method used by the Observed (member) methods, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

Ratio (node): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine across all pools of which the server is a member.
Least Connections (node): Specifies that the system passes a new connection to the node that has the least number of current connections out of all pools of which a node is a member. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node, or the fastest node response time.

Fastest (node): Specifies that the system passes a new connection based on the fastest response of all pools of which a server is a member. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Observed (node): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (node), in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

Predictive (node): Uses the ranking method used by the Observed (member) methods, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.
Dynamic Ratio (node): This method is similar to Ratio (node) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

Fastest (application): Passes a new connection based on the fastest response of all currently active nodes in a pool. This method might be particularly useful in environments where nodes are distributed across different logical networks.

Least Sessions: Specifies that the system passes a new connection to the node that has the least number of current sessions. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current sessions.

Dynamic Ratio (member): This method is similar to Ratio (node) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

L3 Address: This method functions in the same way as the Least Connections methods. We are deprecating it, so you should not use it.

Weighted Least Connections (member): Specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity.
Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified.

Weighted Least Connections (node): Specifies that the system uses the value you specify in the node's Connection Limit and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified.

Ratios

The ratio is used by the ratio-related load balancing methods to load balance connections. The ratio specifies the ratio weight to assign to the pool member. Valid values range from 1 through 100. The default is 1, which means that each pool member has an equal ratio proportion. So, if you have server1 with a ratio value of "10" and server2 with a ratio value of "1", server1 will be served 10 connections for every one that server2 receives. This can be useful when you have different classes of servers with different performance capabilities.

Priority Group

The priority group is a number that groups pool members together. The default is 0, meaning that the member has no priority. To specify a priority, you must activate priority group usage when you create a new pool or when adding or removing pool members. When activated, the system load balances traffic according to the priority group number assigned to the pool member. The higher the number, the higher the priority, so a member with a priority of 3 has higher priority than a member with a priority of 1. The easiest way to think of priority groups is as if you are creating mini-pools of servers within a single pool. You put members A, B, and C into priority group 5 and members D, E, and F into priority group 1. Members A, B, and C will be served traffic according to their ratios (assuming you have ratio load balancing configured).
If all those servers have reached their thresholds, then traffic will be distributed to servers D, E, and F in priority group 1.

The default setting for priority group activation is Disabled. Once you enable this setting, you can specify pool member priority when you create a new pool or on a pool member's properties screen. The system treats same-priority pool members as a group. To enable priority group activation in the admin GUI, select Less than from the list, and in the Available Member(s) box, type a number from 0 to 65535 that represents the minimum number of members that must be available in one priority group before the system directs traffic to members in a lower priority group. When a sufficient number of members become available in the higher priority group, the system again directs traffic to the higher priority group.

Implementing in Code

The two methods to retrieve the priority and ratio values are very similar. They both take two parameters: a list of pools to query, and a 2-D array of members (a list for each pool passed in).

long [] [] get_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);
long [] [] get_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);

The following PowerShell function (utilizing the iControl PowerShell Library) takes as input a pool and a single member. It then makes a call to query the ratio and priority for the specific member and writes them to the console.
function Get-PoolMemberDetails()
{
    param($Pool = $null, $Member = $null);

    $AddrPort = Parse-AddressPort $Member;

    $RatioAofA = (Get-F5.iControl).LocalLBPool.get_member_ratio(
        @($Pool), @( @($AddrPort) ) );
    $PriorityAofA = (Get-F5.iControl).LocalLBPool.get_member_priority(
        @($Pool), @( @($AddrPort) ) );

    $ratio = $RatioAofA[0][0];
    $priority = $PriorityAofA[0][0];

    "Pool '$Pool' member '$Member' ratio '$ratio' priority '$priority'";
}

Setting the values with the set_member_priority and set_member_ratio methods takes the same first two parameters as their associated get_* methods, but adds a third parameter for the priorities and ratios of the pool members.

set_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] priorities
);
set_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] ratios
);

The following PowerShell function takes as input the Pool and Member with optional values for the Ratio and Priority. If either of those is set, the function will call the appropriate iControl method to set its value.

function Set-PoolMemberDetails()
{
    param($Pool = $null, $Member = $null, $Ratio = $null, $Priority = $null);

    $AddrPort = Parse-AddressPort $Member;

    if ( $null -ne $Ratio )
    {
        (Get-F5.iControl).LocalLBPool.set_member_ratio(
            @($Pool), @( @($AddrPort) ), @($Ratio) );
    }
    if ( $null -ne $Priority )
    {
        (Get-F5.iControl).LocalLBPool.set_member_priority(
            @($Pool), @( @($AddrPort) ), @($Priority) );
    }
}

In case you were wondering how to create the Common::AddressPort structure for the $AddrPort variables in the above examples, here's a helper function I wrote to allocate the object and fill in its properties.
function Parse-AddressPort()
{
    param($Value);
    $tokens = $Value.Split(":");
    $r = New-Object iControl.CommonAddressPort;
    $r.address = $tokens[0];
    $r.port = $tokens[1];
    $r;
}

Download The Source

The full source for this example can be found in the iControl CodeShare under PowerShell PoolMember Ratio and Priority.

Where are F5's archived deployment guides?
Archived F5 Deployment Guides

This article contains an index of F5's archived deployment guides, previously hosted on F5 | Multi-Cloud Security and Application Delivery. They are all now hosted on cdn.f5.com.

Archived guides...
are no longer supported and no longer being updated - provided for reference only.
may refer to products or versions, by F5 or 3rd parties, that are end-of-life (EOL) or end-of-support (EOS).
may refer to iApp templates that are deprecated. For current/updated iApps and FAST templates see myF5 K13422: F5-supported and F5-contributed iApp templates.

Current F5 Deployment Guides
Deployment Guides (https://www.f5.com/resources/deployment-guides)

IMPORTANT: The guidance found in archived guides is no longer supported by F5, Inc. and is supplied for reference only. For assistance configuring F5 devices with 3rd party applications we recommend contacting F5 Professional Services here: Request Professional Services | F5

Archived Deployment Guide Index

Deployment Guide Name (links to off-platform) | Written for...
CA Bundle Iapp | BIG-IP V11.5+, V12.X, V13
Microsoft Internet Information Services 7.0, 7.5, 8.0, 10 | BIG-IP V11.4 - V13: LTM, AAM, AFM
Microsoft Exchange Server 2016 | BIG-IP V11 - V13: LTM, APM, AFM
Microsoft Sharepoint 2016 | BIG-IP V11.4 - V13: LTM, APM, ASM, AFM, AAM
Microsoft Active Directory Federation Services | BIG-IP V11 - V13: LTM, APM
SAP Netweaver: Erp Central Component | BIG-IP V11.4: LTM, AAM, AFM, ASM
SAP Netweaver: Enterprise Portal | BIG-IP V11.4: LTM, AAM, AFM, ASM
Microsoft Dynamics CRM 2013 And 2011 | BIG-IP V11 - V13: LTM, APM, AFM
IBM Qradar | BIG-IP V11.3: LTM
Microsoft Dynamics CRM 2016 and 2015 | BIG-IP V11 - V13: LTM, APM, AFM
SSL Intercept V1.5 | BIG-IP V12.0+: LTM
IBM Websphere 7 | BIG-IP LTM, WEBACCELERATOR, FIREPASS
Microsoft Dynamics CRM 4.0 | BIG-IP V9.X: LTM
SSL Intercept V1.0 | BIG-IP V11.4+, V12.0: LTM, AFM
SMTP Servers | BIG-IP V11.4, V12.X, V13: LTM, AFM
Oracle E-Business Suite 12 | BIG-IP V11.4 - V13: LTM, AFM, AAM
HTTP Applications | BIG-IP V11.4 - V13: LTM, AFM, AAM
Amazon Web Services Availability Zones | BIG-IP LTM VE: V12.1.0 HF2+, V13
Oracle PeopleSoft Enterprise Applications | BIG-IP V11.4+: LTM, AAM, AFM, ASM
HTTP Applications: Downloadable IApp | BIG-IP V11.4 - V13: LTM, APM, AFM, ASM
Oracle Weblogic 12.1, 10.3 | BIG-IP V11.4: LTM, AFM, AAM
IBM Lotus Sametime | BIG-IP V10: LTM
Analytics | BIG-IP V11.4 - V14.1: LTM, APM, AAM, ASM, AFM
Cacti Open Source Network Monitoring System | BIG-IP V10: LTM
NIST SP-800-53R4 Compliance | BIG-IP: V12
Apache HTTP Server | BIG-IP V11, V12: LTM, APM, AFM, AAM
Diameter Traffic Management | BIG-IP V10: LTM
Nagios Open Source Network Monitoring System | BIG-IP V10: LTM
F5 BIG-IP Apm With IBM, Oracle and Microsoft | BIG-IP V10: APM
Apache Web Server | BIG-IP V9.4.X, V10: LTM, WA
DNS Traffic Management | BIG-IP V10: LTM
Diameter Traffic Management | BIG-IP V11.4+, V12: LTM
Citrix XenDesktop | BIG-IP V10: LTM
F5 As A SAML 2.0 Identity Provider For Common SaaS Applications | BIG-IP V11.3+, V12.0
Apache Tomcat | BIG-IP V10: LTM
Citrix Presentation Server | BIG-IP V9.X: LTM
Npath Routing - Direct Server Return | BIG-IP V11.4 - V13: LTM
Data Center Firewall | BIG-IP V11.6+, V12: AFM, LTM
Citrix XenApp Or XenDesktop Iapp V2.3.0 | BIG-IP V11, V12: LTM, APM, AFM
Citrix XenApp Or XenDesktop | BIG-IP V10.2.1: APM
BIG-IP Geolocation Updates – Part 1

Introduction

Management of geolocation services within the BIG-IP requires updates to the geolocation database so that the inquired IP addresses are correctly characterized for service delivery and security enforcement. Traditionally managed devices, which are individually logged into and manually configured, can benefit from a bit of automation without having to subscribe to an entire CI/CD pipeline and change operational behavior. Additionally, a fully fledged CI/CD pipeline that embraces a full declarative model would also need a strategy around managing and performing the updates. This could be done via BIG-IQ; however, many organizations prefer BIG-IQ to monitor rather than manage their devices, and so a different strategy is required. This article hopes to demonstrate some techniques and code that can work in either a classically managed fleet of devices or a fully automated environment. If you have embraced BIG-IQ fully, this might not be relevant but is hopefully worth a cursory review depending on how you leverage BIG-IQ.

Assumptions and prerequisites

There are a few technology assumptions that will be imposed onto the reader that should be mentioned:

The solution will be presented in Python, specifically 3.10.2, although some lower versions could be supported. The use of the ‘walrus operator’ ( := ) was made in a few places, which requires version 3.8 or greater. Support for earlier versions would require some porting.
Visual Studio Code was used to create and test all the code. A modest level of expertise would be valuable, but likely not required by the reader.
An understanding of BIG-IP is necessary and assumed.
A cursory knowledge of the F5 Automation Toolchain is necessary as some of the API calls to the BIG-IP will leverage their use; however, this is NOT a declarative operation.
Github is used to store the source for this article, and a basic understanding of retrieving code from a github repository would be valuable.

References to the above technologies are provided here:

Python 3.10.2
Visual Studio Code
F5 BIG-IP
F5 Automation and Orchestration
GitHub repository for this article

Lastly, an effort was made to make this code high-quality and resilient. I ran the code base through pylint until it was clean, and the code handles most if not all exceptional cases. However, no formal QA function or load testing was performed other than my own. The code is presented as-is with no guarantees expressed or implied. That being said, it is hoped that this is a robust and usable example, either as a script or slightly modified into a library and imported into the reader’s project.

Credits and Acknowledgements

Mark_Menger, for his continued review and support in all things automation based.
Mark Hermsdorfer, who reviewed some of my initial revisions and showed me the proper way to get http chunking to work. He also has an implementation on github that is referenced in the code base that you should look at.

Article Series

DevCentral places a limit on the size of an article and, having learned from my previous submission, I will try to organize this series a bit more cleanly.
This is an overview of the items covered in each section:

Part 1 (This article) - Design and dependencies: Basic flow of a geolocation update; The imports list; The API library dictionary; The status_code_to_msg dictionary; Custom Exceptions; Method enumeration
Part 2 – Send_Request(): Function - send_request
Part 3 - Functions and Implementation: Function – get_auth_token; Function – backup_geo_db; Function – get_geoip_version
Part 4 - Functions and Implementation Continued: Function – fix_md5_file
Part 5 - Functions and Implementation Continued: Function – upload_geolocation_update
Part 6 - Functions and Implementation Conclusion: Function – install_geolocation_update
Part 7 - Pulling it together: Function – compare_versions; Function – validate_file; Function – print_usage; Command Line script

Design and dependencies

For this article, the design of the code will roughly follow the steps outlined in the F5 article located here: F5 - K11176. It is worth looking through this article and its suggestions if your intention is to use this article and its code as a reference and build your own solution from scratch. The rough design process is as follows: This flow is extracted from the referenced article as a rough outline of the steps we want to take. Since we are using Python and we know that REST calls will be made in order to facilitate interaction with the BIG-IP device, we can also fill in some of the dependencies we will need:

from enum import Enum
from datetime import datetime
import os
import sys
import json
import shutil
import requests

Skipping ahead to a bit of implementation detail, we will use enums to control the type of API call we want to make (GET, POST, etc.) to a general API calling function. There are other ways to approach this, and I admit this is a C/C++ centric idea, so I’ll accept the charge of being slightly un-pythonic.
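The enum-driven idea can be previewed with a tiny, self-contained sketch: an enumeration member is passed to one general function, which branches on it. The Method values here anticipate the class defined later in this article; the dispatch() helper is purely illustrative and not part of the article's code.

```python
from enum import Enum

class Method(Enum):
    GET = 1
    POST = 2
    PATCH = 3
    DELETE = 4

def dispatch(method):
    # One general calling function branches on the enumeration member
    if method is Method.GET:
        return "GET request"
    if method is Method.POST:
        return "POST request"
    raise NotImplementedError(f"The HTTP method {method} is not supported")

print(dispatch(Method.GET))   # GET request
```

Comparing members with `is` (identity) rather than `==` is idiomatic for enums, since each member is a singleton.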
The datetime library will have a couple of uses, mostly to stamp backup files and, if we should decide, logging (spoiler: the latter was not implemented). The os and sys imports are routine library imports for various utilities and functions. The shutil import is used once, for making a backup of a file. The json and requests imports are critical for manipulating json content and for interacting with the BIG-IP via REST calls, vis-à-vis HTTP.

Global Variables

There are a few global variables defined at the top of the source file that are mainly used to simplify and shorten the code that follows. The first of these is the library dictionary:

# Library of REST API end points for frequently used automation calls
library = {'auth-token':'/mgmt/shared/authn/login',
           'mng-tokens':'/mgmt/shared/authz/tokens',
           'pass-policy':'/mgmt/tm/auth/password-policy',
           'pass-change':'/mgmt/tm/auth/user/',
           'get-version':'/mgmt/tm/sys/version',
           'file-xfr':'/mgmt/shared/file-transfer/uploads/',      # /var/config/rest/downloads/
           'iso-xfr':'/cm/autodeploy/software-image-uploads/',    # /shared/images/
           'mgmt-tasks':'/mgmt/shared/iapp/package-management-tasks',
           'do-info':'/mgmt/shared/declarative-onboarding/info',
           'as3-info':'/mgmt/shared/appsvcs/info',
           'do-upload':'/mgmt/shared/declarative-onboarding/',
           'do-tasks':'/mgmt/shared/declarative-onboarding/task/',
           'as3-upload':'/mgmt/shared/appsvcs/declare?async=true',
           'as3-tasks':'/mgmt/shared/appsvcs/task/',
           'bash':'/mgmt/tm/util/bash',
          }

When building a request call, we will need to define the specific API endpoint, which in a REST call is defined by the URL we are calling. To make this more usable, the domain name will be obtained from the target system and then the path will be one of these dictionary values. The scheme or protocol will always be ‘https’, as is necessary for making a call to a BIG-IP. These will be connected to create the API endpoint for our call.
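As a concrete illustration of how these pieces combine, the sketch below builds a full endpoint from a placeholder hostname and a library key. The hostname, the trimmed-down dictionary, and the build_url() helper are assumptions for the example, not part of the article's code.

```python
# Subset of the library dictionary shown above
library = {'auth-token': '/mgmt/shared/authn/login',
           'get-version': '/mgmt/tm/sys/version'}

def build_url(host, endpoint):
    """Combine the https scheme, the target host, and a library path."""
    return f"https://{host}{library[endpoint]}"

# 'bigip.example.com' is a placeholder target system
print(build_url('bigip.example.com', 'get-version'))
# https://bigip.example.com/mgmt/tm/sys/version
```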
The library dictionary ensures that this is done the same way for every call, and if we should want to change, or must change, we only have one place to do it as opposed to numerous places in the file. The next dictionary, status_code_to_msg, was created to refactor some error handling code:

# Dictionary to translate a status code to an error message
status_code_to_msg = {400:"400 Bad request. The url is wrong or malformed\n",
                      401:"401 Unauthorized. The client is not authorized for this action or auth token is expired\n",
                      404:"404 Not Found. The server was unable to find the requested resource\n",
                      415:"415 Unsupported media type. The request data format is not supported by the server\n",
                      422:"422 Unprocessable Entity. The request data was properly formatted but contained invalid or missing data\n",
                      500:"500 Internal Server Error. The server threw an error while processing the request\n",
                     }

When we send a request, there are several different possibilities which may result. We can raise an exception, which will allow us to handle 4xx and 5xx failures all at once; these are results that need to be addressed differently than various 2xx responses. However, this can then create a large chain of if-else clauses which mostly just capture the error. This dictionary allows us to refactor many lines of code into just 2 or 3 and provides for easy expansion should it be deemed necessary.
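Reduced to a self-contained sketch (using just two of the entries), the pattern is a single dictionary lookup with a fallback in place of an if-else chain. The translate() helper is illustrative only and is not part of the article's code.

```python
status_code_to_msg = {400: "400 Bad request. The url is wrong or malformed\n",
                      404: "404 Not Found. The server was unable to find the requested resource\n"}

def translate(status_code):
    # .get() returns None for codes without a specific handler,
    # so the walrus assignment falls through to a generic message
    if not (error_message := status_code_to_msg.get(status_code)):
        error_message = f"{status_code}. Uncommon REST/HTTP error"
    return error_message

print(translate(404))  # specific message from the dictionary
print(translate(418))  # 418. Uncommon REST/HTTP error
```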
Custom Exceptions

We add a couple of custom exceptions that are specific to the code:

# Define some exception classes to handle failure cases and consolidate some of the errors
class ValidationError(Exception):
    """ ValidationError for some of the necessary items in send_request """

class InvalidURL(Exception):
    """ For raising exception if invalid or null urls are passed to send_request """
    def __init__(self, message, errors):
        super().__init__(message)
        self.errors = errors

The ValidationError is arguably not needed, as a suitable built-in exception exists; however, while refactoring the code there were some cases where more information was being added and reported, and thus it was never expunged. The InvalidURL exception does capture a bit of information that we use when reporting an error but still remains a very ‘vanilla’ custom exception.

Method enumeration

Last is the Method enumeration:

class Method(Enum):
    """ Class Method(Enum) provides simple enumeration for controlling the way send_request communicates to target """
    GET = 1
    POST = 2
    PATCH = 3
    DELETE = 4

This enumeration controls which type of REST call the send_request() function, which has not been discussed yet, will issue. During construction of the code there were some alternative ideas on how to implement this that might have made more sense for the additional complexity, but they were then refactored out to be more straightforward and readable. The enumeration still felt elegant from a function calling perspective, so it remains.

Wrap up

This concludes part 1 of the series. In part 2 we will construct a routine to send and receive calls to the big-ip as well as handle the numerous issues that can arise. You can access the entire series here: Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7

BigIP Report Old
Problem this snippet solves:

This codeshare has been deprecated due to a hosting platform corruption. I have moved the code and conversation to a new record (on the same original URL): https://devcentral.f5.com/s/articles/bigip-report

Overview

This is a script which will generate a report of the BigIP LTM configuration on all your load balancers, making it easy to find information and get a comprehensive overview of virtual servers and the pools connected to them. This information is used to relay information to our NOC and developers to give them insight into where things are located and to be able to plan patching and deploys. I also use it myself as a quick way to get information or gather data used as a foundation for RFC's, i.e. get a list of all external virtual servers without compression profiles. The script has been running on 13 pairs of load balancers, indexing over 1200 virtual servers, for several years now, and the report is widely used across the company and by many companies and governments across the world. It's easy to set up and use and only requires guest permissions on your devices.

Demo/Preview

Please note that it takes time to make these, so sometimes they're a bit outdated and they only cover one HA pair. However, they still serve the purpose of showing what you can expect from the report.

Interactive demo: http://loadbalancing.se/bigipreportdemo/

Screen shots

The main report:
The device overview:
Certificate details:

How to use this snippet:

This codeshare has been deprecated due to a hosting platform corruption. I have moved the code and conversation to a new record (on the same original URL): https://devcentral.f5.com/s/articles/bigip-report

Installation instructions

BigipReport REST

This is the only branch we're updating since the middle of 2020 and it supports 12.x and upwards (maybe even 11.6).
Download: https://loadbalancing.se/downloads/bigipreport-v5.5.4.zip
Documentation, installation instructions and troubleshooting: https://loadbalancing.se/bigipreport-rest/

Docker support

This will be the recommended way of running bigipreport in the near future. It's still undergoing testing but it's looking really good so far. https://loadbalancing.se/2021/01/05/running-bigipreport-on-docker/

BigipReport (Legacy)

An older version of the report that only runs on Windows and depends on a PowerShell plugin originally written by Joe Pruitt (F5).

BigipReport (Stable): https://loadbalancing.se/downloads/bigipreport-5.3.1.zip
BigipReport (BETA): https://loadbalancing.se/downloads/bigipreport-5.4.0-beta.zip
iControl Snapin: https://loadbalancing.se/downloads/f5-icontrol.zip
Documentation and installation instructions: https://loadbalancing.se/bigip-report/
Upgrade instructions

Protect the report using APM and active directory

Written by DevCentral member Shann_P: https://loadbalancing.se/2018/04/08/protecting-bigip-report-behind-an-apm-by-shannon-poole/

Got issues/problems/feedback?

Still have issues? Drop a comment below. We usually reply quite fast. Any bugs found, issues detected or ideas contributed make the report better for everyone, so it's always appreciated.

---

Also trying out a Discord channel now. You're welcome to hang out with us there: https://discord.gg/7JJvPMYahA

Code : 85931, 86647, 90730
Tested this on version: 13.0
BIG-IP Geolocation Updates – Part 2
This is an overview of the items covered in each section:

Part 1 - Design and dependencies: Basic flow of a geolocation update; The imports list; The API library dictionary; The status_code_to_msg dictionary; Custom Exceptions; Method enumeration
Part 2 (This article) – Send_Request(): Function - send_request
Part 3 - Functions and Implementation: Function – get_auth_token; Function – backup_geo_db; Function – get_geoip_version
Part 4 - Functions and Implementation Continued: Function – fix_md5_file
Part 5 - Functions and Implementation Continued: Function – upload_geolocation_update
Part 6 - Functions and Implementation Conclusion: Function – install_geolocation_update
Part 7 - Pulling it together: Function – compare_versions; Function – validate_file; Function – print_usage; Command Line script

Functions and Implementation

In this part of the series, we are going to construct a routine that sends and receives information to the BIG-IP. There are a lot of things that can go wrong, like incorrect API endpoints, timeouts, server errors and service unavailability. This routine is intended to concentrate managing those details in one place so that every function that needs to make a call will have less complexity in it.

send_request()

This function evolved through many changes as this code was developed, starting out as a very simplistic call and ending up almost too complex. However, an attempt was made to make this the single call that sends a request to the BIG-IP and handles the numerous ways in which that could fail. To start off with, and this will be continued throughout the code, the function gets a docstring that explains the inputs and outputs:

def send_request(url, method=Method.GET, session=None, data=None):
    """ send_request is used to send a REST call to the device. By default it
    assumes that this is a GET request (through the default enumeration). The
    passed session and data are also by default set to None.
    In the case of data, this is ignored as it's only relevant for a POST or
    PATCH call. However, the session is checked against the default and raises
    if it's None. PATCH and DELETE are also not implemented yet and raise.

    Parameters
    ----------
    url : str
        The url endpoint to send the request to
    method : Method, defaults to Method.GET
        One of the valid Method enumerations
    session : obj, defaults to None
        Active / valid session object
    data : str, defaults to None
        JSON formatted string passed as body in request

    Returns
    -------
    response str on success
    None on failure

    Raises
    ------
    NotImplementedError
        For improper methods
    ValidationError
        If the session object is None or inactive
    InvalidURL
        The url parameter is missing
    """

The arguments ‘url’ and ‘session’ are required arguments, and the routine will return a response string on success and None on failure. However, it will raise exceptions in a couple of cases as well.

if not url:
    raise InvalidURL("The url is invalid", url)

error_message = None
response = None
try:
    if None is session:
        raise ValidationError("Invalid session provided")

    # Send request and then raise exceptions for 4xx and 5xx issues
    if method is Method.GET:
        response = session.get(url)
    elif method is Method.POST:
        response = session.post(url, data)
    elif method is Method.PATCH:
        raise NotImplementedError("The PATCH method is not implemented yet")
    elif method is Method.DELETE:
        raise NotImplementedError("The DELETE method is not implemented yet")
    else:
        raise NotImplementedError(f"The HTTP method {method} is not supported")
    response.raise_for_status()

The first part of the routine checks if there is no url and, if so, raises an InvalidURL exception. This is left outside of the try/except clause that follows, as the calling code should handle this and the chain of except clauses is more focused on a failure to send the request.
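That division of labor can be seen in a stripped-down sketch: the guard raises before any try block is entered, so a caller can distinguish a malformed invocation from a transport failure. The guard() helper is purely illustrative; the InvalidURL class mirrors the one defined earlier in the series.

```python
class InvalidURL(Exception):
    """Mirrors the custom exception defined earlier in the series."""
    def __init__(self, message, errors):
        super().__init__(message)
        self.errors = errors

def guard(url):
    # Reject a missing or empty url up front, before any request is attempted
    if not url:
        raise InvalidURL("The url is invalid", url)
    return url

try:
    guard(None)
except InvalidURL as err:
    print(f"caller handles bad input: {err}")
```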
The next section of the routine follows a Python try/except block, but let us break that down a little first:

try:
    # Something risky
except:
    # Optional handling of exception (if required)
    # Can be more than one
else:
    # Run this if there was NO exception
finally:
    # Always run this no matter what

Most people are familiar with try/except; however, Python implements some additional blocks that are used here. The ‘try’ block is the code that we want to run that we think might have a failure or raise for some sort of condition. In our case, we will raise, which will automatically cause an exception if there was a 4xx or 5xx response, but we will get to that in a minute. The except block is what to do if there is an exception. You can ‘catch’ (a C++ piece of terminology) different types of exceptions and handle them differently, which we will do, but the important thing to understand here is that if there is a problem in your ‘try’ block, the system will run through these except blocks looking for something to handle it. If the code in the ‘try’ block runs fine, or rather doesn’t raise, then the else block will be run. Finally, pun intended, the ‘finally’ block is run regardless of whether an exception occurred or not. While this might seem complicated, it does make it easier to handle multiple failures as a group, which was why it was implemented here.

The routine starts out by initializing error_message and response to None, which we check against later. It then verifies that the session is not None, or in this case that the function caller passed something for session. If it's still None, it raises and the except block processes this. At one point, there was consideration given to creating a session here for very simple requests, but this code was never refactored in that way. However, if this was used in a larger context, I would refactor it in that manner and then the default argument value would make more sense.
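The ordering of these blocks is easy to confirm with a toy example that simply records which blocks execute. This is illustrative only; the names ok, boom, and run are not from the article's code.

```python
def ok():
    return "fine"

def boom():
    raise ValueError("risky code failed")

def run(func):
    order = []
    try:
        func()
    except ValueError:
        order.append("except")   # runs only when the try block raised
    else:
        order.append("else")     # runs only when NO exception occurred
    finally:
        order.append("finally")  # always runs, either way
    return order

print(run(ok))    # ['else', 'finally']
print(run(boom))  # ['except', 'finally']
```

Note that 'else' and 'except' are mutually exclusive, while 'finally' appears in both traces.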
Next, we go through the different possibilities for the method variable, of which Method.GET and Method.POST are the only options that are implemented. Then the return value is used to call response.raise_for_status(). The raise_for_status() call is a convenient built-in routine that sorts out 4xx and 5xx responses and then puts them into an HTTPError exception with the respective message.

except requests.exceptions.HTTPError:
    # Handle 4xx and 5xx errors here. Common 4xx and 5xx REST errors here
    if not (error_message := status_code_to_msg.get(response.status_code)):
        error_message = f"{response.status_code}. Uncommon REST/HTTP error"
except requests.exceptions.TooManyRedirects as e_redir:
    # Handle excessive 3xx errors here
    error_message = f"{'TooManyRedirects'}: {e_redir}"
except requests.exceptions.ConnectionError as e_conn:
    # Handle connection errors here
    error_message = f"{'ConnectionError'}: {e_conn}"
except requests.exceptions.Timeout as e_tout:
    # Handle timeout errors here
    error_message = f"{'Timeout'}: {e_tout}"
except requests.exceptions.RequestException as e_general:
    # Handle ambiguous exceptions while handling request
    error_message = f"{'RequestException'}: {e_general}"

Next, we handle the numerous possible exceptions that might take place. The raise_for_status() call simplifies this for us a little, and we handle all the 4xx and 5xx cases in the HTTPError exception block. This is where we take advantage of the status_code_to_msg dictionary and format a specific error message based on the specific code returned. This saves us numerous if-else statements and elegantly handles the case where we don’t have a specific message handler. The next exception, TooManyRedirects, handles an excessive number of 3xx responses. This block could take action to address the situation, but for now we just format error_message with a message and move on. The ConnectionError and Timeout exception handlers also just format a message in error_message.
Again, more robust handling could be implemented, say to verify that the IP address is pingable in the case of a connection error, or to do some additional checks or reattempt in the case of a Timeout. These would make more sense in a larger project, and the logic is provided here should it make sense to the reader. The last except clause handles any other ambiguous exceptions that might have been raised from requests.

else:
    return response
finally:
    # if error message isn't None, there is an error to process and we should return None
    if error_message:
        print(f"send_request() Error:\n{error_message}")
        print(f"url:\t{url}\nmethod:\t{method.value}\ndata:\t{data}")
        if response is not None:
            print(f"response: {response.json()}")
        return None

Next follows the else block, which will be run if no exceptions have occurred. In this case, the function just returns the response. Initially, I had designed some ideas to format the response to json if desired, or extract content in the response, so that calling functions would require less processing of the response. Pylint had already informed me this function was too complex, and I didn’t think this project required more ‘cleverness’, so I left it simple. Again, this is where modifications could be valuable and make this more usable in a larger context. Then the ‘finally’ block is processed. The only thing left to do is print out the failure and provide a little more data. If the response was valid, which would happen in the case of 4xx or 5xx responses, then it can be printed as well for additional information. Lastly, the function returns None, which is the appropriate response if the ‘else’ block is not executed.

Wrap up

This concludes part 2 of the series. In part 3, we will start working on the implementation of our routines and explore get_auth_token, backup_geo_db, and get_geoip_version. You can access the entire series here: Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7
BIG-IP Geolocation Updates – Part 3
This is an overview of the items covered in each section:

Part 1 - Design and dependencies: Basic flow of a geolocation update; The imports list; The API library dictionary; The status_code_to_msg dictionary; Custom Exceptions; Method enumeration
Part 2 – Send_Request(): Function - send_request
Part 3 (This article) - Functions and Implementation: Function – get_auth_token; Function – backup_geo_db; Function – get_geoip_version
Part 4 - Functions and Implementation Continued: Function – fix_md5_file
Part 5 - Functions and Implementation Continued: Function – upload_geolocation_update
Part 6 - Functions and Implementation Conclusion: Function – install_geolocation_update
Part 7 - Pulling it together: Function – compare_versions; Function – validate_file; Function – print_usage; Command Line script

Functions and Implementation

With send_request out of the way, we can now focus on a few ancillary routines that we will need while we perform the main functions of the library/script.

get_auth_token()

With send_request() completed, we can now move on to one of the first steps in our design for this operation. It's not required to use authorization tokens, as you could send authentication every time you make a REST call, but that wouldn’t be as clean a way to perform these operations and it would tax your target system unnecessarily in the process.

def get_auth_token(uri=None, username='admin', password='admin'):
    """ takes credentials and attempts to obtain an access token from the
    target system.

    Parameters
    ----------
    uri : str
        Base URL to call api
    username : str, defaults to 'admin', username for account on target system
    password : str, defaults to 'admin', password for account on target system

    Returns
    -------
    token : str on success
    None on failure
    """

The function get_auth_token() defaults the username and password to ‘admin’, which is the default for a BIG-IP prior to 14.x and an alarmingly good guess otherwise, and defaults the uri to None.
If the function succeeds, it will extract and return an authorization token; otherwise it will return None.

```python
assert uri is not None

url = f"{uri}{library['auth-token']}"
data = {'username':username, 'password':password, 'loginProviderName':'tmos'}

with requests.Session() as session:
    session.headers.update({'Content-Type': 'application/json'})
    session.verify = False

    # Get authentication token
    if (response := send_request(url, method=Method.POST, session=session,
                                 data=json.dumps(data))) is None:
        print("Error attempting to get access token.")
        return None

    # Save token and double check its good
    token = response.json()['token']['token']
    url = f"{uri}{library['mng-tokens']}/{token}"
    session.headers.update({'X-F5-Auth-Token' : token})
    if (response := send_request(url, Method.GET, session)) is None:
        print(f"Error attempting to validate access token {token}.")
        return None

    return token
```

First, we assert that uri is not None. The asserts were more of a developmental tool I used as I was putting this code together: I wanted to be sure that assumptions about passed parameters were not violated and did not create more debugging for myself as I got things working. Arguments could be made to manage this differently in finalized code, and they would be valid, as asserts really should be used for debugging. This drifts a bit into the 'code religion' area, which I will tiptoe around carefully, but in my view asserts are developmental tools: optimized production code removes asserts, and that can create needlessly problematic 'but it works in dev' defects. I left these in here because they could be optimized away, and I felt that this code was going to evolve and be used for other projects.

Next, we build the target url for this API call from our library. This request requires a POST and some additional data passed in the body of the HTTP request, which we put into the variable data as a dictionary.
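The assign-and-test pattern used throughout these functions can be isolated into a small sketch; send_request() is replaced here by a trivial stub purely for illustration:

```python
def send_request_stub(url, fail=False):
    # Stand-in for the real send_request(): None on failure, a value on success.
    return None if fail else f"response for {url}"

def fetch(url, fail=False):
    # The walrus operator (Python 3.8+) assigns the return value into
    # 'response' and tests it against None in a single expression.
    if (response := send_request_stub(url, fail)) is None:
        return None
    return response

print(fetch("https://example.invalid/api"))
print(fetch("https://example.invalid/api", fail=True))
```

This is why the series requires Python 3.8 or newer: porting to earlier versions means splitting each of these into an assignment followed by a separate test.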
Next, we build a Session object and, if successful, the 'with' block executes with that Session object in the session variable. This request requires that the data passed be of type json, so we update the headers to reflect the data being passed, and then disable verify so a system with a self-signed certificate will not cause a problem. Finally, we pass this data on to send_request(). Notice that data is converted from a dictionary to json using the json.dumps() call. We also use the 'walrus operator' here to assign the return value into response and check it against None. If send_request() had a problem, its return value is None; we print an error and then return None to our caller. If it succeeded, then we convert the response to json and extract the token into the variable token. Just to be thorough, we change the url to a call that allows us to manage, or check, tokens and update the headers to include our newly acquired token. The same arrangement is made to send the request, assign to response, and check it against None; if this fails, we will assume something is wrong and return None. Otherwise, the function returns the token.

backup_geo_db()

The next function to discuss is backup_geo_db(). This call uses a chain of bash shell calls that allow us to create a backup directory and then copy the existing db, if it exists, into that directory. Currently that backup location is /shared/GeoIP_backup. I picked that location simply to ensure adequate space (/var seems to get crowded). The /tmp directory is another viable target, although I figured a significant percentage of readers would prefer not to delete the backup at the end of this operation, and /tmp is not a suitable location unless you are comfortable with it being ephemeral.

```python
def backup_geo_db(uri, token=None):
    """ Creates a backup directory on target device and then backs the
    existing geolocation db up to that location.

    Parameters
    ----------
    uri : str
        Base URL to call api
    token : str
        Valid access token for this API endpoint

    Returns
    -------
    True on success
    False on failure
    """
```

The backup_geo_db() function takes a uri and a token, which is defaulted to None, as its arguments. This function simply returns True on success and False on failure.

```python
assert uri is not None
assert token is not None

with requests.Session() as session:
    session.headers.update({'Content-Type': 'application/json'})
    session.headers.update({'X-F5-Auth-Token' : token})
    session.verify = False

    # Create the backup directory
    url = f"{uri}{library['bash']}"
    data = b'{"command": "run", "utilCmdArgs": "-c \'mkdir /shared/GeoIP_backup\'"}'

    # If the backup directory was created, copy the existing db into the backup directory
    if (send_request(url, Method.POST, session, data)) is not None:
        data = b'{"command": "run", "utilCmdArgs": "-c \'cp -R /shared/GeoIP/* /shared/GeoIP_backup/\'"}'
        if (send_request(url, Method.POST, session, data)) is None:
            print("Unable to backup existing geolocation database")
            return False
    else:
        print("Unable to create backup directory, geolocation db will not be backed up")
        return False

    return True
```

The function starts with a few asserts to ensure that uri and token are not None. We then create a Session object, place it into session, and if this is valid continue with the 'with' block. For this API call, we need the content type to be json, we need the authorization token, and we set verify to False so that self-signed certificates are ignored. This information is added into the session headers. We then create the url, which will call a bash function on the BIG-IP. This call uses a POST method, and the body of the request contains the specifics for the command that gets passed to bash. We load the data variable with a binary string that has the command, run, and then the arguments to make the backup directory.
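The hand-written bytes literal above can equally be produced from a dictionary with json.dumps(); a small sketch of building the bash endpoint body (the helper name bash_body is mine, not part of the script):

```python
import json

def bash_body(shell_command):
    # Build the iControl REST 'bash' endpoint body for a given shell command.
    data = {'command': 'run', 'utilCmdArgs': f"-c '{shell_command}'"}
    return json.dumps(data)

print(bash_body('mkdir /shared/GeoIP_backup'))
```

Either form works; the literal avoids a serialization call, while the dictionary is easier to parameterize. Part 5 of this series uses the dictionary technique for the md5sum check.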
This is passed to send_request(), and if send_request() returns None, execution drops to the else below, prints a message on the backup failure, and the function returns False. Otherwise, we continue by changing the data variable, which is the request body, to copy the /shared/GeoIP/ directory to /shared/GeoIP_backup/. If you have not referred to the K article, it might be worth reviewing, as there were directory changes made that might be relevant to you; adjust the copy source directory accordingly. This is sent to send_request(), and if it returns None, the function returns False. Otherwise, execution drops past all this to the final return statement, which returns True.

get_geoip_version()

Next, we look at get_geoip_version(), which calls a utility on the BIG-IP to look up an IP address; we can extract the version of the database from this information. The function takes a uri and an authorization token that is defaulted to None.

```python
def get_geoip_version(uri, token=None):
    """ Makes a call to run 'geoip_lookup 104.219.101.154' on the F5 to
    extract the db date/version

    Parameters
    ----------
    uri : str
        Base URL to call api
    token : str
        Valid access token for this API endpoint

    Returns
    -------
    str date/version string on success
    None on failure
    """
```

The function returns None on failure and a string containing the date/version of the geolocation database on success. The IP address is hardcoded and was extracted from the K article K11176, step 7. If you prefer, you can change this to something like 8.8.8.8.
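The extraction this function performs, grabbing the last eight characters of the line containing 'Copyright', can be exercised standalone. Note the sample output below is illustrative only; the actual geoip_lookup banner may differ, but the logic depends only on that line ending in the date:

```python
def extract_db_version(command_result):
    # Walk the lookup output and capture the last 8 characters of the
    # line containing 'Copyright', which hold the db date, or None.
    retval = None
    for line in command_result.splitlines():
        if "Copyright" in line:
            retval = line[-8:]
    return retval

# Illustrative output only -- the real geoip_lookup banner may differ.
sample = (
    "size of geoip database = 3042712\n"
    "geoip_lookup: Copyright (c) F5 Networks 20220228\n"
    "ip = 104.219.101.154\n"
)
print(extract_db_version(sample))  # 20220228
```

This is the same slicing approach used in the function body below.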
```python
assert uri is not None
assert token is not None

with requests.Session() as session:
    session.headers.update({'Content-Type': 'application/json'})
    session.headers.update({'X-F5-Auth-Token' : token})
    session.verify = False

    retval = None
    url = f"{uri}{library['bash']}"
    data = b'{"command": "run", "utilCmdArgs": "-c \'geoip_lookup 104.219.101.154\'"}'
    if (response := send_request(url, Method.POST, session, data)) is not None:
        # Convert the response to json, find the commandResult string and splitlines it into a list
        for line in response.json()['commandResult'].splitlines():
            # Walk the list until we find the Copyright and then return the last 8 characters
            if "Copyright" in line:
                retval = line[-8:]

    return retval
```

The function starts off asserting that the uri and token are not None. We then create a Session object and put it into session. If this succeeds, the 'with' block proceeds: we set the json and authorization headers and set the session to ignore self-signed certificates. We also set retval to None as the default return value, unless the function can set it to something else. Then we set the url and data variables to tell the BIG-IP to run a bash command, 'geoip_lookup 104.219.101.154'. We send this to send_request() and assign the return value into response. If the response is not None, we go on to process it.

To process the response, we first convert it to json, extract the value of 'commandResult', and split it into lines. We then enter a for loop, setting line to the current line to process; this is all accomplished on one line. We then look to see if 'Copyright' is in the current line, and if it is, we set retval to the last eight characters, which should be the version date of the geolocation database. Finally, retval is returned to the caller.

Wrap up

This concludes part 3 of the series. Part 4 of the series will cover the fix_md5_file routine.
You can access the entire series here: Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7
BIG-IP Geolocation Updates – Part 5

Introduction

Management of geolocation services within the BIG-IP requires updates to the geolocation database so that the inquired IP addresses are correctly characterized for service delivery and security enforcement. Traditionally managed devices, which are individually logged into and manually configured, can benefit from a bit of automation without having to subscribe to an entire CI/CD pipeline and a change in operational behavior. Additionally, a fully fledged CI/CD pipeline that embraces a full declarative model would also need a strategy around managing and performing the updates. This could be done via BIG-IQ; however, many organizations prefer BIG-IQ to monitor rather than manage their devices, and so a different strategy is required. This article series hopes to demonstrate some techniques and code that can work in either a classically managed fleet of devices or a fully automated environment. If you have embraced BIG-IQ fully, this might not be relevant but is hopefully worth a cursory review, depending on how you leverage BIG-IQ.

Assumptions and prerequisites

There are a few technology assumptions imposed on the reader that should be mentioned:

- The solution will be presented in Python, specifically 3.10.2, although some lower versions could be supported. The 'walrus operator' (:=) is used in a few places, which requires version 3.8 or greater. Support for earlier versions would require some porting.
- Visual Studio Code was used to create and test all the code. A modest level of expertise would be valuable, but likely not required by the reader.
- An understanding of BIG-IP is necessary and assumed.
- A cursory knowledge of the F5 Automation Toolchain is necessary, as some of the API calls to the BIG-IP leverage their use; however, this is NOT a declarative operation.
Functions and Implementation Continued

This part of the series will go through upload_geolocation_update(), a lengthy but important routine that uploads the updates and verifies that they made it there intact and without corruption.

upload_geolocation_update()

The next function is upload_geolocation_update(), and there is a lot going on within this routine. The broad steps are as follows:

- Upload the md5 file
- Upload the zip file using chunking
- Run the md5sum utility to verify that the upload is not corrupt

```python
def upload_geolocation_update(uri, token, zip_file, md5_file):
    """ Uploads an md5 and zip file for geolocation db update of a BIG-IP

    Parameters
    ----------
    uri : str
        Base URL to call api
    token : str
        Valid access token for this API endpoint
    zip_file : str
        full path to a geolocation zip file
    md5_file : str
        full path to a respective md5 file for the zip_file

    Returns
    -------
    True on success
    False on failure
    """
```

The function takes a uri and authorization token, consistent with the other functions to this point. It also takes the zip and md5 files, which should include the full path, to be uploaded. The function returns True on success and False on failure.
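The function opens with sanity asserts on the file extensions; the check itself is just os.path.splitext, which can be sketched as follows (file names are illustrative, and the helper name is mine):

```python
import os

def has_ext(path, ext):
    # Mirror of the sanity asserts: compare the file's extension to 'ext'.
    return os.path.splitext(path)[-1] == ext

print(has_ext("GeoIP_v2_ALL.zip", ".zip"))      # True
print(has_ext("GeoIP_v2_ALL.zip.md5", ".md5"))  # True
print(has_ext("notes.txt", ".zip"))             # False
```

Note that splitext only considers the final suffix, which is why the .zip.md5 file correctly reports .md5.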
```python
assert uri is not None
assert token is not None
assert zip_file is not None
assert os.path.splitext(zip_file)[-1] == '.zip'
assert md5_file is not None
assert os.path.splitext(md5_file)[-1] == '.md5'

with requests.Session() as session:
    session.headers.update({'Content-Type':'application/octet-stream'})
    session.headers.update({'X-F5-Auth-Token' : token})
    session.verify = False

    # Upload md5 file, its small so a simple upload is all thats necessary
    fix_md5_file(md5_file, '/var/config/rest/downloads', savebu=True)
    url = f"{uri}{library['file-xfr']}{md5_file}"
    size = os.stat(md5_file).st_size
    content_range = f"0-{size-1}/{size}"
    session.headers.update({'Content-Range':content_range})
    with open(md5_file, 'rb') as fileobj:
        if (response := send_request(url, Method.POST, session, data=fileobj)) is None:
            # Fail hard on md5 upload failure?
            return False
```

The function begins with an extensive number of asserts to ensure the variables were passed and, for the zip and md5 files, that the sanity check on the file extensions passes. We then start off with the same 'with' clause and Session object creation that we have seen. Next, we call fix_md5_file(), which was discussed in the previous section, to correct the md5 file with the path that our target upload will have, /var/config/rest/downloads. The url requires a little more effort this time around in that we need to state in the API call what the destination file name is. This is analogous to a copy command, where there is a source and a destination. We need to compute the Content-Range header, and we do that by first finding out the size of the file on the local system. This upload is small, so we can do it in one bite; the header is just a formality here. Next, we open the md5 file for read/binary and, if successful, put it into fileobj and continue with the 'with' clause. We send everything to send_request() to upload the file, and if we get None in return, we consider the operation a failure and the function returns False.
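The single-shot Content-Range computation can be checked locally; the helper name is mine and the scratch file stands in for the md5 file:

```python
import os
import tempfile

def whole_file_content_range(path):
    # Content-Range for a one-request upload: bytes 0 through size-1 of size.
    size = os.stat(path).st_size
    return f"0-{size-1}/{size}"

# Demonstrate against a small scratch file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 100)
    scratch = tmp.name

print(whole_file_content_range(scratch))  # 0-99/100
os.unlink(scratch)
```

The same `start-end/total` form reappears in the chunked upload, where start and end advance through the file instead of always spanning it.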
You could debate that failure of the md5 upload is not critical and the routine should just continue, albeit you can only verify the hash in the way we are doing it here. A fallback option would be to set a flag and then attempt to upload the zip file. If that fails, then you are out, as there is nothing left you can do. If the zip upload succeeds, then you check that flag and either run the md5sum as shown here (the md5 upload succeeded) or run md5sum remotely, capture the hash value, and then compare it to the local copy. However, this level of complexity, for the failure scenarios that could cause it, seemed a bit of overkill, which is why I did not author it that way.

```python
# upload zip file, this time it must be chunked
url = f"{uri}{library['file-xfr']}{zip_file}"
size = os.stat(zip_file).st_size
chunk_size = 512 * 1024
start = 0

with open(zip_file, 'rb') as fileobj:
    # Read a 'slice' of the file, chunk_size in bytes
    while (fslice := fileobj.read(chunk_size)):
        # Compute the size, start, end and modify the content-range header
        slice_size = len(fslice)
        if slice_size < chunk_size:
            end = size
        else:
            end = start + slice_size
        session.headers.update({'Content-Range': f"{start}-{end-1}/{size}"})

        # Send the slice, if we get a failure, dump out and return false
        if (response := send_request(url, Method.POST, session, data=fslice)) is None:
            return False

        start += slice_size
```

The next part of the function is to upload the zip file, and this is large enough that it needs to be chunked. Thanks to Mark Hermsdorfer for sharing how he did this. The trick is that you read the file in chunk sizes, compute a new Content-Range, and then call send_request() with the slice, or chunk, you are uploading. The target system will stitch, or concatenate, the file together for you. We start off by setting the url and obtaining the size of the file as we did before for the md5 file. We set up a chunk size of 512K and initialize start to zero, which will be our index as we upload.
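The Content-Range arithmetic in that loop can be lifted into a small pure function and verified without touching the network; the generator below mirrors the loop's math (the function name is mine):

```python
def chunk_ranges(size, chunk_size):
    # Yield Content-Range header values for a chunked upload,
    # mirroring the math in upload_geolocation_update()'s loop.
    start = 0
    while start < size:
        slice_size = min(chunk_size, size - start)
        # The final, short slice ends at the file size; full slices
        # end at start + slice_size.
        end = size if slice_size < chunk_size else start + slice_size
        yield f"{start}-{end-1}/{size}"
        start += slice_size

print(list(chunk_ranges(1200, 512)))
```

For a 1200-byte file and 512-byte chunks this produces three ranges, the last covering the 176-byte remainder.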
We open the zip file in read/binary mode and then enter a while loop. The while loop continues as long as we have file data to read, and the read() function keeps track of where we are in the file for us. We then need to compute the slice size, as at some point we expect a slice that is smaller than the chunk_size, the 'remainder' as it were. We account for this, and end is computed as either start + slice_size or just size. These calculations are then used to format our Content-Range header, and we write that into the session headers. Next, we send the request, and if we get a response of None, we return False. There are other ways this could be managed, accounting for timeouts or connection issues and doing retries, which would be valid. The send_request() exception handlers would need to be modified to allow for this, where 4xx and 5xx errors are processed in the manner they are currently, while connection-related errors are either re-raised or passed to the caller so that a routine such as this could try to remediate or retry. For the purposes of this article, we will assume our IT connectivity is dependable and responsive, but there is no debate that this is not always the case. Lastly, we move our index, start, by slice_size and then continue with the while loop until the entire file has been uploaded.

```python
# Verify the upload was successful by checking the md5sum.
with requests.Session() as session:
    url = f"{uri}{library['bash']}"
    session.headers.update({'Content-Type': 'application/json'})
    session.headers.update({'X-F5-Auth-Token' : token})
    session.verify = False

    # Verify that the file upload is sane and passes an md5 check
    data = {'command':'run'}
    data['utilCmdArgs'] = f"-c 'md5sum -c {'/var/config/rest/downloads'}/{md5_file}'"
    if (response := send_request(url, Method.POST, session, json.dumps(data))) is not None:
        retval = response.json()['commandResult'].split()
        if retval[1] != 'OK':
            print("MD5 Failed check. Uploaded zip integrity is questionable")
            return False

    return True
```

The final stanza of this code base verifies that the upload succeeded and the zip file is not corrupted. F5 performs md5 hashes of its files and provides them on the download site; we have manipulated and uploaded the md5 file to the BIG-IP device along with the zip file. The last step is to run a bash command on the BIG-IP that performs an md5 hash of the uploaded file and compares that to the md5 file. If they are the same, the file is good. If not, then something happened, and the archive is questionable. We take a slightly different approach this time to building the body of the request, just to illustrate a different technique, by creating a dictionary called data and initializing it with the 'command': 'run' key/value pair. Then we add an additional key/value pair for the utilCmdArgs. We then send the request to send_request(); notice that we use json.dumps to translate the dictionary into json as we pass the argument. The function send_request() places its return value into response, and if it's not None we extract the value out of response. We then check to see if the bash command's response was OK; if not, we print the failure and the routine returns False. Otherwise, execution drops out of the if statement and the routine returns True.

Wrap up

This concludes part 5 of the series. Part 6 of the series will get into install_geolocation_update, which installs and verifies the installation of the update. You can access the entire series here: Part 1 Part 2 Part 3 Part 4 Part 5 Part 6 Part 7