iControl
Controlling a Pool Members Ratio and Priority Group with iControl
A Little Background

A question came in through the iControl forums about controlling a pool member's ratio and priority programmatically. The issue really involves how the APIs use multi-dimensional arrays, but I thought it would be a good opportunity to talk about ratio and priority groups for those that don't understand how they work. In the first part of this article, I'll talk a little about what pool members are and how their ratio and priorities apply to how traffic is assigned to them in a load balancing setup. The details in this article are based on BIG-IP version 11.1, but the concepts apply to previous versions as well.

Load Balancing

In its most basic form, a load balancing setup involves a virtual IP address (referred to as a VIP) that virtualizes a set of backend servers. The idea is that if your application gets very popular, you don't want to have to rely on a single server to handle the traffic. A VIP contains an object called a "pool", which is essentially a collection of servers that it can distribute traffic to. The method of distributing traffic is referred to as a "load balancing method". You may have heard the term "Round Robin" before. In this method, connections are passed one at a time from server to server. In most cases though, this is not the best method due to characteristics of the application you are serving. Here is a list of the available load balancing methods in BIG-IP version 11.1.

Load Balancing Methods in BIG-IP version 11.1

- Round Robin: Specifies that the system passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. This method works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.
- Ratio (member): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine within the pool.
- Least Connections (member): Specifies that the system passes a new connection to the node that has the least number of current connections in the pool. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.
- Observed (member): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (member) in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.
- Predictive (member): Uses the ranking method used by the Observed (member) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.
- Ratio (node): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine across all pools of which the server is a member.
- Least Connections (node): Specifies that the system passes a new connection to the node that has the least number of current connections out of all pools of which a node is a member. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node, or the fastest node response time.
- Fastest (node): Specifies that the system passes a new connection based on the fastest response of all pools of which a server is a member. This method might be particularly useful in environments where nodes are distributed across different logical networks.
- Observed (node): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (node) in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.
- Predictive (node): Uses the ranking method used by the Observed (node) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.
- Dynamic Ratio (node): This method is similar to Ratio (node) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.
- Fastest (application): Passes a new connection based on the fastest response of all currently active nodes in a pool. This method might be particularly useful in environments where nodes are distributed across different logical networks.
- Least Sessions: Specifies that the system passes a new connection to the node that has the least number of current sessions. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current sessions.
- Dynamic Ratio (member): This method is similar to Ratio (member) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.
- L3 Address: This method functions in the same way as the Least Connections methods. We are deprecating it, so you should not use it.
- Weighted Least Connections (member): Specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified.
- Weighted Least Connections (node): Specifies that the system uses the value you specify in the node's Connection Limit and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified.

Ratios

The ratio is used by the ratio-related load balancing methods to load balance connections. The ratio specifies the ratio weight to assign to the pool member. Valid values range from 1 through 100. The default is 1, which means that each pool member has an equal ratio proportion. So, if you have server1 with a ratio value of "10" and server2 with a ratio value of "1", server1 will be served 10 connections for every one that server2 receives. This can be useful when you have different classes of servers with different performance capabilities.

Priority Group

The priority group is a number that groups pool members together. The default is 0, meaning that the member has no priority. To specify a priority, you must activate priority group usage when you create a new pool or when adding or removing pool members. When activated, the system load balances traffic according to the priority group number assigned to the pool member. The higher the number, the higher the priority, so a member with a priority of 3 has higher priority than a member with a priority of 1. The easiest way to think of priority groups is as if you are creating mini-pools of servers within a single pool. You put members A, B, and C into priority group 5 and members D, E, and F into priority group 1. Members A, B, and C will be served traffic according to their ratios (assuming you have ratio load balancing configured). If all of those servers have reached their thresholds, then traffic will be distributed to servers D, E, and F in priority group 1.

The default setting for priority group activation is Disabled. Once you enable this setting, you can specify pool member priority when you create a new pool or on a pool member's properties screen. The system treats same-priority pool members as a group. To enable priority group activation in the admin GUI, select Less than from the list, and in the Available Member(s) box, type a number from 0 to 65535 that represents the minimum number of members that must be available in one priority group before the system directs traffic to members in a lower priority group. When a sufficient number of members become available in the higher priority group, the system again directs traffic to the higher priority group.
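To make the interplay between ratios and priority groups more concrete, here is a small standalone Python sketch (not BIG-IP code; the member names, weights, and minimum-active threshold are made up for illustration) that mimics picking a member the way ratio load balancing and priority group activation are described above:

import random
from collections import Counter

# Hypothetical members: (name, ratio, priority_group, available)
members = [
    ("A", 10, 5, True), ("B", 5, 5, True), ("C", 1, 5, True),
    ("D", 1, 1, True), ("E", 1, 1, True), ("F", 1, 1, True),
]

def pick_member(members, min_active=2):
    # Walk priority groups from highest to lowest and use the first group
    # that still has at least min_active available members.
    for group in sorted({m[2] for m in members}, reverse=True):
        active = [m for m in members if m[2] == group and m[3]]
        if len(active) >= min_active:
            names = [m[0] for m in active]
            weights = [m[1] for m in active]
            # Ratio-weighted choice within the selected priority group
            return random.choices(names, weights=weights, k=1)[0]
    return None

# Simulate 1,000 connections; expect roughly a 10:5:1 split across A, B, C
print(Counter(pick_member(members) for _ in range(1000)))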
Implementing in Code

The two methods to retrieve the priority and ratio values are very similar. They both take two parameters: a list of pools to query, and a 2-D array of members (a list of members for each pool passed in).

long [] [] get_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);

long [] [] get_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);

The following PowerShell function (utilizing the iControl PowerShell Library) takes as input a pool and a single member. It then makes a call to query the ratio and priority for the specific member and writes it to the console.

function Get-PoolMemberDetails()
{
  param( $Pool = $null, $Member = $null );
  $AddrPort = Parse-AddressPort $Member;

  $RatioAofA = (Get-F5.iControl).LocalLBPool.get_member_ratio( @($Pool), @( @($AddrPort) ) );
  $PriorityAofA = (Get-F5.iControl).LocalLBPool.get_member_priority( @($Pool), @( @($AddrPort) ) );

  $ratio = $RatioAofA[0][0];
  $priority = $PriorityAofA[0][0];

  "Pool '$Pool' member '$Member' ratio '$ratio' priority '$priority'";
}

The set_member_priority and set_member_ratio methods take the same first two parameters as their associated get_* methods, but add a third parameter for the priorities and ratios for the pool members.

set_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] priorities
);

set_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] ratios
);

The following PowerShell function takes as input the Pool and Member with optional values for the Ratio and Priority. If either of those is set, the function will call the appropriate iControl methods to set their values.

function Set-PoolMemberDetails()
{
  param( $Pool = $null, $Member = $null, $Ratio = $null, $Priority = $null );
  $AddrPort = Parse-AddressPort $Member;

  if ( $null -ne $Ratio )
  {
    (Get-F5.iControl).LocalLBPool.set_member_ratio( @($Pool), @( @($AddrPort) ), @($Ratio) );
  }
  if ( $null -ne $Priority )
  {
    (Get-F5.iControl).LocalLBPool.set_member_priority( @($Pool), @( @($AddrPort) ), @($Priority) );
  }
}

In case you were wondering how to create the Common::AddressPort structure for the $AddrPort variables in the above examples, here's a helper function I wrote to allocate the object and fill in its properties.

function Parse-AddressPort()
{
  param($Value);
  $tokens = $Value.Split(":");
  $r = New-Object iControl.CommonAddressPort;
  $r.address = $tokens[0];
  $r.port = $tokens[1];
  $r;
}

Download The Source

The full source for this example can be found in the iControl CodeShare under PowerShell PoolMember Ratio and Priority.
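As an aside that is not part of the original article: if you would rather drive the same two iControl calls from Python, a rough sketch using the bigsuds library might look like the following. The connection details and the pool/member values here are assumptions to adjust for your environment.

import bigsuds

# Hypothetical connection details -- adjust for your environment
b = bigsuds.BIGIP(hostname='bigip.example.com', username='admin', password='admin')

pool = 'cacti-pool'
member = {'address': '10.10.20.200', 'port': 80}

# Both methods take a list of pools and, per pool, a list of members,
# so the results come back as nested lists as well.
ratio = b.LocalLB.Pool.get_member_ratio([pool], [[member]])[0][0]
priority = b.LocalLB.Pool.get_member_priority([pool], [[member]])[0][0]

print("Pool '%s' member '%s:%s' ratio '%s' priority '%s'"
      % (pool, member['address'], member['port'], ratio, priority))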
Self-Contained BIG-IP Provisioning with iRules and pyControl – Part 2

As I stated last week in Part 1, iRules work on the live data between client and server, and iControl works on the management plane out of band to collect information or to modify or create configuration objects. However, what if you could combine forces, wholly contained on your BIG-IP LTM? That's the scenario I started tackling last week. In this second part, I'll address the iRule and iControl components necessary to complete a provisioning task. (See Self-Contained BIG-IP Provisioning with iRules and pyControl - Part 1.)

The iRule code

I'll get the solution working prior to making it extensible and flashy. The first thing I'll do to enhance the test iRule from last week is to utilize the restful interface approach I used to access table information in this tech tip: Restful Access to BIG-IP Subtables - DevCentral. I want to pass an action, a pool name, and a pool member through the URL to the system. In this case, the action will be enable or disable, and the pool and pool member should be valid entries. In the event a pool or pool member is passed that is not valid, there is simple error checking in place, which will alert the client. Note also in the code below that I incorporated code from George's basic auth tech tip.

when HTTP_REQUEST {
  binary scan [md5 [HTTP::password]] H* password

  if { ([HTTP::path] starts_with "/provision") } {
    if { [class lookup [HTTP::username] provisioners] equals $password } {
      scan [lrange [split [URI::path [HTTP::uri]] "/"] 2 end-1] %s%s%s action pname pmem
      if { [catch {[members $pname]} errmsg] && !($errmsg equals "")} {
        if { [members -list $pname] contains "[getfield $pmem ":" 1] [getfield $pmem ":" 2]" } {
          set pmstat1 [LB::status pool $pname member [getfield $pmem ":" 1] [getfield $pmem ":" 2]]
          log local0. "#prov=$action,$pname,$pmem"
          after 250
          set pmstat2 [LB::status pool $pname member [getfield $pmem ":" 1] [getfield $pmem ":" 2]]
          HTTP::respond 200 content "<html><body>Original Status<br />$pmem-$pmstat1<p>New Status<br />$pmem-$pmstat2</body></html>"
        } else { HTTP::respond 200 content "$pmem not a valid pool member." }
      } else { HTTP::respond 200 content "$pname not a valid pool." }
    } else { HTTP::respond 401 WWW-Authenticate "Basic realm=\"Secured Area\"" }

  }
}

I added the "after 250" command to give the pyControl script some time to receive the syslog event and process it before checking the status again. This is strictly for display purposes back to the client.

The pyControl code

I already have the syslog listener in place, and now with the iRule passing real data, this is what arrives via syslog:

<134>Oct 13 11:04:55 tmm tmm[4848]: Rule ltm_provisioning <HTTP_REQUEST>: #prov=enable,cacti-pool,10.10.20.200:80

Not really ready for processing, is it? So I need to manipulate the data a little to get it into manageable objects for pyControl.

# Receive messages
while True:
    data,addr = syslog_socket.recvfrom(1024)
    rawdata = data.split(' ')[-1]
    provdata = rawdata.split('=')[-1]
    dataset = provdata.split(',')
    dataset = map(string.strip, dataset)

Splitting on whitespace removes all the syslog overhead, then I split again on the "=" sign to get the actual objects I need. Next, I split the data to eliminate the commas and create a list, and finally, I strip any newline characters (which happen to exist on the last element, the IP:port object). Now that I have the objects I need, I can go to work setting the enable/disable state on the pool members.
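If you want to see what that parsing produces without standing up the listener, the same steps can be run standalone against the sample message shown above (Python 2, matching the article's pyControl code):

import string

# The sample syslog message from above, as the listener would receive it
data = ('<134>Oct 13 11:04:55 tmm tmm[4848]: Rule ltm_provisioning '
        '<HTTP_REQUEST>: #prov=enable,cacti-pool,10.10.20.200:80\n')

rawdata = data.split(' ')[-1]         # '#prov=enable,cacti-pool,10.10.20.200:80\n'
provdata = rawdata.split('=')[-1]     # 'enable,cacti-pool,10.10.20.200:80\n'
dataset = provdata.split(',')         # ['enable', 'cacti-pool', '10.10.20.200:80\n']
dataset = map(string.strip, dataset)  # strip the trailing newline from the last element

print dataset                         # ['enable', 'cacti-pool', '10.10.20.200:80']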
Thankfully, the disable pool member pyControl code already exists in the iControl codeshare; I just need to do a little modification. None of the defs need to change, so I added them as-is. Because the arguments are coming from syslog and not from a command line, I have no need for the sys module, so I won't import that. I do need to import string, though, to strip the newline character in the map(string.strip, dataset) command above. The dataset object is a list and the elements are 0:action, 1:pool, 2:pool_member. pyControl expects the members to be a list, even if it's only one member, which is true of this scenario.

### Process Pool, Pool Members, & Desired State ###
POOL = dataset[1]
members = [dataset[2]]

sstate_seq = b.LocalLB.PoolMember.typefactory.create('LocalLB.PoolMember.MemberSessionStateSequence')
sstate_seq.item = session_state_factory(b, members)
session_objects = session_state_factory(b, members)

if dataset[0] == 'enable':
    enable_member(b, session_objects)
elif dataset[0] == 'disable':
    disable_member(b, session_objects)
else:
    print "\nNo such action: \"%s\"" % dataset[0]

More error checking here: if the action passed was enable/disable, I call the appropriate def; otherwise, I log to console. OK. Time to test it out!

Results

In this video, I walk through an example of enabling/disabling a pool member: Provision Through iRules.

The Fine Print

Obviously, there is risk with such a system. Allowing provisioning of your application delivery through the data plane can be a dangerous thing, so tread carefully. Also, customizing syslog and installing and running pyControl on box will not survive hotfixes and upgrades, so processes would need to be in place to ensure functionality going forward. Finally, I didn't cover ensuring the pyControl script is running in the background and is started on system boot, but that is a step that would be required as well. For test purposes, I just ran the script on the console.

Conclusion

There is an abundance of valuable and helpful information on how to utilize the BIG-IP system, iRules, and iControl. Without too much work on my own, I was able to gather snippets of code here and there to deliver a self-contained provisioning system. The system is short on functionality (toggling pool members), but could be extended with a little elbow grease and MUCH error checking. For the full setup, check out the LTM Provisioning via iRules entry in the Advanced Design and Configuration wiki.

Related Articles

- HTTP Basic Access Authentication iRule Style
- Python iControl Library (DevCentral group)
- Getting Started with pyControl v2: Installing on Windows
- Getting Started with pyControl v2: Installing on Ubuntu Desktop
- Experimenting with pyControl on LTM VE
- Getting Started with pyControl v2: Understanding the TypeFactory
- Getting Started with pyControl v2: Constructor Changes
- LTM 9.4.2+: Custom Syslog Configuration
- LTM 9.4.2+ Custom Syslog errors when bpsh used
- Custom syslog-ng facility
- Customizing syslog-ng f_local0 filter
- How to write iRule log statements to a custom log file, and rotate it
- Syslog locally and remote with specific facility level
- DevCentral Wiki: Syslog NG Email Configuration

Technorati Tags: F5 DevCentral, pyControl, iControl, iRules, syslog-ng, provisioning

Getting started with Ansible
Ansible is an orchestration and automation engine. It provides a means for you to automate the administration of different devices, from Linux to Windows and different special purpose appliances in-between. Ansible falls into the world of DevOps related tools. You may have heard of others that play in this area as well, including

- Chef
- Puppet
- Saltstack

In this article I'm going to briefly skim the surface of what Ansible is and how you can get started using it. I've been toying around with it for some years now, and (most recently at F5) using it to streamline some development work I've been involved in. If you, like me, are a fan of dabbling with interesting tools and swear by the "Automate all the Things!" catch-phrase, then you might take an interest in Ansible. We're going to start small though and build upon what we learn. My goal here is to eventually bring you all to the point where we're doing some crazy awesome things with Ansible and F5 products. I'll also go into some brief detail on features of Ansible that make it relatively painless to interoperate with existing F5 products. Let's get started!

So why Ansible?

Any time that it comes to adopting some new technology for your everyday use, inevitably you need to ask yourself "what's in it for me?". Why not just use some custom shell scripts and pssh to do everything? Here are my reasons for using Ansible.

- It is agent-less
- The only dependencies (on the remote device) are SSH and python; and even python is not really a dependency
- The language that you "do" stuff in is YAML. No CS degree or programming language expertise is required (Perl, Ruby, Python, etc)
- Extending it is simple (in my opinion)
- Actions are idempotent
- Order of operations is well-defined and work is performed top-down

Many of the original tools in the DevOps space were agent-based tools. This is a major problem for environments where it's literally (due to technology or politics) impossible to install an agent. Your SLA may prohibit you from installing software on the box. Or, you might legitimately not be able to install the software due to older libraries or other missing dependencies. Ansible has no agent requirement; a plus in my book.

Most of the systems that you will come across can, today, be manipulated by Ansible. It is agent-less by design. Dependency-wise, you need to be able to connect to the machine you want to orchestrate, so it makes sense that SSH is a dependency. Also, you would like to be able to do higher-order "stuff" to a machine. That's where the python dependency comes into play. I say dependency loosely though, because Ansible provides a way to run raw commands on remote systems regardless of whether Python is installed. For professional Ansible development though, this method of orchestrating devices is largely not recommended except in very edge cases.

Ansible's configuration language is YAML. If you have never seen YAML before, this is what it looks like

- name: Deploy common hosts files settings
  hosts: all
  connection: ssh
  gather_facts: true

  tasks:
    - name: Install required packages
      apt:
        name: "{{ item }}"
        state: "present"
      with_items:
        - ntp
        - ubuntu-cloud-keyring
        - python-mysqldb

YAML is generally composed of simple key/value pairs, lists, and dictionaries. Contrast this with the Puppet configuration language; a special DSL that resembles a real programming language.
class sso {
  case $::lsbdistcodename {
    default: {
      $ssh_version = 'latest'
    }
  }

  class { '::sso':
    ldap_uri          => $::ldap_uri,
    dev_env           => true,
    ssh_version       => $ssh_version,
    sshd_allow_groups => $::sshd_allow_groups,
  }
}

Or contrast this with Chef, which requires that you know Ruby to be able to use it.

servers = search(
  :node,
  "is_server:true AND chef_environment:#{node.chef_environment}"
).sort! do |a, b|
  a.name <=> b.name
end

begin
  resources('service[mysql]')
rescue Chef::Exceptions::ResourceNotFound
  service 'mysql'
end

template "#{mysql_dir}/etc/my.conf" do
  source 'my.conf.erb'
  mode 0644
  variables :servers => servers, :mysql_conf => node['mysql']['mysql_conf']
  notifies :restart, 'service[mysql]'
end

In Ansible, work that is performed is idempotent. That's a buzzword. What does it mean? It means that an operation can be performed multiple times without changing the result beyond its initial application. If I try to add the same line to a file a thousand times, it will be added once and then will not be added again 999 times. Another example is adding user accounts. They would be added once, not many times (which might raise errors on the system).

Finally, Ansible's workflow is well defined. Work starts at the top of a playbook and makes its way to the bottom. Done. End of story. There are other tools that have a declarative model. These tools attempt to read your mind: "You declare to me how the node should look at the end of a run, and I will determine the order that steps should be run to meet that declaration." Contrast this with Ansible, which only operates top-down. We start at the first task, then move to the second, then the third, etc. This removes much of the "magic" from the equation. Often times an error might occur in a declarative tool due specifically to how that tool arranges its dependency graph. When that happens, it's difficult to determine what exactly the tool was doing at the time of failure. That magic doesn't exist in Ansible; work is always top-down whether it be tasks, roles, dependencies, etc. You start at the top and you work your way down.

Installation

Let's now take a moment to install Ansible itself. Ansible is distributed in different ways depending on your operating system, but one tried and true method to install it is via pip; the recommended tool for installing python packages. I'll be working on a vanilla installation of Ubuntu 15.04.2 (vivid) for the remaining commands. Ubuntu includes a pip package that should work for you without issue. You can install it via apt-get.

sudo apt-get install python-pip python-dev

Afterwards, you can install Ansible.

sudo pip install markupsafe ansible==1.9.4

You might ask "why not ansible 2.0". Well, because 2.0 was just released and the community is busy ironing out some new-release bugs. I prefer to give these things some time to simmer before diving in. Lucky for us, when we are ready to dive in, upgrading is a simple task. So now you should have Ansible available to you.

SEA-ML-RUPP1:~ trupp$ ansible --version
ansible 1.9.4
  configured module search path = None
SEA-ML-RUPP1:~ trupp$
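As a quick aside that is not part of the original article, the idempotency idea described above is easy to demonstrate with a few lines of standalone Python: an "ensure this line exists" operation changes the file the first time and is a no-op on every run after that.

def ensure_line(path, line):
    """Append 'line' to the file at 'path' only if it is not already present."""
    try:
        with open(path) as f:
            present = any(l.rstrip('\n') == line for l in f)
    except IOError:
        present = False
    if present:
        return False                    # nothing to do, no change reported
    with open(path, 'a') as f:
        f.write(line + '\n')
    return True                         # changed

# Run it a thousand times; the file still ends up with exactly one line
results = [ensure_line('/tmp/example.txt', 'This is some content') for _ in range(1000)]
print(sum(results))                     # 1 -- only the first call changed anything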
Your first playbook

Depending on the tool, the body of work is called different things.

- Puppet calls them manifests
- Chef calls them recipes and cookbooks
- Ansible calls them plays and playbooks
- Saltstack calls them formulas and states

They're all the same idea. You have a system configuration you need to apply, you put it in a file, the tool interprets the file and applies the configuration to the system.

We will write a very simple playbook here to illustrate some concepts. It will create a file on the system. Booooooring. I know, terribly boring. We need to start somewhere though, and your eyes might roll back into your head if we were to start off with a more complicated example like bootstrapping a BIG-IP or dynamically creating cloud formation infrastructure in AWS and configuring HA pairs, pools, and injecting dynamically created members into those pools.

So we are going to create a single file. We will call it site.yaml. Inside of that file paste in the following.

- name: My first play
  hosts: localhost
  connection: local
  gather_facts: true

  tasks:
    - name: Create a file
      copy:
        dest: "/tmp/test.txt"
        content: "This is some content"

This file is what Ansible refers to as a Playbook. Inside of this playbook file we have a single Play (My first play). There can be multiple Plays in a Playbook. Let's explore what's going on here, as well as touch upon the details of the play itself.

First, that Play. Our play is composed of a preamble that contains the following

- name
- hosts
- connection
- gather_facts

The name is an arbitrary name that we give to our Play so that we will know what is being executed if we need to debug something or otherwise generate a reasonable status message. ALWAYS provide a name for your Plays, Tasks, everything that supports the name syntax.

Next, the hosts line specifies which hosts we want to target in our Play. For this Play we have a single host; localhost. We can get much more complicated than this though, to include

- patterns of hosts
- groups of hosts
- groups of groups of hosts
- dynamically created hosts
- hosts that are not even real

You get the point.

Next, the connection line tells Ansible how to connect to the hosts. Usually this is the default value ssh. In this case though, because I am operating on the localhost, I can skip SSH altogether and simply say local.

After that, I used the gather_facts line to tell Ansible that it should interrogate the remote system (in this case the system localhost) to gather tidbits of information about it. These tidbits can include the installed operating system, the version of the OS, what sort of hardware is installed, etc.

After the preamble is written, you can see that I began a new block of "stuff". In this case, the tasks associated with this Play. Tasks are Ansible's way of performing work on the system. The task that I am running here is using the copy module. As I did with my Play earlier, I provide a name for this task. Always name things! After that, the body of the module is written. There are two arguments that I have provided to this module (which are documented more in the References section below)

- dest
- content

I won't go into great detail here because the module documentation is very clear, but suffice it to say that dest is where I want the file written and content is what I want written in the file.
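One optional sanity check that is not part of the original article: since a playbook is just YAML, you can use Python's PyYAML library (it is installed alongside Ansible) to confirm the file parses before handing it to Ansible; indentation mistakes show up here as parse errors.

import yaml

# A YAML syntax or indentation error raises an exception on load
with open('site.yaml') as f:
    playbook = yaml.safe_load(f)

# A parsed playbook is simply a list of plays (dictionaries)
print(playbook[0]['name'])          # My first play
print(playbook[0]['tasks'][0])      # the copy task as a plain dict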
Running the playbook

We can run this playbook using the ansible-playbook command. For example.

SEA-ML-RUPP1:~ trupp$ ansible-playbook -i notahost, site.yaml

The output of the command should resemble the following

PLAY [My first play] ******************************************************

GATHERING FACTS ***************************************************************
ok: [localhost]

TASK: [Create a file] *********************************************************
changed: [localhost]

PLAY RECAP ********************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0

We can also see that the file we created has the content that we expected.

SEA-ML-RUPP1:~ trupp$ cat /tmp/test.txt
This is some content

A brief aside on the syntax to run the command. Ansible requires that you specify an inventory file to provide hosts that it can orchestrate. In this specific example, we are not specifying a file. Instead we are doing the following

- Specifying an arbitrary string (notahost)
- Followed by a comma

In Ansible, this is a short-hand trick to skip the requirement that an inventory file be specified. The comma is the key part of the argument. Without it, Ansible will look for a file called notahost and (hopefully) not find it; raising an error otherwise.

The output of the command is shown next. The output is actually fairly straight-forward to read. It lists the PLAYs and TASKs that are running (as well as their names...see, I told you you wanted to have names). The status of the Tasks is also shown. This can be values such as

- changed
- ok
- failed
- skipped
- unreachable

Finally, all Ansible Playbook runs end with a PLAY RECAP where Ansible will tell you what the status of the various plays on your hosts were. It is at this point where a Playbook will be considered successful or not. In this case, the Playbook was completely successful because there were neither unreachable hosts nor failed hosts.

Summary

This was a brief introduction to the orchestration and automation system Ansible. There are far more complex subjects related to Ansible that I will touch upon in future posts. If you found this information useful, rate it as such. If you would like to see more advanced topics covered, videos demo'd, code samples written, or anything else on the subject, let me know in the comments below.

Many organizations, both large and small, use DevOps tools like the one presented in this post. Ansible has several features, per design, that make it attractive to these organizations (such as being agent-less, and having minimum requirements). If you'd like to see crazy sophisticated examples of Ansible in use...well...we'll get there. You need to rate and comment on my posts though to let me know that you want to see more.

References

- copy - Copies files to remote locations. — Ansible Documentation
- raw - Executes a low-down and dirty SSH command — Ansible Documentation
- Variables — Ansible Documentation

Existing Ansible BIG-IP modules
Right around the time that I started at F5, I was at the pinnacle of my exposure to Ansible. So imagine my surprise when I saw BIG-IP modules in the Ansible core product! I immediately wanted to know which one of my colleagues I could go talk Ansible-fu with! I cracked open the source code, found the names of contributors, made haste to the corporate phone book to look up where they sat, and...wait a second...

...no entries in the phone book?

...that curious look in colleagues' eyes that says "I haven't the foggiest idea what you are talking about".

Then it hit me...the BIG-IP modules didn't originate at F5. Upon further investigation, it became clear to me that, indeed, we had no skin in this game. These modules originated from two enterprising individuals who I found on DevCentral, and I would be remiss to exclude their valuable contributions to the Ansible+F5 cause, so in this post I want to highlight those contributions and bring others up-to-speed on current Ansible+BIG-IP functionality.

Credit where it is due

As far as I can tell from perusing Ansible's git logs, the existing BIG-IP modules in Ansible are the work of Matt Hite and Serge van Ginderachter. Both are DevCentral contributors, and have had a presence in the Ansible community from (at least) 2013 when their modules landed in the tool. Among the modules that they had a hand in are

- bigip_facts
- bigip_node
- bigip_pool
- bigip_pool_member
- bigip_monitor_http
- bigip_monitor_tcp

Since that time, I've seen even more folks step up to the plate with Ansible modules that will be making their appearance in upcoming releases. Well, I wanted to have a hand in this work too. I figured I had a perfect opportunity to help because

- I had some background in Ansible
- I had access to unlimited BIG-IP technical resources

It was only a question of how to get involved. And then, Matt posted this in a pull request (PR)...

"Unfortunately I don't have GTM to smoke test this with. Can you solicit some testers on the mailing list?"

And Serge followed with

"So same here, not tested as no GTM gear, but an offline code review."

Bingo...I was in. If there's one area that I figured I could have the greatest impact, it's the area of technical resources. There is BIG-IP stuff all over the place at F5, and I sit next to, or at least near, many of the people who have knowledge of the products; far more advanced knowledge than I do. Many of those same people hang out and contribute on DevCentral. Paired with my knowledge of Ansible, I figured I could help test BIG-IP related PRs to validate their functionality beyond what a code review offered. So that's what I did (and what we're about to do).

Before you begin

If you were around for the earlier introductory article to Ansible that I posted a couple days ago, pull up a terminal to that machine. We need to install one more dependency that is common to all of the BIG-IP modules in Ansible; bigsuds.

pip install bigsuds

With that in place, you're ready to work with the Ansible modules. Additionally, I am going to be using a very minimal inventory file that just includes a single BIG-IP. You will want to adjust yours for your environment, but here is mine.

[test]
big-ip01.internal

Note that the name above is available in my local DNS. If you only have an IP address to work with, you can just add that to your inventory file. For example

[test]
192.168.10.2

Also note that I included a line called [test]. In Ansible this is referred to as a group. We will be revisiting this in the future as we begin orchestrating multiple BIG-IPs.
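Since every one of these modules talks to the BIG-IP through bigsuds, it can be worth a quick standalone check that the dependency and your credentials work before you start troubleshooting playbooks. This snippet is not from the original article, and the hostname and credentials are placeholders to adjust:

import bigsuds

b = bigsuds.BIGIP(hostname='big-ip01.internal', username='admin', password='admin')

# If this prints a version string, the Ansible BIG-IP modules should be able to connect too
print(b.System.SystemInfo.get_version())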
Let's dive in to some of Matt and Serge's BIG-IP modules!

bigip_facts

In Ansible, there is a pattern you often see over and over. This is the pattern of modules that provide information, and modules that change settings. For this tutorial I'll refer to them as

- reader modules
- writer modules

Without exception, reader modules are suffixed with an _facts string, while writer modules are not. So, with that in mind...pop quiz!

If I asked you whether the bigip_pool module was a reader module or a writer module, you would say...writer! If I asked you whether the bigip_pool_facts module was a reader module or a writer module, you would say...reader!

The bigip_facts module will return to you a number of facts about the BIG-IP in question. Let's take a look. First, my playbook.

- name: Test bigip_facts
  hosts: test
  connection: local

  tasks:
    - name: Get all of the facts from my BIG-IP
      bigip_facts:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        include: "system_info"

And now, let's just run it to see what sort of output it generates.

ansible-playbook -i hosts site.yaml -vvvv

You should be presented with information resembling the following.

root@debian:~# ansible-playbook -i hosts site.yaml -vvvv

PLAY [Test bigip_facts] *******************************************************

TASK: [Get all of the facts from my BIG-IP] ***********************************
...
ok: [big-ip01.internal] => {"ansible_facts": {"system_info": {"base_mac_address": "8A:83:8B:B1:AE:F7", "blade_temperature": [], "chassis_slot_information": [], "globally_unique_identifier": "8A:83:8B:B1:AE:F7", "group_id": "DefaultGroup", "hardware_information": [{"model": "Common KVM processor", "name": "cpus", "slot": 0, "type": "HARDWARE_BASE_BOARD", "versions": [{"name": "cache size", "value": "4096 KB"}, {"name": "cores", "value": "2"}, {"name": "cpu MHz", "value": "2199.994"}]}],
...
"os_version": "#1 SMP Mon Aug 11 19:54:07 PDT 2014", "platform": "Z100", "product_category": "Virtual Edition", "switch_board_part_revision": null, "switch_board_serial": null, "system_name": "Linux"}, "time": {"day": 28, "hour": 22, "minute": 1, "month": 1, "second": 7, "year": 2016}, "time_zone": {"gmt_offset": -8, "is_daylight_saving_time": false, "time_zone": "PST"}, "uptime": 854}}, "changed": false}
root@debian:~#

This module can output a lot of information that you can use in later tasks. Note that I just asked for the system_info facts, but there are a number of them that you can ask for, including

- address_class
- certificate
- client_ssl_profile
- device
- device_group
- interface
- key
- node
- pool
- rule
- self_ip
- software
- system_info
- traffic_group
- trunk
- virtual_address
- virtual_server
- vlan

You can include multiple types of facts by separating them with a comma. For example

- name: Test bigip_facts
  hosts: test
  connection: local

  tasks:
    - name: Get all of the facts from my BIG-IP
      bigip_facts:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        include: "system_info,software,self_ip"

Returns facts representing the system_info, software, and self IPs. You can use the generated facts in later tasks by referencing their JSON keys.
For example

- name: Test bigip_facts
  hosts: test
  connection: local

  tasks:
    - name: Get all of the facts from my BIG-IP
      bigip_facts:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        include: "system_info,software,self_ip"

    - name: Mention the Self IP
      debug:
        msg: "I have self IP {{ self_ip['/Common/net1'].address }}"

    - name: Mention the software
      debug:
        msg: "I have software version {{ software[0].version }}"

And the (truncated) output

...
TASK: [Mention the Self IP] ***************************************************
ok: [big-ip01.internal] => {
    "msg": "I have self IP 10.2.2.2"
}

TASK: [Mention the software] **************************************************
ok: [big-ip01.internal] => {
    "msg": "I have software version 11.6.0"
}
...

bigip_node

This module allows you to manipulate nodes in a BIG-IP. Nodes are logical objects on your BIG-IP that identify the IP address of a physical node on your network. In terms of what we can do with them with the existing BIG-IP modules, you can use this module to create nodes that you can later assign to pool members. First, let's show you the playbook that I'm going to run.

- name: Node shenanigans
  hosts: test
  connection: local

  tasks:
    - name: Add a new node
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "10.2.1.1"
        name: "member1"

This is a simple example of creating a node in my BIG-IP. As is probably apparent, if I am going to be working with pools, I would want to create many nodes. I'll hold off on that until a future example, but hopefully this clarifies the point of how to create the object that we can later assign to pool members. Let's run it!

ansible-playbook -i hosts site.yaml

And the output we should expect to see looks like this

root@debian:~# ansible-playbook -i hosts site.yaml

PLAY [Node shenanigans] *******************************************************

TASK: [Add a new node] ********************************************************
changed: [big-ip01.internal]

PLAY RECAP ********************************************************************
big-ip01.internal          : ok=1    changed=1    unreachable=0    failed=0

root@debian:~#

Now, just to clarify what gets dropped where, and where these nodes can be used, let's first look at the Local Traffic > Nodes screen. Now, let's navigate over to the pool screen and try to create a new pool. On the new pool creation screen, we have the option of specifying members of that pool. If we click on Node List, well, look at that. Our new node.

bigip_pool

This module allows you to create pools on your BIG-IPs. To these pools we can later add pool members. With this and the bigip_pool_member module, you can control the basic load balancing functionality of the BIG-IP. First, just to prove that I have nothing up my BIG-IP sleeve, here's my current pool list. And now, for my playbook

- name: Create a pool
  hosts: test
  connection: local

  tasks:
    - name: Create the pool1 pool
      bigip_pool:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "pool1"

With a wave of my magic wand...

root@debian:~# ansible-playbook -i hosts site.yaml

PLAY [Create a pool] *******************************************************

TASK: [Create the pool1 pool] ***********************************
...
changed: [big-ip01.internal] => {"changed": true}

PLAY RECAP ********************************************************************
big-ip01.internal          : ok=1    changed=1    unreachable=0    failed=0

root@debian:~#

...and my pool list has been changed.
The bigip_pool module has a number of other options that let you adjust settings of the pool. They are all documented on the module's page.

bigip_pool_member

With the bigip_pool_member module, you can manipulate the members of any of the pools on your BIG-IP. This module, in particular, is a crucial part of a rolling upgrade strategy that you may undertake when upgrading software on members of the pool. Consider the following scenario.

1. Remove (or disable) the member from the pool
2. Upgrade the member's software (you pick the software, but let's say Apache for example)
3. Verify the software upgrade
4. Add (or enable) the member back to the old pool, or, add the member to a new pool

This module can be used for steps 1 and 4. Let's have a look at my playbook.

- name: Pool member manipulation
  hosts: test
  connection: local

  tasks:
    - name: Drop members out of pool1
      bigip_pool_member:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        state: "absent"
        host: "{{ item }}"
        pool: "pool1"
        port: "22"
      with_items:
        - member1
        - member2

    - name: Intermediate processing
      debug:
        msg: "Upgrade some software"

    - name: More intermediate processing
      debug:
        msg: "Install some configuration"

    - name: Add member1 node back
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "{{ item }}"
        host: "10.2.1.1"
        state: "present"
      with_items:
        - member1

    - name: Add member2 node back
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "{{ item }}"
        host: "10.2.1.2"
        state: "present"
      with_items:
        - member2

    - name: Add member1 back to pool1
      bigip_pool_member:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "{{ item }}"
        pool: "pool1"
        port: "22"
        state: "present"
      with_items:
        - member1

    - name: Add member2 to pool2
      bigip_pool_member:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "{{ item }}"
        pool: "pool2"
        port: "22"
        state: "present"
      with_items:
        - member2

And before we run it, let's just take a peek at my current pools, as well as the members of pool1 and the members of pool2. Now, let's have a go at running the playbook.

ansible-playbook -i hosts site2.yaml

After it runs, you should have output that resembles something like this.

root@debian:~# ansible-playbook -i hosts site.yaml

PLAY [Pool member manipulation] ***********************************************

TASK: [Drop members out of pool1] *********************************************
changed: [big-ip01.internal] => (item=member1)
ok: [big-ip01.internal] => (item=member2)

TASK: [Intermediate processing] ***********************************************
ok: [big-ip01.internal] => {
    "msg": "Upgrade some software"
}

TASK: [More intermediate processing] ******************************************
ok: [big-ip01.internal] => {
    "msg": "Install some configuration"
}

TASK: [Add member1 node back] *************************************************
changed: [big-ip01.internal] => (item=member1)

TASK: [Add member2 node back] *************************************************
ok: [big-ip01.internal] => (item=member2)

TASK: [Add member1 back to pool1] *********************************************
changed: [big-ip01.internal] => (item=member1)

TASK: [Add member2 to pool2] **************************************************
ok: [big-ip01.internal] => (item=member2)

PLAY RECAP ********************************************************************
big-ip01.internal          : ok=7    changed=3    unreachable=0    failed=0

root@debian:~#

And your pool members should be different, check out the screenshots below.
(Screenshots: the pool list, the members of pool1, and the members of pool2.)

To Summarize

BIG-IP would not have a presence in Ansible if it were not for Matt and Serge's initiative. Based on the work they started, I'm happy to assist with testing and contributing new modules moving forward. In my entirely too biased opinion, Ansible fits in well with the tools and products I work on. Perhaps it will also fit in well with your workflow. If we've piqued your interest, then the Ansible mailing list is a great place to ask more Ansible related questions and further solidify your understanding. If you liked this article and want to see more like it, rate it as such. Have comments? Questions? Something else? Leave a comment below!

References

- http://docs.ansible.com/ansible/bigip_facts_module.html
- http://docs.ansible.com/ansible/bigip_node_module.html
- http://docs.ansible.com/ansible/bigip_pool_module.html
- http://docs.ansible.com/ansible/bigip_pool_member_module.html
- https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_configuration_guide_10_0_0/ltm_nodes.html#1188710
- https://groups.google.com/forum/#!forum/ansible-project

Getting Started with iControl: Working with the System
So far throughout this Getting Started with iControl series, we've worked our way through history, libraries, configuration objects, and statistics. In this final article in the series, we'll tackle a few system functions, namely system NTP and DNS settings, and then generating a qkview. Some of the system functions are new functionality to iControl with the REST portal, as system calls are not available via SOAP. The rule of thumb here is that if the system call is available in tmsh via the util branch, it's available in the REST portal.

System DNS Settings

When setting up the BIG-IP, some functions require an active DNS configuration to perform lookups. In the GUI, this is configured by navigating to System->Configuration->Device->DNS. In tmsh, it's configured by modifying the settings in /sys/dns. Note that there is not a create function here, as the DNS objects themselves already exist, whether or not you have configured them. Because of this, when updating these settings via iControl REST, you'll use the PUT method. An example tmsh configuration for the DNS settings is below:

[root@ltm3:Active:Standalone] config # tmsh list sys dns
sys dns {
    name-servers { 10.10.10.2 10.10.10.3 }
    search { test.local }
}

Formatted in JSON, this looks like:

{"nameServers": ["10.10.10.2", "10.10.10.3"], "search": ["test.local"]}

with a PUT to https://hostname/mgmt/tm/sys/dns.

PowerShell DNS Settings

#----------------------------------------------------------------------------
function Get-systemDNS()
#
# Description
# This function retrieves the system DNS configuration
#
#----------------------------------------------------------------------------
{
  $uri = "/mgmt/tm/sys/dns";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

  Write-Host "`nName Servers";
  Write-Host "------------";
  $items = $obj.nameServers;
  for($i=0; $i -lt $items.length; $i++) {
    $name = $items[$i];
    Write-Host "`t$name";
  }

  Write-Host "`nSearch Domains";
  $items = $obj.search;
  for($i=0; $i -lt $items.length; $i++) {
    $name = $items[$i];
    Write-Host "`t$name";
  }
  Write-Host "`n"
}

#----------------------------------------------------------------------------
function Set-systemDNS()
#
# Description
# This function sets the system DNS configuration
#
#----------------------------------------------------------------------------
{
  param(
    [array]$Servers,
    [array]$SearchDomains
  );
  $uri = "/mgmt/tm/sys/dns";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);
  $headers.Add("Content-Type", "application/json");

  $obj = @{
    nameServers = $Servers
    search = $SearchDomains
  };
  $body = $obj | ConvertTo-Json

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method PUT -Uri $link -Headers $headers -Credential $mycreds -Body $body;
  Get-systemDNS;
}

Python DNS Settings

def get_sys_dns(bigip, url):
    try:
        dns = bigip.get('%s/sys/dns' % url).json()
        print "\n\n\tName Servers:"
        for server in dns['nameServers']:
            print "\t\t%s" % server
        print "\n\tSearch Domains:"
        for domain in dns['search']:
            print "\t\t%s\n\n" % domain
    except Exception, e:
        print e

def set_sys_dns(bigip, url, servers, search):
    servers = [x.strip() for x in servers.split(',')]
    search = [x.strip() for x in search.split(',')]
    payload = {}
    payload['nameServers'] = servers
    payload['search'] = search

    try:
        bigip.put('%s/sys/dns' % url, json.dumps(payload))
        get_sys_dns(bigip, url)
    except Exception, e:
        print e
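The Python functions in this series expect a pre-built session object and the base URL of the REST interface. The exact setup isn't shown in this excerpt, so the following wiring with the requests library is an assumption to adapt (the hostname and credentials are placeholders):

import requests

BIGIP = 'bigip.example.com'                 # assumed hostname
URL_BASE = 'https://%s/mgmt/tm' % BIGIP

bigip = requests.Session()
bigip.auth = ('admin', 'admin')
bigip.verify = False                        # lab only; verify certificates in production
bigip.headers.update({'Content-Type': 'application/json'})

# Query, then update, the DNS settings using the functions defined above
get_sys_dns(bigip, URL_BASE)
set_sys_dns(bigip, URL_BASE, '10.10.10.2, 10.10.10.3', 'test.local')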
System NTP Settings

NTP is another service that some functions, like APM access, require. In the GUI, the settings for NTP are configured in two places. First, the servers are configured in System->Configuration->Device->NTP. The timezone is configured on the System->Platform page. In tmsh, the settings are configured in /sys/ntp. Like DNS, these objects already exist, so the iControl REST method will be a PUT instead of a POST. An example tmsh configuration for the NTP settings is below:

[root@ltm3:Active:Standalone] config # tmsh list sys ntp
sys ntp {
    servers { 10.10.10.1 }
    timezone America/Chicago
}

Formatted in JSON, this looks like:

{"servers": ["10.10.10.1"], "timezone": "America/Chicago"}

with a PUT to https://hostname/mgmt/tm/sys/ntp.

PowerShell NTP Settings

#----------------------------------------------------------------------------
function Get-systemNTP()
#
# Description
# This function retrieves the system NTP configuration
#
#----------------------------------------------------------------------------
{
  $uri = "/mgmt/tm/sys/ntp";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method GET -Headers $headers -Uri $link -Credential $mycreds

  Write-Host "`nNTP Servers";
  Write-Host "------------";
  $items = $obj.servers;
  for($i=0; $i -lt $items.length; $i++) {
    $name = $items[$i];
    Write-Host "`t$name";
  }

  Write-Host "`nTimezone";
  $item = $obj.timezone;
  Write-Host "`t$item`n";
}

#----------------------------------------------------------------------------
function Set-systemNTP()
#
# Description
# This function sets the system NTP configuration
#
#----------------------------------------------------------------------------
{
  param(
    [array]$Servers,
    [string]$TimeZone
  );
  $uri = "/mgmt/tm/sys/ntp";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("ServerHost", $Bigip);
  $headers.Add("Content-Type", "application/json");

  $obj = @{
    servers = $Servers
    timezone = $TimeZone
  };
  $body = $obj | ConvertTo-Json

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = Invoke-RestMethod -Method PUT -Uri $link -Headers $headers -Credential $mycreds -Body $body;
  Get-systemNTP;
}

Python NTP Settings

def get_sys_ntp(bigip, url):
    try:
        ntp = bigip.get('%s/sys/ntp' % url).json()
        print "\n\n\tNTP Servers:"
        for server in ntp['servers']:
            print "\t\t%s" % server
        print "\n\tTimezone: \n\t\t%s" % ntp['timezone']
    except Exception, e:
        print e

def set_sys_ntp(bigip, url, servers, tz):
    servers = [x.strip() for x in servers.split(',')]
    payload = {}
    payload['servers'] = servers
    payload['timezone'] = tz

    try:
        bigip.put('%s/sys/ntp' % url, json.dumps(payload))
        get_sys_ntp(bigip, url)
    except Exception, e:
        print e

Generating a qkview

Moving on from configuring some system settings to running a system task, we'll now focus on a common operational task: running a qkview. In the GUI, this is done via System->Support.
This is easy enough at the command line as well with the simple qkview command at the bash prompt, or via tmsh as shown below:

[root@ltm3:Active:Standalone] config # tmsh run util qkview
Gathering System Diagnostics: Please wait ...
Diagnostic information has been saved in:
/var/tmp/ltm3.test.local.qkview
Please send this file to F5 support.

Formatted in JSON, this looks like:

{"command": "run"}

with a POST to https://hostname/mgmt/tm/util/qkview.

PowerShell - Run a qkview

#----------------------------------------------------------------------------
function Gen-QKView()
#
# Description
# This function generates a qkview on the system
#
#----------------------------------------------------------------------------
{
  $uri = "/mgmt/tm/util/qkview";
  $link = "https://$Bigip$uri";
  $headers = @{};
  $headers.Add("serverHost", $Bigip);
  $headers.Add("Content-Type", "application/json");

  $secpasswd = ConvertTo-SecureString $Pass -AsPlainText -Force
  $mycreds = New-Object System.Management.Automation.PSCredential ($User, $secpasswd)

  $obj = @{ command='run' };
  $body = $obj | ConvertTo-Json

  Write-Host ("Running qkview...standby")
  $obj = Invoke-RestMethod -Method POST -Uri $link -Headers $headers -Credential $mycreds -Body $body;
  Write-Host ("qkview is complete and is available in /var/tmp.")
}

Python - Run a qkview

def gen_qkview(bigip, url):
    payload = {}
    payload['command'] = 'run'

    try:
        print "\n\tRunning qkview...standby"
        qv = bigip.post('%s/util/qkview' % url, json.dumps(payload)).text
        if 'saved' in qv:
            print '\tqkview is complete and available in /var/tmp.'
    except Exception, e:
        print e

Going Further

Now that you have a couple basic tasks in your arsenal, what else might you add? Here are a couple exercises for you to build your toolbox:

- Configure sshd, creating a custom security banner.
- Configure snmp.
- Extend the qkview function to support downloading the qkview file generated.

Resources

All the scripts for this article are available in the codeshare under "Getting Started with iControl Code Samples."

Managing ZoneRunner Resource Records with Bigsuds
Over the last several years, there have been questions internal and external on how to manage ZoneRunner (the GUI tool in F5 DNS that allows you to manage DNS zones and records) resources via the REST interface. But that's a no can do with the iControl REST--it doesn't have that functionality. It was brought to my attention by one of our solutions engineers that a customer is using some methods in the SOAP interface that allows you to do just that...which was news to me! The things you learn... In this article, I'll highlight a few of the methods available to you and work on a sample domain in the python module bigsuds that utilizes the suds SOAP library for communication duties with the BIG-IP iControl SOAP interface. Test Domain & Procedure For demonstration purposes, I'll create a domain in the external view, dctest1.local, with the following attributes that mirrors nearly identically one I created in the GUI: Type: master Zone Name: dctest1.local. Zone File Name: db.external.dctest1.local. Options: allow-update from localhost TTL: 500 SOA: ns1.dctest1.local. Email: hostmaster.ns1.dctest1.local. Serial: 2021092201 Refresh: 10800 Retry: 3600 Expire: 604800 Negative TTL: 60 I'll also add a couple type A records to that domain: name: mail.dctest1.local., address: 10.0.2.25, TTL: 86400 name: www.dctest1.local., address: 10.0.2.80, TTL: 3600 After adding the records, I'll update one of them, changing the IP and the TTL: name: mail.dctest1.local., address: 10.0.2.110, ttl: 900 Then I'll delete the other one: name: www.dctest1.local., address: 10.0.2.80, TTL: 3600 And finally, I'll delete the zone: name: dctest1.local. ZoneRunner Methods All the methods can be found on Clouddocs in the ZoneRunner, Zone, and ResourceRecord method pages. The specific methods we'll use in our highlight real are: Management.ResourceRecord.add_a Management.ResourceRecord.delete_a Management.ResourceRecord.get_rrs Management.ResourceRecord.update_a Management.Zone.add_zone_text Management.Zone.get_zone_v2 Management.Zone.zone_exist With each method, there is a data structure that the interface expects. Each link above provides the details, but let's look at an example with the add_a method. The method requires three parameters, view_zones, a_records, and sync_ptrs, which the image of the table shows below. The boolean is just a True/False value in a list. The reason the list ( [] ) is there for all the attributes is because you can send a single request to update more than one zone, and addmore than one record within each zone if desired. The data structure for view_zones and a_records is in the following two images. Now that we have an idea of what the methods require, let's take a look at some code! Methods In Action First, I import bigsuds and initialize the BIG-IP. The arguments are ordered in bigsuds for host, username, and password. If the default “admin/admin” is used, they are assumed, as is shown here. import bigsuds b = bigsuds.BIGIP(hostname='ltm3.test.local') Next, I need to format the ViewZone data in a native python dictionary, and then I check for the existence of that zone. zone_view = {'view_name': 'external', 'zone_name': 'dctest1.local.' } b.Management.Zone.zone_exist([zone_view]) # [0] Note that the return value, which should be a list of booleans, is a list with a 0. I’m guessing that’s either suds or the bigsuds implementation doing that, but it’s important to note if you’re checking for a boolean False. 
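If you want to guard against that quirk in your own scripts, a tiny helper keeps the intent obvious. This is just a convenience sketch of mine, not part of bigsuds itself:

def soap_bool(result):
    # suds/bigsuds hands back sequences of 0/1 rather than True/False,
    # so coerce the first element explicitly before testing it
    return bool(result[0])

if not soap_bool(b.Management.Zone.zone_exist([zone_view])):
    print('zone does not exist yet, safe to create it')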
It’s also necessary to set the booleans as 0 or 1 as well when sending requests to BIG-IP with bigsuds. Now I will create the zone since it does not yet exist. From the add_zone_text method description on Clouddocs, note that I need to supply, in separate parameters, the zone info, the appropriate zone records, and the boolean to sync reverse records or not. zone_add_info = {'view_name': 'external', 'zone_name': 'dctest1.local.', 'zone_type': 'MASTER', 'zone_file': 'db.external.dctest1.local.', 'option_seq': ['allow-update { localhost;};']} zone_add_records = 'dctest1.local. 500 IN SOA ns1.dctest1.local. hostmaster.ns1.dctest1.local. 2021092201 10800 3600 604800 60;\n' \ 'dctest1.local. 3600 IN NS ns1.dctest1.local.;\n' \ 'ns1.dctest1.local. 3600 IN A 10.0.2.1;' b.Management.Zone.add_zone_text([zone_add_info], [[zone_add_records]], [0]) b.Management.Zone.zone_exist([zone_view]) # [1] Note that the strings here require a detailed understanding of DNS record formatting, the individual fields are not parameters that can be set like in the ZoneRunner GUI. But, I am confident there is an abundance of modules that manage DNS formatting in the python ecosystem that could simplify the data structuring. After creating the zone, another check to see if the zone exists results in a true condition. Huzzah! Now I’ll check the zone info and the existing records for that zone. zone = b.Management.Zone.get_zone_v2([zone_view]) for k, v in zone[0].items(): print(f'{k}: {v}') # view_name: external # zone_name: dctest1.local. # zone_type: MASTER # zone_file: "db.external.dctest1.local." # option_seq: ['allow-update { localhost;};'] rrs = b.Management.ResourceRecord.get_rrs([zone_view]) for rr in rrs[0]: print(rr) # dctest1.local. 500 IN SOA ns1.dctest1.local. hostmaster.ns1.dctest1.local. 2021092201 10800 3600 604800 60 # dctest1.local. 3600 IN NS ns1.dctest1.local. # ns1.dctest1.local. 3600 IN A 10.0.2.1 Everything checks outs! Next I’ll create the A records for the mail and www services. I’m going to add a filter to only check for the mail/www services for printing to cut down on the lines, but know that they’re still there going forward. a1 = {'domain_name': 'mail.dctest1.local.', 'ip_address': '10.0.2.25', 'ttl': 86400} a2 = {'domain_name': 'www.dctest1.local.', 'ip_address': '10.0.2.80', 'ttl': 3600} b.Management.ResourceRecord.add_a(view_zones=[zone_view], a_records=[[a1, a2]], sync_ptrs=[0]) rrs = b.Management.ResourceRecord.get_rrs([zone_view]) for rr in rrs[0]: if any(item in rr for item in ['mail', 'www']): print(rr) # mail.dctest1.local. 86400 IN A 10.0.2.25 # www.dctest1.local. 3600 IN A 10.0.2.80 Here you can see that I’m adding two records to the zone specified and not creating the reverse records (not included for brevity, but in prod would be likely). Now I’ll update the mail address and TTL. b.Management.ResourceRecord.update_a([zone_view], [[a1]], [[a1_update]], [0]) rrs = b.Management.ResourceRecord.get_rrs([zone_view]) for rr in rrs[0]: if any(item in rr for item in ['mail', 'www']): print(rr) # mail.dctest1.local. 900 IN A 10.0.2.110 # www.dctest1.local. 3600 IN A 10.0.2.80 You can see that the address and TTL updated as expected. Note that with the update_/N/ methods, you need to provide the old and new, not just the new. Let’s get destruction and delete the www record! b.Management.ResourceRecord.delete_a([zone_view], [[a2]], [0]) rrs = b.Management.ResourceRecord.get_rrs([zone_view]) for rr in rrs[0]: if any(item in rr for item in ['mail', 'www']): print(rr) # mail.dctest1.local. 
900 IN A 10.0.2.110

And your web service is now unreachable via DNS. Congratulations! But there’s more damage we can do: it’s time to delete the whole zone.

b.Management.Zone.delete_zone([zone_view])
b.Management.Zone.zone_exist([zone_view])
# [0]

And that’s a wrap! As I said, it’s been years since I have spent time with the iControl SOAP interface. It’s nice to know that even though most of what we do is done through REST, imperatively or declaratively, some missing functionality in that interface is still alive and kicking via SOAP. H/T to Scott Huddy for the nudge to investigate this. Questions? Drop me a comment below. Happy coding! A gist of these samples is available on GitHub.

64-bit numbers in perl
A question recently came up in the iControl forums about how to deal with the statistics returned by our iControl interfaces. In 9.0, we changed the interfaces to return true 64-bit values consisting of a structure of a low and high 32-bit value (the SOAP encoding specs, and some languages, lack support for native 64-bit numbers). Our SDK samples show something like this:

$low = $stat->{"low"};
$high = $stat->{"high"};
$value64 = ($high<<32)|$low;

The question was asked, how come the value is truncated to a 32-bit number? I asked myself that same question, as it worked for my build. After some digging, it turns out that perl needs to be compiled specifically to include support for 64-bit numbers. You can check your installation by using the Config module:

use Config;
($Config{use64bitint} eq 'define' || $Config{longsize} >= 8) && print "Supports 64-bit numbers\n";

So, what do you do if you aren't able to rebuild your version of perl? Well, thanks to a user who pointed me this way, there is a perl module to support large numbers. As long as you have the Math::BigInt package installed, you can use it as follows:

use Math::BigInt;
$low = $stat->{"low"};
$high = $stat->{"high"};
$value64 = Math::BigInt->new($high)->blsft(32)->bxor($low);

Happy Coding!

-Joe

BIG-IP Configuration Visualizer - iControl Style
I posted almost two years ago to the day on a cool tool called BIG-IP Config Visualizer, or BCV, that one of our field engineers put together. It utilizes a BIG-IP config parser and GraphViz to create images visualizing the relationship of configuration objects for a particular virtual server. Well, I’m here to report that another community user, Russell Moore, has taken that work to the next level. Rather than trying to figure out the nuances of configuration objects amongst all the versions of BIG-IP, he converted the script to utilize iControl! In this tech tip, I’ll walk through the installation steps necessary to get this tool off the ground.

The Setup

Install a few libraries and GraphViz via apt-get:

apt-get install libssl-dev libcrypt-ssleay-perl libio-socket-ssl-perl libgraph-writer-graphviz-perl

Open a CPAN shell and install SOAP::Lite and Net::Netmask:

perl -MCPAN -e shell
install SOAP::Lite
install Net::Netmask

After installing those libraries and tools, grab the BCV-iControl source from the codeshare, save it as an executable (bcv.pl on my system) and set these variables (I only changed the connection-related values: $ltm_host, $user_id, and $user_password):

#Declare CLI $vars
my $vs1;
my $new_dir = 'NO_DIR';
my $extension = 'NO_EXT';
my $ltm_host = "172.16.99.5";
my $ltm_port = '443';
my $user_id = "admin";
my $req_partition;
my $user_password = "admin";
my $ltm_protocol = 'https';
my $path;
my $dir;

Finally, some command-line options:

root@ubuntu:/home/jrahm# ./bcv.pl -h
Thank you for using BIG-IP Configuration Visualizer (BCV 1.16.1-revisited with soap)
-v <VS_NAME> this prints the specified virtual server and requires option -c. Default is to print all
-c Specify the partition/container to look in for option -v
-t <iControl host LTM> specify ltm_host IP we will connect to
-d specifies a directory you want the images in. Has to be in Current working Directory: /home/jrahm (Default is /img)
-e Define image format options: svg, png (default is jpg)
-help for help but you already found it

The Payoff

Now that all the legwork is complete, we can play!

root@ubuntu:/home/jrahm# ./bcv.pl
Please wait while we build some maps of your system.
Retrieving SelfIPs in Partition: ** Common **
Mapping Partition: ** Common ** routes to gateways
Mapping Partition: ** Common ** selfIPs and VLANs..
Mapping Partition: ** Common ** pools and iRule references to pools............
Mapping Partition: ** Common ** virtual servers and properties...
Drawing VS: dc.hashtest which is 1 of 3 in Partition: Common
Drawing VS: testvip1 which is 2 of 3 in Partition: Common
Drawing VS: management_vip which is 3 of 3 in Partition: Common
All drawings completed! They can be found in: /home/jrahm/img

Taking a look at the virtual server I used for the hashing algorithm distribution tech tip:

Conclusion

Visual representations of configurations are incredibly helpful in identifying issues quickly. An interesting next step would be to track the state of objects from one iteration of the drawings to the next, and build a page to include all the images. That would make a nice and cheap dashboard for application owners or operating centers. Any takers? Thanks to community user Russell Moore, who took a great contributed tool and made it better with iControl!

These Are Not The Scrapes You're Looking For - Session Anomalies
In my first article in this series, I discussed web scraping -- what it is, why people do it, and why it could be harmful. My second article outlined the details of bot detection and how the ASM blocks against these pesky little creatures. This last article in the series of web scraping will focus on the final part of the ASM defense against web scraping: session opening anomalies and session transaction anomalies. These two detection modes are new in v11.3, so if you're using v11.2 or earlier, then you should upgrade and take advantage of these great new features! ASM Configuration In case you missed it in the bot detection article, here's a quick screenshot that shows the location and settings of the Session Opening and Session Transactions Anomaly in the ASM. You'll find all the fun when you navigate to Security > Application Security > Anomaly Detection > Web Scraping. There are three different settings in the ASM for Session Anomaly: Off, Alarm, and Alarm and Block. (Note: these settings are configured independently...they don't have to be set at the same value) Obviously, if Session Anomaly is set to "Off" then the ASM does not check for anomalies at all. The "Alarm" setting will detect anomalies and record attack data, but it will allow the client to continue accessing the website. The "Alarm and Block" setting will detect anomalies, record the attack data, and block the suspicious requests. Session Opening Anomaly The first detection and prevention mode we'll discuss is Session Opening Anomaly. But before we get too deep into this, let's review what a session is. From a simple perspective, a session begins when a client visits a website, and it ends when the client leaves the site (or the client exceeds the session timeout value). Most clients will visit a website, surf around some links on the site, find the information they need, and then leave. When clients don't follow a typical browsing pattern, it makes you wonder what they are up to and if they are one of the bad guys trying to scrape your site. That's where Session Opening Anomaly defense comes in! Session Opening Anomaly defense checks for lots of abnormal activities like clients that don't accept cookies or process JavaScript, clients that don't scrape by surfing internal links in the application, and clients that create a one-time session for each resource they consume. These one-time sessions lead scrapers to open a large number of new sessions in order to complete their job quickly. What's Considered A New Session? Since we are discussing session anomalies, I figured we should spend a few sentences on describing how the ASM differentiates between a new or ongoing session for each client request. Each new client is assigned a "TS cookie" and this cookie is used by the ASM to identify future requests from the client with a known, ongoing session. If the ASM receives a client request and the request does not contain a TS cookie, then the ASM knows the request is for a new session. This will prove very important when calculating the values needed to determine whether or not a client is scraping your site. Detection There are two different methods used by the ASM to detect these anomalies. The first method compares a calculated value to a predetermined ceiling value for newly opened sessions. The second method considers the rate of increase of newly opened sessions. We'll dig into all that in just a minute. But first, let's look at the criteria used for detecting these anomalies. 
As you can see from the screenshot above, there are three detection criteria the ASM uses...they are: Sessions opened per second increased by: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second increases by a given percentage. The default setting is 500 percent. Sessions opened per second reached: This specifies that the ASM considers client traffic to be an attack if the number of sessions opened per second is greater than or equal to this number. The default value is 400 sessions opened per second. Minimum sessions opened per second threshold for detection: This specifies that the ASM considers traffic to be an attack if the number of sessions opened per second is greater than or equal to the number specified. In addition, at least one of the "Sessions opened per second increased by" or "Sessions opened per second reached" numbers must also be reached. If the number of sessions opened per second is lower than the specified number, the ASM does not consider this traffic to be an attack even if one of the "Sessions opened per second increased by" or "Sessions opened per second" reached numbers was reached. The default value for this setting is 200 sessions opened per second. In addition, the ASM maintains two variables for each client IP address: a one-minute running average of new session opening rate, and a one-hour running average of new session opening rate. Both of these variables are recalculated every second. Now that we have all the basic building blocks. let's look at how the ASM determines if a client is scraping your site. First Method: Predefined Ceiling Value This method uses the user-defined "minimum sessions opened per second threshold for detection" value and compares it to the one-minute running average. If the one-minute average is less than this number, then nothing else happens because the minimum threshold has not been met. But, if the one-minute average is higher than this number, the ASM goes on to compare the one-minute average to the user-defined "sessions opened per second reached" value. If the one-minute average is less than this value, nothing happens. But, if the one-minute average is higher than this value, the ASM will declare the client a web scraper. The following flowchart provides a pictorial representation of this process. Second Method: Rate of Increase The second detection method uses several variables to compare the rate of increase of newly opened sessions against user-defined variables. Like the first method, this method first checks to make sure the minimum sessions opened per second threshold is met before doing anything else. If the minimum threshold has been met, the ASM will perform a few more calculations to determine if the client is a web scraper or not. The "sessions opened per second increased by" value (percentage) is multiplied by the one-hour running average and this value is compared to the one-minute running average. If the one-minute average is greater, then the ASM declares the client a web scraper. If the one-minute average is lower, then nothing happens. The following matrix shows a few examples of this detection method. Keep in mind that the one-minute and one-hour averages are recalculated every second, so these values will be very dynamic. Prevention The ASM provides several policies to prevent session opening anomalies. It begins with the first method that you enable in this list. 
If the system finds this method not effective enough to stop the attack, it uses the next method that you enable in this list. The following screenshots show the different options available for prevention. The "Drop IP Addresses with bad reputation" is tied to Rate Limiting, so it will not appear as an option unless you enable Rate Limiting. Note that IP Address Intelligence must be licensed and enabled. This feature is licensed separately from the other ASM web scraping options. Here's a quick breakdown of what each of these prevention policies do for you: Client Side Integrity Defense: The system determines whether the client is a legal browser or an illegal script by sending a JavaScript challenge to each new session request from the detected IP address, and waiting for a response. The JavaScript challenge will typically involve some sort of computational challenge. Legal browsers will respond with a TS cookie while illegal scripts will not. The default for this feature is disabled. Rate Limiting: The goal of Rate Limiting is to keep the volume of new sessions at a "non-attack" level. The system will drop sessions from suspicious IP addresses after the system determines that the client is an illegal script. The default for this feature is also disabled. Drop IP Addresses with bad reputation: The system drops requests from IP addresses that have a bad reputation according to the system’s IP Address Intelligence database (shown above). The ASM will drop all request from any "bad" IP addresses even if they respond with a TS cookie. IP addresses that do not have a bad reputation also undergo rate limiting. The default for this option is disabled. Keep in mind that this option is available only after Rate Limiting is enabled. In addition, this option is only enforced if at least one of the IP Address Intelligence Categories is set to Alarm mode. Prevention Duration Now that we have detected session opening anomalies and mitigated them using our prevention options, we must figure out how long to apply the prevention measures. This is where the Prevention Duration comes in. This setting specifies the length of time that the system will prevent an attack. The system prevents attacks by rejecting requests from the attacking IP address. There are two settings for Prevention Duration: Unlimited: This specifies that after the system detects and stops an attack, it performs attack prevention until it detects the end of the attack. This is the default setting. Maximum <number of> seconds: This specifies that after the system detects and stops an attack, it performs attack prevention for the amount of time indicated unless the system detects the end of the attack earlier. So, to finish up our Session Opening Anomaly part of this article, I wanted to share a quick scenario. I was recently reading several articles from some of the web scrapers around the block, and I found one guy's solution to work around web scraping defense. Here's what he said: "Since the service conducted rate-limiting based on IP address, my solution was to put the code that hit their service into some client-side JavaScript, and then send the results back to my server from each of the clients. This way, the requests would appear to come from thousands of different places, since each client would presumably have their own unique IP address, and none of them would individually be going over the rate limit." This guy is really smart! And, this would work great against a web scraping defense that only offered a Rate Limiting feature. 
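Before the pop quiz, it helps to pull the session opening detection logic into one place. The sketch below is illustrative pseudologic only; the function name, parameter names, and sample numbers are mine, not ASM internals, but the flow follows the two detection methods and the minimum threshold described above.

def session_opening_attack(one_minute_avg, one_hour_avg,
                           minimum_threshold=200,
                           sessions_reached=400,
                           increased_by_pct=500):
    # Nothing is evaluated until the minimum detection threshold is met
    if one_minute_avg < minimum_threshold:
        return False
    # First method: compare the one-minute average to the predefined ceiling
    if one_minute_avg >= sessions_reached:
        return True
    # Second method: compare the one-minute average to the one-hour average
    # scaled by the "increased by" percentage
    return one_minute_avg > one_hour_avg * (increased_by_pct / 100.0)

# A one-minute average of 350 new sessions per second against a one-hour
# average of 50: under the 400 ceiling, but a 700% jump, so method two fires.
print(session_opening_attack(350, 50))    # True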
Here's the pop quiz question: If a user were to deploy this same tactic against the ASM, what would you do to catch this guy? I'm thinking you would need to set your minimum threshold at an appropriate level (this will ensure the ASM kicks into gear when all these sessions are opened) and then the "sessions opened per second" or the "sessions opened per second increased by" should take care of the rest for you. As always, it's important to learn what each setting does and then test it on your own environment for a period of time to ensure you have everything tuned correctly. And, don't forget to revisit your settings from time to time...you will probably need to change them as your network environment changes. Session Transactions Anomaly The second detection and prevention mode is Session Transactions Anomaly. This mode specifies how the ASM reacts when it detects a large number of transactions per session as well as a large increase of session transactions. Keep in mind that web scrapers are designed to extract content from your website as quickly and efficiently as possible. So, web scrapers normally perform many more transactions than a typical application client. Even if a web scraper found a way around all the other defenses we've discussed, the Session Transaction Anomaly defense should be able to catch it based on the sheer number of transactions it performs during a given session. The ASM detects this activity by counting the number of transactions per session and comparing that number to a total average of transactions from all sessions. The following screenshot shows the detection and prevention criteria for Session Transactions Anomaly. Detection How does the ASM detect all this bad behavior? Well, since it's trying to find clients that surf your site much more than other clients, it tracks the number of transactions per client session (note: the ASM will drop a session from the table if no transactions are performed for 15 minutes). It also tracks the average number of transactions for all current sessions (note: the ASM calculates the average transaction value every minute). It can use these two figures to compare a specific client session to a reasonable baseline and figure out if the client is performing too many transactions. The ASM can automatically figure out the number of transactions per client, but it needs some user-defined thresholds to conduct the appropriate comparisons. These thresholds are as follows: Session transactions increased by: This specifies that the system considers traffic to be an attack if the number of transactions per session increased by the percentage listed. The default setting is 500 percent. Session transactions reached: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number. The default value is 400 transactions. Minimum session transactions threshold for detection: This specifies that the system considers traffic to be an attack if the number of transactions per session is equal to or greater than this number, and at least one of the "Sessions transactions increased by" or "Session transactions reached" numbers was reached. If the number of transactions per session is lower than this number, the system does not consider this traffic to be an attack even if one of the "Session transactions increased by" or "Session transaction reached" numbers was reached. The default value is 200 transactions. 
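These thresholds read naturally as a short decision routine. As with the earlier session opening sketch, this is illustrative pseudologic with names of my own choosing, not ASM source; the table walkthrough that follows applies the same checks to concrete numbers.

def session_transaction_attack(session_txns, avg_txns,
                               minimum_threshold=200,
                               transactions_reached=400,
                               increased_by_pct=500):
    # The per-session count must reach the minimum threshold before
    # either detection method is considered
    if session_txns < minimum_threshold:
        return False
    # First method: the session reached the absolute transaction ceiling
    if session_txns >= transactions_reached:
        return True
    # Second method: the session exceeds the all-session average scaled by
    # the "increased by" percentage
    return session_txns > avg_txns * (increased_by_pct / 100.0)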
The following table shows an example of how the ASM calculates transaction values (averages and individual sessions). We would expect that a given client session would perform about the same number of transactions as the overall average number of transactions per session. But, if one of the sessions is performing a significantly higher number of transactions than the average, then we start to get suspicious. You can see that session 1 and session 3 have transaction values higher than the average, but that only tells part of the story. We need to consider a few more things before we decide if this client is a web scraper or not. By the way, if the ASM knows that a given session is malicious, it does not use that session's transaction numbers when it calculates the average.

Now, let's roll in the threshold values that we discussed above. If the ASM is going to declare a client as a web scraper using the session transaction anomaly defense, the session transactions must first reach the minimum threshold. Using our default minimum threshold value of 200, the only session that exceeded the minimum threshold is session 3 (250 > 200). All other sessions look good so far...keep in mind that these numbers will change as the client performs additional transactions during the session, so more sessions may be considered as their transaction numbers increase. Since we have our eye on session 3 at this point, it's time to look at our two methods of detecting an attack.

The first detection method is a simple comparison of the total session transaction value to our user-defined "session transactions reached" threshold. If the total session transaction value is larger than the threshold, the ASM will declare the client a web scraper. Our example would look like this:

Is session 3 transaction value > threshold value (250 > 400)? No, so the ASM does not declare this client as a web scraper.

The second detection method uses the "transactions increased by" value along with the average transaction value for all sessions. The ASM multiplies the average transaction value by the "transactions increased by" percentage to calculate the value needed for comparison. Our example would look like this:

90 * 500% = 450 transactions
Is session 3 transaction value > result (250 > 450)? No, so the ASM does not declare this client as a web scraper.

By the way, only one of these detection methods needs to be met for the ASM to declare the client as a web scraper. You should be able to see how the user-defined thresholds are used in these calculations and comparisons. So, it's important to raise or lower these values as you need for your environment.

Prevention Duration

In order to save you a bunch of time reading about prevention duration, I'll just say that the Session Transactions Anomaly prevention duration works the same as the Session Opening Anomaly prevention duration (Unlimited vs Maximum <number of> seconds). See, that was easy!

Conclusion

Thanks for spending some time reading about session anomalies and web scraping defense. The ASM does a great job of detecting and preventing web scrapers from taking your valuable information. One more thing...for an informative anomaly discussion on the DevCentral Security Forum, check out this conversation. If you have any questions about web scraping or ASM configurations, let me know...you can fill out the comment section below or you can contact the DevCentral team at https://devcentral.f5.com/s/community/contact-us.

iControl Library For Java With Source
Included are the binary distributions of the iControl library for Java and Apache Axis. These releases coincide with BIG-IP releases through version 13.1.0.

iControl Assembly for Java 11.3.0
iControl Assembly for Java 11.4.0
iControl Assembly for Java 11.4.1
iControl Assembly for Java 11.5.0
iControl Assembly for Java 11.6.0
iControl Assembly for Java 12.0.0
iControl Assembly for Java 12.1.0
iControl Assembly for Java 13.0.0
iControl Assembly for Java 13.1.0

Also included are the source distributions of the iControl library for Java with Apache Axis. These releases coincide with BIG-IP releases through version 12.1.0. The source code for this project is no longer maintained, but it is available in our f5-icontrol-library-java repository on GitHub.

iControl Assembly Java Source 11.4.0 (older release not available on Github)
iControl Assembly Java Source 11.4.1
iControl Assembly Java Source 11.5.0
iControl Assembly Java Source 11.6.0
iControl Assembly Java Source 12.1.0
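If you are new to the Java library, the typical usage pattern from the DevCentral Java samples looks like the sketch below: instantiate the Interfaces wrapper, initialize it with your connection details, and call into the iControl module you need. The host and credentials are placeholders, and the method names are worth double-checking against the version of the assembly you download.

import java.util.Arrays;

public class PoolList {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own BIG-IP and credentials
        String host = "192.168.1.245";
        String user = "admin";
        String pass = "admin";

        // Interfaces wraps the generated Axis bindings for each iControl module
        iControl.Interfaces ic = new iControl.Interfaces();
        ic.initialize(host, user, pass);

        // List the LTM pools on the device via LocalLB::Pool::get_list
        String[] pools = ic.getLocalLBPool().get_list();
        System.out.println(Arrays.toString(pools));
    }
}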