Handy Ansible Hacks
F5 Automation with Ansible Tips and Tricks
Getting Started with Ansible and F5

In this article we provide a simple set of videos that demonstrate, step by step, how to implement automation with Ansible. In the last video we also demonstrate how telemetry and automation can be used in combination to address potential performance bottlenecks and ensure application availability. To start, here are details on how to get started with Ansible automation using the Ansible Automation Platform®:

Backing up your F5 device

Once you have installed and configured Ansible Automation Platform, we transition to a basic maintenance function: an automated backup of a BIG-IP hardware device or Virtual Edition (VE). This is always recommended before major changes are made to a BIG-IP device. (A minimal backup playbook sketch appears at the end of this article.)

Configuring a Virtual Server

Next, we use Ansible to configure a Virtual Server, a task that is most frequently performed manually on the BIG-IP. When changes to a BIG-IP are infrequent, manual intervention may not be so cumbersome; however, large enterprise customers may need to perform these tasks hundreds of times:

Replace an SSL Certificate

The next video demonstrates how to use Ansible to replace an SSL certificate on a BIG-IP. Note that the video shows the certificate being applied on a BIG-IP and then validated by browsing to the application website:

Configure and Deploy an iRule

The next administrative function demonstrates how to configure and push an iRule onto a BIG-IP device using the Ansible Automation Platform®. Again, this is a standard administrative task that can be simply automated via Ansible:

Delete the Existing Virtual Server

Now we delete the configuration above to roll back to a steady state. This is a common administrative task when an application is retired. We again demonstrate how Ansible automation can be used to perform these simple administrative tasks:

Telemetry and Automation: Using Threshold Triggers to Automate Tasks and Fix Performance Bottlenecks

You now have a clear demonstration of how to utilize Ansible automation to perform routine tasks on a BIG-IP platform. Once you have become proficient with routine Ansible tasks, you can explore more high-level, sophisticated automation. In the demonstration below we show how BIG-IP administrators using SSL Orchestrator® (SSLO) can combine telemetry with automation to address performance bottlenecks in an application environment:

Resources

That is a short series of tutorials on how to perform routine tasks using automation, plus a preview of a more sophisticated use of automation based upon telemetry and automatic thresholds. For more detail on our partnership, please visit our F5/Ansible page or visit the Red Hat Automation Hub for information on the F5 Ansible certified collections.

https://www.f5.com/ansible
https://www.ansible.com/products/automation-hub
https://galaxy.ansible.com/f5networks/f5_modules
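As promised above, here is a minimal sketch of the kind of backup play the second video walks through. It uses the bigip_ucs_fetch module from the f5networks.f5_modules collection; the management address, credentials, and file names are placeholders you would replace for your own environment.

---
- name: Back up a BIG-IP before a change
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    # Generate a UCS archive on the BIG-IP (created if missing, which is
    # the module default) and download it to the Ansible control node.
    - name: Create and fetch a UCS backup
      f5networks.f5_modules.bigip_ucs_fetch:
        src: pre_change_backup.ucs
        dest: /tmp/pre_change_backup.ucs
        provider:
          server: 10.1.1.245        # placeholder management address
          user: admin
          password: secret          # placeholder credential
          validate_certs: no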
Power of tmsh commands using Ansible

Why is data important?

Having accurate data has become an integral part of decision making. The data could be for making simple decisions, like purchasing the newest electronic gadget on the market, or for complex decisions on what hardware and/or software platform works best for your highly demanding application and provides the best user experience for your customers. In either case, research and data collection become essential. Deciding what kind of F5 hardware and/or software to use in your environment follows the same principles: your IT team requires data to make the right decision. The data could include CPU, throughput, and/or memory utilization of your F5 gear. It could also be data for a period of a day, a month, or a year, depending on the application usage patterns.

Ansible to the rescue

Your environment could have tens, hundreds, or even thousands of F5 BIG-IPs; manually logging into each one to gather data would be highly inefficient. A great and simple alternative is to use Ansible as an automation framework to perform this task, freeing you to perform your other job functions.

Let's take a look at some of the components needed to use Ansible. An inventory file in Ansible defines the hosts against which your playbook is going to run. Below is an example of a file defining F5 hosts, which can be expanded to represent your tens, hundreds, or thousands of BIG-IPs.

Inventory file: 'inventory.yml'

[f5]
ltm01 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm02 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm03 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm04 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm05 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443

A playbook defines the tasks that are going to be executed. In this playbook we are using the bigip_command module, which can take as input any BIG-IP tmsh command and provide the output. Here we are going to use tmsh commands to gather performance data from the BIG-IPs. The output from each of the BIG-IPs is stored in a file that can be referenced after the playbook finishes execution.

Playbook: 'performance_data.yml'

---
- name: Create empty file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Creating an empty file
      file:
        path: "./{{filename}}"
        state: touch

- name: Gather stats using tmsh command
  hosts: f5
  connection: local
  gather_facts: false
  serial: 1
  tasks:
    - name: Gather performance stats
      bigip_command:
        provider:
          server: "{{server}}"
          user: "{{user}}"
          password: "{{password}}"
          server_port: "{{server_port}}"
          validate_certs: "{{validate_certs}}"
        commands:
          - show sys performance throughput historical
          - show sys performance system historical
      register: result

    - lineinfile:
        line: "\n###BIG-IP hostname => {{ inventory_hostname }} ###\n"
        insertafter: EOF
        dest: "./{{filename}}"

    - lineinfile:
        line: "{{ result.stdout_lines }}"
        insertafter: EOF
        dest: "./{{filename}}"

    - name: Format the file
      shell:
        cmd: sed 's/,/\n/g' ./{{filename}} > ./{{filename}}_formatted

    - pause:
        seconds: 10

- name: Delete file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Delete extra file created (delete file)
      file:
        path: ./{{filename}}
        state: absent

Execution: The execution command takes as input the playbook name, the inventory file, and the filename where the output will be stored.
(There are different ways of defining and passing parameters to a playbook; below is one such example.)

ansible-playbook performance_data.yml -i inventory.yml --extra-vars "filename=perf_output"

Snippet of expected output:

###BIG-IP hostname => ltm01 ###
[['Sys::Performance Throughput'
'-----------------------------------------------------------------------'
'Throughput(bits)(bits/sec)  Current  3 hrs  24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service  223.8K  258.8K  279.2K  297.4K  112.5K'
'In  212.1K  209.7K  210.5K  243.6K  89.5K'
'Out  21.4K  21.0K  21.1K  57.4K  30.1K'
''
'-----------------------------------------------------------------------'
'SSL Transactions  Current  3 hrs  24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'SSL TPS  0  0  0  0  0'
''
'-----------------------------------------------------------------------'
'Throughput(packets)(pkts/sec)  Current  3 hrs  24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service  79  82  83  63  62'
'In  41  40  40  34  32'
'Out  41  40  40  32  34']
['Sys::Performance System'
'------------------------------------------------------------'
'System CPU Usage(%)  Current  3 hrs  24 hrs  7 days  30 days'
'------------------------------------------------------------'
'Utilization  17  18  18  18  17'
''
'------------------------------------------------------------'
'Memory Used(%)  Current  3 hrs  24 hrs  7 days  30 days'
'------------------------------------------------------------'
'TMM Memory Used  10  10  10  10  10'
'Other Memory Used  55  55  54  54  53'
'Swap Used  0  0  0  0  0']]

###BIG-IP hostname => ltm02 ###
[['Sys::Performance Throughput'
'-----------------------------------------------------------------------'
'Throughput(bits)(bits/sec)  Current  3 hrs  24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service  202.3K  258.7K  279.2K  297.4K  112.5K'
'In  190.8K  209.7K  210.5K  243.6K  89.5K'
'Out  19.6K  21.0K  21.1K  57.4K  30.1K'
''
'-----------------------------------------------------------------------'
'SSL Transactions  Current  3 hrs  24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'SSL TPS  0  0  0  0  0'
''
'-----------------------------------------------------------------------'
'Throughput(packets)(pkts/sec)  Current  3 hrs  24 hrs  7 days  30 days'
'-----------------------------------------------------------------------'
'Service  77  82  83  63  62'
'In  39  40  40  34  32'
'Out  37  40  40  32  34']
['Sys::Performance System'
'------------------------------------------------------------'
'System CPU Usage(%)  Current  3 hrs  24 hrs  7 days  30 days'
'------------------------------------------------------------'
'Utilization  21  18  18  18  17'
''
'------------------------------------------------------------'
'Memory Used(%)  Current  3 hrs  24 hrs  7 days  30 days'
'------------------------------------------------------------'
'TMM Memory Used  10  10  10  10  10'
'Other Memory Used  55  55  54  54  53'
'Swap Used  0  0  0  0  0']]

The data obtained is historical data over a period of time. Sometimes it is also important to gather the peak usage of throughput/memory/CPU over time, and not just the average. Stay tuned as we will discuss how to obtain that information in an upcoming article.

Conclusion

Use the data to learn the traffic patterns and propose the most appropriate BIG-IP hardware/software for your environment. This could be data collected directly in your production environment or in a staging environment, and it will help you decide which purchasing strategy gives you the most value from your BIG-IPs.
For reference: https://www.f5.com/pdf/products/big-ip-local-traffic-manager-ds.pdf

The above is one example of how you can get started with using Ansible and tmsh commands. Using this method you can potentially achieve close to 100% automation on the BIG-IP.
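One caution on the inventory shown above: it stores BIG-IP credentials in plain text. A common hardening step, sketched here as a suggestion rather than part of the original article, is to encrypt the password with Ansible Vault and reference the encrypted value from a group_vars file:

# Encrypt the password once; paste the output into group_vars/f5.yml
ansible-vault encrypt_string 'admin' --name 'password'

# Then run the playbook, supplying the vault password interactively
ansible-playbook performance_data.yml -i inventory.yml --ask-vault-pass --extra-vars "filename=perf_output"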
VMware Horizon and F5 iAPP Deployments Backed by Ansible Automation

The Intro:

A little over a year ago I knew barely anything about automation, zero about Ansible, and didn't even think it would be something so tied to my life like it is now. I spend all my moments thinking about how I can make automation easier in my life, and being in Business Development I spend a lot of time testing F5 solutions and integrations between vendors (specifically between F5 and VMware, as well as F5 and Red Hat Ansible). I figured, why not bring them a little closer together? It takes forever to build labs and set up environments, and with automation I can do this in mere minutes compared to the hours it used to take (we are talking fresh builds, clean environments). I plan on sharing more of my VMware and Ansible automation integrations down the chain (like Horizon labs that can be built from scratch and ready to test in 30 minutes or less). But I wanted to start out with something that I get a lot of questions about: is it possible to automate iApp deployments? Specifically the VMware Horizon iApp? The answer is YOU CAN NOW! Granted, this, like all automation, is a work in progress. My suggestion is, if you have a use case you want to build using what I have started with, I encourage it!! TAKE, FORK, and EXPAND!!!!

The Code:

All of the code I am using is completely accessible via the F5 DevCentral Git repository, so feel free to use it! What does it do? Well, if you are an F5 guru you might think it looks similar to how our AS3 code works; if you aren't a guru, it's basically taking one set of variables and sending off a single command to the F5 to build the application (I tell it the things that make it work and how I want it deployed, and it does all the work for me). Keep in mind this isn't using F5 AS3 code; it just mimics the same method by taking a JSON declaration of how I want things to be, and the F5 does all of the imperative commands for me.

---
- name: Build JSON payload
  ansible.builtin.template: src=f5.horizon.{{deployment_type |lower }}.j2 dest=/tmp/f5.horizon.json

- name: Deploy F5 Horizon iApp
  f5networks.f5_modules.bigip_iapp_service:   # Using collections; if not, use - bigip_iapp_service:
    name: "VMware-Horizon"
    template: "{{iapp_template_name}}"
    parameters: "{{ lookup('template', '/tmp/f5.horizon.json') }}"
    provider:
      server: "{{f5_ip}}"
      user: "{{f5_user}}"
      password: "{{f5_pass}}"
      validate_certs: no
  delegate_to: localhost

All of this code can be found at - https://github.com/f5devcentral/f5-bd-horizon-iapp-deploy/

Deployments:

The F5 iApp for Horizon provides many deployment options, but they all fall into three buckets:

- F5 APM with VMware Horizon - Where the F5 acts as the gateway for all VMware Horizon connections (proxying PCoIP/Blast)
- F5 LTM with VMware Horizon - Internal connections to an environment from a LAN, securing and load balancing Connection Servers
- F5 LTM with VMware Unified Access Gateway - Using the F5 to load balance the VMware Unified Access Gateways (UAGs) and letting the UAGs proxy the connections.

The deployments offer the ability to utilize pre-imported certificates, set the virtual IP, add additional Connection Servers, create the iRule for internal connections (origin header check), and much more. All of this depends on your deployment and the way you need it set up. The current code doesn't import the iApp template or the certificates; this could be done with other code but currently is not part of this code.
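If you want to close that gap yourself, a minimal sketch might look like the following. It assumes the template and certificate/key files sit next to the playbook; the file names here are hypothetical, and the modules (bigip_iapp_template, bigip_ssl_certificate, bigip_ssl_key) come from the f5networks.f5_modules collection.

# Import the Horizon iApp template referenced by iapp_template_name
- name: Upload the Horizon iApp template
  f5networks.f5_modules.bigip_iapp_template:
    content: "{{ lookup('file', 'f5.vmware_view.v1.5.9.tmpl') }}"   # hypothetical local file
    state: present
    provider:
      server: "{{f5_ip}}"
      user: "{{f5_user}}"
      password: "{{f5_pass}}"
      validate_certs: no
  delegate_to: localhost

# Import the certificate and key referenced by iapp_ssl_cert / iapp_ssl_key
- name: Upload the SSL certificate
  f5networks.f5_modules.bigip_ssl_certificate:
    name: Wildcard-2022
    content: "{{ lookup('file', 'wildcard-2022.crt') }}"            # hypothetical local file
    provider:
      server: "{{f5_ip}}"
      user: "{{f5_user}}"
      password: "{{f5_pass}}"
      validate_certs: no
  delegate_to: localhost

- name: Upload the SSL key
  f5networks.f5_modules.bigip_ssl_key:
    name: Wildcard-2022
    content: "{{ lookup('file', 'wildcard-2022.key') }}"            # hypothetical local file
    provider:
      server: "{{f5_ip}}"
      user: "{{f5_user}}"
      password: "{{f5_pass}}"
      validate_certs: no
  delegate_to: localhost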
All three of these deployment models are considered and part of the code; how it's deployed is based on the variables file "{{code_directory}}/vars/horizon_iapp_vars.yml" as shown below. Keep in mind this uses clear text (i.e., the username/password for AD) for some variables; you can add other ways of securing your passwords, like an Ansible Vault.

#F5 Authentication
f5_ip: 192.168.1.10
f5_user: admin
f5_pass: "my_password"
f5_admin_port: 443

#All Deployment Types
deployment_type: "apm" #option can be APM, LTM or UAG

#iApp Variables
iapp_vip_address: "172.16.192.100"
iapp_template_name: "f5.vmware_view.v1.5.9"

#SSL Info
iapp_ssl_cert: "/Common/Wildcard-2022" # If you want to use the F5 default cert for testing use "/Common/default.crt"
iapp_ssl_key: "/Common/Wildcard-2022" # If you want to use the F5 default key for testing use "/Common/default.key"
iapp_ssl_chain: "/#do_not_use#"

#Horizon Info
iapp_horizon_fqdn: "horizon.mycorp.com"
iapp_horizon_netbios: "My-Corp"
iapp_horizon_domainname: "My-Corp.com"
iapp_horizon_nat_addresss: "" #enter NAT address or leave empty for none

# LTM Deployment Type
iapp_irule_origin:
  - "/Common/Horizon-Origin-Header"

# APM and LTM Deployment Types
iapp_horizon_connection_servers:
  - { ip: "192.168.1.50", port: "443" } # to add Connection Servers just add additional lines
  - { ip: "192.168.1.51", port: "443" }

#APM Deployment Type
iapp_active_directory_username: "my_ad_user"
iapp_active_directory_password: "my_ad_password"
iapp_active_directory_password_encrypted: "no" # This is still being validated but requires the encrypted password from the BIG-IP
iapp_active_directory_servers:
  - { name: "ad_server_1.mycorp.com", ip: "192.168.1.20" } # to add Active Directory servers just add additional lines
  - { name: "ad_server_2.mycorp.com", ip: "192.168.1.21" }

# UAG Deployment Type
iapp_horizon_uag_servers:
  - { ip: "192.168.199.50", port: "443" } # to add UAG servers just add additional lines
  - { ip: "192.168.199.51", port: "443" }

How do the variables integrate with the templates? The templates are JSON-based code into which Ansible injects the variables, depending on the deployment method called. This makes it easier to keep templates specific to each deployment, because we don't hard-code values that aren't necessary or are part of the default deployment. Advanced deployments would require modifying the JSON code to apply specialized settings that aren't part of the default. If you want to see more about the templates for each operation (APM/LTM/UAG), check out the JSON code at the link below:

https://github.com/f5devcentral/f5-bd-horizon-iapp-deploy/tree/main/roles/ansible-deploy-iapp/templates

The Results:

Within seconds I can deploy, configure, and make changes to my deployments, or even change my deployment type. Could I do this in the GUI? Absolutely, but the point is to Automate ALL THE THINGS, and being able to integrate this with solutions like lab-in-a-box (built from scratch, including the F5) saves massive amounts of time.
Example of a VMware Horizon iApp deployment with F5 APM done in ~12 seconds:

[root@Elysium f5-bd-horizon-iapp-deploy]# time ansible-playbook horizon_iapp_deploy.yaml

PLAY [localhost] ********************************************************************************************************************************************************************

TASK [bypass-variables : ansible.builtin.stat] **************************************************************************************************************************************
ok: [localhost]

TASK [bypass-variables : ansible.builtin.include_vars] ******************************************************************************************************************************
ok: [localhost]

TASK [create-irule : Create F5 iRule] ***********************************************************************************************************************************************
skipping: [localhost]

TASK [ansible-deploy-iapp : Build JSON payload] *************************************************************************************************************************************
ok: [localhost]

TASK [ansible-deploy-iapp : Deploy F5 Horizon iApp] *********************************************************************************************************************************
changed: [localhost]

PLAY RECAP **************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

real 0m11.954s
user 0m6.114s
sys 0m0.542s

Links:

All of this code can be found at - https://github.com/f5devcentral/f5-bd-horizon-iapp-deploy/
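One thing the article doesn't show is rolling the deployment back. Assuming nothing else references the iApp service, a teardown task is simply the same module with state flipped to absent:

- name: Remove the F5 Horizon iApp deployment
  f5networks.f5_modules.bigip_iapp_service:
    name: "VMware-Horizon"
    template: "{{iapp_template_name}}"
    state: absent
    provider:
      server: "{{f5_ip}}"
      user: "{{f5_user}}"
      password: "{{f5_pass}}"
      validate_certs: no
  delegate_to: localhost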
The BIG-IP and Ansible Sandbox, The Perfect Automation Playground!

Ever need a sandbox area to validate and test your F5 BIG-IP automation? Well, look no further! Ansible and F5 have worked together to create a deployable sandbox solution for all your testing needs within AWS.

Why do I need a sandbox environment?

In most use cases playbooks are never perfect out of the box. We all need an area to test that code and run through many different scenarios to ensure that the code is fully functional, does error detection/correction, and doesn't create outages in an environment, all of which needs to happen before that code gets pushed to production. Setting up that kind of lab takes time, and it can even be problematic to reset it back to a "factory default" configuration. Having the ability to reset the infrastructure to a clean state that also has the unique add-ons necessary for your testing makes the lab reusable for testing new or older builds of code.

So where do I start?

The best place to get started is the F5 Cloud Docs website: https://clouddocs.f5.com/training/automation-sandbox/

The website will help with the following:

- Information on setting up the prerequisites (installing Docker, setting up an IAM account in AWS, a Route 53 domain, etc.)
- Using the provided Dockerfile code to deploy a local container that will be the control node for deploying the AWS environment
- Commands to download the Git repository for the Red Hat Automation Platform Workshops (https://github.com/ansible/workshops/)
- Commands to execute the code to deploy or destroy a sandbox Automation Workshop with an F5 configuration on AWS
- Basic (101) and advanced (201) use cases

From there the sky is the limit! The sandbox is the perfect automation playground to test any code you might have.

What will the Red Hat Ansible Automation Platform Workshops build with the F5 configuration?

The Ansible Automation Platform Workshops will build out and configure an AWS VPC, an Ansible host, a BIG-IP (Best licensed), and two web servers. The entire environment is built and configured during the deployment. Each device within the VPC will have public and private IP addresses for intranet access between the nodes and external internet accessibility for importing and testing playbooks and configurations.

When all the code is run and the lab is built, it will provide the essential information needed to connect to the environment:

- Workshop Name - Unique identifier that was provided in the variables file
- Instructor Inventory - The hostnames with the public and private IP addresses to connect to the environment with
- Private Key - If you wish to use SSH authentication to the nodes

Want to learn more?

Visit F5 Automation Sandbox cloud docs website - https://clouddocs.f5.com/training/automation-sandbox/
Visit Red Hat Ansible Automation Platform Workshops - https://ansible.github.io/workshops/
Visit Red Hat Ansible Automation Platform Git Repository - https://github.com/ansible/workshops/
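For a sense of what "deploy or destroy" looks like in practice, the general shape of the workshop provisioner is below. This is an assumption based on the ansible/workshops repository layout at the time of writing; the variable file name is a placeholder, so check the repo README and the F5 sandbox docs for the current paths and options.

# From the cloned ansible/workshops repository
cd workshops/provisioner

# Deploy the F5 workshop/sandbox (the vars file defines region, workshop type, etc.)
ansible-playbook provision_lab.yml -e @sample_workshop_vars.yml

# Tear the same environment back down when finished
ansible-playbook teardown_lab.yml -e @sample_workshop_vars.yml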
BIG-IP ASM Automation with Ansible

My Background

Back in September I started my Ansible journey. Coming from no knowledge about Ansible and its automation capabilities, I was asked to develop some code/playbooks to automate some of the BIG-IP's ASM functions for AnsibleFest 2019. I was pleasantly surprised by how easy it was to install Ansible, build playbooks, and deliver the correct end state for the BIG-IP. The playbooks and automation took me back down memory lane to when I was creating a universal network-bootable Norton Ghost CD in DOS for all of the different models of PCs my work owned. The team I work for (Business Development) has been working hard at making sure our code is easily accessible to customers through GitHub. Our goal is to provide the necessary tools, such as the F5 Automation Sandbox, and use cases so that whether you are new to Ansible or a die-hard Ansible coder, there is a place for you to test, consume, and bring life to the code.

What is BIG-IP ASM?

F5 BIG-IP® Application Security Manager™ (ASM) is a flexible web application firewall that secures web applications in traditional, virtual, and private cloud environments. BIG-IP ASM helps secure applications against unknown vulnerabilities and enables compliance for key regulatory mandates. BIG-IP ASM is a key part of the F5 application delivery firewall solution, which consolidates traffic management, network firewall, application access, DDoS protection, SSL inspection, and DNS security.

What is Ansible?

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs. Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time. It uses no agents and no additional custom security infrastructure, so it's easy to deploy - and most importantly, it uses a very simple language (YAML, in the form of Ansible playbooks) that allows you to describe your automation jobs in a way that approaches plain English.

What does the Code Do?

IP Blocking - In ASM, there is a feature called IP address intelligence that can allow or block IP addresses from being able to access protected applications. This code creates a virtual IP (VIP) and a blank ASM policy attached to that VIP. After the creation, the code exports the ASM policy into XML format, modifies it with the snippet below to add blocked IP addresses, and re-imports the policy over the existing one. Prior to this snippet we have code that checks whether an IP address already exists (for things like re-runs of the code) and blocks duplicate IP addresses from being added to the XML.
This is a snippet of the code where it modifies the ASM policy XML file (exported in previous steps of the code):

#Import Additional Disallowed IPs
- name: Add Disallowed IPs
  xml:
    path: "{{ ASM_Policy_File }}"
    pretty_print: yes
    input_type: xml
    insertafter: yes
    xpath: /policy/geolocation
    add_children:
      - "<whitelist><ip_address>{{ item.item }}</ip_address><subnet_mask>255.255.255.255</subnet_mask><policy_builder_trusted>false</policy_builder_trusted><ignore_anomalies>false</ignore_anomalies><never_log>false</never_log><block_ip>Always</block_ip><never_learn>false</never_learn><description>blocked</description><ignore_ip_reputation>false</ignore_ip_reputation></whitelist>"
  with_items: "{{ Blocked_IP_Valid.results }}"
  when: Blocked_IPs is defined and item.rc == 1

Here is a demonstration of an IP being blocked and unblocked by the BIG-IP ASM policy.

Disallowed URL Filtering - Another feature of ASM is the ability to disallow URLs, which can be useful when working internally vs. externally, among other reasons why a specific URL would be blocked or protected by BIG-IP ASM. This code can be used independently, cooperatively, or not at all with this playbook. Since this playbook is merged with the IP blocking code, it follows the same flow (exporting/importing XML and error checking) as previously mentioned, to ensure no duplicates are made in the XML.

This is a snippet of the code where it modifies the ASM policy XML file (exported in previous steps of the code):

#Import Additional Disallowed URLs
- name: Add Disallowed URLs
  xml:
    path: "{{ ASM_Policy_File }}"
    input_type: xml
    pretty_print: yes
    xpath: /policy/urls/disallowed_urls
    add_children:
      - "<url protocol=\"HTTP\" type=\"explicit\" name=\"{{ item.item }}\"/>"
      - "<url protocol=\"HTTPS\" type=\"explicit\" name=\"{{ item.item }}\"/>"
  with_items: "{{ Blocked_URLs_Valid.results }}"
  when: Blocked_URLs is defined and item.rc == 1

Here is a demonstration of specific URLs being blocked by the BIG-IP ASM policy. (Note: the file name in the repo has changed, but it covers the same use case.)

Where can you access the playbook for this integration?
https://github.com/f5devcentral/f5-bd-ansible-usecases/tree/master/03-F5-WAF-Policy-Management

How to get all of the use cases currently available:
https://github.com/f5devcentral/f5-bd-ansible-usecases

Want to try it out but need a lab to work in? Try out our F5 Automation Sandbox built for AWS!
https://clouddocs.f5.com/training/automation-sandbox/
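The export/import steps referenced above can also be done with dedicated modules rather than raw REST calls. As a hedged sketch (the policy name is hypothetical and the provider credentials are assumed to be defined elsewhere), the f5networks.f5_modules collection offers bigip_asm_policy_fetch and bigip_asm_policy_import:

# Export an ASM policy from the BIG-IP to the control node as XML
- name: Export ASM policy
  f5networks.f5_modules.bigip_asm_policy_fetch:
    name: my_asm_policy
    file: my_asm_policy.xml
    dest: /tmp/
    binary: no
    provider: "{{ provider }}"
  delegate_to: localhost

# Re-import the (modified) policy over the existing one
- name: Import ASM policy
  f5networks.f5_modules.bigip_asm_policy_import:
    name: my_asm_policy
    source: /tmp/my_asm_policy.xml
    force: yes
    provider: "{{ provider }}"
  delegate_to: localhost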
Parsing complex BIG-IP json structures made easy with Ansible filters like json_query

JMESPath and json_query

JMESPath (JSON Matching Expression paths) is a query language for searching JSON documents. It allows you to declaratively extract elements from a JSON document. Have a look at this tutorial to learn more. The json_query filter lets you query a complex JSON structure and iterate over it using a loop structure. This filter is built upon jmespath, and you can use the same syntax as jmespath. Click here to learn more about the json_query filter and how it is used in Ansible.

In this article we are going to use the bigip_device_info module to get various facts from the BIG-IP and then use the json_query filter to parse the output and extract relevant information.

Ansible bigip_device_info module

Playbook to query the BIG-IP and gather system-based information:

- name: "Get BIG-IP Facts"
  hosts: bigip
  gather_facts: false
  connection: local
  tasks:
    - name: Query BIG-IP facts
      bigip_device_info:
        provider:
          validate_certs: False
          server: "xxx.xxx.xxx.xxx"
          user: "*****"
          password: "*****"
        gather_subset:
          - system-info
      register: bigip_facts

    - set_fact:
        facts: '{{bigip_facts.system_info}}'

    - name: debug
      debug: msg="{{facts}}"

To view the output for a different subset, change the gather_subset and set_fact values in the above playbook from gather_subset: system-info, facts: bigip_facts.system_info to any of the below:

- gather_subset: vlans, facts: bigip_facts.vlans
- gather_subset: self-ips, facts: bigip_facts.self_ips
- gather_subset: nodes, facts: bigip_facts.nodes
- gather_subset: software-volumes, facts: bigip_facts.software_volumes
- gather_subset: virtual-servers, facts: bigip_facts.virtual_servers
- gather_subset: system-info, facts: bigip_facts.system_info
- gather_subset: ltm-pools, facts: bigip_facts.ltm_pools

Click here to view all the information that can be obtained from the BIG-IP using this module.

Parse the JSON output

Once we have the output, let's take a look at how to parse it. As mentioned above, the jmespath syntax can be used with the json_query filter. The website used in this article to try out the syntax below: https://jmespath.org/

Some BIG-IP sample outputs are attached to this article as well (check the attachments section after the references). The attachment file is a combined output of a few configuration subsets. Copy and paste the relevant information from the attachment to test the below examples if you do not have a BIG-IP.

System information

The output for this section is obtained with the above playbook using parameters gather_subset: system-info, facts: bigip_facts.system_info. Once the above playbook is run against your BIG-IP, or if you are using the sample configuration attached, copy the output and paste it in the relevant text box.
Try different queries by placing them in the text box next to the magnifying glass:

# Get MAC address, serial number, version information
msg.[base_mac_address,chassis_serial,platform,product_version]

# Get MAC address, serial number, version information and hardware information
msg.[base_mac_address,chassis_serial,platform,product_version,hardware_information[*].[name,type]]

Software volumes

The output for this section is obtained with the above playbook using parameters gather_subset: software-volumes, facts: bigip_facts.software_volumes.

# Get the name and version of the software volumes installed and their status
msg[*].[name,active,version]

# Get the name and version only for the software volume that is active
msg[?active=='yes'].[name,version]

VLANs and Self-IPs

The output for this section is obtained with the above playbook using parameters gather_subset: vlans and self-ips, facts: bigip_facts. Look at the following example to define more than one subset in the playbook.

# Get all the self-ip addresses and vlans assigned to the self-ip
# Also get all the vlans and the interfaces assigned to the vlan
[msg.self_ips[*].[address,vlan], msg.vlans[*].[full_path,interfaces[*]]]

Nodes

The output for this section is obtained with the above playbook using parameters gather_subset: nodes, facts: bigip_facts.nodes.

# Get the address and availability status of all the nodes
msg[*].[address,availability_status]

# Get availability status and reason for a particular node
msg[?address=='192.0.1.101'].[full_path,availability_status,status_reason]

Pools

The output for this section is obtained with the above playbook using parameters gather_subset: ltm-pools, facts: bigip_facts.ltm_pools.

# Get the name of all pools
msg[*].name

# Get the name of all pools and their associated members
msg[*].[name,members[*]]

# Get the name of all pools and only the address of their associated members
msg[*].[name,members[*].address]

# Get the name of all pools along with address and status of their associated members
msg[*].[name,members[*].address,availability_status]

# Get status of pool members of a particular pool
msg[?name=='/Common/pool'].[members[*].address,availability_status]

# Get status of pool
# Get address, partition, state of pool members
msg[*].[name,members[*][address,partition,state],availability_status]

# Get status of a particular pool and particular member (multiple entries on a member)
msg[?full_name=='/Common/pool'].[members[?address=='192.0.1.101'].[address,partition],availability_status]

Virtual Servers

The output for this section is obtained with the above playbook using parameters gather_subset: virtual-servers, facts: bigip_facts.virtual_servers.

# Get destination IP address of all virtual servers
msg[*].destination

# Get destination IP and default pool of all virtual servers
msg[*].[destination,default_pool]

# Get the destination IP of all virtual servers that have a particular pool as their default pool
msg[?default_pool=='/Common/pool'].destination

# Get all profiles assigned to all virtual servers
msg[*].[destination,profiles[*].name]

Loop and display using Ansible

We have seen how to use the jmespath syntax to extract information; now let's see how to use it within an Ansible playbook.

- name: Parse the output
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Setup provider
      set_fact:
        provider:
          server: "xxx.xxx.xxx.xxx"
          user: "*****"
          password: "*****"
          server_port: "443"
          validate_certs: "no"

    - name: Query BIG-IP facts
      bigip_device_info:
        provider: "{{provider}}"
        gather_subset:
          - system-info
      register: bigip_facts

    - debug: msg="{{bigip_facts.system_info}}"

    # Use the json_query filter. The query_string will be the jmespath syntax.
    # From the jmespath query remove the 'msg' expression and use it as-is.
    - name: "Show relevant information"
      set_fact:
        result: "{{bigip_facts.system_info | json_query(query_string)}}"
      vars:
        query_string: "[base_mac_address,chassis_serial,platform,product_version,hardware_information[*].[name,type]]"

    - debug: "msg={{result}}"

Another example of what would change if you use a different query (only highlighting the changes that need to be made to the playbook above):

    - name: Query BIG-IP facts
      bigip_device_info:
        provider: "{{provider}}"
        gather_subset:
          - ltm-pools
      register: bigip_facts

    - debug: msg="{{bigip_facts.ltm_pools}}"

    - name: "Show relevant information"
      set_fact:
        result: "{{bigip_facts.ltm_pools | json_query(query_string)}}"
      vars:
        query_string: "[*].[name,members[*][address,partition,state],availability_status]"

References

Try the queries - https://jmespath.org/
Learn more jmespath syntax and examples - https://jmespath.org/tutorial.html
Ansible lab that can be used as a sandbox - https://clouddocs.f5.com/training/automation-sandbox/
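One practical note from testing these filters: json_query is not built into ansible-core. It ships in the community.general collection and requires the jmespath Python library on the control node, so you may need something like:

# Install the filter's dependency and (if missing) the collection
pip install jmespath
ansible-galaxy collection install community.general

On newer Ansible versions you can also call the filter by its fully qualified name, community.general.json_query, which avoids any ambiguity about where it comes from.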
Telemetry streaming - One click deploy using Ansible

In this article we will focus on using Ansible to enable and install telemetry streaming (TS) and associated dependencies.

Telemetry streaming

The F5 BIG-IP is a full proxy architecture, which essentially means that the BIG-IP LTM completely understands the end-to-end connection, enabling it to be an endpoint and originator of client- and server-side connections. This empowers the BIG-IP with traffic statistics from the client to the BIG-IP and from the BIG-IP to the server, giving the user an entire view of their network statistics. To gain meaningful insight, you must be able to gather your data and statistics (telemetry) into a useful place. Telemetry streaming is an extension designed to declaratively aggregate, normalize, and forward statistics and events from the BIG-IP to a consumer application. You can learn more about telemetry streaming here, but let's get to Ansible.

Enable and install using Ansible

The Ansible playbook below performs the following tasks:

- Grab the latest Application Services 3 (AS3) and Telemetry Streaming (TS) versions
- Download the AS3 and TS packages and install them on BIG-IP using a role
- Deploy AS3 and TS declarations on BIG-IP using a role from Ansible Galaxy
- If AVR logs are needed for TS, provision the BIG-IP AVR module and configure AVR to point to TS

Prerequisites

- Supported on BIG-IP 14.1+ versions
- If AVR is required to be configured, make sure there is enough memory for the module to be enabled along with all the other BIG-IP modules provisioned in your environment
- The TS data is being pushed to Azure Log Analytics (modify it to use your own consumer). If Azure logs are being used, change your TS JSON file with the correct workspace ID and shared key
- Ansible is installed on the host from where the scripts are run
- The following files are present in the directory:
  - Variable file (vars.yml)
  - TS poller and listener setup (ts_poller_and_listener_setup.declaration.json)
  - Declare logging profile (as3_ts_setup_declaration.json)
  - Ansible playbook (ts_workflow.yml)

Get started

Download the following roles from Ansible Galaxy:

ansible-galaxy install f5devcentral.f5app_services_package --force

This role performs a series of steps needed to download and install RPM packages on the BIG-IP that are a part of the F5 automation toolchain. Read through the prerequisites for the role before installing it.

ansible-galaxy install f5devcentral.atc_deploy --force

This role deploys the declaration using the RPM package installed above. Read through the prerequisites for the role before installing it.

By default, roles get installed into the /etc/ansible/roles directory.

Next copy the below contents into a file named vars.yml. Change the variable file to reflect your environment.

# BIG-IP MGMT address and username/password
f5app_services_package_server: "xxx.xxx.xxx.xxx"
f5app_services_package_server_port: "443"
f5app_services_package_user: "*****"
f5app_services_package_password: "*****"
f5app_services_package_validate_certs: "false"
f5app_services_package_transport: "rest"

# URI from where latest RPM version and package will be downloaded
ts_uri: "https://github.com/F5Networks/f5-telemetry-streaming/releases"
as3_uri: "https://github.com/F5Networks/f5-appsvcs-extension/releases"

#If AVR module logs needed then set to 'yes' else leave it as 'no'
avr_needed: "no"

# Virtual servers in your environment to assign the logging profiles (If AVR set to 'yes')
virtual_servers:
  - "vs1"
  - "vs2"

Next copy the below contents into a file named ts_poller_and_listener_setup.declaration.json.
{ "class": "Telemetry", "controls": { "class": "Controls", "logLevel": "debug" }, "My_Poller": { "class": "Telemetry_System_Poller", "interval": 60 }, "My_Consumer": { "class": "Telemetry_Consumer", "type": "Azure_Log_Analytics", "workspaceId": "<<workspace-id>>", "passphrase": { "cipherText": "<<sharedkey>>" }, "useManagedIdentity": false, "region": "eastus" } } Next copy the below contents into a file named as3_ts_setup_declaration.json { "class": "ADC", "schemaVersion": "3.10.0", "remark": "Example depicting creation of BIG-IP module log profiles", "Common": { "Shared": { "class": "Application", "template": "shared", "telemetry_local_rule": { "remark": "Only required when TS is a local listener", "class": "iRule", "iRule": "when CLIENT_ACCEPTED {\n node 127.0.0.1 6514\n}" }, "telemetry_local": { "remark": "Only required when TS is a local listener", "class": "Service_TCP", "virtualAddresses": [ "255.255.255.254" ], "virtualPort": 6514, "iRules": [ "telemetry_local_rule" ] }, "telemetry": { "class": "Pool", "members": [ { "enable": true, "serverAddresses": [ "255.255.255.254" ], "servicePort": 6514 } ], "monitors": [ { "bigip": "/Common/tcp" } ] }, "telemetry_hsl": { "class": "Log_Destination", "type": "remote-high-speed-log", "protocol": "tcp", "pool": { "use": "telemetry" } }, "telemetry_formatted": { "class": "Log_Destination", "type": "splunk", "forwardTo": { "use": "telemetry_hsl" } }, "telemetry_publisher": { "class": "Log_Publisher", "destinations": [ { "use": "telemetry_formatted" } ] }, "telemetry_traffic_log_profile": { "class": "Traffic_Log_Profile", "requestSettings": { "requestEnabled": true, "requestProtocol": "mds-tcp", "requestPool": { "use": "telemetry" }, "requestTemplate": "event_source=\"request_logging\",hostname=\"$BIGIP_HOSTNAME\",client_ip=\"$CLIENT_IP\",server_ip=\"$SERVER_IP\",http_method=\"$HTTP_METHOD\",http_uri=\"$HTTP_URI\",virtual_name=\"$VIRTUAL_NAME\",event_timestamp=\"$DATE_HTTP\"" } } } } } NOTE: To better understand the above declarations check out our clouddocs page: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/telemetry-system.html Next copy the below contents into a file named ts_workflow.yml - name: Telemetry streaming setup hosts: localhost connection: local any_errors_fatal: true vars_files: vars.yml tasks: - name: Get latest AS3 RPM name action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release: "{{as3_output.stdout_lines[0]}}" - name: Get latest AS3 RPM tag action: shell wget -O - {{as3_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: as3_output - debug: var: as3_output.stdout_lines[0] - set_fact: as3_release_tag: "{{as3_output.stdout_lines[0]}}" - name: Get latest TS RPM name action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 7 | cut -d "=" -f 1 | cut -d "\"" -f 1 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release: "{{ts_output.stdout_lines[0]}}" - name: Get latest TS RPM tag action: shell wget -O - {{ts_uri}} | grep -E rpm | head -1 | cut -d "/" -f 6 register: ts_output - debug: var: ts_output.stdout_lines[0] - set_fact: ts_release_tag: "{{ts_output.stdout_lines[0]}}" - name: Download and Install AS3 and TS RPM ackages to BIG-IP using role include_role: name: f5devcentral.f5app_services_package vars: f5app_services_package_url: "{{item.uri}}/download/{{item.release_tag}}/{{item.release}}?raw=true" 
        f5app_services_package_path: "/tmp/{{item.release}}"
      loop:
        - {uri: "{{as3_uri}}", release_tag: "{{as3_release_tag}}", release: "{{as3_release}}"}
        - {uri: "{{ts_uri}}", release_tag: "{{ts_release_tag}}", release: "{{ts_release}}"}

    - name: Deploy AS3 and TS declaration on the BIG-IP using role
      include_role:
        name: f5devcentral.atc_deploy
      vars:
        atc_method: POST
        atc_declaration: "{{ lookup('template', item.file) }}"
        atc_delay: 10
        atc_retries: 15
        atc_service: "{{item.service}}"
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
      loop:
        - {service: "AS3", file: "as3_ts_setup_declaration.json"}
        - {service: "Telemetry", file: "ts_poller_and_listener_setup_declaration.json"}

    #If AVR logs need to be enabled
    - name: Provision BIG-IP with AVR
      bigip_provision:
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
        module: "avr"
        level: "nominal"
      when: avr_needed == "yes"

    - name: Enable AVR logs using tmsh commands
      bigip_command:
        commands:
          - modify analytics global-settings { offbox-protocol tcp offbox-tcp-addresses add { 127.0.0.1 } offbox-tcp-port 6514 use-offbox enabled }
          - create ltm profile analytics telemetry-http-analytics { collect-geo enabled collect-http-timing-metrics enabled collect-ip enabled collect-max-tps-and-throughput enabled collect-methods enabled collect-page-load-time enabled collect-response-codes enabled collect-subnets enabled collect-url enabled collect-user-agent enabled collect-user-sessions enabled publish-irule-statistics enabled }
          - create ltm profile tcp-analytics telemetry-tcp-analytics { collect-city enabled collect-continent enabled collect-country enabled collect-nexthop enabled collect-post-code enabled collect-region enabled collect-remote-host-ip enabled collect-remote-host-subnet enabled collected-by-server-side enabled }
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
      when: avr_needed == "yes"

    - name: Assign TCP and HTTP profiles to virtual servers
      bigip_virtual_server:
        provider:
          server: "{{ f5app_services_package_server }}"
          server_port: "{{ f5app_services_package_server_port }}"
          user: "{{ f5app_services_package_user }}"
          password: "{{ f5app_services_package_password }}"
          validate_certs: "{{ f5app_services_package_validate_certs | default(no) }}"
          transport: "{{ f5app_services_package_transport }}"
        name: "{{item}}"
        profiles:
          - http
          - telemetry-http-analytics
          - telemetry-tcp-analytics
      loop: "{{virtual_servers}}"
      when: avr_needed == "yes"

Now execute the playbook:

ansible-playbook ts_workflow.yml

Verify

- Log in to the BIG-IP UI
- Go to menu iApps -> Package Management LX
- Both the f5-telemetry and f5-appsvcs RPMs should be present
- Log in to the BIG-IP CLI
- Check the restjavad logs present at /var/log for any TS errors
- Log in to your consumer where the logs are being sent and make sure the consumer is receiving the logs

Conclusion

The Telemetry Streaming (TS) extension is very powerful and is capable of sending much more information than described above. Take a look at the complete list of logs as well as consumer applications supported by TS over on CloudDocs: https://clouddocs.f5.com/products/extensions/f5-telemetry-streaming/latest/using-ts.html
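A quick command-line check can complement the UI verification above. The TS package exposes info and declare endpoints on the management interface; the management address and credentials below are placeholders:

# Returns the installed Telemetry Streaming version if the package is healthy
curl -sku admin:secret https://10.1.1.245/mgmt/shared/telemetry/info

# Returns the currently active TS declaration
curl -sku admin:secret https://10.1.1.245/mgmt/shared/telemetry/declare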
Manage and upgrade vCMP hardware using Ansible

I am going to start with a definition of vCMP, since many use it but don't know what it stands for: Virtual Clustered Multiprocessing™ (vCMP®) is a feature of the BIG-IP® system that allows you to provision and manage multiple, hosted instances of the BIG-IP software on a single hardware platform.

With multiple instances on a single platform there are, almost certainly, repetitive tasks that will need to be performed, and the vCMP platform is no exception. A vCMP host can consist of multiple slots, and multiple guests can be distributed among those slots. There are different ways to provision vCMP guests on a host depending on the hardware specifications of the host. I won't go into details here, but click here for a great resource to guide you on vCMP guest distribution on vCMP hosts.

While there is an inclination to use a software-only solution, F5 BIG-IP vCMP can be a great option since it provides the performance and reliability you get from hardware along with an added layer of virtualization. Learn more about the benefits and comparisons of vCMP over other solutions.

In this article I am going to talk about how you can use Ansible to deploy vCMP guests and also how you can upgrade software on those guests.

Part 1: Deploy vCMP guests

The BIG-IP image is downloaded and accessible in a directory (/root/images) local to where the Ansible playbook is being run.

Example playbook: vcmp_host_mgmt.yml, deploying one vCMP guest

- name: vCMP MGMT
  hosts: localhost
  connection: local
  vars:
    image: "BIGIP-14.1.2-0.0.37.iso"
  tasks:
    # Setup the credentials used to login to the BIG-IP and store them
    # in a fact that can be used in subsequent tasks
    - name: Setup provider
      set_fact:
        vcmp_host_creds:
          server: "10.192.xx.xx"
          user: "admin"
          password: "admin"
          server_port: "443"
          validate_certs: "no"

    # Upload the software image to the BIG-IP (vCMP host)
    - name: Upload software on vCMP Host
      bigip_software_image:
        image: "/root/images/{{image}}"
        provider: "{{vcmp_host_creds}}"

    # Deploy a vCMP guest with the base version of the software image uploaded above
    - name: Create vCMP guest
      bigip_vcmp_guest:
        name: "vCMP85"
        initial_image: "{{image}}"
        mgmt_network: bridged
        mgmt_address: 10.192.73.85/24
        mgmt_route: 10.192.73.1
        state: present
        provider: "{{vcmp_host_creds}}"

A complete list of parameters that can be used for the module above is available in its documentation.

A few pointers:

- The vCMP guest, when created, can be set to any initial BIG-IP image present on the vCMP host (which could be the image and version of the vCMP host itself)
- If VLANs need to be assigned to the vCMP guest at creation time, those VLANs need to be added to the host beforehand
- Different cores and slots can be assigned to the vCMP guest
- The management network can be set to bridged or isolated

Now, the code above deploys one vCMP guest.
If you have multiple vCMP guests that need to be deployed, there are a number of ways to do that:

- Variable file: store information on each vCMP guest in a variable file and reference it within the playbook
- Loops: use a loop within the task itself

Let's take a look at each option.

Variable file: deploy multiple vCMP guests using the async operation and a variable file

Example variable file: variable_file.yml

vcmp_guests:
  - name: "vCMP85"
    ip: "10.192.73.85"
  - name: "vCMP86"
    ip: "10.192.73.86"

Example playbook: vcmp_host_mgmt_var.yml

- name: vCMP MGMT
  hosts: localhost
  connection: local
  vars:
    image: "BIGIP-14.1.2-0.0.37.iso"
  vars_files:
    - variable_file.yml
  tasks:
    - name: Setup provider
      set_fact:
        vcmp_host_creds:
          server: "10.192.xx.xx"
          user: "admin"
          password: "admin"
          server_port: "443"
          validate_certs: "no"

    - name: Upload software on vCMP Host
      bigip_software_image:
        image: "/root/images/{{image}}"
        provider: "{{vcmp_host_creds}}"

    - name: Create vCMP guest
      bigip_vcmp_guest:
        name: "{{item.name}}"
        initial_image: "{{image}}"
        mgmt_network: bridged
        mgmt_address: "{{item.ip}}/24"
        mgmt_route: 10.192.73.1
        cores_per_slot: 1
        state: present
        provider: "{{vcmp_host_creds}}"
      with_items: "{{vcmp_guests}}"
      register: _create_vcmp_instances
      # This will run the tasks in parallel and spin up the vCMP guests simultaneously
      async: 900
      poll: 0

    - name: Wait for creation to finish
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: _jobs
      until: _jobs.finished
      delay: 10 # Check every 10 seconds. Adjust as you like.
      retries: 85 # Retry up to 85 times. Adjust as needed.
      with_items: "{{ _create_vcmp_instances.results }}"

Click here to learn more about the async_status module.

Next, let's look at an example of using loops.

Example playbook: vcmp_host_mgmt_loops.yml, deploying multiple vCMP guests using the async operation

- name: vCMP MGMT
  hosts: localhost
  connection: local
  vars:
    image: "BIGIP-14.1.2-0.0.37.iso"
  tasks:
    # Setup the credentials used to login to the BIG-IP and store them
    # in a fact that can be used in subsequent tasks
    - name: Setup provider
      set_fact:
        vcmp_host_creds:
          server: "10.192.xx.xx"
          user: "admin"
          password: "admin"
          server_port: "443"
          validate_certs: "no"

    # Upload the software image to the BIG-IP (vCMP host)
    - name: Upload software on vCMP Host
      bigip_software_image:
        image: "/root/images/{{image}}"
        provider: "{{vcmp_host_creds}}"

    # Deploy a vCMP guest with the base version of the software image uploaded above
    - name: Create vCMP guest
      bigip_vcmp_guest:
        name: "{{item.name}}"
        initial_image: "{{image}}"
        mgmt_network: bridged
        mgmt_address: "{{item.ip}}/24"
        mgmt_route: 10.192.73.1
        state: present
        provider: "{{vcmp_host_creds}}"
      with_items:
        - { name: 'vCMP85', ip: '10.192.73.85' }
        - { name: 'vCMP86', ip: '10.192.73.86' }
      register: _create_vcmp_instances
      # This will run the tasks in parallel and spin up the vCMP guests simultaneously
      async: 900
      poll: 0

    - name: Wait for the creation tasks above to finish
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: _jobs
      until: _jobs.finished
      delay: 10 # Check every 10 seconds. Adjust as you like.
      retries: 85 # Retry up to 85 times. Adjust as needed.
      with_items: "{{ _create_vcmp_instances.results }}"

Part 2: Software upgrade

Once we have the vCMP guests deployed, let's consider a scenario where a new BIG-IP software build is out with a fix for the vCMP guest. Go through certain considerations before upgrading vCMP guests.
Example playbook: vcmp_guest_mgmt.yml

- name: vCMP guest MGMT
  hosts: localhost
  connection: local
  vars:
    image: "BIGIP-14.1.2.2-0.0.4.iso"
  tasks:
    - name: Setup provider
      set_fact:
        vcmp_guest_creds:
          server: "10.192.xx.xx"
          user: "admin"
          password: "admin"
          server_port: "443"
          validate_certs: "no"

    - name: Upload software on vCMP guest
      bigip_software_image:
        image: "/root/images/{{image}}"
        provider: "{{vcmp_guest_creds}}"

    # This command will list the volumes available and the software present on each volume
    - name: Run show version on remote devices
      bigip_command:
        commands: show sys software
        provider: "{{vcmp_guest_creds}}"
      register: result

    - debug: msg="{{result.stdout_lines}}"

    # Enter the volume from above to which you want to install the software
    - pause:
        prompt: "Choose the volume for software installation (Example format HD1.2)"
      register: volume

    # If the format of the volume entered does not match below, the playbook will fail
    - fail:
        msg: "{{volume.user_input}} is not a valid volume format"
      when: volume.user_input is not regex("HD[1-9].[1-9]")

    # This task will install the software and then boot the BIG-IP to that specific volume
    - name: Ensure image is activated and booted to specified volume
      bigip_software_install:
        image: "{{image}}"
        state: activated
        volume: "{{volume.user_input}}"
        provider: "{{vcmp_guest_creds}}"

Some end notes

The upgrade procedure above is not only applicable to vCMP guests. The same process can be used on a Virtual Edition or any hardware appliance of the BIG-IP, and on vCMP hosts as well, although for vCMP hosts there are other factors to consider before performing an upgrade.

The code above was tested with Ansible 2.9, using a subset of the complete list of F5 Ansible modules available.

Happy automating!!
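To complete the lifecycle, decommissioning a guest uses the same module with a different state. A minimal sketch follows; the delete_virtual_disk flag is optional and, as an assumption here, set so the guest's virtual disk is cleaned up too:

- name: Remove a vCMP guest and its virtual disk
  bigip_vcmp_guest:
    name: "vCMP85"
    state: absent
    delete_virtual_disk: yes    # assumption: we also want the disk reclaimed
    provider: "{{vcmp_host_creds}}"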
Automate BIG-IP in customer environments using Ansible

There are a lot of technical resources on how Ansible can be used to automate the F5 BIG-IP. Here is a consolidated list of links to help you brush up on Ansible as well as help you understand the Ansible BIG-IP solution:

- Getting started with Ansible
- Ansible and F5 solution overview
- F5 and Ansible Integration - Webinar
- Dig deeper into F5 and Ansible Integration - Webinar

I am going to dive directly into how customers are using Ansible to automate their infrastructure today. If you are considering using Ansible, or have already gone through a POC, maybe a few of these use cases will resonate with you and you might see use for them in your environment.

Use Case: Rolling application deployments

Problem

Consider a web application whose traffic is being load balanced by the BIG-IP. When the web servers running this application need to be updated with a new software package, an organization would consider a rolling deployment, where rather than updating all servers or tiers simultaneously, the organization installs the updated software package on one server or subset of servers at a time, bringing down one server at a time to minimize traffic disruption.

Solution

This is a repetitive and time-consuming task for an organization, considering that new updates come out every so often. Ansible's F5 modules can be used to automate operations like disabling a node (pool member), performing an update on that node, and then bringing that node back into service. This workflow is applied to every node member of the pool sequentially, resulting in zero downtime for the application.

Modules

- bigip_facts
- bigip_node

Sample

- name: Update software on Pool Member - "{{node_name}}"
  hosts: bigip
  gather_facts: false
  vars:
    pool_name: "{{pool_name}}"
    node_name: "{{node_name}}"
  tasks:
    - name: Query BIG-IP facts for Pool Status - "{{pool_name}}"
      bigip_facts:
        validate_certs: False
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username}}"
        password: "{{ bigip_password}}"
        include: "pool"
      delegate_to: localhost
      register: bigip_facts

    - name: Assert that Pool - "{{pool_name}}" is available before disabling node
      assert:
        that:
          - "'AVAILABILITY_STATUS_GREEN' in bigip_facts['ansible_facts']['pool']['/Common/{{iapp_name}}/{{pool_name}}']['object_status']['availability_status']|string"
        msg: "Pool is NOT available, do NOT want to disable any more nodes. Check your BIG-IP"

    - name: Disable Node - "{{node_name}}"
      bigip_node:
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username}}"
        password: "{{ bigip_password}}"
        partition: "Common"
        name: "{{node_name}}"
        state: "present"
        session_state: "disabled"
        monitor_state: "disabled"
      delegate_to: localhost

    - name: Get Node facts - "{{node_name}}"
      bigip_facts:
        validate_certs: False
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username}}"
        password: "{{ bigip_password}}"
        include: "node"
      delegate_to: localhost
      register: bigip_facts

    - name: Assert that node - "{{node_name}}" did get disabled
      assert:
        that:
          - "'AVAILABILITY_STATUS_RED' in bigip_facts['ansible_facts']['node']['/Common/{{node_name}}']['object_status']['availability_status']|string"
        msg: "Pool member did NOT get DISABLED"

- name: Perform software update on node - "{{node_name}}"
  hosts: "{{node_name}}"
  gather_facts: false
  vars:
    pool_name: "{{pool_name}}"
    node_name: "{{node_name}}"
  tasks:
    - name: "Update 'curl' package"
      apt: name=curl state=present

- name: Enable Node
  hosts: bigip
  gather_facts: false
  vars:
    pool_name: "{{pool_name}}"
    node_name: "{{node_name}}"
  tasks:
    - name: Enable Node - "{{node_name}}"
      bigip_node:
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username}}"
        password: "{{ bigip_password}}"
        partition: "Common"
        name: "{{node_name}}"
        state: "present"
        session_state: "enabled"
        monitor_state: "enabled"
      delegate_to: localhost

Use Case: Deploying consistent policies across different environments using F5 iApps

Problem

Consider an organization with many different environments (development/QA/production). Each environment is a replica of the others, and each has around 100 BIG-IPs. When a new application is added to one environment, it needs to be added to the other environments in a fast, safe, and secure manner. There is also a need to make sure the virtual server brought online adheres to all the traffic rules and security policies.

Solution

To ensure that the virtual server is deployed correctly and efficiently, the entire application can be deployed on the BIG-IP using the iApps module. iApps also has the power to reference ASM policies, helping to consistently deploy an application with the appropriate security policies.

Modules

- bigip_iapp_template
- bigip_iapp_service

Sample

- hosts: localhost
  tasks:
    - name: Get iApp from Github
      get_url:
        url: "{{ Git URL from where to download the iApp }}"
        dest: /var/tmp
        validate_certs: False

    - name: Upload iApp template to BIG-IP
      bigip_iapp_template:
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username }}"
        password: "{{ bigip_password }}"
        content: "{{ lookup('file', '/var/tmp/appsvcs_integration_v2.0.003.tmpl') }}" #Name of iApp
        state: "present"
        validate_certs: False
      delegate_to: localhost

    - name: Deploy iApp
      bigip_iapp_service:
        name: "HTTP_VS_With_L7Firewall"
        template: "appsvcs_integration_v2.0.003"
        parameters: "{{ lookup('file', 'final_iapp_with_asm.json') }}" #JSON blob for body content to the iApp
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username}}"
        password: "{{ bigip_password }}"
        state: "present"
      delegate_to: localhost

Use Case: Disaster recovery

Problem

All organizations should have a disaster recovery plan in place in case of a catastrophic failure resulting in loss of data, including BIG-IP configuration data. Re-configuring the entire infrastructure from scratch can be an administrative nightmare. The procedure in place for disaster recovery can also be used for migrating data from one BIG-IP to another, as well as for performing hardware refreshes and RMAs.

Solution

Use a BIG-IP user configuration set (UCS) configuration file to restore the configuration on all the BIG-IPs and have your environment back to its original state in minutes.

Modules

- bigip_ucs

Sample

---
- name: Restore configuration from a UCS file
  hosts: bigip
  gather_facts: false
  tasks:
    - name: Upload and install (restore) the UCS file
      bigip_ucs:
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username }}"
        password: "{{ bigip_password }}"
        ucs: "/root/test.ucs"
        state: "installed"
      delegate_to: localhost

Use Case: Troubleshooting

Problem

Any problem on the BIG-IP - for example, a backend server not receiving traffic, a virtual server dropping traffic unexpectedly, or a monitor not responding correctly - needs extensive troubleshooting. These problems can be hard to pinpoint and need an expert to debug and look through the logs.

Solution

Qkview is a great utility on the BIG-IP that an administrator can use to automatically collect configuration and diagnostic information from BIG-IP and other F5 systems.
Use this Ansible module and run it against all the BIG-IPs in your network to collect the diagnostic information and pass it on to F5 Support.

Modules

- bigip_qkview

Sample

- name: Create qkview
  hosts: bigip
  gather_facts: false
  tasks:
    - name: Generate and store qkview file
      bigip_qkview:
        server: "{{ bigip_ip_address }}"
        user: "{{ bigip_username }}"
        password: "{{ bigip_password }}"
        asm_request_log: "no"
        dest: "/tmp/localhost.localdomain.qkview"
        validate_certs: "no"
      delegate_to: localhost

Conclusion

These are just some of the use cases that can be tackled using our BIG-IP Ansible modules. The Ansible 2.4 release includes several more modules than the ones mentioned above, which can help in other scenarios. To view a complete list of BIG-IP modules available in the Ansible 2.4 release, click here.

Module overview at a glance:

- BIG-IP modules in Ansible release 2.3
- New BIG-IP modules in Ansible release 2.4

To get started and save time, download a BIG-IP onboarding Ansible role from Ansible Galaxy and run the playbook against your BIG-IP.
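As an illustration of that last step, the general command shape is below. The role and playbook names here are placeholders, not the actual onboarding role; search Ansible Galaxy for the current F5 onboarding role (the f5devcentral namespace used elsewhere in this series is a reasonable place to start).

# Install a role from Ansible Galaxy (role name is hypothetical)
ansible-galaxy install f5devcentral.bigip_onboarding

# Then run its playbook against your BIG-IP inventory (playbook name is hypothetical)
ansible-playbook onboarding_playbook.yml -i inventory.yml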