# Power of tmsh commands using Ansible
## Why is data important?

Having accurate data has become an integral part of decision making. The decision could be as simple as purchasing the newest electronic gadget on the market, or as complex as choosing the hardware and/or software platform that works best for your highly demanding application and provides the best user experience for your customers. In either case, research and data collection are essential. Deciding which F5 hardware and/or software to use in your environment follows the same principle: your IT team requires data to make the right decision. That data could be CPU, throughput, and/or memory utilization of your F5 gear, and it could cover a period of a day, a month, or a year, depending on your application usage patterns.

## Ansible to the rescue

Your environment could have tens, hundreds, or even thousands of F5 BIG-IPs, and manually logging into each one to gather data would be highly inefficient. A great and simple alternative is to use Ansible as an automation framework to perform this task, freeing you up for your other job functions. Let's take a look at some of the components needed to use Ansible.

An inventory file in Ansible defines the hosts against which your playbook is going to run. Below is an example of a file defining F5 hosts, which can be expanded to represent your tens, hundreds, or thousands of BIG-IPs.

Inventory file: `inventory.yml`

```ini
[f5]
ltm01 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm02 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm03 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm04 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
ltm05 password=admin server=10.192.73.xxx user=admin validate_certs=no server_port=443
```

A playbook defines the tasks that are going to be executed. This playbook uses the bigip_command module, which can take any BIG-IP tmsh command as input and return its output. Here we use tmsh commands to gather performance data from the BIG-IPs. The output from each BIG-IP is stored in a file that can be referenced after the playbook finishes execution.

Playbook: `performance_data.yml`

```yaml
---
- name: Create empty file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Creating an empty file
      file:
        path: "./{{filename}}"
        state: touch

- name: Gather stats using tmsh command
  hosts: f5
  connection: local
  gather_facts: false
  serial: 1
  tasks:
    - name: Gather performance stats
      bigip_command:
        provider:
          server: "{{server}}"
          user: "{{user}}"
          password: "{{password}}"
          server_port: "{{server_port}}"
          validate_certs: "{{validate_certs}}"
        commands:
          - show sys performance throughput historical
          - show sys performance system historical
      register: result

    - lineinfile:
        line: "\n###BIG-IP hostname => {{ inventory_hostname }} ###\n"
        insertafter: EOF
        dest: "./{{filename}}"

    - lineinfile:
        line: "{{ result.stdout_lines }}"
        insertafter: EOF
        dest: "./{{filename}}"

    - name: Format the file
      shell:
        cmd: sed 's/,/\n/g' ./{{filename}} > ./{{filename}}_formatted

    - pause:
        seconds: 10

- name: Delete file
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Delete extra file created
      file:
        path: ./{{filename}}
        state: absent
```

## Execution

The execution command takes as input the playbook name, the inventory file, and the filename where the output will be stored.
(There are different ways of defining and passing parameters to a playbook; below is one such example.)

```bash
ansible-playbook performance_data.yml -i inventory.yml --extra-vars "filename=perf_output"
```

Snippet of expected output:

```
###BIG-IP hostname => ltm01 ###
[['Sys::Performance Throughput'
'-----------------------------------------------------------------------'
'Throughput(bits)(bits/sec)Current3 hrs24 hrs7 days30 days'
'-----------------------------------------------------------------------'
'Service223.8K258.8K279.2K297.4K112.5K'
'In212.1K209.7K210.5K243.6K89.5K'
'Out21.4K21.0K21.1K57.4K30.1K'
''
'-----------------------------------------------------------------------'
'SSL TransactionsCurrent3 hrs24 hrs7 days30 days'
'-----------------------------------------------------------------------'
'SSL TPS00000'
''
'-----------------------------------------------------------------------'
'Throughput(packets)(pkts/sec)Current3 hrs24 hrs7 days30 days'
'-----------------------------------------------------------------------'
'Service7982836362'
'In4140403432'
'Out4140403234']
['Sys::Performance System'
'------------------------------------------------------------'
'System CPU Usage(%)Current3 hrs24 hrs7 days30 days'
'------------------------------------------------------------'
'Utilization1718181817'
''
'------------------------------------------------------------'
'Memory Used(%)Current3 hrs24 hrs7 days30 days'
'------------------------------------------------------------'
'TMM Memory Used1010101010'
'Other Memory Used5555545453'
'Swap Used00000']]

###BIG-IP hostname => ltm02 ###
[['Sys::Performance Throughput'
'-----------------------------------------------------------------------'
'Throughput(bits)(bits/sec)Current3 hrs24 hrs7 days30 days'
'-----------------------------------------------------------------------'
'Service202.3K258.7K279.2K297.4K112.5K'
'In190.8K209.7K210.5K243.6K89.5K'
'Out19.6K21.0K21.1K57.4K30.1K'
''
'-----------------------------------------------------------------------'
'SSL TransactionsCurrent3 hrs24 hrs7 days30 days'
'-----------------------------------------------------------------------'
'SSL TPS00000'
''
'-----------------------------------------------------------------------'
'Throughput(packets)(pkts/sec)Current3 hrs24 hrs7 days30 days'
'-----------------------------------------------------------------------'
'Service7782836362'
'In3940403432'
'Out3740403234']
['Sys::Performance System'
'------------------------------------------------------------'
'System CPU Usage(%)Current3 hrs24 hrs7 days30 days'
'------------------------------------------------------------'
'Utilization2118181817'
''
'------------------------------------------------------------'
'Memory Used(%)Current3 hrs24 hrs7 days30 days'
'------------------------------------------------------------'
'TMM Memory Used1010101010'
'Other Memory Used5555545453'
'Swap Used00000']]
```

The data obtained is historical data over a period of time. Sometimes it is also important to gather the peak usage of throughput/memory/CPU over time rather than the average. Stay tuned: we will discuss how to obtain that information in an upcoming article.
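As mentioned in the execution step above, there are different ways to pass parameters to a playbook. One alternative is to keep them in a small vars file and pass it with `-e @vars.yml`. A minimal sketch, assuming a file named `vars.yml` (the name is illustrative, not from the original example):

```yaml
# vars.yml (hypothetical file): holds the variables the playbook expects
filename: perf_output
```

Running `ansible-playbook performance_data.yml -i inventory.yml -e @vars.yml` then picks the value up from the file instead of the command line.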
## Conclusion

Use the output of the data to learn your traffic patterns and propose the most appropriate BIG-IP hardware/software for your environment. This could be data collected directly in your production environment or in a staging environment, which would help you decide which purchasing strategy gives you the most value from your BIG-IPs.

For reference: https://www.f5.com/pdf/products/big-ip-local-traffic-manager-ds.pdf

The above is one example of how you can get started with Ansible and tmsh commands. Using this method you can potentially automate close to 100% of operations on the BIG-IP.

# Parsing complex BIG-IP json structures made easy with Ansible filters like json_query
## JMESPath and json_query

JMESPath (JSON Matching Expression Paths) is a query language for searching JSON documents. It allows you to declaratively extract elements from a JSON document. Have a look at this tutorial to learn more. The json_query filter lets you query a complex JSON structure and iterate over it using a loop structure. This filter is built upon JMESPath, and you can use the same syntax. Click here to learn more about the json_query filter and how it is used in Ansible.

In this article we are going to use the bigip_device_info module to gather various facts from the BIG-IP and then use the json_query filter to parse the output and extract the relevant information.

## Ansible bigip_device_info module

Playbook to query the BIG-IP and gather system-based information:

```yaml
- name: "Get BIG-IP Facts"
  hosts: bigip
  gather_facts: false
  connection: local
  tasks:
    - name: Query BIG-IP facts
      bigip_device_info:
        provider:
          validate_certs: False
          server: "xxx.xxx.xxx.xxx"
          user: "*****"
          password: "*****"
        gather_subset:
          - system-info
      register: bigip_facts

    - set_fact:
        facts: '{{bigip_facts.system_info}}'

    - name: debug
      debug: msg="{{facts}}"
```

To view the output of a different subset, change the gather_subset and set_fact values in the above playbook from gather_subset: system-info, facts: bigip_facts.system_info to any of the below:

- gather_subset: vlans, facts: bigip_facts.vlans
- gather_subset: self-ips, facts: bigip_facts.self_ips
- gather_subset: nodes, facts: bigip_facts.nodes
- gather_subset: software-volumes, facts: bigip_facts.software_volumes
- gather_subset: virtual-servers, facts: bigip_facts.virtual_servers
- gather_subset: system-info, facts: bigip_facts.system_info
- gather_subset: ltm-pools, facts: bigip_facts.ltm_pools

Click here to view all the information that can be obtained from the BIG-IP using this module.

## Parse the JSON output

Once we have the output, let's take a look at how to parse it. As mentioned above, JMESPath syntax can be used with the json_query filter.

- Step 1: Work out the JMESPath syntax for the information we want to extract.
- Step 2: Use that JMESPath syntax with json_query in an Ansible playbook.

The website used in this article to try out the syntax below: https://jmespath.org/

Some BIG-IP sample outputs are attached to this article as well (check the attachments section after the References). The attachment file is a combined output of a few configuration subsets. Copy and paste the relevant information from the attachment to test the examples below if you do not have a BIG-IP.

### System information

The output for this section is obtained with the above playbook using the parameters gather_subset: system-info, facts: bigip_facts.system_info. Once the above playbook is run against your BIG-IP, or if you are using the attached sample configuration, copy the output and paste it in the relevant text box.
Try different queries by placing them in the text box next to the magnifying glass, as shown in the image below.

```
# Get MAC address, serial number, version information
msg.[base_mac_address,chassis_serial,platform,product_version]

# Get MAC address, serial number, version information and hardware information
msg.[base_mac_address,chassis_serial,platform,product_version,hardware_information[*].[name,type]]
```

### Software volumes

The output for this section is obtained with the above playbook using the parameters gather_subset: software-volumes, facts: bigip_facts.software_volumes.

```
# Get the name and version of the software volumes installed and their status
msg[*].[name,active,version]

# Get the name and version only for the software volume that is active
msg[?active=='yes'].[name,version]
```

### VLANs and Self-IPs

The output for this section is obtained with the above playbook using the parameters gather_subset: vlans and self-ips, facts: bigip_facts. Look at the following example to define more than one subset in the playbook.

```
# Get all the self-IP addresses and the VLAN assigned to each self-IP
# Also get all the VLANs and the interfaces assigned to each VLAN
[msg.self_ips[*].[address,vlan], msg.vlans[*].[full_path,interfaces[*]]]
```

### Nodes

The output for this section is obtained with the above playbook using the parameters gather_subset: nodes, facts: bigip_facts.nodes.

```
# Get the address and availability status of all the nodes
msg[*].[address,availability_status]

# Get the availability status and reason for a particular node
msg[?address=='192.0.1.101'].[full_path,availability_status,status_reason]
```

### Pools

The output for this section is obtained with the above playbook using the parameters gather_subset: ltm-pools, facts: bigip_facts.ltm_pools.

```
# Get the name of all pools
msg[*].name

# Get the name of all pools and their associated members
msg[*].[name,members[*]]

# Get the name of all pools and only the address of their associated members
msg[*].[name,members[*].address]

# Get the name of all pools along with the address and status of their associated members
msg[*].[name,members[*].address,availability_status]

# Get the status of pool members of a particular pool
msg[?name=='/Common/pool'].[members[*].address,availability_status]

# Get the status of each pool plus the address, partition, and state of its members
msg[*].[name,members[*][address,partition,state],availability_status]

# Get the status of a particular pool and a particular member (multiple entries on a member)
msg[?full_name=='/Common/pool'].[members[?address=='192.0.1.101'].[address,partition],availability_status]
```

### Virtual Servers

The output for this section is obtained with the above playbook using the parameters gather_subset: virtual-servers, facts: bigip_facts.virtual_servers.

```
# Get the destination IP address of all virtual servers
msg[*].destination

# Get the destination IP and default pool of all virtual servers
msg[*].[destination,default_pool]

# Get the destination IP of all virtual servers that have a particular pool as their default pool
msg[?default_pool=='/Common/pool'].destination

# Get all profiles assigned to all virtual servers
msg[*].[destination,profiles[*].name]
```

## Loop and display using Ansible

We have seen how to use the JMESPath syntax to extract information; now let's see how to use it within an Ansible playbook.
```yaml
- name: Parse the output
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Setup provider
      set_fact:
        provider:
          server: "xxx.xxx.xxx.xxx"
          user: "*****"
          password: "*****"
          server_port: "443"
          validate_certs: "no"

    - name: Query BIG-IP facts
      bigip_device_info:
        provider: "{{provider}}"
        gather_subset:
          - system_info
      register: bigip_facts

    - debug: msg="{{bigip_facts.system_info}}"

    # Use the json_query filter. The query_string will be the JMESPath syntax.
    # From the JMESPath query remove the 'msg' expression and use it as-is.
    - name: "Show relevant information"
      set_fact:
        result: "{{bigip_facts.system_info | json_query(query_string)}}"
      vars:
        query_string: "[base_mac_address,chassis_serial,platform,product_version,hardware_information[*].[name,type]]"

    - debug: "msg={{result}}"
```

Another example of what would change if you use a different query (only the changes to the playbook above are shown):

```yaml
    - name: Query BIG-IP facts
      bigip_device_info:
        provider: "{{provider}}"
        gather_subset:
          - ltm-pools
      register: bigip_facts

    - debug: msg="{{bigip_facts.ltm_pools}}"

    - name: "Show relevant information"
      set_fact:
        result: "{{bigip_facts.ltm_pools | json_query(query_string)}}"
      vars:
        query_string: "[*].[name,members[*][address,partition,state],availability_status]"
```

The key is to get the JMESPath syntax for the information you are looking for; it is then a simple step to incorporate it within your Ansible playbook.

## References

- Try the queries: https://jmespath.org/
- Learn more JMESPath syntax and examples: https://jmespath.org/tutorial.html
- Ansible lab that can be used as a sandbox: https://clouddocs.f5.com/training/automation-sandbox/
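As a follow-up to the playbooks above, here is a minimal sketch (not part of the original playbooks) of actually iterating over a parsed result rather than printing it as one blob. It assumes the `result` fact was set by the ltm-pools query in the previous snippet and would be appended to that play's task list:

```yaml
    # Hypothetical addition: json_query returns a list, so we can iterate
    # over it with loop, printing one pool entry per iteration
    - name: Display each pool entry on its own line
      debug:
        msg: "{{ item }}"
      loop: "{{ result }}"
```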
# Ansible F5 Backup

I've trawled the internet and DevCentral to find a suitable Ansible playbook to do the following: back up an F5 with the same filename so that I can push it to our GitLab for version control. The Ansible modules seem to either generate a random filename which isn't reusable in a playbook, or, if I specify the source, the current UCS does not get overwritten; and if I copy to the local filesystem with the same target name, the module appends the date and time to the file, which gives no consistency in GitLab. This is what I have come up with so far; the code is in its most basic form, for testing only.

```yaml
- name: Clean the local backup directory
  file:
    path: "{{ item }}"
    state: absent
  with_fileglob:
    - "/ansible/dailybackups/*"
  connection: local

- name: Clean the previous UCS file from F5
  bigip_ucs:
    state: absent
    ucs: "{{ inventory_hostname }}.ucs"
    provider:
      server: 1.1.1.1
      user: admin
      password: admin
      validate_certs: no
  delegate_to: localhost

- name: Save the running configuration of the BIG-IP
  bigip_ucs_fetch:
    backup: yes
    src: "{{ inventory_hostname }}.ucs"
    dest: /ansible/dailybackups/{{ inventory_hostname }}.ucs
    provider:
      server: 1.1.1.1
      user: admin
      password: admin
      validate_certs: no
  delegate_to: localhost
```

So to perform a repeatable function, I am forced to delete the file from the local filesystem, erase the current UCS file on the F5 that is used as the backup, and then back up the F5 and pull the file to the local filesystem. Surely there is a slicker way of doing what can be done on a Cisco device in four lines. (NB: I have excluded the Git function; these three plays merely pull a consistently named UCS file and store it on the local filesystem.)

# Dig deeper into Ansible and F5 integration
The basics of Ansible and F5 integration were covered in a joint webinar held in March 2017. To learn more about the integration, current F5 module support, and some use cases, view the webinar. We held another joint webinar in June 2017 that went into more detail on the integration. We spoke about how the F5 Ansible modules (which will be part of the upcoming Ansible 2.4 release) can be used to perform administrative tasks on the BIG-IP and make the BIG-IP ready for application deployment in days rather than weeks. We also touched upon using the F5 Ansible iApps module to deploy applications on the BIG-IP. The webinar was very well received and ended with a great Q&A session. Some of the questions that came up were: how do we create playbooks for different workflows, what best practices does F5 recommend, and can we get a sample playbook? We will use this forum to answer some of those questions and dig deeper into the F5 and Ansible integration.

Now, what really is a playbook? It is nothing but a collection of tasks that are to be performed sequentially on a system. Let us consider a use case where a customer has just purchased 20 BIG-IPs and needs to get all of them networked and to a state where they are ready to deploy applications. We can define a playbook consisting of the tasks required to perform Day 0 and Day 1 configuration on the BIG-IPs.

Let's start with Day 0. Networking the system consists of assigning it an NTP and DNS server address, assigning a hostname, making some SSH customizations, etc. Some of these settings are common to all the BIG-IPs. The common set of configurations can be defined using the concept of a 'role' in Ansible. Let's define an 'onboarding' role which will configure the common settings like NTP, DNS, and SSHD.

PLAYBOOK FOR ONBOARDING

```yaml
- name: Onboarding BIG-IP
  hosts: bigip
  gather_facts: false
  roles:
    - onboarding   # playbook runs tasks defined in the 'onboarding' role
```

This playbook will run against all the BIG-IPs defined in the inventory host file.

Example of an inventory host file:

```ini
[bigip]
10.192.73.218
10.192.73.219
10.192.73.220
10.192.73.221
```

The above playbook will run the tasks specified in the 'onboarding' role in file main.yaml (playbooks/roles/onboarding/tasks/main.yaml):

```yaml
- name: Configure NTP server on BIG-IP
  bigip_device_ntp:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    ntp_servers: "{{ ntp_servers }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage SSHD setting on BIG-IP
  bigip_device_sshd:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    banner: "enabled"
    banner_text: " {{ banner_text }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage BIG-IP DNS settings
  bigip_device_dns:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    name_servers: "{{ dns_servers }}"
    search: "{{ dns_search_domains }}"
    ip_version: "{{ ip_version }}"
    validate_certs: False
  delegate_to: localhost
```

Variables are referenced from the main.yaml file under the defaults directory of the 'onboarding' role (playbooks/roles/onboarding/defaults/main.yaml):

```yaml
username: admin
password: admin
banner_text: "--------Welcome to Onboarding BIGIP----------"
ntp_servers:
  - '172.27.1.1'
  - '172.27.1.2'
dns_servers:
  - '8.8.8.8'
  - '4.4.4.4'
dns_search_domains:
  - 'local'
  - 'localhost'
ip_version: 4
```
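Role defaults sit at the bottom of Ansible's variable precedence, so any of the values above can be overridden for a single device without touching the role. A minimal sketch, where the host_vars path and values are hypothetical rather than from the original article:

```yaml
# playbooks/host_vars/10.192.73.218.yml (hypothetical file)
# Overrides the role defaults for this one BIG-IP only
ntp_servers:
  - '172.27.1.3'
banner_text: "--------Welcome to BIG-IP 218----------"
```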
The BIG-IP is now ready to deploy applications. One such application is configuring the BIG-IP to securely load balance traffic. This requires configuring the following on the BIG-IP:

- VLANs
- Self-IPs
- Nodes/members (2)
- Pool (1)
- Assigning the nodes to the pool
- Creating an HTTPS virtual server
- Creating a redirect virtual server, which will redirect all HTTP requests to the HTTPS virtual server (an iRule is assigned to the virtual server to achieve this)

This playbook is run individually for each BIG-IP, since each will use different values for VLANs, self-IPs, virtual server addresses, etc. The variable values for this playbook are defined inline rather than in a separate file.

PLAYBOOK FOR APPLICATION DEPLOYMENT

```yaml
- name: creating HTTPS application
  hosts: bigip
  tasks:
    - name: Configure VLANs on the BIG-IP
      bigip_vlan:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        tag: "{{ item.tag }}"
        tagged_interface: "{{ item.interface }}"
      with_items:
        - name: 'External'
          tag: '10'
          interface: '1.1'
        - name: 'Internal'
          tag: '11'
          interface: '1.2'
      delegate_to: localhost

    - name: Configure Self-IPs on the BIG-IP
      bigip_selfip:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        address: "{{ item.address }}"
        netmask: "{{ item.netmask }}"
        vlan: "{{ item.vlan }}"
        allow_service: "{{ item.allow_service }}"
      with_items:
        - name: 'External-SelfIP'
          address: '10.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'External'
          allow_service: 'default'
        - name: 'Internal-SelfIP'
          address: '192.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'Internal'
          allow_service: 'default'
      delegate_to: localhost

    # Create Node1
    - name: Create a web01.internal node
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.140"
        name: "web01.internal"
        validate_certs: False
      delegate_to: localhost

    # Create Node2
    - name: Create a web02.internal node
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.141"
        name: "web02.internal"
        validate_certs: False
      delegate_to: localhost

    # Create a pool
    - name: Create a web-pool
      bigip_pool:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        lb_method: "ratio_member"
        monitors: http
        name: "web-pool"
        validate_certs: False
      delegate_to: localhost

    # Assign members to the pool
    - name: Add http nodes to web-pool
      bigip_pool_member:
        description: "HTTP Webserver-1"
        host: "{{ item.host }}"
        name: "{{ item.name }}"
        user: "admin"
        password: "admin"
        pool: "web-pool"
        port: "80"
        server: "{{ inventory_hostname }}"
        validate_certs: False
      with_items:
        - host: "192.168.68.140"
          name: "web01.internal"
        - host: "192.168.68.141"
          name: "web02.internal"
      delegate_to: localhost

    # Create an HTTPS virtual server
    - name: Create a virtual server
      bigip_virtual_server:
        description: "Secure web application"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "https_vs"
        destination: "10.10.20.120"
        port: 443
        snat: "Automap"
        all_profiles:
          - http
          - clientssl
        pool: "web-pool"
        validate_certs: False
      delegate_to: localhost

    # Create a redirect virtual server
    - name: Create a redirect virtual server
      bigip_virtual_server:
        description: "Redirect Virtual server"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "http_redirect"
        destination: "10.10.20.120"
        validate_certs: False
        port: 80
        all_profiles:
          - http
        all_rules:               # Attach an iRule to the virtual server
          - _sys_https_redirect
      delegate_to: localhost
```
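Once the play completes, a quick read-only check can confirm the configuration landed. The following task is a sketch that is not part of the original playbook; it reuses the bigip_command module and credential pattern from earlier and could be appended to the task list above:

```yaml
    # Hypothetical verification task: list the configured virtual servers
    - name: Verify the virtual servers exist
      bigip_command:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        validate_certs: False
        commands:
          - show ltm virtual
      register: vs_output
      delegate_to: localhost

    - debug: msg="{{ vs_output.stdout_lines }}"
```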
Bookmark this page if you are interested in learning more. We will be updating this blog with the new F5 modules that are going to be supported with the Ansible 2.4 release.

# BIG-IP deployments using Ansible in private and public cloud
F5 has been actively developing Ansible modules that help in deploying an application on the BIG-IP. For a list of candidate modules for the Ansible 2.4 release, refer to the GitHub link. These modules can be used to configure any BIG-IP (physical/virtual) in any environment (public, private, or hybrid cloud).

Before we can use the BIG-IP to deploy an application, we need to spin up a virtual edition of the BIG-IP. Let's look at some ways to spin up a BIG-IP in the private and public cloud.

## Private cloud

### Create a BIG-IP guest VM through VMware vSphere

For more details on the Ansible module, refer to the Ansible documentation.

Precondition: a template of the BIG-IP image has been created in VMware.

Example playbook:

```yaml
- name: Create VMware guest
  hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Deploy BIG-IP VE
      vsphere_guest:
        vcenter_hostname: 10.192.73.100   # vCenter hostname or IP address
        esxi:
          datacenter: F5 BD Lab           # Datacenter name
          hostname: 10.192.73.22          # esxi hostname or IP address
        username: root                    # vCenter username
        password: "*****"                 # vCenter password
        guest: "BIGIP-VM"                 # Name of the BIG-IP to be created
        from_template: yes
        template_src: "BIG-IP VE 12.1.2.0.0.249-Template"   # Name of the template
```

### Spin up a BIG-IP VM in VMware using govc

For more details on govc, refer to the govc GitHub and VMware GitHub pages.

Precondition: govc has been installed on the Ansible host.

Example playbook:

```yaml
- name: Create VMware guest
  hosts: localhost
  connection: local
  tasks:
    - name: Import OVA and deploy BIG-IP VM
      # Command to import the BIG-IP ova file
      command: "/usr/local/bin/govc import.ova -name=newVM-BIGIP005 /tmp/BIGIP-12.1.2.0.0.249.LTM-scsi.ova"
      environment:
        GOVC_HOST: "10.192.73.100"        # vCenter hostname or IP address
        GOVC_URL: "https://10.192.73.100/sdk"
        GOVC_USERNAME: "root"             # vCenter username
        GOVC_PASSWORD: "*******"          # vCenter password
        GOVC_INSECURE: "1"
        GOVC_DATACENTER: "F5 BD Lab"      # Datacenter name
        GOVC_DATASTORE: "datastore1 (5)"  # Datastore on which to store the ova file
        GOVC_RESOURCE_POOL: "Testing"     # Resource pool to use

    - name: Power on the VM
      command: "/usr/local/bin/govc vm.power -on newVM-BIGIP005"
      environment:
        GOVC_HOST: "10.192.73.100"
        GOVC_URL: "https://10.192.73.100/sdk"
        GOVC_USERNAME: "root"
        GOVC_PASSWORD: "vmware"
        GOVC_INSECURE: "1"
        GOVC_DATACENTER: "F5 BD Lab"
        GOVC_DATASTORE: "datastore1 (5)"
        GOVC_RESOURCE_POOL: "Testing"
```

## Public cloud

### Spin up a BIG-IP using CloudFormation templates in AWS

For more details on the BIG-IP CloudFormation templates, refer to the following GitHub page.

Precondition: the CloudFormation JSON template has been downloaded to the Ansible host.

Example playbook:

```yaml
- name: Launch BIG-IP CFT in AWS
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Launch BIG-IP CFT
      cloudformation:
        aws_access_key: "******************"   # AWS access key
        aws_secret_key: "******************"   # AWS secret key
        stack_name: "StandaloneBIGIP-1nic-experimental-Ansible"
        state: "present"
        region: "us-west-2"
        disable_rollback: true
        template: "standalone-hourly-1nic-experimental.json"   # JSON blob for the CFT
        template_parameters:                   # template parameters
          availabilityZone1: "us-west-2a"
          sshKey: "bigip-test"
        validate_certs: false
      register: stack

    - name: Get facts (IP address) from a CloudFormation stack
      cloudformation_facts:
        aws_access_key: "*****************"
        aws_secret_key: "*****************"
        region: "us-west-2"
        stack_name: "StandaloneBIGIP-1nic-experimental-Ansible"
      register: bigip_ip_address
    # Extract the BIG-IP MGMT IP address
    - set_fact:
        ip_address: "{{ bigip_ip_address['ansible_facts']['cloudformation']['StandaloneBIGIP-1nic-experimental-Ansible']['stack_outputs']['Bigip1subnet1Az1SelfEipAddress'] }}"

    # Copy the BIG-IP MGMT IP address to a file; the copied address can be referenced from this file
    - copy:
        content: "bigip_ip_address: {{ ip_address }}"
        dest: "aws_var_file.yaml"
        mode: 0644
```

The above are a few ways to spin up a BIG-IP Virtual Edition in your private or public cloud environment. Once the BIG-IP is installed, use the F5 Ansible modules to deploy an application on it. Refer to the DevCentral article to learn more about Ansible roles and how we can use roles to onboard and network a BIG-IP. Included below is a simple playbook that you can download and run against the BIG-IP.

```yaml
- name: Onboarding BIG-IP
  hosts: bigip               # 'bigip' group should be present in the Ansible inventory file
  gather_facts: false
  tasks:
    - name: Configure NTP server on BIG-IP
      bigip_device_ntp:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        ntp_servers: "172.2.1.1"
        validate_certs: False
      delegate_to: localhost

    - name: Configure BIG-IP hostname
      bigip_hostname:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        validate_certs: False
        hostname: "bigip1.local.com"
      delegate_to: localhost

    - name: Manage SSHD setting on BIG-IP
      bigip_device_sshd:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        banner: "enabled"
        banner_text: "Welcome - CLI username/password to login"
        validate_certs: False
      delegate_to: localhost

    - name: Manage BIG-IP DNS settings
      bigip_device_dns:
        server: "<bigip_ip_address>"
        user: "admin"
        password: "admin"
        name_servers: "172.2.1.1"
        search: "localhost"
        ip_version: "4"
        validate_certs: False
      delegate_to: localhost
```

For more information on BIG-IP Ansible playbooks, visit the following GitHub link.
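Tying the two playbooks together: instead of hand-editing the `<bigip_ip_address>` placeholder, the address saved to aws_var_file.yaml by the AWS play can be loaded with vars_files. This wiring is a sketch of one possible approach, not part of the original article:

```yaml
- name: Onboard the BIG-IP created by the AWS play
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - aws_var_file.yaml      # defines bigip_ip_address, written by the copy task above
  tasks:
    - name: Configure NTP server on BIG-IP
      bigip_device_ntp:
        server: "{{ bigip_ip_address }}"
        user: "admin"
        password: "admin"
        ntp_servers: "172.2.1.1"
        validate_certs: False
      delegate_to: localhost
```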