Dig deeper into Ansible and F5 integration
The basics of Ansible and F5 integration were covered in a joint webinar held in March 2017. To learn more about the integration, current F5 module support, and some use cases, view the webinar.
We held another joint webinar in June 2017 that went into more detail on the integration. We spoke about how the F5 Ansible modules (which will be part of the upcoming Ansible 2.4 release) can be used to perform administrative tasks on the BIG-IP and get the BIG-IP ready for application deployment in days rather than weeks. We also touched on using the F5 Ansible iApps module to deploy applications on the BIG-IP.
The webinar was very well received and ended with a great Q&A session. Some of the questions that came up were: how do we create playbooks for different workflows, what best practices does F5 recommend, and can we get a sample playbook? We will use this forum to answer some of those questions and dig deeper into the F5 and Ansible integration.
So what exactly is a playbook? It is simply a collection of tasks that are performed sequentially on a system. Consider a use case where a customer has just purchased 20 BIG-IPs and needs to get all of them networked and to a state where the BIG-IPs are ready to deploy applications. We can define a playbook consisting of the tasks required to perform Day 0 and Day 1 configuration on the BIG-IPs.
Let's start with Day 0. Networking the system consists of assigning NTP and DNS server addresses, assigning a hostname, making some SSH customizations, and so on; some of these settings are common to all the BIG-IPs. The common set of configurations can be defined using the concept of a 'role' in Ansible. Let's define an 'onboarding' role that will configure the common settings such as NTP, DNS, and SSHD.
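For orientation, the 'onboarding' role below follows the standard Ansible role layout; the playbook filename in this sketch is illustrative, while the tasks and defaults paths are the ones referenced later in this post:

playbooks/
├── onboarding.yml                  # onboarding playbook (filename illustrative)
└── roles/
    └── onboarding/
        ├── tasks/main.yaml         # NTP, DNS and SSHD tasks
        └── defaults/main.yaml      # default variable values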
PLAYBOOK FOR ONBOARDING
- name: Onboarding BIG-IP
  hosts: bigip
  gather_facts: false
  roles:
    - onboarding        # playbook runs tasks defined in the 'onboarding' role
This playbook will run against all the BIG-IPs defined in the inventory host file.
Example of an inventory host file:
[bigip]
10.192.73.218
10.192.73.219
10.192.73.220
10.192.73.221
The above playbook will run the tasks specified in the 'onboarding' role, defined in main.yaml (playbooks/roles/onboarding/tasks/main.yaml):
- name: Configure NTP server on BIG-IP
  bigip_device_ntp:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    ntp_servers: "{{ ntp_servers }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage SSHD setting on BIG-IP
  bigip_device_sshd:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    banner: "enabled"
    banner_text: " {{ banner_text }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage BIG-IP DNS settings
  bigip_device_dns:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    name_servers: "{{ dns_servers }}"
    search: "{{ dns_search_domains }}"
    ip_version: "{{ ip_version }}"
    validate_certs: False
  delegate_to: localhost
Variables are referenced from the main.yaml file in the defaults directory of the 'onboarding' role (playbooks/roles/onboarding/defaults/main.yaml):
username: admin
password: admin
banner_text: "--------Welcome to Onboarding BIGIP----------"
ntp_servers:
  - '172.27.1.1'
  - '172.27.1.2'
dns_servers:
  - '8.8.8.8'
  - '4.4.4.4'
dns_search_domains:
  - 'local'
  - 'localhost'
ip_version: 4
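With the role tasks and defaults in place, a single command runs the onboarding play against every BIG-IP in the inventory; something like the following, assuming the inventory file is saved as 'hosts' and the playbook as 'onboarding.yml' (both filenames are illustrative):

# Run the onboarding playbook against all hosts in the [bigip] group
ansible-playbook -i hosts onboarding.yml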
The BIG-IP is now ready for application deployment. As an example, let's configure the BIG-IP to securely load balance an application. This requires configuring the following on the BIG-IP:
- Vlans
- Self-IPs
- Nodes/members (2)
- Pool (1)
- Assigning the nodes to the Pool
- Creating an HTTPS virtual server
- Creating a redirect virtual server, which will redirect all HTTP requests to the HTTPS virtual server (an iRule assigned to the virtual server achieves this)
This playbook will be run individually for each BIG-IP, since each uses different values for VLANs, Self-IPs, virtual server addresses, and so on. The variable values for this playbook are defined inline rather than in a separate file.
PLAYBOOK FOR APPLICATION DEPLOYMENT
- name: creating HTTPS application
  hosts: bigip
  tasks:

    - name: Configure VLANs on the BIG-IP
      bigip_vlan:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        tag: "{{ item.tag }}"
        tagged_interface: "{{ item.interface }}"
      with_items:
        - name: 'External'
          tag: '10'
          interface: '1.1'
        - name: 'Internal'
          tag: '11'
          interface: '1.2'
      delegate_to: localhost

    - name: Configure Self-IPs on the BIG-IP
      bigip_selfip:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        address: "{{ item.address }}"
        netmask: "{{ item.netmask }}"
        vlan: "{{ item.vlan }}"
        allow_service: "{{ item.allow_service }}"
      with_items:
        - name: 'External-SelfIP'
          address: '10.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'External'
          allow_service: 'default'
        - name: 'Internal-SelfIP'
          address: '192.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'Internal'
          allow_service: 'default'
      delegate_to: localhost

    - name: Create a web01.internal node           # Creating Node1
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.140"
        name: "web01.internal"
        validate_certs: False
      delegate_to: localhost

    - name: Create a web02.internal node           # Creating Node2
      bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.141"
        name: "web02.internal"
        validate_certs: False
      delegate_to: localhost

    - name: Create a web-pool                      # Creating a pool
      bigip_pool:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        lb_method: "ratio_member"
        monitors: http
        name: "web-pool"
        validate_certs: False
      delegate_to: localhost

    - name: Add http node to web-pool              # Assigning members to a pool
      bigip_pool_member:
        description: "HTTP Webserver-1"
        host: "{{ item.host }}"
        name: "{{ item.name }}"
        user: "admin"
        password: "admin"
        pool: "web-pool"
        port: "80"
        server: "{{ inventory_hostname }}"
        validate_certs: False
      with_items:
        - host: "192.168.168.140"
          name: "web01.internal"
        - host: "192.168.68.141"
          name: "web02.internal"
      delegate_to: localhost

    - name: Create a virtual server                # Create an HTTPS virtual server
      bigip_virtual_server:
        description: "Secure web application"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "https_vs"
        destination: "10.10.20.120"
        port: 443
        snat: "Automap"
        all_profiles:
          - http
          - clientssl
        pool: "web-pool"
        validate_certs: False
      delegate_to: localhost

    - name: Create a redirect virtual server       # Create a redirect virtual server
      bigip_virtual_server:
        description: "Redirect Virtual server"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "http_redirect"
        destination: "10.10.20.120"
        validate_certs: False
        port: 80
        all_profiles:
          - http
        all_rules:                                 # Attach an iRule to the virtual server
          - _sys_https_redirect
      delegate_to: localhost
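Since the values in this playbook are device-specific, one way to run it is against one BIG-IP at a time using the standard --limit option of ansible-playbook; a sketch, assuming the playbook is saved as 'https_app.yml' (the filename is illustrative):

# Run the application playbook against a single BIG-IP from the inventory
ansible-playbook -i hosts https_app.yml --limit 10.192.73.218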
Bookmark this page if you are interested in learning more. We will be updating this blog with the new F5 modules that will be supported in the Ansible 2.4 release.
- Payal_S (Ret. Employee)
Hi Satish,
You can define in the inventory file which BIG-IP hosts you want to run your playbook against.
Example: Inventory file
[bigips]
10.1.1.2
10.1.1.3
10.1.1.4
Playbook
- name: Onboarding BIG-IP
  hosts: bigips          # This will run the playbook against all hosts under the [bigips] group in the inventory file
  gather_facts: false
  vars_files:
    - irule_var.yml
  tasks:
    - name: Add iRule
      bigip_irule:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "****"
        module: "ltm"
        name: "{{ item }}"
        content: "{{ lookup('file', item) }}"
        state: present
        validate_certs: false
      with_items: "{{ irules }}"
iRule variable file (irule_var.yml):
irules:
  - name_irule1
  - name_irule2
  - name_irule3
- satishs_370505 (Nimbostratus)
Is there an Ansible way to add an iRule to all virtual servers?
I found this example in the Ansible docs:
- name: Add iRules to the Virtual Server
  bigip_virtual_server:
    server: lb.mydomain.net
    user: admin
    password: secret
    name: my-virtual-server
    irules:
      - irule1
      - irule2
  delegate_to: localhost
Is there a way to do this for all virtual servers?
- RedRenegade_364 (Nimbostratus)
Thanks Payal.
The playbook presented in this article does not meet the requirements you specify for when explicit routes are needed vs. not needed. See below for the applicable snippet (truncated for simplicity in this case):
      with_items:
        - name: 'External-SelfIP'
          address: '10.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'External'
          allow_service: 'default'
        - name: 'Internal-SelfIP'
          address: '192.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'Internal'
          allow_service: 'default'

    - name: Add http node to web-pool
      bigip_pool_member:
        description: "HTTP Webserver-1"
        host: "{{ item.host }}"
        name: "{{ item.name }}"
        user: "admin"
        password: "admin"
        pool: "web-pool"
        port: "80"
        server: "{{ inventory_hostname }}"
        validate_certs: False
      with_items:
        - host: "192.168.168.140"
          name: "web01.internal"
        - host: "192.168.68.141"
          name: "web02.internal"

    - name: Create a virtual server
      bigip_virtual_server:
        description: "Secure web application"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "https_vs"
        destination: "10.10.20.120"
        port: 443
        snat: "Automap"
        all_profiles:
          - http
          - clientssl
        pool: "web-pool"
        validate_certs: False
      delegate_to: localhost
I suppose it's likely that many individuals who visit this site are capable of troubleshooting their way out of this problem (say, if they just copy your playbook for testing purposes) and figuring out that they must either modify the IPs of their nodes and front-end test clients, or add the bigip_static_route module to the playbook to specify the respective gateways for reachability to the clients and servers. With that said, I'd want anything with my name on it to actually work as I say it should work; that is only true for this playbook if you place a disclaimer at the beginning of the article stating that this is not a complete BIG-IP configuration, but a subset of the configuration for the purpose of explaining Ansible integration with F5.
I've started a list of the required modules for onboarding a pair of appliances in most standard enterprise implementations. While the list is not officially complete, it is likely the minimum configuration needed to provision and initialize the appliances BEFORE any service configuration tasks are performed. Thus far, the only configuration item I can't find an Ansible module for is changing the AAA strategy, such as configuring TACACS+ or LDAP authentication. Keep in mind, the goal from my point of view is to make it so the next administrator or engineer has to do less work than I did to bootstrap two appliances and get them up and running with services.
Below is the list of modules for onboarding:
- bigip_hostname
- bigip_snmp
- bigip_vlan
- bigip_static_route
- bigip_self_ip
- bigip_device_connectivity
- bigip_remote_syslog
- bigip_device_ntp
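As a rough sketch, these modules could be grouped into roles following the same pattern as the 'onboarding' role above; the role names and host group below are only illustrative, and each role would wrap the modules noted in the comments:

- name: Onboard a BIG-IP pair
  hosts: bigips
  gather_facts: false
  roles:
    - system_settings   # bigip_hostname, bigip_snmp, bigip_device_ntp, bigip_remote_syslog
    - networking        # bigip_vlan, bigip_static_route, bigip_self_ip
    - ha_setup          # bigip_device_connectivity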
thank you for the clarification!
- Payal_S (Ret. Employee)
Hi,
You do not need a default gateway if you have L2 connectivity to your external (client traffic) and internal (server traffic) networks, but if your servers are on a different subnet and you need to tell the BIG-IP how to route to that subnet, then yes, you do need to specify static routes. That said, it is good practice to specify a default route.
Example of adding a static route:
- name: Add route(s)
  bigip_static_route:
    server: "{{ bigip_ip }}"
    user: "{{ bigip_username }}"
    password: "{{ bigip_password }}"
    name: "{{ item.name }}"
    gateway_address: "{{ item.gw_address }}"
    netmask: "{{ item.netmask }}"
    destination: "{{ item.destination }}"
    validate_certs: "no"
  with_items: "{{ static_route }}"

Associated variable file:

static_route:
  - name: "default"
    gw_address: "10.168.68.1"
    destination: "0.0.0.0"
    netmask: "0.0.0.0"
Yes, if your backend servers are not on the same subnet you have two options: 1) Set SNAT to 'Automap' on the virtual server; this changes the source IP address to the internal Self-IP address of the BIG-IP, so the return traffic is forced back through the BIG-IP. 2) If you do not want to use Automap, point the default route of all your backend servers to the internal Self-IP address of the BIG-IP.
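For option 1, the relevant knob is the snat parameter of bigip_virtual_server, the same one used in the application playbook above; a minimal sketch that simply applies SNAT Automap to the existing HTTPS virtual server (credentials and names are placeholders):

- name: Set SNAT Automap on the HTTPS virtual server
  bigip_virtual_server:
    server: "{{ inventory_hostname }}"
    user: "admin"
    password: "admin"
    name: "https_vs"
    snat: "Automap"       # source address is translated to the BIG-IP internal Self-IP, forcing return traffic back through the BIG-IP
    validate_certs: False
  delegate_to: localhost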
Thanks
- RedRenegade_364 (Nimbostratus)
Hi Payal,
I'm confused by the content presented here...I'm new to F5, so forgive me if this seems like a silly question, but....
Wouldn't you need to add routes as part of the onboarding playbook, given that the system needs, at a minimum, a default gateway route? Also, the backend nodes do not reside on the same subnet as the internal VLAN self-IP that would be used as the "Automap" IP for building the backend connection...
I have a pair of 4600s that I need to deploy for a project, and I'm trying to integrate Ansible with them for future use. It appears that the bigip_static_route module is required for either the onboarding playbook (preferred) or the web application playbook. Thoughts? Am I missing something?
- Payal_S (Ret. Employee)
Glad it worked !!!
Thanks for sharing your use case. I had a few follow-up questions to help us better understand how customers are using automation with BIG-IP:
- Are the playbooks being used as part of a CI/CD toolchain by your DevOps team?
- What is the need or scenario that requires you to build virtual servers automatically so often?
- Is this automation being used in a production environment?
- B_Lawrimore_151 (Nimbostratus)
I got it, Payal! My issue was in the irules section of the task. I did this instead of what you wrote:
irules:
  - '{{ item.irule }}'
I got to thinking about the error 'unhashable list' and thought maybe it's the fact that I'm putting a list in the placeholder of a list item. By moving it up a level as you had written it, it started working just as it should. Great work and thank you so much!
Now that I have this playbook complete, and a better understanding of how Ansible works, I can ask my DevOps team to just complete a templated var file, drop it in a shared folder, and let the playbook run on a schedule, thus building the virtual servers automatically.
- Payal_S (Ret. Employee)
Are you using the module that is packaged with Ansible 2.5? (Just make sure you are not using a local or older copy of the bigip_virtual_server module.)
If you are using the latest package bundled with the Ansible 2.5 release, then please open an issue on GitHub at https://github.com/F5Networks/f5-ansible/issues (provide as much detail as you can regarding the issue).
If you are using a local or older copy: the module bundled with Ansible 2.5 is https://github.com/F5Networks/f5-ansible/blob/stable-2.5/library/bigip_virtual_server.py. Try replacing your copy with that one and see if it solves your issue.
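A quick way to check what is actually being picked up (standard commands, nothing F5-specific; the search paths below are only examples): ansible --version reports the Ansible version and the configured module search path, and a filesystem search can turn up a stray local copy of bigip_virtual_server.py that may be shadowing the bundled one.

# Show the Ansible version and configured module search path
ansible --version
# Look for a stray local copy of the module (search paths are just examples)
find . /etc/ansible -name "bigip_virtual_server.py" 2>/dev/null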
- B_Lawrimore_151 (Nimbostratus)
You have the question correct. The answer looks correct as well; however, I get an error upon execution:
TypeError: unhashable type: 'list'
failed: [10.10.10.10] (item={u'irules': [u'irule1', u'irule2'], u'name': u'vs_test01'}) => {
    "changed": false,
    "item": {
        "irules": [
            "irule1",
            "irule2"
        ],
        "name": "vs_test01"
    },
    "rc": 1
}
MSG: MODULE FAILURE
MODULE_STDERR:
Traceback (most recent call last):
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1657, in <module>
    main()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1648, in main
    results = mm.exec_module()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1402, in exec_module
    changed = self.present()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1425, in present
    return self.update()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1436, in update
    if not self.should_update():
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1444, in should_update
    result = self._update_changed_options()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1474, in _update_changed_options
    change = diff.compare(k)
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1113, in compare
    result = getattr(self, param)
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1359, in irules
    if sorted(set(self.want.irules)) != sorted(set(self.have.irules)):
TypeError: unhashable type: 'list'
I'm wondering if I have a bad module or plugin version that is causing my headaches. I've updated f5-sdk, suds, bigsuds, etc. I'm running Ansible 2.5.2 and Python 2.7.5.
Having this ability will round out the automation of building a VS, and modifying VSs in bulk. You've been very helpful.
- Payal_S (Ret. Employee)
So, if I understand this correctly, you want to run a playbook against multiple virtual servers, and each virtual server has multiple iRules.
See if this works for you.
Variable file
virtualserver:
  - name: Test1
    ip: "10.192.xx.xx"
    irules:
      - irule1
      - irule2
  - name: Test2
    ip: "10.192.xx.xx"
    irules:
      - irule1
      - irule3
Playbook task:
- name: Add VS on BIG-IP
  bigip_virtual_server:
    server: "10.192.xx.xx"
    user: "****"
    password: "****"
    name: "{{ item.name }}"
    destination: "{{ item.ip }}"
    port: 80
    irules: "{{ item.irules }}"
    validate_certs: False
  with_items: "{{ virtualserver }}"
  delegate_to: localhost
Result:
changed: => (item={u'irules': [u'irule1', u'irule2'], u'ip': u'10.192.xx.xx', u'name': u'Test1'})
changed: => (item={u'irules': [u'irule1', u'irule3'], u'ip': u'10.192.xx.xx', u'name': u'Test2'})