Payal_S_
Cirrus

The basics of the Ansible and F5 integration were covered in a joint webinar held earlier, in March 2017. To learn more about the integration, current F5 module support, and some use cases, view the webinar.

We held another joint webinar in June 2017, which went into more detail on the integration. We discussed how the F5 Ansible modules (which will be part of the upcoming Ansible 2.4 release) can be used to perform administrative tasks on the BIG-IP and get the BIG-IP ready for application deployment in far less time. We also touched on using the F5 Ansible iApps module to deploy applications on the BIG-IP.

The webinar was very well received and ended with a great Q&A session. Some of the questions that came up: how do we create playbooks for different workflows? What best practices does F5 recommend? Can we get a sample playbook? We will use this forum to answer some of those questions and dig deeper into the F5 and Ansible integration.

So what exactly is a playbook? It is simply a collection of tasks that are to be performed sequentially on a system. Consider a use case where a customer has just purchased 20 BIG-IPs and needs to get all of them networked and to a state where they are ready to deploy applications. We can define a playbook consisting of the tasks required to perform the Day 0 and Day 1 configuration on the BIG-IPs.

Let's start with Day 0. Networking the system consists of assigning NTP and DNS server addresses, assigning a hostname, making some SSH customizations, and so on. Some of these settings are common to all the BIG-IPs, and a common set of configurations can be defined using the concept of a 'role' in Ansible. Let's define an 'onboarding' role that configures the common settings such as NTP, DNS, and SSHD.
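A role is just a conventional directory layout that Ansible looks up automatically. As a sketch, the 'onboarding' role used in this article can be scaffolded like this (paths match the ones referenced later in the article):

```shell
# Create the standard Ansible role skeleton for the 'onboarding' role.
# Task files go in tasks/main.yaml, default variables in defaults/main.yaml.
mkdir -p playbooks/roles/onboarding/tasks
mkdir -p playbooks/roles/onboarding/defaults

# Inspect the resulting layout
ls -R playbooks
```

Any playbook placed under playbooks/ can then pull in the role by name, as the onboarding playbook below does.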

PLAYBOOK FOR ONBOARDING

- name: Onboarding BIG-IP
  hosts: bigip
  gather_facts: false
  roles:
   - onboarding                # playbook runs tasks defined in the 'onboarding' role

This playbook will run against all the BIG-IPs defined in the inventory host file.
Example inventory host file:

[bigip]
10.192.73.218
10.192.73.219
10.192.73.220
10.192.73.221
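With the inventory saved, the playbook can be pointed at it with the -i option. A sketch, assuming the inventory above is saved as ./hosts and the playbook as onboarding.yaml (both filenames are illustrative):

```shell
# Validate the playbook before touching any device
ansible-playbook -i ./hosts onboarding.yaml --syntax-check

# Run the onboarding playbook against every host in the [bigip] group
ansible-playbook -i ./hosts onboarding.yaml
```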

The above playbook runs the tasks specified in the 'onboarding' role's main.yaml file (playbooks/roles/onboarding/tasks/main.yaml):

- name: Configure NTP server on BIG-IP
  bigip_device_ntp:
     server: "{{ inventory_hostname }}"
     user: "{{ username }}"
     password: "{{ password }}"
     ntp_servers: "{{ ntp_servers }}"
     validate_certs: False
  delegate_to: localhost

- name: Manage SSHD setting on BIG-IP
  bigip_device_sshd:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    banner: "enabled"
    banner_text: " {{ banner_text }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage BIG-IP DNS settings
  bigip_device_dns:
   server: "{{ inventory_hostname }}"
   user: "{{ username }}"
   password: "{{ password }}"
   name_servers: "{{ dns_servers }}"
   search: "{{ dns_search_domains }}"
   ip_version: "{{ ip_version }}"
   validate_certs: False
  delegate_to: localhost

Variables are referenced from the main.yaml file under the defaults directory of the 'onboarding' role (playbooks/roles/onboarding/defaults/main.yaml):

username: admin
password: admin
banner_text: "--------Welcome to Onboarding BIGIP----------"
ntp_servers:
 - '172.27.1.1'
 - '172.27.1.2'

dns_servers:
 - '8.8.8.8'
 - '4.4.4.4'
dns_search_domains:
 - 'local'
 - 'localhost'
ip_version: 4

The BIG-IP is now ready to deploy applications. As an example, let's configure the BIG-IP to securely load balance an application. This requires configuring the following on the BIG-IP:

  • VLANs
  • Self IPs
  • Nodes/members (2)
  • Pool (1)
  • Assigning the nodes to the pool
  • Creating an HTTPS virtual server
  • Creating a redirect virtual server, which redirects all HTTP requests to the HTTPS virtual server (an iRule assigned to the virtual server achieves this)

This playbook will be run individually against each BIG-IP, since each will use different values for VLANs, Self IPs, virtual server addresses, etc. The variable values for this playbook are defined inline rather than in a separate file.
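One way to run a playbook against a single BIG-IP at a time, rather than the whole inventory group, is Ansible's --limit option. A sketch (the playbook filename is illustrative):

```shell
# Run the application-deployment playbook against one BIG-IP from the
# inventory instead of every host in the [bigip] group.
ansible-playbook -i ./hosts app_deployment.yaml --limit 10.192.73.218
```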

PLAYBOOK FOR APPLICATION DEPLOYMENT

- name: creating HTTPS application
  hosts: bigip
  tasks:

  - name: Configure VLANs on the BIG-IP
    bigip_vlan:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        tag: "{{ item.tag }}"
        tagged_interface: "{{ item.interface }}"
    with_items:
        - name: 'External'
          tag: '10'
          interface: '1.1'
        - name: 'Internal'
          tag: '11'
          interface: '1.2'
    delegate_to: localhost

  - name: Configure SELF-IPs on the BIG-IP
    bigip_selfip:
        server: "{{ inventory_hostname }}"
        user: "{{ username }}"
        password: "{{ password }}"
        validate_certs: False
        name: "{{ item.name }}"
        address: "{{ item.address }}"
        netmask: "{{ item.netmask }}"
        vlan: "{{ item.vlan }}"
        allow_service: "{{item.allow_service}}"
    with_items:
        - name: 'External-SelfIP'
          address: '10.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'External'
          allow_service: 'default'
        - name: 'Internal-SelfIP'
          address: '192.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'Internal'
          allow_service: 'default'
    delegate_to: localhost

  - name: Create a web01.internal node           # Creating Node1
    bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.140"
        name: "web01.internal"
        validate_certs: False
    delegate_to: localhost

  - name: Create a web02.internal node           # Creating Node2
    bigip_node:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        host: "192.168.68.141"
        name: "web02.internal"
        validate_certs: False
    delegate_to: localhost

  - name: Create a web-pool                      # Creating a pool
    bigip_pool:
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        lb_method: "ratio_member"
        monitors: http
        name: "web-pool"
        validate_certs: False
    delegate_to: localhost

  - name: Add http node to web-pool              # Assigning members to a pool
    bigip_pool_member:
        description: "HTTP Webserver-1"
        host: "{{ item.host }}"
        name: "{{ item.name }}"
        user: "admin"
        password: "admin"
        pool: "web-pool"
        port: "80"
        server: "{{ inventory_hostname }}"
        validate_certs: False
    with_items:
        - host: "192.168.68.140"
          name: "web01.internal"
        - host: "192.168.68.141"
          name: "web02.internal"
    delegate_to: localhost

  - name: Create a virtual server                # Create an HTTPS virtual server
    bigip_virtual_server:
        description: "Secure web application"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "https_vs"
        destination: "10.10.20.120"
        port: 443
        snat: "Automap"
        all_profiles:
            - http
            - clientssl
        pool: "web-pool"
        validate_certs: False
    delegate_to: localhost

  - name: Create a redirect virtual server       # Create a redirect virtual server
    bigip_virtual_server:
        description: "Redirect Virtual server"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "http_redirect"
        destination: "10.10.20.120"
        validate_certs: False
        port: 80
        all_profiles:
            - http
        all_rules:                               # Attach an iRule to the virtual server
            - _sys_https_redirect
    delegate_to: localhost

Bookmark this page if you are interested in learning more. We will be updating this blog with the new F5 modules that are going to be supported with the Ansible 2.4 release.

Comments
B_Earp
Nimbostratus

I cannot work out how many files are referenced here - would you be able to post the final config file(s)? It would make it much easier to understand. For me at least.

 

Could you add their locations at the top of the file so I could see how they all reference each other?

 

Thanks a lot...

 

Payal_S_
Cirrus

Sure, I understand. I am using the concept of roles here, which is an Ansible best practice, but it can get confusing.

 

To keep it simple, have the variable file and the playbook in the same directory and then run the playbook: ansible-playbook playbook-name.yaml

 

If your variable file name is say 'variable_file.yaml', then in your playbook you would reference the variable file as follows:

 

- name: Onboarding BIG-IP
  hosts: bigip
  gather_facts: false
  vars_files:
    - variable_file.yaml
  .....

     

You can check out the following github as well for an example: https://github.com/payalsin/f5-ansible/blob/master/playbooks/onboarding-bigip.yml

 

Thanks

 

B_Earp
Nimbostratus

Hi - thanks for that - so I followed your link and I think there are 2 files I need to download but I cannot be sure. I don't suppose you could upload the files with their locations hashed out at the top of the file so I can understand how they relate to each other? I tried to run the playbook but it says there is no password, and I looked at both the files and there is no password entry - so I am presuming there is a 3rd hidden file somewhere, but I am not sure what that is or how to reference it. Thanks 🙂

 

Payal_S_
Cirrus

Yes, there is another file, called the host file, which needs to contain information about the hosts you are trying to connect to.

 

"Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of systems listed in Ansible’s inventory, which defaults to being saved in the location /etc/ansible/hosts. You can specify a different inventory file using the -i option on the command line." - http://docs.ansible.com/ansible/latest/intro_inventory.html

 

 

Example of the host file for the above example:

[bigip]
10.XX.XX.XX

[bigip:vars]
username=admin
password=admin

When the playbook is run, it will look at the host file. As mentioned above, the default host file is placed at /etc/ansible/hosts, BUT you can create your own host file as well and then reference that host file in your ansible.cfg.

 

Example of ansible.cfg:

[defaults]
inventory = ./your_host_file
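As an alternative to editing ansible.cfg, the same custom inventory can be supplied per run with the -i option mentioned in the docs quoted above (the playbook filename follows the earlier example and is illustrative):

```shell
# Equivalent to setting 'inventory = ./your_host_file' in ansible.cfg,
# but only for this one invocation.
ansible-playbook -i ./your_host_file playbook-name.yaml
```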

 

__WIKKI___62134
Nimbostratus

Hi Payal, I was looking at some Q&A from the Ansible webinar and found that there is a Slack channel to discuss DevOps-related topics on F5.

 

It was mentioned that f5-common-python.slack.com is the group we can join. However, I did not see any invitation link. Can you please let me know how to register for this Slack group?

 

B_Earp
Nimbostratus

Hi Payal. Still zero idea how all these files link together. I hate to sound like a broken record, but would you be able to put the name & path at the top of the file so we can see how they all link together, and also highlight any hidden files - I found a few the last time as you know - otherwise there is no way that you can tell how it works. 🙂

 

I am genuinely interested in following what you are putting down and have been hacking away at it for weeks, but because there is no way of understanding how all the files link together it is an impossible task.

 

I am sure that once we understand how the files all link together, and where the hidden files are that we can be up and running in no time!!

 

/etc/ansible/playbooks/roles/onboarding/tasks/main.yaml

- name: Onboarding BIG-IP
  hosts: bigip
  gather_facts: false
  roles:
    - onboarding    # playbook runs tasks defined in the 'onboarding' role

/etc/ansible/hosts (add at end of file)

[bigip]
10.192.73.218
10.192.73.219
10.192.73.220
10.192.73.221

 

/some/location/or/file/or/something/else

- name: Configure NTP server on BIG-IP
  bigip_device_ntp:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    ntp_servers: "{{ ntp_servers }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage SSHD setting on BIG-IP
  bigip_device_sshd:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    banner: "enabled"
    banner_text: " {{ banner_text }}"
    validate_certs: False
  delegate_to: localhost

- name: Manage BIG-IP DNS settings
  bigip_device_dns:
    server: "{{ inventory_hostname }}"
    user: "{{ username }}"
    password: "{{ password }}"
    name_servers: "{{ dns_servers }}"
    search: "{{ dns_search_domains }}"
    ip_version: "{{ ip_version }}"
    validate_certs: False
  delegate_to: localhost

/some/other/file/that/we/are/not/sure/about

username: admin
password: admin
banner_text: "--------Welcome to Onboarding BIGIP----------"
ntp_servers:
  - '172.27.1.1'
  - '172.27.1.2'

dns_servers:
  - '8.8.8.8'
  - '4.4.4.4'
dns_search_domains:
  - 'local'
  - 'localhost'
ip_version: 4

 

Payal_S_
Cirrus

Alright, let's give this another try and keep it really simple. These are the three files needed to get this to work:

 

1) The host file, which by default is placed at /etc/ansible/hosts (this file will have the IP address of your BIG-IP)

 

[bigip]
10.XX.XX.XX

2) The variable file, in the same directory as your playbook; the values from this variable file are substituted when the playbook is run

 

username: admin
password: admin
banner_text: "--------Welcome to Onboarding BIGIP----------"
hostname: 'ansibleManaged-bigip.local'
ntp_servers:
  - '172.27.1.1'
  - '172.27.1.2'
dns_servers:
  - '8.8.8.8'
  - '4.4.4.4'
dns_search_domains:
  - 'local'
  - 'localhost'
ip_version: 4
bind_servers:
  - '192.168.2.1'
  - '192.168.2.2'
vlan_information:
  - name: 'External'
    tag: '10'
    interface: '1.1'
  - name: 'Internal'
    tag: '11'
    interface: '1.2'
selfip_information:
  - name: 'External-SelfIP'
    address: '10.168.68.5'
    netmask: '255.255.255.0'
    vlan: 'External'
    allow_service: 'default'
  - name: 'Internal-SelfIP'
    address: '192.168.68.5'
    netmask: '255.255.255.0'
    vlan: 'Internal'
    allow_service: 'default'
module_provisioning:
  - name: 'asm'
    level: 'nominal'

3) The playbook

 

- name: Onboarding BIG-IP
  hosts: bigip
  gather_facts: false
  vars_files:
    - var-onboard-network_file.yml
  tasks:
  - name: Configure NTP server on BIG-IP
    bigip_device_ntp:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      ntp_servers: "{{ ntp_servers }}"
      validate_certs: False
    delegate_to: localhost

  - name: Configure BIG-IP hostname
    bigip_hostname:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      validate_certs: False
      hostname: "{{ hostname }}"
    delegate_to: localhost

  - name: Manage SSHD setting on BIG-IP
    bigip_device_sshd:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      banner: "enabled"
      banner_text: " {{ banner_text }}"
      validate_certs: False
    delegate_to: localhost

  - name: Manage BIG-IP DNS settings
    bigip_device_dns:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      name_servers: "{{ dns_servers }}"
      search: "{{ dns_search_domains }}"
      forwarders: "{{ bind_servers }}"
      ip_version: "{{ ip_version }}"
      validate_certs: False
    delegate_to: localhost

  - name: Provision BIG-IP with appropriate modules
    bigip_provision:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      validate_certs: False
      module: "{{ item.name }}"
      level: "{{ item.level }}"
    with_items: "{{ module_provisioning }}"
    tags: provision
    delegate_to: localhost

  - name: Configure VLANs on the BIG-IP
    bigip_vlan:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      validate_certs: False
      name: "{{ item.name }}"
      tag: "{{ item.tag }}"
      tagged_interface: "{{ item.interface }}"
    with_items: "{{ vlan_information }}"
    delegate_to: localhost

  - name: Configure SELF-IPs on the BIG-IP
    bigip_selfip:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      validate_certs: False
      name: "{{ item.name }}"
      address: "{{ item.address }}"
      netmask: "{{ item.netmask }}"
      vlan: "{{ item.vlan }}"
      allow_service: "{{ item.allow_service }}"
    with_items: "{{ selfip_information }}"
    delegate_to: localhost
B_Earp
Nimbostratus

Perfect - thanks very much!!!

 

So after editing /etc/ansible/hosts to include at the bottom:

 

[bigip]
10.XX.XX.XX

[bigip:vars]
username=admin
password=admin

 

The second block of text is saved as var-onboard-network_file.yml, and the 3rd block of text is saved in the same folder as the var-onboard-network_file.yml file and is called whatever_you_like.yml

 

The playbook is then called by running: $ ansible-playbook whatever_you_like.yml

 

For other beginners, see: http://f5-ansible.readthedocs.io/en/devel/modules/list_of_all_modules.html for additional examples

 

So if we look at the bigip_selfip ansible command: http://f5-ansible.readthedocs.io/en/devel/modules/bigip_selfip_module.html

 

For this command there are some settings in the variables file and some information in the playbook. The playbook pulls the items from the variables file.

 

bigip_selfip command in the playbook:

 

- name: Configure SELF-IPs on the BIG-IP
  bigip_selfip:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
      validate_certs: False
      name: "{{ item.name }}"                    # name item selected from variable file
      address: "{{ item.address }}"              # address item selected from variable file
      netmask: "{{ item.netmask }}"              # netmask item selected from variable file
      vlan: "{{ item.vlan }}"                    # vlan item selected from variable file
      allow_service: "{{ item.allow_service }}"  # allow_service item selected from variable file
  with_items: "{{ selfip_information }}"         # variable file items heading = selfip_information
  delegate_to: localhost

items in the variables file:

 

selfip_information:
  - name: 'External-SelfIP'
    address: '10.128.10.101'
    netmask: '255.255.255.0'
    vlan: 'External'
    allow_service: 'none'
  - name: 'Internal-SelfIP'
    address: '10.128.20.101'
    netmask: '255.255.255.0'
    vlan: 'Internal'
    allow_service: 'none'
  - name: 'HA-SelfIP'
    address: '1.1.1.1'
    netmask: '255.255.255.252'
    vlan: 'HA'
    allow_service: 'default'

 

So the Ansible command bigip_selfip has multiple switches such as name, address, and netmask. The values for these are called from the variables file by the playbook using {{ item.XXX }} as the value. The values are under the selfip_information heading in the var file. So taking a selection from the above config:

 

Playbook:

name: "{{ item.name }}"
address: "{{ item.address }}"
with_items: "{{ selfip_information }}"

 

Var file items:

selfip_information:
  - name: 'External-SelfIP'    # equal to {{ item.name }} when called by the playbook
    address: '10.128.10.101'   # equal to {{ item.address }} when called by the playbook

 

See Payal's other awesome example playbooks and their associated var files as well.

 

You also need an up-to-date Ansible, plus the F5 and other Python modules installed, for it to work:

 

  1. Ubuntu - install Ansible:
     $ sudo apt-get update
     $ sudo apt-get install software-properties-common
     $ sudo apt-add-repository ppa:ansible/ansible
     $ sudo apt-get update
     $ sudo apt-get install ansible

  2. Install pip:
     $ sudo apt-get install pip
     then install the below:
     $ pip install f5-sdk bigsuds netaddr deepdiff

     

KernelPanic
Nimbostratus

What are the various software compatibility dependencies for getting Ansible and F5 to work together?

 

Payal_S_
Cirrus

Recommended BIG-IP version: 12+
Recommended Ansible version: 2.4

 

On the Ansible host, the f5-sdk and bigsuds packages need to be installed:

pip install f5-sdk
pip install bigsuds

 

For more information on the software dependencies of a particular module that is under development, please check the specific module in our GitHub repo: https://github.com/F5Networks/f5-ansible/tree/devel/library

 

B_Earp
Nimbostratus

I have created a simple HA pair deployment using an xlsx spreadsheet. No Ansible knowledge required. Just fill in the spreadsheet and execute the playbook.

 

Please see: https://github.com/bwearp/simple-ha-pair

 

Milko_125350
Nimbostratus

Hello team, I'm working with Ansible and LTM v12. Right now we use the admin user to create, delete, or change configuration for virtual servers, pools, nodes, etc. I don't want to use the "admin" user role; I'd like to use the "manager" role, but with that user I cannot get stats for the pool members. Do I always have to use the admin user via Ansible?

 

Thanks in advance!

 

Payal_S_
Cirrus

Hi Milko,

 

You can use any user, but each user role has different privileges assigned to it. Ansible does not impose any limitation on which user you use; it follows the privileges assigned by the BIG-IP for that role: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-user-account-administratio...

 

With the 'Manager' role you should be able to view pool member information. What error are you encountering?

 

Thanks

 

B_Lawrimore_151
Nimbostratus

I am trying to add a new virtual server, or change a configured virtual server, that has multiple iRules attached. I want to dynamically assign the names of the iRules via a variable file. While I don't have a problem looping through a task using external data, I do have a problem attaching multiple iRules in a single loop. As you may know, each time you assign an iRule via Ansible (and the REST API), the operation is a replace, not an append. If I run the task three times with one iRule each time, the last iRule I ran through the task is the only one left configured.

 

I'm not the strongest with Ansible, and I am having a hard time figuring out the best way to assign multiple iRules in one pass of a task using a list. Any suggestions?

 

Payal_S_
Cirrus

Try the following

 

Playbook:

 

- name: Rule mgmt BIG-IP
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - irule_var.yml
  tasks:
  - name: Add iRule
    bigip_irule:
      server: "xx.xx.xxx.xxx"
      user: "admin"
      password: "****"
      module: "ltm"
      name: "{{ item }}"
      content: "{{ lookup('file', '{{item}}') }}"
      state: present
      validate_certs: false
    with_items: "{{ irules }}"

Variable file

 

irules:
  - irule1
  - irule2
  - irule3
B_Lawrimore_151
Nimbostratus

I tried to write a detailed comment earlier today, but it was flagged as spam for some reason. I'll try again and be more succinct.

 

Unfortunately, I did not phrase my question well, but Payal, your comment gave me something to think about. I'm trying to assign multiple iRules to a single virtual server by using a var file. Next, I want to modify multiple virtual servers to have multiple iRules, again using a var file. It seems that each time I use a list of items to assign multiple iRules via Ansible, I get a list of exceptions thrown. However, I can manually assign multiple iRules to a single VS if I don't use a var file. I can also statically assign two iRules and then dynamically assign a third iRule from a var file using a loop. When I change the iRule variable to be anything other than a single-member list, I get a runtime exception.

 

Here's my varfile with a simple dictionary that works with a loop:

 

irule_data:
  - {irule: irule3, vs: vs_test01}

Here is the same varfile that works because it uses a list with one element (I have to change the task code to signify that I want to use item '0' of the embedded list):

 

irule_data:
  - {irule: [ 'irule3' ], vs: vs_test01}

Here's the varfile that fails using the same loop (the dictionary contains a list with two elements):

irule_data:
  - {irule: [ 'irule3', 'irule4' ], vs: vs_test01}

 

I have tried using with_subelements, with_nested, etc., but none of these build the iRule list to mimic the following during execution:

 

irules:
  - irule1
  - irule2
  - irule3
  - irule4

Instead, I get two task runs that look like this:

 

irules:
  - irule1
  - irule2
  - irule3

irules:
  - irule1
  - irule2
  - irule4

My overall problem is that I cannot find a way to successfully build the list of iRules that I want to apply to the VS using a var file, because each type of loop I use causes the task to be run multiple times instead of just once.

 

I have learned a lot from this thread, and I appreciate the attention and support it receives.

 

Thanks

 

Payal_S_
Cirrus

So if I understand this correctly, you want to run a playbook against multiple virtual servers, and each virtual server has multiple iRules.

 

See if this works for you

 

Variable file

 

virtualserver:
  - name: Test1
    ip: "10.192.xx.xx"
    irules:
      - irule1
      - irule2
  - name: Test2
    ip: "10.192.xx.xx"
    irules:
      - irule1
      - irule3

Playbook task:

 

- name: Add VS on BIG-IP
  bigip_virtual_server:
    server: "10.192.xx.xx"
    user: "****"
    password: "****"
    name: "{{ item.name }}"
    destination: "{{ item.ip }}"
    port: 80
    irules: "{{ item.irules }}"
    validate_certs: False
  with_items: "{{ virtualserver }}"
  delegate_to: localhost

Result:

 

changed: => (item={u'irules': [u'irule1', u'irule2'], u'ip': u'10.192.xx.xx', u'name': u'Test1'})
changed: => (item={u'irules': [u'irule1', u'irule3'], u'ip': u'10.192.xx.xx', u'name': u'Test2'})
B_Lawrimore_151
Nimbostratus

You have the question correct. The answer looks correct as well; however, I get an error upon execution:

 

TypeError: unhashable type: 'list'
failed: [10.10.10.10] (item={u'irules': [u'irule1', u'irule2'], u'name': u'vs_test01'}) => {
    "changed": false,
    "item": {
        "irules": ["irule1", "irule2"],
        "name": "vs_test01"
    },
    "rc": 1
}
MSG: MODULE FAILURE
MODULE_STDERR:
Traceback (most recent call last):
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1657, in <module>
    main()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1648, in main
    results = mm.exec_module()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1402, in exec_module
    changed = self.present()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1425, in present
    return self.update()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1436, in update
    if not self.should_update():
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1444, in should_update
    result = self._update_changed_options()
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1474, in _update_changed_options
    change = diff.compare(k)
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1113, in compare
    result = getattr(self, param)
  File "/tmp/ansible_RIMJkD/ansible_module_bigip_virtual_server.py", line 1359, in irules
    if sorted(set(self.want.irules)) != sorted(set(self.have.irules)):
TypeError: unhashable type: 'list'

I'm wondering if I have a bad module or plugin version that is causing my headaches. I've updated f5-sdk, suds, bigsuds, etc. I'm running Ansible 2.5.2 and Python 2.7.5.

 

Having this ability will round out the automation of building a VS, and modifying VSs in bulk. You've been very helpful.

 

Payal_S_
Cirrus

Are you using the module that is packaged with Ansible 2.5? (Just make sure you are not using a local or older copy of the bigip_virtual_server module.)

 

If you are indeed using the latest package bundled with the Ansible 2.5 release, then please open an issue on GitHub at https://github.com/F5Networks/f5-ansible/issues (provide as much detail as you can regarding the issue).

 

If you are using a local or older copy: the module bundled with the Ansible 2.5 version is https://github.com/F5Networks/f5-ansible/blob/stable-2.5/library/bigip_virtual_server.py. Try replacing yours with that and see if it solves your issue.

 

B_Lawrimore_151
Nimbostratus

I got it Payal! My issue was in the irule section of the task. I did this instead of what you wrote:

 

irules:
  - '{{ item.irule }}'

I got to thinking about the error 'unhashable type: list' and thought maybe it's the fact that I'm putting a list in the placeholder of a list item. By moving it up a level as you had written, it started working just as it should. Great work and thank you so much!

 

Now that I have this playbook complete, and a better understanding of how Ansible works, I can ask my devOps team to just complete a templated var file, drop it in a shared folder, and let the playbook run on a schedule thus building the Virtual Servers automatically.

 

Payal_S_
Cirrus

Glad it worked !!!

 

Thanks for sharing your use case. I had a few follow-up questions to help us better understand how customers are using automation with BIG-IP:
- Are the playbooks being used as part of a CI/CD toolchain by your DevOps team?
- What need or scenario requires you to build virtual servers automatically so often?
- Is this automation being used in a production environment?

 

RedRenegade_364
Nimbostratus

Hi Payal,

 

I'm confused by the content presented here...I'm new to F5, so forgive me if this seems like a silly question, but....

 

Wouldn't you need to add routes as part of the onboarding playbook, given that the system needs, at a minimum, a default gateway route? Also, the backend nodes do not reside on the same subnet as the internal VLAN Self IP that would be used as the "Automap" IP for building the backend connection...

 

I have a pair of 4600s that I need to deploy for a project, and I'm trying to integrate Ansible with them for future use. It appears that the bigip_static_route module is required for either the onboarding playbook (preferred) or the web application playbook. Thoughts? Am I missing something?

 

Payal_S_
Cirrus

Hi,

 

You do not need a default gateway if you have L2 connectivity to your external (client traffic) and internal (server traffic) networks, but if your servers are on a different subnet and you need to tell the BIG-IP how to route to that subnet, then yes, you do need to specify static routes. It is good practice to specify a default route regardless.

 

Example of adding a static route:

 

- name: Add route(s)
  bigip_static_route:
    server: "{{ bigip_ip }}"
    user: "{{ bigip_username }}"
    password: "{{ bigip_password }}"
    name: "{{ item.name }}"
    gateway_address: "{{ item.gw_address }}"
    netmask: "{{ item.netmask }}"
    destination: "{{ item.destination }}"
    validate_certs: "no"
  with_items: "{{ static_route }}"

Associated variable file:

static_route:
  - name: "default"
    gw_address: "10.168.68.1"
    destination: "0.0.0.0"
    netmask: "0.0.0.0"

Yes, if your backend servers are not on the same subnet you have two options:
1) Set SNAT to 'Automap' on the virtual server. This changes the source IP address to the internal Self IP address of the BIG-IP, so the return traffic is forced back through the BIG-IP.
2) If you do not want to use Automap, point your backend servers' default route at the internal Self IP address of the BIG-IP.

 

Thanks

 

RedRenegade_364
Nimbostratus

Thanks, Payal.

 

The playbook presented in this article does not meet the requirements you specify for when explicit routes are needed vs. not needed. See below for the applicable snippet (truncated for simplicity in this case):

 

    with_items:
        - name: 'External-SelfIP'
          address: '10.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'External'
          allow_service: 'default'
        - name: 'Internal-SelfIP'
          address: '192.10.10.10'
          netmask: '255.255.255.0'
          vlan: 'Internal'
          allow_service: 'default'

  - name: Add http node to web-pool
    bigip_pool_member:
        description: "HTTP Webserver-1"
        host: "{{ item.host }}"
        name: "{{ item.name }}"
        user: "admin"
        password: "admin"
        pool: "web-pool"
        port: "80"
        server: "{{ inventory_hostname }}"
        validate_certs: False
    with_items:
        - host: "192.168.168.140"
          name: "web01.internal"
        - host: "192.168.68.141"
          name: "web02.internal"

  - name: Create a virtual server
    bigip_virtual_server:
        description: "Secure web application"
        server: "{{ inventory_hostname }}"
        user: "admin"
        password: "admin"
        name: "https_vs"
        destination: "10.10.20.120"
        port: 443
        snat: "Automap"
        all_profiles:
            - http
            - clientssl
        pool: "web-pool"
        validate_certs: False
    delegate_to: localhost

I suppose it's likely that many individuals who visit this site are capable of troubleshooting their way out of this problem (say, if they just copy your playbook for testing purposes) and figuring out that they must either modify the IPs of their nodes and front-end test clients, or add the bigip_static_route module to the playbook to specify the respective gateways for reachability to the clients and servers. With that said, I'd want anything with my name on it to actually work as I say it should. That is only true for this playbook if you place a disclaimer at the beginning of the article stating that this is not a complete BIG-IP configuration, but a subset of the configuration for the purpose of explaining Ansible integration with F5.

 

I've started a list of the required modules for onboarding a pair of appliances in most standard enterprise implementations. While the list is not officially complete, this is likely the minimum configuration needed to provision and initialize the appliances BEFORE any service configuration tasks are performed. Thus far, the only configuration item that I can't find an Ansible module for is changing the AAA strategy, such as configuring TACACS or LDAP authentication. Keep in mind, the goal from my point of view is to make it so the next administrator or engineer has to do less work than I did to bootstrap two appliances and get them up and running with services.

 

Below is the list of modules for onboarding:

 

- bigip_hostname
- bigip_snmp
- bigip_vlan
- bigip_static_route
- bigip_self_ip
- bigip_device_connectivity
- bigip_remote_syslog
- bigip_device_ntp

Thank you for the clarification!

 

satishs_370505
Nimbostratus

Is there an Ansible way to add an iRule to all virtual servers?

 

I found this example on ansible docs

 

- name: Add iRules to the Virtual Server
  bigip_virtual_server:
    server: lb.mydomain.net
    user: admin
    password: secret
    name: my-virtual-server
    irules:
      - irule1
      - irule2
  delegate_to: localhost

Is there a way to do this for all virtual servers?

 

Payal_S_
Cirrus

Hi Satish,

 

You can define in the inventory file which BIG-IP hosts you want to run your playbook against.

 

Example: Inventory file

 

[bigips]
10.1.1.2
10.1.1.3
10.1.1.4

Playbook

 

- name: Onboarding BIG-IP
  hosts: bigips    # This will run the playbook against all hosts under the [bigips] tag in the inventory file
  gather_facts: false
  vars_files:
    - irule_var.yml
  tasks:
  - name: Add iRule
    bigip_irule:
      server: "{{ inventory_hostname }}"
      user: "admin"
      password: "****"
      module: "ltm"
      name: "{{ item }}"
      content: "{{ lookup('file', '{{item}}') }}"
      state: present
      validate_certs: false
    with_items: "{{ irules }}"

iRule variable file (irule_var.yml):

 

irules:
  - name_irule1
  - name_irule2
  - name_irule3
Version history
Last update:
‎05-Apr-2017 06:00