Forum Discussion
Unable to use Ansible playbook to upgrade BIG-IP VE to 15.1.6.1 from 15.1.5.1
Hi Team,
I'm trying to use an Ansible playbook to automate our F5 upgrades.
Current version: 15.1.5.1
New version: 15.1.6.1
I'm using a bash script to dynamically identify the boot volume, create it if it isn't available, and then install the image and reboot.
The bash script doesn't seem to work via Ansible, but it works when run directly on the F5.
bash script:
#!/bin/bash
OLDIFS="$IFS"
IFS=$'\n'
# First four characters of the first software volume entry, e.g. "HD1."
disk=$(/bin/tmsh show sys sof status | awk '/.D[1-9]/{print substr($1,1,4)}' | head -n1)
maxvnumber=0
# Find the highest volume number among completed installs
for vnumber in $(/bin/tmsh show sys sof status | grep complete)
do
    vnumber=${vnumber:4:2}
    vnumber=${vnumber// /}
    if (( vnumber > maxvnumber )); then
        maxvnumber=$vnumber
    fi
done
# Build the target volume name, e.g. "HD1.5"
volume=$disk$((maxvnumber + 3))
echo -n $volume
IFS="$OLDIFS"
I have tried both tmsh and /bin/tmsh; both seem to fail:
"vol": {
"changed": true,
"failed": false,
"rc": 0,
"stderr": "/home/user/.ansible/tmp/ansible-local-61301srclvb76/ansible-tmp-1665415837.2614715-24184-40420301589164/cal_vol.sh: line 4: tmsh: command not found\n/home/user/.ansible/tmp/ansible-local-61301srclvb76/ansible-tmp-1665415837.2614715-24184-40420301589164/cal_vol.sh: line 16: tmsh: command not found\n",
"stderr_lines": [
"/home/user/.ansible/tmp/ansible-local-61301srclvb76/ansible-tmp-1665415837.2614715-24184-40420301589164/cal_vol.sh: line 4: tmsh: command not found",
"/home/user/.ansible/tmp/ansible-local-61301srclvb76/ansible-tmp-1665415837.2614715-24184-40420301589164/cal_vol.sh: line 16: tmsh: command not found"
],
"stdout": "3",
"stdout_lines": [
"3"
]
}
}
I have also tried logging in with the root account, and it still throws the same error:
- name: Upload image
  bigip_software_image:
    provider: "{{ f5_provider }}"
    image: "{{ new_image_dir }}/{{ new_image }}"

- name: Get available volume number to use
  script: cal_vol.sh
  register: vol

- debug:
    var: vol

- name: Install Image and reboot
  bigip_software_install:
    provider: "{{ f5_provider }}"
    image: "{{ new_image }}"
    state: activated
    volume: "HD1.{{ vol.stdout }}"
  async: 45
  poll: 0
  any_errors_fatal: true
  when: wants_upgrade

- name: Group 1 Upgrades
  block:
    - ansible.builtin.import_tasks: checks.yaml
      vars:
        stage: "pre"
      when: check_virts or check_pools or check_ver

- name: Pausing execution to give device time to reboot (first time)
  pause:
    minutes: 10
  when: wants_upgrade

- name: wait for ssh to come up
  wait_for_connection:
    connect_timeout: 120
    sleep: 5
    delay: 5
    timeout: 300
  when: wants_upgrade
Could someone assist us in resolving the issue?
We have close to 200 F5s and would prefer not to upgrade them manually.
- Matt_MabisEmployee
Just for anyone else who reads this: the issue was local execution
connection: local
This causes the system to SSH to itself, and there is no tmsh on the Ansible host. Removing this line and adding delegate_to: localhost when calling a BIG-IP module solved the issue; it allows the playbook to execute the script remotely on the BIG-IP via SSH.
After removing it and making a few code tweaks, we were able to run the playbook.
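A minimal sketch of what that looks like (host group, variables, and file paths are taken from the snippets in this thread and are only illustrative):

---
- name: Upgrade BIG-IP software
  hosts: bigip_hosts          # note: no "connection: local" on the play
  gather_facts: False
  tasks:
    # Runs on the BIG-IP over SSH, so tmsh is available
    - name: Get available volume number to use
      ansible.builtin.script: files/cal_vol.sh
      register: vol

    # BIG-IP modules are delegated to the controller and talk to the device API
    - name: Upload image
      f5networks.f5_modules.bigip_software_image:
        provider: "{{ f5_provider }}"
        image: "{{ new_image_dir }}/{{ new_image }}"
      delegate_to: localhost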
- vkrishna91Nimbostratus
Thank you Matt for assisting.
- Matt_MabisEmployee
Hey vkrishna91,
So I just tested the code on a 17.x box and it seemed to work. I'll have to deploy a 15.1.x box like yours to confirm it still works there, but this is what I have.
---
- name: Upgrade BIG-IP software
  hosts: bigip_hosts
  gather_facts: False
  vars_files:
    - vars/vars.yml
  vars:
    provider:
      password: "{{ f5_pass }}"
      server: "{{ ansible_host }}"
      user: "{{ f5_user }}"
      validate_certs: False
  tasks:
    - name: Get available volume number to use
      ansible.builtin.script: "{{ playbook_dir }}/files/cal_vol.sh"
      register: vol

    - debug:
        var: vol
Directory Layout
[root@DS9 test]# ls -AlFh
total 8.0K
-rw-r--r--. 1 root root 262 Oct 11 14:35 ansible.cfg
drwxr-xr-x. 2 root root  24 Oct 11 14:44 files/
drwxr-xr-x. 2 root root  27 Oct 11 14:45 inventory/
-rw-r--r--. 1 root root 435 Oct 11 14:45 upgrade.yaml
drwxr-xr-x. 2 root root  22 Oct 11 14:39 vars/
[root@DS9 test]# ls -AlFh files/
total 4.0K
-rw-r--r--. 1 root root 382 Oct 11 14:43 cal_vol.sh
[root@DS9 test]# cat inventory/inventory.yml
[bigip_hosts]
test-bip ansible_host=10.192.1.10 ansible_user=root ansible_password=xxxxxxxxx
[root@DS9 test]# cat vars/vars.yml
---
###F5_ENV
#BIG-IP
f5_user: admin
f5_pass: "xxxxxxxxx"
f5_admin_port: 443
[root@DS9 test]# cat files/cal_vol.sh
#!/bin/bash
OLDIFS="$IFS"
IFS=$'\n'
disk=$(/bin/tmsh show sys sof status | awk '/.D[1-9]/{print substr($1,1,4)}' | head -n1)
maxvnumber=0
for vnumber in $(/bin/tmsh show sys sof status | grep complete)
do
    vnumber=${vnumber:4:2}
    vnumber=${vnumber// /}
    if (( vnumber > maxvnumber )); then
        maxvnumber=$vnumber
    fi
done
volume=$disk$((maxvnumber + 3))
echo -n $volume
IFS="$OLDIFS"
[root@DS9 test]# cat ansible.cfg
[defaults]
host_key_checking = False
library = library:/usr/share/ansible/plugins/modules
module_utils = module_utils:/usr/share/ansible/plugins/modules/ansible-for-nsxt/module_utils
ansible_python_interpreter=/usr/bin/python3
inventory = inventory/inventory.yml
Execution of the Code
[root@DS9 test]# ansible-playbook upgrade.yaml

PLAY [Upgrade BIG-IP software] ************************************************

TASK [Get available volume number to use] *************************************
changed: [test-bip]

TASK [debug] ******************************************************************
ok: [test-bip] => {
    "vol": {
        "changed": true,
        "failed": false,
        "rc": 0,
        "stderr": "Shared connection to 10.192.1.10 closed.\r\n",
        "stderr_lines": [
            "Shared connection to 10.192.1.10 closed."
        ],
        "stdout": "HD1.5",
        "stdout_lines": [
            "HD1.5"
        ]
    }
}

PLAY RECAP ********************************************************************
test-bip : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Here is my ansible build info
[root@DS9 test]# ansible --vers
ansible [core 2.12.4]
  config file = /git/test/ansible.cfg
  configured module search path = ['/git/test/library', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.9.6 (default, Nov 9 2021, 13:31:27) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
  jinja version = 3.1.1
  libyaml = True
In your code you won't need the "HD1.{{ vol.stdout }}" prefix; you can just use "{{ vol.stdout }}", since the script already returns the full volume name (e.g. HD1.5).
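So the install task would look something like this (a sketch reusing the f5_provider and new_image variables from your playbook):

- name: Install Image and reboot
  bigip_software_install:
    provider: "{{ f5_provider }}"
    image: "{{ new_image }}"
    state: activated
    volume: "{{ vol.stdout }}"   # the script output is already e.g. "HD1.5"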
Let me try a 15.x build, but I don't think there will be a difference...
- vkrishna91Nimbostratus
Hi Matt_Mabis,
Thank you for testing. Sure, I will also compare the configuration with my playbook and see if something stands out.
- Leslie_HubertusRet. Employee
Hi vkrishna91, your post accidentally got caught in our automated spam filter overnight, but I've just released it. I'm also looking for a colleague to take a look so we can get your upgrade going, though KeesvandenBos and several other community members may be able to answer more quickly.
- vkrishna91Nimbostratus
Thanks Leslie for the assistance.
It would be helpful to get some guidance on the issue.
- Leslie_HubertusRet. Employee
Yeah, I'm not technical, but I know a few folks who are. There's zero reason you shouldn't be able to make it work with a little assist. 🙂
- Matt_MabisEmployee
Just tested with 15.1.5.1
[root@DS9 test]# ansible-playbook upgrade.yaml

PLAY [Upgrade BIG-IP software] ************************************************

TASK [Get available volume number to use] *************************************
changed: [test-bip]

TASK [debug] ******************************************************************
ok: [test-bip] => {
    "vol": {
        "changed": true,
        "failed": false,
        "rc": 0,
        "stderr": "Shared connection to 10.192.1.199 closed.\r\n",
        "stderr_lines": [
            "Shared connection to 10.192.1.199 closed."
        ],
        "stdout": "HD1.4",
        "stdout_lines": [
            "HD1.4"
        ]
    }
}

TASK [COLLECT BIG-IP FACTS] ***************************************************
ok: [test-bip -> localhost]

TASK [debug] ******************************************************************
ok: [test-bip] => {
    "device_facts['system_info']['product_version']": "15.1.5.1"
}

PLAY RECAP ********************************************************************
test-bip : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
the upgrade.yaml
---
- name: Upgrade BIG-IP software
  hosts: bigip_hosts
  gather_facts: False
  vars_files:
    - vars/vars.yml
  vars:
    provider:
      password: "{{ f5_pass }}"
      server: "{{ ansible_host }}"
      user: "{{ f5_user }}"
      validate_certs: False
  tasks:
    - name: Get available volume number to use
      ansible.builtin.script: "{{ playbook_dir }}/files/cal_vol.sh"
      register: vol

    - debug:
        var: vol

    - name: COLLECT BIG-IP FACTS
      f5networks.f5_modules.bigip_device_info:
        provider: "{{ provider }}"
        gather_subset:
          - system-info
      register: device_facts
      delegate_to: localhost

    - debug:
        var: device_facts['system_info']['product_version']
- vkrishna91Nimbostratus
Thank you for testing it Matt_Mabis.
I will test it on an F5 to check if it works. Would it be possible for you to share the part of the playbook that actually installs the image?
This is what I have configured:

- name: Install Image and reboot
  bigip_software_install:
    provider: "{{ f5_provider }}"
    image: "{{ new_image }}"
    state: activated
    volume: "{{ vol.stdout }}"
  async: 45
  poll: 0
  any_errors_fatal: true

- name: Group 1 Upgrades
  block:
    - ansible.builtin.import_tasks: checks.yaml
      vars:
        stage: "pre"
      when: check_virts or check_pools or check_ver

- name: Pausing execution to give device time to reboot (first time)
  pause:
    minutes: 10
- vkrishna91Nimbostratus
It still keeps sending me the following error:
ok: [m4s-pl-lben-d01] => {
    "vol": {
        "changed": true,
        "failed": false,
        "rc": 0,
        "stderr": "/home/user/.ansible/tmp/ansible-local-53436ousfriqb/ansible-tmp-1665677333.4435208-10213-19546309159604/cal_vol.sh: line 4: tmsh: command not found\n/home/user/.ansible/tmp/ansible-local-53436ousfriqb/ansible-tmp-1665677333.4435208-10213-19546309159604/cal_vol.sh: line 16: tmsh: command not found\n",
        "stderr_lines": [
            "/home/user/.ansible/tmp/ansible-local-53436ousfriqb/ansible-tmp-1665677333.4435208-10213-19546309159604/cal_vol.sh: line 4: tmsh: command not found",
            "/home/user/.ansible/tmp/ansible-local-53436ousfriqb/ansible-tmp-1665677333.4435208-10213-19546309159604/cal_vol.sh: line 16: tmsh: command not found"
        ],
        "stdout": "2",
        "stdout_lines": [
            "2"
        ]
    }
}
I'm running the playbook as root. Would it be possible to post the complete playbook?
- vkrishna91Nimbostratus
Whether I use my admin account or root, it gives the same error message.
- vkrishna91Nimbostratus
ansible 2.10.10
python version = 3.9.5 (default, May 26 2021, 13:42:02) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
The version that we are using is a little different, though.
- Matt_MabisEmployee
Let me work on the rest of the code. I never went through it all; I just tested the section you were having an issue with (the execution of the script). I can test my dummy VM here in a few hours and let you know about the upgrade path.
One thing I would check is your Ansible version, but also your Galaxy collections version. Mine is below; I should probably update to the latest to verify there are no bugs, though.
[dasaint@DS9 ~]$ ansible-galaxy collection list | grep -i f5
f5networks.f5_modules 1.15.0
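If yours is older, or you just want to rule the collection out, refreshing it is quick (a sketch; --force simply reinstalls the latest published version, and newer ansible-core also accepts --upgrade):

# Reinstall the latest f5networks.f5_modules into the default collections path
ansible-galaxy collection install f5networks.f5_modules --force
# Confirm which version is now picked up
ansible-galaxy collection list | grep -i f5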
- vkrishna91Nimbostratus
Understood.
I'm seeing the following:
ansible-galaxy collection list |grep -i f5
f5networks.f5_modules 1.9.1
f5networks.f5_modules 1.19.0
- Matt_MabisEmployee
So here is the code. When I was testing, it was an unlicensed VE, so it failed at the "Wait for all devices to be healthy before proceeding" section because the command it runs shows "No License"; on a licensed system it should work fine. It does get to the point where it installs and reboots the VE, so give it a look and see how it goes.
My code also takes a UCS archive at the beginning and end of the run to ensure you have backups before and after the upgrade. I also upgraded the collection to the latest to ensure I was running the latest code.
upgrade.yaml
---
- name: Upgrade BIG-IP software
  hosts: bigip_hosts
  gather_facts: False
  vars_files:
    - vars/vars.yml
  vars:
    provider:
      password: "{{ f5_pass }}"
      server: "{{ ansible_host }}"
      user: "{{ f5_user }}"
      validate_certs: False
    new_image_dir: "/mnt/apps/isos/VMware/Appliances/F5 Networks/15.x"
    new_image: "BIGIP-15.1.6-0.0.8.iso"
    backup_loc: "{{ playbook_dir }}/backups"
    backup_pfx: "10-13-2022_"
  tasks:
    - name: Get available volume number to use
      ansible.builtin.script: "{{ playbook_dir }}/files/cal_vol.sh"
      register: vol

    - debug:
        var: vol

    - name: Get Software Volume Information
      f5networks.f5_modules.bigip_device_info:
        gather_subset:
          - software-volumes
        provider: "{{ provider }}"
      register: sv
      delegate_to: localhost

    - name: Get Current Version
      set_fact:
        current_version: "{{ item.version }}"
        current_boot_loc: "{{ item.name }}"
      when: item.active == "yes"
      with_items: "{{ sv.software_volumes }}"

    - name: Identify Hosts That Require Upgrade
      set_fact:
        wants_upgrade: True
      when: not new_image.split("-")[1] == current_version

    - name: Identify Hosts That Don't Require Upgrade
      set_fact:
        wants_upgrade: False
      when: new_image.split("-")[1] == current_version

    - name: Only Upgrading Devices Which Need It
      block:
        - name: Check For Only One Boot Location
          set_fact:
            dest_boot_loc: "{{ vol.stdout }}"
          when: (not dest_boot_loc is defined) and (sv.software_volumes|length == 1)

        - name: Check First Boot Location
          set_fact:
            dest_boot_loc: "{{ sv.software_volumes.0.name }}"
          when: (not dest_boot_loc is defined) and (sv.software_volumes.0.active != "yes")

        - name: Check Second Boot Location
          set_fact:
            dest_boot_loc: "{{ sv.software_volumes.1.name }}"
          when: (not dest_boot_loc is defined) and (sv.software_volumes.1.active != "yes")
      when: wants_upgrade

    - name: Device Version Status
      debug:
        msg:
          - "Current version: {{ current_version }}"
          - "Desired image: {{ new_image }}"
          - "Upgrade needed: {{ wants_upgrade }}"

    - name: Print Upgrade Information
      debug:
        msg:
          - "Current version: {{ current_version }} booting from {{ current_boot_loc }}"
          - "New Image '{{ new_image }}' will be uploaded from '{{ new_image_dir }}'"
          - "It will be installed to boot location '{{ dest_boot_loc }}'"
      when: wants_upgrade

    - name: Wait For Confirmation
      pause:
        prompt: "Press a key to continue..."

    - name: Save the running configuration of the BIG-IP
      f5networks.f5_modules.bigip_config:
        provider: "{{ provider }}"
        save: yes
      when: wants_upgrade
      delegate_to: localhost

    - name: Ensure backup directory exists
      file:
        path: "{{ backup_loc }}/{{ inventory_hostname_short }}"
        state: directory
      delegate_to: localhost

    - name: Get Pre-Upgrade UCS Backup
      f5networks.f5_modules.bigip_ucs_fetch:
        create_on_missing: yes
        src: "{{ backup_pfx }}_pre-upgrade.ucs"
        dest: "{{ backup_loc }}/{{ inventory_hostname_short }}/{{ backup_pfx }}_pre-upgrade.ucs"
        provider: "{{ provider }}"
      when: wants_upgrade
      delegate_to: localhost

    - name: Upload image
      f5networks.f5_modules.bigip_software_image:
        provider: "{{ provider }}"
        image: "{{ new_image_dir }}/{{ new_image }}"
      when: wants_upgrade
      delegate_to: localhost

    - name: Group 1 Activate Image (Will Cause Reboot)
      f5networks.f5_modules.bigip_software_install:
        provider: "{{ provider }}"
        image: "{{ new_image }}"
        state: activated
        volume: "{{ vol.stdout }}"
      when: (reboot_group == 1) and (wants_upgrade)
      delegate_to: localhost

    - name: Wait for all devices to be healthy before proceeding
      f5networks.f5_modules.bigip_command:
        provider: "{{ provider }}"
        match: "any"
        warn: no
        commands:
          - bash -c "cat /var/prompt/ps1"
        wait_for:
          - result[0] contains Active
          - result[0] contains Standby
        retries: 12
        interval: 10
      register: result
      any_errors_fatal: true
      when: wants_upgrade
      delegate_to: localhost

    - name: Group 2 Activate Image (Will Cause Reboot)
      f5networks.f5_modules.bigip_software_install:
        provider: "{{ provider }}"
        image: "{{ new_image }}"
        state: activated
        volume: "{{ dest_boot_loc }}"
      when: (reboot_group == 2) and (wants_upgrade)
      # any_errors_fatal: true
      delegate_to: localhost

    - name: Get Post-Upgrade UCS Backup
      f5networks.f5_modules.bigip_ucs_fetch:
        create_on_missing: yes
        src: "{{ backup_pfx }}_post-upgrade.ucs"
        dest: "{{ backup_loc }}/{{ inventory_hostname_short }}/{{ backup_pfx }}_post-upgrade.ucs"
        provider: "{{ provider }}"
      when: wants_upgrade
      delegate_to: localhost
vars/vars.yml
---
###F5_ENV
#BIG-IP
f5_user: admin
f5_pass: "*******"
f5_admin_port: 443
files/cal_vol.sh
#!/bin/bash
OLDIFS="$IFS"
IFS=$'\n'
disk=$(/bin/tmsh show sys sof status | awk '/.D[1-9]/{print substr($1,1,4)}' | head -n1)
maxvnumber=0
for vnumber in $(/bin/tmsh show sys sof status | grep complete)
do
    vnumber=${vnumber:4:2}
    vnumber=${vnumber// /}
    if (( vnumber > maxvnumber )); then
        maxvnumber=$vnumber
    fi
done
volume=$disk$((maxvnumber + 3))
echo -n $volume
IFS="$OLDIFS"
inventory/inventory.yml
[bigip_hosts]
test-bip ansible_host=xxx.xxx.xxx.xxx ansible_user=root ansible_password=******* reboot_group=1
- vkrishna91Nimbostratus
I updated the configuration.
Now I'm noticing the following error message:
FAILED! => {"changed": false, "msg": "01070945:3: Invalid volume name (2)"}
- Matt_MabisEmployee
Do you have a debug of the vol variable (the volume) like I do in my playbook? If so, what is the output? To me it sounds like it's not running that script.
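A quick sanity check you could drop in right after the script task (just a sketch; the assert only confirms the script returned a real volume name like HD1.4 rather than a bare number, which is what you get when it runs on the Ansible host instead of the BIG-IP):

- name: Show what the volume script returned
  debug:
    var: vol.stdout

- name: Fail early if the output does not look like a volume name
  assert:
    that:
      - vol.stdout is match("^[A-Z]+[0-9]+\\.[0-9]+$")
    fail_msg: "cal_vol.sh returned '{{ vol.stdout }}' - it probably executed locally, not on the BIG-IP"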