Getting started with Ansible
Ansible is an orchestration and automation engine. It provides a means for you to automate the administration of different devices, from Linux to Windows and different special-purpose appliances in-between.
Updated Jun 06, 2023 · Version 2.0 · Tim_Rupp
KernelPanic · Jan 12, 2019
Tim, I'm having trouble with the code in K10531487: Running Ansible tasks on the active BIG-IP in a device group. The same build and variables work for my other F5 Ansible plays and roles. I've been debugging this all day. Please help.
This is the playbook:
---
- name: "Syncing F5 Active config to group"
  hosts: "drhaf5"
  serial: 1
  vars_files:
    - "vars/main.yml"
    - "vars/vault.yml"
  gather_facts: "no"
  roles:
    - "f5syncactive"

  tasks:
    - name: "Get bigip facts"
      bigip_facts:
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        include:
          - "device"
          - "system_info"
        validate_certs: False
      check_mode: no
      delegate_to: "localhost"

    - name: "Display bigip facts {{inventory_hostname}}"
      debug:
        msg:
          - "Hostname: {{ system_info.system_information.host_name }}"
          - "Status: {{ device['/Common/' + system_info.system_information.host_name].failover_state }}"

    - name: "Create pool"
      bigip_pool:
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        lb_method: "round-robin"
        monitors: http
        name: "pool1"
        validate_certs: False
      notify:
        - "Save the running configuration to disk"
        - "Sync configuration from device to group"
      delegate_to: "localhost"
      when: device['/Common/' + system_info.system_information.host_name].failover_state == "HA_STATE_ACTIVE"

  handlers:
    - name: "Save the running {{inventory_hostname}} configuration to disk"
      bigip_config:
        save: "yes"
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        validate_certs: False
      delegate_to: localhost

    - name: "Handler Sync configuration from {{inventory_hostname}} to group"
      bigip_configsync_action:
        device_group: "sync-failover-group"
        sync_device_to_group: "yes"
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        validate_certs: False
      delegate_to: localhost
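A side note while reading the playbook above: Ansible matches notify strings against handler names (or listen topics) literally, and the strings notified by the Create pool task do not match the templated handler names, so the save and sync handlers would not fire even once the connection error is solved. Below is a minimal sketch of one way to keep the templated handler names and still trigger them, using listen with a made-up topic name; it replaces only the corresponding sections of the play and reuses the same modules and variables:

    - name: "Create pool"
      bigip_pool:
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        lb_method: "round-robin"
        monitors: http
        name: "pool1"
        validate_certs: False
      delegate_to: "localhost"
      # Notify a topic instead of a handler name; the topic string here is illustrative.
      notify: "save and sync active config"

  handlers:
    - name: "Save the running {{inventory_hostname}} configuration to disk"
      # Any task notifying this topic triggers the handler, regardless of its templated name.
      listen: "save and sync active config"
      bigip_config:
        save: "yes"
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        validate_certs: False
      delegate_to: localhost

    - name: "Handler Sync configuration from {{inventory_hostname}} to group"
      listen: "save and sync active config"
      bigip_configsync_action:
        device_group: "sync-failover-group"
        sync_device_to_group: "yes"
        server: "{{inventory_hostname}}"
        user: "admin"
        password: "{{adminpass}}"
        validate_certs: False
      delegate_to: localhost

Handlers still run in the order they are defined, so the save-to-disk handler fires before the config-sync handler.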
When the play runs on the "a" box, which is standby, it gathers facts and skips the Create pool task, as expected:
TASK [Display bigip facts f5am.express-scripts.com] ******************************************
ok: [f5am.express-scripts.com] => {}
MSG:
[u'Hostname: f5am.express-scripts.com', u'Status: HA_STATE_STANDBY']
TASK [Create pool] ************************************************************************************
skipping: [f5am.express-scripts.com] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}
PLAY [Syncing F5 Active config to group] **************************************************************
TASK [Get bigip facts] ********************************************************************************
ok: [f5bm.express-scripts.com -> localhost] => {
    "ansible_facts": {
        "device": {
            "/Common/f5am.express-scripts.com": {
But when it runs on the "b" box, which is active, the Create pool task fails with "Unexpected **kwargs: {'verify': False}". I have verified that the admin password is the same on the a and b boxes, and Ansible is able to gather facts from this same host in the task above.
TASK [Display bigip facts f5bm.express-scripts.com] ******************************************
ok: [f5bm.express-scripts.com] => {}
MSG:
[u'Hostname: f5bm.express-scripts.com', u'Status: HA_STATE_ACTIVE']
TASK [Create pool] ************************************************************************************
fatal: [f5bm.express-scripts.com -> localhost]: FAILED! => {
    "changed": false
}
MSG:
Unable to connect to f5bm.express-scripts.com on port 443. The reported error was "Unexpected **kwargs: {'verify': False}".
to retry, use: --limit @/home/eh7305/scripts/ansible/f5tst.retry
PLAY RECAP ********************************************************************************************
f5am.express-scripts.com : ok=2 changed=0 unreachable=0 failed=0
f5bm.express-scripts.com : ok=2 changed=0 unreachable=0 failed=1
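For anyone who hits the same failure: the "Unexpected **kwargs: {'verify': False}" wording looks like a client-side Python error raised while the module builds its connection on the control node, rather than a response from the BIG-IP itself. One way to separate a genuine port 443/TLS reachability problem from a Python library problem is to make a plain iControl REST call with Ansible's uri module. A minimal sketch, reusing the same inventory host and adminpass variable as the playbook above; the /mgmt/tm/sys/version endpoint is just one example of an authenticated request:

    - name: "Check iControl REST reachability from the control node"
      uri:
        url: "https://{{ inventory_hostname }}/mgmt/tm/sys/version"
        method: GET
        user: "admin"
        password: "{{ adminpass }}"
        force_basic_auth: yes
        validate_certs: no
      delegate_to: localhost

If that task succeeds against both units, the connection error above is more likely a mismatch between the BIG-IP modules and the Python libraries they load (bigsuds or f5-sdk, depending on module and Ansible version) than anything on the BIG-IPs themselves.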