Ansible
Ansible playbook run tasks only on Active LTM member
Problem this snippet solves: This is an example of a simple Ansible playbook that can be run against a pair of F5 devices and will only run selected tasks if the F5 is in an active state. This is done using the block and when statements within the playbook ('block' requires Ansible 2.5 or above). In this example it sets the hostname of each F5 and, if the failover state is active, creates three test nodes and a test pool, then adds the nodes as pool members, all under the test partition. NOTE: This playbook prompts for the F5 username and password used to connect to the F5 devices; these would normally be set in another file or pulled from something like HashiCorp Vault (a minimal vars-file alternative is sketched after the playbook below).

How to use this snippet:

Ansible hosts inventory example inventory/hosts:

[F5DeviceGroup]
f5vm01.lab.domain.local
f5vm02.lab.domain.local

Assuming the hosts file is located locally within a directory named inventory and the Ansible playbook is named f5TestPool.yml, you can run the example using the following command:

ansible-playbook -i inventory f5TestPool.yml

Example output:

F5 Username:
F5 Password:

PLAY [Run tasks on Active LTM] *************************************************

TASK [Set hostname] ************************************************************
ok: [f5vm01.lab.domain.local -> localhost]
ok: [f5vm02.lab.domain.local -> localhost]

TASK [Get BIG-IP failover status] **********************************************
ok: [f5vm01.lab.domain.local -> localhost]
ok: [f5vm02.lab.domain.local -> localhost]

TASK [The active LTMs management IP is....] ************************************
ok: [f5vm01.lab.domain.local] => {
    "inventory_hostname": "f5vm01.lab.domain.local"
}
skipping: [f5vm02.lab.domain.local]

TASK [Add pool test_pool] ******************************************************
ok: [f5vm01.lab.domain.local -> localhost]
skipping: [f5vm02.lab.domain.local]

TASK [Add node [{u'name': u'test01', u'address': u'8.8.8.8'}, {u'name': u'test02', u'address': u'8.8.4.4'}, {u'name': u'test03', u'address': u'8.8.1.1'}]] ***
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test03', u'address': u'8.8.1.1'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test03', u'address': u'8.8.1.1'})

TASK [Add pool member [{u'name': u'test01', u'address': u'8.8.8.8'}, {u'name': u'test02', u'address': u'8.8.4.4'}, {u'name': u'test03', u'address': u'8.8.1.1'}] to Pool test_pool] ***
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test03', u'address': u'8.8.1.1'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test03', u'address': u'8.8.1.1'})

PLAY RECAP *********************************************************************
f5vm01.lab.domain.local    : ok=6    changed=0    unreachable=0    failed=0
f5vm02.lab.domain.local    : ok=2    changed=0    unreachable=0    failed=0

Code :

---
# Playbook 'f5TestPool.yml'
- name: Run tasks on Active LTM
  hosts: F5DeviceGroup
  connection: local
  gather_facts: False

  vars_prompt:
    - name: f5User
      prompt: F5 Username
    - name: f5Pwd
      prompt: F5 Password

  vars:
    f5Provider:
      server: "{{ inventory_hostname }}"
      server_port: 443
      user: "{{ f5User }}"
      password: "{{ f5Pwd }}"
      validate_certs: no
      transport: rest
    nodelist:
      - {name: 'test01', address: "8.8.8.8"}
      - {name: 'test02', address: "8.8.4.4"}
      - {name: 'test03', address: "8.8.1.1"}

  tasks:
    - name: Set hostname
      bigip_hostname:
        provider: "{{ f5Provider }}"
        hostname: "{{ inventory_hostname }}"
      delegate_to: localhost

    - name: Get BIG-IP failover status
      bigip_command:
        provider: "{{ f5Provider }}"
        commands:
          - "tmsh show sys failover"
      delegate_to: localhost
      register: failoverStatus

    - name: Executing on ACTIVE F5 LTM
      block:
        - name: The active LTMs management IP is....
          debug:
            var: inventory_hostname

        - name: Add pool test_pool
          bigip_pool:
            provider: "{{ f5Provider }}"
            description: "Test pool set by Ansible run by {{ f5User }}"
            lb_method: least-connections-member
            name: test_pool
            partition: test
            monitor_type: single
            monitors:
              - /Common/gateway_icmp
            priority_group_activation: 0
          delegate_to: localhost

        - name: "Add node {{ nodelist }}"
          bigip_node:
            provider: "{{ f5Provider }}"
            partition: test
            address: "{{ item.address }}"
            name: "{{ item.name }}"
          loop: "{{ nodelist }}"
          delegate_to: localhost

        - name: "Add pool member {{ nodelist }} to Pool test_pool"
          bigip_pool_member:
            provider: "{{ f5Provider }}"
            partition: test
            pool: test_pool
            address: "{{ item.address }}"
            name: "{{ item.name }}"
            port: 53
          loop: "{{ nodelist }}"
          delegate_to: localhost
      when: "'active' in failoverStatus['stdout'][0]"

Tested this on version: 12.1
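The playbook above prompts for the credentials interactively; as the note in the description says, in practice they would come from another file or a secrets store. One minimal alternative sketch, where the file name and its use are illustrative and not part of the original playbook, is to keep the credentials in a separate vars file (optionally encrypted with ansible-vault) and load it instead of vars_prompt:

# creds.yml - could be encrypted with: ansible-vault encrypt creds.yml
f5User: admin
f5Pwd: admin

# In f5TestPool.yml, replace the vars_prompt section with:
  vars_files:
    - creds.yml

If the vars file is encrypted, the playbook would then be run with: ansible-playbook -i inventory f5TestPool.yml --ask-vault-pass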
F5 Archiver Ansible Playbook
Problem this snippet solves: Centralized, scheduled archiving (backups) of F5 BIG-IP devices is a pain; however, in the new world of Infrastructure as Code (IaC) and Super-NetOps, tools like Ansible can provide the answer. I have a playbook I have been working on to allow me to back up off-box quickly. UCS files are saved to a folder named tmp under the local project folder; this can be changed by editing the following line in the f5Archiver.yml file:

dest: "tmp/{{ inventory_hostname }}-{{ date['stdout'] }}.ucs"

The playbook can be run from a laptop on demand, via a scheduler (like cron), or as part of a CI/CD pipeline.

How to use this snippet:

F5 Archiver Ansible Playbook Gitlab: StrataLabs: AnsibleF5Archiver

Overview
This Ansible playbook takes a list of F5 devices from a hosts file located within the inventory directory, creates a UCS archive on each device, and copies it locally into the 'tmp' directory.

Requirements
This Ansible playbook requires the following:
* ansible >= 2.5
* python module f5-sdk
* F5 BIG-IP running TMOS >= 12

Usage
Run using the ansible-playbook command with the -i option to use the inventory directory instead of the default inventory host file.

NOTE: The F5 username and password are not set in the playbook and so need to be passed in as extra variables using the --extra-vars option; the variables are f5User for the username and f5Pwd for the password. The examples below use the default admin:admin.

To check the playbook before using it, run the following commands:

ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml --syntax-check
ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml --check

Once happy, run the following to execute the playbook:

ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml

Tested this on version: 12.1
Checksums for F5 Ansible modules
Problem this snippet solves: Checksums for F5 Ansible Modules F5 Networks provides checksums for all of our Ansible modules. See http://docs.ansible.com/ansible/latest/list_of_network_modules.html#f5 for information on individual modules. You can get a checksum for a particular module by running a command such as sha512sum <path_to_module> on a Linux system. Note that these modules come as a part of Ansible, for guidance on finding the path to a particular module, see https://f5-ansible.readthedocs.io/en/devel/usage/installing-modules.html. You can compare the checksum produced by that command against the following list. Release 2.4.1 bigip_command.py 3ad06398943f249acc818a8907933f168811b045a0c5df328ebd8f59878651ffbca8a6904cc512d9f00bb527363a31699987bdb37cb3ca1f899997834e633618 bigip_config.py 9c121b0844bfa4de552c22ce0dca14a1faf9c8f7168066a366fdbba895febf9decdf9320e01fcc024d65eb995d5025f2430ad3fbb885a6fe87a04d9c17cfc279 bigip_configsync_actions.py d2c92cd7674f664e1efa235af196996cfd285cf07baf6669b3bf762bcaf4e16bbcadd54e4da52b06ddd73e80e26b9229092cceac6ab7178f70edebf1df80a6ff bigip_device_dns.py d0e3ef486ec6d521c0a967714c14c6f390b788d077470257e5fbeb435a7c752c718062afdfe5e1745a39117d10749e64c1b5f5db62deb0d399617290c892d1cd bigip_device_ntp.py c620001a9fe11de018ef19c746e2d6b9c8201af234fe2ac4c6d2da84e6f2bc3fec32490e003552902c49b166407b58a7ac3fa857130854973101224af89a3355 bigip_device_sshd.py d07ba52c36e0c94e0ab8e14679489fb54983f00468b188efe490e34519b2482eb899c4516ff1e6e527d6b1603c070d2700dc8e0456b727a1711950dfbdab00cd bigip_facts.py d1fbe1ccd61a671eba43177613d0a6afb76d7f675117060537d97d8c891a56957614b2181427aade7439fda601061dae1ac6dd58c2b085a45ec05225303138dc bigip_gtm_datacenter.py e048a19caed193d5cedc206c3f3adcd036119fd7d822674aca744d0d2f259801ba2665c9821abaa1386d811f4f63dd599e0b48e7e474406a2deac385c5ebc97a bigip_gtm_facts.py e272cb1fbcafee9270e96f273c9be9c86ce8394ecd50c4316995cae48879fc7a0fd2ff5521f82677020f94f70542ab8191ce2d63bc594e88c03ac82d8e01959d bigip_gtm_pool.py 93e06e7ef8e7868890562baff095f4ba459c5aa7707994334ac6b834e11485fcb5f06d5150187598c12f1159e2632b8bf82b1d7071df05b38f2633da56f5baaa bigip_gtm_virtual_server.py dbe6a7d42b4647433aa18047e0137c1a8c7216373f037b78751abf358509d45f3d21f97e27488cfc40f30538b29c8157892d410377af645d2811f7e783758354 bigip_gtm_wide_ip.py eb741094fca75e8db8d31553a288cfd2003c48b974c8e27a8ea56bb7367d5723833e8871077b6315cb3e80d0b9320030964b540999d6655df2a917e5cbabad60 bigip_hostname.py 47dc77b9a349ca897180a54c7f9ae3c758618a6274c50885a633791cd39b2fe3713238e34ca3a94cca589a0ce9e25930d5930183cdfef883287dc7196b978195 bigip_iapp_service.py 9a08c1c33f8257e5cffc6f5dbd64215cc427db82404b1fc39ed1e10c70116b5bf5c3f1e57fff22c4b26ef9ca285350c62ae8b12baa1b3babfada23617e86e788 bigip_iapp_template.py cfa85324f70f77cfd7c6e306c601a76a0b4f183abee456b500b5d9ef810b6bf70a34306c643d4403eb41b70e8685bd9b3df9b1a2b2923e5464571ccec0e59ff4 bigip_irule.py 53e979a9e8c56f5f3beaf7dd0d162556e4b295991876448228062bcba6bbcdab429c89685e38557ce4e1cfb9a383cfb6461b87d63515e70a9ac1f7ff033a3ede bigip_monitor_http.py 284aacf60f62b13efc73d3aec7d0b98e365683168f181020956778724b987a1297ee27b2eaf4c4f79c56e6d057bc7452aac9d6a6c7695520ac34f09d8d553c03 bigip_monitor_tcp_echo.py f7cb58dd7190d4e87f15a1f2b29ea97f3b803b1d1fc0e9340a0aa63e40cd30b36877f9736bc8dd3a64929a35a8e2e865ba1d8eb3584dd1d11ec56a942cc78145 bigip_monitor_tcp_half_open.py f897a69440e79c0c26a35787bd985bc4aab374d327e33cee0bf772053494c84f4ab42286dad004d775930fbac1dee0a24eb1f752bc1e921d52b6dbbc72a766bd bigip_monitor_tcp.py 
e859dc728e075b8a39be5568d4490e741ed33a8f80e5df44b548ce4e700440913fce2a72f255c5cb7ee36fac3d68d654f28d329484faaf148fb5955ab9033f18 bigip_node.py 297fc1baa911f8cfa604f95fe91ebfd053e1776854742c6aa48c0618def951120a81f4cf6a3fa69bfa38dabe3ceedec7e5cd3094bbd4c074c34bc0d26052ced4 bigip_pool_member.py c50bae3abfa08852a8f9fb59674e88c86a77e61ed5594085b5a8cc99314eb8c0bb6159a38ce1739f4604c6d6606d612adb9ac6985057cfbd319f1ed9a14a68cc bigip_pool.py 2934ed462282c6891cd45ea4e53ed204174cfdb0c608d88c60f9c46f8f15e2e1311c3194b94b2bd12d2a73f51aadace8161d06626f5671711a568161ed77cc0d bigip_provision.py 4346e5c102d18c8c7c46fdacc1b1b0ee7600ed6913a5777faa82d02fb5ed62e6f72e985d497ad9223c52fa65dfc5bf852f5396d7ed8e52903aba69c8517e9e4f bigip_qkview.py 8bf8cc298bd40c50fe94dda9013f0e779c9f784e7809e0a2933e30f2f8c4cd8eb88c7b6c300a560f134aec92ff54e0ba31e9fcce4297ca787d25ed2c86d1353d bigip_routedomain.py 6b94e414078286076681bc95c6c6828e704bfb0963d2af93a6e90215b9af05db927052473e016d805233cefda40c64c766ded5382185a7de7a3eef3ba4db13c2 bigip_selfip.py a030f73a556ca6dd4768974eb695438d4f156d4c10811ee81daf37a60ef1787eb9311a84740c5658bbd8c6ed84d526e93f47e0e06989986d1bd4752f514adc22 bigip_snat_pool.py f8ad98a343ca796ed2b805d4967028a927857af9c95282765ba7bd77d23897092cf1ed7f49161cd470031c48845db5f3cde42dc8a235a455d7e58903c667a0bc bigip_snmp.py c4ea96f5396a2b6cccf8110b9f9c8715e29ec108b908b030fc58dd6a8103e384f58b717433a20a8bb4c91074ac3ae00ed0f3539d35392b18327841f174c91b9a bigip_snmp_trap.py d394a581b3c2f94dd970c2adc869741ee72805fddf9d084b917df3098337af42e80587eab967b7844272a488cac2adc492444b87aebec23fb55de087da57c262 bigip_ssl_certificate.py 3f60b045d6e39a67443e6ce48d5ed27ea7b6168ad3bb414087d03a79e825d9e43fdaa0bf5c24c0acb15ca85e8e94d0552f007c05436d9271b516aba9505d3d10 bigip_sys_db.py 806c05b2c3f9af95b463ac788be2a04576f877ebc4cdf185030d6db48d05164f91445275725d681422a9f7696deb8b76704d8cd68af24e9b52404a883af0c310 bigip_sys_global.py b341a609ae9770e9a1651b4ac8e05924e24f221fd2586c3981a035d56a8ebc0285547cdfbb1972c76fb02b7787032bd2152ce5981d07fdad3a61bf3a904ade1a bigip_ucs.py 556212791c0dc8cb34c6727b07d466135e897230dd9f1d613dcf38438b21aacd067102e61d1c0f9e9df42c78135a6b76d3a6e36fab9da3d941e7af2ca2138e72 bigip_user.py 7b150661bb4c520c11c7b5fd6b91d4e5758cf72146ac10e8a5eb900af033c26c22083841e7dda491f5907c3e61add84792a23bbff0146d580d3242e8f5317a19 bigip_virtual_address.py 1c35ecdeb79dd5350be93ffa16354806660a8dc8f5f2d02dc91a840420845e24205846084fbb6d0fc2ce449d0e11e68326ff794561b759db56405b04b6f6fec7 bigip_virtual_server.py 2f987e62dfd585a370ac8c507dc79832c677f2e10f01ab5853ffda75fbae8c47bc0bf0ee2f15844b768898c1780251d73f2ea6b25b56bfdd4d72a166e80b89d7 bigip_vlan.py 77f785ddcde7f380090b659a804ba79919cd4540c12f7cf58530ee9739e1bf72670b5c48d27a6196e986e7bed246af25540caaaff02fde5224510b3f01b9eb30 Release 2.4.0 bigip_command.py 3ad06398943f249acc818a8907933f168811b045a0c5df328ebd8f59878651ffbca8a6904cc512d9f00bb527363a31699987bdb37cb3ca1f899997834e633618 bigip_config.py 9c121b0844bfa4de552c22ce0dca14a1faf9c8f7168066a366fdbba895febf9decdf9320e01fcc024d65eb995d5025f2430ad3fbb885a6fe87a04d9c17cfc279 bigip_configsync_actions.py d2c92cd7674f664e1efa235af196996cfd285cf07baf6669b3bf762bcaf4e16bbcadd54e4da52b06ddd73e80e26b9229092cceac6ab7178f70edebf1df80a6ff bigip_device_dns.py d0e3ef486ec6d521c0a967714c14c6f390b788d077470257e5fbeb435a7c752c718062afdfe5e1745a39117d10749e64c1b5f5db62deb0d399617290c892d1cd bigip_device_ntp.py c620001a9fe11de018ef19c746e2d6b9c8201af234fe2ac4c6d2da84e6f2bc3fec32490e003552902c49b166407b58a7ac3fa857130854973101224af89a3355 
bigip_device_sshd.py d07ba52c36e0c94e0ab8e14679489fb54983f00468b188efe490e34519b2482eb899c4516ff1e6e527d6b1603c070d2700dc8e0456b727a1711950dfbdab00cd bigip_facts.py d1fbe1ccd61a671eba43177613d0a6afb76d7f675117060537d97d8c891a56957614b2181427aade7439fda601061dae1ac6dd58c2b085a45ec05225303138dc bigip_gtm_datacenter.py e048a19caed193d5cedc206c3f3adcd036119fd7d822674aca744d0d2f259801ba2665c9821abaa1386d811f4f63dd599e0b48e7e474406a2deac385c5ebc97a bigip_gtm_facts.py e272cb1fbcafee9270e96f273c9be9c86ce8394ecd50c4316995cae48879fc7a0fd2ff5521f82677020f94f70542ab8191ce2d63bc594e88c03ac82d8e01959d bigip_gtm_pool.py 93e06e7ef8e7868890562baff095f4ba459c5aa7707994334ac6b834e11485fcb5f06d5150187598c12f1159e2632b8bf82b1d7071df05b38f2633da56f5baaa bigip_gtm_virtual_server.py dbe6a7d42b4647433aa18047e0137c1a8c7216373f037b78751abf358509d45f3d21f97e27488cfc40f30538b29c8157892d410377af645d2811f7e783758354 bigip_gtm_wide_ip.py eb741094fca75e8db8d31553a288cfd2003c48b974c8e27a8ea56bb7367d5723833e8871077b6315cb3e80d0b9320030964b540999d6655df2a917e5cbabad60 bigip_hostname.py 47dc77b9a349ca897180a54c7f9ae3c758618a6274c50885a633791cd39b2fe3713238e34ca3a94cca589a0ce9e25930d5930183cdfef883287dc7196b978195 bigip_iapp_service.py 9a08c1c33f8257e5cffc6f5dbd64215cc427db82404b1fc39ed1e10c70116b5bf5c3f1e57fff22c4b26ef9ca285350c62ae8b12baa1b3babfada23617e86e788 bigip_iapp_template.py cfa85324f70f77cfd7c6e306c601a76a0b4f183abee456b500b5d9ef810b6bf70a34306c643d4403eb41b70e8685bd9b3df9b1a2b2923e5464571ccec0e59ff4 bigip_irule.py 53e979a9e8c56f5f3beaf7dd0d162556e4b295991876448228062bcba6bbcdab429c89685e38557ce4e1cfb9a383cfb6461b87d63515e70a9ac1f7ff033a3ede bigip_monitor_http.py 284aacf60f62b13efc73d3aec7d0b98e365683168f181020956778724b987a1297ee27b2eaf4c4f79c56e6d057bc7452aac9d6a6c7695520ac34f09d8d553c03 bigip_monitor_tcp_echo.py f7cb58dd7190d4e87f15a1f2b29ea97f3b803b1d1fc0e9340a0aa63e40cd30b36877f9736bc8dd3a64929a35a8e2e865ba1d8eb3584dd1d11ec56a942cc78145 bigip_monitor_tcp_half_open.py f897a69440e79c0c26a35787bd985bc4aab374d327e33cee0bf772053494c84f4ab42286dad004d775930fbac1dee0a24eb1f752bc1e921d52b6dbbc72a766bd bigip_monitor_tcp.py e859dc728e075b8a39be5568d4490e741ed33a8f80e5df44b548ce4e700440913fce2a72f255c5cb7ee36fac3d68d654f28d329484faaf148fb5955ab9033f18 bigip_node.py 297fc1baa911f8cfa604f95fe91ebfd053e1776854742c6aa48c0618def951120a81f4cf6a3fa69bfa38dabe3ceedec7e5cd3094bbd4c074c34bc0d26052ced4 bigip_pool_member.py c50bae3abfa08852a8f9fb59674e88c86a77e61ed5594085b5a8cc99314eb8c0bb6159a38ce1739f4604c6d6606d612adb9ac6985057cfbd319f1ed9a14a68cc bigip_pool.py 2934ed462282c6891cd45ea4e53ed204174cfdb0c608d88c60f9c46f8f15e2e1311c3194b94b2bd12d2a73f51aadace8161d06626f5671711a568161ed77cc0d bigip_provision.py 4346e5c102d18c8c7c46fdacc1b1b0ee7600ed6913a5777faa82d02fb5ed62e6f72e985d497ad9223c52fa65dfc5bf852f5396d7ed8e52903aba69c8517e9e4f bigip_qkview.py 8bf8cc298bd40c50fe94dda9013f0e779c9f784e7809e0a2933e30f2f8c4cd8eb88c7b6c300a560f134aec92ff54e0ba31e9fcce4297ca787d25ed2c86d1353d bigip_routedomain.py 6b94e414078286076681bc95c6c6828e704bfb0963d2af93a6e90215b9af05db927052473e016d805233cefda40c64c766ded5382185a7de7a3eef3ba4db13c2 bigip_selfip.py a030f73a556ca6dd4768974eb695438d4f156d4c10811ee81daf37a60ef1787eb9311a84740c5658bbd8c6ed84d526e93f47e0e06989986d1bd4752f514adc22 bigip_snat_pool.py f8ad98a343ca796ed2b805d4967028a927857af9c95282765ba7bd77d23897092cf1ed7f49161cd470031c48845db5f3cde42dc8a235a455d7e58903c667a0bc bigip_snmp.py 
e0863db14cfc66afabef24bd5f62b4931d7e9620f0126d6988aaa1d472e2bf2b5a37292203a9fe5c103f46dc506615fae12d7930c15e79056a9ca54018043b9d bigip_snmp_trap.py d394a581b3c2f94dd970c2adc869741ee72805fddf9d084b917df3098337af42e80587eab967b7844272a488cac2adc492444b87aebec23fb55de087da57c262 bigip_ssl_certificate.py 3f60b045d6e39a67443e6ce48d5ed27ea7b6168ad3bb414087d03a79e825d9e43fdaa0bf5c24c0acb15ca85e8e94d0552f007c05436d9271b516aba9505d3d10 bigip_sys_db.py 806c05b2c3f9af95b463ac788be2a04576f877ebc4cdf185030d6db48d05164f91445275725d681422a9f7696deb8b76704d8cd68af24e9b52404a883af0c310 bigip_sys_global.py b341a609ae9770e9a1651b4ac8e05924e24f221fd2586c3981a035d56a8ebc0285547cdfbb1972c76fb02b7787032bd2152ce5981d07fdad3a61bf3a904ade1a bigip_ucs.py 556212791c0dc8cb34c6727b07d466135e897230dd9f1d613dcf38438b21aacd067102e61d1c0f9e9df42c78135a6b76d3a6e36fab9da3d941e7af2ca2138e72 bigip_user.py 7b150661bb4c520c11c7b5fd6b91d4e5758cf72146ac10e8a5eb900af033c26c22083841e7dda491f5907c3e61add84792a23bbff0146d580d3242e8f5317a19 bigip_virtual_address.py 1c35ecdeb79dd5350be93ffa16354806660a8dc8f5f2d02dc91a840420845e24205846084fbb6d0fc2ce449d0e11e68326ff794561b759db56405b04b6f6fec7 bigip_virtual_server.py 2f987e62dfd585a370ac8c507dc79832c677f2e10f01ab5853ffda75fbae8c47bc0bf0ee2f15844b768898c1780251d73f2ea6b25b56bfdd4d72a166e80b89d7 bigip_vlan.py 77f785ddcde7f380090b659a804ba79919cd4540c12f7cf58530ee9739e1bf72670b5c48d27a6196e986e7bed246af25540caaaff02fde5224510b3f01b9eb30 Release 2.3.0 bigip_device_dns.py 5cab5aadf0f208a74ffc7fff95fa899abcdac6ab0f5f30231ede2297d93c49092d217815619b6cb1c82cf5eb6dc21261ea77878cca15567eedbb9e55d8d546bf bigip_device_ntp.py 5123ed01d005b2d86eb17105f99e53c2691e4a97b205bbacd1cbef5748b097d36f10968cfeb83c752ac94a4c9b2e07851ee7086fb5f724fa2dd24d4dc997d9d9 bigip_device_sshd.py 7ae7e70f52382168632d2d6b431742748f6200cc27e794ab9a4a011cc84575193b125e70ff541b8a8d8d394bd64c238f5899243f24bac9aad37e7b545c09e34f bigip_facts.py 33efbb5e18c336e3f461ef0e5c0eb1287612067b27dc11f174cf3a7ffe6c80388131de421a212c3ffa88f7e300944641044a2d5bec84995e8c408cc704c439b2 bigip_gtm_datacenter.py e6aa62173b61255aa52b9f4c2610bcb182657704185eefc48dd6d5cc0e735229d2f707517f2dfdecdf285878add1554df8b0f82ac6742a783279d0b8852025dd bigip_gtm_facts.py 5bb11a3636979102aaf76d4c62893e39937e4a0d585ddbc442dd3969494abf6dceea50b22ca826ea8dbfa2fc1c98d21a8bfb06acaf05309860ee98fd4cca29ed bigip_gtm_virtual_server.py 0f323468f0c7103e17b10dba2cdab210b013e36337fda8dcfaf3032cee34f00acce5e9ec26b40ce8a84c0c701e28a0335216ca867b911c9f95f5a274096d1a0b bigip_gtm_wide_ip.py 4bc2cac18ef31039d9e49d0012bab6a9b7c752df9d7deaaf132fe0174c60bbe441f4696bacae64d09f07031fd53c51721cefc2b444c38c9274f50e1283e44c63 bigip_hostname.py 2595a07d8a8fead2d45e7c2b17caa4f5848c00b82540ac5ec33a77d2598850895c02b8702391ce43f140680c9f272b985395026d9aa3050c8da3acb0104d6b84 bigip_irule.py 39f4b5c5db0236c22fd9abd869012af978c7f06abdc1f1cba7f8a5ef481813a273c6da5823f3f3cff76fe3f78cfe8ab25e2731fef242e1d2f01f59c7c1d0b602 bigip_monitor_http.py 3657aacab5f9d92d40dac7ba4cd86829d5a17ad396798d63bbacc9d6f2889184d68a9e3ca1999d7b6b89ac5bd4a107377be56a2175a7d6cafdea8d0ebc631124 bigip_monitor_tcp.py 90756b1f3995381b5800ff82f3e1bf85743d837b57b330eb739c1734840e155b33363d2ddf2f1014cc662f6ba7ac280e3a8838c3fbd43cbf964370138f0fb71f bigip_node.py 4a6c95aa197a58aed72b3089778a751889ae49a379a271866d63850bfe1bf437b88112003080d2281a44205801d02a6b58269794e30bb03df82a01aaf1aa9893 bigip_pool_member.py 
6af02bd3ed31770c96b325163a4b6a17758fc18f41a1fb4a1295046e43c4a3cc504e65758ad931e7e71856334722d1081542e57c5bbbf2723f4ad0990ba35641 bigip_pool.py 70e1547f67cddf0841d2802da9175d462946e2c61e193dbd0ae9291af7abacecf51ede47b9537d7fc7fa2207e8707b2c03bc4d8aa4a37b44462add85a9d56eb5 bigip_routedomain.py 838aa58eebcf9af143ae5624dd85a17821bdef5bd8c88dd9804ae80739a258482b9d6a6e8fd07d774446a91416598700317d9c87c37f467844ce76acb6240ef3 bigip_selfip.py 2a002638bd90b7d367c59f0c04bdc73340888494b363f0479fc3161b046201a5694f83385f8a80a12ab4cdc70fe67561a26c1fd314cf36720de0cba927b377e9 bigip_snat_pool.py 0573c34057813e8705654950aceadaed944e07e5659c5ca5f2130cfe9ea9d37cc198706cf61b708dca5a9779439cc57e9f2618082441ba5583732b41d54790d8 bigip_ssl_certificate.py 82a6b8dd1767f9c3f41b49462090848f459fa5ea64498d25fe10e330007e4bfe4d0fa0a285fe6ad2630b6aa7bf5c2d3f451640c0e82c5fb9220015d3b0d18a60 bigip_sys_db.py a564ba16600bcde65c5d022dda72023ea61a07d5195945a9f71cb9f2fb68377735cd475f80080d95c42bce3c8feaef62976bd62bbbc049cfd271232826c292f4 bigip_sys_global.py a2eaf08d1231e815e095587937cbdbc8494bed10c61ab94f5e70533cb91bca33b820c869d6370dc565ccd322588c3ecec05030733cc6023c81251e1de7994fc2 bigip_virtual_server.py 74fea4bd89fead3428083783bba8dc19d7db9f2d94ef6c1aeef3e9fa2ad59e453ad1b8f71ed7b713f0dffeb26c673088479b399759cdcbdc38db1726af202904 bigip_vlan.py 372997e1ec13883e6ee00c7993bfd0de87f03a7c0d484184ea526d9bb2bbd131ccbf99b53f8a4ec3e111caab08019928f71b30f2f4f245ba4d65abcdbd1d26b9 Code : You can get a checksum for a particular module by running a command such as sha512sum on a Linux system.313Views0likes0CommentsExporting and importing ASM/AWAF security policies with Ansible and Terraform
Problem this snippet solves: This Ansible playbook and Terraform TF file can be used to copy a test ASM policy from the dev/pre-production environment to the production environment as part of continuous integration and continuous delivery.

Ansible

You use the playbook by replacing the vars marked "xxx" with your F5 device values for the connection. With "vars_prompt:" you provide the policy name during execution; the preprod policy name is "{{ asm_policy }}_preprod" and the prod policy name is "{{ asm_policy }}_prod". For example, if we enter "test" during execution, the names will be test_preprod and test_prod. If using the paid version of Ansible Tower, you can use Jenkins or Bamboo to push the variables (I still have not tested this). There is also a task that deletes the old ASM policy file saved on the server, as I saw that the Ansible modules have issues overwriting existing files when doing the export; the task name is "Ansible delete file example" and in the group "internal" I have added the localhost.

https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/index.html

Also, after importing the policy file the bug https://support.f5.com/csp/article/K25733314 is hit, so the last two tasks deactivate and then activate the production policy. A nice example that I based my own on is: https://support.f5.com/csp/article/K42420223

You can also write the connection vars in the hosts file as per K42420223:

vars:
  provider:
    password: "{{ bigip_password }}"
    server: "{{ ansible_host }}"
    user: "{{ bigip_username }}"
    validate_certs: no
    server_port: 443

Example hosts:

[bigip]
f5.com

[bigip:vars]
bigip_password=xxx
bigip_username=xxx
ansible_host=xxx

The policy is exported in binary format ("binary: yes"); otherwise there is an issue importing it afterwards. Also, when importing, the option "force: yes" overwrites any existing policy with the same name. See the comments for my example about using host groups; this way your dev environment can be on one F5 device and the exported policy from it will be imported on another F5 device that is for production (a sketch of this two-play approach appears at the end of this article). When not using 'all' for hosts, you need to use set_fact so you are only prompted once for the policy name and it can then be shared between plays.

Code :

---
- name: Exporting and importing the ASM policy
  hosts: all
  connection: local
  become: yes

  vars:
    provider:
      password: xxx
      server: xxxx
      user: xxxx
      validate_certs: no
      server_port: 443

  vars_prompt:
    - name: asm_policy
      prompt: What is the name of the ASM policy?
      private: no

  tasks:
    - name: Ansible delete file example
      file:
        path: "/home/niki/asm_policy/{{ asm_policy }}"
        state: absent
      when: inventory_hostname in groups['internal']

    - name: Export policy in XML format
      bigip_asm_policy_fetch:
        name: "{{ asm_policy }}_preprod"
        file: "{{ asm_policy }}"
        dest: /home/niki/asm_policy/
        binary: yes
        provider: "{{ provider }}"

    - name: Override existing ASM policy
      bigip_asm_policy_import:
        name: "{{ asm_policy }}_prod"
        source: "/home/niki/asm_policy/{{ asm_policy }}"
        force: yes
        provider: "{{ provider }}"
      notify:
        - Save the running configuration to disk

    - name: Task - deactivate policy
      bigip_asm_policy_manage:
        name: "{{ asm_policy }}_prod"
        state: present
        provider: "{{ provider }}"
        active: no

    - name: Task - activate policy
      bigip_asm_policy_manage:
        name: "{{ asm_policy }}_prod"
        state: present
        provider: "{{ provider }}"
        active: yes

  handlers:
    - name: Save the running configuration to disk
      bigip_config:
        save: yes
        provider: "{{ provider }}"

Tested this on version: 13.1

Edit:
--------------
When I made this code there was no official documentation, but now I see F5 has provided examples for exporting and importing ASM/AWAF policies and even APM policies:

https://clouddocs.f5.com/products/orchestration/ansible/devel/modules/bigip_asm_policy_fetch_module.html
https://clouddocs.f5.com/products/orchestration/ansible/devel/modules/bigip_apm_policy_fetch_module.html
--------------

Terraform

Nowadays Terraform also provides the option to export and import AWAF policies (for APM, Ansible is still the only way), as there is an F5 provider for Terraform. I used Visual Studio, as Visual Studio will even open the terminal for you, where you can select the folder where the Terraform code will be saved; after you have added the code, run terraform init, terraform plan, terraform apply. VS even has a plugin for writing F5 iRules. The Terraform data type is not a resource; it is used to get existing policy data. Data sources allow Terraform to use information defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions.

Useful links for Visual Studio and Terraform:

https://registry.terraform.io/providers/F5Networks/bigip/1.16.0/docs/resources/bigip_ltm_datagroup
https://registry.terraform.io/providers/F5Networks/bigip/latest/docs/resources/bigip_waf_policy#policy_import_json
https://www.youtube.com/watch?v=Z5xG8HLwIh4

The big issue is that Terraform, unlike Ansible, needs you to first find the AWAF policy "ID", which is not the name but a randomly generated identifier, and this is no small task. I suggest looking at the link below:

https://community.f5.com/t5/technical-articles/manage-f5-big-ip-advanced-waf-policies-with-terraform-part-2/ta-p/300839

Code: You may also need to add the resource below to save the config; with "depends_on" it will run after the WAF policy is created. This is like a handler in Ansible that is started after the task is done, and also Terraform sometimes creates resources at the same time, not task after task like Ansible.

resource "bigip_command" "save-config" {
  commands   = ["save sys config"]
  depends_on = [
    bigip_waf_policy.test-awaf
  ]
}

Tested this on version: 16.1
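For the host-group approach mentioned in the Ansible section above (export from a dev/pre-prod BIG-IP, import on a separate production BIG-IP), a minimal two-play sketch might look like the following. The group names dev_f5 and prod_f5, the use of set_fact to share the prompted policy name, and the assumption that the provider dictionary is defined per group (for example in group_vars) are illustrative choices, not part of the original playbook:

---
- name: Export the policy from the dev BIG-IP
  hosts: dev_f5
  connection: local
  gather_facts: no
  vars_prompt:
    - name: asm_policy
      prompt: What is the name of the ASM policy?
      private: no
  tasks:
    - name: Share the prompted policy name with later plays
      set_fact:
        asm_policy: "{{ asm_policy }}"

    - name: Export the policy in binary format
      bigip_asm_policy_fetch:
        name: "{{ asm_policy }}_preprod"
        file: "{{ asm_policy }}"
        dest: /home/niki/asm_policy/
        binary: yes
        provider: "{{ provider }}"

- name: Import the policy on the production BIG-IP
  hosts: prod_f5
  connection: local
  gather_facts: no
  vars:
    # Pick up the policy name set as a fact on the first dev host
    asm_policy: "{{ hostvars[groups['dev_f5'][0]]['asm_policy'] }}"
  tasks:
    - name: Overwrite the production policy with the exported file
      bigip_asm_policy_import:
        name: "{{ asm_policy }}_prod"
        source: "/home/niki/asm_policy/{{ asm_policy }}"
        force: yes
        provider: "{{ provider }}"

The export and import tasks reuse the same module options as the playbook above; only the play layout changes so that each step targets the right device group.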
Automate Data Group updates on many Big-IP devices using Big-IQ or Ansible or Terraform
Problem this snippet solves: In many cases bad IP address lists generated by a SIEM (ELK, Splunk, IBM QRadar) need to be uploaded to the F5 to be blocked, but BIG-IQ can't be used to push data group changes to the F5 devices.

1. A workaround is to use the BIG-IQ script option to make all the F5 devices check a file on a source server and update the information in the external data group. I hope F5 adds an option to BIG-IQ to schedule when the scripts are run; otherwise a cron job on the BIG-IQ may trigger the script feature that makes the data group refresh its data (sounds like the Matrix).

https://clouddocs.f5.com/training/community/big-iq-cloud-edition/html/class5/module1/lab6.html

Example command to run in the BIG-IQ script feature:

tmsh modify sys file data-group ban_ip type ip source-path https://x.x.x.x/files/bad_ip.txt

https://support.f5.com/csp/article/K17523

2. You can also set the command as a cron job on the BIG-IP devices if you don't have BIG-IQ, as you just need a Linux server to host the data group files.

3. Also, without BIG-IQ, an Ansible playbook can be used to manage many data groups on the F5 devices, as in the playbook code below. Now, with the Windows Subsystem for Linux, you can even run Ansible on Windows!

4. If you have AFM, then you can use custom feed lists to upload the external data without the need for Ansible or BIG-IQ. ASM supports IP intelligence, but no custom feeds can be used:

https://techdocs.f5.com/kb/en-us/products/big-ip-afm/manuals/product/big-ip-afm-getting-started-14-1-0/04.html

How to use this snippet:

I made my code reading:

https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/bigip_data_group_module.html
https://support.f5.com/csp/article/K42420223

If you want an automatic timeout, then you need to use the iRule table command (but you can't edit that with the REST API, so see the article below for a workaround), which writes into RAM memory and supports an automatic timeout and lifetime for each entry. There is a nice article for that; I added a comment about a possible bug resolution, so read the comments!

https://devcentral.f5.com/s/articles/populating-tables-with-csv-data-via-sideband-connections

Another way is, on the server where you save the data group info, to add a bash script that deletes old entries from time to time via a cron job. For example (I tested this), just write each data group line/text entry with, say, an IP address and next to it the date it was added:

cutoff=$(date -d 'now - 30 days' '+%Y-%m-%d')
awk -v cutoff="$cutoff" '$2 >= cutoff { print }' <in.txt >out.txt && mv out.txt in.txt

https://stackoverflow.com/questions/38571524/remove-line-in-text-file-with-bash-if-the-date-is-older-than-30-days

Ansible is a great automation tool that makes changes only when the configuration is modified, so even if you run the same playbook two times (a playbook is the main config file and it contains many tasks), the second time there will be no changes (the same is true for Terraform). Ansible supports "for" loops but calls them "loop" (previously "with_items" was used) and "if/else" conditions but calls them "when", just to confuse us, and the conditions and loops are placed at the end of the task, not at the start 😀 A loop is good if you want to apply the same config to multiple devices with only some variables changed, and "when" is nice, for example, to apply different tasks to different versions of F5 TMOS or to F5 devices with different provisioned modules, as in the sketch below.
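As a purely illustrative sketch of loop and when used together (the data group names, file paths and the version test are assumptions and not part of the playbook below; check the bigip_device_info return structure against the module documentation before relying on it):

    - name: Gather system information from the BIG-IP
      bigip_device_info:
        gather_subset:
          - system-info
        provider: "{{ provider }}"
      register: device_info
      delegate_to: localhost

    - name: Create several data groups from files, only on newer TMOS versions
      bigip_data_group:
        name: "{{ item.name }}"
        records_src: "{{ item.file }}"
        type: address
        provider: "{{ provider }}"
      loop:
        - { name: 'block_group', file: '/var/www/files/bad.txt' }
        - { name: 'allow_group', file: '/var/www/files/good.txt' }
      # Assumed return path for the version; adjust to what bigip_device_info actually returns
      when: device_info.system_info.product_version is version('13.0', '>=')
      delegate_to: localhost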
Code :

---
- name: Create or modify data group
  hosts: all
  connection: local

  vars:
    provider:
      password: xxxxx
      server: x.x.x.x
      user: xxxxx
      validate_certs: no
      server_port: 443

  tasks:
    - name: Create a data group of IP addresses from a file
      bigip_data_group:
        name: block_group
        records_src: /var/www/files/bad.txt
        type: address
        provider: "{{ provider }}"
      notify:
        - Save the running configuration to disk

  handlers:
    - name: Save the running configuration to disk
      bigip_config:
        save: yes
        provider: "{{ provider }}"

The "notify" triggers the handler task after the main task is done, as there is no point in saving the config before that, and the handler runs only on change.

Tested this on version: 15.1

Also, F5 now has a Terraform provider, and together with Visual Studio you can edit your code on Windows and deploy it from Visual Studio itself! Visual Studio will even open the terminal for you, where you can select the folder where the Terraform code will be saved; after you have added the code, run terraform init, terraform plan, terraform apply. VS even has a plugin for writing F5 iRules. Terraform's files are called "tf" files, and the Terraform providers are like the Ansible inventory file (Ansible may also have a provider object in the playbook, not just the inventory file); they are used to make the connection and then to create the resources (like Ansible tasks).

Useful links for Visual Studio and Terraform:

https://registry.terraform.io/providers/F5Networks/bigip/1.16.0/docs/resources/bigip_ltm_datagroup
https://www.youtube.com/watch?v=Z5xG8HLwIh4

For more advanced Terraform stuff like for loops and if or count conditions:

https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9

Code : You may also need to add the resource below to save the config; with "depends_on" it will run after the data group is created. This is like a handler in Ansible that is started after the task is done, and also Terraform sometimes creates resources at the same time, not task after task like Ansible.

resource "bigip_command" "save-config" {
  commands   = ["save sys config"]
  depends_on = [
    bigip_ltm_datagroup.terraform-external1
  ]
}

Tested this on version: 16.1

Ansible and Terraform can now also be used for AS3 deployments, like BIG-IQ's "applications", as they push the F5 declarative templates to the F5 device; nowadays even F5 AWAF/ASM and SSLO (SSL Orchestrator) support declarative configurations. For more info:

https://www.f5.com/company/blog/f5-as3-and-red-hat-ansible-automation
https://clouddocs.f5.com/products/orchestration/ansible/devel/f5_bigip/playbook_tutorial.html
https://clouddocs.f5.com/products/orchestration/terraform/latest/userguide/as3-integration.html
https://support.f5.com/csp/article/K23449665
https://clouddocs.f5.com/training/fas-ansible-workshop-101/3.3-as3-asm.html
https://www.youtube.com/watch?v=Ecua-WRGyJc&t=105s
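As a rough sketch of what pushing an AS3 declaration from Ansible can look like, using only the generic uri module (this assumes AS3 is already installed on the BIG-IP; the declaration file name and credential variables are illustrative):

    - name: Push an AS3 declaration to the BIG-IP
      uri:
        url: "https://{{ inventory_hostname }}/mgmt/shared/appsvcs/declare"
        method: POST
        user: "{{ f5User }}"
        password: "{{ f5Pwd }}"
        force_basic_auth: yes
        validate_certs: no
        # as3_declaration.json is a placeholder for your own AS3 declaration file
        body: "{{ lookup('file', 'as3_declaration.json') }}"
        body_format: json
        status_code:
          - 200
          - 202
      delegate_to: localhost

The F5-maintained f5networks.f5_bigip collection also ships a dedicated AS3 deployment module, which is the cleaner long-term option; see the playbook tutorial linked above.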
Upgrade BigIP using Ansible
Problem this snippet solves: A simple, and possibly poor, Ansible playbook for upgrading devices. It allows separating devices into two "reboot groups" to allow rolling restarts of clusters.

How to use this snippet:

Clone or download the repository.
Update the hosts.ini inventory file to your requirements.
Run ansible-playbook -i hosts.ini upgrade.yaml

The playbook will identify a boot location to use from the first two on your BIG-IP system, upload and install the image, and then activate the boot location for each "reboot group" sequentially (the core steps are sketched below).

Tested this on version: No Version Found
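The upload/install/activate steps look roughly like the following sketch; the image file name and the volume name are assumptions, and the real playbook in the repository additionally discovers which boot location is free and staggers the reboots per group:

    - name: Upload the BIG-IP software image
      bigip_software_image:
        # Path on the Ansible control machine (illustrative)
        image: /images/BIGIP-15.1.5-0.0.8.iso
        provider: "{{ provider }}"
      delegate_to: localhost

    - name: Install the image and activate the target boot location
      bigip_software_install:
        image: BIGIP-15.1.5-0.0.8.iso
        # Boot location chosen by the playbook; HD1.2 is just an example
        volume: HD1.2
        state: activated
        provider: "{{ provider }}"
      delegate_to: localhost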
Migrate BigIP Configuration - using f5-sdk and python
Problem this snippet solves: I came across a situation where I needed to replace an old BIG-IP unit with a newer one. I decided to use Python and the f5-sdk to read all the different BIG-IP components from the source unit, deploy them on the destination unit, and then compare the configuration. I have put all the code on GitHub:

https://github.com/mshoaibshafi/f5-networks-bigip-migrate-configuration

How to use this snippet:

The code is as modular as possible and you can start from the file named "Main.py". It follows this sequence:

Migrate Monitors
Migrate Pools
Migrate Virtuals
Migrate Users
Compare Configuration

Code : GitHub.com repo: https://github.com/mshoaibshafi/f5-networks-bigip-migrate-configuration

Tested this on version: 12.1
vCPE Scale-N demo for Openstack / Ansible playbooks
Problem this snippet solves: The virtual CPE (Customer Premises Equipment) is an NFV use case where functionality is moved away from the customer end and into the virtualized infrastructure of the network operator. This allows more flexible deployments and services and lower costs by eliminating any changes at the customer end. The Ansible Tower scripts provided in the repository https://github.com/f5devcentral/f5-vcpe-demo implement this for a Scale-N cluster.

How to use this snippet:

Although this code implements a specific vCPE use case with specific functionality, it can be used as a skeleton for deploying configurations in a Scale-N cluster for any use case. The configurations, including base configs, are deployed with iApps (an illustrative iApp deployment task is sketched at the end of this snippet). Thanks to tmsh2iapp it is possible to deploy iApps that contain the parameters of all the BIG-IPs in the cluster. At instantiation time the iApps generate the appropriate config for each BIG-IP. This greatly simplifies the Ansible playbooks and the management of the configuration.

Code : https://github.com/f5devcentral/f5-vcpe-demo
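Purely as an illustration of driving an iApp from Ansible (the template and service names, the parameters file, and the provider variable below are assumptions and are not taken from the demo repository):

    - name: Import an iApp template generated with tmsh2iapp
      bigip_iapp_template:
        content: "{{ lookup('file', 'vcpe_base.tmpl') }}"
        state: present
        provider: "{{ provider }}"
      delegate_to: localhost

    - name: Deploy an iApp service from the template
      bigip_iapp_service:
        name: vcpe_base_service
        # Template name must match the one defined inside the imported template
        template: /Common/vcpe_base
        # Parameters (variables/tables) kept in a JSON file for convenience
        parameters: "{{ lookup('file', 'vcpe_parameters.json') | from_json }}"
        state: present
        provider: "{{ provider }}"
      delegate_to: localhost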
Ansible HA pair deployment using excel spreadsheet. No Ansible knowledge required
Problem this snippet solves: No Ansible knowledge required. Just fill in the spreadsheet and run the playbook. Easily customisable if you want to get more complex.

Please see: https://github.com/bwearp/simple-ha-pair

How to use this snippet:

simple-ha-pair

Using Ansible and an xlsx spreadsheet to set up an HA pair. Tested on BIG-IP software version 12.1.2. The default admin password of admin has been used. This project uses the xls_to_facts.py module by Matt Mullen https://github.com/mamullen13316/ansible_xls_to_facts

Requirements:

BIG-IP Requirements
The BIG-IP devices will need to have their management IP, netmask, and management gateway configured. They will also need to be licensed and provisioned with ltm. It is possible to both provision and license the devices with Ansible, but it is not within the remit of this project. For additional information on Ansible and F5 Ansible modules, please see: http://clouddocs.f5.com/products/orchestration/ansible/devel/index.html

Ansible Control Machine Requirements
I am using CentOS; other OSes are available.

Note: It will be easiest to carry out the below as the root user.

You will need Python 2.7+
$ yum install python

You will need pip
$ curl 'https://bootstrap.pypa.io/get-pip.py' > get-pip.py && sudo python get-pip.py

You will need ansible 2.5+
$ pip install ansible

If 2.5+ is not yet available, which it wasn't at the time of writing, please download directly from git
$ yum install git
$ pip install --upgrade git+https://github.com/ansible/ansible.git

You will need to add a few other modules
$ pip install f5-sdk bigsuds netaddr deepdiff request objectpath openpyxl

You will need to create and copy a root ssh-key to BOTH the bigip devices
$ ssh-keygen
Accept the defaults
$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@<bigip-management-ip>
Example:
$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.203

You will need to download the files using git - see above for git installation
$ git clone https://github.com/bwearp/simple-ha-pair/
$ cd simple-ha-pair

Executing the playbook

You will then need to edit the simple-ha-pair.xlsx file to your preferences. Then execute the playbook as root:
$ ansible-playbook simple-ha-pair.yml

NOTES:

In the simple-ha-pair.xlsx spreadsheet:
The HA VLAN must be called 'HA'
The settings where yes/no are required must be yes/no and not YES/NO or Yes/No
One device must have primary=yes and the other must have primary=no

I have added only Standard virtual servers with http, client & server ssl profiles, but hopefully it is pretty obvious from the simple-ha-pair.yml playbook how to add in others. Trunks haven't been added. This is because you can't have trunks in VE and also there is no F5 Ansible module to add trunks. It could be done relatively easily using the bigip_command module (see the sketch below), and hopefully the bigip_command examples in the simple-ha-pair.yml file will show that. I haven't added in persistence settings, as this would require a dropdown list of some kind. It is simple enough to do. Automation does not sit well with complication.

To update if there are any changes, please cd to the same folder and run:
$ git pull

You will notice there is also a reset.yml playbook to reset the devices to factory defaults. To run the reset.yml playbook as root:
$ ansible-playbook reset.yml

Code : https://github.com/bwearp/simple-ha-pair/blob/master/simple-ha-pair.yml

Tested this on version: 12.1
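As a rough illustration of the bigip_command approach to trunks mentioned in the notes above (the trunk name and interface numbers are assumptions, and as noted this only applies to hardware platforms, not VE):

    - name: Create a trunk using tmsh via bigip_command
      bigip_command:
        commands:
          - create net trunk external_trunk interfaces add { 1.1 1.2 } lacp enabled
        provider: "{{ provider }}"
      delegate_to: localhost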
Checksums for F5 Supported Cloud templates on GitHub
Problem this snippet solves: Checksums for F5 supported cloud templates

F5 Networks provides checksums for all of our supported Amazon Web Services CloudFormation, Microsoft Azure ARM, Google Deployment Manager, and OpenStack Heat Orchestration templates. See the README files on GitHub for information on individual templates. You can find the templates in the appropriate supported directory on GitHub:

Amazon CloudFormation templates: https://github.com/F5Networks/f5-aws-cloudformation/tree/master/supported
Microsoft ARM Templates: https://github.com/F5Networks/f5-azure-arm-templates/tree/master/supported
Google Templates: https://github.com/F5Networks/f5-google-gdm-templates
VMware vCenter Templates: https://github.com/F5Networks/f5-vmware-vcenter-templates
OpenStack Heat Orchestration Templates: https://github.com/F5Networks/f5-openstack-hot
F5 Ansible Modules: http://docs.ansible.com/ansible/latest/list_of_network_modules.html#f5

Because this page was getting much too long to host all the checksums for all cloud platforms, we now have individual pages for the checksums:

Amazon AWS checksums
Microsoft Azure checksums
Google Cloud checksums
VMware vCenter checksums
OpenStack Heat Orchestration checksums
F5 Ansible Module checksums

Code : You can get a checksum for a particular template by running one of the following commands, depending on your operating system:

* **Linux**: `sha512sum <path_to_template>`
* **Windows using CertUtil**: `CertUtil -hashfile <path_to_template> SHA512`
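If you want to automate the verification itself, a small Ansible sketch like the following could compute and compare a checksum (the template path and the published_checksum variable are placeholders you would supply yourself):

    - name: Compute the SHA512 checksum of a downloaded template
      stat:
        path: ./downloaded-template.json
        checksum_algorithm: sha512
      register: template_file
      delegate_to: localhost

    - name: Fail if the checksum does not match the published value
      assert:
        that:
          - template_file.stat.checksum == published_checksum
        fail_msg: "Checksum mismatch - do not deploy this template"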