Ansible
**Checksums for F5 Supported Cloud templates on GitHub**
Problem this snippet solves: F5 Networks provides checksums for all of our supported Amazon Web Services CloudFormation, Microsoft Azure ARM, Google Deployment Manager, and OpenStack Heat Orchestration templates. See the README files on GitHub for information on individual templates. You can find the templates in the appropriate supported directory on GitHub:

* Amazon CloudFormation templates: https://github.com/F5Networks/f5-aws-cloudformation/tree/master/supported
* Microsoft ARM Templates: https://github.com/F5Networks/f5-azure-arm-templates/tree/master/supported
* Google Templates: https://github.com/F5Networks/f5-google-gdm-templates
* VMware vCenter Templates: https://github.com/F5Networks/f5-vmware-vcenter-templates
* OpenStack Heat Orchestration Templates: https://github.com/F5Networks/f5-openstack-hot
* F5 Ansible Modules: http://docs.ansible.com/ansible/latest/list_of_network_modules.html#f5

Because this page was getting much too long to host the checksums for all cloud platforms, there are now individual pages for them: Amazon AWS checksums, Microsoft Azure checksums, Google Cloud checksums, VMware vCenter checksums, OpenStack Heat Orchestration checksums, and F5 Ansible Module checksums.

Code: You can get a checksum for a particular template by running one of the following commands, depending on your operating system:

* **Linux**: `sha512sum <template file>`
* **Windows (CertUtil)**: `CertUtil -hashfile <template file> SHA512`
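If you already use Ansible, the same check can be automated with the core stat and assert modules. This is only a minimal sketch under assumptions: the template path ./f5-template.json and the expected_checksum value are placeholders to fill in from the published checksum pages.

```yaml
# Minimal sketch: verify a downloaded template's SHA-512 checksum.
# The path and expected_checksum below are placeholders, not real values.
---
- name: Verify an F5 template checksum
  hosts: localhost
  gather_facts: no
  vars:
    expected_checksum: "paste-the-published-sha512-here"
  tasks:
    - name: Compute the SHA-512 of the downloaded template
      ansible.builtin.stat:
        path: ./f5-template.json
        checksum_algorithm: sha512
      register: tpl

    - name: Stop if the checksum does not match the published value
      ansible.builtin.assert:
        that: tpl.stat.checksum == expected_checksum
        fail_msg: "Checksum mismatch - do not deploy this template"
```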
**Automate Data Group updates on many Big-IP devices using Big-IQ or Ansible or Terraform**

Problem this snippet solves: In many cases, bad IP address lists generated by a SIEM (ELK, Splunk, IBM QRadar) need to be uploaded to the F5 devices to be blocked, but BIG-IQ can't be used to send data group changes to the F5 devices.

1. A workaround is to use the BIG-IQ script option to make all the F5 devices check a file on a source server and update the information in the external data group. I hope F5 adds an option to BIG-IQ to schedule when the scripts run; otherwise a cron job on the BIG-IQ may trigger the script feature that makes the data group refresh its data (sounds like the Matrix). https://clouddocs.f5.com/training/community/big-iq-cloud-edition/html/class5/module1/lab6.html Example command to run in the BIG-IQ script feature: tmsh modify sys file data-group ban_ip type ip source-path https://x.x.x.x/files/bad_ip.txt (see https://support.f5.com/csp/article/K17523).

2. You can also run the command with a cron job on the BIG-IP devices if you don't have BIG-IQ, as you just need a Linux server to host the data group files.

3. Also, without BIG-IQ, an Ansible playbook can be used to manage many data groups on the F5 devices, and I have added the playbook code below. Now, with the Windows Subsystem for Linux, you can run Ansible on Windows!

4. If you have AFM, then you can use custom feed lists to upload the external data without the need for Ansible or BIG-IQ. ASM supports IP Intelligence, but no custom feeds can be used: https://techdocs.f5.com/kb/en-us/products/big-ip-afm/manuals/product/big-ip-afm-getting-started-14-1-0/04.html

How to use this snippet: I made my code reading:
https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/bigip_data_group_module.html
https://support.f5.com/csp/article/K42420223

If you want an automatic timeout, you need the iRule table command, which writes to RAM and supports an automatic timeout and lifetime for each entry (but you can't edit the table with the REST API, so see the article below for a workaround). There is a nice article for that, and I have added a comment there about a possible bug resolution, so read the comments!
https://devcentral.f5.com/s/articles/populating-tables-with-csv-data-via-sideband-connections

Another way is, on the server where you save the data group info, to add a bash script that deletes old entries from time to time via cron. For example (I tested this), write each data group entry with, say, an IP address and next to it the date it was added, then trim anything older than 30 days:

```bash
cutoff=$(date -d 'now - 30 days' '+%Y-%m-%d')
awk -v cutoff="$cutoff" '$2 >= cutoff { print }' <in.txt >out.txt && mv out.txt in.txt
```

(see https://stackoverflow.com/questions/38571524/remove-line-in-text-file-with-bash-if-the-date-is-older-than-30-days)

Ansible is a great automation tool that makes changes only when the configuration is modified, so even if you run the same playbook twice (a playbook is the main config file and it contains many tasks), the second time nothing will change (the same is true for Terraform). Ansible supports "for" loops but calls them "loop" (in earlier versions "with_items" was used) and "if/else" conditions but calls them "when", just to confuse us, and the conditions and loops are placed at the end of the task, not at the start 😀 A loop is good if you want to apply the same config to multiple devices with only some variables changed, and "when" is nice, for example, to apply different tasks to different versions of F5 TMOS or to F5 devices with different provisioned modules; see the sketch below.
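To illustrate "loop" and "when", here is a hedged task sketch (not part of the original snippet): the data group names, file paths, and the tmos_version variable are assumptions, with the version gathered however you prefer (for example via bigip_device_info).

```yaml
# Illustrative only: apply the same module to several data groups with
# "loop", and gate the task on a version condition with "when".
# tmos_version is an assumed variable gathered earlier in the play.
- name: Maintain several external data groups
  bigip_data_group:
    name: "{{ item.name }}"
    records_src: "{{ item.src }}"
    type: address
    provider: "{{ provider }}"
  loop:
    - { name: ban_ip, src: /var/www/files/bad_ip.txt }
    - { name: allow_ip, src: /var/www/files/good_ip.txt }
  when: tmos_version is version('15.0', '>=')
```

Note how both loop and when sit at the end of the task, exactly as described above.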
Code:

```yaml
---
- name: Create or modify data group
  hosts: all
  connection: local
  vars:
    provider:
      password: xxxxx
      server: x.x.x.x
      user: xxxxx
      validate_certs: no
      server_port: 443
  tasks:
    - name: Create a data group of IP addresses from a file
      bigip_data_group:
        name: block_group
        records_src: /var/www/files/bad.txt
        type: address
        provider: "{{ provider }}"
      notify:
        - Save the running configuration to disk
  handlers:
    - name: Save the running configuration to disk
      bigip_config:
        save: yes
        provider: "{{ provider }}"
```

The "notify" triggers the handler task after the main task is done, as there is no point in saving the config before that, and the handler runs only on change.

Tested this on version: 15.1

Also, F5 now has a Terraform provider, and together with Visual Studio you can edit your code on Windows and deploy it from Visual Studio itself! Visual Studio will even open the terminal for you, where you can select the folder where the Terraform code will be saved; after you have added the code, run terraform init, terraform plan, terraform apply. VS even has a plugin for writing F5 iRules. Terraform's files are called "tf" files, and the Terraform providers are like the Ansible inventory file (Ansible may also have a provider object in the playbook rather than the inventory file): they are used to make the connection, and then the resources are created (like Ansible tasks).

Useful links for Visual Studio and Terraform:
https://registry.terraform.io/providers/F5Networks/bigip/1.16.0/docs/resources/bigip_ltm_datagroup
https://www.youtube.com/watch?v=Z5xG8HLwIh4

For more advanced Terraform stuff like for loops and if/count conditions:
https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9

Code: You may also need to add the resource below to save the config; with "depends_on" it will run after the data group is created. This is like the handler in Ansible that is started after the task is done (Terraform sometimes creates resources at the same time, unlike Ansible's task-after-task order).

```hcl
resource "bigip_command" "save-config" {
  commands   = ["save sys config"]
  depends_on = [
    bigip_ltm_datagroup.terraform-external1
  ]
}
```

Tested this on version: 16.1

Ansible and Terraform can now also be used for AS3 deployments, like BIG-IQ's "applications", as they push the F5 declarative templates to the F5 device; nowadays even F5 AWAF/ASM and SSLO (SSL Orchestrator) support declarative configurations (a rough sketch of an AS3 push follows after the links). For more info:
https://www.f5.com/company/blog/f5-as3-and-red-hat-ansible-automation
https://clouddocs.f5.com/products/orchestration/ansible/devel/f5_bigip/playbook_tutorial.html
https://clouddocs.f5.com/products/orchestration/terraform/latest/userguide/as3-integration.html
https://support.f5.com/csp/article/K23449665
https://clouddocs.f5.com/training/fas-ansible-workshop-101/3.3-as3-asm.html
https://www.youtube.com/watch?v=Ecua-WRGyJc&t=105s
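As a rough illustration of an AS3 push from Ansible, here is a hedged sketch using only the core uri module against the documented AS3 REST endpoint /mgmt/shared/appsvcs/declare; bigip_host, the credentials, and declaration.json are placeholders, not values from this snippet.

```yaml
# Hedged sketch: POST an AS3 declaration to a BIG-IP.
# bigip_host/bigip_user/bigip_pass and declaration.json are placeholders.
---
- name: Deploy an AS3 declaration
  hosts: localhost
  gather_facts: no
  tasks:
    - name: POST the declaration to the AS3 REST endpoint
      ansible.builtin.uri:
        url: "https://{{ bigip_host }}/mgmt/shared/appsvcs/declare"
        method: POST
        user: "{{ bigip_user }}"
        password: "{{ bigip_pass }}"
        force_basic_auth: yes
        validate_certs: no
        body: "{{ lookup('file', 'declaration.json') }}"
        body_format: json
        status_code: [200, 202]
```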
**F5 Archiver Ansible Playbook**

Problem this snippet solves: Centralized, scheduled archiving (backups) of F5 BIG-IP devices is a pain; however, in the new world of Infrastructure as Code (IaC) and Super-NetOps, tools like Ansible can provide the answer. I have a playbook I have been working on that lets me back up off-box quickly. UCS files are saved to a folder named tmp under the local project folder; this can be changed by editing the following line in the f5Archiver.yml file:

dest: "tmp/{{ inventory_hostname }}-{{ date['stdout'] }}.ucs"

The playbook can be run from a laptop on demand, via some scheduler (like cron; see the sketch at the end of this snippet), or as part of a CI/CD pipeline.

How to use this snippet: F5 Archiver Ansible Playbook. GitLab: StrataLabs: AnsibleF5Archiver

Overview: This Ansible playbook takes a list of F5 devices from a hosts file located within the inventory directory, creates a UCS archive, and copies it locally into the 'tmp' directory.

Requirements. This Ansible playbook requires the following:
* ansible >= 2.5
* python module f5-sdk
* F5 BIG-IP running TMOS >= 12

Usage: Run using the ansible-playbook command with the -i option to use the inventory directory instead of the default inventory host file. NOTE: the F5 username and password are not set in the playbook and so need to be passed in as extra variables using the --extra-vars option; the variables are f5User for the username and f5Pwd for the password. The examples below use the default admin:admin.

To check the playbook before using it, run the following commands:

ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml --syntax-check
ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml --check

Once happy, run the following to execute the playbook:

ansible-playbook -i inventory --extra-vars "f5User=admin f5Pwd=admin" f5Archiver.yml

Tested this on version: 12.1
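For the scheduler option mentioned above, one possibility is to let Ansible manage the cron entry on the control host itself. A minimal sketch, assuming the project lives in /opt/AnsibleF5Archiver and uses the default admin credentials from the examples above (path, schedule, and credentials are all placeholders):

```yaml
# Hypothetical sketch: schedule the archiver playbook nightly on the
# Ansible control host. Path, schedule, and credentials are assumptions.
- name: Schedule nightly UCS archives
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Run f5Archiver.yml every night at 01:00
      ansible.builtin.cron:
        name: "F5 UCS archive"
        minute: "0"
        hour: "1"
        job: "cd /opt/AnsibleF5Archiver && ansible-playbook -i inventory --extra-vars 'f5User=admin f5Pwd=admin' f5Archiver.yml"
```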
**Exporting and importing ASM/AWAF security policies with Ansible and Terraform**

Problem this snippet solves: This Ansible playbook and Terraform TF file can be used to copy a test ASM policy from the dev/preproduction environment to the production environment, as in continuous integration and continuous delivery.

Ansible

You use the playbook by replacing the vars with "xxx" with your F5 device values for the connection. Also, with "vars_prompt:" you add the policy name during execution; the preprod policy name is "{{ asm_policy }}_preprod" and the prod policy name is "{{ asm_policy }}_prod". For example, if we enter "test" during execution, the names will be test_prod and test_preprod. If using the paid version of Ansible Tower, you can use Jenkins or Bamboo to push variables (I still have not tested this). Also, there is a task that deletes the old ASM policy file saved on the server, as I saw that the Ansible modules have issues overwriting existing files when doing the export; that task is named "Ansible delete file example", and in the group "internal" I have added the localhost. https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/index.html

Also, after importing the policy file the bug https://support.f5.com/csp/article/K25733314 is hit, so the last two tasks deactivate and then activate the production policy. A nice example that I based my own on is: https://support.f5.com/csp/article/K42420223

You can also write the connection vars in the hosts file as per K42420223:

```yaml
vars:
  provider:
    password: "{{ bigip_password }}"
    server: "{{ ansible_host }}"
    user: "{{ bigip_username }}"
    validate_certs: no
    server_port: 443
```

Example hosts:

```
[bigip]
f5.com

[bigip:vars]
bigip_password=xxx
bigip_username=xxx
ansible_host=xxx
```

The policy is exported in binary format, otherwise there is an issue importing it afterwards ("binary: yes"). Also, when importing, the option "force: yes" provides an overwrite if there is a policy with the same name. See the comments for my example of using host groups; this way your dev environment can be on one F5 device and the exported policy from it will be imported on another F5 device that is for production (a sketch follows after this snippet's code). When not using "all" for hosts, you need to use set_fact to only be prompted once for the policy name and then share it between plays.

Code:

```yaml
---
- name: Exporting and importing the ASM policy
  hosts: all
  connection: local
  become: yes
  vars:
    provider:
      password: xxx
      server: xxxx
      user: xxxx
      validate_certs: no
      server_port: 443
  vars_prompt:
    - name: asm_policy
      prompt: What is the name of the ASM policy?
      private: no
  tasks:
    - name: Ansible delete file example
      file:
        path: "/home/niki/asm_policy/{{ asm_policy }}"
        state: absent
      when: inventory_hostname in groups['internal']
    - name: Export policy in XML format
      bigip_asm_policy_fetch:
        name: "{{ asm_policy }}_preprod"
        file: "{{ asm_policy }}"
        dest: /home/niki/asm_policy/
        binary: yes
        provider: "{{ provider }}"
    - name: Override existing ASM policy
      bigip_asm_policy_import:
        name: "{{ asm_policy }}_prod"
        source: "/home/niki/asm_policy/{{ asm_policy }}"
        force: yes
        provider: "{{ provider }}"
      notify:
        - Save the running configuration to disk
    - name: Task - deactivate policy
      bigip_asm_policy_manage:
        name: "{{ asm_policy }}_prod"
        state: present
        provider: "{{ provider }}"
        active: no
    - name: Task - activate policy
      bigip_asm_policy_manage:
        name: "{{ asm_policy }}_prod"
        state: present
        provider: "{{ provider }}"
        active: yes
  handlers:
    - name: Save the running configuration to disk
      bigip_config:
        save: yes
        provider: "{{ provider }}"
```

Tested this on version: 13.1

Edit: When I made this code there was no official documentation, but now F5 has provided examples for exporting and importing ASM/AWAF policies and even APM policies:
https://clouddocs.f5.com/products/orchestration/ansible/devel/modules/bigip_asm_policy_fetch_module.html
https://clouddocs.f5.com/products/orchestration/ansible/devel/modules/bigip_apm_policy_fetch_module.html

Terraform

Nowadays Terraform also provides the option to export and import AWAF policies (for APM, Ansible is still the only way), as there is an F5 provider for Terraform. I used Visual Studio, as Visual Studio will even open the terminal for you, where you can select the folder where the Terraform code will be saved; after you have added the code, run terraform init, terraform plan, terraform apply. VS even has a plugin for writing F5 iRules. The Terraform "data" type is not a resource; it is used to get the existing policy data. Data sources allow Terraform to use information defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions.

Useful links for Visual Studio and Terraform:
https://registry.terraform.io/providers/F5Networks/bigip/latest/docs/resources/bigip_waf_policy#policy_import_json
https://www.youtube.com/watch?v=Z5xG8HLwIh4

The big issue is that Terraform, unlike Ansible, first needs you to find the AWAF policy "ID", which is not the name but a randomly generated identifier, and this is no small task. I suggest looking at the link below:
https://community.f5.com/t5/technical-articles/manage-f5-big-ip-advanced-waf-policies-with-terraform-part-2/ta-p/300839

Code: You may also need to add the resource below to save the config; with "depends_on" it will run after the policy is created. This is like the handler in Ansible that is started after the task is done (Terraform sometimes creates resources at the same time, unlike Ansible's task-after-task order).

```hcl
resource "bigip_command" "save-config" {
  commands   = ["save sys config"]
  depends_on = [
    bigip_waf_policy.test-awaf
  ]
}
```

Tested this on version: 16.1
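For the host-group variant mentioned above, here is a hedged sketch (not the snippet's actual code) that prompts once and shares the policy name between plays with set_fact; the dev and prod group names are assumptions, and provider is defined as in the playbook above:

```yaml
# Hedged sketch only: export from a dev device, import on a prod device.
# Group names (dev, prod) and paths are assumptions; provider as above.
---
- name: Export from the dev device
  hosts: dev
  connection: local
  vars_prompt:
    - name: asm_policy
      prompt: What is the name of the ASM policy?
      private: no
  tasks:
    - name: Persist the prompted name as a fact for later plays
      set_fact:
        asm_policy: "{{ asm_policy }}"
    - name: Export the preprod policy
      bigip_asm_policy_fetch:
        name: "{{ asm_policy }}_preprod"
        file: "{{ asm_policy }}"
        dest: /home/niki/asm_policy/
        binary: yes
        provider: "{{ provider }}"

- name: Import on the prod device
  hosts: prod
  connection: local
  tasks:
    - name: Import using the name prompted in the first play
      bigip_asm_policy_import:
        name: "{{ hostvars[groups['dev'][0]]['asm_policy'] }}_prod"
        source: "/home/niki/asm_policy/{{ hostvars[groups['dev'][0]]['asm_policy'] }}"
        force: yes
        provider: "{{ provider }}"
```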
**Upgrade BigIP using Ansible**

Problem this snippet solves: A simple, and possibly poor, Ansible playbook for upgrading devices. It allows separating devices into two "reboot groups" to allow rolling restarts of clusters.

How to use this snippet:
1. Clone or download the repository.
2. Update the hosts.ini inventory file to your requirements.
3. Run ansible-playbook -i hosts.ini upgrade.yaml

The script will identify a boot location to use from the first two on your BIG-IP system, upload and install the image, and then activate the boot location for each "reboot group" sequentially (a sketch of the core modules follows below).
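For reference, the upload/install/activate steps might look roughly like the sketch below using the f5networks.f5_modules collection; this is not the repository's actual code, and the image filename and volume name are placeholders:

```yaml
# Hedged sketch of the upgrade steps; image and volume are placeholders.
- name: Upload the installation image to the BIG-IP
  bigip_software_image:
    image: /tmp/BIGIP-15.1.0-0.0.31.iso
    provider: "{{ provider }}"
  delegate_to: localhost

- name: Install to the spare boot location and activate it (reboots the unit)
  bigip_software_install:
    image: BIGIP-15.1.0-0.0.31.iso
    volume: HD1.2
    state: activated
    provider: "{{ provider }}"
  delegate_to: localhost
```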
**Ansible playbook run tasks only on Active LTM member**

Problem this snippet solves: This is an example of a simple Ansible playbook that can be run against a pair of F5 devices and will only run select tasks if the F5 is in an active state. This is done using the block and when statements within the playbook ('block' requires Ansible 2.5 or above). In this example it sets the hostname of the F5 and, if the failover state is active, creates three test nodes and a test pool, and adds the nodes as pool members, all under the test partition. NOTE: This playbook prompts for the F5 username and password to connect to the F5 device; these would normally be set in another file or pulled from something like HashiCorp Vault.

How to use this snippet: Ansible hosts inventory example (inventory/hosts):

```
[F5DeviceGroup]
f5vm01.lab.domain.local
f5vm02.lab.domain.local
```

Assuming the hosts file is located locally within a directory named inventory and the Ansible playbook is named f5TestPool.yml, you can run the example using the following command:

ansible-playbook -i inventory f5TestPool.yml

Example output:

```
F5 Username:
F5 Password:

PLAY [Run tasks on Active LTM] *********************************************

TASK [Set hostname] ********************************************************
ok: [f5vm01.lab.domain.local -> localhost]
ok: [f5vm02.lab.domain.local -> localhost]

TASK [Get BIG-IP failover status] ******************************************
ok: [f5vm01.lab.domain.local -> localhost]
ok: [f5vm02.lab.domain.local -> localhost]

TASK [The active LTMs management IP is....] ********************************
ok: [f5vm01.lab.domain.local] => {
    "inventory_hostname": "f5vm01.lab.domain.local"
}
skipping: [f5vm02.lab.domain.local]

TASK [Add pool test_pool] **************************************************
ok: [f5vm01.lab.domain.local -> localhost]
skipping: [f5vm02.lab.domain.local]

TASK [Add node [{u'name': u'test01', u'address': u'8.8.8.8'}, {u'name': u'test02', u'address': u'8.8.4.4'}, {u'name': u'test03', u'address': u'8.8.1.1'}]] ***
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test03', u'address': u'8.8.1.1'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test03', u'address': u'8.8.1.1'})

TASK [Add pool member [{u'name': u'test01', u'address': u'8.8.8.8'}, {u'name': u'test02', u'address': u'8.8.4.4'}, {u'name': u'test03', u'address': u'8.8.1.1'}] to Pool test_pool] ***
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
ok: [f5vm01.lab.domain.local -> localhost] => (item={u'name': u'test03', u'address': u'8.8.1.1'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test01', u'address': u'8.8.8.8'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test02', u'address': u'8.8.4.4'})
skipping: [f5vm02.lab.domain.local] => (item={u'name': u'test03', u'address': u'8.8.1.1'})

PLAY RECAP *****************************************************************
f5vm01.lab.domain.local : ok=6 changed=0 unreachable=0 failed=0
f5vm02.lab.domain.local : ok=2 changed=0 unreachable=0 failed=0
```

Code:

```yaml
---
# Playbook 'f5TestPool.yml'
- name: Run tasks on Active LTM
  hosts: F5DeviceGroup
  connection: local
  gather_facts: False
  vars_prompt:
    - name: f5User
      prompt: F5 Username
    - name: f5Pwd
      prompt: F5 Password
  vars:
    f5Provider:
      server: "{{ inventory_hostname }}"
      server_port: 443
      user: "{{ f5User }}"
      password: "{{ f5Pwd }}"
      validate_certs: no
      transport: rest
    nodelist:
      - { name: 'test01', address: "8.8.8.8" }
      - { name: 'test02', address: "8.8.4.4" }
      - { name: 'test03', address: "8.8.1.1" }
  tasks:
    - name: Set hostname
      bigip_hostname:
        provider: "{{ f5Provider }}"
        hostname: "{{ inventory_hostname }}"
      delegate_to: localhost

    - name: Get BIG-IP failover status
      bigip_command:
        provider: "{{ f5Provider }}"
        commands:
          - "tmsh show sys failover"
      delegate_to: localhost
      register: failoverStatus

    - name: Executing on ACTIVE F5 LTM
      block:
        - name: The active LTMs management IP is....
          debug:
            var: inventory_hostname

        - name: Add pool test_pool
          bigip_pool:
            provider: "{{ f5Provider }}"
            description: "Test pool set by Ansible run by {{ f5User }}"
            lb_method: least-connections-member
            name: test_pool
            partition: test
            monitor_type: single
            monitors:
              - /Common/gateway_icmp
            priority_group_activation: 0
          delegate_to: localhost

        - name: "Add node {{ nodelist }}"
          bigip_node:
            provider: "{{ f5Provider }}"
            partition: test
            address: "{{ item.address }}"
            name: "{{ item.name }}"
          loop: "{{ nodelist }}"
          delegate_to: localhost

        - name: "Add pool member {{ nodelist }} to Pool test_pool"
          bigip_pool_member:
            provider: "{{ f5Provider }}"
            partition: test
            pool: test_pool
            address: "{{ item.address }}"
            name: "{{ item.name }}"
            port: 53
          loop: "{{ nodelist }}"
          delegate_to: localhost
      when: "'active' in failoverStatus['stdout'][0]"
```

Tested this on version: 12.1
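A possible variant (a hedged sketch, not part of the original snippet) is to read the failover state with bigip_device_info instead of parsing tmsh output:

```yaml
# Hedged sketch: gather failover state via bigip_device_info rather than
# parsing "tmsh show sys failover"; field names are per the module docs.
- name: Gather device facts
  bigip_device_info:
    gather_subset:
      - devices
    provider: "{{ f5Provider }}"
  delegate_to: localhost
  register: deviceInfo

# Each entry in deviceInfo.devices then carries a failover_state field
# (e.g. "active" or "standby") that could drive the same "when" condition.
```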
**Ansible HA pair deployment using excel spreadsheet. No Ansible knowledge required**

Problem this snippet solves: No Ansible knowledge required; just fill in the spreadsheet and run the playbook. Easily customisable if you want to get more complex. Please see: https://github.com/bwearp/simple-ha-pair

How to use this snippet: simple-ha-pair uses Ansible and an xlsx spreadsheet to set up an HA pair. Tested on BIG-IP software version 12.1.2. The default admin password of admin has been used. This project uses the xls_to_facts.py module by Matt Mullen: https://github.com/mamullen13316/ansible_xls_to_facts

Requirements:

BIG-IP requirements. The BIG-IP devices will need to have their management IP, netmask, and management gateway configured. They will also need to be licensed and provisioned with LTM. It is possible to both provision and license the devices with Ansible, but it is not within the remit of this project. For additional information on Ansible and F5 Ansible modules, please see: http://clouddocs.f5.com/products/orchestration/ansible/devel/index.html

Ansible control machine requirements. I am using CentOS; other OSes are available. Note: it will be easiest to carry out the below as the root user.

You will need Python 2.7+:
$ yum install python

You will need pip:
$ curl 'https://bootstrap.pypa.io/get-pip.py' > get-pip.py && sudo python get-pip.py

You will need Ansible 2.5+:
$ pip install ansible

If 2.5+ is not yet available, which it wasn't at the time of writing, please download directly from git:
$ yum install git
$ pip install --upgrade git+https://github.com/ansible/ansible.git

You will need to add a few other modules:
$ pip install f5-sdk bigsuds netaddr deepdiff request objectpath openpyxl

You will need to create an ssh-key and copy it to BOTH the BIG-IP devices:
$ ssh-keygen
Accept the defaults.
$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@<bigip-management-ip>
Example:
$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.203

You will need to download the files using git (see above for git installation):
$ git clone https://github.com/bwearp/simple-ha-pair/
$ cd simple-ha-pair

Executing the playbook: You will then need to edit the simple-ha-pair.xlsx file to your preferences, then execute the playbook as root:
$ ansible-playbook simple-ha-pair.yml

NOTES: In the simple-ha-pair.xlsx spreadsheet:
* The HA VLAN must be called 'HA'.
* The settings where yes/no are required must be yes/no and not YES/NO or Yes/No.
* One device must have primary=yes and the other must have primary=no.
* I have added only standard virtual servers with http, client & server ssl profiles, but hopefully it is pretty obvious from the simple-ha-pair.yml playbook how to add in others (see the sketch at the end of this snippet).
* Trunks haven't been added, because you can't have trunks in VE and there is no F5 Ansible module to add trunks. It could be done relatively easily using the bigip_command module, and hopefully the bigip_command examples in the simple-ha-pair.yml file will show that.
* I haven't added persistence settings, as this would require a dropdown list of some kind. It is simple enough to do; automation does not sit well with complication.

To update if there are any changes, please cd to the same folder and run:
$ git pull

You will notice there is also a reset.yml playbook to reset the devices to factory defaults. To run the reset.yml playbook as root:
$ ansible-playbook reset.yml

Code: https://github.com/bwearp/simple-ha-pair/blob/master/simple-ha-pair.yml

Tested this on version: 12.1
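To give a flavour of how spreadsheet-driven tasks can look, here is a purely illustrative sketch; the spreadsheet_vlans variable name is hypothetical, since the real fact names come from xls_to_facts.py and the workbook's sheet names:

```yaml
# Hypothetical sketch: consume facts loaded from the spreadsheet.
# "spreadsheet_vlans" is a made-up variable name for illustration only.
- name: Create the VLANs listed in the spreadsheet
  bigip_vlan:
    name: "{{ item.name }}"
    tag: "{{ item.tag }}"
    provider: "{{ provider }}"
  loop: "{{ spreadsheet_vlans }}"
  delegate_to: localhost
```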
**Migrate BigIP Configuration - using f5-sdk and python**

Problem this snippet solves: I came across a situation where I needed to replace an old BIG-IP unit with a newer one. I decided to use Python and the f5-sdk to read all the different BIG-IP components from the source unit, deploy them on the destination unit, and then compare the config. I have put all the code on GitHub: https://github.com/mshoaibshafi/f5-networks-bigip-migrate-configuration

How to use this snippet: The code is as modular as possible; you can start from the file named "Main.py". It follows this sequence:
1. Migrate monitors
2. Migrate pools
3. Migrate virtuals
4. Migrate users
5. Compare configuration

Code: GitHub repo: https://github.com/mshoaibshafi/f5-networks-bigip-migrate-configuration

Tested this on version: 12.1
**Automating BIG-IP deployments using Ansible**

Problem this snippet solves: Provides the opportunity to easily test deployment models and use cases of BIG-IP in AWS EC2. While AWS is used to provide a virtual compute and networking infrastructure, best practices shown here may be applicable to other public and private 'cloud' environments. Shows how the lifecycle of BIG-IP services can be automated using open-source configuration management and orchestration tools, in conjunction with the APIs provided by the BIG-IP platform.

How to use this snippet: See README.md and /docs in the linked GitHub repository.

Code: https://github.com/F5Networks/aws-deployments/

Tested this on version: 11.6
**vCPE Scale-N demo for Openstack / Ansible playbooks**

Problem this snippet solves: The virtual CPE (Customer Premises Equipment) is an NFV use case where functionality is moved away from the customer end and into the virtualized infrastructure of the network operator. This allows more flexible deployments, more flexible services, and lower costs by eliminating any changes at the customer end. The Ansible Tower scripts provided in the repository https://github.com/f5devcentral/f5-vcpe-demo implement this for a Scale-N cluster.

How to use this snippet: Although this code implements a specific vCPE use case with specific functionality, it can be used as a skeleton for deploying configurations in a Scale-N cluster for any use case. The configurations, including base configs, are deployed with iApps. Thanks to tmsh2iapp it is possible to deploy iApps that contain the parameters of all the BIG-IPs in the cluster; at instantiation time the iApps will generate the appropriate config for each BIG-IP. This greatly simplifies the Ansible playbooks and the management of the configuration. The Ansible playbooks in this repository also deploy the necessary OpenStack/Neutron configuration.

Code: https://github.com/f5devcentral/f5-vcpe-demo