Exporting and importing ASM/AWAF security policies with Ansible and Terraform

Problem this snippet solves:

 

This Ansible playbook and Terraform TF file can be used to copy a test ASM policy from the dev/preproduction environment to the production environment, as part of a continuous integration and continuous delivery (CI/CD) pipeline.

Ansible

 

You use the playbook by replacing the vars with "xxx" with your F5 device values for the connection. Also, with "vars_prompt:" you add the policy name during execution, as the preprod policy name is "{{ asm_policy }}_preprod" and the prod policy name is "{{ asm_policy }}_prod". For example, if we enter "test" during execution, the names will be test_preprod and test_prod. If using the paid version of Ansible Tower, you can use Jenkins or Bamboo to push the variables (I still have not tested this).

There is also a task that deletes the old ASM policy file saved on the server, as I saw that the Ansible modules have issues overwriting existing files when doing the export. That task is named "Ansible delete file example", and in the group "internal" I have added the localhost.

https://docs.ansible.com/ansible/latest/collections/f5networks/f5_modules/index.html
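
The playbook uses the F5 modules collection linked above. If it is not already present on the control node, it can be installed with the standard ansible-galaxy command:

ansible-galaxy collection install f5networks.f5_modules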

 

 

Also, after importing the policy file the bug https://support.f5.com/csp/article/K25733314 is hit, so the last 2 tasks deactivate and then activate the production policy.

A nice example that I based mine on is:

 

https://support.f5.com/csp/article/K42420223

 

 

You can also write the connection vars in the hosts file, as per K42420223:

 vars:
   provider:
     password: "{{ bigip_password }}"
     server: "{{ ansible_host }}"
     user: "{{ bigip_username }}"
     validate_certs: no
     server_port: 443

Example hosts:

 

[bigip]
f5.com

[bigip:vars]
bigip_password=xxx
bigip_username=xxx
ansible_host=xxx

 

 

The policy is exported in binary format ("binary: yes"); otherwise there is an issue importing it afterwards. Also, when importing, the option "force: yes" overwrites any existing policy with the same name.

See the comments below for my example of using host groups; this way your dev environment can be on one F5 device and the policy exported from it will be imported on another F5 device that is for production. When not using "all" for hosts, you need to use set_fact to be prompted only once for the policy name and to share it between plays.

Code:

---

- name: Exporting and importing the ASM policy
  hosts: all
  connection: local
  become: yes

  vars:
    provider:
      password: xxx
      server: xxxx
      user: xxxx
      validate_certs: no
      server_port: 443

  vars_prompt:

    - name: asm_policy
      prompt: What is the name of the ASM policy?
      private: no

  tasks:

    # The F5 modules have issues overwriting an existing export file,
    # so delete the previously saved policy file first.
    - name: Ansible delete file example
      file:
        path: "/home/niki/asm_policy/{{ asm_policy }}"
        state: absent
      when: inventory_hostname in groups['internal']

    - name: Export the preprod policy in binary format
      bigip_asm_policy_fetch:
        name: "{{ asm_policy }}_preprod"
        file: "{{ asm_policy }}"
        dest: /home/niki/asm_policy/
        binary: yes
        provider: "{{ provider }}"

    - name: Override existing ASM policy
      bigip_asm_policy_import:
        name: "{{ asm_policy }}_prod"
        source: "/home/niki/asm_policy/{{ asm_policy }}"
        force: yes
        provider: "{{ provider }}"
      notify:
        - Save the running configuration to disk

    # Workaround for bug K25733314: deactivate and then reactivate
    # the production policy after the import.
    - name: Task - deactivate policy
      bigip_asm_policy_manage:
        name: "{{ asm_policy }}_prod"
        state: present
        provider: "{{ provider }}"
        active: no

    - name: Task - activate policy
      bigip_asm_policy_manage:
        name: "{{ asm_policy }}_prod"
        state: present
        provider: "{{ provider }}"
        active: yes

  handlers:
    - name: Save the running configuration to disk
      bigip_config:
        save: yes
        provider: "{{ provider }}"
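
To run the playbook against the example inventory shown above (assuming the playbook is saved as asm_policy.yml and the inventory as hosts; both file names are just examples):

ansible-playbook -i hosts asm_policy.yml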

Tested this on version:

13.1

Edit:

--------------

When I made this code there was no official documentation, but now I see F5 has provided examples for exporting and importing ASM/AWAF policies and even APM policies:

https://clouddocs.f5.com/products/orchestration/ansible/devel/modules/bigip_asm_policy_fetch_module.html

https://clouddocs.f5.com/products/orchestration/ansible/devel/modules/bigip_apm_policy_fetch_module.html

--------------

 

Terraform

 

Nowadays Terraform also provides the option to export and import AWAF policies (for APM, Ansible is still the only way), as there is an F5 provider for Terraform. I used Visual Studio Code, which will even open the terminal for you, where you can select the folder where the Terraform code will be saved; after you have added the code, run terraform init, terraform plan and terraform apply. VS Code even has a plugin for writing F5 iRules.
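
The basic terminal workflow is:

terraform init      # downloads the F5 bigip provider
terraform plan      # previews the changes
terraform apply     # applies the configuration to the F5 device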

 

The Terraform "data" type is not a resource; it is used to read the existing policy data. Data sources allow Terraform to use information defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions.
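
As a minimal sketch of how this fits together (the policy ID, names, template and credentials below are placeholders; check the provider documentation linked below for the exact arguments), the resource name "test-awaf" matches the "depends_on" reference in the save-config resource further down:

provider "bigip" {
  address  = "xxx"
  username = "xxx"
  password = "xxx"
}

# Read the existing preprod AWAF policy by its system-generated ID.
data "bigip_waf_policy" "preprod" {
  policy_id = "xxxxx"
}

# Re-create the policy on the production device from the exported JSON.
resource "bigip_waf_policy" "test-awaf" {
  name                 = "test_prod"
  partition            = "Common"
  template_name        = "POLICY_TEMPLATE_FUNDAMENTAL"
  application_language = "utf-8"
  policy_import_json   = data.bigip_waf_policy.preprod.policy_json
}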

 

Useful links for Visual Studio Code and Terraform:

https://registry.terraform.io/providers/F5Networks/bigip/1.16.0/docs/resources/bigip_ltm_datagroup

https://registry.terraform.io/providers/F5Networks/bigip/latest/docs/resources/bigip_waf_policy#policy_import_json

https://www.youtube.com/watch?v=Z5xG8HLwIh4

 

The big issue is that Terraform, unlike Ansible, first needs you to find the AWAF policy "ID", which is not the name but a randomly generated identifier, and this is no small task. I suggest looking at the link below:

 

https://community.f5.com/t5/technical-articles/manage-f5-big-ip-advanced-waf-policies-with-terraform-part-2/ta-p/300839
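
One way to list the policy IDs is the iControl REST API; for example, a rough sketch with curl (the address and credentials are placeholders):

curl -sk -u admin:xxx 'https://xxx/mgmt/tm/asm/policies?$select=name,id'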

 

 

Code:

You may also need to add the resource below to save the config; with "depends_on" it will run after the WAF policy is created. This is like the handler in Ansible that is started after the task is done, as Terraform sometimes creates resources in parallel, unlike Ansible, which runs task after task.

 

 

resource "bigip_command" "save-config" {
  commands = ["save sys config"]
    depends_on = [
bigip_waf_policy.test-awaf
  ]
}

 

Tested this on version:

16.1

Updated Dec 23, 2022
Version 5.0


1 Comment

  • Example of using separate plays: the first play deletes the old local ASM policy file, and then, just for group "bigip", the second play exports and imports the ASM policy. If you are using different F5 devices for production and preproduction, just make different host groups and separate plays (one play for preprod that exports and one play for prod that imports the ASM policy).

     

     

    ---
    - name: Deleting old files
      hosts: all
      connection: local

      vars_prompt:

        - name: asm_policy
          prompt: What is the name of the ASM policy?
          private: no

      tasks:

        - name: Ansible delete file example
          file:
            path: "/home/niki/asm_policy/{{ asm_policy }}"
            state: absent
          when: inventory_hostname in groups['internal']

        # Save the prompted name as a fact so the second play can use it.
        - set_fact: "asm_fact={{ asm_policy }}"

    - name: Import and export the ASM policy
      hosts: bigip
      connection: local
      become: yes

      vars:
        provider:
          password: "{{ bigip_password }}"
          server: "{{ ansible_host }}"
          user: "{{ bigip_username }}"
          validate_certs: no
          server_port: 443

      tasks:

        - name: Export the preprod policy in binary format
          bigip_asm_policy_fetch:
            name: "{{ asm_fact }}_preprod"
            file: "{{ asm_fact }}"
            dest: /home/niki/asm_policy/
            binary: yes
            provider: "{{ provider }}"

        - name: Override existing ASM policy
          bigip_asm_policy_import:
            name: "{{ asm_fact }}_prod"
            source: "/home/niki/asm_policy/{{ asm_fact }}"
            force: yes
            provider: "{{ provider }}"
          notify:
            - Save the running configuration to disk

        - name: Task - deactivate policy
          bigip_asm_policy_manage:
            name: "{{ asm_fact }}_prod"
            state: present
            provider: "{{ provider }}"
            active: no

        - name: Task - activate policy
          bigip_asm_policy_manage:
            name: "{{ asm_fact }}_prod"
            state: present
            provider: "{{ provider }}"
            active: yes

      handlers:
        - name: Save the running configuration to disk
          bigip_config:
            save: yes
            provider: "{{ provider }}"

    Another way to share the policy name variable between plays, again with facts, is to use a dummy host to attach it; this way you don't need to use "all" in the first play to attach the fact under all the hosts.

    ---
    - name: Deleting old files
      hosts: internal
      connection: local

      vars_prompt:

        - name: asm_policy
          prompt: What is the name of the ASM policy?
          private: no

      tasks:

        - name: Ansible delete file example
          file:
            path: "/home/niki/asm_policy/{{ asm_policy }}"
            state: absent

        - name: set a variable
          set_fact:
            shared_variable: "{{ asm_policy }}"

        # Attach the fact to a dummy host so other plays can read it.
        - name: add variables to dummy host
          add_host:
            name: "variable_holder"
            shared_variable: "{{ shared_variable }}"

    - name: Import and export the ASM policy
      hosts: bigip
      connection: local
      become: yes

      vars:
        asm_fact: "{{ hostvars['variable_holder']['shared_variable'] }}"
        provider:
          password: "{{ bigip_password }}"
          server: "{{ ansible_host }}"
          user: "{{ bigip_username }}"
          validate_certs: no
          server_port: 443