Advanced WAF v16.0 - Declarative API
Since v15.1 (as a draft feature), F5® BIG-IP® Advanced WAF™ can import a declarative WAF policy in JSON format. Advanced WAF security policies can be deployed using this declarative JSON format, facilitating easy integration into a CI/CD pipeline. The declarative policies are extracted from a source control system, for example Git, and imported into the BIG-IP. Using the provided declarative policy templates, you can modify the necessary parameters, save the JSON file, and import the updated security policy into your BIG-IP devices. The declarative policy copies the content of the template and adds your adjustments and modifications on top of it. The templates therefore allow you to concentrate only on the specific settings that need to be adapted for the application that the policy protects. This declarative WAF JSON policy is similar to the NGINX App Protect policy. You can find more information on the declarative policy here:

NAP: https://docs.nginx.com/nginx-app-protect/policy/
Adv. WAF: https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-declarative-security-policy.html

Audience

This guide is written for IT professionals who need to automate their WAF policy and are familiar with Advanced WAF configuration. These IT professionals can fill a variety of roles:

SecOps deploying and maintaining WAF policies in Advanced WAF
DevOps deploying applications in modern environments and looking to integrate Advanced WAF into their CI/CD pipeline
F5 partners who sell technology or create implementation documentation

This article covers how to PUSH/PULL a declarative WAF policy in Advanced WAF:

With Postman
With AS3

Table of contents

Upload Policy in BIG-IP
Check the import
Apply the policy
OpenAPI Spec File import
AS3 declaration
CI/CD integration
Find the Policy-ID
Update an existing policy
Video demonstration

First of all, you need a JSON WAF policy, as below:

{ "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "blocking", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false } } }

1. Upload Policy in BIG-IP

There are 2 options to upload a JSON file into the BIG-IP:
1.1 Either you PUSH the file into the BIG-IP and you IMPORT it, OR
1.2 the BIG-IP PULLS the file from a repository (and the IMPORT is included) <- BEST option

1.1 PUSH JSON file into the BIG-IP

The call is below. As you can notice, it requires a 'Content-Range' header, whose value is 0-(filesize-1)/filesize. In the example below, the file size is 662 bytes. This header is not easy to build in a CI/CD pipeline, so in v16.0 we created the PULL method as an alternative to the PUSH.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/file-transfer/uploads/policy-api.json' \
--header 'Content-Range: 0-661/662' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--header 'Content-Type: application/json' \
--data-binary '@/C:/Users/user/Desktop/policy-api.json'
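If you script the PUSH, the Content-Range header does not have to be computed by hand. Below is a minimal shell sketch of that idea; it is illustrative only and assumes the same BIG-IP address and basic-auth credentials used throughout this article, with the policy file in the current directory.

# Upload a JSON WAF policy, deriving the Content-Range header from the file size
FILE=policy-api.json
SIZE=$(wc -c < "$FILE")        # total size in bytes
RANGE="0-$((SIZE - 1))/$SIZE"  # e.g. 0-661/662 for a 662-byte file

curl -sk --location --request POST \
  "https://10.1.1.12/mgmt/tm/asm/file-transfer/uploads/$FILE" \
  --header "Content-Range: $RANGE" \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --header 'Content-Type: application/json' \
  --data-binary "@$FILE"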
At this stage, the policy is still a file in the BIG-IP file system. We need to import it into Adv. WAF. To do so, the next call is required. This call imports the file "policy-api.json" uploaded previously and CREATES the policy /Common/policy-api-arcadia.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/javascript' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{ "filename":"policy-api.json", "policy": { "fullPath":"/Common/policy-api-arcadia" } }'

1.2 PULL JSON file from a repository

Here, the JSON file is hosted somewhere (in GitLab or GitHub, for example), and the BIG-IP will pull it. The call is below. As you can notice, the call refers to the remote repo and the body is a JSON payload. Just change the link value to your JSON policy URL. With one call, the policy is PULLED and IMPORTED.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{ "fileReference": { "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json" } }'

A second version of this call exists, which refers to the fullPath of the policy. This allows you to easily update the policy from a new version of the JSON file: one call for both the creation and the update. As you can notice below, we add the "policy":"fullPath" directive. The value of "fullPath" is the partition and the name of the policy set in the JSON policy file. This method is VERY USEFUL for CI/CD integrations.

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{ "fileReference": { "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json" }, "policy": { "fullPath":"/Common/policy-api-arcadia" } }'

2. Check the IMPORT

Check whether the IMPORT worked. To do so, run the next call.

curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='

You should see a 200 OK, with the content below (truncated in this example). Please notice the "status":"COMPLETED".

{ "kind": "tm:asm:tasks:import-policy:import-policy-taskcollectionstate", "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy?ver=16.0.0", "totalItems": 11, "items": [ { "isBase64": false, "executionStartTime": "2020-07-21T15:50:22Z", "status": "COMPLETED", "lastUpdateMicros": 1.595346627e+15, "getPolicyAttributesOnly": false, ...

From now on, your policy is imported and created in the BIG-IP. You can assign it to a virtual server as usual (imperative call or AS3 call), but in a later section I will show you how to create a service with AS3 that includes the WAF policy.

3. APPLY the policy

As you may know, a WAF policy needs to be applied after each change. This is the call:

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/apply-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{"policy":{"fullPath":"/Common/policy-api-arcadia"}}'
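In a pipeline, the pull/import, status check, and apply steps are usually chained. The sketch below shows one illustrative way to do that; it assumes the same credentials and repository URL as above and that the jq utility is available on the runner. The per-task URL is built from the "id" field the POST returns (the same task id format shown in section 7.2 below).

HOST=https://10.1.1.12
AUTH='Authorization: Basic YWRtaW46YWRtaW4='

# Pull and import the policy; capture the task id from the response
TASK=$(curl -sk -H "$AUTH" -H 'Content-Type: application/json' \
  --request POST "$HOST/mgmt/tm/asm/tasks/import-policy/" \
  --data-raw '{ "fileReference": { "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json" }, "policy": { "fullPath":"/Common/policy-api-arcadia" } }' \
  | jq -r '.id')

# Wait for the import task to reach COMPLETED
# (a real script should also break out on a failure status rather than loop forever)
until [ "$(curl -sk -H "$AUTH" "$HOST/mgmt/tm/asm/tasks/import-policy/$TASK" | jq -r '.status')" = "COMPLETED" ]; do
  sleep 2
done

# Apply the imported policy
curl -sk -H "$AUTH" -H 'Content-Type: application/json' \
  --request POST "$HOST/mgmt/tm/asm/tasks/apply-policy/" \
  --data-raw '{"policy":{"fullPath":"/Common/policy-api-arcadia"}}'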
4. OpenAPI spec file IMPORT

As you know, Adv. WAF supports OpenAPI spec files (2.0 and 3.0). Now, with the declarative WAF, we can import the OAS file as well. The BEST solution is to PULL the OAS file from a repo, and in most customer projects this will be the case. In the example below, the OAS file is hosted in SwaggerHub (GitHub for Swagger files), but the file could reside in a private GitLab repo, for instance.

The URL of the project is: https://app.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3
The URL of the OAS file is: https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3

This swagger file (an OpenAPI 3.0 spec file) includes all the application URLs and parameters. What's more, it includes the documentation (for the NGINX APIm Dev Portal). Now it is pretty easy to create a WAF JSON policy with the API Security template that refers to the OAS file. Below, you can notice the new section "open-api-files" with the link reference to SwaggerHub, and the new template POLICY_TEMPLATE_API_SECURITY. Now, when I upload, import, and apply the policy, Adv. WAF will download the OAS file from SwaggerHub and create the policy based on the API Security template.

{ "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "blocking", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "https://api.swaggerhub.com/apis/F5EMEASSA/Arcadia-OAS3/1.0.0-oas3" } ] } }

5. AS3 declaration

Now it is time to learn how we can do all of these steps in one call with AS3 (3.18 minimum). The documentation is here: https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/application-security.html?highlight=waf_policy#virtual-service-referencing-an-external-security-policy

With this AS3 declaration, we:

Import the WAF policy from an external repo
Import the Swagger file (if the WAF policy refers to an OAS file) from an external repo
Create the service

{ "class": "AS3", "action": "deploy", "persist": true, "declaration": { "class": "ADC", "schemaVersion": "3.2.0", "id": "Prod_API_AS3", "API-Prod": { "class": "Tenant", "defaultRouteDomain": 0, "API": { "class": "Application", "template": "generic", "VS_API": { "class": "Service_HTTPS", "remark": "Accepts HTTPS/TLS connections on port 443", "virtualAddresses": ["10.1.10.27"], "redirect80": false, "pool": "pool_NGINX_API_AS3", "policyWAF": { "use": "Arcadia_WAF_API_policy" }, "securityLogProfiles": [{ "bigip": "/Common/Log all requests" }], "profileTCP": { "egress": "wan", "ingress": { "use": "TCP_Profile" } }, "profileHTTP": { "use": "custom_http_profile" }, "serverTLS": { "bigip": "/Common/arcadia_client_ssl" } }, "Arcadia_WAF_API_policy": { "class": "WAF_Policy", "url": "http://10.1.20.4/root/as3-waf-api/-/raw/master/policy-api.json", "ignoreChanges": true }, "pool_NGINX_API_AS3": { "class": "Pool", "monitors": ["http"], "members": [{ "servicePort": 8080, "serverAddresses": ["10.1.20.9"] }] }, "custom_http_profile": { "class": "HTTP_Profile", "xForwardedFor": true }, "TCP_Profile": { "class": "TCP_Profile", "idleTimeout": 60 } } } } }
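To try the declaration by hand before wiring it into a pipeline, you can POST it directly to the AS3 endpoint that the Ansible playbook in the next section calls. A minimal sketch, assuming the declaration above is saved locally as as3.json:

# POST the AS3 declaration; AS3 pulls the WAF policy (and OAS file) and creates the service
curl -sk --location --request POST 'https://10.1.1.12/mgmt/shared/appsvcs/declare' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic YWRtaW46YWRtaW4=' \
  --data-binary '@as3.json'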
6. CI/CD integration

As you can notice, it is very easy to create a service with a WAF policy pulled from an external repo. So it is easy to integrate these calls (or the AS3 call) into a CI/CD pipeline. Below is an Ansible playbook example that runs the AS3 call above. That's it :)

---
- hosts: bigip
  connection: local
  gather_facts: false
  vars:
    my_admin: "admin"
    my_password: "admin"
    bigip: "10.1.1.12"
  tasks:
    - name: Deploy AS3 WebApp
      uri:
        url: "https://{{ bigip }}/mgmt/shared/appsvcs/declare"
        method: POST
        headers:
          "Content-Type": "application/json"
          "Authorization": "Basic YWRtaW46YWRtaW4="
        body: "{{ lookup('file','as3.json') }}"
        body_format: json
        validate_certs: no
        status_code: 200

7. FIND the Policy-ID

When the policy is created, a Policy-ID is assigned. By default, this ID doesn't appear anywhere: neither in the GUI, nor in the response after the creation. You have to calculate it or ask for it. This ID is required for several actions in a CI/CD pipeline.

7.1 Calculate the Policy-ID

We created this Python script to calculate the Policy-ID. It is a hash of the policy name (including the partition). For the previously created policy named "/Common/policy-api-arcadia", the Policy-ID is "Ar5wrwmFRroUYsMA6DuxlQ". Paste this Python code into a new waf-policy-id.py file and run the command python waf-policy-id.py "/Common/policy-api-arcadia". The outcome will be: The Policy-ID for /Common/policy-api-arcadia is: Ar5wrwmFRroUYsMA6DuxlQ

#!/usr/bin/python3
from hashlib import md5
import base64
import sys

# The Policy-ID is the base64-encoded MD5 digest of the full policy path,
# with the base64 padding stripped
pname = sys.argv[1]
policy_id = base64.b64encode(md5(pname.encode()).digest()).decode().replace("=", "")
print('The Policy-ID for', pname, 'is:', policy_id)
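If Python is not available on the pipeline runner, the same value can be derived with standard shell tools. This one-liner reproduces the script's logic (MD5 digest of the full path, base64-encoded, padding stripped):

# Equivalent shell calculation of the Policy-ID
echo -n "/Common/policy-api-arcadia" | openssl md5 -binary | base64 | tr -d '='
# -> Ar5wrwmFRroUYsMA6DuxlQ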
7.2 Retrieve the Policy-ID and fullPath with a REST API call

Make the call below, and you will see all the policy import tasks in the response. Find yours and collect the policyReference directive. The Policy-ID is in the link value: "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0". You can see as well, at the end of the definition, the "fileReference" referring to the JSON file pulled by the BIG-IP. And please notice the "fullPath", which is required if you want to update your policy.

curl --location --request GET 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Range: 0-601/601' \
--header 'Authorization: Basic YWRtaW46YWRtaW4='

{ "isBase64": false, "executionStartTime": "2020-07-22T11:23:42Z", "status": "COMPLETED", "lastUpdateMicros": 1.595417027e+15, "getPolicyAttributesOnly": false, "kind": "tm:asm:tasks:import-policy:import-policy-taskstate", "selfLink": "https://localhost/mgmt/tm/asm/tasks/import-policy/B45J0ySjSJ9y9fsPZ2JNvA?ver=16.0.0", "filename": "", "policyReference": { "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0", "fullPath": "/Common/policy-api-arcadia" }, "endTime": "2020-07-22T11:23:47Z", "startTime": "2020-07-22T11:23:42Z", "id": "B45J0ySjSJ9y9fsPZ2JNvA", "retainInheritanceSettings": false, "result": { "policyReference": { "link": "https://localhost/mgmt/tm/asm/policies/Ar5wrwmFRroUYsMA6DuxlQ?ver=16.0.0", "fullPath": "/Common/policy-api-arcadia" }, "message": "The operation was completed successfully. The security policy name is '/Common/policy-api-arcadia'. " }, "fileReference": { "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json" } },

8. UPDATE an existing policy

It is pretty easy to update the WAF policy from a new JSON file version. To do so, collect the "policy" and "fullPath" directives from the previous call (7.2 Retrieve the Policy-ID and fullPath with a REST API call); this is the path of the policy on the BIG-IP. Then run the call below, the same as in 1.2 PULL JSON file from a repository, but with the policy and fullPath directives added. Don't forget to APPLY this new version of the policy (see 3. APPLY the policy).

curl --location --request POST 'https://10.1.1.12/mgmt/tm/asm/tasks/import-policy/' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic YWRtaW46YWRtaW4=' \
--data-raw '{ "fileReference": { "link": "http://10.1.20.4/root/as3-waf/-/raw/master/policy-api.json" }, "policy": { "fullPath":"/Common/policy-api-arcadia" } }'

TIP: this call can also be used in place of the FIRST call when we created the policy (1.2 PULL JSON file from a repository). But be careful: the fullPath is the name set in the JSON policy file. The 2 values need to match:

"name": "policy-api-arcadia" in the JSON policy file pulled by the BIG-IP
"policy":"fullPath" in the POST call

9. Video demonstration

In order to help you understand how it looks on the BIG-IP, I created this video covering 4 topics explained in this article:

The JSON WAF policy
Pulling the policy from a remote repository
Updating the WAF policy with a new version of the declarative JSON file
Deploying a full service with AS3 and a declarative WAF policy

At the end of this video, you will be able to adapt the REST declarative API calls to your infrastructure, in order to deploy protected services with your CI/CD pipelines. Direct link to the video on the DevCentral YouTube channel: https://youtu.be/EDvVwlwEFRw

Declarative Advanced WAF policy lifecycle in a CI/CD pipeline
The purpose of this article is to show the configuration used to deploy a declarative Advanced WAF policy to a BIG-IP and automatically configure it to protect an API workload by consuming an OpenAPI file describing the application. For this experiment, a GitLab CI/CD pipeline was used to deploy an API workload to Kubernetes, configure a declarative Adv. WAF policy on a BIG-IP device, and tune it by incorporating learning suggestions exported from the BIG-IP. Lastly, the F5 WAF tester tool was used to determine and improve the defensive posture of the Adv. WAF policy.

Deploying the declarative Advanced WAF policy through a CI/CD pipeline

To deploy the Adv. WAF policy, the GitLab CI/CD pipeline calls an Ansible playbook that will in turn deploy an AS3 application referencing the Adv. WAF policy from a separate JSON file. This allows the application definition and WAF policy to be managed by 2 different groups, for example NetOps and SecOps, supporting separation of duties. The following Ansible playbook was used:

---
- hosts: bigip
  connection: local
  gather_facts: false
  vars:
    my_admin: "xxxx"
    my_password: "xxxx"
    bigip: "xxxx"
  tasks:
    - name: Deploy AS3 API AWAF policy
      uri:
        url: "https://{{ bigip }}/mgmt/shared/appsvcs/declare"
        method: POST
        headers:
          "Content-Type": "application/json"
          "Authorization": "Basic xxxxxxxxxx"
        body: "{{ lookup('file','as3_waf_openapi.json') }}"
        body_format: json
        validate_certs: no
        status_code: 200

The AS3 declaration 'as3_waf_openapi.json' was specified as follows:

{ "class": "AS3", "action": "deploy", "persist": true, "declaration": { "class": "ADC", "schemaVersion": "3.2.0", "id": "Prod_API_AS3", "API-Prod": { "class": "Tenant", "defaultRouteDomain": 0, "arcadia": { "class": "Application", "template": "generic", "VS_API": { "class": "Service_HTTPS", "remark": "Accepts HTTPS/TLS connections on port 443", "virtualAddresses": ["xxxxx"], "redirect80": false, "pool": "pool_NGINX_API", "policyWAF": { "use": "Arcadia_WAF_API_policy" }, "securityLogProfiles": [{ "bigip": "/Common/Log all requests" }], "profileTCP": { "egress": "wan", "ingress": { "use": "TCP_Profile" } }, "profileHTTP": { "use": "custom_http_profile" }, "serverTLS": { "bigip": "/Common/arcadia_client_ssl" } }, "Arcadia_WAF_API_policy": { "class": "WAF_Policy", "url": "http://xxxx/root/awaf_openapi/-/raw/master/WAF/ansible/bigip/policy-api.json", "ignoreChanges": true }, "pool_NGINX_API": { "class": "Pool", "monitors": ["http"], "members": [{ "servicePort": 8080, "serverAddresses": ["xxxx"] }] }, "custom_http_profile": { "class": "HTTP_Profile", "xForwardedFor": true }, "TCP_Profile": { "class": "TCP_Profile", "idleTimeout": 60 } } } } }

The AS3 declaration will provision a separate administrative partition ('API-Prod') containing a virtual server ('VS_API'), an Adv. WAF policy ('Arcadia_WAF_API_policy'), and a pool ('pool_NGINX_API'). The Adv. WAF policy being referenced ('policy-api.json') is stored in the same GitLab repository but can be downloaded from a separate location.

{ "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "transparent", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "http://xxxx/root/awaf_openapi/-/raw/master/App/openapi3-arcadia.yaml" } ] }, "modifications": [ ] }

The declarative Adv. WAF policy in turn references the OpenAPI file ('openapi3-arcadia.yaml') that describes the application being protected.
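Running the playbook from the pipeline (or by hand, for testing) is then a single step. An illustrative invocation, assuming the playbook is saved as deploy_awaf.yml and an inventory file defines the bigip host group:

# Run the deployment playbook against the hosts in the "bigip" group
ansible-playbook -i inventory deploy_awaf.yml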
{ "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "transparent", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "http://xxxx/root/awaf_openapi/-/raw/master/App/openapi3-arcadia.yaml" } ] }, "modifications": [ ] } The declarative Adv.WAF policy is referencing in turn the OpenAPI file ('openapi3-arcadia.yaml') that describes the application being protected. Executing the Ansible playbook results in the AS3 application being deployed, along with the Adv.WAF policy that is automatically configured according to the OpenAPI file. Handling learning suggestions in a CI/CD pipeline The next step in the CI/CD pipeline used for this experiment was to send legitimate traffic using the API and collect the learning suggestions generated by the Adv.WAF policy, which will allow a simple way to customize the WAF policy further for the specific application being protected. The following Ansible playbook was used to retrieve the learning suggestions: --- - hosts: bigip connection: local gather_facts: true vars: my_admin: "xxxx" my_password: "xxxx" bigip: "xxxxx" tasks: - name: Get all Policy_key/IDs for WAF policies uri: url: 'https://{{ bigip }}/mgmt/tm/asm/policies?$select=name,id' method: GET headers: "Authorization": "Basic xxxxxxxxxxx" validate_certs: no status_code: 200 return_content: yes register: waf_policies - name: Extract Policy_key/ID of Arcadia_WAF_API_policy set_fact: Arcadia_WAF_API_policy_ID="{{ item.id }}" loop: "{{ (waf_policies.content|from_json)['items'] }}" when: item.name == "Arcadia_WAF_API_policy" - name: Export learning suggestions uri: url: "https://{{ bigip }}/mgmt/tm/asm/tasks/export-suggestions" method: POST headers: "Content-Type": "application/json" "Authorization": "Basic xxxxxxxxxxx" body: "{ \"inline\": \"true\", \"policyReference\": { \"link\": \"https://{{ bigip }}/mgmt/tm/asm/policies/{{ Arcadia_WAF_API_policy_ID }}/\" } }" body_format: json validate_certs: no status_code: - 200 - 201 - 202 - name: Get learning suggestions uri: url: "https://{{ bigip }}/mgmt/tm/asm/tasks/export-suggestions" method: GET headers: "Authorization": "Basic xxxxxxxxx" validate_certs: no status_code: 200 register: result - name: Print learning suggestions debug: var=result A sample learning suggestions output is shown below: "json": { "items": [ { "endTime": "xxxxxxxxxxxxx", "id": "ZQDaRVecGeqHwAW1LDzZTQ", "inline": true, "kind": "tm:asm:tasks:export-suggestions:export-suggestions-taskstate", "lastUpdateMicros": 1599953296000000.0, "result": { "suggestions": [ { "action": "add-or-update", "description": "Enable Evasion Technique", "entity": { "description": "Directory traversals" }, "entityChanges": { "enabled": true }, "entityType": "evasion" }, { "action": "add-or-update", "description": "Enable HTTP Check", "entity": { "description": "Check maximum number of parameters" }, "entityChanges": { "enabled": true }, "entityType": "http-protocol" }, { "action": "add-or-update", "description": "Enable HTTP Check", "entity": { "description": "No Host header in HTTP/1.1 request" }, "entityChanges": { "enabled": true }, "entityType": "http-protocol" }, { "action": "add-or-update", "description": "Enable enforcement of policy violation", "entity": { "name": 
"VIOL_REQUEST_MAX_LENGTH" }, "entityChanges": { "alarm": true, "block": true }, "entityType": "violation" } Incorporating the learning suggestions in the Adv.WAF policy can be done by simple copy&pasting the self-contained learning suggestions blocks into the "modifications" list of the Adv.WAF policy: { "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "transparent", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "http://xxxxxx/root/awaf_openapi/-/raw/master/App/openapi3-arcadia.yaml" } ] }, "modifications": [ { "action": "add-or-update", "description": "Enable Evasion Technique", "entity": { "description": "Directory traversals" }, "entityChanges": { "enabled": true }, "entityType": "evasion" } ] } Enhancing Advanced WAF policy posture by using the F5 WAF tester The F5 WAF tester is a tool that generates known attacks and checks the response of the WAF policy. For example, running the F5 WAF tester against a policy that has a "transparent" enforcement mode will cause the tests to fail as the attacks will not be blocked. The F5 WAF tester can suggest possible enhancement of the policy, in this case the change of the enforcement mode. An abbreviated sample output of the F5 WAF Tester: ................................................................ "100000023": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt (AWS Metadata Server)", "results": { "parameter": { "expected_result": { "type": "signature", "value": "200018040" }, "pass": false, "reason": "ASM Policy is not in blocking mode", "support_id": "" } }, "system": "All systems" }, "100000024": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt - Local network IP range 10.x.x.x", "results": { "request": { "expected_result": { "type": "signature", "value": "200020201" }, "pass": false, "reason": "ASM Policy is not in blocking mode", "support_id": "" } }, "system": "All systems" } }, "summary": { "fail": 48, "pass": 0 } Changing the enforcement mode from "transparent" to "blocking" can easily be done by editing the same Adv. WAF policy file: { "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "blocking", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "http://xxxxx/root/awaf_openapi/-/raw/master/App/openapi3-arcadia.yaml" } ] }, "modifications": [ { "action": "add-or-update", "description": "Enable Evasion Technique", "entity": { "description": "Directory traversals" }, "entityChanges": { "enabled": true }, "entityType": "evasion" } ] } A successful run will will be achieved when all the attacks will be blocked. ......................................... 
"100000023": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt (AWS Metadata Server)", "results": { "parameter": { "expected_result": { "type": "signature", "value": "200018040" }, "pass": true, "reason": "", "support_id": "17540898289451273964" } }, "system": "All systems" }, "100000024": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt - Local network IP range 10.x.x.x", "results": { "request": { "expected_result": { "type": "signature", "value": "200020201" }, "pass": true, "reason": "", "support_id": "17540898289451274344" } }, "system": "All systems" } }, "summary": { "fail": 0, "pass": 48 } Conclusion By adding the Advanced WAF policy into a CI/CD pipeline, the WAF policy can be integrated in the lifecycle of the application it is protecting, allowing for continuous testing and improvement of the security posture before it is deployed to production. The flexible model of AS3 and declarative Advanced WAF allows the separation of roles and responsibilities between NetOps and SecOps, while providing an easy way for tuning the policy to the specifics of the application being protected. Links UDF lab environment link. Short instructional video link.2.7KViews3likes2CommentsBIG-IP Orchestrate in private Data Center using Terraform Cloud Agent
What is Terraform Cloud?

Terraform Cloud offers organizations a unified workflow for provisioning their cloud, private data center, and SaaS infrastructure, ensuring continuous infrastructure management throughout its entire lifecycle.

What is F5 BIG-IP?

BIG-IP is a collection of hardware platforms and software solutions providing services focused on security, reliability, and performance. It delivers and load balances applications securely and at scale, and it can be deployed in private or public clouds.

BIG-IP in a private Data Center

When BIG-IP is in a private data center, it has a private IP, making it tricky to reach with tools like Terraform Cloud from the outside. However, if you're dealing with both private and public clouds using Terraform Cloud, you can use Terraform Cloud agents. These agents help control BIG-IP in private data centers, even when the IP isn't accessible externally.

BIG-IP supports Application Services 3 (AS3) and FAST templates, which makes it a natural fit for Terraform, particularly because the BIG-IP Terraform provider ships with resources designed specifically for deploying AS3 and FAST templates. AS3 and FAST templates are powerful tools for configuring and managing BIG-IP application services. AS3 simplifies the process of defining, managing, and deploying application-related configurations, providing a declarative model for specifying how applications should be set up on BIG-IP devices. FAST extends automation capabilities by incorporating telemetry functionality, enhancing monitoring and reporting. The integration with Terraform is pivotal in this context, as the BIG-IP Terraform provider facilitates the seamless incorporation of AS3 and FAST templates into infrastructure-as-code (IaC) workflows. An example Terraform configuration is at https://github.com/scshitole/privateDC

How do you orchestrate BIG-IP in a private Data Center?

In this illustrative scenario, the BIG-IP system is operational within a private data center. A virtual machine has been configured to host the Terraform Cloud agent, encapsulated within a container. The Terraform configuration pertinent to our deployment resides in the GitHub repository https://github.com/scshitole/privateDC

Let us now turn our attention to the Terraform agent: an agile and lightweight component capable of executing within a container on a virtual machine. Its primary function is to establish and maintain a secure connection with Terraform Cloud, perpetually polling for instructions. The agent not only retrieves directives from Terraform Cloud but also acquires essential information about the Terraform workspace and the TF configuration. What distinguishes this agent is its seamless operation without requiring changes to existing firewall rules: its communication with Terraform Cloud occurs over HTTPS and only demands appropriately configured DNS settings. When a Terraform plan is queued, the control plane dispatches the configuration to the agent, which then retrieves the workspace and orchestrates the deployment of the configuration onto the BIG-IP infrastructure.
What advantages does this solution offer?

Automation workflows with Terraform Cloud: By leveraging Terraform Cloud, we gain the capability to establish automated workflows for configuring BIG-IP. This not only streamlines the configuration process but also enhances efficiency through the power of automation. Additionally, Terraform Cloud enables the orchestration of BIG-IP Web Application Firewall (WAF) configurations in a hybrid cloud environment, providing a comprehensive solution for managing security across diverse infrastructures.

Enhanced security with private IPs and credentials kept confidential: The solution ensures a robust security posture by maintaining the confidentiality of the infrastructure's private IP addresses and credentials. This practice prevents security sprawl and unauthorized access attempts, fortifying the integrity of the entire system.

Seamless BIG-IP configuration migration: The flexibility of BIG-IP configuration migration is a notable advantage, allowing for a smooth transition between private and public cloud environments. This bidirectional migration capability ensures adaptability to evolving infrastructure needs. Whether moving configurations from a private cloud to a public cloud or vice versa, it provides agility and scalability in infrastructure management.

How do you set up the configuration on Terraform Cloud?

- Once logged into Terraform Cloud, choose your organization from the available options.
- Go to the Projects & Workspaces section and opt for the Version Control Workflow.
- The BIG-IP Terraform configuration template resides in the GitHub repository; choose the relevant repository.
- Choose the correct GitHub repository; it should be visible here.
- Now provide a name for the workspace. Feel free to select something relevant, but it must be unique.
- Enter the variables here, including details such as the BIG-IP's IP address, username, and password. Make sure to choose the HCL option; if needed, you can mark sensitive values as not visible.
- Then go to the established workspace and click on "New run".
- Go to the Agents section and select Create Agent Pool.
- Enter a fitting name for the agent pool, as illustrated. You can opt for a unique name matching the workspace for easy identification, although it's not mandatory.
- Provide a suitable description for the agent pool, explaining its specific purpose or activities. Then click "Generate Token"; you will need this token when running the agent.
- Copy the newly generated token and follow the outlined steps to configure your agents. Run the provided docker command, including the essential environment variables TFC_AGENT_TOKEN and TFC_AGENT_NAME. If desired, you can also run the container in the background using the appropriate docker option.
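For reference, a typical agent launch looks like the sketch below. It is illustrative only: the two environment variable names come from the steps above, while the image name and flags should be checked against HashiCorp's current tfc-agent documentation.

# Token generated for the agent pool, plus a recognizable agent name
export TFC_AGENT_TOKEN="<token copied from Terraform Cloud>"
export TFC_AGENT_NAME="bigip-private-dc-agent"

# Run the agent in the background; it dials out to Terraform Cloud over HTTPS,
# so no inbound firewall rules are required
docker run -d --name tfc-agent \
  -e TFC_AGENT_TOKEN -e TFC_AGENT_NAME \
  hashicorp/tfc-agent:latest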
Key Takeaways

Terraform Cloud streamlines infrastructure provisioning and management across various environments for consistent lifecycle control.
Terraform Cloud agents enable effective orchestration of BIG-IP configurations in private data centers, addressing the challenges associated with private IPs.
The seamless integration of BIG-IP, AS3, FAST templates, and Terraform supports efficient infrastructure-as-code workflows, especially in multi-cloud setups.
Terraform Cloud facilitates automated workflows, simplifies BIG-IP configurations, and supports orchestrating Web Application Firewall (WAF) setups in hybrid cloud environments.
Emphasizing security, the solution keeps private IPs and credentials confidential, preventing security sprawl and unauthorized access.
The solution offers flexibility by allowing seamless BIG-IP configuration migration between private and public cloud environments, ensuring adaptability and scalability.

For more details, please watch the accompanying video: https://youtu.be/RgCqnDxpf3E

Creating a Docker Container to Run AS3 Declarations
This guide will take you through some very basic Docker, Python, and F5 AS3 configuration to create a single-function container that will update a pre-determined BIG-IP using an AS3 declaration stored on GitHub. While it's far from production ready, it might serve as a basis for more complex configurations, plus it illustrates nicely some technology you can use to automate BIG-IP configuration using AS3, Python, and containers. I'm starting with a running BIG-IP - in this case a VE running on the Google Cloud Platform, with the AS3 worker installed and provisioned - plus a couple of webservers listening on different ports.

First we're going to need a host running Docker. Fire up an instance on the platform of your choice – in this example I'm using Ubuntu 18.04 LTS on the Google Cloud Platform – that's purely from familiarity – anything that can run Docker will do. The install process is well documented but looks a bit like this:

$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

It's worth adding your user to the docker group to avoid repeatedly forgetting to type sudo (or is that just me?):

$ sudo usermod -aG docker $USER

Next let's test it's all working:

$ docker run hello-world

Next let's take a look at the AS3 declaration. As you might expect from me by now, it's the most basic version – a simple HTTP app with two pool members. The beauty of the AS3 model, of course, is that it doesn't matter how complex your declaration is: the implementation is always the same. So you could take a much more involved declaration and, just by changing the file the Python script uses, get a more complex configuration.

{ "class": "AS3", "action": "deploy", "persist": true, "declaration": { "class": "ADC", "schemaVersion": "3.0.0", "id": "urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d", "label": "Sample 1", "remark": "Simple HTTP Service with Round-Robin Load Balancing", "Sample_01": { "class": "Tenant", "A1": { "class": "Application", "template": "http", "serviceMain": { "class": "Service_HTTP", "virtualAddresses": [ "10.138.0.4" ], "pool": "web_pool" }, "web_pool": { "class": "Pool", "monitors": [ "http" ], "members": [ { "servicePort": 8080, "serverAddresses": [ "10.138.0.3" ] }, { "servicePort": 8081, "serverAddresses": [ "10.138.0.3" ] } ] } } } } }

Now we need some Python code to fire up our request. The code below is absolutely a minimum viable set that's been written for simplicity and clarity and does minimal error checking. There are more ways to improve it than there are lines of code in it, but it will get you started.
# Python code to run an AS3 declaration
import requests
import os
from requests.auth import HTTPBasicAuth

# Get rid of the annoying insecure requests warning
from requests.packages.urllib3.exceptions import InsecureRequestWarning
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Declaration location
GITDRC = 'https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json'
IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']
URLBASE = 'https://' + IP + ':' + PORT
TESTPATH = '/mgmt/shared/appsvcs/info'
AS3PATH = '/mgmt/shared/appsvcs/declare'

print("########### Fetching Declaration ###########")
d = requests.get(GITDRC)

# Check we have connectivity and AS3 is installed
print('########### Checking that AS3 is running on', IP, '#########')
url = URLBASE + TESTPATH
r = requests.get(url, auth=HTTPBasicAuth(USER, PASS), verify=False)
if r.status_code == 200:
    data = r.json()
    if data["version"]:
        print('AS3 version is', data["version"])
        print('########## Running Declaration #############')
        url = URLBASE + AS3PATH
        headers = {
            'content-type': 'application/json',
            'accept': 'application/json'
        }
        r = requests.post(url, auth=HTTPBasicAuth(USER, PASS), verify=False,
                          data=d.text, headers=headers)
        print('Status Code:', r.status_code, '\n', r.text)
else:
    print('AS3 test to', IP, 'failed:', r.text)

This simple Python code will pull down an AS3 declaration from GitHub using the 'requests' Python library and the GITDRC variable, connect to a specific BIG-IP, test that it's running AS3 (see here for AS3 setup instructions), and then apply the declaration. It will give you some tracing output, but that's about it. There are a couple of things to note about IPs, users, and passwords:

IP = '10.138.0.4'
PORT = '8443'
USER = os.environ['XUSER']
PASS = os.environ['XPASS']

As you can see, I've set the IP and port statically, and the username and password are pulled in from environment variables in the container. We'll talk more about the environment variables below, but this is more a way to illustrate your options than design advice.

Now we need to build a container to run it in. Containers are relatively easy to build with just a Dockerfile and a few more text files in a directory. Here's the Dockerfile:

FROM python:3
WORKDIR /usr/src/app
ARG Username=admin
ENV XUSER=$Username
ARG Password=admin
ENV XPASS=$Password
# The two lines below are not actually used - see comments - but there is probably a better way
ARG DecURL=https://raw.githubusercontent.com/RuncibleSpoon/as3/master/declarations/payload.json
ENV Declaration=$DecURL
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENTRYPOINT [ "python", "./as3.py" ]

You can see a couple of ARG and ENV statements; these simply set the environment variables that we're (somewhat arbitrarily) using in the Python script. Furthermore, we're going to override them in the build command later. It's worth noting this isn't a way to obfuscate passwords: they are exposed by a simple $ docker image history command that will reveal all sorts of things about the build of the container, including the environment variables passed to it. This can be overcome by a multi-stage build, but proper secret management is something you should explore – comment below if you'd like some examples.
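As a small illustration of that point: because as3.py reads XUSER and XPASS from its environment, one simple improvement (short of a proper secrets manager) is to drop the credential ARG/ENV lines from the Dockerfile and supply the values only at run time, so they never land in an image layer. A sketch:

# Credentials come from the shell environment at run time, not from the image,
# so "docker image history" has nothing to reveal
docker run --tty --rm \
  -e XUSER="$XUSER" \
  -e XPASS="$XPASS" \
  runciblespoon/as3_python:A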
What’s this requirements.txt file mentioned in the Dockerfile it’s just a manifest to the install of the python package we need: # This file is used by pip to install required python packages # Usage: pip install -r requirements.txt # requests package requests==2.21.0 With our Dockerfile, requirements.txt and as3.py files in a directory we're ready to build a container - in this case I'm going to pass some environment variables into the build to be incorporated in the image - and replace the ones we have set in the Dockerfile: $ export XUSER=admin $ export XPASS=admin Build the container (the -t flag names and tags your container, more of which later): $ docker build -t runciblespoon/as3_python:A --build-arg Username=$XUSER --build-arg Password=$XPASS . The first time you do this there will be some lag as files for the python3 source container are downloaded and cached, but once it has run you should be able to see your image: $ docker image list REPOSITORY TAG IMAGE ID CREATED SIZE runciblespoon/as3_python A 819dfeaad5eb About a minute ago 936MB python 3 954987809e63 42 hours ago 929MB hello-world latest fce289e99eb9 3 months ago 1.84kB Now we are ready to run the container maybe a good time for a 'nothing up my sleeve' moment - here is the state of the BIG-IP before hand Now let's run the container from our image. The -tty flag attached a pseudo terminal for output and --rm deletes the container afterwards: $ docker run --tty --rm runciblespoon/as3_python:A ########### Fetching Declaration ########### ########### Checking that AS3 is running on 10.138.0.4 ######### AS3 version is 3.10.0 ########## Runing Declaration ############# Status Code: 200 {"results":[{"message":"success","lineCount":23,"code":200,"host":"localhost","tenant":"Sample_01","runTime":929}],"declaration":{"class":"ADC","schemaVersion":"3.0.0","id":"urn:uuid:33045210-3ab8-4636-9b2a-c98d22ab915d","label":"Sample 1","remark":"Simple HTTP Service with Round-Robin Load Balancing","Sample_01":{"class":"Tenant","A1":{"class":"Application","template":"http","serviceMain":{"class":"Service_HTTP","virtualAddresses":["10.138.0.4"],"pool":"web_pool"},"web_pool":{"class":"Pool","monitors":["http"],"members":[{"servicePort":8080,"serverAddresses":["10.138.0.3"]},{"servicePort":8081,"serverAddresses":["10.138.0.3"]}]}}},"updateMode":"selective","controls":{"archiveTimestamp":"2019-04-26T18:40:56.861Z"}}} Success, by the looks of things. Let's check the BIG-IP: running our container has pulled down the AS3 declaration, and applied it to the BIG-IP. This same container can now be run repeatedly - and only the AS3 declaration stored in git (or anywhere else your container can get it from) needs to change. So now you have this container running locally you might want ot put it somewhere. Docker hub is a good choice and lets you create one private repository for free. Remember this container image has credentials, so keep it safe and private. Now the reason for the -t runciblespoon/as3_python:A flag earlier. My docker hub user is "runciblespoon" and my private repository is as3_python. So now all i need ot do is login to Docker Hub and push my image there: $ docker login $ docker push runciblespoon/as3_python:B Now I can go to any other host that runs Docker, login to Docker hub and run my container: $ docker login $ docker run --tty --rm runciblespoon/as3_python:A Unable to find image 'runciblespoon/as3_python:A' locally B: Pulling from runciblespoon/as3_python ... 
Now I can go to any other host that runs Docker, log in to Docker Hub, and run my container:

$ docker login
$ docker run --tty --rm runciblespoon/as3_python:A
Unable to find image 'runciblespoon/as3_python:A' locally
A: Pulling from runciblespoon/as3_python
...
########### Fetching Declaration ###########

Docker will pull down my container from my private repo and run it, using the AS3 declaration I've specified. If I want to change my config, I just change the declaration and run it again. Hopefully this article gives you a starting point to develop your own containers, Python scripts, or AS3 declarations. I'd be interested in what more you would like to see, so please ask away in the comments section.

AS3 Best Practice
Introduction

AS3 is a declarative API that uses JSON key-value pairs to describe a BIG-IP configuration. From virtual IP to virtual server, to the members, pools, and nodes required, AS3 provides a simple, readable format in which to describe a configuration. Once you've got the configuration, all that's needed is to POST it to the BIG-IP, where the AS3 extension will happily accept it and execute the commands necessary to turn it into a fully functional, deployed BIG-IP configuration. If you are new to AS3, start reading the following references:

Products - Automation and orchestration toolchain (f5.com; Product information)
Application Services 3 Extension Documentation (clouddocs; API documentation and guides)
F5 Application Services 3 Extension (AS3) (GitHub; Source repository)

This article describes some considerations for deploying AS3 configurations efficiently.

Architecture

In the TMOS space, the services that AS3 provides are processed by a daemon named 'restnoded'. It relies on the existing BIG-IP framework for deploying declarations. The framework consists of httpd, restjavad, and icrd_child, as depicted below (the numbers in parentheses are listening TCP port numbers). These processes are also used by other services. For example, restjavad is a gateway for all iControl REST requests and is used by a number of services on BIG-IP and BIG-IQ. When an interaction between any of the processes fails, the AS3 operation fails. The failures stem from lack of resources, timeouts, data exceeding predefined thresholds, resource contention among the services, and more. In order to complete AS3 operations successfully, it is advised to follow the best practices outlined below.

Best Practice

Your single source of truth is your declaration

Refrain from overwriting AS3-deployed BIG-IP configurations by other means, such as TMSH, the GUI, or iControl REST calls. Once you have started to use the AS3 declarative model, the source of truth for your device's configuration is your declaration, not the BIG-IP configuration files. Although AS3 tries to take locally stored BIG-IP configurations into account as much as it can, a discrepancy between the declaration and the current configuration on the BIG-IP may cause AS3 to perform less efficiently or to error unexpectedly. When you wish to change a section of a tenant (e.g., a pool name change), modify the declaration and submit it.

Keep the number of applications in one tenant to a minimum

AS3 processes each tenant separately. Having too many applications (virtual servers) in a single tenant (partition) results in a lengthy poll when determining the current configuration. In extreme cases (thousands of virtuals), the action may time out. When you want to deploy a thousand or more applications on a single device, consider chunking the work for AS3 by spreading the applications across multiple tenants (say, 100 applications per tenant).

AS3 tenant access behaves like BIG-IP partition access. A non-Common partition virtual cannot gain access to another partition's pool, and in the same way, an AS3 application does not have access to a pool or profile in another tenant. In order to share configuration across tenants, AS3 allows configuration of the "Shared" application within the "Common" tenant. AS3 avoids race conditions while configuring /Common/Shared by processing additions first and deletions last, as shown below. This dual process may cause some additional delay in declaration handling.
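As a concrete illustration of the sharing pattern, the minimal declaration below defines a pool under /Common/Shared and references it from another tenant by its full path. This is a sketch only: the tenant, pool, and addresses are invented for the example, and <big-ip> stands in for your device address.

curl -sk -u admin:admin -H 'Content-Type: application/json' \
  https://<big-ip>/mgmt/shared/appsvcs/declare -d @- <<'EOF'
{
  "class": "AS3",
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.18.0",
    "id": "shared-pool-example",
    "Common": {
      "class": "Tenant",
      "Shared": {
        "class": "Application",
        "template": "shared",
        "shared_pool": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [{ "servicePort": 80, "serverAddresses": ["192.0.2.10"] }]
        }
      }
    },
    "TenantA": {
      "class": "Tenant",
      "AppA": {
        "class": "Application",
        "template": "http",
        "serviceMain": {
          "class": "Service_HTTP",
          "virtualAddresses": ["192.0.2.20"],
          "pool": { "use": "/Common/Shared/shared_pool" }
        }
      }
    }
  }
}
EOF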
Overwrite rather than patching (POSTing is a more efficient practice than PATCHing)

AS3 is a stateless machine and is idempotent. It polls BIG-IP for its full configuration, performs a current-vs-desired state comparison, and generates an optimal set of REST calls to fill the differences. When the initial state of BIG-IP is blank, the poll time is negligible. This is why initial configuration with AS3 is often quicker than subsequent changes, especially when the tenant contains a large number of applications.

AS3 provides the means to partially modify a configuration using PATCH (see AS3 API Methods Details), but do not expect PATCH changes to be performant. AS3 processes each PATCH by (1) performing a GET to obtain the last declaration, (2) patching that declaration, and (3) POSTing the entire declaration to itself. A PATCH of one pool member is therefore slower than a POST of your entire tenant configuration. If you decide to use PATCH, make sure that the tenant configuration is a manageable size.

Note: Using PATCH to make a surgical change is convenient, but using PATCH over POST breaks the declarative model. Your declaration should be your single source of truth. If you include PATCH, the source of truth becomes "POST this file, then apply one or more PATCH declarations."
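For completeness, a PATCH request carries JSON Patch (RFC 6902) operations against the full declaration. The sketch below is illustrative only: the path must match your own declaration's structure (here, the Sample_01 tenant used earlier in this document), and the supported operations are listed in the AS3 API documentation.

# Illustrative only: replace the first server address of web_pool in tenant Sample_01
curl -sk -u admin:admin -X PATCH \
  -H 'Content-Type: application/json' \
  https://<big-ip>/mgmt/shared/appsvcs/declare \
  -d '[{ "op": "replace", "path": "/Sample_01/A1/web_pool/members/0/serverAddresses/0", "value": "10.1.20.10" }]'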
Get the latest version

AS3 is evolving rapidly, with new features that customers have been wishing for along with fixes for known issues. Visit the AS3 section of the F5 Networks GitHub. The Issues section shows what features and fixes have been incorporated. For BIG-IQ, check K54909607: BIG-IQ Centralized Management compatibility with F5 Application Services 3 Extension and F5 Declarative Onboarding for compatibility with BIG-IQ versions before installation.

Use administrator

Use a user with the administrator role when you submit your declaration to a target BIG-IP device. You may find your role insufficient to manipulate BIG-IP objects that are included in your declaration. Even one unauthorized item will cause the entire operation to fail and roll back. See the following articles for more on BIG-IP users and roles.

Manual Chapter: User Roles (12.x)
Manual Chapter: User Roles (13.x)
Manual Chapter: User Roles (14.x)
Prerequisites and Requirements (clouddocs AS3 document)

Use Basic Authentication for a large declaration

You can choose either Basic Authentication (HTTP Authorization header) or Token-Based Authentication (F5 proprietary X-F5-Auth-Token) for accessing BIG-IP. While Basic Authentication can be used any time, a token obtained for Token-Based Authentication expires after 1,200 seconds (20 minutes). While AS3 does re-request a new token upon expiry, it requires time to perform the operation, which may cause AS3 to slow down. Also, the number of tokens for a user is limited to 100 (since 13.1), hence if you happen to have other iControl REST players (such as BIG-IQ or your custom iControl REST scripts) using Token-Based Authentication for the same user, AS3 may not be able to obtain the next token, and your request will fail. See the following articles for more on Token-Based Authentication.

Demystifying iControl REST Part 6: Token-Based Authentication (DevCentral article)
iControl REST Authentication Token Management (DevCentral article)
Authentication and Authorization (clouddocs AS3 document)

Choose the best window for deployment

AS3 (the restnoded daemon) is a control-plane process. It competes against other control-plane processes such as monpd and iRules LX (node.js) for CPU/memory resources. AS3 uses the iControl REST framework for manipulating BIG-IP resources. This implies that its operation is impacted by any processes that use httpd (e.g., the GUI), restjavad, icrd_child, and mcpd. If you have resource-hungry processes that run periodically (e.g., avrd), you may want to run your AS3 declaration during some other time window. See the following K articles for a list of processes.

K89999342 BIG-IP Daemons (12.x)
K05645522 BIG-IP Daemons (v13.x)
K67197865 BIG-IP Daemons (v14.x)
K14020: BIG-IP ASM daemons (11.x - 15.x)
K14462: Overview of BIG-IP AAM daemons (11.x - 15.x)

Workarounds

If you experience issues such as timeouts on restjavad, it is possible that your AS3 operation had resource issues. If, after reviewing the best practices above, you are still unable to alleviate the problem, you may be able to temporarily fix it by applying the following tactics.

Increase the restjavad memory allocation

The memory size of restjavad can be increased by the following tmsh sys db commands:

tmsh modify sys db provision.extramb value <value>
tmsh modify sys db restjavad.useextramb value true

The provision.extramb db key changes the maximum Java heap memory to (192 + <value> * 8 / 10) MB. The default value is 0. After changing the memory size, you need to restart restjavad:

tmsh restart sys service restjavad

See the following article for more on the memory allocation: K26427018: Overview of Management provisioning

Increase the number of icrd_child processes

restjavad spawns a number of icrd_child processes depending on the load. The maximum number of icrd_child processes can be configured from /etc/icrd.conf. Please consult F5 Support for details. See the following article for more on the icrd_child process verbosity: K96840770: Configuring the log verbosity for iControl REST API related to icrd_child

Decrease the verbosity levels of restjavad and icrd_child

Writing log messages to the file system is not exactly free of charge. Writing an unnecessarily large amount of messages to files increases I/O wait, and hence slows processes down. If you have changed the verbosity levels of restjavad and/or icrd_child, consider rolling back to the default levels. See the following article for methods to change the verbosity level: K15436: Configuring the verbosity for restjavad logs on the BIG-IP system

High-level Pathways to Security Visibility
Editor's Note: The F5 Beacon capabilities referenced in this article hosted on F5 Cloud Services are planning a migration to a new SaaS Platform - Check out the latest here.

(10 Minute Read)

Introduction

In previous articles we identified the elements needed to gain visibility into adaptive application security postures. This entails observing the security configuration (static) and monitoring telemetry (dynamic) coming from different control points (ref. Visibility and Orchestration). We also suggested that security visibility should be integrated into the software development and/or deployment lifecycle as part of a shift-left strategy (ref. Shift-left Security Visibility). Now we'll focus on identifying a high-level pathway to achieve application security visibility. First, we need to identify the constraints that frame the effort. We will then identify concrete examples of insertion with F5 technologies. The end goal is to ensure that you keep close control over application security by embracing a holistic approach to visibility, integrated into the software development/deployment lifecycle.

Constraints

Inserting security visibility in your enterprise is part of the shift-left strategy (ref. https://www.f5.com/company/blog/beyond-visibility-is-operability). In order to be practical, we need to make sure that the pathway adheres to the following guidelines:

Friction – The solution should not introduce any friction into the pipeline. For example, the tools used by the DevOps and SecOps teams (e.g. GitLab, Jenkins) should be the same, avoiding gated interdependencies where a change by one group is blocked/delayed by the other.

Programmability – The security-centric solutions implemented during the journey need to be highly programmable. This will ensure that the tools adapt to the environment (e.g. services, micro-services), the supporting infrastructure (e.g. cloud, containers), and the application.

Automation – Enabling automation is key. This can be achieved by ensuring the tools deployed can be automatically configured without intervention as part of a pipeline. One way to ensure this is to leverage declarative application programming interfaces (APIs) (link-to-f5-declarative-interface).

Scalability – Applications can span infrastructure that is infinitely scalable, like the public cloud, across availability zones and geographies. This requires that any solution deployed to secure/protect applications and workloads be able to scale. To scale horizontally, the solution can be implemented across multiple workloads in multiple instances. To scale vertically, the solution should be able to handle increasing amounts of traffic in a single instance or a few instances.

Transparency – From a performance and functionality standpoint, the solutions inserted to gain security visibility cannot impact the application. For example, when a proxy is inserted, it cannot add latency between the client and the workload. It also cannot affect the functionality provided by the workload.

Resiliency – Inserting a solution to support your application's security and visibility should be resilient. Any failure of the process providing visibility should be flagged and must not affect the application's/workload's performance or availability.

Visibility Insertion

All F5 solutions can be inserted in the application delivery infrastructure to provide security visibility. This comes in the form of security-aware proxies.
The BIG-IP and NGINX Plus platforms are particularly well suited for insertion in infrastructure requiring inline, low-latency, and powerful application security and visibility. Deploying F5 solutions can easily be done while observing all the constraints mentioned above.

Friction

Thanks to the available form factors and programmatic templates provided, implementing BIG-IP or NGINX Plus in the infrastructure is easily achieved using appropriate templates. For example, when working with AWS, a BIG-IP can easily be deployed using a CloudFormation Template (CFT) found here. From the enterprise Git (GitLab, GitHub, Bitbucket, etc.) repository, BIG-IP can be deployed directly by cloning/forking the F5 repository and integrating with the pipeline (ref. Clouddocs Article).

Programmability

The BIG-IP Advanced Web Application Firewall (Advanced WAF) configuration is highly programmable. The advantage is that the configuration can be stored and/or modified easily outside of the BIG-IP. For example, a base policy aimed at protecting against the OWASP Top 10 risks can look like the following:

{ "policy": { "name": "Complete_OWASP_Top_Ten", "description": "A generic, OWASP Top 10 protection items v1.0", "template": { "name": "POLICY_TEMPLATE_RAPID_DEPLOYMENT" }, "fullPath": "/Common/Complete_OWASP_Top_Ten", "enforcementMode":"transparent", "signature-settings":{ "signatureStaging": false, "minimumAccuracyForAutoAddedSignatures": "high" }, "protocolIndependent": true, "caseInsensitive": true, "general": { "trustXff": true }, "data-guard": { "enabled": true }, "policy-builder-server-technologies": { "enableServerTechnologiesDetection": true }, "blocking-settings": { "violations": [ { "alarm": true, "block": true, "description": "ASM Cookie Hijacking", "learn": false, "name": "VIOL_ASM_COOKIE_HIJACKING" }, { "alarm": true, "block": true, "description": "Access from disallowed User/Session/IP/Device ID", "name": "VIOL_SESSION_AWARENESS" }, { "alarm": true, "block": true, "description": "Modified ASM cookie", "learn": true, "name": "VIOL_ASM_COOKIE_MODIFIED" }, { "alarm": true, "block": true, "description": "XML data does not comply with format settings", "learn": true, "name": "VIOL_XML_FORMAT" }, { "name": "VIOL_FILETYPE", "alarm": true, "block": true, "learn": true } ], "evasions": [ { "description": "Bad unescape", "enabled": true, "learn": true }, { "description": "Apache whitespace", "enabled": true, "learn": true }, { "description": "Bare byte decoding", "enabled": true, "learn": true }, { "description": "IIS Unicode codepoints", "enabled": true, "learn": true }, { "description": "IIS backslashes", "enabled": true, "learn": true }, { "description": "%u decoding", "enabled": true, "learn": true }, { "description": "Multiple decoding", "enabled": true, "learn": true, "maxDecodingPasses": 3 }, { "description": "Directory traversals", "enabled": true, "learn": true } ] }, "xml-profiles": [ { "name": "Default", "defenseAttributes": { "allowDTDs": false, "allowExternalReferences": false } } ], "session-tracking": { "sessionTrackingConfiguration": { "enableTrackingSessionHijackingByDeviceId": true } } } }

In the example above, aspects of a security policy like evasion techniques or cookie consumption settings can easily be programmed in the configuration and handled like any other application code for versioning, editing, or storing. The standard JSON format can be managed in a Git repository for use in any environment. Documentation for JSON representations of WAF policies can be found here.
The same programmability holds for all F5 security platforms, including NGINX App Protect and Essential App Protect (ref. NGINX Configuration Guide and EAP API Users Guide).

Similarly, configuring BIG-IP to forward security telemetry to the appropriate facilities can be achieved with the Telemetry Streaming framework. For example, in order to configure BIG-IP to send telemetry data to a centralized visibility tool (F5 Beacon or ELK, for example), it can be configured with a declaration like:

{
    "class": "Telemetry",
    "controls": {
        "class": "Controls",
        "logLevel": "debug"
    },
    "TS_Poller": {
        "class": "Telemetry_System_Poller",
        "interval": 60
    },
    "TS_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "TS_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Generic_HTTP",
        "host": "my.visibility-host.url",
        "protocol": "http",
        "port": 8888,
        "path": "/",
        "method": "POST",
        "headers": [
            {
                "name": "content-type",
                "value": "application/json"
            }
        ]
    }
}

The declaration above identifies the host where BIG-IP will send its telemetry, in this case with debug-level logging enabled (a sketch of how to submit this declaration appears after the conclusion).

Scalability, Transparency and Resiliency

F5 provides highly scalable, resilient, and transparent solutions that can be inserted in any infrastructure to secure and provide visibility into web applications. Discussing these aspects of BIG-IP, NGINX Plus, or NGINX App Protect is beyond the scope of this article. For more information on scalability and high availability, you can refer to Performance of NGINX and NGINX Plus, NGINX App Protect Application Security Testing, or the BIG-IP Datasheet.

Conclusion

This article is meant to offer a path to visibility using F5 technology: inserting BIG-IP and configuring it to provide application security and generate telemetry, in order to gain visibility into the application's security posture. The aim is for you to build a blueprint to systematically watch over your valuable adaptive applications and workloads across your infrastructure.
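As a usage sketch for the Telemetry Streaming declaration shown earlier: assuming the Telemetry Streaming extension is installed on the BIG-IP and the declaration is saved as ts_declaration.json (an assumed file name), it can be submitted to the TS declare endpoint. The management address and credentials below are placeholders:

# Submit the Telemetry Streaming declaration to a BIG-IP (illustrative;
# assumes the TS extension RPM is already installed on the target).
curl -sk -u admin:admin \
     -H "Content-Type: application/json" \
     -X POST https://192.0.2.10/mgmt/shared/telemetry/declare \
     -d @ts_declaration.json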
Lightboard Lessons: Zero Touch Application Deployments with Terraform, Consul, and AS3

In this lightboard lesson, I show how you can move from the manual work of traditional app deployments to the automated goodness of zero-touch app deployments! This demo solution was shown in a HashiCorp webinar featuring our own Eric Chen, and utilizes HashiCorp's Terraform and Consul, as well as the AS3 component of the F5 Automation Toolchain.

Resources

This demo in detail
F5 Terraform automation lab (hosted by WWT, registration required)
F5 Resources for Terraform
Hands-on Intro to Infrastructure as Code Using Terraform
Ready to go! Deploying F5 Infrastructure Using Terraform
F5 and HashiCorp Essentials (article series on Terraform, Consul, and Vault)
It’s no secret that automation of networking, security, and application development processes offers a laundry list of benefits—reduced deployment time, lowered cost, fewer errors, and more resilient systems, to name a few. One of the most popular tools for building automation workflows is Ansible. Ansible is a powerful, open-source tool that simplifies and automates many common tasks and enables infrastructure as code for creating, deploying, and managing F5 application delivery and security services. This is accomplished through playbooks and roles available on Ansible Galaxy. Another way to streamline working with BIG-IP is with BIG-IQ Centralized Management. BIG-IQ combines deep, app-centric visibility and dashboarding together with device, configuration, and policy management in a unified, intuitive user interface. From BIG-IQ, you can create new BIG-IP Virtual Editions (VEs), provision them with Declarative Onboarding, create advanced AS3 services, move deployments, upgrade software, and much more. Together Ansible and BIG-IQ make automation and management of your BIG-IP environment simple and straightforward—enabling an effective, intuitive, data-rich, and highly visual solution that offers value to networking/F5 gurus, security practitioners, and application owners/developers alike. To make things even easier, the F5 team has developed several community-supported Ansible roles that are designed to inject automation into workflows and make BIG-IQ’s simple app-centric management functionality even better. Please note that this workflow assumes that you already have a BIG-IQ Centralized Management deployment up and running. The end result will be a fully provisioned BIG-IP deployment that can be fully managed—client-to-server visibility, troubleshooting, object level configuration, etc.—from BIG-IQ’s intuitive, role-specific GUI. You can get started with these roles and workflows today by checking out F5’s repository on Ansible Galaxy. To use these Ansible roles and playbooks, you’ll need to download and install them to a local workstation that will be used for managing F5 deployments. Use the Ansible roles for BIG-IQ below to: Create new VEs Onboard VEs with DO Create and deploy common objects such as SSL certs and WAF policies Create AS3 application delivery and security services Move deployments across BIG-IPs For additional how-to-use resources, guidance, and labs for BIG-IQ, check out the video library and the BIG-IQ labs. 
Create a BIG-IP VE in AWS

tasks:
  - name: Create a VE in AWS
    include_role:
      name: f5devcentral.bigiq_create_ve
    vars:
      cloud_environment: "BIG-IQ AWS US-East"
      ve_name: "bigipvm01"
    register: status

  - name: Get AWS BIG-IP VE IP address (port 8443)
    debug:
      msg: "{{ ve_ip_address }}"

  - name: Get AWS BIG-IP VE private key filename
    debug:
      msg: "{{ private_key_filename }}"

Onboard the New BIG-IP VE with Declarative Onboarding

tasks:
  - name: Onboard BIG-IP VE with DO
    include_role:
      name: f5devcentral.atc_deploy
    vars:
      atc_service: Device
      atc_method: POST
      atc_declaration: "{{ lookup('template', 'do_bigip_aws.j2') }}"
      atc_delay: 30
      atc_retries: 15
    register: atc_DO_status

do_bigip_aws.j2:

{
    "class": "DO",
    "declaration": {
        "schemaVersion": "1.5.0",
        "class": "Device",
        "async": true,
        "Common": {
            "class": "Tenant",
            "myLicense": {
                "class": "License",
                "licenseType": "licensePool",
                "licensePool": "byol-pool",
                "bigIpUsername": "admin",
                "bigIpPassword": "secret"
            },
            "myProvision": {
                "class": "Provision",
                "ltm": "nominal",
                "avr": "nominal"
            },
            "myNtp": {
                "class": "NTP",
                "servers": [
                    "169.254.169.123"
                ],
                "timezone": "UTC"
            },
            "admin": {
                "class": "User",
                "shell": "bash",
                "userType": "regular",
                "partitionAccess": {
                    "all-partitions": {
                        "role": "admin"
                    }
                },
                "password": "secret"
            },
            "hostname": "bigipvm01.example.com"
        }
    },
    "targetUsername": "admin",
    "targetHost": "{{ ve_ip_address }}",
    "targetPort": 8443,
    "targetSshKey": {
        "path": "{{ private_key_filename }}"
    },
    "bigIqSettings": {
        "conflictPolicy": "USE_BIGIQ",
        "deviceConflictPolicy": "USE_BIGIP",
        "failImportOnConflict": false,
        "versionedConflictPolicy": "KEEP_VERSION",
        "statsConfig": {
            "enabled": true
        }
    }
}

Create SSL Certificate and Key on BIG-IQ

tasks:
  - name: Authenticate to BIG-IQ
    uri:
      url: https://{{ provider.server }}:{{ provider.server_port }}/mgmt/shared/authn/login
      method: POST
      headers:
        Content-Type: application/json
      body:
        username: "{{ provider.user }}"
        password: "{{ provider.password }}"
        loginProviderName: "{{ provider.auth_provider | default('tmos') }}"
      body_format: json
      timeout: 60
      status_code: [200, 202]
      validate_certs: "{{ provider.validate_certs }}"
    register: auth

  - name: Create SSL certificate and key on BIG-IQ
    uri:
      url: https://{{ provider.server }}:{{ provider.server_port }}/mgmt/cm/adc-core/tasks/certificate-management
      method: POST
      headers:
        Content-Type: application/json
        X-F5-Auth-Token: "{{ auth.json.token.token }}"
      body: |
        {
          "issuer": "Self",
          "itemName": "mywebapp.crt",
          "itemPartition": "Common",
          "durationInDays": 365,
          "country": "US",
          "commonName": "mywebapp.example.com",
          "division": "MyDiv",
          "organization": "MyOrg",
          "locality": "Seattle",
          "state": "WA",
          "subjectAlternativeName": "DNS: mywebapp.example.com",
          "securityType": "normal",
          "keyType": "RSA",
          "keySize": 2048,
          "command": "GENERATE_CERT"
        }
      body_format: json
      timeout: 60
      status_code: [200, 202]
      validate_certs: "{{ provider.validate_certs }}"
    register: json_response

Pin and Deploy SSL Certificate and Key to BIG-IP

tasks:
  - name: Pin and deploy SSL certificate and key to BIG-IP
    include_role:
      name: f5devcentral.bigiq_pinning_deploy_objects
    vars:
      bigiq_task_name: "Deployment through Ansible/API - mywebapp"
      modules:
        - name: ltm
          pins:
            - { type: "sslCertReferences", name: "mywebapp.crt" }
            - { type: "sslKeyReferences", name: "mywebapp.key" }
      device_address: "{{ ve_ip_address }}"
    register: status

Deploy an AS3 Service to BIG-IP

tasks:
  - name: Deploy AS3 application services to BIG-IP
    include_role:
      name: f5devcentral.atc_deploy
    vars:
      atc_service: AS3
      atc_method: POST
      atc_declaration: "{{ lookup('template', 'as3_bigiq_https_app.j2') }}"
      atc_delay: 30
      atc_retries: 15
    register: atc_AS3_status

as3_bigiq_https_app.j2:

{
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.12.0",
        "target": {
            "address": "{{ ve_ip_address }}"
        },
        "myorg": {
            "class": "Tenant",
            "mywebapp": {
                "class": "Application",
                "schemaOverlay": "AS3-F5-HTTPS-offload-lb-existing-cert-template-big-iq-default-v1",
                "template": "https",
                "serviceMain": {
                    "class": "Service_HTTPS",
                    "pool": "Pool",
                    "enable": true,
                    "serverTLS": "TLS_Server",
                    "virtualPort": 443,
                    "profileAnalytics": {
                        "use": "Analytics_Profile"
                    },
                    "virtualAddresses": [
                        "0.0.0.0"
                    ]
                },
                "Pool": {
                    "class": "Pool",
                    "members": [
                        {
                            "adminState": "enable",
                            "servicePort": 80,
                            "serverAddresses": [
                                "10.1.3.23"
                            ]
                        }
                    ]
                },
                "TLS_Server": {
                    "class": "TLS_Server",
                    "certificates": [
                        {
                            "certificate": "Certificate"
                        }
                    ]
                },
                "Certificate": {
                    "class": "Certificate",
                    "privateKey": {
                        "bigip": "/Common/mywebapp.key"
                    },
                    "certificate": {
                        "bigip": "/Common/mywebapp.crt"
                    }
                },
                "Analytics_Profile": {
                    "class": "Analytics_Profile",
                    "collectIp": false,
                    "collectGeo": false,
                    "collectUrl": false,
                    "collectMethod": false,
                    "collectUserAgent": false,
                    "collectOsAndBrowser": false,
                    "collectPageLoadTime": false,
                    "collectResponseCode": true,
                    "collectClientSideStatistics": true
                }
            }
        }
    }
}

Move an AS3 Service Within the BIG-IQ Dashboard

tasks:
  - name: Move an AS3 application service in the BIG-IQ dashboard
    include_role:
      name: f5devcentral.bigiq_move_app_dashboard
    vars:
      apps:
        - name: myWebApp
          pins:
            - name: "myorg_mywebapp"
    register: status
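Once tasks like those above are assembled into a playbook, running the whole workflow is a single command. A minimal usage sketch; the playbook and inventory file names are illustrative assumptions:

# Run the BIG-IQ/BIG-IP automation workflow (illustrative file names).
ansible-playbook -i inventory.ini bigiq_bigip_workflow.yml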
Manage BIG-IPs in Azure using Terraform Cloud

Introduction

In this article I’ll outline a suite of demonstration resources designed to help you and your IT team explore the possibilities of applying DevOps practices in your own environments. The demonstration resources described below show how tools like Git, HashiCorp Terraform, HashiCorp Sentinel, Chef InSpec, and F5's Automation Toolchain can be used to introduce some of the practices listed above to F5 BIG-IPs and the IT services they help deliver. By following along with the README in the demonstration repository and the video walk-throughs listed below, you should be able to run this demonstration on your own.

Software Delivery Key Practices

IT industry research, such as Accelerate, shows that improving a company's ability to deliver software has a significant positive effect on its overall success. The following practices and design principles are cornerstones of that improvement:

Version control of code and configuration
Automation of deployment
Automation of testing and test data management
"Shifting left" on security
Loosely coupled architectures
Pro-active notification

Caveats

These repositories use simplifying demonstration shortcuts for password, key, and network security. Production-ready enterprise designs and workflows should be used in place of these shortcuts. DO NOT ASSUME THAT THE CODE AND CONFIGURATION IN THESE REPOSITORIES IS PRODUCTION-READY.

The particular source control approach shown in this demonstration is one of many. Before using this approach to support your Infrastructure as Code and configuration management assets and workflows, you should learn about different patterns of source code management and determine what best fits your team's needs.

A variety of tools are used in this demonstration. In most cases they are not strictly required and can be replaced with other, similar tools.

The demonstration uses a licensed version of Terraform Cloud in order to demonstrate the capabilities of HashiCorp Sentinel. If you are using the free version of Terraform Cloud, you won't be able to try the policy compliance use-cases, but the rest of the demonstration code should work as expected.

Setting up your demonstration automation host

Before running the demonstration code, you'll need to set up the IDE host and the Azure account. Instructions for those steps are here.

Video walk-throughs

Fork the repository and open it in Visual Studio Code (1m36s)
Once the tools are installed, you can create your own copy of the repository and open it in your IDE. In the videos, Visual Studio Code is used as the IDE. To follow along, you'll need to create your own repository so that you can set up the Terraform Cloud configuration and make your own adjustments to the build configuration (e.g. the number of application servers deployed).

Set up a Terraform Cloud workspace (1m38s)
Before running the Terraform Cloud workflow, a Terraform Cloud workspace is required. This video steps you through manually configuring the workspace and linking it to your cloned repository.

Programmatically set up Terraform Cloud workspaces for production, test, and development (10m40s)
Setting up the workspaces programmatically has the benefits of rapid, consistent results and executable knowledge in the form of scripts and configuration files. In this video we step you through programmatically building workspaces for production, test, and development environments using this repository. We also programmatically configure simple source-controlled compliance Sentinel policies (a sketch of the underlying workspace API call follows this section).
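For a sense of what programmatic workspace setup can look like under the hood, the sketch below creates a Terraform Cloud workspace through its API. This is a generic illustration rather than the demonstration repository's own script; the organization name, workspace name, and token variable are placeholders:

# Create a Terraform Cloud workspace via the API (illustrative; the
# organization "my-org", the workspace name, and $TFC_TOKEN are placeholders).
curl -s \
     -H "Authorization: Bearer ${TFC_TOKEN}" \
     -H "Content-Type: application/vnd.api+json" \
     -X POST \
     -d '{"data": {"type": "workspaces", "attributes": {"name": "bigip-demo-dev"}}}' \
     https://app.terraform.io/api/v2/organizations/my-org/workspaces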
Initial build of production, test, and development (7m59s)
Everything should be ready for the first build of your production, test, and development environments. In this video, we step you through manually triggering Terraform Cloud builds. In addition, we'll see the impact of the Sentinel policies in use, how to override policies that have been triggered, and the audit trail that results.

Automated testing of production (4m18s)
Once your environment builds have completed, it's critical to validate that they are fit for use. In this video, we step you through a simple set of tests that validate the readiness of the F5 BIG-IPs built by the Terraform Cloud workflow. These tests are not comprehensive, but they demonstrate the benefits of an executable "definition of done." The source of an updated version of the InSpec tests used in the demonstration is here.

Manual inspection of production (2m45s)
In this video, we walk through the BIG-IPs that were built in the production environment. We inspect the virtual servers and their associated pools, noting the number of application servers that were built and joined to the pool.

Programmatically add application servers and include them in the BIG-IP virtual server (8m6s)
In this video, we explore the use-case of expanding the pool in the previously built production environment, using a simple change in source control. We'll see the Terraform Cloud workflow automatically trigger a new build based on a merge commit to your cloned repository. New application servers will be built and automatically added to the pool by F5's Service Discovery iApp.

Update the WAF from a source control repository (no video walk-through)
We leave it as an exercise for the reader (or possibly an updated video) to look for the WAF deployed with the virtual server. The WAF is retrieved from source control here. In addition, you can experiment with changing the version of the WAF in the AS3 template, in the stanza shown below. Usable values for the version are 0.1.0, 0.1.1, 0.2.0, and 0.2.1. If you choose to do this, follow the same workflow shown in the previous video about scaling the number of application servers (a sketch of this commit-and-push workflow appears at the end of this article).

"ASM_Policy": {
    "class": "WAF_Policy",
    "url": "https://github.com/mjmenger/waf-policy/raw/0.1.1/asm_policy.xml",
    "ignoreChanges": false
}

What's next?

If you've followed along through all of the use-cases in the demonstration repository, you have seen the following:

Source-controlled build of an application environment, including BIG-IPs, virtual servers, pools, and WAF policies.
Managed changes with logging of authoring and approvals.
Automated scaling of application resources and BIG-IP configuration.
Automated updates to BIG-IP WAF policies.

If you want to realize the benefits of these practices for your IT service delivery, please reach out to your F5 account team.
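As a concrete sketch of the source-controlled WAF update described above: bump the policy version referenced in the AS3 template and push the commit, letting the VCS-triggered Terraform Cloud workflow redeploy it. The template file name and branch are illustrative assumptions:

# Bump the WAF policy version referenced in the AS3 template, then push the
# change so the VCS-triggered Terraform Cloud run redeploys it.
# (as3_declaration.json is an assumed file name; GNU sed syntax shown.)
sed -i 's|waf-policy/raw/0.1.0|waf-policy/raw/0.1.1|' as3_declaration.json
git add as3_declaration.json
git commit -m "Update WAF policy to version 0.1.1"
git push origin master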
Introducing the F5 CLI – Easy button for the F5 Automation Toolchain

The F5 command-line interface (F5 CLI) is the latest addition to the F5 automation family, easing the deployment of application services with the F5 Automation Toolchain and F5 Cloud Services. The F5 CLI was built with ease of use in mind and takes its inspiration from other cloud-based command-line interfaces like those delivered by AWS, Azure, and GCP. The beauty of the F5 CLI is that you can easily deploy your applications using the AS3/DO/TS or Cloud Services declarative models, making use of the APIs without any software development or coding knowledge and without third-party tools or programming languages.

Benefits of the F5 CLI:

• Quickly access and consume F5's APIs and services with a familiar remote CLI UX
• Configurable settings
• Include common actions in Continuous Deployment (CD) pipelines
• Prototyping: test calls that may be used in more complex custom integrations using the underlying SDK; supports discovery activities and querying of command-line results (for example, "list accounts" to find the desired account, which will be used as an input to the final automation)
• Support quick one-off automation activities (for example, leveraging a bash loop to create/delete large lists of objects)

Ease of use

We are going to show two examples of using the F5 CLI:

• Using the F5 CLI as a simple command-line interface, and
• Using the F5 CLI for rapid F5 integration into a simple automation pipeline

Using the F5 CLI straight from the command line

The F5 CLI is delivered as a Docker container or as a Python application and is very simple to run. One command is required to get the F5 CLI going as a Docker image; for example:

docker run -it -v "$HOME/.f5_cli:/root/.f5_cli" -v "$(pwd):/f5-cli" f5devcentral/f5-cli:latest /bin/bash

And from there you can simply type "f5" and you will be presented with the command-line options:

#f5 --help
Usage: f5 [OPTIONS] COMMAND [ARGS]...

  Welcome to the F5 command line interface.

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  bigip   Manage BIG-IP
  login   Login to BIG-IP, F5 Cloud Services, etc.
  cs      Manage F5 Cloud Services
  config  Configure CLI authentication and configuration

--help is your friend here, as it will show you all of the various options that you have. Let's look at a simple example of logging in and sending an AS3 declaration to a BIG-IP.

Login:

#f5 login --authentication-provider bigip --host 2.2.2.2 --user auser --password apassword
{
    "message": "Logged in successfully"
}

Next, send a declaration to your BIG-IP, BIG-IQ, or F5 Cloud Services. For example, the following command creates a VIP on a BIG-IP with two pool members and associates a WAF policy:

bash-5.0# f5 bigip extension as3 create --declaration basicwafpolicy.json

Reference: Review the AS3 declaration used above here.

Using the F5 CLI as part of a continuous deployment pipeline

From there, it is also simple to think about ways that you could use the F5 CLI as part of a continuous deployment pipeline. This example uses Jenkins. Jenkins is an open-source automation server; it helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is relatively simple to use the F5 CLI as part of a Jenkins pipeline; a minimal standalone script is shown below, followed by the Jenkins setup.
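Here is a minimal end-to-end sketch that chains the commands shown above into a single script of the kind a pipeline stage might run. It uses only commands that appear in this article; the management address, credentials, and the lab-only TLS setting are placeholders:

#!/usr/bin/env bash
set -e  # stop on the first failed command

# Lab shortcut: ignore TLS/HTTPS certificate errors (placeholder setting).
f5 config set-defaults --disable-ssl-warnings true

# Authenticate to the BIG-IP (address and credentials are placeholders).
f5 login --authentication-provider bigip --host 2.2.2.2 --user auser --password apassword

# Deploy the AS3 declaration.
f5 bigip extension as3 create --declaration basicwafpolicy.json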
In my example, I run a Docker container with a container-based CloudBees Jenkins distribution, which in turn runs Docker inside the container in order to make use of the F5 CLI.

Configuring the F5 CLI to run on a Jenkins Docker container

In order to get Jenkins running in my lab, I use the CloudBees Jenkins distribution. In my case, for this demonstration, I am running Docker on my laptop; a production deployment would be configured slightly differently.

- First, install Docker. For more information on how to install Docker, see https://docs.docker.com/get-docker/

- Pull a CloudBees Jenkins distribution (https://hub.docker.com/r/cloudbees/cloudbees-jenkins-distribution/):

docker pull cloudbees/cloudbees-jenkins-distribution

- Run the following command in a terminal in order to start the initial Jenkins setup:

docker run -u root -p 8080:8080 -p 50000:50000 -v ~/tmp/jenkins-data:/var/cloudbees-distribution -v /var/run/docker.sock:/var/run/docker.sock cloudbees/cloudbees-jenkins-distribution

- Access the Jenkins interface to perform the initial setup on http://0.0.0.0:8080. You will need the initial admin password to proceed, which in my case is written to the terminal window from which I am running Jenkins:

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

d425a181f15e4a6eb95ca59e683d3411

This may also be found at: /var/cloudbees-jenkins-distribution/secrets/initialAdminPassword

- Follow the in-browser instructions to create users and then get the recommended plugins installed in Jenkins.

- After you have Jenkins configured, you must then install Docker inside of the container. Open another terminal window and run:

docker ps
... make a note of the Jenkins container name ...
docker exec -it JENKINSCONTAINERNAME /bin/bash
... inside the container shell ...
apt install docker.io

Setting up your Jenkins pipeline

- Inside of Jenkins, select "New Item" and then "Pipeline." Name your pipeline and select the "OK" button.

- After that, you will be presented with a list of tabs, and you should be able to copy the pipeline code below into the pipeline tab. This enables a fairly simple Jenkins pipeline that shows how easy it is to build an automation process from scratch. The following is an example Jenkins pipeline that automates the F5 CLI:

pipeline {
    /* These should really be in a vault/credentials store but for testing
     * they'll be added as parameters that can be overridden in the job
     * configuration.
     */
    parameters {
        string(name: 'BIGIP_ADDRESS', defaultValue: '2.2.2.2', description: 'The IP address for BIG-IP management.')
        string(name: 'BIGIP_USER', defaultValue: 'auser', description: 'The user account for authentication.')
        password(name: 'BIGIP_PASSWORD', defaultValue: 'apassword', description: 'The password for authentication.')
        booleanParam(name: 'DISABLE_TLS_VERIFICATION', defaultValue: true, description: 'Disable TLS and HTTPS certificate checking.')
    }
    agent {
        docker {
            image 'f5devcentral/f5-cli:latest'
        }
    }
    stages {
        stage('Prepare F5 CLI configuration') {
            steps {
                script {
                    if (params.DISABLE_TLS_VERIFICATION) {
                        echo 'Configuring F5 CLI to ignore TLS/HTTPS certificate errors'
                        sh 'f5 config set-defaults --disable-ssl-warnings true'
                    }
                }
                script {
                    echo "Authenticating to BIG-IP instance at ${params.BIGIP_ADDRESS}"
                    sh "f5 login --authentication-provider bigip --host ${params.BIGIP_ADDRESS} --user ${params.BIGIP_USER} --password ${params.BIGIP_PASSWORD}"
                }
            }
        }
        stage('Pull declaration from GitHub') {
            steps {
                script {
                    echo 'Pull WAF policy declaration from GitHub'
                    sh 'wget https://raw.githubusercontent.com/dudesweet/simpleAS3Declare/master/basicwafpolicy.json -O basicwafpolicy.json'
                }
            }
        }
        stage('Post declaration to BIG-IP') {
            steps {
                script {
                    echo 'Post declaration to BIG-IP'
                    sh 'f5 bigip extension as3 create --declaration basicwafpolicy.json'
                }
            }
        }
    }
}

Reference: Review the Jenkinsfile here.

I have also included a 5-minute presentation and video demonstration that covers using the F5 CLI. The video reviews the F5 CLI, then goes on to run the above Jenkins pipeline: https://youtu.be/1KGPCeZex6A

Conclusion

The F5 CLI is very easy to use and can get you up and running fast with the F5 API. You don't have to be familiar with any particular programming languages or automation tools: you can use a command line to drive F5's declarative APIs for simple and seamless F5 service verification, automation, and insertion. Additionally, we showed how easy it is to integrate the F5 CLI into Jenkins, an open-source software development and automation server, so you can begin your automation journey and easily include F5 as part of your automation pipelines.

Perhaps the final thing I should say is that it doesn't have to stop here. The next step could be to use the F5 Python SDK, or to explore how F5 APIs can be integrated into the F5 ecosystem with Ansible, Terraform, and many more. For more information, your starting point should always be https://clouddocs.f5.com/.

Links and References

F5 SDK
GitHub repo: https://github.com/f5devcentral/f5-sdk-python
Documentation: https://clouddocs.f5.com/sdk/f5-sdk-python/

F5 CLI
GitHub repo: https://github.com/f5devcentral/f5-cli
Docker container: https://hub.docker.com/r/f5devcentral/f5-cli
Documentation: https://clouddocs.f5.com/sdk/f5-cli/

If you want to talk about F5 cloud solutions, we have a Slack channel: https://f5cloudsolutions.slack.com

CloudBees Jenkins distribution: https://hub.docker.com/r/cloudbees/cloudbees-jenkins-distribution/

Docker: https://docs.docker.com/get-docker/