Declarative Advanced WAF policy lifecycle in a CI/CD pipeline
The purpose of this article is to show the configuration used to deploy a declarative Advanced WAF policy to a BIG-IP and automatically configure it to protect an API workload by consuming an OpenAPI file describing the application. For this experiment, a GitLab CI/CD pipeline was used to deploy an API workload to Kubernetes, configure a declarative Advanced WAF policy on a BIG-IP device, and tune the policy by incorporating learning suggestions exported from the BIG-IP. Lastly, the F5 WAF tester tool was used to determine and improve the defensive posture of the Advanced WAF policy.

Deploying the declarative Advanced WAF policy through a CI/CD pipeline

To deploy the Advanced WAF policy, the GitLab CI/CD pipeline calls an Ansible playbook that in turn deploys an AS3 application referencing the Advanced WAF policy from a separate JSON file. This allows the application definition and the WAF policy to be managed by two different groups, for example NetOps and SecOps, supporting separation of duties. The following Ansible playbook was used:

```yaml
---
- hosts: bigip
  connection: local
  gather_facts: false
  vars:
    my_admin: "xxxx"
    my_password: "xxxx"
    bigip: "xxxx"
  tasks:
    - name: Deploy AS3 API AWAF policy
      uri:
        url: "https://{{ bigip }}/mgmt/shared/appsvcs/declare"
        method: POST
        headers:
          "Content-Type": "application/json"
          "Authorization": "Basic xxxxxxxxxx"
        body: "{{ lookup('file','as3_waf_openapi.json') }}"
        body_format: json
        validate_certs: no
        status_code: 200
```

The AS3 declaration 'as3_waf_openapi.json' was specified as follows:

```json
{
  "class": "AS3",
  "action": "deploy",
  "persist": true,
  "declaration": {
    "class": "ADC",
    "schemaVersion": "3.2.0",
    "id": "Prod_API_AS3",
    "API-Prod": {
      "class": "Tenant",
      "defaultRouteDomain": 0,
      "arcadia": {
        "class": "Application",
        "template": "generic",
        "VS_API": {
          "class": "Service_HTTPS",
          "remark": "Accepts HTTPS/TLS connections on port 443",
          "virtualAddresses": ["xxxxx"],
          "redirect80": false,
          "pool": "pool_NGINX_API",
          "policyWAF": {
            "use": "Arcadia_WAF_API_policy"
          },
          "securityLogProfiles": [{
            "bigip": "/Common/Log all requests"
          }],
          "profileTCP": {
            "egress": "wan",
            "ingress": {
              "use": "TCP_Profile"
            }
          },
          "profileHTTP": {
            "use": "custom_http_profile"
          },
          "serverTLS": {
            "bigip": "/Common/arcadia_client_ssl"
          }
        },
        "Arcadia_WAF_API_policy": {
          "class": "WAF_Policy",
          "url": "http://xxxx/root/awaf_openapi/-/raw/master/WAF/ansible/bigip/policy-api.json",
          "ignoreChanges": true
        },
        "pool_NGINX_API": {
          "class": "Pool",
          "monitors": ["http"],
          "members": [{
            "servicePort": 8080,
            "serverAddresses": ["xxxx"]
          }]
        },
        "custom_http_profile": {
          "class": "HTTP_Profile",
          "xForwardedFor": true
        },
        "TCP_Profile": {
          "class": "TCP_Profile",
          "idleTimeout": 60
        }
      }
    }
  }
}
```

The AS3 declaration provisions a separate administrative partition ('API-Prod') containing a virtual server ('VS_API'), an Advanced WAF policy ('Arcadia_WAF_API_policy'), and a pool ('pool_NGINX_API'). The Advanced WAF policy being referenced ('policy-api.json') is stored in the same GitLab repository but can be downloaded from a separate location.
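To confirm from the pipeline that the declaration was accepted, the AS3 endpoint can be read back with a GET. The following is a minimal illustrative sketch in Python; the hostname, credentials, and tenant check are placeholders, and disabling certificate verification is for lab use only:

```python
import requests

BIGIP = "https://bigip.example.com"  # placeholder management address
AUTH = ("admin", "xxxx")             # placeholder credentials

# Read back the currently deployed declaration from the AS3 endpoint
resp = requests.get(f"{BIGIP}/mgmt/shared/appsvcs/declare", auth=AUTH, verify=False)
resp.raise_for_status()
declaration = resp.json()

# The tenant created by the declaration should now be present
assert "API-Prod" in declaration, "tenant API-Prod not found in deployed declaration"
print("AS3 deployment verified: tenant API-Prod is present")
```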
{ "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "transparent", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "http://xxxx/root/awaf_openapi/-/raw/master/App/openapi3-arcadia.yaml" } ] }, "modifications": [ ] } The declarative Adv.WAF policy is referencing in turn the OpenAPI file ('openapi3-arcadia.yaml') that describes the application being protected. Executing the Ansible playbook results in the AS3 application being deployed, along with the Adv.WAF policy that is automatically configured according to the OpenAPI file. Handling learning suggestions in a CI/CD pipeline The next step in the CI/CD pipeline used for this experiment was to send legitimate traffic using the API and collect the learning suggestions generated by the Adv.WAF policy, which will allow a simple way to customize the WAF policy further for the specific application being protected. The following Ansible playbook was used to retrieve the learning suggestions: --- - hosts: bigip connection: local gather_facts: true vars: my_admin: "xxxx" my_password: "xxxx" bigip: "xxxxx" tasks: - name: Get all Policy_key/IDs for WAF policies uri: url: 'https://{{ bigip }}/mgmt/tm/asm/policies?$select=name,id' method: GET headers: "Authorization": "Basic xxxxxxxxxxx" validate_certs: no status_code: 200 return_content: yes register: waf_policies - name: Extract Policy_key/ID of Arcadia_WAF_API_policy set_fact: Arcadia_WAF_API_policy_ID="{{ item.id }}" loop: "{{ (waf_policies.content|from_json)['items'] }}" when: item.name == "Arcadia_WAF_API_policy" - name: Export learning suggestions uri: url: "https://{{ bigip }}/mgmt/tm/asm/tasks/export-suggestions" method: POST headers: "Content-Type": "application/json" "Authorization": "Basic xxxxxxxxxxx" body: "{ \"inline\": \"true\", \"policyReference\": { \"link\": \"https://{{ bigip }}/mgmt/tm/asm/policies/{{ Arcadia_WAF_API_policy_ID }}/\" } }" body_format: json validate_certs: no status_code: - 200 - 201 - 202 - name: Get learning suggestions uri: url: "https://{{ bigip }}/mgmt/tm/asm/tasks/export-suggestions" method: GET headers: "Authorization": "Basic xxxxxxxxx" validate_certs: no status_code: 200 register: result - name: Print learning suggestions debug: var=result A sample learning suggestions output is shown below: "json": { "items": [ { "endTime": "xxxxxxxxxxxxx", "id": "ZQDaRVecGeqHwAW1LDzZTQ", "inline": true, "kind": "tm:asm:tasks:export-suggestions:export-suggestions-taskstate", "lastUpdateMicros": 1599953296000000.0, "result": { "suggestions": [ { "action": "add-or-update", "description": "Enable Evasion Technique", "entity": { "description": "Directory traversals" }, "entityChanges": { "enabled": true }, "entityType": "evasion" }, { "action": "add-or-update", "description": "Enable HTTP Check", "entity": { "description": "Check maximum number of parameters" }, "entityChanges": { "enabled": true }, "entityType": "http-protocol" }, { "action": "add-or-update", "description": "Enable HTTP Check", "entity": { "description": "No Host header in HTTP/1.1 request" }, "entityChanges": { "enabled": true }, "entityType": "http-protocol" }, { "action": "add-or-update", "description": "Enable enforcement of policy violation", "entity": { "name": 
"VIOL_REQUEST_MAX_LENGTH" }, "entityChanges": { "alarm": true, "block": true }, "entityType": "violation" } Incorporating the learning suggestions in the Adv.WAF policy can be done by simple copy&pasting the self-contained learning suggestions blocks into the "modifications" list of the Adv.WAF policy: { "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "transparent", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "http://xxxxxx/root/awaf_openapi/-/raw/master/App/openapi3-arcadia.yaml" } ] }, "modifications": [ { "action": "add-or-update", "description": "Enable Evasion Technique", "entity": { "description": "Directory traversals" }, "entityChanges": { "enabled": true }, "entityType": "evasion" } ] } Enhancing Advanced WAF policy posture by using the F5 WAF tester The F5 WAF tester is a tool that generates known attacks and checks the response of the WAF policy. For example, running the F5 WAF tester against a policy that has a "transparent" enforcement mode will cause the tests to fail as the attacks will not be blocked. The F5 WAF tester can suggest possible enhancement of the policy, in this case the change of the enforcement mode. An abbreviated sample output of the F5 WAF Tester: ................................................................ "100000023": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt (AWS Metadata Server)", "results": { "parameter": { "expected_result": { "type": "signature", "value": "200018040" }, "pass": false, "reason": "ASM Policy is not in blocking mode", "support_id": "" } }, "system": "All systems" }, "100000024": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt - Local network IP range 10.x.x.x", "results": { "request": { "expected_result": { "type": "signature", "value": "200020201" }, "pass": false, "reason": "ASM Policy is not in blocking mode", "support_id": "" } }, "system": "All systems" } }, "summary": { "fail": 48, "pass": 0 } Changing the enforcement mode from "transparent" to "blocking" can easily be done by editing the same Adv. WAF policy file: { "policy": { "name": "policy-api-arcadia", "description": "Arcadia API", "template": { "name": "POLICY_TEMPLATE_API_SECURITY" }, "enforcementMode": "blocking", "server-technologies": [ { "serverTechnologyName": "MySQL" }, { "serverTechnologyName": "Unix/Linux" }, { "serverTechnologyName": "MongoDB" } ], "signature-settings": { "signatureStaging": false }, "policy-builder": { "learnOnlyFromNonBotTraffic": false }, "open-api-files": [ { "link": "http://xxxxx/root/awaf_openapi/-/raw/master/App/openapi3-arcadia.yaml" } ] }, "modifications": [ { "action": "add-or-update", "description": "Enable Evasion Technique", "entity": { "description": "Directory traversals" }, "entityChanges": { "enabled": true }, "entityType": "evasion" } ] } A successful run will will be achieved when all the attacks will be blocked. ......................................... 
"100000023": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt (AWS Metadata Server)", "results": { "parameter": { "expected_result": { "type": "signature", "value": "200018040" }, "pass": true, "reason": "", "support_id": "17540898289451273964" } }, "system": "All systems" }, "100000024": { "CVE": "", "attack_type": "Server Side Request Forgery", "name": "SSRF attempt - Local network IP range 10.x.x.x", "results": { "request": { "expected_result": { "type": "signature", "value": "200020201" }, "pass": true, "reason": "", "support_id": "17540898289451274344" } }, "system": "All systems" } }, "summary": { "fail": 0, "pass": 48 } Conclusion By adding the Advanced WAF policy into a CI/CD pipeline, the WAF policy can be integrated in the lifecycle of the application it is protecting, allowing for continuous testing and improvement of the security posture before it is deployed to production. The flexible model of AS3 and declarative Advanced WAF allows the separation of roles and responsibilities between NetOps and SecOps, while providing an easy way for tuning the policy to the specifics of the application being protected. Links UDF lab environment link. Short instructional video link.2.1KViews3likes2CommentsWhen user goes through LB the server page has stripped information
I have created a pretty simple round-robin load balancing setup for a user with three servers. As part of this, I also have DNS LB in place that sends traffic to two VIPs connected to the three nodes in a pool I created on my LTM F5. The user accesses the LB DNS URL I provide via Https://<>.com > VIP > Pool > Nodes. There is a certificate applied to the clientssl and serverssl profiles attached to the VIPs.

The user is able to get to their backend servers/nodes when going through the load balancer, but we have come across an interesting issue. When the user goes through the F5, the server dashboard page they usually see is stripped of information. Typically, there would be tiles shown on the server dashboard, but they get just the basic UI and none of the tiles. When the user goes directly to their server, all the information/tiles are shown as normal. I have never experienced this problem before and am not sure how to prove whether the F5 is causing the issue or how it is happening. Any insight would be greatly appreciated! *The attached file shows what I'm explaining.

Introduction to DevCentral Basics
Starting out as a new user to F5 technology can be a daunting task; where do you start? The DevCentral Basics program is DevCentral’s way to ease your transition from new user to administrator without filling your day with deployment guides or step-by-step how-to articles. If you’re new to F5 and BIG-IP, this is the place to start. We scoured the internet search engines to find out what you're asking so we can answer those questions first and provide you a comfortable platform for continuing education. To get you started, here is some basic documentation for you, the new F5 admin:

What Is BIG-IP?
To regular users this question doesn't make sense, but it's a very good question for those new to F5. Here we cover exactly what F5's BIG-IP is and how the ecosystem works together.

What is Load Balancing?
If you want to balance your application traffic load across a bunch of physical servers and make those servers look like one great big server to the outside world, this article will get you started in the right direction.

What are Application Delivery Controllers? Part 1 & Part 2
ADCs are what all good load balancers grew up to be. They can help with application scalability, high availability, and predictability of your website. We give you some of the basic terminology and functionality of these application lifelines.

Your First 30 Days With BIG-IP
You've received one or more boxes in your office. Or maybe you received a license for a VE. Now what? Let's check out what you can do with your license and walk through the interface so you know where everything is. This is an overview of what to expect as a new F5 admin, and it takes some of the intimidation factor away when you log in to the management console for the very first time.

Contacting F5 Support
We'll cover your basic support questions and what to do when there's a problem, so you can get in contact with the correct resources as quickly as possible. From using DevCentral and AskF5 to using iHealth, we'll show you where your best support resources are.

Troubleshooting BIG-IP - The Basics
Divide and conquer is your best method to find out why you're receiving server resets or having a tough time understanding why you keep bouncing between web servers. Reducing complexity and starting with traffic basics lets you step through issues and get up and running.

What is DNS?
Imagine how difficult it would be to use the Internet if you had to remember dozens of number combinations to do anything. Learn how the Internet’s phonebook, DNS, translates the domain names you type into a browser into an IP address so you can find the resource you're looking for on the Internet.

Once you're comfortable with BIG-IP, you can move on and branch into learning the various products that comprise the BIG-IP architecture. As we update and build new features for BIG-IP, we'll create new DevCentral Basics content and link to further reading so you don't have to learn BIG-IP through deployment guides. We'll give you the why; deployment guides will tell you how.

What is an Application Delivery Controller - Part I
A Little History

Application delivery got its start in the form of network-based load balancing hardware. It is the essential foundation on which Application Delivery Controllers (ADCs) operate. The second iteration of purpose-built load balancing (following application-based proprietary systems) materialized in the form of network-based appliances. These are the true founding fathers of today's ADCs. Because these devices were application-neutral and resided outside of the application servers themselves, they could load balance using straightforward network techniques. In essence, these devices would present a "virtual server" address to the outside world, and when users attempted to connect, they would forward the connection to the most appropriate real server, performing bi-directional network address translation (NAT).

Figure 1: Network-based load balancing appliances.

With the advent of virtualization and cloud computing, the third iteration of ADCs arrived as software-delivered virtual editions intended to run on hypervisors. Virtual editions of application delivery services have the same breadth of features as those that run on purpose-built hardware and remove much of the complexity from moving application services between virtual, cloud, and hybrid environments. They allow organizations to quickly and easily spin up application services in private or public cloud environments.

Basic Application Delivery Terminology

It would certainly help if everyone used the same lexicon; unfortunately, every vendor of load balancing devices (and, in turn, ADCs) seems to use different terminology. With a little explanation, however, the confusion surrounding this issue can easily be alleviated.

Node, Host, Member, and Server

Most ADCs have the concept of a node, host, member, or server; some have all four, but they mean different things. There are two basic concepts that they all try to express. One concept—usually called a node or server—is the idea of the physical or virtual server itself that will receive traffic from the ADC. This is synonymous with the IP address of the physical server and, in the absence of an ADC, would be the IP address that the server name (for example, www.example.com) would resolve to. We will refer to this concept as the host.

The second concept is a member (sometimes, unfortunately, also called a node by some manufacturers). A member is usually a little more defined than a server/node in that it includes the TCP port of the actual application that will be receiving traffic. For instance, a server named www.example.com may resolve to an address of 172.16.1.10, which represents the server/node, and may have an application (a web server) running on TCP port 80, making the member address 172.16.1.10:80. Simply put, the member includes the definition of the application port as well as the IP address of the physical server. We will refer to this as the service.

Why all the complication? Because the distinction between a physical server and the application services running on it allows the ADC to individually interact with the applications rather than the underlying hardware or hypervisor. A host (172.16.1.10) may have more than one service available (HTTP, FTP, DNS, and so on). By defining each application uniquely (172.16.1.10:80, 172.16.1.10:21, and 172.16.1.10:53), the ADC can apply unique load balancing and health monitoring based on the services instead of the host.
However, there are still times when being able to interact with the host (like low-level health monitoring or when taking a server offline for maintenance) is extremely convenient. Most load balancing-based technology uses some concept to represent the host, or physical server, and another to represent the services available on it—in this case, simply host and services.

Pool, Cluster, and Farm

Load balancing allows organizations to distribute inbound application traffic across multiple back-end destinations, including cloud deployments. It is therefore a necessity to have the concept of a collection of back-end destinations. Pools, as we will refer to them (also known as clusters or farms), are collections of similar services available on any number of hosts. For instance, all services that offer the company web page would be collected into a pool called "company web page" and all services that offer e-commerce services would be collected into a pool called "e-commerce." The key element here is that all systems have a collective object that refers to "all similar services" and makes it easier to work with them as a single unit. This collective object—a pool—is almost always made up of services, not hosts.

Virtual Server

Although not always the case, today the term virtual server usually means a server hosting virtual machines. It is important to note that, like the definition of services, a virtual server usually includes the application port as well as the IP address. The term "virtual service" would be more in keeping with the IP:Port convention; but because most vendors, ADC and cloud alike, use virtual server, this article uses virtual server as well.

Putting It All Together

Putting all of these concepts together makes up the basic steps in load balancing. The ADC presents virtual servers to the outside world. Each virtual server points to a pool of services that reside on one or more physical hosts.

Figure 2: Application delivery comprises four basic concepts—virtual servers, pool/cluster, services, and hosts.

While the diagram above may not be representative of a real-world deployment, it does provide the elemental structure for continuing a discussion about application delivery basics.
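To make the virtual server, pool, and service concepts concrete, here is a toy sketch of the selection step a load balancer performs. It is purely conceptual, not how any particular ADC is implemented:

```python
from itertools import cycle

# A pool is a collection of services (host:port pairs), not hosts
pool_company_web = cycle([
    ("172.16.1.10", 80),
    ("172.16.1.11", 80),
    ("172.16.1.12", 80),
])

def handle_connection(client_addr):
    """Pick the next service in the pool for the 'company web' virtual server."""
    host, port = next(pool_company_web)  # simple round-robin selection
    print(f"{client_addr} -> virtual server -> {host}:{port}")

handle_connection("203.0.113.5")
handle_connection("203.0.113.6")
```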
ps

Next Steps
Read What is Load Balancing? if you haven't already, and check out What is an Application Delivery Controller - Part II.

APM Configuration to Support Duo MFA using iRule

Overview

BIG-IP APM has supported Duo as an MFA provider for a long time with RADIUS-based integration. Recently, Duo has added support for Universal Prompt, which uses the OpenID Connect (OIDC) protocol to provide two-factor authentication. To integrate APM as an OIDC client and resource server, and Duo as an Identity Provider (IdP), Duo requires the user’s logon name and custom parameters to be sent for the authentication and token requests. This guide describes the configuration required on APM to enable Duo MFA integration using an iRule. The iRule addresses the custom parameter challenges by generating the needed custom values and saving them in session variables, which the OAuth Client agent then uses to perform MFA with Duo. This integration procedure is supported on BIG-IP versions 13.1, 14.1x, 15.1x, and 16.x.

To integrate Duo MFA with APM, complete the following tasks:
1. Choose deployment type: per-request or per-session
2. Configure credentials and policies for MFA on the Duo web portal
3. Create OAuth objects on the BIG-IP system
4. Configure the iRule
5. Create the appropriate access policy/policies on the BIG-IP system
6. Apply policy/policies and iRule to the APM virtual server

Choose deployment type

APM supports two different types of policies for performing authentication functions.

Per-session policies: Per-session policies provide authentication and authorization functions that occur only at the beginning of a user’s session. These policies are compatible with most APM use cases such as VPN, webtop portal, remote desktop, federation IdP, etc.

Per-request policies: Per-request policies provide dynamic authentication and authorization functionality that may occur at any time during a user’s session, such as step-up authentication or auditing functions only for certain resources. These policies are only compatible with Identity Aware Proxy and Web Access Management use cases and cannot be used with VPN or webtop portals.

This guide contains information about setting up both policy types.

Prerequisites

Ensure the BIG-IP system has DNS and internet connectivity to contact Duo directly for validating the user's OAuth tokens.

Configure credentials and policies for MFA on the Duo web portal

Before you can protect your F5 BIG-IP APM web application with Duo, you will first need to sign up for a Duo account.
1. Log in to the Duo Admin Panel and navigate to Applications.
2. Click Protect an application.

Figure 1: Duo Admin Panel – Protect an Application

3. Locate the entry for F5 BIG-IP APM Web in the applications list and click Protect to get the Client ID, Client secret, and API hostname. You will need this information to configure objects on APM.

Figure 2: Duo Admin Panel – F5 BIG-IP APM Web

4. As Duo is used as a secondary authentication factor, the user’s logon name is sent along with the authentication request. Depending on your security policy, you may want to pre-provision users in Duo, or you may allow them to self-provision to set their preferred authentication type when they first log on. To add users to the Duo system, navigate to the Dashboard page and click the Add New... > Add User button. A Duo username should match the user's primary authentication username. Refer to the https://duo.com/docs/enrolling-users link for the different methods of user enrollment.

Refer to Duo Universal Prompt for additional information on Duo’s two-factor authentication.
Create OAuth objects on the BIG-IP system

Create a JSON web key

When APM is configured to act as an OAuth client or resource server, it uses JSON web keys (JWKs) to validate the JSON web tokens it receives from Duo. To create a JSON web key:
1. On the Main tab, select Access > Federation > JSON Web Token > Key Configuration. The Key Configuration screen opens.
2. To add a new key configuration, click Create.
3. In the ID and Shared Secret fields, enter the Client ID and Client Secret values respectively, obtained from Duo when protecting the application.
4. In the Type list, select the cryptographic algorithm used to sign the JSON web key.

Figure 3: Key Configuration screen

5. Click Save.

Create a JSON web token

As an OAuth client or resource server, APM validates the JSON web tokens (JWT) it receives from Duo. To create a JSON web token:
1. On the Main tab, select Access > Federation > JSON Web Token > Token Configuration. The Token Configuration screen opens.
2. To add a new token configuration, click Create.
3. In the Issuer field, enter the API hostname value obtained from Duo when protecting the application.
4. In the Signing Algorithms area, select from the Available list and populate the Allowed and Blocked lists.
5. In the Keys (JWK) area, select the previously configured JSON web key in the allowed list of keys.

Figure 4: Token Configuration screen

6. Click Save.

Configure Duo as an OAuth provider

APM uses the OAuth provider settings to get URIs on the external OAuth authorization server for JWT web tokens. To configure an OAuth provider:
1. On the Main tab, select Access > Federation > OAuth Client / Resource Server > Provider. The Provider screen opens.
2. To add a provider, click Create.
3. In the Name field, type a name for the provider.
4. From the Type list, select Custom.
5. For Token Configuration (JWT), select a configuration from the list.
6. In the Authentication URI field, type the URI on the provider where APM should redirect the user for authentication. The hostname is the same as the API hostname in the Duo application.
7. In the Token URI field, type the URI on the provider where APM can get a token. The hostname is the same as the API hostname in the Duo application.

Figure 5: OAuth Provider screen

8. Click Finished.

Configure Duo server for APM

The OAuth Server settings specify the OAuth provider and the role that Access Policy Manager (APM) plays with that provider. They also set the Client ID, Client Secret, and client SSL certificates that APM uses to communicate with the provider. To configure a Duo server:
1. On the Main tab, select Access > Federation > OAuth Client / Resource Server > OAuth Server. The OAuth Server screen opens.
2. To add a server, click Create.
3. In the Name field, type a name for the Duo server.
4. From the Mode list, select how you want the APM to be configured.
5. From the Type list, select Custom.
6. From the OAuth Provider list, select the Duo provider.
7. From the DNS Resolver list, select a DNS resolver (or click the plus (+) icon, create a DNS resolver, and then select it).
8. In the Token Validation Interval field, type a number. In a per-request policy subroutine configured to validate the token, the subroutine repeats at this interval or the expiry time of the access token, whichever is shorter.
9. In the Client Settings area, paste the Client ID and Client secret you obtained from Duo when protecting the application.
10. From the Client's ServerSSL Profile Name, select a server SSL profile.

Figure 6: OAuth Server screen

11. Click Finished.
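Before configuring the requests that carry them, it may help to see the shape of the client_assertion JWT that APM presents to Duo's token endpoint; the iRule configured later builds this token at runtime on the BIG-IP. The following standalone sketch uses the PyJWT library, with placeholder values, purely for understanding and offline troubleshooting:

```python
import time
import uuid
import jwt  # PyJWT

CLIENT_ID = "<Duo Client ID>"             # placeholder
CLIENT_SECRET = "<Duo Client Secret>"     # placeholder: the JWK shared secret
API_HOST = "https://api-duohostname.com"  # placeholder Duo API hostname

# Same claims the iRule's createJwtToken proc assembles
payload = {
    "sub": CLIENT_ID,
    "iss": CLIENT_ID,
    "aud": f"{API_HOST}/oauth/v1/token",
    "exp": int(time.time()) + 900,  # 15-minute expiry
    "jti": uuid.uuid4().hex,        # random token identifier
}
print(jwt.encode(payload, CLIENT_SECRET, algorithm="HS512"))
```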
Configure an auth-redirect-request and a token-request

Requests specify the HTTP method, parameters, and headers to use for the specific type of request. An auth-redirect-request tells Duo where to redirect the end user, and a token-request accesses the authorization server to obtain an access token.

To configure an auth-redirect-request:
1. On the Main tab, select Access > Federation > OAuth Client / Resource Server > Request. The Request screen opens.
2. To add a request, click Create.
3. In the Name field, type a name for the request.
4. For the HTTP Method, select GET.
5. For the Type, select auth-redirect-request.
6. As shown in Figure 7, specify the list of GET parameters to be sent:
- request parameter with a value depending on the type of policy: for a per-request policy, %{subsession.custom.jwt_duo}; for a per-session policy, %{session.custom.jwt_duo}
- client_id parameter with type client-id
- response_type parameter with type response-type

Figure 7: Request screen with auth-redirect-request (use "subsession.custom…" for per-request or "session.custom…" for per-session)

7. Click Finished.

To configure a token-request:
1. On the Main tab, select Access > Federation > OAuth Client / Resource Server > Request. The Request screen opens.
2. To add a request, click Create.
3. In the Name field, type a name for the request.
4. For the HTTP Method, select POST.
5. For the Type, select token-request.
6. As shown in Figure 8, specify the list of POST parameters to be sent:
- client_assertion parameter with a value depending on the type of policy: for a per-request policy, %{subsession.custom.jwt_duo_token}; for a per-session policy, %{session.custom.jwt_duo_token}
- client_assertion_type parameter with value urn:ietf:params:oauth:client-assertion-type:jwt-bearer
- grant_type parameter with type grant-type
- redirect_uri parameter with type redirect-uri

Figure 8: Request screen with token-request (use "subsession.custom…" for per-request or "session.custom…" for per-session)

7. Click Finished.

Configure the iRule

iRules give you the ability to customize and manage your network traffic. Configure an iRule that creates the required sub-session variables and usernames for Duo integration.

Note: This iRule has sections for both per-request and per-session policies and can be used for either type of deployment.

To configure an iRule:
1. On the Main tab, click Local Traffic > iRules.
2. To create an iRule, click Create.
3. In the Name field, type a name for the iRule.
4. Copy the sample code given below and paste it in the Definition field. Replace the following variables with values specific to the Duo application:
- <Duo Client ID> in the getClientId function with the Duo application ID.
- <Duo API Hostname> in the createJwtToken function with the API hostname. For example, https://api-duohostname.com/oauth/v1/token.
- <JSON Web Key> in the getJwkName function with the configured JSON web key.

Note: The iRule ID here is set as JWT_CREATE. You can rename the ID as desired. You specify this ID in the iRule Event agent in the Visual Policy Editor.

Note: The variables used in the example below are global, which may affect your performance. Refer to the K95240202: Understanding iRule variable scope article for further information on global variables, and determine whether to use local variables for your implementation.
```tcl
# Generate a random string of upper/lowercase letters, used as the JWT "jti" claim
proc randAZazStr {len} {
    return [subst [string repeat {[format %c [expr {int(rand() * 26) + (rand() > .5 ? 97 : 65)}]]} $len]]
}

proc getClientId {} {
    return "<Duo Client ID>"
}

proc getExpiryTime {} {
    set exp [clock seconds]
    set exp [expr {$exp + 900}]
    return $exp
}

proc getJwtHeader {} {
    return "{\"alg\":\"HS512\",\"typ\":\"JWT\"}"
}

proc getJwkName {} {
    return "<JSON Web Key>"
    # e.g. return "/Common/duo_jwk"
}

# Build the signed JWT sent in the authorization redirect (includes the user's logon name)
proc createJwt {duo_uname} {
    set header [call getJwtHeader]
    set exp [call getExpiryTime]
    set client_id [call getClientId]
    set redirect_uri "https://"
    set redirect [ACCESS::session data get "session.server.network.name"]
    append redirect_uri $redirect
    append redirect_uri "/oauth/client/redirect"
    set payload "{\"response_type\": \"code\",\"scope\":\"openid\",\"exp\":${exp},\"client_id\":\"${client_id}\",\"redirect_uri\":\"${redirect_uri}\",\"duo_uname\":\"${duo_uname}\"}"
    set jwt_duo [ACCESS::oauth sign -header $header -payload $payload -alg HS512 -key [call getJwkName]]
    return $jwt_duo
}

# Build the signed client_assertion JWT sent in the token request
proc createJwtToken {} {
    set header [call getJwtHeader]
    set exp [call getExpiryTime]
    set client_id [call getClientId]
    set aud "<Duo API Hostname>/oauth/v1/token"
    # Example: set aud "https://api-duohostname.com/oauth/v1/token"
    set jti [call randAZazStr 32]
    set payload "{\"sub\": \"${client_id}\",\"iss\":\"${client_id}\",\"aud\":\"${aud}\",\"exp\":${exp},\"jti\":\"${jti}\"}"
    set jwt_duo [ACCESS::oauth sign -header $header -payload $payload -alg HS512 -key [call getJwkName]]
    return $jwt_duo
}

when ACCESS_POLICY_AGENT_EVENT {
    set irname [ACCESS::policy agent_id]
    if { $irname eq "JWT_CREATE" } {
        set ::duo_uname [ACCESS::session data get "session.logon.last.username"]
        ACCESS::session data set session.custom.jwt_duo [call createJwt $::duo_uname]
        ACCESS::session data set session.custom.jwt_duo_token [call createJwtToken]
    }
}

when ACCESS_PER_REQUEST_AGENT_EVENT {
    set irname [ACCESS::perflow get perflow.irule_agent_id]
    if { $irname eq "JWT_CREATE" } {
        set ::duo_uname [ACCESS::session data get "session.logon.last.username"]
        ACCESS::perflow set perflow.custom [call createJwt $::duo_uname]
        ACCESS::perflow set perflow.scratchpad [call createJwtToken]
    }
}
```

Figure 9: iRule screen

5. Click Finished.

Create the appropriate access policy/policies on the BIG-IP system

Per-request policy

Skip this section for a per-session type deployment. The per-request policy is used to perform secondary authentication with Duo. Configure the access policies through the Access menu, using the Visual Policy Editor. The per-request access policy must have a subroutine with an iRule Event, a Variable Assign, and an OAuth Client agent that requests authorization and tokens from an OAuth server. You may use other per-request policy items, such as URL branching or Client Type, to call Duo only for certain target URIs.

Figure 10 shows a subroutine named duosubroutine in the per-request policy that handles Duo MFA authentication.

Figure 10: Per-request policy in Visual Policy Editor

Configuring the iRule Event agent

The iRule Event agent specifies the iRule ID to be executed for Duo integration. In the ID field, type the iRule ID as configured in the iRule.

Figure 11: iRule Event agent in Visual Policy Editor

Configuring the Variable Assign agent

The Variable Assign agent specifies the variables for token and redirect requests and assigns a value for Duo MFA in a subroutine. This is required only for a per-request type deployment. Add the sub-session variables as custom variables and assign their custom Tcl expressions as shown in Figure 12:
```
subsession.custom.jwt_duo_token = return [mcget {perflow.scratchpad}]
subsession.custom.jwt_duo = return [mcget {perflow.custom}]
```

Figure 12: Variable Assign agent in Visual Policy Editor

Configuring the OAuth Client agent

An OAuth Client agent requests authorization and tokens from the Duo server. Specify the OAuth parameters as shown in Figure 13:
- In the Server list, select the Duo server to which the OAuth client directs requests.
- In the Authentication Redirect Request list, select the auth-redirect-request configured earlier.
- In the Token Request list, select the token-request configured earlier.

Some deployments may not need the additional information provided by OpenID Connect. You could, in that case, disable it.

Figure 13: OAuth Client agent in Visual Policy Editor

Per-session policy

Configure the per-session policy as appropriate for your chosen deployment type.

Per-request: The per-session policy must contain at least one logon page to set the username variable in the user’s session. Preferably it should also perform some type of primary authentication. This validated username is used later in the per-request policy.

Per-session: The per-session policy is used for all authentication. A per-request policy is not used.

Figures 14a and 14b show a per-session policy that runs when a client initiates a session. Depending on the actions you include in the access policy, it can authenticate the user and perform actions that populate session variables with data for use throughout the session.

Figure 14a: Per-session policy in Visual Policy Editor performs both primary authentication and Duo authentication (for the per-session use case)

Figure 14b: Per-session policy in Visual Policy Editor performs primary authentication only (for the per-request use case)

Apply policy/policies and iRule to the APM virtual server

Finally, apply the per-request policy, per-session policy, and iRule to the APM virtual server. You assign iRules as a resource to the virtual server that users connect to. Configure the virtual server’s default pool to the protected local web resource.

Apply policy/policies to the virtual server

Per-request policy

To attach policies to the virtual server:
1. On the Main tab, click Local Traffic > Virtual Servers.
2. Select the virtual server.
3. In the Access Policy section, select the policy you created.
4. Click Finished.

Figure 15: Access Policy section in Virtual Server (per-request policy)

Per-session policy

Figure 16 shows the Access Policy section in Virtual Server when the per-session policy is deployed.

Figure 16: Access Policy section in Virtual Server (per-session policy)

Apply iRule to the virtual server

To attach the iRule to the virtual server:
1. On the Main tab, click Local Traffic > Virtual Servers.
2. Select the virtual server.
3. Select the Resources tab.
4. Click Manage in the iRules section.
5. Select the iRule from the Available list and add it to the Enabled list.
6. Click Finished.

F5 BIG-IP and NetApp StorageGRID - Providing Fast and Scalable S3 API for AI apps
F5 BIG-IP, an industry-leading ADC solution, can provide load balancing services for HTTPS servers, with full security applied in flight and performance levels to meet any enterprise’s capacity targets. Specific to the S3 API, the object storage and retrieval protocol that rides upon HTTPS, an aligned partnering solution exists from NetApp, which allows a large-scale set of S3 API targets to ingest and serve objects. Automatic backend synchronization allows any node to be offered up as a target by a server load balancer like BIG-IP. This allows overall storage node utilization to be optimized across the node set and scaled performance to reach the highest S3 API bandwidth levels, all while offering high availability to S3 API consumers.

S3-compatible storage is becoming popular for AI applications due to its superior performance over traditional protocols such as NFS or CIFS, as well as enabling repatriation of data from the cloud to on-prem. These scenarios involve large amounts of data, which drives the requirement for new levels of scalability and performance; S3-compatible object storage such as NetApp StorageGRID is purpose-built to reach such levels.

Sample BIG-IP and StorageGRID Configuration

This document is based upon tests and measurements using the following lab configuration. All devices in the lab were virtual machine-based offerings.

The S3 service to be projected to the outside world, depicted in the above diagram and delivered to the client via the external network, will use a BIG-IP virtual server (VS) tied to an origin pool of three large-capacity StorageGRID nodes. The BIG-IP maintains the integrity of the NetApp nodes through frequent HTTP-based health checks. Should an unhealthy node be detected, it will be dropped from the list of active pool members. When content is written via the S3 protocol to any node in the pool, the other members are synchronized to serve up content should they be selected by BIG-IP for future read requests.

The key recommendations and observations in building the lab include:
- Set up a local certificate authority such that all nodes can be trusted by the BIG-IP. Typically the local CA-signed certificate will incorporate every node’s FQDN and IP address within the listed subject alternative names (SAN) to make the backend solution streamlined with one single certificate.
- Different F5 profiles, such as FastL4 or FastHTTP, can be selected to reach the right tradeoff between the absolute capacity of stateful traffic load balanced versus rich layer 7 functions like iRules or authentication.
- Modern techniques such as multi-part uploads or using HTTP ranges for downloads can take large objects and concurrently move smaller pieces across the load balancer, lowering total transaction times and spreading work over more CPU cores.

The S3 protocol, at its core, is a set of REST API calls. To facilitate testing, the widely used S3Browser (www.s3browser.com) was used to quickly and intuitively create S3 buckets on the NetApp offering and send/retrieve objects (files) through the BIG-IP load balancer.

Set up the BIG-IP and StorageGRID Systems

The StorageGRID solution is an array of storage nodes, provisioned with the help of an administrative host, the “Grid Manager”. For interactive users, no thick client is required, as on-board web services allow a streamlined experience entirely through an Internet browser.
The following is an example of Grid Manager, taken from a Chrome browser; one sees that the three storage nodes set up have been successfully added.

The load balancer, in our case the BIG-IP, is set up with a virtual server to support HTTPS traffic and distributes that traffic, which is S3 object storage traffic, to the three StorageGRID nodes. The following screenshot demonstrates that the BIG-IP is set up in a standard HA (active-passive pair) configuration and that the three pool members are healthy (green, health checks are fine) and receiving/sending S3 traffic, as the byte counts in the image are non-zero. On the internal side of the BIG-IP, TCP port 18082 is being used for S3 traffic.

To test the solution, including features such as multi-part uploads and downloads, a popular S3 tool, S3Browser, was downloaded and used. The following shows the entirety of the S3Browser setup. Simply create an account (StorageGRID-Account-01 in our example) and point the REST API endpoint at the BIG-IP virtual server that is acting as the secure front door for our pool of NetApp nodes. The S3 Access Key ID and Secret values are generated at turn-up time of the NetApp appliances.

All S3 traffic will, of course, be SSL/TLS encrypted. BIG-IP will intercept the SSL traffic (high-speed decrypt) and then re-encrypt when proxying the traffic to a selected origin pool member. Other valid load balancer setups exist; one might include an “offload” approach to SSL, whereby the S3 nodes safely co-located in a data center may prefer to receive non-SSL HTTP S3 traffic. This may see an overall performance improvement in terms of peak bandwidth per storage node, but it comes at the tradeoff of security considerations.

Experimenting with S3 Protocol and Load Balancing

With all the elements in place to start understanding the behavior of S3 and spreading traffic across NetApp nodes, a quick test involved creating an S3 bucket and placing some objects in that new bucket. Buckets are logical collections of objects, conceptually not that different from folders or directories in file systems. In fact, an S3 bucket could even be mounted as a folder in an operating system such as Linux. In their simplest form, most commonly, buckets can simply serve as high-capacity, performant storage and retrieval targets for similarly themed structured or unstructured data.

In the first test, we created a new bucket (“audio-clip-bucket”) and uploaded four sample files to the new bucket using S3Browser. We then zeroed the statistics for each pool member on the BIG-IP, to see if even this small upload would spread S3 traffic across more than a single NetApp device. Immediately after the upload, the counters reflect that two StorageGRID nodes were selected to receive S3 transactions.
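The same first test can be scripted instead of driven through S3Browser. Below is a minimal boto3 sketch pointed at the BIG-IP virtual server; the endpoint, credentials, CA bundle, and file names are placeholders:

```python
import boto3

# S3 client pointed at the BIG-IP virtual server fronting StorageGRID
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # placeholder BIG-IP VS address
    aws_access_key_id="<access-key>",            # placeholder, from StorageGRID
    aws_secret_access_key="<secret-key>",        # placeholder, from StorageGRID
    verify="local-ca.pem",                       # placeholder lab CA bundle for the VS cert
)

s3.create_bucket(Bucket="audio-clip-bucket")

# Upload sample objects; BIG-IP spreads the resulting S3 traffic across pool members
for name in ["clip-1.mp3", "clip-2.mp3", "clip-3.mp3", "clip-4.mp3"]:
    s3.upload_file(name, "audio-clip-bucket", name)

print(s3.list_objects_v2(Bucket="audio-clip-bucket").get("KeyCount"))
```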
Richly detailed, per-transaction visibility can be obtained by leveraging the F5 SSL Orchestrator (SSLO) feature on the BIG-IP, whereby copies of the bi-directional S3 traffic decrypted within the load balancer can be sent to packet loggers, analytics tools, or even protocol analyzers like Wireshark. The BIG-IP also has an on-board analytics tool, Application Visibility and Reporting (AVR), which can provide some details on the nuances of the S3 traffic being proxied.

AVR demonstrates the following characteristics of the above traffic, a simple bucket creation and upload of four objects. With AVR, one can see the URL values used by S3, which include the bucket name itself as well as transactions incorporating the object names as URLs. The HTTP methods used included both GETs and PUTs. The use of HTTP PUT is expected when creating a new bucket.

S3 is not governed by a typical standards body document, such as an IETF Request for Comments (RFC), but rather has evolved out of AWS and their use of S3 since 2006. For details around S3 API characteristics and nomenclature, this site can be referenced. For example, the expected syntax for creating a bucket is provided, including the fact that it should be an HTTP PUT to the root (/) URL target, with the bucket configuration parameters, including the name, provided within the HTTP transaction body.

Achieving High Performance S3 with BIG-IP and StorageGRID

A common concern with protocols such as HTTP is head-of-line blocking, where one large, lengthy transaction blocks subsequent desired, queued transactions. This is one of the reasons for parallelism in HTTP, where loading 30 or more objects to paint a web page will often utilize two, four, or even more concurrent TCP sessions. Another performance issue when dealing with very large transactions is that, without parallelism, even the most performant networks will see an established TCP session reach a maximum congestion window (CWND) where no more segments may be in flight until new TCP ACKs arrive back. Advanced TCP options like TCP exponential windowing or TCP SACK can help, but regardless, the achievable bandwidth of any one TCP session is bounded and may also frequently task only one core in multi-core CPUs.

With the BIG-IP serving as the intermediary, large S3 transactions may default to “multi-part” uploads and downloads. The larger objects become a series of smaller objects that can conveniently be load balanced by BIG-IP across the entire cluster of NetApp nodes. As displayed in the following diagram, we are asking for multi-part uploads to kick in for objects larger than 5 megabytes.

After uploading a 20-megabyte file (technically, 20,000,000 bytes), the BIG-IP shows the traffic distributed across multiple NetApp nodes to the tune of 160.9 million bits. The incoming bits, incoming from the perspective of the origin pool members, confirm the delivery of the object with a small amount of protocol overhead (bits divided by eight to reach bytes).

The value of load balancing manageable chunks of very large objects will pay dividends over time with faster overall transaction completion times, due to the spreading of traffic across NetApp nodes, more TCP sessions reaching high congestion window values, and no single-core bottlenecks in multi-core equipment.
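When driving uploads from code rather than S3Browser, the same multi-part behavior can be requested explicitly. A short boto3 sketch follows, reusing a client like the one in the earlier example; the 5 MiB threshold mirrors the lab setting above, and the concurrency value is illustrative:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Client pointed at the BIG-IP virtual server, as in the previous sketch
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # placeholder BIG-IP VS address
    aws_access_key_id="<access-key>",            # placeholder
    aws_secret_access_key="<secret-key>",        # placeholder
    verify="local-ca.pem",                       # placeholder lab CA bundle
)

# Split anything over 5 MiB into 5 MiB parts and upload parts concurrently;
# each part becomes a separate request the BIG-IP can balance across nodes.
config = TransferConfig(
    multipart_threshold=5 * 1024 * 1024,
    multipart_chunksize=5 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file("large-object.bin", "audio-clip-bucket", "large-object.bin",
               Config=config)
```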
Tuning BIG-IP for High Performance S3 Service Delivery

The F5 BIG-IP offers a set of different profiles that its Local Traffic Manager (LTM) module, the heart of the server load balancing function, can run. The most performant profile in terms of attainable traffic load is the “FastL4” profile. This, and other profiles such as “OneConnect” or “FastHTTP”, can be tied to a virtual server, and details around each profile can be found within the BIG-IP GUI.

The FastL4 profile can increase virtual server performance and throughput for supported platforms by using the embedded Packet Velocity Acceleration (ePVA) chip to accelerate traffic. The ePVA chip is a hardware acceleration field programmable gate array (FPGA) that delivers high-performance L4 throughput by offloading traffic processing to the hardware acceleration chip. The BIG-IP makes flow acceleration decisions in software and then offloads eligible flows to the ePVA chip for that acceleration. For platforms that do not contain the ePVA chip, the system performs acceleration actions in software. Software-only solutions can increase performance in direct relationship to the hardware offered by the underlying host. As examples of BIG-IP virtual edition (VE) software running on mid-grade hardware platforms, results with Dell can be found here and similar experiences with HPE ProLiant platforms are here.

One thing to note about FastL4 as the profile to underpin a performance-mode BIG-IP virtual server is that it is layer 4 oriented. For certain features that involve layer 7 HTTP-related fields, such as using iRules to swap HTTP headers or perform HTTP authentication, a different profile might be more suitable.

A bonus of FastL4 is some interesting performance features specific to it. In the BIG-IP version 17 release train, there is a feature to quickly tear down, with no delay, TCP sessions that are no longer required. Most TCP stacks implement TCP “2MSL” rules, where upon receiving and sending TCP FIN messages, the socket enters a lengthy TCP “TIME_WAIT” state, often minutes long. This stems back to the historically bad packet loss environments of the very early Internet. A concern was that high latency and packet loss might see incoming packets arrive at a target very late, and the TCP state machine would be confused if no record of the socket still existed. As such, the lengthy TIME_WAIT period was adopted even though it consumes on-board resources to maintain the state. With FastL4, the “fast” close with TCP reset option now exists, such that any incoming TCP FIN message observed by BIG-IP will result in TCP resets being sent to both endpoints, normally bypassing TIME_WAIT penalties.

OneConnect and FastHTTP Profiles

As mentioned, other traffic profiles on BIG-IP are directed towards layer 7 and HTTP features. One interesting profile is F5’s “OneConnect”. The OneConnect feature set works with HTTP Keep-Alives, which allows the BIG-IP system to minimize the number of server-side TCP connections by making existing connections available for reuse by other clients. This reduces, among other things, excessive TCP 3-way handshakes (SYN, SYN-ACK, ACK) and mitigates the small TCP congestion windows that new TCP sessions start with and that only increase with successful traffic delivery. Persistent server-side TCP connections ameliorate this.

When a new connection is initiated to the virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP system applies the OneConnect source mask to the IP address in the request to determine whether it is eligible to reuse the existing idle connection. If it is eligible, the BIG-IP system marks the connection as non-idle and sends a client request over it. If the request is not eligible for reuse, or an idle server-side flow is not found, the BIG-IP system creates a new server-side TCP connection and sends client requests over it.

The last profile considered is the “FastHTTP” profile. The FastHTTP profile is designed to speed up certain types of HTTP connections and again strives to reduce the number of connections opened to the back-end HTTP servers.
A resulting high-performance HTTP virtual server processes connections on a packet-by-packet basis and buffers only enough data to parse packet headers. The performance HTTP virtual server TCP behavior operates as follows: the BIG-IP system establishes server-side flows by opening TCP connections to pool members. When a client makes a connection to the performance HTTP virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP LTM system marks the connection as non-idle and sends a client request over the connection.

Summary

The NetApp StorageGRID multi-node, S3-compatible object storage solution pairs well with a high-performance server load balancer, making the F5 BIG-IP a good fit. The S3 protocol can itself be adjusted to improve transaction response times, such as through the use of multi-part uploads and downloads, amplifying the default load balancing to spread even more traffic chunks over many NetApp nodes. BIG-IP has numerous approaches to configuring virtual servers, from the highest-performance L4-focused profiles to similar offerings that retain L7 HTTP awareness. Lab testing was accomplished using the S3Browser utility, and the resulting traffic flows were confirmed with both the standard BIG-IP GUI and the additional AVR analytics module, which provides additional protocol insight.

Bypass "Bad unescape" in Body POST (ASM, POST, JSON)
Here is the block. As you can see, the "%" is detected even though it has no encoding meaning. This is normal, since the "%" is in the body of the POST as JSON data (see below). Of course, if I disable the "Bad unescape" violation in "Learning and Blocking Settings" it works, but my goal is to bypass it using a rule on the parameter or something similar; so far without success. Does anyone have a solution?

======= JSON on POST Body Request =======================