NGINX App Protect
How to deploy NGINX App Protect as an Overlay Security Protection Solution for Your Existing API Management Platform
Solution Overview
NGINX App Protect combines the proven effectiveness of F5's Advanced WAF technology with the agility and performance of NGINX Plus. It runs natively on NGINX Plus and addresses some of the most difficult challenges facing modern DevOps environments. NGINX App Protect, when deployed as an overlay security solution to complement your third-party API Management platforms such as MuleSoft, Kong, Google's Apigee, and others, provides:
- Seamless integration with NGINX Plus and NGINX Ingress Controller
- Strong security controls to protect against malicious attacks
- Reduced complexity and tool sprawl while delivering modern apps
- Enforcement of security and regulatory requirements
With NGINX App Protect, you can detect and defend against the OWASP Application Security Top 10 attack types, such as data exfiltration, malicious injection inputs, and many others.
Figure: NGINX App Protect as an overlay security protection solution for your existing API Management Platform
Solution Deployment
This solution deployment procedure assumes that you have an understanding of the NGINX Plus platform. If you would like to learn more about NGINX, click this link.
Install NGINX App Protect
Install the most recent version of the NGINX Plus App Protect package (which includes NGINX Plus):
sudo yum install -y app-protect
Edit the NGINX configuration file and enable the NGINX App Protect module.
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
load_module modules/ngx_http_app_protect_module.so;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    server {
        listen 80;
        server_name localhost;
        proxy_http_version 1.1;
        app_protect_enable on;
        app_protect_policy_file "/etc/nginx/NginxDefaultPolicy.json";
        app_protect_security_log_enable on;
        app_protect_security_log "/etc/nginx/log-default.json" syslog:server=10.1.20.6:5144;
        location / {
            resolver 10.1.1.9;
            resolver_timeout 5s;
            client_max_body_size 0;
            default_type text/html;
            proxy_pass http://k8s.arcadia-finance.io:30274$request_uri;
        }
    }
}
Create a Log Configuration
Create a log configuration file named log-default.json:
sudo vi log-default.json
{
    "filter": {
        "request_type": "all"
    },
    "content": {
        "format": "default",
        "max_request_size": "any",
        "max_message_size": "5k"
    }
}
Restart the NGINX Service
Restart the NGINX service and check the logs.
sudo systemctl start nginx
less /var/log/nginx/error.log
Update Signatures
Add the NGINX Plus App Protect signatures repository, then download and update the signature package.
sudo yum install -y app-protect-attack-signatures
sudo yum --showduplicates list app-protect-attack-signatures
sudo yum install -y app-protect-attack-signatures-2020.04.30
Reload the NGINX process to apply the new signatures.
sudo nginx -s reload
Advanced features
NGINX App Protect supports various advanced security features such as Bot Protection, Cryptonice integration, and API Security with OpenAPI file import. Let's look at how to deploy Bot Protection in this example.
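As a quick aside, the signature update and reload steps above lend themselves to automation. The following is a minimal sketch that wraps the commands shown earlier into a repeatable script; the pinned package version is only an example, and the nginx -t gate before the reload is an assumption about how you may want to validate the configuration first:
#!/bin/bash
# Hedged sketch: update NGINX App Protect attack signatures and reload NGINX.
set -euo pipefail

# Example version - check the available packages first with the list command below
SIG_VERSION="2020.04.30"

sudo yum --showduplicates list app-protect-attack-signatures
sudo yum install -y "app-protect-attack-signatures-${SIG_VERSION}"

# Validate the configuration before applying the new signatures (assumed gating step)
sudo nginx -t

# Reload NGINX so the new signatures take effect
sudo nginx -s reload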
Create a new NAP policy JSON file with Bot defense:
sudo vi /etc/nginx/policy_bots.json
{
    "policy": {
        "name": "bot_defense_policy",
        "template": {
            "name": "POLICY_TEMPLATE_NGINX_BASE"
        },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking",
        "bot-defense": {
            "settings": {
                "isEnabled": true
            },
            "mitigations": {
                "classes": [
                    {
                        "name": "trusted-bot",
                        "action": "alarm"
                    },
                    {
                        "name": "untrusted-bot",
                        "action": "block"
                    },
                    {
                        "name": "malicious-bot",
                        "action": "block"
                    }
                ]
            }
        }
    }
}
Modify the nginx.conf file to reference this new policy JSON file.
sudo vi /etc/nginx/nginx.conf
user nginx;
worker_processes 1;
load_module modules/ngx_http_app_protect_module.so;
error_log /var/log/nginx/error.log debug;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        proxy_http_version 1.1;
        app_protect_enable on;
        app_protect_policy_file "/etc/nginx/policy_bots.json";
        app_protect_security_log_enable on;
        app_protect_security_log "/etc/nginx/app-protect-log-policy.json" syslog:server=10.1.20.6:5144;
        location / {
            resolver 10.1.1.9;
            resolver_timeout 5s;
            client_max_body_size 0;
            default_type text/html;
            proxy_pass http://k8s.arcadia-finance.io:30274$request_uri;
        }
    }
}
Verification
We will verify the NGINX App Protect deployment against illegal requests in the example below. Test with curl and with the browser, using both the correct URL and the illegal URL:
curl http://localhost
curl http://localhost/?a=%3Cscript%3E
Simultaneously, open a second terminal window to see the three types of violations noted in the log.
tail -f /var/log/app_protect/class_illegal_security.log
Request ID 1035880152621768133: GET / received on 2021-06-14 16:09:55 from IP 127.0.0.1 had the following violations: Illegal meta character in value, Attack Signature detected, Violation Rating Threat detected.
Note: NGINX App Protect logs all violations that occur. Therefore, we see not only the illegal meta character but also that an attack signature was detected and a threat rating was assigned.
Additional Resources
NGINX App Protect: Configuration Guide
NGINX: Installation and Deployment Guides
Try NGINX App Protect: 30 days free trial
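To extend the single curl check from the Verification section into a quick smoke test, the illustrative script below sends a few attack-style payloads and looks for the support ID string returned by the blocking page; the payloads and the grep pattern are examples only, not an exhaustive test suite:
#!/bin/bash
# Hedged sketch: send a handful of illustrative attack payloads and check for the blocking page.
TARGET="http://localhost"   # assumes the server block shown in the configuration above

PAYLOADS=(
  "/?a=%3Cscript%3E"                  # XSS-style payload
  "/?id=1%27%20OR%20%271%27=%271"     # SQL-injection-style payload
  "/?file=../../etc/passwd"           # path-traversal-style payload
)

for p in "${PAYLOADS[@]}"; do
  echo "Testing ${TARGET}${p}"
  # The blocking response page contains a support ID, as seen in the log output above
  curl -s "${TARGET}${p}" | grep -io "support id" || echo "  -> request was not blocked"
done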
Using NGINX Controller API Management Module and NGINX App Protect to secure financial services API transactions
As financial services APIs (such as Open Banking) are concerned primarily with managing access to exposed banking APIs, the security aspect has always been of paramount importance. Securing financial services APIs is a vast topic, as security controls are distributed among different functions, such as user authentication at the Identity Provider level, user authorization and basic API security at the API Gateway level, and advanced API security at the WAF level. In this article we will explore how two NGINX products, Controller API Management Module and App Protect, can be deployed to secure the OAuth Authorization Code flow, which is a building block of the access controls used to secure many financial services APIs.
Physical setup
The setup used to support this article comprises NGINX Controller API Management Module, providing API Management functions through an instance of NGINX API Gateway, and NGINX App Protect deployed on a Kubernetes Ingress Controller, providing advanced security for the Kubernetes-deployed demo application, Arcadia Finance. These elements are deployed and configured in an automated fashion using a GitLab CI/CD pipeline. The visualization for NGINX App Protect is provided by NAP dashboards deployed in ELK.
Note: For the purpose of supporting this lab, APM was configured as an OAuth Authorization Server supporting OpenID Connect. Its configuration, along with the implementation details of the third-party banking application (AISP/PISP) acting as an OAuth Client, is beyond the scope of this article.
In an OAuth Authorization Code flow, the PSU (end user) initiates an API request through the Account or Payment Information Services Provider (AISP/PISP Application), which first redirects the end user to the Authorization Server. Strong Customer Authentication is performed between the end user and the Authorization Server which, if successful, issues an authorization code and redirects the user back to the AISP/PISP Application. The AISP/PISP Application exchanges the authorization code for an ID Token and a JWT Access Token; the latter is attached as a bearer token to the initial end-user API request, which is then forwarded to the API Gateway. The API Gateway authenticates the signature of the JWT Access Token by downloading the JSON Web Key (JWK) from the Authorization Server and may apply further security controls by authorizing the API call based on JWT claims and/or applying rate limits. Worth noting here is the security function of the API Gateway, which provides positive security by allowing only calls conforming to published APIs, in addition to authentication and authorization functions. The Web Application Firewall function, represented here by NGINX App Protect deployed on the Kubernetes Ingress Controller (KIC), adds negative security protection, by checking the request against a database of attack signatures, and advanced API security, by validating the API request against the OpenAPI manifest and providing Bot detection capabilities.
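To make the last leg of this flow concrete, the hedged example below shows the AISP/PISP application presenting the JWT access token to the API gateway as a bearer token; the gateway hostname and token value are placeholders, and the URI is taken from the money-transfer component defined in the Configuration section that follows:
#!/bin/bash
# Hedged sketch: client-side call carrying the JWT access token obtained from the token exchange.
API_GW="https://api.bank.example"                  # placeholder gateway hostname
ACCESS_TOKEN="<paste JWT access token here>"        # placeholder token issued by the Authorization Server

curl -s "${API_GW}/api/rest/execute_money_transfer.php" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Accept: application/json"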
Configuration To configure the NGINX Controller API Management Module, first create an Application by sending a POST request to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps' having the following body: { "metadata": { "name": "app_api", "displayName": "API Application Arcadia", "description": "", "tags": [] }, "desiredState": {} } Then create an Identity Provider, pointed at the Authorization Server's JWK endpoint, by sending a PUT request to 'https://{{ my_controller }}/api/v1/security/identity-providers/bank_idp' having the following body: { "metadata": { "name": "bank_idp", "tags": [] }, "desiredState": { "environmentRefs": [ { "ref": "/services/environments/env_prod" } ], "identityProvider": { "type": "JWT", "jwkFile": { "type": "REMOTE_FILE", "uri": "https://bank.f5lab/f5-oauth2/v1/jwks", "cacheExpire": "12h" } } } } Create an API definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/api-definitions/arcadia-api-def/versions/v1' with the following body: { "metadata": { "name": "v1", "displayName": "arcadia-api-def" }, "desiredState": { "specs": { "REST": { "openapi": "3.0.0", "info": { "version": "v1", "title": "arcadia-api-def" }, "paths": {} } } } } Then import the OpenAPI definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/api-definitions/arcadia-api-def/versions/v1/import' with the OpenAPI JSON as a request body. Publish the API definition by sending a PUT request to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps/app_api/published-apis/prod-api', with the following body: { "metadata": { "name": "prod-api", "displayName": "prod-api", "tags": [] }, "desiredState": { "apiDefinitionVersionRef": { "ref": "/services/api-definitions/arcadia-api-def/versions/v1" }, "gatewayRefs": [ { "ref": "/services/environments/env_prod/gateways/gw_api" } ] } } Declare the necessary back-end components (in this example webapi-kic.nginx-udf.internal Kubernetes workload) by sending a PUT to 'https://{{ my_controller }}/api/v1/services/environments/env_prod/apps/app_api/components/cp_moneytransfer_api' with the following body: { "metadata": { "name": "cp_moneytransfer_api", "displayName": "cp_moneytransfer_api", "tags": [] }, "desiredState": { "ingress": { "uris": { "/api/rest/execute_money_transfer.php": { "php": { "get": { "description": "Send money to a friend", "parameters": [ { "in": "body", "name": "body", "required": true, "schema": { "type": "object" } } ], "responses": { "200": { "description": "200 response" } } }, "matchMethod": "EXACT" } } }, "gatewayRefs": [ { "ref": "/services/environments/env_prod/gateways/gw_api" } ] }, "backend": { "ntlmAuthentication": "DISABLED", "preserveHostHeader": "DISABLED", "workloadGroups": { "wl_mainapp_api": { "loadBalancingMethod": { "type": "ROUND_ROBIN" }, "uris": { "http://webapi-kic.nginx-udf.internal:30276": { "isBackup": false, "isDown": false, "isDrain": false } } } } }, "programmability": { "requestHeaderModifications": [ { "action": "DELETE", "applicableURIs": [], "headerName": "Host" }, { "action": "ADD", "applicableURIs": [], "headerName": "Host", "headerValue": "k8s.arcadia-finance.io" } ] }, "logging": { "errorLog": "DISABLED", "accessLog": { "state": "DISABLED" } }, "security": { "rateLimits": { "policy_1": { "rate": "5000r/m", "burstBeforeReject": 0, "statusCode": 429, "key": "$binary_remote_addr" } }, "conditionalAuthPolicies": { "policy_1": { "action": "ALLOW", "comparisonType": "CONTAINS", "comparisonValues": [ "Payment" ], 
"sourceType": "JWT_CLAIM", "sourceKey": "scope", "denyStatusCode": 403 } }, "identityProviderRefs": [ { "ref": "/security/identity-providers/bank_idp" } ], "jwtClientAuth": { "keyLocation": "BEARER" } }, "publishedApiRefs": [ { "ref": "/services/environments/env_prod/apps/app_api/published-apis/prod-api" } ] } } Note the 'security' block, specifying the JWT authentication, the Identity Provider from where to download the JWK, the authorization check applied on each request and the rate limit policy. The configuration used to deploy NGINX App Protect on the Kubernetes Ingress Controller can be consulted here. Summary In this article we showed how NGINX Controller API Management Module and NGINX App Protect can be deployed to protect API calls as part of the OAuth Authorization Code flow which is a basic flow used to control the access to many financial services APIs. Links UDF lab environment link.1.6KViews1like0CommentsAdopting SRE practices with F5: Layered Security Policy for North-South Traffic
Adopting SRE practices with F5: Layered Security Policy for North-South Traffic
In an organization with enough maturity in cybersecurity and modern application architectures, there are two different cybersecurity teams that operate the more advanced security policies for the company. NetSecOps and DevSecOps are the two cybersecurity teams in an organization, and they typically have different security requirements. NetSecOps requires a 'Standardized Application Security Policy'. They aim to block common attacks against the production network with a high level of confidence, resulting in a low false-positive rate at the network level. The OWASP Top 10 threats are a good example here. Moreover, the responsibility of NetSecOps is not limited to stopping basic attack types like the OWASP Top 10; it also covers more advanced and complicated application-based attacks such as 'Bot Attacks,' 'Fraud Attacks,' and 'DDoS Attacks.' However, when it comes to the 'Modern-App environment,' it is not easy for the NetSecOps team to understand the details of the application traffic flow inside the Kubernetes or OpenShift cluster. For this reason, as far as modern applications are concerned, the security policies of NetSecOps often focus more on compliance and audit purposes. DevSecOps, on the other hand, wants application-specific security policies for the different types of applications operating inside their Kubernetes or OpenShift clusters. This is possible since DevSecOps understands how their applications work, and they want to apply more optimized security policies to their backend applications. This is why it is sometimes difficult to achieve both security teams' goals with a single security solution, and why the enterprise needs to deploy two different WAFs to meet the different requirements of NetSecOps and DevSecOps. This article will cover how the two security teams can achieve their goals with two separate WAF (Web Application Firewall) deployments in the network - F5 Advanced WAF for NetSecOps and NGINX App Protect for DevSecOps.
Solution Overview
The solution includes two F5 components – F5 Advanced WAF and NGINX App Protect. From a technological point of view, NGINX App Protect utilizes a subset of F5 Advanced WAF functionality, meaning that their underlying technologies are the same. Each of these WAF components can run with different security policies in order to achieve different goals. In F5 Advanced WAF, NetSecOps can apply the WAF policy for the 'coarse-grained model' of security, while DevSecOps adopts the 'fine-grained model' with NAP. In other words, F5 Advanced WAF can be configured with a 'Negative Policy,' and NGINX App Protect can be configured with a 'Positive Policy.' In our use case, we assumed that NetSecOps wants to block the OWASP Top 10 threats while DevSecOps has a different file-access policy for each backend application. The brief architecture is depicted below. Combining F5 Advanced WAF and NGINX App Protect enables layered application security policies to prevent the most complicated and advanced application-based attacks efficiently. This architecture utilizes the following workflow:
1. The F5 Advanced WAF blocks the most commonly used attack types, including 'Command Injection,' 'SQL Injection,' 'Cross-Site Scripting,' and 'Server Side Request Forgery' attacks.
2. When an attacker tries to access different file types in each application, NGINX App Protect allows or blocks the request based on the file types specified in the security policies configured by the DevSecOps team.
3.All alert details from F5 Advanced WAF and NGINX App Protect are sent to the ‘Elasticsearch’ for central monitoring purposes. Each of the above workflows will be discussed in the following sections. ·This blog doesn’t include all the required steps to reproduce the use-case in the environment. Please refer to this link for all the required configuration steps. NGINX App Protect provides ‘Application-Specific’ policies NGINX App Protect can provide security protection and controls at the microservice level inside the Kubernetes or OpenShift cluster. The NGINX App Protect can be deployed in the OpenShift cluster as a container image. The NGINX App Protect policy configuration uses the declarative format built on a pre-defined base template. The policy uses the JSON format to represent the policy details. This file can be edited to apply a unique security policy to the NGINX App Protect instance. Once the policy is created, the policy can be attached to the 'nginx.conf' file by referencing the policy file. In this example, we used the ‘nginx_sre.conf’ file as the main configuration file for NGINX and the ‘NginxSRELabPolicy.json’ file represents the NGINX App Protect policy. NginxSRELabPolicy.json: | { "policy": { "name": "SRE_DVWA01_POLICY", "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" }, "applicationLanguage": "utf-8", "enforcementMode": "blocking", "response-pages": [ { "responseContent": "<html><head><title>SRE DevSecOps - DVWA01 - Blocking Page</title></head><body><font color=green size=10>NGINX App Protect Blocking Page - DVWA01 Server</font><br><br>Please consult with your administrator.<br><br>Your support ID is: <%TS.request.ID()%><br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>", "responseHeader": "HTTP/1.1 302 OK\\r\\nCache-Control: no-cache\\r\\nPragma: no-cache\\r\\nConnection: close", "responseActionType": "custom", "responsePageType": "default" } ], "blocking-settings": { "violations": [ { "name": "VIOL_FILETYPE", "alarm": true, "block": true } ] }, "filetypes": [ { "name": "*", "type": "wildcard", "allowed": true, "checkPostDataLength": false, "postDataLength": 4096, "checkRequestLength": false, "requestLength": 8192, "checkUrlLength": true, "urlLength": 2048, "checkQueryStringLength": true, "queryStringLength": 2048, "responseCheck": false }, { "name": "pdf", "allowed": false } ] } } --- The above configuration file shows the NAP policy of application #01, where the DevSecOps team wants to disallow file access to the ‘PDF’ file format. For application #02, the NAP policy is configured to reject the access to the ‘JPG’ file. And the ‘remote logging’ configuration needs to be applied on the NGINX to export the NGINX App Protect's alert details. The below configuration shows how we exported the NGINX App Protect logging details to an external device, Elasticsearch. server { listen 8080; server_name dvwa02-http; proxy_http_version 1.1; real_ip_header X-Forwarded-For; set_real_ip_from 0.0.0.0/0; app_protect_enable on; app_protect_security_log_enable on; app_protect_policy_file "/etc/nginx/NginxSRELabPolicy.json"; app_protect_security_log "/etc/app_protect/conf/log_default.json" syslog:server=your_elk_ip_here; location / { client_max_body_size 0; default_type text/html; proxy_pass http://dvwa02; proxy_set_header Host $host; } Preventing OWASP Top 10 threats in F5 Advanced WAF F5 Advanced WAF is the next-generation WAF solution designed to prevent advanced application-based attacks. 
It supports 1000+ proven application-level signatures, custom signatures, machine-learning-based DDoS prevention, intelligence-based attack mitigation, and behavioural-based WAF functions. But in this use case, we focused on the prevention of the OWASP Top 10 attacks, which is only a small part of the overall F5 Advanced WAF attack coverage. The important point here is how we can configure F5 Advanced WAF to apply the WAF's efficient 'Negative Security' model. In order to configure the correct F5 Advanced WAF policy, follow the procedure below:
1. Go to 'Security' -> 'Application Security' -> 'Security Policies' -> 'Create'
2. Click the security policy that was just created (SRE_DEVSEC_01)
·Click 'View Learning and Blocking Settings' under the 'Enforcement Mode' menu
3. Expand 'Attack Signatures' and click the 'Change' menu
4. Apply the check box.
·Click 'Close' -> click 'Save' -> click 'Apply Policy'
·Apply the policy to the virtual server. (Please make sure that we're on the OCP partition.)
5. 'Local Traffic' -> 'Virtual Servers' -> 'devsecops_http_vs' -> Security -> Policies
Please note that the 'virtual server' configuration is required on the BIG-IP before proceeding to this step.
Configuring a custom blocking page for F5 Advanced WAF
1. Click the security policy that was created (SRE_DEVSEC_01)
2. Go to 'Response and Blocking page' -> 'Blocking page default' -> 'Custom response' -> 'Response Body'
<html><head><title>SRE DevSecOps Blocking Page</title></head><body><font color=red size=12>F5 Advanced WAF Blocking Page</font><br><br>Please consult with your administrator.<br><br>Your support ID is: <%TS.request.ID()%><br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>
Simulating the Attack
The following steps show how to simulate application-based attacks and see how F5 Advanced WAF and NGINX App Protect protect the applications efficiently.
Preventing OWASP Top 10 Attacks - NetSecOps
First, log in to the application through the GUI and go to the 'Command Injection' menu. Type the command '8.8.8.8 | cat /etc/passwd' and click the 'Submit' button. If F5 Advanced WAF works correctly, you should be able to see the 'blocking page' below.
·You can find instructions at the GitHub link here on how to simulate other attack types – SQL Injection, SSRF and XSS.
Restrict file access based on the application type - DevSecOps
1. Access application 01 in the browser with the URL -> "http://your_app_domain.com/hackable/uploads/"
2. When the 'PDF' file in this directory is clicked, the following blocking screen should be shown.
Summary
In modern application architectures, security concerns are becoming more serious. WAF is the major security solution available to enterprise applications. The security policy of the WAF has to protect backend applications correctly, but at the same time, it must also ensure that legitimate user traffic can access backend resources without creating issues. This sounds straightforward, but it is not easy to configure the right security policies to achieve both goals simultaneously. When it comes to modern application architectures, it is even more difficult to achieve this goal. Since traditional security teams lack understanding of the application flow inside a Kubernetes or OpenShift environment, it is challenging to apply the required security policies in the WAF to protect the microservices.
Due to the nature of microservices, different applications spin up and down frequently, and security requirements also change on a regular basis. The cybersecurity team needs a solution that can fit these unique requirements. NetSecOps requires a solution that provides enterprise-level protection features and operational efficiency for their SOC team. F5 Advanced WAF is designed to efficiently prevent known and unknown types of advanced application-based attacks, while NGINX App Protect easily provides 'application-specific' security policies for each application inside the microservice environment. Enterprises can acquire the proper protection for their modern app environment through the combination of F5 Advanced WAF and NGINX App Protect. Please visit the DevCentral GitHub repo and follow the guidelines to try this use case in your environment.
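As a command-line companion to the GUI-driven verification above, the hedged sketch below replays the same two checks with curl; the domain is the placeholder used in the article, the payload is sent as a query string for illustration (the exact form fields depend on the application), the PDF file name is hypothetical, and the grep patterns match the custom blocking pages defined earlier:
#!/bin/bash
# Hedged sketch: verify the layered F5 Advanced WAF / NGINX App Protect policies from the CLI.
APP="http://your_app_domain.com"   # placeholder domain used in the article

# 1. Command-injection style payload - expected to be stopped by F5 Advanced WAF
curl -s "${APP}/?ip=8.8.8.8%20%7C%20cat%20/etc/passwd" | grep -o "F5 Advanced WAF Blocking Page"

# 2. Disallowed file type - expected to be stopped by the per-application NGINX App Protect policy
#    (file name is hypothetical; any .pdf under the uploads directory would do)
curl -s "${APP}/hackable/uploads/sample.pdf" | grep -o "NGINX App Protect Blocking Page - DVWA01 Server"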
NGINX App Protect deployment in Kubernetes integrated in CI/CD pipeline
This article describes the configuration used to insert an NGINX Plus with App Protect container into a pod, protecting the application deployed in the pod. This implements the 'per-pod proxy' model, where each pod is augmented with a dedicated, embedded proxy to handle and secure ingress traffic to the pod. Other deployment patterns are also possible. NGINX App Protect may be deployed as a load-balancing proxy tier within Kubernetes, in front of services that require App Protect security and behind the Ingress Controller. Alternatively, NGINX App Protect may be deployed externally to the Kubernetes environment. The advantage of deploying NGINX App Protect within the application pod is that it is very easy to integrate into a GitLab CI/CD pipeline. For this demo, the Kubernetes Ingress Controller used is F5 BIG-IP along with the F5 BIG-IP controller (k8s-bigip-ctlr), which pushes the configuration using the AS3 declarative model. You could also use NGINX Plus Ingress Controller to load-balance traffic to the application pods. An alternative deployment model would embed the WAF within the application pod. This extends protection to internal (East-West) traffic beside external (North-South) and ensures that the WAF is packaged alongside the application in an easily relocatable format. The demo setup referenced in this article uses the following components:
-Gitlab to deploy the Kubernetes configuration as part of a CI/CD pipeline
-OWASP's vulnerable application JuiceShop as the App container
-NGINX Plus with App Protect module as a container, processing ingress traffic
-F5 Container Ingress Services controller (k8s-bigip-ctlr) to listen for configuration changes and to reconfigure the F5 BIG-IP via AS3 declarations
-F5 BIG-IP as an Ingress Controller, adding better reporting capabilities and allowing traffic to be sent directly to Kubernetes pods using Calico + BGP
F5 BIG-IP Configuration
To integrate BIG-IP as an Ingress Controller using Calico and BGP, the BIG-IP device needs to be configured as a BGP neighbour to the Kubernetes nodes. For more information on the BIG-IP configuration to integrate with Kubernetes, you can consult CIS and Kubernetes - Part 1: Install Kubernetes and Calico
F5 Container Ingress Services controller configuration
To configure the F5 CIS controller to load-balance traffic directly to the pods, the --pool-member-type=cluster argument needs to be passed to the controller. For a complete list of configuration options for CIS, consult F5 BIG-IP Controller for Kubernetes
CI/CD pipeline configuration
On running the CI/CD pipeline in Gitlab, the following code gets executed. The main configuration has been split into multiple files:
-staging.j2.vars
-ConfigMapJS.yaml
-ConfigMapNginx.yaml
-ConfigMapWaf.yaml
-serviceJSplusAppProtect.yaml
-deploymentJSplusAppProtect.yaml
-ConfigMapLTM.yaml
ConfigMapJS.yaml contains the JuiceShop config, which is out of the scope of the current article.
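The pipeline steps themselves are shown as screenshots in the original article; as a stand-in, the hedged sketch below shows how a pipeline job might apply the manifests listed above. The rendering of staging.j2.vars into the templated ConfigMapLTM.yaml, and the label selector used to check the pod, are assumptions:
#!/bin/bash
# Hedged sketch: a pipeline job body that applies the manifests listed above.
# Rendering of staging.j2.vars into ConfigMapLTM.yaml is assumed to happen in an
# earlier pipeline step and is not shown here.
set -euo pipefail

kubectl apply -f ConfigMapJS.yaml \
              -f ConfigMapNginx.yaml \
              -f ConfigMapWaf.yaml \
              -f ConfigMapLTM.yaml \
              -f serviceJSplusAppProtect.yaml \
              -f deploymentJSplusAppProtect.yaml

# Confirm the two-container pod (JuiceShop + NGINX App Protect) is running
kubectl get pods -l app=juiceshop -o wide   # label selector is hypothetical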
The deploymentJSplusAppProtect.yaml describes the JuiceShop application container (port 3000) and the NGINX App Protect container (ports 80 and 443 – only port 80 will be used in this demo).
ConfigMapNginx.yaml creates the NGINX Plus configuration:
-A server listening on port 80
-The NGINX App Protect module pointing to the waf-policy.json file
-A "backend" server pointing to the same pod (127.0.0.1) on port 3000 – the JuiceShop application container
The ConfigMapWaf.yaml file contains the NGINX App Protect configuration. For the purpose of this demo a very simple configuration was used, consisting of the base template and setting the enforcementMode to "transparent". A more complete example of an NGINX App Protect policy could be defined as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-waf
  namespace: production
data:
  waf-policy.json: |
    {
      "name": "nginx-policy",
      "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
      "applicationLanguage": "utf-8",
      "enforcementMode": "blocking",
      "signature-sets": [
        { "name": "All Signatures", "block": false, "alarm": true },
        { "name": "High Accuracy Signatures", "block": true, "alarm": true }
      ],
      "blocking-settings": {
        "violations": [
          { "name": "VIOL_RATING_NEED_EXAMINATION", "alarm": true, "block": true },
          { "name": "VIOL_HTTP_PROTOCOL", "alarm": true, "block": true },
          { "name": "VIOL_FILETYPE", "alarm": true, "block": true },
          { "name": "VIOL_COOKIE_MALFORMED", "alarm": true, "block": false }
        ],
        "http-protocols": [
          { "description": "Body in GET or HEAD requests", "enabled": true, "maxHeaders": 20, "maxParams": 500 }
        ],
        "filetypes": [
          { "name": "*", "type": "wildcard", "allowed": true, "responseCheck": true }
        ],
        "data-guard": { "enabled": true, "maskData": true, "creditCardNumbers": true, "usSocialSecurityNumbers": true },
        "cookies": [
          { "name": "*", "type": "wildcard", "accessibleOnlyThroughTheHttpProtocol": true, "attackSignaturesCheck": true, "insertSameSiteAttribute": "strict" }
        ],
        "evasions": [
          { "description": "%u decoding", "enabled": true, "maxDecodingPasses": 2 }
        ]
      }
    }
The serviceJSplusAppProtect.yaml contains the k8s-bigip-ctlr labels that enable F5 Container Ingress Services to track the application address, and the targetPort that the BIG-IP Ingress Controller will use to load-balance traffic directly to the pods. For more information on Container Ingress Services labels, please consult CIS and AS3 Extension Integration (https://clouddocs.f5.com/containers/v2/kubernetes/kctlr-k8s-as3.html).
The ConfigMapLTM.yaml defines the AS3 template that k8s-bigip-ctlr will fill by parsing the environment variables and Kubernetes services and then deploy on the BIG-IP, where VS_IP is sourced from the staging.j2.vars file and serverAddresses are discovered by querying Kubernetes.
Running the pipeline will result in a Virtual Server deployed in the "staging" administrative partition, with a pool with two members, each being one replica of the NGINX App Protect container (port 80) deployed in front of its respective application container. The pool members are the Kubernetes pods, allowing traffic to be load-balanced directly between them as opposed to sending the traffic to a Kubernetes service. The routes to reach the pool members are learned via BGP.
Example NGINX App Protect deployed on Kubernetes Ingress Controller
Problem this snippet solves: This code offers a couple of examples of deploying NGINX App Protect on Kubernetes Ingress Controller, showing one instance protecting traditional Web applications and one protecting API applications. How to use this snippet: The code can be applied manually through kubectl commands or as a part of a CI/CD pipeline. Code : #### Deploy NGINX Plus Ingress for WebApp from Gitlab.com ##### --- apiVersion: apps/v1 kind: Deployment metadata: name: webapp-nginx-ingress namespace: nginx-ingress spec: replicas: 1 selector: matchLabels: app: webapp-nginx-ingress template: metadata: labels: app: webapp-nginx-ingress #annotations: #prometheus.io/scrape: "true" #prometheus.io/port: "9113" spec: serviceAccountName: nginx-ingress imagePullSecrets: - name: containers: - image: name: webapp-nginx-plus-ingress imagePullPolicy: IfNotPresent ports: - name: http containerPort: 80 - name: https containerPort: 443 #- name: prometheus #containerPort: 9113 securityContext: allowPrivilegeEscalation: true runAsUser: 101 #nginx capabilities: drop: - ALL add: - NET_BIND_SERVICE env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name args: - -nginx-plus - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret - -enable-app-protect - -ingress-class=webapp-arcadia-ingress-class #- -v=3 # Enables extensive logging. Useful for troubleshooting. #- -report-ingress-status #- -external-service=nginx-ingress #- -enable-leader-election #- -enable-prometheus-metrics #### WebApp Protect Policy ### --- apiVersion: appprotect.f5.com/v1beta1 kind: APPolicy metadata: name: webapp-dataguard-blocking spec: policy: name: webapp-dataguard-blocking template: name: POLICY_TEMPLATE_NGINX_BASE applicationLanguage: utf-8 enforcementMode: blocking blocking-settings: violations: - name: VIOL_DATA_GUARD alarm: true block: true data-guard: enabled: true maskData: true creditCardNumbers: true usSocialSecurityNumbers: true enforcementMode: ignore-urls-in-list enforcementUrls: [] ### App Protect Logs ### --- apiVersion: appprotect.f5.com/v1beta1 kind: APLogConf metadata: name: logconf spec: filter: request_type: all content: format: default max_request_size: any max_message_size: 5k ### Create Ingress Service #### --- apiVersion: v1 kind: Service metadata: name: webapp-nginx-ingress namespace: nginx-ingress spec: type: NodePort ports: - port: 80 targetPort: 80 nodePort: 30274 protocol: TCP name: http - port: 443 targetPort: 443 nodePort: 30275 protocol: TCP name: https selector: app: webapp-nginx-ingress ### Deploy Arcadia Ingress Service ##### --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: webapp-arcadia-ingress annotations: kubernetes.io/ingress.class: "webapp-arcadia-ingress-class" appprotect.f5.com/app-protect-policy: "default/webapp-dataguard-blocking" appprotect.f5.com/app-protect-enable: "True" appprotect.f5.com/app-protect-security-log-enable: "True" appprotect.f5.com/app-protect-security-log: "default/logconf" appprotect.f5.com/app-protect-security-log-destination: "syslog:server=10.1.20.6:5144" spec: rules: - host: k8s.arcadia-finance.io http: paths: - path: / backend: serviceName: main servicePort: 80 - path: /files backend: serviceName: backend servicePort: 80 - path: /api backend: serviceName: app2 servicePort: 80 - path: /app3 backend: serviceName: app3 servicePort: 80 #### Deploy WebAPI NGINX Plus Ingress for WebAPI from Gitlab.com ##### --- 
apiVersion: apps/v1 kind: Deployment metadata: name: webapi-nginx-ingress namespace: nginx-ingress spec: replicas: 1 selector: matchLabels: app: webapi-nginx-ingress template: metadata: labels: app: webapi-nginx-ingress #annotations: #prometheus.io/scrape: "true" #prometheus.io/port: "9113" spec: serviceAccountName: nginx-ingress imagePullSecrets: - name: containers: - image: name: webapi-nginx-plus-ingress imagePullPolicy: IfNotPresent ports: - name: http containerPort: 80 - name: https containerPort: 443 #- name: prometheus #containerPort: 9113 securityContext: allowPrivilegeEscalation: true runAsUser: 101 #nginx capabilities: drop: - ALL add: - NET_BIND_SERVICE env: - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name args: - -nginx-plus - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret - -enable-app-protect - -ingress-class=webapi-arcadia-ingress-class #- -v=3 # Enables extensive logging. Useful for troubleshooting. #- -report-ingress-status #- -external-service=nginx-ingress #- -enable-leader-election #- -enable-prometheus-metrics #### App Protect Policy ### --- apiVersion: appprotect.f5.com/v1beta1 kind: APPolicy metadata: name: webapi-blocking spec: policy: name: webapi-blocking template: name: POLICY_TEMPLATE_NGINX_BASE open-api-files: - link: "http://10.1.20.4/root/nap_kic_openapi/-/raw/master/App/openapi3-arcadia-kic.json" applicationLanguage: utf-8 enforcementMode: blocking blocking-settings: violations: - name: VIOL_MANDATORY_REQUEST_BODY alarm: true block: true - name: VIOL_PARAMETER_LOCATION alarm: true block: true - name: VIOL_MANDATORY_PARAMETER alarm: true block: true - name: VIOL_JSON_SCHEMA alarm: true block: true - name: VIOL_PARAMETER_ARRAY_VALUE alarm: true block: true - name: VIOL_PARAMETER_VALUE_BASE64 alarm: true block: true - name: VIOL_FILE_UPLOAD alarm: true block: true - name: VIOL_URL_CONTENT_TYPE alarm: true block: true - name: VIOL_PARAMETER_STATIC_VALUE alarm: true block: true - name: VIOL_PARAMETER_VALUE_LENGTH alarm: true block: true - name: VIOL_PARAMETER_DATA_TYPE alarm: true block: true - name: VIOL_PARAMETER_NUMERIC_VALUE alarm: true block: true - name: VIOL_PARAMETER_VALUE_REGEXP alarm: true block: true - name: VIOL_URL alarm: true block: true - name: VIOL_PARAMETER alarm: true block: true - name: VIOL_PARAMETER_EMPTY_VALUE alarm: true block: true - name: VIOL_PARAMETER_REPEATED alarm: true block: true ### Create Ingress Service #### --- apiVersion: v1 kind: Service metadata: name: webapi-nginx-ingress namespace: nginx-ingress spec: type: NodePort ports: - port: 80 targetPort: 80 nodePort: 30276 protocol: TCP name: http - port: 443 targetPort: 443 nodePort: 30277 protocol: TCP name: https selector: app: webapi-nginx-ingress ### Deploy Arcadia Ingress Service ##### --- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: webapi-arcadia-ingress annotations: kubernetes.io/ingress.class: "webapi-arcadia-ingress-class" appprotect.f5.com/app-protect-policy: "default/webapi-blocking" appprotect.f5.com/app-protect-enable: "True" appprotect.f5.com/app-protect-security-log-enable: "True" appprotect.f5.com/app-protect-security-log: "default/logconf" appprotect.f5.com/app-protect-security-log-destination: "syslog:server=10.1.20.25:5144" spec: rules: - host: k8s.arcadia-finance.io http: paths: - path: /trading backend: serviceName: main servicePort: 80 - path: /api backend: serviceName: app2 servicePort: 80 
Tested this on version: No Version Found
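As a hedged illustration of the "applied manually through kubectl commands" option mentioned above, the commands below apply the example manifests and exercise the webapp Ingress through the NodePort and host name defined in the snippet; the manifest file name and node address are placeholders:
#!/bin/bash
# Hedged sketch: apply the example manifests and verify the webapp Ingress path.
kubectl apply -f nap-kic-example.yaml          # placeholder file name for the snippet above

# Check that the Ingress Controller pods and services are up
kubectl -n nginx-ingress get pods,svc

# Exercise the webapp Ingress via the NodePort and host defined in the snippet (30274 / k8s.arcadia-finance.io)
NODE_IP="10.1.20.20"                            # placeholder Kubernetes node address
curl -s -H "Host: k8s.arcadia-finance.io" "http://${NODE_IP}:30274/" -o /dev/null -w "%{http_code}\n"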
Protect multi-cloud and Edge Generative AI applications with F5 Distributed Cloud
F5 Distributed Cloud capabilities allow customers to use a single platform for connectivity, application delivery and security of GenAI applications in any cloud location and at the Edge, with a consistent and simplified operational model - a game changer for a streamlined operational experience for DevOps, NetOps and SecOps.
Shift-left Security Visibility
5 Minute Read
Applications evolve rapidly whether your organization runs off-the-shelf applications or builds its own applications and frameworks. Increasingly, workloads are developed in a matter of hours, packaged, instantiated and torn down in timespans measured in minutes. This is the nature of continuous deployment pipelines. Securing applications and workloads that are highly dynamic, possibly living across different colocation facilities (Colos) and cloud infrastructures (public and private), can only be achieved following continuous integration and continuous deployment (CI/CD) methodologies. In practice, this has resulted in integrating security into the software development and deployment lifecycles with the automation of:
·Code scanning when checked in to software versioning management tools (Git repository) using static application security testing (SAST)
·Integrating web application firewall (WAF) in the CI/CD pipeline as can be seen with BIG-IP Advanced WAF or NGINX App Protect and Kubernetes
·Verifying artifacts used for application instantiation, such as packages and containers, during creation, before they are stored, to ensure that they are free of malware
·Scanning applications/workloads at runtime as part of the pipeline before deployment in production, leveraging dynamic application security testing (DAST) tools
·Securing runtime environments, enforcing admission control, use of least privilege during spin-up, and implementing monitoring
·Building in compliance at every stage of the development and deployment process as needed (HIPAA, PCI etc.)
Building automated and orchestrated:
·Monitoring of Kubernetes clusters and other cloud and physical infrastructure where workloads run, however ephemerally, tracking things like container health, container orchestration and ownership across the cluster
·Monitoring of workloads and cloud services following site reliability engineering (SRE) guidelines – following the "Golden Signals" (Source: SRE Book)
·Regular vulnerability scanning of applications in production
Building security in the early stages of the software development lifecycle is part of a "shift-left" strategy, and is discussed here or here. In order to support specific service level objectives (SLO), monitoring at the application and infrastructure level is built into the infrastructure from the perspective of latency, traffic, errors and saturation. More information on adopting SRE with F5 can be found here.
What about security visibility?
The fundamental need to have a holistic view of the application and its components is essential to be able to mitigate threats. You need to visualize your applications and possible threats in order to assess risk, identify threat vectors before attackers do, and possibly investigate in the event of an issue. In a previous article, we focused on the posture assessment (static view) and monitoring (dynamic view) axes as fundamentals of security visibility. Implementing this visibility entails building and deploying security and security visibility within the pipeline alongside the application, workload or infrastructure.
The figure above shows the insertion of WAF policy and logging into the software build pipeline. In a shift-left effort, the logging, telemetry and security policy configuration are integrated alongside the application's SDLC. The intent is to leverage the same pipeline infrastructure and have the security and visibility aspects integrated in the project with the workload code. WAF provides only one aspect of the application security suite and is the focus of this exercise. Other infrastructure-centric security configurations, such as mTLS use within a Kubernetes Service Mesh or authentication/authorization framework configurations, can also be inserted in the pipeline. In order to simplify testing and deployment, the security infrastructure needs to be defined as code for consistent deployment across testing and production environments. Within an infrastructure-as-code and security-as-code framework, this article is meant to outline the need for visibility-as-code, and more precisely security-visibility-as-code. All F5 WAF products and their associated logging, monitoring and telemetry capabilities are easily automated and defined as code. They can be inserted anywhere programmatically, in all environments, for testing as well as production.
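As one hedged illustration of "security-visibility-as-code", the sketch below shows the kind of pipeline step that could validate a declarative WAF policy and its logging profile before they are deployed alongside the application; the file names and the ConfigMap wrapper are assumptions:
#!/bin/bash
# Hedged sketch: validate and package WAF policy and logging profile as part of the app pipeline.
set -euo pipefail

# Basic syntax check of the declarative policy and log profile (file names are assumed)
python3 -m json.tool waf-policy.json > /dev/null
python3 -m json.tool log-default.json > /dev/null

# Package both as a ConfigMap next to the workload so security and visibility ship with the app
kubectl create configmap app-waf-policy \
  --from-file=waf-policy.json \
  --from-file=log-default.json \
  --dry-run=client -o yaml > waf-policy-configmap.yaml

kubectl apply -f waf-policy-configmap.yaml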
F5 powered API security and management
Editor's Note: The F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planning a migration to a new SaaS Platform - check out the latest here.
Introduction
Application Programming Interfaces (APIs) enable application delivery systems to communicate with each other. According to a survey conducted by IDC, security is the main impediment to delivery of API-based services. Research conducted by F5 Labs shows that APIs are highly susceptible to cyber-attacks. Access or injection attacks against the authentication surface of the API are launched first, followed by exploitation of excessive permissions to steal or alter data that is reachable via the API. Agile development practices, highly modular application architectures, and business pressures for rapid development contribute to security holes in both APIs exposed to the public and those used internally. API delivery programs must include the following elements: (1) Automated publishing of APIs using Swagger files or OpenAPI files, (2) Authentication and authorization of API calls, (3) Routing and rate limiting of API calls, (4) Security of API calls and finally (5) Metric collection and visualization of API calls. The reference architecture shown below offers a streamlined way of achieving each element of an API delivery program. F5 solutions work with modern automation and orchestration tools, equipping developers with the ability to implement and verify security at strategic points within the API development pipeline. Security gets inserted into the CI/CD pipeline where it can be tested and attached to the runtime build, helping to reduce the attack surface of vulnerable APIs.
Common Patterns
Enterprises need to maintain and evolve their traditional APIs, while simultaneously developing new ones using modern architectures. These can be delivered with on-premises servers, from the cloud, or hybrid environments. APIs are difficult to categorize as they are used in delivering a variety of user experiences, each one potentially requiring a different set of security and compliance controls. In all of the patterns outlined below, NGINX Controller is used for API Management functions such as publishing the APIs and setting up authentication and authorization, and NGINX API Gateway forms the data path. Security controls are addressed based on the security requirements of the data and API delivery platform.
1. APIs for highly regulated business
Business APIs that involve the exchange of sensitive or regulated information may require additional security controls to be in compliance with local regulations or industry mandates. Some examples are apps that deliver protected health information or sensitive financial information. Deep payload inspection at scale, and custom WAF rules, become an important mechanism for protecting this type of API. F5 Advanced WAF is recommended for providing security in this scenario.
2. Multi-cloud distributed API
Mobile App users who are dispersed around the world need to get a response from the API backend with low latency. This requires that the API endpoints be delivered from multiple geographies to optimize response time. F5 DNS Load Balancer Cloud Service (global server load balancing) is used to connect API clients to the endpoints closest to them. In this case, F5 Cloud Services Essential App Protect is recommended to offer baseline security, and NGINX App Protect, deployed closer to the API workload, should be used for granular security controls. Best practices for this pattern are described here.
3. API workload in Kubernetes
F5 service mesh technology helps API delivery teams deal with the challenges of visibility and security when API endpoints are deployed in a Kubernetes environment. NGINX Ingress Controller, running NGINX App Protect, offers seamless North-South connectivity for API calls. F5 Aspen Mesh is used to provide East-West visibility and mTLS-based security for workloads. The Kubernetes cluster can be on-premises or deployed in any of the major cloud provider infrastructures, including Google's GKE, Amazon's EKS/Fargate, and Microsoft's AKS. An example of implementing this pattern with an NGINX per-pod proxy is described here, and more examples are forthcoming in the API Security series.
4. API as Serverless Functions
F5 Cloud Services Essential App Protect, offering SaaS-based security, or NGINX App Protect deployed in AWS Fargate can be used to inject protection in front of serverless API endpoints.
Summary
F5 solutions can be leveraged regardless of the architecture used to deliver APIs or the infrastructure used to host them. In all patterns described above, metrics and logs are sent to one or many of the following: (1) F5 Beacon, (2) a SIEM of choice, (3) an ELK stack. Best practices for customizing API-related views via any of these visibility solutions will be published in the following DevCentral series. DevOps can automate F5 products for integration into the API CI/CD pipeline. As a result, security is no longer a roadblock to delivering APIs at the speed of business. F5 solutions are future-proof, enabling development teams to confidently pivot from one architecture to another. To complement and extend the security of the above solutions, organizations can leverage the power of F5 Silverline Managed Services to protect their infrastructure against volumetric, DNS, and higher-level denial of service attacks. The Shape bot protection solutions can also be coupled to detect and thwart bots, including securing mobile access with its mobile SDK.
DevCentral Connects: Log4j CVE-2021-44228
Buu and John held court today in a special Monday episode of DevCentral Connects with F5 security experts MegaZone, David Warburton, and Joe Martin to discuss the log4j vulnerability.
Resources
AskF5 Solution on the Vulnerability (K19026212)
F5 Labs: Explaining the Widespread log4j vulnerability
Beyond patching and mitigations, maybe some architecture changes? (LinkedIn Thread)
Talking to Leadership (Slide Deck from InfoSec Innovations)
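Beyond the resources above, one common way to sanity-check a WAF mitigation discussed in the episode is to send a harmless JNDI-style lookup string and confirm it is rejected; the sketch below is illustrative only, the hostname is a placeholder, and it should only be run against systems you own:
#!/bin/bash
# Hedged sketch: probe your own application to confirm a Log4Shell-style lookup string is blocked.
TARGET="https://app.example.com"            # placeholder - test only against systems you own
PROBE='${jndi:ldap://127.0.0.1/test}'       # lookup pattern commonly used for detection testing

# Send the probe in a commonly logged header and in a query parameter
curl -sk -o /dev/null -w "header probe -> HTTP %{http_code}\n" \
  -H "User-Agent: ${PROBE}" "${TARGET}/"

curl -sk -o /dev/null -w "query probe  -> HTTP %{http_code}\n" \
  "${TARGET}/?q=%24%7Bjndi%3Aldap%3A%2F%2F127.0.0.1%2Ftest%7D"

# A 4xx response (or your WAF blocking page) suggests the signature or mitigation is in place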
How I did it - "Securing Nvidia Triton Inference Server with NGINX Plus Ingress Controller"
In this installment of "How I did it", we step into the world of AI and machine learning (ML) and take a look at how F5's NGINX Plus Ingress Controller can provide secure and scalable external access to Nvidia's Triton Inference Servers hosted on Kubernetes.
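For readers who want a quick way to confirm that kind of setup end to end, a hedged example is a readiness check through the Ingress host; the hostname is a placeholder, and the paths assume Triton's standard v2 HTTP inference API endpoints:
#!/bin/bash
# Hedged sketch: check Triton readiness and list models through the NGINX Plus Ingress host.
TRITON_HOST="https://triton.example.com"     # placeholder Ingress hostname

# Triton exposes /v2/health/ready on its HTTP endpoint; a 200 means the server is ready to serve models
curl -sk -o /dev/null -w "Triton readiness -> HTTP %{http_code}\n" "${TRITON_HOST}/v2/health/ready"

# List loaded models via the v2 repository index endpoint (POST with an empty JSON body)
curl -sk -X POST "${TRITON_HOST}/v2/repository/index" -H "Content-Type: application/json" -d '{}'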