NGINX
Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Inarguably, we are well into the age wherein the user interface for a typical web application has shifted from server-generated markup to APIs as the preferred point of interaction. As developers, we are presented with a veritable cornucopia of tools, frameworks, and standards to aid us in the development of these APIs and the services behind them. What about securing these APIs? Now more than ever, attackers have focused their efforts on abusing APIs to exfiltrate data or compromise systems at an increasingly alarming rate. In fact, a large portion of the 2023 OWASP Top 10 API Security Risks list items are caused by a lack of (or insufficient) authentication and authorization.

How can we provide protection for existing APIs to prevent unauthorized access? What if my APIs have already been developed without considering access control? What are my options now? Enter the use of a proxy to provide security services. Solutions such as F5 NGINX Plus can easily be configured to provide authorization and auditing for your APIs - irrespective of where they are deployed. For instance, you can enable OpenID Connect (OIDC) on NGINX Plus to provide authentication and authorization for your applications (including APIs) without having to change a single line of code.

In this article, we will present an existing application with an API deployed in an F5 Distributed Cloud cluster. This application lacks authentication and authorization features. The app we will be using is the Sentence demo app, deployed into a Kubernetes cluster on Distributed Cloud. The Kubernetes cluster we will be using in this walkthrough is a Distributed Cloud Virtual Kubernetes (vk8s) instance deployed to host application services in more than one Regional Edge site. Why? An immediate benefit is that as a developer, I don't have to be concerned with managing my own Kubernetes cluster. We will use automation to declaratively configure a virtual Kubernetes cluster and deploy our application to it in a matter of seconds!

Once the Sentence demo app is up and running, we will deploy NGINX Plus into another vk8s cluster for the purpose of providing authorization services. What about authentication? We will walk through configuring Microsoft Entra ID (formerly Azure Active Directory) as the identity provider for our application, and then configure NGINX Plus to act as an OIDC Relying Party to provide security services for the deployed API.

Finally, we will make use of Distributed Cloud HTTP load balancers. We will provision one publicly available load balancer that will securely route traffic to the NGINX Plus authorization server. We will then provision an additional load balancer to provide application routing services to the Sentence app. This second load balancer differs from the first in that it is only "advertised" (and therefore only reachable) from services inside the namespace. This results in a configuration that makes it impossible for users to bypass the NGINX authorization server in an attempt to directly consume the Sentence app. The following is a diagram representing what will be deployed:

Let's get to it!

Deployment Steps
The detailed steps to deploy this solution are located in a GitHub repository accompanying this article. Follow the steps here, and be sure to come back to this article for the wrap-up!
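Before the wrap-up, here is a minimal sketch of the token-validation core of such a setup. The hostnames, certificate paths, and Entra ID tenant ID are placeholders; the actual deployment uses the full NGINX Plus OIDC reference implementation, which adds the authorization-code exchange and session handling around this piece:

upstream sentence_app {
    zone sentence_app 64k;
    server sentence.internal.example:8080;   # internal-only Distributed Cloud LB
}

server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate     /etc/nginx/ssl/api.crt;    # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/api.key;

    location /api/ {
        # Validate the token issued by the identity provider (NGINX Plus)
        auth_jwt "sentence-api";
        auth_jwt_key_request /_jwks;   # fetch signing keys dynamically

        proxy_pass http://sentence_app;
    }

    location = /_jwks {
        internal;
        proxy_pass https://login.microsoftonline.com/TENANT_ID/discovery/v2.0/keys;
    }
}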
Conclusion
You did it! With the power and reach of Distributed Cloud combined with the security that NGINX Plus provides, we have been able to easily provide authorization for our example API-based application. Where could we go from here? Do you remember we deployed these applications to two specific geographical sites? You could very easily extend the reach of this solution to more regions (distributed globally) to provide reliability and low-latency experiences for the end users of this application. Additionally, you can easily attach Distributed Cloud's award-winning DDoS mitigation, WAF, and Bot mitigation to further protect your applications from attacks and fraudulent activity. Thanks for taking this journey with me, and I welcome your comments below.

Acknowledgments
This article wouldn't have been the same without the efforts of Fouad_Chmainy, Matt_Dierick, and Alexis Da Costa. They are the original authors of the distributed design, the Sentence app, and the NGINX Plus OIDC image optimized for Distributed Cloud. Additionally, special thanks to Cody_Green and Kevin_Reynolds for inspiration and assistance in the Terraform portion of the solution. Thanks, guys!

2022 DevCentral MVP Announcement
Congratulations to the 2022 DevCentral MVPs! Without users who take time from their busy days to share their experience and knowledge for others, DevCentral would be more of a corporate news site and not an actual user community. To that end, the DevCentral MVP Award is given annually to the outstanding group of individuals - the experts in the technical F5 user community who go out of their way to engage with the user community. The award is our way of recognizing their significant contributions, because while all of our users collectively make DevCentral one of the top community sites around and a valuable resource for everyone, MVPs regularly go above and beyond in assisting fellow F5 users. We understand that 2021 was difficult for everyone, and we are extra-grateful to this year's MVPs for going out of their ways to help others. MVPs get badges in their DevCentral profiles so everyone can see that they are recognized experts. This year's MVPs will receive a glass award, certificate, exclusive thank-you gifts, and invitations to exclusive webinars and behind-the-scenes looks at things like roadmaps, new product sneak-previews, and innovative concepts in development.

The 2022 DevCentral MVPs are:
Aditya K Vlogs
AlexBCT
Amine_Kadimi
Austin_Geraci
Boneyard
Daniel_Wolf
Dario_Garrido
David.burgoyne
Donamato 01
Enes_Afsin_Al
FrancisD
iaine
jaikumar_f5
Jim_Schwartzme1
JoshBecigneul
JTLampe
Kai Wilke
Kees van den Bos
Kevin_Davies
Lionel Deval (Lidev)
LouisK
Mayur_Sutare
Neeeewbie
Niels_van_Sluis
Nikoolayy1
P K
Patrik_Jonsson
Philip Jönsson
Rob_Carr
Rodolfo_Nützmann
Rodrigo_Albuquerque
Samstep
SanjayP
ScottE
Sebastian Maniak
Stefan_Klotz
StephanManthey
Tyler.Hatton

Mitigating OWASP API Security Top 10 risks using F5 NGINX App Protect
This 2019 API Security article covers a summary of the OWASP API Security Top 10 – 2019 categories, and the newly published 2023 API Security article covers an introduction to the newest edition, the OWASP API Security Top 10 risks – 2023. Here we will deep-dive into some of those common risks and how we can protect our applications against these vulnerabilities using F5 NGINX App Protect.

Excessive Data Exposure
Problem Statement: As shown below in one of the demo application APIs, Personally Identifiable Information (PII) data, like Credit Card Numbers (CCN) and U.S. Social Security Numbers (SSN), is visible in responses that are highly sensitive. So, we must hide these details to prevent personal data exploits.
Solution: To prevent this vulnerability, we will use the DataGuard feature in NGINX App Protect, which validates all response data for sensitive details and will either mask the data or block those requests, as per the configured settings. First, we will configure DataGuard to mask the PII data as shown below and will apply this configuration. Next, if we resend the same request, we can see that the CCN/SSN numbers are masked, thereby preventing data breaches. If needed, we can update the configuration to block this vulnerability, after which all incoming requests for this endpoint will be blocked. If you open the security log and filter with this support ID, we can see that the request is either blocked or the PII data is masked, as per the DataGuard configuration applied in the above section.

Injection
Problem Statement: Customer login pages without secure coding practices may have flaws. Intruders could use those flaws to exploit credential validation using different types of injections, like SQLi, command injections, etc. In our demo application, we have found an exploit which allows us to bypass credential validation using SQL injection (by using the username "' OR true --" and any password), thereby getting administrative access, as below:
Solution: NGINX App Protect has a database of signatures that match this type of SQLi attack. By configuring the WAF policy in blocking mode, NGINX App Protect can identify and block this attack, as shown below. If you check the security log with this support ID, we can see that the request is blocked because of the SQL injection risk, as below.

Insufficient Logging & Monitoring
Problem Statement: Appropriate logging and monitoring solutions play a pivotal role in identifying attacks and also in finding the root cause of any security issues. Without these solutions, applications are fully exposed to attackers and SecOps is completely blind to identifying details of users and resources being accessed.
Solution: NGINX provides different options to track logging details of applications for end-to-end visibility of every request, both from a security and a performance perspective. Users can change configurations as per their requirements and can also configure different logging mechanisms with different levels. Check the links below for more details on logging:
https://www.nginx.com/blog/logging-upstream-nginx-traffic-cdn77/
https://www.nginx.com/blog/modsecurity-logging-and-debugging/
https://www.nginx.com/blog/using-nginx-logging-for-application-performance-monitoring/
https://docs.nginx.com/nginx/admin-guide/monitoring/logging/
https://docs.nginx.com/nginx-app-protect-waf/logging-overview/logs-overview/
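As a simple illustration of the first option, here is a minimal sketch of structured access logging with core NGINX directives (the format name and file path are assumptions, not this walkthrough's exact configuration; it goes inside the http context, while NGINX App Protect security logging is configured separately via its log profile):

log_format api_json escape=json
    '{"time":"$time_iso8601","client":"$remote_addr",'
    '"method":"$request_method","uri":"$uri",'
    '"status":$status,"request_time":$request_time}';

access_log /var/log/nginx/api_access.json api_json;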
Unrestricted Access to Sensitive Business Flows
Problem Statement: By using the power of automation tools, attackers can now break through tough levels of protection. The inefficiency of APIs in detecting automated bot tools not only causes business loss, but it can also adversely impact the services for genuine users of an application.
Solution: NGINX App Protect has best-in-class bot detection technology and can detect and label automation tools in different categories, like trusted, untrusted, and unknown. Depending on the configurations applied in the policy, requests generated from these tools are either blocked or alerted on. Below is an example that shows how requests generated from the Postman automation tool are blocked. By filtering the security log with this support ID, we can see that the request is blocked because of an untrusted bot.

Lack of Resources & Rate Limiting
Problem Statement: APIs do not have any restrictions on the size or number of resources that can be requested by the end user. These scenarios can lead to poor API server performance, Denial of Service (DoS), and brute force attacks.
Solution: NGINX App Protect provides different ways to rate limit requests as per user requirements. A simple rate limiting use case configuration is able to block requests after reaching the limit, which is demonstrated below.
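As one common pattern, here is a minimal sketch using core NGINX rate-limiting directives (the zone name, rate, and upstream below are assumptions, not this walkthrough's exact policy):

limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

upstream api_backend {
    server 10.0.0.10:8080;   # placeholder backend
}

server {
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;   # absorb short bursts, reject beyond
        limit_req_status 429;                        # respond with 429 Too Many Requests
        proxy_pass http://api_backend;
    }
}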
Conclusion: In short, this article covered some common API vulnerabilities and showed how NGINX App Protect can be used as a mitigation solution to prevent these OWASP API security risks.

Related resources for more information or to get started:
F5 NGINX App Protect
OWASP API Security Top 10 2019
OWASP API Security Top 10 2023

NGINX Management Suite API Connectivity Manager - Modern API driven Applications

Introduction
API based applications benefits
NGINX Management Suite API Connectivity Manager capabilities
API Connectivity Manager use case
API Connectivity Manager use case overview
API Connectivity Manager traffic flows
API Connectivity Manager lab & implementation
References

Introduction
API based applications benefits
Before we dive into our API gateway use case, let's go one step back and look at why applications have moved to being API driven. Below are some of the benefits of this move:
Loose coupling: API-based applications can be built and maintained independently, allowing for faster development and deployment cycles.
Reusability: APIs can be reused across multiple applications, reducing the need to duplicate code and effort.
Scalability: API-based architecture allows for easier scaling of individual services, rather than having to scale the entire application.
Flexibility: APIs allow different client applications to consume the same services, such as web, mobile, and IoT devices.
Interoperability: APIs facilitate communication between different systems and platforms, enabling integration with third-party services and data sources.
Microservices: API-based architecture allows developers to build small, modular services that can be developed, deployed, and scaled independently.

NGINX Management Suite API Connectivity Manager capabilities
NGINX Management Suite API Connectivity Manager adds to the capabilities of API driven applications a secure approach to authenticating, accessing, and developing those API based applications. API Connectivity Manager is used to connect, secure, and govern our APIs. In addition, API Connectivity Manager lets us separate infrastructure lifecycle management from the API lifecycle, giving the IT/Ops teams and application developers the ability to work independently. API Connectivity Manager provides the following features:
Create and manage isolated Workspaces for business units, development teams, and so on, so each team can develop and deploy at its own pace without affecting other teams.
Create and manage API infrastructure in isolated workspaces.
Enforce uniform security policies across all workspaces by applying global policies.
Create Developer Portals that align with your brand, with custom color themes, logos, and favicons.
Onboard your APIs to an API Gateway cluster and publish your API documentation to the Dev Portal.
Let teams apply policies to their API proxies to provide custom quality of service for individual applications.
Onboard API documentation by uploading an OpenAPI spec.
Publish your API docs to a Dev Portal while keeping your API's backend service private.
Let users issue API keys or basic authentication credentials for access to your API.
Send API calls by using the Developer Portal's API Reference documentation.

API Connectivity Manager use case
API Connectivity Manager use case overview
In our case we will have three teams:
Infrastructure team, responsible for setting up the infrastructure, domains, and access policies.
API team, responsible for setting up the API documentation, QoS, and gateway for both the production and developer portals.
Application team, responsible for learning the APIs through the developer portal and consuming the APIs through the production portal.
Authentication in our case is done via two methods (a sketch of the first follows below):
API Key authentication for API version 1.
OAuth2 introspection for API version 2.
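For context, here is a minimal sketch of the underlying NGINX pattern for API key authentication that a gateway can apply; API Connectivity Manager generates its own configuration, and the header name, key value, and upstream below are placeholders:

# Inside the http context; a client header named "apikey" surfaces as $http_apikey.
map $http_apikey $api_client_name {
    default                      "";
    "7B5zIqmRGXmrJTFmKa99vcit"   "client_one";   # made-up key for illustration
}

upstream api_v1_backend {
    server 10.0.0.10:8080;   # placeholder backend
}

server {
    listen 80;
    server_name api.example.com;

    location /api/v1/ {
        if ($api_client_name = "") {
            return 401;   # missing or unknown API key
        }
        proxy_pass http://api_v1_backend;
    }
}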
Note: more authentication methods can be used (such as JSON Web Token assertion); these are included in the following tutorial. A more detailed discussion of API authentication can be found in Application Programming Interface (API) Authentication types simplified. Additional features like API rate limiting can be applied as well; here's a tutorial to enable that feature.

API Connectivity Manager traffic flows
In our use case we will have three flows:
Management flow, illustrated below.
Metrics and events collection flow, illustrated below.
Data flow, illustrated below.
NGINX tutorial on how to streamline API operations with API Connectivity Manager.

API Connectivity Manager lab & implementation
The steps we are going to follow, with some useful tutorial videos, are highlighted below:
Setup backend API application (this step has already been done for you in the lab).
Setup API Connectivity Manager infrastructure and policies.
Enable API Key authentication via the following YouTube tutorial: Enable API Key Authentication with API Connectivity Manager.
Publish APIs and documentation through API Connectivity Manager.
Test APIs through the API Developer Portal.

The detailed lab guide and the implementation videos:
Cloud labs detailed guide: https://clouddocs.f5.com/training/community/nginx/html/class10/class10.html
UDF lab can be found here as well: https://udf.f5.com/b/ed5ffb71-bcce-47ec-9d9f-307441e4c12c#documentation
Below is a recorded lab walkthrough by our awesome guru Matt_Dierick.

References
API Connectivity Manager
NGINX Management Suite
NGINX Docs
API Connectivity Manager UDF Lab

F5 NGINXaaS for Azure: Multi-Region Architecture
The F5 NGINXaaS for Azure offering recently announced general availability. Trust me...I've been using it and having fun! In this article, I will show you an example hub and spoke architecture using GitHub Actions and Azure Functions to automate NGINX configurations. As a bonus, I have code on GitHub that you can use to deploy this example.

Topics Covered: NGINXaaS for Azure Architecture Explained

The NGINXaaS for Azure architecture consists of an F5 subscription as well as a customer subscription:
F5 subscription - hidden from the user; NGINX Plus instances, control plane, data plane
Customer subscription - eNICs from VNet Injection, customer network stack, customer workloads

F5 Subscription
The NGINXaaS offering creates NGINX Plus instances and other related components, like NGINX control plane and data plane resources, in the F5 subscription. These items are not visible to the end user, and therefore the operational tasks of upgrades and scaling are managed by the NGINXaaS offering instead of the user. Each NGINX deployment, like other Azure services, is regional in nature. If you need to deploy NGINX closer to the client, then this will require multiple NGINX deployments (ex. westus2, eastus2). Each NGINX deployment will have a unique listener address. You can then use DNS to send clients to an NGINX deployment in the nearest region. Here is an example diagram.

Customer Subscription
The customer subscription has items like network stacks, Key Vaults, monitoring, application workloads, and more. The NGINX deployment automatically creates ethernet NICs (eNICs) in the customer subscription using VNet Injection and subnet delegation. The eNICs are deployed inside their own Azure Resource Group. They receive IP addressing from the customer VNet and are indeed visible to the user. However, there is no management needed for the eNICs because they are part of the NGINX deployment.
Note: In my testing during public preview, I noticed that Azure lets you manually remove subnet delegation for the NGINX service. Warning...do NOT do this. It will break traffic flow.

Hub and Spoke Architecture
You can easily make a hub and spoke design with NGINX in the mix using VNet peering. This is a great use case when you are required to use a shared NGINX deployment across different VNets or environments, or when scaling workloads across multiple regions. Recall from earlier that an NGINX deployment will automatically create eNICs in the customer subscription. Therefore, you can control the entry point into the customer environment and the traffic flows. For example, configuring NGINX to use a customer shared VNet with peering gives you a hub and spoke design such as the picture below. This results in the NGINX eNICs being deployed into a customer shared VNet (hub). Meanwhile the customer places workloads into their own VNets (spokes).
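For reference, here is a sketch of wiring up one hub-spoke pair with the Azure CLI (the resource groups, VNet names, and subscription ID are placeholders, not values from the demo repo; peering must be created in both directions):

az network vnet peering create \
  --name hub-to-spoke1 \
  --resource-group rg-shared \
  --vnet-name vnet-hub \
  --remote-vnet /subscriptions/SUB_ID/resourceGroups/rg-app/providers/Microsoft.Network/virtualNetworks/vnet-spoke1 \
  --allow-vnet-access

az network vnet peering create \
  --name spoke1-to-hub \
  --resource-group rg-app \
  --vnet-name vnet-spoke1 \
  --remote-vnet /subscriptions/SUB_ID/resourceGroups/rg-shared/providers/Microsoft.Network/virtualNetworks/vnet-hub \
  --allow-vnet-access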
Demo Code
If this is the first time deploying NGINXaaS for Azure in your subscription, then you will need to subscribe to it in the marketplace:
Search for "F5 NGINXaaS for Azure" in the marketplace or follow this link.
Select F5 NGINXaaS for Azure, choose "Public Preview", and subscribe.
Time to play with code! Click the link below and review the README to deploy the demo example. There are prerequisites to follow. For example, you need to have a GitHub repository that stores the NGINX configuration files. You also need to have an Azure Key Vault and a secret containing your GitHub access token. These are explained in the README.
GitHub repo - F5 NGINXaaS for Azure Deployment with Demo Application in Multiple Regions
After the deployment is done, you have a few options on how to handle NGINX configurations. I will share examples in future articles, but for now go ahead and explore on your own. Refer to the NGINXaaS for Azure documentation "NGINX Configuration" to get started.

Summary
This article gives an example architecture for deploying the NGINXaaS for Azure offering. I shared details on the different NGINX components, and I also shared demo code to help you explore the solution on your own! Contact us with any questions or requirements. We would love to hear from you!

Resources
DevCentral Series - F5 NGINXaaS for Azure
F5 NGINXaaS for Azure Docs
Blog Introducing F5 NGINXaaS for Azure

VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.
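To illustrate the concurrent-checking idea (this is a minimal sketch, not VIPTest's actual code; it assumes the third-party requests package is installed and the URLs are placeholders):

import concurrent.futures
import requests

URLS = ["https://app1.example.com", "https://app2.example.com/health"]

def check(url: str) -> dict:
    """Fetch one URL and summarize the result."""
    try:
        r = requests.get(url, timeout=5)
        return {"url": url, "status": r.status_code, "ok": r.ok}
    except requests.RequestException as exc:
        return {"url": url, "status": None, "ok": False, "error": str(exc)}

if __name__ == "__main__":
    # Check many URLs concurrently, e.g. before and after a config change
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        for result in pool.map(check, URLS):
            print(result)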
Application Programming Interface (API) Authentication types simplified

API is a critical part of most of our modern applications. In this article we walk through the different authentication types, to help us in future articles covering NGINX API Connectivity Manager authentication and NGINX Single Sign-On.
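As a quick taste of the types covered, here is how a few common schemes look on the wire (the hostnames, keys, and tokens below are placeholders):

# API key in a custom header:
curl -H "apikey: 7B5zIqmRGXmrJTFmKa99vcit" https://api.example.com/v1/items

# HTTP basic authentication (curl base64-encodes username:password):
curl -u demo_user:demo_pass https://api.example.com/v1/items

# OAuth2 bearer token:
curl -H "Authorization: Bearer eyJhbGciOiJSUzI1NiJ9..." https://api.example.com/v2/items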
How to deploy NGINX App Protect WAF on the NGINX Ingress Controller using argoCD

Overview
The NGINX App Protect WAF can be deployed as an add-on within the NGINX Ingress Controller, making the two function in tandem as a WAF armed with a Kubernetes Ingress Controller. This repo leverages argoCD as a GitOps continuous delivery tool to showcase an end-to-end example of how to use the combo to front-end a simple Kubernetes application. As this repo is public facing, I am also using a tool named 'Sealed-Secrets' to encrypt all the secret manifests. However, this is not a requirement for deploying either component. I will go through argoCD and Sealed-Secrets first as supporting pieces and then go into NGINX App Protect WAF itself. Please note that this tutorial applies to the NGINX Plus-based version of NGINX Ingress Controller. If you aren't sure which version you're using, read the blog A Guide to Choosing an Ingress Controller, Part 4: NGINX Ingress Controller Options.

argoCD
If you do not know argoCD, I strongly recommend that you check it out. In essence, with argoCD, you create an argoCD application that references a location (e.g., Git repo or folder) containing all your manifests. argoCD applies all the manifests in that location and constantly monitors and syncs changes from that location. For example, if you make a change to a manifest or add a new one, after you do a Git commit for that change, argoCD picks it up and immediately applies the change within your Kubernetes cluster. The following screenshot taken from argoCD shows that I have an app named 'cafe'. The 'cafe' app points to a Git repo where all manifests are stored. The status is 'Healthy' and 'Synced'. It means that argoCD has successfully applied all the manifests that it knows about, and these manifests are in sync with the Git repo. The cafe argoCD application manifest is shown here. For this repo, all argoCD application manifests are stored in the bootstrap folder. To add the Cafe argoCD application, run the following:

kubectl apply -f cafe.yaml
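For reference, an argoCD Application manifest of this shape looks roughly like the following (the repo URL and path are placeholders, not the actual repo layout):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cafe
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/nap-waf-argocd.git   # placeholder repo
    targetRevision: HEAD
    path: cafe
  destination:
    server: https://kubernetes.default.svc
    namespace: cafe
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert out-of-band changes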
Sealed Secrets
When Kubernetes manifests contain secrets such as passwords and private keys, they cannot simply be pushed to a public repo. With Sealed-Secrets, you can solve this problem by sealing those manifest files offline via the kubeseal binary and then pushing them to the public repo. When you apply the sealed secret manifests from the public repo, the sealed-secrets component that sits inside Kubernetes will decrypt the sealed secrets and then apply them on the fly. To do this, you must upload the encryption key into Kubernetes first. Please note that I am only using sealed secrets so I can push my secret manifests to a public repo for the purpose of this demo. It is not a requirement to install NGINX App Protect WAF. If you have a private repo, you can simply push all your secret manifests there and argoCD will then apply them as is. The following commands set up sealed secrets with my specified certificate/key in its own namespace:

export PRIVATEKEY="dc7.h.l.key"
export PUBLICKEY="dc7.h.l.cer"
export NAMESPACE="sealed-secrets"
export SECRETNAME="dc7.h.l"
kubectl -n "$NAMESPACE" create secret tls "$SECRETNAME" --cert="$PUBLICKEY" --key="$PRIVATEKEY"
kubectl -n "$NAMESPACE" label secret "$SECRETNAME" sealedsecrets.bitnami.com/sealed-secrets-key=active

To create a sealed TLS secret:

kubectl create secret tls wildcard.abbagmbh.de --cert=wildcard.abbagmbh.de.cer --key=wildcard.abbagmbh.de.key -n nginx-ingress --dry-run=client -o yaml | kubeseal \
  --controller-namespace sealed-secrets \
  --format yaml \
  > sealed-wildcard.abbagmbh.de.yaml

NGINX App Protect WAF
The NGINX App Protect WAF for Kubernetes is an NGINX Ingress Controller software security module add-on with L7 WAF capabilities. It can be embedded within the NGINX Ingress Controller. The installation process for NGINX App Protect WAF is identical to NGINX Ingress Controller, with the following additional steps:
Apply NGINX App Protect WAF specific CRDs to Kubernetes
Apply NGINX App Protect WAF log configuration (NGINX App Protect WAF logging is different from NGINX Plus)
Apply NGINX App Protect WAF protection policy
The official installation docs using manifests have great info around the entire process. This repo simply collated all the necessary manifests required for NGINX App Protect WAF in a directory that is then fed to argoCD.

Image Pull Secret
With the NGINX App Protect WAF docker image, you can either pull it from the official NGINX private repo, or from your own repo. In the former case, you would need to create a secret that is generated from the JWT file (part of the NGINX license files). See below for detail. To create a sealed docker-registry secret:

username=`cat nginx-repo.jwt`
kubectl create secret docker-registry private-registry.nginx.com \
  --docker-server=private-registry.nginx.com \
  --docker-username=$username \
  --docker-password=none \
  --namespace nginx-ingress \
  --dry-run=client -o yaml | kubeseal \
  --controller-namespace sealed-secrets \
  --format yaml \
  > sealed-docker-registry-secret.yaml

Notice the controller namespace above; it needs to match the namespace where you installed Sealed-Secrets.

NGINX App Protect WAF CRDs
A number of NGINX App Protect WAF specific CRDs (Custom Resource Definitions) are required for installation. They are included in the crds directory and should be picked up and applied by argoCD automatically.

NGINX App Protect WAF Configuration
The NGINX App Protect WAF configuration includes the following in this demo:
User defined signature
NGINX App Protect WAF policy
NGINX App Protect WAF log configuration
This user defined signature shows an example of a custom signature that looks for the keyword apple in request traffic. The NGINX App Protect WAF policy defines violation rules; in this case it blocks traffic caught by the custom signature defined above. This sample policy also enables data guard and all other protection features defined in a base template. The NGINX App Protect WAF log configuration defines what gets logged and what it looks like, e.g., log all traffic versus log illegal traffic. This repo also includes a manifest for a syslog deployment as a log destination used by NGINX App Protect WAF.
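As a reference for the user defined signature mentioned above, an APUserSig resource looks roughly like this (patterned after the example in the NGINX App Protect WAF docs; the metadata and descriptive field values here are illustrative):

apiVersion: appprotect.f5.com/v1beta1
kind: APUserSig
metadata:
  name: apple
  namespace: nginx-ingress
spec:
  tag: Fruits
  signatures:
  - name: apple_sig
    description: Matches the keyword apple in request traffic
    accuracy: medium
    risk: medium
    signatureType: request
    attackType:
      name: Brute Force Attack
    rule: content:"apple"; nocase;   # the match condition itself
    systems:
    - name: Microsoft Windows
    - name: Unix/Linux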
Ingress
To use NGINX App Protect WAF, you must create an Ingress resource. Within the Ingress manifest, you use annotations to apply NGINX App Protect WAF specific settings that were discussed above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    appprotect.f5.com/app-protect-policy: "nginx-ingress/dataguard-alarm"
    appprotect.f5.com/app-protect-enable: "True"
    appprotect.f5.com/app-protect-security-log-enable: "True"
    appprotect.f5.com/app-protect-security-log: "nginx-ingress/logconf"
    appprotect.f5.com/app-protect-security-log-destination: "syslog:server=syslog-svc.nginx-ingress:514"

The traffic routing logic is done via the following:

spec:
  ingressClassName: nginx # use only with k8s version >= 1.18.0
  tls:
  - hosts:
    - cafe.abbagmbh.de
    secretName: wildcard.abbagmbh.de
  rules:
  - host: cafe.abbagmbh.de
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 8080
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 8080

Testing
Once you have added both applications (in the bootstrap folder) into argoCD, you should see the following in the argoCD UI. We can run a test to confirm that traffic is routed based upon the HTTP URI, and that WAF protection is applied. First, get the NGINX Ingress Controller IP:

% kubectl get ingress
NAME           CLASS   HOSTS              ADDRESS   PORTS     AGE
cafe-ingress   nginx   cafe.abbagmbh.de   x.x.x.x   80, 443   1d

Now send traffic to both the '/tea' and '/coffee' URI paths:

% curl --resolve cafe.abbagmbh.de:443:x.x.x.x https://cafe.abbagmbh.de/tea
Server address: 10.244.0.22:8080
Server name: tea-6fb46d899f-spvld
Date: 03/May/2022:06:02:24 +0000
URI: /tea
Request ID: 093ed857d28e160b7417bb4746bec774

% curl --resolve cafe.abbagmbh.de:443:x.x.x.x https://cafe.abbagmbh.de/coffee
Server address: 10.244.0.21:8080
Server name: coffee-6f4b79b975-7fwwk
Date: 03/May/2022:06:03:51 +0000
URI: /coffee
Request ID: 0744417d1e2d59329401ed2189067e40

As you can see from above, traffic destined to '/tea' is routed to the tea pod (tea-6fb46d899f-spvld) and traffic destined to '/coffee' is routed to the coffee pod (coffee-6f4b79b975-7fwwk). Let us trigger a violation based on the user defined signature:

% curl --resolve cafe.abbagmbh.de:443:x.x.x.x https://cafe.abbagmbh.de/apple
<html><head><title>Request Rejected</title></head><body>The requested URL was rejected. Please consult with your administrator.<br><br>Your support ID is: 10807744421744880061<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>

Finally, traffic violating the XSS rule:

% curl --resolve cafe.abbagmbh.de:443:x.x.x.x 'https://cafe.abbagmbh.de/tea<script>'
<html><head><title>Request Rejected</title></head><body>The requested URL was rejected. Please consult with your administrator.<br><br>Your support ID is: 10807744421744881081<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>
Confirming that logs are received on the syslog pod:

% tail -f /var/log/message
May 3 06:30:32 nginx-ingress-6444787b8-l6fzr ASM:attack_type="Non-browser Client Abuse of Functionality Cross Site Scripting (XSS)" blocking_exception_reason="N/A" date_time="2022-05-03 06:30:32" dest_port="443" ip_client="x.x.x.x" is_truncated="false" method="GET" policy_name="dataguard-alarm" protocol="HTTPS" request_status="blocked" response_code="0" severity="Critical" sig_cves=" " sig_ids="200000099 200000093" sig_names="XSS script tag (URI) XSS script tag end (URI)" sig_set_names="{Cross Site Scripting Signatures;High Accuracy Signatures} {Cross Site Scripting Signatures;High Accuracy Signatures}" src_port="1478" sub_violations="N/A" support_id="10807744421744881591" threat_campaign_names="N/A" unit_hostname="nginx-ingress-6444787b8-l6fzr" uri="/tea<script>" violation_rating="5" vs_name="24-cafe.abbagmbh.de:8-/tea" x_forwarded_for_header_value="N/A" outcome="REJECTED" outcome_reason="SECURITY_WAF_VIOLATION" violations="Illegal meta character in URL Attack signature detected Violation Rating Threat detected Bot Client Detected" json_log="{violations:[{enforcementState:{isBlocked:false} violation:{name:VIOL_URL_METACHAR}} {enforcementState:{isBlocked:true} violation:{name:VIOL_RATING_THREAT}} {enforcementState:{isBlocked:true} violation:{name:VIOL_BOT_CLIENT}} {enforcementState:{isBlocked:true} signature:{name:XSS script tag (URI) signatureId:200000099} violation:{name:VIOL_ATTACK_SIGNATURE}} {enforcementState:{isBlocked:true} signature:{name:XSS script tag end (URI) signatureId:200000093} violation:{name:VIOL_ATTACK_SIGNATURE}}]}"

Conclusion
The NGINX App Protect WAF deploys as a software security module add-on to the NGINX Ingress Controller and provides comprehensive application security for your Kubernetes environment. I hope that you find the deployment simple and straightforward.

Knowledge sharing: Containers, Kubernetes, Openshift, F5 Container Connector, NGINX Ingress
For anyone interested, there is free training for "F5 Container Connector for Kubernetes" and "F5 OpenShift Container Integration" at LearnF5. For NGINX being installed in Kubernetes there is enough info, but for the F5 Container Connector/Container Ingress Services there is not so much:
https://docs.nginx.com/nginx-ingress-controller/f5-ingresslink/
https://www.nginx.com/products/nginx-ingress-controller/
https://community.f5.com/t5/technical-articles/better-together-f5-container-ingress-services-and-nginx-plus/ta-p/280471
F5 DevCentral also has a YouTube channel with useful info:
https://www.youtube.com/c/devcentral
If you don't have good knowledge about containers and Kubernetes, first check the links below. For Docker containers, YouTube has a lot of good training, for example:
you need to learn Kubernetes RIGHT NOW!! - YouTube
Docker Tutorial for Beginners [FULL COURSE in 3 Hours] - YouTube
Docker overview | Docker Documentation
The same is true for Kubernetes, and they have a free test lab on their site:
Learn Kubernetes Basics | Kubernetes
you need to learn Docker RIGHT NOW!! // Docker Containers 101 - YouTube
Red Hat has some free training, and IBM provides some free labs for Containers, Kubernetes, OpenShift, etc.:
Training and Certification (redhat.com)
IBM CloudLabs: Free, Interactive Kubernetes Tutorials | IBM
Red Hat OpenShift Tutorials | IBM

F5 High Availability - Public Cloud Guidance
This article will provide information about BIG-IP and NGINX high availability (HA) topics that should be considered when leveraging the public cloud. There are differences between on-prem and public cloud, such as cloud provider L2 networking. These differences lead to challenges in how you address HA, failover time, peer setup, scaling options, and application state.

Topics Covered:
Discuss and Define HA
Importance of Application Behavior and Traffic Sizing
HA Capabilities of BIG-IP and NGINX
Various HA Deployment Options (Active/Active, Active/Standby, auto scale)
Example Customer Scenario

What is High Availability?
High availability can mean many things to different people. Depending on the application and traffic requirements, HA requires dual data paths, redundant storage, redundant power, and compute. It means the ability to survive a failure, maintenance windows should be seamless to the user, and the user experience should never suffer...ever!
Reference: https://en.wikipedia.org/wiki/High_availability
So what should HA provide?
Synchronization of configuration data to peers (ex. config objects)
Synchronization of application session state (ex. persistence records)
Enable traffic to fail over to a peer
Locally, allow clusters of devices to act and appear as one unit
Globally, disburse traffic via DNS and routing

Importance of Application Behavior and Traffic Sizing
Let's look at a common use case... "gaming app, lots of persistent connections, client needs to hit same backend throughout entire game session"

Session State
The requirement of session state is common across applications using methods like HTTP cookies, F5 iRule persistence, JSessionID, IP affinity, or hash. The session type used by the application can help you decide what migration path is right for you. Is this an app more fitting for a lift-n-shift approach...Rehost? Can the app be redesigned to take advantage of all native IaaS and PaaS technologies...Refactor?
Reference: 6 R's of a Cloud Migration
Application session state allows a user to have a consistent and reliable experience
Auto scaling L7 proxies (BIG-IP or NGINX) keep track of session state
BIG-IP can only mirror session state to the next device in the cluster
NGINX can mirror state to all devices in the cluster (via zone sync)

Traffic Sizing
The cloud provider does a great job with things like scaling, but there are still cloud provider limits that affect sizing and machine instance types to keep in mind. BIG-IP and NGINX are considered network virtual appliances (NVA). They carry quota limits like other cloud objects.
Google GCP VPC Resource Limits
Azure VM Flow Limits
AWS Instance Types
Unfortunately, not all limits are documented. Key metrics for L7 proxies are typically SSL stats, throughput, connection type, and connection count. Collecting these application and traffic metrics can help identify the correct instance type. We have a list of the F5 supported BIG-IP VE platforms on F5 CloudDocs.

F5 Products and HA Capabilities
BIG-IP HA Capabilities
BIG-IP supports the following HA cluster configurations:
Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in cluster
L3/L4 connection sharing to next device in cluster (ex. avoids re-login)
L5-L7 state sharing to next device in cluster (ex. IP persistence, SSL persistence, iRule UIE persistence)
Reference: BIG-IP High Availability Docs
NGINX HA Capabilities
NGINX supports the following HA cluster configurations:
Active/Active - all devices processing traffic
Active/Standby - one device processes traffic, others wait in standby
Configuration sync to all devices in cluster
Mirroring connections at L3/L4 not available
Mirroring session state to ALL devices in cluster using Zone Synchronization Module (NGINX Plus R15)
Reference: NGINX High Availability Docs

HA Methods for BIG-IP
In the following sections, I will illustrate 3 common deployment configurations for BIG-IP in public cloud.
HA for BIG-IP Design #1 - Active/Standby via API
HA for BIG-IP Design #2 - A/A or A/S via LB
HA for BIG-IP Design #3 - Regional Failover (multi region)

HA for BIG-IP Design #1 - Active/Standby via API (multi AZ)
This failover method uses API calls to communicate with the cloud provider and move objects (IP address, routes, etc) during failover events. The F5 Cloud Failover Extension (CFE) for BIG-IP is used to declaratively configure the HA settings.
Cloud provider load balancer is NOT required
Fail over time can be SLOW!
Only one device actively used (other device sits idle)
Failover uses API calls to move cloud objects, times vary (see CFE Performance and Sizing)
Key Findings:
Google API failover times depend on number of forwarding rules
Azure API slow to disassociate/associate IPs to NICs (remapping)
Azure API fast when updating routes (UDR, user defined routes)
AWS reliable with API regarding IP moves and routes
Recommendations:
This design with multi AZ is more preferred than single AZ
Recommend when "traditional" HA cluster required or Lift-n-Shift...Rehost
For Azure (based on my testing)...
Recommend using Azure UDR versus IP failover when possible
Look at Failover via LB example instead for Azure
If API method required, look at DNS solutions to provide further redundancy

HA for BIG-IP Design #2 - A/A or A/S via LB (multi AZ)
Cloud LB health checks the BIG-IP for up/down status
Faster failover times (depends on cloud LB health timers)
Cloud LB allows A/A or A/S
Key difference:
Increased network/compute redundancy
Cloud load balancer required
Recommendations:
Use "failover via LB" if you require faster failover times
For Google (based on my testing)...
Recommend against "via LB" for IPSEC traffic (Google LB not supported)
If load balancing IPSEC, then use "via API" or "via DNS" failover methods

HA for BIG-IP Design #3 - Regional Failover via DNS (multi AZ, multi region)
BIG-IP VE active/active in multiple regions
Traffic disbursed to VEs by DNS/GSLB
DNS/GSLB intelligent health checks for the VEs
Key difference:
Cloud LB is not required
DNS logic required by clients
Orchestration required to manage configs across each BIG-IP
BIG-IP standalone devices (no DSC cluster limitations)
Recommendations:
Good for apps that handle DNS resolution well upon failover events
Recommend when cloud LB cannot handle a particular protocol
Recommend when customer is already using DNS to direct traffic
Recommend for applications that have been refactored to handle session state outside of BIG-IP
Recommend for customers with in-house skillset to orchestrate (Ansible, Terraform, etc)

HA Methods for NGINX
In the following sections, I will illustrate 2 common deployment configurations for NGINX in public cloud. Both can layer on the zone synchronization capability described earlier; a minimal sketch of that piece follows below.
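This is a sketch of NGINX Plus zone synchronization (R15+); the listener port, resolver address, and cluster hostname are placeholders. Shared-memory state such as sticky sessions and key-value stores is synced to all cluster members:

stream {
    resolver 10.0.0.2 valid=20s;   # assumed internal resolver for the cluster name
    server {
        listen 9000;               # cluster synchronization port
        zone_sync;
        zone_sync_server nginx-cluster.example.internal:9000 resolve;
    }
}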
HA for NGINX Design #1 - Active/Standby via API
HA for NGINX Design #2 - Auto Scale Active/Active via LB

HA for NGINX Design #1 - Active/Standby via API (multi AZ)
NGINX Plus required
Cloud provider load balancer is NOT required
Only one device actively used (other device sits idle)
Only available in AWS currently
Recommendations:
Recommend when "traditional" HA cluster required or Lift-n-Shift...Rehost
Reference: Active-Passive HA for NGINX Plus on AWS

HA for NGINX Design #2 - Auto Scale Active/Active via LB (multi AZ)
NGINX Plus required
Cloud LB health checks the NGINX
Faster failover times
Key difference:
Increased network/compute redundancy
Cloud load balancer required
Recommendations:
Recommended for apps fitting a migration type of Replatform or Refactor
Reference: Active-Active HA for NGINX Plus on AWS, Active-Active HA for NGINX Plus on Google Cloud Platform

Pros & Cons: Public Cloud Scaling Options
Review this handy table to understand the high level pros and cons of each deployment method.

Example Customer Scenario #1
As a means to make this topic a little more real, here is a common customer scenario that shows you the decisions that go into moving an application to the public cloud. Sometimes it's as easy as a lift-n-shift; other times you might need to do a little more work. In general, public cloud is not on-prem and things might need some tweaking. Hopefully this example will give you some pointers and guidance on your next app migration to the cloud.
Current Setup:
Gaming applications
F5 hardware BIG-IP VIPRIONs on-prem
Two data centers for HA redundancy
iRule heavy configuration (TLS encryption/decryption, payload inspections)
Session Persistence = iRule Universal Persistence (UIE), and other methods
Biggest app:
15K SSL TPS
15Gbps throughput
2 million concurrent connections
300K HTTP req/sec (L7 with TLS)
Requirements for Successful Cloud Migration:
Support current traffic numbers
Support future target traffic growth
Must run in multiple geographic regions
Maintain session state
Must retain all iRules in use
Recommended Design for Cloud Phase #1:
Migration Type: Hybrid model, on-prem + cloud, and some Rehost
Platform: BIG-IP
Retaining iRules means BIG-IP is required
Licensing: High Performance BIG-IP
Unlocks additional CPU cores past 8 (up to 24) for extra traffic and SSL processing
Instance type: check F5 supported BIG-IP VE platforms for accelerated networking (10Gb+)
HA method: Active/Standby and multi-region with DNS
iRule Universal persistence only mirrors to the next device, so keep cluster size to 2
Scale horizontally via additional HA clusters and DNS
Clients pinned to a region via DNS (on-prem or public cloud)
Inside a region, the local proxy cluster shares state
This example comes up in customer conversations often. Based on customer requirements, in-house skillset, current operational model, and time frames, there is one option that is better than the rest. A second design phase lends itself to more of a Replatform or Refactor migration type. In that case, more options can be leveraged to take advantage of cloud-native features. For example, changing the application persistence type from iRule UIE to cookie would allow BIG-IP to avoid keeping track of state. Why? With cookies, the client keeps track of that session state. The client receives a cookie, passes the cookie to the L7 proxy on successive requests, and the proxy checks the cookie value and sends the request to the matching backend pool member. The requirement for the L7 proxy to share session state is now removed; a sketch of this cookie-based persistence follows below.
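For illustration, here is a minimal sketch of cookie-based persistence on NGINX Plus (the upstream name, addresses, and cookie name are placeholders):

upstream backend {
    zone backend 64k;                 # shared memory zone
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    # NGINX Plus inserts the cookie; the client presents it on later requests
    sticky cookie srv_id expires=1h path=/;
}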
Example Customer Scenario #2
Here is another customer scenario. This time the application is a full suite of multimedia content. In contrast to the first scenario, this one will illustrate the benefits of rearchitecting various components, allowing greater flexibility when leveraging the cloud. You still must factor in in-house skill set, project time frames, and other important business (and application) requirements when deciding on the best migration type.
Current Setup:
Multimedia (Gaming, Movie, TV, Music) platform
BIG-IP VIPRIONs using vCMP on-prem
Two data centers for HA redundancy
iRule heavy (Security, Traffic Manipulation, Performance)
Biggest App: oAuth + Cassandra for token storage (entitlements)
Requirements for Successful Cloud Migration:
Support current traffic numbers
Elastic auto scale for seasonal growth (ex. holidays)
VPC peering with partners (must also bypass Web Application Firewall)
Must support current or similar traffic manipulation in the data plane
Compatibility with existing tooling used by the business
Recommended Design for Cloud Phase #1:
Migration Type: Repurchase, migrating BIG-IP to NGINX Plus
Platform: NGINX
iRules converted to JS or LUA
Licensing: NGINX Plus
Modules: GeoIP, LUA, JavaScript
HA method: N+1 autoscaling via native LB, active health checks
This is a great example of a Repurchase, in which application characteristics can allow the various teams to explore alternative cloud migration approaches. This scenario describes a phase one migration of converting BIG-IP devices to NGINX Plus devices. This example assumes the BIG-IP configurations can be somewhat easily converted to NGINX Plus, and it also assumes there is available skillset and project time allocated to properly rearchitect the application where needed.

Summary
OK! Brains are expanding...hopefully? We learned about high availability and what that means for applications and user experience. We touched on the importance of application behavior and traffic sizing. Then we explored the various F5 products, how they handle HA, and HA designs. These recommendations are based on my own lab testing and interactions with customers. Every scenario will carry its own requirements, and all options should be carefully considered when leveraging the public cloud. Finally, we looked at a customer scenario, discussed requirements, and proposed a design. Fun!

Resources
Read the following articles for more guidance specific to the various cloud providers.
Advanced Topologies and More on Highly Available Services
Lightboard Lessons - BIG-IP Deployments in Azure
Google and BIG-IP
Failing Faster in the Cloud
BIG-IP VE on Public Cloud
High-Availability Load Balancing with NGINX Plus on Google Cloud Platform
Using AWS Quick Starts to Deploy NGINX Plus
NGINX on Azure