NGINX
How to rewrite a path to a backend service dropping the prefix and passing the remaining path?
Hello, I am not sure whether my posting is appropriate in this area, so please delete it if there is a violation of posting rules... This must be a common task, but I cannot figure out how to do the following fanout rewrite in our nginx ingress: http://abczzz.com/httpbin/anything -> /anything (the httpbin backend service). When I create the following ingress with a path of '/' and send the query, I receive a proper response.

curl -I -k http://abczzz.com/anything

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mikie-ingress
  namespace: mikie
spec:
  ingressClassName: nginx
  rules:
  - host: abczzz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin-service
            port:
              number: 8999
```

What I really need is to be able to route to different services off of this single host, so I changed the ingress to the following, but the query always fails with a 404. Basically, I want the /httpbin prefix to disappear and the remaining path to be passed on to the backend service, httpbin.

curl -I -k http://abczzz.com/httpbin/anything

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mikie-ingress
  namespace: mikie
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: abczzz.com
    http:
      paths:
      - path: /httpbin(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: httpbin-service
            port:
              number: 8999
```

Thank you for your time and interest, Mike
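For orientation, the rewrite being asked for corresponds conceptually to plain NGINX configuration along the following lines. This is a sketch only: the service name and port are taken from the manifests above, and it is not the configuration the ingress controller literally generates.

```nginx
# Conceptual illustration of the desired prefix-stripping rewrite.
location ~ ^/httpbin(/|$)(.*) {
    # Drop the /httpbin prefix and pass only the remaining path upstream.
    rewrite ^/httpbin(/|$)(.*)$ /$2 break;
    proxy_pass http://httpbin-service:8999;
}
```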
Does nginx (1.20 or newer) re-resolve DNS for proxy_pass?
Consider this nginx config snippet:

```nginx
location ^~ /_example/ {
    proxy_pass https://example.com/_example/;
    proxy_set_header Host my-site;
}
```

Assuming the "example.com" DNS TTL is set to 60 seconds - will nginx re-resolve DNS after 60 seconds? Or does it only resolve the name on startup? I'm finding conflicting information around the internet:
- it will re-resolve only in the commercial NGINX Plus
- it will re-resolve only in newer nginx releases; in older ones, one needs to use workarounds
- it will only resolve once on startup and never again
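For what it's worth, with a statically written proxy_pass like the one above, open source NGINX resolves the hostname when the configuration is loaded and does not re-resolve it afterwards. A commonly used workaround is to define a resolver and reference the hostname through a variable, which forces a lookup at request time. The resolver address below is a placeholder; point it at your own DNS server.

```nginx
resolver 10.0.0.2 valid=30s;   # placeholder DNS server; 'valid' overrides the record TTL

location ^~ /_example/ {
    # Because proxy_pass now contains a variable, NGINX resolves the name at
    # request time via the resolver above instead of once at startup/reload.
    set $upstream_host example.com;
    proxy_pass https://$upstream_host;
    proxy_set_header Host my-site;
}
```

NGINX Plus additionally supports the resolve parameter on upstream server entries, which periodically re-resolves the name and honours the DNS TTL.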
F5 High Availability - Public Cloud Guidance
This article will provide information about BIG-IP and NGINX high availability (HA) topics that should be considered when leveraging the public cloud. There are differences between on-prem and public cloud, such as cloud provider L2 networking. These differences lead to challenges in how you address HA, failover time, peer setup, scaling options, and application state.

Topics Covered:
- Discuss and Define HA
- Importance of Application Behavior and Traffic Sizing
- HA Capabilities of BIG-IP and NGINX
- Various HA Deployment Options (Active/Active, Active/Standby, auto scale)
- Example Customer Scenario

What is High Availability?
High availability can mean many things to different people. Depending on the application and traffic requirements, HA requires dual data paths, redundant storage, redundant power, and compute. It means the ability to survive a failure, maintenance windows should be seamless to the user, and the user experience should never suffer...ever!
Reference: https://en.wikipedia.org/wiki/High_availability

So what should HA provide?
- Synchronization of configuration data to peers (ex. config objects)
- Synchronization of application session state (ex. persistence records)
- Enable traffic to fail over to a peer
- Locally, allow clusters of devices to act and appear as one unit
- Globally, distribute traffic via DNS and routing

Importance of Application Behavior and Traffic Sizing
Let's look at a common use case... "gaming app, lots of persistent connections, client needs to hit same backend throughout entire game session"

Session State
The requirement of session state is common across applications using methods like HTTP cookies, F5 iRule persistence, JSessionID, IP affinity, or hash. The session type used by the application can help you decide what migration path is right for you. Is this an app more fitting for a lift-n-shift approach...Rehost? Can the app be redesigned to take advantage of all native IaaS and PaaS technologies...Refactor?
Reference: 6 R's of a Cloud Migration
- Application session state allows the user to have a consistent and reliable experience
- Auto scaling L7 proxies (BIG-IP or NGINX) keep track of session state
- BIG-IP can only mirror session state to the next device in the cluster
- NGINX can mirror state to all devices in the cluster (via zone sync)

Traffic Sizing
The cloud provider does a great job with things like scaling, but there are still cloud provider limits that affect sizing and machine instance types to keep in mind. BIG-IP and NGINX are considered network virtual appliances (NVA). They carry quota limits like other cloud objects.
- Google GCP VPC Resource Limits
- Azure VM Flow Limits
- AWS Instance Types
Unfortunately, not all limits are documented. Key metrics for L7 proxies are typically SSL stats, throughput, connection type, and connection count. Collecting these application and traffic metrics can help identify the correct instance type. We have a list of the F5 supported BIG-IP VE platforms on F5 CloudDocs.

F5 Products and HA Capabilities
BIG-IP HA Capabilities
BIG-IP supports the following HA cluster configurations:
- Active/Active - all devices processing traffic
- Active/Standby - one device processes traffic, others wait in standby
- Configuration sync to all devices in the cluster
- L3/L4 connection sharing to the next device in the cluster (ex. avoids re-login)
- L5-L7 state sharing to the next device in the cluster (ex. IP persistence, SSL persistence, iRule UIE persistence)
Reference: BIG-IP High Availability Docs

NGINX HA Capabilities
NGINX supports the following HA cluster configurations:
- Active/Active - all devices processing traffic
- Active/Standby - one device processes traffic, others wait in standby
- Configuration sync to all devices in the cluster
- Mirroring connections at L3/L4 not available
- Mirroring session state to ALL devices in the cluster using the Zone Synchronization Module (NGINX Plus R15 and later)
Reference: NGINX High Availability Docs
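As an illustration of the zone sync capability mentioned above, state sharing in an NGINX Plus cluster is enabled with the zone_sync module in the stream context. A minimal sketch is shown below; the listen port, resolver address, and peer hostname are placeholders for your own environment.

```nginx
stream {
    resolver 10.0.0.2 valid=20s;   # placeholder DNS server used to discover peers

    # Each cluster member runs a zone_sync listener and connects to its peers.
    server {
        listen 9000;
        zone_sync;
        # Placeholder DNS name that resolves to all cluster members.
        zone_sync_server nginx-cluster.example.internal:9000 resolve;
    }
}
```

Shared-memory zones marked for synchronization (for example, sticky learn session state) are then replicated to every instance in the cluster.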
HA Methods for BIG-IP
In the following sections, I will illustrate three common deployment configurations for BIG-IP in public cloud.
- HA for BIG-IP Design #1 - Active/Standby via API
- HA for BIG-IP Design #2 - A/A or A/S via LB
- HA for BIG-IP Design #3 - Regional Failover (multi region)

HA for BIG-IP Design #1 - Active/Standby via API (multi AZ)
This failover method uses API calls to communicate with the cloud provider and move objects (IP addresses, routes, etc.) during failover events. The F5 Cloud Failover Extension (CFE) for BIG-IP is used to declaratively configure the HA settings.
- Cloud provider load balancer is NOT required
- Failover time can be SLOW!
- Only one device actively used (other device sits idle)
- Failover uses API calls to move cloud objects, times vary (see CFE Performance and Sizing)
Key Findings:
- Google API failover times depend on number of forwarding rules
- Azure API slow to disassociate/associate IPs to NICs (remapping)
- Azure API fast when updating routes (UDR, user defined routes)
- AWS reliable with API regarding IP moves and routes
Recommendations:
- This design with multi AZ is preferred over single AZ
- Recommend when a "traditional" HA cluster is required or for Lift-n-Shift...Rehost
- For Azure (based on my testing): recommend using Azure UDR versus IP failover when possible; look at the Failover via LB example instead for Azure; if the API method is required, look at DNS solutions to provide further redundancy

HA for BIG-IP Design #2 - A/A or A/S via LB (multi AZ)
- Cloud LB health checks the BIG-IP for up/down status
- Faster failover times (depends on cloud LB health timers)
- Cloud LB allows A/A or A/S
Key difference:
- Increased network/compute redundancy
- Cloud load balancer required
Recommendations:
- Use "failover via LB" if you require faster failover times
- For Google (based on my testing): recommend against "via LB" for IPSEC traffic (Google LB not supported); if load balancing IPSEC, then use the "via API" or "via DNS" failover methods

HA for BIG-IP Design #3 - Regional Failover via DNS (multi AZ, multi region)
- BIG-IP VE active/active in multiple regions
- Traffic distributed to VEs by DNS/GSLB
- DNS/GSLB intelligent health checks for the VEs
Key difference:
- Cloud LB is not required
- DNS logic required by clients
- Orchestration required to manage configs across each BIG-IP
- BIG-IP standalone devices (no DSC cluster limitations)
Recommendations:
- Good for apps that handle DNS resolution well upon failover events
- Recommend when the cloud LB cannot handle a particular protocol
- Recommend when the customer is already using DNS to direct traffic
- Recommend for applications that have been refactored to handle session state outside of BIG-IP
- Recommend for customers with in-house skillset to orchestrate (Ansible, Terraform, etc.)

HA Methods for NGINX
In the following sections, I will illustrate two common deployment configurations for NGINX in public cloud.
- HA for NGINX Design #1 - Active/Standby via API
- HA for NGINX Design #2 - Auto Scale Active/Active via LB

HA for NGINX Design #1 - Active/Standby via API (multi AZ)
- NGINX Plus required
- Cloud provider load balancer is NOT required
- Only one device actively used (other device sits idle)
- Only available in AWS currently
Recommendations:
- Recommend when a "traditional" HA cluster is required or for Lift-n-Shift...Rehost
Reference: Active-Passive HA for NGINX Plus on AWS

HA for NGINX Design #2 - Auto Scale Active/Active via LB (multi AZ)
- NGINX Plus required
- Cloud LB health checks the NGINX instances
- Faster failover times
Key difference:
- Increased network/compute redundancy
- Cloud load balancer required
Recommendations:
- Recommended for apps fitting a migration type of Replatform or Refactor
Reference: Active-Active HA for NGINX Plus on AWS, Active-Active HA for NGINX Plus on Google

Pros & Cons: Public Cloud Scaling Options
Review this handy table to understand the high-level pros and cons of each deployment method.

Example Customer Scenario #1
As a means to make this topic a little more real, here is a common customer scenario that shows you the decisions that go into moving an application to the public cloud. Sometimes it's as easy as a lift-n-shift; other times you might need to do a little more work. In general, public cloud is not on-prem and things might need some tweaking. Hopefully this example will give you some pointers and guidance on your next app migration to the cloud.

Current Setup:
- Gaming applications
- F5 hardware BIG-IP VIPRIONs on-prem
- Two data centers for HA redundancy
- iRule heavy configuration (TLS encryption/decryption, payload inspections)
- Session Persistence = iRule Universal Persistence (UIE), and other methods
- Biggest app: 15K SSL TPS, 15 Gbps throughput, 2 million concurrent connections, 300K HTTP req/sec (L7 with TLS)

Requirements for Successful Cloud Migration:
- Support current traffic numbers
- Support future target traffic growth
- Must run in multiple geographic regions
- Maintain session state
- Must retain all iRules in use

Recommended Design for Cloud Phase #1:
- Migration Type: Hybrid model, on-prem + cloud, and some Rehost
- Platform: BIG-IP (retaining iRules means BIG-IP is required)
- Licensing: High Performance BIG-IP - unlocks additional CPU cores past 8 (up to 24) for extra traffic and SSL processing
- Instance type: check F5 supported BIG-IP VE platforms for accelerated networking (10Gb+)
- HA method: Active/Standby and multi-region with DNS
  - iRule Universal persistence only mirrors to the next device, so keep cluster size to 2
  - Scale horizontally via additional HA clusters and DNS
  - Clients pinned to a region via DNS (on-prem or public cloud)
  - Inside a region, the local proxy cluster shares state

This example comes up in customer conversations often. Based on customer requirements, in-house skillset, current operational model, and time frames, there is one option that is better than the rest. A second design phase lends itself to more of a Replatform or Refactor migration type. In that case, more options can be leveraged to take advantage of cloud-native features. For example, changing the application persistence type from iRule UIE to cookie would allow BIG-IP to avoid keeping track of state. Why? With cookies, the client keeps track of that session state. The client receives a cookie, passes the cookie to the L7 proxy on successive requests, the proxy checks the cookie value, and sends the request to the correct backend pool member. The requirement for the L7 proxy to share session state is now removed.
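To make that cookie-based approach concrete in NGINX terms, NGINX Plus can issue the session cookie itself with the sticky cookie directive, so no session state needs to be mirrored between proxy instances. A minimal sketch with placeholder upstream addresses is shown below.

```nginx
upstream game_backend {
    zone game_backend 64k;
    server 10.0.1.10:8080;   # placeholder backend addresses
    server 10.0.1.11:8080;

    # NGINX Plus hands the client a cookie identifying its backend; the client
    # presents it on later requests, so the proxy itself does not have to
    # track or mirror any session state.
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity
    location / {
        proxy_pass http://game_backend;
    }
}
```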
Example Customer Scenario #2
Here is another customer scenario. This time the application is a full suite of multimedia content. In contrast to the first scenario, this one will illustrate the benefits of rearchitecting various components, allowing greater flexibility when leveraging the cloud. You still must factor in-house skill set, project time frames, and other important business (and application) requirements when deciding on the best migration type.

Current Setup:
- Multimedia (Gaming, Movie, TV, Music) platform
- BIG-IP VIPRIONs using vCMP on-prem
- Two data centers for HA redundancy
- iRule heavy (Security, Traffic Manipulation, Performance)
- Biggest app: OAuth + Cassandra for token storage (entitlements)

Requirements for Successful Cloud Migration:
- Support current traffic numbers
- Elastic auto scale for seasonal growth (ex. holidays)
- VPC peering with partners (must also bypass Web Application Firewall)
- Must support current or similar traffic manipulation in the data plane
- Compatibility with existing tooling used by the business

Recommended Design for Cloud Phase #1:
- Migration Type: Repurchase, migrating BIG-IP to NGINX Plus
- Platform: NGINX, with iRules converted to JS or Lua
- Licensing: NGINX Plus
- Modules: GeoIP, Lua, JavaScript
- HA method: N+1 auto scaling via native LB with active health checks

This is a great example of a Repurchase, in which application characteristics can allow the various teams to explore alternative cloud migration approaches. This scenario describes a phase one migration converting BIG-IP devices to NGINX Plus devices. It assumes the BIG-IP configurations can be somewhat easily converted to NGINX Plus, and it also assumes there is available skillset and project time allocated to properly rearchitect the application where needed.

Summary
OK! Brains are expanding...hopefully? We learned about high availability and what that means for applications and user experience. We touched on the importance of application behavior and traffic sizing. Then we explored the various F5 products, how they handle HA, and HA designs. These recommendations are based on my own lab testing and interactions with customers. Every scenario will carry its own requirements, and all options should be carefully considered when leveraging the public cloud. Finally, we looked at customer scenarios, discussed requirements, and reviewed design proposals. Fun!

Resources
Read the following articles for more guidance specific to the various cloud providers.
- Advanced Topologies and More on Highly Available Services
- Lightboard Lessons - BIG-IP Deployments in Azure
- Google and BIG-IP
- Failing Faster in the Cloud
- BIG-IP VE on Public Cloud
- High-Availability Load Balancing with NGINX Plus on Google Cloud Platform
- Using AWS Quick Starts to Deploy NGINX Plus
- NGINX on Azure
Deploying NGINX Ingress Controller with OpenShift on AWS Managed Service: ROSA

Introduction
In March 2021, Amazon and Red Hat announced the General Availability of Red Hat OpenShift Service on AWS (ROSA). ROSA is a fully-managed OpenShift service, jointly managed and supported by both Red Hat and Amazon Web Services (AWS). OpenShift offers users several different deployment models. Customers that require a high degree of customization and have the skill sets to manage their environment can build and manage OpenShift Container Platform (OCP) on AWS. Those who want to alleviate the complexity of managing the environment and focus on their applications can consume OpenShift as a service, or Red Hat OpenShift Service on AWS (ROSA). The benefits of ROSA are two-fold. First, we can enjoy more simplified Kubernetes cluster creation using the familiar Red Hat OpenShift console, features, and tooling without the burden of manually scaling and managing the underlying infrastructure. Secondly, the managed service is made easier with joint billing, support, and out-of-the-box integration with AWS infrastructure and services. In this article, I am exploring how to deploy an environment with NGINX Ingress Controller integrated into ROSA.

Deploy Red Hat OpenShift Service on AWS (ROSA)
The ROSA service may be deployed directly from the AWS console. Red Hat has done a great job in providing the instructions on creating a ROSA cluster in the Installation Guide. The guide documents the AWS prerequisites, required AWS service quotas, and configuration of your AWS accounts. We run the following commands to ensure that the prerequisites are met before installing ROSA.

Verify that my AWS account has the necessary permissions:

```
❯ rosa verify permissions
I: Validating SCP policies...
I: AWS SCP policies ok
```

Verify that my AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster:

```
❯ rosa verify quota --region=us-west-2
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
```

Next, I ran the following command to prepare my AWS account for cluster deployment:

```
❯ rosa init
I: Logged in as 'ericji' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' created successfully!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.19
```

If we were to follow their instructions to create a ROSA cluster using the rosa CLI, after about 35 minutes our deployment would produce a Red Hat OpenShift cluster along with the needed AWS components.

```
❯ rosa create cluster --cluster-name=eric-rosa
I: Creating cluster 'eric-rosa'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'eric-rosa' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c eric-rosa'.
I: To watch your cluster installation logs, run 'rosa logs install -c eric-rosa --watch'.
Name: eric-rosa
…
```
During the deployment, we may enter the following command to follow the OpenShift installer logs and track the progress of our cluster:

```
> rosa logs install -c eric-rosa --watch
```

After the Red Hat OpenShift Service on AWS (ROSA) cluster is created, we must configure identity providers to determine how users log in to access the cluster.

What just happened?
Let's review what just happened. The above installation program automatically set up the following AWS resources for the ROSA environment:
- AWS VPC subnets per Availability Zone (AZ). For single-AZ implementations, two subnets were created (one public, one private). A multi-AZ implementation would make use of three Availability Zones, with a public and a private subnet in each AZ (a total of six subnets).
- OpenShift cluster nodes (EC2 instances). Three master nodes were created to cater for cluster quorum and to ensure proper fail-over and resilience of OpenShift, plus at least two infrastructure nodes catering for the built-in OpenShift container registry, the OpenShift router layer, and monitoring.
- Multi-AZ implementations use three master nodes and three infrastructure nodes spread across three AZs. Assuming that application workloads will also be running in all three AZs for resilience, this deploys three workers, which translates to a minimum of nine EC2 instances running within the customer account.
- A collection of AWS Elastic Load Balancers. Some of these load balancers provide end-user access to the application workloads running on OpenShift via the OpenShift router layer; others expose endpoints used for cluster administration and management by the SRE teams.

Source: https://aws.amazon.com/blogs/containers/red-hat-openshift-service-on-aws-architecture-and-networking/

Deploy NGINX Ingress Controller
The NGINX Ingress Operator is a supported and certified mechanism for deploying NGINX Ingress Controller in an OpenShift environment, with point-and-click installation and automatic upgrades. It works for both the NGINX Open Source-based and NGINX Plus-based editions of NGINX Ingress Controller. In this tutorial, I'll be deploying the NGINX Plus-based edition. Read "Why You Need an Enterprise-Grade Ingress Controller on OpenShift" for use cases that merit the use of this edition. If you're not sure how these editions are different, read "Wait, Which NGINX Ingress Controller for Kubernetes Am I Using?"

I install the NGINX Ingress Operator from the OpenShift console. There are numerous options you can set when configuring the NGINX Ingress Controller, as listed in our GitHub repo. Here is a manifest example:

```yaml
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: openshift-operators
spec:
  ingressClass: nginx
  serviceType: LoadBalancer
  nginxPlus: true
  type: deployment
  image:
    pullPolicy: Always
    repository: ericzji/nginx-plus-ingress
    tag: 1.12.0
```

To verify the deployment, run the following commands in a terminal. As shown in the output, the manifest I used in the previous step deployed two replicas of the NGINX Ingress Controller and exposed them with a LoadBalancer service.
```
❯ oc get pods -n openshift-operators
NAME                                                          READY   STATUS    RESTARTS   AGE
my-nginx-ingress-controller-b556f8bb-bsn4k                    1/1     Running   0          14m
nginx-ingress-operator-controller-manager-7844f95d5f-pfczr    2/2     Running   0          3d5h

❯ oc get svc -n openshift-operators
NAME                                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
my-nginx-ingress-controller                                  LoadBalancer   172.30.171.237   a2b3679e50d36446d99d105d5a76d17f-1690020410.us-west-2.elb.amazonaws.com   80:30860/TCP,443:30143/TCP   25h
nginx-ingress-operator-controller-manager-metrics-service    ClusterIP      172.30.50.231    <none>
```

With NGINX Ingress Controller deployed, we'll have an environment that looks like this:

Post-deployment verification
After the ROSA cluster was configured, I deployed an app (Hipster) in OpenShift that is exposed by NGINX Ingress Controller (by creating an Ingress resource). To use a custom hostname, we manually change the DNS record on the Internet to point to the IP address of the AWS Elastic Load Balancer.

```
❯ dig +short a2dc51124360841468c684127c4a8c13-808882247.us-west-2.elb.amazonaws.com
34.209.171.103
52.39.87.162
35.164.231.54
```

I made this DNS change (optionally, use a local host record), and we will see my demo app available on the Internet, like this:

Deleting your environment
To avoid unexpected charges, don't forget to delete your environment if you no longer need it.

```
❯ rosa delete cluster -c eric-rosa --watch
? Are you sure you want to delete cluster eric-rosa? Yes
I: Cluster 'eric-rosa' will start uninstalling now
W: Logs for cluster 'eric-rosa' are not available
…
```

Conclusion
To summarize, ROSA allows infrastructure and security teams to accelerate the deployment of the Red Hat OpenShift Service on AWS. Integration with NGINX Ingress Controller provides comprehensive L4-L7 security services for the application workloads running on Red Hat OpenShift Service on AWS (ROSA). As a developer, having your clusters as well as security services maintained by this service gives you the freedom to focus on deploying applications. You have two options for getting started with NGINX Ingress Controller:
- Download the NGINX Open Source-based version of NGINX Ingress Controller from our GitHub repo.
- If you prefer to bring your own license to AWS, get a free trial directly from F5 NGINX.
How to deploy NGINX App Protect WAF on the NGINX Ingress Controller using argoCD

Overview
The NGINX App Protect WAF can be deployed as an add-on within the NGINX Ingress Controller, making the two function in tandem as a WAF armed with a Kubernetes Ingress Controller. This repo leverages argoCD as a GitOps continuous delivery tool to showcase an end-to-end example of how to use the combo to front a simple Kubernetes application. As this repo is public facing, I am also using a tool named Sealed Secrets to encrypt all the secret manifests. However, this is not a requirement for deploying either component. I will go through argoCD and Sealed Secrets first as supporting pieces and then go into NGINX App Protect WAF itself. Please note that this tutorial applies to the NGINX Plus-based version of NGINX Ingress Controller. If you aren't sure which version you're using, read the blog "A Guide to Choosing an Ingress Controller, Part 4: NGINX Ingress Controller Options".

argoCD
If you do not know argoCD, I strongly recommend that you check it out. In essence, with argoCD, you create an argoCD application that references a location (e.g., a Git repo or folder) containing all your manifests. argoCD applies all the manifests in that location and constantly monitors and syncs changes from that location. For example, if you make a change to a manifest or add a new one, after you do a Git commit for that change, argoCD picks it up and immediately applies the change within your Kubernetes cluster. The following screenshot taken from argoCD shows that I have an app named 'cafe'. The 'cafe' app points to a Git repo where all manifests are stored. The status is 'Healthy' and 'Synced'. It means that argoCD has successfully applied all the manifests that it knows about, and these manifests are in sync with the Git repo. The cafe argoCD application manifest is shown here. For this repo, all argoCD application manifests are stored in the bootstrap folder. To add the Cafe argoCD application, run the following:

```
kubectl apply -f cafe.yaml
```

Sealed Secrets
Kubernetes manifests that contain secrets such as passwords and private keys cannot simply be pushed to a public repo. With Sealed Secrets, you can solve this problem by sealing those manifest files offline via the binary and then pushing them to the public repo. When you apply the sealed secret manifests from the public repo, the sealed-secrets component that sits inside Kubernetes decrypts the sealed secrets and then applies them on the fly. To do this, you must upload the encryption key into Kubernetes first. Please note that I am only using sealed secrets so I can push my secret manifests to a public repo, for the purpose of this demo. It is not a requirement to install NGINX App Protect WAF. If you have a private repo, you can simply push all your secret manifests there and argoCD will then apply them as is. The following commands set up Sealed Secrets with my specified certificate/key in its own namespace.
```
export PRIVATEKEY="dc7.h.l.key"
export PUBLICKEY="dc7.h.l.cer"
export NAMESPACE="sealed-secrets"
export SECRETNAME="dc7.h.l"
kubectl -n "$NAMESPACE" create secret tls "$SECRETNAME" --cert="$PUBLICKEY" --key="$PRIVATEKEY"
kubectl -n "$NAMESPACE" label secret "$SECRETNAME" sealedsecrets.bitnami.com/sealed-secrets-key=active
```

To create a sealed TLS secret:

```
kubectl create secret tls wildcard.abbagmbh.de --cert=wildcard.abbagmbh.de.cer --key=wildcard.abbagmbh.de.key -n nginx-ingress --dry-run=client -o yaml | kubeseal \
  --controller-namespace sealed-secrets \
  --format yaml \
  > sealed-wildcard.abbagmbh.de.yaml
```

NGINX App Protect WAF
The NGINX App Protect WAF for Kubernetes is an NGINX Ingress Controller software security module add-on with L7 WAF capabilities. It can be embedded within the NGINX Ingress Controller. The installation process for NGINX App Protect WAF is identical to NGINX Ingress Controller, with the following additional steps:
- Apply the NGINX App Protect WAF specific CRDs to Kubernetes
- Apply the NGINX App Protect WAF log configuration (NGINX App Protect WAF logging is different from NGINX Plus)
- Apply an NGINX App Protect WAF protection policy

The official installation docs using manifests have great info around the entire process. This repo simply collated all the necessary manifests required for NGINX App Protect WAF in a directory that is then fed to argoCD.

Image Pull Secret
With the NGINX App Protect WAF Docker image, you can either pull it from the official NGINX private registry or from your own repo. In the former case, you need to create a secret that is generated from the JWT file (part of the NGINX license files). See below for detail. To create a sealed docker-registry secret:

```
username=`cat nginx-repo.jwt`
kubectl create secret docker-registry private-registry.nginx.com \
  --docker-server=private-registry.nginx.com \
  --docker-username=$username \
  --docker-password=none \
  --namespace nginx-ingress \
  --dry-run=client -o yaml | kubeseal \
  --controller-namespace sealed-secrets \
  --format yaml \
  > sealed-docker-registry-secret.yaml
```

Notice the controller namespace above; it needs to match the namespace where you installed Sealed Secrets.

NGINX App Protect WAF CRDs
A number of NGINX App Protect WAF specific CRDs (Custom Resource Definitions) are required for installation. They are included in the crds directory and should be picked up and applied by argoCD automatically.

NGINX App Protect WAF Configuration
The NGINX App Protect WAF configuration includes the following in this demo:
- User defined signature
- NGINX App Protect WAF policy
- NGINX App Protect WAF log configuration

The user defined signature shows an example of a custom signature that looks for the keyword "apple" in request traffic. The NGINX App Protect WAF policy defines violation rules; in this case it blocks traffic caught by the custom signature defined above. This sample policy also enables Data Guard and all other protection features defined in a base template. The NGINX App Protect WAF log configuration defines what gets logged and what it looks like, e.g., log all traffic versus log only illegal traffic. This repo also includes a manifest for a syslog deployment used as a log destination by NGINX App Protect WAF.

Ingress
To use NGINX App Protect WAF, you must create an Ingress resource. Within the Ingress manifest, you use annotations to apply the NGINX App Protect WAF specific settings discussed above.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    appprotect.f5.com/app-protect-policy: "nginx-ingress/dataguard-alarm"
    appprotect.f5.com/app-protect-enable: "True"
    appprotect.f5.com/app-protect-security-log-enable: "True"
    appprotect.f5.com/app-protect-security-log: "nginx-ingress/logconf"
    appprotect.f5.com/app-protect-security-log-destination: "syslog:server=syslog-svc.nginx-ingress:514"
```

The traffic routing logic is done via the following:

```yaml
spec:
  ingressClassName: nginx # use only with k8s version >= 1.18.0
  tls:
  - hosts:
    - cafe.abbagmbh.de
    secretName: wildcard.abbagmbh.de
  rules:
  - host: cafe.abbagmbh.de
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 8080
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 8080
```

Testing
Once you have added both applications (in the bootstrap folder) into argoCD, you should see the following in the argoCD UI. We can do a test to confirm whether NGINX App Protect WAF routes traffic based upon the HTTP URI, as well as whether WAF protection is applied. First get the NGINX Ingress Controller IP.

```
% kubectl get ingress
NAME           CLASS   HOSTS              ADDRESS   PORTS     AGE
cafe-ingress   nginx   cafe.abbagmbh.de   x.x.x.x   80, 443   1d
```

Now send traffic to both the '/tea' and '/coffee' URI paths.

```
% curl --resolve cafe.abbagmbh.de:443:x.x.x.x https://cafe.abbagmbh.de/tea
Server address: 10.244.0.22:8080
Server name: tea-6fb46d899f-spvld
Date: 03/May/2022:06:02:24 +0000
URI: /tea
Request ID: 093ed857d28e160b7417bb4746bec774

% curl --resolve cafe.abbagmbh.de:443:x.x.x.x https://cafe.abbagmbh.de/coffee
Server address: 10.244.0.21:8080
Server name: coffee-6f4b79b975-7fwwk
Date: 03/May/2022:06:03:51 +0000
URI: /coffee
Request ID: 0744417d1e2d59329401ed2189067e40
```

As you can see from above, traffic destined to '/tea' is routed to the tea pod (tea-6fb46d899f-spvld) and traffic destined to '/coffee' is routed to the coffee pod (coffee-6f4b79b975-7fwwk). Let us trigger a violation based on the user defined signature:

```
% curl --resolve cafe.abbagmbh.de:443:x.x.x.x https://cafe.abbagmbh.de/apple
<html><head><title>Request Rejected</title></head><body>The requested URL was rejected. Please consult with your administrator.<br><br>Your support ID is: 10807744421744880061<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>
```

Finally, traffic violating the XSS rule:

```
% curl --resolve cafe.abbagmbh.de:443:x.x.x.x 'https://cafe.abbagmbh.de/tea<script>'
<html><head><title>Request Rejected</title></head><body>The requested URL was rejected. Please consult with your administrator.<br><br>Your support ID is: 10807744421744881081<br><br><a href='javascript:history.back();'>[Go Back]</a></body></html>
```

Confirming that logs are received on the syslog pod:
```
% tail -f /var/log/message
May 3 06:30:32 nginx-ingress-6444787b8-l6fzr ASM:attack_type="Non-browser Client Abuse of Functionality Cross Site Scripting (XSS)" blocking_exception_reason="N/A" date_time="2022-05-03 06:30:32" dest_port="443" ip_client="x.x.x.x" is_truncated="false" method="GET" policy_name="dataguard-alarm" protocol="HTTPS" request_status="blocked" response_code="0" severity="Critical" sig_cves=" " sig_ids="200000099 200000093" sig_names="XSS script tag (URI) XSS script tag end (URI)" sig_set_names="{Cross Site Scripting Signatures;High Accuracy Signatures} {Cross Site Scripting Signatures;High Accuracy Signatures}" src_port="1478" sub_violations="N/A" support_id="10807744421744881591" threat_campaign_names="N/A" unit_hostname="nginx-ingress-6444787b8-l6fzr" uri="/tea<script>" violation_rating="5" vs_name="24-cafe.abbagmbh.de:8-/tea" x_forwarded_for_header_value="N/A" outcome="REJECTED" outcome_reason="SECURITY_WAF_VIOLATION" violations="Illegal meta character in URL Attack signature detected Violation Rating Threat detected Bot Client Detected" json_log="{violations:[{enforcementState:{isBlocked:false} violation:{name:VIOL_URL_METACHAR}} {enforcementState:{isBlocked:true} violation:{name:VIOL_RATING_THREAT}} {enforcementState:{isBlocked:true} violation:{name:VIOL_BOT_CLIENT}} {enforcementState:{isBlocked:true} signature:{name:XSS script tag (URI) signatureId:200000099} violation:{name:VIOL_ATTACK_SIGNATURE}} {enforcementState:{isBlocked:true} signature:{name:XSS script tag end (URI) signatureId:200000093} violation:{name:VIOL_ATTACK_SIGNATURE}}]}"
```

Conclusion
The NGINX App Protect WAF deploys as a software security module add-on to the NGINX Ingress Controller and provides comprehensive application security for your Kubernetes environment. I hope that you find the deployment simple and straightforward.
Application Programming Interface (API) Authentication types simplified
APIs are a critical part of most modern applications. In this article we will walk through the different authentication types, to help us in future articles covering NGINX API Connectivity Manager authentication and NGINX Single Sign-on.
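As a small taste of what such authentication can look like at the gateway layer, here is a minimal NGINX sketch of API key validation using a map block. The key value, client name, and upstream address are made-up placeholders for illustration only.

```nginx
# Map incoming API keys (sent in an "apikey" request header) to client names.
# The key below is a made-up placeholder, not a real credential.
map $http_apikey $api_client_name {
    default                      "";
    "7B5zIqmRGXmrJTFmKa99vcit"   "client_one";
}

upstream backend_api {
    server 10.0.2.10:8080;   # placeholder backend
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location /api/ {
        # Reject requests whose API key did not match any known client.
        if ($api_client_name = "") {
            return 401;
        }
        proxy_pass http://backend_api;
    }
}
```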
Managing NGINX App Protect WAF for Beginners
F5 NGINX App Protect WAF brings much of the tried-and-true capabilities of the F5 BIG-IP Advanced WAF to DevOps environments, which demand performance without compromising on delivery velocity. However, as security capabilities like a WAF are traditionally handled by dedicated security teams, DevOps and application teams may find themselves out of their comfort zone when it comes to securing their applications. Having deployed NGINX App Protect WAF with <insert your favourite CI/CD tooling> in your environment, you may find yourself asking "what next?"

Figure 1: NGINX App Protect WAF and DoS security solution deployment options

NGINX App Protect is a modern app security solution that works seamlessly in DevOps environments and is available as a robust WAF or as a DoS product for app-layer defense. Each can be deployed on NGINX Plus in the image shown above. For the purpose of this article, we will be focusing on the WAF product. In this post, I will provide some basic steps to help you get started with operating your NGINX App Protect WAF.

Some mitigation is better than none
Application security is a complex field; just take a look at the security features supported by NGINX App Protect WAF. It is easy to be overwhelmed by the sheer amount of security jargon, let alone identifying the features relevant to your application, leading to a case of analysis paralysis. Simplifying a lot of this jargon into intent-based policies is a big focus at F5. Instead of talking about signatures that detect malicious traffic based on certain patterns, it could be as simple as saying "I don't want untrusted or malicious bots accessing my application", or "Sign me up for F5 Threat Campaigns to deal with what's in the wild now". Some mitigation is better than none, so start with simple policies, shown here, to deal with a large amount of unwanted traffic from the Internet. With time, the WAF policy can be tuned based on learnings from the information provided by NGINX App Protect WAF. If there are concerns about the WAF unintentionally blocking legitimate traffic and creating a negative experience for your users, configure the policy to be in transparent mode instead of blocking.

Making sense of the information
With a good amount of security coverage now enabled for your application, let us shift our focus to the information that NGINX App Protect WAF provides to you. NGINX App Protect WAF conveys security events in the form of logs rich with information, which can be exported via syslog to external logging and monitoring platforms such as an ELK stack, Splunk, or Grafana. These platforms provide powerful search capabilities, which enable easier consumption of security events and simplify some of the day 2 activities - a couple of which are covered in the next section. If anything, they show how much unwanted traffic is trying to access your applications but is blocked by NGINX App Protect WAF! For those interested in having high-level information immediately, you can also use the information to build dashboards such as the ones here:
https://github.com/f5devcentral/f5-waf-elk-dashboards
https://github.com/skenderidis/nap-dashboard
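For reference, on a standalone NGINX Plus instance the wiring of App Protect enforcement and security logging is just a handful of directives. The rough sketch below is illustrative only: the policy and logging-profile paths shown are the defaults commonly shipped with the product, and the syslog destination and backend address are placeholders you would replace with your own.

```nginx
# Requires the App Protect module loaded in the main context, e.g.:
# load_module modules/ngx_http_app_protect_module.so;

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    # Turn on App Protect WAF enforcement with a policy file
    # (placeholder path; point it at your own policy).
    app_protect_enable on;
    app_protect_policy_file /etc/app_protect/conf/NginxDefaultPolicy.json;

    # Ship security events to an external log collector over syslog
    # (placeholder logging profile and destination).
    app_protect_security_log_enable on;
    app_protect_security_log "/opt/app_protect/share/defaults/log_illegal.json" syslog:server=logstash.example.internal:5144;

    location / {
        proxy_pass http://127.0.0.1:8080;   # placeholder application backend
    }
}
```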
Day 2 and onwards
With NGINX App Protect WAF up and running, much of the operational activity will involve keeping the NGINX App Protect WAF attack signatures updated, which can be easily automated. Tuning the WAF policy is also a major part of managing NGINX App Protect WAF. A common trigger for WAF tuning is the discovery of a new exploit affecting your applications. In such cases, look up "NGINX App Protect" and the name of the exploit online, and you should find mitigation steps provided by F5. One example is this article on how to Mitigate the Apache Log4j2 vulnerability with NGINX Application Security products. Another trigger would be when you start seeing support tickets from your users containing messages like the one below, indicating that they have been blocked by NGINX App Protect WAF.

Image 1: A support ticket message from NGINX App Protect WAF that provides a Support ID which can be used to obtain further insights.

For this, look up the support ID in the NGINX App Protect WAF security logs (this is where the use of ELK, Splunk, and Grafana comes in handy), and review the corresponding log entry, which contains information on the user request and the reason the request was blocked.

Image 2: A screenshot of log entries displayed from a Kibana dashboard.

If a request is deemed incorrectly blocked by NGINX App Protect WAF, it is known as a false positive, which can often happen when a WAF is first deployed or when changes are recently made to the application, introducing new behaviours previously not considered when defining the WAF policy. The WAF policy can then be updated to disregard the corresponding signatures or violations, allowing your users to access your application once again. As you get more comfortable with NGINX App Protect WAF, there are other areas of improvement that can be explored, such as implementing WAF policy as code for easier CI/CD pipeline integration, or analyzing the information provided by NGINX App Protect WAF to understand the attack surfaces on your applications, enabling you to focus your security efforts where it counts.

Closing
Hopefully, the information above gives you a better perspective of what is involved in managing NGINX App Protect WAF. In closing, the goal of NGINX App Protect WAF is to secure your application with minimal effort and have you focus your efforts on delivering business value through your application instead.
F5 NGINXaaS for Azure: Multi-Region Architecture
The F5 NGINXaaS for Azure offering recently announced general availability. Trust me...I've been using it and having fun! In this article, I will show you an example hub and spoke architecture using GitHub Actions and Azure Functions to automate NGINX configurations. As a bonus, I have code on GitHub that you can use to deploy this example.

Topics Covered:

NGINXaaS for Azure Architecture Explained
The NGINXaaS for Azure architecture consists of an F5 subscription as well as a customer subscription.
- F5 subscription - hidden from user, NGINX Plus instances, control plane, data plane
- Customer subscription - eNICs from VNet Injection, customer network stack, customer workloads

F5 Subscription
The NGINXaaS offering creates NGINX Plus instances and other related components, like the NGINX control plane and data plane resources, in the F5 subscription. These items are not visible to the end user, and therefore the operational tasks of upgrades and scaling are managed by the NGINXaaS offering instead of the user. Each NGINX deployment, like other Azure services, is regional in nature. If you need to deploy NGINX closer to the client, this will require multiple NGINX deployments (ex. westus2, eastus2). Each NGINX deployment will have a unique listener address. You can then use DNS to send clients to an NGINX deployment in the nearest region. Here is an example diagram.

Customer Subscription
The customer subscription has items like network stacks, Key Vaults, monitoring, application workloads, and more. The NGINX deployment automatically creates ethernet NICs (eNICs) in the customer subscription using VNet Injection and subnet delegation. The eNICs are deployed inside their own Azure Resource Group. They receive IP addressing from the customer VNet and are indeed visible to the user. However, there is no management needed for the eNICs because they are part of the NGINX deployment.
Note: In my testing during public preview, I noticed that Azure lets you manually remove subnet delegation for the NGINX service. Warning...do NOT do this. It will break traffic flow.

Hub and Spoke Architecture
You can easily make a hub and spoke design with NGINX in the mix using VNet peering. This is a great use case when you are required to use a shared NGINX deployment across different VNets or environments, or when scaling workloads across multiple regions. Recall from earlier that an NGINX deployment automatically creates eNICs in the customer subscription. Therefore, you can control the entry point into the customer environment and the traffic flows. For example, configuring NGINX to use a customer shared VNet with peering gives you a hub and spoke design such as the picture below. This results in the NGINX eNICs being deployed into a customer shared VNet (hub), while the customer places workloads into their own VNets (spokes).
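To make the hub and spoke idea a bit more tangible, an NGINXaaS deployment in the hub is driven by ordinary NGINX configuration. A minimal, hypothetical sketch fronting workloads in two spoke VNets might look like the following; all addresses are placeholders.

```nginx
upstream spoke_workloads {
    zone spoke_workloads 64k;
    server 10.10.1.10:8080;   # placeholder workload in spoke VNet A
    server 10.20.1.10:8080;   # placeholder workload in spoke VNet B
}

server {
    listen 80;
    location / {
        # Traffic enters via the NGINXaaS eNICs in the hub VNet and is
        # proxied across the VNet peering to the spoke workloads.
        proxy_pass http://spoke_workloads;
    }
}
```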
Demo Code
If this is the first time deploying NGINXaaS for Azure in your subscription, then you will need to subscribe to it in the marketplace.
- Search for "F5 NGINXaaS for Azure" in the marketplace or follow this link
- Select F5 NGINXaaS for Azure, choose "Public Preview", and subscribe

Time to play with code! Click the link below and review the README to deploy the demo example. There are prerequisites to follow. For example, you need to have a GitHub repository that stores the NGINX configuration files. You also need to have an Azure Key Vault and a secret containing your GitHub access token. These are explained in the README.

GitHub repo - F5 NGINXaaS for Azure Deployment with Demo Application in Multiple Regions

After the deployment is done, you have a few options on how to handle NGINX configurations. I will share examples in future articles, but for now go ahead and explore on your own. Refer to the NGINXaaS for Azure documentation "NGINX Configuration" to get started.

Summary
This article gives an example architecture for deploying the NGINXaaS for Azure offering. I shared details on the different NGINX components, and I also shared demo code to help you explore the solution on your own! Contact us with any questions or requirements. We would love to hear from you!

Resources
- DevCentral Series - F5 NGINXaaS for Azure
- F5 NGINXaaS for Azure Docs
- Blog: Introducing F5 NGINXaaS for Azure
Mastering API Architectures Ebook from NGINX
Learn how to design, operate, and evolve your API architecture through a combination of the authors' real-life experience and an accompanying case study that mimics an event management conference system that allows attendees to view and book presentation sessions. Chapter 10 has a high-level overview of that case study. Download for free.

Topics you will learn in this ebook:
- The fundamentals of REST APIs and how to best build, version, and test APIs
- The architectural patterns involved in building an API platform
- The differences in managing API traffic at ingress and within service-to-service communication, and how to apply patterns and technologies such as API gateways and service meshes
- Threat modeling and key security considerations for APIs, such as authentication, authorization, and encryption
- How to evolve existing systems toward APIs and different target deployments, such as the cloud

After reading you will be able to:
- Design, build, and test API-based systems
- Help to implement and drive an organization's API program from an architectural perspective
- Deploy, release, and configure key components of an API platform

What this book is not:
This book is not solely focused on cloud technologies. Many of you have hybrid architectures or entire systems hosted in data centers. This book took care to cover the design and operational factors of API architectures that support both approaches. The book is not tied to a specific language.

Developers who should read this book:
- Have been coding for several years and have a good understanding of common software development challenges, patterns, and best practices.
- Increasingly realize the software industry is moving toward building service-oriented architecture (SOA) and adopting cloud services, making building and operating APIs a core skill (Chapter 8).
- Want to learn more about designing effective APIs and testing them (Chapter 2).
- Want to expand their knowledge of various implementation choices (e.g., synchronous versus asynchronous communication) and technologies.
- Want to be able to ask the right questions and evaluate which approach is best for a given context.

What you will need:
This book doesn't favor one style of architecture or a specific language. The focus is on the approach. Code examples are available in the accompanying GitHub repository. Download for free.