NGINX Ingress Controller
Certified Kubernetes Administrator (CKA) for F5 guys and gals
Summary
The Certified Kubernetes Administrator (CKA) program is an industry certification that lets you quickly establish your skills and credibility in today's job market. Kubernetes is exploding in popularity, and with the majority of new development happening in microservices-based technologies, it's important for existing network and security teams to learn it in order to stay relevant. We recently chatted about this on the DevCentral live show.

What is CKA?
The CKA is designed by the Cloud Native Computing Foundation (CNCF), which is part of the non-profit Linux Foundation. As a result, the learning required for this certification is not skewed toward any third-party cloud, hardware, or software platform. This vendor-agnostic approach means you can easily access the technology you need to learn, build your own cluster, and pass the exam. It also means the exam, which you can take from home, does not require the kind of specialized knowledge you normally pick up in a corporate environment. Personally, I don't manage a production k8s environment, and I passed the exam - no vendor-specific knowledge required.

Why should I care about CKA?
- Are you feeling "left behind" or "legacy" as the industry converts to Kubernetes?
- Are new terms and concepts used by developers creating silos or gaps between traditional network or security teams and developers?
- Are you looking to integrate your corporate security into Kubernetes environments?
- Do you want to stay relevant in the job market?

If you answered yes to any of the above, the CKA is a great way to catch up and thrive in Kubernetes conversations at your workplace.

Sure, but how is this relevant to an F5 guy or gal?
Personally, I'm often helping devs expose their k8s apps with a firewall policy or SSL termination. NGINX Kubernetes Ingress Controller (KIC) is extremely popular with developers for managing ingress within their Kubernetes cluster. F5 Container Ingress Services (CIS) is a popular solution for enterprises integrating k8s or OpenShift with their existing security measures. Both of these are areas where your networking and security skills are needed by your developers! As a network guy, the CKA rounded out my knowledge of Kubernetes - from networking to storage to security and troubleshooting. The figure below shows a common solution that I talk customers through when they want to use KIC for managing traffic inside their cluster, but CIS to have inbound traffic traverse an enterprise firewall and integrate with Kubernetes.

How do I study for the CKA?
Personally, I took a course at A Cloud Guru, which was all the preparation I needed. Others I know took a course from Udemy and spoke highly of it, and others just read all they could from free community sources. Recently the exam fee (currently $375 USD) changed to include two practice exams from Killer.sh. This wasn't available when I signed up for the exam, but my colleagues spoke extremely highly of the preparation they received from these practice exams. I recommend finding a low-cost course to study if you can, and strongly recommend you build your own cluster from the ground up as preparation for your exam. Building your own cluster (deploying Linux hosts, installing a runtime like Docker, and then installing k8s control plane servers and nodes) is a fantastic way to appreciate the architecture and purpose of Kubernetes (a minimal bootstrap sketch follows at the end of this article). Finally, take the exam (register here).
I recommend paying for and scheduling your exam before you start studying - that way you'll have a deadline to motivate you!

I'm a CKA, now what?
Share your achievement on LinkedIn! Share your knowledge with colleagues. You'll be in high demand from employers, but more importantly, you'll be valuable in your workplace when you can help developers with things like networking, security, and integrations with your existing platforms. And then shoot me a note to let me know you've done it!
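For those tackling the build-your-own-cluster exercise mentioned above, here is a minimal sketch of the bootstrap flow using kubeadm. It assumes a prepared Linux host with a container runtime already installed; the CIDR, IP, token, and hash values are placeholders you would replace with your own.

```sh
# Initialize the control plane on the first host
# (the pod CIDR shown here suits the Flannel CNI; adjust for your network add-on)
kubeadm init --pod-network-cidr=10.244.0.0/16

# Then, on each worker node, join the cluster using the token and CA hash
# printed by 'kubeadm init' (values below are placeholders)
kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```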
Announcing F5 NGINX Gateway Fabric 1.3.0 with Tracing, GRPCRoute, and Client Settings

The release of NGINX Gateway Fabric version 1.3.0 introduces plenty of highly requested features and improvements. GRPCRoutes are now supported to manage gRPC traffic, similar to the handling of HTTPRoute. The update includes new custom policies, like ClientSettingsPolicy for client request configurations and ObservabilityPolicy for enabling application tracing with OpenTelemetry support. GRPCRoute allows for efficient routing, header modifications, traffic weighting, and error conversion from HTTP to gRPC. We will explain how to set up NGINX Gateway Fabric to manage gRPC traffic using a Gateway and a GRPCRoute, providing a detailed example of the setup, and also outline how to enable tracing through the NginxProxy resource and ObservabilityPolicy, emphasizing a selective approach to tracing to avoid data overload. Additionally, the ClientSettingsPolicy allows for the customization of NGINX directives at the Gateway or Route level, giving users control over certain NGINX behaviors, with the possibility of overriding Gateway defaults at the Route level. Looking ahead, the NGINX Gateway Fabric team plans to work on TLS Passthrough, IPv6, and improvements to the testing suite, while preparing for larger updates like NGINX directive customization and separation of the data and control planes. Check the end of the article to see how to get involved in the development process through GitHub and participate in bi-weekly community meetings. Further resources and links are also provided within.
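As a taste of the feature, here is a minimal sketch of a Gateway API GRPCRoute of the kind NGINX Gateway Fabric consumes. The hostname, gRPC service/method, and backend names are hypothetical, and depending on the Gateway API version bundled with your release, GRPCRoute may be served as v1alpha2 or v1 (the schema is the same).

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: helloworld-route
spec:
  parentRefs:
  - name: gateway                     # the Gateway that exposes this route
  hostnames:
  - "grpc.example.com"
  rules:
  - matches:
    - method:
        service: helloworld.Greeter   # gRPC service to match
        method: SayHello              # gRPC method to match
    backendRefs:
    - name: grpc-backend              # hypothetical backend Service
      port: 50051
```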
OWASP Tactical Access Defense Series: Broken Object Level Authorization and BIG-IP APM

Addressing Broken Object Level Authorization (BOLA) vulnerabilities requires a multifaceted approach, combining robust coding practices, secure development methodologies, and powerful tools. Among these tools, F5 BIG-IP Access Policy Manager (APM) stands out as a crucial component in the arsenal of security measures. This article, the second in a series dedicated to fortifying application security, delves into the pivotal role BIG-IP APM plays in identifying, mitigating, and ultimately preventing OWASP top 10 API vulnerabilities, providing developers and security professionals with a comprehensive guide to bolstering application security in the face of evolving cyber threats.

Broken Object Level Authorization
This is one of the most common and severe vulnerabilities within APIs and is related to Insecure Direct Object References (IDOR). Starting with: what's Object Level Authorization? It is an access control mechanism that validates which user has access to a specific endpoint and what actions may be performed. BOLA and IDOR refer to situations where endpoints fail to enforce specific authorization rules, or where a user is able to access unauthorized endpoints and perform unauthorized actions. The weakness that can lead to this vulnerability is a server component that fails to track client state and instead relies on parameters that can be tweaked from the client side, for example cookies or object IDs.

BOLA Example
Let's assume this backend directory:

```
/uploads/
  user1/
    file1.txt
    file2.txt
  user2/
    file3.txt
    file4.txt
```

The expected usage for user1 is as follows: https://example.com/viewfile?file=file1.txt - the user can access file1. If the server is vulnerable to BOLA, user2 can access the server and then navigate to file1 as follows: https://example.com/viewfile?file=user1/file1.txt

What could help us in this situation? Yes, we need granular endpoint authorization with proper client state tracking. That's where our lovely friend BIG-IP APM comes into the picture. Let's see how BIG-IP APM can help us.

BIG-IP APM and BOLA protection
BIG-IP APM provides API protection through its Per-Request policy, where it applies granular access protection to each API endpoint.

How BIG-IP APM enhances defenses
We start by creating our Per-Request policy. This policy works differently than the per-session policy: the flow is evaluated on a per-request basis, making sure to consider variations throughout the session lifetime. Below are some of the key benefits:
- A wide range of authentication, SSO, and MFA mechanisms to properly identify the initiating machine or user.
- The ability to integrate with third parties to provide additional enforcement decisions based on the organization's policy.
- The ability to apply endpoint checks on the client side before session initiation.
- The ability, which applies to BIG-IP in general, to apply custom traffic control on both sides of the traffic, client and server.
- The ability to use the BIG-IP API protection profile.

Protection profiles are an easy way to deploy both APM (Per-Request policy) and Advanced Web Application Firewall (AWAF). As a prerequisite, you need APM and AWAF licensed and provisioned. Use an OpenAPI Spec 2.0 file as the input to the API protection. Apply different authentication methods, whether Basic or OAuth (directly from the template), or, once the API protection profile is created, customize policy elements to our needs.
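To make the earlier BOLA example concrete, a tester might probe the endpoint from a second user's session before any enforcement is in place. This is a hypothetical sketch against the example.com URL above, with a placeholder session cookie; with APM's per-request enforcement in front, the same probe should be denied.

```sh
# Authenticated as user2, request a file that belongs to user1.
# If the response is 200 OK rather than 403, the endpoint is BOLA-vulnerable.
curl -b "session=<user2-session-cookie>" \
    "https://example.com/viewfile?file=user1/file1.txt"
```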
Using a hybrid approach with F5 Distributed Cloud (F5 XC) + BIG-IP APM
We discussed this approach in detail in F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller). Stay engaged to explore the comprehensive capabilities of BIG-IP APM and how it plays a pivotal role in fortifying your security posture against these formidable threats.

Related Content
- F5 BIG-IP Access Policy Manager | F5
- Introduction to OWASP API Security Top 10 2023
- OWASP Top 10 API Security Risks – 2023 - OWASP API Security Top 10
- API Protection Concepts
- OWASP Tactical Access Defense Series: How BIG-IP APM Strengthens Defenses Against OWASP Top 10
- F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

In this example solution, we use DevSecOps practices to deploy an AWS Elastic Kubernetes Service (EKS) cluster running the Brewz test web application, serviced by F5 NGINX Ingress Controller. To secure our application and APIs, we deploy F5 Distributed Cloud's Web App and API Protection service as well as F5 BIG-IP Access Policy Manager and Advanced WAF. We then use F5 Container Ingress Services and IngressLink to tie it all together.
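For orientation, here is a minimal sketch of the IngressLink custom resource that CIS consumes to publish NGINX Ingress Controller through a BIG-IP virtual server. The address, iRule path, and label are placeholders; the full configuration lives in the referenced article.

```yaml
apiVersion: cis.f5.com/v1
kind: IngressLink
metadata:
  name: nginx-ingresslink
  namespace: nginx-ingress
spec:
  virtualServerAddress: "10.1.1.100"   # placeholder BIG-IP virtual server address
  iRules:
    - /Common/Proxy_Protocol_iRule     # preserves the real client IP toward NGINX
  selector:
    matchLabels:
      app: ingresslink                 # must match the NGINX IC service labels
```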
Simplifying Kubernetes Ingress using F5 Technologies

Kubernetes Ingress is an important component of any Kubernetes environment, as you're likely building applications that need to be accessed from outside of the k8s environment. F5 provides both BIG-IP and NGINX approaches to Ingress, and with that, the breadth of F5 solutions can be applied to a Kubernetes environment. This might be overwhelming if you don't have experience with all of those solutions, and you may just want to simply expose an application to start. Mark Dittmer, Sr. Product Management Engineer at F5, has put together a simple walkthrough guide for configuring Kubernetes Ingress using F5 technologies. He incorporated both BIG-IP Container Ingress Services and NGINX Ingress Controller in this walkthrough. By the end, you'll be able to securely present your k8s Service using an IP that is dynamically provisioned from a range you specify, leveraging the Service Type LoadBalancer. Simple as that! A minimal Service manifest of that shape is sketched below.

GitHub repo: https://github.com/mdditt2000/k8s-bigip-ctlr/tree/main/user_guides/simplifying-ingress
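As a generic reference for the Service Type LoadBalancer pattern the walkthrough ends with, here is a sketch; the app name and ports are hypothetical, and the external IP is what CIS would provision from your configured range.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical application
spec:
  type: LoadBalancer      # CIS watches this and provisions an external IP
  selector:
    app: my-app
  ports:
  - port: 80              # externally exposed port
    targetPort: 8080      # container port
```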
Deploying NGINX Ingress Controller with OpenShift on AWS Managed Service: ROSA

Introduction
In March 2021, Amazon and Red Hat announced the General Availability of Red Hat OpenShift Service on AWS (ROSA). ROSA is a fully managed OpenShift service, jointly managed and supported by both Red Hat and Amazon Web Services (AWS). OpenShift offers users several different deployment models. Customers that require a high degree of customization and have the skill sets to manage their environment can build and manage OpenShift Container Platform (OCP) on AWS. Those who want to alleviate the complexity of managing the environment and focus on their applications can consume OpenShift as a service: Red Hat OpenShift Service on AWS (ROSA). The benefits of ROSA are two-fold. First, we get simplified Kubernetes cluster creation using the familiar Red Hat OpenShift console, features, and tooling, without the burden of manually scaling and managing the underlying infrastructure. Second, the managed service is made easier with joint billing, support, and out-of-the-box integration with AWS infrastructure and services. In this article, I explore how to deploy an environment with NGINX Ingress Controller integrated into ROSA.

Deploy Red Hat OpenShift Service on AWS (ROSA)
The ROSA service may be deployed directly from the AWS console. Red Hat has done a great job of providing instructions for creating a ROSA cluster in the Installation Guide. The guide documents the AWS prerequisites, required AWS service quotas, and configuration of your AWS accounts. We run the following commands to ensure that the prerequisites are met before installing ROSA.

Verify that my AWS account has the necessary permissions:

```
❯ rosa verify permissions
I: Validating SCP policies...
I: AWS SCP policies ok
```

Verify that my AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster:

```
❯ rosa verify quota --region=us-west-2
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
```

Next, I ran the following command to prepare my AWS account for cluster deployment:

```
❯ rosa init
I: Logged in as 'ericji' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' created successfully!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.19
```

If we follow the instructions to create a ROSA cluster using the rosa CLI, after about 35 minutes our deployment produces a Red Hat OpenShift cluster along with the needed AWS components.

```
❯ rosa create cluster --cluster-name=eric-rosa
I: Creating cluster 'eric-rosa'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'eric-rosa' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c eric-rosa'.
I: To watch your cluster installation logs, run 'rosa logs install -c eric-rosa --watch'.
Name: eric-rosa
…
```

During the deployment, we may enter the following command to follow the OpenShift installer logs and track the progress of our cluster:

```
> rosa logs install -c eric-rosa --watch
```

After the Red Hat OpenShift Service on AWS (ROSA) cluster is created, we must configure identity providers to determine how users log in to access the cluster.

What just happened?
Let's review what just happened. The above installation program automatically set up the following AWS resources for the ROSA environment:
- AWS VPC subnets per Availability Zone (AZ). For single-AZ implementations, two subnets are created (one public, one private). A multi-AZ implementation makes use of three Availability Zones, with a public and a private subnet in each AZ (a total of six subnets).
- OpenShift cluster nodes (EC2 instances). Three master nodes are created to cater for cluster quorum and to ensure proper failover and resilience of OpenShift, plus at least two infrastructure nodes, catering for the built-in OpenShift container registry, the OpenShift router layer, and monitoring.
- Multi-AZ implementations: three master nodes and three infrastructure nodes spread across three AZs. Assuming that application workloads will also run in all three AZs for resilience, this deploys three workers, which translates to a minimum of nine EC2 instances running within the customer account.
- A collection of AWS Elastic Load Balancers. Some of these load balancers provide end-user access to the application workloads running on OpenShift via the OpenShift router layer; others expose endpoints used for cluster administration and management by the SRE teams.

Source: https://aws.amazon.com/blogs/containers/red-hat-openshift-service-on-aws-architecture-and-networking/

Deploy NGINX Ingress Controller
The NGINX Ingress Operator is a supported and certified mechanism for deploying NGINX Ingress Controller in an OpenShift environment, with point-and-click installation and automatic upgrades. It works for both the NGINX Open Source-based and NGINX Plus-based editions of NGINX Ingress Controller. In this tutorial, I'll be deploying the NGINX Plus-based edition. Read "Why You Need an Enterprise-Grade Ingress Controller on OpenShift" for use cases that merit the use of this edition. If you're not sure how these editions differ, read "Wait, Which NGINX Ingress Controller for Kubernetes Am I Using?"

I install the NGINX Ingress Operator from the OpenShift console. There are numerous options you can set when configuring the NGINX Ingress Controller, as listed in our GitHub repo. Here is a manifest example:

```yaml
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: openshift-operators
spec:
  ingressClass: nginx
  serviceType: LoadBalancer
  nginxPlus: true
  type: deployment
  image:
    pullPolicy: Always
    repository: ericzji/nginx-plus-ingress
    tag: 1.12.0
```
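Assuming the manifest above is saved locally (the filename here is just a placeholder), it can be applied like any other resource:

```sh
# Apply the NginxIngressController manifest (hypothetical filename)
oc apply -f nginx-ingress-controller.yaml
```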
To verify the deployment, run the following commands in a terminal. As shown in the output, the manifest I used in the previous step deployed two replicas of the NGINX Ingress Controller and exposed them with a LoadBalancer service.

```
❯ oc get pods -n openshift-operators
NAME                                                         READY   STATUS    RESTARTS   AGE
my-nginx-ingress-controller-b556f8bb-bsn4k                   1/1     Running   0          14m
nginx-ingress-operator-controller-manager-7844f95d5f-pfczr   2/2     Running   0          3d5h

❯ oc get svc -n openshift-operators
NAME                                                        TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
my-nginx-ingress-controller                                 LoadBalancer   172.30.171.237   a2b3679e50d36446d99d105d5a76d17f-1690020410.us-west-2.elb.amazonaws.com   80:30860/TCP,443:30143/TCP   25h
nginx-ingress-operator-controller-manager-metrics-service   ClusterIP      172.30.50.231    <none>
```

With NGINX Ingress Controller deployed, we'll have an environment that looks like this:

Post-deployment verification
After the ROSA cluster was configured, I deployed an app (Hipster) in OpenShift that is exposed by NGINX Ingress Controller (by creating an Ingress resource). To use a custom hostname, we must manually change the DNS record on the Internet to point to the IP address of the AWS Elastic Load Balancer:

```
❯ dig +short a2dc51124360841468c684127c4a8c13-808882247.us-west-2.elb.amazonaws.com
34.209.171.103
52.39.87.162
35.164.231.54
```

I made this DNS change (optionally, use a local host record), and we can see my demo app available on the Internet, like this:

Deleting your environment
To avoid unexpected charges, don't forget to delete your environment if you no longer need it.

```
❯ rosa delete cluster -c eric-rosa --watch
? Are you sure you want to delete cluster eric-rosa? Yes
I: Cluster 'eric-rosa' will start uninstalling now
W: Logs for cluster 'eric-rosa' are not available
…
```

Conclusion
To summarize, ROSA allows infrastructure and security teams to accelerate the deployment of Red Hat OpenShift Service on AWS. Integration with NGINX Ingress Controller provides comprehensive L4-L7 security services for the application workloads running on ROSA. As a developer, having your clusters as well as your security services maintained by this service gives you the freedom to focus on deploying applications. You have two options for getting started with NGINX Ingress Controller:
- Download the NGINX Open Source-based version of NGINX Ingress Controller from our GitHub repo.
- If you prefer to bring your own license to AWS, get a free trial directly from F5 NGINX.
Digital Transformation in Financial Services Using Production Grade Kubernetes Deployment

The Banking and Financial Services Industry (BFSI) requires the speed of modern application development in order to shorten the time it takes to bring value to its customers, but it also faces security and regulatory constraints that tend to slow down the development and deployment process. F5 and NGINX bring the security and agile development technology, while Red Hat OpenShift provides the modern development architecture needed to achieve the speed and agility required by BFSI companies.
Quick Deployment: Deploy F5 CIS/F5 IngressLink in a Kubernetes cluster on AWS

Summary
This article describes how to deploy F5 Container Ingress Services (CIS) and F5 IngressLink with NGINX Ingress Controller on the AWS cloud quickly and predictably. All you need to begin are your AWS credentials (15-35 mins).

Problem Statement
Deploying F5 IngressLink is quick and simple. However, to do so, you must first deploy the following resources:
- AWS resources such as VPC, subnets, security groups, and more
- A Kubernetes cluster on AWS
- A BIG-IP instance
- F5 Container Ingress Services (CIS)
- NGINX Ingress Controller
- Application pods

Many times, we want to spin up a Kubernetes cluster with the resources listed above for a quick demo, educational purposes, experimental testing, or simply to run a command and view its output. The creation and deletion processes are, however, both error-prone and time-consuming. In addition, we don't want the overhead and cost of maintaining the cluster and keeping the instances/virtual machines running, and we'd like to tear down the deployment as soon as we're done.

Solution
We need to automate and integrate predictably the creation steps described in:
- F5 CloudDocs: to deploy BIG-IP CIS and F5 IngressLink
- NGINX documentation: to deploy NGINX Ingress Controller

Refer to my GitHub repository to perform this using either kops or eksctl (a minimal kops invocation is sketched below).

Disclaimer: The deployment in the GitHub repository is for demo or experimental purposes and is not meant for production use or supported by F5 Support. For example, the Kubernetes nodes are configured with public Elastic IP addresses for easy troubleshooting access.

- kops: kops takes about 6-8 mins to deploy a Kubernetes cluster on AWS. You can complete the BIG-IP CIS/IngressLink deployment in about 15 mins.
- eksctl: eksctl takes about 25-28 mins to deploy an EKS cluster on AWS. You can complete the BIG-IP CIS/IngressLink deployment in about 35 mins.

If you don't need the Kubernetes resources eksctl creates, such as an EKS cluster managed by Amazon's EKS control plane and at least two subnets in different availability zones for resilience and so on, kops is the faster option. For any bugs, please raise an issue on GitHub.
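For reference, a bare-bones kops invocation looks roughly like the following. The cluster name, state-store bucket, and zone are placeholders, and the repository's scripts wrap this with the CIS/IngressLink specifics.

```sh
# Hypothetical minimal kops cluster creation (replace the placeholders)
export KOPS_STATE_STORE=s3://<your-kops-state-bucket>
kops create cluster --name=demo.k8s.local \
    --zones=us-west-2a --node-count=2 --yes

# Tear it down when finished
kops delete cluster --name=demo.k8s.local --yes
```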
Deploying NGINXplus with AppProtect in Tanzu Kubernetes Grid

Introduction
Tanzu Kubernetes Grid (aka TKG) is VMware's main Kubernetes offering. Although Tanzu Kubernetes Grid is a certified conformant Kubernetes offering, the different Kubernetes offerings can be customized in different ways. In the case of TKG, a remarkable feature is the use of Pod Security Policies by default. TKG clusters are very easily spun up in either public or private clouds by creating a single declarative YAML file such as the following:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg1
  namespace: tkg1
spec:
  distribution:
    version: v1.18.15+vmware.1-tkg.1.600e412
  topology:
    controlPlane:
      count: 1
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
    workers:
      count: 1
      class: best-effort-medium
      storageClass: vsan-default-storage-policy
```

As you can see from the schema, a TKG cluster is deployed just like any other Kubernetes resource. How does it work? In the case of public clouds, these TanzuKubernetesCluster resources are instantiated from a bootstrap cluster named the "management Kubernetes cluster", whilst when using vSphere with Tanzu, the TanzuKubernetesCluster resources are instantiated from vSphere with Tanzu's supervisor cluster.

This blog post walks through an example from start to end:
- Creating a wildcard certificate from a custom CA with easy-rsa
- Enabling the Harbor registry
- Installing NGINXplus with AppProtect
- Using NGINXplus Ingress Controller without AppProtect
- Adding AppProtect to an Ingress resource

AppProtect is an NGINXplus module for WAF and Bot protection based on market-leading F5 BIG-IP ASM. AppProtect provides enhanced capabilities and performance for those who require more than what mod_auth provides. We will finish with two relevant considerations:
- Updating NGINXplus Ingress Controller using Helm. This is used, for example, for scaling out NGINXplus and hence improving overall performance.
- Using NGINXplus alongside other Ingress Controllers (such as Contour).

Prerequisites
You need an NGINXplus license, which can be retrieved from https://www.nginx.com/free-trial-request-nginx-ingress-controller/. This license is in practice a cert/key pair with the file names nginx-repo.{crt,key} referenced later on. The following software needs to be present on your machine:
- Docker v18.09+
- GNU Make
- git
- Helm3
- OpenSSL
- https://github.com/OpenVPN/easy-rsa.git

Create a wildcard certificate with easy-rsa
In the next steps we will create a Certificate Authority (CA) and, from it, a wildcard certificate/key pair which will be loaded into Kubernetes as a TLS secret. This wildcard certificate will be used by all the services exposed through Ingress.

Retrieve easy-rsa and initialize a CA (output summarized):

```
$ git clone https://github.com/OpenVPN/easy-rsa.git
$ cd easyrsa3/
$ ./easyrsa init-pki
$ ./easyrsa build-ca
```

Generate the wildcard key/cert pair (output summarized):

```
$ ./easyrsa gen-req wildcard
Common Name (eg: your user, host, or server name) [wildcard]:*.tkg.bd.f5.com

Keypair and certificate request completed.
Your files are:
req: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/reqs/wildcard.req
key: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/private/wildcard.key

$ ./easyrsa sign-req server wildcard
Request subject, to be signed as a server certificate for 825 days:
subject=
    commonName = *.tkg.bd.f5.com

The Subject's Distinguished Name is as follows
commonName :ASN.1 12:'*.tkg.bd.f5.com'
Certificate is to be certified until Aug 21 15:58:24 2023 GMT (825 days)
Write out database with 1 new entries
Data Base Updated

Certificate created at: /Users/alonsocamaro/Documents/VMware-Tanzu/tanzu/easy-rsa/easyrsa3/pki/issued/wildcard.crt
```

The certificate is stored in ./pki/issued/wildcard.crt and the key is stored encrypted in ./pki/private/wildcard.key. Import these into a Kubernetes secret using the next steps:

```
$ openssl rsa -in ./pki/private/wildcard.key -out ./pki/private/wildcard-unencrypted.key
Enter pass phrase for ./pki/private/wildcard.key:
writing RSA key

$ kubectl create ns ingress-nginx

$ kubectl create -n ingress-nginx secret tls wildcard-tls --key ./pki/private/wildcard-unencrypted.key --cert ./pki/issued/wildcard.crt
secret/wildcard-tls created

$ rm ./pki/private/wildcard-unencrypted.key
```

As you might have noticed, the secret is loaded in the namespace ingress-nginx, where the NGINXplus Ingress Controller will be installed.

Enable your image registry in Tanzu
You need an image registry. When using vSphere with Tanzu this comes with Harbor. In this case you have to follow the next steps:
- Enable Harbor by following https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-AE24CF79-3C74-4CCD-B7C7-757AD082D86A.html.
- Trust Harbor's self-signed certificate. If using vSphere with Tanzu Update 2 (U2), follow https://tanzu.vmware.com/content/blog/how-to-set-up-harbor-registry-self-signed-certificates-tanzu-kubernetes-clusters. If using a previous version, follow https://cormachogan.com/2020/06/23/integrating-embedded-vsphere-with-kubernetes-harbor-registry-with-tkg-guest-clusters/, with the caveat that you will need to re-apply the changes if you upgrade the TKG cluster. Alternatively, you could use your own certificates and follow https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.3/vmware-tanzu-kubernetes-grid-13/GUID-cluster-lifecycle-secrets.html#trust-custom-ca-certificates-on-cluster-nodes-3.

Installing NGINXplus Ingress Controller
This blog shows step by step everything that needs to be done to create an AppProtect-secured Ingress Controller using the wildcard certificate created earlier. It only assumes that the TKG cluster is up and running. If you want to perform further customizations, you can check https://docs.nginx.com/nginx-ingress-controller and https://docs.nginx.com/nginx-app-protect/configuration. This blog uses TKG 1.3 in vSphere with Tanzu with NSX-T; the steps are similar in any other supported TKG environment.

Build the NGINXplus Docker image
Define the registry endpoint and namespace where the TKG cluster will be deployed:

```
$ REGISTRY=<registry IP or FQDN>
$ NS=<your namespace>
```

Log in to the registry:

```
$ docker login $REGISTRY
Username: <your user>
Password: <your password>
Login Succeeded
```

Retrieve NGINXplus:

```
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress
$ git checkout v1.11.1
```

Copy the license files into the base folder of NGINXplus:

```
$ cp $LICDIR/nginx-repo.{crt,key} .
```
Build the image:

```
$ make debian-image-nap-plus PREFIX=$REGISTRY/$NS/nginx-plus-ingress TARGET=container
Docker version 19.03.8, build afacb8b
docker build --build-arg IC_VERSION=1.11.1-32745366 --build-arg GIT_COMMIT=32745366 --build-arg VERSION=1.11.1 --target container -f build/Dockerfile -t 10.105.210.67/tkg1/nginx-plus-ingress:1.11.1 . --build-arg BUILD_OS=debian-plus-ap --build-arg PLUS=-plus --secret id=nginx-repo.crt,src=nginx-repo.crt --secret id=nginx-repo.key,src=nginx-repo.key
[+] Building 4.9s (24/24) FINISHED
```

After this we can verify the image is ready in our local Docker:

```
$ docker images
REPOSITORY                           TAG      IMAGE ID       CREATED          SIZE
$REGISTRY/$NS/nginx-plus-ingress     1.11.1   70113ec38914   35 minutes ago   626MB
```

If we wanted to push it into another namespace, we would perform an image tag operation as follows:

```
docker image tag 70113ec38914 $REGISTRY/$ANOTHERNS/nginx-plus-ingress:1.11.1
```

Upload the image into the repository:

```
make push PREFIX=$REGISTRY/$NS/nginx-plus-ingress
```

Configure the NGINXplus installation
Switch to the Helm chart folder:

```
cd deployments/helm-chart
```

Make a backup of the default nginx-plus config file:

```
cp values-plus.yaml values-plus.yaml.orig
```

We will edit the file values-plus.yaml in order to:
- Enable AppProtect.
- Allow specifying a wildcard TLS certificate that we will use for all the services.
- Expose NGINXplus using a LoadBalancer with a Cluster externalTrafficPolicy.

Exposing NGINXplus (or any other Ingress Controller, such as Contour) using a Cluster externalTrafficPolicy is required given that the NSX-T native load balancer doesn't perform any health checking when creating a Service of type LoadBalancer. We will see how to improve this in future blogs with the use of BIG-IP.

```yaml
controller:
  nginxplus: true
  image:
    repository: nginx-plus-ingress
    tag: "1.11.1"
  service:
    externalTrafficPolicy: Cluster
  appprotect:
    ## Enable the App Protect module in the Ingress Controller.
    enable: true
  wildcardTLS:
    ## The base64-encoded TLS certificate for every Ingress host that has TLS enabled but no secret specified.
    ## If the parameter is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
    cert: ""
    ## The base64-encoded TLS key for every Ingress host that has TLS enabled but no secret specified.
    ## If the parameter is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
    key: ""
    ## The secret with a TLS certificate and key for every Ingress host that has TLS enabled but no secret specified.
    ## The value must follow the following format: `<namespace>/<name>`.
    ## Used as an alternative to specifying a certificate and key using `controller.wildcardTLS.cert` and `controller.wildcardTLS.key` parameters.
    ## Format: <namespace>/<secret_name>
    secret: ingress-nginx/wildcard-tls
```

This file can also be found at https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/values-plus.yaml

Apply the required PodSecurityPolicy before NGINXplus installation
The next step creates a PodSecurityPolicy, which is required by Tanzu Kubernetes Grid, and binds it to the Service Account ingress-nginx used in the regular NGINXplus install.
```
$ kubectl apply -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/nginx-psp.yaml
podsecuritypolicy.policy/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-psp created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-psp created
```

Install NGINXplus Ingress Controller using Helm
From the deployments/helm-chart directory of the downloaded NGINXplus, run:

```
$ helm -n ingress-nginx install ingress-nginx -f values-plus.yaml .
NAME: ingress-nginx
LAST DEPLOYED: Mon May 17 15:16:20 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.
```

Checking the resulting installation
When checking the resulting resources, we can see that by default a single POD is created. We can scale this up or down as required using Helm, as shown later. Note also that by default the NGINXplus Ingress Controller is automatically exposed using a Service of type LoadBalancer, which configures an external load balancer. In this case the external load balancer is NSX-T's native LB, as shown in the screenshot below. When using vSphere networking, this would be HAProxy by default. In later blogs we will show how to use F5 BIG-IP in TKG clusters instead.

```
$ kubectl -n ingress-nginx get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-nginx-ingress-7d4587b44c-n9b8l   1/1     Running   0          16h

NAME                                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
service/ingress-nginx-nginx-ingress   LoadBalancer   100.69.127.66   10.105.210.68   80:30527/TCP,443:31185/TCP   16h

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-nginx-ingress   1/1     1            1           16h

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-nginx-ingress-7d4587b44c   1         1         1       16h
```

In the next screenshots we can see the resulting configuration in NSX-T. Both pools, for port 80 and port 443, point to the workers' node addresses. This means that the traffic flow will be NSX-T LB -> ClusterIP -> Ingress Controller's POD address (in the same or in another node). This is the case with any regular Ingress Controller, including TKG's provided Contour. In later blogs it will be shown how these layers of indirection can be bypassed using F5 BIG-IP.

Using NGINXplus Ingress Controller without AppProtect
Creating a regular Ingress resource
In this initial example we will create two services (coffee and tea) which will be exposed with an Ingress resource called cafe-ingress. This exposes the services at https://cafe.tkg.bd.f5.com/coffee and https://cafe.tkg.bd.f5.com/tea, using the previously created wildcard certificate for *.tkg.bd.f5.com, as depicted in the next diagram. To create this setup, run the following commands:

```
$ kubectl create ns test
$ kubectl -n test apply -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe-rbac.yaml
$ kubectl -n test apply -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe.yaml
$ kubectl -n test apply -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe-ingress.yaml
```

This is the same example provided in the official NGINXplus documentation, except that we add the cafe-rbac.yaml declaration, which creates the necessary PodSecurityPolicies and bindings for TKG.
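For reference, the applied cafe-ingress.yaml looks roughly like the sketch below. This is an approximation based on the standard NGINX cafe example, adapted to this article's hostname; the exact fields and service names may differ slightly from the repository version. Note there is no secretName under tls, so NGINXplus falls back to the wildcard TLS secret configured earlier.

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - cafe.tkg.bd.f5.com      # served with the wildcard-tls secret (no secretName here)
  rules:
  - host: cafe.tkg.bd.f5.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc     # assumed service names from the cafe example
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
```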
To verify the result, first check the Ingress resource itself:

```
$ kubectl -n test get ingress
NAME           CLASS   HOSTS                ADDRESS         PORTS     AGE
cafe-ingress   nginx   cafe.tkg.bd.f5.com   10.105.210.68   80, 443   3m44s
```

where we can observe that the IP address is the one of the external load balancer seen before. To verify it is all working as expected, we use curl as follows:

```
$ curl --cacert ca-tkg.bd.f5.com.crt --resolve cafe.tkg.bd.f5.com:443:10.105.210.68 https://cafe.tkg.bd.f5.com/coffee
Server address: 100.96.1.16:8080
Server name: coffee-86954d94fd-pnvpq
Date: 18/May/2021:16:18:59 +0000
URI: /coffee
Request ID: 63964930a2d1038af5f204ef8fbe91fc
```

The key parameters are:
- --cacert, to specify the CA crt file previously created
- --resolve, to allow curl to resolve the FQDN of the request

Adding AppProtect to an Ingress resource
Additional configuration
Our deployed NGINXplus has AppProtect built in. It is up to the user of the Ingress resource to enable it, on a per-Ingress basis. In our example we will apply the AppProtect security policies in the user namespace "test". We will also create a syslog store in the ingress-nginx namespace. All of this can be customized. Ultimately, the user just needs to add the following annotations in order to secure the cafe site:

```yaml
annotations:
  appprotect.f5.com/app-protect-policy: "test/dataguard-alarm"
  appprotect.f5.com/app-protect-enable: "True"
  appprotect.f5.com/app-protect-security-log-enable: "True"
  appprotect.f5.com/app-protect-security-log: "test/logconf"
  appprotect.f5.com/app-protect-security-log-destination: "syslog:server=100.70.175.24:514"
```

The custom AppProtect policy used in this example contains DataGuard protection for credit card number and US Social Security number leaks, plus a custom signature. It also defines where to send the AppProtect logs. These are sent in SYSLOG/TCP mode, independently of the regular logs generated by NGINXplus. To make all this happen, first we create the syslog server:

```
kubectl apply -n ingress-nginx -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/syslog-rbac.yaml
kubectl apply -n ingress-nginx -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/syslog.yaml
```

Next, we create the AppProtect policies:

```
kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/ap-apple-uds.yaml
kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/ap-dataguard-alarm-policy.yaml
kubectl apply -n test -f https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/ap-logconf.yaml
```

Finally, we add the above annotations to the Ingress resource. For that, we need to get the syslog POD's address and substitute it into the cafe-ingress-ap.yaml definition:

```
curl -O https://raw.githubusercontent.com/f5devcentral/f5-bd-tanzu-tkg-nginxplus/main/cafe-ingress-ap.yaml
SYSLOG_IP=<IP address of syslog's POD>
sed -e "s/SYSLOG/$SYSLOG_IP/" cafe-ingress-ap.yaml > cafe-ingress-ap-syslog.yaml
kubectl apply -n test -f cafe-ingress-ap-syslog.yaml
```

Note: it might take a few seconds for the AppProtect configuration to become effective.
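To give a sense of what the dataguard-alarm policy enables, here is an approximate APPolicy sketch. The actual ap-dataguard-alarm-policy.yaml in the repository also carries the custom signature set and other settings, so treat this as illustrative of the DataGuard section rather than the exact file.

```yaml
apiVersion: appprotect.f5.com/v1beta1
kind: APPolicy
metadata:
  name: dataguard-alarm
spec:
  policy:
    name: dataguard_alarm
    template:
      name: POLICY_TEMPLATE_NGINX_BASE   # assumed base template
    applicationLanguage: utf-8
    enforcementMode: blocking
    data-guard:
      enabled: true
      maskData: true
      creditCardNumbers: true            # detect credit card number leaks
      usSocialSecurityNumbers: true      # detect US SSN leaks
```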
Verifying AppProtect
Run the following command to watch the requests live as handled by AppProtect:

```
kubectl -n ingress-nginx exec -it <SYSLOG POD NAME> -- tail -f /var/log/messages
```

Send a request that triggers the custom signature:

```
curl --cacert ca-tkg.bd.f5.com.crt --resolve cafe.tkg.bd.f5.com:443:10.105.210.69 "https://cafe.tkg.bd.f5.com/coffee/" -X POST -d "apple"
```

You should see a log entry similar to the following in the syslog logs:

```
May 24 13:43:23 ingress-nginx-nginx-ingress-7d4587b44c-wvrxs ASM:attack_type="Non-browser Client,Brute Force Attack",blocking_exception_reason="N/A",date_time="2021-05-24 13:43:23",dest_port="443",ip_client="10.105.210.69",is_truncated="false",method="POST",policy_name="dataguard-alarm",protocol="HTTPS",request_status="blocked",response_code="0",severity="Critical",sig_cves="N/A",sig_ids="300000000",sig_names="Apple_medium_acc [Fruits]",sig_set_names="{apple_sigs}",src_port="4096",sub_violations="N/A",support_id="15704273797572010868",threat_campaign_names="N/A",unit_hostname="ingress-nginx-nginx-ingress-7d4587b44c-wvrxs",uri="/coffee/",violation_rating="3",vs_name="24-cafe.tkg.bd.f5.com:9-/coffee",x_forwarded_for_header_value="N/A",outcome="REJECTED",outcome_reason="SECURITY_WAF_VIOLATION",violations="Attack signature detected,Bot Client Detected",violation_details="<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>10000000200c00-3030430000070</block><alarm>2477f0ffcbbd0fea-8003f35cb000007c</alarm><learn>200000-20</learn><staging>0-0</staging></violation_masks><request-violations><violation><viol_index>42</viol_index><viol_name>VIOL_ATTACK_SIGNATURE</viol_name><context>request</context><sig_data><sig_id>300000000</sig_id><blocking_mask>7</blocking_mask><kw_data><buffer>YXBwbGU=</buffer><offset>0</offset><length>5</length></kw_data></sig_data></violation></request-violations></BAD_MSG>",bot_signature_name="curl",bot_category="HTTP Library",bot_anomalies="N/A",enforced_bot_anomalies="N/A",client_class="Untrusted Bot",request="POST /coffee/ HTTP/1.1\r\nHost: cafe.tkg.bd.f5.com\r\nUser-Agent: curl/7.64.1\r\nAccept: */*\r\nContent-Length: 5\r\nContent-Type: application/x-www-form-urlencoded\r\n\r\napple"
```

Updating NGINXplus Ingress Controller using Helm
By default a single NGINXplus instance is created. If you want to increase its performance, scaling out is as simple as editing the values-plus.yaml file and setting the replicaCount parameter to the desired value:

```yaml
controller:
  replicaCount: 4
```

and running helm upgrade as follows:

```
$ helm -n ingress-nginx upgrade ingress-nginx -f values-plus.yaml .
Release "ingress-nginx" has been upgraded. Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Thu May 20 14:20:38 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.
```

Using NGINXplus alongside other Ingress Controllers (such as Contour)
NGINXplus supports the Ingress/v1 resource version available in Kubernetes 1.18+, as well as the previous Ingress/v1beta1 API resource version for backwards compatibility. The Contour Ingress Controller is provided in TKG by VMware as an add-on which is not installed by default. If installed, be aware that Contour, at the time of this writing (May 2021), only supports the older Ingress/v1beta1 API resource version.
This means that when defining Ingress resources, you have to specify which Ingress Controller to use by adding the following annotation:

```
kubernetes.io/ingress.class: <ingress controller name>
```

where <ingress controller name> can be nginx or contour. For further details on this topic, see https://docs.nginx.com/nginx-ingress-controller/installation/running-multiple-ingress-controllers/

Conclusion
In this blog post we went through all the steps required to install and use NGINXplus with AppProtect in Tanzu Kubernetes Grid, with a real-world example. Overall, the installation is the same as in any Kubernetes environment, but the following two items need to be taken into account:
- Before deploying, make sure that the appropriate PodSecurityPolicies are in place for either NGINXplus or the workloads. PodSecurityPolicies are not enabled by default in many Kubernetes distributions, so this represents a change from the usual practice.
- If deploying NGINXplus alongside another Ingress Controller, make sure that the Ingress resources are defined appropriately in order to select the right Ingress Controller for the corresponding Ingress resource.

In this blog we used NGINXplus in a TKG cluster deployed on an on-premises infrastructure (vSphere with Tanzu) with the Antrea CNI and NSX-T networking. The steps would have been the same with vSphere networking or the Calico CNI. The only difference could come when exposing NGINXplus through the external load balancer: if the external load balancer performed health checking, it would be preferable to use the Local externalTrafficPolicy, since this avoids a hop and keeps the client source address. In future blogs we will post how to expose NGINXplus more effectively and in a cloud-agnostic manner by using BIG-IP as the external load balancer.