Deploying NGINX Ingress Controller with OpenShift on AWS Managed Service: ROSA
Introduction
In March 2021, Amazon and Red Hat announced the General Availability of Red Hat OpenShift Service on AWS (ROSA). ROSA is a fully-managed OpenShift service, jointly managed and supported by both Red Hat and Amazon Web Services (AWS).
OpenShift offers users several different deployment models. For customers that require a high degree of customization and have the skill sets to manage their environment, they can build and manage OpenShift Container Platform (OCP) on AWS. For those who want to alleviate the complexity in managing the environment and focus on their applications, they can consume OpenShift as a service, or Red Hat OpenShift Service on AWS (ROSA).
The benefits of ROSA are two-fold. First, we can enjoy simplified Kubernetes cluster creation using the familiar Red Hat OpenShift console, features, and tooling, without the burden of manually scaling and managing the underlying infrastructure. Second, the managed service comes with joint billing, support, and out-of-the-box integration with AWS infrastructure and services.
In this article, I explore how to deploy an environment with NGINX Ingress Controller integrated into ROSA.
Deploy Red Hat OpenShift Service on AWS (ROSA)
The ROSA service may be deployed directly from the AWS console. Red Hat has done a great job of documenting how to create a ROSA cluster in the Installation Guide.
The guide documents the AWS prerequisites, required AWS service quotas, and configuration of your AWS account. I ran the following commands to ensure that the prerequisites were met before installing ROSA.
- Verify that my AWS account has the necessary permissions:
❯ rosa verify permissions
I: Validating SCP policies...
I: AWS SCP policies ok
- Verify that my AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster:
❯ rosa verify quota --region=us-west-2
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
Next, I ran the following command to prepare my AWS account for cluster deployment:
❯ rosa init
I: Logged in as 'ericji' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' created successfully!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.19
Following the instructions to create a ROSA cluster with the rosa CLI, after about 35 minutes the deployment produces a Red Hat OpenShift cluster along with the needed AWS components.
❯ rosa create cluster --cluster-name=eric-rosa
I: Creating cluster 'eric-rosa'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'eric-rosa' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c eric-rosa'.
I: To watch your cluster installation logs, run 'rosa logs install -c eric-rosa --watch'.
Name: eric-rosa
…
During the deployment, we may enter the following command to follow the OpenShift installer logs to track the progress of our cluster:
❯ rosa logs install -c eric-rosa --watch
After the Red Hat OpenShift Service on AWS (ROSA) cluster is created, we must configure identity providers to determine how users log in to access the cluster.
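As a quick, hedged example of that step (check rosa create idp --help for the authoritative options), you can either create a temporary cluster-admin user, which is enough for a demo like this one, or wire up a full identity provider such as GitHub:

❯ rosa create admin --cluster=eric-rosa
❯ rosa create idp --cluster=eric-rosa --type=github

For production clusters, a proper identity provider (GitHub, Google, LDAP, OpenID Connect, and others are supported) is the recommended route.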
What just happened?
Let's review what just happened. The above installation program automatically set up the following AWS resources for the ROSA environment:
- AWS VPC subnets per Availability Zone (AZ).
- For single-AZ implementations, two subnets were created (one public, one private).
- The multi-AZ implementation would make use of three Availability Zones, with a public and private subnet in each AZ (a total of six subnets).
- OpenShift cluster nodes (or EC2 instances)
- Three Master nodes were created to provide cluster quorum and to ensure proper failover and resilience of OpenShift.
- At least two infrastructure nodes, catering for the built-in OpenShift container registry, the OpenShift router layer, and monitoring.
- Multi-AZ implementations
- Three Master nodes and three infrastructure nodes spread across three AZs
- Assuming that application workloads will also be running in all three AZs for resilience, this will deploy three Workers. This will translate to a minimum of nine EC2 instances running within the customer account.
- A collection of AWS Elastic Load Balancers. Some of these load balancers provide end-user access to the application workloads running on OpenShift via the OpenShift router layer; others expose endpoints used for cluster administration and management by the SRE teams.
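To sanity-check these resources from the cluster side, we can describe the cluster and, once logged in with the oc CLI, list its nodes (a quick optional check, not required for the rest of this walkthrough):

❯ rosa describe cluster -c eric-rosa
❯ oc get nodes

In a multi-AZ deployment you should see the master, infrastructure, and worker nodes spread across the three Availability Zones.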
Deploy NGINX Ingress Controller
The NGINX Ingress Operator is a supported and certified mechanism for deploying NGINX Ingress Controller in an OpenShift environment, with point-and-click installation and automatic upgrades. It works for both the NGINX Open Source-based and NGINX Plus-based editions of NGINX Ingress Controller. In this tutorial, I’ll be deploying the NGINX Plus-based edition. Read Why You Need an Enterprise-Grade Ingress Controller on OpenShift for use cases that merit the use of this edition. If you’re not sure how these editions are different, read Wait, Which NGINX Ingress Controller for Kubernetes Am I Using?
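Installing from the console OperatorHub is the simplest path, and it is what I do in the next step. If you prefer a fully declarative install, OperatorHub operators can also be installed by creating an OLM Subscription. The sketch below is illustrative only; the channel and catalog source names are assumptions to verify against the operator's OperatorHub entry:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: nginx-ingress-operator
  namespace: openshift-operators
spec:
  # Channel and source are assumptions; confirm them in OperatorHub
  channel: alpha
  name: nginx-ingress-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace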
I installed the NGINX Ingress Operator from the OpenShift console. There are numerous options you can set when configuring NGINX Ingress Controller, as listed in our GitHub repo. Here is a manifest example:
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: my-nginx-ingress-controller
  namespace: openshift-operators
spec:
  ingressClass: nginx
  serviceType: LoadBalancer
  nginxPlus: true
  type: deployment
  image:
    pullPolicy: Always
    repository: ericzji/nginx-plus-ingress
    tag: 1.12.0
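If you'd rather apply this from the command line than through the console form, the same manifest can be saved to a file and created with oc (the filename below is arbitrary):

❯ oc apply -f my-nginx-ingress-controller.yaml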
To verify the deployment, run the following commands in a terminal. As shown in the output, the manifest I used in the previous step deployed NGINX Ingress Controller and exposed it with a LoadBalancer service.
❯ oc get pods -n openshift-operators
NAME                                                          READY   STATUS    RESTARTS   AGE
my-nginx-ingress-controller-b556f8bb-bsn4k                    1/1     Running   0          14m
nginx-ingress-operator-controller-manager-7844f95d5f-pfczr    2/2     Running   0          3d5h

❯ oc get svc -n openshift-operators
NAME                                                         TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
my-nginx-ingress-controller                                  LoadBalancer   172.30.171.237   a2b3679e50d36446d99d105d5a76d17f-1690020410.us-west-2.elb.amazonaws.com   80:30860/TCP,443:30143/TCP   25h
nginx-ingress-operator-controller-manager-metrics-service    ClusterIP      172.30.50.231    <none>
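As an optional smoke test (my own addition, not part of the original verification steps), you can send a request to the LoadBalancer hostname shown above; with no Ingress resources defined yet, an HTTP 404 response from NGINX confirms the controller is reachable:

❯ curl -I http://a2b3679e50d36446d99d105d5a76d17f-1690020410.us-west-2.elb.amazonaws.com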
With NGINX Ingress Controller deployed, we'll have an environment that looks like this:
Post-deployment verification
After the ROSA cluster was configured, I deployed an app (Hipster) in OpenShift and exposed it with NGINX Ingress Controller by creating an Ingress resource. To use a custom hostname, we must manually update the DNS record on the Internet to point to the IP addresses of the AWS Elastic Load Balancer.
❯ dig +short a2dc51124360841468c684127c4a8c13-808882247.us-west-2.elb.amazonaws.com
34.209.171.103
52.39.87.162
35.164.231.54
Once I made this DNS change (optionally, you can use a local host record instead), my demo app was available on the Internet, like this:
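For reference, the Ingress resource that exposes the demo app looks roughly like the sketch below. The hostname, namespace, and backend service name here are illustrative (the Hipster demo's storefront is commonly served by a Service named frontend), not necessarily the exact values I used:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hipster-ingress
  namespace: hipster
spec:
  ingressClassName: nginx        # matches the ingressClass set in the NginxIngressController manifest
  rules:
  - host: hipster.example.com    # replace with your custom hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend       # the demo app's web frontend Service (illustrative)
            port:
              number: 80

If you'd rather not change public DNS, a local host record pointing your hostname at one of the load balancer IP addresses from the dig output above works just as well for testing.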
Deleting your environment
To avoid unexpected charges, don't forget to delete your environment if you no longer need it.
❯ rosa delete cluster -c eric-rosa --watch
? Are you sure you want to delete cluster eric-rosa? Yes
I: Cluster 'eric-rosa' will start uninstalling now
W: Logs for cluster 'eric-rosa' are not available
…
Conclusion
To summarize, ROSA allows infrastructure and security teams to accelerate the deployment of Red Hat OpenShift Service on AWS. Integration with NGINX Ingress Controller provides comprehensive L4-L7 security services for the application workloads running on Red Hat OpenShift Service on AWS (ROSA). As a developer, having your clusters and security services maintained by this offering gives you the freedom to focus on deploying applications.
You have two options for getting started with NGINX Ingress Controller:
- Download the NGINX Open Source-based version of NGINX Ingress Controller from our GitHub repo.
- If you prefer to bring your own license to AWS, get a free trial directly from F5 NGINX.