Securing and Scaling Hybrid Apps with F5 NGINX (Part 2)

If you have attended a cybersecurity trade show lately, you may have noticed the term “Zero Trust” (ZT) advertised at almost every booth. Most security companies appear to offer the same value proposition, “securing apps with ZT,” because ZT is a broad term that can span endless use cases.

ZT is not a feature or capability, but rather a philosophy embraced by IT security leaders, based on the idea that no traffic entering or exiting a system is trusted and all of it must be scrutinized before passing through. Organizations are shifting to a zero-trust mindset due to the increasing complexity of cyber attacks. Perimeter-based firewalls are no longer sufficient for securing digital resources.

In Part 1 of our series, we configured NGINX Plus as an external load balancer to route and terminate TLS traffic to cluster nodes. In this part of the series, we leverage the same NGINX Plus deployment to enable ZT use cases that will improve the security posture of your hybrid applications.

NOTE: Part 1 of the series is a prerequisite for enabling the ZT use cases in our examples. Please ensure that Part 1 is completed before starting Part 2.


ZT Use case #1: OIDC Authentication 

OIDC (OpenID Connect) is an authentication layer on top of the OAuth 2.0 framework. Many organizations choose OIDC to authenticate digital identities and enable SSO (single sign-on) for consumer applications. With SSO, users gain access to multiple applications with one set of credentials by authenticating their identities through an IdP (identity provider).

In addition to the basic reverse-proxy load balancing configured in Part 1, I can configure NGINX Plus to operate as an OIDC relying party that exchanges and validates ID tokens with the IdP. 
I will extend the architecture from Part 1 with an IdP and configure NGINX Plus as the identity-aware proxy.

 

Prerequisites for NGINX Plus 

Before configuring NGINX Plus as the OIDC identity-aware proxy:

1. Install the NGINX JavaScript (njs) module.

$ sudo apt-get install nginx-plus-module-njs 

2. Load the njs module into the NGINX configuration by adding the following line at the top of your nginx.conf file.

load_module modules/ngx_http_js_module.so;
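For context, here is a minimal sketch of what the top of nginx.conf might look like after this change (paths assume the default NGINX Plus package layout; your file will contain additional directives):

```nginx
# /etc/nginx/nginx.conf (top of file, abbreviated sketch)
# load_module must appear in the main context, before the events/http blocks
load_module modules/ngx_http_js_module.so;

user  nginx;
worker_processes  auto;

events { }

http {
    # Part 1 and Part 2 config files live here
    include /etc/nginx/conf.d/*.conf;
}
```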

3. Clone the OIDC GitHub repository into your directory of choice.

cd /home/ubuntu && git clone --branch R28 https://github.com/nginxinc/nginx-openid-connect.git 
 

Setting up the IdP 

The IdP manages and stores digital identities, mitigating the risk of attackers impersonating users to steal sensitive information. There are many IdP vendors to choose from, such as Okta, Ping Identity, and Azure AD. We use Okta as the IdP in our examples moving forward.

If you do not have access to an IdP, you can quickly get started with the Okta Command Line Interface (CLI) by running the okta register command to sign up for a new account.

Once account creation is successful, we will use the Okta CLI to preconfigure Okta as the IdP, creating what Okta calls an app integration. Other IdPs use different nomenclature for an application integration; for example, Azure AD calls them App registrations. If you are not using Okta, you can follow the documentation of your IdP and skip to the next section (Configuring NGINX as the OpenID Connect relying party).

1. Run the okta login command to authenticate the Okta CLI with your Okta developer account. Enter your Okta domain and API token at the prompts.

$ okta login  
Okta Org URL: https://your-okta-domain 
Okta API token: your-api-token 

 2. Create the app integration 

 $ okta apps create --app-name=mywebapp --redirect-uri=https://<nginx-plus-hostname>:443/_codexch 

where 

  • --app-name defines the application name (here, mywebapp) 
  • --redirect-uri defines the URI to which the IdP redirects sign-ins on NGINX Plus. <nginx-plus-hostname> should resolve to the NGINX Plus external IP configured in Part 1. We use port 443 because TLS termination was configured on NGINX Plus in Part 1. Recall that we used self-signed certificates and keys to configure TLS; in a production environment, we recommend certificates and keys issued by a trusted certificate authority such as Let’s Encrypt.

Once the command from step #2 completes, the client ID and secret generated for the app integration can be found in ${HOME}/.okta.env.
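For reference, the .okta.env file written by the Okta CLI looks roughly like this (all values below are placeholders; exact variable names may vary by CLI version):

```shell
# ${HOME}/.okta.env (sketch -- values are placeholders)
export OKTA_OAUTH2_ISSUER="https://dev-xxxxxxx.okta.com/oauth2/default"
export OKTA_OAUTH2_CLIENT_ID="<YOURCLIENTID>"
export OKTA_OAUTH2_CLIENT_SECRET="<YOURCLIENTSECRET>"
```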

Configuring NGINX as the OpenID Connect relying party

Now that we have finished setting up our IdP, we can start configuring NGINX Plus as the OpenID Connect relying party. Once logged into the NGINX Plus instance, simply run the configuration script from your home directory.

$ ./nginx-openid-connect/configure.sh -h <nginx-plus-hostname> -k request -i <YOURCLIENTID> -s <YOURCLIENTSECRET> -x https://dev-xxxxxxx.okta.com/.well-known/openid-configuration 

where 

  • -h defines the hostname of NGINX Plus 
  • -k defines how NGINX will retrieve JWK files to validate JWT signatures. The JWK file is retrieved from a subrequest to the IdP  
  • -i defines the Client ID generated from the IdP 
  • -s defines the Client Secret generated from the IdP 
  • -x defines the URL of the OpenID configuration endpoint. Using Okta as the example, the URL starts with your Okta organization domain, followed by the path URI /.well-known/openid-configuration 

The configure script will generate OIDC config files for NGINX Plus. We will copy the generated config files into the /etc/nginx/conf.d directory from part 1. 

$ sudo cp frontend.conf openid_connect.js openid_connect.server_conf openid_connect_configuration.conf /etc/nginx/conf.d/ 

You will notice that, by default, frontend.conf listens on port 8010 with cleartext HTTP. We need to merge kube_lb.conf into frontend.conf to enable the use cases from both Part 1 and Part 2. The resulting frontend.conf should look something like this: https://gist.github.com/nginx-gists/af067326734063da6a4ff42146873262 
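As a rough sketch of that merge (the gist is authoritative; the upstream name, node addresses, and certificate paths below are carried over from Part 1 and may differ in your setup):

```nginx
# frontend.conf after merging kube_lb.conf (abbreviated sketch)
upstream k8s_nodes {
    zone k8s_nodes 64k;
    server <node1-ip>:443;
    server <node2-ip>:443;
}

server {
    # OIDC endpoints (/_codexch, /_jwks_uri, @do_oidc_flow, ...) from the repo
    include conf.d/openid_connect.server_conf;

    listen 443 ssl;                      # TLS termination from Part 1
    server_name <nginx-plus-hostname>;

    ssl_certificate     /etc/ssl/nginx/default.crt;
    ssl_certificate_key /etc/ssl/nginx/default.key;

    location / {
        auth_jwt "" token=$session_jwt;      # validate the cached ID token
        auth_jwt_key_request /_jwks_uri;     # fetch JWKs via subrequest (-k request)
        error_page 401 = @do_oidc_flow;      # start the OIDC flow when unauthenticated
        proxy_pass https://k8s_nodes;        # load balancing from Part 1
    }
}
```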

Finally, I need to edit the openid_connect_configuration.conf file and change the client secret to the one generated by my Okta IdP.
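In that file, the client secret is defined in a map block keyed on $host; a sketch of the edit (variable names follow the nginx-openid-connect repo):

```nginx
# openid_connect_configuration.conf (excerpt)
map $host $oidc_client_secret {
    default "<YOURCLIENTSECRET>";   # replace with the secret from ${HOME}/.okta.env
}
```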

Reload NGINX Plus for the new config to take effect. 

$ nginx -s reload

Testing the Environment

Now we are ready to test our environment in action. To summarize: we set up an IdP and configured NGINX Plus as the identity-aware proxy to validate user ID tokens before traffic enters the Kubernetes cluster.

To test the environment, we will open a browser and enter the hostname of NGINX Plus into the address field. You should be redirected to your IdP login page.  

Note: The host name should resolve to the Public IP of the NGINX Plus machine. 

When your browser presents the IdP login page, enter user credentials defined in the IdP; once they are validated, you can access the Kubernetes pods.

Once you are logged into your application, the ID token of the authenticated user is stored in the NGINX Plus key-value store.
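The key-value store itself is declared in openid_connect_configuration.conf; a sketch of the relevant directives (zone name and state-file path follow the nginx-openid-connect repo and may differ in your copy):

```nginx
# Validated ID tokens are cached here, keyed by the session cookie
keyval_zone zone=oidc_id_tokens:1M state=conf.d/oidc_id_tokens.json timeout=1h;
keyval $cookie_auth_token $session_jwt zone=oidc_id_tokens;
```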

Enabling PKCE with OIDC

In the previous section, we learned how to configure NGINX Plus as the OIDC relying party to authenticate user identities attempting connections to protected Kubernetes applications. However, there are cases where attackers can intercept the authorization code issued by the IdP, hijack your ID tokens, and gain access to your sensitive applications.

PKCE (Proof Key for Code Exchange) is an extension of the OIDC Authorization Code flow designed to protect against authorization code interception and theft. PKCE provides an extra layer of security: an attacker would also need the code verifier, in addition to the authorization code, to obtain the ID token from the IdP.

In our current setup, NGINX Plus generates a random PKCE code verifier and sends its hashed form, the code challenge, as a query parameter when redirecting users to the IdP login page. The IdP uses this code challenge to validate the code verifier that NGINX Plus presents when the authorization code is exchanged for the ID token.

PKCE needs to be enabled on both NGINX Plus and the IdP. To enable PKCE verification on NGINX Plus, edit the openid_connect_configuration.conf file, set $oidc_pkce_enable to 1, and reload NGINX Plus.
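Since the repo defines $oidc_pkce_enable in a map block, the edit looks roughly like this:

```nginx
# openid_connect_configuration.conf (excerpt) -- turn PKCE on
map $host $oidc_pkce_enable {
    default 1;   # 0 disables PKCE (the default), 1 enables it
}
```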

Depending on the IdP you are using, a checkbox should be available to enable PKCE.

Testing the Environment

To test that PKCE is working, we will open a browser and enter the NGINX Plus hostname once again. You should be redirected to the login page; this time you will notice that the URL has changed slightly, with additional query parameters:

  • code_challenge_method – Method used to hash the plain code verifier (typically S256, i.e. SHA-256) 
  • code_challenge – The hashed value of the plain code verifier 

NGINX Plus will later present the plain code verifier along with the authorization code in exchange for the ID token. NGINX Plus will then validate the ID token and store it in cache.
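To make the relationship between the two parameters concrete, here is how a code_challenge is derived from a code verifier under the S256 method defined in RFC 7636. The verifier below is the RFC's own test vector, not one generated by NGINX Plus:

```shell
# code_challenge = BASE64URL( SHA-256( code_verifier ) ), per RFC 7636
verifier="dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
code_challenge=$(printf '%s' "$verifier" \
  | openssl dgst -sha256 -binary \
  | openssl base64 -A \
  | tr '+/' '-_' | tr -d '=')
echo "$code_challenge"   # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

Because only the hash travels in the redirect, an attacker who intercepts the authorization code still cannot redeem it without the original verifier.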

Extending OIDC with 3rd party Systems

Customers may need to integrate their OIDC workflow with proprietary authentication/authorization systems already in production. For example, additional metadata pertaining to a user may need to be collected from an external Redis cache or JFrog Artifactory. We can fill this gap by extending the diagram from the previous section.

 

In addition to token validation with NGINX Plus, I pull response data from JFrog Artifactory and pass it to the backend applications once users are authenticated. 

Note: I am using JFrog Artifactory as an example 3rd party endpoint here. I can technically use any endpoint I want.

Testing the Environment

To test our environment in action, I will make a few updates to my NGINX OIDC configuration. You can pull our updated GitHub repository and use it as reference for updating your NGINX configuration. 

Update #1: Adding the njs source code

The first update extends NGINX Plus with njs to retrieve response data from our 3rd party system. Add the KvOperations.js source file to your /etc/nginx/njs directory.

Update #2: frontend.conf

I am adding lines 37-39 to frontend.conf so that NGINX Plus initiates the sub-request to our 3rd party system after users are authenticated with OIDC. We set the URI of this sub-request to /kvtest. More on this in the next update.

Update #3: openid_connect.server_conf

We are adding lines 35-48 to openid_connect.server_conf, consisting of two internal redirects in NGINX: 

  • /kvtest: sub-requests with URI /kvtest are handled by functions in KvOperations.js
  • /auth: sub-requests with URI /auth are proxied to the 3rd party endpoint. You can replace <artifactory-url> in line 47 with your own endpoint
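A sketch of what those two internal locations might look like (the js_import module name and handler name are illustrative; use the function actually exported by KvOperations.js):

```nginx
# openid_connect.server_conf (excerpt, sketch)
# assumes: js_import kvOperations from njs/KvOperations.js;

location /kvtest {
    internal;
    js_content kvOperations.kvTest;   # hypothetical handler in KvOperations.js
}

location /auth {
    internal;
    proxy_pass <artifactory-url>;     # replace with your 3rd party endpoint
}
```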

Update #4: openid_connect_configuration.conf

This update is optional and applies when passing dynamic variables to your 3rd party endpoints. You can dynamically update variables on the fly by sending POST requests to the NGINX Plus Key-Value store.

$ curl -iX POST -d '{"admin": "<input-value>"}' http://localhost:9000/api/7/http/keyvals/<keyval-zone-name> 

We define and instantiate the new key-value store in lines 102-104. Once the updates are complete, I can test the optimized OIDC environment by verifying that the application is on the receiving end of my dynamic input values.

Wrapping it up 

I covered a subset of ZT use cases with NGINX in hybrid architectures. The use cases presented in this article center around authentication. In the next part of our series, I will cover more ZT use cases, including: 

  • Alternative authentication methods (SAML)
  • Encryption
  • Authorization and Access Control
  • Monitoring/Auditing
Published May 16, 2024
Version 1.0
