Building on a previous article and use case, this article discusses three more advanced features of F5 NGINX Ingress Controller:
This article is a direct follow-on from my previous article and accompanying code repo; understanding of that use case is a prerequisite for this article. To summarize the previous article, I showed how to configure NGINX Ingress Controller to:
Under the heading Advanced features you might add yourself, I listed three features that might be requirements in a production setting where multiple Ingress controller pods were running:
I leaned heavily on Liam Crilly's excellent guide that explains how to do this for NGINX outside of K8s. But completing those last three requirements was a little tougher than I expected! So I created another, separate code repo along with this article.
This is the easiest of the requirements to meet. Caching to disk with NGINX is documented here, although I preferred to learn from this easy guide. Whether in K8s or a more traditional NGINX installation, caching is configured with directives in the http, server, or location contexts. Because caching is not directly defined in the VirtualServer (VS) or VirtualServerRoute (VSR) CRD schema, we configure these directives in K8s via snippets, which are available both in the CRD spec and in the ConfigMap that customizes NGINX's behavior. In my example, I use http-snippets in my ConfigMap to add the proxy_cache_path directive in the http context, and location-snippets in my VirtualServerRoute to add the proxy_cache directive and others in the location context.
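Put together, the two snippets look roughly like this. This is a minimal sketch, not the repo's exact manifests: the zone name, cache path, hostnames, and service names are illustrative.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  http-snippets: |
    # One disk-backed cache zone; 10 MB of keys, idle entries evicted after 10m
    proxy_cache_path /tmp/nginx_cache keys_zone=my_cache:10m inactive=10m;
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: coffee
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  subroutes:
  - path: /coffee
    location-snippets: |
      # Enable caching for this location, referencing the zone defined above
      proxy_cache my_cache;
      proxy_cache_valid 200 10m;
    action:
      pass: coffee
```

Because proxy_cache_path is only valid in the http context, it has to live in the ConfigMap's http-snippets; the per-location directives can sit in the VSR's location-snippets.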
Caching to disk is great, but if you're running multiple NGINX instances you may want to distribute this cache so that it is consistent across instances and local to each. In a production environment, you will likely have multiple Ingress Controller pods in a Deployment or DaemonSet. Therefore, we'll use the key-value store to cache the response code (HTTP 204 or 401) of auth subrequests. Doing this inside K8s added some complexity for me, which I'll walk through below.
The keyval_zone and keyval directives work together to create a shared-memory zone for the key-value store and to define the key and value to be stored. In my example, they are http-snippets in the ConfigMap. To sync the key-value store across instances we use the zone_sync and zone_sync_server directives, which I've added via stream-snippets in the ConfigMap, together with a headless service. To learn this, I followed what I'd seen in Liam's article as well as a ConfigMap I'd seen in an OIDC example.
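As a sketch, the ConfigMap snippets and the headless service look something like the following. All names, ports, and the DNS resolver address are assumptions for illustration; the key point is that the headless service (clusterIP: None) makes DNS return every pod IP, so each instance can find its peers for zone sync.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  http-snippets: |
    # Shared-memory zone for the key-value store; "sync" replicates it across
    # instances (sync requires a timeout on the zone)
    keyval_zone zone=auth_tokens:1m timeout=1h sync;
    # Key: the Authorization header; value: the cached auth response code
    keyval $http_authorization $auth_status zone=auth_tokens;
  stream-snippets: |
    # Resolver so zone_sync_server can re-resolve the headless service
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;
    server {
      listen 12345;
      zone_sync;
      zone_sync_server nginx-ingress-headless.nginx-ingress.svc.cluster.local:12345 resolve;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-headless
  namespace: nginx-ingress
spec:
  clusterIP: None   # headless: DNS returns the IP of every matching pod
  selector:
    app: nginx-ingress
  ports:
  - name: zone-sync
    port: 12345
```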
I also need to import my script, which I do with an http-snippet around line 8 in the VirtualServer that sets the js_import directive.
With the module loaded and script imported, I can call functions within my script using the js_content directive. In my example, I've created a location called auth_js and used this directive, all within a server-snippet around line 13 in my VirtualServer; the js_content directive there calls a function exported by my script.
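These two snippets on the VirtualServer might look like the sketch below (the njs module itself is loaded separately, e.g. via a main-snippet in the ConfigMap). The host, upstream, and exported function name `introspectToken` are illustrative, not taken from the repo; the script path matches the volume mount described next.

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  http-snippets: |
    # Import the njs script mounted into the pod from a ConfigMap volume
    js_import auth_keyval from /etc/nginx/njs/auth_keyval.js;
  server-snippets: |
    location = /auth_js {
      internal;
      # Hand the auth subrequest to a function exported by the script;
      # "introspectToken" is an assumed name for illustration
      js_content auth_keyval.introspectToken;
    }
  upstreams:
  - name: backend
    service: backend-svc
    port: 80
  routes:
  - path: /
    action:
      pass: backend
```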
How did I get this script onto this path on disk when I'm using a container image? In K8s, you can put the script in a ConfigMap and mount that ConfigMap data as a volume on the pod via the deployment manifest. In my example, the ConfigMap is called njs-cm.yaml (here) and the deployment file is nginx-ingress.yaml (here). After this, my njs script is on the Ingress Controller's disk at /etc/nginx/njs/auth_keyval.js.
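The mechanism can be sketched like this: the ConfigMap carries the script as a data key, and the deployment mounts it where NGINX expects it. The labels and container name below are assumptions; see the repo's njs-cm.yaml and nginx-ingress.yaml for the real manifests.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: njs-cm
  namespace: nginx-ingress
data:
  auth_keyval.js: |
    // njs script body goes here (see the repo for the real script)
---
# Relevant fragment of the deployment: mount the ConfigMap as a volume
spec:
  template:
    spec:
      volumes:
      - name: njs-volume
        configMap:
          name: njs-cm
      containers:
      - name: nginx-plus-ingress
        volumeMounts:
        - name: njs-volume
          mountPath: /etc/nginx/njs   # script lands at /etc/nginx/njs/auth_keyval.js
```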
To write to or read from the key-value store, we can use either the REST API or njs. In our scenario, we write and read with njs. But we also need a way to revoke cached authorizations (i.e., remove key-value pairs), and we want to use the API for that.
By default, the NGINX Ingress Controller image has API access disabled for anyone reaching the API over the network, but enabled via Unix socket. I can tell this from the default main template, but you can also launch a container and read the default file at /etc/nginx/nginx.conf. This is why we've been able to use njs to read and write key-value pairs, but still cannot write or delete key-value pairs with REST calls.
To finish with the complexity, we have to expose the NGINX Plus API as writeable at a location and then use that location for our API calls. In my example, I've added a location called /api with the directive api write=on. This location was added via a server-snippet in my VirtualServer resource around line 15, simply because that uses fewer lines than the alternative of creating a VSR with a location-snippet.
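The server-snippet fragment is small; a sketch (to sit alongside the auth_js location in the same VirtualServer) might look like:

```yaml
server-snippets: |
  location /api {
    # Writeable NGINX Plus API at /api; in production, restrict access
    # to this location (allow/deny, auth) since write=on permits changes
    api write=on;
  }
```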
Now I can use cURL commands to remove entries from my key-value store by targeting my website at the /api path. Here's the article I followed to learn the cURL commands for adding and removing entries from the key-value store.
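For reference, the calls look roughly like this. The hostname, keyval zone name (auth_tokens), key, and API version (8) are all assumptions for illustration; adjust them to your deployment.

```shell
# List all entries in the keyval zone
curl http://cafe.example.com/api/8/http/keyvals/auth_tokens

# Add an entry (cache an authorization result for a token)
curl -X POST -d '{"Bearer abc123":"204"}' \
  http://cafe.example.com/api/8/http/keyvals/auth_tokens

# Revoke one cached authorization: PATCH the key's value to null
curl -X PATCH -d '{"Bearer abc123":null}' \
  http://cafe.example.com/api/8/http/keyvals/auth_tokens

# Remove every entry in the zone
curl -X DELETE http://cafe.example.com/api/8/http/keyvals/auth_tokens
```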
My customer did ask for future functionality that would allow common features, like authentication and caching, to be configured directly in the CRD spec rather than via snippets. This is under consideration, but my takeaway is that snippets are incredibly powerful for configuring NGINX Ingress Controller and, along with the NGINX Plus API and dashboard, provide advanced functionality in a solution that is supported and enterprise-grade.
Please reach out if you'd like me to explain more!
Accompanying GitHub repo: https://github.com/mikeoleary/nginx-auth-plus-externalname-advanced
Part 1 of this use case: ExternalName Service and Authentication Subrequests with NGINX Ingress Controller
Use case overview with NGINX (outside of K8s): Validating OAuth 2.0 Access Tokens with NGINX and NGINX Plus - NGINX
Example of REST API with key-value store: Using the NGINX Plus Key-Value Store to Secure Ephemeral SSL Keys from HashiCorp Vault - NGINX