Simplifying OIDC and SSO with the New NGINX Plus R34 OIDC Module

Introduction: Why OIDC and SSO Matter

As web infrastructures scale and modernize, strong and standardized methods of authentication become essential. OpenID Connect (OIDC) provides a flexible layer on top of OAuth 2.0, enabling both user authentication (login) and authorization (scopes, roles). By adopting OIDC for SSO, you can:

  • Provide a frictionless login experience across multiple services.
  • Consolidate user session management, removing custom auth code from each app.
  • Lay a foundation for Zero Trust policies, by validating and enforcing identity at the network’s edge.

While Zero Trust is a broader security model that extends beyond SSO alone, implementing OIDC at the proxy level is an important piece of the puzzle. It ensures that every request is associated with a verified identity, enabling fine-grained policies and tighter security boundaries across all your applications.

NGINX, acting as a reverse proxy, is an ideal place to manage these OIDC flows. However, the journey to robust OIDC support in NGINX has evolved - from an njs-based approach with scripts and maps, to a far more user-friendly native module in NGINX Plus R34.

The njs-based OIDC Solution

Before the native OIDC module, many users turned to the njs-based reference implementation. This setup combines multiple pieces:

  • njs script to handle OIDC flows (redirecting to the IdP, exchanging tokens, etc.).
  • auth_jwt module for token validation.
  • keyval module to store and pair a session cookie with the actual ID token.

While it covers the essential OIDC steps (redirects, code exchanges, and forwarding claims), it has some drawbacks:

  • Configuration complexity. Most of the logic hinges on creative use of NGINX directives (such as map), which can become cumbersome, especially if you use more than one authentication provider or your environment changes frequently.
  • Limited metadata discovery. It doesn’t natively fetch the IdP’s `.well-known/openid-configuration`. Instead, a separate bash script queries the IdP and rewrites parts of the NGINX config. Any IdP change requires you to re-run that script and reload NGINX.
  • Performance overhead. The njs solution effectively revalidates ID tokens on every request.
    Why? Because NGINX on its own doesn’t maintain a traditional server-side session object. Instead, it simulates a “session” by tying a cookie to the user’s id_token in keyval. auth_jwt checks the token on each request, retrieving it from keyval, verifying the signature and expiration, and extracting the claims. Under heavy load, this constant JWT validation can become expensive.

For many, that extra overhead conflicts with how modern OIDC clients typically behave: they maintain short-lived session cookies and validate the token once per session rather than on every request. Hence the motivation for a native OIDC module.
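For context, here is an abbreviated, illustrative sketch of the moving parts the njs-based approach involves. It is not the full reference configuration (which spans several include files and the njs script itself), and the hostnames, zone names, and sizes are placeholders:

# Illustrative excerpt only - the real njs-based reference config is considerably longer
js_import oidc from conf.d/openid_connect.js;

# IdP endpoints are maintained by hand (or rewritten by the helper script)
map $host $oidc_authz_endpoint {
    default "https://idp.example.com/realms/nginx/protocol/openid-connect/auth";
}
map $host $oidc_token_endpoint {
    default "https://idp.example.com/realms/nginx/protocol/openid-connect/token";
}

# A browser cookie is paired with the user's ID token in a keyval zone
keyval_zone zone=oidc_id_tokens:1M timeout=1h;
keyval $cookie_auth_token $session_jwt zone=oidc_id_tokens;

server {
    listen 443 ssl;

    location / {
        # auth_jwt revalidates the stored ID token on every request
        auth_jwt "" token=$session_jwt;
        auth_jwt_key_request /_jwks_uri;
        proxy_pass http://127.0.0.1:8080;
    }
}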

Meet the New Native OIDC Module in NGINX Plus R34

With the complexities of the njs-based approach in mind, NGINX introduced a fully integrated OIDC module in NGINX Plus R34. This module is designed to be a “proper” OIDC client, including:

  • Automatic TLS-only communication with the IdP.
  • Full metadata discovery (no external scripts needed).
  • Authorization code flows.
  • Token validation and caching.
  • A real session model using secure cookies.
  • Access token support (including automatic refresh).
  • Straightforward mapping of user claims to NGINX variables.

We also have a Deployment Guide that shows how to set up this module for popular IdPs like Okta, Keycloak, Entra ID, and others. That guide focuses on typical use cases (obtaining tokens, verifying them, and passing claims upstream). However, here we’ll go deeper into how the module works behind the scenes, using Keycloak as our IdP.

Our Scenario: Keycloak + NGINX Plus R34

We’ll demonstrate with a straightforward Keycloak realm called nginx and a client, also named nginx, that has “client authentication” and the “standard flow” enabled.

Minimal Configuration Example:

http {
    resolver 1.1.1.1 ipv4=on valid=300s;

    oidc_provider keycloak {
        issuer https://kc.route443.dev/realms/nginx;
        client_id nginx;
        client_secret secret;
    }

    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        ssl_certificate /etc/ssl/certs/fullchain.pem;
        ssl_certificate_key /etc/ssl/private/key.pem;

        location / {
            auth_oidc keycloak;

            proxy_set_header sub   $oidc_claim_sub;
            proxy_set_header email $oidc_claim_email;
            proxy_set_header name  $oidc_claim_name;
            proxy_pass http://127.0.0.1:8080;
        }
    }

    server {
        # Simple test backend
        listen 8080;

        location / {
            default_type text/plain;
            return 200 "Hello, $http_name!\nEmail: $http_email\nKeycloak sub: $http_sub\n";
        }
    }
}

Configuration Breakdown

  • oidc_provider keycloak {}. Points to our Keycloak issuer and supplies the client_id and client_secret. It automatically triggers .well-known/openid-configuration discovery. One important note: all interaction with the IdP happens exclusively over SSL/TLS, so NGINX must trust the certificate presented by Keycloak. By default, this trust is validated against your system’s CA bundle (the default CA store of your Linux or FreeBSD distribution). If the IdP’s certificate is not included in the system CA bundle, you can explicitly specify a trusted certificate or chain using the ssl_trusted_certificate directive so that NGINX can validate your Keycloak certificate (see the sketch after this list).
  • auth_oidc keycloak. For any request to https://n1.route443.dev/, NGINX checks whether the user has a valid session. If not, it starts the OIDC flow.
  • Passing claims upstream. We add the headers sub, email, and name based on $oidc_claim_sub, $oidc_claim_email, and $oidc_claim_name, the module’s built-in variables extracted from the ID token.
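Here is a minimal sketch of that trust configuration. The certificate path is an assumption, and the exact context for the directive (http level or inside the oidc_provider block) may depend on your NGINX Plus release, so check the module reference for your version:

http {
    resolver 1.1.1.1 ipv4=on valid=300s;

    oidc_provider keycloak {
        issuer        https://kc.route443.dev/realms/nginx;
        client_id     nginx;
        client_secret secret;

        # Assumed path to the CA (or chain) that signed Keycloak's certificate
        ssl_trusted_certificate /etc/ssl/certs/keycloak-ca.pem;
    }
}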

Step-by-Step: Under the Hood of the OIDC Flow

Retrieving and Caching OIDC Metadata

As soon as you send an HTTP GET request to https://n1.route443.dev, you’ll see that NGINX redirects you to Keycloak’s authentication page. However, before that redirect happens, several interesting steps occur behind the scenes. Let’s take a closer look at the very first thing NGINX does in this flow:

NGINX checks if it has cached OIDC metadata for the IdP. If no valid cache exists, NGINX constructs a metadata URL by appending /.well-known/openid-configuration to the issuer you specified in the config. It then resolves the IdP’s hostname using the resolver directive.

NGINX parses the JSON response from the IdP, extracting critical parameters such as issuer, authorization_endpoint and token_endpoint. It also inspects response_types_supported to confirm that this IdP supports the authorization code flow and ID tokens. These details are essential for the subsequent steps in the OIDC process.

NGINX caches these details for one hour (or for however long the IdP’s Cache-Control headers specify), so it doesn’t need to re-fetch them on every request. This process happens in the background. However, you might notice a slight delay for the very first user if the cache is empty, since NGINX needs a fresh copy of the metadata.

Below is an example of the metadata request and response:

NGINX -> IdP:
HTTP GET /realms/nginx/.well-known/openid-configuration

IdP -> NGINX:
HTTP 200 OK
Content-Type: application/json
Cache-Control: no-cache  // Means NGINX will store it for 1 hour by default

{
  "issuer": "https://kc.route443.dev/realms/nginx",
  "authorization_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth",
  "token_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/token",
  "jwks_uri": "http://kc.route443.dev:8080/realms/nginx/protocol/openid-connect/certs",
  "response_types_supported": [
    "code","none","id_token","token","id_token token",
    "code id_token","code token","code id_token token"
  ]
  // ... other parameters
}

This metadata tells NGINX everything it needs to know about how to redirect users for authentication, where to request tokens afterward, and which JWT signing keys to trust. By caching these results, NGINX avoids unnecessary lookups on subsequent logins, making the process more efficient for every user who follows.

Building the Authorization URL & Setting a Temporary Session Cookie

Now that NGINX has discovered and cached the IdP’s metadata, it’s ready to redirect your browser to Keycloak for the actual login. Here’s where the OpenID Connect Authorization Code Flow begins in earnest. NGINX appends a few crucial parameters - response_type=code, client_id, redirect_uri, state, and nonce - to the authorization_endpoint it learned from the metadata, then sends you the following HTTP 302 response:

NGINX -> User Agent:
HTTP 302 Moved Temporarily
Location: https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth?response_type=code&scope=openid&client_id=nginx&redirect_uri=https%3A%2F%2Fn1.route443.dev%2Foidc_callback&state=state&nonce=nonce
Set-Cookie: NGX_OIDC_SESSION=temp_cookie; Path=/; Secure; HttpOnly

At this point, you’re probably noticing the Set-Cookie: NGX_OIDC_SESSION=temp_cookie; line. This is a temporary session cookie, sometimes called a “pre-session” cookie. NGINX needs it to keep track of your “in-progress” authentication state - so once you come back from Keycloak with the authorization code, NGINX will know how to match that code to your browser session. However, since NGINX hasn’t actually validated any tokens yet, this cookie is only ephemeral. It remains a placeholder until Keycloak returns valid tokens and NGINX completes the final checks. Once that happens, you’ll get a permanent session cookie, which will then store your real session data across requests.

User Returns to NGINX with an Authorization Code

Once the user enters their credentials on Keycloak’s login page and clicks “Login”, Keycloak redirects the browser back to the URL specified in the redirect_uri parameter. In our example, that is https://n1.route443.dev/oidc_callback. It’s worth noting that /oidc_callback is just the default location; if you ever need something different, you can change it via the redirect_uri directive in the OIDC module configuration, as sketched below.
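Here is a minimal sketch of overriding the callback path. The path /auth/callback is purely illustrative, and the rest of the provider block matches our earlier example:

oidc_provider keycloak {
    issuer        https://kc.route443.dev/realms/nginx;
    client_id     nginx;
    client_secret secret;

    # Assumed custom callback path instead of the default /oidc_callback
    redirect_uri  /auth/callback;
}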

When Keycloak redirects the user, it includes several query parameters in the URL, most importantly, the code parameter (the authorization code) and state, which NGINX uses to ensure this request matches the earlier session-setup steps. Here’s a simplified example of what the callback request might look like:

 

User Agent -> NGINX:
HTTP GET /oidc_callback
    Query Parameter: state=state
    Query Parameter: session_state=keycloak_session_state
    Query Parameter: iss=https://kc.route443.dev/realms/nginx
    Query Parameter: code=code

Essentially, Keycloak is handing NGINX a “proof” that this user successfully logged in, along with a cryptographic token (the code) that lets NGINX exchange it for real ID and access tokens. Since /oidc_callback is tied to NGINX’s native OIDC logic, NGINX automatically grabs these parameters, checks whether the state parameter matches what it originally sent to Keycloak, and then prepares to make a token request to the IdP’s token_endpoint. Note that the OIDC module does not use the iss parameter to identify the provider; instead, provider identity is verified through the state parameter and the pre-session cookie, which references a provider-specific key.

Exchanging the Code for Tokens and Validating the ID Token

Once NGINX receives the /oidc_callback request and checks all its parameters, it sends a POST request to Keycloak’s token_endpoint, supplying the authorization code, client credentials, and the redirect_uri:

 

NGINX -> IdP:
POST /realms/nginx/protocol/openid-connect/token
Host: kc.route443.dev
Authorization: Basic bmdpbng6c2VjcmV0

Form data:
grant_type=authorization_code
code=5865798e-682e-4eb7-8e3e-2d2c0dc5132e.f2abd107-35c1-4c8c-949f-03953a5249b2.nginx
redirect_uri=https://n1.route443.dev/oidc_callback

Keycloak responds with a JSON object containing at least an id_token and an access_token, plus token_type=bearer. Depending on your IdP’s configuration and the scope you requested, the response might also include a refresh_token and an expires_in field. The expires_in value indicates how long the access token is valid (in seconds), and NGINX can use it to decide when to request a new token on the user’s behalf. At this point, the module also validates the ID token’s claims, ensuring that fields like iss, aud, exp, and nonce align with what was sent initially. If any of these checks fail, the token is deemed invalid and the request is rejected.
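For illustration, an abbreviated token response might look like the following; the exact fields and lifetimes depend on your realm and client configuration, and the token values are truncated:

IdP -> NGINX:
HTTP 200 OK
Content-Type: application/json

{
  "access_token": "eyJhbGciOiJSUzI1NiIs...",
  "expires_in": 300,
  "refresh_token": "eyJhbGciOiJIUzUxMiIs...",
  "token_type": "Bearer",
  "id_token": "eyJhbGciOiJSUzI1NiIs...",
  "scope": "openid email profile"
  // ... other fields
}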

Once everything checks out, NGINX stores the tokens and session details. Here, the OIDC module takes advantage of the keyval mechanism to keep track of user sessions. You might wonder, “Where is that keyval zone configured?” The short answer is that it’s automatic for simplicity, unless you want to override it with your own settings. By default, you get up to 8 MB of session storage, which is more than enough for most use cases. But if you need something else, you can specify a custom zone via the session_store directive. If you’re curious to see this store in action, you can even inspect it through the NGINX Plus API endpoint, for instance:

GET /api/9/http/keyvals/oidc_default_store_keycloak

(where oidc_default_store_ is the prefix and keycloak is your oidc_provider name).
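That request assumes the NGINX Plus API is enabled somewhere in your configuration. A minimal sketch of exposing it (the port and location path are assumptions) would be:

server {
    # Keep the API on a loopback-only listener
    listen 127.0.0.1:8000;

    location /api {
        api;   # read-only mode is enough for inspecting the key-value store
    }
}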

With the tokens now safely validated and stashed, NGINX is ready to finalize the session. The module issues a permanent session cookie back to the user and transitions them into the “logged-in” state, exactly what we’ll see in the next step.

Finalizing the Session and Passing Claims Upstream

Once NGINX verifies all tokens and securely stores the user’s session data, it sends a final HTTP 302 back to the client, this time setting a permanent session cookie:

NGINX -> User Agent:
HTTP 302 Moved Temporarily
Location: https://n1.route443.dev/
Set-Cookie: NGX_OIDC_SESSION=permanent_cookie; Path=/; Secure; HttpOnly

At this point, the user officially has a valid OIDC session in NGINX. Armed with that session cookie, they can continue sending requests to the protected resource (in our case, https://n1.route443.dev/). Each request now carries the NGX_OIDC_SESSION cookie, so NGINX recognizes the user as authenticated and automatically injects the relevant OIDC claims into request headers - such as sub, email, and name. This means your upstream application at http://127.0.0.1:8080 can rely on these headers to know who the user is and handle any additional logic accordingly.

Working with OIDC Variables

Now, let’s talk about how you can leverage the OIDC module for more than just simple authentication. One of its biggest strengths is its ability to extract token claims and forward them upstream in request headers. Any claim in the token can be used as an NGINX variable named $oidc_claim_name, where name is whichever claim you’d like to extract. In our example, we’ve already shown how to pass sub, email, and name, but you can use any claims that appear in the token. For a comprehensive list of possible claims, check the OIDC specification as well as your IdP’s documentation.
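For example, assuming your IdP includes a preferred_username claim in the ID token (Keycloak typically does), you could forward it with a line like:

proxy_set_header x-preferred-username $oidc_claim_preferred_username;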

Beyond individual claims, you can also access the entire ID and Access Tokens directly via $oidc_id_token and $oidc_access_token. These variables can come in handy if you need to pass an entire token in a request header, or if you’d like to inspect its contents for debugging purposes.
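As a sketch, one common pattern is to hand the access token to an upstream API as a standard bearer header; the location path and header usage here are illustrative:

location /reports/ {
    auth_oidc keycloak;

    # Forward the user's access token so the backend can authorize the call itself
    proxy_set_header Authorization "Bearer $oidc_access_token";
    proxy_pass http://127.0.0.1:8080;
}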

As you can see, configuring NGINX as a reverse proxy with OIDC support doesn’t require you to be an authentication guru. All you really need to do is set up the module, specify the parameters you want, and decide which token claims you’d like to forward as headers.

Handling Nested or Complex Claims (Using auth_jwt)

Note

The workaround in this section (and the next) goes slightly beyond the “native OIDC” approach, since we’re now relying on additional steps to parse more complex data. However, this method remains the only way to handle advanced token structures at the time of writing.

Sometimes, the claim you need to extract is actually a nested object, or even an array. That’s not super common, but it can happen if your Identity Provider returns complex data structures in the token. Currently, the OIDC module can’t directly parse nested claims - this is a known limitation that should be addressed in future releases.

In the meantime, your best workaround is to use the auth_jwt module. Yes, it’s a bit of a detour, but right now it’s the only way (whether you use an njs-based approach or the native OIDC module) to retrieve more intricate structures from a token. Let’s look at an example where the address claim is itself an object containing street, city, and zip, and we only want the city field forwarded as a header.
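For illustration, the relevant part of such an ID token payload might look like this (the values are made up):

{
  "sub": "7f3de1b2-5c1a-4a9e-9c2d-4f8f3b6a2e10",
  "email": "user@example.com",
  "name": "Jane Doe",
  "address": {
    "street": "Main Street 1",
    "city": "Amsterdam",
    "zip": "1000 AA"
  }
}

On the NGINX side, the configuration to pull out just the city looks like this: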

http {
    auth_jwt_claim_set $city address city;

    server {
        ...
        location / {
            auth_oidc keycloak;
            auth_jwt off token=$oidc_id_token;

            proxy_set_header x-city $city;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}

Notice how we’ve set auth_jwt off token=$oidc_id_token. We’re effectively telling auth_jwt to not revalidate the token (because it was already validated during the initial OIDC flow) but to focus on extracting additional claims from it. Meanwhile, the auth_jwt_claim_set directive specifies the variable $city and points it to the nested city field in the address claim. With this in place, you can forward that value in a custom header (x-city) to your application.

And that’s it. By combining the OIDC module for authentication with the auth_jwt module for more nuanced claim extraction, you can handle even the trickiest token structures in NGINX. In most scenarios, though, you’ll find that the straightforward $oidc_claim_ variables do the job just fine, with no extra modules needed.

Role-Based Access Control (Using auth_jwt)

As you’ve noticed, because we’re not revalidating the token signature on every request, the overhead introduced by the auth_jwt module is fairly minimal. That’s great news for performance. But auth_jwt also opens up additional possibilities, like the ability to leverage the auth_jwt_require directive. With this, you can tap into NGINX not just for authentication, but also for authorization, restricting access to certain parts of your site or API based on claims (or any other variables you might be tracking).

For instance, maybe you only want to grant admin-level users access to a specific admin dashboard. If a user’s token doesn’t include the right claim (like role=admin), you want to deny entry. Let’s take a quick look at how this might work in practice:

 

http {
    map $jwt_claim_role $role_admin {
        "admin" 1;
    }

    server {
        ...
        # Location for admin-only resources:
        location /admin {
            auth_jwt foo token=$oidc_id_token;

            # Check that $role_admin is not empty and not "0" -> otherwise return 403:
            auth_jwt_require $role_admin error=403;

            # If 403 happens, we show a custom page:
            error_page 403 /403_custom.html;

            proxy_pass http://127.0.0.1:8080;
        }

        # Location for the custom 403 page
        location = /403_custom.html {
            # Internal, so it can't be directly accessed from outside
            internal;
            # Return the 403 status and a custom message
            return 403 "Access restricted to admins only!";
        }
    }
}

How It Works:

In our map block, we check the user’s $jwt_claim_role and set $role_admin to 1 if it matches "admin". Then, inside the /admin location, we have something like:

auth_jwt foo token=$oidc_id_token;
auth_jwt_require $role_admin error=403;

Here, foo is simply the realm name (a generic string you can customize), and token=$oidc_id_token tells NGINX which token to parse. At first glance, this might look like a normal auth_jwt configuration - but notice that we haven’t specified a public key via auth_jwt_key_file or auth_jwt_key_request. That means NGINX isn’t re-verifying the token’s signature here. Instead, it’s only parsing the token so we can use its claims within auth_jwt_require.

Thanks to the fact that the OIDC module has already validated the ID token earlier in the flow, this works perfectly fine in practice. We still get access to $jwt_claim_role and can enforce auth_jwt_require $role_admin error=403;, ensuring anyone without the “admin” role gets an immediate 403 Forbidden. Meanwhile, we display a friendlier message by specifying:

error_page 403 /403_custom.html;

So even though it might look like a normal JWT validation setup, it’s really a lesser-known trick to parse claims without re-checking signatures, leveraging the prior validation done by the OIDC module. This approach neatly ties together the native OIDC flow with role-based access control - without requiring us to juggle another set of keys.

Logout in OIDC

So far, we’ve covered how to log in with OIDC and handle advanced scenarios like nested claims or role-based control. But there’s another critical topic: how do users log out? The OpenID Connect standard lays out several mechanisms:

  • RP-Initiated Logout: The relying party (NGINX in this case) calls the IdP’s logout endpoint, which can clear sessions both in NGINX and at the IdP level.
  • Front-Channel Logout: The IdP provides a way to notify the RP via a front-channel mechanism (often iframes or redirects) that the user has ended their session.
  • Back-Channel Logout: Uses server-to-server requests between the IdP and the RP to terminate sessions behind the scenes.

Right now, the native OIDC module in its first release does not fully implement these logout flows. They’re on the roadmap, but as of today, you may need a workaround if you want to handle sign-outs more gracefully. Still, one of the great things about NGINX is that even if a feature isn’t officially implemented, you can often piece together a solution with a little extra configuration.

A Simple Logout Workaround

Imagine you have a proxied application that includes a “Logout” button or link. You want clicking that button to end the user’s NGINX session. Below is a conceptual snippet showing how you might achieve that:

http {
    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        # OIDC provider config omitted for brevity
        # ...

        location / {
            auth_oidc keycloak;
            proxy_pass http://127.0.0.1:8080;
        }

        # "Logout" location that invalidates the session
        location /logout {
            # Here, we forcibly remove the NGX_OIDC_SESSION cookie
            add_header Set-Cookie "NGX_OIDC_SESSION=; Path=/; HttpOnly; Secure; Expires=Thu, 01 Jan 1970 00:00:00 GMT";
            
            # Optionally, we can redirect the user to a "logged out" page
            return 302 "https://n1.route443.dev/logged_out";
        }

        location = /logged_out {
            # A simple page or message confirming the user is logged out
            return 200 "You've been logged out.";
        }
    }
}

  • /logout location: When the user clicks the “logout” link in your app, it can redirect them here.
  • Clearing the cookie: We set NGX_OIDC_SESSION to an expired value, ensuring NGINX no longer recognizes this OIDC session on subsequent requests.
  • Redirect to a “logged out” page: We redirect the user to /logged_out, or wherever you want them to land next.

Keep in mind, this approach only logs the user out at the NGINX layer. The user might still have an active session with the IdP (Keycloak, Entra ID, etc.), because the IdP manages its own cookies. A fully synchronized logout - where both the RP and the IdP sessions end simultaneously - would require an actual OIDC logout flow, which the current module hasn’t fully implemented yet.
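If you also need to end the Keycloak session itself, one stopgap (outside the module, and a sketch only) is to clear the NGINX cookie and then redirect the browser to Keycloak’s RP-initiated logout endpoint yourself. The endpoint path and query parameters below follow current Keycloak conventions, so verify them against your IdP’s documentation:

location /logout {
    add_header Set-Cookie "NGX_OIDC_SESSION=; Path=/; HttpOnly; Secure; Expires=Thu, 01 Jan 1970 00:00:00 GMT";

    # Hand the browser over to Keycloak so it can clear its own session too
    return 302 "https://kc.route443.dev/realms/nginx/protocol/openid-connect/logout?client_id=nginx&post_logout_redirect_uri=https%3A%2F%2Fn1.route443.dev%2Flogged_out";
}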

Conclusion

Whether you’re looking to protect a basic web app, parse claims, or enforce role-based policies, the native OIDC module in NGINX Plus R34 offers a way to integrate modern SSO at the proxy layer. Although certain scenarios (like nested claim parsing or fully-fledged OIDC logout) may still require workarounds and careful configuration, the out-of-the-box experience is already much more user-friendly than older njs-based solutions, and new features continue to land in every release.

If you’re tackling more complex setups - like UserInfo endpoint support, advanced session management, or specialized logout requirements - stay tuned. The NGINX team is actively improving the module and extending its capabilities. With a little know-how (and possibly a sprinkle of auth_jwt magic), you can achieve an OIDC-based architecture that fits your exact needs, all while preserving the flexibility and performance NGINX is known for.

Updated Apr 07, 2025
Version 2.0