Request and validate OAuth / OIDC tokens with APM
BIG-IP APM is able to request and validate OAuth 2.0 and OpenID Connect tokens. It can act as Client, Resource Server and Authorization Server. In this article, I cover the use cases where APM acts as Resource Server (validating the tokens) and Client (requesting the tokens).

1. The tokens

Access Token: this is the OAuth 2.0 token (access_token). It is used for authorization and has to be validated by the Resource Server. The Resource Server contacts the Authorization Server for validation (out-of-path, external validation). An Access Token can be either opaque or a JWT.

ID Token: this is the OpenID Connect token (id_token). It is used by the client only, in order to identify the resource owner. For instance, when you see your name and your picture in the top right corner of an app, that information comes from the id_token. This token is not used or validated by the Resource Server. An ID Token is always a JWT.

2. Opaque vs. JWT tokens

JWT:
- Decodable
- Encryptable
- Can be validated against a preconfigured JWKS in-box or externally

Opaque:
- Not decodable (opaque)
- Proprietary format; might be any length, and must be unique
- Must be validated with an out-of-path HTTP request to the originating provider (the Authorization Server)

3. Token validation

The OAuth Scope agent is used to validate an Access Token, either against an internal JSON Web Key Set (JWKS) via an APM provider configuration if the Access Token is a JWT (Azure AD uses JWT only), or externally via HTTPS if the Access Token is opaque. JWKS validation is faster because it avoids an extra HTTP transaction. The OAuth Scope agent is used when APM is the Resource Server and the request from the client (APM or a mobile app) carries an Authorization: Bearer header, with either an opaque token (external validation) or a JWT (internal validation). With an opaque token, if the Resource Server needs more information about the user, it requests OpenID Connect UserInfo by presenting the access_token to the Authorization Server. The scope lookup for an opaque token returns several pieces of information, but exactly which ones depends on the Authorization Server.
For Google, for example, an OIDC UserInfo request is needed to get the first and last names.

4. Token request

The OAuth Client agent is used to request the access_token and id_token when APM is deployed as an OAuth Client. Two grant types are available for this (Authorization Code and Password). With the Authorization Code grant, the Client agent exchanges an authorization code for an access_token + id_token (if OIDC is used). Once the OAuth Client gets the access_token (and id_token), the OAuth Scope agent validates them.
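To make the opaque vs. JWT distinction concrete, here is a short Python sketch. The token below is hand-made for illustration only (not a real APM or Google token): a JWT's claims can be read by anyone without contacting the Authorization Server, whereas an opaque token is just a unique string that must be introspected out-of-path at the Authorization Server.

```python
import base64
import json

def b64url_decode(part: str) -> bytes:
    """Decode a base64url segment, restoring any stripped padding."""
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

# Build a hand-made, unsigned sample JWT (header.payload.signature).
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "none", "typ": "JWT"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "1234567890", "name": "Jane Doe"}).encode()).rstrip(b"=").decode()
sample_jwt = f"{header}.{payload}."

# Anyone can decode a JWT's claims locally...
claims = json.loads(b64url_decode(sample_jwt.split(".")[1]))
print(claims["name"])  # note: decoding is NOT validation; the signature must still be checked

# ...whereas an opaque token carries no readable structure at all.
opaque_token = "8c1f2b9d-0f33-4a7e-9c55-1c2d3e4f5a6b"
```

This is also why JWKS validation can happen in-box, while an opaque token always costs an extra HTTP round trip.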
Keycloak as idp for APM

Dear devcentral,

Has anyone successfully integrated Keycloak as an OIDC backend for APM on F5? We are running v13.1, so this version should be able to use this feature, right? So far I have successfully set up a provider using the autodiscover OpenID URI, and created a client application on the Keycloak server with the client_id and secret. Next I'm somewhat confused on how to proceed. From what I read in the docs I need to configure the custom requests for Keycloak, though I can't seem to find these.

Kind regards,
Joren

OAuth SSO like SAML Inline SSO possible?
Hi Folks,

I have the following challenge and I am unsure how it can be solved:

- F5 APM as OAuth Authorization Server
- Web application as OAuth Client + Resource Server

Scenario 1: Internal access. This works like a charm. The user goes to the web application, clicks on the OIDC login link, is redirected to the Authorization Server, etc. The classic grant flow.

Scenario 2: External access through the APM portal. The customer demand is to publish this web application through an F5 APM webtop with single sign-on. The web application does not support getting the JWT from the Authorization header, therefore all Bearer SSO methods are not working. The application must go through the OAuth grant flow transparently for the user. This looks like the SAML Inline SSO method, but that is not possible with OAuth, or do I miss anything?

I have two ideas how this can be solved. It would be great if someone knows an even simpler method.

Publish the OAuth Server on the internet. Publish the web application through a new virtual server with an Access Profile attached. Add a portal link to the web application. Span the access session across both Access Profiles. Opening the web application from the webtop works seamlessly with the same access session. But clicking on the OIDC login link at the web application redirects to the OAuth Server, a new access session begins, and the user must log in again (bad).

The new access session for the Authorization Server is required because:

- The Access Policy must be validated to trigger the OAuth Authorization VPE agent.
- The Access Policy is closed automatically after OAuth authorization.
First idea:

At initial login on the webtop:
- Generate a secure domain cookie
- Set it in the browser
- Write a mapping table (ltm table) cookie -> username

At the OAuth Server:
- Get the cookie
- Look up the username in the mapping table
- If found, set the OAuth username, else prompt for authentication
- OAuth authorization works without the user logging in again

Second idea:

At the initial auth-redirect request from the web application:
1. Intercept the auth-redirect request
2. Use a sideband connection to request the authorization code from the Authorization Server (skip authentication; the Authorization Server is only available on the F5 itself)
3. Use another sideband connection to send the authorization code via the redirect request back to the web application
4. Use the redirect-request response as the response for step 1 and deliver it to the browser

These are the only two ideas I have to solve this challenge. However, is it really as complex as I think, or is there a really simple method I have overlooked?

How to customise Azure AD OIDC user ID token for APM integration
Overview

A Service Provider (SP) such as F5 APM can integrate with Azure AD (AAD) as an Identity Provider (IdP) for federated authentication using OpenID Connect (OIDC). Through this process, a user visiting APM (e.g., https://myapps.acme.corp) is immediately redirected to AAD for authentication. Once authenticated, AAD returns a code to the APM via the user's browser. The APM grabs that code, adds additional information, sends them together to AAD, and finally receives an 'access_token' and 'id_token'.

This article focuses on what is included in the 'id_token' that AAD returns, as it is used by APM, and broadly speaking by any relying-party SP, for the purpose of session creation. The 'id_token' (part of OIDC) contains user identity information and is highly customizable. The customization of the 'id_token' is done entirely within AAD. The concept is simple, but in my experience only once it is well understood; especially with AAD having a bunch of configuration items in the mix, such as 'Token configuration', 'API permissions' and 'Expose an API'. This article hopes to cut through all the clutter and un-muddy the water, so to speak, around this topic.

OIDC

As a refresher, the difference between OAuth and OIDC is that OIDC is an identity layer on top of OAuth. Specifically, with OAuth, 'access_token' alone is returned, whereas with OIDC, 'id_token' is returned in addition to 'access_token'.

Scope

To tell AAD we are using OIDC, the APM needs to include a scope named 'openid' in its outbound request to AAD. This is achieved via the following setting. Within AAD, your application must include 'openid', as shown below. By default, the 'openid' scope comes with a list of claims that will be included in the 'id_token'. However, for certain claims to be available, additional scopes are also required.
For example, if you want the 'preferred_username' and 'name' claims included, the 'profile' scope needs to be added as well, as depicted in the following. AAD also lets you add optional claims via 'Token configuration', as shown below. If these optional claims need additional scopes, AAD will add those scopes in for you under 'API permissions'. On the APM 'OAuth Client' configuration, make sure to add those scopes in, as highlighted below. Once the APM's scopes match AAD's, AAD will include all the claims in the 'id_token' it sends back to the APM. The APM is then able to consume those claims based on the use case (e.g., create a session using email).

I hope this short article sheds some light on your integration work in this space.
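To illustrate how the scope list shapes the returned claims, here is a minimal Python sketch that builds an AAD v2.0 authorization request including 'openid', 'profile' and 'email'. The tenant ID, client ID and redirect URI below are hypothetical placeholders; substitute your own values.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical values for illustration; your tenant, client and redirect URI will differ.
tenant = "00000000-0000-0000-0000-000000000000"
authorize_endpoint = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize"

params = {
    "response_type": "code",
    "client_id": "11111111-1111-1111-1111-111111111111",
    "redirect_uri": "https://myapps.acme.corp/oauth/client/redirect",
    # 'openid' switches on OIDC; 'profile' unlocks claims such as name and
    # preferred_username; 'email' unlocks the email claim.
    "scope": "openid profile email",
    "state": "opaque-state-value",
}
auth_url = authorize_endpoint + "?" + urlencode(params)

# Parse the URL back to confirm which scopes were requested.
scopes = parse_qs(urlparse(auth_url).query)["scope"][0].split()
print(scopes)
```

If a scope is missing here (or not granted on the AAD application), the corresponding claims simply never appear in the 'id_token'.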
F5 NGINX Plus R34 Release Now Available

We're excited to announce the availability of F5 NGINX Plus Release 34 (R34). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway. New and enhanced features in NGINX Plus R34 include:

- Forward proxy support for NGINX usage reporting: With R34, NGINX Plus allows customers to send their license usage telemetry to F5 via an existing enterprise forward proxy in their environment.
- Native support for OpenID Connect configuration: With this release, we would like to announce the availability of a native OpenID Connect (OIDC) module in NGINX Plus. The native module brings simplified configuration and better performance while addressing many of the complexities of the existing njs-based solution.
- SSL dynamic certificate caching: NGINX Plus R34 builds upon the certificate caching improvements in R32 and introduces support for caching of dynamic certificates and preserving this cache across configuration reloads.

Important Changes in Behavior

Removal of the OpenTracing module: In NGINX Plus R32, we announced the deprecation of the OpenTracing module in favor of the OpenTelemetry module introduced in NGINX Plus R29, and marked it for removal in NGINX Plus R34. The OpenTracing module is now removed from NGINX Plus effective this release.

Changes to Platform Support

Added platforms:
- Alpine Linux 3.21

Removed platforms:
- Alpine Linux 3.17
- SLES 12

Deprecated platforms:
- Alpine Linux 3.18
- Ubuntu 20.04

New Features in Detail

Forward proxy support for NGINX usage reporting

In the previous NGINX Plus release (R33), we introduced major changes to NGINX Plus licensing, requiring all NGINX customers to report their commercial NGINX usage to F5. One of the main pieces of feedback we received for this feature was the need to enable NGINX instances to send telemetry via existing outbound proxies, primarily for environments where NGINX instances cannot connect to the F5 licensing endpoint directly.
We are pleased to note that NGINX Plus R34 introduces support for using existing forward proxy solutions in customers' environments for sending licensing telemetry to F5. With this update, NGINX Plus can now be configured to use an HTTP CONNECT proxy to establish a tunnel to the F5 licensing endpoint to send the usage telemetry.

Configuration

The following snippet shows the basic NGINX configuration needed in the ngx_mgmt_module module for sending NGINX usage telemetry via a forward proxy solution.

mgmt {
    proxy HOST:PORT;
    proxy_username USER; # optional
    proxy_password PASS; # optional
}

For complete details, refer to the docs here.

Native support for OpenID Connect configuration

Previously, NGINX Plus relied on an njs-based solution for its OpenID Connect (OIDC) implementation, involving intricate JavaScript files and advanced setup steps that are error-prone. With NGINX Plus R34, we're thrilled to introduce native OIDC support in NGINX Plus. This native implementation eliminates many of the complexities of the njs-based approach, making it faster, highly efficient, and easy to configure, with none of the burdensome overhead of maintaining and upgrading the njs module.

To this effect, a new module, ngx_http_oidc_module, is introduced in NGINX Plus R34 that implements authentication as a relying party in OIDC using the Authorization Code Flow. The native implementation allows the flexibility to enable OIDC authentication globally, or at a more granular per-server or per-location level. It also allows effortless auto-discovery and retrieval of the OpenID providers' configuration metadata without needing complex external scripts for each Identity Provider (IdP), greatly simplifying the configuration process. For a complete overview and examples of the features in the native implementation of OIDC in NGINX Plus, and how it improves upon the njs-based implementation, refer to the blog.
Configuration

The configuration to set up OIDC natively in NGINX Plus is relatively straightforward, requiring minimal directives compared to the njs-based implementation.

http {
    resolver 10.0.0.1;

    oidc_provider my_idp {
        issuer "https://provider.domain";
        client_id "unique_id";
        client_secret "unique_secret";
    }

    server {
        location / {
            auth_oidc my_idp;
            proxy_set_header username $oidc_claim_sub;
            proxy_pass http://backend;
        }
    }
}

The example assumes that the "https://<nginx-host>/oidc_callback" redirection URI is configured on the OpenID Provider's side. For instructions on how to configure the native OIDC module for various identity providers, refer to the NGINX deployment guide.

SSL Certificate Caching improvements

In NGINX Plus R32, we introduced changes to cache various SSL objects and reuse the cached objects elsewhere in the configuration. This provided noticeable improvements in initial configuration load time, primarily where a small number of unique objects were referenced multiple times. With R34, we are adding further enhancements to this functionality: cached SSL objects are now reused across configuration reloads, making reloads even faster. Also, SSL certificates with variables are now cached as well. Refer to the blog for a detailed overview of this feature implementation.

Other Enhancements and Bug Fixes

Keepalive timeout improvements

Prior to this release, idle keepalive connections could be closed at any time the connection needed to be reused for another client or when the worker was gracefully shutting down. With NGINX Plus R34, a new directive, keepalive_min_timeout, is introduced. This directive sets a timeout during which a keepalive connection will not be closed by NGINX for connection reuse or graceful worker shutdown.
The change allows clients that send multiple requests over the same connection, without delay or with only a small delay between them, to avoid receiving a TCP RST in response to one of them (barring network issues or a non-graceful worker shutdown). As a side effect, it also addresses the TCP reset problem described in RFC 9112, Section 9.6, where the last sent HTTP response could be damaged by a follow-up TCP RST. This is important for non-idempotent requests, which cannot be retried by the client. It is, however, recommended not to set keepalive_min_timeout to large values, as this can introduce an additional delay during worker process shutdown and may restrict NGINX from effective connection reuse.

Improved health check logging

NGINX Plus R34 adds logging enhancements in the error log for better visibility while troubleshooting upstream health check failures. The server status code is now logged on health check failures.

Increased session key size

Prior to R34, NGINX accepted an SSL session of at most 4k (4096) bytes. With NGINX Plus R34, the maximum session size has been increased to 8k (8192) bytes to accommodate use cases where sessions can be larger than 4k bytes. For example, in cases where a client certificate is saved in the session, with tickets (in TLS v1.2 or older versions), or with stateless tickets (in TLS v1.3), sessions may be noticeably larger. Certain stateless session resumption implementations may store additional data as well. One such case is JDK, which is known to include server certificates in the session ticket data, roughly doubling the decoded session size. The changes also include improved logging to capture cases when sessions are not saved in shared memory due to their size.

Changes in the OpenTelemetry Module

TLS support in OTEL traces: NGINX now allows enabling TLS for sending OTEL traces. It can be enabled by specifying the "https" scheme in the endpoint, as shown.
otel_exporter {
    endpoint "https://otel.labt.fp.f5net.com:4433";
    trusted_certificate "path/to/custom/ca/bundle"; # optional
}

By default, the system CA bundle is used to verify the endpoint's certificate; this can be overridden with the "trusted_certificate" directive if required. For a complete list of changes to the OTEL module, refer to the NGINX OTEL changelog.

Changes Inherited from NGINX Open Source

NGINX Plus R34 is based on the NGINX mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R33 was released (in NGINX Open Source 1.27.3 and 1.27.4 mainline versions).

Features:
- SSL Certificate Caching
- The "keepalive_min_timeout" directive
- The "server" directive in the "upstream" block supports the "resolve" parameter
- The "resolver" and "resolver_timeout" directives in the "upstream" block
- SmarterMail-specific mode support for IMAP LOGIN with untagged CAPABILITY response in the mail proxy module

Changes:
- The TLSv1 and TLSv1.1 protocols are now disabled by default
- An IPv6 address in square brackets and no port can be specified in the "proxy_bind", "fastcgi_bind", "grpc_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives, and as a client address in ngx_http_realip_module

Bug Fixes:
- "gzip filter failed to use preallocated memory" alerts appeared in logs when using zlib-ng
- nginx could not build the libatomic library from the library sources if the --with-libatomic=DIR option was used
- A QUIC connection might not be established when using 0-RTT; the bug had appeared in 1.27.1
- NGINX now ignores QUIC version negotiation packets from clients
- NGINX could not be built on Solaris 10 and earlier with the ngx_http_v3_module
- Bug fixes in HTTP/3
- Bug fixes in the ngx_http_mp4_module
- The "so_keepalive" parameter of the "listen" directive might be handled incorrectly on DragonFly BSD
- Bug fix in the proxy_store directive
Security:
- An insufficient check in virtual server handling with TLSv1.3 SNI allowed SSL sessions to be reused in a different virtual server, bypassing client SSL certificate verification (CVE-2025-23419)

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module

NGINX Plus R34 incorporates changes from the NGINX JavaScript (njs) module version 0.8.9. The following is a list of notable changes in njs since 0.8.7 (the version shipped with NGINX Plus R33).

Features:
- Added the fs module for the QuickJS engine
- Implemented the process object for the QuickJS engine
- Implemented the process.kill() method

Bug Fixes:
- Removed extra VM creation per server. Previously, when js_import was declared in http or stream blocks, an extra copy of the VM instance was created for each server block. This was not needed and consumed a lot of memory for configurations with many server blocks. This issue was introduced in 9b674412 (0.8.6) and was partially fixed, for location blocks only, in 685b64f0 (0.8.7)
- Fixed XML tests with libxml2 2.13 and later
- Fixed promise resolving when Promise is inherited
- Fixed absolute scope in cloned VMs
- Fixed limit-rated output
- Optimized use of SSL contexts for the js_fetch_trusted_certificate directive

For a comprehensive list of all the features, changes, and bug fixes, see the njs changelog.

Authenticate user of native mobile app with OpenId Connect
Does F5 BIG-IP Access Manager support mobile apps authenticating over OpenID Connect with a custom-URI redirect_uri? Our native mobile app (iOS and Android) authenticates the user using the Authorization Code Grant flow. Our redirect_uri (i.e., callback URI) is: com.mckesson.wfm.ansos2go://signin

We are a software vendor in the healthcare domain. Our customer who uses F5 BIG-IP says that this URI is considered invalid by F5 when configuring the OpenID Connect Service Provider. Is that true? If so, how do native mobile app developers perform OIDC authentication with F5?

Thanks,
Scott

UPDATE: I got word from my customer that they set up a rewrite policy, so they could enter the redirect_uri as https:/com.mckesson.wfm.ansos2go://signin. Then, they strip off the https:// in the response to the initial 'authorize' call. This is NUTS! Why does F5 BIG-IP Access Manager require redirect_uri to be https://...? This totally breaks the OpenID Connect specification, which says "The Redirection URI MAY use an alternate scheme, such as one that is intended to identify a callback into a native application." https://openid.net/specs/openid-connect-core-1_0.html#AuthorizationEndpoint
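For what it's worth, the custom scheme in question is syntactically a valid URI: per RFC 3986, a scheme may contain letters, digits, '+', '-', and '.', so reverse-DNS schemes like this one parse cleanly. A quick Python check (using the redirect_uri quoted above) illustrates this:

```python
from urllib.parse import urlparse

# RFC 3986 allows letters, digits, '+', '-', and '.' in a URI scheme,
# so a reverse-DNS custom scheme parses as a normal URI.
u = urlparse("com.mckesson.wfm.ansos2go://signin")
print(u.scheme)  # com.mckesson.wfm.ansos2go
print(u.netloc)  # signin
```

So any rejection of this redirect_uri is a product-side restriction, not a URI-syntax problem.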
Simplifying OIDC and SSO with the New NGINX Plus R34 OIDC Module

Introduction: Why OIDC and SSO Matter

As web infrastructures scale and modernize, strong and standardized methods of authentication become essential. OpenID Connect (OIDC) provides a flexible layer on top of OAuth 2.0, enabling both user authentication (login) and authorization (scopes, roles). By adopting OIDC for SSO, you can:

- Provide a frictionless login experience across multiple services.
- Consolidate user session management, removing custom auth code from each app.
- Lay a foundation for Zero Trust policies by validating and enforcing identity at the network's edge.

While Zero Trust is a broader security model that extends beyond SSO alone, implementing OIDC at the proxy level is an important piece of the puzzle. It ensures that every request is associated with a verified identity, enabling fine-grained policies and tighter security boundaries across all your applications. NGINX, acting as a reverse proxy, is an ideal place to manage these OIDC flows. However, the journey to robust OIDC support in NGINX has evolved, from an njs-based approach with scripts and maps to a far more user-friendly native module in NGINX Plus R34.

The njs-based OIDC Solution

Before the native OIDC module, many users turned to the njs-based reference implementation. This setup combines multiple pieces:

- An njs script to handle OIDC flows (redirecting to the IdP, exchanging tokens, etc.).
- The auth_jwt module for token validation.
- The keyval module to store and pair a session cookie with the actual ID token.

While it covers the essential OIDC steps (redirects, code exchanges, and forwarding claims), it has some drawbacks:

Configuration complexity. Most of the logic hinges on creative usage of NGINX directives (like map), which can be cumbersome, especially if you use more than one authentication provider or your environment changes frequently.

Limited metadata discovery. It doesn't natively fetch the IdP's `.well-known/openid-configuration`.
Instead, a separate bash script queries the IdP and rewrites parts of the NGINX config. Any IdP changes require you to re-run that script and reload NGINX.

Performance overhead. The njs solution effectively revalidates ID tokens on every request. Why? Because NGINX on its own doesn't maintain a traditional server-side session object. Instead, it simulates a "session" by tying a cookie to the user's id_token in keyval. auth_jwt checks the token on each request, retrieving it from keyval, verifying the signature and expiration, and extracting claims. Under heavy load, this constant JWT validation can become expensive. For many, that extra overhead conflicts with how modern OIDC clients usually work: short-lived session cookies, validating the token only once per session, or some similarly efficient approach. Hence the motivation for a native OIDC module.

Meet the New Native OIDC Module in NGINX Plus R34

With the complexities of the njs-based approach in mind, NGINX introduced a fully integrated OIDC module in NGINX Plus R34. This module is designed to be a "proper" OIDC client, including:

- Automatic TLS-only communication with the IdP.
- Full metadata discovery (no external scripts needed).
- Authorization code flows.
- Token validation and caching.
- A real session model using secure cookies.
- Access token support (including automatic refresh).
- Straightforward mapping of user claims to NGINX variables.

We also have a Deployment Guide that shows how to set up this module for popular IdPs like Okta, Keycloak, Entra ID, and others. That guide focuses on typical use cases (obtaining tokens, verifying them, and passing claims upstream). Here, however, we'll go deeper into how the module works behind the scenes, using Keycloak as our IdP.

Our Scenario: Keycloak + NGINX Plus R34

We'll demonstrate with a straightforward Keycloak realm called nginx and a client also named nginx, with "client authentication" and the "standard flow" enabled.
We have:

- Keycloak as the IdP, running at https://kc.route443.dev/realms/nginx.
- NGINX Plus R34 configured as a reverse proxy.
- A simple upstream service at http://127.0.0.1:8080.

Minimal configuration example:

http {
    resolver 1.1.1.1 ipv4=on valid=300s;

    oidc_provider keycloak {
        issuer https://kc.route443.dev/realms/nginx;
        client_id nginx;
        client_secret secret;
    }

    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        ssl_certificate /etc/ssl/certs/fullchain.pem;
        ssl_certificate_key /etc/ssl/private/key.pem;

        location / {
            auth_oidc keycloak;
            proxy_set_header sub $oidc_claim_sub;
            proxy_set_header email $oidc_claim_email;
            proxy_set_header name $oidc_claim_name;
            proxy_pass http://127.0.0.1:8080;
        }
    }

    server {
        # Simple test backend
        listen 8080;
        location / {
            return 200 "Hello, $http_name!\nEmail: $http_email\nKeycloak sub: $http_sub\n";
            default_type text/plain;
        }
    }
}

Configuration Breakdown

oidc_provider keycloak {}. Points to our Keycloak issuer, plus client_id and client_secret, and automatically triggers .well-known/openid-configuration discovery. One important note: all interaction with the IdP is secured exclusively over SSL/TLS, so NGINX must trust the certificate presented by Keycloak. By default, this trust is validated against your system's CA bundle (the default CA store of your Linux or FreeBSD distribution). If the IdP's certificate is not included in the system CA bundle, you can explicitly specify a trusted certificate or chain with the ssl_trusted_certificate directive so that NGINX can validate and trust your Keycloak certificate.

auth_oidc keycloak. For any request to https://n1.route443.dev/, NGINX checks whether the user has a valid session. If not, it starts the OIDC flow.

Passing claims upstream. We add the headers sub, email, and name based on $oidc_claim_sub, $oidc_claim_email, and $oidc_claim_name, the module's built-in variables extracted from the token.
Step-by-Step: Under the Hood of the OIDC Flow

Retrieving and Caching OIDC Metadata

As soon as you send an HTTP GET request to https://n1.route443.dev, you'll see that NGINX redirects you to Keycloak's authentication page. However, before that redirect happens, several interesting steps occur behind the scenes. Let's take a closer look at the very first thing NGINX does in this flow:

1. NGINX checks whether it has cached OIDC metadata for the IdP.
2. If no valid cache exists, NGINX constructs a metadata URL by appending /.well-known/openid-configuration to the issuer you specified in the config. It then resolves the IdP's hostname using the resolver directive.
3. NGINX parses the JSON response from the IdP, extracting critical parameters such as issuer, authorization_endpoint and token_endpoint. It also inspects response_types_supported to confirm that this IdP supports the authorization code flow and ID tokens. These details are essential for the subsequent steps in the OIDC process.
4. NGINX caches these details for one hour (or for however long the IdP's Cache-Control headers specify), so it doesn't need to re-fetch them on every request.

This process happens in the background. However, you might notice a slight delay for the very first user if the cache is empty, since NGINX needs a fresh copy of the metadata.
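The parsing step can be mimicked offline. Here is a short Python sketch that reads a trimmed copy of the Keycloak discovery document (inlined as a string, so no network call is made) and pulls out the fields NGINX cares about:

```python
import json

# Trimmed discovery document, inlined for illustration; in production this JSON
# comes from <issuer>/.well-known/openid-configuration over TLS.
metadata_json = """
{
  "issuer": "https://kc.route443.dev/realms/nginx",
  "authorization_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth",
  "token_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/token",
  "response_types_supported": ["code", "none", "id_token", "token"]
}
"""
meta = json.loads(metadata_json)

# The module needs the authorization-code flow; confirm the provider supports it.
assert "code" in meta["response_types_supported"]
print(meta["authorization_endpoint"])
```

The same three fields (issuer, authorization_endpoint, token_endpoint) drive every later step of the flow.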
Below is an example of the metadata request and response:

NGINX -> IdP:
HTTP GET /realms/nginx/.well-known/openid-configuration

IdP -> NGINX:
HTTP 200 OK
Content-Type: application/json
Cache-Control: no-cache // means NGINX will store it for 1 hour by default

{
  "issuer": "https://kc.route443.dev/realms/nginx",
  "authorization_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth",
  "token_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/token",
  "jwks_uri": "http://kc.route443.dev:8080/realms/nginx/protocol/openid-connect/certs",
  "response_types_supported": [
    "code","none","id_token","token","id_token token",
    "code id_token","code token","code id_token token"
  ]
  // ... other parameters
}

This metadata tells NGINX everything it needs to know about how to redirect users for authentication, where to request tokens afterward, and which JWT signing keys to trust. By caching these results, NGINX avoids unnecessary lookups on subsequent logins, making the process more efficient for every user who follows.

Building the Authorization URL & Setting a Temporary Session Cookie

Now that NGINX has discovered and cached the IdP's metadata, it's ready to redirect your browser to Keycloak for the actual login. Here's where the OpenID Connect Authorization Code Flow begins in earnest. NGINX adds a few crucial parameters, like response_type=code, client_id, redirect_uri, state, and nonce, to the authorization_endpoint it learned from the metadata, then sends you the following HTTP 302 response:

NGINX -> User Agent:
HTTP 302 Moved Temporarily
Location: https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth?response_type=code&scope=openid&client_id=nginx&redirect_uri=http%3A%2F%2Fn1.route443.dev%2Foidc_callback&state=state&nonce=nonce
Set-Cookie: NGX_OIDC_SESSION=temp_cookie; Path=/; Secure; HttpOnly

At this point, you're probably noticing the Set-Cookie: NGX_OIDC_SESSION=temp_cookie; line.
This is a temporary session cookie, sometimes called a "pre-session" cookie. NGINX needs it to keep track of your in-progress authentication state, so that once you come back from Keycloak with the authorization code, NGINX knows how to match that code to your browser session. However, since NGINX hasn't actually validated any tokens yet, this cookie is only ephemeral. It remains a placeholder until Keycloak returns valid tokens and NGINX completes the final checks. Once that happens, you'll get a permanent session cookie, which will then store your real session data across requests.

User Returns to NGINX with an Authorization Code

Once the user enters their credentials on Keycloak's login page and clicks "Login", Keycloak redirects the browser back to the URL specified in your redirect_uri parameter. In our example, that is http://n1.route443.dev/oidc_callback. It's worth noting that /oidc_callback is just the default location; if you ever need something different, you can tweak it via the redirect_uri directive in the OIDC module configuration. When Keycloak redirects the user, it includes several query parameters in the URL, most importantly the code parameter (the authorization code) and state, which NGINX uses to ensure this request matches the earlier session-setup steps. Here's a simplified example of what the callback request might look like:

User Agent -> NGINX:
HTTP GET /oidc_callback
Query parameter: state=state
Query parameter: session_state=keycloak_session_state
Query parameter: iss=https://kc.route443.dev/realms/nginx
Query parameter: code=code

Essentially, Keycloak is handing NGINX "proof" that this user successfully logged in, along with a cryptographic token (the code) that lets NGINX exchange it for real ID and access tokens.
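The state round-trip described here can be sketched in a few lines of Python. This is an illustration of the general pattern with made-up values, not the module's internals: a random state (and nonce) is generated before the redirect, and the state returned on the callback must match it exactly.

```python
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

# Generated before redirecting the browser to the IdP (NGINX does this internally).
expected_state = secrets.token_urlsafe(16)
nonce = secrets.token_urlsafe(16)

auth_url = ("https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth?" +
            urlencode({"response_type": "code", "client_id": "nginx",
                       "redirect_uri": "https://n1.route443.dev/oidc_callback",
                       "scope": "openid", "state": expected_state, "nonce": nonce}))

# Later, the IdP redirects back to /oidc_callback?code=...&state=...
# (the code value here is fake, for illustration).
callback = f"https://n1.route443.dev/oidc_callback?code=fake-code&state={expected_state}"
returned = parse_qs(urlparse(callback).query)

# Constant-time comparison guards against timing side channels.
assert secrets.compare_digest(returned["state"][0], expected_state)
code = returned["code"][0]
```

If the state does not match, the callback is rejected; this is what stops an attacker from injecting a foreign authorization code into your session.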
Since /oidc_callback is tied to NGINX's native OIDC logic, NGINX automatically grabs these parameters, checks whether the state parameter matches what it originally sent to Keycloak, and then prepares to make a token request to the IdP's token_endpoint. Note that the OIDC module does not use the iss parameter for identifying the provider; provider identity is verified through the state parameter and the pre-session cookie, which references a provider-specific key.

Exchanging the Code for Tokens and Validating the ID Token

Once NGINX receives the oidc_callback request and checks all parameters, it proceeds by sending a POST request to the Keycloak token_endpoint, supplying the authorization code, client credentials, and the redirect_uri:

NGINX -> IdP:
POST /realms/nginx/protocol/openid-connect/token
Host: kc.route443.dev
Authorization: Basic bmdpbng6c2VjcmV0

Form data:
grant_type=authorization_code
code=5865798e-682e-4eb7-8e3e-2d2c0dc5132e.f2abd107-35c1-4c8c-949f-03953a5249b2.nginx
redirect_uri=https://n1.route443.dev/oidc_callback

Keycloak responds with a JSON object containing at least an id_token and access_token, plus token_type=bearer. Depending on your IdP's configuration and the scope you requested, the response might also include a refresh_token and an expires_in field. The expires_in value indicates how long the access token is valid (in seconds), and NGINX can use it to decide when to request a new token on the user's behalf. At this point, the module also spends a moment validating the ID token's claims, ensuring that fields like iss, aud, exp, and nonce align with what was sent initially. If any of these checks fail, the token is deemed invalid and the request is rejected. Once everything checks out, NGINX stores the tokens and session details. Here, the OIDC module takes advantage of the keyval mechanism to keep track of user sessions.
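A small aside on the token request above: the Authorization: Basic value is simply base64(client_id + ":" + client_secret). Two lines of Python reproduce the exact header from the capture (client nginx, secret secret, as in our example config):

```python
import base64

# HTTP Basic client authentication: base64 of "client_id:client_secret".
client_id, client_secret = "nginx", "secret"
basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
print("Authorization: Basic " + basic)  # Authorization: Basic bmdpbng6c2VjcmV0
```

This is why the client_secret must be kept out of any config you share: the header is trivially reversible.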
You might wonder, “Where is that keyval zone configured?” The short answer is that it’s automatic for simplicity, unless you want to override it with your own settings. By default, you get up to 8 MB of session storage, which is more than enough for most use cases. But if you need something else, you can specify a custom zone via the session_store directive. If you’re curious to see this store in action, you can even inspect it through the NGINX Plus API endpoint, for instance:

GET /api/9/http/keyvals/oidc_default_store_keycloak

(where oidc_default_store_ is the prefix and keycloak is your oidc_provider name). With the tokens now safely validated and stashed, NGINX is ready to finalize the session. The module issues a permanent session cookie back to the user and transitions them into the “logged-in” state - exactly what we’ll see in the next step.

Finalizing the Session and Passing Claims Upstream

Once NGINX verifies all tokens and securely stores the user’s session data, it sends a final HTTP 302 back to the client, this time setting a permanent session cookie:

NGINX -> User Agent: HTTP 302 Moved Temporarily
  Location: https://n1.route443.dev/
  Set-Cookie: NGX_OIDC_SESSION=permanent_cookie; Path=/; Secure; HttpOnly

At this point, the user officially has a valid OIDC session in NGINX. Armed with that session cookie, they can continue sending requests to the protected resource (in our case, https://n1.route443.dev/). Each request now carries the NGX_OIDC_SESSION cookie, so NGINX recognizes the user as authenticated and automatically injects the relevant OIDC claims into request headers - such as sub, email, and name. This means your upstream application at http://127.0.0.1:8080 can rely on these headers to know who the user is and handle any additional logic accordingly.

Working with OIDC Variables

Now, let’s talk about how you can leverage the OIDC module for more than just simple authentication.
One of its biggest strengths is its ability to extract token claims and forward them upstream in request headers. Any claim in the token can be used as an NGINX variable named $oidc_claim_name, where name is whichever claim you’d like to extract. In our example, we’ve already shown how to pass sub, email, and name, but you can use any claims that appear in the token. For a comprehensive list of possible claims, check the OIDC specification as well as your IdP’s documentation.

Beyond individual claims, you can also access the entire ID and Access Tokens directly via $oidc_id_token and $oidc_access_token. These variables can come in handy if you need to pass an entire token in a request header, or if you’d like to inspect its contents for debugging purposes.

As you can see, configuring NGINX as a reverse proxy with OIDC support doesn’t require you to be an authentication guru. All you really need to do is set up the module, specify the parameters you want, and decide which token claims you’d like to forward as headers.

Handling Nested or Complex Claims (Using auth_jwt)

Sometimes, the claim you need to extract is actually a nested object, or even an array. That’s not super common, but it can happen if your Identity Provider returns complex data structures in the token. Currently, the OIDC module can’t directly parse nested claims - this is a known limitation that should be addressed in future releases. In the meantime, your best workaround is to use the auth_jwt module. Yes, it’s a bit of a detour, but right now it’s the only way (whether you use an njs-based approach or the native OIDC module) to retrieve more intricate structures from a token.

Let’s look at an example where the address claim is itself an object containing street, city, and zip, and we only want the city field forwarded as a header:

http {
    auth_jwt_claim_set $city address city;

    server {
        ...
        location / {
            auth_oidc keycloak;
            auth_jwt off token=$oidc_id_token;
            proxy_set_header x-city $city;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}

Notice how we’ve set auth_jwt off token=$oidc_id_token. We’re effectively telling auth_jwt not to revalidate the token (because it was already validated during the initial OIDC flow) but to focus on extracting additional claims from it. Meanwhile, the auth_jwt_claim_set directive specifies the variable $city and points it to the nested city field in the address claim. With this in place, you can forward that value in a custom header (x-city) to your application.

And that’s it. By combining the OIDC module for authentication with the auth_jwt module for more nuanced claim extraction, you can handle even the trickiest token structures in NGINX. In most scenarios, though, you’ll find that the straightforward $oidc_claim_ variables do the job just fine, and no extra modules are needed.

Role-Based Access Control (Using auth_jwt)

As you’ve noticed, because we’re not revalidating the token signature on every request, the overhead introduced by the auth_jwt module is fairly minimal. That’s great news for performance. But auth_jwt also opens up additional possibilities, like the ability to leverage the auth_jwt_require directive. With this, you can tap into NGINX not just for authentication, but also for authorization - restricting access to certain parts of your site or API based on claims (or any other variables you might be tracking).

For instance, maybe you only want to grant admin-level users access to a specific admin dashboard. If a user’s token doesn’t include the right claim (like role=admin), you want to deny entry. Let’s take a quick look at how this might work in practice:

http {
    map $jwt_claim_role $role_admin {
        "admin" 1;
    }

    server {
        ...
        # Location for admin-only resources:
        location /admin {
            auth_jwt foo token=$oidc_id_token;
            # Check that $role_admin is not empty and not "0" - otherwise return 403:
            auth_jwt_require $role_admin error=403;
            # If 403 happens, we show a custom page:
            error_page 403 /403_custom.html;
            proxy_pass http://127.0.0.1:8080;
        }

        # Location for the custom 403 page
        location = /403_custom.html {
            # Internal, so it can't be directly accessed from outside
            internal;
            # Return the 403 status and a custom message
            return 403 "Access restricted to admins only!";
        }
    }
}

How It Works:

In our map block, we check the user’s $jwt_claim_role and set $role_admin to 1 if it matches "admin". Then, inside the /admin location, we have something like:

auth_jwt foo token=$oidc_id_token;
auth_jwt_require $role_admin error=403;

Here, foo is simply the realm name (a generic string you can customize), and token=$oidc_id_token tells NGINX which token to parse. At first glance, this might look like a normal auth_jwt configuration - but notice that we haven’t specified a public key via auth_jwt_key_file or auth_jwt_key_request. That means NGINX isn’t re-verifying the token’s signature here. Instead, it’s only parsing the token so we can use its claims within auth_jwt_require.

Thanks to the fact that the OIDC module has already validated the ID token earlier in the flow, this works perfectly fine in practice. We still get access to $jwt_claim_role and can enforce auth_jwt_require $role_admin error=403;, ensuring anyone without the “admin” role gets an immediate 403 Forbidden. Meanwhile, we display a friendlier message by specifying:

error_page 403 /403_custom.html;

So even though it might look like a normal JWT validation setup, it’s really a lesser-known trick to parse claims without re-checking signatures, leveraging the prior validation done by the OIDC module. This approach neatly ties together the native OIDC flow with role-based access control - without requiring us to juggle another set of keys.
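The same trick extends to more than one claim at a time. Here’s a hedged sketch assuming the token also carries a department claim (the claim name and its values are hypothetical, not from the article) - listing several variables in auth_jwt_require means all of them must pass:

```nginx
map $jwt_claim_role $role_admin {
    "admin" 1;
    default 0;   # anything else fails the check below
}

map $jwt_claim_department $dept_it {
    "it"    1;   # hypothetical department claim value
    default 0;
}

server {
    # ...
    location /it-admin {
        # Parse only - no key configured, so no signature re-validation
        auth_jwt foo token=$oidc_id_token;
        # Both variables must be non-empty and not "0":
        # multiple values act as a logical AND
        auth_jwt_require $role_admin $dept_it error=403;
        proxy_pass http://127.0.0.1:8080;
    }
}
```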
Logout in OIDC

So far, we’ve covered how to log in with OIDC and handle advanced scenarios like nested claims or role-based control. But there’s another critical topic: how do users log out? The OpenID Connect standard lays out several mechanisms:

RP-Initiated Logout: The relying party (NGINX in this case) calls the IdP’s logout endpoint, which can clear sessions both in NGINX and at the IdP level.
Front-Channel Logout: The IdP provides a way to notify the RP via a front-channel mechanism (often iframes or redirects) that the user has ended their session.
Back-Channel Logout: Uses server-to-server requests between the IdP and the RP to terminate sessions behind the scenes.

Right now, the native OIDC module in its first release does not fully implement these logout flows. They’re on the roadmap, but as of today, you may need a workaround if you want to handle sign-outs more gracefully. Still, one of the great things about NGINX is that even if a feature isn’t officially implemented, you can often piece together a solution with a little extra configuration.

A Simple Logout Workaround

Imagine you have a proxied application that includes a “Logout” button or link. You want clicking that button to end the user’s NGINX session. Below is a conceptual snippet showing how you might achieve that:

http {
    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        # OIDC provider config omitted for brevity
        # ...
        location / {
            auth_oidc keycloak;
            proxy_pass http://127.0.0.1:8080;
        }

        # "Logout" location that invalidates the session
        location /logout {
            # Here, we forcibly remove the NGX_OIDC_SESSION cookie
            add_header Set-Cookie "NGX_OIDC_SESSION=; Path=/; HttpOnly; Secure; Expires=Thu, 01 Jan 1970 00:00:00 GMT";
            # Optionally, we can redirect the user to a "logged out" page
            return 302 "https://n1.route443.dev/logged_out";
        }

        location = /logged_out {
            # A simple page or message confirming the user is logged out
            return 200 "You've been logged out.";
        }
    }
}

/logout location: When the user clicks the “logout” link in your app, it can redirect them here.
Clearing the cookie: We set NGX_OIDC_SESSION to an expired value, ensuring NGINX no longer recognizes this OIDC session on subsequent requests.
Redirect to a “logged out” page: We redirect the user to /logged_out, or wherever you want them to land next.

Keep in mind, this approach only logs out at the NGINX layer. The user might still have an active session with the IdP (Keycloak, Entra ID, etc.) because it manages its own cookies. A fully synchronized logout - where both the RP and the IdP sessions end simultaneously - would require an actual OIDC logout flow, which the current module hasn’t fully implemented yet.

Conclusion

Whether you’re looking to protect a basic web app, parse claims, or enforce role-based policies, the native OIDC module in NGINX Plus R34 offers a way to integrate modern SSO at the proxy layer. Although certain scenarios (like nested claim parsing or fully-fledged OIDC logout) may still require workarounds and careful configuration, the out-of-the-box experience is already much more user-friendly than older njs-based solutions, and new features continue to land in every release. If you’re tackling more complex setups - like UserInfo endpoint support, advanced session management, or specialized logout requirements - stay tuned. The NGINX team is actively improving the module and extending its capabilities.
With a little know-how (and possibly a sprinkle of auth_jwt magic), you can achieve an OIDC-based architecture that fits your exact needs, all while preserving the flexibility and performance NGINX is known for.

Request and validate OAuth/OIDC tokens with APM when F5 is behind a web proxy
This question concerns a deployment using OpenID Connect with Okta as the Authorization Server and F5 APM as the Resource Server. The F5 is running LTM 14.1 and sits in non-routable address space behind a firewall and web proxy. An F5 "provider" object was configured via Access -> Federation -> OAuth Client/Resource Server -> Provider. Connections to Okta via the "Authentication URI" and other URIs in the provider object occur over the management plane. The F5 must be able to resolve the name and have a route to Okta. There is no provision in the provider object to specify that the connection traverse a web proxy.

For comparison, a similar problem arises when trying to connect to an OCSP server when the F5 is behind a web proxy. A solution for the OCSP connection is outlined in the article ocsp-through-an-outbound-explicit-proxy-29026. This solution uses a "proxy VIP" to direct the traffic through a web proxy. The solution works because the OCSP call is unencrypted HTTP. However, in the case of the F5 OAuth "provider" object, the connection is encrypted HTTPS. If a "proxy VIP" is configured as in the OCSP example, there does not appear to be a way to change the HTTP "GET" to a "CONNECT" in order to perform an encrypted connection through the web proxy.

Is there any other way to configure an F5 as an OAuth Resource Server when it is behind a web proxy?

We Heard You! R35 Brings Frictionless OIDC Logout and Richer Claims to NGINX Plus
Quick Overview

Hello friends! NGINX Plus R35 ships four new directives in the built-in ngx_http_oidc_module - logout_uri, post_logout_uri, logout_token_hint, and userinfo. Together, they finally close some of the most common end-to-end OIDC gaps: a clean, standards-aligned RP-initiated logout and easy access to user profile claims. R35 also adds a new http_auth_require_module with the auth_require directive, so you can implement RBAC checks directly - no more auth_jwt_require + auth_jwt workaround from R34. auth_require isn’t OIDC-specific; you can use it anywhere. In this post, though, we’ll look at it briefly through an OIDC lens.

Rather than drowning you in implementation minutiae or code samples, we’ll walk the actual traffic flow step by step. That way, both admins and engineers can see exactly what’s on the wire and why it matters.

What’s new

logout_uri - A local path users hit to start logout. NGINX constructs the correct RP-initiated logout request to your IdP, attaching all required parameters for you.
post_logout_uri - Where the IdP should send the user after a successful logout. Set it in NGINX and also allow it in your IdP application settings.
logout_token_hint on|off - When on, NGINX adds id_token_hint=<JWT> to the IdP’s logout endpoint. Some IdPs (e.g., OneLogin) require this and will return HTTP 400 without it. For most providers, it’s optional.
userinfo on|off - When on, NGINX automatically calls the userinfo endpoint, fetches extended claims, and exposes them as $oidc_claim_* variables. You can also inspect the raw JSON via $oidc_userinfo.

There’s also the new http_auth_require_module with the auth_require directive. It’s not OIDC-specific, and you can use it anywhere, but in OIDC setups it’s a straightforward way to implement RBAC directly against $oidc_claim_* claims without reaching for auth_jwt_require.

Using OneLogin as the example

In a previous article, we used Keycloak as the IdP.
This time we’ll try a hosted provider, OneLogin - because it has a few behaviors that make the new features shine. Everything here applies to other providers, with the usual small differences.

Create an application: OpenID Connect (OIDC) -> Web Application.
Sign-in Redirect URIs: https://demo.route443.dev/oidc_callback
Sign-out Redirect URIs: https://demo.route443.dev/post_logout/
(Both must match what you configure in NGINX.)

On the SSO tab, copy Client ID, Client Secret, and Issuer (typically https://<subdomain>.onelogin.com/oidc/2). On Assignments, grant yourself access; otherwise, OneLogin will respond with access_denied after auth. You can refer to our deployment guide for OneLogin.

Metadata sanity check

Let’s fetch OneLogin’s metadata and note the endpoints that matter for our flow. By default, OneLogin publishes the OpenID Provider Configuration at https://<subdomain>.onelogin.com/oidc/2/.well-known/openid-configuration:

curl https://<subdomain>.onelogin.com/oidc/2/.well-known/openid-configuration | jq

Example (trimmed):

{
  "issuer": "https://route443-dev.onelogin.com/oidc/2",
  "authorization_endpoint": "https://route443-dev.onelogin.com/oidc/2/auth",
  "token_endpoint": "https://route443-dev.onelogin.com/oidc/2/token",
  "userinfo_endpoint": "https://route443-dev.onelogin.com/oidc/2/me",
  "end_session_endpoint": "https://route443-dev.onelogin.com/oidc/2/logout"
}

Two important notes:

For RP-initiated logout, NGINX only uses end_session_endpoint from metadata - you can’t override it in the config. If it’s missing, you won’t get a proper RP-initiated logout.
If userinfo is on, NGINX will call userinfo_endpoint immediately after exchanging the code for tokens. If userinfo is unavailable, NGINX returns HTTP 500 to the client. Unlike some clients, this is not a soft failure, so if you enable userinfo, make sure that endpoint is up during login.
A minimal, working R35 config

http {
    resolver 1.1.1.1 valid=300s ipv4=on;

    oidc_provider onelogin {
        issuer            https://route443-dev.onelogin.com/oidc/2;
        client_id         37e2eb90-...;
        client_secret     4aeca...;
        logout_uri        /logout;
        post_logout_uri   https://demo.route443.dev/post_logout/;
        logout_token_hint on;
        userinfo          on;
    }

    server {
        listen 443 ssl;
        server_name demo.route443.dev;
        ssl_certificate     ...;
        ssl_certificate_key ...;

        location / {
            auth_oidc onelogin;
            proxy_set_header X-Sub      $oidc_claim_sub;
            proxy_set_header X-Userinfo $oidc_userinfo;
            proxy_pass http://app;
        }

        location /post_logout/ {
            default_type text/plain;
            return 200 "Arrivederci!\n";
        }
    }

    upstream app {
        server 127.0.0.1:8080;
    }
}

Reload (nginx -s reload) and we’re ready to test.

Quick note: the logout_uri value (/logout) is a trigger, not a location you need to implement yourself. If your app exposes a “Sign out, %username%” link like /logout?user=foo, hitting that URL causes NGINX to perform RP-initiated logout against the IdP. Alternatively, your app can render a different link that points to wherever you’ve configured logout_uri. The key idea is: your app points to the local logout_uri, and NGINX handles the IdP call.

What userinfo on Does Under the Hood

Open your app in a fresh browser session. On the first request, NGINX sees you’re unauthenticated (based on the session cookie) and redirects you to OneLogin. After the authorization code flow, we move on to the code-for-tokens exchange step. We covered this process in detail in a previous article. Because userinfo on is enabled, right after NGINX obtains and validates the token set, it calls the userinfo_endpoint from the metadata:

NGINX -> OneLogin:
GET /oidc/2/me HTTP/1.1
Host: route443-dev.onelogin.com
Connection: close
Authorization: Bearer <access_token>

OneLogin -> NGINX:
HTTP/1.1 200 OK
Content-Type: application/json
...
{"sub":"177988316","email":"user4@route443.dev","preferred_username":"user4","name":"user4"}

NGINX parses the JSON and merges these claims into $oidc_claim_*. Userinfo claims override same-named claims from the id_token. In this example, $oidc_claim_email becomes user4@route443.dev. For troubleshooting, you can inspect the raw body via the $oidc_userinfo variable.

RP-Initiated Logout

Alright, after authentication and authorization have succeeded and the user has gained access to the application, let’s try signing out. To do this, we’ll open the following link in the browser: https://demo.route443.dev/logout?user=user4. Based on the logout_uri directive, NGINX will understand that it needs to initiate an RP-initiated logout and will redirect the user to the OneLogin provider’s end_session_endpoint - that is, to https://route443-dev.onelogin.com/oidc/2/logout, which we obtained from the metadata. At the same time, NGINX will add the id_token_hint parameter to the request, which contains the user’s ID token that we previously obtained. So the request will look like this:

User Agent -> NGINX: HTTP GET /logout?user=user4
Host: demo.route443.dev
Cookie: NGX_OIDC_SESSION=ae00b3f...;

NGINX -> User Agent: HTTP/1.1 302 Found
Location: https://route443-dev.onelogin.com/oidc/2/logout?client_id=37e2eb90...&id_token_hint=ey...&post_logout_redirect_uri=https://demo.route443.dev/post_logout/
Set-Cookie: NGX_OIDC_SESSION=; httponly; secure; path=/

Look closely at the redirect NGINX issues when you hit your local logout_uri. You’ll see it added id_token_hint=<JWT> before sending the browser to the IdP’s end_session_endpoint. That happens because you enabled logout_token_hint on in your NGINX config. With OneLogin this isn’t optional: omit the hint and you’ll be greeted by HTTP 400. With most other providers, the hint is optional, which is exactly why we don’t recommend turning it on unless your IdP demands it.
This is the only request where NGINX puts the ID token on the wire in clear view of the user agent and intermediaries, so if you don’t need to expose it, don’t.

There’s also a UX nuance here. Some IdPs change behavior depending on whether id_token_hint is present. With the hint, you might see an explicit “Are you sure you want to sign out?” confirmation. Without it, the same provider might tear down the session immediately. Same endpoint, different feel. Know what your IdP does and choose intentionally.

You’ll notice another parameter NGINX appends: post_logout_redirect_uri. That’s the return address after a successful logout - it must match what you configured in NGINX and what you allowed in the IdP app. In our example, it’s https://demo.route443.dev/post_logout/, which is exactly where the browser lands after OneLogin is done.

Now, about the cookie that mysteriously vanishes. NGINX clears NGX_OIDC_SESSION right away. It does this defensively because it cannot predict what the IdP will do next - bounce you to a login page, show a confirmation screen, or even fail. Clearing the local session guarantees you won’t keep accidental access if the IdP misbehaves.

Why does that matter? Imagine the provider fails to fully drop the server-side session state. On your next request back to the app, NGINX will dutifully send you to the IdP, the IdP will happily say “oh, you’re still good,” and you’ll be right back in the app with no fresh authentication. That’s not the logout story you want. The takeaway is simple: test RP-initiated logout meticulously with your provider, verify that the server-side session is killed, and only then call it done.

In our happy-path run, the flow is pleasantly uneventful: NGINX redirects with id_token_hint (because OneLogin requires it) and post_logout_redirect_uri, OneLogin terminates the session, sends the browser back to /post_logout/, and the user gets their minimalist “Arrivederci!” confirmation.
Clean in, clean out:

User Agent -> OneLogin: HTTP GET /oidc/2/logout?client_id=37e2eb90...&id_token_hint=ey...&post_logout_redirect_uri=https://demo.route443.dev/post_logout/
Host: route443-dev.onelogin.com

OneLogin -> User Agent: HTTP 302 Moved Temporarily
Location: https://demo.route443.dev/post_logout/

Declarative RBAC

OIDC in NGINX isn’t only about passing identity claims upstream for SSO; it’s also about using those claims to control who gets to which resource. In R34, we had to lean on auth_jwt_require with a little dance: after a user authenticated, we’d re-feed the ID token to the auth_jwt module just so we could gate access on claims. It worked, but it added config noise, extra $jwt_* variables, and CPU overhead (parsing the token on every request). R35 finally removes that crutch. The new auth_require directive lets us use OIDC claims directly in NGINX - no more auth_jwt workaround. The module itself isn’t tied to OIDC; you can pair it with any NGINX config, but with auth_oidc it gives you clean, declarative RBAC right in your config.

We’ll keep it practical: imagine two areas in your app, /admin and /support. Admins should access the /admin location; admins or folks with the support permission should see the /support location. Here’s what that looks like in NGINX:

map $oidc_claim_groups $is_admin {
    default 0;
    ~*(^|\s)admin(\s|$) 1;
}

map $oidc_claim_email $is_corp_user {
    default 0;
    ~*@example\.com$ 1;
}

# OR logic
map "$is_admin$is_corp_user" $admin_or_corp {
    default 0;
    ~1 1;
}

server {
    # ...

    location /admin/ {
        auth_require $is_admin;
        proxy_pass http://app;
    }

    location /support/ {
        auth_require $admin_or_corp;
        proxy_pass http://app;
    }
}

In this example we keep things simple: $is_admin comes from the groups claim (we treat it as a space-delimited string), and $is_corp_user checks that the user’s email ends with @example.com. We then build a tiny OR with another map: if either flag is 1, $admin_or_corp becomes 1.
From there, auth_require is straightforward: it allows access when the referenced variable is non-empty and not "0", and denies with HTTP 403 by default (you can override the status with error=<4xx|5xx>). Remember that listing multiple variables in a single auth_require is a logical AND; for OR, precompute a boolean with map as shown above.

Wrap-Up

The OIDC improvements in R35 are a meaningful milestone for the module. NGINX Plus R35 lifts the OIDC client from “almost there” to nearly complete: a reliable RP-initiated logout, first-class userinfo integration, and a new auth_require directive for clean, declarative RBAC right in your configuration. We’re not done, though - we still plan to fill a few remaining gaps: Front-Channel and Back-Channel logouts, PKCE, and a handful of other niceties that should cover nearly all deployment requirements. Stay tuned!

Edge Client OAuth with Azure
Hello All,

I tried the OAuth feature on Edge Client with Azure as the IdP. It works: I receive the access token and connect successfully. The problem is that the policy does not parse the JWT token and just stores it as a secure variable, so I have no information about the user. I can parse it with an iRule, but I expected it to be parsed automatically, like when you use an OAuth Client in VPE. Am I missing something?