NGINX Plus
Using F5 NGINX Plus as the Ingress Controller within Nutanix Kubernetes Platform (NKP)
Managing incoming traffic is a critical component of running applications efficiently within Kubernetes clusters. As organizations continue to deploy a growing number of microservices, the need for robust, flexible, and intelligent traffic management solutions becomes more apparent. In this article, we provide an overview of how F5 NGINX Plus, when used as the ingress controller in the Nutanix Kubernetes Platform (NKP), offers a comprehensive approach to traffic optimization, application reliability, and security.

F5 NGINX Plus R35 Release Now Available
We're excited to announce the availability of F5 NGINX Plus Release 35 (R35). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R35 include:

- ACME protocol support: This release introduces native support for the Automated Certificate Management Environment (ACME) protocol in NGINX Plus. The ACME protocol automates SSL/TLS certificate lifecycle management by enabling direct communication between clients and certificate authorities for issuance, installation, revocation, and replacement of SSL certificates.
- Automatic JWT Renewal and Update: This capability simplifies the NGINX Plus renewal experience by automating the process of updating the license JWT for F5 NGINX instances communicating directly with the F5 licensing endpoint for license reporting.
- Native OIDC Enhancements: This release includes additional enhancements to the native OpenID Connect module, adding support for Relying Party (RP)-Initiated Logout and the UserInfo endpoint to streamline authentication workflows.
- Support for Early Hints: NGINX Plus R35 introduces support for Early Hints (HTTP 103), which optimizes website performance by allowing browsers to preload resources before the final server response, reducing latency and accelerating content display.
- QUIC – CUBIC Congestion Control: With R35, we have extended support for congestion algorithms in our HTTP/3 (QUIC) implementation to also support CUBIC, which provides better bandwidth utilization, resulting in quicker load times and faster downloads.
- NGINX JavaScript QuickJS – Full ES2023 support: With this NGINX Plus release, we now support the full ES2023 JavaScript specification for the QuickJS runtime for your custom NGINX scripting and extensibility needs using NGINX JavaScript.

Changes to Platform Support

NGINX Plus R35 introduces the following updates to the NGINX Plus technical specification.

Added platforms: Support for the following platforms has been added with this release.
- Alpine Linux 3.22
- RHEL 10

Removed platforms: Support for the following platforms has been removed starting with this release.
- Alpine Linux 3.18 – reached end of support in May 2025
- Ubuntu 20.04 (LTS) – reached end of support in May 2025

Deprecated platforms:
- Alpine Linux 3.19

Note: For SUSE Linux Enterprise Server (SLES) 15, SP6 is now the required service pack version. The older service packs have been EOL'ed by the vendor and are no longer supported.

New Features in Detail

ACME Protocol Support

The ACME protocol (Automated Certificate Management Environment) is a communications protocol primarily designed to automate the process of issuing, validating, renewing, and revoking digital security certificates (e.g., TLS/SSL certificates). It allows clients to interact with a Certificate Authority (CA) without requiring manual intervention, simplifying the deployment of secure websites and other services that rely on HTTPS. With the NGINX Plus R35 release, we are pleased to announce the preview release of native ACME support in NGINX. ACME support is available as a Rust-based dynamic module for both NGINX Open Source and enterprise F5 NGINX One customers using NGINX Plus. Native ACME support greatly simplifies and automates the process of obtaining and renewing SSL/TLS certificates. There's no need to track certificate expiration dates and manually update or review configs each time an update is needed.
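For a sense of what this looks like in practice, here is a minimal sketch of an HTTP-01 setup with the preview module. The directive and variable names (acme_issuer, acme_certificate, $acme_certificate, $acme_certificate_key) follow the preview documentation, while the CA URI, contact address, and hostname are placeholders - treat this as a starting point to check against the current NGINX ACME docs rather than a drop-in configuration.

    # Minimal sketch of native ACME issuance (verify directive details against the NGINX docs)
    resolver 127.0.0.1;                # a resolver is needed so NGINX can reach the CA

    acme_issuer letsencrypt {
        uri      https://acme-v02.api.letsencrypt.org/directory;
        contact  admin@example.com;    # placeholder contact address
        accept_terms_of_service;
    }

    server {
        listen 80;                     # HTTP-01 challenges are answered over plain HTTP
        listen 443 ssl;
        server_name www.example.com;   # placeholder hostname

        acme_certificate letsencrypt;  # obtain and renew a certificate for server_name

        ssl_certificate      $acme_certificate;
        ssl_certificate_key  $acme_certificate_key;
    }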
With this support, NGINX can now directly communicate with ACME-compatible Certificate Authorities (CAs) like Let's Encrypt to handle certificate management without requiring external tools such as certbot or cert-manager, or ongoing manual intervention. This reduces complexity, minimizes operational overhead, and streamlines the deployment of encrypted HTTPS for websites and applications, while also making the certificate management process more secure and less error-prone. The implementation introduces a new module, ngx_http_acme_module, providing built-in directives for requesting, installing, and renewing certificates directly from the NGINX configuration. The current implementation supports the HTTP-01 challenge, with support for the TLS-ALPN and DNS-01 challenges planned for the future. For a detailed overview of the implementation and the value it brings, refer to the ACME blog post. For step-by-step instructions on how to configure ACME in your environment, refer to the NGINX docs.

Automatic JWT Renewal and Update

This feature enables the automatic update of the JWT license for customers reporting their usage directly to the F5 licensing endpoint (product.connect.nginx.com) after a successful renewal of the subscription. The feature applies to subscriptions nearing expiration (within 30 days) as well as subscriptions that have expired but remain within the 90-day grace period. Here is how this feature works:

- Starting 30 days prior to JWT license expiration, NGINX Plus notifies the licensing endpoint server of the upcoming expiration as part of the automatic usage reporting process.
- The licensing endpoint server continually checks for a renewed NGINX One subscription with the F5 CRM system.
- Once the subscription is successfully renewed, the F5 licensing endpoint server sends the updated JWT to the corresponding NGINX Plus instance.
- The NGINX Plus instance, in turn, automatically deploys the renewed JWT license to the location based on your existing configuration, without the need for any NGINX reload or service restart.

Note: The renewed JWT file received from F5 is named nginx-mgmt-license and is located at the state_path location on your NGINX instance. For more details, refer to the NGINX docs.

Native OpenID Connect Module Enhancements

The NGINX Plus R34 release introduced native support for OpenID Connect (OIDC) authentication. Continuing the momentum, we are excited to add support for OIDC Relying Party (RP) Initiated Logout along with support for retrieving claims via the OIDC UserInfo endpoint in this release.

Relying Party (RP) Initiated Logout

RP-Initiated Logout is a method used in federated authentication systems (e.g., systems using OpenID Connect (OIDC) or Security Assertion Markup Language (SAML)) to allow a user to log out of an application (called the relying party) and propagate the logout request to other services in the authentication ecosystem, such as the identity provider (IdP) and other sessions tied to the user. This facilitates session synchronization and clean-up across multiple applications or environments. The RP-Initiated Logout support in the NGINX native OIDC module helps provide a seamless user experience by enhancing the consistency of authentication and logout workflows, particularly in Single Sign-On (SSO) environments. It significantly helps improve security by ensuring user sessions are terminated securely, thereby reducing the risk of unauthorized access.
It also simplifies development processes by minimizing the need for custom coding and promoting adherence to best practices. Additionally, it strengthens user privacy and supports compliance efforts by enabling users to easily terminate sessions, thereby reducing the exposure from lingering sessions. The implementation involves the client (browser) initiating a logout by sending a request to the relying party's (NGINX) logout endpoint. NGINX (the RP) adds additional parameters to the request and redirects it to the IdP, which terminates the associated user session and redirects the client to the specified post_logout_uri. Finally, NGINX as the relying party presents a post-logout confirmation page, signaling the completion of the logout process and ensuring session termination across both the relying party and the identity provider.

UserInfo Retrieval Support

The OIDC UserInfo endpoint is used by applications to retrieve profile information about the authenticated identity. Applications can use this endpoint to retrieve profile information, preferences, and other user-specific information to ensure a consistent user management process. Support for the UserInfo endpoint in the native OIDC module provides a standardized mechanism to fetch user claims from identity providers (IdPs), helping simplify authentication workflows and reducing overall system complexity. Having a standard mechanism also helps define and adopt development best practices across client applications for retrieving user claims, offering tremendous value to developers, administrators, and end users. The implementation enables the RP (NGINX) to call an identity provider's OIDC UserInfo endpoint with the access token (Authorization: Bearer) and obtain scope-dependent end-user claims (e.g., profile, email, scope, address). This provides a standard, configuration-driven mechanism for claim retrieval across client applications and reduces integration complexity.

Several new directives (logout_uri, post_logout_uri, logout_token_hint, and userinfo) have been added to the ngx_http_oidc_module to support both of these features. Refer to our technical blog on how NGINX Plus R35 offers frictionless logout and UserInfo retrieval support as part of the native OIDC implementation for a comprehensive overview of both features and how they work under the hood. For instructions on how to configure the native OIDC module for various identity providers, refer to the NGINX deployment guide.

Early Hints Support

Early Hints (RFC 8297) is an HTTP status code that improves website performance by allowing the server to send preliminary hints to the client before the final response is ready. Specifically, the server sends a 103 status code with headers indicating which resources (like CSS, JavaScript, and images) the client can pre-fetch while the server is still working on generating the full response. The majority of web browsers, including Chrome, Safari, and Edge, support it today. A new NGINX directive, early_hints, has been added to specify the conditions under which backends can send Early Hints to the client. NGINX will parse the Early Hints from the backend and send them to the client. The following example shows how to proxy Early Hints for HTTP/2 and HTTP/3 clients and disable them for HTTP/1.1:

    early_hints $http2$http3;
    proxy_pass http://bar.example.com;

For more details, refer to the NGINX docs and a detailed blog on Early Hints support in NGINX.
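To put that snippet in context, a fuller sketch might look like the following; the surrounding server block and listeners are illustrative assumptions, and the upstream hostname is the placeholder from the example above.

    # Sketch: proxy Early Hints only for clients that negotiated HTTP/2 or HTTP/3
    server {
        listen 443 ssl;
        listen 443 quic reuseport;     # optional HTTP/3 listener
        http2 on;

        location / {
            # $http2/$http3 are empty for HTTP/1.x clients, so hints stay disabled for them
            early_hints $http2$http3;
            proxy_pass  http://bar.example.com;
        }
    }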
QUIC – Support for CUBIC Congestion Control Algorithm

CUBIC is a congestion control algorithm designed to optimize internet performance. It is widely used and well-tested in TCP implementations and excels in high-bandwidth, high-delay networks by efficiently managing data transmission, ensuring faster speeds, rapid recovery from congestion, and reduced latency. Its adaptability to various network conditions and fair resource allocation make it a reliable choice for delivering a smooth and responsive online experience and enhancing overall user satisfaction. We announced support for the CUBIC congestion algorithm in NGINX Open Source mainline version 1.27.4. All the bug fixes and enhancements since then are being merged into NGINX Plus R35. For a detailed overview of the implementation, refer to our blog on the topic.

NGINX JavaScript QuickJS – Full ES2023 Support

We introduced preview support for the QuickJS runtime in NGINX JavaScript (njs) version 0.8.6 in the NGINX Plus R33 release. We have been quietly focused on this initiative since, and are pleased to announce full ES2023 JavaScript specification support in NGINX JavaScript (njs) version 0.9.1 with the NGINX Plus R35 release. With full ES2023 specification support, you can now use the latest JavaScript features that modern developers expect as standard to extend NGINX capabilities using njs. Refer to this detailed blog for a comprehensive overview of our QuickJS implementation, the motivation behind QuickJS runtime support, and where we are headed with NGINX JavaScript. For specific details on how you can leverage QuickJS in your njs scripts, please refer to our documentation.

Other Enhancements and Bug Fixes

Variable-Based Access Control Support

To enable robust access control using identity claims, R34 and earlier versions required a workaround involving the auth_jwt_require directive. This involved reprocessing the ID token with the auth_jwt module to manage access based on claims. This approach introduced configuration complexity and performance overhead. With R35, NGINX simplifies this process through the auth_require directive, which allows direct use of claims for resource-based access control without relying on auth_jwt. This directive is part of a new module, ngx_http_auth_require_module, added in this release. For example, the following NGINX OIDC configuration maps the role claim from the id_token to the $admin_role variable and sets it to 1 if the user's role is "admin". The /admin location block then uses auth_require $admin_role to restrict access, allowing only users with the admin role to proceed.

    http {
        oidc_provider my_idp {
            ...
        }

        map $oidc_claim_role $admin_role {
            "admin" 1;
        }

        server {
            auth_oidc my_idp;

            location /admin {
                auth_require $admin_role;
            }
        }
    }

Though the directive is not exclusive to OIDC, when paired with auth_oidc it provides a clean and declarative Role-Based Access Control (RBAC) mechanism within the server configuration. For example, you can easily configure access so only admins reach the /admin location, while either admins or users with specific permissions access other locations. The result is streamlined, efficient, and practical access management directly in NGINX. Note that the new auth_require directive does not replace auth_jwt_require, as the two serve distinct purposes. While auth_jwt_require is an integral part of JWT validation in the JWT module, focusing on header and claim checks, auth_require operates in a separate ACCESS phase for access control.
Deprecating auth_jwt_require would reduce flexibility, particularly in "satisfy" modes of operation, and complicate configurations. Additionally, auth_jwt_require plays a critical role in initializing JWT-related variables, enabling their use in subrequests. This initialization, crucial for JWE claims, cannot be done via REWRITE module directives, as JWE claims are not available before JWT decryption.

Support for JWS RSASSA-PSS Algorithms

RSASSA-PSS algorithms are used for verifying the signatures of JSON Web Tokens (JWTs) to ensure their authenticity and integrity. In NGINX, these algorithms are typically employed via the auth_jwt_module when validating JWTs signed using RSASSA-PSS. We are adding support for the following algorithms, as specified in RFC 7518 (Section 3.5):

- PS256
- PS384
- PS512

Improved Node Outage Detection and Logging

This release also introduces improvements in the timeout handling for zone_sync connections, enabling faster detection of offline nodes and reducing counter accumulation risks. This improvement is aimed at better synchronization of nodes in a cluster and early detection of failures, improving the system's overall performance and reliability. Additional heuristics have been added to detect blocked workers and proactively address prolonged event loop times.

License API Updates

The NGINX license API endpoint now provides additional information. The "uuid" parameter in the license information is now available via the API endpoint.

Changes Inherited from NGINX Open Source

NGINX Plus R35 is based on the NGINX 1.29.0 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R34 was released (which was based on the 1.27.4 mainline release).

Features:
- Early Hints support - support for response code 103 from proxy and gRPC backends.
- CUBIC congestion control algorithm support in QUIC connections.
- Loading of secret keys from hardware tokens with the OpenSSL provider.
- Support for the "so_keepalive" parameter of the "listen" directive on macOS.

Changes:
- The logging level of SSL errors in a QUIC handshake has been changed from "error" to "crit" for critical errors, and to "info" for the rest; the logging level of unsupported QUIC transport parameters has been lowered from "info" to "debug".

Bug Fixes:
- nginx could not be built by gcc 15 if the ngx_http_v2_module or ngx_http_v3_module modules were used.
- nginx might not be built by gcc 14 or newer with -O3 -flto optimization if ngx_http_v3_module was used.
- In the "grpc_ssl_password_file", "proxy_ssl_password_file", and "uwsgi_ssl_password_file" directives when loading SSL certificates and encrypted keys from variables; the bug had appeared in 1.23.1.
- In the $ssl_curve and $ssl_curves variables when using pluggable curves in OpenSSL.
- nginx could not be built with musl libc.
- Bug fixes and performance improvements in HTTP/3.

Security:
- (CVE-2025-53859) SMTP authentication process memory over-read: this vulnerability in the NGINX ngx_mail_smtp_module may allow an unauthenticated attacker to trigger a buffer over-read, resulting in worker process memory disclosure to the authentication server.

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module

NGINX Plus R35 incorporates changes from the NGINX JavaScript (njs) module version 0.9.1. The following is a list of notable changes in njs since 0.8.9 (which was the version shipped with NGINX Plus R34).

Features:
- Added support for the QuickJS-NG library.
- Added support for the WebCrypto API, Fetch API, TextEncoder and TextDecoder, and the querystring, crypto, and xml modules for the QuickJS engine.
- Added a state file for a shared dictionary.
- Added ECDH support for WebCrypto.
- Added support for reading r.requestText or r.requestBuffer from a temporary file.

Improvements:
- Performance improvements due to refactored handling of built-in strings, symbols, and small integers.
- Multiple memory usage improvements.
- Improved reporting of unhandled promise rejections.

Bug Fixes:
- Fixed a segfault in njs_property_query(). The issue was introduced in b28e50b1 (0.9.0).
- Fixed Function constructor template injection.
- Fixed GCC compilation with the O3 optimization level.
- Fixed the "constant is too large for 'long'" warning on MIPS -mabi=n32.
- Fixed %TypedArray%.from() when the buffer is detached by the mapper.
- Fixed %TypedArray%.prototype.slice() with overlapping buffers.
- Fixed handling of detached buffers for typed arrays.
- Fixed frame saving for async functions with closures.
- Fixed RegExp compilation of patterns with escaped '[' characters.
- Fixed handling of undefined values of a captured group in RegExp.prototype[Symbol.split]().
- Fixed a GCC 15 build error with -Wunterminated-string-initialization.
- Fixed name corruption in variables and headers processing.
- Fixed the incr() method of a shared dictionary with an empty init argument for the QuickJS engine.
- Bugfix: accepting response headers with underscore characters in the Fetch API.
- Fixed Buffer.concat() with a single argument in QuickJS.
- Bugfix: added a missed syntax error for await in template literals.
- Fixed non-NULL-terminated string formatting in exceptions for the QuickJS engine.
- Fixed compatibility with a recent change in QuickJS and QuickJS-NG.
- Fixed serializeToString(). Previously, serializeToString() was exclusiveC14n(), which returned a string instead of a Buffer. According to the published documentation, it should be c14n().

For a comprehensive list of all the features, changes, and bug fixes, see the njs Changelog.

F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security.

Ready to try the new release? Follow this guide for more information on installing and deploying NGINX Plus.

We Heard You! R35 Brings Frictionless OIDC Logout and Richer Claims to NGINX Plus
Quick Overview

Hello friends! NGINX Plus R35 ships four new directives in the built-in ngx_http_oidc_module - logout_uri, post_logout_uri, logout_token_hint, and userinfo. Together, they finally close some of the most common end-to-end OIDC gaps: a clean, standards-aligned RP-initiated logout and easy access to user profile claims. R35 also adds a new http_auth_require_module with the auth_require directive so you can implement RBAC checks directly - no more auth_jwt_require + auth_jwt workaround from R34. auth_require isn't OIDC-specific; you can use it anywhere. In this post, though, we'll look at it briefly through an OIDC lens. Rather than drowning you in implementation minutiae or code samples, we'll walk the actual traffic flow step by step. That way, both admins and engineers can see exactly what's on the wire and why it matters.

What's new

- logout_uri - A local path users hit to start logout. NGINX constructs the correct RP-initiated logout request to your IdP, attaching all required parameters for you.
- post_logout_uri - Where the IdP should send the user after a successful logout. Set it in NGINX and also allow it in your IdP application settings.
- logout_token_hint on|off - When on, NGINX adds id_token_hint=<JWT> to the IdP's logout endpoint. Some IdPs (e.g., OneLogin) require this and will return HTTP 400 without it. For most providers, it's optional.
- userinfo on|off - When on, NGINX automatically calls the userinfo endpoint, fetches extended claims, and exposes them as $oidc_claim_* variables. You can also inspect the raw JSON via $oidc_userinfo.

There's also the new http_auth_require_module with the auth_require directive. It's not OIDC-specific, and you can use it anywhere, but in OIDC setups it's a straightforward way to implement RBAC directly against $oidc_claim_* claims without reaching for auth_jwt_require.

Using OneLogin as the example

In a previous article, we used Keycloak as the IdP. This time we'll try a hosted provider, OneLogin - because it has a few behaviors that make the new features shine. Everything here applies to other providers with the usual small differences.

- Create an application: OpenID Connect (OIDC) -> Web Application.
- Sign-in Redirect URIs: https://demo.route443.dev/oidc_callback
- Sign-out Redirect URIs: https://demo.route443.dev/post_logout/ (Both must match what you configure in NGINX.)
- On the SSO tab, copy Client ID, Client Secret, and Issuer (typically https://<subdomain>.onelogin.com/oidc/2).
- On Assignments, grant yourself access; otherwise OneLogin will respond with access_denied after auth.

You can refer to our deployment guide for OneLogin.

Metadata sanity check

Let's fetch OneLogin's metadata and note the endpoints that matter for our flow. By default, OneLogin publishes the OpenID Provider Configuration at https://<subdomain>.onelogin.com/oidc/2/.well-known/openid-configuration:

    curl https://<subdomain>.onelogin.com/oidc/2/.well-known/openid-configuration | jq

Example (trimmed):

    {
      "issuer": "https://route443-dev.onelogin.com/oidc/2",
      "authorization_endpoint": "https://route443-dev.onelogin.com/oidc/2/auth",
      "token_endpoint": "https://route443-dev.onelogin.com/oidc/2/token",
      "userinfo_endpoint": "https://route443-dev.onelogin.com/oidc/2/me",
      "end_session_endpoint": "https://route443-dev.onelogin.com/oidc/2/logout"
    }

Two important notes:

- For RP-initiated logout, NGINX only uses end_session_endpoint from metadata - you can't override it in the config. If it's missing, you won't get a proper RP-initiated logout.
- If userinfo is on, NGINX will call userinfo_endpoint immediately after exchanging the code for tokens. If userinfo is unavailable, NGINX returns HTTP 500 to the client. Unlike some clients, this is not a soft failure, so if you enable userinfo, make sure that endpoint is up during login.

A minimal, working R35 config

    http {
        resolver 1.1.1.1 valid=300s ipv4=on;

        oidc_provider onelogin {
            issuer            https://route443-dev.onelogin.com/oidc/2;
            client_id         37e2eb90-...;
            client_secret     4aeca...;
            logout_uri        /logout;
            post_logout_uri   https://demo.route443.dev/post_logout/;
            logout_token_hint on;
            userinfo          on;
        }

        server {
            listen 443 ssl;
            server_name demo.route443.dev;

            ssl_certificate     ...;
            ssl_certificate_key ...;

            auth_oidc onelogin;

            proxy_set_header X-Sub      $oidc_claim_sub;
            proxy_set_header X-Userinfo $oidc_userinfo;

            location / {
                proxy_pass http://app;
            }

            location /post_logout/ {
                return 200 "Arrivederci!\n";
                default_type text/plain;
            }
        }

        upstream app {
            server 127.0.0.1:8080;
        }
    }

Reload (nginx -s reload) and we're ready to test. Quick note: the logout_uri value (/logout) is a trigger, not a location you need to implement yourself. If your app exposes a "Sign out, %username%" link like /logout?user=foo, hitting that URL causes NGINX to perform RP-initiated logout against the IdP. Alternatively, your app can render a different link that points to wherever you've configured logout_uri. The key idea is: your app points to the local logout_uri and NGINX handles the IdP call.

What userinfo on Does Under the Hood

Open your app in a fresh browser session. On the first request, NGINX sees you're unauthenticated (based on the session cookie) and redirects you to OneLogin. After the authorization code flow, we will now move to the Exchange Code -> Tokens step. We covered this process in detail in a previous article. Because userinfo on is enabled, right after NGINX obtains and validates the token set, it calls the userinfo_endpoint from the metadata:

    NGINX -> OneLogin:
    GET /oidc/2/me HTTP/1.1
    Host: route443-dev.onelogin.com
    Connection: close
    Authorization: Bearer <access_token>

    OneLogin -> NGINX:
    HTTP/1.1 200 OK
    Content-Type: application/json
    ...
    {"sub":"177988316","email":"user4@route443.dev","preferred_username":"user4","name":"user4"}

NGINX parses the JSON and merges these claims into $oidc_claim_*. Userinfo claims override same-named claims from the id_token. In this example, $oidc_claim_email becomes user4@route443.dev. For troubleshooting, you can inspect the raw body via the $oidc_userinfo variable.
So the request will look like this:

    User Agent -> NGINX:
    HTTP GET /logout?user=user4
    Host: demo.route443.dev
    Cookie: NGX_OIDC_SESSION=ae00b3f...;

    NGINX -> User Agent:
    HTTP/1.1 302 Found
    Location: https://route443-dev.onelogin.com/oidc/2/logout?client_id=37e2eb90...&id_token_hint=ey...&post_logout_redirect_uri=https://demo.route443.dev/post_logout/
    Set-Cookie: NGX_OIDC_SESSION=; httponly; secure; path=/

Look closely at the redirect NGINX issues when you hit your local logout_uri. You'll see it added id_token_hint=<JWT> before sending the browser to the IdP's end_session_endpoint. That happens because you enabled logout_token_hint on in your NGINX config. With OneLogin this isn't optional: omit the hint and you'll be greeted by HTTP 400. With most other providers, the hint is optional, which is exactly why we don't recommend turning it on unless your IdP demands it. This is the only request where NGINX puts the ID token on the wire in clear view of the user agent and intermediaries, so if you don't need to expose it, don't.

There's also UX nuance here. Some IdPs change behavior depending on whether id_token_hint is present. With the hint, you might see an explicit "Are you sure you want to sign out?" confirmation. Without it, the same provider might tear down the session immediately. Same endpoint, different feel. Know what your IdP does and choose intentionally.

You'll notice another parameter NGINX appends: post_logout_redirect_uri. That's the return address after a successful logout - it must match what you configured in NGINX and what you allowed in the IdP app. In our example, it's https://demo.route443.dev/post_logout/, which is exactly where the browser lands after OneLogin is done.

Now, about the cookie that mysteriously vanishes. NGINX clears NGX_OIDC_SESSION right away. It does this defensively because it cannot predict what the IdP will do next - bounce you to a login page, show a confirmation screen, or even fail. Clearing the local session guarantees you won't keep accidental access if the IdP misbehaves. Why does that matter? Imagine the provider fails to fully drop the server-side session state. On your next request back to the app, NGINX will dutifully send you to the IdP, the IdP will happily say "oh, you're still good," and you'll be right back in the app with no fresh authentication. That's not the logout story you want. The takeaway is simple: test RP-initiated logout meticulously with your provider, verify that the server-side session is killed, and only then call it done.

In our happy-path run, the flow is pleasantly uneventful: NGINX redirects with id_token_hint (because OneLogin requires it) and post_logout_redirect_uri, OneLogin terminates the session, sends the browser back to /post_logout/, and the user gets their minimalist "Arrivederci!" confirmation. Clean in, clean out:

    User Agent -> OneLogin:
    HTTP GET /oidc/2/logout?client_id=37e2eb90...&id_token_hint=ey...&post_logout_redirect_uri=https://demo.route443.dev/post_logout/
    Host: route443-dev.onelogin.com

    OneLogin -> User Agent:
    HTTP 302 Moved Temporarily
    Location: https://demo.route443.dev/post_logout/

Declarative RBAC

OIDC in NGINX isn't only about passing identity claims upstream for SSO; it's also about using those claims to control who gets to which resource. In R34, we had to lean on auth_jwt_require with a little dance: after a user authenticated, we'd re-feed the ID token to the auth_jwt module just so we could gate access on claims.
It worked, but it added config noise, extra $jwt_* variables, and CPU (parsing the token on every request). R35 finally removes that crutch. The new auth_require directive lets us use OIDC claims directly in NGINX - no more "auth_jwt" workaround. The module itself isn't tied to OIDC; you can pair it with any NGINX config, but with auth_oidc it gives you clean, declarative RBAC right in your config. We'll keep it practical: imagine two areas in your app, /admin and /support. Admins should access the /admin location; admins or folks with the support permission should see the /support location. Here's what that looks like in NGINX.

    map $oidc_claim_groups $is_admin {
        default             0;
        ~*(^|\s)admin(\s|$) 1;
    }

    map $oidc_claim_email $is_corp_user {
        default          0;
        ~*@example\.com$ 1;
    }

    # OR logic
    map "$is_admin$is_corp_user" $admin_or_corp {
        default 0;
        ~1      1;
    }

    server {
        # ...

        location /admin/ {
            auth_require $is_admin;
            proxy_pass http://app;
        }

        location /support/ {
            auth_require $admin_or_corp;
            proxy_pass http://app;
        }
    }

In this example we keep things simple: $is_admin comes from the groups claim (we treat it as a space-delimited string) and $is_corp_user checks that the user's email ends with @example.com. We then build a tiny OR with another map: if either flag is 1, $admin_or_corp becomes 1. From there, auth_require is straightforward - it allows access when the referenced variable is non-empty and not "0", and denies with HTTP 403 by default (you can override the status with error=<4xx|5xx>). Remember that listing multiple variables in a single auth_require is a logical AND; for OR, precompute a boolean with map as shown above.

Wrap-Up

OIDC improvements in R35 are a meaningful milestone for the module. NGINX Plus R35 lifts the OIDC client from "almost there" to nearly complete: a reliable RP-initiated logout, first-class userinfo integration, and a new auth_require for clean, declarative RBAC right in your configuration. We're not done, though: we still plan to fill a few remaining gaps - Front-Channel and Back-Channel logouts, PKCE, and a handful of other niceties that should cover nearly all deployment requirements. Stay tuned!

Announcing F5 NGINX Instance Manager 2.20
We're thrilled to announce the release of F5 NGINX Instance Manager 2.20, now available for download! This update focuses on improving accessibility, simplifying deployments, improving observability, and enriching the user experience based on valuable customer feedback.

What's New in This Release?

Lightweight Mode for NGINX Instance Manager

Reduce resource usage with the new "Lightweight Mode", which allows you to deploy F5 NGINX Instance Manager without requiring a ClickHouse database. While metrics and events will no longer be available without ClickHouse, all other instance management functionalities - such as certificate management, WAF, templates, and more - will work seamlessly across VM, Docker, and Kubernetes installations. With this change, ClickHouse becomes optional for deployments. Customers who require metrics and events should continue to include ClickHouse in their setup. For those focused on basic use cases, Lightweight Mode offers a streamlined deployment that reduces system complexity while maintaining core functionality for essential tasks. Lightweight Mode is perfect for customers who need simplified management capabilities for scenarios such as:

- Fleet Management
- WAF Configuration
- Usage Reporting as Part of Your Subscription (for NGINX Plus R33 or later)
- Certificate Management
- Managing Templates
- Scanning Instances
- Enabling API-based GitOps for Configuration Management

In testing, NGINX Instance Manager worked well without ClickHouse. It only needed 1 CPU and 1 GB of RAM to manage up to 10 instances (without App Protect). However, please note that this represents the absolute minimum configuration and may result in performance issues depending on your use case. For optimal performance, we recommend allocating more appropriate system resources. See the updated technical specification in the documentation for more details.

Support for Multiple Subscriptions

Align and consolidate usage from multiple NGINX Plus subscriptions on a single NGINX Instance Manager instance. This feature is especially beneficial for customers who use NGINX Instance Manager as a reporting endpoint, even in disconnected or air-gapped environments. This feature was added with NGINX Plus R33.

Improved Licensing and Reporting for Disconnected Environments

Managing NGINX Instance Manager in environments with no outbound internet connectivity is now simpler. Customers can configure NGINX Instance Manager to use a forward proxy for licensing and reporting. For truly air-gapped environments, we've improved offline licensing: upload your license JWT to activate all features, and enjoy a 90-day grace period to submit an initial report to F5. We've also revamped the usage reporting script to be more intuitive and backwards-compatible with older versions.

Enhanced User Interface

We've modernized the NGINX Instance Manager UI to streamline navigation and make it consistent with the F5 NGINX One Console. Features are now grouped into submenus for easier access. Additionally, breadcrumbs have been added to all pages for improved usability.

Instance Export Enhancements

We've added the ability to export instances and instance groups, simplifying the process of managing and sharing configuration details. This improvement makes it easier to keep track of large deployments and maintain consistency across environments.

Performance and Stability Improvements

With this release, we've made performance and stability improvements, a key part of every update to ensure NGINX Instance Manager runs smoothly in all environments.
We've also addressed multiple bugs in this release to improve stability and reliability. For more details on all the fixes included, please visit the release notes.

Platform Improvements and Helm Chart Migration

We've made significant enhancements to the Helm charts to simplify the installation process for NGINX Instance Manager in Kubernetes environments. Starting with this release, the Helm charts have moved to a new repository, nginx-stable/nim, with chart version 2.0. Note: NGINX Instance Manager versions 2.19 or lower will remain in the old repository, nms-stable/nms-hybrid. Be sure to update your configurations accordingly when upgrading to version 2.20 or later.

Looking Ahead: Security, Modernization, and Kubernetes Innovations

As part of the F5 NGINX One product offering, NGINX Instance Manager continues to evolve to meet the demands of modern infrastructures. We're committed to improving security, scalability, usability, and observability to align with your needs. Although support for the latest F5 NGINX Agent v3 is not included in this release, we are actively exploring ways to enable it later this year to bring additional value to both NGINX Instance Manager and the NGINX One Console. Additionally, we're exploring new ways to enhance support for data plane NGINX deployments, particularly in Kubernetes environments. Stay tuned for updates as we continue to innovate for cloud-native and containerized workloads. We're eager to hear your feedback to help shape the roadmap for future releases.

Get Started Now

To explore the new lightweight mode, enhanced UI, and updated features, download NGINX Instance Manager 2.20. For more details on bug fixes and performance improvements, check out the full release notes.

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX Instance Manager is part of F5's Application Delivery & Security Platform. It helps organizations deliver, optimize, and secure modern applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX Instance Manager is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. A cornerstone of the NGINX One package, NGINX Instance Manager extends the capabilities of open-source NGINX with features designed specifically for enterprise-grade performance, scalability, and security.

F5 NGINX Plus R33 Licensing and Usage Reporting
Beginning with F5 NGINX Plus version R33, all customers are required to deploy a JSON Web Token (JWT) license for each commercial instance of NGINX Plus. Each instance is responsible for validating its own license status. Furthermore, NGINX Plus will report usage either to the F5 NGINX licensing endpoint or to the F5 NGINX Instance Manager for customers who are connected. For those customers who are disconnected or operate in an air-gapped environment, usage can be reported directly to the F5 NGINX Instance Manager. To learn more about the latest features of NGINX R33, please check out the recent blog post.

Install or Upgrade NGINX Plus R33

To successfully upgrade to NGINX Plus R33 or perform a fresh installation, begin by downloading the JWT license from your F5 account. Once you have the license, place it in the F5 NGINX directory before proceeding with the upgrade. For a fresh installation, after completing the installation, also place the JWT license in the NGINX directory. For further details, please refer to the provided instructions. This video provides a step-by-step guide on installing or upgrading to NGINX Plus R33.

Report Usage to F5 in Connected Environment

To effectively report usage data to F5 within a connected environment using NGINX Instance Manager, it's important to ensure that port 443 is open. The default configuration directs the usage endpoint to send reports directly to the F5 licensing endpoint at product.connect.nginx.com. By default, usage reporting is enabled, and it's crucial to successfully send at least one report on installation for NGINX to process traffic. However, you can postpone the initial reporting requirement by turning off the directive in your NGINX configuration. This allows NGINX Plus to handle traffic without immediate reporting during a designated grace period. To configure usage reporting to F5 using NGINX Instance Manager, update the usage endpoint to reflect the fully qualified domain name (FQDN) of the NGINX Instance Manager. For further details, please refer to the provided instructions. This video shows how to report usage in the connected environment using NGINX Instance Manager.

Report Usage to F5 in Disconnected Environment using NGINX Instance Manager

In a disconnected environment without an internet connection, you need to take certain steps before submitting usage data to F5. First, in NGINX Plus R33, update the `usage report` directive within the management block of your NGINX configuration to point to your NGINX Instance Manager host. Ensure that your NGINX R33 instances can access the NGINX Instance Manager by setting up the necessary DNS entries. Next, in the NMS configuration in NGINX Instance Manager, modify the 'mode of operation' to disconnected, save the file, and restart NGINX Instance Manager. There are multiple methods available for adding a license and submitting the initial usage report in this disconnected environment. You can use a Bash script, REST API, or the web interface. For detailed instructions on each method, please refer to the documentation. This video shows how to report usage in disconnected environments using NGINX Instance Manager.

Conclusion

The transition to NGINX Plus R33 introduces important enhancements in licensing and usage reporting that can greatly improve your management of NGINX instances. With the implementation of JSON Web Tokens (JWT), you can validate your subscription and report telemetry data more effectively.
To ensure compliance and optimize performance, it's crucial to understand the best practices for usage reporting, regardless of whether you are operating in a connected or disconnected environment. Get started today with a 30-day trial, and contact us if you have any questions.

Resources

- NGINX support documentation
- Blog announcement providing a comprehensive summary of the new features in this release.
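Before moving on, here is a minimal sketch of the management block referenced in the disconnected-environment steps above. The NGINX Instance Manager hostname is a placeholder, and the exact directive names and defaults should be checked against the NGINX Plus R33 mgmt module documentation.

    # Sketch: point usage reporting at an NGINX Instance Manager host instead of the F5 endpoint
    mgmt {
        usage_report endpoint=nim.example.com;   # placeholder NIM FQDN
    }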
F5 NGINX API Gateway: Simplify Response Manipulation

Discover how NGINX API Gateway, powered by the NJS module, can dynamically manipulate API responses to enhance functionality without backend changes. This article walks through a basic hands-on example of incrementing response values, explores batching API requests with NGINX Plus for better performance, and shares best practices from F5. Learn how to optimize your APIs, reduce latency, and unlock new possibilities with NGINX and F5's expert solutions. Read more to transform your API strategy today!

Unlocking Insights: Enhancing Observability in F5 NGINXaaS for Azure for Optimal Operations
Introduction

To understand application performance, you need more than just regular health checks. You need to look at the system's behavior, how users use it, and find possible slowdowns before they become big problems. By using F5 NGINXaaS for Azure, organizations can gain enhanced visibility into their backend applications through extensive metrics, API (access) logs, and operational logs within Azure environments. This proactive approach helps prevent minor issues from developing into major challenges while optimizing resource efficiency. This technical guide highlights advanced observability techniques and demonstrates how organizations can leverage F5 NGINXaaS to create robust, high-performing application delivery solutions that ensure seamless and responsive user experiences.

Benefits of F5 NGINX as a Service

F5 NGINXaaS for Azure provides robust integration with ecosystem tools designed to monitor and analyze application health and performance. It uses rich telemetry from granular metrics across various protocols, including HTTP, TLS, TCP, and UDP. For technical experts overseeing deployments in Azure, this service delivers valuable insights that facilitate more effective troubleshooting and optimize workflows for streamlined operations. Key advantages of F5 NGINXaaS include access to over 200 detailed health and performance metrics that are critical for ensuring application stability, scalability, and efficiency. Please refer to the documentation for detailed information to learn more about the available metrics.

There are two ways to monitor metrics in F5 NGINXaaS for Azure, providing flexibility in how you can track the health and performance of your applications:

- Azure Monitoring Integration for F5 NGINXaaS: An Azure-native solution delivering detailed analytical reports and customizable alerts.
- Grafana Dashboard Support: A visualization tool specifically designed to provide real-time, actionable insights into system health and performance.

Dive Deep with Azure Monitoring for F5 NGINXaaS

Azure Monitoring integration with F5 NGINXaaS provides a comprehensive observability solution tailored to dynamic cloud environments, equipping teams with the tools to enhance application performance and reliability. A crucial aspect of this solution is the integration of F5 NGINXaaS access and error logs, which offers insights essential for troubleshooting and resolving issues effectively. By combining these logs with deep insights into application and performance metrics such as request throughput, latency, error rates, and resource utilization, technical teams can make informed decisions to optimize their applications.

Key Features Include:

- Advanced Analytics: Explore detailed traffic patterns and usage trends to better understand application load dynamics. This allows teams to fine-tune configurations and improve performance based on actual user activity.
- Customizable Alerts: Set specific thresholds for key performance indicators to receive immediate notifications about anomalies, such as unexpected spikes in 5xx error rates or latency challenges. This proactive approach empowers teams to resolve incidents swiftly and minimize their impact.
- Detailed Metrics: Utilize comprehensive metrics encompassing connection counts, active connections, and request processing times. These insights facilitate better resource allocation and more efficient traffic management.
- Logs Integration: Access and analyze F5 NGINXaaS logs alongside performance metrics, providing a holistic view of application behavior. This integration is vital for troubleshooting, enabling teams to correlate log data with observability insights for effective issue identification and resolution.
- Scalability Insights: Monitor real-time resource allocation and consumption. Predict growth challenges and optimize scaling decisions to ensure your F5 NGINXaaS service deployments can handle variable client load effectively.

By integrating Azure Monitoring with F5 NGINXaaS, organizations can significantly enhance their resilience, swiftly tackle performance challenges, and ensure that their services consistently deliver outstanding user experiences. With actionable data at their fingertips, teams are well-positioned to achieve operational excellence and foster greater user satisfaction.

Visualize Success with Native Azure Grafana Dashboard

Enable the Grafana dashboard and import the F5 NGINXaaS metrics dashboard to take your monitoring capabilities to the next level. This dynamic integration provides a clear view of various performance metrics, allowing teams to make informed decisions backed by insightful data. Together, Azure Monitoring and the Grafana Dashboard form a strong alliance, creating a comprehensive observability solution that amplifies your application's overall performance and reliability. The Grafana interface allows real-time querying of performance metrics, offering intuitive visual tools like graphs and charts that simplify complex data interpretation. With Azure Monitoring, Grafana builds a robust observability stack, ensuring proactive oversight and reactive diagnostics.

Getting Started with NGINXaaS Azure Workshop

We have curated self-paced workshops designed to help you effectively leverage the enhanced observability features of F5 NGINXaaS. These workshops provide valuable insights and hands-on experience, empowering you to develop robust observability in a self-directed learning environment.

The Azure monitoring lab workshop will enhance your skills in creating and analyzing access logs with NGINX. You'll learn to develop a comprehensive log format, capturing essential details from backend servers (a generic example of such a format is sketched after this section). By the end, you'll be equipped to use Azure's monitoring tools effectively, significantly contributing to your growth and success.

In the Native Azure Grafana Dashboard workshop, you'll explore the integration of F5 NGINXaaS for Azure with Grafana for effective service monitoring. You'll create a dashboard to track essential metrics for your backend servers. This hands-on session will equip you with the skills to analyze real-time data and make informed decisions backed by valuable insights.

Upon completing this lab exercise, you will have gained practical expertise in leveraging enhanced observability features of F5 NGINXaaS. You will be proficient in creating and analyzing access logs, ensuring you can effectively capture critical data from backend servers. Additionally, you will have developed the skills necessary to integrate F5 NGINXaaS with Grafana, allowing you to build a dynamic dashboard that tracks essential metrics in real time. This hands-on experience will empower you to make informed decisions based on valuable insights, significantly enhancing your capabilities in monitoring and maintaining your applications.
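As a reference point for the access-log exercise, here is a generic sketch of an NGINX log format that records upstream (backend) details alongside the usual client fields. The format name and field selection are illustrative assumptions, not the workshop's prescribed format.

    # Illustrative access log format that captures backend (upstream) details
    log_format upstream_info '$remote_addr - $remote_user [$time_local] "$request" '
                             '$status $body_bytes_sent rt=$request_time '
                             'upstream=$upstream_addr upstream_status=$upstream_status '
                             'upstream_rt=$upstream_response_time host=$host';

    access_log /var/log/nginx/access.log upstream_info;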
Conclusion

By fully utilizing the observability features of F5 NGINXaaS, organizations can gain valuable insights that enhance performance and efficiency. With Azure Monitoring and Grafana working together, teams can manage proactively and make informed, data-driven decisions. This approach leads to smoother web experiences and improves operational performance. Interested in getting started with F5 NGINXaaS for Azure? You can find us on the Azure marketplace.

Simplifying OIDC and SSO with the New NGINX Plus R34 OIDC Module
Introduction: Why OIDC and SSO Matter

As web infrastructures scale and modernize, strong and standardized methods of authentication become essential. OpenID Connect (OIDC) provides a flexible layer on top of OAuth 2.0, enabling both user authentication (login) and authorization (scopes, roles). By adopting OIDC for SSO, you can:

- Provide a frictionless login experience across multiple services.
- Consolidate user session management, removing custom auth code from each app.
- Lay a foundation for Zero Trust policies, by validating and enforcing identity at the network's edge.

While Zero Trust is a broader security model that extends beyond SSO alone, implementing OIDC at the proxy level is an important piece of the puzzle. It ensures that every request is associated with a verified identity, enabling fine-grained policies and tighter security boundaries across all your applications. NGINX, acting as a reverse proxy, is an ideal place to manage these OIDC flows. However, the journey to robust OIDC support in NGINX has evolved - from an njs-based approach with scripts and maps, to a far more user-friendly native module in NGINX Plus R34.

The njs-based OIDC Solution

Before the native OIDC module, many users turned to the njs-based reference implementation. This setup combines multiple pieces:

- An njs script to handle OIDC flows (redirecting to the IdP, exchanging tokens, etc.).
- The auth_jwt module for token validation.
- The keyval module to store and pair a session cookie with the actual ID token.

While it covers the essential OIDC steps (redirects, code exchanges, and forwarding claims), it has some drawbacks:

- Configuration complexity. Most of the logic hinges on creative usage of NGINX directives (like map), which can be cumbersome, especially if you use more than one authentication provider or your environment changes frequently.
- Limited Metadata Discovery. It doesn't natively fetch the IdP's `.well-known/openid-configuration`. Instead, a separate bash script queries the IdP and rewrites parts of the NGINX config. Any IdP changes require you to re-run that script and reload NGINX.
- Performance Overhead. The njs solution effectively revalidates ID tokens on every request. Why? Because NGINX on its own doesn't maintain a traditional server-side session object. Instead, it simulates a "session" by tying a cookie to the user's id_token in keyval. auth_jwt checks the token each time, retrieving it from keyval, verifying the signature and expiration, and extracting claims. Under heavy load, this constant JWT validation can become expensive. For many, that extra overhead conflicts with how modern OIDC clients usually work - short-lived session cookies, validating the token only once per session, or relying on a more efficient approach.

Hence the motivation for a native OIDC module.

Meet the New Native OIDC Module in NGINX Plus R34

With the complexities of the njs-based approach in mind, NGINX introduced a fully integrated OIDC module in NGINX Plus R34. This module is designed to be a "proper" OIDC client, including:

- Automatic TLS-only communication with the IdP.
- Full metadata discovery (no external scripts needed).
- Authorization code flows.
- Token validation and caching.
- A real session model using secure cookies.
- Access token support (including automatic refresh).
- Straightforward mapping of user claims to NGINX variables.

We also have a Deployment Guide that shows how to set up this module for popular IdPs like Okta, Keycloak, Entra ID, and others.
That guide focuses on typical use cases (obtaining tokens, verifying them, and passing claims upstream). However, here we'll go deeper into how the module works behind the scenes, using Keycloak as our IdP.

Our Scenario: Keycloak + NGINX Plus R34

We'll demonstrate a straightforward Keycloak realm called nginx and a client also named nginx with "client authentication" and the "standard flow" enabled. We have:

- Keycloak as the IdP, running at https://kc.route443.dev/realms/nginx.
- NGINX Plus R34 configured as a reverse proxy.
- A simple upstream service at http://127.0.0.1:8080.

Minimal Configuration Example:

    http {
        resolver 1.1.1.1 ipv4=on valid=300s;

        oidc_provider keycloak {
            issuer        https://kc.route443.dev/realms/nginx;
            client_id     nginx;
            client_secret secret;
        }

        server {
            listen 443 ssl;
            server_name n1.route443.dev;

            ssl_certificate     /etc/ssl/certs/fullchain.pem;
            ssl_certificate_key /etc/ssl/private/key.pem;

            location / {
                auth_oidc keycloak;

                proxy_set_header sub   $oidc_claim_sub;
                proxy_set_header email $oidc_claim_email;
                proxy_set_header name  $oidc_claim_name;

                proxy_pass http://127.0.0.1:8080;
            }
        }

        server {
            # Simple test backend
            listen 8080;

            location / {
                return 200 "Hello, $http_name!\nEmail: $http_email\nKeycloak sub: $http_sub\n";
                default_type text/plain;
            }
        }
    }

Configuration Breakdown

- oidc_provider keycloak {}. Points to our Keycloak issuer, plus client_id and client_secret. Automatically triggers .well-known/openid-configuration discovery. Quite an important note: all interaction with the IdP is secured exclusively over SSL/TLS, so NGINX must trust the certificate presented by Keycloak. By default, this trust is validated against your system's CA bundle (the default CA store for your Linux or FreeBSD distribution). If the IdP's certificate is not included in the system CA bundle, you can explicitly specify a trusted certificate or chain using the ssl_trusted_certificate directive so that NGINX can validate and trust your Keycloak certificate.
- auth_oidc keycloak. For any request to https://n1.route443.dev/, NGINX checks if the user has a valid session. If not, it starts the OIDC flow.
- Passing Claims Upstream. We add headers sub, email, and name based on $oidc_claim_sub, $oidc_claim_email, and $oidc_claim_name, the module's built-in variables extracted from the token.

Step-by-Step: Under the Hood of the OIDC Flow

Retrieving and Caching OIDC Metadata

As soon as you send an HTTP GET request to https://n1.route443.dev, you'll see that NGINX redirects you to Keycloak's authentication page. However, before that redirect happens, several interesting steps occur behind the scenes. Let's take a closer look at the very first thing NGINX does in this flow:

1. NGINX checks if it has cached OIDC metadata for the IdP. If no valid cache exists, NGINX constructs a metadata URL by appending /.well-known/openid-configuration to the issuer you specified in the config. It then resolves the IdP's hostname using the resolver directive.
2. NGINX parses the JSON response from the IdP, extracting critical parameters such as issuer, authorization_endpoint, and token_endpoint. It also inspects response_types_supported to confirm that this IdP supports the authorization code flow and ID tokens. These details are essential for the subsequent steps in the OIDC process.
3. NGINX caches these details for one hour (or for however long the IdP's Cache-Control headers specify), so it doesn't need to re-fetch them on every request. This process happens in the background.
However, you might notice a slight delay for the very first user if the cache is empty, since NGINX needs a fresh copy of the metadata. Below is an example of the metadata request and response: NGINX -> IdP: HTTP GET /realms/nginx/.well-known/openid-configuration IdP -> NGINX: HTTP 200 OK Content-Type: application/json Cache-Control: no-cache // Means NGINX will store it for 1 hour by default { "issuer": "<https://kc.route443.dev/realms/nginx>", "authorization_endpoint": "<https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth>", "token_endpoint": "<https://kc.route443.dev/realms/nginx/protocol/openid-connect/token>", "jwks_uri": "<http://kc.route443.dev:8080/realms/nginx/protocol/openid-connect/certs>", "response_types_supported": [ "code","none","id_token","token","id_token token", "code id_token","code token","code id_token token" ] // ... other parameters } This metadata tells NGINX everything it needs to know about how to redirect users for authentication, where to request tokens afterward, and which JWT signing keys to trust. By caching these results, NGINX avoids unnecessary lookups on subsequent logins, making the process more efficient for every user who follows. Building the Authorization URL & Setting a Temporary Session Cookie Now that NGINX has discovered and cached the IdP’s metadata, it’s ready to redirect your browser to Keycloak for actual login. Here’s where the OpenID Connect Authorization Code Flow begins in earnest. NGINX adds a few crucial parameters, like response_type=code, client_id, redirect_uri, state, and nonce to the authorization_endpoint it learned from the metadata, then sends you the following HTTP 302 response: NGINX -> User Agent: HTTP 302 Moved Temporarily Location: <https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth?response_type=code&scope=openid&client_id=nginx&redirect_uri=http%3A%2F%2Fn1.route443.dev%2Foidc_callback&state=state&nonce=nonce> Set-Cookie: NGX_OIDC_SESSION=temp_cookie/; Path=/; Secure; HttpOnly At this point, you’re probably noticing the Set-Cookie: NGX_OIDC_SESSION=temp_cookie; line. This is a temporary session cookie, sometimes called a “pre-session” cookie. NGINX needs it to keep track of your “in-progress” authentication state - so once you come back from Keycloak with the authorization code, NGINX will know how to match that code to your browser session. However, since NGINX hasn’t actually validated any tokens yet, this cookie is only ephemeral. It remains a placeholder until Keycloak returns valid tokens and NGINX completes the final checks. Once that happens, you’ll get a permanent session cookie, which will then store your real session data across requests. User Returns to NGINX with an Authorization Code Once the user enters their credentials on Keycloak’s login page and clicks “Login”, Keycloak redirects the browser back to the URL specified in your redirect_uri parameter. In our example, that happens to be http://n1.route443.dev/oidc_callback. It’s worth noting that /oidc_callback is just the default location and if you ever need something different, you can tweak it via the redirect_uri directive in the OIDC module configuration. When Keycloak redirects the user, it includes several query parameters in the URL, most importantly, the code parameter (the authorization code) and state, which NGINX uses to ensure this request matches the earlier session-setup steps. 
Here’s a simplified example of what the callback request might look like: User Agent -> NGINX: HTTP GET /oidc_callback Query Parameter: state=state Query Parameter: session_state=keycloak_session_state Query Parameter: iss=<https://kc.route443.dev/realms/nginx> Query Parameter: code=code Essentially, Keycloak is handing NGINX a “proof” that this user successfully logged in, along with a cryptographic token (the code) that lets NGINX exchange it for real ID and access tokens. Since /oidc_callback is tied to NGINX’s native OIDC logic, NGINX automatically grabs these parameters, checks whether the state parameter matches what it originally sent to Keycloak, and then prepares to make a token request to the IdP’s token_endpoint. Note that the OIDC module does not use the iss parameter for identifying the provider, provider identity is verified through the state parameter and the pre-session cookie, which references a provider-specific key. Exchanging the Code for Tokens and Validating the ID Token Once NGINX receives the oidc_callback request and checks all parameters, it proceeds by sending a POST request to the Keycloak token_endpoint, supplying the authorization code, client credentials, and the redirect_uri: NGINX -> IdP: POST /realms/nginx/protocol/openid-connect/token Host: kc.route443.dev Authorization: Basic bmdpbng6c2VjcmV0 Form data: grant_type=authorization_code code=5865798e-682e-4eb7-8e3e-2d2c0dc5132e.f2abd107-35c1-4c8c-949f-03953a5249b2.nginx redirect_uri=https://n1.route443.dev/oidc_callback Keycloak responds with a JSON object containing at least an id_token, access_token plus token_type=bearer. Depending on your IdP’s configuration and the scope you requested, the response might also include a refresh_token and an expires_in field. The expires_in value indicates how long the access token is valid (in seconds), and NGINX can use it to decide when to request a new token on the user’s behalf. At this point, the module also spends a moment validating the ID token’s claims - ensuring that fields like iss, aud, exp, and nonce align with what was sent initially. If any of these checks fail, the token is deemed invalid, and the request is rejected. Once everything checks out, NGINX stores the tokens and session details. Here, the OIDC module takes advantage of the keyval mechanism to keep track of user sessions. You might wonder, “Where is that keyval zone configured?” The short answer is that it’s automatic for simplicity, unless you want to override it with your own settings. By default, you get up to 8 MB of session storage, which is more than enough for most use cases. But if you need something else, you can specify a custom zone via the session_store directive. If you’re curious to see this store in action, you can even inspect it through the NGINX Plus API endpoint, for instance: GET /api/9/http/keyvals/oidc_default_store_keycloak (where oidc_default_store_ is the prefix and keycloak is your oidc_provider name). With the tokens now safely validated and stashed, NGINX is ready to finalize the session. The module issues a permanent session cookie back to the user and transitions them into the “logged-in” state, exactly what we’ll see in the next step. 
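A quick aside before that next step: if you want to try the API request mentioned above, the NGINX Plus API has to be exposed somewhere first. Below is a minimal sketch of one way to do that; the loopback listener, port, and access rules are illustrative assumptions rather than part of the original example.

server {
    # Internal-only listener for the NGINX Plus API (port is hypothetical)
    listen 127.0.0.1:8081;

    location /api/ {
        api;              # read-only NGINX Plus REST API
        allow 127.0.0.1;  # only allow requests from the local host
        deny  all;
    }
}

With something like this in place, requesting /api/9/http/keyvals/oidc_default_store_keycloak from the local machine returns the current contents of the automatically created session store.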
Finalizing the Session and Passing Claims Upstream Once NGINX verifies all tokens and securely stores the user’s session data, it sends a final HTTP 302 back to the client, this time setting a permanent session cookie: NGINX -> User Agent: HTTP 302 Moved Temporarily Location: https://n1.route443.dev/ Set-Cookie: NGX_OIDC_SESSION=permanent_cookie; Path=/; Secure; HttpOnly At this point, the user officially has a valid OIDC session in NGINX. Armed with that session cookie, they can continue sending requests to the protected resource (in our case, https://n1.route443.dev/). Each request now carries the NGX_OIDC_SESSION cookie, so NGINX recognizes the user as authenticated and automatically injects the relevant OIDC claims into request headers - such as sub, email, and name. This means your upstream application at http://127.0.0.1:8080 can rely on these headers to know who the user is and handle any additional logic accordingly. Working with OIDC Variables Now, let’s talk about how you can leverage the OIDC module for more than just simple authentication. One of its biggest strengths is its ability to extract token claims and forward them upstream in request headers. Any claim in the token can be used as an NGINX variable named $oidc_claim_name, where name is whichever claim you’d like to extract. In our example, we’ve already shown how to pass sub, email, and name, but you can use any claims that appear in the token. For a comprehensive list of possible claims, check the OIDC specification as well as your IdP’s documentation. Beyond individual claims, you can also access the entire ID and Access Tokens directly via $oidc_id_token and $oidc_access_token. These variables can come in handy if you need to pass an entire token in a request header, or if you’d like to inspect its contents for debugging purposes. As you can see, configuring NGINX as a reverse proxy with OIDC support doesn’t require you to be an authentication guru. All you really need to do is set up the module, specify the parameters you want, and decide which token claims you’d like to forward as headers. Handling Nested or Complex Claims (Using auth_jwt) Sometimes, the claim you need to extract is actually a nested object, or even an array. That’s not super common, but it can happen if your Identity Provider returns complex data structures in the token. Currently, the OIDC module can’t directly parse nested claims - this is a known limitation that should be addressed in future releases. In the meantime, your best workaround is to use the auth_jwt module. Yes, it’s a bit of a detour, but right now it’s the only way (whether you use an njs-based approach or the native OIDC module) to retrieve more intricate structures from a token. Let’s look at an example where the address claim is itself an object containing street, city, and zip, and we only want the city field forwarded as a header: http { auth_jwt_claim_set $city address city; server { ... location / { auth_oidc keycloak; auth_jwt off token=$oidc_id_token; proxy_set_header x-city $city; proxy_pass http://127.0.0.1:8080; } } } Notice how we’ve set auth_jwt off token=$oidc_id_token. We’re effectively telling auth_jwt to not revalidate the token (because it was already validated during the initial OIDC flow) but to focus on extracting additional claims from it. Meanwhile, the auth_jwt_claim_set directive specifies the variable $city and points it to the nested city field in the address claim. 
With this in place, you can forward that value in a custom header (x-city) to your application. And that’s it. By combining the OIDC module for authentication with the auth_jwt module for more nuanced claim extraction, you can handle even the trickiest token structures in NGINX. In most scenarios, though, you’ll find that the straightforward $oidc_claim_ variables do the job just fine and no extra modules needed. Role-Based Access Control (Using auth_jwt) As you’ve noticed, because we’re not revalidating the token signature on every request, the overhead introduced by the auth_jwt module is fairly minimal. That’s great news for performance. But auth_jwt also opens up additional possibilities, like the ability to leverage the auth_jwt_require directive. With this, you can tap into NGINX not just for authentication, but also for authorization, restricting access to certain parts of your site or API based on claims (or any other variables you might be tracking). For instance, maybe you only want to grant admin-level users access to a specific admin dashboard. If a user’s token doesn’t include the right claim (like role=admin), you want to deny entry. Let’s take a quick look at how this might work in practice: http { map $jwt_claim_role $role_admin { "admin" 1; } server { ... # Location for admin-only resources: location /admin { auth_jwt foo token=$oidc_id_token; # Check that $role_admin is not empty and not "0" -> otherwise return 403: auth_jwt_require $role_admin error=403; # If 403 happens, we show a custom page: error_page 403 /403_custom.html; proxy_pass http://127.0.0.1:8080; } # Location for the custom 403 page location = /403_custom.html { # Internal, so it can't be directly accessed from outside internal; # Return the 403 status and a custom message return 403 "Access restricted to admins only!"; } } } How It Works: In our map block, we check the user’s $jwt_claim_role and set $role_admin to 1 if it matches "admin". Then, inside the /admin location, we have something like: auth_jwt foo token=$oidc_id_token; auth_jwt_require $role_admin error=403; Here, foo is simply the realm name (a generic string you can customize), and token=$oidc_id_token tells NGINX which token to parse. At first glance, this might look like a normal auth_jwt configuration - but notice that we haven’t specified a public key via auth_jwt_key_file or auth_jwt_key_request. That means NGINX isn’t re-verifying the token’s signature here. Instead, it’s only parsing the token so we can use its claims within auth_jwt_require. Thanks to the fact that the OIDC module has already validated the ID token earlier in the flow, this works perfectly fine in practice. We still get access to $jwt_claim_role and can enforce auth_jwt_require $role_admin error=403;, ensuring anyone without the “admin” role gets an immediate 403 Forbidden. Meanwhile, we display a friendlier message by specifying: error_page 403 /403_custom.html; So even though it might look like a normal JWT validation setup, it’s really a lesser-known trick to parse claims without re-checking signatures, leveraging the prior validation done by the OIDC module. This approach neatly ties together the native OIDC flow with role-based access control - without requiring us to juggle another set of keys. Logout in OIDC So far, we’ve covered how to log in with OIDC and handle advanced scenarios like nested claims or role-based control. But there’s another critical topic: how do users log out? 
The OpenID Connect standard lays out several mechanisms:

RP-Initiated Logout: The relying party (NGINX in this case) calls the IdP’s logout endpoint, which can clear sessions both in NGINX and at the IdP level.
Front-Channel Logout: The IdP provides a way to notify the RP via a front-channel mechanism (often iframes or redirects) that the user has ended their session.
Back-Channel Logout: Uses server-to-server requests between the IdP and the RP to terminate sessions behind the scenes.

Right now, the native OIDC module in its first release does not fully implement these logout flows. They’re on the roadmap, but as of today, you may need a workaround if you want to handle sign-outs more gracefully. Still, one of the great things about NGINX is that even if a feature isn’t officially implemented, you can often piece together a solution with a little extra configuration.

A Simple Logout Workaround

Imagine you have a proxied application that includes a “Logout” button or link. You want clicking that button to end the user’s NGINX session. Below is a conceptual snippet showing how you might achieve that:

http {
    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        # OIDC provider config omitted for brevity
        # ...

        location / {
            auth_oidc keycloak;
            proxy_pass http://127.0.0.1:8080;
        }

        # "Logout" location that invalidates the session
        location /logout {
            # Here, we forcibly remove the NGX_OIDC_SESSION cookie
            add_header Set-Cookie "NGX_OIDC_SESSION=; Path=/; HttpOnly; Secure; Expires=Thu, 01 Jan 1970 00:00:00 GMT";

            # Optionally, we can redirect the user to a "logged out" page
            return 302 "https://n1.route443.dev/logged_out";
        }

        location = /logged_out {
            # A simple page or message confirming the user is logged out
            return 200 "You've been logged out.";
        }
    }
}

/logout location: When the user clicks the “logout” link in your app, it can redirect them here.
Clearing the cookie: We set NGX_OIDC_SESSION to an expired value, ensuring NGINX no longer recognizes this OIDC session on subsequent requests.
Redirect to a “logged out” page: We redirect the user to /logged_out, or wherever you want them to land next.

Keep in mind, this approach only logs out at the NGINX layer. The user might still have an active session with the IdP (Keycloak, Entra ID, etc.), because the IdP manages its own cookies. A fully synchronized logout, where both the RP and the IdP sessions end simultaneously, would require an actual OIDC logout flow, which the current module hasn’t fully implemented yet.

Conclusion

Whether you’re looking to protect a basic web app, parse claims, or enforce role-based policies, the native OIDC module in NGINX Plus R34 offers a way to integrate modern SSO at the proxy layer. Although certain scenarios (like nested claim parsing or fully-fledged OIDC logout) may still require workarounds and careful configuration, the out-of-the-box experience is already much more user-friendly than older njs-based solutions, and new features continue to land in every release. If you’re tackling more complex setups - like UserInfo endpoint support, advanced session management, or specialized logout requirements - stay tuned. The NGINX team is actively improving the module and extending its capabilities. With a little know-how (and possibly a sprinkle of auth_jwt magic), you can achieve an OIDC-based architecture that fits your exact needs, all while preserving the flexibility and performance NGINX is known for.
F5 NGINX Plus R34 Release Now Available

We’re excited to announce the availability of F5 NGINX Plus Release 34 (R34). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway. New and enhanced features in NGINX Plus R34 include:

Forward proxy support for NGINX usage reporting: With R34, NGINX Plus allows NGINX customers to send their license usage telemetry to F5 via an existing enterprise forward proxy in their environment.
Native support for OpenID Connect configuration: With this release, we would like to announce the availability of the native OpenID Connect (OIDC) module in NGINX Plus. The native module brings simplified configuration and better performance while addressing many of the complexities of the existing njs-based solution.
SSL Dynamic Certificate Caching: NGINX Plus R34 builds upon the certificate caching improvements in R32 and introduces support for caching of dynamic certificates and preserving this cache across configuration reloads.

Important Changes In Behavior

Removal of OpenTracing Module: In NGINX Plus R32, we announced the deprecation of the OpenTracing module in favor of the OpenTelemetry module introduced in NGINX Plus R29, and it was marked for removal in NGINX Plus R34. The OpenTracing module is now removed from NGINX Plus effective this release.

Changes to Platform Support

Added Platforms: Alpine Linux 3.21
Removed Platforms: Alpine Linux 3.17, SLES12
Deprecated Platforms: Alpine Linux 3.18, Ubuntu 20.04

New Features in Detail

Forward proxy support for NGINX usage reporting

In the previous NGINX Plus release (NGINX Plus R33), we introduced major changes to NGINX Plus licensing requiring all NGINX customers to report their commercial NGINX usage to F5. One of the key pieces of feedback we received for this feature was the need to enable NGINX instances to send telemetry via existing outbound proxies, primarily for environments where NGINX instances cannot connect to the F5 licensing endpoint directly. We are pleased to note that NGINX Plus R34 introduces support for using existing forward proxy solutions in customers’ environments for sending licensing telemetry to F5. With this update, NGINX Plus can now be configured to use an HTTP CONNECT proxy to establish a tunnel to the F5 licensing endpoint to send the usage telemetry.

Configuration

The following snippet shows the basic NGINX configuration needed in the ngx_mgmt_module module for sending NGINX usage telemetry via a forward proxy solution.

mgmt {
    proxy          HOST:PORT;
    proxy_username USER; # optional
    proxy_password PASS; # optional
}

For complete details, refer to the docs here.

Native support for OpenID Connect configuration

Currently, NGINX Plus relies on an njs-based solution for its OpenID Connect (OIDC) implementation that involves intricate JavaScript files and advanced setup steps, which are error-prone. With NGINX Plus R34, we’re thrilled to introduce native OIDC support in NGINX Plus. This native implementation eliminates many of the complexities of the njs-based approach, making it faster, highly efficient, and incredibly easy to configure, with no burdensome overhead of maintaining and upgrading the njs module. To this effect, a new module, ngx_http_oidc_module, is introduced in NGINX Plus R34 that implements authentication as a relying party in OIDC using the Authorization Code Flow. The native implementation allows the flexibility to enable OIDC authentication globally, or at a more granular, per-server or per-location level (a per-server sketch follows the configuration example below). It also allows effortless auto-discovery and retrieval of the OpenID providers’ configuration metadata without needing complex external scripts for each Identity Provider (IdP), greatly simplifying the configuration process. For a complete overview and examples of the features in the native implementation of OIDC in NGINX Plus and how it improves upon the njs-based implementation, refer to the blog.

Configuration

The configuration to set up OIDC natively in NGINX Plus is relatively straightforward, requiring minimal directives when compared to the njs-based implementation.

http {
    resolver 10.0.0.1;

    oidc_provider my_idp {
        issuer        "https://provider.domain";
        client_id     "unique_id";
        client_secret "unique_secret";
    }

    server {
        location / {
            auth_oidc my_idp;
            proxy_set_header username $oidc_claim_sub;
            proxy_pass http://backend;
        }
    }
}

The example assumes that the “https://<nginx-host>/oidc_callback” redirection URI is configured on the OpenID Provider's side. For instructions on how to configure the native OIDC module for various identity providers, refer to the NGINX deployment guide.
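As mentioned above, auth_oidc can also be enabled one level higher so that every location in a server inherits OIDC protection. The sketch below illustrates that per-server granularity; the hostname, certificate paths, and upstream addresses are placeholders rather than values from the release notes.

http {
    resolver 10.0.0.1;

    oidc_provider my_idp {
        issuer        "https://provider.domain";
        client_id     "unique_id";
        client_secret "unique_secret";
    }

    server {
        listen      443 ssl;
        server_name app.example.com;                    # placeholder hostname
        ssl_certificate     /etc/ssl/certs/app.pem;     # placeholder paths
        ssl_certificate_key /etc/ssl/private/app.key;

        # Enabling OIDC once at the server level protects every location below
        auth_oidc my_idp;

        location / {
            proxy_set_header username $oidc_claim_sub;
            proxy_pass http://127.0.0.1:8080;           # placeholder upstream
        }

        location /reports/ {
            # Inherits auth_oidc from the server context; $oidc_claim_*
            # variables remain available here as well
            proxy_pass http://127.0.0.1:8081;           # placeholder upstream
        }
    }
}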
SSL Certificate Caching improvements

In NGINX Plus R32, we introduced changes to cache various SSL objects and reuse the cached objects elsewhere in the configuration. This provided noticeable improvements in the initial configuration load time, primarily where a small number of unique objects were being referenced multiple times. With R34, we are adding further enhancements to this functionality: cached SSL objects are now reused across configuration reloads, making the reloads even faster. Also, SSL certificates specified with variables are now cached as well. Refer to the blog for a detailed overview of this feature implementation.
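To make that last point concrete, here is a minimal sketch of the kind of configuration that now benefits from caching: certificates selected through variables. The map contents and file paths are hypothetical; the caching behavior described above is what R34 adds on top of this long-standing pattern.

map $ssl_server_name $site_cert {
    default          /etc/ssl/certs/default.pem;            # hypothetical paths
    www.example.com  /etc/ssl/certs/www.example.com.pem;
}

map $ssl_server_name $site_key {
    default          /etc/ssl/private/default.key;
    www.example.com  /etc/ssl/private/www.example.com.key;
}

server {
    listen 443 ssl;
    server_name www.example.com;

    # Certificates referenced via variables are loaded dynamically; with
    # NGINX Plus R34 they are also cached, and the cache survives reloads
    ssl_certificate     $site_cert;
    ssl_certificate_key $site_key;
}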
Other Enhancements and Bug Fixes

Keepalive timeout improvements

Prior to this release, idle keepalive connections could be closed any time the connection needed to be reused for another client or when the worker was gracefully shutting down. With NGINX Plus R34, a new directive, keepalive_min_timeout, is being introduced. This directive sets a timeout during which a keepalive connection will not be closed by NGINX for connection reuse or graceful worker shutdown. The change allows clients that send multiple requests over the same connection without delay, or with a small delay between them, to avoid receiving a TCP RST in response to one of them, barring network problems or a non-graceful worker shutdown. As a side effect, it also addresses the TCP reset problem described in RFC 9112, Section 9.6, where the last sent HTTP response could be damaged by a follow-up TCP RST. This is important for non-idempotent requests, which cannot be retried by the client. It is, however, recommended not to set keepalive_min_timeout to large values, as this can introduce an additional delay during worker process shutdown and may restrict NGINX from effective connection reuse.
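Below is a minimal sketch of how the new directive might sit alongside the existing keepalive_timeout. The directive name comes from the release notes, but the http-level placement and the values shown are illustrative assumptions.

http {
    # Allow idle keepalive connections to live for up to 65 seconds overall
    keepalive_timeout     65s;

    # Assumed usage: guarantee that an idle keepalive connection is not
    # reclaimed for reuse or graceful shutdown for at least 3 seconds, so
    # quick follow-up requests on the same connection avoid a TCP RST
    keepalive_min_timeout 3s;
}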
Improved health check logging

NGINX Plus R34 adds logging enhancements in the error log for better visibility while troubleshooting upstream health check failures. The server status code is now logged on health check failures.

Increased session key size

Prior to R34, NGINX accepted an SSL session of at most 4k (4096) bytes. With NGINX Plus R34, the maximum session size has been increased to 8k (8192) bytes to accommodate use cases where sessions could be larger than 4k bytes. For example, sessions may be noticeably larger when a client certificate is saved in the session, with tickets (in TLS v1.2 or older versions), or with stateless tickets (in TLS v1.3). Certain stateless session resumption implementations may store additional data as well. One such case is the JDK, which is known to include server certificates in the session ticket data, roughly doubling the decoded session size. The changes also include improved logging to capture cases when sessions are not saved in shared memory due to size.

Changes in the Open Telemetry Module

TLS support in OTEL traces: NGINX now allows enabling TLS for sending OTEL traces. It can be enabled by specifying the "https" scheme in the endpoint, as shown:

otel_exporter {
    endpoint            "https://otel.labt.fp.f5net.com:4433";
    trusted_certificate "path/to/custom/ca/bundle"; # optional
}

By default, the system CA bundle is used to verify the endpoint's certificate; this can be overridden with the "trusted_certificate" directive if required. For a complete list of changes to the OTEL module, refer to the NGINX OTEL change log.

Changes Inherited from NGINX Open Source

NGINX Plus R34 is based on the NGINX mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R33 was released (in NGINX Open Source 1.27.3 and 1.27.4 mainline versions).

Features:

SSL Certificate Caching.
The "keepalive_min_timeout" directive.
The "server" directive in the "upstream" block supports the "resolve" parameter (see the sketch at the end of this post).
The "resolver" and "resolver_timeout" directives in the "upstream" block.
SmarterMail specific mode support for IMAP LOGIN with untagged CAPABILITY response in the mail proxy module.

Changes:

Now TLSv1 and TLSv1.1 protocols are disabled by default.
An IPv6 address in square brackets and no port can be specified in the "proxy_bind", "fastcgi_bind", "grpc_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives, and as the client address in ngx_http_realip_module.

Bug Fixes:

"gzip filter failed to use preallocated memory" alerts appeared in logs when using zlib-ng.
nginx could not build the libatomic library using the library sources if the --with-libatomic=DIR option was used.
A QUIC connection might not be established when using 0-RTT; the bug had appeared in 1.27.1.
NGINX now ignores QUIC version negotiation packets from clients.
NGINX could not be built on Solaris 10 and earlier with the ngx_http_v3_module.
Bugfixes in HTTP/3.
Bugfixes in the ngx_http_mp4_module.
The "so_keepalive" parameter of the "listen" directive might be handled incorrectly on DragonFly BSD.
Bugfix in the proxy_store directive.

Security:

Insufficient check in virtual servers handling with TLSv1.3 SNI allowed reusing SSL sessions in a different virtual server, to bypass client SSL certificate verification (CVE-2025-23419).

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX Javascript Module

NGINX Plus R34 incorporates changes from the NGINX JavaScript (njs) module version 0.8.9. The following is a list of notable changes in njs since 0.8.7 (which was the version shipped with NGINX Plus R33).

Features:

Added the fs module for the QuickJS engine.
Implemented the process object for the QuickJS engine.
Implemented the process.kill() method.

Bug Fixes:

Removed extra VM creation per server. Previously, when js_import was declared in http or stream blocks, an extra copy of the VM instance was created for each server block. This was not needed and consumed a lot of memory for configurations with many server blocks. This issue was introduced in 9b674412 (0.8.6) and was partially fixed for location blocks only in 685b64f0 (0.8.7).
Fixed XML tests with libxml2 2.13 and later.
Fixed promise resolving when Promise is inherited.
Fixed absolute scope in cloned VMs.
Fixed limit rated output.
Optimized use of SSL contexts for the js_fetch_trusted_certificate directive.

For a comprehensive list of all the features, changes, and bug fixes, see the njs Changelog.
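Since a couple of the inherited features above concern upstream name resolution, here is a small sketch pulling them together: the "resolve" parameter on an upstream server, plus the "resolver" and "resolver_timeout" directives placed directly inside the upstream block. The addresses and names are placeholders, not values from the release notes.

upstream backend {
    zone backend 64k;                        # shared memory zone required for re-resolvable servers
    server backend.example.com:8080 resolve;

    # Newly inherited: resolver settings can now live inside the upstream block
    resolver         10.0.0.1 valid=30s;
    resolver_timeout 5s;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}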
Question regarding nginx plus gpg key

Hi Everyone, I'm trying to install NGINX Plus on my Ubuntu lab. I followed the NGINX article to set up my lab environment, including setting up the GPG key, and got a valid NGINX signed key from our F5 sales representative for lab testing.

https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/#install_debian_ubuntu

I got the error message below after running apt update. It indicates that the repository is not signed.

Reading package lists... Done
E: Failed to fetch https://pkgs.nginx.com/plus/ubuntu/dists/jammy/InRelease 400 Bad Request [IP: 18.198.212.80 443]
E: The repository 'https://pkgs.nginx.com/plus/ubuntu jammy InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Has anyone run into an issue like this? Does anyone know how I can fix it?

Best regards,
Ding