NGINX Plus
F5 NGINX API Gateway: Simplify Response Manipulation
Discover how NGINX API Gateway, powered by the NJS module, can dynamically manipulate API responses to enhance functionality without backend changes. This article walks through a basic hands-on example of incrementing response values, explores batching API requests with NGINX Plus for better performance, and shares best practices from F5. Learn how to optimize your APIs, reduce latency, and unlock new possibilities with NGINX and F5’s expert solutions. Read more to transform your API strategy today!

Unlocking Insights: Enhancing Observability in F5 NGINXaaS for Azure for Optimal Operations
Introduction
To understand application performance, you need more than just regular health checks. You need to look at the system’s behavior, how users use it, and find possible slowdowns before they become big problems. By using F5 NGINXaaS for Azure, organizations can gain enhanced visibility into their backend applications through extensive metrics, API (access) logs, and operational logs within Azure environments. This proactive approach helps prevent minor issues from developing into major challenges while optimizing resource efficiency. This technical guide highlights advanced observability techniques and demonstrates how organizations can leverage F5 NGINXaaS to create robust, high-performing application delivery solutions that ensure seamless and responsive user experiences.

Benefits of F5 NGINX as a Service
F5 NGINXaaS for Azure provides robust integration with ecosystem tools designed to monitor and analyze application health and performance. It uses rich telemetry from granular metrics across various protocols, including HTTP, TLS, TCP, and UDP. For technical experts overseeing deployments in Azure, this service delivers valuable insights that facilitate more effective troubleshooting and optimize workflows for streamlined operations. Key advantages of F5 NGINXaaS include access to over 200 detailed health and performance metrics that are critical for ensuring application stability, scalability, and efficiency. Please refer to the documentation to learn more about the available metrics.

There are two ways to monitor metrics in F5 NGINXaaS for Azure, providing flexibility in how you can track the health and performance of your applications:
- Azure Monitoring Integration for F5 NGINXaaS: An Azure-native solution delivering detailed analytical reports and customizable alerts.
- Grafana Dashboard Support: A visualization tool specifically designed to provide real-time, actionable insights into system health and performance.

Dive Deep with Azure Monitoring for F5 NGINXaaS
Azure Monitoring integration with F5 NGINXaaS provides a comprehensive observability solution tailored to dynamic cloud environments, equipping teams with the tools to enhance application performance and reliability. A crucial aspect of this solution is the integration of F5 NGINXaaS access and error logs, which offers insights essential for troubleshooting and resolving issues effectively. By combining these logs with deep insights into application and performance metrics such as request throughput, latency, error rates, and resource utilization, technical teams can make informed decisions to optimize their applications.

Key Features Include:
- Advanced Analytics: Explore detailed traffic patterns and usage trends to better understand application load dynamics. This allows teams to fine-tune configurations and improve performance based on actual user activity.
- Customizable Alerts: Set specific thresholds for key performance indicators to receive immediate notifications about anomalies, such as unexpected spikes in 5xx error rates or latency challenges. This proactive approach empowers teams to resolve incidents swiftly and minimize their impact.
- Detailed Metrics: Utilize comprehensive metrics encompassing connection counts, active connections, and request processing times. These insights facilitate better resource allocation and more efficient traffic management.
- Logs Integration: Access and analyze F5 NGINXaaS logs alongside performance metrics, providing a holistic view of application behavior. This integration is vital for troubleshooting, enabling teams to correlate log data with observability insights for effective issue identification and resolution.
- Scalability Insights: Monitor real-time resource allocation and consumption. Predict growth challenges and optimize scaling decisions to ensure your F5 NGINXaaS service deployments can handle variable client load effectively.

By integrating Azure Monitoring with F5 NGINXaaS, organizations can significantly enhance their resilience, swiftly tackle performance challenges, and ensure that their services consistently deliver outstanding user experiences. With actionable data at their fingertips, teams are well positioned to achieve operational excellence and foster greater user satisfaction.

Visualize Success with Native Azure Grafana Dashboard
Enable the Grafana dashboard and import the F5 NGINXaaS metrics dashboard to take your monitoring capabilities to the next level. This dynamic integration provides a clear view of various performance metrics, allowing teams to make informed decisions backed by insightful data. Together, Azure Monitoring and the Grafana Dashboard form a strong alliance, creating a comprehensive observability solution that amplifies your application’s overall performance and reliability. The Grafana interface allows real-time querying of performance metrics, offering intuitive visual tools like graphs and charts that simplify complex data interpretation. With Azure Monitoring, Grafana builds a robust observability stack, ensuring proactive oversight and reactive diagnostics.

Getting Started with NGINXaaS Azure Workshops
We have curated self-paced workshops designed to help you effectively leverage the enhanced observability features of F5 NGINXaaS. These workshops provide valuable insights and hands-on experience, empowering you to develop robust observability in a self-directed learning environment.

The Azure monitoring lab workshop will enhance your skills in creating and analyzing access logs with NGINX. You’ll learn to develop a comprehensive log format, capturing essential details from backend servers. By the end, you'll be equipped to use Azure’s monitoring tools effectively, significantly contributing to your growth and success.

In the Native Azure Grafana Dashboard workshop, you'll explore the integration of F5 NGINXaaS for Azure with Grafana for effective service monitoring. You'll create a dashboard to track essential metrics for your backend servers. This hands-on session will equip you with the skills to analyze real-time data and make informed decisions backed by valuable insights.

Upon completing these lab exercises, you will have gained practical expertise in leveraging the enhanced observability features of F5 NGINXaaS. You will be proficient in creating and analyzing access logs, ensuring you can effectively capture critical data from backend servers. Additionally, you will have developed the skills necessary to integrate F5 NGINXaaS with Grafana, allowing you to build a dynamic dashboard that tracks essential metrics in real time. This hands-on experience will empower you to make informed decisions based on valuable insights, significantly enhancing your capabilities in monitoring and maintaining your applications.
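As a companion to the access-log workshop described above, the sketch below shows the general shape of an extended, JSON-style log format you might define before shipping access logs into Azure. The format name and the exact set of fields are illustrative choices, not the specific format used in the lab.

http {
    # Illustrative JSON access-log format; add or drop fields as needed.
    log_format azure_json escape=json
        '{"time":"$time_iso8601",'
        '"client":"$remote_addr",'
        '"request":"$request",'
        '"status":"$status",'
        '"bytes_sent":"$bytes_sent",'
        '"request_time":"$request_time",'
        '"upstream":"$upstream_addr",'
        '"upstream_response_time":"$upstream_response_time"}';

    access_log /var/log/nginx/access.log azure_json;
}

Capturing upstream timings alongside the overall request time is what makes it possible to tell backend slowness apart from network or client-side issues once the logs land in Azure Monitor.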
Conclusion
By fully utilizing the observability features of F5 NGINXaaS, organizations can gain valuable insights that enhance performance and efficiency. With Azure Monitoring and Grafana working together, teams can manage proactively and make informed, data-driven decisions. This approach leads to smoother web experiences and improves operational performance. Interested in getting started with F5 NGINXaaS for Azure? You can find us on the Azure marketplace.

Simplifying OIDC and SSO with the New NGINX Plus R34 OIDC Module
Introduction: Why OIDC and SSO Matter
As web infrastructures scale and modernize, strong and standardized methods of authentication become essential. OpenID Connect (OIDC) provides a flexible layer on top of OAuth 2.0, enabling both user authentication (login) and authorization (scopes, roles). By adopting OIDC for SSO, you can:
- Provide a frictionless login experience across multiple services.
- Consolidate user session management, removing custom auth code from each app.
- Lay a foundation for Zero Trust policies, by validating and enforcing identity at the network’s edge.

While Zero Trust is a broader security model that extends beyond SSO alone, implementing OIDC at the proxy level is an important piece of the puzzle. It ensures that every request is associated with a verified identity, enabling fine-grained policies and tighter security boundaries across all your applications. NGINX, acting as a reverse proxy, is an ideal place to manage these OIDC flows. However, the journey to robust OIDC support in NGINX has evolved - from an njs-based approach with scripts and maps, to a far more user-friendly native module in NGINX Plus R34.

The njs-based OIDC Solution
Before the native OIDC module, many users turned to the njs-based reference implementation. This setup combines multiple pieces:
- An njs script to handle OIDC flows (redirecting to the IdP, exchanging tokens, etc.).
- The auth_jwt module for token validation.
- The keyval module to store and pair a session cookie with the actual ID token.

While it covers the essential OIDC steps (redirects, code exchanges, and forwarding claims), it has some drawbacks:
- Configuration complexity. Most of the logic hinges on creative usage of NGINX directives (like map), which can be cumbersome, especially if you use more than one authentication provider or your environment changes frequently.
- Limited metadata discovery. It doesn’t natively fetch the IdP’s `.well-known/openid-configuration`. Instead, a separate bash script queries the IdP and rewrites parts of the NGINX config. Any IdP changes require you to re-run that script and reload NGINX.
- Performance overhead. The njs solution effectively revalidates ID tokens on every request. Why? Because NGINX on its own doesn’t maintain a traditional server-side session object. Instead, it simulates a “session” by tying a cookie to the user’s id_token in keyval. auth_jwt checks the token each time, retrieving it from keyval, verifying the signature and expiration, and extracting claims. Under heavy load, this constant JWT validation can become expensive.

For many, that extra overhead conflicts with how modern OIDC clients usually behave: short-lived session cookies, token validation once per session, or some other more efficient approach. Hence the motivation for a native OIDC module.

Meet the New Native OIDC Module in NGINX Plus R34
With the complexities of the njs-based approach in mind, NGINX introduced a fully integrated OIDC module in NGINX Plus R34. This module is designed to be a “proper” OIDC client, including:
- Automatic TLS-only communication with the IdP.
- Full metadata discovery (no external scripts needed).
- Authorization code flows.
- Token validation and caching.
- A real session model using secure cookies.
- Access token support (including automatic refresh).
- Straightforward mapping of user claims to NGINX variables.

We also have a Deployment Guide that shows how to set up this module for popular IdPs like Okta, Keycloak, Entra ID, and others.
That guide focuses on typical use cases (obtaining tokens, verifying them, and passing claims upstream). However, here we’ll go deeper into how the module works behind the scenes, using Keycloak as our IdP.

Our Scenario: Keycloak + NGINX Plus R34
We’ll demonstrate a straightforward Keycloak realm called nginx and a client also named nginx with “client authentication” and the “standard flow” enabled. We have:
- Keycloak as the IdP, running at https://kc.route443.dev/realms/nginx.
- NGINX Plus R34 configured as a reverse proxy.
- A simple upstream service at http://127.0.0.1:8080.

Minimal Configuration Example:

http {
    resolver 1.1.1.1 ipv4=on valid=300s;

    oidc_provider keycloak {
        issuer        https://kc.route443.dev/realms/nginx;
        client_id     nginx;
        client_secret secret;
    }

    server {
        listen 443 ssl;
        server_name n1.route443.dev;
        ssl_certificate     /etc/ssl/certs/fullchain.pem;
        ssl_certificate_key /etc/ssl/private/key.pem;

        location / {
            auth_oidc keycloak;
            proxy_set_header sub   $oidc_claim_sub;
            proxy_set_header email $oidc_claim_email;
            proxy_set_header name  $oidc_claim_name;
            proxy_pass http://127.0.0.1:8080;
        }
    }

    server {
        # Simple test backend
        listen 8080;
        location / {
            return 200 "Hello, $http_name!\nEmail: $http_email\nKeycloak sub: $http_sub\n";
            default_type text/plain;
        }
    }
}

Configuration Breakdown
- oidc_provider keycloak {}. Points to our Keycloak issuer, plus client_id and client_secret. Automatically triggers .well-known/openid-configuration discovery. Quite an important note: all interaction with the IdP is secured exclusively over SSL/TLS, so NGINX must trust the certificate presented by Keycloak. By default, this trust is validated against your system’s CA bundle (the default CA store for your Linux or FreeBSD distribution). If the IdP’s certificate is not included in the system CA bundle, you can explicitly specify a trusted certificate or chain using the ssl_trusted_certificate directive so that NGINX can validate and trust your Keycloak certificate.
- auth_oidc keycloak. For any request to https://n1.route443.dev/, NGINX checks if the user has a valid session. If not, it starts the OIDC flow.
- Passing claims upstream. We add headers sub, email, and name based on $oidc_claim_sub, $oidc_claim_email, and $oidc_claim_name, the module’s built-in variables extracted from the token.

Step-by-Step: Under the Hood of the OIDC Flow

Retrieving and Caching OIDC Metadata
As soon as you send an HTTP GET request to https://n1.route443.dev, you’ll see that NGINX redirects you to Keycloak’s authentication page. However, before that redirect happens, several interesting steps occur behind the scenes. Let’s take a closer look at the very first thing NGINX does in this flow:
- NGINX checks if it has cached OIDC metadata for the IdP.
- If no valid cache exists, NGINX constructs a metadata URL by appending /.well-known/openid-configuration to the issuer you specified in the config. It then resolves the IdP’s hostname using the resolver directive.
- NGINX parses the JSON response from the IdP, extracting critical parameters such as issuer, authorization_endpoint and token_endpoint. It also inspects response_types_supported to confirm that this IdP supports the authorization code flow and ID tokens. These details are essential for the subsequent steps in the OIDC process.
- NGINX caches these details for one hour (or for however long the IdP’s Cache-Control headers specify), so it doesn’t need to re-fetch them on every request.

This process happens in the background.
However, you might notice a slight delay for the very first user if the cache is empty, since NGINX needs a fresh copy of the metadata. Below is an example of the metadata request and response:

NGINX -> IdP:
HTTP GET /realms/nginx/.well-known/openid-configuration

IdP -> NGINX:
HTTP 200 OK
Content-Type: application/json
Cache-Control: no-cache  // Means NGINX will store it for 1 hour by default
{
    "issuer": "https://kc.route443.dev/realms/nginx",
    "authorization_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth",
    "token_endpoint": "https://kc.route443.dev/realms/nginx/protocol/openid-connect/token",
    "jwks_uri": "http://kc.route443.dev:8080/realms/nginx/protocol/openid-connect/certs",
    "response_types_supported": [
        "code","none","id_token","token","id_token token",
        "code id_token","code token","code id_token token"
    ]
    // ... other parameters
}

This metadata tells NGINX everything it needs to know about how to redirect users for authentication, where to request tokens afterward, and which JWT signing keys to trust. By caching these results, NGINX avoids unnecessary lookups on subsequent logins, making the process more efficient for every user who follows.

Building the Authorization URL & Setting a Temporary Session Cookie
Now that NGINX has discovered and cached the IdP’s metadata, it’s ready to redirect your browser to Keycloak for the actual login. Here’s where the OpenID Connect Authorization Code Flow begins in earnest. NGINX adds a few crucial parameters, like response_type=code, client_id, redirect_uri, state, and nonce, to the authorization_endpoint it learned from the metadata, then sends you the following HTTP 302 response:

NGINX -> User Agent:
HTTP 302 Moved Temporarily
Location: https://kc.route443.dev/realms/nginx/protocol/openid-connect/auth?response_type=code&scope=openid&client_id=nginx&redirect_uri=http%3A%2F%2Fn1.route443.dev%2Foidc_callback&state=state&nonce=nonce
Set-Cookie: NGX_OIDC_SESSION=temp_cookie; Path=/; Secure; HttpOnly

At this point, you’re probably noticing the Set-Cookie: NGX_OIDC_SESSION=temp_cookie; line. This is a temporary session cookie, sometimes called a “pre-session” cookie. NGINX needs it to keep track of your “in-progress” authentication state - so once you come back from Keycloak with the authorization code, NGINX will know how to match that code to your browser session. However, since NGINX hasn’t actually validated any tokens yet, this cookie is only ephemeral. It remains a placeholder until Keycloak returns valid tokens and NGINX completes the final checks. Once that happens, you’ll get a permanent session cookie, which will then store your real session data across requests.

User Returns to NGINX with an Authorization Code
Once the user enters their credentials on Keycloak’s login page and clicks “Login”, Keycloak redirects the browser back to the URL specified in your redirect_uri parameter. In our example, that happens to be http://n1.route443.dev/oidc_callback. It’s worth noting that /oidc_callback is just the default location, and if you ever need something different, you can tweak it via the redirect_uri directive in the OIDC module configuration. When Keycloak redirects the user, it includes several query parameters in the URL, most importantly the code parameter (the authorization code) and state, which NGINX uses to ensure this request matches the earlier session-setup steps.
Here’s a simplified example of what the callback request might look like:

User Agent -> NGINX:
HTTP GET /oidc_callback
Query Parameter: state=state
Query Parameter: session_state=keycloak_session_state
Query Parameter: iss=https://kc.route443.dev/realms/nginx
Query Parameter: code=code

Essentially, Keycloak is handing NGINX a “proof” that this user successfully logged in, along with a cryptographic token (the code) that lets NGINX exchange it for real ID and access tokens. Since /oidc_callback is tied to NGINX’s native OIDC logic, NGINX automatically grabs these parameters, checks whether the state parameter matches what it originally sent to Keycloak, and then prepares to make a token request to the IdP’s token_endpoint. Note that the OIDC module does not use the iss parameter for identifying the provider; provider identity is verified through the state parameter and the pre-session cookie, which references a provider-specific key.

Exchanging the Code for Tokens and Validating the ID Token
Once NGINX receives the oidc_callback request and checks all parameters, it proceeds by sending a POST request to the Keycloak token_endpoint, supplying the authorization code, client credentials, and the redirect_uri:

NGINX -> IdP:
POST /realms/nginx/protocol/openid-connect/token
Host: kc.route443.dev
Authorization: Basic bmdpbng6c2VjcmV0
Form data:
    grant_type=authorization_code
    code=5865798e-682e-4eb7-8e3e-2d2c0dc5132e.f2abd107-35c1-4c8c-949f-03953a5249b2.nginx
    redirect_uri=https://n1.route443.dev/oidc_callback

Keycloak responds with a JSON object containing at least an id_token and an access_token, plus token_type=bearer. Depending on your IdP’s configuration and the scope you requested, the response might also include a refresh_token and an expires_in field. The expires_in value indicates how long the access token is valid (in seconds), and NGINX can use it to decide when to request a new token on the user’s behalf. At this point, the module also spends a moment validating the ID token’s claims - ensuring that fields like iss, aud, exp, and nonce align with what was sent initially. If any of these checks fail, the token is deemed invalid, and the request is rejected.

Once everything checks out, NGINX stores the tokens and session details. Here, the OIDC module takes advantage of the keyval mechanism to keep track of user sessions. You might wonder, “Where is that keyval zone configured?” The short answer is that it’s automatic for simplicity, unless you want to override it with your own settings. By default, you get up to 8 MB of session storage, which is more than enough for most use cases. But if you need something else, you can specify a custom zone via the session_store directive. If you’re curious to see this store in action, you can even inspect it through the NGINX Plus API endpoint, for instance: GET /api/9/http/keyvals/oidc_default_store_keycloak (where oidc_default_store_ is the prefix and keycloak is your oidc_provider name).
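As a side note, querying that keyvals endpoint assumes the NGINX Plus API is enabled somewhere in your configuration. A minimal, locked-down sketch is shown below; the listen port, location path, and allow list are illustrative choices, not part of the OIDC setup itself.

server {
    listen 8081;          # internal-only port for the Plus API (illustrative)

    location /api/ {
        api write=off;    # read-only access is enough for inspecting the session store
        allow 127.0.0.1;  # restrict access to localhost
        deny  all;
    }
}

With something like this in place, a GET to /api/9/http/keyvals/ on the internal port lists the keyval zones, including the automatically created OIDC session store.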
With the tokens now safely validated and stashed, NGINX is ready to finalize the session. The module issues a permanent session cookie back to the user and transitions them into the “logged-in” state, exactly what we’ll see in the next step.

Finalizing the Session and Passing Claims Upstream
Once NGINX verifies all tokens and securely stores the user’s session data, it sends a final HTTP 302 back to the client, this time setting a permanent session cookie:

NGINX -> User Agent:
HTTP 302 Moved Temporarily
Location: https://n1.route443.dev/
Set-Cookie: NGX_OIDC_SESSION=permanent_cookie; Path=/; Secure; HttpOnly

At this point, the user officially has a valid OIDC session in NGINX. Armed with that session cookie, they can continue sending requests to the protected resource (in our case, https://n1.route443.dev/). Each request now carries the NGX_OIDC_SESSION cookie, so NGINX recognizes the user as authenticated and automatically injects the relevant OIDC claims into request headers - such as sub, email, and name. This means your upstream application at http://127.0.0.1:8080 can rely on these headers to know who the user is and handle any additional logic accordingly.

Working with OIDC Variables
Now, let’s talk about how you can leverage the OIDC module for more than just simple authentication. One of its biggest strengths is its ability to extract token claims and forward them upstream in request headers. Any claim in the token can be used as an NGINX variable named $oidc_claim_name, where name is whichever claim you’d like to extract. In our example, we’ve already shown how to pass sub, email, and name, but you can use any claims that appear in the token. For a comprehensive list of possible claims, check the OIDC specification as well as your IdP’s documentation.

Beyond individual claims, you can also access the entire ID and access tokens directly via $oidc_id_token and $oidc_access_token. These variables can come in handy if you need to pass an entire token in a request header, or if you’d like to inspect its contents for debugging purposes. As you can see, configuring NGINX as a reverse proxy with OIDC support doesn’t require you to be an authentication guru. All you really need to do is set up the module, specify the parameters you want, and decide which token claims you’d like to forward as headers.

Handling Nested or Complex Claims (Using auth_jwt)
Sometimes, the claim you need to extract is actually a nested object, or even an array. That’s not super common, but it can happen if your Identity Provider returns complex data structures in the token. Currently, the OIDC module can’t directly parse nested claims - this is a known limitation that should be addressed in future releases. In the meantime, your best workaround is to use the auth_jwt module. Yes, it’s a bit of a detour, but right now it’s the only way (whether you use an njs-based approach or the native OIDC module) to retrieve more intricate structures from a token.

Let’s look at an example where the address claim is itself an object containing street, city, and zip, and we only want the city field forwarded as a header:

http {
    auth_jwt_claim_set $city address city;

    server {
        ...
        location / {
            auth_oidc keycloak;
            auth_jwt off token=$oidc_id_token;
            proxy_set_header x-city $city;
            proxy_pass http://127.0.0.1:8080;
        }
    }
}

Notice how we’ve set auth_jwt off token=$oidc_id_token. We’re effectively telling auth_jwt not to revalidate the token (because it was already validated during the initial OIDC flow) but to focus on extracting additional claims from it. Meanwhile, the auth_jwt_claim_set directive specifies the variable $city and points it to the nested city field in the address claim.
With this in place, you can forward that value in a custom header (x-city) to your application. And that’s it. By combining the OIDC module for authentication with the auth_jwt module for more nuanced claim extraction, you can handle even the trickiest token structures in NGINX. In most scenarios, though, you’ll find that the straightforward $oidc_claim_ variables do the job just fine and no extra modules are needed.

Role-Based Access Control (Using auth_jwt)
As you’ve noticed, because we’re not revalidating the token signature on every request, the overhead introduced by the auth_jwt module is fairly minimal. That’s great news for performance. But auth_jwt also opens up additional possibilities, like the ability to leverage the auth_jwt_require directive. With this, you can tap into NGINX not just for authentication, but also for authorization, restricting access to certain parts of your site or API based on claims (or any other variables you might be tracking). For instance, maybe you only want to grant admin-level users access to a specific admin dashboard. If a user’s token doesn’t include the right claim (like role=admin), you want to deny entry. Let’s take a quick look at how this might work in practice:

http {
    map $jwt_claim_role $role_admin {
        "admin" 1;
    }

    server {
        ...
        # Location for admin-only resources:
        location /admin {
            auth_jwt foo token=$oidc_id_token;

            # Check that $role_admin is not empty and not "0" -> otherwise return 403:
            auth_jwt_require $role_admin error=403;

            # If 403 happens, we show a custom page:
            error_page 403 /403_custom.html;

            proxy_pass http://127.0.0.1:8080;
        }

        # Location for the custom 403 page
        location = /403_custom.html {
            # Internal, so it can't be directly accessed from outside
            internal;
            # Return the 403 status and a custom message
            return 403 "Access restricted to admins only!";
        }
    }
}

How It Works:
- In our map block, we check the user’s $jwt_claim_role and set $role_admin to 1 if it matches "admin".
- Then, inside the /admin location, we have something like:

    auth_jwt foo token=$oidc_id_token;
    auth_jwt_require $role_admin error=403;

Here, foo is simply the realm name (a generic string you can customize), and token=$oidc_id_token tells NGINX which token to parse. At first glance, this might look like a normal auth_jwt configuration - but notice that we haven’t specified a public key via auth_jwt_key_file or auth_jwt_key_request. That means NGINX isn’t re-verifying the token’s signature here. Instead, it’s only parsing the token so we can use its claims within auth_jwt_require. Thanks to the fact that the OIDC module has already validated the ID token earlier in the flow, this works perfectly fine in practice. We still get access to $jwt_claim_role and can enforce auth_jwt_require $role_admin error=403;, ensuring anyone without the “admin” role gets an immediate 403 Forbidden. Meanwhile, we display a friendlier message by specifying:

    error_page 403 /403_custom.html;

So even though it might look like a normal JWT validation setup, it’s really a lesser-known trick to parse claims without re-checking signatures, leveraging the prior validation done by the OIDC module. This approach neatly ties together the native OIDC flow with role-based access control - without requiring us to juggle another set of keys.

Logout in OIDC
So far, we’ve covered how to log in with OIDC and handle advanced scenarios like nested claims or role-based control. But there’s another critical topic: how do users log out?
The OpenID Connect standard lays out several mechanisms:
- RP-Initiated Logout: The relying party (NGINX in this case) calls the IdP’s logout endpoint, which can clear sessions both in NGINX and at the IdP level.
- Front-Channel Logout: The IdP provides a way to notify the RP via a front-channel mechanism (often iframes or redirects) that the user has ended their session.
- Back-Channel Logout: Uses server-to-server requests between the IdP and the RP to terminate sessions behind the scenes.

Right now, the native OIDC module in its first release does not fully implement these logout flows. They’re on the roadmap, but as of today, you may need a workaround if you want to handle sign-outs more gracefully. Still, one of the great things about NGINX is that even if a feature isn’t officially implemented, you can often piece together a solution with a little extra configuration.

A Simple Logout Workaround
Imagine you have a proxied application that includes a “Logout” button or link. You want clicking that button to end the user’s NGINX session. Below is a conceptual snippet showing how you might achieve that:

http {
    server {
        listen 443 ssl;
        server_name n1.route443.dev;

        # OIDC provider config omitted for brevity
        # ...

        location / {
            auth_oidc keycloak;
            proxy_pass http://127.0.0.1:8080;
        }

        # "Logout" location that invalidates the session
        location /logout {
            # Here, we forcibly remove the NGX_OIDC_SESSION cookie
            add_header Set-Cookie "NGX_OIDC_SESSION=; Path=/; HttpOnly; Secure; Expires=Thu, 01 Jan 1970 00:00:00 GMT";
            # Optionally, we can redirect the user to a "logged out" page
            return 302 "https://n1.route443.dev/logged_out";
        }

        location = /logged_out {
            # A simple page or message confirming the user is logged out
            return 200 "You've been logged out.";
        }
    }
}

- /logout location: When the user clicks the “logout” link in your app, it can redirect them here.
- Clearing the cookie: We set NGX_OIDC_SESSION to an expired value, ensuring NGINX no longer recognizes this OIDC session on subsequent requests.
- Redirect to a “logged out” page: We redirect the user to /logged_out, or wherever you want them to land next.

Keep in mind, this approach only logs out at the NGINX layer. The user might still have an active session with the IdP (Keycloak, Entra ID, etc.) because it manages its own cookies. A fully synchronized logout - where both the RP and the IdP sessions end simultaneously - would require an actual OIDC logout flow, which the current module hasn’t fully implemented yet.
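If you also want to end the Keycloak session rather than only the NGINX one, a pragmatic extension of the workaround above is to clear the local cookie and then bounce the browser through Keycloak’s end-session endpoint. The sketch below is an untested illustration, not an official pattern from the module: it assumes Keycloak’s conventional logout path for the realm and that the post-logout redirect URI is registered on the nginx client.

# Hypothetical replacement for the /logout block above.
location /logout {
    # Needs an existing session so $oidc_id_token is populated.
    auth_oidc keycloak;

    # Drop the local NGINX session cookie.
    add_header Set-Cookie "NGX_OIDC_SESSION=; Path=/; HttpOnly; Secure; Expires=Thu, 01 Jan 1970 00:00:00 GMT";

    # Then let Keycloak terminate its own session (RP-initiated logout style).
    return 302 "https://kc.route443.dev/realms/nginx/protocol/openid-connect/logout?id_token_hint=$oidc_id_token&post_logout_redirect_uri=https%3A%2F%2Fn1.route443.dev%2Flogged_out";
}

Whether the ID token variable is available on this location without triggering a fresh login, and how strictly your IdP validates the hint and redirect URI, is worth verifying in a test environment before relying on this pattern.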
Conclusion
Whether you’re looking to protect a basic web app, parse claims, or enforce role-based policies, the native OIDC module in NGINX Plus R34 offers a way to integrate modern SSO at the proxy layer. Although certain scenarios (like nested claim parsing or fully fledged OIDC logout) may still require workarounds and careful configuration, the out-of-the-box experience is already much more user-friendly than the older njs-based solutions, and new features continue to land in every release. If you’re tackling more complex setups - like UserInfo endpoint support, advanced session management, or specialized logout requirements - stay tuned. The NGINX team is actively improving the module and extending its capabilities. With a little know-how (and possibly a sprinkle of auth_jwt magic), you can achieve an OIDC-based architecture that fits your exact needs, all while preserving the flexibility and performance NGINX is known for.

F5 NGINX Plus R34 Release Now Available

We’re excited to announce the availability of F5 NGINX Plus Release 34 (R34). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R34 include:
- Forward proxy support for NGINX usage reporting: With R34, NGINX Plus allows NGINX customers to send their license usage telemetry to F5 via an existing enterprise forward proxy in their environment.
- Native support for OpenID Connect configuration: With this release, we would like to announce the availability of the native OpenID Connect (OIDC) module in NGINX Plus. The native module brings simplified configuration and better performance while addressing many of the complexities of the existing njs-based solution.
- SSL Dynamic Certificate Caching: NGINX Plus R34 builds upon the certificate caching improvements in R32 and introduces support for caching of dynamic certificates and preserving this cache across configuration reloads.

Important Changes in Behavior
- Removal of the OpenTracing module: In NGINX Plus R32, we announced the deprecation of the OpenTracing module in favor of the OpenTelemetry module introduced in NGINX Plus R29, and it was marked for removal in NGINX Plus R34. The OpenTracing module has now been removed from NGINX Plus, effective this release.

Changes to Platform Support
- Added platforms: Alpine Linux 3.21
- Removed platforms: Alpine Linux 3.17, SLES12
- Deprecated platforms: Alpine Linux 3.18, Ubuntu 20.04

New Features in Detail

Forward proxy support for NGINX usage reporting
In the previous NGINX Plus release (NGINX Plus R33), we introduced major changes to NGINX Plus licensing, requiring all NGINX customers to report their commercial NGINX usage to F5. One of the key pieces of feedback we received for this feature was the need to enable NGINX instances to send telemetry via existing outbound proxies, primarily for environments where NGINX instances cannot connect to the F5 licensing endpoint directly. We are pleased to note that NGINX Plus R34 introduces support for using existing forward proxy solutions in customer environments for sending licensing telemetry to F5. With this update, NGINX Plus can now be configured to use an HTTP CONNECT proxy to establish a tunnel to the F5 licensing endpoint and send the usage telemetry.

Configuration
The following snippet shows the basic NGINX configuration needed in the ngx_mgmt_module module for sending NGINX usage telemetry via a forward proxy solution:

mgmt {
    proxy          HOST:PORT;
    proxy_username USER;  # optional
    proxy_password PASS;  # optional
}

For complete details, refer to the docs here.

Native support for OpenID Connect configuration
Previously, NGINX Plus relied on an njs-based solution for its OpenID Connect (OIDC) implementation, which involves intricate JavaScript files and advanced setup steps that are error-prone. With NGINX Plus R34, we’re thrilled to introduce native OIDC support in NGINX Plus. This native implementation eliminates many of the complexities of the njs-based approach, making it faster, highly efficient, and incredibly easy to configure, with no burdensome overhead of maintaining and upgrading the njs module. To this effect, a new module, ngx_http_oidc_module, is introduced in NGINX Plus R34 that implements authentication as a relying party in OIDC using the Authorization Code Flow. The native implementation allows the flexibility to enable OIDC authentication globally, or at a more granular per-server or per-location level.
It also allows effortless auto-discovery and retrieval of the OpenID provider’s configuration metadata without needing complex external scripts for each Identity Provider (IdP), greatly simplifying the configuration process. For a complete overview and examples of the features in the native implementation of OIDC in NGINX Plus, and how it improves upon the njs-based implementation, refer to the blog.

Configuration
The configuration to set up OIDC natively in NGINX Plus is relatively straightforward, requiring minimal directives when compared to the njs-based implementation:

http {
    resolver 10.0.0.1;

    oidc_provider my_idp {
        issuer        "https://provider.domain";
        client_id     "unique_id";
        client_secret "unique_secret";
    }

    server {
        location / {
            auth_oidc my_idp;
            proxy_set_header username $oidc_claim_sub;
            proxy_pass http://backend;
        }
    }
}

The example assumes that the “https://<nginx-host>/oidc_callback” redirection URI is configured on the OpenID Provider's side. For instructions on how to configure the native OIDC module for various identity providers, refer to the NGINX deployment guide.

SSL Certificate Caching improvements
In NGINX Plus R32, we introduced changes to cache various SSL objects and reuse the cached objects elsewhere in the configuration. This provided noticeable improvements in initial configuration load time, primarily where a small number of unique objects were being referenced multiple times. With R34, we are adding further enhancements to this functionality: cached SSL objects are now reused across configuration reloads, making reloads even faster. Also, SSL certificates with variables are now cached as well. Refer to the blog for a detailed overview of this feature implementation.

Other Enhancements and Bug Fixes

Keepalive timeout improvements
Prior to this release, idle keepalive connections could be closed any time the connection needed to be reused for another client or when the worker was gracefully shutting down. With NGINX Plus R34, a new directive, keepalive_min_timeout, is being introduced. This directive sets a timeout during which a keepalive connection will not be closed by NGINX for connection reuse or graceful worker shutdown. The change allows clients that send multiple requests over the same connection without delay, or with a small delay between them, to avoid receiving a TCP RST in response to one of them (barring network issues or a non-graceful worker shutdown). As a side effect, it also addresses the TCP reset problem described in RFC 9112, Section 9.6, where the last sent HTTP response could be damaged by a follow-up TCP RST. This is important for non-idempotent requests, which cannot be retried by the client. It is, however, recommended not to set keepalive_min_timeout to large values, as this can introduce additional delay during worker process shutdown and may restrict NGINX from reusing connections effectively.
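To make that concrete, here is a minimal sketch of how the new directive could sit alongside the familiar keepalive_timeout. The values are illustrative, and the exact contexts the directive is allowed in should be confirmed against the R34 reference documentation.

http {
    keepalive_timeout     75s;   # how long an idle keepalive connection may live overall
    # Keep idle connections open for at least this long before NGINX closes
    # them for reuse or during a graceful worker shutdown.
    keepalive_min_timeout 5s;
}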
Improved health check logging
NGINX Plus R34 adds logging enhancements in the error log for better visibility while troubleshooting upstream health check failures. The server status code is now logged on health check failures.

Increased session key size
Prior to R34, NGINX accepted an SSL session of at most 4k (4096) bytes. With NGINX Plus R34, the maximum session size has been increased to 8k (8192) bytes to accommodate use cases where sessions can be larger than 4k bytes. For example, in cases where a client certificate is saved in the session, with tickets (in TLS v1.2 or older versions), or with stateless tickets (in TLS v1.3), sessions may be noticeably larger. Certain stateless session resumption implementations may store additional data as well. One such case is the JDK, which is known to include server certificates in the session ticket data, roughly doubling the decoded session size. The changes also include improved logging to capture cases when sessions are not saved in shared memory due to size.

Changes in the OpenTelemetry Module
TLS support in OTEL traces: NGINX now allows enabling TLS for sending OTEL traces. It can be enabled by specifying the "https" scheme in the endpoint, as shown:

otel_exporter {
    endpoint "https://otel.labt.fp.f5net.com:4433";
    trusted_certificate "path/to/custom/ca/bundle";  # optional
}

By default, the system CA bundle is used to verify the endpoint's certificate; this can be overridden with the "trusted_certificate" directive if required. For a complete list of changes to the OTEL module, refer to the NGINX OTEL change log.

Changes Inherited from NGINX Open Source
NGINX Plus R34 is based on the NGINX mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R33 was released (in NGINX Open Source 1.27.3 and 1.27.4 mainline versions).

Features:
- SSL Certificate Caching.
- The "keepalive_min_timeout" directive.
- The "server" directive in the "upstream" block supports the "resolve" parameter.
- The "resolver" and "resolver_timeout" directives in the "upstream" block.
- SmarterMail-specific mode support for IMAP LOGIN with untagged CAPABILITY response in the mail proxy module.

Changes:
- The TLSv1 and TLSv1.1 protocols are now disabled by default.
- An IPv6 address in square brackets and no port can be specified in the "proxy_bind", "fastcgi_bind", "grpc_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives, and as the client address in ngx_http_realip_module.

Bug Fixes:
- "gzip filter failed to use preallocated memory" alerts appeared in logs when using zlib-ng.
- nginx could not build the libatomic library using the library sources if the --with-libatomic=DIR option was used.
- A QUIC connection might not be established when using 0-RTT; the bug had appeared in 1.27.1.
- NGINX now ignores QUIC version negotiation packets from clients.
- NGINX could not be built on Solaris 10 and earlier with the ngx_http_v3_module.
- Bug fixes in HTTP/3.
- Bug fixes in the ngx_http_mp4_module.
- The "so_keepalive" parameter of the "listen" directive might be handled incorrectly on DragonFly BSD.
- Bug fix in the proxy_store directive.

Security:
- Insufficient check in virtual servers handling with TLSv1.3 SNI allowed the reuse of SSL sessions in a different virtual server, to bypass client SSL certificate verification (CVE-2025-23419).

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module
NGINX Plus R34 incorporates changes from the NGINX JavaScript (njs) module version 0.8.9. The following is a list of notable changes in njs since 0.8.7 (which was the version shipped with NGINX Plus R33).

Features:
- Added the fs module for the QuickJS engine.
- Implemented the process object for the QuickJS engine.
- Implemented the process.kill() method.

Bug Fixes:
- Removed extra VM creation per server. Previously, when js_import was declared in http or stream blocks, an extra copy of the VM instance was created for each server block.
  This was not needed and consumed a lot of memory for configurations with many server blocks. This issue was introduced in 9b674412 (0.8.6) and was partially fixed for location blocks only in 685b64f0 (0.8.7).
- Fixed XML tests with libxml2 2.13 and later.
- Fixed promise resolving when Promise is inherited.
- Fixed absolute scope in cloned VMs.
- Fixed limit rated output.
- Optimized use of SSL contexts for the js_fetch_trusted_certificate directive.

For a comprehensive list of all the features, changes, and bug fixes, see the njs Changelog.

How I did it - “Delivering Kasm Workspaces three ways”
Securing modern, containerized platforms like Kasm Workspaces requires a robust and multi-faceted approach to ensure performance, reliability, and data protection. In this edition of "How I did it," we'll see how F5 technologies can enhance the security and scalability of Kasm Workspaces deployments.

Installing and Locking a Specific Version of F5 NGINX Plus
A guide for installing and locking a specific version of NGINX Plus to ensure stability, meet internal policies, and prepare for controlled upgrades.

Introduction
The most common way to install F5 NGINX Plus is by using the package manager tool native to your Linux host (e.g., yum, apt-get, etc.). By default, the package manager installs the latest available version of NGINX Plus. However, there may be scenarios where you need to install an earlier version. To help you modify your automation scripts, we’ve provided example commands for selecting a specific version.

Common Scenarios for Installing an Earlier Version of NGINX Plus
- Your internal policy requires sticking to internally tested versions before deploying the latest release.
- You prefer to maintain consistency by using the same version across your entire fleet for simplicity.
- You’d like to verify and meet additional requirements introduced in a newer release (e.g., NGINX Plus Release 33) before upgrading.

Commands for Installing and Holding a Specific Version of NGINX Plus
Use the following commands based on your Linux distribution to install and lock a prior version of NGINX Plus:

Ubuntu 20.04, 22.04, 24.04 LTS
    sudo apt-get update
    sudo apt-get install -y nginx-plus=<VERSION>
    sudo apt-mark hold nginx-plus

Debian 11, 12
    sudo apt-get update
    sudo apt-get install -y nginx-plus=<VERSION>
    sudo apt-mark hold nginx-plus

AlmaLinux 8, 9 / Rocky Linux 8, 9 / Oracle Linux 8.1+, 9 / RHEL 8.1+, 9
    sudo yum install -y nginx-plus-<VERSION>
    sudo yum versionlock nginx-plus

Amazon Linux 2 LTS, 2023
    sudo yum install -y nginx-plus-<VERSION>
    sudo yum versionlock nginx-plus

SUSE Linux Enterprise Server 12, 15 SP5+
    sudo zypper install nginx-plus=<VERSION>
    sudo zypper addlock nginx-plus

Alpine Linux 3.17, 3.18, 3.19, 3.20
    apk add nginx-plus=<VERSION>
    echo "nginx-plus hold" | sudo tee -a /etc/apk/world

FreeBSD 13, 14
    pkg install nginx-plus-<VERSION>
    pkg lock nginx-plus

Notes
- Replace <VERSION> with the desired version (e.g., 32-2*).
- After installation, verify the installed version with the command: nginx -v.
- Holding or locking the package ensures it won’t be inadvertently upgraded during routine updates.

Conclusion
Installing and locking a specific version of NGINX Plus ensures stability, compliance with internal policies, and proper validation of new features before deployment. By following the provided commands tailored to your Linux distribution, you can confidently maintain control over your infrastructure while minimizing the risk of unintended upgrades. Regularly verifying the installed version and holding updates will help ensure consistency and reliability across your environments.

F5 NGINX Plus R33 Release Now Available
We’re excited to announce the availability of NGINX Plus Release 33 (R33). The release introduces major changes to NGINX licensing, support for post-quantum cryptography, initial support for the QuickJS runtime in NGINX JavaScript, and a lot more.

F5 NGINX Plus R33 Licensing and Usage Reporting
Beginning with F5 NGINX Plus version R33, all customers are required to deploy a JSON Web Token (JWT) license for each commercial instance of NGINX Plus. Each instance is responsible for validating its own license status. Furthermore, NGINX Plus will report usage either to the F5 NGINX licensing endpoint or to F5 NGINX Instance Manager for customers who are connected. For those customers who are disconnected or operate in an air-gapped environment, usage can be reported directly to F5 NGINX Instance Manager. To learn more about the latest features of NGINX R33, please check out the recent blog post.

Install or Upgrade NGINX Plus R33
To successfully upgrade to NGINX Plus R33 or perform a fresh installation, begin by downloading the JWT license from your F5 account. Once you have the license, place it in the F5 NGINX directory before proceeding with the upgrade. For a fresh installation, after completing the installation, also place the JWT license in the NGINX directory. For further details, please refer to the provided instructions. This video provides a step-by-step guide on installing or upgrading to NGINX Plus R33.

Report Usage to F5 in a Connected Environment
To effectively report usage data to F5 within a connected environment using NGINX Instance Manager, it's important to ensure that port 443 is open. The default configuration directs the usage endpoint to send reports directly to the F5 licensing endpoint at product.connect.nginx.com. By default, usage reporting is enabled, and it's crucial to successfully send at least one report on installation for NGINX to process traffic. However, you can postpone the initial reporting requirement by turning off the directive in your NGINX configuration. This allows NGINX Plus to handle traffic without immediate reporting during a designated grace period. To configure usage reporting to F5 using NGINX Instance Manager, update the usage endpoint to reflect the fully qualified domain name (FQDN) of the NGINX Instance Manager. For further details, please refer to the provided instructions. This video shows how to report usage in the connected environment using NGINX Instance Manager.

Report Usage to F5 in a Disconnected Environment Using NGINX Instance Manager
In a disconnected environment without an internet connection, you need to take certain steps before submitting usage data to F5. First, in NGINX Plus R33, update the usage_report directive within the management block of your NGINX configuration to point to your NGINX Instance Manager host. Ensure that your NGINX R33 instances can access the NGINX Instance Manager by setting up the necessary DNS entries. Next, in the NMS configuration in NGINX Instance Manager, change the ‘mode of operation’ to disconnected, save the file, and restart NGINX Instance Manager. There are multiple methods available for adding a license and submitting the initial usage report in this disconnected environment. You can use a Bash script, the REST API, or the web interface. For detailed instructions on each method, please refer to the documentation. This video shows how to report usage in disconnected environments using NGINX Instance Manager.
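As a rough sketch of what that change looks like in the management block, the snippet below points usage reporting at an NGINX Instance Manager host. The hostname and interval are illustrative, and the exact parameter syntax should be checked against the ngx_mgmt_module documentation for R33.

mgmt {
    # Send usage telemetry to a local NGINX Instance Manager
    # instead of the default F5 licensing endpoint.
    usage_report endpoint=nim.example.internal interval=30m;
}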
Conclusion
The transition to NGINX Plus R33 introduces important enhancements in licensing and usage reporting that can greatly improve your management of NGINX instances. With the implementation of JSON Web Tokens (JWT), you can validate your subscription and report telemetry data more effectively. To ensure compliance and optimize performance, it’s crucial to understand the best practices for usage reporting, regardless of whether you are operating in a connected or disconnected environment. Get started today with a 30-day trial, and contact us if you have any questions.

Resources
- NGINX support documentation
- Blog announcement providing a comprehensive summary of the new features in this release

Introducing the New Docker Compose Installation Option for F5 NGINX Instance Manager
F5 NGINX Instance Manager (NIM) is a centralised management solution designed to simplify the administration and monitoring of F5 NGINX instances across various environments, including on-premises, cloud, and hybrid infrastructures. It provides a single interface to efficiently oversee multiple NGINX instances, making it particularly useful for organizations using NGINX at scale. We’re excited to introduce a new Docker Compose installation option for NGINX Instance Manager, designed to help you get up and running faster than ever before, in just a couple of steps.

Key Features:
- Quick and Easy Installation: With just a couple of steps, you can pull and deploy NGINX Instance Manager on any Docker host, without having to manually configure multiple components. The image is available in our container registry, so once you have a valid license to access it, getting up and running is as simple as pulling the container.
- Fault-Tolerant and Resilient: This installation option is designed with fault tolerance in mind. Persistent storage ensures your data is safe even in the event of container restarts or crashes. Additionally, with a separate database container, your product’s data is isolated, adding an extra layer of resilience and making it easier to manage backups and restores.
- Seamless Upgrades: Upgrades are a breeze. You can update to the latest version of NGINX Instance Manager by simply updating the image tag in your Docker Compose file. This makes it easy to stay up to date with the latest features and improvements without worrying about downtime or complex upgrade processes.
- Backup and Restore Options: To ensure your data is protected, this installation option comes with built-in backup and restore capabilities. Easily back up your data to a safe location and restore it in case of any issues.
- Environment Configuration Flexibility: The Docker Compose setup allows you to define custom environment variables, giving you full control over configuration settings such as log levels, timeout values, and more.
- Production-Ready: Designed for scalability and reliability, this installation method is ready for production environments. With proper resource allocation and tuning, you can deploy NGINX Instance Manager to handle heavy workloads while maintaining performance.

The following steps walk you through how to deploy and manage NGINX Instance Manager using Docker Compose.

What you need
- A working version of Docker
- Your NGINX subscription’s JSON Web Token from MyF5
- This pre-configured docker-compose.yaml file: Download docker-compose.yaml file.

Step 1 - Set up Docker for the NGINX container registry
Log in to the Docker registry using the contents of the JSON Web Token file you downloaded from MyF5:

    docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none

Step 2 - Run “docker login” and then “docker compose up” in the directory where you downloaded docker-compose.yaml
Note: You can optionally set the Administrator password for NGINX Instance Manager prior to running Docker Compose.

    ~$ docker login private-registry.nginx.com --username=<JWT_CONTENTS> --password=none
    ~$ echo "admin" > admin_password.txt
    ~$ docker compose up -d
    [+] Running 6/6
    ✔ Network nim_clickhouse        Created   0.1s
    ✔ Network nim_external_network  Created   0.2s
    ✔ Network nim_default           Created   0.2s
    ✔ Container nim-precheck-1      Started   0.8s
    ✔ Container nim-clickhouse-1    Healthy   6.7s
    ✔ Container nim-nim-1           Started   7.4s
Step 3 – Access NGINX Instance Manager
Go to the NGINX Instance Manager UI on https://<<DOCKER_HOST>>:443 and license the product using the same JSON Web Token you downloaded from MyF5 earlier.

Conclusion
With this new setup, you can install and run NGINX Instance Manager on any Docker host in just three steps, dramatically reducing setup time and simplifying deployment. Whether you are working in a development environment or deploying to production, the Docker Compose-based solution ensures a seamless and reliable experience. For more information on using the Docker Compose option with NGINX Instance Manager, such as running a backup and restore, using secrets, and many more topics, please see the instructions here.

How I did it - "Securing NVIDIA’s Morpheus AI Framework with NGINX Plus Ingress Controller”
In this installment of "How I Did It," we continue our journey into AI security. I have documented how I deployed an NVIDIA Morpheus AI infrastructure along with F5's NGINX Plus Ingress Controller to provide secure and scalable external access.