Fine-Tuning F5 NGINX WAF Policy with Policy Lifecycle Manager and Security Dashboard
Introduction

Traditional WAF management often relies on manual, error-prone editing of JSON or configuration files, resulting in inconsistent security policies across distributed applications. F5 NGINX One Console and NGINX Instance Manager address this by providing intuitive graphical user interfaces (GUIs) that replace complex text editors with visual controls. This visual approach empowers SecOps teams to manage security precisely at three distinct levels:

- Broad protection: rapidly enabling or disabling entire signature sets to cover broad categories of attacks quickly.
- Targeted tuning: enabling or disabling signatures for a specific attack type.
- Granular control: defining precise actions for specific user-defined URLs, cookies, or parameters, ensuring that security does not break legitimate application functionality.

Centralized Policy Management (F5 NGINX One Console)

This video illustrates the shift from manually managing isolated NGINX WAF configurations to a unified, automated approach. With NGINX One Console, you can establish a robust "Golden Policy" and enforce it consistently across development, staging, and production environments from a single SaaS interface. The platform simplifies complex security tasks through a visual JSON editor that makes advanced protection accessible to the entire team, not just deep experts. It also prioritizes operational safety: the "Diff View" lets you validate changes against the active configuration side by side before going live. This enables a smooth workflow where policies are tested in "Transparent Mode" and seamlessly toggled to "Blocking Mode" once validated, ensuring security measures never slow down your release cycles.

Operational Visibility & Tuning (F5 NGINX Instance Manager)

This video highlights how NGINX Instance Manager transforms troubleshooting from a tedious log-hunting exercise into a rapid, visual investigation.
When a user is blocked, support teams can simply paste a Support ID into the dashboard to instantly locate the exact log entry, eliminating the need to grep through text files on individual servers. The console's new features allow for surgical precision rather than blunt force; instead of turning off entire security signatures, you can create granular exceptions for specific patterns, such as a semicolon in a URL, while keeping the rest of your security wall intact. Combined with visual dashboards that track threat campaigns and signature status, this tool drastically reduces Mean Time To Resolution (MTTR) and ensures security controls don't degrade the application experience.

Conclusion

The F5 NGINX One Console and F5 NGINX Instance Manager go beyond simplifying workflows: they unlock the full potential of your security stack. With a clear, visual interface, they let you manage the entire range of WAF capabilities easily. These tools make advanced security manageable by allowing you to create and fine-tune policies with precision, whether adjusting broad signature sets or defining rules for specific URLs and parameters. By streamlining these tasks, they enable you to handle complex operations that were once roadblocks, providing a smooth, effective way to keep your applications secure.

Resources

DevCentral article: https://community.f5.com/kb/technicalarticles/introducing-f5-waf-for-nginx-with-intuitive-gui-in-nginx-one-console-and-nginx-i/343836
NGINX One documentation: https://docs.nginx.com/nginx-one-console/waf-integration/overview/
NGINX Instance Manager documentation: https://docs.nginx.com/nginx-instance-manager/waf-integration/overview/

The Ingress NGINX Alternative: F5 NGINX Ingress Controller for the Long Term
The Kubernetes community recently announced that Ingress NGINX will be retired in March 2026. After that date, there will be no more updates, bug fixes, or security patches. ingress-nginx is no longer a viable enterprise solution for the long term, and organizations using it in production should move quickly to explore alternatives and plan to shift their workloads to Kubernetes ingress solutions that remain under active development.

Your Options (And Why We Hope You'll Consider NGINX)

There are several good Ingress controllers available: Traefik, HAProxy, Kong, Envoy-based options, and Gateway API implementations. The Kubernetes docs list many of them, and they all have their strengths. Security start-up Chainguard is maintaining a status-quo version of ingress-nginx and applying basic safety patches as part of its EmeritOSS program, but that program is designed as a stopgap to keep users safe while they transition to a different ingress solution.

F5 maintains a permissively licensed, open source NGINX Ingress Controller. The project is Apache 2.0 licensed and will stay that way, with a team of dedicated engineers working on it and a slate of upcoming upgrades. If you're already comfortable with NGINX and just want something that works without a significant learning curve, we believe that the F5 NGINX Ingress Controller for Kubernetes is your smoothest path forward.

The benefits of adopting NGINX Ingress Controller open source include:

- Genuinely open source: Apache 2.0 licensed with 150+ contributors from diverse organizations, not just F5. All development happens publicly on GitHub, F5 has committed to keeping it open source forever, and there are community calls every two weeks.
- Minimal learning curve: Uses the same NGINX engine you already know. Most Ingress NGINX annotations have direct equivalents, and the migration guide provides clear mappings for your existing configurations.
Supported annotations include popular ones such as:

- nginx.org/client-body-buffer-size mirrors nginx.ingress.kubernetes.io/client-body-buffer-size (sets the maximum size of the client request body buffer); also available in VirtualServer and ConfigMap.
- nginx.org/rewrite-target mirrors nginx.ingress.kubernetes.io/rewrite-target (sets a replacement path for URI rewrites).
- nginx.org/ssl-ciphers mirrors nginx.ingress.kubernetes.io/ssl-ciphers (configures enabled TLS cipher suites).
- nginx.org/ssl-prefer-server-cipher mirrors nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers (controls server-side cipher preference during the TLS handshake).

- Optional enterprise-grade capabilities: While the OSS version is robust, NGINX Plus integration is available for enterprises needing high availability, authentication and authorization, session persistence, advanced security, and commercial support.
- Sustainable maintenance: A dedicated full-time team at F5 ensures regular security updates, bug fixes, and feature development.
- Production-tested at scale: NGINX Ingress Controller powers approximately 40% of Kubernetes Ingress deployments with over 10 million downloads. It's battle-tested in real production environments.
- Kubernetes-native design: Custom Resource Definitions (VirtualServer, Policy, TransportServer) provide cleaner configuration than annotation overload, with built-in validation to prevent errors.
- Advanced capabilities when you need them: Support for canary deployments, A/B testing, traffic splitting, JWT validation, rate limiting, mTLS, and more, all available in the open source version.
- Future-proof architecture: Active development of NGINX Gateway Fabric provides a clear migration path when you're ready to move to Gateway API. NGINX Gateway Fabric is a conformant Gateway API solution under CNCF conformance criteria and one of the most widely used open source Gateway API solutions.

Moving to NGINX Ingress Controller

Here's a rough migration guide.
You can also check our more detailed migration guide on our documentation site.

Phase 1: Take Stock
- See what you have: Document your current Ingress resources, annotations, and ConfigMaps.
- Check for snippets: Identify any annotations like nginx.ingress.kubernetes.io/configuration-snippet.
- Confirm you're using it: Run kubectl get pods --all-namespaces --selector app.kubernetes.io/name=ingress-nginx.
- Set it up alongside: Install NGINX Ingress Controller in a separate namespace while keeping your current setup running.

Phase 2: Translate Your Config
- Convert annotations: Most of your existing annotations have equivalents in NGINX Ingress Controller; there's a comprehensive migration guide that maps them out.
- Consider VirtualServer resources: These custom resources are cleaner than annotation-heavy Ingress and give you more control, but it's your choice.
- Or keep using Ingress: If you want minimal changes, it works fine with standard Kubernetes Ingress resources.
- Handle edge cases: For anything that doesn't map directly, you can use snippets or Policy resources.

Phase 3: Test Everything
- Try it with test apps: Create some test Ingress rules pointing to NGINX Ingress Controller.
- Run both side by side: Keep both controllers running and route test traffic through the new one.
- Verify functionality: Check routing, SSL, rate limiting, CORS, auth, and whatever else you're using.
- Check performance: Verify it handles your traffic the way you need.

Phase 4: Move Over Gradually
- Start small: Migrate your less-critical applications first.
- Shift traffic slowly: Update DNS/routing bit by bit.
- Watch closely: Keep an eye on logs and metrics as you go.
- Keep an escape hatch: Make sure you can roll back if something goes wrong.

Phase 5: Finish Up
- Complete the migration: Move your remaining workloads.
- Clean up the old controller: Uninstall community Ingress NGINX once everything's moved.
- Tidy up: Remove old ConfigMaps and resources you don't need anymore.

Enterprise-grade capabilities and support

Once an ingress layer becomes mission-critical, enterprise features become necessary. High availability, predictable failover, and supportability matter as much as features. Enterprise-grade capabilities available for NGINX Ingress Controller with NGINX Plus include high availability, authentication and authorization, commercial support, and more. These ensure production traffic remains fast, secure, and reliable. Capabilities include:

Commercial support
- Backed by vendor commercial support (SLAs, escalation paths) for production incidents.
- Access to tested releases, patches, and security fixes suitable for regulated/enterprise environments.
- Guidance for production architecture (HA patterns, upgrade strategies, performance tuning).
- Helps organizations standardize on a supported ingress layer for platform engineering at scale.

Dynamic reconfiguration
- Upstream configuration updates via API without process reloads.
- Eliminates memory bloat and connection timeouts, as upstream server lists and variables are updated in real time when pods scale or configurations change.

Authentication & authorization
- Built-in authentication support for OAuth 2.0 / OIDC, JWT validation, and basic auth.
- External identity provider integration (e.g., Okta, Azure AD, Keycloak) via auth request patterns.
- JWT validation at the edge, including signature verification, claims inspection, and token expiry enforcement.
- Fine-grained access control based on headers, claims, paths, methods, or user identity.

Optional Web Application Firewall
- Native integration with F5 WAF for NGINX for OWASP Top 10 protection, gRPC schema validation, and OpenAPI enforcement.
- DDoS mitigation capabilities when combined with F5 security solutions.
- Centralized policy enforcement across multiple ingress resources.

High availability (HA)
- Designed to run as multiple Ingress Controller replicas in Kubernetes for redundancy and scale.
- State sharing: Maintains session persistence, rate limits, and key-value stores for seamless uptime.
Here's the full list of differences between NGINX Open Source and NGINX One, a package that includes NGINX Plus Ingress Controller, NGINX Gateway Fabric, F5 WAF for NGINX, and NGINX One Console for managing NGINX Plus Ingress Controllers at scale.

Get Started Today

Ready to begin your migration? Here's what you need:

📚 Read the full documentation: NGINX Ingress Controller Docs
💻 Clone the repository: github.com/nginx/kubernetes-ingress
🐳 Pull the image: Docker Hub - nginx/nginx-ingress
🔄 Follow the migration guide: Migrate from Ingress-NGINX to NGINX Ingress Controller

Interested in the enterprise version? Try NGINX One for free and give it a whirl.

The NGINX Ingress Controller community is responsive and full of passionate builders; join the conversation in the GitHub Discussions or the NGINX Community Forum. You've got time to plan this migration right, but don't wait until March 2026 to start.

F5 NGINX Plus R36 Release Now Available
We're thrilled to announce the availability of F5 NGINX Plus Release 36 (R36). NGINX Plus R36 adds advanced capabilities like an HTTP CONNECT forward proxy, richer OpenID Connect (OIDC) abilities, expanded ACME options, and new container images packaged with popular modules. In addition, NGINX Plus inherits all the latest capabilities from NGINX Open Source, the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

Here's a summary of the most important updates in R36:

- HTTP CONNECT Forward Proxy: NGINX Plus can now tunnel HTTP traffic via the HTTP CONNECT method, making it easy to centralize egress policies through a trusted NGINX Plus server. Advanced features can limit proxied traffic to specific origin clients, ports, or destination hosts and networks.
- Native OIDC Support for PKCE, Front-Channel Logout, and POST Client Authentication: We've made additional enhancements to the popular OpenID Connect module by adding support for PKCE enforcement, the ability to log out of all applications a client was signed in to, and support for authenticating the OIDC client using the client_secret_post method.
- ACME Enhancements for Certificate Automation: Certificate automation capabilities are more powerful than ever, as we continue to improve the ACME (Automatic Certificate Management Environment) module. New features include support for the TLS-ALPN-01 challenge method, the ability to select a specific certificate chain, IP-based certificates, ACME profiles, and external account bindings.
- TLS Certificate Compression: High-volume or high-latency connections can now benefit from a new capability that optimizes TLS handshakes by compressing certificates and associated CA chains.
- Container Images with Popular Modules: Container images now include our most popular modules, making it even easier to deploy NGINX Plus in container environments.
Alongside the previously included njs module, images now ship with the ACME, OpenTelemetry Tracing, and Prometheus Exporter modules.

Key features inherited from NGINX Open Source include:
- Support for 0-RTT in QUIC
- Inheritance control for headers and trailers
- Support for OpenSSL 3.5

New Features in Detail

HTTP CONNECT Forward Proxy

NGINX Plus R36 adds native HTTP CONNECT support via a new ngx_http_tunnel_module that enables NGINX Plus to operate as a forward proxy for outbound HTTP/HTTPS traffic. You can restrict clients, ports, and hosts, as well as which networks are reachable, via centralized egress policies. This new capability allows you to monitor outbound connections through one or more NGINX Plus instances instead of relying on separate proxy products or DIY open source projects.

Why it matters
- Enforces consistent egress policies without introducing another proxy to your stack.
- Centralizes proxy rules and threat monitoring for outbound connections.
- Reduces the need for custom modules or sidecar proxies to tunnel outbound TLS.

Who it helps
- Platform and network teams that must route all outbound traffic through a controlled proxy.
- Security teams that want a single place to enforce and audit egress rules.
- Operators consolidating L7 functions (inbound and outbound) on NGINX Plus.

The following is a sample forward proxy configuration that filters ports, destination hosts, and networks. Note the requirement to specify a DNS resolver. For simplicity, we've configured a popular, publicly available DNS service; doing so is not recommended for certain production environments, due to access limitations and unpredictable availability.
num_map $request_port $allowed_ports {
    443       1;
    8440-8445 1;
}

geo $upstream_last_addr $allowed_nets {
    52.58.199.0/24   1;
    3.125.197.172/24 1;
}

map $host $allowed_hosts {
    hostnames;
    nginx.org   1;
    *.nginx.org 1;
}

server {
    listen 3128;
    resolver 1.1.1.1;  # unsafe

    tunnel_allow_upstream $allowed_nets $allowed_ports $allowed_hosts;
    tunnel_pass;
}

Native OpenID Connect Support for PKCE, Front-Channel Logout, and POST Client Authentication

R36 extends the native OIDC features introduced in R34 with PKCE enforcement, front-channel logout, and support for authenticating clients via the client_secret_post method.

PKCE for the authorization code flow can now be auto-enabled for a configured identity provider when S256 is advertised in "code_challenge_methods_supported" in federated metadata. Alternatively, it can be explicitly toggled with a simple directive:

pkce on;   # force PKCE even if the OP does not advertise it
pkce off;  # disable PKCE even if the OP advertises S256

A new front-channel logout endpoint lets an identity provider trigger a browser-based sign-out of all applications that a client is signed in to.

oidc_provider okta_app {
    issuer                  https://dev.okta.com/oauth2/default;
    client_id               <client_id>;
    client_secret           <client_secret>;
    logout_uri              /logout;
    logout_token_hint       on;
    frontchannel_logout_uri /front_logout;
    userinfo                on;
}

Support for client_secret_post improves compatibility with identity providers that require POST-based client authentication per OpenID Connect Core 1.0.

Why it matters
- Brings NGINX Plus in line with modern identity provider expectations that treat PKCE as a best practice for all authorization code flows.
- Enables reliable logout across multiple applications from a single identity provider trigger.
- Makes NGINX Plus easier to integrate with providers that insist on POST-based client authentication.

Who it helps
- Identity and access management and security teams standardizing on OIDC and PKCE.
- Application and API owners who need consistent login and logout behavior across many services.
- Architects integrating with cloud identity providers (Entra ID, Okta, etc.) that require specific OIDC patterns.

ACME Enhancements for Certificate Automation

Building on the ACME support shipped in our previous release, NGINX Plus R36 incorporates new features from the nginx acme project, including:
- TLS-ALPN-01 challenge support
- Selection of a specific certificate chain
- IP-based certificates
- ACME profiles
- External account bindings

These capabilities align NGINX Plus with what is available in NGINX Open Source via the ngx_http_acme_module.

Why it matters
- Lets you keep more of the certificate lifecycle inside NGINX instead of relying on external tooling.
- Makes it easier to comply with CA policies and organizational PKI standards.
- Supports more complex environments such as IP-only endpoints and specific chain requirements.

Who it helps
- SRE and platform teams running large fleets that rely on automated certificate renewal.
- Security and PKI teams that need tighter control over certificate chains, profiles, and CA bindings.
- Operators who want to standardize on a single ACME implementation across NGINX Plus and NGINX Open Source.

TLS Certificate Compression

TLS certificate compression, introduced in NGINX Open Source 1.29.1, is now available in NGINX Plus R36. The ssl_certificate_compression directive controls certificate compression during TLS handshakes; it remains off by default and must be explicitly enabled.

Why it matters
- Reduces the size of TLS handshakes when certificate chains are large.
- Can optimize performance in high-connection-volume or high-latency environments.
- Offers incremental performance optimization without changing application behavior.

Who it helps
- Performance-focused operators running services with many short-lived TLS connections.
- Teams supporting mobile or high-latency clients where every byte in the handshake matters.
- Operators who want to experiment with handshake optimizations while retaining control via explicit configuration.

Container Images with Popular Modules Included

Finally, NGINX Plus R36 introduces container images that bundle NGINX Plus with commonly requested first-party modules such as OpenTelemetry (OTel) Tracing, ACME, Prometheus Exporter, and NGINX JavaScript (njs). Options include images with or without F5 NGINX Agent, and images running NGINX in privileged or unprivileged mode. Additionally, a slim NGINX Plus-only image will be made available for teams who prefer to build bespoke custom containers.

Additional Enhancements Available in NGINX Plus R36
- Upstream-specific request headers
- Dynamically assigned proxy_bind pools

Changes Inherited from NGINX Open Source

NGINX Plus R36 is based on the NGINX 1.29.3 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R35 was released (which was based on the 1.29.0 mainline release). For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to Platform Support

Added platforms. Support for the following platforms has been added:
- Debian 13
- Rocky Linux 10
- SUSE Linux Enterprise Server 16

Removed platforms. Support for the following platforms has been removed:
- Alpine Linux 3.19 (reached end of support in November 2025)

Deprecated platforms. Support for the following platforms will be removed in a future release:
- Alpine Linux 3.20

Important Warning

NGINX Plus is built on the latest minor release of each supported operating system platform. In many cases, the latest revisions of these operating systems are adapting their platforms to support OpenSSL 3.5 (e.g., RHEL 9.7 and 10.1). In these situations, NGINX Plus requires that OpenSSL 3.5.0 or later is installed for proper operation.

F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform.
It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities, bringing together the features of NGINX Plus, F5 NGINX App Protect, and NGINX's Kubernetes and management solutions in a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to NGINX Open Source that are designed for enterprise-grade performance, scalability, and security.

Follow this guide for more information on installing and deploying NGINX Plus R36 or NGINX Open Source.

F5 NGINX Plus R35 Release Now Available
We're excited to announce the availability of F5 NGINX Plus Release 35 (R35). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway. New and enhanced features in NGINX Plus R35 include:

- ACME protocol support: This release introduces native support for the Automated Certificate Management Environment (ACME) protocol in NGINX Plus. The ACME protocol automates SSL/TLS certificate lifecycle management by enabling direct communication between clients and certificate authorities for issuance, installation, revocation, and replacement of SSL certificates.
- Automatic JWT renewal and update: This capability simplifies the NGINX Plus renewal experience by automating updates of the license JWT for F5 NGINX instances that communicate directly with the F5 licensing endpoint for license reporting.
- Native OIDC enhancements: This release adds further enhancements to the native OpenID Connect module, including support for Relying Party (RP) initiated logout and the UserInfo endpoint, streamlining authentication workflows.
- Support for Early Hints: NGINX Plus R35 introduces support for Early Hints (HTTP 103), which optimizes website performance by allowing browsers to preload resources before the final server response, reducing latency and accelerating content display.
- QUIC CUBIC congestion control: With R35, we have extended the congestion algorithms supported in our HTTP/3 (QUIC) implementation to include CUBIC, which provides better bandwidth utilization, resulting in quicker load times and faster downloads.
- NGINX JavaScript QuickJS with full ES2023 support: With this NGINX Plus release, we now support the full ES2023 JavaScript specification for the QuickJS runtime, covering your custom NGINX scripting and extensibility needs using NGINX JavaScript.

Changes to Platform Support

NGINX Plus R35 introduces the following updates to the NGINX Plus technical specification.
Added platforms: Support for the following platforms has been added with this release:
- Alpine Linux 3.22
- RHEL 10

Removed platforms: Support for the following platforms has been removed starting with this release:
- Alpine Linux 3.18 (reached end of support in May 2025)
- Ubuntu 20.04 (LTS) (reached end of support in May 2025)

Deprecated platforms:
- Alpine Linux 3.19

Note: For SUSE Linux Enterprise Server (SLES) 15, SP6 is now the required service pack version. The older service packs have reached end of life with the vendor and are no longer supported.

New Features in Detail

ACME Protocol Support

The ACME protocol (Automated Certificate Management Environment) is a communications protocol primarily designed to automate the process of issuing, validating, renewing, and revoking digital security certificates (e.g., TLS/SSL certificates). It allows clients to interact with a certificate authority (CA) without requiring manual intervention, simplifying the deployment of secure websites and other services that rely on HTTPS.

With the NGINX Plus R35 release, we are pleased to announce the preview release of native ACME support in NGINX. ACME support is available as a Rust-based dynamic module for both NGINX Open Source and enterprise F5 NGINX One customers using NGINX Plus. Native ACME support greatly simplifies and automates the process of obtaining and renewing SSL/TLS certificates: there is no need to track certificate expiration dates and manually update or review configs each time an update is needed. With this support, NGINX can now communicate directly with ACME-compatible certificate authorities (CAs) like Let's Encrypt to handle certificate management without requiring external plugins like certbot or cert-manager, or ongoing manual intervention. This reduces complexity, minimizes operational overhead, and streamlines the deployment of encrypted HTTPS for websites and applications, while also making the certificate management process more secure and less error-prone.
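As an illustrative sketch, a minimal HTTP-01 setup with the ACME module might look like the following. Directive names here follow the nginx-acme project documentation; the domain, contact address, and paths are placeholders, so verify the exact syntax for your release against the official NGINX docs.

```nginx
# Hypothetical sketch based on the nginx-acme module docs; placeholders throughout.
resolver 127.0.0.1:53;                    # resolver is required for CA lookups

acme_issuer letsencrypt {
    uri        https://acme-v02.api.letsencrypt.org/directory;
    contact    admin@example.com;         # placeholder account contact
    state_path /var/cache/nginx/acme-letsencrypt;
    accept_terms_of_service;
}

server {
    listen      443 ssl;
    server_name example.com;              # placeholder domain

    acme_certificate letsencrypt;         # obtain/renew via the issuer above

    ssl_certificate       $acme_certificate;
    ssl_certificate_key   $acme_certificate_key;
    ssl_certificate_cache max=2;          # variable-based certs need a cache
}
```

Once loaded, NGINX handles the HTTP-01 challenge and renewal cycle itself; there is no certbot cron job or external hook to maintain.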
The implementation introduces a new module, ngx_http_acme_module, providing built-in directives for requesting, installing, and renewing certificates directly from the NGINX configuration. The current implementation supports the HTTP-01 challenge, with support for the TLS-ALPN and DNS-01 challenges planned for the future. For a detailed overview of the implementation and the value it brings, refer to the ACME blog post. For step-by-step instructions on how to configure ACME in your environment, refer to the NGINX docs.

Automatic JWT Renewal and Update

This feature enables automatic update of the JWT license for customers reporting their usage directly to the F5 licensing endpoint (product.connect.nginx.com) after successful renewal of the subscription. The feature applies to subscriptions nearing expiration (within 30 days) as well as subscriptions that have expired but remain within the 90-day grace period. Here is how this feature works:

- Starting 30 days prior to JWT license expiration, NGINX Plus notifies the licensing endpoint server of the upcoming expiration as part of the automatic usage reporting process.
- The licensing endpoint server continually checks for a renewed NGINX One subscription in the F5 CRM system.
- Once the subscription is successfully renewed, the F5 licensing endpoint server sends the updated JWT to the corresponding NGINX Plus instance.
- The NGINX Plus instance in turn automatically deploys the renewed JWT license to the location determined by your existing configuration, without any NGINX reload or service restart.

Note: The renewed JWT file received from F5 is named nginx-mgmt-license and is located at the state_path location on your NGINX instance. For more details, refer to the NGINX docs.

Native OpenID Connect Module Enhancements

The NGINX Plus R34 release introduced native support for OpenID Connect (OIDC) authentication.
Continuing the momentum, we are excited to add support for OIDC Relying Party (RP) initiated logout, along with support for retrieving claims via the OIDC UserInfo endpoint, in this release.

Relying Party (RP) Initiated Logout

RP-initiated logout is a method used in federated authentication systems (e.g., systems using OpenID Connect (OIDC) or Security Assertion Markup Language (SAML)) to allow a user to log out of an application (the relying party) and propagate the logout request to other services in the authentication ecosystem, such as the identity provider (IdP) and other sessions tied to the user. This facilitates session synchronization and clean-up across multiple applications or environments.

RP-initiated logout support in the NGINX native OIDC module provides a seamless user experience by making authentication and logout workflows more consistent, particularly in Single Sign-On (SSO) environments. It significantly improves security by ensuring user sessions are terminated securely, reducing the risk of unauthorized access. It also simplifies development by minimizing the need for custom coding and promoting adherence to best practices. Additionally, it strengthens user privacy and supports compliance efforts by enabling users to easily terminate sessions, reducing the exposure from lingering sessions.

The implementation involves the client (browser) initiating a logout by sending a request to the relying party's (NGINX) logout endpoint. NGINX, as the RP, adds additional parameters to the request and redirects it to the IdP, which terminates the associated user session and redirects the client to the specified post_logout_uri. Finally, NGINX presents a post-logout confirmation page, signaling the completion of the logout process and ensuring session termination across both the relying party and the identity provider.
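As a sketch of how that flow might be wired up, the following uses the new logout directives; the issuer, client credentials, and paths are placeholder assumptions, so consult the NGINX deployment guide for the exact syntax your identity provider requires.

```nginx
# Hypothetical sketch; issuer, credentials, and URIs are placeholders.
http {
    oidc_provider my_idp {
        issuer          https://idp.example.com/realms/main;
        client_id       my_client;
        client_secret   my_secret;
        logout_uri      /logout;        # RP logout endpoint handled by NGINX
        post_logout_uri /signed-out;    # where the IdP sends the browser afterward
    }

    server {
        listen 443 ssl;
        # ssl_certificate / ssl_certificate_key omitted for brevity

        location / {
            auth_oidc  my_idp;          # protect the app with the provider above
            proxy_pass http://127.0.0.1:8080;
        }

        location = /signed-out {
            return 200 "Signed out.\n"; # simple post-logout confirmation page
        }
    }
}
```

A browser request to /logout is redirected to the IdP's end-session endpoint, and after the IdP clears its session the user lands on /signed-out.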
UserInfo Retrieval Support

The OIDC UserInfo endpoint is used by applications to retrieve profile information about the authenticated identity. Applications can use this endpoint to retrieve profile information, preferences, and other user-specific data to ensure a consistent user management process. Support for the UserInfo endpoint in the native OIDC module provides a standardized mechanism for fetching user claims from identity providers (IdPs), helping simplify authentication workflows and reduce overall system complexity. Having a standard mechanism also helps define and adopt development best practices across client applications for retrieving user claims, offering tremendous value to developers, administrators, and end users.

The implementation enables the RP (NGINX) to call an identity provider's OIDC UserInfo endpoint with the access token (Authorization: Bearer) and obtain scope-dependent end-user claims (e.g., profile, email, scope, address). This provides a standard, configuration-driven mechanism for claim retrieval across client applications and reduces integration complexity.

Several new directives (logout_uri, post_logout_uri, logout_token_hint, and userinfo) have been added to the ngx_http_oidc_module to support both of these features. Refer to our technical blog on how NGINX Plus R35 offers frictionless logout and UserInfo retrieval support as part of the native OIDC implementation for a comprehensive overview of both features and how they work under the hood. For instructions on how to configure the native OIDC module for various identity providers, refer to the NGINX deployment guide.

Early Hints Support

Early Hints (RFC 8297) is an HTTP status code designed to improve website performance by allowing the server to send preliminary hints to the client before the final response is ready.
Specifically, the server sends a 103 status code with headers indicating which resources (like CSS, JavaScript, images) the client can pre-fetch while the server is still working on generating the full response. The majority of web browsers, including Chrome, Safari, and Edge, support it today. A new NGINX directive, early_hints, has been added to specify the conditions under which backends can send Early Hints to the client. NGINX will parse the Early Hints from the backend and send them to the client. The following example shows how to proxy Early Hints for HTTP/2 and HTTP/3 clients and disable them for HTTP/1.1:

early_hints $http2$http3;
proxy_pass http://bar.example.com;

For more details, refer to the NGINX docs and a detailed blog on Early Hints support in NGINX.

QUIC – Support for CUBIC Congestion Control Algorithm

CUBIC is a congestion control algorithm designed to optimize internet performance. It is widely used and well-tested in TCP implementations and excels in high-bandwidth, high-delay networks by efficiently managing data transmission, ensuring faster speeds, rapid recovery from congestion, and reduced latency. Its adaptability to various network conditions and fair resource allocation make it a reliable choice for delivering a smooth and responsive online experience and enhancing overall user satisfaction. We announced support for the CUBIC congestion control algorithm in NGINX open source mainline version 1.27.4. All the bug fixes and enhancements since then are being merged into NGINX Plus R35. For a detailed overview of the implementation, refer to our blog on the topic.

NGINX JavaScript QuickJS - Full ES2023 Support

We introduced preview support for the QuickJS runtime in NGINX JavaScript (njs) version 0.8.6 in the NGINX Plus R33 release. We have been quietly focused on this initiative since then and are pleased to announce full ES2023 JavaScript specification support in NGINX JavaScript (njs) version 0.9.1 with the NGINX Plus R35 release.
With full ES2023 specification support, you can now use the latest JavaScript features that modern developers expect as standard to extend NGINX capabilities using njs. Refer to this detailed blog for a comprehensive overview of our QuickJS implementation, the motivation behind QuickJS runtime support, and where we are headed with NGINX JavaScript. For specific details on how you can leverage QuickJS in your njs scripts, please refer to our documentation.

Other Enhancements and Bug Fixes

Variable-based Access Control Support

To enable robust access control using identity claims, R34 and earlier versions required a workaround involving the auth_jwt_require directive. This involved reprocessing the ID token with the auth_jwt module to manage access based on claims. This approach introduced configuration complexity and performance overhead. With R35, NGINX simplifies this process through the auth_require directive, which allows direct use of claims for resource-based access control without relying on auth_jwt. This directive is part of a new module, ngx_http_auth_require_module, added in this release. For example, the following NGINX OIDC configuration maps the role claim from the id_token to the $admin_role variable and sets it to 1 if the user's role is "admin". The location block then uses auth_require $admin_role to restrict access, allowing only users with the admin role to proceed.

http {
    oidc_provider my_idp { ... }

    map $oidc_claim_role $admin_role {
        "admin" 1;
    }

    server {
        auth_oidc my_idp;

        location /admin {
            auth_require $admin_role;
        }
    }
}

Though the directive is not exclusive to OIDC, when paired with auth_oidc it provides a clean and declarative Role-Based Access Control (RBAC) mechanism within the server configuration. For example, you can easily configure access so only admins reach the /admin location, while either admins or users with specific permissions access other locations. The result is streamlined, efficient, and practical access management directly in NGINX.
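The "either admins or users with specific permissions" case can be sketched by listing several accepted claim values in a single map. The role names below are illustrative assumptions, not values prescribed by the module:

```nginx
# Illustrative sketch: /reports is reachable by users whose role claim is
# either "admin" or "analyst"; any unmatched role leaves $reports_access
# empty, so auth_require denies the request. Role names are assumptions.
map $oidc_claim_role $reports_access {
    "admin"   1;
    "analyst" 1;
}

server {
    auth_oidc my_idp;

    location /reports {
        auth_require $reports_access;
    }
}
```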
Note that the new auth_require directive does not replace auth_jwt_require, as the two serve distinct purposes. While auth_jwt_require is an integral part of JWT validation in the JWT module, focusing on header and claim checks, auth_require operates in a separate ACCESS phase for access control. Deprecating auth_jwt_require would reduce flexibility, particularly in "satisfy" modes of operation, and complicate configurations. Additionally, auth_jwt_require plays a critical role in initializing JWT-related variables, enabling their use in subrequests. This initialization, crucial for JWE claims, cannot be done via REWRITE module directives, as JWE claims are not available before JWT decryption.

Support for JWS RSASSA-PSS Algorithms

RSASSA-PSS algorithms are used for verifying the signatures of JSON Web Tokens (JWTs) to ensure their authenticity and integrity. In NGINX, these algorithms are typically employed via the auth_jwt_module when validating JWTs signed using RSASSA-PSS. We are adding support for the following algorithms, as specified in RFC 7518 (Section 3.5):

PS256
PS384
PS512

Improved Node Outage Detection and Logging

This release also introduces improvements in the timeout handling for zone_sync connections, enabling faster detection of offline nodes and reducing counter accumulation risks. This change improves synchronization of nodes in a cluster and early detection of failures, improving the system's overall performance and reliability. Additional heuristics have been added to detect blocked workers and proactively address prolonged event loop times.

License API Updates

The NGINX license API endpoint now provides additional information. The "uuid" parameter in the license information is now available via the API endpoint.
Changes Inherited from NGINX Open Source

NGINX Plus R35 is based on the NGINX 1.29.0 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R34 was released (which was based on the 1.27.4 mainline release).

Features:

Early Hints support - support for response code 103 from proxy and gRPC backends.
CUBIC congestion control algorithm support in QUIC connections.
Loading of secret keys from hardware tokens with an OpenSSL provider.
Support for the "so_keepalive" parameter of the "listen" directive on macOS.

Changes:

The logging level of SSL errors in a QUIC handshake has been changed from "error" to "crit" for critical errors, and to "info" for the rest; the logging level of unsupported QUIC transport parameters has been lowered from "info" to "debug".

Bug Fixes:

nginx could not be built by gcc 15 if the ngx_http_v2_module or ngx_http_v3_module modules were used.
nginx might not be built by gcc 14 or newer with -O3 -flto optimization if ngx_http_v3_module was used.
Fixed a bug in the "grpc_ssl_password_file", "proxy_ssl_password_file", and "uwsgi_ssl_password_file" directives when loading SSL certificates and encrypted keys from variables; the bug had appeared in 1.23.1.
Fixed a bug in the $ssl_curve and $ssl_curves variables when using pluggable curves in OpenSSL.
nginx could not be built with musl libc.
Bug fixes and performance improvements in HTTP/3.

Security:

(CVE-2025-53859) SMTP authentication process memory over-read: this vulnerability in the NGINX ngx_mail_smtp_module may allow an unauthenticated attacker to trigger a buffer over-read, resulting in worker process memory disclosure to the authentication server.

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module

NGINX Plus R35 incorporates changes from the NGINX JavaScript (njs) module version 0.9.1.
The following is a list of notable changes in njs since 0.8.9 (which was the version shipped with NGINX Plus R34).

Features:

Added support for the QuickJS-NG library.
Added support for the WebCrypto API, Fetch API, TextEncoder and TextDecoder, the querystring module, the crypto module, and the xml module for the QuickJS engine.
Added a state file for a shared dictionary.
Added ECDH support for WebCrypto.
Added support for reading r.requestText or r.requestBuffer from a temporary file.

Improvements:

Performance improvements due to refactored handling of built-in strings, symbols, and small integers.
Multiple memory usage improvements.
Improved reporting of unhandled promise rejections.

Bug Fixes:

Fixed segfault in njs_property_query(). The issue was introduced in b28e50b1 (0.9.0).
Fixed Function constructor template injection.
Fixed GCC compilation with the O3 optimization level.
Fixed "constant is too large for 'long'" warning on MIPS -mabi=n32.
Fixed compilation with GCC 4.1.
Fixed %TypedArray%.from() when the buffer is detached by the mapper.
Fixed %TypedArray%.prototype.slice() with overlapping buffers.
Fixed handling of detached buffers for typed arrays.
Fixed frame saving for async functions with closures.
Fixed RegExp compilation of patterns with escaped '[' characters.
Fixed handling of undefined values of a captured group in RegExp.prototype[Symbol.split]().
Fixed GCC 15 build error with -Wunterminated-string-initialization.
Fixed name corruption in variables and headers processing.
Fixed the incr() method of a shared dictionary with an empty init argument for the QuickJS engine.
Fixed accepting response headers with underscore characters in the Fetch API.
Fixed Buffer.concat() with a single argument in QuickJS.
Added a missed syntax error for await in template literals.
Fixed non-NULL-terminated string formatting in exceptions for the QuickJS engine.
Fixed compatibility with recent changes in QuickJS and QuickJS-NG.
Fixed serializeToString().
Previously, serializeToString() was exclusiveC14n(), which returned a string instead of a Buffer. According to the published documentation, it should be c14n(). For a comprehensive list of all the features, changes, and bug fixes, see the njs Changelog.

F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security. Ready to try the new release? Follow this guide for more information on installing and deploying NGINX Plus.

F5 NGINX One Console July features
Introduction

We are very excited to announce the new set of F5 NGINX One Console features that were released in July:

• F5 NGINX App Protect WAF Policy Orchestration
• F5 NGINX Ingress Controller Fleet Visibility

The NGINX One Console is a central management service in the F5 Distributed Cloud that makes it easier to deploy, manage, monitor, and operate NGINX. It is available to all NGINX and F5 Distributed Cloud subscribers and is included as a part of their subscription. If you have any questions on how to get access, please reach out to me.

Workshops

Additionally, we are offering complimentary workshops on how to use the NGINX One Console in August and September (North American time zones). These include a presentation, hands-on labs, and demos with live support and guidance.

July 30 - F5 Test Drive Labs - F5 NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
Aug 19 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1511189742199
Sept 9 - F5 Test Drive Labs - NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
Sept 10 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1393597239859?aff=oddtdtcreator

NGINX App Protect WAF Policy Orchestration

You can now centrally manage NGINX App Protect WAF in the NGINX One Console. Easily create and edit WAF policies in the NGINX One Console, compare changes, and publish to instances and configuration sync groups. Both App Protect v4 and v5 are supported! Find the guides on how to secure F5 NGINX with NGINX App Protect and the NGINX One Console here: https://docs.nginx.com/nginx-one/nap-integration/

NGINX Ingress Controller Fleet Visibility

NGINX Ingress Controller deployments can now be monitored in the NGINX One Console.
See all versions of NGINX Ingress Controller deployments, what underlying instances make up the control pods, and see the underlying NGINX configuration. Coming later this quarter: CVE monitoring for NGINX Ingress Controller and inclusion of F5 NGINX Gateway Fabric. For details, see how you can Connect Ingress Controller to NGINX One Console: https://docs.nginx.com/nginx-one/k8s/add-nic/

Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX One Console is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability. It is used for applications deployed across cloud, hybrid, and edge architectures. The NGINX One Console is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. As a cornerstone of the NGINX One package, NGINX One Console extends the capabilities of open-source NGINX. It adds features designed specifically for enterprise-grade performance, scalability, and security.

F5 NGINX One Console June Features
Introduction

We are happy to announce the new set of F5 NGINX One Console features that were released in June:

• Fleet Visibility Alerts
• Import/Export for Staged Configurations

The F5 NGINX One Console is a central management service in the F5 Distributed Cloud. It makes it easier to deploy, manage, monitor, and operate F5 NGINX. It is available to all NGINX and Distributed Cloud subscribers and is included as a part of your subscription. If you have any questions on how to get access, please reach out to your F5 Customer Success or Account team.

Fleet Visibility Alerts

The NGINX One Console now creates Alerts for important notifications related to your NGINX fleet. For example, if a connected NGINX instance has an insecure configuration or an instance is impacted by a newly announced critical vulnerability, an Alert will be generated. Using Distributed Cloud's robust Alerting system, you can configure notifications of your choice to be sent to your preferred system, such as SMS, email, Slack, PagerDuty, or webhook. Find the full list of Alerts here: https://docs.cloud.f5.com/docs-v2/platform/reference/alerts-reference See instructions on how to set up Alert Receivers and Policies here: https://docs.cloud.f5.com/docs-v2/shared-configuration/how-tos/alerting/alerts-email-sms

Import/Export for Staged Configurations

Effortlessly share your configurations with the new import/export feature, which makes it easier to work together. You can share configurations quickly with teammates, F5 support, or the community. You can also easily import configurations from others. Whether you're fine-tuning configurations for your team or seeking advice, this update makes sharing and receiving configurations simple and efficient. It also gives those who prefer the NGINX One Console configuration editing experience but operate in dark or air-gapped environments an easy way to move configurations from the NGINX One Console to your instance.
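Because the import/export format is a plain .tar.gz archive, the air-gapped round trip can be done with standard tools on the target instance. The paths below are purely hypothetical, to illustrate the archive format; they are not prescribed by the NGINX One Console.

```shell
# Hypothetical round trip: package a config directory as a .tar.gz archive
# (the format Staged Configuration import/export uses) and unpack it elsewhere.
# All paths here are illustrative, not prescribed by the NGINX One Console.
mkdir -p /tmp/staged-demo/conf.d
printf 'worker_processes auto;\n' > /tmp/staged-demo/nginx.conf
printf 'server { listen 8080; }\n' > /tmp/staged-demo/conf.d/app.conf

tar -czf /tmp/staged-config.tar.gz -C /tmp/staged-demo .   # archive for import
mkdir -p /tmp/unpacked
tar -xzf /tmp/staged-config.tar.gz -C /tmp/unpacked        # apply an export
ls /tmp/unpacked
```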
You can craft and refine configurations within the NGINX One Console, then export them for deployment in isolated instances without hassle. Similarly, it lets you easily export configurations from the Console and import them into F5 NGINXaaS for Azure. The process is highly flexible and intuitive:

Create new Staged Configurations by importing configuration files directly as a .tar.gz archive via the UI or API.
Export Staged Configurations with just one click or API call, generating a .tar.gz archive that's ready to be unpacked and applied wherever you need your configuration files.

See the documentation for more information: [N1 docks link]

Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX One Console is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One Console is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX One Console is a key part of the NGINX One package. It adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security.

Announcing F5 NGINX Instance Manager 2.20
We’re thrilled to announce the release of F5 NGINX Instance Manager 2.20, now available for download! This update focuses on improving accessibility, simplifying deployments, improving observability, and enriching the user experience based on valuable customer feedback.

What’s New in This Release?

Lightweight Mode for NGINX Instance Manager

Reduce resource usage with the new "Lightweight Mode", which allows you to deploy F5 NGINX Instance Manager without requiring a ClickHouse database. While metrics and events will no longer be available without ClickHouse, all other instance management functionalities—such as certificate management, WAF, templates, and more—will work seamlessly across VM, Docker, and Kubernetes installations. With this change, ClickHouse becomes optional for deployments. Customers who require metrics and events should continue to include ClickHouse in their setup. For those focused on basic use cases, Lightweight Mode offers a streamlined deployment that reduces system complexity while maintaining core functionality for essential tasks. Lightweight Mode is perfect for customers who need simplified management capabilities for scenarios such as:

Fleet Management
WAF Configuration
Usage Reporting as Part of Your Subscription (for NGINX Plus R33 or later)
Certificate Management
Managing Templates
Scanning Instances
Enabling API-based GitOps for Configuration Management

In testing, NGINX Instance Manager worked well without ClickHouse. It needed only 1 CPU and 1 GB of RAM to manage up to 10 instances (without App Protect). However, please note that this represents the absolute minimum configuration and may result in performance issues depending on your use case. For optimal performance, we recommend allocating more appropriate system resources. See the updated technical specification in the documentation for more details.
Support for Multiple Subscriptions

Align and consolidate usage from multiple NGINX Plus subscriptions on a single NGINX Instance Manager instance. This feature, added with NGINX Plus R33, is especially beneficial for customers who use NGINX Instance Manager as a reporting endpoint, even in disconnected or air-gapped environments.

Improved Licensing and Reporting for Disconnected Environments

Managing NGINX Instance Manager in environments with no outbound internet connectivity is now simpler. Customers can configure NGINX Instance Manager to use a forward proxy for licensing and reporting. For truly air-gapped environments, we've improved offline licensing: upload your license JWT to activate all features, and enjoy a 90-day grace period to submit an initial report to F5. We've also revamped the usage reporting script to be more intuitive and backwards-compatible with older versions.

Enhanced User Interface

We've modernized the NGINX Instance Manager UI to streamline navigation and make it consistent with the F5 NGINX One Console. Features are now grouped into submenus for easier access. Additionally, breadcrumbs have been added to all pages for improved usability.

Instance Export Enhancements

We've added the ability to export instances and instance groups, simplifying the process of managing and sharing configuration details. This improvement makes it easier to keep track of large deployments and maintain consistency across environments.

Performance and Stability Improvements

With this release, we've made performance and stability improvements, a key part of every update to ensure NGINX Instance Manager runs smoothly in all environments. We've addressed multiple bug fixes in this release to improve stability and reliability. For more details on all the fixes included, please visit the release notes.
Platform Improvements and Helm Chart Migration

We've made significant enhancements to the Helm charts to simplify the installation process for NGINX Instance Manager in Kubernetes environments. Starting with this release, the Helm charts have moved to a new repository: nginx-stable/nim, with chart version 2.0. Note: NGINX Instance Manager versions 2.19 or lower will remain in the old repository, nms-stable/nms-hybrid. Be sure to update your configurations accordingly when upgrading to version 2.20 or later.

Looking Ahead: Security, Modernization, and Kubernetes Innovations

As part of the F5 NGINX One product offering, NGINX Instance Manager continues to evolve to meet the demands of modern infrastructures. We're committed to improving security, scalability, usability, and observability to align with your needs. Although support for the latest F5 NGINX Agent v3 is not included in this release, we are actively exploring ways to enable it later this year to bring additional value to both NGINX Instance Manager and the NGINX One Console. Additionally, we're exploring new ways to enhance support for data plane NGINX deployments, particularly in Kubernetes environments. Stay tuned for updates as we continue to innovate for cloud-native and containerized workloads. We're eager to hear your feedback to help shape the roadmap for future releases.

Get Started Now

To explore the new lightweight mode, enhanced UI, and updated features, download NGINX Instance Manager 2.20. For more details on bug fixes and performance improvements, check out the full release notes.

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX Instance Manager is part of F5's Application Delivery & Security Platform. It helps organizations deliver, optimize, and secure modern applications and APIs.
This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX Instance Manager is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. A cornerstone of the NGINX One package, NGINX Instance Manager extends the capabilities of open-source NGINX with features designed specifically for enterprise-grade performance, scalability, and security.
Introducing AI Assistant for F5 Distributed Cloud, F5 NGINX One and BIG-IP

This article is an introduction to AI Assistant and shows how it improves SecOps and NetOps speed across all F5 platforms (Distributed Cloud, NGINX One, and BIG-IP) by solving the complexities around configuration, analytics, log interpretation, and scripting.
Installing and Locking a Specific Version of F5 NGINX Plus
A guide for installing and locking a specific version of NGINX Plus to ensure stability, meet internal policies, and prepare for controlled upgrades.

Introduction

The most common way to install F5 NGINX Plus is by using the package manager tool native to your Linux host (e.g., yum, apt-get, etc.). By default, the package manager installs the latest available version of NGINX Plus. However, there may be scenarios where you need to install an earlier version. To help you modify your automation scripts, we've provided example commands for selecting a specific version.

Common Scenarios for Installing an Earlier Version of NGINX Plus

Your internal policy requires sticking to internally tested versions before deploying the latest release.
You prefer to maintain consistency by using the same version across your entire fleet for simplicity.
You'd like to verify and meet additional requirements introduced in a newer release (e.g., NGINX Plus Release 33) before upgrading.

Commands for Installing and Holding a Specific Version of NGINX Plus

Use the following commands based on your Linux distribution to install and lock a prior version of NGINX Plus:

Ubuntu 20.04, 22.04, 24.04 LTS
sudo apt-get update
sudo apt-get install -y nginx-plus=<VERSION>
sudo apt-mark hold nginx-plus

Debian 11, 12
sudo apt-get update
sudo apt-get install -y nginx-plus=<VERSION>
sudo apt-mark hold nginx-plus

AlmaLinux 8, 9 / Rocky Linux 8, 9 / Oracle Linux 8.1+, 9 / RHEL 8.1+, 9
sudo yum install -y nginx-plus-<VERSION>
sudo yum versionlock nginx-plus

Amazon Linux 2 LTS, 2023
sudo yum install -y nginx-plus-<VERSION>
sudo yum versionlock nginx-plus

SUSE Linux Enterprise Server 12, 15 SP5+
sudo zypper install nginx-plus=<VERSION>
sudo zypper addlock nginx-plus

Alpine Linux 3.17, 3.18, 3.19, 3.20
apk add nginx-plus=<VERSION>
echo "nginx-plus hold" | sudo tee -a /etc/apk/world

FreeBSD 13, 14
pkg install nginx-plus-<VERSION>
pkg lock nginx-plus

Notes

Replace <VERSION> with the desired version (e.g., 32-2*).
After installation, verify the installed version with the command: nginx -v.
Holding or locking the package ensures it won't be inadvertently upgraded during routine updates.

Conclusion

Installing and locking a specific version of NGINX Plus ensures stability, compliance with internal policies, and proper validation of new features before deployment. By following the provided commands tailored to your Linux distribution, you can confidently maintain control over your infrastructure while minimizing the risk of unintended upgrades. Regularly verifying the installed version and holding updates will help ensure consistency and reliability across your environments.
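For automation that regularly verifies the installed version, the output of nginx -v can be parsed so a script can assert that the host still runs the pinned release. The helper below is a hypothetical POSIX-shell sketch; the sample output line is illustrative, and real input would come from `nginx -v 2>&1`.

```shell
# Hypothetical helper: extract the bare version number from `nginx -v`-style
# output so a script can assert the host still runs the pinned release.
# The sample line below is illustrative; real output comes from `nginx -v 2>&1`.
ver_line='nginx version: nginx/1.27.4 (nginx-plus-r34)'
installed=$(printf '%s\n' "$ver_line" | sed -n 's#.*nginx/\([0-9.]*\).*#\1#p')
echo "$installed"
```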