What’s New in BIG-IQ v8.4.1?
Introduction

F5 BIG-IQ Centralized Management, a key component of the F5 Application Delivery and Security Platform (ADSP), helps teams maintain order and streamline administration of BIG-IP app delivery and security services. Effective management of a complex application landscape requires a single point of control that combines visibility, simplified management, and automation tools. In this article, I’ll highlight some of the key features, enhancements, and use cases introduced in the BIG-IQ v8.4.1 release and cover the value of these updates.

Demo Video

New Features in BIG-IQ 8.4.1

Support for F5 BIG-IP v17.5.1.X and BIG-IP v21.0

BIG-IQ 8.4.1 provides full support for the latest versions of BIG-IP (BIG-IP 17.5.1.X and 21.0), ensuring seamless discovery and compatibility across all modules. Users who upgrade to BIG-IP 17.5.1.X+ or 21.0 retain the same functionality without disruptions, maintaining consistency in their management operations.

As you look to upgrade BIG-IP instances to the latest versions, our recommendation is to use BIG-IQ. By leveraging the BIG-IQ device/software upgrade workflows, teams get a repeatable, standardized, and auditable process for upgrades in a single location. In addition to upgrades, BIG-IQ also enables teams to handle backups, licensing, and device certificate workflows in the same tool, creating a one-stop shop for BIG-IP device management. Note that BIG-IQ works with BIG-IP appliances and Virtual Editions (VEs).

Updated TMOS Layer

In the 8.4.1 release, BIG-IQ's underlying TMOS version has been upgraded to v17.5.1.2, which enhances control plane performance, improves security efficacy, and enables better resilience of the BIG-IQ solution.

MCP Support

BIG-IP v21.0 introduced MCP Profile support, enabling teams to support AI/LLM workloads with BIG-IP to drive better performance and security. Additionally, v21.0 also introduces support for S3-optimized profiles, enhancing the performance of data delivery for AI workloads. BIG-IQ 8.4.1 and its interoperability with v21.0 help teams streamline and scale management of these BIG-IP instances, enabling them to support AI adoption plans and ensure fast and secure data delivery.

Enhanced BIG-IP and F5OS Visibility and Management

BIG-IQ 8.4.1 introduces the ability to provision, license, configure, deploy, and manage the latest BIG-IP devices and app services (v17.5.1.X and v21.0). In 8.4, BIG-IQ introduced new visibility fields, including model, serial numbers, count, slot tenancy, and software version, to help teams effectively plan device strategy from a single source of truth. These enhancements also improved license visibility and management workflows, including exportable reports. BIG-IQ 8.4.1 continues to offer this enhanced visibility and management experience for the latest BIG-IP versions.

Better Security Administration

BIG-IQ 8.4.1 includes general support for SSL Orchestrator 13.0 to help teams manage encrypted traffic and potential threats. BIG-IQ includes dedicated dashboards and management workflows for SSL Orchestrator.

In BIG-IQ 8.4, F5 introduced support and management for Venafi Trust Protection Platform v22.x-24.x, a leading platform for certificate management and certificate authority services. This integration enables teams to automate and centrally manage BIG-IP SSL device certificates and keys. BIG-IQ 8.4.1 continues this support.

Finally, BIG-IQ 8.4.1 continues to align with AWS security protocols so customers can confidently partner with F5.
In BIG-IQ 8.4, F5 introduced support for IMDSv2, which uses session-oriented authentication to access EC2 instance metadata, as opposed to the request/response method of IMDSv1. This session/token-based method is more secure because it reduces the likelihood of attackers successfully using application vulnerabilities to access instance metadata.

Enhanced Automation Integration & Protocol Support

BIG-IQ 8.4.1 continues BIG-IQ's support for the latest version of AS3 and templates (v3.55+). By supporting the latest Automation Toolchain (AS3/DO), BIG-IQ stays aligned with current BIG-IP APIs and schemas, enabling reliable, repeatable app and device provisioning. It reduces deployment failures from version mismatches, improves security via updated components, and speeds operations through standardized, CI/CD-friendly automation at scale.
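To illustrate what that looks like in practice, here is a minimal sketch of posting an AS3 declaration through BIG-IQ. The hostname, credentials, target address, and object names are placeholders, and BIG-IQ may require token-based authentication rather than basic auth; treat this as the shape of the call, not a drop-in command, and consult the AS3 on BIG-IQ documentation for the exact schema your version expects.

```
# Hedged sketch: deploy a basic HTTP service to a managed BIG-IP via
# BIG-IQ's AS3 endpoint. All names and addresses below are hypothetical.
curl -sku admin:password https://big-iq.example.com/mgmt/shared/appsvcs/declare \
    -H 'Content-Type: application/json' \
    -d '{
      "class": "AS3",
      "action": "deploy",
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.55.0",
        "target": { "address": "192.0.2.10" },
        "Sample_Tenant": {
          "class": "Tenant",
          "App1": {
            "class": "Application",
            "service": {
              "class": "Service_HTTP",
              "virtualAddresses": ["198.51.100.20"],
              "pool": "web_pool"
            },
            "web_pool": {
              "class": "Pool",
              "members": [
                { "servicePort": 80, "serverAddresses": ["203.0.113.10"] }
              ]
            }
          }
        }
      }
    }'
```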
BIG-IQ 8.4 (and 8.4.1) provides support for IPv6. IPv6 provides vastly more IP addresses, simpler routing, and end-to-end connectivity as IPv4 address space runs out. BIG-IQ's IPv6 profile support centralizes configuration, visibility, and policy management for IPv6 traffic across BIG-IP devices, reducing errors and operational overhead while enabling consistent, secure IPv6 adoption.

Upgrading to v8.4.1

You can upgrade from BIG-IQ 8.X to BIG-IQ 8.4.1. For the BIG-IQ Centralized Management Compatibility Matrix, refer to Knowledge Article K34133507. BIG-IQ Virtual Edition Supported Platforms provides a matrix describing the compatibility between the BIG-IQ VE versions and the supported hypervisors and platforms.

Conclusion

Effective management (orchestration, visibility, and compliance) relies on consistent app services and security policies across on-premises and cloud deployments. Easily control all your BIG-IP devices and services with a single, unified management platform, F5® BIG-IQ®. F5® BIG-IQ® Centralized Management reduces complexity and administrative burden by providing a single platform to create, configure, provision, deploy, upgrade, and manage F5® BIG-IP® security and application delivery services.

Related Content

- Boosting BIG-IP AFM Efficiency with BIG-IQ: Technical Use Cases and Integration Guide
- Five Key Benefits of Centralized Management
- F5 BIG-IQ What's New in v8.4.0?

F5 AppWorld 2026 Registration - early bird pricing
Join us March 10–12 at Fontainebleau Las Vegas and Meet the Moment at F5 AppWorld 2026. Connect with your community and explore how the F5 Application Delivery and Security Platform gives you control without compromise. Over three days you will experience inspiring keynotes, learn new approaches in breakouts, deepen your skills in hands-on labs, and connect with peers, F5 leaders, and partners.

Register early and save:*

- Conference pass: $499
- Conference pass + F5 Academy labs: $899
- Team pass: 4 for the price of 3

Take advantage of early bird pricing and register today! We look forward to seeing you in Vegas.

Your DevCentral Team

* Early bird pricing expires Feb 13, 2026.

Implementing F5 NGINX STIGs: A Practical Guide to DoD Security Compliance
Introduction

In today’s security-conscious environment, particularly within federal and DoD contexts, Security Technical Implementation Guides (STIGs) have become the gold standard for hardening systems and applications. For organizations deploying NGINX, whether as a web server, reverse proxy, or load balancer, understanding and implementing NGINX STIGs is critical for maintaining compliance and securing your infrastructure. This guide walks through the essential aspects of NGINX STIG implementation, providing practical insights for security engineers and system administrators tasked with meeting these stringent requirements.

Understanding STIGs and Their Importance

STIGs are configuration standards created by the Defense Information Systems Agency (DISA) to enhance the security posture of DoD information systems. These guides provide detailed technical requirements for securing software, hardware, and networks against known vulnerabilities and attack vectors. For NGINX deployments, STIG compliance ensures:

- Protection against common web server vulnerabilities
- Proper access controls and authentication mechanisms
- Secure configuration of cryptographic protocols
- Comprehensive logging and auditing capabilities
- Defense-in-depth security posture

Key NGINX STIG Categories

Access Control and Authentication

Critical Controls: The STIG mandates strict access controls for NGINX configuration files and directories. All NGINX configuration files should be owned by root (or the designated administrative user) with permissions set to 600 or more restrictive.

```
# Set restrictive permissions on the main configuration file
sudo chmod 600 /etc/nginx/nginx.conf
```

Client Certificate Authentication: For environments requiring mutual TLS authentication, NGINX must be configured to validate client certificates:

```
# Include the following lines in the server {} block of nginx.conf:
ssl_certificate     /etc/nginx/ssl/server_cert.pem;
ssl_certificate_key /etc/nginx/ssl/server_key.pem;

# Enable client certificate verification
ssl_client_certificate /etc/nginx/ca_cert.pem;
ssl_verify_client on;

# Optional: Set verification depth for client certificates
ssl_verify_depth 2;

location / {
    proxy_pass http://backend_service;

    # Restrict access to valid PIV credentials
    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }
}
```

Certificate Management:

- All certificates must be signed by a DoD-approved Certificate Authority
- Private keys must be protected with appropriate file permissions (400)
- Certificate expiration dates must be monitored, and certificates renewed before expiry

Cryptographic Protocols and Ciphers

One of the most critical STIG requirements involves configuring approved cryptographic protocols and cipher suites.

Approved TLS Versions: STIGs typically require TLS 1.2 as a minimum, with TLS 1.3 preferred:

```
ssl_protocols TLSv1.2 TLSv1.3;
```

FIPS-Compliant Cipher Suites: When operating in FIPS mode, NGINX must use only FIPS 140-2 validated cipher suites:

```
ssl_ciphers 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
ssl_prefer_server_ciphers on;
```
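It can also be worth validating these restrictions from a client's point of view. The following is a minimal sketch using openssl s_client; the hostname is a placeholder, and the negative test assumes your OpenSSL build still includes the legacy protocol option:

```
# Legacy protocol probe: should fail if ssl_protocols is enforced
openssl s_client -connect www.example.com:443 -tls1_1 </dev/null

# Approved protocol probe: should complete and report TLSv1.2
openssl s_client -connect www.example.com:443 -tls1_2 </dev/null 2>/dev/null | grep -E 'Protocol|Cipher'
```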
Logging and Auditing

Comprehensive logging is mandatory for STIG compliance, enabling security monitoring and incident response.

Required Log Formats:

```
log_format security_log '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '$request_time $upstream_response_time '
                        '$ssl_protocol/$ssl_cipher';

access_log /var/log/nginx/access.log security_log;
error_log  /var/log/nginx/error.log info;
```

Key Logging Requirements:

- Log all access attempts (successful and failed)
- Capture client IP addresses and authentication details
- Record timestamps in UTC or local time consistently
- Ensure logs are protected from unauthorized modification (600 permissions)
- Implement log rotation and retention policies

Pass Security Attributes via a Proxy

STIGs require security attributes to be passed with proxied traffic so that access control and flow control policies can be enforced for users, data, and traffic:

```
# Include the "proxy_pass" service as well as the "proxy_set_header" values as required:
proxy_pass http://backend_service;
proxy_set_header X-Security-Classification "Confidential";
proxy_set_header X-Data-Origin "Internal-System";
proxy_set_header X-Access-Permissions "Read,Write";
```

Request Filtering and Validation

Protecting against malicious requests is a core STIG requirement:

```
# Limit request methods
if ($request_method !~ ^(GET|POST|PUT|DELETE|HEAD)$) {
    return 405;
}

# Request size limits
client_max_body_size 10m;
client_body_buffer_size 128k;

# Timeouts to prevent slowloris attacks
client_body_timeout 10s;
client_header_timeout 10s;
keepalive_timeout 5s 5s;
send_timeout 10s;

# Rate limiting (limit_req_zone belongs in the http {} context;
# limit_req in a server {} or location {} block)
limit_req_zone $binary_remote_addr zone=req_limit:10m rate=10r/s;
limit_req zone=req_limit burst=20 nodelay;
```

SIEM Integration

Forward NGINX logs to SIEM platforms for centralized monitoring:

```
# Syslog integration
error_log  syslog:server=siem.example.com:514,facility=local7,tag=nginx,severity=info;
access_log syslog:server=siem.example.com:514,facility=local7,tag=nginx;
```

NGINX Plus Specific STIG Considerations

Organizations using NGINX Plus have additional capabilities to meet STIG requirements:

Active Health Checks

```
upstream backend {
    zone backend 64k;
    server backend1.example.com;
    server backend2.example.com;
}

match server_ok {
    status 200-399;
    header Content-Type ~ "text/html";
    body ~ "Expected Content";
}

server {
    location / {
        proxy_pass http://backend;
        health_check match=server_ok;
    }
}
```

JWT Authentication

For API security, NGINX Plus can validate JSON Web Tokens:

```
location /api {
    auth_jwt "API Authentication";
    auth_jwt_key_file /etc/nginx/keys/jwt_public_key.pem;
    auth_jwt_require exp iat;
}
```

Dynamic Configuration API

The NGINX Plus API must be secured and access-controlled:

```
server {
    # Require client certificates for the whole server
    # (ssl_verify_client is valid in http/server context, not location)
    ssl_verify_client on;

    location /api {
        api write=on;
        allow 10.0.0.0/8;   # Management network only
        deny all;
    }
}
```

Best Practices for STIG Implementation

1. Start with Baseline Configuration: Use DISA's STIG checklist as your starting point and customize for your environment.
2. Implement Defense in Depth: STIGs are minimum requirements; layer additional security controls where appropriate.
3. Automate Validation: Use configuration management and automated scanning to maintain continuous compliance (a minimal drift-check sketch follows this list).
4. Document Deviations: When technical controls aren't feasible, document risk acceptances and compensating controls.
5. Regular Updates: STIGs are updated periodically; establish a process to review and implement new requirements.
6. Testing Before Production: Validate STIG configurations in development/staging before deploying to production.
7. Monitor and Audit: Implement continuous monitoring to detect configuration drift and security events.
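As a hedged sketch of the automated validation mentioned in item 3: a trivial check that fails when the rendered configuration no longer pins the approved protocols. Real compliance scanning would cover far more controls; this only shows the pattern.

```
# Minimal drift check against the running configuration
nginx -T 2>/dev/null | grep -q 'ssl_protocols TLSv1.2 TLSv1.3;' \
    && echo 'PASS: approved TLS protocols configured' \
    || echo 'FAIL: ssl_protocols has drifted'
```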
Conclusion

Achieving and maintaining NGINX STIG compliance requires a comprehensive approach combining technical controls, process discipline, and ongoing vigilance. While the requirements can seem daunting initially, properly implemented STIGs significantly enhance your security posture and reduce risk exposure. By treating STIG compliance as an opportunity to improve security rather than merely a checkbox exercise, organizations can build robust, defensible NGINX deployments that meet the most stringent security requirements while maintaining operational efficiency.

Remember: security is not a destination but a journey. Regular reviews, updates, and continuous improvement are essential to maintaining compliance and protecting your infrastructure in an ever-evolving threat landscape.

Additional Resources

- DISA STIG Library: https://public.cyber.mil/stigs/
- NGINX Security Controls: https://docs.nginx.com/nginx/admin-guide/security-controls/
- NIST Cybersecurity Framework: https://www.nist.gov/cyberframework

Have questions about implementing NGINX STIGs in your environment? Share your challenges and experiences in the comments below.

What is the best practice for migrating from i-series to r-series?
Hi, we plan to migrate from a legacy i-series appliance running v13.x.x to a new r-series F5 running v15.1.x. We will create the same VLANs and IP address config, but the physical interfaces will be different. The new r-series appliance is already licensed. What is the best practice for this migration?

Option 1: Import the whole UCS file to the new r-series appliance. After importing the UCS to the new appliance, what are the next steps to complete the whole migration?

Option 2: Copy the config for every module, for example copy the LTM config first, then GTM, and finally AFM...

Can someone please advise? Thanks in advance!

What's new in BIG-IP v21.0?
Introduction

In November of 2025, F5 released the latest version of BIG-IP software, v21.0. This release is packed with fixes and new features that enhance the F5 Application Delivery and Security Platform (ADSP). These changes complement the Delivery, Security, and Deployment aspects of the ADSP.

New SSL Orchestrator Features

SNI Preservation

SNI (Server Name Indication) Preservation is now supported for Inbound Gateway Mode. This preserves the client’s original SNI information as traffic passes through the reverse proxy, allowing backend TLS servers to access and use this information. This enables accurate application routing and supports security workflows like threat detection and compliance enforcement. Previous software versions required custom iRules to enable this functionality.

Note: SNI preservation is enabled by default. However, if you have existing Inbound Gateway Topologies, you must redeploy them for the change to take effect.

iRule Control for Service Entry and Return

Previously, iRules were only available on the entry (ingress) side, limiting customization to traffic entering the Inspection Service. iRule control is now extended to the return-side traffic of Inspection Services. You can now apply iRules on both sides of an Inspection Service (L2, L3, HTTP). This enhancement provides full control over traffic entering and leaving the Inspection Service, enabling more flexible, powerful, and fine-grained traffic handling. The Services page now includes configuration for iRules on service entry and iRules on service return.

A typical use case for this feature is what we call Header Enrichment. In this case, iRules are used to add headers to the payload before sending it to the Inspection Service. The headers could contain the authenticated username/group membership of the person who initiated the connection. This information can be useful to Inspection Services for logging, policy enforcement, or both. The benefit of this feature is that the authenticated username/group membership header can be removed from the payload on egress, preventing it from being leaked to origin servers.

New Access Policy Manager (APM) Features

Expanded Exclusion Support for Locked Client Mode

Previously, APM-locked client mode allowed a maximum of 10 exclusions, preventing administrators from adding more than 10 destinations. This limitation has now been removed, and the exclusion list can contain more than 10 entries.

OAuth Authorization Server Max Claims Data Support

The maximum claim data size is set to 8 KB by default, but a large claim size can lead to excessive memory consumption. Memory should be allocated dynamically, as required, based on the claims configuration.

New Features in BIG-IP v21.0.0

Control Plane Performance and Scalability Improvements

The BIG-IP 21.0.0 release introduces significant improvements to the BIG-IP control plane, including better scalability and support for large-scale configurations (up to 1 million objects). This includes MCPD efficiency enhancements and eXtremeDB scale improvements.

AI Data Delivery

Optimize performance and simplify configuration with new S3 data storage integrations. Use cases include secure ingestion for fine-tuning and batch inference, high-throughput retrieval for RAG and embeddings generation, policy-driven model artifact distribution with observability, and controlled egress with consistent security and compliance. F5 BIG-IP optimizes and secures S3 data ingress and egress for AI workloads.
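To make the traffic pattern concrete: clients simply point their S3 endpoint at a BIG-IP virtual server that fronts the object store, so ingress and egress pass through BIG-IP's delivery and security services. A minimal sketch, assuming a hypothetical virtual server hostname and bucket:

```
# Upload a fine-tuning dataset through the BIG-IP virtual server
aws s3 cp ./training-batch.jsonl s3://ai-datasets/ \
    --endpoint-url https://s3-vip.example.com
```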
Model Context Protocol (MCP) Support for AI Traffic

Accelerate and scale AI workloads with support for MCP, which enables seamless communication between AI models, applications, and data sources. This enhances performance, secures connections, and streamlines deployment for AI workloads.

Migrating BIG-IP from Entrust to Alternative Certificate Authorities

Entrust is soon to be delisted as a certificate authority by many major browsers. Following a variety of compliance failures with industry standards in recent years, browsers like Google Chrome and Mozilla made their distrust of Entrust certificates public last year. As such, Entrust certificates issued on or after November 12, 2024, are deemed insecure by most browsers.

Conclusion

Upgrade your BIG-IP to version 21.0 today to take advantage of these fixes and new features that enhance the F5 Application Delivery and Security Platform (ADSP). These changes complement the Delivery, Security, and Deployment aspects of the ADSP.

Related Content

- SSL Orchestrator Release Notes
- BIG-IP Release Notes
- BLOG: F5 BIG-IP v21.0: Control plane, AI data delivery and security enhancements
- Press Release: F5 launches BIG-IP v21.0
- Introduction to BIG-IP SSL Orchestrator

What’s New in the NGINX Plus R36 Native OIDC Module
NGINX Plus R36 is out, and with it we hit a really important milestone for the native `ngx_http_oidc_module`, which now supports a broad set of OpenID Connect (OIDC) features commonly relied on in production environments. In this release, we add:

- Support for OIDC Front-Channel Logout 1.0, enabling proper single sign-out across multiple apps
- Built-in PKCE (Proof Key for Code Exchange) support
- Support for the `client_secret_post` client authentication method at the token endpoint

R35 gave the native module RP-initiated logout and a UserInfo integration; R36 builds on that and closes several important gaps. In this post I’ll walk through all the new features in detail, using Microsoft’s Entra ID as the concrete example IdP.

Front-Channel Logout: Real Single Sign-Out

Why RP-initiated logout alone isn’t enough

Until now, `ngx_http_oidc_module` supported only RP-initiated logout (per OpenID Connect RP-Initiated Logout 1.0). That gave us a standards-compliant “logout button”: when the user clicks “Logout” in your app, NGINX Plus sends them to the IdP’s logout endpoint, and the IdP tears down its own session.

The catch is that RP-initiated logout only reliably logs you out of:

- The current application (the RP that initiated the logout), and
- The IdP session itself

Other applications that share the same IdP session typically stay logged in unless they also have a custom logout flow that goes through the IdP. That’s not what most people think of as “single sign-out”.

Imagine you borrow your partner’s personal laptop, log into a few internal apps that are all protected by NGINX Plus, finish your work, and hit “Logout” in one of them. You really want to be sure you’re logged out of all of those apps, not just the one where you pressed the button. That’s exactly what front-channel logout is for.

What front-channel logout does

The OpenID Connect Front-Channel Logout 1.0 spec defines a way for the OP (the OpenID Provider) to notify all RPs that share a user’s session that the user has logged out. At a high level:

1. The user logs out (either from an app using RP-initiated logout, or directly on the IdP).
2. The OP figures out which RPs are part of that single sign-on session.
3. The OP renders a page with one `<iframe>` per RP, each pointing at the RP’s `frontchannel_logout_uri`.
4. Each RP receives a front-channel logout request in its own back-end and clears its local session.

The browser coordinates this via iframes, but the session termination logic lives entirely in NGINX Plus.

Configuring Front-Channel Logout in NGINX Plus

Let’s start with the NGINX Plus configuration.
The change is intentionally minimal: you only need to add one directive to your existing `oidc_provider` block:

```
oidc_provider entra_app1 {
    issuer        https://login.microsoftonline.com/<tenant_id>/v2.0;
    client_id     your_client_id;
    client_secret your_client_secret;

    logout_uri      /logout;
    post_logout_uri /post_logout/;
    logout_token_hint on;

    frontchannel_logout_uri /front_logout;   # Enables front-channel logout

    userinfo on;
}
```

That’s all that’s required on the NGINX Plus side to enable a single logout for this provider:

- `logout_uri` - path in your app that starts RP-initiated logout
- `post_logout_uri` - where the IdP will send the browser after logout
- `logout_token_hint on;` - instructs NGINX Plus to send `id_token_hint` when calling the IdP’s logout endpoint
- `frontchannel_logout_uri` - path that will receive front-channel logout requests from the IdP

You’ll repeat that pattern for every app/provider block that should participate in single sign-out.

Configuring Front-Channel Logout in Microsoft Entra ID

On the Microsoft Entra ID side, you need to register a Front-channel logout URL for each application. For each app:

1. Go to Microsoft Entra admin center -> App registrations -> Your application -> Authentication.
2. In Front-channel logout URL, enter the URL that corresponds to your NGINX configuration, for example: `https://app1.example.com/front_logout`. This must match the URI you configured with `frontchannel_logout_uri` in the `oidc_provider` configuration.
3. Repeat for `app2.example.com`, `app3.example.com`, and any other RP that should take part in single sign-out.

End-to-End Flow with Three Apps

Assume you have three apps configured the same way:

- https://app1.example.com
- https://app2.example.com
- https://app3.example.com

All of them:

- Use `ngx_http_oidc_module` with the same Entra ID tenant
- Have `frontchannel_logout_uri` configured in NGINX
- Have the same URL registered as Front-channel logout URL in Entra ID

User signs in to multiple apps: the user navigates to `app1.example.com` and gets redirected to Microsoft Entra ID for authentication. After a successful login, NGINX Plus establishes a local OIDC session, and the user can access app1. They then repeat this process for app2 and app3. At this point, the user has active sessions in all three apps:

1. User clicks “Logout” in app1 -> HTTP GET `https://app1.example.com/logout`
2. NGINX redirects to the Entra logout endpoint -> HTTP GET `https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/logout?...`
3. User confirms logout at Microsoft
4. IdP renders iframes that call all registered `frontchannel_logout_uri` values:
   - GET `https://app1.example.com/front_logout?sid=...`
   - GET `https://app2.example.com/front_logout?sid=...`
   - GET `https://app3.example.com/front_logout?sid=...`
5. `ngx_http_oidc_module` maps these `sid` values to NGINX sessions and deletes them
6. IdP redirects the browser back to `https://app1.example.com/post_logout/`
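You can also exercise an RP's logout endpoint by hand to verify the wiring, simulating the iframe request the IdP would send. A minimal sketch (hostname and `sid` are placeholders; a real `sid` only terminates a session if it matches an active one):

```
# Simulate the IdP's front-channel logout iframe request against one RP
curl -sk "https://app2.example.com/front_logout?sid=ec91a1f3-0000-0000-0000-000000000000"
```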
How NGINX Maps a sid to a Session

So how does the module know which session to terminate when it receives a front-channel logout request like:

    GET /front_logout?sid=ec91a1f3-... HTTP/1.1
    Host: app2.example.com

The key is the `sid` claim in the ID token. Per the Front-Channel Logout spec, when an OP supports session-based logout it:

- Includes a `sid` claim in ID tokens
- May send `sid` (and `iss`) as query parameters to the `frontchannel_logout_uri`

When `ngx_http_oidc_module` authenticates a user, it:

1. Obtains an ID token from the provider.
2. Extracts the `sid` claim (if present).
3. Stores that `sid` alongside the rest of the session data in the module’s session store.

Later, when a front-channel logout request arrives:

1. The module inspects the `sid` query parameter.
2. It looks up any active session in its session store that matches this `sid` for the current provider.
3. If it finds a matching active session, it terminates that session (clears cookies, removes data).
4. If there’s no match, it ignores the request.

This makes the module resilient to bogus or replayed logout requests: a random `sid` that doesn’t match any active session is simply discarded.

Where Is the iss Parameter?

If you’ve studied the Front-Channel Logout spec carefully, you might be wondering: where is `iss` (issuer)? The spec says the OP MAY add `iss` and `sid` query parameters when rendering the logout URI, and if either is included, both MUST be. The reason is that the `sid` value is only guaranteed to be unique per issuer; combining `iss` + `sid` makes the pair globally unique.

In practice, though, reality is messy. For example, Microsoft Entra ID sends a `sid` in front-channel logout requests but does not send `iss`, even though its discovery document advertises `frontchannel_logout_session_supported: true`. This behavior has been reported publicly and has been acknowledged by Microsoft. If `ngx_http_oidc_module` strictly required `iss`, you simply couldn’t use front-channel logout with Entra ID and some other providers.

Instead, the module takes a pragmatic approach:

- It does not require `iss` in the logout request
- It already knows which provider it’s dealing with (from the `oidc_provider` context)
- It stores `sid` values per provider, so `sid` collisions across providers can’t happen inside that context

So while this is technically looser than what the spec recommends for general-purpose RPs, it’s safe given how the module scopes sessions, and it makes the feature usable with real-world IdPs.

Cookie-Only Front-Channel Logout (and Why You Probably Don’t Want It)

Front-channel logout has another mode that doesn’t rely on `sid` at all. The spec allows an OP to call the RP’s `frontchannel_logout_uri` without any query parameters and rely entirely on the browser sending the RP’s session cookie. The RP then just checks, “do I have a session cookie?” and if yes, logs that user out. `ngx_http_oidc_module` supports this.

However, modern browser behavior makes this approach very fragile:

- Recent browser versions treat cookies without a SameSite attribute as SameSite=Lax.
- Front-channel logout uses iframes, which are third-party / cross-site contexts.
- SameSite=Lax cookies do not get sent on these sub-requests, so your RP will never see its own session cookie in the front-channel iframe request.

To make cookie-only front-channel logout work, your session cookie would need:

```
Set-Cookie: NGX_OIDC_SESSION=...; SameSite=None; Secure
```

…and that has some serious downsides:

- SameSite=None opens you up to cross-site request forgery (CSRF).
- The current version of `ngx_http_oidc_module` does not expose a way to set `SameSite=None` on its session cookie directly.
- Even if you tweak cookies at a lower level, you might not want to weaken your CSRF posture just to accommodate this logout variant.

Because of that, the recommended and practical approach is the sid-based mechanism:

- It doesn’t rely on third-party cookies.
- It works in modern browsers with strict SameSite behaviors.
- It’s easy to reason about and debug.

Is Relying on sid Secure Enough?
It’s a fair question: if you no longer rely on your own session cookie, how safe is it to accept a logout request based solely on a `sid` received from the IdP? A few points to keep in mind:

- The spec defines `sid` as an opaque, high-entropy session identifier generated by the IdP. Implementations are expected to use cryptographically strong randomness with enough entropy to prevent guessing or brute force.
- Even if an attacker somehow learned a valid `sid` and sent a fake front-channel logout request, the worst they can do is log a user out of your application.
- Providers like Microsoft Entra ID treat `sid` as a session-scoped identifier. New sessions get new `sid` values, and sessions expire over time.
- `ngx_http_oidc_module` validates that the `sid` from the logout request matches an active session in its session store for that provider. A random or stale `sid` that doesn’t match anything is ignored.

Taken together, sid-based front-channel logout is a very reasonable trade-off: you get robust single sign-out without weakening cookie security, and the remaining risks are small and easy to understand.

Front-Channel Logout Troubleshooting

If you’ve wired everything up and single logout still doesn’t work as expected, here’s a quick checklist.

1. Confirm that your IdP actually issues front-channel requests. Make sure the provider’s discovery document (.well-known/openid-configuration) includes `frontchannel_logout_supported: true`, and that you have configured the Front-channel logout URL for each application in your IdP. If Entra ID doesn’t send requests to your `frontchannel_logout_uri`, the RP will never know that it should log out the user.

2. Ensure the ID token contains a `sid` claim. Many IdPs, including Microsoft Entra ID, don’t include `sid` in ID tokens by default, even if they support front-channel logout. For Entra ID you typically need to open your app registration, go to Token configuration, click Add optional claim, select Token type: ID, then select `sid` and add it. After that, new ID tokens will carry a `sid` claim, which `ngx_http_oidc_module` can store and later match on logout.

3. Check what the IdP actually sends on front-channel logout. If you rely on the sid-based mechanism, inspect the HTTP requests your app receives at `frontchannel_logout_uri`: do you see `sid` and `iss` query parameters? Does your provider also advertise `frontchannel_logout_session_supported: true` in the metadata?

If all of the above is in place, front-channel logout should “just work.”

PKCE Support in ngx_http_oidc_module

In earlier versions of the `ngx_http_oidc_module`, we did not support PKCE because it is not required for confidential clients, such as NGINX, which are able to securely store and transmit a client_secret. However, as the module gained popularity, and with the release of the OAuth 2.1 draft specification recommending the use of PKCE for all client types, we decided to add PKCE support to `ngx_http_oidc_module`.

PKCE is an extension to OAuth 2.0 that adds an additional layer of security to the authorization code flow. The core idea is that the client generates a random code_verifier and derives a code_challenge from it, which is sent with the authorization request. When the client later exchanges the authorization code for tokens, it must send back the original code_verifier. The authorization server validates that the code_verifier matches the previously supplied code_challenge, preventing attacks such as authorization code interception. This is a brief overview of PKCE.
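To make the verifier/challenge relationship concrete, here is a minimal sketch of the S256 derivation, using the example values from Appendix B of RFC 7636:

```
# S256: code_challenge = BASE64URL( SHA256( code_verifier ) )
code_verifier='dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk'
printf '%s' "$code_verifier" \
    | openssl dgst -sha256 -binary \
    | openssl base64 -A | tr '+/' '-_' | tr -d '='
# Prints: E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```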
If you’d like to learn more, I recommend reviewing the official RFC 7636 specification: https://datatracker.ietf.org/doc/html/rfc7636.

How Is PKCE Support Implemented in ngx_http_oidc_module?

The implementation of PKCE support in `ngx_http_oidc_module` is straightforward and intuitive. Moreover, if your identity provider supports PKCE and includes the parameter `code_challenge_methods_supported = S256` in its OIDC metadata, the module automatically enables PKCE with no configuration changes required. When initiating the authorization flow, the module generates a random code_verifier and derives a code_challenge from it using the S256 method. These parameters are sent with the authorization request. When the module later receives the authorization code, it sends the original code_verifier when requesting tokens, ensuring the authorization code exchange remains secure.

If your identity provider does not support automatic PKCE discovery, you can explicitly enable PKCE in your provider configuration by adding the `pkce on;` directive inside the `oidc_provider` block. For example:

```
oidc_provider entra_app2 {
    issuer        https://login.microsoftonline.com/<tenant_id>/v2.0;
    client_id     your_client_id;
    client_secret your_client_secret;

    pkce on;   # <- this directive enables PKCE support
}
```

That is all you need to do to enable PKCE support in the `ngx_http_oidc_module`.

client_secret_post Client Authentication

Another important enhancement in the `ngx_http_oidc_module` is the addition of support for the `client_secret_post` client authentication method. Previously, the module supported only `client_secret_basic`, which requires sending the client_id and client_secret in the Authorization header. According to the OAuth 2.0 specification, all providers must support `client_secret_basic`; however, for some providers, the use of `client_secret_basic` may be restricted due to security or policy considerations. For this reason, we added support for `client_secret_post`. This method sends the client_id and client_secret in the body of the POST request when exchanging the authorization code for tokens.

To use the `client_secret_post` method in `ngx_http_oidc_module`, you don’t need to do anything at all: the module automatically determines which method to use based on the identity provider’s metadata. If the provider indicates that it supports only `client_secret_post`, the module will use this method when exchanging authorization codes for tokens. If the provider supports both `client_secret_basic` and `client_secret_post`, the module will use `client_secret_basic` by default.

Verifying this is simple: check the value of `token_endpoint_auth_methods_supported` in the provider’s OIDC metadata:

```
$ curl https://login.microsoftonline.com/<tenant_id>/v2.0/.well-known/openid-configuration | jq
{
  ...
  "token_endpoint_auth_methods_supported": [
    "client_secret_post",
    "private_key_jwt",
    "client_secret_basic",
    "self_signed_tls_client_auth"
  ],
  ...
}
```

In this example, Microsoft Entra ID supports both methods, so the module will use `client_secret_basic` by default.

Wrapping Up

As you can see, in this release we have significantly expanded the functionality of the `ngx_http_oidc_module` by adding support for front-channel logout, PKCE, and the `client_secret_post` client authentication method. These enhancements make the module more flexible and secure, enabling better integration with various OpenID Connect providers and offering a higher level of security for your applications.

I hope this overview was useful and informative for you!
See you soon!

Cisco TACACS+ Config on ISE LTM Pair
I'm trying to add a TACACS+ configuration to my ISE LTMs (v17.1.3). We use Active Directory for authentication. The problem is that when I try to create the profile, the "Type" dropdown does not show "TACACS+". APM is not provisioned either; not sure if that is needed. I provisioned it in our lab, but that didn't help.

Agentic AI with F5 BIG-IP v21 using Model Context Protocol and OpenShift
Introduction to Agentic AI

Agentic AI is the capability of extending Large Language Models (LLMs) by adding tools. This allows the LLMs to interoperate with functionality external to the LLM, for example the capability to search for a flight or to push code into GitHub. Agentic AI operates proactively, minimising human intervention, making decisions, and adapting to perform complex tasks by using tools, data, and the Internet. This is done by essentially giving the LLM knowledge of the APIs of GitHub or the flight agency; the reasoning of the LLM then makes use of these APIs. The functionality external to the LLM can run on the local computer or on network MCP servers. This article focuses on network MCP servers, which fit into the F5 AI Reference Architecture components and the insertion point highlighted in the reference architecture diagram.

Introduction to Model Context Protocol

Model Context Protocol (MCP) is a universal connector between LLMs and tools. Without MCP, the LLM would need to be programmed to support the different APIs of the different tools. This is not a scalable model because it requires a lot of effort to add all tools for a given LLM, and for a tool to support several LLMs. Instead, when using MCP, the LLM (or AI application) and the tool only need to support MCP. Without further coding, the LLM is automatically able to use any tool that exposes its functionality through MCP.

MCP Example Workflow

The basic MCP workflow, using the LibreChat AI application as an example, is as follows:

1. The AI application queries agents (MCP servers) for the tools they provide.
2. The agents return a list of the tools, with a description and the parameters required.
3. When the AI application makes a request to the AI model, it includes in the request information about the tools available.
4. When the AI model finds out it doesn’t have built-in what is required to fulfil the request, it makes use of the tools.
5. The tools are accessed through the AI application.
6. The AI model composes a result with its local knowledge and the results from the tools.

Of the workflow above, the most interesting is step 1, which is used to retrieve the information required for the AI model to use the tools. Using the mcpLogger iRule provided later in this article, we can see the MCP messages exchanged.

Step 1a:

```
{
  "method": "tools/list",
  "jsonrpc": "2.0",
  "id": 2
}
```

Step 1b:

```
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "airport_search",
        "description": "Search for airport codes by name or city.\n\nArgs:\n    query: The search term (city name, airport name, or partial code)\n\nReturns:\n    List of matching airports with their codes",
        "inputSchema": {
          "properties": {
            "query": { "type": "string" }
          },
          "required": ["query"],
          "type": "object"
        },
        "outputSchema": {
          "properties": {
            "result": { "type": "string" }
          },
          "required": ["result"],
          "type": "object",
          "x-fastmcp-wrap-result": 1
        },
        "_meta": {
          "_fastmcp": { "tags": [] }
        }
      }
    ]
  }
}
```

Note from the above that the AI model only requires a description of the tool in human language and a formal declaration of the input and output parameters. That’s all! The reasoning of the AI model is what makes good use of the API described through MCP.
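If you want to reproduce step 1 by hand against your own MCP server, here is a minimal sketch using curl over the Streamable HTTP transport. The hostname and `/mcp` path are placeholders, and many servers require an `initialize` exchange (and the resulting `Mcp-Session-Id`) before they accept `tools/list`:

```
# Send the JSON-RPC tools/list request over Streamable HTTP transport
curl -s https://mcp.example.com/mcp \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json, text/event-stream' \
    -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
```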
The AI models will even interpret error messages. For example, if the AI model misinterprets the input parameters (typically because of a poor descriptor of the tool), the AI model might correct itself if the error message is descriptive enough, and call the tool again with the right parameters. Of course, the MCP protocol is more than this, but the above is necessary to understand the basics of how tools are used by LLMs and how the magic works.

F5 BIG-IP and MCP

BIG-IP v21 introduces support for MCP, which is based on JSON-RPC. The MCP protocol has gone through several iterations. For IP-based communication, the transport of the JSON-RPC messages initially used HTTP+SSE transport (now considered legacy), but this has been completely replaced by Streamable HTTP transport. The latter still uses SSE when streaming multiple server messages. Regardless of the MCP version, on the F5 BIG-IP you just need to enable the JSON and SSE profiles in the Virtual Server handling MCP.

By enabling these profiles we automatically get basic protocol validation but, more relevantly, we obtain the ability to handle MCP messages with JSON- and SSE-oriented events and functions. These allow parsing and manipulation of MCP messages, but also traffic management (load balancing, rate limiting, and so on). The parameters available for these profiles allow limiting the size of the various parts of the messages; the defaults are fine for most cases. See the iRule documentation for the events and commands available for the JSON and SSE protocols.

MCP and Persistence

Session persistence is optional in MCP, but when the server returns an Mcp-Session-Id it is mandatory for the client. MCP servers require persistence when they keep a context (state) for the MCP dialog. This means that the F5 BIG-IP must support handling this Mcp-Session-Id as well, and it does so by using UIE (Universal) persistence with this header. A sample iRule, mcpPersistence, is provided in the GitHub repository.
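To observe the persistence behavior end to end, a minimal sketch: capture the `Mcp-Session-Id` returned by the server during `initialize` and replay it on subsequent requests, which the BIG-IP can then persist on (hostname, path, and payload details are illustrative):

```
# 1) Initialize; the server may return an Mcp-Session-Id response header
SESSION=$(curl -si https://mcp.example.com/mcp \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json, text/event-stream' \
    -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"0"}}}' \
    | grep -i '^mcp-session-id:' | awk '{print $2}' | tr -d '\r')

# 2) Reuse the session ID; UIE persistence on this header keeps the
#    dialog pinned to the same MCP server behind the virtual server
curl -s https://mcp.example.com/mcp \
    -H "Mcp-Session-Id: $SESSION" \
    -H 'Content-Type: application/json' \
    -H 'Accept: application/json, text/event-stream' \
    -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
```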
Demo and GitHub Repository

The demo video shows three functionalities built on the BIG-IP MCP capabilities:

1. Using MCP persistence.
2. Getting visibility of MCP traffic by remotely logging the JSON-RPC payloads of the request and response messages using High Speed Logging.
3. Controlling which tools are allowed or blocked, and logging the allow/block actions with High Speed Logging.

These functionalities are implemented with iRules available in this GitHub repository and deployed in Red Hat OpenShift using the Container Ingress Services (CIS) controller, which automates the deployment of the configuration using Kubernetes resources. In the embedded video we can see how this is deployed and used.

Conclusion and Next Steps

F5 BIG-IP v21 introduces support for the MCP protocol, and thanks to F5 CIS these setups can be automated in your OpenShift cluster using the Kubernetes API. The possibilities of Agentic AI are infinite: thanks to MCP, it is possible to extend LLMs to use any tool easily. The tools can be used to query data or execute actions. I suggest taking a look at these repositories of MCP servers to get a sense of the endless possibilities of Agentic AI:

- https://mcpservers.org/
- https://www.pulsemcp.com/servers
- https://mcpmarket.com/server
- https://mcp.so/
- https://github.com/punkpeye/awesome-mcp-servers