Introducing the F5 Threat Report: Strategic Threat Intelligence with Real-Time Industry and Technology Trends
Challenge widespread assumptions from traditional cybersecurity tools with the latest threat landscape insights including threat movement, threat life-cycles, and more.

Announcing Unovis 1.6
Version 1.6 of Unovis is here! This is one of our most feature-packed releases yet. It brings exciting new components, enhanced graph functionality, improved axis customization, and numerous quality-of-life improvements. To see the full list of updates, please look at our release notes on GitHub.

Redesigned docs.nginx.com is now live
We're excited to announce the release of the newly redesigned F5 NGINX documentation website, docs.nginx.com. The NGINX Documentation website hosts content for F5's enterprise NGINX offerings, including F5 NGINX One and F5 NGINX Plus. This release includes a fresh, minimalist design and a complete overhaul of our Hugo theme. We have also added redesigned product landing pages, a new sidebar, and a product selector to make it easier to navigate the site. We will continue to iterate on the site, following a continuous delivery model to release improvements as we complete them. We would love your feedback on our updated design, as well as on our ongoing site improvements. You can share your thoughts in the comments here or via the NGINX community forum, where this announcement is cross-posted. Thanks are due to the F5 DocOps team for their tireless efforts on this project.

F5 NGINX Plus R35 Release Now Available
We're excited to announce the availability of F5 NGINX Plus Release 35 (R35). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R35 include:

ACME protocol support: This release introduces native support for the Automated Certificate Management Environment (ACME) protocol in NGINX Plus. The ACME protocol automates SSL/TLS certificate lifecycle management by enabling direct communication between clients and certificate authorities for issuance, installation, revocation, and replacement of SSL certificates.

Automatic JWT Renewal and Update: This capability simplifies the NGINX Plus renewal experience by automating the process of updating the license JWT for F5 NGINX instances that communicate directly with the F5 licensing endpoint for license reporting.

Native OIDC Enhancements: This release includes additional enhancements to the native OpenID Connect module, adding support for Relying Party (RP)-Initiated Logout and the UserInfo endpoint to streamline authentication workflows.

Support for Early Hints: NGINX Plus R35 introduces support for Early Hints (HTTP 103), which optimizes website performance by allowing browsers to preload resources before the final server response, reducing latency and accelerating content display.

QUIC – CUBIC Congestion Control: With R35, we have extended the congestion control algorithms supported in our HTTP/3 (QUIC) implementation to include CUBIC, which provides better bandwidth utilization, resulting in quicker load times and faster downloads.

NGINX JavaScript QuickJS – Full ES2023 support: With this NGINX Plus release, we now support the full ES2023 JavaScript specification for the QuickJS runtime, for your custom NGINX scripting and extensibility needs using NGINX JavaScript.

Changes to Platform Support

NGINX Plus R35 introduces the following updates to the NGINX Plus technical specification.

Added platforms: Support for the following platforms has been added with this release:
Alpine Linux 3.22
RHEL 10

Removed platforms: Support for the following platforms has been removed starting with this release:
Alpine Linux 3.18 – reached end of support in May 2025
Ubuntu 20.04 (LTS) – reached end of support in May 2025

Deprecated platforms:
Alpine Linux 3.19

Note: For SUSE Linux Enterprise Server (SLES) 15, SP6 is now the required service pack version. The older service packs have reached end of life with the vendor and are no longer supported.

New Features in Detail

ACME Protocol Support

The ACME protocol (Automated Certificate Management Environment) is a communications protocol primarily designed to automate the process of issuing, validating, renewing, and revoking digital security certificates (e.g., TLS/SSL certificates). It allows clients to interact with a Certificate Authority (CA) without requiring manual intervention, simplifying the deployment of secure websites and other services that rely on HTTPS. With the NGINX Plus R35 release, we are pleased to announce the preview release of native ACME support in NGINX. ACME support is available as a Rust-based dynamic module for both NGINX Open Source and enterprise F5 NGINX One customers using NGINX Plus. Native ACME support greatly simplifies and automates the process of obtaining and renewing SSL/TLS certificates. There's no need to track certificate expiration dates and manually update or review configs each time an update is needed.
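To make this more concrete, here is a minimal sketch of what an ACME-enabled configuration could look like. The directive and variable names shown (acme_issuer, acme_certificate, $acme_certificate, $acme_certificate_key) and their placement reflect our reading of the preview module and should be treated as assumptions, and example.com, the contact address, and the Let's Encrypt directory URL are placeholders; confirm the exact syntax against the NGINX ACME documentation.

    # DNS resolver used to reach the certificate authority
    resolver 127.0.0.1;

    # Describe the ACME CA to request certificates from (placeholder values)
    acme_issuer letsencrypt {
        uri     https://acme-v02.api.letsencrypt.org/directory;
        contact admin@example.com;
    }

    server {
        listen 443 ssl;
        server_name www.example.com;

        # Request (and automatically renew) a certificate for this server's name
        acme_certificate letsencrypt;
        ssl_certificate     $acme_certificate;
        ssl_certificate_key $acme_certificate_key;
    }

Because certificates are referenced through variables rather than fixed file paths, the intent is that renewed certificates are picked up automatically, which is exactly the manual tracking burden this feature removes.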
With this support, NGINX can now directly communicate with ACME-compatible Certificate Authorities (CAs) like Let's Encrypt to handle certificate management without requiring external tools like certbot or cert-manager, or ongoing manual intervention. This reduces complexity, minimizes operational overhead, and streamlines the deployment of encrypted HTTPS for websites and applications, while also making the certificate management process more secure and less error-prone. The implementation introduces a new module, ngx_http_acme_module, providing built-in directives for requesting, installing, and renewing certificates directly from NGINX configuration. The current implementation supports the HTTP-01 challenge, with support for the TLS-ALPN and DNS-01 challenges planned for the future. For a detailed overview of the implementation and the value it brings, refer to the ACME blog post. For step-by-step instructions on how to configure ACME in your environment, refer to the NGINX docs.

Automatic JWT Renewal and Update

This feature enables the automatic update of the JWT license for customers reporting their usage directly to the F5 licensing endpoint (product.connect.nginx.com) after successful renewal of the subscription. The feature applies to subscriptions nearing expiration (within 30 days) as well as subscriptions that have expired but remain within the 90-day grace period. Here is how this feature works:

Starting 30 days prior to JWT license expiration, NGINX Plus notifies the licensing endpoint server of the upcoming JWT license expiration as part of the automatic usage reporting process.
The licensing endpoint server continually checks for a renewed NGINX One subscription with the F5 CRM system.
Once the subscription is successfully renewed, the F5 licensing endpoint server sends the updated JWT to the corresponding NGINX Plus instance.
The NGINX Plus instance in turn automatically deploys the renewed JWT license to the location based on your existing configuration, without the need for any NGINX reload or service restart.

Note: The renewed JWT file received from F5 is named nginx-mgmt-license and is located at the state_path location on your NGINX instance. For more details, refer to the NGINX docs.

Native OpenID Connect Module Enhancements

The NGINX Plus R34 release introduced native support for OpenID Connect (OIDC) authentication. Continuing the momentum, we are excited to add support for OIDC Relying Party (RP)-Initiated Logout, along with support for retrieving claims via the OIDC UserInfo endpoint, in this release.

Relying Party (RP)-Initiated Logout

RP-Initiated Logout is a method used in federated authentication systems (e.g., systems using OpenID Connect (OIDC) or Security Assertion Markup Language (SAML)) to allow a user to log out of an application (called the relying party) and propagate the logout request to other services in the authentication ecosystem, such as the identity provider (IdP) and other sessions tied to the user. This facilitates session synchronization and clean-up across multiple applications or environments. The RP-Initiated Logout support in the NGINX native OIDC module helps provide a seamless user experience by enhancing the consistency of authentication and logout workflows, particularly in Single Sign-On (SSO) environments. It also significantly improves security by ensuring user sessions are terminated securely, reducing the risk of unauthorized access.
It also simplifies development by minimizing the need for custom coding and promoting adherence to best practices. Additionally, it strengthens user privacy and supports compliance efforts by enabling users to easily terminate sessions, thereby reducing the exposure from lingering sessions. The implementation involves the client (browser) initiating a logout by sending a request to the relying party's (NGINX) logout endpoint. NGINX, as the RP, adds additional parameters to the request and redirects it to the IdP, which terminates the associated user session and redirects the client to the specified post_logout_uri. Finally, NGINX as the relying party presents a post-logout confirmation page, signaling the completion of the logout process and ensuring session termination across both the relying party and the identity provider.

UserInfo Retrieval Support

The OIDC UserInfo endpoint is used by applications to retrieve profile information about the authenticated identity. Applications can use this endpoint to retrieve profile information, preferences, and other user-specific information to ensure a consistent user management process. Support for the UserInfo endpoint in the native OIDC module provides a standardized mechanism to fetch user claims from Identity Providers (IdPs), helping simplify authentication workflows and reduce overall system complexity. Having a standard mechanism also helps define and adopt development best practices across client applications for retrieving user claims, offering tremendous value to developers, administrators, and end-users. The implementation enables the RP (NGINX) to call an identity provider's OIDC UserInfo endpoint with the access token (Authorization: Bearer) and obtain scope-dependent end-user claims (e.g., profile, email, scope, address). This provides a standard, configuration-driven mechanism for claim retrieval across client applications and reduces integration complexity. Several new directives (logout_uri, post_logout_uri, logout_token_hint, and userinfo) have been added to the ngx_http_oidc_module to support both of these features; a hedged configuration sketch using them appears after the Early Hints section below. Refer to our technical blog on how NGINX Plus R35 offers frictionless logout and UserInfo retrieval support as part of the native OIDC implementation for a comprehensive overview of both features and how they work under the hood. For instructions on how to configure the native OIDC module for various identity providers, refer to the NGINX deployment guide.

Early Hints Support

Early Hints (RFC 8297) is an HTTP status code designed to improve website performance by allowing the server to send preliminary hints to the client before the final response is ready. Specifically, the server sends a 103 status code with headers indicating which resources (like CSS, JavaScript, images) the client can pre-fetch while the server is still working on generating the full response. The majority of web browsers, including Chrome, Safari, and Edge, support it today. A new NGINX directive, early_hints, has been added to specify the conditions under which backends can send Early Hints to the client. NGINX will parse the Early Hints from the backend and send them to the client. The following example shows how to proxy Early Hints for HTTP/2 and HTTP/3 clients and disable them for HTTP/1.1:

    early_hints $http2$http3;
    proxy_pass http://bar.example.com;

For more details, refer to the NGINX docs and the detailed blog on Early Hints support in NGINX.
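Returning to the OIDC enhancements above, the following minimal sketch shows how the new logout and UserInfo directives might sit alongside the existing oidc_provider and auth_oidc directives. The placement of userinfo, logout_uri, and post_logout_uri inside the oidc_provider block and their exact arguments are assumptions made for illustration, and the issuer, client credentials, hostnames, and paths are placeholders; consult the NGINX deployment guide for the authoritative syntax.

    http {
        resolver 10.0.0.1;

        oidc_provider my_idp {
            # Placeholder IdP metadata; replace with your provider's values
            issuer        https://idp.example.com/realms/demo;
            client_id     my_client;
            client_secret my_secret;

            # Assumed placement: fetch additional claims from the UserInfo endpoint
            userinfo on;

            # Assumed placement: endpoints used for RP-Initiated Logout
            logout_uri      /logout;
            post_logout_uri /post-logout;
        }

        server {
            listen 443 ssl;
            # ssl_certificate / ssl_certificate_key omitted for brevity

            location / {
                auth_oidc my_idp;
                # Claims (including UserInfo claims) are exposed as $oidc_claim_* variables
                proxy_set_header X-User $oidc_claim_email;
                proxy_pass http://backend;
            }
        }
    }

With a configuration along these lines, a browser requesting the logout path is redirected to the IdP to end the session there and then returned to the post-logout page, matching the flow described above.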
QUIC – Support for CUBIC Congestion Control Algorithm

CUBIC is a congestion control algorithm designed to optimize internet performance. It is widely used and well-tested in TCP implementations and excels in high-bandwidth, high-delay networks by efficiently managing data transmission, ensuring faster speeds, rapid recovery from congestion, and reduced latency. Its adaptability to various network conditions and fair resource allocation make it a reliable choice for delivering a smooth and responsive online experience and enhancing overall user satisfaction. We announced support for the CUBIC congestion control algorithm in NGINX Open Source mainline version 1.27.4. All the bug fixes and enhancements since then have been merged into NGINX Plus R35. For a detailed overview of the implementation, refer to our blog on the topic.

NGINX JavaScript QuickJS – Full ES2023 Support

We introduced preview support for the QuickJS runtime in NGINX JavaScript (njs) version 0.8.6 in the NGINX Plus R33 release. We have been quietly focused on this initiative since then and are pleased to announce full ES2023 JavaScript specification support in NGINX JavaScript (njs) version 0.9.1 with the NGINX Plus R35 release. With full ES2023 specification support, you can now use the latest JavaScript features that modern developers expect as standard to extend NGINX capabilities using njs. Refer to this detailed blog for a comprehensive overview of our QuickJS implementation, the motivation behind QuickJS runtime support, and where we are headed with NGINX JavaScript. For specific details on how you can leverage QuickJS in your njs scripts, please refer to our documentation.

Other Enhancements and Bug Fixes

Variable-Based Access Control Support

To enable robust access control using identity claims, R34 and earlier versions required a workaround involving the auth_jwt_require directive. This involved reprocessing the ID token with the auth_jwt module to manage access based on claims. This approach introduced configuration complexity and performance overhead. With R35, NGINX simplifies this process through the auth_require directive, which allows direct use of claims for resource-based access control without relying on auth_jwt. This directive is part of a new module, ngx_http_auth_require_module, added in this release. For example, the following NGINX OIDC configuration maps the role claim from the ID token to the $admin_role variable and sets it to 1 if the user's role is "admin". The /admin location block then uses auth_require $admin_role to restrict access, allowing only users with the admin role to proceed.

    http {
        oidc_provider my_idp { ... }

        map $oidc_claim_role $admin_role {
            "admin" 1;
        }

        server {
            auth_oidc my_idp;

            location /admin {
                auth_require $admin_role;
            }
        }
    }

Though the directive is not exclusive to OIDC, when paired with auth_oidc it provides a clean and declarative Role-Based Access Control (RBAC) mechanism within the server configuration. For example, you can easily configure access so only admins reach the /admin location, while either admins or users with specific permissions access other locations. The result is streamlined, efficient, and practical access management directly in NGINX. Note that the new auth_require directive does not replace auth_jwt_require, as the two serve distinct purposes. While auth_jwt_require is an integral part of JWT validation in the JWT module, focusing on header and claims checks, auth_require operates in a separate ACCESS phase for access control.
Deprecating auth_jwt_require would reduce flexibility, particularly in "satisfy" modes of operation, and complicate configurations. Additionally, auth_jwt_require plays a critical role in initializing JWT-related variables, enabling their use in subrequests. This initialization, crucial for JWE claims, cannot be done via REWRITE module directives, as JWE claims are not available before JWT decryption.

Support for JWS RSASSA-PSS Algorithms

RSASSA-PSS algorithms are used for verifying the signatures of JSON Web Tokens (JWTs) to ensure their authenticity and integrity. In NGINX, these algorithms are typically employed via the auth_jwt_module when validating JWTs signed using RSASSA-PSS. We are adding support for the following algorithms, as specified in RFC 7518 (Section 3.5): PS256, PS384, and PS512.

Improved Node Outage Detection and Logging

This release also introduces improvements to timeout handling for zone_sync connections, enabling faster detection of offline nodes and reducing the risk of counter accumulation. This improvement is aimed at better synchronization of nodes in a cluster and early detection of failures, improving the system's overall performance and reliability. Additional heuristics have been added to detect blocked workers and proactively address prolonged event loop times.

License API Updates

The NGINX license API endpoint now provides additional information: the "uuid" parameter in the license information is now available via the API endpoint.

Changes Inherited from NGINX Open Source

NGINX Plus R35 is based on the NGINX 1.29.0 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R34 was released (which was based on the 1.27.4 mainline release).

Features:
Early Hints support – support for response code 103 from proxy and gRPC backends.
CUBIC congestion control algorithm support in QUIC connections.
Loading of secret keys from hardware tokens with the OpenSSL provider.
Support for the "so_keepalive" parameter of the "listen" directive on macOS.

Changes:
The logging level of SSL errors in a QUIC handshake has been changed from "error" to "crit" for critical errors, and to "info" for the rest; the logging level of unsupported QUIC transport parameters has been lowered from "info" to "debug".

Bug fixes:
nginx could not be built by gcc 15 if the ngx_http_v2_module or ngx_http_v3_module modules were used.
nginx might not be built by gcc 14 or newer with -O3 -flto optimization if ngx_http_v3_module was used.
Fixed a bug in the "grpc_ssl_password_file", "proxy_ssl_password_file", and "uwsgi_ssl_password_file" directives when loading SSL certificates and encrypted keys from variables; the bug had appeared in 1.23.1.
Fixed a bug in the $ssl_curve and $ssl_curves variables when using pluggable curves in OpenSSL.
nginx could not be built with musl libc.
Bug fixes and performance improvements in HTTP/3.

Security:
(CVE-2025-53859) SMTP authentication process memory over-read: this vulnerability in the NGINX ngx_mail_smtp_module may allow an unauthenticated attacker to trigger a buffer over-read, resulting in worker process memory disclosure to the authentication server.

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module

NGINX Plus R35 incorporates changes from the NGINX JavaScript (njs) module version 0.9.1. The following is a list of notable changes in njs since 0.8.9 (the version shipped with NGINX Plus R34).

Features:
Added support for the QuickJS-NG library.
Added support for the WebCrypto API, the Fetch API, TextEncoder and TextDecoder, and the querystring, crypto, and xml modules for the QuickJS engine.
Added a state file for shared dictionaries.
Added ECDH support for WebCrypto.
Added support for reading r.requestText or r.requestBuffer from a temporary file.

Improvements:
Performance improvements due to refactored handling of built-in strings, symbols, and small integers.
Multiple memory usage improvements.
Improved reporting of unhandled promise rejections.

Bug fixes:
Fixed a segfault in njs_property_query(); the issue was introduced in b28e50b1 (0.9.0).
Fixed Function constructor template injection.
Fixed GCC compilation with the -O3 optimization level.
Fixed a "constant is too large for 'long'" warning on MIPS -mabi=n32.
Fixed compilation with GCC 4.1.
Fixed %TypedArray%.from() when the buffer is detached by the mapper.
Fixed %TypedArray%.prototype.slice() with overlapping buffers.
Fixed handling of detached buffers for typed arrays.
Fixed frame saving for async functions with closures.
Fixed RegExp compilation of patterns with escaped '[' characters.
Fixed handling of undefined values of a captured group in RegExp.prototype[Symbol.split]().
Fixed a GCC 15 build error with -Wunterminated-string-initialization.
Fixed name corruption in variables and headers processing.
Fixed the incr() method of a shared dictionary with an empty init argument for the QuickJS engine.
Fixed accepting response headers with underscore characters in the Fetch API.
Fixed Buffer.concat() with a single argument in QuickJS.
Added a missing syntax error for await in template literals.
Fixed formatting of non-NULL-terminated strings in exceptions for the QuickJS engine.
Fixed compatibility with recent changes in QuickJS and QuickJS-NG.
Fixed serializeToString(). Previously, serializeToString() behaved as exclusiveC14n(), which returned a string instead of a Buffer; according to the published documentation, it should be c14n().

For a comprehensive list of all the features, changes, and bug fixes, see the njs Changelog.

F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security. Ready to try the new release? Follow this guide for more information on installing and deploying NGINX Plus.

Kickstart your journey into becoming an F5 Certified Administrator, NGINX (F5-CA, NGINX)!
The F5 Certified! NGINX Administrator Accelerator is designed to align with the applicable NGINX Administrator Certification exams and help candidates prepare for their certification journey.

F5 NGINX One Console July features
Introduction

We are very excited to announce the new set of F5 NGINX One Console features that were released in July:
• F5 NGINX App Protect WAF Policy Orchestration
• F5 NGINX Ingress Controller Fleet Visibility

The NGINX One Console is a central management service in the F5 Distributed Cloud that makes it easier to deploy, manage, monitor, and operate NGINX. It is available to all NGINX and F5 Distributed Cloud subscribers and is included as a part of their subscription. If you have any questions on how to get access, please reach out to me.

Workshops

Additionally, we are offering complimentary workshops on how to use the NGINX One Console in August and September (North American time zones). These include a presentation, hands-on labs, and demos with live support and guidance.

July 30 - F5 Test Drive Labs - F5 NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
Aug 19 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1511189742199
Sept 9 - F5 Test Drive Labs - NGINX One - https://www.f5.com/company/events/test-drive-labs#agenda
Sept 10 - NGINXperts Workshop - NGINX One Console - https://www.eventbrite.com/e/nginxpert-workshops-nginx-one-console-tickets-1393597239859?aff=oddtdtcreator

NGINX App Protect WAF Policy Orchestration

You can now centrally manage NGINX App Protect WAF in the NGINX One Console. Easily create and edit WAF policies in the NGINX One Console, compare changes, and publish to instances and configuration sync groups. Both App Protect v4 and v5 are supported! Find the guides on how to secure F5 NGINX with NGINX App Protect and the NGINX One Console here: https://docs.nginx.com/nginx-one/nap-integration/

NGINX Ingress Controller Fleet Visibility

NGINX Ingress Controller deployments can now be monitored in the NGINX One Console. See all versions of your NGINX Ingress Controller deployments, which underlying instances make up the control pods, and the underlying NGINX configuration. Coming later this quarter: CVE monitoring for NGINX Ingress Controller and the inclusion of F5 NGINX Gateway Fabric. For details, see how you can connect Ingress Controller to NGINX One Console: https://docs.nginx.com/nginx-one/k8s/add-nic/

Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX One Console is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability. It is used for applications deployed across cloud, hybrid, and edge architectures. The NGINX One Console is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. As a cornerstone of the NGINX One package, NGINX One Console extends the capabilities of open-source NGINX. It adds features designed specifically for enterprise-grade performance, scalability, and security.
F5 NGINX One Console June Features
Introduction

We are happy to announce the new set of F5 NGINX One Console features that were released in June:
• Fleet Visibility Alerts
• Import/Export for Staged Configurations

The F5 NGINX One Console is a central management service in the F5 Distributed Cloud. It makes it easier to deploy, manage, monitor, and operate F5 NGINX. It is available to all NGINX and Distributed Cloud subscribers and is included as a part of your subscription. If you have any questions on how to get access, please reach out to your F5 Customer Success or Account team.

Fleet Visibility Alerts

The NGINX One Console now creates Alerts for important notifications related to your NGINX fleet. For example, if a connected NGINX instance has an insecure configuration or an instance is impacted by a newly announced critical vulnerability, an Alert will be generated. Using Distributed Cloud's robust alerting system, you can configure notifications of your choice to be sent to your preferred system, such as SMS, email, Slack, PagerDuty, or a webhook. Find the full list of Alerts here: https://docs.cloud.f5.com/docs-v2/platform/reference/alerts-reference See instructions on how to set up Alert Receivers and Policies here: https://docs.cloud.f5.com/docs-v2/shared-configuration/how-tos/alerting/alerts-email-sms

Import/Export for Staged Configurations

Effortlessly share your configurations with the new import/export feature, which makes it easier to work together. You can share configurations quickly with teammates, F5 support, or the community, and you can just as easily import configurations from others. Whether you're fine-tuning configurations for your team or seeking advice, this update makes sharing and receiving configurations simple and efficient. It also gives those who prefer the NGINX One Console configuration editing experience but operate in dark or air-gapped environments an easy way to move configurations from the NGINX One Console to their instances. You can craft and refine configurations within the NGINX One Console, then export them for deployment in isolated instances without hassle. Similarly, it lets you easily export configurations from the Console and import them into F5 NGINXaaS for Azure. The process is highly flexible and intuitive:

Create new Staged Configurations by importing configuration files directly as a .tar.gz archive via the UI or API.
Export Staged Configurations with just one click or API call, generating a .tar.gz archive that's ready to be unpacked and applied wherever you need your configuration files.

See the documentation for more information: [N1 docs link]

Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX One Console is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One Console is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX One Console is a key part of the NGINX One package.
It adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security.

Regional Edge Resiliency Zones and Virtual Sites
Introduction:

This article is a follow-up to my earlier article, F5 Distributed Cloud: Virtual Sites – Regional Edge (RE). In the last article, I talked about how to build custom topologies using Virtual Sites on our SaaS data plane, aka Regional Edges. In this article, we're going to review an update to our Regional Edge architecture. With this update to Regional Edges, there are some best practices regarding Virtual Sites that I'd like to review.

As F5 has seen continuous growth and utilization of the F5 Distributed Cloud platform, we've needed to expand our capacity. We have added capacity through many different methods over the years. One strategic approach to expanding capacity is building new POPs. However, in some cases, even with new POPs, there are certain regions of the world that have a high density of connectivity. This will always cause higher utilization than in other regions. A perfect example of that is Ashburn, Virginia in the United States. Within the Ashburn POP, which has a high density of connectivity and utilization, we could simply "throw compute at it" within common software stacks. That is not what we've decided to do; instead, F5 is providing additional benefits with these capacity expansions by introducing what we're calling "Resiliency Zones".

Introduction to Resiliency Zones:

What is a Resiliency Zone? A Resiliency Zone is simply another Regional Edge cluster within the same metropolitan (metro) area. These Resiliency Zones may be within the same POP, or within a common campus of POPs. The Resiliency Zones are made up of dedicated compute structures and have network hardware for the different networks that make up our Regional Edge infrastructure. So why not follow in AWS's footsteps and call these Availability Zones? Well, while in some cases we may split Resiliency Zones across a campus of data centers, within separate physical buildings, that may not always be the design. It is possible that the Resiliency Zones are within the same facility and split between racks. We didn't feel this level of separation provided a full Availability Zone-like infrastructure as AWS has built out. Remember, F5's services are globally significant, while most cloud providers' services are locally significant to a region and a set of Availability Zones (in AWS's case). While we strive to ensure our services are protected from catastrophic failures, F5 Distributed Cloud's global availability of services allows us to be more condensed in our data center footprint within a single region or metro.

I spoke of "additional benefits" above; let's look at those. With Resiliency Zones, we've created the ability to scale our infrastructure both horizontally and vertically within our POPs. We've also created isolated fault and operational domains. I personally believe the operational domain is most critical. Today, when we do maintenance on a Regional Edge, all traffic to that Regional Edge is rerouted to another POP for service. With Resiliency Zones, while one Regional Edge "Zone" is under maintenance, the other Regional Edge Zone(s) can handle the traffic, keeping the traffic local to the same POP. In some regions of the world, this is critical to maintaining traffic within the same region and country.

What to Expect with Resiliency Zones

Resiliency Zone Visibility: Now that we have a little background on what Resiliency Zones are, what should you expect and look out for? You will begin to see Regional Edges within Console that have a letter associated with them.
For example, alongside "dc12-ash", the original Regional Edge, you'll see another Regional Edge, "b-dc12-ash". We will not be appending an "a" to the original Regional Edge. As I write this article, the Resiliency Zones have not been released for routing traffic. They will be soon (June 2025). You can, however, see the first Resiliency Zone today if you use all Regional Edges by default. If you navigate to a Performance Dashboard for a Load Balancer, look at the Origin Servers tab, and sort/filter for dc12-ash, you'll see both dc12-ash and b-dc12-ash.

Customer Edge Tunnels: Customer Edge (CE) sites will not terminate their tunnels onto a Resiliency Zone. We're working to make sure we have the right rules for tunnel terminations in different POPs. We can also give customers the option to choose whether they want tunnels to be in the same POP across Resiliency Zones. Once the logic and capabilities are in place, we'll allow CE tunnels to terminate on Resiliency Zone Regional Edges.

Site Selection and Virtual Sites: Resiliency Zones should not be chosen as the only site or virtual site available for an origin. We've built safeguards into the UI that will give you an error if you try to assign Resiliency Zone RE sites without the original RE site within the same association. For example, you cannot apply b-dc12-ash without including dc12-ash in an origin configuration. If you're unfamiliar with Virtual Sites on F5's Regional Edge data planes, please refer to the link at the top of this article. When setting up a Virtual Site, we use a site selector label. In my earlier article, I highlight the labels that are associated with each site. What we see used most often are Country, Region, and SiteName. If you choose to use SiteName, your Virtual Site will not automatically add the new Resiliency Zone. For example, say your site selector uses SiteName in dc12-ash. When b-dc12-ash comes online, it will not be matched and automatically used for additional capacity, whereas if you used "country in USA" or "region in Ashburn", then both dc12-ash and b-dc12-ash would be available to your services right away.

Best Practices for Virtual Sites: What is the best practice when it comes to Virtual Sites? I wouldn't be in tech if I didn't say "it depends". It is ultimately up to you how much control you want versus how much operational overhead you're willing to take on. Some people may say they don't want to have to manage their virtual sites every time F5 changes capacity, whether that means adding new Regional Edges in new POPs or adding Resiliency Zones to existing POPs. Others may say they want to control when traffic starts routing through new capacity and infrastructure to their origins. Often this control is to ensure customer-controlled security measures (firewall rules, network security groups, geo-IP databases, etc.) are approved and updated. As shown in the graph, the more control you want, the more operations you will maintain. What would I recommend? I would go less granular in how I set up Regional Edge Virtual Sites, as I would want as much compute capacity as close as possible to the clients of my applications using F5 services. I'd also want attackers, bots, bad guys, and traffic that isn't from an actual client to have security applied as close as possible to the source. Lastly, as we see L7 DDoS attacks continue to rise, the more points of presence for L7 security I can provide and scale, the better my chance of mitigating an attack.
To achieve a less granular approach to virtual sites, it is critical to:

Pay attention to our maintenance notices. If we're adding IP prefixes to our allowed firewall/proxy list of IPs, we will send notice well in advance of the new prefixes becoming active.
Update your firewall's security groups, and verify with your geo-IP database provider.
Understand your client-side/downstream/VIP strategy versus your server-side/upstream/origin strategy, and what the different virtual site models might impact.
When in doubt, ask. Ask for help from your F5 account team, or open a support ticket. We're here to help.

Summary:

F5's Distributed Cloud platform needed an additional mechanism for scaling the infrastructure that offers services to its customers. To meet that need, we decided to add capacity through more Regional Edges within a common POP. This strategy offers both F5 and customer operations teams enhanced flexibility. Remember, Resiliency Zones are just another Regional Edge. I hope this article is helpful, and please let me know what you think in the comments below.