How I did it - "High-Performance S3 Load Balancing of Dell ObjectScale with F5 BIG-IP"
As AI and data-driven workloads grow, enterprises need scalable, high-performance, and resilient storage. Dell ObjectScale delivers on this with its cloud-native, S3-compatible design, ideal for AI/ML and analytics. F5 BIG-IP LTM and DNS enhance ObjectScale by providing intelligent traffic management and global load balancing, ensuring consistent performance and availability across distributed environments. This article introduces Dell ObjectScale and its integration with F5 solutions for advanced use cases.

Secure Extranet with Equinix Fabric and F5 Distributed Cloud
Why: The Challenge of Building a Secure Extranet

Establishing a secure extranet that spans multiple clouds, partners, and enterprise locations is inherently complex. Organizations face several persistent challenges:

- Technology Fragmentation: Different clouds, vendors, and networking stacks introduce inconsistency and integration friction.
- Endpoint Proliferation: Each new partner or cloud region adds more endpoints to secure and manage.
- Configuration Drift: Manual or siloed configurations across environments increase the risk of misalignment and security gaps.
- Security Exposure: Without centralized control, enforcing consistent policies across environments is difficult, increasing the attack surface.
- Operational Overhead: Managing disparate systems and connections strains NetOps, DevOps, and SecOps teams.

These challenges make it difficult to scale securely and efficiently, especially when onboarding new partners or deploying applications globally.

What: A Unified, Secure, and Scalable Extranet Solution

The joint solution from F5 and Equinix addresses these challenges by combining:

- F5® Distributed Cloud Customer Edge (CE): A virtualized network and security node deployed via Equinix Network Edge.
- Equinix Fabric®: A software-defined interconnection platform that provides private, high-performance connectivity between clouds, partners, and enterprise locations.

Together, they create a strategic point of control at the edge of your enterprise network. This enables secure, scalable, and policy-driven connectivity across hybrid and multi-cloud environments.

This solution:

- Simplifies deployment by eliminating physical infrastructure dependencies.
- Centralizes policy enforcement across all connected environments.
- Accelerates partner onboarding with pre-integrated, software-defined connectors.
- Reduces risk by isolating traffic and enforcing consistent security policies.

How: Architectural Overview

At the heart of the architecture is the F5 Distributed Cloud CE, deployed as a virtual network function (VNF) on Equinix Network Edge. This CE:

- Acts as a gateway node for each location (cloud, data center, or partner site).
- Connects to other CEs via F5's global private backbone, forming a secure service mesh.
- Integrates with F5 Distributed Cloud Console for centralized orchestration, visibility, and policy management.

The CE nodes are interconnected with partners, vendors, and other sites using Equinix Fabric, which provides:

- Private, low-latency interconnects to major cloud providers (AWS, Azure, GCP, OCI).
- Software-defined routing via Fabric Cloud Router.
- Tier-1 internet access for hybrid workloads.

This architecture enables a hub-and-spoke or full-mesh extranet topology, depending on business needs.

Key Tenets of the Solution

- Strategic Point of Control: The CE becomes the enforcement point for traffic inspection, segmentation, and policy enforcement across all clouds and partners.
- Unified Management: F5 Distributed Cloud Console provides a single pane of glass for managing networking, security, and application delivery policies.
- Zero-Trust Connectivity: Built-in support for mutual TLS, IPsec, and SSL tunnels ensures encrypted, authenticated communication between nodes.
- Rapid Partner Onboarding: Equinix Fabric and F5 CE connectors allow new partners to be onboarded in minutes, not weeks.
- Operational Efficiency: Automation hooks (GitOps, Terraform, APIs) reduce manual effort and configuration drift.
- Private interconnects and regional CE deployments help meet regulatory requirements.
Additional Links

- F5 and Equinix Partnership
- The Business Partner Exchange - An F5 Distributed Cloud Services Demonstration
- Equinix Fabric Overview
- Additional Equinix and F5 partner information

Introducing the F5 Threat Report: Strategic Threat Intelligence with Real-Time Industry and Technology Trends
Challenge widespread assumptions from traditional cybersecurity tools with the latest threat landscape insights, including threat movement, threat life cycles, and more.

Announcing Unovis 1.6
Version 1.6 of Unovis is here! This is one of our most feature-packed releases yet. It brings exciting new components, enhanced graph functionality, improved axis customization, and numerous quality-of-life improvements. To see the full list of updates, please take a look at our release notes on GitHub.

F5 NGINX Plus R35 Release Now Available
We're excited to announce the availability of F5 NGINX Plus Release 35 (R35). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R35 include:

- ACME protocol support: This release introduces native support for the Automated Certificate Management Environment (ACME) protocol in NGINX Plus. The ACME protocol automates the SSL/TLS certificate lifecycle by enabling direct communication between clients and certificate authorities for issuance, installation, revocation, and replacement of SSL certificates.
- Automatic JWT renewal and update: This capability simplifies the NGINX Plus renewal experience by automating the update of the license JWT for F5 NGINX instances that communicate directly with the F5 licensing endpoint for license reporting.
- Native OIDC enhancements: This release adds further enhancements to the native OpenID Connect module, including support for Relying Party (RP) initiated logout and the UserInfo endpoint, streamlining authentication workflows.
- Support for Early Hints: NGINX Plus R35 introduces support for Early Hints (HTTP 103), which optimizes website performance by allowing browsers to preload resources before the final server response, reducing latency and accelerating content display.
- QUIC CUBIC congestion control: With R35, we have extended the congestion algorithms supported by our HTTP/3 (QUIC) implementation to include CUBIC, which provides better bandwidth utilization, resulting in quicker load times and faster downloads.
- NGINX JavaScript QuickJS, full ES2023 support: With this NGINX Plus release, the QuickJS runtime now supports the full ES2023 JavaScript specification for your custom NGINX scripting and extensibility needs using NGINX JavaScript.

Changes to Platform Support

NGINX Plus R35 introduces the following updates to the NGINX Plus technical specification.

Added platforms:

- Alpine Linux 3.22
- RHEL 10

Removed platforms (support removed starting with this release):

- Alpine Linux 3.18 (reached end of support in May 2025)
- Ubuntu 20.04 LTS (reached end of support in May 2025)

Deprecated platforms:

- Alpine Linux 3.19

Note: For SUSE Linux Enterprise Server (SLES) 15, SP6 is now the required service pack version. Older service packs have reached end of life with the vendor and are no longer supported.

New Features in Detail

ACME Protocol Support

The ACME protocol (Automated Certificate Management Environment) is a communications protocol primarily designed to automate the process of issuing, validating, renewing, and revoking digital security certificates (e.g., TLS/SSL certificates). It allows clients to interact with a Certificate Authority (CA) without requiring manual intervention, simplifying the deployment of secure websites and other services that rely on HTTPS.

With the NGINX Plus R35 release, we are pleased to announce the preview release of native ACME support in NGINX. ACME support is available as a Rust-based dynamic module for both NGINX Open Source and enterprise F5 NGINX One customers using NGINX Plus. Native ACME support greatly simplifies and automates the process of obtaining and renewing SSL/TLS certificates. There's no need to track certificate expiration dates and manually update or review configs each time an update is needed.

With this support, NGINX can now directly communicate with ACME-compatible Certificate Authorities (CAs) like Let's Encrypt to handle certificate management without requiring external tools such as certbot or cert-manager, or ongoing manual intervention. This reduces complexity, minimizes operational overhead, and streamlines the deployment of encrypted HTTPS for websites and applications, while also making the certificate management process more secure and less error-prone.

The implementation introduces a new module, ngx_http_acme_module, providing built-in directives for requesting, installing, and renewing certificates directly from the NGINX configuration. The current implementation supports the HTTP-01 challenge, with support for the TLS-ALPN and DNS-01 challenges planned for the future. For a detailed overview of the implementation and the value it brings, refer to the ACME blog post. For step-by-step instructions on configuring ACME in your environment, refer to the NGINX docs.
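To make the shape of the configuration concrete, here is a minimal sketch of how the module's directives fit together. The directive names follow the preview module's documentation, but the issuer details, contact address, and hostnames are assumptions; verify everything against the NGINX ACME docs before use:

```nginx
# Minimal ACME sketch (hostnames and contact are assumptions; verify against the docs).
resolver 127.0.0.1;                  # the module resolves the CA hostname itself

acme_issuer letsencrypt {
    uri     https://acme-v02.api.letsencrypt.org/directory;
    contact mailto:admin@example.com;   # assumption: your registration contact
    accept_terms_of_service;
}

server {
    listen 80;                       # HTTP-01 challenges are answered over port 80
    listen 443 ssl;
    server_name example.com;

    acme_certificate letsencrypt;    # request and renew a certificate for server_name

    ssl_certificate     $acme_certificate;
    ssl_certificate_key $acme_certificate_key;
}
```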
Automatic JWT Renewal and Update

This feature enables automatic updates of the license JWT for customers reporting their usage directly to the F5 licensing endpoint (product.connect.nginx.com) after a successful subscription renewal. It applies to subscriptions nearing expiration (within 30 days) as well as subscriptions that have expired but remain within the 90-day grace period. Here is how the feature works:

- Starting 30 days prior to JWT license expiration, NGINX Plus notifies the licensing endpoint server of the upcoming expiration as part of the automatic usage reporting process.
- The licensing endpoint server continually checks for a renewed NGINX One subscription in the F5 CRM system.
- Once the subscription is successfully renewed, the F5 licensing endpoint server sends the updated JWT to the corresponding NGINX Plus instance.
- The NGINX Plus instance in turn automatically deploys the renewed JWT license to the location defined by your existing configuration, without requiring an NGINX reload or service restart.

Note: The renewed JWT file received from F5 is named nginx-mgmt-license and is located at the state_path location on your NGINX instance. For more details, refer to the NGINX docs.

Native OpenID Connect Module Enhancements

The NGINX Plus R34 release introduced native support for OpenID Connect (OIDC) authentication. Continuing the momentum, this release adds support for OIDC Relying Party (RP) initiated logout, along with support for retrieving claims via the OIDC UserInfo endpoint.

Relying Party (RP) Initiated Logout

RP-initiated logout is a method used in federated authentication systems (e.g., systems using OpenID Connect (OIDC) or Security Assertion Markup Language (SAML)) to allow a user to log out of an application (the relying party) and propagate the logout request to other services in the authentication ecosystem, such as the identity provider (IdP) and other sessions tied to the user. This facilitates session synchronization and clean-up across multiple applications or environments.

RP-initiated logout support in the NGINX native OIDC module provides a seamless user experience by making authentication and logout workflows consistent, particularly in Single Sign-On (SSO) environments. It significantly improves security by ensuring user sessions are terminated securely, reducing the risk of unauthorized access. It also simplifies development by minimizing the need for custom coding and promoting adherence to best practices. Additionally, it strengthens user privacy and supports compliance efforts by enabling users to easily terminate sessions, reducing the exposure from lingering sessions.

The implementation involves the client (browser) initiating a logout by sending a request to the relying party's (NGINX) logout endpoint. NGINX, as the RP, adds additional parameters to the request and redirects it to the IdP, which terminates the associated user session and redirects the client to the specified post_logout_uri. Finally, NGINX presents a post-logout confirmation page, signaling the completion of the logout process and ensuring session termination across both the relying party and the identity provider.

UserInfo Retrieval Support

The OIDC UserInfo endpoint is used by applications to retrieve profile information about the authenticated identity, along with preferences and other user-specific information, to ensure a consistent user management process.

Support for the UserInfo endpoint in the native OIDC module provides a standardized mechanism to fetch user claims from identity providers (IdPs), simplifying authentication workflows and reducing overall system complexity. A standard mechanism also helps define and promote development best practices for retrieving user claims across client applications, offering value to developers, administrators, and end users.

The implementation enables the RP (NGINX) to call an identity provider's OIDC UserInfo endpoint with the access token (Authorization: Bearer) and obtain scope-dependent end-user claims (e.g., profile, email, scope, address). This provides a standard, configuration-driven mechanism for claim retrieval across client applications and reduces integration complexity.

Several new directives (logout_uri, post_logout_uri, logout_token_hint, and userinfo) have been added to the ngx_http_oidc_module to support both of these features; a configuration sketch follows below. Refer to our technical blog on how NGINX Plus R35 offers frictionless logout and UserInfo retrieval as part of the native OIDC implementation for a comprehensive overview of both features and how they work under the hood. For instructions on configuring the native OIDC module for various identity providers, refer to the NGINX deployment guide.
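As a rough illustration of how the new directives fit alongside the R34 basics, here is a hedged sketch of an oidc_provider block. The issuer, URIs, secret, and the exact value syntax of the R35 directives are assumptions; consult the R35 deployment guide for the authoritative form:

```nginx
# Hedged OIDC sketch (issuer, URIs, and directive value syntax are assumptions).
http {
    resolver 10.0.0.2;                       # assumption: resolver for the IdP hostname

    oidc_provider my_idp {
        issuer          https://idp.example.com/realms/demo;  # assumption
        client_id       my_client;
        client_secret   my_secret;
        logout_uri      /logout;                              # RP-initiated logout entry point
        post_logout_uri https://app.example.com/loggedout;    # where the IdP sends the user back
        userinfo        on;                                   # fetch extra claims from the UserInfo endpoint
    }

    server {
        listen 443 ssl;

        location / {
            auth_oidc my_idp;
            proxy_set_header X-User $oidc_claim_sub;          # claims exposed as $oidc_claim_* variables
            proxy_pass http://backend;
        }
    }
}
```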
Early Hints Support

Early Hints (RFC 8297) is an HTTP status code that improves website performance by allowing the server to send preliminary hints to the client before the final response is ready. Specifically, the server sends a 103 status code with headers indicating which resources (such as CSS, JavaScript, or images) the client can pre-fetch while the server is still generating the full response. The majority of web browsers, including Chrome, Safari, and Edge, support it today.

A new NGINX directive, early_hints, has been added to specify the conditions under which backends can send Early Hints to the client. NGINX parses the Early Hints from the backend and sends them on to the client. The following example shows how to proxy Early Hints for HTTP/2 and HTTP/3 clients and disable them for HTTP/1.1:

```nginx
early_hints $http2$http3;
proxy_pass http://bar.example.com;
```

For more details, refer to the NGINX docs and a detailed blog on Early Hints support in NGINX.
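To see the directive in context, here is a hedged sketch of a complete server block; the server name, certificate paths, and upstream are assumptions. The $http2 and $http3 variables are non-empty only on HTTP/2 and HTTP/3 connections respectively, so concatenating them enables hint forwarding for exactly those clients:

```nginx
# Sketch: relay 103 Early Hints from the backend for h2/h3 clients only.
server {
    listen 443 ssl;
    listen 443 quic reuseport;                    # optional HTTP/3 listener
    http2 on;
    server_name app.example.com;                  # assumption
    ssl_certificate     /etc/nginx/certs/app.crt; # assumption
    ssl_certificate_key /etc/nginx/certs/app.key; # assumption

    location / {
        early_hints $http2$http3;                 # empty for HTTP/1.1, so hints are disabled
        proxy_pass http://bar.example.com;
    }
}
```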
QUIC: Support for the CUBIC Congestion Control Algorithm

CUBIC is a congestion control algorithm designed to optimize internet performance. It is widely used and well-tested in TCP implementations, and it excels in high-bandwidth, high-delay networks by efficiently managing data transmission, ensuring faster speeds, rapid recovery from congestion, and reduced latency. Its adaptability to varied network conditions and fair resource allocation make it a reliable choice for delivering a smooth, responsive online experience.

We announced support for the CUBIC congestion algorithm in NGINX Open Source mainline version 1.27.4. All the bug fixes and enhancements since then have been merged into NGINX Plus R35. For a detailed overview of the implementation, refer to our blog on the topic.

NGINX JavaScript QuickJS: Full ES2023 Support

We introduced preview support for the QuickJS runtime in NGINX JavaScript (njs) version 0.8.6 with the NGINX Plus R33 release. We have been quietly focused on this initiative since, and we are pleased to announce full ES2023 JavaScript specification support in NGINX JavaScript (njs) version 0.9.1 with the NGINX Plus R35 release. With full ES2023 specification support, you can now use the latest JavaScript features that modern developers expect as standard to extend NGINX capabilities using njs. Refer to this detailed blog for a comprehensive overview of our QuickJS implementation, the motivation behind QuickJS runtime support, and where we are headed with NGINX JavaScript. For specific details on how you can leverage QuickJS in your njs scripts, please refer to our documentation.

Other Enhancements and Bug Fixes

Variable-Based Access Control Support

To enable robust access control using identity claims, R34 and earlier versions required a workaround involving the auth_jwt_require directive. This involved reprocessing the ID token with the auth_jwt module to manage access based on claims, which introduced configuration complexity and performance overhead. With R35, NGINX simplifies this through the auth_require directive, which allows direct use of claims for resource-based access control without relying on auth_jwt. The directive is part of a new module, ngx_http_auth_require_module, added in this release.

For example, the following NGINX OIDC configuration maps the role claim from the ID token to the $admin_role variable and sets it to 1 if the user's role is "admin". The /admin location block then uses auth_require $admin_role to restrict access, allowing only users with the admin role to proceed.

```nginx
http {
    oidc_provider my_idp {
        ...
    }

    map $oidc_claim_role $admin_role {
        "admin" 1;
    }

    server {
        auth_oidc my_idp;

        location /admin {
            auth_require $admin_role;
        }
    }
}
```

Though the directive is not exclusive to OIDC, when paired with auth_oidc it provides a clean, declarative Role-Based Access Control (RBAC) mechanism within the server configuration. For example, you can easily configure access so only admins reach the /admin location, while either admins or users with specific permissions access other locations. The result is streamlined, efficient, and practical access management directly in NGINX.

Note that the new auth_require directive does not replace auth_jwt_require, as the two serve distinct purposes. While auth_jwt_require is an integral part of JWT validation in the JWT module, focusing on header and claim checks, auth_require operates in a separate ACCESS phase for access control.
Deprecating auth_jwt_require would reduce flexibility, particularly in "satisfy" modes of operation, and would complicate configurations. Additionally, auth_jwt_require plays a critical role in initializing JWT-related variables, enabling their use in subrequests. This initialization, crucial for JWE claims, cannot be done via REWRITE-module directives because JWE claims are not available before JWT decryption.

Support for JWS RSASSA-PSS Algorithms

RSASSA-PSS algorithms are used for verifying the signatures of JSON Web Tokens (JWTs) to ensure their authenticity and integrity. In NGINX, these algorithms are typically employed via the auth_jwt module when validating JWTs signed using RSASSA-PSS. We are adding support for the following algorithms, as specified in RFC 7518 (Section 3.5), with a hedged configuration sketch after the list:

- PS256
- PS384
- PS512
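Configuration-wise, nothing changes beyond the key material: point the auth_jwt module at a JWKS that carries your RSASSA-PSS public keys. A minimal sketch, assuming a hypothetical /api/ location, realm name, key file path, and upstream:

```nginx
# Sketch: validating PS256/PS384/PS512-signed JWTs (paths and names are assumptions).
location /api/ {
    auth_jwt          "api";                           # realm name is arbitrary
    auth_jwt_key_file /etc/nginx/keys/api_jwks.json;   # JWKS containing "alg": "PS256" (or PS384/PS512) keys
    proxy_pass        http://api_backend;              # hypothetical upstream
}
```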
Improved Node Outage Detection and Logging

This release also introduces improvements in timeout handling for zone_sync connections, enabling faster detection of offline nodes and reducing the risk of counter accumulation. This improvement targets synchronization of nodes in a cluster and early detection of failures, improving the system's overall performance and reliability. Additional heuristics have been added to detect blocked workers and proactively address prolonged event loop times.

License API Updates

The NGINX license API endpoint now provides additional information: the "uuid" parameter in the license information is now available via the API endpoint.

Changes Inherited from NGINX Open Source

NGINX Plus R35 is based on the NGINX 1.29.0 mainline release and inherits all functional changes, features, and bug fixes made since NGINX Plus R34 was released (which was based on the 1.27.4 mainline release).

Features:

- Early Hints support: support for response code 103 from proxy and gRPC backends.
- CUBIC congestion control algorithm support in QUIC connections.
- Loading of secret keys from hardware tokens with the OpenSSL provider.
- Support for the "so_keepalive" parameter of the "listen" directive on macOS.

Changes:

- The logging level of SSL errors in a QUIC handshake has been changed from "error" to "crit" for critical errors and to "info" for the rest; the logging level of unsupported QUIC transport parameters has been lowered from "info" to "debug".

Bug fixes:

- nginx could not be built by gcc 15 if the ngx_http_v2_module or ngx_http_v3_module modules were used.
- nginx might not be built by gcc 14 or newer with -O3 -flto optimization if ngx_http_v3_module was used.
- In the "grpc_ssl_password_file", "proxy_ssl_password_file", and "uwsgi_ssl_password_file" directives when loading SSL certificates and encrypted keys from variables; the bug had appeared in 1.23.1.
- In the $ssl_curve and $ssl_curves variables when using pluggable curves in OpenSSL.
- nginx could not be built with musl libc.
- Bug fixes and performance improvements in HTTP/3.

Security:

- (CVE-2025-53859) SMTP authentication process memory over-read: this vulnerability in the NGINX ngx_mail_smtp_module may allow an unauthenticated attacker to trigger a buffer over-read, resulting in worker process memory disclosure to the authentication server.

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX changes.

Changes to the NGINX JavaScript Module

NGINX Plus R35 incorporates changes from the NGINX JavaScript (njs) module version 0.9.1. The following is a list of notable changes in njs since 0.8.9 (the version shipped with NGINX Plus R34).

Features:

- Added support for the QuickJS-NG library.
- Added support for the WebCrypto API, the Fetch API, TextEncoder and TextDecoder, and the querystring, crypto, and xml modules for the QuickJS engine.
- Added a state file for shared dictionaries.
- Added ECDH support for WebCrypto.
- Added support for reading r.requestText or r.requestBuffer from a temporary file.

Improvements:

- Performance improvements due to refactored handling of built-in strings, symbols, and small integers.
- Multiple memory usage improvements.
- Improved reporting of unhandled promise rejections.

Bug fixes:

- Fixed a segfault in njs_property_query(). The issue was introduced in b28e50b1 (0.9.0).
- Fixed Function constructor template injection.
- Fixed GCC compilation with the O3 optimization level.
- Fixed a "constant is too large for 'long'" warning on MIPS -mabi=n32.
- Fixed compilation with GCC 4.1.
- Fixed %TypedArray%.from() when the buffer is detached by the mapper.
- Fixed %TypedArray%.prototype.slice() with overlapping buffers.
- Fixed handling of detached buffers for typed arrays.
- Fixed frame saving for async functions with closures.
- Fixed RegExp compilation of patterns with escaped '[' characters.
- Fixed handling of undefined values of a captured group in RegExp.prototype[Symbol.split]().
- Fixed a GCC 15 build error with -Wunterminated-string-initialization.
- Fixed name corruption in variables and headers processing.
- Fixed the incr() method of a shared dictionary with an empty init argument for the QuickJS engine.
- Fixed accepting response headers with underscore characters in the Fetch API.
- Fixed Buffer.concat() with a single argument in QuickJS.
- Added a missing syntax error for await in template literals.
- Fixed formatting of non-NULL-terminated strings in exceptions for the QuickJS engine.
- Fixed compatibility with recent changes in QuickJS and QuickJS-NG.
- Fixed serializeToString(). Previously, serializeToString() performed exclusiveC14n(), which returned a string instead of a Buffer. According to the published documentation, it should be c14n().

For a comprehensive list of all the features, changes, and bug fixes, see the njs changelog.

F5 NGINX in F5's Application Delivery & Security Platform

NGINX One is part of F5's Application Delivery & Security Platform. It helps organizations deliver, improve, and secure new applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One is the all-in-one, subscription-based package that unifies all of NGINX's capabilities, bringing together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. NGINX Plus, a key component of NGINX One, adds features to open-source NGINX that are designed for enterprise-grade performance, scalability, and security.

Ready to try the new release? Follow this guide for more information on installing and deploying NGINX Plus.

Regional Edge Resiliency Zones and Virtual Sites
Introduction

This article is a follow-up to my earlier article, F5 Distributed Cloud: Virtual Sites – Regional Edge (RE). In that article, I talked about how to build custom topologies using Virtual Sites on our SaaS data plane, aka Regional Edges. In this article, we're going to review an update to our Regional Edge architecture, along with some best practices regarding Virtual Sites that come with it.

As F5 has seen continuous growth and utilization of F5's Distributed Cloud platform, we've needed to expand our capacity, and we have added capacity through many different methods over the years. One strategic approach is building new POPs. However, even with new POPs, certain regions of the world have a high density of connectivity and will always see higher utilization than other regions. A perfect example is Ashburn, Virginia in the United States. Within the Ashburn POP, with its high density of connectivity and utilization, we could simply "throw compute at it" within common software stacks. That is not what we've decided to do; instead, F5 is delivering additional benefits alongside capacity expansion by introducing what we're calling "Resiliency Zones".

Introduction to Resiliency Zones

What is a Resiliency Zone? A Resiliency Zone is simply another Regional Edge cluster within the same metropolitan (metro) area. These Resiliency Zones may be within the same POP or within a common campus of POPs. Resiliency Zones are made up of dedicated compute structures and have network hardware for the different networks that make up our Regional Edge infrastructure.

So why not follow in AWS's footsteps and call these Availability Zones? While in some cases we may split Resiliency Zones across a campus of data centers in separate physical buildings, that may not always be the design. It is possible that the Resiliency Zones are within the same facility and split between racks. We didn't feel this level of separation provided a full Availability Zone-like infrastructure as AWS has built out. Remember, F5's services are globally significant, while most cloud providers' services are locally significant to a region and a set of Availability Zones (in the AWS case). While we strive to ensure our services are protected from catastrophic failures, F5 Distributed Cloud's global availability of services allows us to be more condensed in our data center footprint within a single region or metro.

I spoke of "additional benefits" above; let's look at those. With Resiliency Zones, we've created the ability to scale our infrastructure both horizontally and vertically within our POPs. We've also created isolated fault and operational domains. I personally believe the operational domain is most critical. Today, when we do maintenance on a Regional Edge, all traffic to that Regional Edge is rerouted to another POP for service. With Resiliency Zones, while one Regional Edge "Zone" is under maintenance, the other Regional Edge Zone(s) can handle the traffic, keeping it local to the same POP. In some regions of the world, this is critical to keeping traffic within the same region and country.

What to Expect with Resiliency Zones

Resiliency Zone Visibility

Now that we have a little background on what Resiliency Zones are, what should you expect and look out for? You will begin to see Regional Edges within Console that have a letter associated with them.
For example, alongside the original Regional Edge "dc12-ash", you'll see another Regional Edge "b-dc12-ash". We will not be appending an "a" to the original Regional Edge. As I write this article, Resiliency Zones have not yet been released for routing traffic; they will be soon (June 2025). You can, however, see the first Resiliency Zone today if you use all Regional Edges by default. If you navigate to a Performance Dashboard for a Load Balancer, look at the Origin Servers tab, and sort/filter for dc12-ash, you'll see both dc12-ash and b-dc12-ash.

Customer Edge Tunnels

Customer Edge (CE) sites will not yet terminate their tunnels onto a Resiliency Zone. We're working to make sure we have the right rules for tunnel terminations in different POPs, and we can also give customers the option to choose whether they want tunnels in the same POP across Resiliency Zones. Once the logic and capabilities are in place, we'll allow CE tunnels to terminate on Resiliency Zone Regional Edges.

Site Selection and Virtual Sites

Resiliency Zones should not be chosen as the only site or virtual site available for an origin. We've built safeguards into the UI that will give you an error if you try to assign Resiliency Zone RE sites without the original RE site in the same association. For example, you cannot apply b-dc12-ash to an origin configuration without also including dc12-ash. If you're unfamiliar with Virtual Sites on F5's Regional Edge data planes, please refer to the link at the top of this article.

When setting up a Virtual Site, we use a site selector label. In my earlier article, I highlight the labels that are associated with each site. The ones we see used most often are Country, Region, and SiteName. If you choose SiteName, your Virtual Site will not automatically pick up a new Resiliency Zone. For example, if your site selector uses SiteName in dc12-ash, then when b-dc12-ash comes online it will not be matched and automatically used for additional capacity. Whereas if you used "country in USA" or "region in Ashburn", then both dc12-ash and b-dc12-ash would be available to your services right away.

Best Practices for Virtual Sites

What is the best practice when it comes to Virtual Sites? I wouldn't be in tech if I didn't say "it depends". It is ultimately up to you how much control you want versus how much operational overhead you're willing to carry. Some people don't want to manage their virtual sites every time F5 changes capacity, whether that means adding new Regional Edges in new POPs or adding Resiliency Zones to existing POPs. Others want to control when traffic starts routing through new capacity and infrastructure to their origins, often to ensure customer-controlled security (firewall rules, network security groups, geo-IP databases, etc.) is approved and allowed. As shown in the graph, the more control you want, the more operations you will maintain.

What would I recommend? I would go less granular in setting up Regional Edge Virtual Sites, because I would want as much compute capacity as close as possible to the clients of my applications using F5 services. I'd also want attackers, bots, and other traffic that isn't an actual client to have security applied as close as possible to the source. Lastly, as L7 DDoS continues to rise, the more points of presence where I can apply and scale L7 security, the better. This gives me the best chance of mitigating an attack.
To achieve a less granular approach to Virtual Sites, it is critical to:

- Pay attention to our maintenance notices. If we're adding IP prefixes to our allowed firewall/proxy list of IPs, we will send notice well in advance of these new prefixes becoming active.
- Update your firewall's security groups, and verify with your geo-IP database provider.
- Understand your client-side/downstream/VIP strategy versus your server-side/upstream/origin strategy, and what the different virtual site models might impact.
- When in doubt, ask. Ask your F5 account team for help, or open a support ticket. We're here to help.

Summary

F5's Distributed Cloud platform needed an additional mechanism for scaling the infrastructure that offers services to its customers. To meet that need, we decided to add capacity through more Regional Edges within a common POP. This strategy offers both F5 and customer operations teams enhanced flexibility. Remember, Resiliency Zones are just another Regional Edge. I hope this article is helpful, and please let me know what you think in the comments below.

Announcing F5 NGINX Instance Manager 2.20
We're thrilled to announce the release of F5 NGINX Instance Manager 2.20, now available for download! This update focuses on improving accessibility, simplifying deployments, improving observability, and enriching the user experience based on valuable customer feedback.

What's New in This Release?

Lightweight Mode for NGINX Instance Manager

Reduce resource usage with the new "Lightweight Mode", which allows you to deploy F5 NGINX Instance Manager without requiring a ClickHouse database. While metrics and events are not available without ClickHouse, all other instance management functionality (such as certificate management, WAF, and templates) works seamlessly across VM, Docker, and Kubernetes installations.

With this change, ClickHouse becomes optional for deployments. Customers who require metrics and events should continue to include ClickHouse in their setup. For those focused on basic use cases, Lightweight Mode offers a streamlined deployment that reduces system complexity while maintaining core functionality for essential tasks. Lightweight Mode is perfect for customers who need simplified management capabilities for scenarios such as:

- Fleet management
- WAF configuration
- Usage reporting as part of your subscription (for NGINX Plus R33 or later)
- Certificate management
- Managing templates
- Scanning instances
- Enabling API-based GitOps for configuration management

In testing, NGINX Instance Manager worked well without ClickHouse, needing only 1 CPU and 1 GB of RAM to manage up to 10 instances (without App Protect). However, please note that this represents the absolute minimum configuration and may result in performance issues depending on your use case. For optimal performance, we recommend allocating more appropriate system resources. See the updated technical specification in the documentation for more details.

Support for Multiple Subscriptions

Align and consolidate usage from multiple NGINX Plus subscriptions on a single NGINX Instance Manager instance. This feature, added with NGINX Plus R33, is especially beneficial for customers who use NGINX Instance Manager as a reporting endpoint, even in disconnected or air-gapped environments.

Improved Licensing and Reporting for Disconnected Environments

Managing NGINX Instance Manager in environments with no outbound internet connectivity is now simpler. Customers can configure NGINX Instance Manager to use a forward proxy for licensing and reporting. For truly air-gapped environments, we've improved offline licensing: upload your license JWT to activate all features, and enjoy a 90-day grace period to submit an initial report to F5. We've also revamped the usage reporting script to be more intuitive and backwards-compatible with older versions.

Enhanced User Interface

We've modernized the NGINX Instance Manager UI to streamline navigation and make it consistent with the F5 NGINX One Console. Features are now grouped into submenus for easier access. Additionally, breadcrumbs have been added to all pages for improved usability.

Instance Export Enhancements

We've added the ability to export instances and instance groups, simplifying the process of managing and sharing configuration details. This improvement makes it easier to keep track of large deployments and maintain consistency across environments.

Performance and Stability Improvements

With this release, we've made performance and stability improvements, a key part of every update to ensure NGINX Instance Manager runs smoothly in all environments. We've also addressed multiple bug fixes to improve stability and reliability. For details on all the fixes included, please visit the release notes.

Platform Improvements and Helm Chart Migration

We've made significant enhancements to the Helm charts to simplify the installation process for NGINX Instance Manager in Kubernetes environments. Starting with this release, the Helm charts have moved to a new repository, nginx-stable/nim, with chart version 2.0. Note: NGINX Instance Manager versions 2.19 and lower remain in the old repository, nms-stable/nms-hybrid. Be sure to update your configurations accordingly when upgrading to version 2.20 or later.
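As a hedged sketch of what the migration looks like in practice (the repository URL, release name, namespace, and values file are assumptions; verify them against the installation docs before running):

```bash
# Hypothetical upgrade sketch; confirm repo URL, chart version, and values first.
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm upgrade --install nim nginx-stable/nim \
  --namespace nim --create-namespace \
  -f values.yaml
```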
Looking Ahead: Security, Modernization, and Kubernetes Innovations

As part of the F5 NGINX One product offering, NGINX Instance Manager continues to evolve to meet the demands of modern infrastructures. We're committed to improving security, scalability, usability, and observability to align with your needs. Although support for the latest F5 NGINX Agent v3 is not included in this release, we are actively exploring ways to enable it later this year to bring additional value to both NGINX Instance Manager and the NGINX One Console. Additionally, we're exploring new ways to enhance support for data plane NGINX deployments, particularly in Kubernetes environments. Stay tuned for updates as we continue to innovate for cloud-native and containerized workloads. We're eager to hear your feedback to help shape the roadmap for future releases.

Get Started Now

To explore the new Lightweight Mode, enhanced UI, and updated features, download NGINX Instance Manager 2.20. For more details on bug fixes and performance improvements, check out the full release notes.

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX Instance Manager is part of F5's Application Delivery & Security Platform, which helps organizations deliver, optimize, and secure modern applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX Instance Manager is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. A cornerstone of the NGINX One package, NGINX Instance Manager extends the capabilities of open-source NGINX with features designed specifically for enterprise-grade performance, scalability, and security.

Elevate Your Cloud Journey with F5 NGINXaaS for Azure
Introduction

F5 NGINXaaS for Azure is a robust Application Delivery Controller as a Service (ADCaaS) solution that integrates seamlessly with Microsoft Azure. By leveraging the power and flexibility of NGINX technology alongside the scalability of a cloud-native platform, this service empowers organizations to efficiently deploy, secure, and scale modern applications and microservices in Azure environments.

By utilizing F5 NGINXaaS, organizations can effectively advance their digital transformation efforts. The solution supports high-performance web applications, facilitates smooth API communications, and enables the management of containerized workloads in Azure. This accelerates deployment, streamlines operations, and strengthens security, ultimately providing teams with the tools they need to overcome the challenges of modern application development.

F5 NGINXaaS has several promising use cases. Let's explore three key applications:

- Implementation of load balancing
- Native integration of F5 NGINXaaS with Azure Monitor and Grafana
- Securing applications and APIs with WAF protection in F5 NGINXaaS for Azure

Additionally, we will show how to get started with F5 NGINXaaS.

Getting Started with NGINXaaS

To deploy F5 NGINXaaS, users have several options, including the Azure portal, the Azure CLI, and Terraform. The video provided demonstrates a step-by-step deployment process using the Microsoft Azure portal, along with essential configuration guidance. This resource aims to improve understanding of the deployment process and help you get the most out of NGINXaaS.

Implement Advanced Load Balancing in F5 NGINXaaS for Azure

F5 NGINXaaS for Azure features advanced traffic management patterns that organizations can utilize right out of the box, empowering teams to experiment, deploy, and test their applications with minimal effort.

Integrate F5 NGINXaaS Natively with Azure Monitor and Grafana

The integration of Azure Monitor with F5 NGINXaaS provides a comprehensive observability solution designed for dynamic cloud environments, equipping teams with the tools necessary to enhance application performance and reliability. A key feature is the inclusion of access and error logs from F5 NGINXaaS, which deliver valuable insights for troubleshooting and issue resolution. By merging these logs with detailed application performance metrics, such as request throughput, latency, error rates, and resource utilization, technical teams can make well-informed decisions to optimize their applications. Additionally, Azure Monitor metrics from F5 NGINXaaS integrate seamlessly with Azure Grafana, allowing the creation of custom dashboards to track key metrics.

Secure Applications and APIs with WAF Protection in F5 NGINXaaS for Azure

Protecting web applications from malicious attacks has become increasingly critical in the digital world. Web Application Firewalls (WAFs) are key tools designed to safeguard applications against vulnerabilities such as SQL injection and cross-site scripting (XSS). This section outlines the straightforward process for activating and leveraging F5's Web Application Firewall, powered by NGINX App Protect, within F5 NGINXaaS for Azure, ensuring robust security for your applications.

Conclusion

F5 NGINXaaS for Azure provides a transformative approach to cloud application deployment and management. It empowers organizations to effortlessly build and maintain scalable, secure, and high-performing solutions.
With integrated advanced features for load balancing, observability through Azure Monitor and Grafana, and comprehensive WAF protection, businesses can ensure their applications are both efficient and resilient against modern threats. As cloud environments continue to evolve, leveraging F5 NGINXaaS enables organizations to swiftly adapt to challenges and seize opportunities, ultimately enhancing overall performance and reliability in the digital landscape. Embrace this innovative solution to elevate your cloud journey and unlock the full potential of your applications in Azure.

Additional Resources:

- Unlocking Insights: Enhancing Observability in F5 NGINXaaS for Azure for Optimal Operations
- Enhancing Application Delivery with F5 NGINXaaS for Azure: ADC-as-a-Service Redefined
- How to Elevate Application Performance and Availability in Azure with F5 NGINXaaS
- GitHub Workshop: https://github.com/nginxinc/nginx-azure-workshops

Achieving Low Latency and Resiliency with F5 Distributed Cloud CE and Arista Switch
Hey everyone! Just wanted to share how I recently put together a proof of concept focused on achieving low latency and high resiliency for application delivery. The core idea was integrating F5 Distributed Cloud Customer Edge (CE) with Arista switches in a smart, multi-site setup. I built this around two locations: a Primary Site in a private data center hosting the app locally, and an Overflow Site in AWS to kick in when local resources get busy.

Introduction

At the heart of it, the F5 Distributed Cloud Customer Edge acted as the control plane, setting up a secure network fabric connecting the F5 XC backbone, my data center, and the AWS environment. Hand in hand with that, the Arista switch managed BGP peering and ECMP routing. I established BGP peering directly between the Arista switch and the Customer Edge nodes, leveraging ECMP to distribute traffic efficiently for resilience and scalability. I also set up an HTTP load balancer on the Customer Edge cluster, with its IP address dynamically advertised into the Arista network. This ensured that local traffic was efficiently routed to the appropriate application server within the local environment. Steering traffic over to the AWS overflow site was handled automatically, based on passive health checks and performance thresholds I easily configured within the F5 Distributed Cloud console. It was a cool way to ensure the application stays available and performs well, no matter what.

Architecture Overview

Private Data Center Deployment

- F5 Distributed Cloud CE Cluster: Deploy three Customer Edge (CE) nodes in a cluster to ensure resiliency. These interconnected nodes form the primary entry point into the private data center. Configuration is managed through a single-pane-of-glass interface in the F5 Distributed Cloud console.
- Arista Switch Integration: The Arista switch connects with the CE cluster via L3 interfaces. BGP peering is established between the Arista switch and the CE nodes, enabling dynamic routing. ECMP (Equal-Cost Multi-Path) routing is used to distribute traffic evenly, enhancing the overall resiliency of the network.
- Local Load Balancing: A load balancer is configured on the CE cluster with the IP address 10.10.124.122. This IP is propagated to the Arista switch, which directs local traffic to the application server in the data center. By keeping the traffic local, users benefit from the lowest possible latency.

AWS Overflow Site

- Secondary Deployment: The same application is also deployed on an AWS site (scs-aws3-site). This AWS deployment is configured with a lower load balancer priority than the local data center. The unified load balancer URL (e.g., juicebox.app.com) ensures that both sites can be accessed seamlessly.
- Overflow Traffic Handling: When the local data center experiences congestion, the load balancer intelligently routes traffic to the AWS site. This overflow mechanism guarantees that users always have access to the application, even during peak load times.

How Overflow Works: Detailed Mechanisms

One of the most intriguing aspects of this architecture is the dynamic traffic steering between the local data center and the AWS site. Here's how the load balancer manages congestion and overflow:

Health Monitoring and Probes

- Active Health Checks: The load balancer continuously sends health probes (HTTP/TCP requests) to the local application server. These probes monitor key metrics such as response time, error rate, and overall responsiveness.
- Passive Health Checks: In addition to active probes, the system monitors real-time traffic patterns and application performance metrics. Any degradation in service quality triggers a more in-depth analysis.

Thresholds and Policies

- Configurable Thresholds: Within the F5 Distributed Cloud console, you can set specific performance thresholds. When these thresholds are exceeded, indicating that the application is under stress, the load balancer marks the local site as degraded.
- Priority Settings: Since the local data center is configured as the primary site, it handles the bulk of the traffic under normal conditions. The AWS site, with its lower priority, only receives traffic when local performance metrics fall below acceptable levels.

Seamless Traffic Steering

- Dynamic Rebalancing: Once congestion is detected at the local site, the load balancer begins redirecting a portion of the traffic to the AWS overflow site. This process is entirely dynamic and happens in real time without manual intervention.
- Routing Coordination: The BGP peering and ECMP routing handled by the Arista switch ensure that local traffic flows efficiently, while the load balancer's policies ensure that the overflow mechanism is triggered only when necessary, preserving low latency for local users.

Benefits and Use Cases

Enhanced Resiliency

- Multiple Failover Layers: With both local and AWS deployments, your application is protected against hardware failures, congestion, and unexpected spikes in traffic.

Optimized Performance

- Low Latency for Local Users: Local traffic is served directly from the private data center, ensuring minimal latency.
- Scalable Overflow: The AWS site acts as a scalable overflow solution, maintaining application availability even during high demand.

Centralized Management

- Unified Configuration: The F5 Distributed Cloud console provides a single pane of glass for managing both local and cloud resources, simplifying operations and maintenance.

Configuration Walkthrough

F5 Distributed Cloud Customer Edge Configuration Overview

The F5 Distributed Cloud CE cluster is central to the solution. Each Customer Edge node is configured with redundant connections to the global Distributed Cloud Regional Edges. Key configurations include setting up health checks to monitor local app performance and defining routing policies that determine when to shift traffic to the overflow site.

Create a Secure Mesh three-node control site in the private data center, with the nodes connected to the Regional Edges and the Arista switch using L3 interfaces. You can do that by navigating to the Site Management > Secure Mesh Site v2 option; visit https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/create-secure-mesh-site-v2 for details. Similarly, create an AWS site by referring to https://docs.cloud.f5.com/docs-v2/multi-cloud-network-connect/how-to/site-management/deploy-sms-aws-clickops

The following screenshots illustrate the deployment:

- The private data center site showing the CE nodes connected to the Regional Edges. A three-node cluster is deployed in the private data center to ensure resilience.
- The BGP peering details with the Arista switch in the private data center.
- The BGP routes learned through the Arista switch and the subnet allocated for the Juice Shop app, supporting the HTTP load balancer running on the local data center site node.
- The Arista data center site node interface IPs for internal and external traffic.
- The Arista data center site routes for ECMP with the Arista switch.
- The Juice Shop app's HTTP load balancer, running in the Arista private data center, showing the origin pool with details such as traffic priority for the application, as well as the IP address for the load balancer.
- The IP address assigned to the HTTP load balancer for the Juice Shop app, along with the site used to route the traffic.

Here's a basic Arista switch configuration for setting up VLANs, interfaces, and BGP peering:

```
vlan 10
   name backend
!
interface Ethernet1
   description connected to backend
   speed forced 1000full
   switchport access vlan 10
!
interface Ethernet11
   speed forced 1000full
   no switchport
   ip address 192.168.60.222/24
!
interface Management1
   ip address 172.16.60.195/24
!
interface Vlan10
   ip address 10.70.100.1/24
!
ip routing
!
ntp server 0.us.pool.ntp.org
ntp server pool.ntp.org
!
router bgp 65001
   router-id 192.168.60.222
   neighbor 192.168.60.52 remote-as 65002
   neighbor 192.168.60.52 description node-32eae
   neighbor 192.168.60.52 ebgp-multihop 2
   neighbor 192.168.60.137 remote-as 65002
   neighbor 192.168.60.137 description node-8e5f4
   neighbor 192.168.60.137 ebgp-multihop 2
   neighbor 192.168.60.143 remote-as 65002
   neighbor 192.168.60.143 description node-1f456
   neighbor 192.168.60.143 ebgp-multihop 2
   !
   address-family ipv4
      neighbor 192.168.60.52 activate
      neighbor 192.168.60.137 activate
      neighbor 192.168.60.143 activate
      network 10.70.100.0/24
!
end
```

The Arista switch's BGP peering details with the F5 Distributed Cloud Customer Edge nodes include the Juice Shop app's HTTP load balancer route learned by the switch, along with the corresponding next-hop CE addresses.

Conclusion

Integrating F5 Distributed Cloud CE with Arista switches presents a powerful solution for enterprises that require both low latency and high resiliency. By intelligently steering traffic based on real-time performance metrics and utilizing an overflow AWS site, this architecture ensures that your application remains available and responsive under all conditions. Whether you're a network engineer, an IT architect, or simply interested in modern infrastructure solutions, this deployment model offers valuable insights into building scalable, high-performance applications. For additional details on this solution, please visit the YouTube link.

AI, Red Teaming, and Post-Quantum Cryptography: Key Insights from RSA 2025
Join Aubrey and Byron at RSA Conference 2025 as they dive into transformative topics like artificial intelligence, red-teaming strategies, and post-quantum cryptography. From exploring groundbreaking OWASP sessions to analyzing emerging AI threats, this episode highlights key insights that shape the future of cybersecurity. Discover the challenges in red-team AI testing, the implications of APIs in multi-cloud environments, and how quantum-resistant cryptography is rising to meet AI-driven threats. Don't miss this exciting recap of RSA 2025!