F5 NGINX One Console June Features
Introduction

We are happy to announce the new set of F5 NGINX One Console features released in June:

• Fleet Visibility Alerts
• Import/Export for Staged Configurations

The F5 NGINX One Console is a central management service in the F5 Distributed Cloud. It makes it easier to deploy, manage, monitor, and operate F5 NGINX. It is available to all NGINX and Distributed Cloud subscribers and is included as part of your subscription. If you have any questions about access, please reach out to your F5 Customer Success or Account team.

Fleet Visibility Alerts

The NGINX One Console now creates Alerts for important notifications related to your NGINX fleet. For example, if a connected NGINX instance has an insecure configuration, or an instance is impacted by a newly announced Critical Vulnerability, an Alert is generated. Using Distributed Cloud's robust Alerting system, you can configure notifications to be sent to your preferred system, such as SMS, email, Slack, PagerDuty, or a webhook.

Find the full list of Alerts here: https://docs.cloud.f5.com/docs-v2/platform/reference/alerts-reference

See instructions on how to set up Alert Receivers and Policies here: https://docs.cloud.f5.com/docs-v2/shared-configuration/how-tos/alerting/alerts-email-sms

Import/Export for Staged Configurations

Effortlessly share your configurations with the new import/export feature. It makes collaboration easier: you can quickly share configurations with teammates, F5 support, or the community, and just as easily import configurations from others. Whether you're fine-tuning configurations for your team or seeking advice, this update makes sharing and receiving configurations simple and efficient. It also gives those who prefer the NGINX One Console configuration editing experience but operate in dark or air-gapped environments an easy way to move configurations from the NGINX One Console to their instances.
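Since staged-configuration archives are plain .tar.gz files, they can be produced and unpacked with standard tooling. Below is a minimal Python sketch of the round trip; the directory layout, file names, and `etc/nginx` arcname are illustrative assumptions, not a prescribed NGINX One format.

```python
# Sketch: packaging an NGINX configuration directory as a .tar.gz archive
# (the archive format the NGINX One Console import/export works with) and
# unpacking it again. Paths and file names are illustrative assumptions.
import tarfile
import tempfile
from pathlib import Path

def export_config(conf_dir: Path, archive: Path) -> None:
    """Bundle a configuration tree into a .tar.gz archive (export side)."""
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(conf_dir, arcname="etc/nginx")

def import_config(archive: Path, dest: Path) -> None:
    """Unpack a .tar.gz archive into a destination directory (import side)."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)

# Demo with a throwaway directory layout:
tmp = Path(tempfile.mkdtemp())
conf_dir = tmp / "etc" / "nginx"
conf_dir.mkdir(parents=True)
(conf_dir / "nginx.conf").write_text("events {}\nhttp { server { listen 80; } }\n")

export_config(conf_dir, tmp / "staged-config.tar.gz")
import_config(tmp / "staged-config.tar.gz", tmp / "unpacked")
print((tmp / "unpacked" / "etc" / "nginx" / "nginx.conf").read_text())
```

The same archive produced here could be uploaded via the Console UI or API, or unpacked by hand on an air-gapped instance.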
You can craft and refine configurations within the NGINX One Console, then export them for deployment in isolated instances without hassle. Similarly, you can export configurations from the Console and import them into F5 NGINXaaS for Azure. The process is highly flexible and intuitive:

• Create new Staged Configurations by importing configuration files directly as a .tar.gz archive via the UI or API.
• Export Staged Configurations with just one click or API call, generating a .tar.gz archive that's ready to be unpacked and applied wherever you need your configuration files.

See the documentation for more information: [N1 docs link]

Find all the latest additions to the NGINX One Console in our changelog: https://docs.nginx.com/nginx-one/changelog/

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX One Console is part of F5's Application Delivery & Security Platform, which helps organizations deliver, improve, and secure applications and APIs. This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX One Console is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. As a key part of the NGINX One package, NGINX One Console adds features to open-source NGINX designed for enterprise-grade performance, scalability, and security.

Regional Edge Resiliency Zones and Virtual Sites
Introduction:

This article is a follow-up to my earlier article, F5 Distributed Cloud: Virtual Sites – Regional Edge (RE). In that article, I talked about how to build custom topologies using Virtual Sites on our SaaS data plane, aka Regional Edges. In this article, we'll review an update to our Regional Edge architecture, along with some best practices for Virtual Sites that come with it.

As F5 has seen continuous growth in utilization of the Distributed Cloud platform, we've needed to expand our capacity, and we have done so through many different methods over the years. One strategic approach is building new POPs. However, even with new POPs, certain regions of the world have such a high density of connectivity that they will always see higher utilization than other regions. A perfect example is Ashburn, Virginia in the United States. Within the Ashburn POP, with its high density of connectivity and utilization, we could simply "throw compute at it" within common software stacks. Instead, F5 has decided to provide additional benefits alongside capacity expansion by introducing what we're calling "Resiliency Zones".

Introduction to Resiliency Zones:

What is a Resiliency Zone? A Resiliency Zone is simply another Regional Edge cluster within the same metropolitan (metro) area. These Resiliency Zones may be within the same POP, or within a common campus of POPs. They are made up of dedicated compute structures and have their own network hardware for the different networks that make up our Regional Edge infrastructure.

So why not follow in AWS's footsteps and call these Availability Zones? While in some cases we may split Resiliency Zones across a campus of data centers, in separate physical buildings, that may not always be the design.
It is possible that the Resiliency Zones are within the same facility, split between racks. We didn't feel this level of separation provided a full Availability Zone-like infrastructure as AWS has built out. Remember, F5's services are globally significant, while most cloud providers' services are locally significant to a region and a set of Availability Zones (in AWS's case). While we strive to ensure our services are protected from catastrophic failures, the global availability of F5 Distributed Cloud's services allows us to be more condensed in our data center footprint within a single region or metro.

I spoke of "additional benefits" above; let's look at those. With Resiliency Zones, we've created the ability to scale our infrastructure both horizontally and vertically within our POPs. We've also created isolated fault and operational domains. I personally believe the operational domain is most critical. Today, when we do maintenance on a Regional Edge, all traffic to that Regional Edge is rerouted to another POP for service. With Resiliency Zones, while one Regional Edge "Zone" is under maintenance, the other Regional Edge Zone(s) can handle the traffic, keeping it local to the same POP. In some regions of the world, this is critical to keeping traffic within the same region and country.

What to Expect with Resiliency Zones

Resiliency Zone Visibility: Now that we have a little background on what Resiliency Zones are, what should you expect and look out for? You will begin to see Regional Edges within Console that have a letter prepended to them. For example, alongside the original Regional Edge "dc12-ash", you'll see another Regional Edge "b-dc12-ash". We will not be appending an "a" to the original Regional Edge. As I write this article, the Resiliency Zones have not been released for routing traffic; they will be soon (June 2025). You can, however, see the first Resiliency Zone today if you use all Regional Edges by default.
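The naming convention above (an original "dc12-ash" plus a zone-lettered peer "b-dc12-ash") means you can mechanically relate a Resiliency Zone back to its base site. Here is a small Python sketch of that grouping; the single-letter-prefix parsing rule is an illustrative assumption drawn from the example names, not an official F5 naming specification.

```python
# Sketch: grouping Regional Edge names by base site, assuming the
# zone-letter prefix convention described above ("b-dc12-ash" is a
# Resiliency Zone of "dc12-ash"). Illustrative only.
import re
from collections import defaultdict

def base_site(name: str) -> str:
    """Strip a single-letter Resiliency Zone prefix such as 'b-'."""
    m = re.match(r"^[a-z]-(.+)$", name)
    return m.group(1) if m else name

sites = ["dc12-ash", "b-dc12-ash"]
by_metro = defaultdict(list)
for s in sites:
    by_metro[base_site(s)].append(s)

print(dict(by_metro))  # {'dc12-ash': ['dc12-ash', 'b-dc12-ash']}
```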
If you navigate to a Performance Dashboard for a Load Balancer, look at the Origin Servers tab, and sort/filter for dc12-ash, you'll see both dc12-ash and b-dc12-ash.

Customer Edge Tunnels: Customer Edge (CE) sites will not yet terminate their tunnels onto a Resiliency Zone. We're working to make sure we have the right rules for tunnel termination in different POPs, and we also plan to give customers the option to choose whether they want tunnels in the same POP across Resiliency Zones. Once the logic and capabilities are in place, we'll allow CE tunnels to terminate on Resiliency Zone Regional Edges.

Site Selection and Virtual Sites: A Resiliency Zone should not be chosen as the only site or virtual site available for an origin. We've built safeguards into the UI that will give you an error if you try to assign Resiliency Zone RE sites without the original RE site in the same association. For example, you cannot apply b-dc12-ash to an origin configuration without also including dc12-ash.

If you're unfamiliar with Virtual Sites on F5's Regional Edge data planes, please refer to the link at the top of this article. When setting up a Virtual Site, we use a site selector label. In my earlier article, I highlight the labels that are associated per site. What we see used most often are: Country, Region, and SiteName. If you choose to use SiteName, your Virtual Site will not automatically add the new Resiliency Zone. For example, if your site selector uses SiteName in dc12-ash, then when b-dc12-ash comes online it will not be matched and automatically used for additional capacity. Whereas if you used "country in USA" or "region in Ashburn", both dc12-ash and b-dc12-ash would be available to your services right away.

Best Practices for Virtual Sites: What is the best practice when it comes to Virtual Sites? I wouldn't be in tech if I didn't say "it depends". It is ultimately up to you how much control you want versus how much operational overhead you're willing to carry.
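The SiteName-versus-Country/Region selector behavior described above can be sketched in a few lines of Python. The site labels and the matching logic here are simplified assumptions for illustration, not the exact Distributed Cloud label schema or selector syntax.

```python
# Sketch: why a SiteName selector misses a new Resiliency Zone while a
# country/region selector picks it up automatically. Labels and matching
# are simplified assumptions, not the real Distributed Cloud schema.
SITES = {
    "dc12-ash":   {"siteName": "dc12-ash",   "country": "USA", "region": "Ashburn"},
    "b-dc12-ash": {"siteName": "b-dc12-ash", "country": "USA", "region": "Ashburn"},
}

def select(label: str, value: str) -> list[str]:
    """Return the sites whose label matches the given value."""
    return [name for name, labels in SITES.items() if labels.get(label) == value]

# A SiteName-based selector stays pinned to the original RE only:
assert select("siteName", "dc12-ash") == ["dc12-ash"]

# A region- or country-based selector includes the new zone automatically:
assert sorted(select("region", "Ashburn")) == ["b-dc12-ash", "dc12-ash"]
```

When b-dc12-ash "comes online" in this model, nothing about the region/country selector has to change, which mirrors the low-operational-overhead option discussed in the article.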
Some people may say they don't want to manage their Virtual Sites every time F5 changes capacity, whether that means adding new Regional Edges in new POPs or adding Resiliency Zones to existing POPs. Others may say they want to control when traffic starts routing through new capacity and infrastructure to their origins. Oftentimes this control is to ensure customer-controlled security (firewall rules, network security groups, geo-IP databases, etc.) is approved and updated first. As shown in the graph, the more control you want, the more operations you will maintain.

What would I recommend? I would go less granular in setting up Regional Edge Virtual Sites, because I would want as much compute capacity as close as possible to the clients of my applications behind F5 services. I'd also want attackers, bots, and other traffic that isn't an actual client to have security applied as close as possible to the source. Lastly, as L7 DDoS continues to rise, the more points of presence I have for L7 security, the better my chances of mitigating an attack.

To achieve a less granular approach to Virtual Sites, it is critical to:

• Pay attention to our maintenance notices. If we're adding IP prefixes to our allowed firewall/proxy list of IPs, we will send notice well in advance of these new prefixes becoming active.
• Update your firewall's security groups, and verify with your geo-IP database provider.
• Understand your client-side/downstream/VIP strategy versus your server-side/upstream/origin strategy, and what the different Virtual Site models might impact.
• When in doubt, ask. Ask for help from your F5 account team, or open a support ticket. We're here to help.

Summary: F5's Distributed Cloud platform needed an additional scaling mechanism for the infrastructure offering services to its customers. To meet that need, we decided to add capacity through more Regional Edges within a common POP.
This strategy offers both F5 and customer operations teams enhanced flexibility. Remember, Resiliency Zones are just another Regional Edge. I hope this article is helpful, and please let me know what you think in the comments below.

Always only one of two IP addresses as response
Hello forum,

My colleagues are using a learning platform. Unfortunately, the provider is having a problem: there are two IP addresses behind the URL, which appear to be randomly assigned, and one of them is unreachable. What do I need to do with the SSLO so that only one of the IPs is used via our proxy chain? I imagine it like this: when a user visits xyz.com, they always get the 1.2.3.4 IP address in response, never 5.6.7.8. Is that possible?

BIG-IP 16.1.x End of Technical Support July 31, 2025
Hello, Community! I wanted to share an important update regarding BIG-IP 16.1.x. As of July 31, 2025, this version will officially reach End of Technical Support (EoTS). If you are on version 16.1.x and haven't started planning your upgrade, now is the perfect time. Keeping your system on supported software ensures continued technical support and software development support, and planning ahead fosters a smooth transition. To help you navigate this update, I have compiled a list of Knowledge Articles that can assist in planning your upgrade:

• K000139937: BIG-IP 15.1.x and 16.1.x are reaching End of Technical Support
• K5903: BIG-IP software support policy
• K84554955: Overview of BIG-IP system software upgrades
• K13845: Overview of supported BIG-IP upgrade paths and an upgrade planning reference
• K18074701: iHealth Upgrade Advisor
• K7727: License activation may be required before a software upgrade for BIG-IP
• K16022: Opening a proactive service request with F5 Support

If you have any questions, please feel free to leave them below or contact F5 Support for customized assistance. Together we can keep your systems secure, supported, and optimized.

All possible domain names that F5 may need to communicate with
I want to request a comprehensive list of all possible domain names that the F5 BIG-IP system may need to communicate with, along with a mapping of each domain to its corresponding module or feature (e.g., ASM, IP Intelligence, Bot Defense, iHealth, Licensing, etc.). Does anyone have an answer?

Announcing F5 NGINX Instance Manager 2.20
We're thrilled to announce the release of F5 NGINX Instance Manager 2.20, now available for download! This update focuses on improving accessibility, simplifying deployments, enhancing observability, and enriching the user experience based on valuable customer feedback.

What's New in This Release?

Lightweight Mode for NGINX Instance Manager

Reduce resource usage with the new "Lightweight Mode", which allows you to deploy F5 NGINX Instance Manager without requiring a ClickHouse database. While metrics and events will no longer be available without ClickHouse, all other instance management functionalities, such as certificate management, WAF, and templates, work seamlessly across VM, Docker, and Kubernetes installations. With this change, ClickHouse becomes optional for deployments. Customers who require metrics and events should continue to include ClickHouse in their setup. For those focused on basic use cases, Lightweight Mode offers a streamlined deployment that reduces system complexity while maintaining core functionality for essential tasks.

Lightweight Mode is perfect for customers who need simplified management capabilities for scenarios such as:

• Fleet Management
• WAF Configuration
• Usage Reporting as Part of Your Subscription (for NGINX Plus R33 or later)
• Certificate Management
• Managing Templates
• Scanning Instances
• Enabling API-based GitOps for Configuration Management

In testing, NGINX Instance Manager worked well without ClickHouse, needing only 1 CPU and 1 GB of RAM to manage up to 10 instances (without App Protect). However, this represents the absolute minimum configuration and may cause performance issues depending on your use case. For optimal performance, we recommend allocating more appropriate system resources. See the updated technical specification in the documentation for more details.
Support for Multiple Subscriptions

Align and consolidate usage from multiple NGINX Plus subscriptions on a single NGINX Instance Manager instance. This feature, added with NGINX Plus R33, is especially beneficial for customers who use NGINX Instance Manager as a reporting endpoint, even in disconnected or air-gapped environments.

Improved Licensing and Reporting for Disconnected Environments

Managing NGINX Instance Manager in environments with no outbound internet connectivity is now simpler. Customers can configure NGINX Instance Manager to use a forward proxy for licensing and reporting. For truly air-gapped environments, we've improved offline licensing: upload your license JWT to activate all features, and enjoy a 90-day grace period to submit an initial report to F5. We've also revamped the usage reporting script to be more intuitive and backwards-compatible with older versions.

Enhanced User Interface

We've modernized the NGINX Instance Manager UI to streamline navigation and make it consistent with the F5 NGINX One Console. Features are now grouped into submenus for easier access, and breadcrumbs have been added to all pages for improved usability.

Instance Export Enhancements

We've added the ability to export instances and instance groups, simplifying the process of managing and sharing configuration details. This improvement makes it easier to keep track of large deployments and maintain consistency across environments.

Performance and Stability Improvements

As with every update, we've made performance and stability improvements to ensure NGINX Instance Manager runs smoothly in all environments, and we've addressed multiple bug fixes to improve stability and reliability. For details on all the fixes included, please see the release notes.
Platform Improvements and Helm Chart Migration

We've made significant enhancements to the Helm charts to simplify installing NGINX Instance Manager in Kubernetes environments. Starting with this release, the Helm charts have moved to a new repository: nginx-stable/nim, with chart version 2.0. Note: NGINX Instance Manager versions 2.19 and lower will remain in the old repository, nms-stable/nms-hybrid. Be sure to update your configurations accordingly when upgrading to version 2.20 or later.

Looking Ahead: Security, Modernization, and Kubernetes Innovations

As part of the F5 NGINX One product offering, NGINX Instance Manager continues to evolve to meet the demands of modern infrastructures. We're committed to improving security, scalability, usability, and observability to align with your needs. Although support for the latest F5 NGINX Agent v3 is not included in this release, we are actively exploring ways to enable it later this year to bring additional value to both NGINX Instance Manager and the NGINX One Console. Additionally, we're exploring new ways to enhance support for data plane NGINX deployments, particularly in Kubernetes environments. Stay tuned for updates as we continue to innovate for cloud-native and containerized workloads. We're eager to hear your feedback to help shape the roadmap for future releases.

Get Started Now

To explore the new Lightweight Mode, enhanced UI, and updated features, download NGINX Instance Manager 2.20. For more details on bug fixes and performance improvements, check out the full release notes.

The NGINX Impact in F5's Application Delivery & Security Platform

NGINX Instance Manager is part of F5's Application Delivery & Security Platform. It helps organizations deliver, optimize, and secure modern applications and APIs.
This platform is a unified solution designed to ensure reliable performance, robust security, and seamless scalability for applications deployed across cloud, hybrid, and edge architectures. NGINX Instance Manager is also a key component of NGINX One, the all-in-one, subscription-based package that unifies all of NGINX's capabilities. NGINX One brings together the features of NGINX Plus, F5 NGINX App Protect, and NGINX Kubernetes and management solutions into a single, easy-to-consume package. A cornerstone of the NGINX One package, NGINX Instance Manager extends the capabilities of open-source NGINX with features designed specifically for enterprise-grade performance, scalability, and security.

Multiple Default Gateways
Hi, I have a question regarding default gateway configuration. Please refer to the following setup: we currently have an L4 HA device setup (R2600) in Active-Standby mode, and both units are in the same subnet. The default gateway is set to 192.168.1.1, which is a VRRP address on the L3 side. Since F5 devices synchronize route configurations across devices during config sync, any change to the routing table is applied to both units. Given this, is it possible to configure different default gateways per device in an F5 HA pair? Specifically, I would like to set each unit's default gateway to the real IP of a different L3 device:

• Default GW for L4 #1: 192.168.1.2
• Default GW for L4 #2: 192.168.1.3

I'd like to hear the opinion of experts on whether this is possible and if there is a supported way to achieve it. Thank you.