Experience the power of F5 NGINX One with feature demos
Introduction

Introducing F5 NGINX One, a comprehensive solution designed to significantly enhance business operations through improved reliability and performance. At the core of NGINX One is our data plane, built on our world-class, lightweight, and high-performance NGINX software. This foundation provides the robust traffic management capabilities that modern digital businesses depend on, including API gateway, content caching, load balancing, and policy enforcement.

NGINX One includes a user-friendly, SaaS-based NGINX One Console that provides essential telemetry and oversees operations without requiring custom development or infrastructure changes. This visibility empowers teams to promptly address customer experience, security vulnerabilities, network performance, and compliance concerns. Deployable across a wide range of environments, NGINX One is a versatile tool for strengthening operational efficiency, security posture, and the overall digital experience.

NGINX One has several promising features on the horizon. Let's highlight three key features: Monitor Certificates and CVEs, Edit and Update Configurations, and Config Sync Groups, and delve into each in detail.

Monitor Certificates and CVEs: One of NGINX One's standout features is its ability to monitor Common Vulnerabilities and Exposures (CVEs) and certificate status. This functionality is crucial for maintaining application security in a continually evolving threat landscape. The CVE and certificate monitoring capability of NGINX One enables teams to:

Prioritize Remediation Efforts: With an accurate, up-to-date database of CVEs and a comprehensive certificate monitoring system, NGINX One helps teams prioritize vulnerabilities and certificate issues according to their severity, ensuring that critical security concerns are addressed without delay.
Maintain Compliance: Continuous monitoring for CVEs and certificates ensures that applications comply with security standards and regulations, which is crucial for industries subject to stringent compliance mandates.

Edit and Update Configurations: This feature empowers users to efficiently edit configurations and apply updates directly within the NGINX One Console. With configuration editing, you can:

Make Configuration Changes: Quickly adapt to changing application demands by modifying configurations, ensuring optimal performance and security.
Simplify Management: Eliminate the need to SSH into each instance to edit or update configurations.
Reduce Errors: The intuitive interface minimizes potential errors in configuration changes and enhances reliability by offering helpful recommendations.
Enhance Automation: The NGINX One SaaS Console integrates seamlessly into CI/CD and GitOps workflows, including GitHub, through a comprehensive set of APIs.

Config Sync Groups: The Config Sync Group feature is invaluable for environments running multiple NGINX instances. It ensures consistent configurations across all instances, enhancing application reliability and reducing administrative overhead. The Config Sync Group capability offers:

Automated Synchronization: Configurations are seamlessly synchronized across NGINX instances, guaranteeing that all applications operate with the most current and secure settings. When a Config Sync Group already has a defined configuration, that configuration is automatically pushed to instances as they join.
Scalability Support: Organizations can easily incorporate new NGINX instances as their infrastructure expands without compromising configuration integrity.
Minimized Configuration Drift: Maintaining consistency across environments prevents the application errors and vulnerabilities that configuration discrepancies can cause.
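To make the Config Sync Group idea concrete: the configuration kept in sync across a group is ordinary NGINX configuration. The following is a minimal, hypothetical sketch of the kind of reverse-proxy config an organization might edit in the console and push to every instance in a group; all names, addresses, and certificate paths are illustrative, not part of the product.

```nginx
# Hypothetical config kept identical across all instances in a sync group.
upstream app_backend {
    server 10.0.1.10:8080;   # illustrative backend addresses
    server 10.0.1.11:8080;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/certs/app.crt;   # certificate expiry is the
    ssl_certificate_key /etc/nginx/certs/app.key;   # kind of thing NGINX One monitors

    location / {
        proxy_pass http://app_backend;
    }
}
```

When any instance joins the group, a shared configuration like this is pushed to it automatically, which is what keeps drift to a minimum.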
Conclusion

NGINX One Cloud Console redefines digital monitoring and management by combining all the NGINX core capabilities and use cases. This all-encompassing platform is equipped with sophisticated features to simplify user interaction, drastically cut operational overhead and expenses, bolster security protocols, and broaden operational adaptability.

Read our announcement blog for more details on the launch. To explore the platform's capabilities and see it in action, we invite you to tune in to our webinar on September 25th. This is a great opportunity to witness firsthand how NGINX One can revolutionize your digital monitoring and management strategies.

VIPTest: Rapid Application Testing for F5 Environments
VIPTest is a Python-based tool for efficiently testing multiple URLs in F5 environments, allowing quick assessment of application behavior before and after configuration changes. It supports concurrent processing, handles various URL formats, and provides detailed reports on HTTP responses, TLS versions, and connectivity status, making it useful for migrations and routine maintenance.

Issue with worker_connections limits in Nginx+
Hello Nginx Community,

We are using NGINX Plus for our load balancer and have encountered a problem where the current worker_connections limit is insufficient. I need our monitoring system to check the current value of worker_connections for each NGINX worker process, to ensure that active worker_connections stay below the maximum allowed. The main issue is that I cannot determine the current number of connections for each NGINX worker process.

In my test configuration, I set worker_connections to 28 (a small value used only to reproduce the issue easily). With 32 worker processes, the total capacity should be 32 * 28 = 896 connections. Using the /api/9/connections endpoint, we can see the total number of active connections:

{
  "accepted": 2062055,
  "dropped": 4568,
  "active": 9,
  "idle": 28
}

Despite the relatively low number of active connections, the log file continually reports that worker_connections are insufficient. Additionally, as of NGINX Plus R30, there is an endpoint providing per-worker connection statistics (accepted, dropped, active, and idle connections, plus total and current requests). However, the reported values for active connections are much lower than 28:

$ curl -s http://<some_ip>/api/9/workers | jq | grep active
  "active": 2,
  "active": 0,
  "active": 1,
  "active": 2,
  "active": 1,
  "active": 1,
  "active": 0,
  "active": 0,
  "active": 3,
  "active": 0,
  "active": 0,
  "active": 0,
  "active": 2,
  "active": 2,
  "active": 0,
  "active": 1,
  "active": 0,
  "active": 0,
  "active": 0,
  "active": 0,
  "active": 0,
  "active": 0,
  "active": 0,
  "active": 2,
  "active": 1,
  "active": 2,
  "active": 1,
  "active": 0,
  "active": 1,
  "active": 0,
  "active": 0,
  "active": 1,

Could you please help us understand why active connections are reported as lower than the limit, yet the logs indicate that worker_connections are not enough? Thank you for your assistance.

Nginx Reverse Proxy issue for port other than 81
I have a backend Tomcat application which runs on port 8080 with IP 192.168.29.141. I am trying to reverse proxy using Nginx, for which I have created the configuration file below:

upstream tomcat {
    server 192.168.29.141:8080;
}

server {
    #listen 192.168.122.28:80;
    listen 192.168.122.28:81;
    server_name tomcat;

    location / {
        proxy_pass http://tomcat;
    }
}

When I load the page in a browser, the page is distorted and I get the following error in the browser console:

"Unsafe attempt to load URL http://tomcat/o/classic-theme/images/clay/icons.svg from frame with URL http://tomcat:81/. Domains, protocols and ports must match."

But when I run Nginx on port 80 instead of port 81, everything works fine. Is there anything I am missing in the configuration for a port other than 80? My Nginx server IP: 192.168.122.28. Browser screenshot when hitting the URL as http://tomcat:81

NGINX Virtual Machine Building with cloud-init
Traditionally, building new servers was a manual process. A system administrator had a run book with all the required steps and would perform each task one by one. If the admin had multiple servers to build, the same steps were repeated over and over. All public cloud compute platforms provide an automation tool called cloud-init that makes it easy to automate configuration tasks while a new VM instance is being launched. In this article, you will learn how to automate the process of building out a new NGINX Plus server using cloud-init.

F5 XC vk8s workload with Open Source Nginx
I have shared the code in the link below under DevCentral code share: F5 XC vk8s open source nginx deployment on RE | DevCentral

Here I will describe the basic steps for creating a workload object, an F5 XC custom Kubernetes object that creates Kubernetes deployments, pods, and ClusterIP-type services in the background. The free unprivileged nginx image is nginxinc/docker-nginx-unprivileged: Unprivileged NGINX Dockerfiles (github.com).

Create a virtual site that groups your Regional Edges and Customer Edges. After that, create the vk8s virtual Kubernetes object and relate it to the virtual site. Note: keep in mind the limitations of Kubernetes deployments on Regional Edges mentioned in Create Virtual K8s (vK8s) Object | F5 Distributed Cloud Tech Docs.

First create the workload object and select type "service", which can be related to a Regional Edge virtual site or a Customer Edge virtual site. Then select the container image that will be loaded from a public repository like GitHub or a private repo. You will need to configure an advertise policy that will expose the pod/container with a Kubernetes ClusterIP service. If you are deploying test containers, you will not need to advertise the container. To trigger commands at container start, you may need to use /bin/bash -c -- and an argument. Note: this is not required for this workload deployment; it is just an example.

Select to overwrite the default config file for the open source unprivileged nginx with a file mount. Note: the volume name shouldn't contain a dot, as that will cause issues. For the image options, select a repository with no rate limit, as otherwise you will see an error under the events for the pod. You can also configure commands and parameters to push to the container that will run on boot up.
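As an illustration of the file-mount override, here is a minimal sketch of the kind of config one might mount over the image's default. The listen port and paths are assumptions based on the upstream nginxinc/docker-nginx-unprivileged defaults, which serve on a non-privileged port so the container can run as a non-root user.

```nginx
# Minimal sketch of a config mounted over the unprivileged image's
# default server block; port and paths are illustrative assumptions.
server {
    listen 8080;             # non-privileged port, per the unprivileged image
    server_name _;

    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
```

The advertise policy's port should match whatever listen port the mounted config uses.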
You can use an empty dir on the virtual Kubernetes on the Regional Edges for volume mounts like the log directory or the Nginx cache zone, but the unprivileged Nginx by default exports its logs to the XC GUI, so there is no need. Note: this is not required for this workload deployment; it is just an example.

The logs and events can be seen under the pod dashboard, and the container/pod can even be accessed. Note: for some workloads you will need to direct the output to stderr to see the logs in the XC GUI, but not for nginx.

After that you can reference the auto-created Kubernetes ClusterIP service in an origin pool, using the workload name and the XC namespace (for example niki-nginx.default). Note: use the same virtual site where the workload was attached and the same port as in the advertise cluster config. Deployments and ClusterIP services can be created directly without a workload, but it is better to use the workload option.

When you modify the config of the nginx, you are actually modifying a ConfigMap that the XC workload created in the background and mounted as a volume in the deployment, but you will need to trigger a deployment recreation, which as of now is not supported by the XC GUI. From the GUI you can scale the workload to 0 pod instances and then back to 1, but a better solution is to use kubectl. You can log into the virtual Kubernetes like any other k8s environment using a cert, and then run the command "kubectl rollout restart deployment/niki-nginx". Just download the SSL/TLS cert. You can automate the entire process using the XC API and then use normal Kubernetes automation to run the restart command: F5 Distributed Cloud Services API for ves.io.schema.views.workload | F5 Distributed Cloud API Docs!
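The ConfigMap-change workflow above can be sketched as a short terminal session. The kubeconfig filename below is an illustrative assumption (it is whatever file you export from the XC console with the downloaded cert); the deployment name niki-nginx comes from the workload example in this article.

```
# Point kubectl at the vk8s kubeconfig exported from the XC console
# (filename is illustrative).
export KUBECONFIG=./ves_default_vk8s.yaml

# The workload appears as an ordinary deployment in the namespace.
kubectl get deployment niki-nginx

# Recreate the pods so the updated ConfigMap-mounted nginx config is picked up.
kubectl rollout restart deployment/niki-nginx
kubectl rollout status deployment/niki-nginx
```

The same restart can be driven from CI by calling the XC API first and then running these kubectl commands.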
F5 XC has added proxy_protocol support, so the nginx container can now work directly with the real client IP addresses without XFF HTTP headers, and with non-HTTP services like SMTP that nginx supports; this way XC can now act as a layer 7 proxy for email/SMTP traffic 😉. You just need to add the "proxy_protocol" directive and log the variable "$proxy_protocol_addr".

Related resources: For Nginx Plus deployments with advanced functions like SAML or OpenID Connect (OIDC), or the advanced functions of the Nginx Plus dynamic modules like njs, which allows JavaScript scripting (similar to F5 BIG-IP or BIG-IP Next TCL-based iRules), see:

Enable SAML SP on F5 XC Application
Bolt-on Auth with NGINX Plus and F5 Distributed Cloud
Dynamic Modules | NGINX Documentation
njs scripting language (nginx.org)
Accepting the PROXY Protocol | NGINX Documentation

Global Live Webinar (08/28): Deploy NGINX Faster Than You Can Say Azure: NGINXaaS Azure
Deploy NGINX Faster Than You Can Say Azure: NGINXaaS for Azure

Date: August 28, 2024
Time: 10:00am PT | 1:00pm ET
Speakers: Gee Chow, Solutions Architect, F5; Sundar Tiwari, Sr. Product Manager, F5

What's the webinar about?

NGINX as a Service is a fully hosted offering that is tightly integrated into the Azure ecosystem, making applications fast, efficient, and reliable with full lifecycle management of advanced NGINX traffic services. NGINXaaS for Azure, powered by NGINX Plus, eliminates the need to deploy and manage individual instances or clusters of NGINX, and removes the operational burden of managing machines and images. It unlocks all NGINX Plus use cases (API gateway, load balancer, programmable ADC, and cache) managed through various Azure management tools (portal, CLI, SDK, ARM, and Terraform). And NGINXaaS for Azure can scale to meet your business, technical, or security requirements as they develop.

In this in-depth session, our experts will cover:

Seamless Integration with the Azure Ecosystem: Discover how NGINX Plus integrates with essential Azure services such as Key Vault, Monitor, and Log Analytics, enhancing security and monitoring capabilities.
Smooth Migration Path: Learn the steps to transition your existing NGINX configurations to NGINX as a Service for Azure without hassle.
Continuous Innovation and Reliability: Understand how NGINXaaS for Azure keeps your instances cutting-edge and robust with automatic updates and built-in failover and service resiliency.
Cost-Effective Strategies: Leverage your Microsoft Azure Consumption Commitment (MACC) to make the most of NGINX as a Service for Azure.

Join our knowledgeable presenters, Gee Chow, Solutions Architect at F5, and Sundar Tiwari, Sr. Product Manager at F5, for a session that promises to empower your Azure experience with NGINX Plus. Register here

Securing and Scaling Hybrid Apps with F5 NGINX Part 4
In previous parts of our series, we learned that NGINX is superior to cloud load balancers for two reasons:

Breaking free from vendor lock-in: NGINX is a solution applicable to any infrastructure or environment.
Cloud providers offer basic load balancers that route and encrypt traffic to endpoints. They lack in:
  Visibility: logs, traces, and statistics.
  Functionality: advanced traffic management and security use cases (see parts 2 and 3).

Functionality is especially important when scaling the environment to multiple cluster groups. The bulk of this section addresses best practices for scaling the architecture from parts 2 and 3. Below I depict a reference architecture that replicates my Kubernetes cluster with an NGINX Ingress Controller deployment and an NGINX load balancer with HA (high availability).

If you recall from parts 2 and 3 of our series, I configured many ZT (Zero Trust) use cases on my NGINX Plus external load balancer. I replicated my NGINX Plus external load balancer into an active-active HA setup with NGINX Plus based on keepalived and VRRP. The method of fully rolling out HA in production will vary slightly depending on my environment.

Public Cloud

If I am scaling the architecture in a public cloud environment, I can replicate the NGINX Plus load balancers with cloud auto-scaling groups and front them with F5 BIG-IP. I can also enable health monitors on my BIG-IP so that unresponsive connections fail over to healthy NGINX Plus load balancers.

On-Premises

If I am scaling my architecture on-prem, I can replicate NGINX Plus load balancers with additional bare metal machines or use virtualization software of my choosing. The HA solution with NGINX Plus on-prem can be set up in three different modes:

Active-Passive: One instance is active and the other is a redundancy node. The VIP (virtual IP) switches over to the redundancy node when the master node fails.
Active-Active: Both instances are active and serving traffic.
Two VIPs are required, where each VIP is assigned to an instance. If one instance becomes unavailable, its VIP switches over to the other instance, and vice versa.
Active-Active-Passive: Adding a redundancy node to the active-active HA pair results in a three-node cluster group. The redundancy node switches on when both active nodes are down.

Choosing between these modes depends on my current priorities. With the active-passive model, I trade efficiency for lower cost: the active node is prone to overloading while the redundant node sits idle, mostly not serving traffic. With the active-active or active-active-passive model, I trade cost for better efficiency; however, I will need two VIPs and a DNS load balancer (F5 GTM) fronting my NGINX HA cluster. The table below compares the three models by cost, efficiency, and scale:

Mode                   | Cost   | Efficiency | Scale
Active-Passive         | Low    | Low        | Low
Active-Active          | Medium | High       | Medium
Active-Active-Passive  | High   | Medium     | High

If I have the money to spend and choose both efficiency and scale, then active-active or active-active-passive is the right choice.

Synchronizing data across NGINX Plus Cluster Nodes

Recalling parts 2 and 3, we went through several ZT use cases with the NGINX Plus load balancer. Many of these use cases require shared memory zones to store data and authenticate/authorize users. When scaling out the Zero Trust architecture with HA, the key-value shared memory zones should be synchronized between NGINX Plus instances to ensure consistency. Take, for example, a popular ZT use case, OIDC authentication: tokens are stored in the key-value storage to examine users attempting access to protected back-end applications. We can extend our configuration and enable key-value zone sync with two additional steps:

Open a TCP medium where key-value data is exchanged.
You can also enable SSL on the TCP medium for extra security.
Append the optional sync directive to enable synchronization of the key-value shared memory zones defined in openid_connect_configuration.conf from part 2.

Testing the Synchronization

You can test/validate the synchronization by leveraging the NGINX Plus API to pull data from individual cluster nodes and comparing the results; the data pulled from each cluster node should be identical. You can connect to the NGINX cluster nodes via SSH and enter:

$ curl http://localhost:8010/api/7/http/keyvals/oidc_access_tokens

The response data from each NGINX cluster node will match when zone sync is enabled.

Data Telemetry with NGINX Management Interfaces

As my IT organization grows, so will my NGINX cluster groups. Inevitably, I will need a solution that manages the complexities arising from expanding NGINX cluster groups to alternative regions and cloud environments. With NGINX management tools, you can:

Track your NGINX inventory for common CVEs and expired certificates
Stage/push config templates to NGINX cluster groups
Collect and aggregate metrics from NGINX cluster groups
Use our Flexible Consumption Plan (FCP) model

Installation and Deployment Methods

There are two ways to get started with NGINX management capabilities:

F5 Distributed Cloud (XC): No installation or deployment is required. Simply log into XC and access the NGINX One SaaS console. Get started with NGINX One Early Access.
Self-Managed Installation: You can deploy NGINX Instance Manager to comply with policies and regulations that make the use of a SaaS console not feasible, for example in air-gapped environments inaccessible from the public internet. You can install and manage your own NGINX Instance Manager deployments by following our documentation.

Once signed into my NGINX SaaS console, I can install agents on my NGINX HA cluster pair to discover them.
$ curl https://agent.connect.nginx.com/nginx-agent/install | DATA_PLANE_KEY="z" sh -s -- -y
$ sudo systemctl start nginx-agent

I can track my overall NGINX usage and telemetry from either the UI console or the APIs. Under FCPs (Flexible Consumption Plans), consumption is measured yearly based on the number of managed instances. This model is becoming increasingly popular as customers increasingly opt for flexible pay-as-you-go licensing.

Setting up F5 BIG-IP

I touched on two options to configure F5 BIG-IP depending on my cloud environment. In on-prem environments with active-active HA targets, I need to configure F5 DNS load balancing. I will now step through how to configure DNS load balancing on BIG-IP.

The first step is to create a VS (virtual server) listening on UDP with service port 53 for DNS. Then I create a wide IP with the name 'nginxdemo.f5lab.com'. This will be the domain I use to connect to my BIG-IP DNS load balancer. If I click on the 'Pools' tab, I can see my gslbPool members, where each member corresponds to a VIP assigned to one of my NGINX Plus HA cluster nodes. I also need to create a data center with a server list containing the NGINX HA cluster nodes and the BIG-IP DNS system. Under DNS >> GSLB : Servers : Server List, I can start adding my NGINX members and the BIG-IP DNS system.

Note: In a public cloud environment, I typically will not need to configure GSLB on the BIG-IP. I can simply create a VirtualServer with HTTPS and service port 443, and NGINX Plus pool members with a health monitor for redundancy failover.

Conclusion

As we progressed through this series, I expanded my architecture to address the scalability concerns that inevitably surface in any business. However, the IT architecture of a business needs to be flexible and agile if it wants to thrive, especially in this modern competitive landscape. The solutions I presented in this series are fundamental building blocks that can technically be implemented anywhere.
They enable organizations to quickly maneuver and seek out alternative options when current ones are no longer viable, which brings me to the topic of AI. How will enterprises adopt AI in the present and future? Ultimately, it will come down to extending reference architectures (like the ones discussed in this series) with AI components (LLMs, vector DBs, RAG, etc.). These components plug into the overall architecture to improve automation and efficiency in the overall business model. In the next series, we will discuss AI reference architectures with F5/NGINX.