Issue with worker_connections limits in Nginx+
Hello Nginx Community,
We are using Nginx+ for our load balancer and have run into a problem where the current worker_connections limit is insufficient.
I need our monitoring system to check the current number of connections for each Nginx worker process, so we can verify that active connections stay below the configured worker_connections limit.
The main issue is that I cannot determine the current number of connections for each Nginx worker process.
In my test configuration, I set worker_connections to 28 (which is a small value used only for easily reproducing the issue). With 32 worker processes, the total capacity should be 32 * 28 = 896 connections.
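For completeness, here is roughly how I double-check the effective values and the theoretical capacity on the host (just a sketch; nginx -T dumps the configuration the running binary actually loaded, and the grep pattern is only an example):
$ nginx -T 2>/dev/null | grep -E 'worker_processes|worker_connections|worker_rlimit_nofile'
$ echo $((32 * 28))   # worker_processes * worker_connections = total theoretical capacity
896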
Using the /api/9/connections endpoint, we can see the total number of active connections:
{
"accepted": 2062055,
"dropped": 4568,
"active": 9,
"idle": 28
}
Despite the relatively low number of active connections, the log file continually reports "worker_connections are not enough".
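As a rough sketch, this is the kind of aggregate check I had in mind against that endpoint (the 896 capacity figure is just from my test values above):
# Compare total active connections against the configured capacity (32 * 28 = 896)
CAPACITY=896
ACTIVE=$(curl -s http://<some_ip>/api/9/connections | jq '.active')
echo "active=${ACTIVE} capacity=${CAPACITY}"
# warn when above ~80% of total capacity
[ "$ACTIVE" -gt $((CAPACITY * 80 / 100)) ] && echo "WARNING: above 80% of total capacity"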
Additionally, as of Nginx+ R30, there is an endpoint providing per-worker connection statistics (accepted, dropped, active, and idle connections, total and current requests). However, the reported values for active connections are much lower than 28:
$ curl -s http://<some_ip>/api/9/workers | jq | grep active
"active": 2,
"active": 0,
"active": 1,
"active": 2,
"active": 1,
"active": 1,
"active": 0,
"active": 0,
"active": 3,
"active": 0,
"active": 0,
"active": 0,
"active": 2,
"active": 2,
"active": 0,
"active": 1,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 0,
"active": 2,
"active": 1,
"active": 2,
"active": 1,
"active": 0,
"active": 1,
"active": 0,
"active": 0,
"active": 1,
Could you please help us understand why the active connections are reported as lower than the limit, yet we receive logs indicating that worker_connections are not enough?
Thank you for your assistance.
Not an expert on optimization, but I remember the videos about optimizing Nginx. Also, since you are using Nginx as a reverse proxy, you may need to take client-side and server-side connections into account at the same time: each proxied request holds both a connection to the client and a connection to the upstream, and both count against worker_connections.
Performance-Tuning NGINX Open Source and NGINX Plus | F5
Tuning NGINX for Performance (f5.com)
Class 8: Performance Tuning NGINX Plus (f5.com)
To optimize the server side, you could also look at TCP multiplexing (similar to an F5 OneConnect profile), i.e. reusing keepalive connections to the upstream servers:
Load Balancing with NGINX and NGINX Plus, Part 2 (f5.com)
koef (Nimbostratus) replied:
Hello, Nikoolayy1
Thanks for the reply, but my main question is how to predict that the situation is getting worse.
koef, you can send the logs to an external SIEM like ELK or Splunk and build dashboards that track the connection metrics, so you can see whether they start to increase over days or weeks. If you are on OpenShift/Kubernetes, Prometheus is an option as well; a rough example of the exporter command is shown below, after the links.
Nginx | Documentation (elastic.co)
GitHub - nginxinc/nginx-prometheus-exporter: NGINX Prometheus Exporter for NGINX and NGINX Plus
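I have not run the exporter against Plus myself, but from what I recall of its README the invocation looks roughly like this (treat the flag names as an assumption and check the project docs for your version):
# Rough sketch: point the exporter at the NGINX Plus API and let Prometheus scrape it
$ nginx-prometheus-exporter -nginx.plus -nginx.scrape-uri=http://<some_ip>/api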
You can also review the new Nginx One Cloud GUI dashboard:
Product Preview: NGINX One | F5
F5 NGINX One Cloud Console | NGINX Documentation
The New Relic plug-in also seems nice, but I have not used it:
Monitoring NGINX and NGINX Plus with the New Relic Plug-In | NGINX Documentation
I also suggest reading Avoiding the Top 10 NGINX Configuration Mistakes (f5.com): when proxying, each connection can use two file descriptors (one to the client and one to the upstream), so with 896 connections you need roughly twice as many file descriptors allowed via worker_rlimit_nofile. This is also covered in the optimization links I shared, but I don't know if you went through those. Also test with worker_processes set to auto, as in some cases the distribution across cores is better that way. A quick way to eyeball the descriptor usage per worker is sketched below.
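Just a sketch of what I mean (assumes Linux /proc and that you can read the workers' fd directories, e.g. as root):
# For each nginx worker, compare open file descriptors with the process soft limit
for pid in $(pgrep -f 'nginx: worker process'); do
    open=$(ls /proc/$pid/fd | wc -l)
    limit=$(awk '/Max open files/ {print $4}' /proc/$pid/limits)
    echo "worker pid ${pid}: ${open} open fds (soft limit ${limit})"
done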
Hey, what is the status on this? 🙂