Forum Discussion

sriramsm919
Nimbostratus
Aug 18, 2022

Only one VM in the load balancer pool is receiving higher traffic when mirrored via Nginx

Our existing live application is a web service running on-prem behind an F5 load-balancing pool. The application service is exposed on port 10200 on all pool members. We are now introducing an NGINX reverse proxy on the same host members, exposed on port 80, so that all live traffic goes through NGINX on port 80 and is routed to the upstream application on port 10200 on the same host (localhost:10200).

We wanted to enable port 80 on only one pool member first, to test the traffic and flow before enabling port 80 on all members. So we did a canary deployment and opened up a single host to receive the on-prem traffic. With both the application port (10200) and the NGINX port (80) exposed on that host, we noticed that the number of requests is much higher on this node alone, while the other members of the load-balancing pool receive significantly less traffic. When we disable the application port (10200), requests are split equally again.

I want to understand this load balancer behaviour. There is no persistence profile, the load-balancing method is Round Robin, and we are using the HTTPS protocol.
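
For reference, the NGINX piece on the canary host is a plain reverse proxy from port 80 to the local application port, roughly like the sketch below (directive values other than the ports are illustrative):

    # Simplified sketch of the canary host's NGINX config (illustrative values)
    server {
        listen 80;                               # new NGINX entry point on the host
        server_name _;

        location / {
            proxy_pass http://127.0.0.1:10200;   # existing application port on the same host
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }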

1 Reply

  • Hello, your description is really not clear and it is hard to read. I did not get which comes first, the F5 or NGINX, or the other way around.

     

    If NGINX is in front of the F5 device, then it may be that NGINX is sending many requests from different clients over the same TCP connection to the F5, which is why the F5 could be selecting one pool member for that traffic. Adding an F5 OneConnect profile may help, as this way the F5 will know that there are different HTTP requests inside a single TCP session (see the tmsh sketch below the quoted article):

     

    https://support.f5.com/csp/article/K7208

     

    -----

    By default, the BIG-IP system performs load balancing once for each TCP connection, rather than for each HTTP request within that connection. After the initial TCP connection is load balanced, all HTTP requests seen on the same connection are sent to the same pool member. You can modify this behavior by forcing the server-side connection to detach after each HTTP request, which in turn allows a new load balancing decision according to changing persistence information in the HTTP request.

     

    ---------

    https://support.f5.com/csp/article/K7964
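
    As a rough sketch of the OneConnect suggestion (object names here are placeholders, and this assumes the virtual server already has an HTTP profile so the BIG-IP can see the individual requests), from tmsh it would look something like:

        # Create a OneConnect profile; a /32 source mask reuses idle server-side
        # connections only for the same client source address.
        tmsh create ltm profile one-connect oneconnect_canary source-mask 255.255.255.255

        # Attach it to the virtual server that fronts the pool (placeholder name).
        tmsh modify ltm virtual vs_myapp_443 profiles add { oneconnect_canary }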

     

     

    Or you could disable the similar function on the NGINX side, so that it does not reuse the same TCP session for many requests to the F5 device (see the sketch after the quote below):

    ---

    With other load‑balancing tools, this technique is sometimes called multiplexing, connection pooling, connection reuse, or OneConnect.

    ---

    https://www.nginx.com/blog/load-balancing-with-nginx-plus-part-2/
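
    A sketch of that NGINX-side option (the upstream name and address are placeholders for the F5 virtual server): leaving out the keepalive directive and its companion settings means NGINX opens a fresh upstream connection per request instead of multiplexing many requests onto one TCP session:

        upstream f5_vip {
            server 203.0.113.10:443;
            # No "keepalive" directive here, so upstream connections are not reused.
        }

        server {
            listen 80;
            location / {
                proxy_pass https://f5_vip;
                # Also omit the usual connection-reuse companions:
                #   proxy_http_version 1.1;
                #   proxy_set_header Connection "";
            }
        }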