
Forum Discussion

Kevin_Bozman_15
May 14, 2014

Load balancing with persistence on two virtual servers (front end and back end)

My current setup consists of a pool of web servers and a pool of back-end servers. My front-end web pool has its own virtual server, the load balancing method is Round Robin, and the persistence profile is set to HTTP cookie. During the load test all the front-end servers get a nearly equal amount of traffic. This is good!

 

My back-end API servers are members of a different pool with a different virtual server. Currently the persistence profile is set to destination address affinity, with load balancing set to Round Robin or Least Connections; I’ve tried both. As far as functionality goes, when a user connects to a web server I want them to stay on that particular web server, and when they log in to the system and access the back-end API servers, I need the connection from their web server to their back-end server to stay the same for their session. My QA people tell me that this is happening.
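For reference, the back-end setup described here might look roughly like this in tmsh; the pool, virtual server, and addresses are all hypothetical placeholders, and dest_addr is BIG-IP's built-in destination address affinity profile:

```shell
# Hypothetical names/addresses; adjust to your environment.
# Back-end pool with the 5 API servers
tmsh create ltm pool pool_api members add { 10.0.1.1:8080 10.0.1.2:8080 10.0.1.3:8080 10.0.1.4:8080 10.0.1.5:8080 }

# Back-end virtual server using the built-in dest_addr persistence profile
tmsh create ltm virtual vs_api { destination 10.0.0.50:8080 pool pool_api persist replace-all-with { dest_addr } }
```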

 

The problem appears to be that the back-end servers are getting an uneven amount of traffic. With 8 web servers and 5 API servers, I’ve seen load tests where one pool member won’t get any traffic at all, another gets maybe 10%, and the other three split the remaining 90%.

 

I’m thinking that once a web server makes a connection to a back-end server it sticks, and during the whole load test (15 minutes) it will never try to connect to a different back-end server and distribute the load more equally. Any thoughts?

 

1 Reply

  • Just spitballing here, but destination address affinity is good for things like load balancing firewalls or caching proxies, where the destination address is something forward of the load balancer. You have 5 API servers in a pool, so given that a persistence record is created on the first load balancing decision, depending on timing you could potentially land 100% of the traffic on a single server. Have you tried source address persistence? At the very least that should spread the load across 8 persistence records, one per web server.
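If you want to try the suggestion above, switching the back-end virtual server from destination to source address persistence might look like this in tmsh; the virtual server name vs_api is a hypothetical placeholder, while source_addr is BIG-IP's built-in source address persistence profile:

```shell
# Swap the persistence profile on the (hypothetical) back-end virtual server
tmsh modify ltm virtual vs_api { persist replace-all-with { source_addr } }

# During the next load test, check how many persistence records exist;
# with 8 web servers as clients you'd expect up to 8 records
tmsh show ltm persistence persist-records
```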