Forum Discussion
Distribution order of Load Balanced Servers - Round Robin
I would like to understand the logic of round robin load balancing. Suppose a Virtual Server has a Pool with three members and uses the round robin load balancing method. With no persistence and no priority group configured, when the first connection arrives at the Virtual Server, what is the distribution order across the members? For example: would the member with the highest IP go first, and so on?
- Stanislas_Piro2
Cumulonimbus
The round robin algorithm with three members load balances like this:
- member1 : 1 4 7 10
- member2 : 2 5 8 11
- member3 : 3 6 9 12
Load balancing algorithms are applied per TMM, so if the BIG-IP has 4 cores (4 TMM instances), the first 4 connections may all land on member1!
As for the second question, I don't know whether there is official documentation on which pool member counts as member1.
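The cycling pattern above can be sketched with a toy simulation (this is an illustration of the round robin idea, not BIG-IP code; the member names are placeholders):

```python
from itertools import cycle

members = ["member1", "member2", "member3"]
rr = cycle(members)

# On a single TMM, round robin cycles strictly through the pool:
# connection 1 -> member1, 2 -> member2, 3 -> member3, 4 -> member1, ...
order = [next(rr) for _ in range(12)]
print(order)
# ['member1', 'member2', 'member3', 'member1', 'member2', 'member3', ...]
```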
- Anesh
Cirrostratus
Please refer to K7751.
From document:
Load balancing behavior on CMP enabled virtual servers
Connections on a CMP enabled virtual server are distributed among the available TMM processes. The load balancing algorithm, specified within the pool associated with the CMP enabled virtual server, is applied independently in each TMM. Since each TMM handles load balancing independently from the other TMMs, distribution across the pool members may appear to be incorrect when compared with a non-CMP enabled virtual server using the same load balancing algorithm.
Consider the following example configuration:
Virtual Server: 172.16.10.10:80
Pool with 4 members: 10.0.0.1:80 10.0.0.2:80 10.0.0.3:80 10.0.0.4:80
Pool Load Balancing Method: Round Robin
Scenario 1: Virtual Server without CMP enabled
Four connections are made to the virtual server. The BIG-IP system load balances the four individual connections to the four pool members based on the Round Robin load balancing algorithm:
--Connection 1--> | | --Connection 1--> 10.0.0.1:80
--Connection 2--> |-> BIG-IP Virtual Server ->| --Connection 2--> 10.0.0.2:80
--Connection 3--> | | --Connection 3--> 10.0.0.3:80
--Connection 4--> | | --Connection 4--> 10.0.0.4:80
Scenario 2: Virtual Server with CMP enabled on a BIG-IP 8800
Four connections are made to the virtual server. Unlike the first scenario, where CMP was disabled, the BIG-IP distributes the connections across multiple TMM processes. The BIG-IP 8800 with CMP enabled can use four TMM processes. Since each TMM handles load balancing independently of the other TMM processes, it is possible that all four connections are directed to the same pool member.
--Connection 1--> | | --Connection 1--> TMM0 --> 10.0.0.1:80
--Connection 2--> |-> BIG-IP Virtual Server ->| --Connection 2--> TMM1 --> 10.0.0.1:80
--Connection 3--> | | --Connection 3--> TMM2 --> 10.0.0.1:80
--Connection 4--> | | --Connection 4--> TMM3 --> 10.0.0.1:80
This behavior is expected, as CMP is designed to speed up connection handling by distributing connections across multiple TMM processes. While initially this behavior may appear to favor one or several servers, over time the load will be distributed equally across all servers.
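Both effects described in the article can be sketched with independent per-TMM counters (a toy model under the assumption that incoming connections are spread evenly across TMMs; not actual BIG-IP code):

```python
from collections import Counter

members = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80", "10.0.0.4:80"]

# Each TMM keeps its own private round-robin counter.
tmm_counters = [0, 0, 0, 0]  # TMM0..TMM3

def pick(tmm):
    """Round-robin selection using the given TMM's own counter."""
    member = members[tmm_counters[tmm] % len(members)]
    tmm_counters[tmm] += 1
    return member

# Connections 1-4 land on TMM0..TMM3 in turn; each TMM's first
# pick is the first pool member, so all four hit 10.0.0.1:80.
first_four = [pick(tmm) for tmm in range(4)]
print(first_four)

# Over time, though, the load evens out: after 100 connections per
# TMM (400 total), every member has received exactly 100.
tally = Counter(first_four)
for _ in range(99):
    for tmm in range(4):
        tally[pick(tmm)] += 1
print(dict(tally))
```

The first print shows the "all connections to one member" startup effect from Scenario 2; the final tally shows the equal long-run distribution the article promises.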