Controlling a Pool Member's Ratio and Priority Group with iControl

A Little Background

A question came in through the iControl forums about controlling a pool member's ratio and priority programmatically.  The issue really involves how the APIs use multi-dimensional arrays, but I thought it would be a good opportunity to talk about ratios and priority groups for those who don't understand how they work.

In the first part of this article, I'll talk a little about what pool members are and how their ratios and priorities affect how traffic is assigned to them in a load balancing setup.  The details in this article are based on BIG-IP version 11.1, but the concepts apply to previous versions as well.

Load Balancing

In its very basic form, a load balancing setup involves a virtual IP address (referred to as a VIP) that virtualizes a set of backend servers.  The idea is that if your application gets very popular, you don't want to rely on a single server to handle the traffic.  A VIP contains an object called a “pool”, which is essentially a collection of servers that it can distribute traffic to.  The method of distributing traffic is referred to as a “load balancing method”.  You may have heard the term “Round Robin” before; in this method, connections are passed one at a time from server to server.  In most cases, though, this is not the best method due to the characteristics of the application you are serving.  Here is a list of the available load balancing methods in BIG-IP version 11.1, followed by a short example of setting one of them through iControl.

Load Balancing Methods in BIG-IP version 11.1

  • Round Robin: Specifies that the system passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. This method works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.

  • Ratio (member): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine within the pool.

  • Least Connections (member): Specifies that the system passes a new connection to the node that has the least number of current connections in the pool. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

  • Observed (member): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (member), in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

  • Predictive (member): Uses the ranking method used by the Observed (member) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

  • Ratio (node): Specifies that the number of connections that each machine receives over time is proportionate to a ratio weight you define for each machine across all pools of which the server is a member.

  • Least Connections (node): Specifies that the system passes a new connection to the node that has the least number of current connections out of all pools of which a node is a member. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node, or the fastest node response time.

  • Fastest (node): Specifies that the system passes a new connection based on the fastest response of all pools of which a server is a member. This method might be particularly useful in environments where nodes are distributed across different logical networks.

  • Observed (node): Specifies that the system ranks nodes based on the number of connections. Nodes that have a better balance of fewest connections receive a greater proportion of the connections. This method differs from Least Connections (node), in that the Least Connections method measures connections only at the moment of load balancing, while the Observed method tracks the number of Layer 4 connections to each node over time and creates a ratio for load balancing. This dynamic load balancing method works well in any environment, but may be particularly useful in environments where node performance varies significantly.

  • Predictive (node): Uses the ranking method used by the Observed (node) method, except that the system analyzes the trend of the ranking over time, determining whether a node's performance is improving or declining. The nodes in the pool with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. This dynamic load balancing method works well in any environment.

  • Dynamic Ratio (node): This method is similar to Ratio (node) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

  • Fastest (application): Passes a new connection based on the fastest response of all currently active nodes in a pool. This method might be particularly useful in environments where nodes are distributed across different logical networks.

  • Least Sessions: Specifies that the system passes a new connection to the node that has the least number of current sessions. This method works best in environments where the servers or other equipment you are load balancing have similar capabilities. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current sessions.

  • Dynamic Ratio (member): This method is similar to Ratio (member) mode, except that weights are based on continuous monitoring of the servers and are therefore continually changing. This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the number of current connections per node or the fastest node response time.

  • L3 Address: This method functions in the same way as the Least Connections methods. It has been deprecated, so you should not use it.

  • Weighted Least Connections (member): Specifies that the system uses the value you specify in Connection Limit to establish a proportional algorithm for each pool member. The system bases the load balancing decision on that proportion and the number of current connections to that pool member. For example, member_a has 20 connections and its connection limit is 100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its connection limit is 200, so it is at 10% of capacity. In this case, the system selects member_b. This algorithm requires all pool members to have a non-zero connection limit specified.

  • Weighted Least Connections (node): Specifies that the system uses the value you specify in the node's Connection Limit and the number of current connections to a node to establish a proportional algorithm. This algorithm requires all nodes used by pool members to have a non-zero connection limit specified.
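
All of these methods can also be queried and changed programmatically through the LocalLB::Pool interface.  Below is a minimal sketch, using the same iControl PowerShell library as the examples later in this article and assuming a pool named "my_pool" already exists, that switches a pool to the Ratio (member) method with get_lb_method/set_lb_method (the GUI method names map to LocalLB::LBMethod enumeration values such as LB_METHOD_RATIO_MEMBER).

function Set-PoolLBMethod()
{
    param(
        $Pool = $null,
        $Method = "LB_METHOD_RATIO_MEMBER"   # a LocalLB::LBMethod enumeration value
    );

    # set_lb_method takes a list of pools and a matching list of methods.
    (Get-F5.iControl).LocalLBPool.set_lb_method(
        @($Pool),
        @($Method)
    );

    # Read the value back to confirm the change.
    $methods = (Get-F5.iControl).LocalLBPool.get_lb_method( @($Pool) );
    "Pool '$Pool' is now using '$($methods[0])'";
}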

Ratios

The ratio is used by the ratio-related load balancing methods to load balance connections.  The ratio specifies the ratio weight to assign to the pool member. Valid values range from 1 through 100. The default is 1, which means that each pool member has an equal ratio proportion.

So, if you have server1 with a ratio value of “10” and server2 with a ratio value of “1”, server1 will be served 10 connections for every one that server2 receives.  This can be useful when you have different classes of servers with different performance capabilities.
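
To see how a weight translates into a share of traffic, this short snippet (the server names and ratios are just illustrative) computes the approximate percentage of new connections each member would receive under Ratio (member) load balancing:

# Hypothetical ratios: server1 = 10, server2 = 1.
$ratios = @{ "server1" = 10; "server2" = 1 };
$total  = ($ratios.Values | Measure-Object -Sum).Sum;

foreach ($server in $ratios.Keys)
{
    # Each member's share is its ratio divided by the sum of all ratios.
    $share = [math]::Round(100 * $ratios[$server] / $total, 1);
    "$server : ratio $($ratios[$server]) -> ~$share% of new connections";
}

With ratios of 10 and 1, server1 ends up with roughly 91% of the connections and server2 with roughly 9%.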

Priority Group

The priority group is a number that groups pool members together. The default is 0, meaning that the member has no priority. To specify a priority, you must activate priority group usage when you create a new pool or when adding or removing pool members. When activated, the system load balances traffic according to the priority group number assigned to the pool member. The higher the number, the higher the priority, so a member with a priority of 3 has higher priority than a member with a priority of 1.  The easiest way to think of priority groups is as creating mini-pools of servers within a single pool.  You put members A, B, and C into priority group 5 and members D, E, and F into priority group 1.  Members A, B, and C will be served traffic according to their ratios (assuming you have ratio load balancing configured).  If fewer of those servers are available than the activation threshold you configure, then traffic will be distributed to servers D, E, and F in priority group 1.

The default setting for priority group activation is Disabled. Once you enable this setting, you can specify pool member priority when you create a new pool or on a pool member's properties screen. The system treats same-priority pool members as a group. To enable priority group activation in the admin GUI, select Less than from the list, and in the Available Member(s) box, type a number from 0 to 65535 that represents the minimum number of members that must be available in one priority group before the system directs traffic to members in a lower priority group. When a sufficient number of members become available in the higher priority group, the system again directs traffic to the higher priority group.
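
The “Less than ... Available Member(s)” setting in the GUI corresponds to the pool's minimum active member count in iControl.  Here is a hedged sketch (the pool name "my_pool" and the threshold of 2 are just examples) that sets it with the LocalLB::Pool get_minimum_active_member/set_minimum_active_member calls:

function Set-PoolMinimumActiveMembers()
{
    param(
        $Pool = $null,
        $MinActive = 2   # members that must stay available in the highest priority group
    );

    # A value of 0 effectively disables priority group activation.
    (Get-F5.iControl).LocalLBPool.set_minimum_active_member(
        @($Pool),
        @($MinActive)
    );

    $values = (Get-F5.iControl).LocalLBPool.get_minimum_active_member( @($Pool) );
    "Pool '$Pool' minimum active members is now '$($values[0])'";
}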

Implementing in Code

The two methods to retrieve the priority and ratio values are very similar.  They both take two parameters: a list of pools to query, and a 2-D array of members (one list of members for each pool passed in).

long [] [] get_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);

long [] [] get_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members
);
 
The following PowerShell function (utilizing the iControl PowerShell Library) takes as input a pool and a single member.  It then makes a call to query the ratio and priority for the specified member and writes them to the console.
 
function Get-PoolMemberDetails()
{
    param(
        $Pool = $null,
        $Member = $null
    );

    # Build the Common::AddressPort structure from the "address:port" string.
    $AddrPort = Parse-AddressPort $Member;

    # Both calls take a list of pools and, for each pool, a list of members.
    $RatioAofA = (Get-F5.iControl).LocalLBPool.get_member_ratio(
        @($Pool),
        @( @($AddrPort) )
    );

    $PriorityAofA = (Get-F5.iControl).LocalLBPool.get_member_priority(
        @($Pool),
        @( @($AddrPort) )
    );

    # The results come back as arrays of arrays: the first index is the pool,
    # the second index is the member within that pool.
    $ratio = $RatioAofA[0][0];
    $priority = $PriorityAofA[0][0];

    "Pool '$Pool' member '$Member' ratio '$ratio' priority '$priority'";
}
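
A sample call, with placeholder pool and member values, looks like this:

Get-PoolMemberDetails -Pool "my_pool" -Member "10.10.10.1:80";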

The set_member_priority and set_member_ratio methods used to set these values take the same first two parameters as their associated get_* methods, but add a third parameter for the priorities or ratios of the pool members.

set_member_priority(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] priorities
);

set_member_ratio(
    in String [] pool_names,
    in Common::AddressPort [] [] members,
    in long [] [] ratios
);
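
Because the parameters are 2-D arrays, a single call can update several members at once; the inner array of ratios (or priorities) must line up with the inner array of members for each pool.  Here is a small sketch, with a hypothetical pool "my_pool" containing members 10.10.10.1:80 and 10.10.10.2:80, that uses the Parse-AddressPort helper shown at the end of this article:

$member1 = Parse-AddressPort "10.10.10.1:80";
$member2 = Parse-AddressPort "10.10.10.2:80";

# The leading comma keeps the inner arrays nested so PowerShell passes true
# 2-D arrays: one pool, with two members and two matching ratios.
(Get-F5.iControl).LocalLBPool.set_member_ratio(
    @("my_pool"),
    @( ,@($member1, $member2) ),
    @( ,@(10, 1) )
);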

The following PowerShell function takes as input the Pool and Member, with optional values for the Ratio and Priority.  If either of those is set, the function will call the appropriate iControl method to set its value.

function Set-PoolMemberDetails()
{
    param(
        $Pool = $null,
        $Member = $null,
        $Ratio = $null,
        $Priority = $null
    );

    # Build the Common::AddressPort structure from the "address:port" string.
    $AddrPort = Parse-AddressPort $Member;

    if ( $null -ne $Ratio )
    {
        # One ratio per member, nested to match the 2-D members parameter.
        (Get-F5.iControl).LocalLBPool.set_member_ratio(
            @($Pool),
            @( @($AddrPort) ),
            @($Ratio)
        );
    }

    if ( $null -ne $Priority )
    {
        # One priority per member, nested to match the 2-D members parameter.
        (Get-F5.iControl).LocalLBPool.set_member_priority(
            @($Pool),
            @( @($AddrPort) ),
            @($Priority)
        );
    }
}
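
A sample call, again with placeholder values, gives the member a ratio of 10 and moves it into priority group 3:

Set-PoolMemberDetails -Pool "my_pool" -Member "10.10.10.1:80" -Ratio 10 -Priority 3;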
 
In case you were wondering how to create the Common::AddressPort structure for the $AddrPort variables in the above examples, here’s a helper function I wrote to allocate the object and fill in its properties.
 
function Parse-AddressPort()
{
    param($Value);

    # Split an "address:port" string and populate a Common::AddressPort object.
    $tokens = $Value.Split(":");
    $r = New-Object iControl.CommonAddressPort;
    $r.address = $tokens[0];
    $r.port = $tokens[1];
    $r;
}
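
For example, passing a hypothetical member string returns a populated structure:

$AddrPort = Parse-AddressPort "10.10.10.1:80";
"address = $($AddrPort.address), port = $($AddrPort.port)";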

Download The Source

The full source for this example can be found in the iControl CodeShare under PowerShell PoolMember Ratio and Priority.
