iRules: Clone Pool Persistence

In LTM, persistence is configured at the virtual server level, rather than at the pool level as was the case in BIG-IP 4.x.  The current set of LTM persistence profiles and iRules commands operate by default only on the load balancing pool, with no effect on the traffic targeted for a clone pool.  However, in many cases persistence to the clone pool for end-to-end session analysis is just as critical as persistence to the application pool. 

In this article, I'll show you an approach you can use to enforce clone pool persistence independent of load balancing persistence using any consistent, unique session value.

Want a good iRule Solution?  Start with a good Problem Definition
An iRule that maintains persistence to a clone pool will need to do the following (a bare-bones skeleton mapping these requirements onto iRule events follows the list):

  1. Examine each connection for the persistence token;
  2. Choose an available clone pool member explicitly for the first connection containing each unique token;
  3. Choose the same clone pool member for subsequent connections (those with a token matching one already seen) if available, or explicitly choose a new available clone pool member;
  4. Update an entry in the session table indicating the clone pool server to be used for all related subsequent connections.
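
Before walking through each step, here's a bare-bones sketch (using the event and pool names from the example below, so treat them as placeholders) of how those requirements map onto iRule events; the details are filled in through the rest of the article:

when CLIENT_ACCEPTED {
  # normal load balancing decisions for the application pool (independent of the clone pool logic)
  pool MY_APPPOOL
  persist none
}
when CLIENTSSL_HANDSHAKE {
  # 1. extract the persistence token from the connection (here, the SSL session ID)
  # 2. & 3. look up a previously chosen clone pool member in the session table,
  #         or choose a new active member if there's no usable entry
  # 4. record the chosen member in the session table for subsequent connections
}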


Choosing a Persistence Token

Before you can start writing your iRule, you will need to determine an appropriate persistence token for the clone pool connections.

In this example, no persistence is required for the load balanced pool, but connections to the clone pool must persist to the same pool member based on SSL session ID.

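Before building out the full logic, it can help to verify the token is actually available at the event where you plan to read it.  A quick throwaway sketch (debugging only, not part of the final iRule) that confirms the SSL session ID is populated after the handshake might look like this:

when CLIENTSSL_HANDSHAKE {
  # debugging only: the raw session ID may not be printable, so log its length
  # to confirm the event fires and the value is non-empty
  log local0. "SSL session ID length: [string length [SSL::sessionid]]"
}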

The iRule Logic
 
Load balancing
This section is not really necessary if the default pool and persistence for load balancing are set as virtual server resources, but here we set the load balancing pool and disable persistence: 

when CLIENT_ACCEPTED {
  pool MY_APPPOOL
  persist none
}

Any other load balancing target decisions (pool/member/node selection or persistence options) could be made here or in the virtual server configuration without interfering with the clone pool management solution we're building.
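
For example (hypothetically, since the application in this scenario doesn't need it), if the load balanced pool did require its own persistence, you could enable it here without touching the clone pool logic below:

when CLIENT_ACCEPTED {
  pool MY_APPPOOL
  # example only: source address persistence for the LB pool with a 30 minute (1800 second) timeout
  persist source_addr 255.255.255.255 1800
}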

Looking for the clone pool persistence token
For each new connection, we'll first check whether there's a preferred clone pool member by extracting the persistence token from the traffic and, if one is found, looking it up in the session table.  For other persistence tokens a different event may be more appropriate, but since persistence in this case is based on SSL session ID, we'll grab that value in the CLIENTSSL_HANDSHAKE event, after the SSL handshake is complete:

when CLIENTSSL_HANDSHAKE {
  set cserver [session lookup ssl [SSL::sessionid]]


Choosing a clone pool member for the first connection
If no session table entry for the SSL session ID was found, we'll load balance to the clone pool using a simple random algorithm to choose a known-good clone pool member from the list of active members in the clone pool (note that each element returned by "active_members -list" is an IP address/port pair, which the full iRule below splits apart to build the ip:port session table value):

    set cserver [lindex [active_members -list MY_CLONEPOOL] [expr {int(rand()*[active_members MY_CLONEPOOL])}]]


Persisting subsequent connections
If a session table entry for the SSL session ID was found, we need to verify the monitor status of the preferred pool member before sending traffic there:

  set cserver [session lookup ssl [SSL::sessionid]]
  if { $cserver != "" } {
    set ip [getfield $cserver : 1]
    set port [getfield $cserver : 2]
    if { [LB::status pool MY_CLONEPOOL member $ip $port] eq "up" } {
      ...


Choosing a new clone pool member when the preferred server is unavailable
If the preferred clone pool member is not marked UP, we'll fall back to the same simple random algorithm as before to choose a known-good member from the list of active members in the pool:

    set cserver [lindex [active_members -list MY_CLONEPOOL] [expr {int(rand()*[active_members MY_CLONEPOOL])}]]


Sending traffic to the clone pool member
Whether it's a first or a subsequent connection, we'll send traffic to the confirmed target server using the "clone" command:

  clone pool MY_CLONEPOOL member $ip


Updating the session table
...and finally update the session table entry with the target server info using the "session" command.  The 5 minute idle timeout is refreshed at the beginning of each new connection:

  session add ssl [SSL::sessionid] $cserver 300

 



Putting It All Together: The Full iRule


when CLIENT_ACCEPTED {
  # choose the load balancing pool using the pool command 
  pool MY_APPPOOL
  # disable persistence on the LB pool (not required for the app)
  persist none
}
when CLIENTSSL_HANDSHAKE {
  # for the clone pool, first set a flag indicating a target has not been confirmed
  set target 0
  # then check if there's a preferred member that's up
  set cserver [session lookup ssl [SSL::sessionid]]
  # if session table entry exists, verify status
  if { $cserver != "" } {
    set ip [getfield $cserver : 1]
    set port [getfield $cserver : 2]
    # if server is available, toggle target flag
    if { [LB::status pool MY_CLONEPOOL member $ip $port] eq "up" } {
      set target 1
    }
  }
  # if no session table entry was found or the preferred pool member is down,
  # choose a known-good target clone pool member from the list of active members in the clone pool
  if { $target == 0 } {
    set cserver [lindex [active_members -list MY_CLONEPOOL] [expr {int(rand()*[active_members MY_CLONEPOOL])}]]
    set ip [lindex $cserver 0]
    set port [lindex $cserver 1]
    set cserver $ip:$port
  }

  # in any case, send traffic to the confirmed target server
  clone pool MY_CLONEPOOL member $ip
  # and update the session table entry with the target server info
  session add ssl [SSL::sessionid] $cserver 300
}
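
One note on using this rule: since it keys off the CLIENTSSL_HANDSHAKE event, it will only run on a virtual server that has a client SSL profile applied; as mentioned above, a different persistence token would call for a different event in place of CLIENTSSL_HANDSHAKE.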


(Here are the wiki pages for the clone command, the active_members command and the session command)

Published Dec 05, 2007
Version 1.0
  • Great description/problem/solution Deb. It's worth pointing out that this iRule will require LTM version 9.4.2 or later.
  • Deb_Allen_18 (Historic F5 Account): Good point, thanks Nate. (I thought I had mentioned that at some point, must have been the victim of a horrible cut & paste accident...)
    To be specific, the "-list" option to the active_members command was added in 9.4.2, so if you use the random selection algorithm you'll need to be on 9.4.2 or better.