Forum Discussion

24x7_199
Nimbostratus
Mar 23, 2012

Not your average Persistence iRule

So we have a server that is being load balanced to on .47, which sits in a pool with .46. We only send traffic to .47 unless .47 goes down or a health check fails, in which case traffic goes to .46 (priority group activation load balancing). This works fine, and currently, when .47 comes back up, all new traffic is sent to it again. However, we don't want this to happen: we want all traffic to carry on going to .46 until such time as that fails. Only if .46 fails do we want traffic to go back to .47, and not beforehand (priority group activation + single node persistence will not produce this result, as you can't favour a node for the initial connection when both are up).

As a side note, both servers are on port 11001 (this is on V10.x).

6 Replies

  • Hi,

    You can use a simple iRule for this:

    https://devcentral.f5.com/wiki/iRules.SingleNodePersistence.ashx

    If you want to specify that one server is active initially, you could just disable the other one until the first persistence record is created for the pool member you favor.
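
    For reference, the core of that linked iRule is roughly this (a minimal sketch; check the wiki entry for the exact version). It persists every connection against the same universal key, so all traffic follows whichever pool member was picked first:

    when CLIENT_ACCEPTED {
       # one shared persistence record for all clients: everyone sticks
       # to whichever pool member was selected first
       persist uie 1
    }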

    Aaron
  • Hi Aaron,

    This is the customer's issue: he does not want to have to disable the pool member to create the initial persistence record.

    The customer would like a solution that requires no manual interaction on the F5 device.

    Do you know if it is possible to do this any other way, handled entirely by the F5 device, without having to manually disable a pool member to establish the first persistence record?
  • add .47 to poolA

    add .46 to poolB

    assign the iRule below to the virtual server

    when RULE_INIT {
       # start out preferring poolA (.47)
       set static::fg 1
    }

    when HTTP_REQUEST {
       # flip the preference only when the preferred pool has no active
       # members; it is not flipped back just because a pool recovers
       if { [active_members poolA] < 1 } {
          set static::fg 0
       }
       if { [active_members poolB] < 1 } {
          set static::fg 1
       }

       if { $static::fg == 1 } {
          pool poolA
       } else {
          pool poolB
       }
    }
  • Hi Sashi,

    If poolB goes down but then comes back up, that iRule will still select poolB. I think the original poster wanted to ensure poolA would continue to be used even after poolB came back up.

    Keep in mind that a static variable will be specific to the TMM the iRule is executed on. So if you change the value of a static variable outside of RULE_INIT, you'll have a unique instance of the variable per TMM. I don't think that would be a problem for your example Sashi, but I figured I'd point it out regardless.

    24x7, can you try testing with the single node persistence iRule, but with .47 added at a higher priority than .46 in the same pool? I think this might fit your requirements.
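
    In case it helps, this is roughly the layout I mean (pool name, addresses, and priority values below are only illustrative, not taken from this thread):

    # hypothetical pool layout:
    #   pool app_pool, priority group activation: less than 1 available member
    #     x.x.x.47:11001  priority 10   (preferred member)
    #     x.x.x.46:11001  priority 5
    #
    # with the single node persistence iRule on the virtual server, the first
    # connection is load balanced to .47 (higher priority); after a failover
    # the shared persistence record keeps traffic on .46, so the idea is that
    # a recovered .47 is not used again until .46 goes down (or the
    # persistence record expires)
    when CLIENT_ACCEPTED {
       persist uie 1
    }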

    You can verify a bit easier with some debug logging:

    when CLIENT_ACCEPTED {
       log local0. "\[active_members -list \[LB::server pool\]\]: [active_members -list [LB::server pool]]"
       persist uie 1
    }
    when LB_SELECTED {
       log local0. "\[LB::server\]: [LB::server], priority: [LB::server priority]"
    }
    when SERVER_CONNECTED {
       log local0. "[IP::server_addr]"
    }

    Aaron
  • Many thanks for the update; I have passed the information and link on to the customer. Hopefully he will be able to test and confirm the correct behaviour.
  • Quoting Aaron: "If poolB goes down but then comes back up, that iRule will still select poolB. I think the original poster wanted to ensure poolA would continue to be used even after poolB came back up."

    No, it doesn't. We are setting fg only when a pool goes down, so when a pool comes back up, the fg value doesn't change.
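
    For what it's worth, tracing the flag in the iRule above through a failover cycle (per TMM) shows the same thing:

    # value of static::fg over time:
    #   start                    -> fg = 1, requests go to poolA (.47)
    #   poolA loses all members  -> fg = 0, requests go to poolB (.46)
    #   poolA recovers           -> fg stays 0, requests stay on poolB
    #   poolB loses all members  -> fg = 1, requests return to poolA
    #   poolB recovers           -> fg stays 1, requests stay on poolA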