Forum Discussion

satish_txt_2254
Apr 15, 2019

F5 force connection reset on pool member

I have two MySQL servers behind an F5 and I am using a priority group for active-passive, so requests will always go to the primary until it is down.
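
(For reference, that kind of active-passive priority group setup corresponds roughly to the following bigip.conf snippet, with hypothetical pool name, addresses and ports:)

    # The member with the higher priority-group value receives all traffic
    # while it is up; min-active-members 1 triggers failover to the next
    # priority group when no higher-priority member is available
    ltm pool mysql_pool {
        min-active-members 1
        members {
            10.1.1.10:3306 { priority-group 10 }
            10.1.1.20:3306 { priority-group 5 }
        }
    }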


Question:


  • The primary failed and all my connections went to the standby box - good.
  • Now the primary is back - but my clients are still connected to the standby DB (because of persistent connections).

How do I force my pool member to drop all connections and move back to the new primary DB? The problem is that if some clients keep writing data to the standby, it will cause issues.

Force Offline doesn't drop active connections.

  • Hi Satish,

    you may take a look at the iRule below.

    The iRule deploys a periodic [after] task within each TCP connection, which every few seconds compares your connection's selected pool member with the currently preferred pool member (influenced by its health, Forced Offline status and priority group settings), and rejects the ongoing TCP connection if the two no longer match.

    Cleaned iRule

    when RULE_INIT {
        # Interval between pool member status checks (in milliseconds)
        set static::connection_check_interval 5000 ;# msec
    }
    when LB_SELECTED {
        # Periodically compare this connection's pool member with the member
        # load balancing would pick right now; reset the connection on mismatch
        after $static::connection_check_interval -periodic {
            if { [lindex [LB::select] 3] ne [LB::server addr] } then {
                reject
            }
        }
    }
    

    Debug-enabled iRule

    when RULE_INIT {
        # Interval between pool member status checks (in milliseconds)
        set static::connection_check_interval 5000 ;# msec
    }
    when LB_SELECTED {
        # Unique per-connection ID so log lines can be correlated
        set connection_timestamp "[TMM::cmp_group][TMM::cmp_unit][clock clicks]"
        log local0.debug "Node UP Check: $connection_timestamp : The pool member [LB::server addr] is currently active. Scheduling initial status check in $static::connection_check_interval ms."
        after $static::connection_check_interval -periodic {
            log local0.debug "Node UP Check: $connection_timestamp : Performing status check for pool member [LB::server addr]"
            if { [lindex [LB::select] 3] ne [LB::server addr] } then {
                log local0.debug "Node UP Check: $connection_timestamp : [LB::server addr] is not the active member anymore. Rejecting the TCP connection."
                reject
            } else {
                log local0.debug "Node UP Check: $connection_timestamp : [LB::server addr] is still the active member. Scheduling next status check in $static::connection_check_interval ms."
            }
        }
    }
    

    Cheers, Kai

    • aries22

      Hi, Kai.

      Thank you for this iRule! May I know what exactly this line does, and what the significance of the number 3 is?

      if { [lindex [LB::select] 3]

      May I also ask how to tweak the iRule so that it still works when there are multiple pool members assigned to each priority group? I have noticed that the iRule only works when there is a single pool member in each priority group.

      • Kai_Wilke

        Hi Aries22,

        The [lindex ... 3] command parses the output of [LB::select] as a whitespace-separated list and then selects the fourth element ([lindex] starts counting at zero).

        The output of [LB::select] has the following format; the <ip_addr> element is the part we are going to extract...

        pool <poolname> member <ip_addr> <port>
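
        As a quick standalone illustration (plain Tcl, runnable in tclsh, with a hypothetical pool name and member address):

        # Outside of TMOS, the same parsing: index 3 of the
        # whitespace-separated list is the member's IP address
        set lb_select_output "pool mysql_pool member 10.1.1.10 3306"
        puts [lindex $lb_select_output 3] ;# prints 10.1.1.10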

        Cheers, Kai

  • There is no built-in function to delete active connections to a pool member. Disabling a pool member keeps new connections from being load-balanced to it, and setting a pool member to Forced Offline stops persistence from being honored; but to end an active connection, the general advice is to go to the connection table and delete the connection there.

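    For example, from tmsh (a sketch; 10.1.1.20 stands in for your standby member's address):

    # List, then remove, connection-table entries whose server-side
    # (pool member) address is the standby member
    tmsh show sys connection ss-server-addr 10.1.1.20
    tmsh delete sys connection ss-server-addr 10.1.1.20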

    You could create the functionality you are looking for (where connections to the secondary instance are reset as soon as the primary node is available) by doing the following:


    • Change the 'Action On Service Down' setting on your pool to 'Reject'
    • Create a monitor that checks the primary node's status, but use the 'Reverse' option, which means the monitor succeeds when the primary node is -down-, and set the alias address to the IP address of the primary node
    • Apply the monitor you just created directly to the secondary node (see the tmsh sketch after the next paragraph).

    When your primary node goes down, the secondary node comes up and new connections get load-balanced to it. When the primary node comes back up, the reverse logic of the monitor will force the secondary node offline, and the 'Action On Service Down' reject behavior should reset the client connections, forcing the clients to make new connections, which will be load-balanced to the primary node.
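
    In tmsh, that could look roughly like this (a sketch with hypothetical names, addresses and ports; depending on your version, reverse mode may also require a send/recv string pair, so verify the monitor behavior before relying on it):

    # Reverse TCP monitor aimed at the primary via the alias destination:
    # it succeeds when the primary answers, which in reverse mode marks
    # the member it is assigned to (the secondary) DOWN
    create ltm monitor tcp mysql_primary_reverse destination 10.1.1.10:3306 reverse enabled
    # Assign the reverse monitor to the secondary member only
    modify ltm pool mysql_pool members modify { 10.1.1.20:3306 { monitor mysql_primary_reverse } }
    # Reset active connections to a member when it is marked down
    # ('reset' is the tmsh value behind the GUI's 'Reject' action)
    modify ltm pool mysql_pool service-down-action reset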


    • Niko1978

      Hi,

      I need the same setup for RTMP streams, but it doesn't seem to work:

      • it works OK when both servers are online, and the secondary is correctly seen as down
      • BUT when the primary server is down, the secondary stays down in monitoring, so both servers are considered down :-(

      Any idea?

      Thanks