Forum Discussion

cjunior_138458
May 06, 2016

Sticky persistence regardless of whether members are offline

Hi,

I wrote the iRule below, and I'd appreciate it if anyone can tell me whether there is another and/or better way to do this.

I have a case in which connections need to be persisted by source address even when the selected member goes offline. This is because the client can't start a new session on another server within a certain period.

With the default persistence profiles, the BIG-IP will always send connections to an available server, so I had to discard that option. I also tried adjusting the monitor's up time, but it was not very effective.

Since the persistence record is cleared when load balancing fails, I use the table command instead.

Does anyone know of a better way to do this?

when RULE_INIT {
    # Use static:: variables instead of globals to avoid CMP demotion
    set static::debug_enabled 1
    set static::persist_table "source_addr_sticky_uie"
    set static::persist_timeout 1800
}
when CLIENT_ACCEPTED priority 300 {
    persist virtual server
    # Key the persistence record on virtual server address/port plus client address
    set persist_key "[IP::local_addr]:[TCP::local_port]:[IP::remote_addr]"
    set persist_record [table lookup -notouch -subtable $static::persist_table $persist_key]
    if { $static::debug_enabled } { log local0. "===> persist_key: $persist_key / persist_record: $persist_record" }
    if { $persist_record ne "" } {
        if { [lsearch -exact [active_members -list [LB::server pool]] $persist_record] != -1 } {
            # Member is still active: pin the connection to it and refresh the table timeout
            pool [LB::server pool] member [lindex $persist_record 0] [lindex $persist_record 1]
            table lookup -subtable $static::persist_table $persist_key
            if { $static::debug_enabled } { log local0. "===> Selecting pool member $persist_record / persist_key: $persist_key" }
            if { $static::debug_enabled } { log local0. "===> Updated timeout of table record $persist_key" }
        } else {
            # Member is offline: refuse the connection rather than fail over to another member
            reject
            if { $static::debug_enabled } { log local0. "===> Failed to select pool member $persist_record / persist_key: $persist_key" }
        }
        if { $static::debug_enabled } {
            log local0. "===> Dump table: $static::persist_table"
            foreach key [table keys -subtable $static::persist_table -notouch] {
                log local0. "===> ${static::persist_table}, ${key}, [table timeout -subtable $static::persist_table -remaining $key]"
            }
        }
    }
}
when LB_SELECTED priority 300 {
    # Record the chosen member; table add is a no-op if the key already exists
    table add -subtable $static::persist_table $persist_key [list [LB::server addr] [LB::server port]] $static::persist_timeout
    if { $static::debug_enabled } { log local0. "=======> Member selected [LB::server addr]:[LB::server port] / persist_key $persist_key" }
}
when LB_FAILED priority 300 {
    LB::detach
    reject
    if { $static::debug_enabled } {
        if { [info exists persist_key] && [info exists persist_record] } {
            log local0. "===> Failed to select member $persist_record / persist_key: $persist_key"
        }
    }
}
  • It may not be appropriate for your use case, but I have a semi-good solution. Just remove the health check from your Pool configuration, so the status is always "unknown (blue)". For monitoring purposes, create another Pool with the existing configuration, but do not apply it to any Virtual Servers. That way you will also know when and if your Pool Members become unavailable.
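
    A minimal tmsh sketch of that idea, assuming a hypothetical pool named app_pool with two placeholder members and an http monitor (all object names, addresses, and the monitor type below are illustrative, not taken from your configuration):

    # All names and addresses here are placeholders.
    # Remove the monitor from the pool the Virtual Server uses, so its
    # members always show "unknown (blue)" and are never marked down:
    tmsh modify ltm pool app_pool monitor none

    # Create a shadow pool with the same members and the original monitor,
    # not attached to any Virtual Server, purely for visibility:
    tmsh create ltm pool app_pool_monitor_only \
        members add { 10.0.0.11:8080 10.0.0.12:8080 } \
        monitor http

    The data-plane pool then keeps sending traffic to a member even when it would otherwise be marked down, while the shadow pool still tells you when members actually become unavailable.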

    Regards,

    • cjunior
      I thought about doing it this way, but I gave up because the access could fail on the first try. On the other hand, you gave me a good idea for getting rid of this complex iRule. I'll change my approach, thank you again.