Special Load Balancing Active-Passive Scenario (I)

Problem this snippet solves:

This code was written to solve the issue discussed in the thread referenced below.

REF - https://devcentral.f5.com/s/feed/0D51T00006i7jWpSAI

Specification:

  • 2 clusters with 2 nodes each.
  • Each cluster is served in active-passive mode.
  • The nodes within each cluster are load balanced round robin.
  • When a cluster becomes active, it keeps that status even after the initially active cluster comes back up.
  • Only one BIG-IP device.

There are many topics that suggest using "Manual Resume" to meet these specifications, but that requires manually re-enabling each node when it comes back online. My initial idea was to have an unattended virtual server.

To do so, I use a combination of persistence and internal virtual server load balancing (VIP-targeting-VIP on the same device).

How to use this snippet:

This scenario is composed of the following set of objects:

  • 4 nodes (Node1, Node2, Node3, Node4)
  • 1 additional node called "local_node" (which represents the VIP address used for the VIP-targeting-VIP)
  • 2 pools called "ClusterA_pool" and "ClusterB_pool" (which point to each pair of nodes)
  • 1 additional pool called "MyPool" (which points to the two internal VIPs; an example definition is sketched after the configuration below)
  • 2 virtual servers called "ClusterA_vs" and "ClusterB_vs" (which load balance round robin to the pools of the same name)
  • 1 virtual server called "MyVS" (which is the visible VS and points to "MyPool")

By the way, I use a "Slow Ramp Time" of 0 to reduce the failover time.

Below you can find an example configuration:


-----------------

ltm virtual MyVS {
  destination 10.130.40.150:http
  ip-protocol tcp
  mask 255.255.255.255
  persist {
    universal {
      default yes
    }
  }
  pool MyPool
  profiles {
    tcp { }
  }
  rules {
    MyRule
  }
  source 0.0.0.0/0
  translate-address enabled
  translate-port enabled
  vs-index 53
}

ltm virtual ClusterA_vs {
  destination 10.130.40.150:1001
  ip-protocol tcp
  mask 255.255.255.255
  pool ClusterA_pool
  profiles {
    tcp { }
  }
  source 0.0.0.0/0
  translate-address enabled
  translate-port enabled
  vs-index 54
}

ltm virtual ClusterB_vs {
  destination 10.130.40.150:1002
  ip-protocol tcp
  mask 255.255.255.255
  pool ClusterB_pool
  profiles {
    tcp { }
  }
  source 0.0.0.0/0
  translate-address enabled
  translate-port enabled
  vs-index 55
}

ltm pool ClusterA_pool {
  members {
    Node1:http {
      address 10.130.40.201
      session monitor-enabled
      state up
    }
    Node2:http {
      address 10.130.40.202
      session monitor-enabled
      state up
    }
  }
  monitor tcp
  slow-ramp-time 0
}

ltm pool ClusterB_pool {
  members {
    Node3:http {
      address 10.130.40.203
      session monitor-enabled
      state up
    }
    Node4:http {
      address 10.130.40.204
      session monitor-enabled
      state up
    }
  }
  monitor tcp
  slow-ramp-time 0
}

ltm node local_node {
  address 10.130.40.150
}

-----------------
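
The definition of "MyPool" is not included in the listing above. A minimal sketch of what it could look like, assuming its two members are the internal virtual servers reached via "local_node" (10.130.40.150) on ports 1001 and 1002, and assuming the same tcp monitor and "Slow Ramp Time" of 0 as the other pools:

-----------------

ltm pool MyPool {
  members {
    local_node:1001 {
      address 10.130.40.150
    }
    local_node:1002 {
      address 10.130.40.150
    }
  }
  monitor tcp
  slow-ramp-time 0
}

-----------------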

Code:

when CLIENT_ACCEPTED {
    set initial 0
    set entry ""
}

when LB_SELECTED {
    incr initial
    # Checks if persistence entry exists
    catch { set entry [persist lookup uie [virtual name]] }
    # Load balancing selection based on persistence
    if { $entry eq "" } {
        set selection [LB::server port]
    } else {
        set selection [lindex [split $entry " "] 2]
        set status [LB::status pool MyPool member [LB::server addr] $selection]
        if { $status ne "up" } {
            catch { persist delete uie [virtual name] }
            set selection [LB::server port]
        }
    }
    # Adds a new persistence entry
    catch { persist add uie [virtual name] }
    # Applies the selection. These numbers represent the ports used
    # by the VIP-targeting-VIP virtual servers.
    switch $selection {
        "1001" {
            LB::reselect virtual ClusterA_vs
        }
        "1002" {
            LB::reselect virtual ClusterB_vs
        }
    }
}
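
To check that the failover behaves as described, a possible test sequence from tmsh is sketched below. The commands are standard tmsh commands; the test assumes that the MyPool members track the state of the internal virtual servers, so forcing both nodes of a cluster down also takes the corresponding MyPool member down.

# Watch the persistence record keyed on the virtual server name
show ltm persistence persist-records

# Force ClusterA down; new connections should move to ClusterB_vs (port 1002)
modify ltm node Node1 state user-down
modify ltm node Node2 state user-down

# Bring ClusterA back up; the persistence record keeps traffic on ClusterB
modify ltm node Node1 state user-up
modify ltm node Node2 state user-up

# To fall back deliberately, remove the persistence record so the next
# load balancing decision starts fresh
delete ltm persistence persist-records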

Tested this on version:

12.1
Published Mar 14, 2019
Version 1.0