Forum Discussion

Asura2006
Altocumulus
Nov 25, 2022

Priority group activation and automatically closing connections when the higher priority group goes UP.

Hello, we have a configuration where one pool contains 6 nodes split into 2 priority groups of 3 nodes each:

3 nodes with priority group 1 (backup)

3 nodes with priority group 10 (primary)

The backup priority group activates when there are fewer than 2 active members in the primary group.
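
In tmsh terms (pool name and member below are placeholders, not our real objects), that threshold corresponds to the pool's min-active-members setting:

tmsh modify ltm pool pool_rabbit min-active-members 2
tmsh modify ltm pool pool_rabbit members modify { 10.0.0.1:5671 { priority-group 10 } }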

So far it works as it should.

But we would also need to:

1) close all connections to the backup PG when the primary PG becomes available again,

2) ideally with a delay/timer of 2-3 minutes between the time the primary PG comes back up and the closing of all connections on the backup PG.

 

We have achieved something similar with HAProxy using the configuration below, but we would like to have this functionality on F5 LTM:

server rabbit1a <ip address>:5671 check send-proxy inter 3s fall 3 rise 2 on-marked-up shutdown-backup-sessions
server rabbit2a <ip address>:5671 check send-proxy inter 3s fall 3 rise 2 on-marked-up shutdown-backup-sessions
server rabbit3a <ip address>:5671 check send-proxy inter 3s fall 3 rise 2 on-marked-up shutdown-backup-sessions
server rabbit1b <ip address>:5671 check send-proxy backup
server rabbit2b <ip address>:5671 check send-proxy backup
server rabbit3b <ip address>:5671 check send-proxy backup

 

Is it possible on F5? I guess it's not something we can achieve purely from the GUI, so we would need to use iRules? What events/actions would be useful in this case?

12 Replies

  • xuwen
    Cumulonimbus

    chmod +x /var/tmp/delete_backserver_session.sh

    The 120-second delay comes from the TCP monitor: use "tcp_delay_up_120s" with its "Time Until Up" value set to 120s. You can also use the Tcl command tmsh::delete to delete a session, so the Linux shell script is not mandatory.
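
    As a sketch of that pure-Tcl route (backup member addresses and port taken from the pool config later in this thread), the iCall script definition could delete the sessions itself instead of exec-ing bash:

    foreach addr {192.168.100.11 192.168.100.12 192.168.100.13} {
        tmsh::delete sys connection ss-server-addr $addr ss-server-port 23
    }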

    Files under the /config/ directory are saved in a UCS archive, so user_alert.conf and the iCall config are also preserved in the UCS file.

    In my test environment, six Cisco routers have the telnet service enabled behind a BIG-IP VE. I shut down the telnet service on the three primary nodes, telnet from a client to the VS so that I am managing a backup-node router, and then re-enable the telnet service on the primary nodes to simulate recovery of the primary group. During the 120 seconds I can still execute the show running command; after about 120 seconds, my TCP connection is reset.

     

  • Hi Asura2006,
    The default behavior of Priority Group Activation is that when the backup PG is activated it handles your connections, but once the higher-priority PG returns UP and is active again, all new incoming connections are handled by the "main" PG while the backup PG handles only its active connections until they complete.
    I see this as good behavior for PGA, and it is the default.


    If you see that new connections are not handled by the "main" PG when it becomes active, check whether you enabled the "manual resume" option in the assigned monitor's configuration.

    > Do you want active connections to be transferred to the "main" PG when it becomes available again?
    Please give more detail on this point.

    Regards

    • Asura2006
      Altocumulus

      Hello,

      the problem here is that I want the primary PG to handle ALL the connections 2-3 minutes after coming back up. The connections on the backup PG should be closed so that they all end up on the primary PG.

       

      Manual resume on the monitor doesn't really help me, because it means that each time the primary PG goes down and comes back up I'll have to manually set every node with this monitor back to "enabled".

    • Mohamed_Ahmed_Kansoh
      MVP

      Asura2006,
      Do you have any type of persistence profile configured?

      By default, your new connections should be handled by the "main" PG when it becomes available; only active connections should be handled by the "backup" PG until they complete.

  • A question about your scenario: does your application have to maintain server-side session state, requiring what F5 calls "Session Persistence", or is your application absolutely fine if requests from the same client are load balanced per request across the available online nodes?

    Depending on the persistence requirement, you would either create a GUI-based setup via OneConnect (to enable per-request balancing) plus pool and monitor settings, or you may end up with a custom iRule to handle closing such persisted sessions when your primary/backup systems fail over and fall back.

    Cheers, Kai

    • Asura2006
      Altocumulus

      Right now there is no requirement for persistence, so I guess we will have to use the OneConnect profile plus the pool and monitor setup? But I don't see how it is going to delete the connections to the backup PG after the primary PG becomes active again.

      In the OneConnect profile there is a Maximum Age setting, but I don't know if I should set it to something like 2-3 minutes. Wouldn't that defeat the purpose of the OneConnect profile?

      • Kai_Wilke
        MVP

        A OneConnect profile will completely detach the client-side TCP connection from your server-side connections, so that you don't have to care about already established connections. Each incoming client-side HTTP request will be load balanced (based on pool priority group availability) and then use a pooled server-side connection to the individual member.

        OneConnect enables you to switch between pool priority groups without closing the client-side connection; the very next HTTP request will be load balanced to the master/fallback priority group as needed.
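
        If you want to try it, a minimal sketch of attaching the built-in profile from the CLI (the virtual server name is just a placeholder):

        tmsh modify ltm virtual vs_rabbit profiles add { oneconnect }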

        If applying a OneConnect profile to your virtual server is not a problem, then you can continue to use your single-pool setup including priority-based activation. The only change needed would be to adjust your health monitors so that a node delays its state change to UP by 2 minutes (via the "Time Until Up" setting).
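
        A sketch of that monitor change in tmsh (the monitor and pool names are placeholders; "time-until-up" is the CLI name of the setting):

        tmsh create ltm monitor tcp tcp_delay_up_120s time-until-up 120
        tmsh modify ltm pool pool_rabbit monitor tcp_delay_up_120s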

        Let me know if you need further assistance getting the OneConnect setup running.

        Cheers, Kai

         

  • xuwen
    Cumulonimbus

    To meet your needs, you must use iCall and user_alert.conf.

    pool_server has 6 pool members: (A,B,C) have priority group 10; (D,E,F) have priority group 5:

    ltm pool pool_server {
        members {
            192.168.100.1:telnet {
                address 192.168.100.1
                priority-group 10
                session monitor-enabled
                state up
            }
            192.168.100.11:telnet {
                address 192.168.100.11
                priority-group 5
                session monitor-enabled
                state up
            }
            192.168.100.12:telnet {
                address 192.168.100.12
                priority-group 5
                session monitor-enabled
                state up
            }
            192.168.100.13:telnet {
                address 192.168.100.13
                priority-group 5
                session monitor-enabled
                state up
            }
            192.168.100.2:telnet {
                address 192.168.100.2
                priority-group 10
                session monitor-enabled
                state up
            }
            192.168.100.3:telnet {
                address 192.168.100.3
                priority-group 10
                session monitor-enabled
                state up
            }
        }
        min-active-members 2
        monitor tcp
    }

    The primary members are A, B and C (first create pool_icall, whose pool members are A, B and C). You also have to create three more pools (pool_icall-1, pool_icall-2, and pool_icall-3) whose members are (A, B), (A, C), and (B, C) respectively.

    pool_back's members are (D, E, F).
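
    As a sketch, those helper pools could be created in tmsh like this (addresses and port taken from the pool_server config above):

    tmsh create ltm pool pool_icall monitor tcp members add { 192.168.100.1:23 192.168.100.2:23 192.168.100.3:23 }
    tmsh create ltm pool pool_icall-1 monitor tcp members add { 192.168.100.1:23 192.168.100.2:23 }
    tmsh create ltm pool pool_icall-2 monitor tcp members add { 192.168.100.1:23 192.168.100.3:23 }
    tmsh create ltm pool pool_icall-3 monitor tcp members add { 192.168.100.2:23 192.168.100.3:23 }
    tmsh create ltm pool pool_back monitor tcp members add { 192.168.100.11:23 192.168.100.12:23 192.168.100.13:23 }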

     

    user_alert.conf

    alert poolicall-1-DOWN "No members available for pool /Common/pool_icall-1" {
            exec command="tmsh generate sys icall event delete_backserver_session context { { name poolname value pool_icall } }"
    }
    alert poolicall-2-DOWN "No members available for pool /Common/pool_icall-2" {
            exec command="tmsh generate sys icall event delete_backserver_session context { { name poolname value pool_icall } }"
    }
    alert poolicall-3-DOWN "No members available for pool /Common/pool_icall-3" {
            exec command="tmsh generate sys icall event delete_backserver_session context { { name poolname value pool_icall } }"
    }
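
    Note: alertd may need a restart to pick up edits to /config/user_alert.conf, and you can fire the event by hand to test the iCall wiring before relying on the alert:

    bigstart restart alertd
    tmsh generate sys icall event delete_backserver_session context { { name poolname value pool_icall } }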

    When any of these three pools has no available members, an iCall event is triggered, and the iCall script runs a Linux shell script via exec. (The iCall script is Tcl-based, and the original idea was to sleep for 120 seconds in Tcl and then delete the sessions of the standby nodes; however, testing showed that the iCall script environment cannot execute Tcl's after xxx ms {script} command, so we call out to a Linux shell with exec instead.)

    #!/bin/bash
    #sleep 120;
    # count how many pool_icall (primary) members are currently up
    count=`tmsh list ltm pool pool_icall | grep "state up" | wc -l`
    if [ $count -ge 2 ]
    then
            # primary group is available again: tear down every server-side
            # connection to the backup pool members
            for i in `tmsh show ltm pool pool_back members | grep -E "Ltm::Pool Member:" | awk '{print $NF}'`
            do
                    address=`echo $i | awk -F ":" '{print $1}'`;
                    port=`echo $i | awk -F ":" '{print $2}'`
                    tmsh delete sys connection ss-server-addr $address ss-server-port $port
            done
    fi

     icall config:

    list sys icall handler triggered delete_backserver_session 
    sys icall handler triggered delete_backserver_session {
        script delete_backserver_session
        subscriptions {
            delete_backserver_session {
                event-name delete_backserver_session
            }
        }
    }
    
    list sys icall script delete_backserver_session           
    sys icall script delete_backserver_session {
        app-service none
        definition {
            foreach var {poolname} {
                set $var $EVENT::context($var)
            }
            puts "poolname is $poolname"
            exec /bin/bash /var/tmp/delete_backserver_session.sh
        }
        description none
        events none
    }

  • xuwen
    Cumulonimbus

    pool_server has 6 pool members: (A,B,C) have priority group 10; (D,E,F) have priority group 5. The TCP monitor is "tcp_delay_up_120s" with its "Time Until Up" value set to 120s;

    pool_icall's members are (A,B,C), using the default tcp monitor

    pool_back's members are (D,E,F), using the default tcp monitor

    user_alert.conf

     

    alert poolserver-1-UP "Pool /Common/pool_server member /Common/192.168.100.1:23 monitor status up" {
            exec command="tmsh generate sys icall event delete_backserver_session context { { name poolname value pool_icall } }"
    }
    alert poolserver-2-UP "Pool /Common/pool_server member /Common/192.168.100.2:23 monitor status up" {
            exec command="tmsh generate sys icall event delete_backserver_session context { { name poolname value pool_icall } }"
    }
    alert poolserver-3-UP "Pool /Common/pool_server member /Common/192.168.100.3:23 monitor status up" {
            exec command="tmsh generate sys icall event delete_backserver_session context { { name poolname value pool_icall } }"
    }

     

     icall config:

     

    list sys icall handler triggered delete_backserver_session 
    sys icall handler triggered delete_backserver_session {
        script delete_backserver_session
        subscriptions {
            delete_backserver_session {
                event-name delete_backserver_session
            }
        }
    }
    
    list sys icall script delete_backserver_session           
    sys icall script delete_backserver_session {
        app-service none
        definition {
            foreach var {poolname} {
                set $var $EVENT::context($var)
            }
            puts "poolname is $poolname"
            exec /bin/bash /var/tmp/delete_backserver_session.sh
        }
        description none
        events none
    }

    chmod +x /var/tmp/delete_backserver_session.sh

     

    #!/bin/bash
    # the 120s delay is provided by the monitor's "Time Until Up", not by this script
    # count how many pool_icall (primary) members are currently up
    count=`tmsh list ltm pool pool_icall | grep "state up" | wc -l`
    if [ $count -ge 2 ]
    then
            # primary group is available again: tear down every server-side
            # connection to the backup pool members
            for i in `tmsh show ltm pool pool_back members | grep -E "Ltm::Pool Member:" | awk '{print $NF}'`
            do
                    address=`echo $i | awk -F ":" '{print $1}'`;
                    port=`echo $i | awk -F ":" '{print $2}'`
                    tmsh delete sys connection ss-server-addr $address ss-server-port $port
            done
    fi
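
    To verify the teardown, you can list any remaining server-side connections to a backup member (address taken from the example above):

    tmsh show sys connection ss-server-addr 192.168.100.11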

    • Asura2006
      Altocumulus

      Thanks for this very detailed answer; when I implement it in a test environment I'll get back to you.

       

      One thing about the script delete_backserver_session.sh and the configuration in user_alert.conf and the iCall config: what about maintaining them? Will they be migrated automatically during a software upgrade, or will I have to remember them and copy them over every time I upgrade?