Forum Discussion
Forcing Priority Group Usage
sticky = sourceIP 10 min
lb method = least conn
Pool webapp
appserver1 priority 10
appserver2 priority 10
appserver3 priority 10
SorryPage server priority 1
So when all is good, users get the appservers. When the appservers go down, they get the SorryPage server. The problem is that when the appservers come back up, users have to wait for their persistence to the SorryPage server to time out before they get the appservers again.
Since I do not care about breaking connections to the SorryPage server (it's static and does not require login, stickiness, etc.), I want an iRule that forces connections back to the higher-priority servers immediately, as soon as the monitors mark them active.
I know I can do this with a pool redirect if I move the SorryPage server, but I don't want to rebuild everything for all the VSs. Anyone got this?
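In tmsh terms, the pool described above would look roughly like this (a sketch only: the member addresses, the port, the monitor, and the min-active-members value of 1 are assumptions):

tmsh create ltm pool webapp \
    load-balancing-mode least-connections-member \
    min-active-members 1 \
    monitor http \
    members add { \
        10.0.0.11:80 { priority-group 10 } \
        10.0.0.12:80 { priority-group 10 } \
        10.0.0.13:80 { priority-group 10 } \
        10.0.0.20:80 { priority-group 1 } }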
- hooleylist (Cirrostratus): Hi Valentine,
when CLIENT_ACCEPTED {
    # Save the name of the VS default pool
    set default_pool [LB::server pool]
}
when HTTP_REQUEST {
    # Check if the VS default pool has any active members
    if { [active_members $default_pool] } {
        pool $default_pool
    } else {
        pool sorry_pool
    }
}
- Valentine_96813 (Nimbostratus): Thank you for your reply. However, I really do not want to redirect to another pool if I can help it. I would really like to see an iRule that references the priority group number and forces connectivity to the higher-priority members.
- hooleylist (Cirrostratus): Is there a reason you don't want to use two separate pools? I think the above iRule is probably simpler than using priority group activation. You might want to explicitly disable persistence for the sorry_pool using persist none. If you're using persistence for the default pool, you'd want to also enable it using the persist command.
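Something along these lines might work, combining the iRule above with that persistence handling (a sketch; the 255.255.255.255 mask and 600-second timeout are assumptions meant to mirror the original 10-minute source-IP sticky setting):

when CLIENT_ACCEPTED {
    # Save the name of the VS default pool
    set default_pool [LB::server pool]
}
when HTTP_REQUEST {
    if { [active_members $default_pool] } {
        pool $default_pool
        # Keep source-address persistence while the app servers are up
        # (mask and 600-second timeout assumed to match the 10-minute sticky)
        persist source_addr 255.255.255.255 600
    } else {
        pool sorry_pool
        # No persistence to the sorry page, so clients return to the app
        # servers as soon as the monitors mark them active again
        persist none
    }
}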
- Baron_of_Strath (Historic F5 Account):
I have just created this and tested it. It is not the cleanest code, but it works reliably. Perhaps someone with better programming know-how can tweak it.
when CLIENT_ACCEPTED {
    # backup node - requires the IP address; couldn't find a variable for the lower-priority node IP
    set backup_node "172.29.2.95"
    # interval is in milliseconds - 60 is very aggressive
    set interval 60
    # min is a minimum number of servers to be in the pool while allowing connection to this object
    set min 1
    set DEBUG 1

    # Start Conditional
    scan [LB::select] {%s %s %s %s %d} command current_pool command2 current_member current_port
    eval [LB::select]
    if { $DEBUG equals 1 } { log local0. "Pool Member Selected $current_member" }

    # Close Conditional - only run when connected to backup
    # Send user to the selected member - This will ALWAYS be the one active priority group member
    if { $current_member equals $backup_node } {
        after $interval -periodic {
            if { [active_members $current_pool] > $min } {
                if { $DEBUG equals 1 } { log local0. "Resetting connection" }
                TCP::close
            } else {
                log local0. "Number of active members in $current_pool [active_members $current_pool]"
            }
        }
    } else {
        log local0. "Sent to primary node $current_member"
    }

    pool $current_pool
}
- Baron_of_Strath (Historic F5 Account):
Forgot to say that the pool consists of 2 objects and has Action On Service Down set to Reject on the pool. The use case was an HL7 app that needed to be forced to stick to the primary node; it holds connections open indefinitely and incurs long delays if the TCP stream is passively dropped due to node failure.
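For reference, that pool setting can be made in tmsh along these lines (the pool name is a placeholder; the GUI's Reject option corresponds to the reset value here):

tmsh modify ltm pool hl7_pool service-down-action reset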
- David_Vega_01_1 (Nimbostratus): Baron, what version of LTM are you using? I am getting errors trying this iRule. I have similar issues with PG and need a solution to fail back to the primary node after recovery. Any help is appreciated. Thanks.