Forum Discussion
jgranieri_42214 (Nimbostratus)
Feb 09, 2012
Priority Group Activation (Preemption question)
I have a working config for priority group activation on a 2 node pool.
Due to the nature of my application, I don't want the higher-priority pool member to receive traffic when it comes back online; in essence I want to disable preemption and keep traffic on the backup (lower-priority) pool member until we force it back.
I'm assuming this will need to be done via an iRule; I don't see any settings in the GUI that would set this up.
I guess I need an iRule that sets the primary member's priority lower than the backup's when it goes down, so it doesn't take traffic back when it's restored.
Using persistence would NOT work for our application, as users from the same company need to stay on the same pool member and could come from different source IPs.
Any help would be appreciated.
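For reference, the kind of setup described here generally looks something like the following in tmsh-style pool configuration (the pool name, member addresses, and priority values are placeholders, not taken from the poster's actual config):

ltm pool app_pool {
    min-active-members 1            # GUI "Priority Group Activation: Less than 1 Available Member(s)"
    members {
        10.10.10.1:80 {
            priority-group 10       # primary member (higher priority)
        }
        10.10.10.2:80 {
            priority-group 5        # backup member (lower priority)
        }
    }
}

With this in place, traffic prefers the priority-10 member and falls back to the priority-5 member when the primary is down; by default the primary takes traffic again as soon as its monitor marks it up, which is exactly the behaviour being asked about.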
- Nathan_Houck_65 (Nimbostratus): It sounds like Manual Resume might be the right option for you.
- jgranieri (Nimbostratus): That did it, thanks. Still learning all the little features on F5; moving off old CSS/Foundry load balancers!
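For anyone looking for where that setting lives: Manual Resume is a property of the health monitor, not the pool. On tmsh-based versions it can typically be enabled along these lines (the monitor name below is a placeholder, and the exact syntax may vary by version):

# rough sketch: create a custom TCP monitor with Manual Resume enabled
tmsh create ltm monitor tcp my_tcp_monitor defaults-from tcp manual-resume enabled
# assign my_tcp_monitor to the pool; once the monitor marks a member down,
# the member stays down until an administrator manually re-enables it,
# even after the back-end service is healthy again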
- nitass (Employee): This is just another example using an iRule. Please feel free to revise.
[root@ve1023:Active] config b virtual bar list
virtual bar {
   snat automap
   destination 172.28.19.79:22
   ip protocol 6
   rules myrule
}
[root@ve1023:Active] config b pool foo1 list
pool foo1 {
   monitor all tcp
   members 200.200.200.101:22 {}
}
[root@ve1023:Active] config b pool foo2 list
pool foo2 {
   monitor all tcp
   members 200.200.200.102:22 {}
}
[root@ve1023:Active] config b rule myrule list
rule myrule {
   when RULE_INIT {
      set static::pool1 foo1
      set static::pool2 foo2
   }
   when CLIENT_ACCEPTED {
      set vs "[IP::local_addr]:[TCP::local_port]"
      if {[table lookup current_pool] eq ""} {
         table set current_pool $static::pool1 indef indef
      }
      if {[active_members [table lookup current_pool]] < 1} {
         if {[table lookup current_pool] eq $static::pool1} {
            set new $static::pool2
         } elseif {[active_members $static::pool1] > 0} {
            set new $static::pool1
         } else {
            reject
            return
         }
         table set current_pool $new 0 0
      }
      pool [table lookup current_pool]
   }
   when SERVER_CONNECTED {
      log local0. "[IP::client_addr]:[TCP::client_port] -> $vs -> [IP::remote_addr]:[TCP::remote_port]"
   }
}
[root@ve1023:Active] config cat /var/log/ltm
Feb 9 09:40:24 local/tmm info tmm[4822]: Rule myrule : 192.168.204.8:62065 -> 172.28.19.79:22 -> 200.200.200.101:22
Feb 9 09:40:50 local/ve1023 notice mcpd[3746]: 01070638:5: Pool member 200.200.200.101:22 monitor status down.
Feb 9 09:40:50 local/tmm err tmm[4822]: 01010028:3: No members available for pool foo1
Feb 9 09:40:59 local/tmm info tmm[4822]: Rule myrule : 172.28.19.253:58019 -> 172.28.19.79:22 -> 200.200.200.102:22
Feb 9 09:41:21 local/tmm err tmm[4822]: 01010221:3: Pool foo1 now has available members
Feb 9 09:41:24 local/ve1023 notice mcpd[3746]: 01070727:5: Pool member 200.200.200.101:22 monitor status up.
Feb 9 09:41:46 local/tmm info tmm[4822]: Rule myrule : 172.28.19.251:50606 -> 172.28.19.79:22 -> 200.200.200.102:22
The last log line shows that a new connection was still sent to the foo2 pool even though the foo1 pool was back up.
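With this approach, the active pool choice lives in the session table under the key current_pool, so forcing traffic back to foo1 once it is healthy again comes down to clearing (or resetting) that entry. One possible way to do it, as a rough sketch only: a tiny iRule attached to a separate admin-only virtual server (this helper is an assumption for illustration, not part of the original post):

# hypothetical "fail back" helper: attach to a separate admin-only virtual
# server; connecting to it deletes the shared table entry, so the main rule's
# CLIENT_ACCEPTED block re-seeds current_pool with $static::pool1 (foo1) on
# the next client connection
when CLIENT_ACCEPTED {
    table delete current_pool
    log local0. "current_pool cleared by [IP::client_addr]"
    reject
}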