Forum Discussion
24x7_199
Mar 23, 2012 · Nimbostratus
Not your average Persistence iRule
So we have a server that is being load balanced to on .47, this sits in a pool with .46. We are only sending traffic to .47 unless .47 goes down or a health check fails in which case traffic will go ...
hooleylist
Mar 27, 2012 · Cirrostratus
Hi Sashi,
If poolB goes down but then comes back up, that iRule will still select poolB. I think the original poster wanted to ensure poolA would continue to be used even after poolB came back up.
Keep in mind that a static variable will be specific to the TMM the iRule is executed on. So if you change the value of a static variable outside of RULE_INIT, you'll have a unique instance of the variable per TMM. I don't think that would be a problem for your example Sashi, but I figured I'd point it out regardless.
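To illustrate the per-TMM behavior Aaron describes, here is a minimal sketch (the pool names and the static::failed_over flag are illustrative, not from the original thread): the variable is initialized once in RULE_INIT, but any later write only changes the copy owned by the TMM processing that connection.

when RULE_INIT {
    # Set at load time; every TMM starts with the same value
    set static::failed_over 0
}
when CLIENT_ACCEPTED {
    if { [active_members poolB] < 1 } {
        # This write updates only the current TMM's copy of the
        # variable; other TMMs still see their own value
        set static::failed_over 1
    }
    if { $static::failed_over } {
        pool poolA
    } else {
        pool poolB
    }
}

On a multi-core (CMP) platform this means the failover state can differ between TMMs, so some connections may fail back to poolB while others do not.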
24x7, can you try testing with the single node persistence iRule, but add .47 with a higher priority than .46 in the same pool? I think this might fit your requirements.
You can verify this a bit more easily with some debug logging:
when CLIENT_ACCEPTED {
    log local0. "\[active_members -list \[LB::server pool\]\]: [active_members -list [LB::server pool]]"
    persist uie 1
}
when LB_SELECTED {
    log local0. "\[LB::server\]: [LB::server], priority: [LB::server priority]"
}
when SERVER_CONNECTED {
    log local0. "[IP::server_addr]"
}
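For reference, a priority-group pool along the lines Aaron suggests could look something like this in tmsh (the pool name, ports, full member addresses, and monitor are hypothetical placeholders; only the .47/.46 host octets come from the thread):

ltm pool app_pool {
    members {
        10.0.0.47:80 {
            # Higher priority group: receives all traffic while healthy
            priority-group 10
        }
        10.0.0.46:80 {
            # Lower priority group: used only when .47 is unavailable
            priority-group 5
        }
    }
    # Fail over to the next priority group when fewer than
    # this many members are active in the current group
    min-active-members 1
    monitor http
}

With min-active-members 1, LTM sends traffic to .47 until its monitor marks it down, then shifts to .46; the persist uie 1 entry in the iRule above is what keeps existing clients pinned there even after .47 recovers.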
Aaron