Forum Discussion
Not your average Persistence iRule
So we have a server being load balanced to on .47, which sits in a pool with .46. We only send traffic to .47 unless .47 goes down or a health check fails, in which case traffic goes to .46 (priority group activation load balancing). This works fine, but currently, when .47 comes back up, all new traffic is sent back to it. We don't want this to happen: we want all traffic to keep going to .46 until .46 itself fails, and only then fail back to .47, not beforehand. (Priority group activation plus single-node persistence will not produce this result, because you can't favour one node for the initial connection while both are up.)
As a side note, both servers listen on port 11001 (this is on v10.x).
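One way to sketch the desired "stay on the failover node until it fails" behaviour (assuming .47 and .46 are split into two single-member pools, here named poolA and poolB; those names and the flag variable are illustrative, not from the original config) is a global flag that only flips when the currently preferred pool loses all its members, and never flips back just because the other pool recovered:

```tcl
# Sketch only: assumes .47 is the sole member of poolA and .46 of poolB.
when RULE_INIT {
    # 1 = prefer poolA (.47), 0 = prefer poolB (.46)
    set static::prefer_a 1
}
when HTTP_REQUEST {
    # Flip the preference only when the currently preferred pool is empty;
    # recovery of the other pool alone does not change the preference.
    if { $static::prefer_a == 1 && [active_members poolA] < 1 } {
        set static::prefer_a 0
    } elseif { $static::prefer_a == 0 && [active_members poolB] < 1 } {
        set static::prefer_a 1
    }
    if { $static::prefer_a == 1 } {
        pool poolA
    } else {
        pool poolB
    }
}
```

Note that modifying a static:: variable outside RULE_INIT can demote the virtual server from CMP on v10.x, so this approach trades some multiprocessing efficiency for the sticky failover behaviour.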
- hooleylist (Cirrostratus): Hi,
- 24x7_199 (Nimbostratus): Hi Aaron,
- Sashi_81625 (Nimbostratus): add .47 to poolA
when RULE_INIT {
    set static::fg 1
}
when HTTP_REQUEST {
    if { [active_members poolA] < 1 } {
        set static::fg 0
    }
    if { [active_members poolB] < 1 } {
        set static::fg 1
    }
    if { $static::fg == 1 } {
        pool poolA
    } else {
        pool poolB
    }
}
- hooleylist (Cirrostratus): Hi Sashi,
when CLIENT_ACCEPTED {
    log local0. "\[active_members -list \[LB::server pool\]\]: [active_members -list [LB::server pool]]"
    persist uie 1
}
when LB_SELECTED {
    log local0. "\[LB::server\]: [LB::server], priority: [LB::server priority]"
}
when SERVER_CONNECTED {
    log local0. "[IP::server_addr]"
}
- 24x7_199 (Nimbostratus): Many thanks for the update. I have passed the information and link to the customer; hopefully he will be able to test and confirm the correct behaviour.
- Sashi_81625 (Nimbostratus):