Forum Discussion
primary-secondary in an iRule
when HTTP_REQUEST {
    set primary 10.1.2.100
    set secondary 10.1.2.200
    if { primary node is down by monitor my_mon } {
        node $secondary 80
    } else {
        node $primary 80
    }
}
thanx
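For reference, a minimal sketch of what this could look like in valid iRule syntax, assuming the two servers are placed into two hypothetical pools (primary_pool and secondary_pool), each with the my_mon monitor applied:

when HTTP_REQUEST {
    # active_members returns the number of monitor-up members in a pool
    if { [active_members primary_pool] > 0 } {
        pool primary_pool
    } else {
        # primary is down, fail over to the secondary pool
        pool secondary_pool
    }
}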
- JRahm
Admin
If you only want 1 server at a time instead of a load balancing scheme, why not use pool member priorities instead of an iRule?

pool testpool {
   action on svcdown reselect
   min active members 1
   member 10.10.10.1:http priority 4
   member 10.10.10.2:http priority 3
   member 10.10.10.3:http priority 2
   member 10.10.10.4:http
}
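That pool definition is in the v9-era bigpipe syntax; a rough tmsh equivalent (a sketch only, reusing the same illustrative addresses) might look like:

ltm pool testpool {
    service-down-action reselect
    min-active-members 1
    members {
        10.10.10.1:http { priority-group 4 }
        10.10.10.2:http { priority-group 3 }
        10.10.10.3:http { priority-group 2 }
        10.10.10.4:http { }
    }
}

Traffic goes to the members of the highest-numbered priority group; lower groups are only used when the number of active members in the higher groups drops below min-active-members.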
- Deb_Allen_18
Historic F5 Account
Priority works unless you really truly need traffic to go to only 1 pool member. With just priority, and with persistence of any kind enabled, when the higher priority nodes come back up after failing, you will see traffic distributed across multiple pool members until old connections/sessions die off.

rule PriorityFailover {
   when CLIENT_ACCEPTED {
      persist uie 1
   }
}
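To put a rule like this to work, it needs to be referenced from a universal persistence profile that is then assigned to the virtual server. A rough tmsh sketch (the profile and virtual names here are hypothetical, and the rule is assumed to be saved already):

create ltm persistence universal PriorityFailover_persist defaults-from universal rule PriorityFailover
modify ltm virtual my_virtual persist replace-all-with { PriorityFailover_persist }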
- mart_58302
Nimbostratus
Is this rule still valid for 9.4.5? I can't even save it without an error.
- hoolio
Cirrostratus
If you're adding the rule via the GUI or the iRuler, leave off the first and last lines, as these define the iRule name. Just use this:

when CLIENT_ACCEPTED {
   persist uie 1
}
- portoalegre
Nimbostratus
I'm going to test this scenario; I've already created the config shown below.
Would this mean that I need to change my Pool Priorities to make both Servers "priority-group 1" rather than 2 & 1?
I guess with this setup the first server (Server A) that listens gets the first connection and all subsequent connections. When Server A breaks, Server B takes all connections, and when Server A comes back online, old and new connections still remain with Server B. This is what I want: only one server processing client traffic.
However, I need to make sure that when both servers start up Monday morning and the health checks kick in, all connections go to Server A first, not Server B. What I'm asking is: does that mean Server A needs to be started first and take the first client connection before it acts for all client connections? I can't allow Server B to take connections unless Server A dies.
Monday morning, always has to be Server A.
Thanks.
(Active)(/Common)(tmos) list ltm rule "FIX"
ltm rule FIX {
    when CLIENT_ACCEPTED {
        persist uie 1
    }
}
(Active)(/Common)(tmos) list ltm persistence
ltm persistence global-settings { }
ltm persistence universal FIX {
    app-service none
    defaults-from universal
    rule FIX
    timeout 3600
}
(Active)(/Common)(tmos) list ltm pool "FIX-19003"
ltm pool FIX-19003 {
    load-balancing-mode least-connections-member
    members {
        fixomln1d01.zit.commerzbank.com:19003 {
            address 10.167.20.20
            priority-group 1
            session monitor-enabled
            state up
        }
        fixomln1d03.zit.commerzbank.com:19003 {
            address 10.167.20.11
            priority-group 2
            session monitor-enabled
            state up
        }
    }
    min-active-members 1
    monitor FIX
}
- nitass
Employee
Would this mean that I need to change my Pool Priorities to make both Servers "priority-group 1" rather than 2 & 1?
If you want to give a specific server higher priority (e.g. Server A), I think priority group activation (with the single node persistence iRule) may be helpful.
sol8968: Enabling persistence for a virtual server allows returning clients to bypass load balancing
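In other words (a sketch only, with hypothetical pool name and addresses, assuming Server A should always win while it is up): keep the two members in different priority groups, with Server A in the higher-numbered group, and leave min-active-members at 1 so the lower group is only selected while the higher group has no active members:

ltm pool example_pool {
    min-active-members 1
    members {
        # Server A: higher priority group, preferred whenever its monitor is up
        10.0.0.1:19003 { priority-group 2 }
        # Server B: lower priority group, only selected while Server A is down
        10.0.0.2:19003 { priority-group 1 }
    }
}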
- portoalegre
Nimbostratus
I've tried single node persistence with priority groups and this doesn't work well (my current setup). When Server A fails, new connections go to Server B. The problem is that when Server A comes back up, while connections remain with Server B, new connections go to Server A, as in the example below.
(Active)(/Common)(tmos) list ltm virtual "FIX-19003"
ltm virtual FIX-19003 {
    destination 10.167.21.16:19003
    ip-protocol tcp
    mask 255.255.255.255
    mirror enabled
    persist {
        source_addr {
            default yes
        }
    }
    pool FIX-19003
    profiles {
        tcp { }
    }
    vlans-disabled
}
(Active)(/Common)(tmos) list ltm pool "FIX-19003"
ltm pool FIX-19003 {
    load-balancing-mode least-connections-member
    members {
        fixomln1d01.zit.commerzbank.com:19003 {
            address 10.167.20.20
            priority-group 1
            session monitor-enabled
            state up
        }
        fixomln1d03.zit.commerzbank.com:19003 {
            address 10.167.20.11
            priority-group 2
            session monitor-enabled
            state up
        }
    }
    min-active-members 1
    monitor FIX
}
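One tentative observation on the listing above: the virtual is using the source_addr persistence profile rather than the universal FIX profile (the one that references the persist uie 1 rule) shown earlier. If the intent is for that rule to control which server clients stick to, the universal profile would need to be assigned to the virtual instead, along the lines of:

modify ltm virtual FIX-19003 persist replace-all-with { FIX }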
- nitass
Employee
I've tried single node persistence with priority groups and this doesn't work well (my current setup). When Server A fails, new connections go to Server B. The problem is that when Server A comes back up, while connections remain with Server B, new connections go to Server A.
Did you see Server B's persistence record when Server A came back up?
- portoalegre
Nimbostratus
"show ltm persistence persist-records node-addr x.x.x.x" ......showed Persistent connections if I remember (tested 3 months ago)....need to test again to be sure.