Forum Discussion
Load balance to node in the same DC where VIP is active
You need to track a unique local ID somehow and then use an iRule to select the pool member based on that ID. I haven't tested this, but I would use tcl_platform(machine) to get the hostname, assuming the hostnames differ between the two units.
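For example, a minimal sketch (my own, not part of the original suggestion) just to verify what tcl_platform(machine) returns on each unit before keying any pool logic off it:

when CLIENT_ACCEPTED {
    # Log the hostname this LTM reports, so both units' values can be confirmed first
    log local0. "Connection accepted on $static::tcl_platform(machine)"
}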
- Simon_Blakely, Dec 18, 2017 (Employee)
That isn't really an HA configuration; it is a DC failover design.
For a DC failover design, you would have an LTM (or LTM HA pair) and a virtual server in each DC, and use an F5 GTM/DNS to send traffic to whichever DC is preferred. If the GTM detects that a DC is down (pool member, virtual server or upstream link failure), then the wideIP starts sending traffic to the other DC until service is restored.
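As a rough sketch only, the tmsh side of that kind of GTM setup could look something like the following; the object names (dc1_bigip, dc2_bigip, vs_app, gtm_pool_dc1, gtm_pool_dc2, app.mydomain.com) are placeholders and the exact syntax varies by TMOS version:

# One GTM pool per data center, each containing that DC's LTM virtual server
create gtm pool a gtm_pool_dc1 members add { dc1_bigip:vs_app }
create gtm pool a gtm_pool_dc2 members add { dc2_bigip:vs_app }
# Global Availability: answer with DC1 while it is up, fall back to DC2 otherwise
create gtm wideip a app.mydomain.com pool-lb-mode global-availability pools add { gtm_pool_dc1 { order 0 } gtm_pool_dc2 { order 1 } }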
- Martin_Vlasko, Dec 19, 2017 (Altocumulus)
I know it's a bit of a special setup, but it came as a requirement from the application team. It does not make much sense to me either, but I wanted to find out whether something like that is even possible, in case I really have to implement it.
Thanks Farshadd for the tip, it is exactly what I need. I tried it, and it works with this simple iRule assigning one of two different pools:
when CLIENT_ACCEPTED {
    set f5 $static::tcl_platform(machine)
    if { $f5 equals "f5DC1.mydomain.com" } {
        pool pool_APP_DC1_DC2
    } elseif { $f5 equals "f5DC2.mydomain.com" } {
        pool pool_APP_DC2_DC1
    } else {
        log local0. "Error: machine info invalid!"
        reject
    }
}
The pools will use a priority group scheme:
pool_APP_DC1_DC2: the server in DC1 has a higher priority than the server in DC2
pool_APP_DC2_DC1: the server in DC2 has a higher priority than the server in DC1
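A possible tmsh sketch of those two pools using Priority Group Activation; the member addresses and port (10.1.10.10 in DC1, 10.2.10.10 in DC2, port 80) are placeholders, not the real servers:

# Higher priority-group value wins; min-active-members 1 activates the lower group when the higher one is empty
create ltm pool pool_APP_DC1_DC2 min-active-members 1 monitor http members add { 10.1.10.10:80 { priority-group 10 } 10.2.10.10:80 { priority-group 5 } }
create ltm pool pool_APP_DC2_DC1 min-active-members 1 monitor http members add { 10.2.10.10:80 { priority-group 10 } 10.1.10.10:80 { priority-group 5 } }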
With this setup I need only one VIP. HA is still achieved because the VIP can fail over to the other DC, and each pool can fail over to its second member should the higher-priority member become unavailable. I agree this is not about balancing the load; it is more about "same DC VIP to server stickiness" while keeping HA in place.
- Simon_Blakely, Dec 19, 2017 (Employee)
I hope the DCs have a low-latency interconnect for HA. I suspect that you will encounter unforeseen issues with this implementation.
The only other recommendation I can make:
Shift the variable assignment to RULE_INIT for efficiency, and keep it in the static:: namespace so CLIENT_ACCEPTED can still read it:

when RULE_INIT {
    # Resolved once at rule load time instead of on every connection
    set static::f5 $static::tcl_platform(machine)
}
when CLIENT_ACCEPTED {
    if { $static::f5 equals "f5DC1.mydomain.com" } {
        pool pool_APP_DC1_DC2
    } elseif { $static::f5 equals "f5DC2.mydomain.com" } {
        pool pool_APP_DC2_DC1
    } else {
        log local0. "Error: machine info invalid!"
        reject
    }
}
- Martin_Vlasko, Dec 21, 2017 (Altocumulus)
Thanks for the tip with RULE_INIT. Out of curiosity, why would the DC interconnect link latency matter in this exact situation? The iRule should keep the traffic within a single DC.
- Simon_Blakely, Dec 21, 2017 (Employee)
The DC link latency matters for the HA status messages (network failover heartbeats) exchanged between the two LTMs.
If latency increases too much, or HA packets get dropped, then you may get unexpected failover events.
sol7249: Overview of the network failover timer
You should also ensure that you configure multiple unicast failover network addresses between the two LTMs, preferably using independent DC-to-DC links.
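As a hedged sketch, adding two unicast failover addresses per device could look like this in tmsh; the device names and self IPs (one per independent DC-to-DC link) are placeholders:

# Two unicast targets per device, one per interconnect, so a single link failure does not break heartbeats
modify cm device bigip-dc1.mydomain.com unicast-address { { ip 10.10.1.1 port 1026 } { ip 10.20.1.1 port 1026 } }
modify cm device bigip-dc2.mydomain.com unicast-address { { ip 10.10.1.2 port 1026 } { ip 10.20.1.2 port 1026 } }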
- Martin_Vlasko, Dec 21, 2017 (Altocumulus)
I persuaded the application team that it's not a good idea anyway, so I won't implement it. But while we were discussing it, I wanted to understand what would happen if I did implement it.
Thanks for your support and help.