Forum Discussion
Active-Standby configuration - more than 2 devices?
Hello,
Our company is migrating both of our data centers to new locations, and we need to migrate our F5 devices as well. We have vCMP guests and physical LTMs, which are in HA configurations of Active-Standby pairs - in the case of vCMP guests, each guest in an HA pair is of course on a different vCMP host.
To maintain HA during the migration I wanted to add a 3rd device to each cluster (loaned, leased, or as a VM) so we can safely turn off one device in the data center and transport it. After the migration is done I would just delete the 3rd device.
We have sync-failover configured on all devices with HA. The devices are running software from the 14.1 branch.
My question is: is it possible to add a 3rd device to an Active-Standby pair, so we would have Active-Standby-Standby? If it's not possible, what other approach to the migration would you suggest?
Adding a 3rd device will require additional self IPs and additional firewall permissions to allow monitoring from the new self IPs to the pool members.
Here is the plan:
- Replace machine A by machine C.
- Replace machine B by machine D.
- Prepare C and D with the network settings (VLANs and self IPs) of the corresponding machines A and B.
- Put C and D in "Forced Offline" mode.
That's why I prefer to do the following:
- Backup both machines (active [A] and standby [B])
- Put machine B into "Forced Offline" mode and isolate it from the network
- Remove machine B from the sync-failover device group and delete the device trust
- Add machine D (having the network settings of machine B) to your network
- Establish device trust from A to D, add D to the sync failover group, do the initial sync from A to D
- Release D from "Forced Offline" and watch the monitoring results
- If everything becomes "green" on machine D and Self IPs are reachable, trigger a failover
- If everything runs fine through D now, you can apply the same procedure to replace A by C.
I have done this a couple of times with both v14 and 15 and faced no issues so far.
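In tmsh terms, the core of it looks roughly like this. This is only a rough sketch: the hostnames, the device group name DG_FAILOVER, the management IP of machine D and the credentials are placeholders for your environment, and I still prefer to handle the device trust changes in the GUI (Device Management > Device Trust).
```bash
# On machine B: force it offline before disconnecting and removing it
tmsh run sys failover offline

# On machine A: remove B from the sync-failover device group
# (I remove the old peer from the device trust in the GUI afterwards)
tmsh modify cm device-group DG_FAILOVER devices delete { bigipB.example.com }

# On machine D (prepared with B's network settings): keep it forced offline for now
tmsh run sys failover offline

# On machine A: establish trust to D (10.0.0.4 = D's management IP, placeholder)
tmsh modify cm trust-domain Root ca-devices add { 10.0.0.4 } \
    name bigipD.example.com username admin password <admin-password>

# On machine A: add D to the device group and push the initial sync
tmsh modify cm device-group DG_FAILOVER devices add { bigipD.example.com }
tmsh run cm config-sync to-group DG_FAILOVER

# On machine D: release forced offline once the monitors look good
tmsh run sys failover online

# On machine A: trigger the failover so D takes over the traffic group
tmsh run sys failover standby
```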
Hello Asura2006, BIG-IP sync-failover device groups support up to 8 devices, so this should be possible on paper.
I'm going to add a little consideration here: while it's true that this removes the risk of the primary unit malfunctioning while it is on its own, it also requires additional steps to re-configure HA, which can end up being more of an issue (and very time consuming if problems appear) than just powering the unit off.
The "scary step" will be adding the new device to the trust domain. I've performed this multiple times back in v12 for hardware refresh activities, and I recall that this step was often not so smooth: I always had to delete and re-build the trust from scratch, with impact on HA. (To prevent a split brain when doing this, I forced offline both the standby and the "new" units so that only the active unit would see traffic, and I only plugged the data interfaces back in after I had re-configured the HA trust.)
Once the appliances trust each other, just add the new unit to the existing device group.
You can control the way that the BIG-IP chooses a target failover device. This control is especially useful if you're using heterogeneous hardware platforms that differ in load capacity, because you can ensure that when failover occurs, the system chooses the device with the most available resources to process the application traffic.
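For example (a rough tmsh sketch from memory, assuming a traffic group named traffic-group-1 and placeholder device names - double-check the option names on your version, and pick one method rather than mixing them):
```bash
# Ordered failover: prefer A, then B, then the temporary third unit
tmsh modify cm traffic-group traffic-group-1 \
    ha-order { bigipA.example.com bigipB.example.com bigipC.example.com }

# Load-aware failover: give each device a relative capacity and the traffic group
# a load factor, so failover targets the device with the most spare capacity
tmsh modify cm device bigipA.example.com ha-capacity 100
tmsh modify cm device bigipC.example.com ha-capacity 10   # the weaker/temporary unit
tmsh modify cm traffic-group traffic-group-1 ha-load-factor 1
```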
- Asura2006
Thanks for the answers and ideas.
Do you have any article saying it is possible to add a 3rd device to an Active-Standby pair, and how to configure it?
I read this article but it is not very specific, although the picture clearly shows more than one standby in an Active-Standby configuration and it says that we can have up to 8 devices. It's for version 11.5...
Or a similar article for version 13.
So you are saying that I should have 2 additional devices for every cluster? So if I have 4 devices (2 vCMP hosts and 2 physical LTMs) I would need another 4 devices, or even 1 VM for each vCMP guest, so it's even more machines.
It looks very resource- and time-consuming to move devices like that...
Hello Asura,
It is certainly possible to configure a three-member cluster; I just wanted to warn you again that this might be a lot of effort if you're not going to keep it, however.
If you have multiple vCMP instances, your best setup would be to set up one additional VA for every vCMP cluster and then join each VA to its respective cluster.
First, you need to configure the VA(s) with proper licensing and IP addresses, and confirm they can ping your vADC. Hardware/software parity requirements: https://support.f5.com/csp/article/K8665
As Stephan mentioned, make sure new IPs are permitted everywhere in firewalls, routing is on point, etc.
You should also be able to copy/paste the load balancing configuration from the bigip.conf file, something like:
```bash
# vCMP bigip.conf configuration file was imported to the VA in /shared/tmp/prod.conf
cp /config/bigip.conf /config/bigip.conf.backup
cp /shared/tmp/prod.conf /config/bigip.conf
tmsh load sys config verify   # check for errors and/or missing items
tmsh load sys config
tmsh save sys config
```
Next, you need to establish trust for the VA in both existing vCMP clusters. With the VAs being "brand new" devices I'm not really expecting problems, but be aware that if trust breaks between the existing devices you're going to have HA issues.
The last step will be adding the new trusted device to your device group, and then to all traffic groups. If you only have one traffic group holding all the floating objects, it's going to be an active/standby/standby scenario. Make sure to tune the HA weights and priority on the VA so that it's the lowest.
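As a rough tmsh sketch of those last steps (the device, device group and traffic group names plus the VA management IP are placeholders; run the commands on one of the existing cluster members unless noted):
```bash
# Establish trust to the VA (10.0.0.5 = the VA's management IP, placeholder)
tmsh modify cm trust-domain Root ca-devices add { 10.0.0.5 } \
    name va1.example.com username admin password <admin-password>

# Add the trusted VA to the existing device group
tmsh modify cm device-group DG_FAILOVER devices add { va1.example.com }

# Make the VA the least-preferred next-active device for the traffic group
tmsh modify cm traffic-group traffic-group-1 \
    ha-order { guest-a.example.com guest-b.example.com va1.example.com }

# Sync the configuration and check the cluster state
tmsh run cm config-sync to-group DG_FAILOVER
tmsh show cm failover-status
tmsh show cm sync-status
```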
You can find the full list of techdocs related to clustering here (relevant for v14): https://techdocs.f5.com/en-us/bigip-14-1-0/big-ip-device-service-clustering-administration-14-1-0.html
I've checked Stephan's answer as well; it is a very well-thought-out and detailed procedure for migrating your existing cluster to new hardware with minimum effort, but I don't think it fits your scenario, since I believe your question/concern is more about ensuring service resiliency while you perform maintenance on one unit.
I'm with Stephan here. The third device adds unnecessary complexity. I've done similar migrations myself using more or less the same strategy as Stephan, and they've worked without a hitch.
Just make sure to have some kind of verification script to run before removing the passive node from forced offline, to confirm that connectivity (cabling etc.) works fine. Usually this means pinging the server-side nodes, the default gateway and the peer device's self IPs.
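Something quick and dirty like this is usually enough - a minimal sketch where the target addresses are placeholders for your pool members, default gateway and peer self IPs:
```bash
#!/bin/bash
# Run on the unit before releasing it from forced offline.
# Placeholder addresses: replace with your pool members, default gateway
# and the peer device's self IPs.
TARGETS="10.10.10.11 10.10.10.12 10.10.10.1 10.10.20.2"

for ip in $TARGETS; do
    if ping -c 2 -W 1 "$ip" > /dev/null 2>&1; then
        echo "OK    $ip"
    else
        echo "FAIL  $ip"
    fi
done
```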
Good luck!