I have many active/standby pairs of VEs hosted on VMware ESXi 6.0, 6.5, and 6.7. Our organization puts an insane amount of weight on availability. We have noticed that it is significantly less impactful to our various applications to vMotion an F5 than to fail over to the peer and then vMotion the unit in its standby state. There is a catch, though. The F5 does not initiate outbound traffic on subnets that are dedicated to VIPs, so the CAM table on the upstream switch does not get updated and traffic is black-holed. This is not an issue for the majority of our VIP subnets because there is always some traffic coming and going on them, but in environments where a VIP subnet is relatively quiet, traffic is black-holed until the entry on the switch times out.
I can fix this for self IPs by creating a pool with the SVI in it and an ICMP monitor. I have not found a way to fix this for floating IPs and VIPs short of doing a failover to force a GARP.
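For reference, the self IP workaround is roughly the following (a sketch only; the pool name and SVI address are placeholders, and the built-in gateway_icmp monitor is assumed):

    tmsh create ltm pool svi_keepalive_pool members add { 192.0.2.1:0 } monitor gateway_icmp

The monitor's pings give the VLAN a steady trickle of outbound traffic sourced from the self IP, so the upstream switch keeps re-learning the F5's MAC after a vMotion.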
I could create a forwarding VIP in each of the subnets and stick VMs behind them to constantly send pings, but this would be a logistical nightmare.
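For completeness, that option would look something like the following per quiet subnet (the virtual and VLAN names here are placeholders), plus a VM behind the F5 pinging through it on a schedule:

    tmsh create ltm virtual quiet_subnet_forwarder destination 0.0.0.0:0 mask 0.0.0.0 ip-forward profiles add { fastL4 } vlans-enabled vlans add { vip_vlan_x }

Anything the forwarder pushes out of the quiet VLAN is sourced from the F5's MAC, which keeps the CAM entry alive, but multiplying this across every quiet subnet and pinger VM is what makes it a nightmare.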
Check out this KB - https://support.f5.com/csp/article/K7332
GARPs are sent out for all VIPs and self IPs during a failover event. However, the GARPs may be lost if they are sent out too quickly.
--GARP rate modification KB: https://support.f5.com/csp/article/K11985
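If you do want to tune that, it is a sys db variable; something along these lines should show and adjust it (I am quoting arp.gratuitousrate from memory, so confirm the exact variable name and recommended value against K11985 for your version):

    tmsh list sys db arp.gratuitousrate
    tmsh modify sys db arp.gratuitousrate value 30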
Let me know if this helps.
Thanks, Shaun. I am aware of the DB variable to limit GARPs to prevent the upstream switches from being flooded. What we are looking for is to avoid a failover and simply do a vMotion. As I stated, vMotions are less impactful to some of our applications than failovers, with this one exception of VIP subnets that have limited traffic.