Forum Discussion

Charles_Lamb
Mar 17, 2020

Floating IPs and VIPs stop responding after a vMotion

I have many active/standby pairs of VEs hosted on VMware ESXi 6.0, 6.5, and 6.7. Our organization puts an insane amount of weight on availability, and we have noticed that it is significantly less impactful to our various applications to vMotion an active F5 than to fail over to the peer and vMotion it in standby. There is a catch, though. The F5 does not initiate outbound traffic on subnets that are dedicated to VIPs, so the CAM table on the upstream switch does not get updated and traffic is black-holed. This is not an issue for the majority of our VIP subnets, because there is always some traffic coming and going on them, but in environments where a VIP subnet is relatively quiet, traffic is black-holed until the entry on the switch ages out.
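For anyone hitting the same symptom, the stale entry is easy to confirm from the upstream switch. A minimal diagnostic sketch, assuming a Cisco IOS-style switch and a hypothetical MAC of 0050.56ab.cdef on the VE's VIP-facing VLAN:

    ! Check which port the switch currently maps the F5's MAC to
    show mac address-table address 0050.56ab.cdef

    ! If it still points at the pre-vMotion port, flush the stale entry
    clear mac address-table dynamic address 0050.56ab.cdef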

 

I can fix this for self IPs by creating a pool containing the SVI with an ICMP monitor attached. I have not found a way to fix this for floating IPs and VIPs short of doing a failover to force a GARP.
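For reference, the self-IP workaround is roughly this in tmsh (a sketch; the pool name and SVI address 192.0.2.1 are hypothetical, and gateway_icmp is the built-in ICMP monitor). Monitor probes are sourced from the non-floating self IP, which is why this keeps the self IP's CAM entry fresh but does nothing for the floating/VIP addresses:

    # Pool containing the upstream SVI; the ICMP monitor makes the
    # BIG-IP source periodic pings from its self IP on that VLAN
    tmsh create ltm pool svi_keepalive_pool members add { 192.0.2.1:0 } monitor gateway_icmp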

 

I could create a forwarding VIP in each of these subnets and put VMs behind them to constantly send pings, but that would be a logistical nightmare.
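For completeness, the per-subnet forwarding VIP would look something like this (a sketch with hypothetical names and the 203.0.113.0/24 example subnet; ip-forward makes it a routing virtual rather than a load-balancing one):

    # One forwarding virtual per quiet VIP subnet; the VMs behind it
    # would ping through it to keep traffic flowing on the VLAN
    tmsh create ltm virtual fwd_vs_quiet_subnet destination 203.0.113.0:any mask 255.255.255.0 ip-forward profiles add { fastL4 }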

 

Any thoughts?

4 Replies

    • Charles_Lamb

      Yes. But if no traffic is initiated on a VLAN, the switch ports do not get updated after a vMotion.

  • Thanks Shaun. I am aware of the DB variable that limits GARPs to prevent the upstream switches from being flooded. What we are looking for is to avoid a failover and simply do a vMotion. As I stated, vMotions are less impactful to some of our applications than failovers, with the one exception of VIP subnets that have limited traffic.
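
    For anyone who does want the failover route, forcing a GARP that way is a single command on the active unit (the newly active peer sends GARPs for the traffic group's floating self IPs and virtual addresses as it takes over):

        # Run on the active member; the peer takes over and GARPs
        tmsh run sys failover standby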