We set up an F5 HA pair (LTM & ASM, v14.1) in the same AZ in AWS.
We are using m5.xlarge instances with three interfaces: eth0 (MGMT), eth1 (External), and eth2 (Internal).
HA is set up to communicate over the internal interface.
Do we still use F5 floating IP addresses for F5 HA in AWS?
It also appears you need to add all the VIP addresses to the external ENI (eth1) as secondary private IPs. We did that on F5 #1, but AWS does not allow the same secondary IP (VIP) to be assigned to F5 #2 at the same time. Do you know the solution for this? We have about 13 VIPs on this non-prod setup.
HA in AWS does not work in the same way as it does with VE or physical devices.
AWS effectively does not have a Layer 2 network, so HA based on gratuitous ARP, as used on Layer 2 networks, does not work.
In AWS (and other cloud networks), a failover script instead uses the cloud provider API to shift the IPs from one device to the other. However, this can be slow (up to five minutes for IP address failover). Another option is running multiple active BIG-IPs with an AWS Elastic Load Balancer distributing connections.
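To illustrate the API-driven failover, here is a minimal sketch of the EC2 call involved, assuming a boto3-style client. The ENI ID and VIP addresses are placeholders, and this is not F5's actual failover script — the real one ships with the F5 templates — but the key mechanism is the same: `AssignPrivateIpAddresses` with reassignment allowed, which pulls each secondary IP off whichever ENI currently holds it.

```python
def move_vips(ec2, target_eni_id, vip_addresses):
    """Shift the VIP secondary private IPs onto the surviving unit's ENI.

    AllowReassignment=True lets EC2 take each address away from the ENI
    that currently owns it, which is why the same secondary IP cannot be
    statically configured on both units at once.
    """
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId=target_eni_id,
        PrivateIpAddresses=vip_addresses,
        AllowReassignment=True,
    )
```

With a real boto3 EC2 client this would be called on failover, e.g. `move_vips(boto3.client("ec2"), "eni-0abc...", ["10.0.1.101", "10.0.1.102"])`; the API round-trip (plus any route-table updates) is what makes cloud failover slower than ARP-based failover.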
Take a look at the F5 CloudFormation templates, which guide you through building BIG-IPs in AWS with a templated configuration.
Thanks Simon! We are using the CFT below.
1) Is it OK to use the internal NIC for HA/config sync as well, to save on cost (instead of a dedicated HA subnet/interface)?
2) What's the best way at the moment to get around the 15-IP-per-ENI allocation limit when using a deployed template like this?