100+ Internal VIPs in AWS
Amazon Web Services (AWS) limits the number of private and public IP addresses that you can attach to a network interface. The following is a workaround that creates a private network within an Amazon Virtual Private Cloud (VPC) used only for internal Virtual IP Addresses (VIPs). This lets you support an arbitrary number of private VIPs (up to the capacity of the instance type) for load balancing internal services. For external/public Elastic IPs (EIPs) you are still limited to the number of public IPs that Amazon allows you to attach. The following document is helpful if you need to support multiple external EIPs using multiple interfaces.
How it works
In an AWS VPC you can create your own routes that point to a network interface. The most common use case is building your own NAT gateway, with a 0.0.0.0/0 route pointing to its interface. You can also create an arbitrary route as long as it doesn't overlap with the existing VPC CIDR (UPDATE 2021-08-30: it is now possible to use routes that overlap with the VPC CIDR). For example, a route for 172.16.10.0/23 in a VPC that is 10.1.0.0/16, pointing to the BIG-IP ENI.
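If you want to script that route (rather than use the console), here is a minimal sketch using boto3; the region, route table ID, ENI ID, and CIDR values are placeholders you would replace with your own.

# Sketch (boto3): add a route for the VIP-only prefix that points at the BIG-IP ENI.
# Route table ID, ENI ID, region, and CIDRs below are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VIP_CIDR = "172.16.10.0/23"              # VIP-only prefix outside the 10.1.0.0/16 VPC CIDR
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
BIGIP_ENI_ID = "eni-0123456789abcdef0"

# Route traffic destined for the VIP prefix to the BIG-IP's network interface.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock=VIP_CIDR,
    NetworkInterfaceId=BIGIP_ENI_ID,
)

# Disable source/destination checking so the ENI can receive traffic for IPs
# it does not own (required for this pattern; see the comments below).
ec2.modify_network_interface_attribute(
    NetworkInterfaceId=BIGIP_ENI_ID,
    SourceDestCheck={"Value": False},
)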
On the BIG-IP, create a self IP for the VIP network (on the same VLAN, overlapping the existing subnet).
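A rough sketch of that step over iControl REST; the management address, credentials, VLAN name, and the self IP itself are assumptions for illustration.

# Sketch (iControl REST): create a non-floating self IP in the VIP-only range
# on the existing data VLAN.
import requests

BIGIP_MGMT = "https://10.1.20.10"        # BIG-IP management address (placeholder)
AUTH = ("admin", "admin-password")       # placeholder credentials

requests.post(
    f"{BIGIP_MGMT}/mgmt/tm/net/self",
    auth=AUTH,
    verify=False,                        # typical self-signed management cert
    json={
        "name": "vip-network-self",
        "address": "172.16.10.1/23",     # self IP inside the VIP-only prefix
        "vlan": "/Common/external",      # same VLAN as the existing self IP
        "allowService": "none",
    },
).raise_for_status()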
Now create 100+ VIPs in that range.
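A minimal sketch of stamping out VIPs across that range via iControl REST; the names, port, and SNAT setting are illustrative assumptions, and in practice each VIP would reference its own pool.

# Sketch (iControl REST): create 100 virtual servers in the VIP-only prefix.
import ipaddress
import requests

BIGIP_MGMT = "https://10.1.20.10"
AUTH = ("admin", "admin-password")

network = ipaddress.ip_network("172.16.10.0/23")
# Skip the first host (172.16.10.1), used above for the self IP.
for i, addr in enumerate(list(network.hosts())[1:101], start=1):
    requests.post(
        f"{BIGIP_MGMT}/mgmt/tm/ltm/virtual",
        auth=AUTH,
        verify=False,
        json={
            "name": f"internal_vs_{i}",
            "destination": f"{addr}:80",
            "mask": "255.255.255.255",
            "ipProtocol": "tcp",
            "sourceAddressTranslation": {"type": "automap"},
        },
    ).raise_for_status()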
Test from another instance in the VPC.
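For a quick check from another instance, a successful TCP connect to the placeholder VIPs created above shows the route is delivering traffic to the BIG-IP.

# Sketch: verify from another instance in the VPC that the VIP-only prefix
# is routed to the BIG-IP. Addresses match the placeholders above.
import socket

for vip in ("172.16.10.2", "172.16.10.3", "172.16.10.4"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(5)
        result = s.connect_ex((vip, 80))
        print(vip, "reachable" if result == 0 else f"connect failed ({result})")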
How to use
This could help with a split architecture of two BIG-IP devices, one dedicated to firewall/content routing and an "internal" BIG-IP devoted to internal VIPs, or with everything collapsed onto a single device (on the same device you would need to use the iRule / local traffic policy "virtual" command; a sketch follows below). Using the Advanced HA iApp you can automate failing over routes from one BIG-IP to another within or across Availability Zones.
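For the collapsed single-device case, here is a hedged sketch of what that might look like: an iRule that uses the TCL "virtual" command to hand traffic to an internal-only virtual, created here over iControl REST. The hostname, credentials, and virtual server names are hypothetical.

# Sketch (iControl REST): install an iRule that steers matching requests to an
# internal-only virtual server on the same device.
import requests

BIGIP_MGMT = "https://10.1.20.10"
AUTH = ("admin", "admin-password")

irule_definition = """
when HTTP_REQUEST {
    if { [HTTP::host] eq "app.internal.example.com" } {
        # Hand the connection to the internal VIP on the same device
        virtual internal_vs_1
    }
}
"""

requests.post(
    f"{BIGIP_MGMT}/mgmt/tm/ltm/rule",
    auth=AUTH,
    verify=False,
    json={"name": "route_to_internal_vip", "apiAnonymous": irule_definition},
).raise_for_status()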
Update 2020-03-16: The Advanced HA iApp has been replaced by the Cloud Failover Extension, which is installed by default when using F5's CloudFormation templates.
Programmable Proxy
Using the AWS API, BIG-IP can help your applications Go!
- gbbaus_104974 (Historic F5 Account)
Eric
What does your SNAT setting look like?
SNAT AutoMap, or SNAT to a specific address in the 172.16.10.x network?
I'm assuming you had to turn off the SRC/DST check also?
Thanks
Gary
- Eric_Chen (Employee)
Gary,
Yes, you need to disable the SRC/DST check. The SRC IP from the BIG-IP will still be the private address on the ENI, so you do not need to SNAT to the 172.16.10.x network.
Hypothetical packet capture:
Client: 10.1.10.10
BIG-IP: 172.16.10.10 (fake), 10.1.20.10 (real)
Backend: 10.1.20.100

Client (src: 10.1.10.10, dst: 172.16.10.10)
  -> BIG-IP (src: 10.1.20.10, dst: 10.1.20.100)
  -> Backend (src: 10.1.20.100, dst: 10.1.20.10)
  -> BIG-IP (src: 172.16.10.10, dst: 10.1.10.10)
  -> Client
Eric
- Jeff_Giroux (Cirrus)
Yes, src/dst check must be disabled. SNAT automap is the only supported SNAT option (other than none), since a SNAT pool cannot share the same SNAT IPs across the two devices when they sit in different Availability Zones (subnets do not span AZs). With SNAT automap, server-side traffic leaves via the active unit's self IP. Upon failover, that source IP changes to the other unit's self IP, since there is no floating self IP. If you decide to use SNAT none, then return routes from the server side need to point back to the F5 ENI via route tables. SNAT automap is easier for apps that support it (most do).
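A minimal sketch of applying that setting to an existing virtual over iControl REST (management address, credentials, and virtual server name are placeholders):

# Sketch (iControl REST): set SNAT automap on an existing virtual server.
import requests

BIGIP_MGMT = "https://10.1.20.10"
AUTH = ("admin", "admin-password")

requests.patch(
    f"{BIGIP_MGMT}/mgmt/tm/ltm/virtual/~Common~internal_vs_1",
    auth=AUTH,
    verify=False,
    json={"sourceAddressTranslation": {"type": "automap"}},
).raise_for_status()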
Also, check this...
https://devcentral.f5.com/s/articles/deploy-bigip-in-aws-with-ha-across-azs-without-using-eips-33378