F5 in AWS Part 2 - Running BIG-IP in an EC2 Virtual Private Cloud
Updated for Current Versions and Documentation
- Part 1 : AWS Networking Basics
- Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
- Part 3: Advanced Topologies and More on Highly-Available Services
- Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
- Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12
Previously in this series, we discussed the networking fundamentals of Virtual Private Clouds (VPC) in Amazon’s Elastic Compute Cloud (EC2). Some of the topics we touched on include the impact of the removal of layer 2 access, limits on network elements like the number of interfaces and publicly routable IP addresses, and how to manage routing within your subnets. Today we’ll cover licensing models and images available in Amazon, sizing requirements, including the number of interfaces assignable to BIG-IP, some basic network topologies, and how you can use Amazon CloudFormation templates to make your life easier when deploying BIG-IP.
Licensing Models
There are two ways you can run BIG-IP in AWS: at a utility rate or with a Bring Your Own License (BYOL).
Utility Model
- You pay Amazon both for the compute and disk requirements of the instances and for the BIG-IP software license, at an hourly rate
- There are two forms: hourly and annual subscriptions. Annual subscriptions can save you 37% compared to hourly pricing; follow the instructions on AWS to purchase an annual subscription.
- When launching hourly instances, the devices boot into a licensed state and are immediately ready for provisioning
BYOL Model
- You pay Amazon only for the compute and disk footprint, not for the F5 software license.
- Version Plus licenses (like "V12" or "V13") can be reused in Amazon if you have them from previous deployments
- You must license the device after it launches, either manually or through orchestration.
- Available as individual licenses, or in volume as license pools.
All in all, the utility licensing model offers significant flexibility to scale your infrastructure up to meet demand while reducing the amount you pay for base traffic throughput. It may be advantageous to use this model if you experience large traffic swings. In contrast, you may be able to achieve this flexibility at a lower cost using BYOL license pools. With volume (pool) licensing, licenses can be reused across devices as you ramp these instances up and down.
In addition to choosing between utility and BYOL license models, you’ll also need to choose the licensed features and the throughput level. When taking a BYOL approach, the license (which you may have already) will have a maximum throughput level and will be associated with a Good/Better/Best (GBB) package. For more information on GBB, see Simplified Licensing: Compare our Good, Better, Best product bundles.
When deciding on the throughput level, you may license up to 1 Gbit/s using hourly AMIs. It is possible to import a 3 Gbit/s VE license in AWS, but note that AWS caps the throughput on an instance at 2 Gbit/s, so you will be limited by Amazon EC2 restrictions rather than by F5. Driving 2 Gbit/s through your virtual instance in AWS will require careful implementation of your BIG-IP configuration. Also note that the throughput restrictions on each image include both data-plane and management traffic. You can read more about throughput restrictions for virtual instances here:
K14810: Overview of BIG-IP VE license and throughput limits.
Once you have chosen a license model, GBB package, and throughput, select the corresponding AMI in the Amazon Marketplace.
Disk and Compute Recommendations
An astute reader may wonder why separate images exist for each GBB package. In an effort to maintain the smallest footprint possible, each AMI includes just enough disk volume for the licensed features. Each GBB package has different disk requirements, which are built into the AMI. For evidence of this, use the AWS CLI to see details on a specific image:
aws ec2 describe-images --filters "Name=name,Values=*F5 Networks BYOL BIGIP-13.1.0.2.0.0.6*"
Truncated output:
{ "Images": [ { "ProductCodes": [ { "ProductCodeId": "91wwm31qya4s3rkc5bv4jq9b3", "ProductCodeType": "marketplace" } ], "Description": "F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM", "VirtualizationType": "hvm", "Hypervisor": "xen", "ImageOwnerAlias": "aws-marketplace", "EnaSupport": true, "SriovNetSupport": "simple", "ImageId": "ami-3bbd0243", "State": "available", "BlockDeviceMappings": [ { "DeviceName": "/dev/xvda", "Ebs": { "Encrypted": false, "DeleteOnTermination": true, "VolumeType": "gp2", "VolumeSize": 82, "SnapshotId": "snap-0c9beaa9345422784" } } ], "Architecture": "x86_64", "ImageLocation": "aws-marketplace/F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM-98eb3c1e-ab48-41ff-9c94-d71a5d08e49f-ami-0c93b176.4", "RootDeviceType": "ebs", "OwnerId": "679593333241", "RootDeviceName": "/dev/xvda", "CreationDate": "2018-01-24T19:58:31.000Z", "Public": true, "ImageType": "machine", "Name": "F5 Networks BYOL BIGIP-13.1.0.2.0.0.6 - Better - Jan 16 2018 10_13_53AM-98eb3c1e-ab48-41ff-9c94-d71a5d08e49f-ami-0c93b176.4" },
From the above, you can see that the Good BYOL image configures a single 31 GB Elastic Block Store (EBS) volume, whereas the Best image comes with two EBS volumes totaling 124 GB of space.
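If you just want to compare disk footprints across packages without wading through the full output, the same describe-images call can be narrowed with a --query expression. A minimal sketch, assuming the same 13.1.0.2.0.0.6 build shown above (substitute whatever build is currently listed in the Marketplace):

# List each matching AMI with the size of its EBS volume(s)
aws ec2 describe-images \
  --filters "Name=name,Values=*F5 Networks BYOL BIGIP-13.1.0.2.0.0.6*" \
  --query 'Images[].{Name:Name,VolumeSizes:BlockDeviceMappings[].Ebs.VolumeSize}' \
  --output json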
On the subject of storage, we would like to take a moment to focus on analytics. While the analytics module is licensed in the "Good" package, you may need additional disk space in order to provision it. See this link (https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-setup-msft-hyper-v-11-5-0/3.html) for instructions on increasing the disk space on a specific volume. Another option for working around this issue is to use a "Better" AMI, which will ensure you have enough space to provision the analytics module.
In addition to storage, running BIG-IP as a compute node in EC2 also requires a minimum number of interfaces, vCPUs, and RAM. AskF5's Virtual Edition and Supported Hypervisors Matrix provides a list of recommended instance types, although you can choose alternatives as long as they support your architecture's configuration.
In short, as you choose higher-performance instance types in EC2, you get more RAM and more network interfaces, which allows you to create more advanced topologies and services.
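If you want to sanity-check how many interfaces (and how much RAM) a candidate instance type provides before committing to it, a reasonably recent AWS CLI can report this directly. The m5.xlarge/m5.2xlarge names below are only examples, not a recommendation; check the supported hypervisors matrix for the types F5 actually supports:

# Show ENI, per-ENI IPv4, and memory limits for a couple of candidate types
aws ec2 describe-instance-types \
  --instance-types m5.xlarge m5.2xlarge \
  --query 'InstanceTypes[].{Type:InstanceType,MaxENIs:NetworkInfo.MaximumNetworkInterfaces,IPv4PerENI:NetworkInfo.Ipv4AddressesPerInterface,MemoryMiB:MemoryInfo.SizeInMiB}' \
  --output table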
Basic Network Topologies
So with a limited number of interfaces, how do you build a successful multi-tier application architecture? Many customers might start with a directly connected architecture like that shown below:
In this architecture, 10.0.0.100 is the virtual server. The address matches an EC2 private IP on the external interface, either the first assigned to that interface (the primary private IP) or a secondary private IP. Not shown is the Elastic IP address (EIP) which maps to this private IP. We recommend using the primary private IP as the external self-IP on BIG-IP. An Elastic IP can then be attached to this primary private IP to allow outbound calls. The 1:1 NAT performed by Amazon between the public (elastic) IP and private IP is invisible to BIG-IP. Keep in mind that a publicly routable self-IP is required to use the BIG-IP failover mechanism, which makes API calls to AWS. We’ll discuss failover in a few moments. Secondary private IPs and corresponding EIPs on the external interface can then be used for each virtual server.
Given this discussion about interfaces and EIPs, be sure to consider that the instance type you choose in Amazon will dictate how many virtual servers you can run on BIG-IP. For example, given an m3.xlarge (which allows three interfaces) and the default account limit of 5 EIPs, you will be limited to 3 virtual servers. In this case, one interface is attached to each of the management, external, and internal subnets. On the external interface, you would attach 3 secondary private IPs, each with an EIP; the other two EIPs would be used for the management port and the external self-IP. To get more interfaces, move up an instance size (e.g., m3.xlarge -> m3.2xlarge). To get more EIPs, request an increase from Amazon. If you do use an EIP for the management port, be sure to ACL it appropriately.
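To make the interface and EIP arithmetic concrete, here is roughly what adding one more virtual server address looks like from the AWS side: add a secondary private IP to the external ENI, then map a new EIP to it. This is only a sketch; the ENI/allocation IDs and the 10.0.0.101 address are placeholders.

# Add a secondary private IP to the external interface (one per virtual server)
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0123456789abcdef0 \
  --private-ip-addresses 10.0.0.101

# Allocate an EIP and associate it with that specific secondary private IP
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --network-interface-id eni-0123456789abcdef0 \
  --private-ip-address 10.0.0.101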
The benefit of the directly connected architecture shown above, where BIG-IP can serve as the default gateway, is that each node in a tier can communicate with nodes in the other tiers and leverage virtual listeners on BIG-IP without having to be SNATed. This is sometimes preferred as it makes it simpler to implement east-west security and analytics. The problem, as shown below, is that as application or tenant density increases, so does the number of required interfaces.
Alternatively, routed architectures (shown below), where pool members live on remote networks, are more easily migrated and better suited to situations with limited network interfaces. In the case below, the route table for all pool members must contain a default route that leads back to BIG-IP. By doing so, you can:
- leverage BIG-IP for the outbound use case (securing outbound traffic)
- return internet traffic back through the BIG-IP and avoid SNAT’ing your internet-facing VIPs.
Note: this requires disabling the SRC/DST check on your BIG-IP instances/interfaces, as sketched below.
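A minimal sketch of the AWS side of this routed setup, with placeholder ENI and route table IDs: disable the source/destination check on the BIG-IP internal interface so EC2 will deliver traffic not addressed to the ENI itself, then point the pool members' default route at that interface.

# Disable src/dst check on the BIG-IP internal interface
# (can also be done per instance with modify-instance-attribute)
aws ec2 modify-network-interface-attribute \
  --network-interface-id eni-0123456789abcdef0 \
  --no-source-dest-check

# Point the pool members' route table default route back at BIG-IP
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --network-interface-id eni-0123456789abcdef0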
An alternative, and perhaps more realistic, view of the above looks like this:
Finally, it may make sense to attach an additional interface for each application to increase the application density on BIG-IP:
These routed architectures allow you to reduce the number of interfaces used to connect internal networks, which then lets you use the remaining interfaces to increase application density. Two potential drawbacks are the requirement for SNAT (as BIG-IP is no longer inline to intercept response traffic) and the additional network hop. Without SNAT, the upstream/downstream router will generally intercept the return traffic because the client is also on a directly connected or closer network.
Elastic IPs = Floating IPs and API-Based Failover
After you have figured out how to incorporate BIG-IP into your network, the final step before deploying applications and network services is ensuring you can maintain high availability.
One of the challenges in adapting BIG-IP for public clouds was that BIG-IP’s availability model (Device Service Clustering, or DSC) was tightly coupled to sharing L2/L3 floating addresses within the same L2 segment. The active device sent an L2 broadcast (GARP) to take over ownership of IP addresses and other network listeners. With the removal of L2 access, BIG-IP has adapted and replaced the GARP failover method with API calls to Amazon. These API calls toggle ownership of Amazon secondary private IP addresses between devices, and any EIPs which map to these secondary IP addresses will then point to the new active device. Note that floating IPs in BIG-IP terms are equivalent to secondary private IPs in the EC2 world.
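Conceptually, the failover API calls look something like the following, which moves a floating (secondary private) IP to the ENI of the newly active device. This is an illustrative sketch with placeholder values, not the exact sequence the BIG-IP failover scripts run:

# Reassign the floating address to the new active device's external ENI;
# --allow-reassignment lets it be pulled away from the previously active unit
aws ec2 assign-private-ip-addresses \
  --network-interface-id eni-0fedcba9876543210 \
  --private-ip-addresses 10.0.0.100 \
  --allow-reassignment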
One issue to be aware of with the API-based failover mechanism is the increase in failover time to roughly 10 seconds per EIP, which is the time it takes for the changes to propagate in AWS’s network. While this downtime is still significantly less than a DNS timeout, it is troublesome given that DSC was specifically introduced to provide sub-second failover. Newer applications built for cloud are typically designed to handle these changes in availability concepts, but this makes it more challenging to shift traditional workloads to layer-3-only environments like AWS.
Historically, the DSC feature has also allowed the use of BIG-IP as a highly available default gateway. This was accomplished by directing the default route to the internal floating self-IP on a cluster or by directly connecting application servers. In Amazon, the default route may point to an internet gateway or a device interface, but not to a statically named IP address. We'll leave the fix for this problem for the next article, where we will also talk about other deployment models of BIG-IP in AWS, including those which span availability zones.
CloudFormation Templates
To close this article, we’ve decided to provide examples of how BIG-IP can be deployed using CloudFormation Templates (CFTs) in AWS. CloudFormation is an AWS service that lets you define a set of EC2 resources that can be deployed automatically and deterministically in your account. These application “stacks” are defined in JSON (or YAML) templates, making them easy to read and share.
F5 provides several CFTs with options for licensing model, high availability, and auto scaling (for the LTM and WAF modules). Please review the Big-IP Version Matrix for AWS CFT Templates document within our f5-aws-cloudformation GitHub repository to determine your deployment requirements. Enjoy!
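Once you have chosen a template and gathered its parameters, deploying a stack from the command line is a single call. The stack name, template URL, and parameter keys below are placeholders; the real parameter names are documented per template in the f5-aws-cloudformation repository.

# Launch a CFT-defined BIG-IP stack (values shown are placeholders)
aws cloudformation create-stack \
  --stack-name bigip-example \
  --template-url https://example-bucket.s3.amazonaws.com/bigip-example.template \
  --parameters ParameterKey=sshKey,ParameterValue=my-key-pair \
               ParameterKey=licenseKey1,ParameterValue=XXXXX-XXXXX-XXXXX-XXXXX-XXXXXXX \
  --capabilities CAPABILITY_IAM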
- gbbaus_104974 (Historic F5 Account)
Hi Chris. With AWS now supporting 4 Gbit/s (in smaller regions) and 10 Gbit/s throughput in some larger regions, can we just leverage a 5 Gbit/s or 10 Gbit/s BYOL license to increase throughput through an AWS BIG-IP instance?
- Zuke (Cirrostratus)
I'm posting a link to each part in the series here:
Part 1: AWS Networking Basics
Part 2: Running BIG-IP in an EC2 Virtual Private Cloud
Part 3: Advanced Topologies and More on Highly-Available Services
Part 4: Orchestrating BIG-IP Application Services with Open-Source Tools
Part 5: Cloud-init, Single-NIC, and Auto Scale Out of BIG-IP in v12
- Radovan_Gibala (Nimbostratus)
Chris, we tried the routed architecture and configured the AWS internal router to route the return traffic back to F5, but it didn't work; it only worked in SNAT mode. Without SNAT, the internal router didn't pass the traffic through F5. Could you please advise more on the router configuration and troubleshooting? Thanks, Radovan