With the release of TMOS version 16.1, BIG-IP now supports AWS Gateway Load Balancer (GWLB).
With this integration it is now much simpler to insert BIG-IP security services into an AWS environment while maintaining high availability and supporting elastic scalability of the BIG-IPs.
F5 BIG-IP delivers a wide range of application and security services. Depending on the service and other requirements, the BIG-IPs are typically deployed in one of two modes: Network mode or Proxy mode.
Important: GWLB is only applicable today with Network deployment mode.
First, you should identify which deployment mode is relevant for you:
Let's examine two scenarios in which we use the BIG-IP for inspecting network traffic:
In the two scenarios mentioned above, we need to ensure the relevant flows are routed to the BIG-IP. We also need to verify that traffic is symmetric – any flow that was sent to a BIG-IP must also return through that same BIG-IP. Today, without GWLB, our method of getting traffic to the BIG-IP is to manipulate the route tables accordingly. An AWS route table accepts an ENI (Elastic Network Interface, a network interface of an EC2 instance running BIG-IP) as the target for a specific route.
Since a route only accepts a single ENI, we can only send traffic to a single BIG-IP instance – which of course creates a single point of failure. We can improve this design by leveraging F5's Cloud Failover Extension (CFE), which allows us to create an active/standby pair. CFE automatically manipulates the route tables to always point traffic at the ENI of the active device. You can read more on this design here: https://clouddocs.f5.com/products/extensions/f5-cloud-failover/latest/userguide/aws.html?highlight=r...
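For illustration, the route-table manipulation that CFE performs on failover boils down to an AWS CLI call like the following (the route table ID and ENI ID are placeholders for this sketch):

```shell
# On failover, repoint the inspected route at the ENI of the newly active BIG-IP.
# rtb-0123... and eni-0456... are placeholder IDs.
aws ec2 replace-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 \
  --network-interface-id eni-0456789abcdef0123
```

CFE issues the equivalent API call automatically whenever the active device changes, so traffic always follows the surviving BIG-IP.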
In summary, the network deployment of the BIG-IP in AWS has the following challenges:
To understand how AWS GWLB works, let's start by taking a closer look at the GWLB. We can break down the GWLB functionality into two main elements:
On the frontend: GWLB presents itself as a next-hop L3 Gateway to manage traffic transparently.
On the backend: GWLB acts like an L4 load balancer, very similar to an NLB (Network Load Balancer).
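As a rough sketch of the backend side, a GWLB and its target group can be created with the AWS CLI along these lines (all names, IDs, and ARNs are placeholders):

```shell
# Create the Gateway Load Balancer in the inspection subnet (placeholder ID).
aws elbv2 create-load-balancer --name bigip-gwlb --type gateway \
  --subnets subnet-0123456789abcdef0

# Gateway target groups must use the GENEVE protocol on port 6081.
aws elbv2 create-target-group --name bigip-targets \
  --protocol GENEVE --port 6081 --target-type instance \
  --vpc-id vpc-0123456789abcdef0

# Register the BIG-IP instance(s) and forward all traffic to the target group.
aws elbv2 register-targets --target-group-arn <target-group-arn> \
  --targets Id=i-0123456789abcdef0
aws elbv2 create-listener --load-balancer-arn <gwlb-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```

Note that a gateway load balancer's listener has no protocol or port of its own – it forwards everything to the target group, which is where the L4 load-balancing behavior lives.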
BIG-IP integration details:
BIG-IP traffic flow:
The BIG-IP must be deployed in the same VPC as the GWLB. The BIG-IP receives traffic from the GWLB over a GENEVE tunnel that uses the BIG-IP's private IP address.
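On the BIG-IP side, the GENEVE tunnel can be sketched in tmsh as follows (10.0.1.10 stands in for the BIG-IP's private address, and the tunnel self IP subnet is an arbitrary example):

```shell
# Create a GENEVE tunnel terminated on the BIG-IP's private IP (placeholder address).
tmsh create net tunnels tunnel geneve-tunnel \
  profile geneve local-address 10.0.1.10 remote-address any

# Give the tunnel a self IP so traffic inside it can be processed (example subnet).
tmsh create net self geneve-self address 10.131.0.2/24 \
  vlan geneve-tunnel allow-service default
```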
Next, we create a virtual server that is enabled on the new tunnel. The tunnel receives the traffic from the GWLB and decapsulates it; the inner packets are then processed by the inspection virtual server.
After the virtual server processes a packet and applies the defined security profiles, it needs to forward the packet back into the tunnel. The way to 'force' traffic back into the tunnel is to create a fake node that 'lives' inside the tunnel and use it as the target pool of the virtual server.
Once the tunnel interface receives the packets from the virtual server, it encapsulates them with GENEVE and forwards them back to the GWLB.
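Putting these steps together, the fake node, pool, and wildcard inspection virtual server could be sketched in tmsh like this (names and addresses are illustrative; the node address just needs to fall inside the tunnel's self IP subnet):

```shell
# A 'fake' node inside the tunnel subnet; it exists only to steer traffic back
# into the tunnel. No health monitor, since nothing actually answers there.
tmsh create ltm node geneve-node address 10.131.0.1 monitor none
tmsh create ltm pool geneve-pool members add { geneve-node:0 } monitor none

# Wildcard virtual server enabled only on the tunnel; address and port
# translation are disabled, so inspected packets leave unchanged and are
# re-encapsulated with GENEVE by the tunnel interface.
tmsh create ltm virtual inspection-vs \
  destination 0.0.0.0:0 ip-protocol any profiles add { fastL4 } \
  vlans-enabled vlans add { geneve-tunnel } pool geneve-pool \
  translate-address disabled translate-port disabled
```

Security profiles (AFM policies, and so on) would then be attached to this virtual server to perform the actual inspection.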
The following diagram describes the traffic flow within a specific BIG-IP device:
Considerations and known issues:
With the GWLB integration you can now create an elastic group of BIG-IPs that is exposed to the rest of the AWS network using AWS-native networking constructs. GWLB is responsible for distributing traffic evenly across the group while maintaining flow affinity, so each flow is always routed to the same BIG-IP in the group – solving the traffic-symmetry requirement without requiring any SNAT.
The security inspection zone can be deployed in its own VPC, allowing better administrative separation. The inspection service is exposed using VPC endpoints; these endpoints are tied to the GWLB and can be used as the target of different route tables (TGW, ingress, subnets).
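As a sketch, publishing the GWLB as an endpoint service and consuming it from another VPC looks roughly like this with the AWS CLI (the ARN, service name, and IDs are placeholders):

```shell
# In the inspection VPC: publish the GWLB as a VPC endpoint service.
aws ec2 create-vpc-endpoint-service-configuration \
  --gateway-load-balancer-arns <gwlb-arn>

# In a consumer VPC: create a Gateway Load Balancer endpoint. Its endpoint ID
# can then be used as the target of TGW, ingress, or subnet route tables.
aws ec2 create-vpc-endpoint --vpc-endpoint-type GatewayLoadBalancer \
  --service-name <endpoint-service-name> \
  --vpc-id vpc-0abcdef01234567890 \
  --subnet-ids subnet-0abcdef01234567890
```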
The BIG-IP and GWLB integration enables the industry leading BIG-IP security services with the following benefits:
Check out the next articles in this series -
Test the integration yourself - Check out our self-service lab that you can deploy in your own AWS account (Fully automated deployment using Terraform):