BIG-IP integration with AWS Gateway Load Balancer - Overview

Introduction

With the release of TMOS version 16.1, BIG-IP now supports AWS Gateway Load Balancer (GWLB).

This integration makes it much simpler to insert BIG-IP security services into an AWS environment while maintaining high availability and supporting elastic scalability of the BIG-IPs.

When to use GWLB? 

F5 BIG-IP delivers a wide range of application and security services. Depending on the service and other requirements, BIG-IPs are typically deployed in one of two modes: Network mode or Proxy mode.

Important: Today, GWLB is applicable only with the Network deployment mode.

First, you should identify which deployment mode is relevant for you:

  1. Network (GWLB Supported)
  • Common use cases: Network firewall, DDoS protection, Transparent WAF
  • Flow transparency is maintained (no source or destination NAT)
  • Traffic is directed to the BIG-IPs by the network's routing; returning traffic to the same BIG-IP to maintain traffic symmetry is likewise handled by routing
  2. Proxy (Not GWLB Supported)
  • Provides ingress services to the application (WAF, LB, L7 DDoS, Bot protection); services are applied to an application-specific virtual server on the BIG-IP
  • The BIG-IP uses SNAT (source NAT) to ensure that return traffic arrives at the same BIG-IP, thus maintaining traffic symmetry
  • Directing users' traffic to the BIG-IPs is usually done using DNS: an FQDN maps to a specific virtual server on the BIG-IP
  • Important: GWLB does not support proxy devices in the inspection zone. Traffic flows must remain unchanged by the inspection device. For more details, see https://aws.amazon.com/blogs/networking-and-content-delivery/integrate-your-custom-logic-or-appliance-with-aws-gateway-load-balancer/


Existing challenges of BIG-IP Network deployment without GWLB

Let's examine two scenarios in which we use the BIG-IP for inspecting network traffic:

  1. Ingress/Egress protection: In this scenario, we want to inspect all traffic coming into and going out of the AWS network. The security services that are most relevant for this scenario are: Firewall, DDoS protection, WAF and IPS.
  2. East-west (inter-VPC / networks) protection: In this scenario, we want to inspect all traffic between VPCs and between on-premises networks and VPCs. The most common security services used are Firewall and IPS.

In the two scenarios mentioned above, we need to ensure the relevant flows are routed to the BIG-IP. We also need to verify that traffic is symmetric, meaning any flow that was sent to the BIG-IP must also return through the same BIG-IP. Today, without GWLB, our method of getting traffic to the BIG-IP is to manipulate the route tables accordingly. An AWS route table accepts an ENI (Elastic Network Interface, a network interface of an EC2 instance running a BIG-IP) as the 'gateway' for a specific route.

Since a route accepts only a single ENI, we can send traffic to only a single BIG-IP instance, which of course creates a single point of failure. We can improve this design by leveraging F5's Cloud Failover Extension (CFE), which allows us to create an active/standby pair. CFE automatically manipulates the route tables to always point traffic at the ENI of the active device. You can read more on this design here: https://clouddocs.f5.com/products/extensions/f5-cloud-failover/latest/userguide/aws.html?highlight=route
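To make the routing piece concrete, here is a minimal sketch (Python with boto3; the route table and ENI IDs are placeholders for illustration) of the kind of route manipulation that CFE automates: pointing a route table's default route at the active BIG-IP's ENI.

```python
# Minimal sketch: point a route at a BIG-IP's ENI (the operation CFE automates).
# All IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Send all outbound traffic in this route table through the active BIG-IP.
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",        # workload subnet route table
    DestinationCidrBlock="0.0.0.0/0",
    NetworkInterfaceId="eni-0123456789abcdef0",  # ENI of the active BIG-IP
)
```

On failover, CFE effectively re-runs this kind of call with the new active device's ENI, which is exactly the dependency on route table manipulation that GWLB removes.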

In summary, the network deployment of the BIG-IP in AWS has the following challenges:

  1. Limited scale - Throughput, concurrent connections, etc., since only a single device processes the traffic
  2. Unoptimized compute – The passive device sits idle
  3. Mixed admin domains – The BIG-IPs are deployed in the same VPC as the workloads they protect, and route table changes are controlled by F5's CFE, which is not always wanted or possible



How GWLB works with BIG-IP

To understand how AWS GWLB works, let's start by taking a closer look at it. We can break the GWLB functionality down into two main elements:

On the frontend: GWLB presents itself as a next-hop L3 Gateway to manage traffic transparently.

  • GWLB uses a new component: the GWLB endpoint. This endpoint acts as an ENI (Elastic Network Interface) of the GWLB. The only way to send traffic to GWLB is through this endpoint; therefore, it is deployed in a VPC that consumes the security services (i.e., the consumer VPC).
  • Admin domain separation: The GWLB and the BIG-IP fleet are deployed in their own VPC (the security VPC). One GWLB can receive and inspect traffic from many VPCs (find more detail on design patterns in the 'Deployments' section)
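As a hedged illustration of this frontend plumbing, the sketch below (Python/boto3; all IDs and ARNs are placeholders, and the GWLB itself is created in the backend example further down) publishes a GWLB as a VPC endpoint service and creates a GWLB endpoint in a consumer VPC:

```python
# Sketch: expose a GWLB as an endpoint service and consume it from another VPC.
# All IDs/ARNs are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Publish the GWLB (living in the security VPC) as a VPC endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=["arn:aws:elasticloadbalancing:...:loadbalancer/gwy/..."],
    AcceptanceRequired=False,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Create a GWLB endpoint in the consumer VPC; route tables can then point
# traffic at this endpoint instead of at a specific BIG-IP ENI.
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName=service_name,
    VpcId="vpc-0123456789abcdef0",               # consumer VPC
    SubnetIds=["subnet-0123456789abcdef0"],
)
```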

On the backend: GWLB acts like an L4 load balancer, very similar to NLB (Network Load Balancer)

  • Provides traffic distribution across a fleet of BIG-IP instances.
  • Routes flows and assigns them to a specific BIG-IP for stateful inspection.
  • Performs health checks of the BIG-IPs, and routes traffic to healthy BIG-IPs only.
  • Traffic is sent over an L3 tunnel using the GENEVE protocol (requires BIG-IP version 16.1 and above)
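The backend wiring can be sketched as follows (Python/boto3; the names, subnets, and instance IDs are illustrative placeholders, not a prescribed configuration):

```python
# Sketch: create a GWLB, a GENEVE target group, and register BIG-IP instances.
# All IDs and names are placeholders for illustration.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# The GWLB lives in the security VPC, alongside the BIG-IP fleet.
gwlb = elbv2.create_load_balancer(
    Name="bigip-gwlb",
    Type="gateway",
    Subnets=["subnet-0aaa...", "subnet-0bbb..."],
)
gwlb_arn = gwlb["LoadBalancers"][0]["LoadBalancerArn"]

# GWLB target groups always use GENEVE on port 6081; health checks run over
# a regular protocol/port that the BIG-IP answers on.
tg = elbv2.create_target_group(
    Name="bigip-fleet",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0security...",
    HealthCheckProtocol="TCP",
    HealthCheckPort="443",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the BIG-IP instances and forward all GWLB traffic to them.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0bigip1..."}, {"Id": "i-0bigip2..."}],
)
elbv2.create_listener(
    LoadBalancerArn=gwlb_arn,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```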



BIG-IP integration details:

Requirements:

  • BIG-IP version 16.1 and above

BIG-IP traffic flow:

The BIG-IP must be deployed in the same VPC as the GWLB. The BIG-IP receives traffic from GWLB over a GENEVE tunnel that uses the BIG-IP's private IP address.

Next, we create a virtual server that is enabled on the new tunnel. The tunnel receives the traffic from GWLB and decapsulates it; the traffic is then processed by the inspection virtual server.

After the virtual server processes the packet and applies the defined security profiles, it needs to forward the packet back into the tunnel. The way to 'force' traffic back into the tunnel is to create a fake node that 'lives' inside the tunnel and use it as the target pool of the virtual server.

Once the tunnel interface receives the packets from the virtual server, it encapsulates them with GENEVE and forwards them back to GWLB.
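Here is a minimal configuration sketch of these steps using the iControl REST API from Python. All names and addresses are illustrative assumptions, and the exact tunnel/self-IP addressing depends on your deployment; the self-service labs linked at the end of this article contain complete, tested configurations.

```python
# Sketch: GENEVE tunnel, fake node, and inspection virtual server on a BIG-IP
# via iControl REST. Names, addresses, and credentials are placeholders.
import requests

BIGIP = "https://10.0.1.10"          # BIG-IP management address (placeholder)
s = requests.Session()
s.auth = ("admin", "password")       # use token auth in real deployments
s.verify = False                     # lab only; validate certs in production

def post(path, body):
    r = s.post(f"{BIGIP}/mgmt/tm/{path}", json=body)
    r.raise_for_status()

# 1. GENEVE tunnel terminating on the BIG-IP's private (self) IP.
post("net/tunnels/tunnel", {
    "name": "geneve-tunnel",
    "profile": "geneve",
    "localAddress": "10.0.1.10",     # data-plane self IP that GWLB targets
    "remoteAddress": "any",
})

# 2. Self IP on the tunnel, plus the 'fake' node living inside it.
post("net/self", {
    "name": "geneve-self",
    "address": "10.131.0.1/24",
    "vlan": "geneve-tunnel",
    "allowService": "all",
})
post("ltm/node", {"name": "geneve-node", "address": "10.131.0.2"})
post("ltm/pool", {
    "name": "geneve-pool",
    "members": [{"name": "geneve-node:0"}],   # port 0 = any port
})

# 3. Wildcard inspection virtual server enabled only on the tunnel.
#    Disabling address/port translation and preserving the source port
#    keep the packet's 5-tuple intact, as GWLB requires.
post("ltm/virtual", {
    "name": "inspection-vs",
    "destination": "0.0.0.0:0",      # any address, any port
    "mask": "any",
    "source": "0.0.0.0/0",
    "ipProtocol": "any",
    "profiles": [{"name": "fastL4"}],
    "vlansEnabled": True,
    "vlans": ["geneve-tunnel"],
    "translateAddress": "disabled",
    "translatePort": "disabled",
    "sourcePort": "preserve-strict",
    "pool": "geneve-pool",
})
```

The fake node never terminates traffic; it simply forces the LTM forwarding decision back into the tunnel, where the packets are re-encapsulated toward GWLB.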

The following diagram describes the traffic flow within a specific BIG-IP device:

Considerations and known issues:

  • Traffic flows inside the 'inspection zone' must remain unchanged, i.e., no change to the packet's 5-tuple is allowed.
  • All virtual servers are supported as long as the packet's 5-tuple is maintained. That might require changing some configurations, for example:
    • Disable the default 'address translation' feature of a 'standard' virtual server.
    • Set the 'source port' setting to 'Preserve strict'.
  • Payload manipulation is supported as long as the packet's 5-tuple is maintained. Some examples:
    • HTTP rewrites
    • AWAF blocking responses – AWAF responds with a 'blocking page'
  • High availability
    • When a BIG-IP instance in the target group fails, GWLB continues to send existing flows to the failed instance. Only new connections are directed to another instance.
    • Deploy BIG-IP instances across availability zones according to the desired AZ availability requirements.
    • The default behavior of GWLB is to keep traffic in the same AZ, meaning traffic that was received on an endpoint in AZ1 will only be forwarded to instances in AZ1.
    • That behavior is controlled by the 'CrossZoneLoadBalancing' flag (see the sketch after this list): https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-crosszone-lb.html
    • If you do change the 'CrossZoneLoadBalancing' flag, keep in mind that GWLB will distribute traffic across AZs even when all instances are healthy, and will incur cross-AZ traffic costs.
  • Flow timeout
    • GWLB has its own protocol timeouts (350 seconds for TCP, 120 for UDP). The timeout values on the BIG-IP must be smaller than the ones on GWLB; by default, BIG-IP uses 300 seconds for TCP and 60 for UDP.
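As a hedged sketch of both knobs, the snippet below enables cross-zone load balancing on the GWLB with boto3 (the standard ELBv2 attribute key) and creates a BIG-IP fastL4 profile whose idle timeout stays below GWLB's TCP timeout, via iControl REST. The ARN, address, and profile name are illustrative placeholders:

```python
# Sketch: cross-zone flag on the GWLB and flow idle timeout on the BIG-IP.
# ARN, management address, and names are placeholders.
import boto3
import requests

# Enable cross-zone load balancing on the GWLB.
elbv2 = boto3.client("elbv2", region_name="us-east-1")
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/gwy/...",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)

# Create a fastL4 profile whose idle timeout (seconds) is below GWLB's
# 350s TCP timeout; attach it to the inspection virtual server.
s = requests.Session()
s.auth = ("admin", "password")
s.verify = False                      # lab only
s.post("https://10.0.1.10/mgmt/tm/ltm/profile/fastl4", json={
    "name": "gwlb-fastl4",
    "defaultsFrom": "fastL4",
    "idleTimeout": 300,
}).raise_for_status()
```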


With the GWLB integration, you can now create an elastic group of BIG-IPs that is exposed to the rest of the AWS network using AWS-native networking constructs. GWLB is responsible for distributing traffic evenly across the group while maintaining flow affinity, so that each flow is always routed to the same BIG-IP in the group, solving the traffic-symmetry problem without requiring any SNAT.


The security inspection zone can be deployed in its own VPC, allowing better administrative separation. The inspection service is exposed using VPC endpoints; those endpoints are tied to the GWLB and can be used as the target of different route tables (TGW, ingress, subnets).


Summary

The BIG-IP and GWLB integration delivers the industry-leading BIG-IP security services with the following benefits:

  • Elastic scalability: Scale your deployment based on your actual usage by horizontally scaling your BIG-IPs.
  • Simplified connectivity: Leverage AWS-native networking constructs to 'insert' BIG-IP security services into different traffic flows (North-South, East-West)
  • Cost effectiveness: Consolidate BIG-IP deployments and size the deployment based on actual usage


Next steps:

Check out the next articles in this series -

  • Ingress/Egress VPC inspection with BIG-IP and GWLB deployment pattern
  • Ingress/Egress and inter VPC inspection with BIG-IP and GWLB deployment pattern


Test the integration yourself - Check out our self-service lab that you can deploy in your own AWS account (Fully automated deployment using Terraform):

https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-fw​

https://github.com/f5devcentral/f5-digital-customer-engagement-center/tree/main/solutions/security/ingress-egress-inter-vpc-fw-gwlb



Published Jul 29, 2021
