F5 BIG-IP SSLO in public cloud: AWS inbound L3 use case
In this article, I will cover how to achieve BIG-IP SSL Orchestrator (SSLo) functionality in AWS. I will specifically focus on an Inbound L3 use case in AWS, and I will point out some differences to this design in Azure.
In my previous article covering SSLo in Azure, I showed how traffic can be directed through Azure load balancer, BIG-IP SSLo, and Palo Alto, without ever needing to NAT the destination or source IP address. The big benefit here is that the Palo Alto admin can see the true destination and source IP of traffic. As you can read in the article, the key to that architecture is route tables that can direct the traffic between the devices, dedicated subnets to which we can associate those route tables, and the ability to update those route tables upon failover.
I will now do the same in AWS. I'll meet the same requirements, but I'll also use an AWS Gateway Load Balancer (GWLB) for high availability (HA) of the F5 BIG-IP. I will also use a Fortinet device instead of a Palo Alto device, just because this happened to be what my most recent customer wanted.
Why use AWS GWLB?
GWLB is not a requirement for this solution, but I'll use it in this case because it's a nice way to achieve 3 things simultaneously:
- High availability (HA) of BIG-IP
- Multiple active BIG-IP appliances
- Preservation of source and destination IP addresses
There are alternatives to GWLB, but each of them achieves only 2 of these 3 simultaneously.
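To give a sense of the GWLB plumbing involved, here is a condensed AWS CLI sketch (all names, subnet/VPC IDs, and ARNs below are placeholders, not values from my lab):

```shell
# Sketch only: IDs and names are placeholders.
# Create the Gateway Load Balancer in the security VPC
aws elbv2 create-load-balancer \
  --name sslo-gwlb --type gateway \
  --subnets subnet-0123456789abcdef0

# GWLB targets speak Geneve on UDP 6081; register each BIG-IP dataplane IP here
aws elbv2 create-target-group \
  --name bigip-targets --protocol GENEVE --port 6081 \
  --vpc-id vpc-0123456789abcdef0 --target-type ip

# A GWLB listener has no protocol/port of its own; it forwards all flows
aws elbv2 create-listener \
  --load-balancer-arn <gwlb-arn> \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```

You would then create a Gateway Load Balancer endpoint (GWLBe) in the application VPC and point route tables at it, which is covered in the articles linked below.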
What are the alternatives to GWLB when looking for high availability?
This is a side note, so feel free to skip this section. The alternatives to GWLB are the Cloud Failover Extension (CFE), regular AWS load balancers, or DNS-based failover with separate BIG-IP appliances.
Cloud Failover Extension (CFE) can be used to remap EIPs between Active/Standby devices, move private IP addresses between devices, or update route tables. (One of my most popular articles over the years has been how to use route table updates for HA between appliances.) However, CFE supports Active/Standby only.
AWS load balancers can make BIG-IP devices Active/Active, but they will change the destination and source IP of traffic. (Even if you use Proxy Protocol for source IP preservation, ELB and ALB will require the destination IP to be NAT'd.)
And DNS-based load balancing allows multiple standalone devices to be Active simultaneously, but they won't be clustered in a typical HA cluster.
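For reference, a CFE declaration for the route-table-update style of failover looks roughly like this. This is a sketch based on the CFE declaration schema; the tag value, address range, and next-hop IPs are examples, so check the CFE documentation for your version before using it:

```json
{
  "class": "Cloud_Failover",
  "environment": "aws",
  "externalStorage": {
    "scopingTags": { "f5_cloud_failover_label": "sslo-demo" }
  },
  "failoverRoutes": {
    "enabled": true,
    "scopingTags": { "f5_cloud_failover_label": "sslo-demo" },
    "scopingAddressRanges": [ { "range": "0.0.0.0/0" } ],
    "defaultNextHopAddresses": {
      "discoveryType": "static",
      "items": [ "10.0.1.10", "10.0.1.11" ]
    }
  }
}
```

On failover, CFE rewrites the tagged route table entries so the default route points at the newly Active device's next-hop address.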
Starting with GWLB
Here's a lab I put together to show a working architecture in AWS:
I leaned heavily on existing articles.
If you are setting up AWS GWLB with BIG-IP for the first time, you can start with BIG-IP integration with AWS Gateway Load Balancer... - DevCentral by Yossi Rosenboim.
I won't repeat Yossi's instructions, but I'll offer some reminders about things that tripped me up when following his work:
- Don't forget that your BIG-IP must be version 16.1 or higher to support the Geneve encapsulation that AWS uses in GWLB. (If you use an earlier version, everything might look the same in your BIG-IP config, and traffic may even flow across the Geneve tunnel, but it won't show up as inbound packets on your listening Virtual Server.)
- I started with Yossi's linked Terraform demo, but as of Nov 2023 it was a little out of date: I had to update the BIG-IP AMI referenced in the Terraform code, as well as the required Terraform version. I also noticed that his demo deployed a single-NIC VM. In the end, I used my own BIG-IP and simply configured it with the tmsh commands in this file that I pulled from his demo.
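To give a sense of the BIG-IP side of the GWLB integration, here is a condensed tmsh sketch. The object names and addresses are examples from my lab notes, not requirements; Yossi's article has the full set of commands:

```
# Geneve tunnel terminating traffic from the GWLB (requires BIG-IP 16.1+)
tmsh create net tunnels tunnel geneve-tunnel profile geneve \
    local-address 10.0.1.10 remote-address any

# Self-IP on the tunnel so the wildcard virtual server can listen there
tmsh create net self geneve-self address 10.131.0.1/24 \
    vlan geneve-tunnel allow-service all

# Wildcard virtual server that receives everything arriving over the tunnel,
# with no address or port translation
tmsh create ltm virtual inspect-all destination 0.0.0.0:any \
    ip-protocol any vlans-enabled vlans add { geneve-tunnel } \
    translate-address disabled translate-port disabled
```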
Adding inspection: GWLB + BIG-IP SSLo + Fortinet
If you have GWLB with BIG-IP working, and are looking to add SSLo to your environment, Increase Security in AWS without Rearchitecting your Applications - Part 2: Wednesday Morning by Heath Parrot is a great article. I especially like Heath's diagrams, such as this one.
Below, I'll show how SSLo in AWS differs from what I configured to make this work in Azure, and how I did this with Fortinet devices specifically.
How is this different from SSLo in Azure?
In my Azure lab architecture, I used Azure LB's to give me HA of the BIG-IP devices, and Azure LB's between the BIG-IP and the Palo Alto appliances.
Because Azure LB's have the option of disabling Destination NAT, I was able to keep true source and destination IP addresses all the way through the Palo Alto inspection. (And no GWLB was used in my Azure architecture.)
Because of the way Azure maintains flow symmetry on load balancers, my Palo Altos required a single NIC behind the Azure LB, like this example from Microsoft. In turn, this meant that my BIG-IPs and Palo Altos were in separate subnets, which in turn meant I had to use route tables in Azure to forward traffic between my SSLo inspection devices and the BIG-IPs.
In summary, understanding the 3 route tables called out in my Azure diagram is the key to a working deployment of SSLo in Azure. In AWS, we aren't relying on AWS route tables, but rather using multiple NICs on the inspection appliance, relying on the next hop configured, and ensuring that our 2x BIG-IP NICs and our 2x Fortinet NICs are Layer-2 adjacent (i.e., in the same subnets).
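In tmsh terms, the Layer-2 adjacency boils down to the BIG-IP having self-IPs in the same two subnets as the inspection appliance's dataplane NICs, with the inspection service's next hop set to the appliance's address in the "to" subnet. A minimal sketch (the VLAN names and addresses are examples I've chosen for illustration):

```
# Subnet shared with the inspection device's inbound NIC ("to inspection")
tmsh create net self to-inspect address 10.0.2.10/24 vlan to-inspect-vlan

# Subnet shared with the inspection device's outbound NIC ("from inspection")
tmsh create net self from-inspect address 10.0.3.10/24 vlan from-inspect-vlan
```

In the SSLo topology, the L3 inspection service then sends decrypted traffic to the appliance's IP in the first subnet (e.g., 10.0.2.20) and receives it back on the second.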
Heath has a nice diagram showing the subnet configuration in AWS:
What about Fortinet devices specifically?
I made a few notes about my experience learning Fortinet firewalls before discovering that my colleague KevinGallaugher had written several great articles. However, my customer was confused by the different options, so I'll call out here that I did not use Explicit Proxy in this inbound architecture. I simply used my Fortinet device as a Layer 3/4 firewall with straightforward routing of traffic: no NAT, no SNAT.
A summary of my notes:
- You will need 2x dataplane NICs on your Fortinet. You may want to dedicate a VRF ID.
- The BIG-IP and Fortinet will share 2 subnets.
- BIG-IP will have 2x NICs: one to send traffic to the Fortinet and one to receive it back.
- Fortinet will have 2x NICs: one to receive traffic from the BIG-IP and one to send it back.
- If you do have an additional NIC for management, I recommend not using a default route via the mgmt interface. Use a jump host within your mgmt subnet instead.
- Fortinet has default anti-spoofing and anti-replay measures, which may drop your traffic if routes are configured incorrectly, making packet captures and troubleshooting difficult. The features sound roughly similar to what I know as the loose init/loose close and ALH concepts. In any case, I was able to disable these measures for troubleshooting.
- For me, the handiest command I learned for Fortinet was
# get router info routing-table all
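Putting the notes above together, the FortiOS side of my lab looked roughly like this. Treat it as a sketch: the interface names, VRF ID, and addresses are examples, and your policy will be stricter than an allow-all:

```
# Two dataplane interfaces in the subnets shared with the BIG-IP
config system interface
    edit "port2"
        set vrf 1
        set ip 10.0.2.20 255.255.255.0
    next
    edit "port3"
        set vrf 1
        set ip 10.0.3.20 255.255.255.0
    next
end

# Return traffic goes back toward the BIG-IP, not out a mgmt default route
config router static
    edit 1
        set dst 0.0.0.0 0.0.0.0
        set gateway 10.0.3.10
        set device "port3"
    next
end

# Plain L3/L4 allow policy with routing only: no NAT, no SNAT
config firewall policy
    edit 1
        set srcintf "port2"
        set dstintf "port3"
        set srcaddr "all"
        set dstaddr "all"
        set action accept
        set schedule "always"
        set service "ALL"
        set nat disable
    next
end
```

With this in place, `get router info routing-table all` should show the static route via port3 alongside the connected routes for both shared subnets.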
A working GWLB + BIG-IP + SSLO + Fortinet lab
My final lab environment is shown below, along with the traffic flow. Again, it's not AWS route tables doing the magic here, but rather the routes configured on our appliances, which are Layer-2 adjacent.
If you're running security appliances like Palo Alto or Fortinet, using BIG-IP SSLo is a great idea because it offloads decryption from your appliances and lets you load balance them easily in Active/Active architectures. There are a few options in AWS (GWLB among them), and your preference may depend on your requirements, but SSLo functionality can be achieved in the public cloud, and I'd even call it fun!
Please reach out if you need further info. There are a few of us here who have done this in the real world and enjoy sharing our knowledge!