Living on the AWS Edge - BIG-IP on AWS Outposts

Many organizations are looking at AWS Outposts for hybrid or edge computing scenarios.  For F5 customers running AWS Outposts in their Data Center (DC), an existing BIG-IP fleet can provide security, traffic management, and optimization services to applications running both in the DC and on the Outpost site.

But, can you run BIG-IP on AWS Outposts?  Of course!

What is an Outpost?

When I think of Outposts, I think of it in the following manner: you have a site into which you deployed one or more physical Outpost racks/servers (1:N) and, as your use of the technology grows, you may end up with multiple sites (N:N) that need application, networking, and security services.  Note that each of these sites will be anchored to an AWS region.

Outpost Rack

Outposts - Logical Relationships to AWS

Outpost systems have two relationships.

1. The first relationship is with other AWS constructs such as VPCs and Availability Zones. When you deploy an Outpost, it is "anchored" to an Availability Zone (AZ) and becomes a logical extension of that construct, with VPN tunnels, known as service links, back to that AZ.

2. The second relationship is with your data center and/or edge facilities, where Outpost racks connect to your network hardware over multiple physical uplinks.  While an Outpost is a logical extension of AWS, it is physically located in a non-AWS facility of your choice.

These Outpost racks connect to your network via virtual interfaces (VIFs) and use Border Gateway Protocol (BGP) routing to exchange Network Layer Reachability Information (NLRI) with the upstream routers.  Depending on how you configure the Outpost and your subnets, you will see different NLRI data.  We cover that in more detail later.

Outposts Unique Objects

Outposts are very similar to many other constructs in AWS, but they introduce a few differences that will impact your architecture and taxonomy.   For more information, please see the AWS documentation.




Local Gateway (LGW)

Similar to other gateways in AWS, the LGW is a routing construct that interfaces between the Outpost and its immediately adjacent environment.  It is the demarcation point between the Outpost and an external network; in this model you can route to your DC or out to the internet.  Best practice is for internet- and data-center-bound traffic to egress via the LGW.

Local Gateway Route Table (LGW RT)

The routes associated with the local gateway. These subnets (and/or customer IP pools) are announced from your local gateway to the upstream router.

Service Link

One or more VPN tunnels from the Outpost site back to the region. You can reach all AWS services over this link and even use it to reach an Internet Gateway (depending on how you construct your route tables).

Customer IP Pool

A pool of customer-owned IP addresses associated with a local gateway. These function similarly to Elastic IPs.


Customer IP mode (CoIP mode)

A setting that toggles how traffic behaves at the LGW and which subnet(s) are or are not announced.  In customer IP mode, the customer-owned IP (CoIP) subnet is announced over BGP to the upstream router.

Direct VPC Routing (DVR)

When operating in this mode, subnets are directly routable between your router and the LGW without the use of IP mapping.  The NLRI data consists of the one or more subnets that use the LGW as an egress point.
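To make the difference between the two modes concrete, here is a minimal Python sketch (illustrative only; the mode names, subnets, and pool values are hypothetical, not AWS API values) of which prefixes the LGW announces upstream in each mode:

```python
def propagated_prefixes(mode, lgw_subnets, coip_pool=None):
    """Return the prefixes the LGW announces upstream over BGP.

    Direct VPC routing announces the LGW-associated VPC subnets;
    CoIP mode announces only the customer-owned IP pool.
    """
    if mode == "direct-vpc":
        return list(lgw_subnets)
    if mode == "coip":
        return [coip_pool]
    raise ValueError(f"unknown LGW mode: {mode}")


# Direct VPC routing: the VPC subnets themselves are announced.
print(propagated_prefixes("direct-vpc", ["10.0.1.0/24"]))

# CoIP mode: only the customer-owned pool is announced,
# even though the VPC subnet exists behind the LGW.
print(propagated_prefixes("coip", ["10.0.1.0/24"], "198.18.0.0/24"))
```

The same VPC subnet produces different NLRI data upstream depending solely on the mode selected at local gateway setup.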

Understanding the Validation Environment

In this example, I have four Outpost sites deployed in the following modes, pulling data from 19.02 and 19.05.

(Table: per-site mode, LGW associated subnet, and CoIP range.)

Outposts are associated with a VPC, and we need to use route tables to control network forwarding.  For the validation of F5 software running on AWS Outposts, I set up multiple Outpost sites (racks) running in CoIP or DVR mode, each with a dedicated VPC.  All of the systems have the same VPC route table structure.


Route Table Name | Default Gateway | Associated Subnets
Management | VPC Internet GW | Jump host subnet, BIG-IP management subnets
Internal | None (could be the BIG-IP; see routes below) | Outpost internal subnet (aka private)
Default | Not used | -
External | Local Gateway | Outpost external subnet (aka public)

Connecting Outpost to VPC Constructs

During the setup of an Outpost, you associate an LGW with an LGW route table within a VPC.   In the image below, we have already created a local gateway and associated it with an Outpost.  We are now associating it with a VPC.


We will also need to decide whether the Outpost will run in Direct VPC mode (sometimes called DVR) or customer-owned IP (CoIP) mode.  This decision is made as part of setting up the local gateway, and the two modes are mutually exclusive.   CoIP mode is commonly used when there is address overlap across your Outpost deployments.

Local Gateway Route Table in Direct VPC mode - note the lack of a CoIP pool.

If we apply this scenario to our Outpost, we see the following reachability.  The data center edge routers (black) receive NLRI data for the VPC address space based on the subnets that use the LGW route table (green).  Inside the Outpost, there is still reachability across the VPC (green and blue).

Local Gateway Route Table in CoIP mode

Using CoIP, we see the following reachability.  The data center edge routers (black) receive NLRI data for the CoIP ranges that are mapped to subnets in the external route table (green).  Inside the Outpost, there is still reachability across the VPC (green and blue).

Summary: the use of CoIP or Direct VPC determines which routes are propagated to the upstream routers.  If we were to use two VPCs with the same IP range, the Direct VPC site would announce its LGW-associated subnets, while the CoIP site would announce only its CoIP range.

(Table: VPC range, LGW associated subnets, CoIP range, and BGP propagated (announced) prefixes for each site.)




Configuration Detail

In both of the above examples, we have one or more subnets associated with an Outpost external route table in the VPC.  This rack is running in CoIP mode, and the subnet has been associated with the route table.


The Outpost external route table points to an LGW for its default route.

Even though the subnet is associated with the route table, because a CoIP pool is present, the CoIP range is the address range propagated to the upstream routers.

Direct VPC Mode

Subnet associated with External Route Table (VPC)

Routes in the External Route table (VPC)

Outpost LGW Route Table

Not all of the subnets on your Outpost must be associated with the LGW route table.  You can have "private" subnets associated with other route tables (such as one pointing at the ENI of a VM on the Outpost) or subnets associated with your VPC egress, such as an IGW.  If we look at an internal route table, you will see a route pointing to the BIG-IP ENI.  You will also see a route for the S3 prefix list in the internal subnet route table.  Subnets in this route table will not be announced over BGP as long as there is no route using an LGW.
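The rule of thumb above can be sketched in Python: a subnet is announced upstream only when its route table contains at least one route targeting the local gateway.  The route-table contents and resource IDs below are hypothetical:

```python
def announced_over_bgp(routes, lgw_id):
    """A subnet is announced over BGP only if its route table
    has at least one route targeting the local gateway."""
    return any(r["target"] == lgw_id for r in routes)


# Hypothetical external route table: default route via the LGW -> announced.
external = [{"dest": "0.0.0.0/0", "target": "lgw-0abc"}]

# Hypothetical internal route table: default route via a BIG-IP ENI plus an
# S3 prefix-list route -> not announced.
internal = [
    {"dest": "0.0.0.0/0", "target": "eni-bigip"},
    {"dest": "pl-s3", "target": "vpce-s3"},
]

print(announced_over_bgp(external, "lgw-0abc"))  # True
print(announced_over_bgp(internal, "lgw-0abc"))  # False
```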

Internal route table

Internal Route Table routes

When we deploy an Outpost system, we introduce several new workflows.  The key one from a network perspective is that a subnet that is to reside on an Outpost is created in the Outpost UI, but we associate it with route tables in the standard VPC UI.  For our test bed, we deployed an external, an internal, and a management subnet to each Outpost.

BIG-IP Active / Standby Deployment

When we deploy on Outposts, no matter if it is one rack or several, the deployment still operates in a single Availability Zone.   This means that both systems in an HA setup will be in the same subnets.  BIG-IP can be deployed onto an Outpost system with one or more network interfaces.  We used a three-NIC HA deployment with external (data center or other non-AWS-facing), internal, and management subnets.  These subnets are associated with different route tables in AWS: our external interface is associated with the subnet(s) that use the Local Gateway (LGW), our internal interface and route table are not associated with a gateway, and our management subnet egresses via an internet gateway in the region.

Once we have deployed our BIG-IPs, we use the Cloud Failover Extension (CFE) for failover actions covering CoIP-assigned addresses, secondary IP addresses, and routes.  This is no different from what we would do when using a region.  At this time, CFE uses an S3 bucket located in the region.  This should not be a blocking issue, because Outpost CRUD operations require connectivity to the region anyway.  When deploying BIG-IP to an AWS Outpost, you must use one of our validated instance types.

Outpost modes:  CoIP and Direct VPC are both validated to work with BIG-IP and CFE, without any changes to the IAM role or to the standard CFE configuration patterns.

CoIP mode: active/standby works, supporting the movement of secondary IP addresses, Elastic IP addresses (CoIP addresses), and routes on the internal route table.

Direct VPC mode: active/standby works, supporting the movement of secondary IP addresses and routes on the internal route table.
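A rough sketch of what CFE remaps on failover in either mode (the instance names, addresses, and route targets below are hypothetical; in reality CFE drives the AWS APIs rather than an in-memory dictionary):

```python
def fail_over(state, from_inst, to_inst):
    """Move secondary IPs, CoIP (EIP-like) associations, and internal
    route-table targets from the failed instance to its peer."""
    for ip, owner in state["secondary_ips"].items():
        if owner == from_inst:
            state["secondary_ips"][ip] = to_inst
    for coip, owner in state["coip_mappings"].items():
        if owner == from_inst:
            state["coip_mappings"][coip] = to_inst
    for route in state["internal_routes"]:
        if route["target"] == from_inst:
            route["target"] = to_inst
    return state


# Hypothetical pre-failover state: everything maps to the active unit.
state = {
    "secondary_ips": {"10.0.1.50": "F5-1-1902"},
    "coip_mappings": {"198.18.0.10": "F5-1-1902"},
    "internal_routes": [{"dest": "0.0.0.0/0", "target": "F5-1-1902"}],
}
state = fail_over(state, "F5-1-1902", "F5-2-1902")
print(state["secondary_ips"]["10.0.1.50"])  # F5-2-1902
```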

CFE declaration from my lab

             "scopingTags": {
               "f5_cloud_failover_label": "mydeployment-1902"


Testing Failover

Before failover, items are mapped to F5-1-1902: the secondary IPs and the CoIP / instance mappings.

After failover, the secondary IPs and the CoIP / instance mappings have moved to the standby unit.

I hope you found this article informative, both on AWS Outposts and on using F5 BIG-IP with AWS Outposts. In future articles, we will look at other F5 offerings on Outposts and at other AWS edge compute services such as Local Zones and Wavelength.

Updated May 12, 2023
Version 2.0
