F5 BIG-IP and NGINX - Securing Your AWS VPC Lattice Applications for External Clients

As organizations adopt AWS and make progress on either modernizing existing applications or creating new applications that are microservices based, they will begin to see their AWS environment grow.  This growth can create sprawl on three axes.  

  • The first is the number of applications that are presented into the environment as microservices.
  • The second is the number of accounts they have in AWS.  Often the software teams behind the different microservices will have their own accounts and AWS environments, such as VPCs.
  • The third is that these different services may exist on different types of compute such as EC2, EKS, ECS and Lambda.  Each of these compute services has its own nuances at the network layer.

Organizations then need to figure out how to connect all these endpoints.  VPC peering, Transit Gateway, PrivateLink.... anyone who has been in network engineering knows that stitching all of these environments together brings IP address collisions, routing issues, other connectivity complexity, and technical debt; two words: not fun.

AWS Solution - VPC Lattice 

To address these concerns AWS developed VPC Lattice.  VPC Lattice creates an overlay network that can span VPCs, accounts and compute types without requiring changes to the VPC network and route tables.  While a service mesh is not quite the same thing, it does provide a good mental model of what is happening: a connectivity mesh is built across AWS boundaries at the account and logical environment layers.  This all sounds great!  But there is just one problem.... applications exist to serve users, and those users may not be connected to a lattice enabled environment. For example:

  • Human - application users are not in AWS.
  • System - portions of the application are not in AWS, due to migration and modernization.
  • System - a compute client of the application may not be in AWS, or may be in a VPC that is not connected to the service network. 

To address this, AWS proposes using an instance in AWS to act as a proxy that redirects traffic into the lattice.  Great! We happen to have a couple of choices here!

If we look at the above diagram we can use either BIG-IP or NGINX to proxy requests from environments that are not lattice enabled into a VPC lattice presented application.  Let's put the pieces together so the diagram makes more sense.

Understanding VPC Lattice Services and Service Networks

A VPC Lattice service is an application that is running on AWS compute services such as EC2, EKS, ECS or Lambda.  When we want to create a lattice service and service network we will use the Lattice menu (AWS Console VPC --> Lattice).

Instances of an application are organized into a target group, very much like we do with an ELB.  

In the image below we can see that we have one AWS instance in our target group.  

Services create a container for one or more target groups that are selected based on HTTP attributes.  In this example we are sending all traffic to a single target group. 
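If you prefer to script it, a rough AWS CLI sketch of the same objects looks like the following.  The names and identifiers are placeholders, the identifiers returned by each call feed the next, and option shapes may differ slightly between CLI versions.

aws vpc-lattice create-target-group --name lattice-demo-tg --type INSTANCE \
    --config '{"port":80,"protocol":"HTTP","vpcIdentifier":"vpc-0123456789abcdef0"}'
aws vpc-lattice register-targets --target-group-identifier <tg-id> --targets id=i-0123456789abcdef0,port=80
aws vpc-lattice create-service --name lattice-demo-service
aws vpc-lattice create-listener --service-identifier <svc-id> --name http-80 --protocol HTTP --port 80 \
    --default-action '{"forward":{"targetGroups":[{"targetGroupIdentifier":"<tg-id>","weight":100}]}}'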

 

Each service is assigned a unique DNS name that can be resolved by systems residing in an AWS VPC that is associated with the service network. 

Service Network 

A service network creates the data path that we associate our services and VPCs to.  A client in a VPC that is connected to a service network has a network path to all services on that service network.  Users can use IAM and/or security groups to control access between instances in the VPC and the services. 
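As a rough AWS CLI sketch of the same associations (identifiers are placeholders, and the security group on the VPC association is optional):

aws vpc-lattice create-service-network --name lattice-demo-network
aws vpc-lattice create-service-network-vpc-association --service-network-identifier <network-id> \
    --vpc-identifier vpc-0123456789abcdef0 --security-group-ids sg-0123456789abcdef0
aws vpc-lattice create-service-network-service-association --service-network-identifier <network-id> \
    --service-identifier <svc-id>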

If we look back at the architecture picture, our BIG-IP and NGINX both reside in a VPC that has been associated with a service network, creating a data path for these systems to connect to the service.  For users that are not in AWS the data path is similar: traffic traverses the IGW or VGW to the BIG-IP/NGINX instance, and the BIG-IP/NGINX instance proxies the request and response to and from the service network. 

Finding VPC Lattice Services

Once we have deployed BIG-IP or NGINX into a VPC that has the Lattice network exposed to it, we can use DNS to find the services.  When we are acting as an ingress and reverse proxy into the lattice network we are not load balancing; that is happening based on the target group configuration.  The value we are providing is the ability to insert advanced traffic controls such as HTTP header modification (required, as Lattice inspects the Host header and it must match the service name), application security, network security, bot security, user and device authentication, or other controls. 

 

nslookup hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws
Server:         10.0.0.2
Address:   10.0.0.2#53

Non-authoritative answer:
Name: hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws
Address: 169.254.171.32
Name: hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws
Address: fd00:ec2:80::a9fe:ab20

BIG-IP config

When you deploy a BIG-IP into AWS it will already be configured to use the internal AWS DNS server.  To be able to proxy traffic into a lattice service we will need several items.

  • Enable BIG-IP to use link local addresses 
  • A node on the BIG-IP that is FQDN (DNS) enabled.  At the node level we can control whether we will use IPv4, IPv6, or both address families.  The setting you select needs to match the IP stack of the BIG-IP.  
  • A pool that uses the node, and is set to auto-populate.
  • A traffic policy to rewrite the HOST header
  • A virtual server to stitch it all together. 

Enable Link Local Addresses 

The first task is to modify a /sys db variable and set its value to enable:

admin@(ip-10-0-7-48)(cfg-sync Standalone)(Active)(/Common)(tmos)# modify /sys db config.allow.rfc3927 value enable
admin@(ip-10-0-7-48)(cfg-sync Standalone)(Active)(/Common)(tmos)# list /sys db config.allow.rfc3927
sys db config.allow.rfc3927 {
    value "enable"
}

Create the Node

Below we have created a node based on the DNS name of our service. Additionally, we have set the system to use the TTL of the DNS response and filtered it to only use the IPv4 address (A record).
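The equivalent tmsh command is roughly the following; address-family ipv4 is what limits the node to A records (use ipv6 or all to match your IP stack):

create ltm node lattice-node fqdn { name hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws autopopulate enabled interval ttl address-family ipv4 }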

Create the Pool

Below we have created a pool, set it up with an HTTP monitor, and associated the DNS node with it, allowing the DNS response to populate the pool member list. 
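A matching tmsh sketch, adding the FQDN node created above as the member with autopopulate enabled so ephemeral members can be created from the DNS answers:

create ltm pool lattice-pool monitor http members add { lattice-node:http { fqdn { autopopulate enabled } } }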

 

Create the Traffic Policy

VPC Lattice uses layer 7 processing to select the service and target group.  If we have users entering the environment, there is a good chance that the host name in the request will be different from the host name of our service.  Lattice will not accept connections where the Host header does not match the service name, so we will need a traffic policy to rewrite the Host header. 
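A tmsh sketch of the policy follows; on recent BIG-IP versions policies are created as drafts and then published, and the exact action syntax can vary slightly by version:

create ltm policy Drafts/lattice-policy strategy first-match requires add { http } rules add { lattice-host-rewrite-policy { actions add { 0 { http-header replace name HOST value hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws } } } }
publish ltm policy Drafts/lattice-policy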

Create the Virtual Server

The final step is to create a virtual server that processes traffic from the network and stitches the configuration together, allowing traffic to flow. 

The virtual server uses the pool and the traffic policy that we created. 
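Expressed as a tmsh sketch, this is a wildcard port 80 virtual with SNAT automap that references the pool and policy (it should line up with the listing below):

create ltm virtual lattice-virtual-server destination 0.0.0.0:80 mask any ip-protocol tcp profiles add { tcp http } pool lattice-pool policies add { lattice-policy } source-address-translation { type automap }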

 

CLI Output of the Config

 

ltm node lattice-node {
    fqdn {
        autopopulate enabled
        interval ttl
        name hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws
    }
    state fqdn-up
}
ltm pool lattice-pool {
    members {
        _auto_169.254.171.32:http {
            address 169.254.171.32
            ephemeral true
            session monitor-enabled
            state up
            fqdn {
                autopopulate enabled
                name hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws
            }
        }
        lattice-node:http {
            fqdn {
                autopopulate enabled
                name hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws
            }
            state fqdn-up
        }
    }
    monitor http
}
ltm policy lattice-policy {
    last-modified 2023-07-21:11:21:52
    requires { http }
    rules {
        lattice-host-rewrite-policy {
            actions {
                0 {
                    http-header
                    replace
                    name HOST
                    value hparr-vpc-lattice-service-80-02dee656509364a22.7d67968.vpc-lattice-svcs.us-east-1.on.aws
                }
            }
        }
    }
    status published
    strategy first-match
}
ltm virtual lattice-virtual-server {
    creation-time 2023-07-03:13:37:14
    destination 0.0.0.0:http
    ip-protocol tcp
    last-modified-time 2023-07-03:13:37:14
    mask any
    policies {
        lattice-policy { }
    }
    pool lattice-pool
    profiles {
        http { }
        tcp { }
    }
    serverssl-use-sni disabled
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
    vs-index 2
}

NGINX Config

The NGINX config is easier to accomplish than the BIG-IP config.   To start with, we do not need to edit a sys db variable, and since we are operating in reverse proxy mode we can skip the step of creating an upstream pool.  We do need to do the following:

  • DNS service discovery, potentially limiting the type of record (A, AAAA) used 
  • Change the HTTP version to 1.1
  • Create the rewrite policy

Here is the NGINX config:

http {
    include       /etc/nginx/mime.types;

    # AWS VPC resolver (the .2 address of the VPC CIDR); re-resolve every 10 seconds, A records only
    resolver 10.x.x.2 valid=10s ipv6=off;

    server {
        location / {
            # Lattice requires the Host header to match the service DNS name (no scheme)
            proxy_set_header Host hx2.7x8.vpc-lattice-svcs.us-east-1.on.aws;
            proxy_http_version 1.1;
            proxy_pass http://hx2.7x8.vpc-lattice-svcs.us-east-1.on.aws;
        }
    }
}
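Once the config is loaded we can verify the path end to end by sending a request from an external client to the NGINX instance; the hostname below is a placeholder for the instance's public DNS name:

curl -v http://ec2-x-x-x-x.compute-1.amazonaws.com/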

 

Inspecting Network Traffic

Now that we have built out the topology, what does it look like on the "wire"?  First we will generate traffic from a client external to AWS.  

If we start at the BIG-IP and look at the traffic flow, we will see that traffic originates from a self-IP in the VPC address space (10.x.x.x) and goes to an address in the link local address space (169.254.x.x).
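The capture below was taken on the BIG-IP; a tcpdump along these lines shows the same flow (0.0 is the BIG-IP pseudo-interface for all interfaces, and the host filter is the lattice link local address returned by DNS):

tcpdump -nni 0.0 host 169.254.171.32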


10:03:51.089072 IP 10.0.7.48.51301 > 169.254.171.32.http: Flags [.], ack 162, win 219, options [nop,nop,TS val 568083760 ecr 2007207851], length 0 out slot1/tmm2 lis= port=1.0 trunk=
10:03:51.089087 IP 10.0.7.48.51301 > 169.254.171.32.http: Flags [F.], seq 10, ack 162, win 219, options [nop,nop,TS val 568083760 ecr 2007207851], length 0 out slot1/tmm2 lis= port=1.0 trunk=
10:03:51.089662 IP 169.254.171.32.http > 10.0.7.48.51301: Flags [F.], seq 162, ack 11, win 472, options [nop,nop,TS val 2007207851 ecr 568083760], length 0 in slot1/tmm2 lis= port=1.0 trunk=
10:03:51.089805 IP 10.0.7.48.51301 > 169.254.171.32.http: Flags [.], ack 163, win 219, options [nop,nop,TS val 568083761 ecr 2007207851], length 0 out slot1/tmm2 lis= port=1.0 trunk=

If we look at the service in the target group we can see that traffic is coming from a link local address and arriving on an interface in the VPC address space.

ubuntu@ip-10-0-14-136:~$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
        inet 10.0.14.136  netmask 255.255.240.0  broadcast 10.0.15.255
        inet6 fe80::1084:5eff:fe10:b9c9  prefixlen 64  scopeid 0x20<link>
        ether 12:84:5e:10:b9:c9  txqueuelen 1000  (Ethernet)
        RX packets 337419  bytes 57204314 (57.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 325096  bytes 507235216 (507.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 194  bytes 20854 (20.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 194  bytes 20854 (20.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ubuntu@ip-10-0-14-136:~$ sudo tcpdump -ni eth0 port 80
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
17:06:50.692472 IP 169.254.171.193.10160 > 10.0.14.136.80: Flags [S], seq 1461706243, win 60459, options [mss 8637,sackOK,TS val 2161952685 ecr 0,nop,wscale 7], length 0
17:06:50.692505 IP 10.0.14.136.80 > 169.254.171.193.10160: Flags [S.], seq 3703243651, ack 1461706244, win 62643, options [mss 8961,sackOK,TS val 1734147964 ecr 2161952685,nop,wscale 7], length 0
17:06:50.693108 IP 169.254.171.193.10160 > 10.0.14.136.80: Flags [.], ack 1, win 473, options [nop,nop,TS val 2161952686 ecr 1734147964], length 0
17:06:50.693108 IP 169.254.171.193.10160 > 10.0.14.136.80: Flags [P.], seq 1:223, ack 1, win 473, options [nop,nop,TS val 2161952686 ecr 1734147964], length 222: HTTP: GET / HTTP/1.1
17:06:50.693134 IP 10.0.14.136.80 > 169.254.171.193.10160: Flags [.], ack 223, win 488, options [nop,nop,TS val 1734147964 ecr 2161952686], length 0
17:06:50.693443 IP 10.0.14.136.80 > 169.254.171.193.10160: Flags [P.], seq 1:10927, ack 223, win 488, options [nop,nop,TS val 1734147965 ecr 2161952686], length 10926: HTTP: HTTP/1.1 200 OK
17:06:50.693829 IP 169.254.171.193.10160 > 10.0.14.136.80: Flags [.], ack 8626, win 425, options [nop,nop,TS val 2161952687 ecr 1734147965], length 0
17:06:50.693829 IP 169.254.171.193.10160 > 10.0.14.136.80: Flags [.], ack 10927, win 425, options [nop,nop,TS val 2161952687 ecr 1734147965], length 0
17:06:55.698992 IP 10.0.14.136.80 > 169.254.171.193.10160: Flags [F.], seq 10927, ack 223, win 488, options [nop,nop,TS val 1734152970 ecr 2161952687], length 0
17:06:55.699637 IP 169.254.171.193.10160 > 10.0.14.136.80: Flags [F.], seq 223, ack 10928, win 445, options [nop,nop,TS val 2161957693 ecr 1734152970], length 0
17:06:55.699657 IP 10.0.14.136.80 > 169.254.171.193.10160: Flags [.], ack 224, win 488, options [nop,nop,TS val 1734152971 ecr 2161957693], length 0

 

Conclusion

Over the years I have heard people say that BIG-IP is the 'Swiss Army knife' of the network.  Personally, I like to think of it as a prism: BIG-IP and NGINX are robust proxies that allow you to bend network traffic to meet your needs.

In this case we are taking traffic and moving it into and out of a VPC Lattice service network that would not normally be accessible to users or systems that are not running in AWS, or that are in AWS but not part of the lattice service network.   In addition to creating the network layer, users can insert the security controls they need, such as Advanced WAF, Access Policy Manager, Advanced Firewall Manager or NGINX App Protect.

Updated Aug 23, 2023
Version 2.0
