Deploying F5 Distributed Cloud Customer Edge on AWS in a scalable way with full automation

Introduction

Scaling infrastructure efficiently while maintaining operational simplicity is a critical challenge for modern enterprises. This comprehensive guide presents the foundation for a fully automated Terraform solution for deploying F5 Distributed Cloud (F5XC) Customer Edge (CE) nodes on AWS. It scales seamlessly from single-node proofs of concept to multi-node production deployments.

Through Infrastructure as Code automation, this project eliminates manual configuration overhead, ensures consistent deployments, and enables teams to scale their application delivery infrastructure on-demand with simple variable changes.

This guide explores not just the "how" but the "why" behind the architectural decisions, particularly the use of F5XC Virtual Sites and AWS Network Load Balancers in a dual-layer load balancing strategy.

 

Understanding F5 Distributed Cloud Virtual Sites

A Virtual Site in F5 Distributed Cloud is a logical abstraction that groups multiple physical Customer Edge sites together. This creates a unified, distributed application delivery fabric. Think of it as a "site of sites"—a meta-construct that allows you to treat multiple CE deployments as a single, cohesive entity.

 

The Power of Abstraction

Virtual Sites provide several critical capabilities:

1. Location Abstraction

You don't need to know where CE nodes are physically located or how many CEs you have per location; configuration is pushed to all member sites in a unified way.

2. Easy Membership Through Labels

# In the Terraform configuration 
f5xc_vsite_key = "environment" 
f5xc_vsite_key_label = "production"

Any CE site with the label environment=production automatically joins the Virtual Site. This enables:

  • Easy scaling—new sites auto-join based on labels
  • Environment separation—different Virtual Sites for dev/staging/prod
  • Flexible grouping—by region, purpose, or any custom criteria
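
As a sketch of how this label-based membership can be expressed with the Volterra Terraform provider (the resource and attribute names follow the provider's schema, but the object names and namespace here are illustrative assumptions):

```hcl
# Illustrative sketch: a Virtual Site that selects every CE site
# carrying the label environment=production.
resource "volterra_virtual_site" "production" {
  name      = "production-vsite"   # hypothetical name
  namespace = "shared"

  site_type = "CUSTOMER_EDGE"

  # Any CE site labeled environment=production auto-joins this Virtual Site.
  site_selector {
    expressions = ["environment in (production)"]
  }
}
```

Because membership is evaluated from labels rather than an explicit member list, scaling out is just a matter of labeling new sites consistently.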

3. Simplified Multi-Region and Multi-Cloud Deployments

Virtual Site: "global-app-delivery" 
├── AWS US-East CE Sites (3 nodes)
├── AWS US-West CE Sites (3 nodes) 
├── AWS EU-Central CE Sites (3 nodes) 
├── Azure West Europe CE Sites (2 nodes) 
└── On-Premises Data Center CE Sites (2 nodes)

Your CE configuration remains constant while the underlying infrastructure spans continents and cloud providers.

 

The AWS NLB Layer: Why It Matters

The Dual-Layer Load Balancing Strategy Explained

You might wonder: "Why do we need an AWS NLB when F5XC CE already provides load balancing?" The answer lies in the complementary strengths of each layer.

 

Benefits of the NLB Front-End

1. Cloud-Native Integration

The NLB provides AWS-native benefits that CE nodes alone cannot:

  • Cross-Zone Load Balancing: Even distribution across Availability Zones
  • AWS Shield Standard: Volumetric DDoS protection
  • CloudWatch Integration: Native metrics and monitoring
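
These cloud-native behaviors are configured on the NLB itself. A minimal Terraform sketch (resource names and the subnet reference are illustrative assumptions, not the project's actual module code):

```hcl
# Illustrative sketch: an internet-facing NLB with cross-zone
# load balancing enabled, fronting the CE nodes.
resource "aws_lb" "ce_frontend" {
  name               = "f5xc-ce-nlb"        # hypothetical name
  load_balancer_type = "network"
  internal           = false
  subnets            = [aws_subnet.public.id]

  # Distribute connections evenly across all registered CE nodes,
  # regardless of which Availability Zone they sit in.
  enable_cross_zone_load_balancing = true
}
```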

2. High-Performance Layer 4 Distribution

NLBs excel at what they do best:

  • Ultra-low latency: Single-digit millisecond latency
  • Massive scale: Millions of requests per second
  • Connection multiplexing: Efficiently manages TCP connections
  • Protocol flexibility: Supports TCP, UDP, and TLS

3. Separation of Concerns

Each layer focuses on what it does best:

NLB Responsibilities: 
├── TCP/UDP load balancing 
├── Connection distribution 
├── Health checking at Layer 4 
└── Volumetric DDoS mitigation 

F5XC CE Responsibilities: 
├── Application-layer security (WAF, Bot Defense) 
├── Advanced traffic management (Application Load Balancing, A/B testing, canary) 
├── API discovery and protection 
└── Multi-cloud connectivity

4. Failover and Recovery

The system handles failures gracefully at the NLB level: if a CE node fails health checks, the NLB automatically removes it from rotation.
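
The health checks that drive this behavior live on the NLB's target group. A hedged sketch in Terraform (names and thresholds are illustrative assumptions):

```hcl
# Illustrative sketch: a Layer 4 target group whose TCP health checks
# decide when a CE node is pulled from, or returned to, rotation.
resource "aws_lb_target_group" "ce_nodes" {
  name     = "f5xc-ce-targets"   # hypothetical name
  port     = 443
  protocol = "TCP"
  vpc_id   = aws_vpc.main.id

  health_check {
    protocol            = "TCP"
    healthy_threshold   = 3   # consecutive successes to rejoin rotation
    unhealthy_threshold = 3   # consecutive failures to be removed
    interval            = 10  # seconds between checks
  }
}
```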

 

Deep Dive: The Architecture

Understanding Traffic Flow

The architecture implements a clear separation of concerns with a dual-layer load balancing strategy:

 

 

Why This Architecture?

The combination of AWS NLB, multiple CE nodes, and Virtual Sites isn't arbitrary—it's a carefully designed system that provides:

1. Resilience at Every Layer

    • The NLB provides cloud-infrastructure high availability in front of the CEs
    • Virtual Sites provide unified configuration across all the CEs

2. Security Through Defense in Depth

    • No direct exposure of F5 Distributed Cloud CE nodes
    • Multiple security group layers
    • Application-layer protection via F5XC CE nodes

3. Operational Flexibility

    • Add/remove CE nodes without DNS changes
    • Zero-downtime maintenance windows
    • Easy horizontal scaling based on demand

 

 

Implementation Guide: From Prerequisites to Deployment

 

Preparing Your AWS Infrastructure

Before deploying F5XC CE nodes using this Terraform configuration, you need to establish the foundational AWS infrastructure. This solution follows a bring-your-own-infrastructure model where the Terraform configuration does NOT create VPCs, subnets, or NAT Gateways—these must exist before deployment.

 

Essential AWS Components Setup

1. VPC Configuration

Create a VPC with DNS support enabled, which is crucial for F5XC CE node operation:

# Create the VPC
aws ec2 create-vpc \
  --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=my-vpc}]'

# Enable DNS support and DNS hostnames on the new VPC
# (create-vpc itself has no DNS flags; use modify-vpc-attribute)
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxxx --enable-dns-support '{"Value":true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxxx --enable-dns-hostnames '{"Value":true}'

2. Three-Tier Subnet Architecture

The deployment requires three distinct subnet types, each serving a specific purpose in the security and traffic flow architecture:

  • Public Subnet: Hosts the NAT Gateway and optionally the Network Load Balancer
  • Outside Subnet (Private): Connects F5XC CE nodes' SLO interface with NAT Gateway routing
  • Inside Subnet (Private): Handles internal workload traffic via the SLI interface
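
Although this project expects the subnets to exist already, the three tiers could be provisioned from a separate Terraform configuration along these lines (CIDRs, names, and the VPC reference are illustrative assumptions):

```hcl
# Public subnet: NAT Gateway and (optionally) the NLB
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  tags                    = { Name = "public" }
}

# Outside subnet (private): CE SLO interface, routed via the NAT Gateway
resource "aws_subnet" "outside" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
  tags       = { Name = "outside" }
}

# Inside subnet (private): CE SLI interface for internal workload traffic
resource "aws_subnet" "inside" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.3.0/24"
  tags       = { Name = "inside" }
}
```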

3. NAT Gateway for Secure Outbound Access

# Allocate Elastic IP
aws ec2 allocate-address \
  --domain vpc \
  --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=Name,Value=nat-gateway-eip}]'

# Create NAT Gateway in public subnet
aws ec2 create-nat-gateway \
  --subnet-id subnet-public-xxxxxxxxx \
  --allocation-id eipalloc-xxxxxxxxx \
  --tag-specifications 'ResourceType=nat-gateway,Tags=[{Key=Name,Value=my-nat-gateway}]'

4. Route Table Configuration

Configure route tables to ensure proper traffic flow:

Subnet  | Destination | Target      | Purpose
--------|-------------|-------------|------------------------------
Public  | 0.0.0.0/0   | IGW         | Internet access
Outside | 0.0.0.0/0   | NAT Gateway | CE outgoing Internet access
Inside  | 10.0.0.0/16 | Local       | VPC internal traffic
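
In Terraform, the public and outside default routes could be sketched as follows (resource references are illustrative assumptions; the inside subnet needs no extra route, since every route table carries the implicit local route for the VPC CIDR):

```hcl
# Public subnet: default route to the Internet Gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

# Outside subnet: default route through the NAT Gateway,
# giving CE SLO interfaces outbound-only Internet access.
resource "aws_route_table" "outside" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }
}
```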

 

F5 Distributed Cloud Prerequisites

API Credentials and AMI Access

  1. Generate API Certificate: Access your F5XC console and generate service credentials following the official documentation
  2. Locate the F5XC CE AMI: Find the latest F5XC CE AMI for your region:
aws ec2 describe-images \
  --region your-region \
  --filters "Name=name,Values=*f5xc-ce*" \
  --query "reverse(sort_by(Images, &CreationDate))[0].ImageId" \
  --output text
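
The same lookup could also be expressed as a Terraform data source, so the AMI ID never has to be pasted by hand (the owners value is an assumption; adjust it to the account that publishes the F5XC AMI in your region):

```hcl
# Illustrative sketch: resolve the newest F5XC CE AMI at plan time.
data "aws_ami" "f5xc_ce" {
  most_recent = true
  owners      = ["aws-marketplace"]   # assumption; verify for your region

  filter {
    name   = "name"
    values = ["*f5xc-ce*"]
  }
}
```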

 

Deployment Process

Step 1: Repository Setup

The full Terraform code, including all modules, variables, and detailed configuration examples, is available on GitHub:

https://github.com/veysph/Prod-TF/tree/main/Virtual%20Site/f5xc-ce-aws

This repository contains:

  • Complete Terraform modules for F5XC CE deployment
  • Example terraform.tfvars files for different scenarios
  • Additional architecture diagrams and network topology examples
  • Advanced configuration options and customization guides
  • Troubleshooting tips and common deployment patterns

Clone and prepare the repository:

git clone https://github.com/veysph/Prod-TF.git
cd Prod-TF/Virtual\ Site/f5xc-ce-aws
cp terraform.tfvars.example terraform.tfvars

Step 2: Configuration

Edit your terraform.tfvars with the infrastructure details you've created:

# AWS Infrastructure References
vpc_name               = "your-vpc-name"
outside_subnet_name    = "your-outside-subnet-name"   # Private subnet with NAT route
inside_subnet_name     = "your-inside-subnet-name"    # Private subnet for workloads
nlb_public_subnet_name = "your-public-subnet-name"    # Public subnet for NLB
aws_region             = "your-aws-region"
aws_ssh_key            = "your-ssh-key-name"
aws_f5xc_ami           = "ami-xxxxxxxxxxxxxxxxx"

# F5XC Configuration
f5xc_ce_site_name = "your-site-name"
f5xc_api_url      = "https://your-tenant.console.ves.volterra.io/api"
f5xc_api_p12_file = "/path/to/your/api-creds.p12"

# Scaling and Features
num_ce_nodes             = 1      # Deploy 1-10 nodes
deploy_nlb               = true
create_f5xc_virtual_site = true
create_f5xc_loadbalancer = true

Step 3: Execute Deployment

terraform init
terraform plan
terraform apply
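
Scaling out later is a single variable change followed by another apply; for example, growing from one CE node to three (the value is illustrative):

```hcl
# terraform.tfvars
num_ce_nodes = 3   # was 1; the next terraform apply adds the new nodes
```

Because the new nodes inherit the same labels, they join the Virtual Site automatically, and the NLB begins health-checking them as soon as they register.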

 

Conclusion

This comprehensive deployment solution for F5 Distributed Cloud Customer Edge on AWS represents a mature, production-ready architecture that balances security, performance, and operational flexibility. The combination of AWS Network Load Balancers and F5XC Virtual Sites creates a robust, dual-layer load balancing strategy that leverages the best of both cloud-native and application delivery technologies.

The architecture's strength lies not just in its individual components but in how they work together:

  • AWS NLB provides cloud-native, high-performance Layer 4 distribution
  • F5XC CE nodes deliver sophisticated application services and security
  • Virtual Sites enable logical abstraction and multi-site/multi-cloud strategies

Whether you're building a proof of concept with a single node or deploying a global, multi-region application delivery network, this Terraform project provides the flexibility and security needed for modern cloud-native applications. The modular design allows teams to start simple and scale up as requirements grow, while maintaining consistent security and operational patterns.

By understanding both the "what" and the "why" of this architecture, you're equipped to make informed decisions about your deployment strategy and customize the solution to meet your specific requirements.

 

Additional Resources

Published Aug 25, 2025
Version 1.0