F5 Distributed Cloud Site Lab on Proxmox VE with Terraform

Overview

F5 Distributed Cloud (XC) Sites can be deployed in public and private clouds, including VMware and KVM-based environments, using images available for download at https://docs.cloud.f5.com/docs/images . Proxmox Virtual Environment is a complete, open-source server management platform for enterprise virtualization based on KVM and Linux Containers. This article shows how a redundant Secure Mesh Site protecting a multi-node App Stack Site can be deployed on Proxmox VE using Terraform automation.

Logical Topology


        ------------------------  vmbr0 (with Internet Access)
             |      |      |
           +----+ +----+ +----+
           | m0 | | m1 | | m2 |  3 node Secure Mesh Site
           +----+ +----+ +----+
             |      |      |
-----------------------------------------  vmbr1 (or vmbr0 vlan 100)
   |      |      |      |      |      |
+----+ +----+ +----+ +----+ +----+ +----+
| m0 | | m1 | | m2 | | w0 | | w1 | | w2 |  6 node App Stack Site
+----+ +----+ +----+ +----+ +----+ +----+


The redundant 3 node Secure Mesh Site connects via its Site Local Outside (SLO) interfaces to the virtual network vmbr0 for Internet access. Via its Site Local Inside (SLI) interfaces it provides DHCP services and connectivity to the App Stack nodes, which are attached to a second virtual network (vmbr1 or VLAN-tagged vmbr0).

Each node of the App Stack Site receives DHCP leases and Internet connectivity via the Secure Mesh Site's SLI interfaces.

Requirements

  • Proxmox VE server or cluster (this example leverages a 3 node cluster) with a total capacity of at least 
    • 24 CPU threads (oversubscribed across 9 nodes at 4 vCPU each)
    • 144 GB of RAM (for 9 nodes, 16GB each)
    • ISO file storage for cloud-init disks (local or cephfs)
    • Block storage for VMs and the template (local-lvm or cephpool)
  • F5 XC CE Template (same for Secure Mesh and App Stack site, see below on how to create one)
  • F5 Distributed Cloud access and API credentials
  • Terraform CLI
  • Terraform example configuration files from https://github.com/mwiget/f5xc-proxmox-site 

The setup used to write this article consists of three ASUS NUCs (Intel i3-1315U) with 64GB RAM and a second disk each for Ceph storage.

Each NUC has a single 2.5G Ethernet port; these ports are interconnected via a physical Ethernet switch and internally connected to the Linux bridge vmbr0. The required second virtual network for the Secure Mesh Site is created on the same Linux bridge vmbr0, but with a VLAN tag (e.g. 100). There are other options to create a second virtual network in Proxmox, e.g. via Software Defined Networking (SDN) or dedicated physical Ethernet ports.
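
For reference, a VLAN-aware vmbr0 in /etc/network/interfaces on a Proxmox node might look like the sketch below (the physical port name and addresses are examples, not taken from this lab):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

A VM interface can then be attached to the tagged network simply by specifying the VLAN tag on the bridge, e.g. qm set <vmid> --net1 virtio,bridge=vmbr0,tag=100; the Terraform configuration presumably does the equivalent when it creates the site VMs.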

Installation

Clone the example repo

$ git clone https://github.com/mwiget/f5xc-proxmox-site
$ cd f5xc-proxmox-site

Create F5 XC CE Template

The repo contains helper scripts to download the image and create the template; they can be executed directly from the Proxmox server shell. Modify the environment variables in the scripts according to your setup, mainly the VM template id (pick one that isn't used yet) and the block storage location available on your Proxmox cluster (e.g. local-lvm or cephpool):

$ cat download_ce_image.sh 
#!/bin/bash
image=$(curl -s https://docs.cloud.f5.com/docs/images/node-cert-hw-kvm-images|grep qcow2\"| cut -d\" -f2)
if test -z "$image"; then
  echo "can't find qcow2 image from download url. Check https://docs.cloud.f5.com/docs/images/node-cert-hw-kvm-images"
  exit 1
fi
# download the image unless it is already present locally
if ! test -f "$(basename "$image")"; then
  echo "downloading $image ..."
  wget "$image"
fi
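
Run it from the Proxmox server shell; once it completes, you can sanity-check the downloaded image before building the template (the filename below is an example and will change with newer releases):

$ bash download_ce_image.sh
$ qemu-img info rhel-9.2024.11-20240523024833.qcow2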

The following script must be executed on the Proxmox server (don't forget to adjust qcow2, id and storage):

$ cat create_f5xc_ce_template.sh 
#!/bin/bash
# adjust full path to downloaded qcow2 file, target template id and storage ...
qcow2=/root/rhel-9.2024.11-20240523024833.qcow2
id=9000
storage=cephpool

echo "resizing image to 50G ..."
qemu-img resize $qcow2 50G
echo "destroying existing VM $id (if present) ..."
qm destroy $id
echo "creating vm template $id from $image .."
qm create $id --memory 16384 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm set $id --name f5xc-ce-template
qm set $id --scsi0 $storage:0,import-from=$qcow2
qm set $id --boot order=scsi0
qm set $id --serial0 socket --vga serial0
qm template $id
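
After the script finishes, the new template should show up in the Proxmox UI; from the shell it can be verified with (template id 9000 as set in the script above):

$ qm list | grep f5xc-ce-template
$ qm config 9000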

Create terraform.tfvars

Copy the example terraform.tfvars.example to terraform.tfvars and adjust the variables based on your setup:

$ cp terraform.tfvars.example terraform.tfvars

project_prefix        Site and node names will use this prefix, e.g. your initials
ssh_public_key        Your SSH public key provided to each node
pm_api_url            Proxmox API URL
pm_api_token_id       Proxmox API Token Id, e.g. "root@pam!prox"
pm_api_token_secret   Proxmox API Token Secret
pm_target_nodes       List of Proxmox servers to use
iso_storage_pool      Proxmox storage pool for the cloud-init ISO disks
pm_storage_pool       Proxmox storage pool for the VM disks
pm_clone              Name of the created F5 XC CE Template
pm_pool               Resource pool to which the VM will be added (optional)
f5xc_api_url          https://<tenant>.console.ves.volterra.io/api
f5xc_api_token        F5 XC API Token
f5xc_tenant           F5 XC Tenant Id
f5xc_api_p12_file     Path to the encrypted F5 XC API P12 file; the password is expected to be provided via the environment variable VES_P12_PASSWORD
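For orientation, a filled-in terraform.tfvars could look roughly like the sketch below (all values are placeholders; substitute your own tenant, tokens, storage pools and node names):

project_prefix      = "mw"
ssh_public_key      = "ssh-ed25519 AAAA... user@example"
pm_api_url          = "https://pve1.example.com:8006/api2/json"
pm_api_token_id     = "root@pam!prox"
pm_api_token_secret = "00000000-0000-0000-0000-000000000000"
pm_target_nodes     = ["pve1", "pve2", "pve3"]
iso_storage_pool    = "cephfs"
pm_storage_pool     = "cephpool"
pm_clone            = "f5xc-ce-template"
pm_pool             = ""                  # optional resource pool
f5xc_api_url        = "https://<tenant>.console.ves.volterra.io/api"
f5xc_api_token      = "<api token>"
f5xc_tenant         = "<tenant id>"
f5xc_api_p12_file   = "./<tenant>.api-creds.p12"

The P12 password itself does not go into this file; export it in the shell that runs Terraform:

$ export VES_P12_PASSWORD=<your p12 password>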

Set count=1 for module "firewall"

The examples defined in the various top-level *.tf files use the Terraform count meta-argument to enable or disable the various site types to build. Setting `count=0` disables a site and `count=1` enables it. It is even possible to create multiple sites of the same type by setting count to a higher number. Each site adds the count index as a suffix to the site name.

To re-create the setup documented here, edit the file lab-firewall.tf and set `count=1` in the firewall and appstack modules.
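
As an illustration, the firewall module block in lab-firewall.tf has roughly this shape (the source path and argument list are simplified placeholders; refer to the actual file in the repo):

module "firewall" {
  count          = 1                         # 0 = don't build, 1 = build one site, 2+ = multiple sites
  source         = "./modules/secure-mesh"   # placeholder path
  project_prefix = var.project_prefix        # the count index is appended to the site name
  # ...remaining Proxmox and F5 XC arguments as defined in the repo
}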

Deploy sites

Use the Terraform CLI to deploy:

$ terraform init

$ terraform plan

$ terraform apply

Terraform output will show periodic progress, including site status until ONLINE.

Once deployed, you can check the status via the F5 XC Console, including the DHCP leases assigned to the App Stack nodes.

A kubeconfig file is created automatically; source env.sh to point kubectl at it and query the cluster:



$ source env.sh
$ kubectl get nodes -o wide                                                                      
NAME                  STATUS   ROLES        AGE    VERSION       INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION                 CONTAINER-RUNTIME
mw-fw-appstack-0-m0   Ready    ves-master   159m   v1.29.2-ves   192.168.100.114   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-m1   Ready    ves-master   159m   v1.29.2-ves   192.168.100.96    <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-m2   Ready    ves-master   159m   v1.29.2-ves   192.168.100.49    <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w0   Ready    <none>       120m   v1.29.2-ves   192.168.100.121   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w1   Ready    <none>       120m   v1.29.2-ves   192.168.100.165   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w2   Ready    <none>       120m   v1.29.2-ves   192.168.100.101   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9

Next steps

Now it's time to explore the Secure Mesh Site and the App Stack cluster: deploy a service on the new cluster, then create a Load Balancer and Origin Pool to expose it. Need more or fewer worker nodes? Simply change the worker node count in the lab-firewall.tf file and re-apply via `terraform apply`, as sketched below.
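
For example, assuming the appstack module exposes a worker count argument (the name worker_count below is hypothetical; check the module's variables in the repo), scaling from 3 to 4 workers is a one-line edit followed by a re-apply:

  worker_count = 4   # hypothetical argument name; adds a w3 node to the existing w0..w2

$ terraform apply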

Destroy deployment

Use Terraform again to destroy the site objects in F5 XC as well as the virtual machines and disks on Proxmox:

$ terraform destroy

Summary

This article documented how to deploy a dual-NIC Secure Mesh Site and a multi-node App Stack cluster via Terraform on Proxmox VE. There are additional examples in secure-mesh-single-nic.tf, secure-mesh-dual-nic.tf and appstack.tf. You can explore and modify the provided modules based on your particular needs.


Published Jun 07, 2024
Version 1.0
