How to deploy an F5XC SMSv2 CE on KVM with the help of automation
Typical KVM architecture for F5XC CE
The purpose of this article isn't to show how to build a complete KVM environment, but a few basic concepts should be explained.
To be deployed, a CE must have an interface (which is, and will always be, its default interface) with Internet access. This access is necessary to perform the installation steps and to provide the "control plane" part of the CE. This interface is referred to as the "Site Local Outside" (SLO) interface.
It is highly recommended (not to say required) to add at least a second interface when the site is first deployed, because depending on the infrastructure (for example, GCP) it is not possible to add network interfaces after the VM has been created. Even on platforms where adding a network interface is possible, a reboot of the F5XC CE is needed. An F5XC SMSv2 CE can have up to eight interfaces overall.
Additional interfaces are (most of the time) used as "Site Local Inside" (SLI) interfaces or "Segment interfaces" (that specific part will be covered in another article).
To match these requirements, one typical KVM deployment is described below. It's one way of doing things, but not the only one.
Most likely, the architecture will be composed of:
- a Linux host
- user-space software to run KVM:
- qemu
- libvirt
- virt-manager
- Linux Network Bridge Interfaces with KVM Networks mapped to those interfaces
- CE interfaces attached to KVM Networks
This is what the diagram below illustrates.
KVM Storage and Networking
We will use both components in the Terraform automation.
KVM Storage
It is necessary to define a storage pool for the F5XC CE virtual machine.
If you already have a storage pool, you can use it.
Otherwise, here is how to create one:
sudo virsh pool-define-as --name f5xc-vmdisks --type dir --target /f5xc-vmdisks
To create the target directory (/f5xc-vmdisks):
sudo virsh pool-build f5xc-vmdisks
Start the storage pool:
sudo virsh pool-start f5xc-vmdisks
Ensure the storage pool starts automatically on system boot:
sudo virsh pool-autostart f5xc-vmdisks
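You can then check that the pool is active and marked for autostart with:
sudo virsh pool-list --all
sudo virsh pool-info f5xc-vmdisks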
KVM Networking
This section assumes you already have bridge interfaces configured on the Linux host, but no KVM networks defined yet.
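If you still need to create a bridge, a minimal example with iproute2 is shown below; br0 matches the SLO bridge used later, while the physical interface name enp1s0 is a placeholder. Note that the host's IP configuration must also be moved onto the bridge, and for production you would normally make this persistent through your distribution's network configuration (netplan, NetworkManager, etc.):
sudo ip link add name br0 type bridge
sudo ip link set dev br0 up
sudo ip link set dev enp1s0 master br0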
KVM SLO networking
Create an XML file (kvm-net-ext.xml) with the following:
<network>
  <name>kvm-net-ext</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
Then run:
sudo virsh net-define kvm-net-ext.xml
sudo virsh net-start kvm-net-ext
sudo virsh net-autostart kvm-net-ext
KVM SLI networking
Create an XML file (kvm-net-int.xml) with the following:
<network>
  <name>kvm-net-int</name>
  <forward mode="bridge"/>
  <bridge name="br1"/>
</network>
Then run:
sudo virsh net-define kvm-net-int.xml
sudo virsh net-start kvm-net-int
sudo virsh net-autostart kvm-net-int
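At this point, both KVM networks should be active and set to autostart, which you can confirm with:
sudo virsh net-list --all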
Terraform
Getting the F5XC CE QCOW2 image
Please use this link to retrieve the QCOW2 image and store it on your KVM Linux host.
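Once the image is on the host, you can optionally verify that it is a valid QCOW2 file with qemu-img (the path below is an example):
qemu-img info /path/to/f5xc-ce.qcow2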
Terraform variables
vCPU and Memory
Please refer to this page for the CPU and memory requirements, then adjust:
variable "f5xc-ce-memory" {
description = "Memory allocated to KVM CE"
default = "32000"
}
variable "f5xc-ce-vcpu" {
description = "Number of vCPUs allocated to KVM CE"
default = "8"
}
Networking
Based on your KVM Networking setup, please adjust:
variable "f5xc-ce-network-slo" {
description = "KVM Networking for SLO interface"
default = "kvm-net-ext"
}
variable "f5xc-ce-network-sli" {
description = "KVM Networking for SLI interface"
default = "kvm-net-int"
}
Storage
Based on your KVM storage pool, please adjust:
variable "f5xc-ce-storage-pool" {
description = "KVM CE storage pool name"
default = "f5xc-vmdisks"
}
F5XC CE image location
variable "f5xc-ce-qcow2" {
description = "KVM CE QCOW2 image source"
default = "<path to the F5XC CE QCOW2 image>"
}
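Rather than editing the defaults in place, you can also override any of these variables in a terraform.tfvars file; the values below are only examples:
f5xc-ce-memory       = "32000"
f5xc-ce-vcpu         = "8"
f5xc-ce-network-slo  = "kvm-net-ext"
f5xc-ce-network-sli  = "kvm-net-int"
f5xc-ce-storage-pool = "f5xc-vmdisks"
f5xc-ce-qcow2        = "/path/to/f5xc-ce.qcow2"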
Cloud-init modification
It's possible to configure a static IP address and gateway for the SLO interface. This is done in the cloud-init part of the Terraform code.
Please specify slo_ip and slo_gateway in the Terraform code, as shown below.
data "cloudinit_config" "config" {
gzip = false
base64_encode = false
part {
content_type = "text/cloud-config"
content = yamlencode({
write_files = [
{
path = "/etc/vpm/user_data"
permissions = "0644"
owner = "root"
content = <<-EOT
token: ${replace(volterra_token.smsv2-token.id, "id=", "")}
slo_ip: 10.154.1.100/24
slo_gateway: 10.154.1.254
EOT
}
]
})
}
}
If you don't need a static IP, please comment out or remove those two lines.
Sample Terraform code
A sample Terraform configuration is available here.
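If you would rather build your own, the sketch below shows roughly how the pieces fit together, assuming the community dmacvicar/libvirt Terraform provider; the resource names and the domain name f5xc-ce are illustrative, and the sample code linked above remains the reference:
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  # Connect to the system libvirt instance on the KVM host
  uri = "qemu:///system"
}

# Copy the F5XC CE QCOW2 image into the storage pool defined earlier
resource "libvirt_volume" "f5xc_ce_disk" {
  name   = "f5xc-ce.qcow2"
  pool   = var.f5xc-ce-storage-pool
  source = var.f5xc-ce-qcow2
  format = "qcow2"
}

# Cloud-init disk built from the rendered configuration shown above
resource "libvirt_cloudinit_disk" "f5xc_ce_cloudinit" {
  name      = "f5xc-ce-cloudinit.iso"
  pool      = var.f5xc-ce-storage-pool
  user_data = data.cloudinit_config.config.rendered
}

# The CE virtual machine itself, with SLO and SLI interfaces
resource "libvirt_domain" "f5xc_ce" {
  name   = "f5xc-ce"
  memory = var.f5xc-ce-memory
  vcpu   = var.f5xc-ce-vcpu

  cloudinit = libvirt_cloudinit_disk.f5xc_ce_cloudinit.id

  disk {
    volume_id = libvirt_volume.f5xc_ce_disk.id
  }

  # First interface: SLO (default interface with Internet access)
  network_interface {
    network_name = var.f5xc-ce-network-slo
  }

  # Second interface: SLI
  network_interface {
    network_name = var.f5xc-ce-network-sli
  }
}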
Deployment
Make all the necessary changes to the Terraform variables and the cloud-init configuration, then run:
terraform init
terraform plan
terraform apply
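Before checking the F5XC Console, you can confirm on the KVM host that the virtual machine was created and is running; the domain name f5xc-ce below is an example, use whatever name your Terraform code assigns:
sudo virsh list --all
sudo virsh dominfo f5xc-ce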
If everything is correct at each step, you should see a CE object in the F5XC Console, under
Multi-Cloud Network Connect --> Manage --> Site Management --> Secure Mesh Sites v2