proxmox
Create F5 BIG-IP Next Instance on Proxmox Virtual Environment
If you are looking to deploy an F5 BIG-IP Next instance on Proxmox Virtual Environment (henceforth referred to as Proxmox for the sake of brevity), perhaps in your home lab, here's how:

First, download the BIG-IP Next Central Manager and BIG-IP Next QCOW files from MyF5 Downloads and click "Copy Download Link" for each. Copy the QCOW files to your Proxmox host. I am using the download links from above in the example below.

proxmox $ curl -O -L -J [link for Central Manager from F5 downloads]
proxmox $ curl -O -L -J [link for Next from F5 downloads]

On the Proxmox host, prepare the downloaded files: rename the Central Manager file from .qcow to .qcow2 and extract the BIG-IP Next tarball.

proxmox $ cd ~/
proxmox $ mv BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2
proxmox $ tar -zxvf BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.tar.gz
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512.sig
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512sum.txt.asc
BIG-IP-Next-20.2.1-F5-ca-bundle.cert
BIG-IP-Next-20.2.1-F5-certificate.cert

Then, run the commands below to create the virtual machines (VMs) from the extracted QCOW files. Replace the values to match your environment.

#
# Central Manager
#
# use either the DHCP or the static IP example
#
# using DHCP (change values to match your environment)
proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ide2=local-lvm:cloudinit

# static IP (change values to match your environment)
# proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.5/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 105 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2 --boot order=virtio0

#
# Next instance
#
# Note that you need at least two interfaces, one for management and one for the data plane
#
# use either the DHCP or the static IP example
#
# DHCP
proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# static IP
# proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.7/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit

# import disk
proxmox $ qm set 107 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2 --boot order=virtio0

You should now see the new VMs in the Proxmox GUI. Finally, start the VMs (example commands below); booting takes a few minutes. The BIG-IP Next VM is now ready to be onboarded per instructions found here.
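If you prefer the command line over the GUI for this last step, the VMs can be started and checked with qm as well. A minimal sketch, assuming the VM IDs 105 and 107 used above:

# start the Central Manager and the Next instance
proxmox $ qm start 105
proxmox $ qm start 107

# confirm they are running
proxmox $ qm status 105
proxmox $ qm status 107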
F5 Distributed Cloud Site Lab on Proxmox VE with Terraform

Overview

F5 Distributed Cloud (XC) Sites can be deployed in public clouds and on private virtualization platforms like VMware and KVM using images available for download at https://docs.cloud.f5.com/docs/images. Proxmox Virtual Environment is a complete, open-source server management platform for enterprise virtualization based on KVM and Linux Containers. This article shows how a redundant Secure Mesh site protecting a multi-node App Stack site can be deployed using Terraform automation on Proxmox VE.

Logical Topology

          ----------------- vmbr0 (with Internet Access)
            |      |      |
          +----+ +----+ +----+
          | m0 | | m1 | | m2 |   3 node Secure Mesh Site
          +----+ +----+ +----+
            |      |      |
   ------------------------------------- vmbr1 (or vmbr0 vlan 100)
     |      |      |      |      |      |
   +----+ +----+ +----+ +----+ +----+ +----+
   | m0 | | m1 | | m2 | | w0 | | w1 | | w2 |   6 node App Stack Site
   +----+ +----+ +----+ +----+ +----+ +----+

The redundant 3 node Secure Mesh Site connects via its Site Local Outside (SLO) interfaces to the virtual network (vmbr0) for Internet access. Via its Site Local Inside (SLI) interfaces, the Secure Mesh Site provides DHCP services and connectivity to the App Stack nodes, which are connected to another virtual network (vmbr1 or VLAN-tagged vmbr0). Each node of the App Stack site gets DHCP services and Internet connectivity via the Secure Mesh Site's SLI interfaces.

Requirements

- Proxmox VE server or cluster (this example leverages a 3 node cluster) with a total capacity of at least 24 CPUs (sufficient for 9 nodes, 4 vCPU each) and 144 GB of RAM (for 9 nodes, 16 GB each)
- ISO file storage for cloud-init disks (local or cephfs)
- Block storage for VMs and the template (local-lvm or cephpool)
- F5 XC CE Template (the same template is used for the Secure Mesh and App Stack sites; see below on how to create one)
- F5 Distributed Cloud access and API credentials
- Terraform CLI
- Terraform example configuration files from https://github.com/mwiget/f5xc-proxmox-site

The setup used to write this article consists of 3 Intel/ASUS i3-1315U with 64 GB RAM and a 2nd disk each for Ceph storage. Each NUC has a single 2.5G Ethernet port; the ports are interconnected via a physical Ethernet switch and internally connected to Linux bridge vmbr0. The required 2nd virtual network for the Secure Mesh Site is created using the same Linux bridge vmbr0, but with a VLAN tag (e.g. 100). There are other options to create a second virtual network in Proxmox, e.g. via Software Defined Networking (SDN) or dedicated physical Ethernet ports.

Installation

Clone the example repo:

$ git clone https://github.com/mwiget/f5xc-proxmox-site
$ cd f5xc-proxmox-site

Create F5 XC CE Template

The repo contains helper scripts to download the image and create the template; they can be executed directly from the Proxmox server shell. Modify the environment variables in the scripts according to your setup, mainly the VM template id (to one that isn't used yet) and the block storage location available on your Proxmox cluster (e.g. local-lvm or cephpool):

$ cat download_ce_image.sh
#!/bin/bash
image=$(curl -s https://docs.cloud.f5.com/docs/images/node-cert-hw-kvm-images|grep qcow2\"| cut -d\" -f2)
if test -z $image; then
  echo "can't find qcow2 image from download url. Check https://docs.cloud.f5.com/docs/images/node-cert-hw-kvm-images"
  exit 1
fi
if ! test -f $image; then
  echo "downloading $image ..."
  wget $image
fi
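Before building the template, it can be worth sanity-checking the downloaded image on the Proxmox host. A quick, optional check (the filename matches the qcow2 used in the template script below):

# optional: verify the downloaded qcow2 opens cleanly and check its virtual size
qemu-img info /root/rhel-9.2024.11-20240523024833.qcow2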
The following script must be executed on the Proxmox server (don't forget to adjust qcow2, id and storage):

$ cat create_f5xc_ce_template.sh
#!/bin/bash
# adjust full path to downloaded qcow2 file, target template id and storage
qcow2=/root/rhel-9.2024.11-20240523024833.qcow2
id=9000
storage=cephpool

echo "resizing image to 50G ..."
qemu-img resize $qcow2 50G

echo "destroying existing VM $id (if present) ..."
qm destroy $id

echo "creating vm template $id from $qcow2 .."
qm create $id --memory 16384 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm set $id --name f5xc-ce-template
qm set $id --scsi0 $storage:0,import-from=$qcow2
qm set $id --boot order=scsi0
qm set $id --serial0 socket --vga serial0
qm template $id

Create terraform.tfvars

Copy the example terraform.tfvars.example to terraform.tfvars and adjust the variables based on your setup:

$ cp terraform.tfvars.example terraform.tfvars

project_prefix        Site and node names will use this prefix, e.g. your initials
ssh_public_key        Your SSH public key provided to each node
pm_api_url            Proxmox API URL
pm_api_token_id       Proxmox API Token Id, e.g. "root@pam!prox"
pm_api_token_secret   Proxmox API Token Secret
pm_target_nodes       List of Proxmox servers to use
iso_storage_pool      Proxmox storage pool for the cloud-init ISO disks
pm_storage_pool       Proxmox storage pool for the VM disks
pm_clone              Name of the created F5 XC CE Template
pm_pool               Resource pool to which the VM will be added (optional)
f5xc_api_url          https://<tenant>.console.ves.volterra.io/api
f5xc_api_token        F5 XC API Token
f5xc_tenant           F5 XC Tenant Id
f5xc_api_p12_file     Path to the encrypted F5 XC API P12 file. The password is expected to be provided via the environment variable VES_P12_PASSWORD

Set count=1 for module "firewall"

The examples defined in the various top-level *.tf files use the Terraform count meta-argument to enable or disable the various site types to build. Setting `count=0` disables a site and `count=1` enables it. It is even possible to create multiple sites of the same type by setting count to a higher number; each site adds the count index as a suffix to the site name. To re-create the setup documented here, edit the file lab-firewall.tf and set `count=1` in the firewall and appstack modules.
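Because the P12 password is read from the environment rather than from terraform.tfvars, export it in the shell before running Terraform. For example (placeholder value):

# password for the file referenced by f5xc_api_p12_file
export VES_P12_PASSWORD='<your-p12-password>'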
Deploy sites

Use the Terraform CLI to deploy:

$ terraform init
$ terraform plan
$ terraform apply

Terraform output will show periodic progress, including site status until ONLINE. Once deployed, you can check the status via the F5 XC UI, including the DHCP leases assigned to the App Stack nodes.

A kubeconfig file has been created automatically; it can be sourced via env.sh and used to query the cluster:

$ source env.sh
$ kubectl get nodes -o wide
NAME                  STATUS   ROLES        AGE    VERSION       INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION                 CONTAINER-RUNTIME
mw-fw-appstack-0-m0   Ready    ves-master   159m   v1.29.2-ves   192.168.100.114   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-m1   Ready    ves-master   159m   v1.29.2-ves   192.168.100.96    <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-m2   Ready    ves-master   159m   v1.29.2-ves   192.168.100.49    <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w0   Ready    <none>       120m   v1.29.2-ves   192.168.100.121   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w1   Ready    <none>       120m   v1.29.2-ves   192.168.100.165   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9
mw-fw-appstack-0-w2   Ready    <none>       120m   v1.29.2-ves   192.168.100.101   <none>        Red Hat Enterprise Linux 9.2024.11.4 (Plow)   5.14.0-427.16.1.el9_4.x86_64   cri-o://1.26.5-5.ves1.el9

Next steps

Now it's time to explore the Secure Mesh site and the App Stack cluster:

- Deploy a service on the new cluster, then create a Load Balancer and Origin Pool to expose the service (a minimal example follows below).
- Need more or fewer worker nodes? Simply change the worker node count in the lab-firewall.tf file and re-apply via `terraform apply`.
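As a quick smoke test of the App Stack cluster, something like the following works; the deployment name and image are arbitrary, and exposing the service externally via an F5 XC Load Balancer and Origin Pool is then configured in the XC console:

# deploy a test workload and expose it inside the cluster
kubectl create deployment nginx-demo --image=nginx
kubectl expose deployment nginx-demo --port=80
kubectl get pods,svc -o wide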
Destroy deployment

Use Terraform again to destroy the site objects in F5 XC as well as the virtual machines and disks on Proxmox:

$ terraform destroy

Summary

This article documented how to deploy a dual-NIC Secure Mesh site and a multi-node App Stack cluster via Terraform on Proxmox VE. There are additional examples in secure-mesh-single-nic.tf, secure-mesh-dual-nic.tf and appstack.tf. You can explore and modify the provided modules based on your particular needs.

Resources

- https://github.com/mwiget/f5xc-proxmox-site
- F5 Distributed Cloud Documentation
- Terraform Proxmox Provider by Telmate
- Proxmox Virtual Environment

BIG-IP Next under Proxmox

In my lab, I'm running Proxmox instead of other virtualization software. Installation is similar to the documentation for VMware here: https://clouddocs.f5.com/bigip-next/latest/install/cm_install_vmware.html
I was able to set up BIG-IP Next Central Manager as well as BIG-IP Next instances.

Start by downloading the images from here: https://my.f5.com/manage/s/downloads
I chose the OVA format for each.

Group: BIG-IP_Next
Product Line: Central Manager (CM)
Version: 20.1
File: BIG-IP-Next-CentralManager-20.1.0-0.8.112.ova

Then, on your Proxmox instance, unpack the OVA file with tar, for example:

mkdir CentralManager
cd CentralManager
tar xvf ../BIG-IP-Next-CentralManager-20.1.0-0.8.112.ova
qm importovf 1100 BIG-IP-Next-CentralManager-20.1.0-0.8.112.ovf local-zfs
qm set 1100 -net0 virtio,bridge=vmbr0

I'm using local-zfs as storage and assigning the VM number 1100. You may want to change one or both of those. The import takes a long time as it's importing 350G of storage! For some reason the OVF import does not create a network interface on my setup, so I add that after the import. You may also want to change the SCSI controller to VirtIO SCSI as it's a bit more efficient (see the example at the end of this post).

Next, start the VM from the Proxmox web interface. It will take a while to boot. Once it does, it will show the IPv4 and IPv6 addresses (if enabled) on the console. In my setup, I assign a static DHCP lease to the node here, and then reboot again. Note: it appears that CM does NOT support DUID and/or DHCPv6. You can log in on the console or SSH into the box using admin/admin and proceed as in the VMware docs. Note: you will need a Samba or NFS "External Storage" share or setup will fail. The "Share Path" needs to start with a "/".

Once CM setup is complete, or while you're waiting for it to do so, download:

Group: BIG-IP_Next
Product Line: Virtual Edition (VE)
Version: 20.1
File: BIG-IP-Next-20.1.0-2.279.0+0.0.75.ova

Unpack it as above:

mkdir Next
cd Next
tar xvf ../BIG-IP-Next-20.1.0-2.279.0+0.0.75.ova
qm importovf 1101 BIG-IP-Next-20.1.0-2.279.0+0.0.75.ovf local-zfs
qm set 1101 -net0 virtio,bridge=vmbr0

Again, you might want to use a VM id other than 1101 and/or storage other than local-zfs. This takes less time as it's only importing 80G of data. Again, for me, no network interfaces were created, so I create two. If you have different subnets, you may want your first interface on your management network and the second on a traffic network. I then assign a static address in DHCP and reboot again. When I create both network interfaces, the services on port 5443 don't seem to start up. As above, you may also want to change the SCSI controller to VirtIO SCSI as it's a bit more efficient.

After the system boots, the instructions say to log in as admin. This did NOT work for me. Instead, I use curl to change the admin credentials:

curl -kX PUT https://next-ip:5443/api/v1/me \
  -u admin:admin \
  -H "Content-Type: application/json" \
  -d '{"currentPassword": "admin", "newPassword": "Welcome123!"}'

On my.f5.com, request a license for BIG-IP Next, copy the JWT and paste it in to activate the license. Now that I've re-installed everything, I can't get the license to take again, so it might be a one-time use.
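For reference, switching the SCSI controller type mentioned above can also be done from the command line. A minimal sketch, assuming the VM IDs 1100 and 1101 used in this post:

# switch the SCSI controller to VirtIO SCSI (virtio-scsi-pci works as well)
qm set 1100 --scsihw virtio-scsi-single
qm set 1101 --scsihw virtio-scsi-single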
F5 VE on Proxmox

Has anybody been successful in running F5 BIG-IP VE on Proxmox?

Proxmox:
  Operating System: Debian GNU/Linux 10 (buster)
  Kernel: Linux 5.0.18-1-pve
  Architecture: x86-64

F5 VE:
  Virtual Edition 14.1.2.2 from downloads.f5.com

I tried:
- both the qcow2 and the .ova (SCSI) images
- licensing with a trial license obtained from F5
- single NIC mode

According to https://clouddocs.f5.com/cloud/public/v1/matrix.html, Debian should be a supported distribution. I am following the instructions on https://techdocs.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-ve-setup-linux-kvm-13-0-0/1.html.

Creating the new VM in Proxmox (a rough qm equivalent of these settings is sketched at the end of this post):
- OS: guest OS Linux, 2.6 Kernel, no media for OS
- Hard Disk: bus SCSI, VirtIO SCSI, NFS storage, QEMU format (qcow2), 100GB
- CPU: 4 sockets
- Memory: 8GB
- Network: bridge vmbr0 openvswitch with appropriate VLAN tag, VirtIO, no firewall

The VM is created, the freshly created qcow2 on remote storage is replaced with the downloaded F5 qcow2 image, and the VM is started.

I am able to get a prompt in the Proxmox console and log in with the default root account. But then mcpd keeps restarting - constantly, every few seconds. Logs show permission errors. For some reason F5 is complaining that it cannot create "/shared/.snapshots_d/" because of a permission problem. However, permissions of "/shared" are OK. When I create the .snapshots_d folder manually as root, mcpd no longer restarts and there are no more console errors...

I run the config utility to set up the management IP/mask/gateway. As expected in single NIC mode, the https port is automatically configured to 8443. I am able to reach the GUI configuration utility and log in as admin. Up until now everything looks fine.

When trying to license the VM, I am able to generate the dossier and receive the generated license file from F5. But when I apply the license to the VM and click next, it acts as if nothing has happened. The GUI keeps showing the VE is not yet licensed. The LTM log says: err mcpd: License file open fails, Permission denied. "/config/bigip.license" has read permission for all and write for tomcat. Those are the expected permissions for the license file. Funny though, the content of /config/bigip.license is now actually populated with the correct new license. But "Registration Key" in "tmsh show sys hardware" is empty.

There are several other file system related warnings and errors in the logs... so I suspect that the whole issue is with how F5 VE is accessing the file system on Proxmox. But I don't know what to check or fix further. Is it even possible to run F5 VE on Proxmox? (Although F5 clearly states it should be.)

thx.
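For reference, the GUI settings above correspond roughly to the following qm commands; this is only a sketch, and the VM ID, storage name and VLAN tag are placeholders for my setup:

# sketch only: VM ID 9100, storage "nfs-store" and VLAN tag 10 are placeholders
qm create 9100 --name bigip-ve-14.1.2.2 --memory 8192 --sockets 4 --cores 1 \
  --ostype l26 --scsihw virtio-scsi-pci \
  --scsi0 nfs-store:100,format=qcow2 \
  --net0 virtio,bridge=vmbr0,tag=10,firewall=0
# the empty 100G qcow2 created on the NFS storage is then overwritten with the
# downloaded F5 VE qcow2 image before the first boot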