Create F5 BIG-IP Next Instance on Proxmox Virtual Environment
If you are looking to deploy an F5 BIG-IP Next instance on Proxmox Virtual Environment (henceforth referred to as Proxmox for brevity), perhaps in your home lab, here's how.
First, download the BIG-IP Next Central Manager and BIG-IP Next QCOW files from MyF5 Downloads.
Click "Copy Download Link" for each file.
Copy the QCOW files to your Proxmox host. The example below uses the download links copied above.
# quote the link, since the download URLs contain query strings
proxmox $ curl -O -L -J "[link for Central Manager from F5 downloads]"
proxmox $ curl -O -L -J "[link for Next from F5 downloads]"
On the Proxmox host, extract the contents of the QCOW archive. You will also need to rename the Central Manager file from .qcow to .qcow2.
proxmox $ cd ~/
proxmox $ mv BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2
proxmox $ tar -zxvf BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.tar.gz
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512.sig
BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512sum.txt.asc
BIG-IP-Next-20.2.1-F5-ca-bundle.cert
BIG-IP-Next-20.2.1-F5-certificate.cert
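The archive also includes SHA-512 digest files, so you can optionally verify the image before using it. A quick manual check is to compute the digest and compare it to the published one (the two values should match):
proxmox $ sha512sum BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2
proxmox $ cat BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2.sha512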
Then, run the commands below to create a virtual machine (VM) from the extracted QCOW files. Replace the values to match your environment.
#
# Central Manager
#
# use either the DHCP or the static IP example
#
# using DHCP (change values to match your environment)
proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ide2=local-lvm:cloudinit
# static IP (change values to match your environment)
# proxmox $ qm create 105 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-central-manager --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.5/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ide2=local-lvm:cloudinit
# import disk
proxmox $ qm set 105 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-CentralManager-20.2.1-0.3.25.qcow2 --boot order=virtio0
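If you want to confirm what cloud-init will hand the VM on first boot, Proxmox can render the generated data for you:
proxmox $ qm cloudinit dump 105 user
proxmox $ qm cloudinit dump 105 network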
#
# Next instance
#
# Note that you need at least two interfaces, one for management and one for the data plane
#
# use either the DHCP or the static IP example
#
# DHCP
proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=dhcp --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit
# static IP
# proxmox $ qm create 107 --memory 16384 --sockets 1 --cores 8 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --name my-next-instance --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=192.168.1.7/24,gw=192.168.1.1 --nameserver 192.168.1.1 --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit
# import disk
proxmox $ qm set 107 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-20.2.1-2.430.2+0.0.48.qcow2 --boot order=virtio0
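Before booting, you can sanity-check the resulting VM definition (disk, NICs, cloud-init drive) from the shell:
proxmox $ qm config 107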
You should now see the new VM in the Proxmox GUI.
Finally, start the VM. This will take a few minutes.
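You can start it from the GUI, or from the shell using the VM IDs from the examples above:
proxmox $ qm start 105
proxmox $ qm start 107
# confirm both VMs are running
proxmox $ qm status 105
proxmox $ qm status 107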
The BIG-IP Next VM is now ready to be onboarded per instructions found here.
- Eric_Chen (Employee)
Article updated on July 30th, 2024 to use QCOW images instead of the OVA image from the original article.
- JRahm (Admin)
Got my first Proxmox Next instances installed, thanks for this! Some notes from my experience installing 20.3:
1. For the life of me I could not get the curl command to work properly on my Proxmox host. I switched to wget and used this format: wget -O <local file name> "<copied download link>"
2. As of 20.3, the Central Manager image (not that I used it, I already have CM) already has the qcow2 extension, so no need to rename.
3. The tar command with the dash as shown in the article fails for me; I use it without: tar xvfz <tar/zip file>
4. I cut the CPU/memory for the instance way down and added a NIC:
qm create 141 --memory 8192 --sockets 1 --cores 2 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --net2 virtio,bridge=vmbr2 --name pm-next-1 --scsihw=virtio-scsi-single --ostype=l26 --cpu=x86-64-v2-AES --citype nocloud --ipconfig0 ip=172.16.2.141/24,gw=172.16.2.254 --nameserver 8.8.8.8 --ciupgrade=0 --ciuser=admin --cipassword=admin --ide2=local-lvm:cloudinit
qm set 141 --virtio0 local-lvm:0,import-from=/root/BIG-IP-Next-20.3.0-2.716.2+0.0.50.qcow2 --boot order=virtio0
5. After powering on, I pinged to make sure that was successful, then did a curl to the ip:port to make sure the instance was ready before moving on to the Postman steps (the 404 is fine in this case; the instance is up and doing its job):
jrahm@jrahm-imac ~ % ping 172.16.2.142
PING 172.16.2.142 (172.16.2.142): 56 data bytes
64 bytes from 172.16.2.142: icmp_seq=0 ttl=64 time=0.815 ms
64 bytes from 172.16.2.142: icmp_seq=1 ttl=64 time=0.511 ms
^C
--- 172.16.2.142 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.511/0.663/0.815/0.152 ms
jrahm@jrahm-imac ~ % curl -k https://172.16.2.142:5443
{"_errors":[{"id":"d0dea238-ecde-40ee-be66-632b5d829f1f","code":"13158-00028","title":"","detail":"Page not found.","status":"404"}]}
6. If you haven't done any Postman work before, download the current 20.3 collection and import both the collection and the environment into Postman, then update the appropriate environment variables, save, and make sure to set the environment in your collection. You'll also need to disable SSL verification in the Postman settings.
7. Navigate to the 20.3 collection subfolder Virtual Edition Onboarding, and if you're all good through the steps above, you can run through the API calls defined there to onboard. If all are successful, you can head to CM to onboard your instance there!
Now that we have a qcow2 version available, how would that change the installation process?
- MJ_1024 (Altocumulus)
Doesn't seem to change much.
I found using the qcow2 version I had to either:
- Use cloud-init, as mentioned above. For me, using libvirt on Ubuntu, I followed this guide (adding user: admin): https://cloudinit.readthedocs.io/en/latest/howto/run_cloud_init_locally.html#libvirt
- Or, at boot/reboot, catch the console and start in "recovery" mode, which puts you in root. Use 'passwd admin' to change the admin password, then reboot and proceed.
Obviously, the recovery route is less ideal in most cases than the cloud-init method.
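For reference, a minimal NoCloud seed along the lines MJ_1024 describes might look like the sketch below. This is an untested example, not a verified recipe: cloud-localds comes from the cloud-image-utils package, and whether the Next image honors the users module with a plain-text password is an assumption.
$ cat > user-data <<'EOF'
#cloud-config
users:
  - name: admin
    plain_text_passwd: admin
    lock_passwd: false
EOF
$ cloud-localds seed.iso user-data
# attach seed.iso to the VM as a CD-ROM so cloud-init finds the NoCloud data on first boot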