BIG-IP VE in Red Hat OpenShift Virtualization
Overview
Running BIG-IP VE in OpenShift Virtualization allows VMs and modern apps/Kubernetes workloads to converge, simplifying management and operations. OpenShift Virtualization is Red Hat's enterprise-ready KubeVirt offering, which uses the well-established QEMU+KVM Linux virtualization layers under the hood. OpenShift Virtualization/KubeVirt provides a Kubernetes declarative interface.
Configuring & running BIG-IP VE in OpenShift Virtualization is very much the same as in any other hypervisor or cloud provider. The initial BIG-IP configuration is done likewise using cloud-init and Declarative Onboarding. On the other hand, VM and platform configurations are specific of the environment. This article focuses in these latter. All the best practice recommendations in this article are aligned with the OpenShift Virtualization Reference Implementation Guide which is a recommended reading.
Update: the manifests referenced in this article can be found at https://github.com/f5devcentral/f5-bd-openshift-virt-migration/tree/main/bigip-vsphere-networking
These contain the Kubernetes manifests to set up a BIG-IP HA pair following the best practices. The next sections cover each of them, highlighting the most important aspects and including relevant links for additional information.
Note: in order to use BIG-IP VE in OpenShift Virtualization, versions 15.1.10, 16.1.5, 17.1.1 or later are required.
Platform configuration
The base installation of OpenShift Virtualization is done following these steps. After this, it is best practice to additionally set up CPU Manager to guarantee resource allocation, and to set up Topology Manager with the single-numa-node policy, which guarantees that all VM resources are in the same NUMA domain. This configuration is applied to the machineConfigPool with the label custom-kubelet: cpumanager-enabled, as can be seen in the official configuration example.
These configurations are done with a single KubeletConfig manifest, which can be found in the attached bigip-platform-manifests.zip.
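For orientation, a minimal sketch of such a KubeletConfig is shown below; the pool selector matches the custom-kubelet: cpumanager-enabled label mentioned above, and the attached manifest remains the authoritative version.
  apiVersion: machineconfiguration.openshift.io/v1
  kind: KubeletConfig
  metadata:
    name: cpumanager-enabled
  spec:
    machineConfigPoolSelector:
      matchLabels:
        custom-kubelet: cpumanager-enabled
    kubeletConfig:
      # Pin dedicated CPUs to the VM pods (CPU Manager)
      cpuManagerPolicy: static
      cpuManagerReconcilePeriod: 5s
      # Keep CPU, memory and devices in the same NUMA domain (Topology Manager)
      topologyManagerPolicy: single-numa-node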
OpenShift Virtualization/KubeVirt uses QEMU/KVM under the hood, and typically the QEMU Guest Agent is used to allow low-level operations on the VMs. In the case of an F5 BIG-IP VE appliance these operations are not supported and therefore the agent is not needed.
VirtualMachine configuration
The attached bigip-vm-manifests.zip contains recommended VirtualMachine configurations for a typical HA pair, with network attachments for the typical external, internal, HA and management interfaces. This zip file also contains the DataVolume manifests, which specify where the qcow2 files of the VE will reside. The sample manifest downloads the qcow2 file from a web server; other methods are supported. Please adapt these to your needs.
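As an illustration only, a DataVolume that imports the qcow2 image from a web server could look like the sketch below; the name, URL and storage size are placeholders, and the attached manifests remain the reference.
  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    name: bigip1-disk                 # placeholder name
    namespace: f5-bigip
  spec:
    source:
      http:
        # Placeholder URL pointing to the BIG-IP VE qcow2 image
        url: "http://images.example.internal/BIGIP-17.1.x.qcow2"
    storage:
      resources:
        requests:
          storage: 120Gi              # adjust to the disk size of the VE image used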
The key sections are described next:
- podAntiAffinity rules to prevent the two BIG-IPs of the same HA pair from running on the same node
  spec:
    runStrategy: Always
    template:
      metadata:
        labels:
          f5type: bigip-ve
          bigip-unit: unit-1
      spec:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: "bigip-unit"
                  operator: In
                  values:
                  - unit-2
              topologyKey: "kubernetes.io/hostname"
- Resource allocation: CPU, memory and network queues
  domain:
    cpu:
      sockets: 1
      # Adjust cores to the desired number of vCPUs
      cores: 4
      threads: 1
      dedicatedCpuPlacement: true
    resources:
      # memory must be at least 2Gi per core
      requests:
        memory: 8Gi
      limits:
        memory: 8Gi
    devices:
      networkInterfaceMultiqueue: true
- Annotations that hint the CPU scheduler to enable low latency for the BIG-IPs. See this link for details.
  apiVersion: kubevirt.io/v1
  kind: VirtualMachine
  metadata:
    name: bigip1
    namespace: f5-bigip
    labels:
      f5type: bigip-ve
    annotations:
      k8s.v1.cni.cncf.io/networks: bigip1-mgmt,bigip1-ha,bigip1-ext,bigip1-int
      cpu-quota.crio.io: disable
      cpu-load-balancing.crio.io: disable
      irq-load-balancing.crio.io: disable
Network configuration
This guide follows the OpenShift Virtualization Reference Implementation Guide recommendation of using a NodeNetworkConfigurationPolicy of type OVS bridge and NetworkAttachmentDefinitions of type ovn-k8s-cni-overlay with topology localnet. Alternatively, it is also fully supported by BIG-IP VE to use any other VirtIO-based networking.
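As an orientation, the general shape of this networking configuration for a single (external) network is sketched below; the bridge, uplink NIC and network names are illustrative, and the manifests in the repository above remain the reference.
  apiVersion: nmstate.io/v1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: ovs-br-ext                        # illustrative policy/bridge name
  spec:
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    desiredState:
      interfaces:
      - name: ovs-br-ext
        type: ovs-bridge
        state: up
        bridge:
          port:
          - name: ens4                      # placeholder uplink NIC
      ovn:
        bridge-mappings:
        - localnet: ext-localnet            # must match the "name" field of the NAD below
          bridge: ovs-br-ext
          state: present
  ---
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: bigip1-ext
    namespace: f5-bigip
  spec:
    config: |
      {
        "cniVersion": "0.4.0",
        "name": "ext-localnet",
        "type": "ovn-k8s-cni-overlay",
        "topology": "localnet",
        "netAttachDefName": "f5-bigip/bigip1-ext"
      }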
The ovn-k8s-cni-overlay + localnet networking provides access to the physical network and offers micro-segmentation by using tags instead of IP addresses, as well as better visibility of the traffic. This feature needs to be enabled in the Operator and is configured using the MultiNetworkPolicy resource.
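For illustration, once MultiNetworkPolicy support is enabled in the cluster Network operator, a policy limiting ingress on one of the secondary networks could be sketched as below; the policy name, target network and port are assumptions for the example.
  apiVersion: k8s.cni.cncf.io/v1beta1
  kind: MultiNetworkPolicy
  metadata:
    name: bigip-ext-allow-https             # illustrative policy name
    namespace: f5-bigip
    annotations:
      # The secondary network(s) this policy applies to
      k8s.v1.cni.cncf.io/policy-for: f5-bigip/bigip1-ext
  spec:
    podSelector:
      matchLabels:
        f5type: bigip-ve                    # selects the BIG-IP VMs by label (tag)
    policyTypes:
    - Ingress
    ingress:
    - ports:
      - protocol: TCP
        port: 443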
It is important to remark that access to the physical network is provided by ovn-k8s-cni-overlay with access ports to the VMs; that is, VLAN trunking or tagging cannot be exposed to the VMs. If VLAN trunking or tagging was used previously, the interfaces and tagging configured in the VLANs will need to be modified. BIG-IP VE can have a total of 28 NICs[1]. Considering one NIC for management, this means that BIG-IP VE can be connected to 27 VLANs with this network type.
Note: at the time of this writing, known issue ID 1492337 requires setting the MTU manually at the cloud-init stage. This is done in the provided VirtualMachine manifests; search for the "ndal mtu" keywords in them.
[1] https://clouddocs.f5.com/cloud/public/v1/kvm/kvm_setup.html#virtual-machine-network-interfaces
Final remarks and next steps
In future articles we will describe how to further expand the possibilities of OpenShift Virtualization by covering the following topics:
- Migrating BIG-IP VE from VMware to OpenShift Virtualization
- Using BIG-IP VE and CIS connected to a POD network
- MichaelOLeary (Employee)
Thanks for the article, Ulises!
- Kay_Altostratus
We look forward to migrating BIG-IP VE from VMware to OpenShift Virtualization, but we have 40 VLANs on the current BIG-IP VE. As stated, a maximum of 28 NICs is supported in BIG-IP VE; what other options do we have?
Currently we are using VLAN trunking/tagging in VMware.
- Ulises_Alonso (Employee)
Hi Kay
Please check the follow-up, more comprehensive PDF attached in VMware to Red Hat OpenShift Virtualization Migration | DevCentral. Trunking could be an option for you in OpenShift Virtualization as well:
"If VLAN trunking is required towards the VMs, a L2 configuration consisting of a NodeNetworkConfigurationPolicy linux-bridge and a NetworkAttachmentDefinition with cnv-bridge can be used."
Regards,
Ulises