Configuring the KVM hypervisor to run a LineRate guest
The release of LineRate 2.5.0 brings lots of new features, but the one I'm most excited about is KVM guest support. We've spent a lot of effort making LineRate run well as a guest under the KVM hypervisor "out of the box".
We've also identified a couple of configuration tweaks you can make on the hypervisor host to improve performance and efficiency:
- Enable multiple queues on the Virtio network interface
- Pin the LineRate guest to vCPUs that are not used by the hypervisor host
Enable Virtio NIC multiqueue
LineRate supports multiple send and receive queues on Virtio network interfaces. This allows multiple vCPUs to send and receive traffic simultaneously, improving network throughput.
Your hypervisor host needs to run KVM 2.0.0 or later and libvirt 1.2.2 or later to use the multiqueue feature of Virtio NICs.
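You can confirm the versions on your hypervisor host before starting. Exact package names vary by distribution, but these commands are a reasonable sketch:

virsh version        # reports the libvirt and QEMU/KVM versions in use
libvirtd --version   # reports the libvirt daemon version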
The optimal number of queues depends on the number of vCPUs in the LineRate guest, as shown below:
Guest vCPUs | Queues
---|---
1 | 1
2 | 1
4 | 1
6 | 2
8 | 2
12 | 4
16 | 6
24 | 8
32 | 19
Note: The default number of queues for a KVM guest is 1. If the table above indicates that the optimal number of queues for your LineRate guest is 1, then there is no need to do anything.
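If you're not sure how many vCPUs your guest has, you can check from the hypervisor host. The domain name linerate-guest below is just a placeholder; substitute your own:

virsh dominfo linerate-guest | grep 'CPU(s)'   # reports the guest vCPU count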
To enable Virtio multiqueue support
- After creating the LineRate guest with Virtio NICs, shut down the guest.
- Manually edit the guest XML, for example using the virsh edit command.
- In every <interface> section, add the following element, using the table above to determine the queues value:
<driver name='vhost' queues='8'/>
- Save the file.
- Restart the guest.
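Putting those steps together, a session on the hypervisor host might look like the following sketch. The domain name linerate-guest and the queue count of 6 (appropriate for a 16-vCPU guest per the table above) are illustrative:

virsh shutdown linerate-guest
virsh edit linerate-guest
# In the editor, add the driver element to each <interface> section, e.g.:
#   <interface type='bridge'>
#     ...
#     <driver name='vhost' queues='6'/>
#   </interface>
virsh start linerate-guest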
Pin guest vCPUs to host vCPUs
Virtual machines share physical CPUs with the hypervisor host. In many situations, you can improve LineRate performance by coordinating which CPUs are used by the host and which by the guest: specifically, you want to pin guest vCPUs to host vCPUs that are not used by the host's network drivers.
To implement vCPU pinning
- Run some traffic through the LineRate guest and determine which physical NIC on the hypervisor is carrying the LineRate guest's traffic.
- Look in the following files on the hypervisor host to determine which vCPUs are being used for the LineRate guest's network traffic:
- /proc/interrupts: Shows which interrupts are serviced by which host vCPUs. Look for the interrupts coming from the physical NIC(s) identified in step 1 and note the hypervisor host vCPUs that handle them.
- /sys/class/net/$DEV/device/local_cpulist (where $DEV is the name of the physical NIC identified in step 1 above): Shows which host vCPUs are connected to the physical NIC.
- Use the virsh capabilities command to see all of the host vCPUs. Identify the vCPUs that are not used by the physical NIC; these are the proper vCPUs to pin the LineRate guest to.
- Use virsh vcpupin (or virt-manager or virsh edit) to pin guest vCPUs to the unused host vCPUs, as shown in the sketch after this list.
If you choose to manually edit the XML file using virsh edit, the result should look something like what's shown below. In the libvirt XML file, vcpu specifies the guest vCPU and cpuset specifies the hypervisor host vCPU.
<vcpu placement='static'>16</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='8'/>
<vcpupin vcpu='1' cpuset='9'/>
<vcpupin vcpu='2' cpuset='10'/>
<vcpupin vcpu='3' cpuset='11'/>
<vcpupin vcpu='4' cpuset='12'/>
<vcpupin vcpu='5' cpuset='13'/>
<vcpupin vcpu='6' cpuset='14'/>
<vcpupin vcpu='7' cpuset='15'/>
<vcpupin vcpu='8' cpuset='24'/>
<vcpupin vcpu='9' cpuset='25'/>
<vcpupin vcpu='10' cpuset='26'/>
<vcpupin vcpu='11' cpuset='27'/>
<vcpupin vcpu='12' cpuset='28'/>
<vcpupin vcpu='13' cpuset='29'/>
<vcpupin vcpu='14' cpuset='30'/>
<vcpupin vcpu='15' cpuset='31'/>
</cputune>
Example vCPU pinning configuration for a LineRate guest in KVM
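After restarting the guest, you can verify that the pinning took effect (linerate-guest is again a placeholder domain name):

virsh vcpupin linerate-guest    # with no vCPU argument, prints each guest vCPU's current affinity
virsh vcpuinfo linerate-guest   # shows the host CPU each guest vCPU is actually running on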