Published on 11-Nov-2014 09:52, edited on 05-Jun-2023 22:12 by JimmyPackets
The release of LineRate 2.5.0 brings lots of new features, but the one I'm most excited about is KVM guest support. We've spent a lot of effort making LineRate run well as a guest under the KVM hypervisor "out of the box".
We've also identified a couple of configuration tweaks you can make on the hypervisor host to improve performance and efficiency: enabling multiqueue Virtio NICs and pinning guest vCPUs to host CPUs.
LineRate supports multiple send and receive queues on Virtio network interfaces. This allows multiple vCPUs to send and receive traffic simultaneously, improving network throughput.
Your hypervisor host needs to run KVM 2.0.0 or later and libvirt 1.2.2 or later to use the multiqueue feature of Virtio NICs.
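If you're not sure what your host is running, a minimal check (assuming `virsh` is installed and the QEMU binary is named `qemu-system-x86_64` on your distribution) is:

```
# Report the libvirt and hypervisor versions known to libvirt.
# Multiqueue Virtio needs libvirt 1.2.2+ and QEMU/KVM 2.0.0+.
virsh version

# Or query the QEMU binary directly (the binary name varies by distro,
# e.g. qemu-kvm on some systems).
qemu-system-x86_64 --version
```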
The optimal number of queues depends on the number of vCPUs in the LineRate guest, as shown below:
| Guest vCPUs | Queues |
|---|---|
| 1 | 1 |
| 2 | 1 |
| 4 | 1 |
| 6 | 2 |
| 8 | 2 |
| 12 | 4 |
| 16 | 6 |
| 24 | 8 |
| 32 | 19 |
Note: The default number of queues for a KVM guest is 1. If the table above indicates that the optimal number of queues for your LineRate guest is 1, then there is no need to do anything.
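If you're not sure how your guest is currently configured, one quick way to check from the host (the guest name below is a placeholder) is:

```
# Dump the guest's libvirt XML and show its network interface definitions.
# If no queues attribute appears on the <driver> element, the guest is
# using the default of a single queue.
virsh dumpxml lros-guest | grep -A 4 '<interface'
```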
To enable multiple queues, edit the guest's configuration with the `virsh edit` command. In the `<interface>` section, add the following element, using the table above to determine the `queues` value: `<driver name='vhost' queues='8'/>`
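For context, here is a sketch of how that element might sit inside a guest's `<interface>` definition; the bridge name and MAC address are placeholders rather than values from the original configuration:

```
<!-- Hypothetical virtio NIC for a LineRate guest. Only the <driver>
     element is the multiqueue tweak; the source, MAC, and model lines
     should match your existing interface definition. -->
<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost' queues='8'/>
</interface>
```

Note that `virsh edit` changes only the persistent configuration, so the guest typically needs to be shut down and started again for the new queue count to take effect.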
Virtual machines share vCPUs with the hypervisor host. In many situations, you can improve LineRate performance by coordinating which vCPUs are used by the host and the guest.
You want to pin guest vCPUs to hypervisor vCPUs that are not used by the host's network drivers.
Use the `virsh capabilities` command to see all of the host vCPUs, and identify the vCPUs which are not used by the physical NIC; these are the proper vCPUs to pin the LineRate guest to (a rough way to find them is sketched after the example below). Then use `virsh vcpupin` (or virt-manager or `virsh edit`) to pin guest vCPUs to unused host vCPUs.

If you choose to manually edit the XML file using `virsh edit`, it should look something like what's shown below. In the libvirt XML file, `vcpu` specifies the guest vCPU and `cpuset` specifies the hypervisor host vCPU.
<vcpu placement='static'>16</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='8'/>
<vcpupin vcpu='1' cpuset='9'/>
<vcpupin vcpu='2' cpuset='10'/>
<vcpupin vcpu='3' cpuset='11'/>
<vcpupin vcpu='4' cpuset='12'/>
<vcpupin vcpu='5' cpuset='13'/>
<vcpupin vcpu='6' cpuset='14'/>
<vcpupin vcpu='7' cpuset='15'/>
<vcpupin vcpu='8' cpuset='24'/>
<vcpupin vcpu='9' cpuset='25'/>
<vcpupin vcpu='10' cpuset='26'/>
<vcpupin vcpu='11' cpuset='27'/>
<vcpupin vcpu='12' cpuset='28'/>
<vcpupin vcpu='13' cpuset='29'/>
<vcpupin vcpu='14' cpuset='30'/>
<vcpupin vcpu='15' cpuset='31'/>
</cputune>
Example vCPU pinning configuration for a LineRate guest in KVM
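As a rough sketch of the command-line approach (the NIC name `eth0` and the guest name `lros-guest` are placeholders, and the exact interrupt names depend on your NIC driver), you might identify and apply the pinning like this:

```
# See which host CPUs are servicing the physical NIC's interrupts:
# the column headers in /proc/interrupts are CPU numbers, and the rows
# matching the NIC show per-CPU interrupt counts.
grep eth0 /proc/interrupts

# Pin guest vCPU 0 to host CPU 8 (a CPU not busy with NIC interrupts).
# Repeat for each guest vCPU; --config makes the pinning persistent.
virsh vcpupin lros-guest 0 8 --config

# With no vCPU/CPU arguments, show the current pinning for all vCPUs.
virsh vcpupin lros-guest
```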