
Supported SR-IOV Configurations?

aplovich_252762
Nimbostratus

We're trying to implement SR-IOV on two VE LTMs we have. I'm running into an issue where what's specified in VMware's documentation doesn't match up with F5's:

 

VMware ESXi 5.5 Config guide: SR-IOV section

 

Configuring SR-IOV Big-IP 11.6.1

 

Based on the above documentation, in vSphere 5.5 the correct way to configure SR-IOV is to assign an SR-IOV passthrough NIC to the VM and then apply a port group that enforces policy on it (i.e., VLAN tagging mode, VLANs, etc.). However, the F5 doc says to add the VF as a PCI passthrough device instead of a NIC. It also mentions setting a default VLAN for the VF (via pciPassthru0.defaultVlan). I think the implication is that each VLAN needs to be mapped to an individual VF, which is then passed as a PCI device to the VE?
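For reference, here's a rough sketch of what that looks like in the VM's .vmx file. Only pciPassthru0.defaultVlan is the setting named in the F5 doc; the other device-identity keys are written by vSphere when the PCI device is added, and the VLAN ID shown is a placeholder:

```
# Excerpt from the VM's .vmx (sketch; device-identity keys for
# pciPassthru0 are populated by vSphere when the VF is attached)
pciPassthru0.present = "TRUE"
pciPassthru0.defaultVlan = "100"   # placeholder VLAN ID for this VF
```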

 

Ideally, one VF could be passed into the VE, which would then send all VLANs over it, instead of having, say, 16 PCI devices attached for the 16 VLANs that the VE services.

 

I've attempted to configure our VEs with just one SR-IOV passthrough NIC, per the VMware doc, but it hasn't worked: the VE can't communicate over the VF. I've tested this by attempting to ping out, sourced from a VLAN on the SR-IOV interface, and by pinging in to the self IP hosted on the SR-IOV-backed VLAN. The only thing I've seen work is ARP resolution: I was able to arping the self IP from another host on the VLAN.

 

I was wondering if anybody had any experience setting SR-IOV up, or could shed some light on what I'm seeing.

 

1 Reply

aplovich_252762
Nimbostratus

If anybody is wondering, the answer to this is that each VLAN needs to be mapped to a separate Virtual Function and passed to the VE as a PCI device, per F5's docs.

 

I did some research on the SR-IOV drivers (ixgbevf/ixgbe) and found that both are required to make it work. The ixgbevf driver runs in the guest OS, while the ixgbe driver runs on the hypervisor. They work in tandem, and their versions need to be kept in sync.
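A sketch of how the two versions can be read, for anyone checking their own setup (eth1 is a placeholder for the SR-IOV interface inside the guest, and the esxcli command must be run on the ESXi host, not in the guest):

```
# In the BIG-IP VE guest (VF side, ixgbevf):
modinfo ixgbevf | grep '^version'
ethtool -i eth1 | grep '^version'    # eth1 is a placeholder interface name

# On the ESXi host (PF side, ixgbe):
esxcli system module get -m ixgbe | grep -i 'version'
```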

 

I found this chart http://www.intel.com/content/www/us/en/support/network-and-i-o/ethernet-products/000006958.html, which indicates you'd need ixgbevf version 2.16.1 or later for compatibility with ESXi's ixgbe version 3.7.13.7.
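For a quick check of whether a given guest driver clears that bar, the version strings can be compared with `sort -V` (a GNU coreutils feature; the helper below is my own, with 2.14.2 being the version our VE reported and 2.16.1 the minimum from the chart):

```shell
# version_ge A B — succeeds if version A >= version B.
# sort -V sorts version strings numerically, so the smaller one sorts first.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="2.16.1"    # minimum ixgbevf per the Intel chart
installed="2.14.2"   # version reported by the VE in this case

if version_ge "$installed" "$required"; then
  echo "ixgbevf $installed meets the minimum ($required)"
else
  echo "ixgbevf $installed is older than required $required"
fi
# prints: ixgbevf 2.14.2 is older than required 2.16.1
```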

 

My assumption is that, since our VE's ixgbevf driver is at 2.14.2, some functionality isn't implemented between the ixgbe and ixgbevf versions we're running, which would explain why VMware's method of deploying SR-IOV isn't working as expected.