
Forum Discussion

Karthick1
Dec 03, 2025

BIG-IP VE: 40G Throughput from 4x10G physical NICs

 

 

Hello F5 Community,

I'm designing a BIG-IP VE deployment and need to achieve 40G throughput from 4x10G physical NICs. After extensive research (including reading K97995640), I've created this flowchart to summarize the options. Can you verify if this understanding is correct?

 

**My Environment:**
- Physical server: 4x10G NICs
- ESXi 7.0
- BIG-IP VE (Performance LTM license)
- Goal: Maximize throughput for data plane

 

**Research Findings:**
From F5 K97995640: "Trunking is supported on BIG-IP VE... intended to be used with SR-IOV interfaces but not with the default vmxnet3 driver."

 

 

                [Need 40G to F5 VE]
         ┌───────────────┴───────────────┐
         │                               │
   [F5 controls]                 [ESXi controls]
  (F5 does LACP)                (ESXi does LACP)
         │                               │
    Only SR-IOV                  Link Aggregation
         │                               │
    ┌────┴────┐                   ┌──────┴──────┐
    │ 40G per │                   │   40G agg   │
    │  flow   │                   │  10G/flow   │
    └─────────┘                   └─────────────┘
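
For the left branch, this is roughly what I expect the F5-side trunk to look like once the four SR-IOV VFs are presented to the VE. A minimal sketch only, assuming the VFs show up as interfaces 1.1-1.4 and using an example VLAN tag:

    # BIG-IP VE (tmsh) - interface numbers and VLAN tag are assumptions
    create net trunk sriov_trunk interfaces add { 1.1 1.2 1.3 1.4 } lacp enabled
    # Attach the data-plane VLAN to the trunk
    create net vlan external interfaces add { sriov_trunk { tagged } } tag 100
    # Verify LACP state
    list net trunk sriov_trunk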

4 Replies

  • Hi Karthick1, your assessment and flowchart look spot on as per the KB article you mentioned. On the left branch of the flow diagram, you identified that you must use SR-IOV and compatible NICs (Intel and Mellanox ConnectX, to name a couple). The right branch is also accurate; just keep in mind that per-flow hashing happens before the BIG-IP VE, so there can be instances where one flow is hashed to a single 10G uplink. For your goal of 40G throughput, you may want to consider allocating 4 Virtual Functions per Physical Function within SR-IOV. Also, a High-Performance VE license can unlock higher vCPU allocations for multi-threaded TMM. Leverage iperf and nuttcp to test throughput across various settings.
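
    If it helps, a rough sketch of the VF allocation on the ESXi host side, assuming Intel NICs on the ixgben driver and uplinks vmnic0-vmnic3 (driver and NIC names are assumptions, adjust for your hardware):

      # ESXi host - expose 4 VFs per physical port (reboot required afterwards)
      esxcli system module parameters set -m ixgben -p "max_vfs=4,4,4,4"
      # Confirm the VFs are visible on one of the ports
      esxcli network sriovnic vf list -n vmnic0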

  • Regarding the comment: “For your goal of 40G throughput, you may want to consider allocating 4 Virtual Functions per Physical Function within SR-IOV” — are you referring to the left side of the branch?

    Which option would be preferable?
    If I use SR-IOV, then I need to verify the compatible NICs.

    If I choose ESXi link aggregation, I assume that on the F5 VE side I will see a single 40G link. In that case, I don’t need to change any compatibility settings for the drivers. Is that correct?

  • Indeed, the VF-per-PF reference was for the left side of your branch. Yep, for SR-IOV you will need to use compatible NICs; we do have a list compiled here. If you choose the ESXi link aggregation option, it is much simpler in terms of changes: the VE will see 40G, not the individual links. Just be mindful that you can still run into uneven flow distribution due to hashing on the ESXi vSwitch across the 4 x 10G links.
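
    For the ESXi link agg path, a rough sketch on a standard vSwitch, assuming the uplinks are vmnic0-vmnic3 and the vSwitch is vSwitch1 (names are examples). Note that a standard vSwitch only does static teaming with IP hash; true LACP needs a Distributed Switch, where the LAG is built in the vSphere UI:

      # ESXi host - add the four 10G uplinks to the vSwitch
      esxcli network vswitch standard uplink add -u vmnic0 -v vSwitch1
      esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch1
      esxcli network vswitch standard uplink add -u vmnic2 -v vSwitch1
      esxcli network vswitch standard uplink add -u vmnic3 -v vSwitch1
      # Route based on IP hash (needs a matching static port channel upstream)
      esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash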

    • Karthick1

      Thank you for pointing out the importance of per-flow hashing. To summarize: if I use ESXi link aggregation and a single flow carries 30G of traffic, that flow will, due to ESXi's hashing, go through only one of the links at a maximum of 10G (real speed for a single transfer = max 10G).

       

      The VM will show a 40G connection.

      But the real speed for a single transfer = max 10G.
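
      When I validate this, I plan to check it roughly as below (server IP is a placeholder): a single iperf3 stream should top out near 10G, while clients with different source IPs should approach the 40G aggregate, since IP hash keys on the source/destination pair.

        # Single flow - expect roughly one 10G uplink's worth of throughput
        iperf3 -c 203.0.113.10 -t 30
        # Parallel streams from one client still share a src/dst IP pair,
        # so repeat this from several different client hosts
        iperf3 -c 203.0.113.10 -P 4 -t 30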