Forum Discussion

Pradeepk7_19971
Nimbostratus
Apr 28, 2015

BIG-IP Virtual Ed. Capped at 115 Mbit/s via iperf Tests

AWS Marketplace instance: F5 BIG-IP Virtual Edition 1Gbps Best

 

Hello,

 

We're seeing an oddity in speed when running the virtual edition (c3.8xlarge).

 

Using iperf to the F5 instance directly yields ~900 to 1000 Mbit/s.

 

Using iperf via a VPN tunnel to a back-end instance (also a c3.8xlarge) through the F5 consistently yields almost exactly 115 Mbit/s.
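
For reference, the two tests were run roughly like this (classic iperf; the addresses below are placeholders, not our actual hosts):

    # start an iperf server on the target (the F5 self IP, or the back-end instance)
    iperf -s

    # test 1: directly to the F5 instance, yields ~900 to 1000 Mbit/s
    iperf -c <f5-self-ip> -t 30

    # test 2: through the F5 and the VPN tunnel to the back-end instance, yields ~115 Mbit/s
    iperf -c <backend-private-ip> -t 30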

 

3 Replies

  • BinaryCanary_19
    Historic F5 Account

    My first guess is that the traffic distribution through the tunnel is not optimal. 115 Mbit/s works out to roughly 230 Mbit/s bidirectional, and if your instance has 4 TMMs, that may line up with why you observe ~1000 Mbit/s when you connect directly (230 x 4 ≈ 920). That is just an educated guess, but it is a very plausible cause of the bottleneck:

     

    all the traffic is hitting only one TMM, and that TMM gets maxed out (it is tied to a single CPU core), so you can't go faster than that.
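
    One quick way to check this is to look at per-TMM CPU and traffic from tmsh while the iperf test is running; if one TMM is pegged while the others sit idle, the distribution theory holds. Roughly like this (exact output varies by version):

        # per-TMM CPU usage
        tmsh show sys tmm-info

        # per-TMM traffic counters, to see which TMM is carrying the flow
        tmsh show sys tmm-traffic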

     

  • Hello Fanen,

     

    Thank you for the quick response.

     

    You are correct; we narrowed it down to the TMMs as well. We reached 40% CPU utilization across all CPU cores.

     

    Another test we attempted was limiting the number of TMMs down to one, assuming that the single TMM would saturate one core completely. Again, we only reached 40% maximum utilization on that core. This seems to indicate that TMM has a failsafe that limits CPU consumption.
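
    In case it matters, a rough sketch of the views we can use to re-check the per-core numbers while the test runs:

        # per-core breakdown over SSH to the BIG-IP (press '1' in top for the per-CPU view)
        top

        # tmsh's own CPU summary
        tmsh show sys cpu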

     

    Do you know of a way to change the TMM algorithm to allow it to consume the entire core, or at least a greater percentage of it?

     

  • BinaryCanary_19
    Historic F5 Account

    Reducing the number of TMMs will not yield better performance (except in the specific case of Hyper-Threading-enabled CPUs, where having one TMM per physical core instead of per hyperthread yields improvements).

     

    TMM has no failsafe to limit its CPU consumption. TMM runs as a real-time process and can only be preempted by the Linux kernel itself. TMM will use all the available CPU and only yields to user-land processes about 10 to 20% of the time when CPU utilization is above 80% (version 11.5.0 and later; in earlier versions, TMM yields the CPU 10% of the time).

     

    The overall throughput you get depends on all TMMs receiving a relatively equal share of the incoming traffic. If all the traffic goes to only 1 out of 4 TMMs, you can expect your peak throughput to be about 1/4 of the total possible.

     

    In real-life scenarios, where clients choose their source ports randomly and naturally, the resulting load is distributed evenly among all TMMs.

     

    If you get a peak load of 115 Mbit/s when you are using a VPN, and the CPU utilization on the BIG-IP is about 40%, it simply means that the bottleneck is unlikely to be the BIG-IP, but rather the rate at which your VPN is feeding traffic to the BIG-IP.
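
    One way to separate the two possibilities (a single flow pinned to one TMM versus the VPN path itself being the limit) is to run several parallel streams, so the connections land on different source ports and, in turn, different TMMs. A rough sketch with classic iperf:

        # 8 parallel TCP streams through the tunnel; if the aggregate scales well
        # past 115 Mbit/s, a single-flow limit is the likelier explanation, and if
        # it stays at ~115 Mbit/s, the VPN path itself is the bottleneck
        iperf -c <backend-private-ip> -P 8 -t 30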