BIG-IP Virtual Edition Capped at 115 Mbps via iperf Tests
Reducing the number of TMMs will not yield better performance, except in the specific case of hyper-threading-enabled CPUs, where running one TMM per physical core instead of one per hyperthread yields an improvement.
TMM has no failsafe to limit its CPU consumption. TMM runs as a real-time process and can only be preempted by the Linux kernel itself. TMM will consume all available CPU and only yields to user-land processes about 10 to 20% of the time when CPU utilization is above 80% (in version 11.5.0 and later; in earlier versions, TMM yields the CPU 10% of the time).
The overall throughput you get depends on all TMMs receiving a roughly equal share of the incoming traffic. If all of the traffic lands on only 1 of 4 TMMs, you can expect your peak throughput to be about 1/4 of the total possible.
In real-life scenarios, where clients choose their source ports randomly, the resulting load is distributed evenly across all TMMs, as the sketch below illustrates.
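Here is a minimal Python sketch of that effect. The `tmm_for_flow` hash is a stand-in, not BIG-IP's actual disaggregator (DAG) algorithm, but any reasonable hash of the flow tuple behaves similarly: a single flow is pinned to one TMM, while many flows with random ephemeral ports spread evenly.

```python
import random
from collections import Counter

NUM_TMMS = 4  # e.g. a 4-vCPU Virtual Edition guest

def tmm_for_flow(src_ip: str, src_port: int) -> int:
    # Stand-in hash; the real BIG-IP DAG algorithm differs, but the
    # distribution behavior is comparable for this illustration.
    return hash((src_ip, src_port)) % NUM_TMMS

# One client, one flow (e.g. a single iperf stream): every packet of
# that flow hashes to the same TMM, so only one TMM does any work.
single = Counter(tmm_for_flow("10.0.0.5", 50000) for _ in range(10_000))
print("single flow:", dict(single))   # all 10,000 packets on one TMM

# Many clients/flows with random ephemeral source ports: the flows
# spread roughly evenly, so all TMMs share the load.
many = Counter(
    tmm_for_flow("10.0.0.5", random.randint(1024, 65535))
    for _ in range(10_000)
)
print("many flows: ", dict(many))     # roughly 2,500 per TMM
```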
If you get a peak of 115 Mbps when you are going through a VPN, and CPU utilization on the BIG-IP is around 40%, the bottleneck is unlikely to be the BIG-IP; it is more likely the rate at which your VPN is feeding traffic to the BIG-IP.
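One way to check whether the cap is per-flow (and therefore per-TMM or upstream) is to compare a single iperf3 stream against several parallel streams. A rough sketch, assuming iperf3 is installed and `SERVER` is a placeholder for your own iperf3 endpoint:

```python
import subprocess

# Placeholder address; replace with your iperf3 server behind the BIG-IP.
SERVER = "203.0.113.10"

# A single TCP stream exercises one TMM; 8 parallel streams (-P 8) give
# the disaggregator multiple flows to spread across TMMs. If aggregate
# throughput rises well above 115 Mbps with -P 8, the single-stream cap
# is a per-flow limit; if it stays at ~115 Mbps, suspect an upstream
# bottleneck such as the VPN.
for streams in (1, 8):
    print(f"--- {streams} parallel stream(s) ---")
    subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10"],
        check=True,
    )
```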