Forum Discussion

yuni
Sep 26, 2018

Unexpected CPU utilization on LTM

Hello

 

I am testing two different LTMs (VE) and would like to compare their CPU performance to select CPUs for our customer environment.

 

BIG-IP 12.1.1 Build 0.0.184 Final, 8 cores

 

BIG-IP 13.1.1 Build 0.0.4 Final, 12 cores

 

During the load testing (1000 TPS for SSL), I noticed that the CPU usage of the lower-version LTM (8 cores) was at 40%, but the higher-version LTM (12 cores) was at 50%. I expected the 12-core LTM to have lower CPU consumption than the 8-core one, but the results were completely the opposite.

 

Could someone tell me why it turned out like this?

 

Thanks in advance for the help.

 

  • The last time I had to dig around high CPU load conditions, I learned that F5s split the core usage by relegating the TMM 'worker' processes to the even-numbered cores while the odd cores were held in reserve for their even-numbered partner, i.e. Core0 active paired with Core1.

     

    If any of the even cores hit 80%, they would grab almost all of their odd-numbered partner's resources and assign them to the TMM instance running on the even member. At least, that is what I observed and my understanding after digging around the KB articles about core handling.

     

    With this in mind, are you seeing the same results over repeated tests? What is the per-core load when running the tests? Have you checked which TMM worker is handling the majority of the traffic while the tests are being carried out? (The commands sketched at the end of this post can help with that.)

     

    My bet is that the lower-core-count instance is balancing the load across its cores a little more evenly under the load conditions, whereas the higher-core-count instance is loading each core a little more before splitting off the load.

     

    Based on this I would suggest running tests with higher TPS volumes to see how the behavior changes.

     

    Also, bear in mind that the TMM system is essentially a virtual environment, so check core load stats and connection distribution via tmsh; OS commands such as top are not able to see how the TMM environment is being used.
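
    For reference, a minimal sketch of the tmsh commands for this kind of check (output fields vary a little between versions, so treat this as a starting point rather than an exhaustive list):

        tmsh show sys cpu                  # per-core CPU utilization as the BIG-IP sees it
        tmsh show sys tmm-info             # per-TMM-instance CPU and memory usage
        tmsh show sys tmm-traffic          # per-TMM traffic counters, to see which worker carries the load
        tmsh show sys performance system   # overall system performance summary

    Comparing the tmm-traffic counters across instances during the test should show whether one worker is taking most of the connections.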

     

  • yuni

    Thank you for your kind comment. I checked out the KB articles about even- and odd-core handling. When I ran both tests (12-core and 8-core), I found that the even-numbered core load hit 80% and started to use the odd-numbered cores' resources. I tried the load testing with higher and lower TPS volumes, but the results showed similar tendencies. The test results I got from the GUI and from the CLI (tmsh show sys cpu) seem to be the same.

     

    I am thinking it might be a problem caused by our environment, since I am running the LTMs on OpenStack...?

     

  • You would expect 12 cores to perform better. Are they on the same hypervisor with the same networks, etc.? You can get 'pinning' where there is a limited number of IP addresses and source ports; you would notice that because one TMM would go to high utilisation while the others would not be affected. Assuming that everything else is the same, I'd look at the platform config (i.e. memory assigned, resource provisioning, licensing, etc.) and also at the LTM config to check there isn't logging, iRules, etc. that may make a difference; a few tmsh starting points are sketched below.
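
    A quick sketch of where to check those platform-level items from tmsh (standard commands, but confirm against your version):

        tmsh show sys memory       # memory assigned and how it is carved up
        tmsh list sys provision    # module provisioning levels
        tmsh show sys license      # licensed modules and limits
        tmsh show sys hardware     # platform and CPU details as the VE sees them

    If both VEs match on all of these, the remaining differences are usually in the LTM configuration itself or in the underlying hypervisor placement.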

     

  • yuni

    I was able to solve it. We have an OpenStack environment on multiple physical servers. I moved the 8-core LTM to a different physical server, and then I found the CPU usage rate was about 80%, which is the value I expected. Thank you so much!!

     

  • Hi Yuni and all,

     

    Can you please share your experience? We have deployed BIG-IP version 14 running on top of OpenStack Liberty.

     

    We are facing an issue: if we check from the F5 instance's dashboard, the CPU usage is less than 10%, but if we check from the host, the CPU usage is more than 800%. Is that normal? Our host uses 2 sockets, each socket with 20 cores, so 40 physical cores in total, or 80 cores with hyperthreading.
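
    To compare the two views side by side, one possible approach (assuming a KVM/libvirt compute host, which is typical for OpenStack Liberty) is to read the guest's own TMM figures and the hypervisor's per-vCPU figures at the same time:

        # Inside the BIG-IP guest: what TMM itself reports
        tmsh show sys cpu
        tmsh show sys tmm-info

        # On the OpenStack compute host: the hypervisor's view of the same instance
        virsh list --all                # find the libvirt domain name of the VE
        virsh vcpuinfo <domain-name>    # per-vCPU state, CPU time and affinity

    Here <domain-name> is a placeholder for whatever libvirt calls the CGNAT VM; it appears in the virsh list output.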

     

    Is it fine if we switch the traffic through this F5 CGNAT VM? We are wondering why the CPU of the host is so high. This host is running only this VM.

     

    Really appreciate your insight.

     

    Thank you. Bandung