Forum Discussion

F5_Jeff
Jun 27, 2017

F5 VCMP Resource Allocation Sizing

Hi Everyone!

 

We are having some issues provisioning 3 modules (namely LTM, APM, and ASM) on a vCMP guest. Our vCMP guest has been assigned only 2 vCPU cores.

 

Is there an article we can reference for the number of vCPUs to assign to guests for the different modules?

 

  • Hi, in any release note you will see how many resources you need depending on how many modules you provision. There are some exceptions, but generally you need at least 8 GB to provision 3 modules. Here is an example for 12.1.2 (at the beginning of the document): Release notes for 12.1.2

     

    Regarding vCMP, you need to know how much memory is allocated per CPU/vCPU, and this is hardware dependent. This article can help you figure that out: K14218

     

  • Hi Daniel,

     

    I checked this article already. I was hoping for a more detailed one.

     

    By the way, thank you for your response.

     

  • nathe

    I would check the release notes, specifically the section "vCMP memory provisioning calculations". It outlines how much memory your platform will be allocated, and you can then use the Memory section to work out module allocation possibilities.

     

  • Here you can find the formula to calculate memory depending on your host memory capacity: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/vcmp-viprion-configuration-11-4-1/2.html In short, the formula is (total_GB_memory_per_blade - 3 GB) x (cores_per_slot_per_guest / total_cores_per_blade); see the worked sketch at the end of this reply.

     

    So if you are planning to provision 3 or more modules, make sure you have at least 8 GB (officially between 4 and 8 should be OK, but it is better to make it 8).

     

    If you need more details, please ask. The hardware model would be useful information for this.
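
    For illustration, here is a quick sketch of that calculation in Python. The blade memory and core counts below are hypothetical examples, not values from any specific platform; substitute your own.

    # Sketch of the vCMP guest memory formula quoted above:
    # (total_GB_memory_per_blade - 3 GB) x (cores_per_slot_per_guest / total_cores_per_blade)
    # The numbers below are hypothetical; use your platform's real values.

    def guest_memory_gb(total_gb_per_blade, cores_per_slot_per_guest, total_cores_per_blade):
        """Approximate memory (GB) a vCMP guest receives per slot."""
        return (total_gb_per_blade - 3) * (cores_per_slot_per_guest / total_cores_per_blade)

    # Example: a hypothetical blade with 64 GB RAM and 12 cores,
    # and a guest allocated 2 cores per slot.
    print(round(guest_memory_gb(64, 2, 12), 2))  # ~10.17

    On this hypothetical blade, a 2-core guest would get roughly 10 GB per slot, which is above the ~8 GB suggested above for provisioning three modules.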

     

  • This topic is very interesting for me.

     

    We have a highly loaded vCMP guest with 4 cores running LTM/AFM/GTM/ASM. We have 1493 virtual servers and 870 pools active. The system seems to be running, but some side effects are happening:

     

    • Sometimes the "load sys config" will crash tmm...
    • Sometimes changing a monitor on a pool will restart tmm...
    • Sometimes after a reboot monitors are hanging in "checking" state...

    The version in use is v12.1.3.

     

    Today we increased the vCMP guest from 4 to 8 vCPUs, and we will see how the stability develops.

     

    I have 2 questions now:

     

    • What is the experience of other Viprion admins running this level of load and this many objects with 4 CPUs?
    • F5 should provide a real sizing guide, not just the required cores per module. Is there any information about sizing a heavily loaded vCMP?

    Thanks, Peter

     

  • Romani_2788
    Historic F5 Account

    There might not be a one-size-fits-all answer, but some general guidance is indeed necessary.

     

    K14760: High density vCMP does state that single-core (one-vCPU) guests only support:

     

    • LTM standalone
    • GTM standalone
    • LTM and GTM combination

    Combining multiple modules on one vCMP guest requires that the guest has enough processing power, and enough memory to go around all the provisioned modules.

     

    When considering processing power, however, it is important to keep in mind the effect of HTSplit (see K23505424: Overview of the HTSplit feature): whatever number of vCPUs you allocate to the guest, half will be used by TMM while the other half will be used by control-plane daemons and processes. This is discussed in K15003: Data and control plane tasks use separate logical cores when the BIG-IP system CPU uses Intel Hyper-Threading Technology.

     

    So when a vCMP guest is provisioned with 4 vCPUs, 2 will be allocated to TMM, while the other 2 serve all other processes.
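
    As a rough sketch of that division (assuming HTSplit is in effect and the vCPUs split evenly, per K15003):

    # Rough sketch of how HTSplit divides a guest's vCPUs (per K23505424 / K15003):
    # half of the allocated vCPUs serve TMM (data plane), the other half serve
    # control-plane daemons and processes. Assumes an even vCPU count.

    def htsplit(vcpus_allocated):
        tmm_vcpus = vcpus_allocated // 2              # data plane (TMM)
        control_vcpus = vcpus_allocated - tmm_vcpus   # control-plane daemons
        return tmm_vcpus, control_vcpus

    for vcpus in (2, 4, 8):
        tmm, ctrl = htsplit(vcpus)
        print(f"{vcpus} vCPUs -> {tmm} for TMM, {ctrl} for control plane")
    # 2 vCPUs -> 1 for TMM, 1 for control plane
    # 4 vCPUs -> 2 for TMM, 2 for control plane
    # 8 vCPUs -> 4 for TMM, 4 for control plane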

     

    Now provision that guest with LTM/ASM/APM, and you can begin to see how, depending on the amount of traffic being pushed through the guest and what needs to be done in traffic handling, this can start to hit some thresholds on the guest. Note that the same considerations have to be made for an appliance as well.

     

    Unfortunately, it is not as easy as sizing for precise configurations, as a lot of variables would have to be taken into consideration. While one implementation might get away with this configuration, with a few virtual servers, ASM policies, and moderate traffic, another implementation with a higher number of virtual servers, policies, and traffic throughput to match might not.

     

    So it is no wonder that increasing a vCMP guest from 4 vCPUs to 8 would definitely help relieve the load if the guest is already strained.

     

    A guest with 2 vCPUs running LTM/ASM/APM will be strained from the very beginning, as all ASM and APM processes, combined with all the other host processes, will have to share a single vCPU. So as general guidance, you want at least 2 vCPUs for TMM and at least 8 GB of memory when running LTM with other modules.
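
    Putting that guidance together, here is a minimal sanity-check sketch. The thresholds encode only the rules of thumb from this thread (at least 2 vCPUs for TMM and roughly 8 GB when LTM runs with other modules); they are not official F5 sizing limits.

    # Minimal sanity check based on the rules of thumb in this thread only:
    # - at least 2 vCPUs left for TMM after the HTSplit division
    # - roughly 8 GB of guest memory when LTM is combined with other modules
    # These are discussion guidelines, not official F5 sizing limits.

    def check_guest_sizing(vcpus, memory_gb, modules):
        warnings = []
        tmm_vcpus = vcpus // 2  # HTSplit: half the vCPUs go to TMM
        if len(modules) > 1 and tmm_vcpus < 2:
            warnings.append("fewer than 2 vCPUs for TMM with multiple modules")
        if len(modules) > 1 and memory_gb < 8:
            warnings.append("less than ~8 GB memory with multiple modules")
        return warnings or ["looks within the rule-of-thumb guidance"]

    print(check_guest_sizing(2, 8, ["LTM", "ASM", "APM"]))
    # ['fewer than 2 vCPUs for TMM with multiple modules']
    print(check_guest_sizing(4, 16, ["LTM", "ASM", "APM"]))
    # ['looks within the rule-of-thumb guidance']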