Forum Discussion

ppphatak_127926
Nimbostratus
Oct 15, 2005

CPU utilization upon adding irule condition

Hi, I have a BIG-IP 540 in place serving quite a few websites; peak concurrent SSL connections can be around 300, and the 540's SSL accelerator is rated for 800 SSL connections per second. Current CPU utilization is 30-40%.

Currently there are 4 iRules in action, each with 5 "if" conditions that check for a cookie or classes and redirect to pools based on that.

I want to add 4 more "if" conditions to each rule and would like to get an idea of how much more CPU that will consume.

Is there any way to measure this? If so, where can I find information on it? The last option is to actually try it, but since I don't want to take production sites down, I'd rather not go that route.

Thx in advance.

6 Replies

  • I'd suspect that SSL is the big consumer of resources here. There is no real way to calculate the CPU percentage that a particular rule will take up. (You almost always want to keep the least frequently accessed rules near the bottom of the nested elseif tree.)

    I suspect that you should be fine. If you want to post your rule here before going into production, we can look it over and comment.

    One thing you can measure is the amount of time it takes a rule to process. See: Click here for more details.

    Cheers,

    Brian
  • I guess the link you provided for checking the time required for each rule is for 9.x, not 4.6.x. Do you have one for 4.6.x?

    If I use the vmstat command, will it be user CPU or kernel CPU that overshoots? Which one would overshoot for SSL? Are rules compiled and cached, or does the engine pick them up from disk every time? Sorry about so many questions.

    The existing rule looks like the following, where I want to add 4 more nested "if" conditions:

    if (tolower(http_uri) starts_with one of class-A) {
       redirect to "https://sitename-A/%u"
    }
    else if (not (http_header("SSLClientCipher") contains one of ssl_bits)) {
       redirect to "http://%h/error/html/403_5.htm?403;https://%h/%u"
    }
    else if (tolower(http_uri) starts_with one of class-B) {
       use pool Pool-B
    }
    else if (tolower(getfield(http_uri, "/", 2)) == one of Class-C) {
       use pool Pool-C
    }
    else if (tolower(getfield(http_uri, "/", 2)) == one of Class-D) {
       use pool Pool-D
    }
    else {
       use pool Pool-E
    }

     

  • Martin_Machacek
    Historic F5 Account
    Unfortunately there is no rule timing feature (stealth or otherwise) in v4.x.

     

     

    Overall impact of rules on performance can also be estimated from the CPU utilization induced by a known traffic load. CPU utilization is best monitored using the "cpu bigip" command (e.g. watch cpu bigip). Standard utilities (e.g. top, vmstat) may give misleading results, especially on dual-processor systems running in ANIP mode. Time spent in rule evaluation is accounted for (together with other operations) in the BIGIP column. NOTE: the maximum CPU utilization on a dual-CPU system is 200% in SMP mode and 100% in ANIP mode (the ANIP column shows aggregated utilization on the second [ANIP] CPU).
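
    As a minimal sketch (assuming a Bourne-compatible shell on the unit; the sample count, interval and /tmp file names below are only examples), you could capture a few "cpu bigip" samples under comparable traffic before and after adding the new conditions and then compare the BIGIP column:

    # take 5 samples, one per minute, under a known load before the change;
    # repeat into e.g. /tmp/cpu_after_change once the extra conditions are live
    for i in 1 2 3 4 5; do
        cpu bigip >> /tmp/cpu_before_change
        sleep 60
    done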

     

     

    Rules are compiled when the configuration is loaded and are kept in memory.

     

     

    Assuming that you are terminating SSL connections on the BIG-IP (using the SSL proxy), SSL is definitely a more limiting factor for performance than your rules. A high volume of SSL traffic will translate into high CPU utilization shown for the "proxyd" process (e.g. in "top" output). Proxyd consumes both user and kernel CPU time: more kernel CPU for a high connection rate, and more user CPU for a high traffic volume over several persistent connections. In the "cpu bigip" output, CPU used by proxyd is shown (together with all other user processes) in the "Unix" column.
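
    For a rough look (a sketch using stock Unix utilities, and subject to the ANIP caveat above), you can eyeball proxyd's share and the overall user/kernel split like this:

    # show proxyd's overall CPU share; the bracket keeps grep from
    # matching its own process entry
    ps aux | grep '[p]roxyd'

    # sample system-wide CPU 5 times at 5-second intervals and compare the
    # user and system columns while SSL load is high (column names vary by platform)
    vmstat 5 5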

     

     

    Adding another nested if-then to your rule should not have any negative impact on performance as long as the expression is similar to your existing expressions. I'd need to see your new rule to be able to tell more.

    Performance impact of various iRule features can be ranked as follows, from lowest to highest [negative] impact on performance:

    * no rules :-),
    * L4 rules (i.e. rules referring only to addresses and/or ports [or other data found in the Ethernet, IP and/or TCP/UDP headers]),

    ------
    Performance with only the above features can be significantly better than with any of the features below, especially on dual-processor systems -- this is because the above features do not require the BIG-IP to do so-called late binding.
    ------

    * header insert feature (not an iRule feature per se),
    * HTTP rules referring only to data in HTTP request headers (URI, header fields, cookies); the header erase feature has about the same impact,
    * cookie persistence, SSL session ID persistence (again, not iRule features per se),
    * rules referring to TCP content,
    * rules referring to HTTP content

    In general, the more the BIG-IP has to buffer and parse in order to evaluate expressions, the more impact the rule will have on performance. As Brian already pointed out, keeping branches that are likely to match close to the top of the if-then-else chain improves performance, because rule evaluation stops at the first "use pool" or "discard" statement.

     

     

    Measuring, and even more so predicting, the performance of (any) L7 load balancer is very complicated because it depends on many factors with non-trivial dependencies among them.
  • I appreciate the detailed response (as always!).

    Quick questions:

    How do I know if my 540 CPUs are running in ANIP mode or SMP mode?

    How do I capture the output of "cpu bigip" to a file?

    Another, less-related question:

    If I add the mapclass2node function on a pool, will that substantially reduce performance? If adding more if-then-else conditions would create performance bottlenecks, I would prefer to use mapclass2node to achieve my task, hence the question.

    Thx
  • Martin_Machacek
    Historic F5 Account
    The current kernel mode can be determined from the output of:

    
    b summary

    Relevant lines are:

    
    BIG-IP Total Number of CPUs              = 2
    BIG-IP Mode                              = ANIP MODE
    BIG-IP ANIP percent work                 = 0
    BIG-IP MAX ANIP percent work             = 0

    You can also use the following command to get just the kernel mode:

    
    b summary | grep "BIG-IP Mode" | cut -d= -f2-

    Output of the cpu bigip command (and of just about any other Unix command) can be captured to a file using output redirection:

    
    cpu bigip > /tmp/cpu_bigip_output

    Please refer to the Bash manual page for details. Manual pages can be displayed using the "man" command (e.g. man bash).
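
    Building on that (a small sketch; the 30-second interval and reuse of /tmp/cpu_bigip_output are just examples), a shell loop can collect timestamped samples over a longer period:

    # append a timestamped "cpu bigip" sample every 30 seconds;
    # stop with Ctrl-C and review the file afterwards
    while true; do
        date >> /tmp/cpu_bigip_output
        cpu bigip >> /tmp/cpu_bigip_output
        sleep 30
    done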

    The performance impact of direct node selection using the mapclass2node function depends on the size of the class and on the expression that produces the data processed by mapclass2node. In general it should have a similar performance impact to a rule with class matching (the "one of" operator).
  • Sorry for asking one more question on this thread, but it is related.

     

     

    Is it allowed to write if/else-if statements in the node-select-expression box under the pool persistence settings?

    I use mapclass2node; however, I wish to write 3 if/else-if statements to get my task done.

    Thx in advance.