JRahm
Community Manager

I started thinking about hashing algorithms in general and how resource intensive they are when used in iRules. Also, I’ve been a little jealous of Colin, Joe, & George and their creative use cases for the Google Charts API, so for the first entry of the New Year I thought I’d indulge myself with a little geekery.

The Algorithms

Several hashing algorithms are available for use on the LTM: crc32, md5, sha1, sha256, sha384, and sha512. For each, there is background material on the algorithm (or family of algorithms, as is the case with SHA-2) as well as a DevCentral wiki page covering its use in iRules.

It's worth noting that the crc32 algorithm differs from the rest in that it is a checksum function, whereas the others are cryptographic hash functions. Checksum functions are primarily used for error detection and cryptographic functions primarily in security applications, but both can be used for ordinary tasks like load balancing as well. Each has pros and cons in resource utilization and distribution characteristics. I'll look only at resource utilization in this tech tip and revisit distribution in the hash load balancing update I mentioned earlier. To give you an idea of the various digest/block sizes and the resulting output, see the table below. Note that the message in all cases is "DevCentral 2011."

[Table image: digest/block sizes and resulting hex output for each algorithm, message "DevCentral 2011"]

Note: the data above was actually generated with the Python zlib and hashlib libraries on Ubuntu 9.04; it's just a representative look at the differences between the hashing algorithms.
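If you want to reproduce a table like the one above yourself, here's a small sketch using those same Python modules (zlib for crc32, hashlib for the cryptographic hashes). The hex digest length printed for each algorithm corresponds directly to its digest size in bits:

```python
import zlib
import hashlib

message = b"DevCentral 2011"

# crc32 is a 32-bit checksum; mask and format it as 8 hex characters
digests = {"crc32": format(zlib.crc32(message) & 0xFFFFFFFF, "08x")}

# The cryptographic hash functions all live in hashlib
for name in ("md5", "sha1", "sha256", "sha384", "sha512"):
    digests[name] = hashlib.new(name, message).hexdigest()

for name, hexdigest in digests.items():
    # each hex character encodes 4 bits of digest
    print(f"{name:>6} ({len(hexdigest) * 4:>3} bits): {hexdigest}")
```

Running this shows the steady growth in digest size from crc32's 32 bits up to sha512's 512 bits.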

The iRule

The code is below. Note that the iRule expects a path of /hashcalc and a query (which it uses as the source of the hash computation). If you wanted to pass the number of computations to the iRule in the query, that would be a very small modification.

  when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/hashcalc" } {
      foreach i { crc32 md5 sha1 sha256 sha384 sha512 } {
        set t1 [clock clicks -milliseconds]
        for { set y 0 } { $y < 50000 } { incr y } {
          $i [HTTP::query]
        }
        append calctime "$i,[expr {[clock clicks -milliseconds] - $t1}],"
      }
      set gdata [split $calctime ","]
      HTTP::respond 200 content "<html><center>BIG-IP Version $static::tcl_platform(tmmVersion)<p><hr size=3 width='75%'><p>\
       <img src='http://chart.apis.google.com/chart?chxl=0:|[lindex $gdata 0]|[lindex $gdata 2]|[lindex $gdata 4]|[lindex $gdata 6]|\
       [lindex $gdata 8]|[lindex $gdata 10]|&chxr=1,0,250&chxt=x,y&chbh=a&chs=400x300&cht=bvg&chco=A2C180&chds=0,250\
       &chd=t:[lindex $gdata 1],[lindex $gdata 3],[lindex $gdata 5],[lindex $gdata 7],[lindex $gdata 9],[lindex $gdata 11]\
       &chdl=(in+ms)&chtt=Hashing+Algorithm+Performance+(50k calculations)&chg=0,2' width='400' height='300' alt=\
       'Hashing Algorithm Performance' /></center></html>"
      unset calctime gdata
    }
  }

I made sure each hashing algorithm ran enough times to plot out some meaningful numbers, settling on 50k calculations, passing each iRules hash command in through the foreach loop and appending the algorithm name and the milliseconds required to run the calculations to the calctime variable.
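The same methodology can be sketched off-box in Python for comparison (timings here are purely illustrative and won't match what TMM produces on BIG-IP hardware):

```python
import hashlib
import time
import zlib

def bench(query: bytes, iterations: int = 50_000) -> dict:
    """Time `iterations` hash computations per algorithm, in milliseconds,
    mirroring the iRule's clock-clicks loop."""
    # crc32 via zlib, the cryptographic hashes via hashlib
    algos = {"crc32": lambda d: zlib.crc32(d)}
    for name in ("md5", "sha1", "sha256", "sha384", "sha512"):
        algos[name] = lambda d, n=name: hashlib.new(n, d).digest()

    results = {}
    for name, fn in algos.items():
        t1 = time.perf_counter()
        for _ in range(iterations):
            fn(query)
        results[name] = (time.perf_counter() - t1) * 1000.0
    return results

times = bench(b"DevCentral 2011")
for name, ms in times.items():
    print(f"{name:>6}: {ms:7.1f} ms")
```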

The Results

The numbers, courtesy of HTTP::respond and a Google Charts bar graph:

[Bar graph: Hashing Algorithm Performance (50k calculations), time in ms per algorithm]

You can see that md5 takes more than twice as long as crc32 to compute the hash, that md5 and sha1 are relatively even before the step up to sha256, and finally that sha384/sha512 take roughly twice as long as md5/sha1.
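For reference, the chart URL the iRule assembles can be sketched as a small helper. The query parameters are Google Image Charts API options (chxl for axis labels, chd for bar data, chds for data scaling, cht=bvg for a vertical bar graph); the timing values passed in here are made-up placeholders, not measured results:

```python
from urllib.parse import quote_plus

def chart_url(times: dict, max_ms: int = 250) -> str:
    """Build a Google Image Charts bar-graph URL like the iRule's,
    with algorithm names on the x axis and milliseconds as bar heights."""
    labels = "|".join(times)                            # chxl: x-axis labels
    data = ",".join(str(ms) for ms in times.values())   # chd: bar values
    return (
        "http://chart.apis.google.com/chart"
        f"?chxl=0:|{labels}|"
        f"&chxr=1,0,{max_ms}&chxt=x,y&chbh=a&chs=400x300&cht=bvg"
        f"&chco=A2C180&chds=0,{max_ms}"
        f"&chd=t:{data}"
        f"&chdl=(in+ms)"
        f"&chtt={quote_plus('Hashing Algorithm Performance (50k calculations)')}"
        "&chg=0,2"
    )

# placeholder timings purely to show the URL shape
url = chart_url({"crc32": 40, "md5": 90, "sha1": 95,
                 "sha256": 140, "sha384": 200, "sha512": 205})
print(url)
```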

Conclusion

It was a fun investment of time to see how the numbers played out between the hashing algorithms. Note that I ran this on a 3600 platform; your mileage may vary on different hardware (or in VE). If you run this, post your numbers back; I'd be curious to see the variance across platforms and TMOS versions.

Version history
Last update: 04-Jan-2011 11:18