The Open Performance Testing Initiative
Performance. Everybody wants to know how things perform, whether it be cars, laptops, XML gateways, or application delivery controllers. It's one of the few quantifiable metrics used to compare pr...
Published Sep 10, 2007
Version 1.0
Lori_MacVittie
Employee
Joined October 17, 2006
Lori_MacVittie (Employee)
Oct 02, 2007
Hi Alan,
Thanks for the feedback, and yes, there is always room for improvement. The first step is to put something out that can serve as a foundation and to encourage the idea that there should be a formalized, industry-standard benchmark.
Independent testing is, of course, the goal: testing that is open, fair, and that provides relevant results based on common criteria and terminology, so that customers can make an apples-to-apples comparison.
Mike Lowell has addressed the price-per-transaction data in the forums (http://devcentral.f5.com/Default.aspx?tabid=53&forumid=40&tpage=1&view=topic&postid=1678617088). Essentially, this data was outside the scope of this report.
As is often the case with applications, testing with a full suite would introduce a number of additional variables that are difficult to address. We do a lot of internal testing with products like SAP, Oracle, and Microsoft for just the reasons you state, but in a pure performance test of BIG-IP we were looking to establish the high-end of its processing capabilities.
While we were, of course, pleased with the results of our own testing, part of the goal is to give customers and prospective customers the ability to definitively nail down specific terminology and test parameters. That way, they can more accurately define their own testing and be better informed when interpreting any performance results they are handed. Even if we did extensive testing using applications and full configurations, it would likely not be applicable to a given customer.
Each environment is unique in terms of network capabilities, hardware platforms upon which applications are deployed, and the specific configuration and needs for which they are evaluating an application delivery controller. We're hoping to enable those customers to better evaluate products in their own environment while offering up the "best case" performance of our products discovered through our own testing.
Unfortunately, it's up to the industry to form a vendor-neutral testing group to pick up and run with such tests. Obviously, if we, or any other vendor, sponsored or assisted, it would immediately be seen as biasing such an organization, whether the claim of bias was warranted or not.
Regards,
Lori