The Open Performance Testing Initiative
Performance. Everybody wants to know how things perform, whether it be cars, laptops, XML gateways, or application delivery controllers. It's one of the few quantifiable metrics used to compare products when IT goes shopping, especially in the world of networking.
Back at Network Computing I did a lot of testing, and a large portion of it revolved around performance. How fast, how much data, how many transactions, how many users. If it was possible, I tested the performance of any product that came through our lab in Green Bay. You haven't had fun until you've actually melted an SSL accelerator, and you haven't known true pain until you've tried to load test Enterprise Service Buses (ESBs). In the six years I spent at Network Computing I ran three separate Spirent Communications load testing products into the ground. That's how hard we tested, and how seriously we took the task.
One of our core beliefs was that every product should be compared equally: using the same methodology, in the same environment, under the same conditions, and using the same definitions. If an industry-standard performance test existed, we tried to use it. You've probably heard of SPEC, which provides industry-standard benchmarks for everything from mail servers, CPUs, and web servers to SIP, and if you're in the software world you may know the TPC-C benchmark, which measures database performance.
But have you heard of the industry standard performance test for application delivery controllers? No?
That's because there isn't one.
Knowing how products perform is important, especially for those IT folks tasked with maintaining the environments in which products are deployed. Every organization has specific performance requirements that range from end-user response times to total capacity to latency introduced by specific feature sets.
Yet every vendor and organization defines performance metrics and test methodology just a bit differently, making cross-vendor and cross-organization comparisons nearly impossible without designing and running your own tests to find out what the truth might be. Even when two vendors both say "Transactions Per Second," they don't necessarily mean the same thing, and the underlying test configuration and data used to produce that metric were likely completely different, making any comparison of the values meaningless.
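To make that concrete, here's a minimal sketch (in Python, with numbers invented purely for illustration and not taken from any actual test) of how two perfectly reasonable definitions of "transaction" produce very different TPS figures for the exact same workload: one definition counts every HTTP request/response pair carried over keep-alive connections, the other counts TCP connections.

    # Hypothetical numbers for a single 60-second test run.
    test_duration_s = 60
    tcp_connections = 12_000        # keep-alive connections opened during the run
    requests_per_connection = 10    # HTTP requests carried over each connection

    # Definition A: a "transaction" is one HTTP request/response pair.
    tps_a = (tcp_connections * requests_per_connection) / test_duration_s

    # Definition B: a "transaction" is one TCP connection, setup through teardown.
    tps_b = tcp_connections / test_duration_s

    print(f"TPS under definition A: {tps_a:,.0f}")  # 2,000
    print(f"TPS under definition B: {tps_b:,.0f}")  # 200

Same device, same traffic, a 10x difference in the headline number, which is exactly why the test configuration has to be published alongside the result.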
F5 wants to change that. Whether you're interested in uncovering the subtle differences in terminology that can skew performance results, developing your own killer performance methodologies, understanding how our latest performance test results were obtained, or downloading all the configuration files from the tests, the new performance testing section of DevCentral is the place for you.
Got questions? Suggestions? Comments? We've also set up a forum where you can discuss the results or the wider topic of performance testing with our very own performance guru Mike Lowell. And of course I'll be out there too, so don't be shy.
This is more than just publishing the results of our latest performance tests; it's about being open and transparent in our testing so you know exactly what we tested, how, and what the results really mean. It's about giving you tools to help you build your own tests, or recreate ours. It's about leveling the playing field and being honest about how we measure the performance of BIG-IP.
Imbibing: Coffee
- Lori MacVittie