Congratulations! You do nothing faster than anyone else!

If you’re going to test the performance of anything, make sure it’s actually doing what it’s designed to do. Race cars go really fast too – but they don’t get you anywhere except around and around in a big circle.

Speed is important, especially in application delivery. We all know that the web monsters like Google and Amazon have studied, with real applications and real users, the impact of even a fraction of a second of added response time. It costs them money. Your users may not be quite so sensitive, but you’d rather not take the risk.

At the same time, you are (or should be) sensitive to the security of your data. No one wants to be page one and trending on Twitter because they allowed an easily preventable SQL injection vulnerability to be exploited and “lost” all their users’ confidential, sensitive, and very private data.

So you head out to look at a WAF (web application firewall) because, well, you want to be sure that you’re fully protected – head to toe and top to bottom. Speed is still important, so you look only at those WAF solutions that can (a) protect your applications and network from exploitation and infection and (b) do it with minimal impact on performance.

Of course you’re going to want the “fastest WAF on the market”. The question is, how do you know it’s the fastest WAF on the market? Oh. Of course. The vendor tested it and said so.

WAF PERFORMANCE as a DIFFERENTIATOR 

For almost all network components, performance is an important factor, and for almost all components it is measurable in some way. A vendor’s performance claims are valuable during your research phase, as long as you (1) remember that they are “best case” numbers and (2) understand exactly what it is they are measuring. For the former, that means that in a lab environment, under optimal conditions, with all the knobs and buttons set just right, the numbers presented are the best you can hope for. These are good for sizing and understanding capacity, but your mileage will vary because there are other factors that need to be considered, unless you’re acquiring a switch that’s going to do nothing but pass packets. Then you can trust that if the vendor says it’s wire-speed, it’s wire-speed, and packets will in fact pass exactly as fast as they say they will. For the latter, it’s important to understand the methodology of the test and determine what was actually being measured. Measuring the base HTTP transaction rate for a WAF reveals the underlying platform’s capacity, but it tells you nothing about performance while actually protecting an application.
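If you want to see that distinction for yourself, here’s a minimal sketch of the comparison I mean: run the same requests against the same application twice, once with the device in pure pass-through mode and once with its protection policy actually enabled, and compare the latency distributions. The URLs (and the assumption that you can stand up both deployments side by side) are mine, for illustration only:

```python
# Hypothetical sketch: compare response latency through the same device
# in pass-through mode vs. with a protection policy enabled.
# The two base URLs below are assumptions for illustration only.
import statistics
import time
import urllib.request

PASSTHROUGH_URL = "http://waf-passthrough.example.com/app/"  # policy disabled (assumed)
PROTECTED_URL = "http://waf-protected.example.com/app/"      # policy enabled (assumed)

def measure(url, count=200):
    """Return median and 95th-percentile latency (ms) for simple GETs."""
    latencies = []
    for _ in range(count):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000)
    latencies.sort()
    return statistics.median(latencies), latencies[int(len(latencies) * 0.95)]

for label, url in (("pass-through", PASSTHROUGH_URL), ("protecting", PROTECTED_URL)):
    med, p95 = measure(url)
    print(f"{label:>12}: median {med:.1f} ms, p95 {p95:.1f} ms")
```

The interesting number isn’t either column on its own; it’s the gap between them, measured against your application.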

In any network-deployed product in which application-specific functionality may be deployed – security policies, routing tables, switching rules – your mileage will almost certainly be less than the vendor-produced numbers. And I say almost certainly, rather than certainly, because nothing is certain but death and taxes.

Trust me – I spent many, many hours testing and retesting application-focused network products for Network Computing, proving this point over and over again. Lori’s axiom of application-focused network product performance is this: like cars, performance depreciates a fixed percentage the moment the product is driven off the vendor lot.

Why? Because the performance of an application-focused, network-deployed product is highly variable and very specifically tied to the applications it is delivering. As Don pointed out recently, your network can be capable of transferring 100Mbps, but unless the applications between which that data is being exchanged are also capable of serving up and receiving data at that rate, you’re never going to see that kind of throughput. Never.

That makes performance claims for a product like a WAF – which is designed to be flexible enough to protect your custom-built, one-of-a-kind application that’s deployed in your unique, no-one-else-can-duplicate data center – meaningless as a tool for comparison. Attempting to even measure the real performance of a WAF in a lab is a ridonculous exercise in futility, because the moment that WAF is actually deployed and used for what it was designed for – namely, protecting an application – its performance is not going to live up to the expectations set by the lab results.

HOW CAN I TEST THEE? LET ME COUNT THE WAYS

You do want to provide some kind of upper bound performance expectations for customers. That’s valuable, after all, because at least then potential customers understand the capacity limitations and can budget and plan for a multiple-component purchase. That’s fair, that’s a good idea. You should do that.

What you shouldn’t do, however, is claim that performance of the underlying platform is the same as the performance of the WAF. After all, customers are interested in the performance of a WAF when it’s actually protecting an application, not when it’s just passing traffic to and fro – like a switch. But it’s nearly impossible to test under such conditions because, well, what application do you choose? What protections will you employ? And how will you ever explain to the marketing folks that you aren’t, after all, as fast as you thought you were when you actually configure the product to do what it’s supposed to do?

The performance of a WAF is even more dependent on the underlying application than most application delivery solutions.  What parts of the application you protect and what protections you choose to employ will have a dramatic impact on the overall performance and scalability of a WAF.  Whether or not you are doing content rewriting and PCI compliance matters.  Whether or not you are using a negative security model matters.  The degree to which you are learning new acceptable traffic and new attacks matters.  If you are virtually patching your web application with application logic and rewriting, it matters.  It all matters, and the performance characteristics are different for each one. 
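To put a rough number on how quickly that list compounds, here’s a back-of-the-envelope sketch. It treats each of the protections mentioned above as a simple on/off toggle (which flatters reality considerably), and the traffic mixes at the end are my own illustrative assumption:

```python
# Rough illustration: even treating each protection as a simple on/off
# toggle (reality is far messier), the number of distinct configurations
# a lab would need to benchmark grows multiplicatively.
from itertools import product

features = [
    "content rewriting",
    "PCI compliance checks",
    "negative security model",
    "learning mode",
    "virtual patching",
]

toggle_combos = list(product([False, True], repeat=len(features)))
print(f"{len(features)} on/off features -> {len(toggle_combos)} combinations")

# Each combination still has to be measured against different traffic mixes.
traffic_mixes = ["mostly static", "mostly dynamic", "mixed", "attack-heavy"]
print(f"x {len(traffic_mixes)} traffic mixes -> "
      f"{len(toggle_combos) * len(traffic_mixes)} test runs, per application")
```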

This is why most WAF vendors don’t even attempt to test for performance against applications. If they did, their lab engineers would never get around to testing anything else. The possible combinations of configurations are mind-boggling and, you can bet, most customers will have some other set of requirements that those poor engineers haven’t gotten around to testing yet.

And really, if you’re testing a WAF for performance using an application composed entirely of static content, you’re doing it wrong. There are a few attacks that can be executed against static content, but the majority of vulnerabilities against which organizations need to be protected involve dynamic content: content that’s ultimately pulled from a database. Such attacks are highly variable in construction, use obfuscated scripting, and manipulate input parameters that must be extracted and evaluated, all of which consumes cycles on the WAF; cycles that necessarily add latency and impact the overall product performance. Certainly test against static content, but don’t attempt to extrapolate that performance to the performance one would see when protecting dynamic content that requires a higher degree of inspection.
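For what it’s worth, here’s a sketch of what a slightly more honest traffic mix looks like. The application URL and the payloads are illustrative assumptions on my part, not any standard benchmark, but the shape of the comparison is the point: static objects the WAF can wave through cheaply versus dynamic, parameterized requests it actually has to pull apart and inspect.

```python
# Hypothetical traffic mix for a more realistic WAF test: static requests
# vs. dynamic requests whose parameters must actually be parsed and
# inspected. The base URL and payloads are illustrative assumptions.
import time
import urllib.error
import urllib.parse
import urllib.request

BASE = "http://app-under-test.example.com"  # assumed test application

static_requests = [f"{BASE}/images/logo.png", f"{BASE}/css/site.css"]

dynamic_requests = [
    # A parameterized query the WAF has to extract and evaluate.
    f"{BASE}/search?q=" + urllib.parse.quote("laptops under $500"),
    # A classic injection-style input: this is what inspection actually costs.
    f"{BASE}/search?q=" + urllib.parse.quote("' OR '1'='1"),
]

def time_group(urls, repeat=50):
    """Average milliseconds per request across the group."""
    start = time.perf_counter()
    for _ in range(repeat):
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    resp.read()
            except urllib.error.HTTPError:
                pass  # a blocked request (e.g. 403 from the WAF) still counts as work done
    return (time.perf_counter() - start) / (repeat * len(urls)) * 1000

print(f"static  : {time_group(static_requests):.1f} ms/request")
print(f"dynamic : {time_group(dynamic_requests):.1f} ms/request")
```

Whatever numbers come out will be specific to the WAF, the policy, and the application, which is rather the point.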

PERFORMANCE of APPLICATION-SPECIFIC TOOLS is APPLICATION DEPENDENT

I’d say I’m speechless that someone actually published a report, and made claims based on it, that avoided testing the core functionality of a WAF, but obviously that’d be a lie. Performing what is essentially a meaningless test, using a completely unrealistic set of benchmarks and configuration options for a WAF, is misleading and does a huge disservice to the entire market. There are no “standard” WAF benchmarks. There could be. As with the general capacity of application servers, the entire WAF industry could agree to test functionality and performance against a standard application, a la the Pet Shop application. But the thing about application vulnerabilities is that they change over time; there are new ones, new twists on old ones, and more obfuscated attacks as time goes on. Such a test would need to be constantly updated lest it quickly become irrelevant. And it still wouldn’t alleviate the need for organizations to perform their own performance testing, because no one’s business relies on the Pet Shop application.

Performance is important, but the performance of a WAF, like that of any application-specific product, is highly dependent on the application. The product that is fastest for company A may not be the same one that works best for company B. A proof of concept that measures performance in your environment, protecting your applications, is the only way to evaluate which one not only protects your application the way you need it protected, but does so fast enough to keep your users happy.



Published Oct 13, 2010
Version 1.0
Don_MacVittie_1 (Historic F5 Account):

I'm with you on the call to action Joel, and there are a few of our old cohorts doing just that.

Problem is, in the absence of the independent tech press, there is no market for thorough, unbiased testing. Who's going to pay for the right to be in a test and possibly lose? Or worse, come in last place?

Hopefully something replaces the NWCs of the world, so independent testing with no "secret agenda" can start happening again. But I'm not optimistic in the short term.

Don.