The Fundamental Problem with Traditional Inbound Protection

#adcfw #RSAC #infosec The focus on bandwidth and traffic continues to distract from the real problems with traditional inbound protections …

The past year brought us many stories of successful attacks on organizations, targeted for a wide variety of reasons. Why an organization was targeted was not nearly as important as the result: failure to prevent an outage. While the volume of traffic these organizations saw was often impressive in itself, it was not always the volume of traffic that led to the outage, but rather what that traffic was designed to do: consume resources.

It’s a story we’ve heard before, particularly with respect to web and application servers. We know that over-consumption of resources impairs performance and, ultimately, causes outages. But what was perhaps new to many last year was that it wasn’t just servers that were falling to an overwhelming number of connections, it was the very protections put in place to detect and prevent such attacks – stateful firewalls.

Firewalls are the most traditional form of inbound protection for data centers. Initially designed simply to prevent unauthorized access via specific ports, they have evolved to include limited packet inspection and the ability to make decisions based on the data within those packets. While this has helped prevent a growing variety of attacks, firewalls remain unable to correlate behavior across protocols or to understand what constitutes expected, acceptable behavior within the context of a request, and that gap results in a failure to recognize an attack. Modern application layer attacks look and smell, to traditional inbound protection devices, like legitimate requests. These devices are simply unable to parse behavior in its appropriate context and determine that the intention behind the request is malicious.
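To see why, consider a minimal sketch (hypothetical logic, not any vendor's actual rule set) of the kind of port-and-syntax check a traditional device performs. Both requests below arrive on an allowed port with a well-formed HTTP request line, so both are admitted; nothing at this layer reveals that the second is crafted to trigger an expensive, uncacheable operation on the server:

```python
# Hypothetical sketch of a simplified stateful-firewall admission check.
# It validates port and HTTP request-line syntax only -- the malicious
# intent of a resource-hungry request is invisible at this layer.

ALLOWED_PORTS = {80, 443}

def firewall_allows(port: int, request_line: str) -> bool:
    """Admit traffic on allowed ports with well-formed HTTP request lines."""
    parts = request_line.split()
    well_formed = (
        len(parts) == 3
        and parts[0] in {"GET", "POST", "HEAD"}
        and parts[2].startswith("HTTP/")
    )
    return port in ALLOWED_PORTS and well_formed

# A normal page load and a deliberately expensive search both pass.
print(firewall_allows(80, "GET /index.html HTTP/1.1"))            # True
print(firewall_allows(80, "GET /search?q=*&page=99999 HTTP/1.1")) # True
```

Distinguishing the two requires context the device does not have: which URLs are expensive to serve, and whether this client's pattern of requests is plausible for a real user.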

A recent InfoWorld article presented a five-point list regarding how to deny DDoS attacks. The author and his referenced expert Neal Quinn, VP of operations at Prolexic, accurately identify the root cause of the inability of traditional inbound protection to thoroughly mitigate DDoS attacks:

But the most difficult challenge has been DDoS attackers' increasing sophistication as they've moved from targeting Layers 3 and 4 (routing and transport) to Layer 7 (the application layer). They've learned, for example, how to determine which elements comprise a victim's most popular Web page, honing in on which ones take the most time to load and have the least amount of redundancy.

"Attackers are now spending a much longer period of time researching their targets and the applications they are running, trying to figure out where they can cause the most pain with a particular application," Quinn said. "For example, they may do reconnaissance to figure out what URL post will cause the most resource-consuming Web page refresh."

-- How to deny DDoS attacks 

Unfortunately, the five-point list describing the strategy and tactics to “deny DDoS attacks” completely ignores this difficult challenge, offering no advice on how to mitigate “the most difficult challenge.” While the advice to ensure sufficient compute resources tangentially touches upon the answer, the list is a traditional response that does not address the rising Layer 7 challenge.


To understand how to mitigate the rising layer 7 security challenge one must first understand the two core reasons traditional inbound security solutions are unlikely to mitigate these attacks. First is a failure to recognize an application layer attack for what it is. This failure cascades into the second reason traditional inbound security solutions fail: connection capacity.

Not bandwidth, connections. A million TCP connections can easily topple most modern firewalls today, and yet the bandwidth involved can be minuscule compared to the gigabits of capacity many organizations have at their disposal. It isn’t about bandwidth anymore, it’s about connections. This is why the advice to ramp up compute processing power and memory is partially on target – memory is imperative for maintaining massive session (connection) tables on infrastructure as traffic flows to and from targeted services.
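A back-of-envelope sketch makes the asymmetry concrete. The per-entry size and per-connection traffic rate below are illustrative assumptions, not measurements from any particular device:

```python
# Back-of-envelope sketch: why connection count, not bandwidth, is the
# constraint. Entry size and traffic rate are assumed, illustrative values.

CONNECTIONS = 1_000_000
ENTRY_BYTES = 512          # assumed per-connection state: tuples, timers, flags
table_bytes = CONNECTIONS * ENTRY_BYTES
print(f"State table: {table_bytes / 2**20:.0f} MiB")     # ~488 MiB of RAM

# Meanwhile the bandwidth those connections consume can be a trickle:
BYTES_PER_SEC_PER_CONN = 100   # assumed keep-alive dribble per connection
total_bps = CONNECTIONS * BYTES_PER_SEC_PER_CONN * 8
print(f"Aggregate traffic: {total_bps / 1e9:.1f} Gbps")  # ~0.8 Gbps
```

Under these assumptions, a million connections demand half a gigabyte of state on the device while generating less than a gigabit per second of traffic – well within the link capacity of most data centers, and invisible to any bandwidth-based alarm.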

Because traditional inbound protection devices are unable to recognize the malicious intent of these legitimate-appearing requests, they must maintain the connection. When combined with the need to maintain connections for all legitimate traffic, these malicious requests can quickly push a traditional device beyond its meager connection limitations. When that occurs, the results are disastrous. Performance, of course, suffers unacceptable degradation. One can only hope that is the only impact, for far more often the device simply fails, completely disrupting all services.
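The failure mode can be sketched in a few lines (assumed capacity, hypothetical class – not modeling any specific product). Malicious connections that look legitimate are held open until the table is full, at which point genuinely legitimate clients are refused even though bandwidth sits idle:

```python
# Minimal sketch of connection-table exhaustion on a stateful device.
# Capacity and naming are assumptions for illustration.

class StatefulDevice:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.table = set()   # active connection entries

    def open_connection(self, conn_id: str) -> bool:
        if len(self.table) >= self.capacity:
            return False     # table full: new connections are dropped
        self.table.add(conn_id)
        return True

device = StatefulDevice(capacity=1000)

# Attacker opens legitimate-looking connections and never releases them.
for i in range(1000):
    device.open_connection(f"attacker-{i}")

# A legitimate user now cannot get through, despite idle bandwidth.
print(device.open_connection("legitimate-user"))  # False
```

In practice the outcome is often worse than refused connections: as the article notes, the device itself frequently fails outright, taking every service behind it down with it.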

To complete the aforementioned list of “how to deny a DDoS attack”, it is necessary to implement a security solution at the perimeter of the network that is both able to detect and thus deny malicious requests and which has the connection capacity necessary to withstand the combined volume of legitimate and malicious requests. This solution must reside at the edge of the network, lest a less capable device be overwhelmed. This is because when it comes to perimeter security, the default is a serial strategy – nothing gets past a failed security device. If that security device is at the edge of the network, as is the case with traditional inbound security solutions like stateful firewalls, then all services residing topologically behind that device will fail should the firewall fall.

This is by design. One does not want unfettered access to services and applications. No perimeter protection, no access. It’s a sound strategy, but one that needs to employ a perimeter device capable of withstanding even the most diverse of attacks.

Traditional inbound security is too constrained in terms of connection capacity to maintain its position on the front lines. A more capable, intelligent security solution is required – one able to provide traditional inbound security protections as well as recognizing the malicious intent of more modern, application layer attacks.

Published Jan 20, 2012
Version 1.0
