Defeating Attacks Easier Than Detecting Them
Defeating modern attacks – even distributed ones – isn’t the problem. The problem is detecting them in the first place.
Last week researchers claimed to have discovered a way to exploit a basic security flaw in software widely used by Web 2.0 applications to support, if not single sign-on, then the next best thing – a single source of online identity.
Given the prevalence of OAuth and OpenID across the Web 2.0 application realm, the impact could be widespread (and not in a good way) if the flaw were exploited. Apparently a similar flaw was used in the past to successfully exploit Microsoft’s Xbox 360, so the technique is possible and has been proven “in the wild.”
The attacks are thought to be so difficult because they require very precise measurements. They crack passwords by measuring the time it takes for a computer to respond to a login request. On some login systems, the computer will check password characters one at a time, and kick back a "login failed" message as soon as it spots a bad character in the password. This means a computer returns a completely bad login attempt a tiny bit faster than a login where the first character in the password is correct.[…]
But Internet developers have long assumed that there are too many other factors -- called network jitter -- that slow down or speed up response times and make it almost impossible to get the kind of precise results, where nanoseconds make a difference, required for a successful timing attack.
Those assumptions are wrong, according to Lawson, founder of the security consultancy Root Labs. He and Nelson tested attacks over the Internet, local-area networks and in cloud computing environments and found they were able to crack passwords in all the environments by using algorithms to weed out the network jitter.
Researchers: Password crack could affect millions
ComputerWorld, July 2010
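To make the quoted mechanism concrete, here is a minimal Python sketch – illustrative only, not taken from any affected library – of the kind of early-exit comparison the researchers exploit. It returns as soon as it hits a wrong character, so a guess with a correct prefix takes slightly longer to reject:

```python
# Illustrative only: a deliberately naive comparison of the kind the
# excerpt describes. It bails out at the first wrong character, so the
# time to reject a guess grows with the length of the correct prefix.
def naive_equals(expected: str, guess: str) -> bool:
    if len(expected) != len(guess):
        return False
    for a, b in zip(expected, guess):
        if a != b:
            return False  # early exit: this is the timing leak
    return True
```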
Actually, after reading the first few paragraphs I’m surprised that this flaw wasn’t exploited a lot sooner than it was. The ability to measure the components that make up web application performance fairly accurately is hardly unknown, after all. So the claim that an algorithm can correctly “weed out” network latency is not at all surprising.
But what if the response time were randomized by, say, an intermediary injecting additional delay into the response? You can’t accurately account for something that’s randomly added (or not added, as the case may be), and as long as the random generation is seeded with something that cannot be derived from the context of the session, there are few algorithms that could figure out what the seed might be. That matters because random number generation often isn’t truly random, and it can often be predicted by knowing what was used to seed the generator. So we could defeat such an attack by simply injecting random amounts of delay into the response.
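As a rough sketch of that idea – the wrapper, handler, and delay range below are hypothetical, not drawn from any cited product – an intermediary or the application itself could add noise from an OS-seeded generator that an attacker cannot model away as ordinary network jitter:

```python
import secrets
import time

# Hypothetical sketch of the "inject random delay" idea.
_rng = secrets.SystemRandom()  # seeded by the OS, not derivable from the session

def with_random_delay(handler, max_delay_ms=50):
    def wrapped(request):
        response = handler(request)
        # Add 0..max_delay_ms milliseconds of noise regardless of outcome,
        # burying the per-character timing signal in unpredictable jitter.
        time.sleep(_rng.uniform(0, max_delay_ms) / 1000.0)
        return response
    return wrapped
```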
Or, because the attack depends on an observable difference in timing, simply normalizing response times for the login process would also defeat it. This is the solution pointed out in another article on the discovery, “OAuth and OpenID Vulnerable to Timing Attack”, which reports that developers of the impacted libraries say just six lines of code will solve the problem by normalizing response times. This, of course, illustrates a separate problem: the reliance on external sources to address security risks to which millions may be vulnerable right now, because even though the resolution is simple, it may take days, weeks, or more before it is available.
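The article reports only that a handful of lines normalizing response times is enough; in Python, the general shape of that kind of fix is a constant-time comparison. The example below is illustrative, not the actual patch applied to those libraries:

```python
import hmac

# hmac.compare_digest examines every byte, so the comparison takes the
# same time whether the first byte or the last byte is wrong -- there is
# no early exit for an attacker to time.
def signatures_match(expected: bytes, provided: bytes) -> bool:
    return hmac.compare_digest(expected, provided)
```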
This particular attack would indeed be disastrous were it to be exploited, given how many popular web sites rely on these libraries. And though the solutions are fairly easy to implement, that isn’t the real problem. The real problem is how difficult such attacks are becoming to detect, especially given the risk incurred by remaining vulnerable while solutions are developed and distributed.
DEFEATING is EASY. DETECTING is HARD.
The trick here, and what makes many modern attacks so dangerous, is that they are genuinely hard to detect in the first place.
Any attack that can be distributed across multiple clients – clients smart enough to synchronize and communicate with one another – becomes difficult to detect, especially in a load balanced (elastic) environment in which those requests are likely spread across multiple application instances. The variability in where attacks are coming from makes it very difficult to see an attack occurring in real time because no single stream exhibits the behavior most security-focused infrastructure watches for.
What’s needed to detect such an attack is to be more concerned with what is being targeted than with by whom or from where. While you want to keep track of that data, the trigger for such brute-force attacks is the target, not the client activity. Attackers are getting smarter: they know that repeated attempts at X or Y will be detected and that their client will more than likely be blacklisted for a period of time (if not permanently), so they’ve come up with alternative methods that “hide” and try to look like normal requests and responses instead. In fact, it could be argued that the problem today is not repeated attempts to log in from a single location, but repeated attempts to log in from multiple locations over time. So what you have to look for is not necessarily (or only) repeated requests, but also repeated attempts to access specific resources, like a login page.
But a login page is going to see a lot of use, so it’s not just the login page you need to be concerned with, but the credentials as well. In any brute-force, account-level attack there will be multiple requests attempting to access a resource using the same credentials. That’s the trigger. It requires more context than traditional connection- or request-based security triggers because you’re not just looking at an IP address or a resource; you’re looking deeper and trying to infer, from a combination of data and behavior, what’s going on.
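A minimal sketch of that kind of trigger in Python – the class name, threshold, and window below are illustrative assumptions, not any particular product’s API – keys failed attempts by the targeted account rather than by source address:

```python
import time
from collections import defaultdict, deque

class TargetCentricDetector:
    """Flags likely brute-force activity by watching the account being
    targeted, not the source IP, so distributed attempts still trip it."""

    def __init__(self, threshold=10, window_seconds=300):
        self.threshold = threshold
        self.window = window_seconds
        # username -> deque of (timestamp, source_ip) for failed logins
        self.failures = defaultdict(deque)

    def record_failure(self, username, source_ip, now=None):
        now = time.time() if now is None else now
        attempts = self.failures[username]
        attempts.append((now, source_ip))
        # Discard attempts that have aged out of the sliding window.
        while attempts and now - attempts[0][0] > self.window:
            attempts.popleft()
        distinct_sources = len({ip for _, ip in attempts})
        # Many failures against one account, from many places, is the signal.
        return len(attempts) >= self.threshold and distinct_sources > 1
```

The point of keying on the account is that a hundred clients each making a single “normal-looking” attempt still add up to one suspicious pattern against the same target.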
THIS SITUATION will BECOME the STATUS QUO
As we move forward into a new networking frontier, as applications and users are decoupled from IP addresses and distribution of both clients and applications across cloud computing environments becomes more common, the detection of attacks is going to get even more difficult without collaboration.
The new network – the integrated, collaborative, dynamic network – is going to become a requirement. Network, application network, and security infrastructure need a new way of combating modern threats and their attack patterns, and that new way is going to require context and the sharing of that context across all the strategic points of control at which such attacks might be mitigated.
This is important because, as pointed out earlier, many web applications rely upon third-party solutions for functionality. Open source or not, it still takes time for developers to implement a fix and for organizations to subsequently incorporate it and redeploy the “patched” application. That leaves users of such applications vulnerable to exploitation or identity theft in the interim. Security infrastructure must be able to detect such attacks in order to protect users and corporate resources (infrastructure, applications, and data) from exploitation while they wait for those fixes. When is more important than where when it comes to addressing newly discovered vulnerabilities, but when depends entirely on the ability of the infrastructure to detect an attack in the first place.
We’re going to need infrastructure that is context-aware.