I am in your HTTP headers, attacking your application
Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser-known applications and attack vectors. These exploits are potentially more dangerous because, once proven through a successful attack on a lesser-known application, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, the SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack generally focuses on exploiting operating system, language, or environmental vulnerabilities, because the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

    POST /roundcube/bin/html2text.php HTTP/1.1
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
    Host: xx.xx.xx.xx
    Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
    Content-Length: 54

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded (in base64) in the Accept header in order to retrieve data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system for use in a cross-site scripting attack, or deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult: developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with new ways to exploit some aspect of an application or transport-layer protocol.

Think HTTP headers aren't generally used by applications? Consider the use of the custom SOAPAction header for SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code. So while the exploitation of HTTP headers is nowhere near as common or rampant as mass SQL injection today, its use to target specific applications means it is a possible attack vector for the future, one against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might, and apparently they have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
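Decoding that Accept header makes the intent obvious. Here's the decode using stock Tcl 8.6 (any base64 decoder will do; Tcl is used here only because it's the language of iRules, which come up later):

    set payload "ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw=="
    puts [binary decode base64 $payload]
    # prints: echo (333212+43245666)." ";;passthru("uname -a;id");

That's PHP. The arithmetic echo looks like a marker so the attacker can tell whether the code was evaluated, and passthru("uname -a;id") runs shell commands to fingerprint the host. It only works if the application blindly evaluates the header's contents, which is precisely the point: treat every header as user input.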
That's a good rule of thumb for protecting against malicious payloads anyway, but it's an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused to the point that it's considered typical, of course). Possible ways to prevent the potential exploitation of HTTP headers:

- Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers (see the sketch after this list).
- Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
- Investigate whether an existing solution, either security or application delivery focused, is capable of providing the means through which you can enforce protocol compliance.
- Use secure coding techniques to examine, not evaluate, the data in any HTTP headers you are using, and ensure they are legitimate values before using them in any way.

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.
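To illustrate the network-side scripting option, here's a minimal iRule sketch that enforces a rough approximation of the media-range grammar from RFC 2616 section 14.1 on the Accept header. The regex is a deliberate simplification of the RFC's grammar, and rejecting outright (rather than sanitizing or just logging) is an assumption; tune both for your environment:

    when HTTP_REQUEST {
        set accept [HTTP::header "Accept"]
        if { $accept ne "" } {
            # each comma-separated element should look like type/subtype
            # with optional ;q= parameters; a base64 blob won't match
            foreach range [split $accept ","] {
                if { ![regexp {^\s*[\w.+*!#$&^-]+/[\w.+*!#$&^-]+\s*(;.*)?$} $range] } {
                    log local0. "Suspicious Accept header from [IP::client_addr]: $accept"
                    reject
                    return
                }
            }
        }
    }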
3 reasons you need a WAF even if your code is (you think) secure

Everyone is buzzing and tweeting about the SANS Institute CWE/SANS Top 25 Most Dangerous Programming Errors, with many heralding its release as the dawning of a new age in secure software. Indeed, it's already changing purchasing requirements. Byron Acohido reports that the Department of Defense is leading the way by "accepting only software tested and certified against the Top 25 flaws."

Some have begun speculating that this list obviates the need for web application firewalls (WAF). After all, if applications are secured against these vulnerabilities, there's no need for an additional layer of security. Or is there? Web application firewalls, while certainly providing a layer of security against the exploitation of application vulnerabilities resulting from errors such as those detailed by SANS, also provide other security benefits that should be carefully considered before dismissing their usefulness out of hand.

1. FUTURE-PROOFING AGAINST NEW VULNERABILITIES

The axiom says the only things certain in life are death and taxes, but in the world of application security a third is just as certain: the ingenuity of miscreants. Make no mistake, new vulnerabilities will be discovered, and they will, eventually, affect the security of your application. When they are discovered, you'll be faced with an interesting conundrum: take your application offline to protect it while you find, fix, and test the modifications, or leave the application online and take your chances. The third option, of course, is to employ a web application firewall to detect and stop attacks targeting the vulnerability. This allows you to keep your application online while mitigating the risk of exploitation, giving developers the time necessary to find, fix, and test the changes needed to address the vulnerability.

2. ENVIRONMENTAL SECURITY

No application is an island. Applications are deployed in an environment because they require additional moving parts in order to actually be useful. Without an operating system, an application or web server, and a network stack, applications really aren't all that useful to the end user. While the SANS list takes a subtle stab at acknowledging this with its inclusion of the OS command injection vulnerability, it assumes that all software and systems required to deploy and deliver an application are certified against the list and therefore secure. That is a utopian ideal unlikely to become reality, and even if it were to come true, see reason #1 above. Web application firewalls protect more than just your application; they can also provide needed protection against vulnerabilities specific to operating systems, application and web servers, and the rest of the environment required to deploy and deliver your application. The increasing deployment of applications in virtualized environments carries security risks of its own, too. The potential exploitation of the hypervisor is a serious issue that few have addressed thus far in their rush to adopt virtualization technology.

3. THE NINJA ATTACK

There are some attacks that cannot be detected by an application. These usually involve the exploitation of a protocol such as HTTP or TCP and appear to the application to be legitimate requests. These "ninja" style attacks take advantage of the fact that applications are generally only aware of one user session at a time, and are not able to make decisions based on the activity of all users occurring at the same time. The application cannot prevent these attacks.
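To illustrate what "aware of more than one session at a time" looks like in practice, here's a minimal network-side sketch: an iRule that counts requests per client IP across every connection using the session table (an assumption: this requires a TMOS version that supports the table command). The 10-second window and 200-request threshold are invented for the example:

    when HTTP_REQUEST {
        # count requests per client IP across ALL sessions, not just this one
        set key "reqs:[IP::client_addr]"
        set count [table incr $key]
        table timeout $key 10
        if { $count > 200 } {
            # more than 200 requests in a rolling 10-second window: treat as abusive
            drop
        }
    }

No single application instance sees enough traffic to make that judgment; a full proxy sitting in front of all of them does.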
Attacks involving the manipulation of cookies, replay, and other application-layer logical attacks also often go undetected by applications because they appear to be legitimate. SANS offers a passing nod to some of these types of vulnerabilities in its "Risky Resource Management" category, particularly CWE-642 (External Control of Critical State Data). Addressing this category will likely require heavy modification of existing applications and new designs for new applications. In the meantime, applications remain vulnerable to this category of vulnerability, as well as to the ninja attacks that are not, and cannot be, addressed by the SANS list. A web application firewall detects and prevents such stealth attacks attempting to exploit the legitimate behavior of protocols and applications.

The excitement with which the SANS list has been greeted is great news for security professionals, because it shows an increased awareness of the importance of secure coding and application security in general. That organizations will demand proof that applications, third-party or in-house, are secure against such a list is sorely needed as a means to ensure better application security across the whole infrastructure. But "the list" does not obviate the usefulness of, or need for, additional security measures, and web application firewalls have long provided benefits beyond simply protecting against exploitation of the SANS Top 25 errors. Those benefits remain valid and tangible even if your code is (you think) secure.

Just because you installed a digital home security system doesn't mean you should get rid of the deadbolts. Or vice versa.
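To make the cookie-manipulation point above concrete: even simple edge validation of state-carrying cookies catches crude tampering. A minimal iRule sketch; the cookie name and its expected numeric format are hypothetical, and real CWE-642 mitigation (signing or encrypting state) goes further than this:

    when HTTP_REQUEST {
        # hypothetical application cookie that should only ever hold a numeric id
        set acct [HTTP::cookie value "acct_id"]
        if { $acct ne "" && ![regexp {^\d{1,12}$} $acct] } {
            log local0. "Tampered acct_id cookie from [IP::client_addr]"
            HTTP::respond 400 content "Bad request."
        }
    }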
A Billion More Laughs: The JavaScript hack that acts like an XML attack

Don is off in Lowell working on a project with our ARX folks, so I was working late last night (finishing my daily read of the Internet) and ended up reading Scott Hanselman's discussion of threads versus processes in Chrome and IE8. It was a great read, if you like that kind of thing (I do), and it does a great job of digging into some of the RAMifications (pun intended) of the new programmatic models for both browsers. But this isn't about processes or threads; it's about an interesting comment that caught my eye:

    // This will make IE8 Beta 2 unresponsive ..
    t = document.getElementById("test");
    while(true) { t.innerHTML += "a"; }

What really grabbed my attention is that this little snippet of code is eerily similar to the XML "Billion Laughs" exploit, in which an entity is expanded recursively for, well, forever, essentially causing a DoS attack on whatever system (browser, server) is attempting to parse the document. What makes scripts like this scary is that many forums and blogs that are less vehement about disallowing HTML and script can easily be exploited by a code snippet like this, which could cause the browser of every user viewing the infected post to essentially "lock up".

This is one of the reasons IE8 and Chrome moved to a more segregated tabbed model, with each tab basically its own process rather than a thread: to prevent corruption in one from affecting the others. But given the comment, this doesn't seem to be the case with IE8 (there's no indication Chrome was tested with this code, so whether it handles the situation or not is still to be discovered). That's likely because it isn't corruption; it's valid JavaScript. It just happens to consume large quantities of memory very quickly without giving the processes in other IE8 tabs a chance to execute.

The reason the JavaScript version is so intriguing is that it's nearly impossible to stop. The XML version can be easily detected and prevented by an XML firewall, and most modern XML parsers can be configured to stop parsing and thus prevent the document from wreaking havoc on a system. But this JavaScript version is much more difficult to detect and thus prevent, because it's code and thus not confined to a specific format with specific syntactical attributes. I can think of about 20 different versions of this script, all valid and all different enough to make pattern matching or regular expressions useless for detection. And I'm no evil genius, so you can bet there are many more.

The best option for addressing this problem? Disable scripts. The conundrum is that disabling scripts can cause many, many sites to become unusable because they take advantage of AJAX functionality, which requires... yup, scripts. You can certainly enable scripts only on specific sites you trust (which is likely what most security folks would suggest should be the default behavior anyway), but that's a PITA, and the very users we're trying to protect aren't likely to take the time to do this, or even understand why it's necessary. With the increasing dependence upon scripting to provide functionality for RIAs (Rich Internet Applications), we're going to have to figure out how to address this problem, and address it soon. Eliminating scripting is not an option, and a default-deny policy (essentially whitelisting) is unrealistic. Perhaps it's time for signed scripts to make a comeback.
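For reference, the XML "Billion Laughs" document alluded to above looks like this: nine levels of tenfold entity expansion turn a few hundred bytes into roughly a billion "lol"s when parsed (levels 4 through 8 omitted here for brevity):

    <?xml version="1.0"?>
    <!DOCTYPE lolz [
      <!ENTITY lol "lol">
      <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
      <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
      <!-- lol4 through lol8 follow the same pattern -->
      <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
    ]>
    <lolz>&lol9;</lolz>

The structure is rigid, which is exactly why parsers and XML firewalls can refuse it; the JavaScript equivalent has no such fixed shape.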
What IT Security can learn from a restroom sign

As an industry, both security and application delivery, we talk a lot about securing the application infrastructure (databases, web and application servers) by making sure that the data going into the applications is "clean". After all, we know that GIGO (Garbage In, Garbage Out) is a true statement in terms of web applications and data. Unfortunately, we tend to worry a lot more about the GI than the GO. While it's better for everyone to prevent that SQL injection or XSS attack from polluting our databases and potentially distributing malicious code to hundreds or thousands of users, that's not always possible, as we've seen with the recent Adobe zero-day exploit. SearchSecurity.com reports that McAfee has discovered a large number of sites poisoned by this exploit.

Poisoned. How apropos. We've all seen the sign in the restroom of a restaurant admonishing employees to "wash their hands before returning to work." That's because restaurateurs are attempting to mitigate the risk of poisoning their patrons with infections or potentially deadly bacteria like E. coli. So in order to minimize their risk, they make it a policy that all employees must make an effort to neutralize germs by washing their hands before serving food to customers.

IT security folks and developers could learn a lot from that sign, because it applies to anyone in the business of serving content just as well as it applies to businesses that serve food. That's right: we should wash content before serving it to users, to mitigate the potential for poisoning a user's desktop with malicious code. Unfortunately, most security solutions are only concerned with keeping "bad stuff" from getting in, not with preventing "bad stuff" from getting out. There are plenty of solutions out there that deal with data leak prevention, but few that specifically prevent malicious code from leaking out into the nether regions of the Internet. Yet the same solutions capable of stopping sensitive data from leaving the organization should be capable of preventing malicious content from leaving as well. Those solutions with dynamic platforms capable of examining content as it leaves the building, as it were, can also stop that content before it infects users. In essence, such solutions can "wash the content" before serving it to users.

You might be thinking, "It's a lot easier to wash my hands than it is to wash content. After all, I don't know what I'm looking for in the content!" Hey, you don't really know what you're looking for on your hands, either. Germs are microscopic, and even if you did know what they looked like, you couldn't find them anyway; they're too small to see. But there are some things you can look for that may indicate something's rotten in the kitchen, or in your corporate database. Do your pages have Flash objects? Do those Flash objects belong in the middle of a forum? Does your application load content from external sites? Does your application use certain scripting functions like "document.write" or "window.open" routinely? If you get to know your application, you can start crafting the right kind of content evaluation, the kind that can more easily determine whether you've been infected and are passing the infection along to your users. While you can certainly craft a solution to stop a specific attack, like the Adobe exploit, it's a good idea to have a general solution that might be able to counter other zero-day attacks and worms.
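Here's a minimal sketch of what a "content washing" check might look like as an iRule, flagging a function this particular application supposedly never emits. The document.write test, the text/html filter, and the 1 MB buffering cap are all assumptions about the application; a production rule would strip or block rather than just log (a stream profile is one way to do the stripping):

    when HTTP_REQUEST {
        # stash the URI; request-side values aren't available in response events
        set uri [HTTP::uri]
    }
    when HTTP_RESPONSE {
        # only inspect HTML responses, and bound how much we buffer
        if { [HTTP::header "Content-Type"] starts_with "text/html" && [HTTP::header exists "Content-Length"] } {
            set clen [HTTP::header "Content-Length"]
            if { $clen > 0 && $clen <= 1048576 } {
                HTTP::collect $clen
            }
        }
    }
    when HTTP_RESPONSE_DATA {
        # if the app never emits document.write, its presence suggests infection
        if { [HTTP::payload] contains "document.write" } {
            log local0. "Possible injected script in response for $uri"
        }
        HTTP::release
    }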
If your application should only load content from certain sites, then look for attempts to load external content and verify that the site is one of those "approved" sites. If your application never uses "document.write" and suddenly it shows up in a web page, something ain't right. What you do when you discover an infected application is up to you. If your solution is intelligent and flexible enough, you can block the whole page or just strip out the offending content. You can send a nice response to the user while simultaneously writing a log entry that will alert your security staff that there is a problem.

The problem isn't that the technology to accomplish this doesn't exist (it does); it's that we've been so focused on what goes in that we've sometimes forgotten to look at what goes out. So as you're considering how to protect your applications, remember that sign in the restroom. Just as sincerely as you hope the staff at your favorite restaurant is heeding that notice, your users are hoping that you're heeding one like it.

Imbibing: Coffee

Some F5 iRules that can help you start crafting a content-washing solution:

- iRule Security 101 #3
- SSN Scrubbing Rule
- JsessionID XSS Security iRule
- Adobe Flash Exploit iRule
New TCP vulnerability about trust, not technology

I read about a "new" TCP flaw that, according to C|Net News, puts Web sites at risk. There is very little technical information available; the researchers who discovered this tasty TCP tidbit canceled a conference talk on the subject and have been sketchy about the details of the flaw when talking publicly. So I did some digging and ran into a wall of secrecy almost as high as the one Kaminsky placed around the DNS vulnerability. So I hit Twitter and leveraged the simple but effective power of asking for help, which resulted in several replies leading me to Fyodor and an April 2000 Bugtraq entry.

The consensus at this time seems to be that the wall Kaminsky built was for good reason, but this one? No one's even trying to ram it down, because it doesn't appear to be anything new. Which makes the "oooh, scary!" coverage by the mainstream and trade press almost amusing, and definitely annoying. The latest 'exploit' appears to be, in a nutshell, a second (or later) discovery regarding the nature of TCP. It appears to exploit the way in which TCP legitimizes a client. In that sense the rediscovery (I really hesitate to call it that, by the way) is on par with Kaminsky's DNS vulnerability, simply because the exploit appears to be about the way the protocol works, not any technical vulnerability like a buffer overflow.

TCP, and the applications riding atop it, inherently trust any client that knocks on the door (SYN) and responds correctly (ACK) when TCP answers the door (SYN-ACK). It is simply this inherent trust in the TCP handshake as validation of a client's legitimacy that makes these kinds of attacks possible. But that's what makes the web work, kids, and it's not something we should be getting all worked up about. Really, the headlines should read more like "Bad people could misuse the way the web works. Again." This likely isn't about technology; it's about trust, and the fact that the folks who wrote TCP never considered how evil some people can be, and that they'd take advantage of that trust and exploit it. Silly them, forgetting to take into account human nature when writing a technical standard. If they had, however, we wouldn't have the Internet we have today, because the trust model on the Web would have to be "deny everything, trust no one" rather than "trust everyone unless they prove otherwise."

So is the danger as great as is being portrayed around the web? I doubt it, unless the researchers have stumbled upon something really new. We've known about these kinds of attacks for quite some time now. Changing the inherent nature of TCP isn't something likely to happen anytime soon, but contrary to the statements regarding there being no workarounds or solutions, there are plenty of solutions that address these kinds of attacks. I checked in with our engineers, just in case, and got the low-down on how BIG-IP handles this kind of situation; as expected, folks with web sites and applications being delivered via a BIG-IP really have no reason to be concerned about the style of attack described by Fyodor. If it turns out there's more to this vulnerability, I'll check in again. But until then, I'm going to join the rest of the security world and not worry much about this "new" attack.
In the end, it appears that the researchers are exploiting not only the trust model of TCP but also the trust between people: the trust the press has in "technology experts" to find real technical vulnerabilities, and the trust folks have in the trade press to tell them about it. That kind of exploitation is something that can't be addressed with technology. It can't be fixed by rewriting a TCP stack, and it certainly can't be patched by any vendor.
How do you stop psd5c4fpsd3a4epsd227?

By now you've certainly heard about the "zero day" Adobe Flash player exploit. What appears to be going on is similar to how other exploits and malware quickly propagate across the web:

1. Set up a site that hosts some malware, with a simple but effective password stealer hidden in a Flash file.
2. Inject malicious code via SQL injection techniques into a web site that will load the Flash files from the host you set up in step 1. Web 2.0 sites with forums and user-generated content are great places to try.
3. Collect passwords and personal information, or run a bot.
4. Profit.

Current suggestions for preventing this exploit from taking hold on your PC are to disable Flash, or at least run a plug-in like Flashblock or NoScript that lets you decide which Flash content and scripts to run and which to block. If you have a BIG-IP available, there are some other options that will allow your users to view other, non-malicious Flash and JavaScript-based content while blocking scripts attempting to exploit the vulnerability. An added side benefit of this capability is that you'll quickly discover if you've been infected, and can get on with removing the malicious code sooner rather than later.

If you're wondering why the SQL injection attacks have been so successful in the first place (experts estimate over 250,000 sites are currently infected), just take a look at a screenshot captured by Dancho Danchev showing some of the JavaScript used to enable the zero-day Adobe Flash attack.

Because this is a new exploit, and because it uses a couple of different domains from which to load the infected SWF files, the "signature" of this exploit isn't going to be available to most desktop security scanners. This is one of the drawbacks of purely signature-based security products: the signature of a newly discovered threat isn't available until researchers nail it down and make it available for download. And that could take days. Days in which you, your site, and your users could be vulnerable.

In general, if the SQL injection attack failed, this scheme would likely fail to work in the first place. Attackers must somehow trick you into downloading or otherwise loading the code that will in the end infect users' browsers, and that means (a) getting you to visit their site or (b) hiding the code in a site you do visit often, one that thrives on user-generated content, a.k.a. a Web 2.0 site. Now, standard SQL injection attacks are easy to spot, and most developers have probably coded in a way that ensures such standard attacks aren't possible. But attackers are always learning, evolving, and discovering ways to evade such protection. And lately they've figured out how to hide their attacks such that they can evade code-based protection as well as most web application firewalls. They're hiding in the all-concealing torchlight (link goes to a white paper on this technique).

Even if your developers are using secure coding techniques, it's possible that an attack such as this could be perpetrated against their application anyway, because of the vagaries of HTML and the other loosely typed, loosely defined languages used to build web applications today. So how do you stop it, and future attacks like it? Consider a web application firewall that's capable of detecting these "hidden" attacks by taking into consideration all the possible warped combinations that can be used to hide SQL injection and XSS attacks inside requests.
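One piece of that detection is normalization: decode the request the way the back end eventually will before trying to match anything. A minimal iRule sketch along those lines; the two patterns are illustrative stand-ins for a real signature set (they happen to appear in the 2008 mass SQL injection payloads) and are nowhere near sufficient on their own:

    when HTTP_REQUEST {
        # undo %-encoding twice to catch double-encoded evasions,
        # then lowercase so case-mangling doesn't slip past the match
        set uri [string tolower [URI::decode [URI::decode [HTTP::uri]]]]
        if { $uri contains "declare @" || $uri contains "cast(" } {
            log local0. "Possible obfuscated SQL injection from [IP::client_addr]"
            HTTP::respond 403 content "Forbidden"
        }
    }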
If you're running a BIG-IP LTM (Local Traffic Manager), you can likely prevent this attack from infecting users, and discover whether you've been infected, by using iRules to scan the payload for specific strings known to exist in the attack, such as the known domains and filenames of the Flash files.

Imbibing: Water