I am in your HTTP headers, attacking your application
Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser-known applications and attack vectors. These exploits are potentially more dangerous because, once proven through a successful attack on a lesser-known application, they can rapidly be adapted to exploit more common web applications - and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, the SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

POST /roundcube/bin/html2text.php HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Host: xx.xx.xx.xx
Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
Content-Length: 54

The Accept header here is not a media type at all; it is base64-encoded PHP that decodes to echo (333212+43245666)." ";;passthru("uname -a;id"); - a simple arithmetic marker followed by a passthru() call that runs system commands. What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header in order to retrieve some data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system that could be used in a cross-site scripting attack, deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult - developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with a new way to exploit some aspect of an application or transport layer protocol.

Think HTTP headers aren't generally used by applications? Consider the custom SOAPAction header used by SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code. So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, the use of it to target specific applications means it is a possible attack vector for the future against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might - and apparently they have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
That's a good rule of thumb for protecting against malicious payloads anyway, but it is an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused, to the point it's considered typical, of course). Possible ways to prevent the potential exploitation of HTTP headers:

- Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers (a sketch follows this list).
- Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
- Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
- Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using and ensure they are legitimate values before using them in any way.

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.
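As an illustration of the first option, here is a minimal iRule sketch that checks whether each element of the Accept header at least has the shape of a media range before the request is passed along. The regular expression is a rough simplification of the RFC 2616 media-range grammar, not a production-grade validator, and the response and logging behavior are placeholders you would adapt to your environment.

when HTTP_REQUEST {
   # Illustrative only: each comma-separated element of Accept should look
   # like type/subtype, optionally with parameters (e.g. text/html,
   # application/xml;q=0.9, */*). A base64-encoded script blob like the one
   # in the Roundcube attack fails this shape check and is refused.
   set accept [HTTP::header value "Accept"]
   if { $accept ne "" } {
      foreach element [split $accept ","] {
         set element [string trim $element]
         if { ![regexp {^(\*|[A-Za-z0-9.+-]+)/(\*|[A-Za-z0-9.+-]+)(\s*;[^,]*)?$} $element] } {
            log local0. "Rejected suspicious Accept header from [IP::client_addr]"
            HTTP::respond 400 content "Bad Request"
            return
         }
      }
   }
}

The same pattern - normalize, check against what the protocol says is legal, refuse anything else - applies to any other header your application consumes.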
3 reasons you need a WAF even if your code is (you think) secure

Everyone is buzzing and tweeting about the SANS Institute CWE/SANS Top 25 Most Dangerous Programming Errors, many heralding its release as the dawning of a new age in secure software. Indeed, it's already changing purchasing requirements. Byron Acohido reports that the Department of Defense is leading the way by "accepting only software tested and certified against the Top 25 flaws." Some have begun speculating that this list obviates the need for web application firewalls (WAF). After all, if applications are secured against these vulnerabilities, there's no need for an additional layer of security. Or is there? Web application firewalls, while certainly providing a layer of security against the exploitation of the application vulnerabilities resulting from errors such as those detailed by SANS, also provide other benefits in terms of security that should be carefully considered before dismissing their usefulness out of hand.

1. FUTURE PROOF AGAINST NEW VULNERABILITIES

The axiom says the only things certain in life are death and taxes, but in the world of application security a third is just as certain: the ingenuity of miscreants. Make no mistake, there will be new vulnerabilities discovered and they will, eventually, affect the security of your application. When they are discovered, you'll be faced with an interesting conundrum: take your application offline to protect it while you find, fix, and test the modifications, or leave the application online and take your chances. The third option, of course, is to employ a web application firewall to detect and stop attacks targeting the vulnerability. This allows you to keep your application online while mitigating the risk of exploitation, giving developers the time necessary to find, fix, and test the changes necessary to address the vulnerability.

2. ENVIRONMENTAL SECURITY

No application is an island. Applications are deployed in an environment because they require additional moving parts in order to actually be useful. Without an operating system, application or web server, and a network stack, applications really aren't all that useful to the end user. While the SANS list takes a subtle stab at mentioning this with its inclusion of the OS command injection vulnerability, it assumes that all software and systems required to deploy and deliver an application are certified against the list and therefore secure. This is a utopian ideal that is unlikely to become reality, and even if it were to come true, see reason #1 above. Web application firewalls protect more than just your application; they can also provide needed protection against vulnerabilities specific to operating systems, application and web servers, and the rest of the environment required to deploy and deliver your application. The increase in deployment of applications in virtualized environments, too, has security risks. The potential exploitation of the hypervisor is a serious issue that few have addressed thus far in their rush to adopt virtualization technology.

3. THE NINJA ATTACK

There are some attacks that cannot be detected by an application. These usually involve the exploitation of a protocol such as HTTP or TCP and appear to the application to be legitimate requests. These "ninja" style attacks take advantage of the fact that applications are generally only aware of one user session at a time, and not able to make decisions based on the activity of all users occurring at the same time. The application cannot prevent these attacks.
Attacks involving the manipulation of cookies, replay, and other application layer logical attacks, too, often go undetected by applications because they appear to be legitimate. SANS offers a passing nod to some of these types of vulnerabilities in its "Risky Resource Management" category, particularly CWE-642 (External Control of Critical State Data). Addressing this category for existing applications will likely require heavy modification to existing applications and new design for new applications. In the meantime, applications remain vulnerable to this category of vulnerability as well as the ninja attacks that are not, and cannot be, addressed by the SANS list. A web application firewall detects and prevents such stealth attacks attempting to exploit the legitimate behavior of protocols and applications.

The excitement with which the SANS list has been greeted is great news for security professionals, because it shows an increased awareness of the importance of secure coding and application security in general. That organizations will demand proof that applications - third-party or in-house - are secure against such a list is sorely needed as a means to ensure better application security across the whole infrastructure. But "the list" does not obviate the usefulness of or need for additional security measures, and web application firewalls have long provided benefits in addition to simply protecting against exploitation of the SANS Top 25 errors. Those benefits remain valid and tangible even if your code is (you think) secure. Just because you installed a digital home security system doesn't mean you should get rid of the deadbolts. Or vice versa.
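To make the CWE-642 point concrete, here is a minimal iRule sketch of one way an intermediary could detect client tampering with state carried in a cookie, by attaching and later verifying an integrity value. This is not how any particular WAF product implements it; the cookie name "app_state" and the shared secret are hypothetical placeholders, and md5 is used only to keep the example short - a real deployment would use a proper HMAC with a strong digest.

when HTTP_RESPONSE {
   # Attach an integrity value to an application cookie on the way out so
   # tampering by the client can be detected on the next request.
   if { [HTTP::cookie exists "app_state"] } {
      set sig [b64encode [md5 "[HTTP::cookie value app_state]my-shared-secret"]]
      HTTP::cookie insert name "app_state_sig" value $sig
   }
}
when HTTP_REQUEST {
   # Refuse any request whose cookie no longer matches the value we signed.
   if { [HTTP::cookie exists "app_state"] } {
      set expected [b64encode [md5 "[HTTP::cookie value app_state]my-shared-secret"]]
      if { [HTTP::cookie value "app_state_sig"] ne $expected } {
         HTTP::respond 403 content "Forbidden"
         return
      }
      HTTP::cookie remove "app_state_sig"
   }
}

The application itself never sees the signature cookie; the check happens entirely in the layer that has visibility into every request, which is the whole argument for the additional layer.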
Using Resource Obfuscation to Reduce Risk of Mass SQL Injection

One of the ways miscreants locate targets for mass SQL injection attacks that can leave your applications and data tainted with malware and malicious scripts is to simply seek out sites based on file extensions. Attackers know that .ASP and .PHP files are more often than not vulnerable to SQL injection attacks, and thus use Google and other search engines to seek out these target-rich environments by extension. Using a non-standard extension will not eliminate the risk of being targeted by a mass SQL injection attack, but it can significantly reduce the possibility, because your site will not automatically turn up in cursory searches seeking vulnerable sites. As Jeremiah Grossman often points out, while cross-site scripting may be the most common vulnerability discovered in most sites, SQL injection is generally the most exploited vulnerability, probably due to the ease with which it can be discovered, so anything you can do to reduce that possibility is a step in the right direction.

You could, of course, embark on a tedious and time-consuming mission to rename all files such that they do not show up in a generic search. However, this requires much more than simply replacing file extensions, as every reference to the files must also be adjusted lest you completely break your application. You may also be able to automatically handle the substitution and required mapping in the application server itself by modifying its configuration. Alternatively, there is another option: resource obfuscation. Using a network-side scripting technology like iRules or mod_rewrite, you have a great option at your disposal to thwart the automated discovery of potentially vulnerable applications.

HIDE FILE EXTENSIONS

You can implement network-side script functionality that simply presents to the outside world a different extension for all PHP and ASP files. While internally you are still serving up application.php, the user - whether search engine, spider, or legitimate user - sees application.zzz. The network-side script must be capable of replacing all instances of ".php" with ".zzz" in responses while interpreting all requests for ".zzz" as ".php" in order to ensure that the application continues to act properly. The following iRule shows an example of both the substitution in the response and the replacement in the request to enable this functionality:

when HTTP_REQUEST {
   # Map the obfuscated ".zzz" extension back to ".php" in the URI
   HTTP::uri [string map {".zzz" ".php"} [HTTP::uri]]
}
when HTTP_RESPONSE {
   STREAM::disable
   # Rewrite ".php" to ".zzz" in text responses so the links and references
   # presented to the client use the obfuscated extension
   if { [HTTP::header value "Content-Type"] contains "text" } {
      STREAM::expression "@.php@.zzz@"
      STREAM::enable
   }
}

One of the benefits of using a network-side script like this one to implement resource obfuscation is that in the event the bad guys figure out what you're doing, you can always change the mapping in a centralized location and it will immediately propagate across all your applications - without needing to change a thing on your servers or in your application.

HIDE YOUR SERVER INFORMATION

A second use of resource obfuscation is to hide the server information. Rather than let the world know you're running on IIS or Apache version whatever with X and Y module extensions, consider changing the configuration to provide minimal - if any - information about the actual application infrastructure environment.
For Apache you can change this in httpd.conf:

ServerSignature Off
ServerTokens Prod

These settings prevent Apache from adding the "signature" at the bottom of pages that contains the server name and version information, and change the HTTP Server header to simply read "Apache". In IIS you can disable the Server header completely by setting the following registry key to "1":

HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\DisableServerHeader

If you'd rather change the IIS Server header instead of removing it, this KnowledgeBase Note describes how to use URLScan to achieve your goals. If you'd like to change the HTTP Server header in a centralized location, you can use mod_security or network-side scripting to manipulate the Server header. As with masking file extensions, a centralized location for managing the HTTP Server header can be beneficial in many ways, especially if there are a large number of servers on which you need to make configuration changes. Using iRules, just replace the header with something else:

when HTTP_RESPONSE {
   HTTP::header replace Server new_value
}

Using mod_security you can set the SecServerSignature directive:

SecServerSignature "My Custom Server Name"

These techniques will not prevent your applications from being exploited, nor do they provide any real security against an attack, but they can reduce the risk of being discovered and subsequently targeted by making it more difficult for miscreants to recognize your environment as one that may be vulnerable to attack.
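If you would rather strip identifying headers entirely at the proxy instead of rewriting them, a minimal sketch along the same lines as the iRule above is shown below. The framework headers named here (X-Powered-By, X-AspNet-Version) are common examples, not a definitive list; adjust them to whatever your platform actually emits.

when HTTP_RESPONSE {
   # Remove headers that reveal server and framework details rather than
   # rewriting them to a new value.
   HTTP::header remove Server
   HTTP::header remove X-Powered-By
   HTTP::header remove X-AspNet-Version
}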
Why it's so hard to secure JavaScript

The discussion yesterday on JavaScript and security got me thinking about why it is that there are no good options other than script management add-ons like NoScript for securing JavaScript. In a compiled language there may be multiple ways to write a loop, but the underlying object code generated is the same. A loop is a loop, regardless of how it's represented in the language. Security products that insert shims into the stack, run as a proxy on the server, or reside in the network can look for anomalies in that object code. This is the basis for many types of network security - IDS, IPS, AVS, intelligent firewalls. They look for anomalies in signatures and if they find one they consider it a threat.

While the execution of a loop in an interpreted language is also the same regardless of how it's represented, it looks different to security devices because it's often text-based, as is the case with JavaScript and XML. There are only two good options for externally applying security to languages that are interpreted on the client: pattern matching/regex and parsing. Pattern matching and regular expressions provide minimal value for securing client-side interpreted languages, at best, because of the incredibly high number of possible combinations of putting together code. As we learned from preventing SQL injection and XSS, attackers are easily able to avoid detection by these systems by simply adding white space, removing white space, using encoding tricks, and just generally finding a new permutation of their code.

Parsing is, of course, the best answer. As 7rans noted yesterday regarding the Billion More Laughs JavaScript hack, if you control the stack, you control the execution of the code. Similarly, if you parse the data you can get it into a format more akin to that of a compiled language, and then you can secure it. That's the reasoning behind XML threat defense, or XML firewalls. In fact, all SOA and XML security devices necessarily parse the XML they are protecting - because that's the only way to know whether or not some typical XML attacks, like the Billion Laughs attack, are present. But this implementation comes at a price: performance. Parsing XML is compute intensive, and it necessarily adds latency. Every device you add into the delivery path that must parse the XML to route it, secure it, or transform it adds latency and increases response time, which decreases overall application performance. This is one of the primary reasons most XML-focused solutions prefer to use a streaming parser. Streaming parser performance is much better than a full DOM parser, and it still provides the opportunity to validate the XML and find malicious code. It isn't a panacea, however, as there are still some situations where streaming can't be used - primarily when transformation is involved.

We know this already, and also know that JavaScript and client-side interpreted languages in general are far more prolific than XML. Parsing JavaScript externally to determine whether it contains malicious code would certainly make it more secure, but it would also likely severely impact application performance - and not in a good way.
We also know that streaming JavaScript isn't a solution because, unlike an XML document, JavaScript is not confined. JavaScript is delimited, certainly, but it isn't confined to just being in the HEAD of an HTML document. It can be anywhere in the document, and often is. Worse, JavaScript can self-modify at run-time - and often does. That means that the security threat may not be in the syntax or the code when it's delivered to the client, but it might appear once the script is executed. Not only would an intermediate security device need to parse the JavaScript, it would need to execute it in order to properly secure it.

While almost all web application security solutions - ours included - are capable of finding specific attacks like XSS and SQL injection that are hidden within JavaScript, none are able to detect and prevent JavaScript code-based exploits unless they can be identified by a specific signature or pattern. And as we've just established, that's no guarantee the exploits won't morph and change as soon as they can be prevented. That's why browser add-ons like NoScript are so popular. Because JavaScript security today is binary: allow or deny. Period. There's no real in between. There is no JavaScript proxy that parses and rejects malicious script, no solution that proactively scans JavaScript for code-based exploits, no external answer to the problem. That means we have to rely on the browser developers to not only write a good browser with all the bells and whistles we like, but to provide security as well. I am not aware of any security solution that currently parses out JavaScript before it's delivered to the client. If there are any out there, I'd love to hear about them.
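To make the limitation concrete, here is a deliberately naive iRule sketch of the signature-based approach described above. The exploit string it looks for is hypothetical; the point of the example is that any such check is trivially evaded by whitespace, encoding, or re-ordering, which is exactly why pattern matching alone falls short for client-side script.

when HTTP_REQUEST {
   # Signature-based detection of one specific script fragment in the query
   # string. An attacker can evade this with a different encoding or any
   # trivial permutation of the payload.
   if { [string tolower [URI::decode [HTTP::query]]] contains "<script>while(1)" } {
      HTTP::respond 403 content "Forbidden"
   }
}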
Inside Look: BIG-IP ASM Botnet and Web Scraping Protection

I hang with WW Security Architect Corey Marshall to get an inside look at the botnet detection and web scraping protection in BIG-IP ASM.

ps

Related:
- F5's YouTube Channel
- In 5 Minutes or Less Series (23 videos - over 2 hours of In 5 Fun)
- Inside Look Series
4 Reasons We Must Redefine Web Application Security

Mike Fratto loves to tweak my nose about web application security. He's been doing it for years, so it's (d)evolved to a pretty standard set of arguments. But after he tweaked the debate again in a tweet, I got to thinking that part of the problem is the definition of web application security itself.

Web application security is almost always about the application (I know, duh! but bear with me) and therefore about the developer and secure coding. Most of the programmatic errors that lead to vulnerabilities and subsequently exploitation can be traced to a lack of secure coding practices, particularly around the validation of user input (which should never, ever be trusted). Whether it's XSS (Cross Site Scripting) or SQL injection, the root of the problem is that malicious data or code is submitted to an application and not properly ferreted out by sanitization routines written by developers, for whatever reason.

But there are a number of "web application" attacks that have nothing to do with developers and code, and are, in fact, more focused on the exploitation of protocols. TCP and HTTP can be easily manipulated in such a way as to result in a successful attack on an application without breaking any RFC or W3C standard that specifies the proper behavior of these protocols. Worse, the application developer really can't do anything about these types of attacks because neither the application nor its developer is aware they are occurring. It is, in fact, the four layers of the TCP/IP stack - the foundation for web applications - that are the reason we need to redefine web application security and expand it to include the network where it makes sense. Being a "web" application necessarily implies network interaction, which implies reliance on the network stack, which implies potential exploitation of the protocols upon which applications and their developers rely but have little or no control over. Web applications ride atop the OSI stack, above and beyond the network and application model on which all web applications are deployed. But those web applications have very little control or visibility into that stack, and network-focused security (firewalls, IDS/IPS) has very little awareness of the application.

Let's take a Layer 7 DoS attack as an example. L7 DoS works by exploiting the nature of HTTP 1.1 and its ability to essentially pipeline multiple HTTP requests over the same TCP connection. Ostensibly this behavior reduces the overhead on the server associated with opening and closing TCP connections and thus improves the overall capacity and performance of web and application servers. So far, all good. But this behavior can be used against an application. By requesting a legitimate web page using HTTP 1.1, TCP connections on the server are held open waiting for additional requests to be submitted. And they'll stay open until the client sends the appropriate HTTP Connection: close header with a request or the TCP session itself times out based on the configuration of the web or application server. Now, imagine a single source opens a lot - say thousands - of pages. They are all legitimate requests; there's no malicious data involved and no incorrect usage of the underlying protocols. From the application's perspective this is not an attack. And yet it is an attack. It's a basic resource consumption attack designed to chew up all available TCP connections on the server such that legitimate users cannot be served.
The distinction between legitimate users and legitimate requests is paramount to understanding why web application security isn't always just about the application; sometimes it's about protocols and behavior external to the application that cannot be controlled, let alone detected, by the application or its developers. The developer, in order to detect such a misuse of the HTTP protocol, would need to keep what we in the network world call a "session table". Web and application servers keep this information - they have to - but they don't make it accessible to the developer. Basically, the developer's viewpoint is from inside a single session, dealing with a single request, dealing with a single user. The developer, and the application, have a very limited view of the environment in which the application operates.

A web application firewall, however, necessarily has access to its own session table. It operates with a viewpoint external to the application, with the ability to understand the "big picture". It can detect when a single user is opening three or four or thousands of connections and not doing anything with them. It understands that the user is not legitimate because the user's behavior is out of line with how a normal user interacts with an application. A web application firewall has the context in which to evaluate the behavior and requests and realize that a single user opening multiple connections is not legitimate after all.

Not all web application security is about the application. Sometimes it's about the protocols and languages and platforms supporting the delivery of that application. And when the application lacks visibility into those supporting infrastructure environments, it's pretty damn difficult for the application developer to use secure coding to protect against exploitation of those facets of the application. We really have to stop debating where web application security belongs and go back to the beginning and redefine what it means in a web-driven distributed world if we're going to effectively tackle the problem any time this century.
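To illustrate the "session table context" argument, here is a minimal iRule sketch that counts open connections per client address and refuses new ones past a threshold - the kind of cross-session decision the application itself cannot make. This is a generic illustration, not how any particular WAF product implements its DoS protection; the threshold of 100 is an arbitrary placeholder you would tune to real user behavior.

when CLIENT_ACCEPTED {
   # Track open connections per client address in the proxy's session table.
   set key "conns:[IP::client_addr]"
   if { [table incr $key] > 100 } {
      # This source already holds an unreasonable number of connections;
      # refuse the new one. CLIENT_CLOSED below still decrements the count.
      reject
   }
}
when CLIENT_CLOSED {
   # Decrement the count when the connection goes away.
   table incr $key -1
}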
BusinessWeek takes viral advertising a little too seriously

Yesterday it was reported that BusinessWeek had been infected with malware via an SQL injection attack. [begin Mom lecture] Remember when we talked about PCI DSS being a good idea for everyone, even though it's just a requirement for the payment card industry? If I've told you once, I've told you a million times: safer is better, more protection never hurts. [end Mom lecture] The coolest thing about the web is that, unlike being a mom with one teenager left in the house, I don't have to actually repeat myself. I can just link to it again... and again... and again.

Interestingly, the aforementioned report indicates that "Sophos informed BusinessWeek of the infection last week, although at the time of writing the hackers' scripts are still present and active on their site." Why would that be? Perhaps because it takes time to find and fix the code responsible, and then actually deploy it out into production. This is one of the scenarios in which a web application firewall or an application delivery platform could be of assistance, as either could be quickly and easily configured to strip the offending scripts from all responses, giving developers the time they need to address the problem in the application.

Related reading:
- White Paper: SQL Injection Evasion Detection
- Article: Preventing SQL Injections
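To make the stop-gap described above concrete, here is a minimal iRule sketch that scrubs a known injected script reference out of HTML responses while the underlying database is cleaned up. The script URL is a placeholder; in practice the expression would match whatever tag the attackers actually injected, and it mirrors the STREAM-based rewrite pattern shown earlier in this series.

when HTTP_RESPONSE {
   # Stop-gap scrubbing: strip one known injected script tag from HTML
   # responses until developers remove it from the data itself.
   STREAM::disable
   if { [HTTP::header value "Content-Type"] contains "text/html" } {
      STREAM::expression {@<script src=http://malicious\.example/x\.js></script>@@}
      STREAM::enable
   }
}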
New TCP vulnerability about trust, not technology

I read about a "new" TCP flaw that, according to C|Net News, puts Web sites at risk. There is very little technical information available; the researchers who discovered this tasty TCP tidbit canceled a conference talk on the subject and have been sketchy about the details of the flaw when talking publicly. So I did some digging and ran into a wall of secrecy almost as high as the one Kaminsky placed around the DNS vulnerability. So I hit Twitter and leveraged the simple but effective power of asking for help. Which resulted in several replies, leading me to Fyodor and an April 2000 Bugtraq entry. The consensus at this time seems to be that the wall Kaminsky built was for good reason, but this one? No one's even trying to ram it down because it doesn't appear to be anything new. Which makes the "oooh, scary!" coverage by mainstream and trade press almost amusing and definitely annoying.

The latest 'exploit' appears to be, in a nutshell, a second (or more) discovery regarding the nature of TCP. It appears to exploit the way in which TCP legitimizes a client. In that sense the rediscovery (I really hesitate to call it that, by the way) is on par with Kaminsky's DNS vulnerability, simply because the exploit appears to be about the way the protocol works, and not any technical vulnerability like a buffer overflow. TCP and applications riding atop TCP inherently trust any client that knocks on the door (SYN) and responds correctly (ACK) when TCP answers the door (SYN ACK). It is simply the inherent trust of the TCP handshake as validation of the legitimacy of a client that makes these kinds of attacks possible. But that's what makes the web work, kids, and it's not something we should be getting all worked up about. Really, the headlines should read more like "Bad people could misuse the way the web works. Again."

This likely isn't about technology, it's about trust, and the fact that the folks who wrote TCP never thought about how evil some people can be and how they'd take advantage of that trust and exploit it. Silly them, forgetting to take into account human nature when writing a technical standard. If they had, however, we wouldn't have the Internet we have today, because the trust model on the Web would have to be "deny everything, trust no one" rather than "trust everyone unless they prove otherwise."

So is the danger so great as is being portrayed around the web? I doubt it, unless the researchers have stumbled upon something really new. We've known about these kinds of attacks for quite some time now. Changing the inherent nature of TCP isn't something likely to happen anytime soon, but contrary to the statements made regarding there being no workarounds or solutions to this problem, there are plenty of solutions that address these kinds of attacks. I checked in with our engineers, just in case, and got the low-down on how BIG-IP handles this kind of situation and, as expected, folks with web sites and applications being delivered via a BIG-IP really have no reason to be concerned about the style of attack described by Fyodor. If it turns out there's more to this vulnerability, then I'll check in again. But until then, I'm going to join the rest of the security world and not worry much about this "new" attack.
In the end, it appears that the researchers are not only exploiting the trust model of TCP, they're exploiting the trust between people: the trust that the press has in "technology experts" to find real technical vulnerabilities, and the trust that folks have in the trade press to tell them about it. That kind of exploitation is something that can't be addressed with technology. It can't be fixed by rewriting a TCP stack, and it certainly can't be patched by any vendor.
Out, Damn'd Bot! Out, I Say!

Exorcising your digital demons

Most people are familiar with Shakespeare's The Tragedy of Macbeth. Of particularly common usage is the famous line uttered repeatedly by Lady Macbeth, "Out, damn'd spot! Out, I say!" as she tries to wash imaginary bloodstains from her hands, wracked with the guilt of the many murders of innocent men, women, and children she and her husband have committed. It might be no surprise to find a similar situation in the datacenter, late at night. Against the background of humming servers and cozily blinking lights shedding a soft glow upon the floor, you might hear some of your infosecurity staff roaming the racks and crying out "Out, damn'd bot! Out, I say!" as they try to exorcise digital demons from their applications and infrastructure. Because once those bots get in, they tend to take up permanent residence. Getting rid of them is harder than you'd think because, like Lady Macbeth's imaginary bloodstains, they just keep coming back - until you address the source.
Why Vulnerabilities Go Unpatched

The good folks at Verizon Business who recently released their 2008 Data Breach Investigations Report sounded almost surprised by the discovery that "Intrusion attempts targeted the application layer more than the operating system and less than a quarter of attacks exploited vulnerabilities. Ninety percent of known vulnerabilities exploited by these attacks had patches available for at least six months prior to the breach." This led the researchers to conclude that "For the overwhelming majority of attacks exploiting known vulnerabilities, the patch had been available for months prior to the breach. [...] Also worthy of mention is that no breaches were caused by exploits of vulnerabilities patched within a month or less of the attack. This strongly suggests that a patch deployment strategy focusing on coverage and consistency is far more effective at preventing data breaches than 'fire drills' attempting to patch particular systems as soon as patches are released."

There's actually a very valid reason why vulnerabilities go unpatched for months in an organization, regardless of how frustrating that reality may be to security professionals: reliability and stability. The first rule of IT is "Business critical applications and systems shall not be disturbed." When applications are the means through which your business runs, i.e. they generate revenue, you are very careful about disturbing the status quo, because even the smallest mistake can lead to downtime, which in turn results in lost revenue. For example, it's estimated that Amazon lost $31,000 per minute while it was down. It was down long enough for the revenue lost to jump into six-digit figures.

Patching a system requires testing and certification. The operating system or application must be patched, and then all applications running on that system must be tested to ensure that the patch has not affected them in any way. IT folks have been burned one too many times in the past by patches to simply "fire and forget" when it comes to changing operating systems, platforms, and applications in production environments. That's why multiple environments exist in the enterprise - an application moves from development to quality assurance and testing to production - and why we've all sat around eating pizza and watching the clock late on a Saturday night, holding a printed copy of our "roll back" plans just in case something went wrong. Patching even a vulnerability takes time, and the more applications running on a system the more time it takes, because each one has to be tested and re-certified in a non-production environment before the patch can be applied to a production system. This process evolved to minimize the impact on production systems and reduce system downtime - which usually translates into lost revenue.

Thus, it's no surprise that Verizon's researchers discovered such a high percentage of vulnerability exploits could have been prevented by patches issued months prior to the breach. It's possible that many of those breaches occurred while the patch was being tested and simply hadn't been rolled out to production yet. It's certainly not that IT professionals are unaware that these patches exist. It's simply that there are so many moving parts on a production system, with higher risk factors than your average system, that they aren't willing - many would say rightfully so - to apply patches that may potentially break a smooth-running system.
This is one of the reasons we advocate the use of web application firewalls and intelligent application delivery networks. The web application firewall will usually be updated to defend against a known web application vulnerability before the IT folks have had a chance to verify that the patch issued will not negatively affect production systems. This provides IT with the ability to take the time necessary to ensure business continuity in the face of patching systems without leaving systems vulnerable to compromise. And if the web application firewall isn't immediately updated, an intelligent application delivery platform can often be used as a stop-gap measure, filtering requests such that attempts to exploit new vulnerabilities can be stopped while IT gets ready to patch systems. An application delivery platform can also, through its load balancing capabilities, enable IT to patch individual systems without downtime by intelligently directing requests to the available systems that are not being patched at the moment.

IT security professionals understandably desire patches to be applied as soon as possible, but the definition of "as soon as possible" is often days, weeks, or even months from the time the patch is issued. IT and business folks don't want to experience a breach, either, but the need to protect applications and systems from exploitation must be balanced against revenue generation and productivity. Employing a web application firewall and/or an intelligent application delivery network solution can bridge the gap between the two seemingly diametrically opposed needs, providing security while ensuring IT has the time to properly test patches.

Imbibing: Coffee
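As an illustration of the stop-gap filtering described above, here is a minimal "virtual patch" sketch in iRules that refuses requests targeting a known-vulnerable resource until the servers behind the proxy are actually patched. The URI and parameter name are hypothetical placeholders standing in for whatever the published advisory identifies.

when HTTP_REQUEST {
   # Virtual patch: block the exploit path for a known vulnerability while
   # the real fix works its way through testing and certification.
   if { [HTTP::uri] starts_with "/app/vulnerable_endpoint.php" } {
      if { [HTTP::method] eq "POST" or [HTTP::query] contains "debug_cmd=" } {
         HTTP::respond 403 content "Forbidden"
         return
      }
   }
}

Once the patch has cleared testing and been rolled out to production, the rule can simply be removed.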