Virtual Patching: What is it and why you should be doing it
Yesterday I was privileged to co-host a webinar with WhiteHat Security's Jeremiah Grossman on preventing SQL injection and cross-site scripting using a technique called "virtual patching". While I was familiar with F5's partnership with WhiteHat and our integrated solution, I wasn't familiar with the term.

Virtual patching should put an end to the endless religious warring that goes on between the secure coding and web application firewall camps whenever the topic of web application security is raised. The premise of virtual patching is that a web application firewall is not, I repeat, is not, a replacement for secure coding. It is an augmentation of existing security systems and practices, one that enables secure development to occur without being rushed or outright ignored in favor of pushing a fix out the door.

"The remediation challenges most organizations face are the time consuming process of allocating the proper personnel, prioritizing the tasks, QA / regression testing the fix, and finally scheduling a production release."
-- WhiteHat Security, "WhiteHat Website Security Statistic Reports", December 2008

The WhiteHat report goes on to discuss the average number of days it took organizations to address the top five urgent – not critical, not high, but urgent – severity vulnerabilities discovered. The shortest time to resolve a vulnerability (SQL injection) was 28 days in 2008, which is actually an improvement over previous years.

28 days. That's a lifetime on the Internet when your site is vulnerable to exploitation and attackers are massing at the gates faster than ants to a picnic. But you can't rush finding and fixing the vulnerability, and shutting down the web application may not be an option at all, especially if you rely on that application as a revenue stream, as an integration point with partners, or as part of a critical business process with a strict SLA governing its uptime.

So do you leave it vulnerable? According to WhiteHat's data, that's apparently the decision made by many organizations, given the limited options. The heads of many security professionals just exploded; my apologies if any of the detritus mussed your screen.

If you're one of the ones whose head is still intact, there is a solution. Virtual patching provides the means by which you can prevent the exploitation of a vulnerability while it is addressed through whatever organizational processes are required to resolve it. Virtual patching is essentially the process of putting a rule in place on a web application firewall to prevent the exploitation of a vulnerability. This process is often a manual one, but in the case of WhiteHat and F5 it has been made as easy as clicking a button. When WhiteHat's Sentinel, which provides vulnerability scanning as a service, uncovers a vulnerability, the operator (that's you) can decide to virtually patch the hole by adding a rule to the appropriate policy on F5's BIG-IP Application Security Manager (ASM) with the click of a button. Once the vulnerability has been addressed, you can remove the rule from the policy or leave it in place, as is your wont. It's up to you.

Virtual patching provides the opportunity to close a vulnerability quickly but doesn't require that you abandon secure coding practices. On the contrary, virtual patching enables and encourages secure coding by giving developers some breathing room in which to implement a thorough, secure solution to the vulnerability.
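To make the two halves of that concrete, here's a minimal sketch in Python. It is emphatically not how ASM implements its policies – a real WAF policy is far richer than a single regex – but it illustrates the division of labor: a stop-gap rule that blocks the attack pattern today, and the parameterized query that actually fixes the code tomorrow. The signature, function names, and table are all invented for illustration.

```python
# Conceptual sketch only: not F5 ASM's implementation, and a single regex is a
# toy next to a real WAF policy. The pattern and table names are invented.
import re
import sqlite3

# --- The virtual patch: a stop-gap rule enforced in front of the application ---
SQLI_SIGNATURE = re.compile(r"('|--|;|\bunion\b|\bselect\b|\bdrop\b)", re.IGNORECASE)

def should_block(param_value: str) -> bool:
    """Return True if the request should be rejected before reaching the app."""
    return bool(SQLI_SIGNATURE.search(param_value))

# --- The real fix: the application stops building SQL out of raw input ---
def lookup_user(conn: sqlite3.Connection, username: str):
    # Vulnerable version (what the virtual patch is buying time to replace):
    #   conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
    # Secure version: user input is bound as a parameter, never parsed as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

# While the fix works its way through QA and release, the rule blocks attempts:
assert should_block("alice' OR '1'='1")
assert not should_block("alice")
```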
It isn't an either-or solution, it's both, and it leverages both approaches to provide the most comprehensive security coverage possible. Given the statistics regarding the number of sites infected of late, that's something everyone should be able to get behind.

Virtual patching as a technique does not require WhiteHat or F5, but other solutions will require a manual process to put rules in place to address vulnerabilities. The advantage of the WhiteHat-F5 solution is its tight integration via iControl and its ability to immediately close discovered security holes – plus, of course, the lengthy list of security options and features available with ASM to further secure web applications. You can read more about the integration between WhiteHat and F5 here or here, or view a short overview of the way virtual patching works between Sentinel and ASM.
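For a rough sense of what that push-button flow amounts to, here is a hypothetical sketch. The endpoints, payloads, and field names below are invented for illustration – they are not the actual Sentinel or iControl APIs – but the shape is the point: a confirmed finding on the scanner side becomes a blocking rule on the WAF side, with no one hand-writing rules.

```python
# Hypothetical illustration only: these URLs, payloads, and fields are made up
# and are not the real Sentinel or iControl interfaces. The point is the shape
# of the integration: scanner finding in, blocking WAF rule out, no hand-edits.
import requests

SCANNER_API = "https://scanner.example.com/api/findings"            # placeholder
WAF_POLICY_API = "https://waf.example.com/api/policies/app1/rules"  # placeholder

def virtually_patch_open_findings(auth: tuple) -> None:
    """Turn each confirmed, open finding into a blocking rule on the WAF."""
    findings = requests.get(SCANNER_API, params={"status": "open"}, auth=auth)
    findings.raise_for_status()
    for finding in findings.json():
        rule = {
            "url": finding["url"],              # the vulnerable page
            "parameter": finding["parameter"],  # the vulnerable input
            "attack_class": finding["class"],   # e.g. "sqli" or "xss"
            "action": "block",
        }
        requests.post(WAF_POLICY_API, json=rule, auth=auth).raise_for_status()

# Later, once developers ship the real fix, the same API could remove the rule
# -- or you can leave it in place as defense in depth, as noted above.
```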
Would you risk $31,000 for milliseconds of application response time?

Keep in mind that the time it takes a human being to blink is, on average, 300 – 400 milliseconds.

I just got back from Houston, where I helped present on F5's integration with web application security vendor WhiteHat, a.k.a. virtual patching. As almost always happens whenever anyone mentions the term web application firewall, the question of performance degradation was raised. To be precise: how much will a web application firewall degrade performance? Not will it, but how much will it degrade performance.

My question back to those of you with the same question is, "How much are you willing to accept to mitigate the risk?" Or perhaps more precisely, how much are your users and customers – and therefore your business – willing to accept to mitigate the risk? In most cases today they are the real target, and thus the ones bearing the risk of today's web application attacks. As Jeremiah Grossman often points out, mass SQL injection and XSS attacks are not designed to expose your data; they're designed to exploit your customers and users by infecting them with malware built to steal their personal data. So the people who really bear the burden of risk when browsing your site are your customers and users. It's their risk we're playing with more than our own.

So the question has to be asked with them in mind: how much latency are your users and customers willing to accept in order to mitigate the risk of being infected and potentially becoming the next statistic in one of the many fraud-oriented organizations tracking identity theft?

SIX OF ONE, HALF-DOZEN OF THE OTHER

No matter where you implement a security strategy that involves the deep inspection of application data, you are going to incur latency. If you implement it in code, you're increasing the amount of time it takes to execute on the server – which increases response time. If you implement it in a web application firewall, you're increasing the amount of time it takes to get to the server – which will also increase response time. The interesting thing is that this time is generally measured in milliseconds and is barely noticeable to the user. It literally happens in the blink of an eye and is only obvious to someone tasked with reporting on application performance, who is used to dealing with network response times that are almost always sub-second. The difference between 2ms and 5ms is not noticeable to the human brain; the impact of this level of latency is almost unnoticeable to the end user and does not radically affect his or her experience one way or another. Even 10ms – or 100ms – is still sub-second latency and is not noticeable unless it appears on a detailed application performance report.
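To put those numbers in perspective, here's a back-of-the-envelope calculation. Every figure in it is an assumption chosen for illustration – your round trips, server times, and inspection overhead will differ – but the proportions are the point.

```python
# All numbers below are assumptions for illustration, not measurements.
blink_ms = 350          # average human blink, per the figure cited above
internet_rtt_ms = 100   # assumed WAN round trip to the site
server_time_ms = 200    # assumed application processing time
waf_overhead_ms = 5     # assumed inline inspection cost

total_without_waf = internet_rtt_ms + server_time_ms
total_with_waf = total_without_waf + waf_overhead_ms

print(f"Response without WAF: {total_without_waf} ms")
print(f"Response with WAF:    {total_with_waf} ms "
      f"(+{100 * waf_overhead_ms / total_without_waf:.1f}%)")
print(f"The added latency is {waf_overhead_ms / blink_ms:.0%} of one blink.")
```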
But let's say that a web application firewall did increase latency to a noticeable degree. Let's say it added 2 seconds to the overall response time. Would the user notice? Perhaps. The question then becomes: are they willing to accept that in exchange for better protection against malicious code? In exchange for not becoming the next victim of identity theft due to malicious code that was inserted into your database via a SQL injection attack and delivered to them the next time they visited your site? Are they willing to accept 5 seconds? 10 seconds? Probably not (I wouldn't either), but what did they say? If you can't answer, it's probably because you haven't asked. That's okay, because to my knowledge no one has. It's not a subject we freely discuss with customers, because we assume they are, for the most part, ignorant of the very risks associated with just visiting our sites.

THE MYTH OF SUB-SECOND LATENCY

We often forget, too, that sub-second latency does not really matter in a world served up by the Internet. We're hard-wired to applications via the LAN and expect them to appear on our screens the instant we try to access them or hit "submit". We forget that in the variable, crazy world of the Internet, the user is subjected to a myriad of events – of which they are blissfully unaware – that affect the performance of our sites and applications. They do not expect sub-second response times, because experience tells them performance is going to vary from day to day and hour to hour, and that much of the reason for it is out of their – and our – control.

Do we want the absolute best performance for our customers and users? Yes. But not necessarily at the risk of leaving them – and our data – exposed. If we were really worried about performance we'd get rid of all the firewalls and content scanners and A/V gateways and IPS and IDS and just deliver applications raw across the Internet, the way nature intended them to be delivered: naked and bereft of all protection. They'd be damned fast, wouldn't they? We don't do that, of course, because we aren't fruitcakes. We've weighed the benefits of the protection afforded by such systems against the latency they incur and decided that the benefits outweigh the risk.

We need to do the same thing with web application firewalls – and really with any security solution that sits between the application and the user: weigh the risk against the benefit. We first need to really understand what the risks are for us and our customers, and then decide how to address each risk – either by ignoring it or by mitigating it. But we need to stop fooling ourselves into discarding possible solutions over what is almost always a non-issue. We may think our customers or users will raise hell if the response time of their favorite site or application increases by 5 or 50 or even 500 ms, but will they really? Will they even notice? And if we asked them, would they accept it in exchange for better protection against identity theft? Against viruses and worms? Against key-loggers, and the cost of a trip to the hospital when their mother has a heart attack because she happened to look over as three hundred pop-ups filled with porn images covered their screen, courtesy of malware picked up from your site?

We need to start considering not only the risk to our own organizations and the customer data we must protect, but the risk to our customers' and users' environments, and then evaluate solutions that effectively address that risk in a way that satisfies everyone. To do that, we need to involve the customer and the business more in the decision-making process, and stop focusing only on the technical aspects of how much latency might be involved or whether we like the technology or not.

Go ahead. Ask your customers and users if they're willing to risk $31,000 – the estimated cost of identity theft to an individual today – to save 500 milliseconds of response time. And when they ask how long that is, tell them the truth: "about the time it takes to blink an eye."

As potentially one of your customers or visitors, I'll start out your data set by saying, "No. No, I'm not."
Out, Damn'd Bot! Out, I Say!

Exorcising your digital demons

Most people are familiar with Shakespeare's The Tragedy of Macbeth, and in particular with the famous line uttered repeatedly by Lady Macbeth, "Out, damn'd spot! Out, I say!", as she tries to wash imaginary bloodstains from her hands, wracked with guilt over the many murders of innocent men, women, and children she and her husband have committed.

It might come as no surprise to find a similar scene in the datacenter, late at night. Against the background of humming servers, with cozily blinking lights shedding a soft glow upon the floor, you might hear some of your infosecurity staff roaming the racks and crying out, "Out, damn'd bot! Out, I say!" as they try to exorcise digital demons from their applications and infrastructure. Because once those bots get in, they tend to take up permanent residence. Getting rid of them is harder than you'd think, because like Lady Macbeth's imaginary bloodstains, they just keep coming back – until you address the source.