web application security
Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
But browser support is only half the solution; don’t forget to implement the server side, too.

Clickjacking, unlike more well-known (and better understood) web application vulnerabilities, has been given scant attention despite its risks and its usage. Earlier this year, for example, it was used in an attack on Twitter, but never really discussed as being a clickjacking attack. Maybe that’s because, aside from rewriting applications to prevent CSRF (adding nonces and validation of the same to every page) or adding framekillers, there just haven’t been many other options to prevent the attack technique from being utilized against users. Too, it is one of the more convoluted attack methods out there, so it would be silly to expect non-technical media to understand it, let alone explain how it works to their readers.

There is, however, a solution on the horizon. IE8 has introduced an opt-in measure that allows developers – or whomever might be in charge of network-side scripting implementations – to prevent clickjacking on vulnerable pages using a custom HTTP header that keeps them from being “framed” inappropriately: X-FRAME-OPTIONS. The behavior is described in the aforementioned article as:

If the X-FRAME-OPTIONS value contains the token DENY, IE8 will prevent the page from rendering if it will be contained within a frame. If the value contains the token SAMEORIGIN, IE will block rendering only if the origin of the top-level browsing context is different than the origin of the content containing the X-FRAME-OPTIONS directive. For instance, if http://shop.example.com/confirm.asp contains a DENY directive, that page will not render in a subframe, no matter where the parent frame is located. In contrast, if the X-FRAME-OPTIONS directive contains the SAMEORIGIN token, the page may be framed by any page from the exact http://shop.example.com origin.

But that’s only IE8, right? Well, natively, yes. But a development version of NoScript has been released that supports the X-FRAME-OPTIONS header and will provide the same protections as are natively achieved in IE8. The problem is that this is only half the equation: the X-FRAME-OPTIONS header needs to exist before the browser can act on it and the preventive measure against clickjacking is complete. As noted in the Register, “some critics have contended the protection will be ineffective because it will require millions of websites to update their pages with proprietary code.” That’s not entirely true, as there is another option that will provide support for X-FRAME-OPTIONS without updating pages/applications/sites with proprietary code: network-side scripting. The “proprietary” nature of custom HTTP headers is also debatable, as support for Firefox was provided quickly via NoScript, and if the technique is successful it will likely be adopted by other browser creators.

HOW-TO ADD X-FRAME-OPTIONS TO YOUR APPLICATION – WITH or WITHOUT CODE CHANGES

Step 1: Add the custom HTTP header “X-FRAME-OPTIONS” with a value of “DENY” or “SAMEORIGIN” before returning a response to the client.

Really, that’s it. The browser takes care of the rest for you. OWASP has a great article on how to implement a ClickjackFilter for JavaEE, and there are sure to be many more blogs and articles popping up describing how one can implement such functionality in their language of choice. Even without such direct “how-to” articles and code samples, it is merely a matter of adding a new custom HTTP header – examples of which ought to be easy enough to find.
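If you are adding the header in application code, the change is small enough to live in a response filter or middleware. Below is a minimal sketch (mine, not OWASP’s ClickjackFilter) using a generic Python WSGI middleware; the same few lines translate to whatever filter or after-response hook your framework provides.

```python
# Minimal sketch: a WSGI middleware that adds X-Frame-Options to every
# response. Framework-specific hooks (servlet filters, after_request
# handlers, etc.) accomplish the same thing; names here are illustrative.

class FrameOptionsMiddleware:
    def __init__(self, app, policy="SAMEORIGIN"):
        if policy not in ("DENY", "SAMEORIGIN"):
            raise ValueError("policy must be DENY or SAMEORIGIN")
        self.app = app
        self.policy = policy

    def __call__(self, environ, start_response):
        def _start_response(status, headers, exc_info=None):
            # Drop any existing header so the policy set here wins.
            headers = [(k, v) for k, v in headers
                       if k.lower() != "x-frame-options"]
            headers.append(("X-Frame-Options", self.policy))
            return start_response(status, headers, exc_info)
        return self.app(environ, _start_response)

# Usage: app = FrameOptionsMiddleware(app, policy="DENY")
```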
Similarly, a solution can be implemented using network-side scripting that requires no modification to applications. In fact, this can be accomplished via iRules in just one line of code:

```
when HTTP_RESPONSE { HTTP::header insert "X-FRAME-OPTIONS" "DENY" }
```

(Use "SAMEORIGIN" instead of "DENY" if pages from the same origin should still be allowed to frame the content.) I believe the mod_rewrite network-side script would be as simple, but as I am not an expert in mod_rewrite I will hope someone who is will leave an appropriate example as a comment or write up a blog/article and leave a pointer to it.

A good reason to utilize the agility of network-side scripting solutions in this case is that it is not necessary to modify each application requiring protection, which takes time to implement, test, and deploy. An even better reason is that a single network-side script can protect all applications, regardless of language and deployment platform, without a lengthy development and deployment cycle. Regardless of how you add the header, it would be a wise idea to add it as a standard part of your secure-code deployment requirements (you do have those, don’t you?) because it doesn’t hurt anything for the custom HTTP header to exist, and visitors using X-FRAME-OPTIONS enabled browsers/solutions will be a lot safer than without it.

Related:
Stop brute force listing of HTTP OPTIONS with network-side scripting
Jedi Mind Tricks: HTTP Request Smuggling
I am in your HTTP headers, attacking your application
Understanding network-side scripting
9 ways to use network-side scripting to architect faster, scalable, more secure applications

Virtual Patching: What is it and why you should be doing it
Yesterday I was privileged to co-host a webinar with WhiteHat Security's Jeremiah Grossman on preventing SQL injection and Cross-Site scripting using a technique called "virtual patching". While I was familiar with F5's partnership with WhiteHat and our integrated solution, I wasn't familiar with the term. Virtual patching should put an end to the endless religious warring that goes on between the secure coding and web application firewall camps whenever the topic of web application security is raised. The premise of virtual patching is that a web application firewall is not, I repeat is not a replacement for secure coding. It is, in fact, an augmentation of existing security systems and practices that, in fact, enables secure development to occur without being rushed or outright ignored in favor of rushing a fix out the door. "The remediation challenges most organizations face are the time consuming process of allocating the proper personnel, prioritizing the tasks, QA / regression testing the fix, and finally scheduling a production release." -- WhiteHat Security, "WhiteHat Website Security Statistic Reports", December 2008 The WhiteHat report goes on to discuss the average number of days it took for organizations to address the top five urgent - not critical, not high, but urgent - severity vulnerabilities discovered. The fewest number of days to resolve a vulnerability (SQL Injection) was 28 in 2008, which is actually an improvement over previous years. 28 days. That's a lifetime on the Internet when your site is vulnerable to exploitation and attackers are massing at the gates faster than ants to a picnic. But you can't rush finding and fixing the vulnerability, and the option to shut down the web application may not be an option at all, especially if you rely on that application as a revenue stream, as an integration point with partners, or as part of a critical business process with a strict SLA governing its uptime. So do you leave it vulnerable? According to White Hat's data, apparently that's the decision made for many organizations given the limited options. The heads of many security professionals just exploded. My apologies if any of the detritus mussed your screen. If you're one of the ones whose head is still intact, there is a solution. Virtual patching provides the means by which you can prevent the exploitation of the vulnerability while it is addressed through whatever organizational processes are required to resolve it. Virtual patching is essentially the process of putting in place a rule on a web application firewall to prevent the exploitation of a vulnerability. This process is often times a manual one, but in the case of WhiteHat and F5 the process has been made as easy as clicking a button. When WhiteHat's Sentinel, which provides vulnerability scanning as a service, uncovers a vulnerability the operator (that's you) can decide to virtually patch the hole by adding a rule to the appropriate policy on F5's BIG-IP Application Security Manager (ASM) with the click of a button. Once the vulnerability has been addressed, you can remove the rule from the policy or leave it in place, as is your wont. It's up to you. Virtual patching provides the opportunity to close a vulnerability quickly but doesn't require that you necessarily abandon secure coding practices. Virtual patching actual enables and encourages secure coding by giving developers some breathing room in which to implement a thorough, secure solution to the vulnerability. 
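To make the idea concrete, here is a rough, generic sketch of what a virtual patch amounts to logically: a scanner finding is translated into a temporary blocking rule enforced in front of the application. This is illustrative only; it is not the WhiteHat Sentinel/iControl integration (which the article describes as a one-click product feature), and the signatures shown are deliberately simplistic.

```python
# Illustrative sketch only -- NOT the WhiteHat Sentinel / F5 iControl API.
# It shows the general idea behind a "virtual patch": translate a scanner
# finding into a temporary blocking rule enforced in front of the app
# while developers fix the code.

import re
from dataclasses import dataclass

@dataclass
class Finding:
    vuln_class: str      # e.g. "SQL Injection"
    path: str            # e.g. "/account/lookup"
    parameter: str       # e.g. "id"

# Very coarse signatures, for illustration; a real WAF policy is far richer.
SIGNATURES = {
    "SQL Injection": re.compile(r"('|--|;|\bunion\b|\bselect\b)", re.I),
    "Cross-Site Scripting": re.compile(r"(<script\b|javascript:|onerror\s*=)", re.I),
}

def virtual_patch(findings):
    """Build a blocking policy from scanner findings."""
    return [(f.path, f.parameter, SIGNATURES[f.vuln_class])
            for f in findings if f.vuln_class in SIGNATURES]

def blocked(policy, path, params):
    """Return True if a request should be blocked under the virtual patch."""
    for p_path, p_param, pattern in policy:
        if path == p_path and pattern.search(params.get(p_param, "")):
            return True
    return False

policy = virtual_patch([Finding("SQL Injection", "/account/lookup", "id")])
print(blocked(policy, "/account/lookup", {"id": "1 UNION SELECT password"}))  # True
print(blocked(policy, "/account/lookup", {"id": "42"}))                       # False
```

Once the underlying code is fixed and redeployed, the rule can simply be retired, which is exactly the breathing room the technique is meant to buy.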
It isn't an either-or solution; it's both, and it leverages both solutions to provide the most comprehensive security coverage possible. And given statistics regarding the number of sites infected of late, that's something everyone should be able to get behind.

Virtual patching as a technique does not require WhiteHat or F5, but other solutions will require a manual process to put in place rules to address vulnerabilities. The advantage of a WhiteHat-F5 solution is its tight integration via iControl and its ability to immediately close discovered security holes, and of course the lengthy list of cool security options and features to further secure web applications available with ASM. You can read more about the integration between WhiteHat and F5 here or here, or view a short overview of the way virtual patching works between Sentinel and ASM.

WARNING: Security Device Enclosed
If you aren’t using all the security tools at your disposal, you’re doing it wrong.

How many times have you seen an employee wave a customer on by when the “security device enclosed” in some item – be it DVD, CD, or clothing – sets off the alarm at the doors? Just a few weeks ago I heard one young lady explain the alarm away with “it must have been the CD I bought at the last place I was at…” This apparently satisfied the young man at the doors, who nodded and turned back to whatever he’d been doing. All the data the security guy needed to make a determination was there; he had all the context necessary in which to analyze the situation and make a determination based upon that information. But he ignored it all. He failed to leverage all the tools at his disposal and potentially allowed dollars to walk out the door. In doing so he also set a precedent and unintentionally sent a message to anyone who really wanted to commit a theft: I ignore warning signs, go ahead.

Quantifying Reputation Loss From a Breach
#infosec #security Putting a value on reputation is not as hard as you might think… It’s really easy to quantify some of the costs associated with a security breach. Number of customers impacted times the cost of a first class stamp plus the cost of a sheet of paper plus the cost of ink divided by … you get the picture. Some of the costs are easier than others to calculate. Some of them are not, and others appear downright impossible. One of the “costs” often cited but rarely quantified is the cost to an organization’s reputation. How does one calculate that? Well, if folks sat down with the business people more often (the ones that live on the other side of the Meyer-Briggs Mountain) we’d find it’s not really as difficult to calculate as one might think. While IT folks analyze flows and packet traces, business folks analyze market trends and impacts – such as those arising from poor customer service. And if a breach of security isn’t interpreted by the general populace as “poor customer service” then I’m not sure what is. While traditionally customer service is how one treats the customer, increasingly that’s expanding to include how one treats the customer’s data. And that means security. This question “how much does it really cost” is one Jeremiah Grossman asks fairly directly in a recent blog, “Indirect Hard Losses”: As stated by InformationWeek regarding a Ponemon Institute study on the Cost of a Data Breach, “Customers, it seems, lose faith in organizations that can't keep data safe and take their business elsewhere.” The next logical question is how much? Jeremiah goes on to focus on revenue lost from web transactions after a breach and that’s certainly part of the calculation, but what about those losses that might have been but now will never be? How can we measure not only the loss of revenue (meaning a decrease in first-order customers) but the potential loss of revenue? That’s harder, but just as important as it more accurately represents the “reputation loss” often mentioned in passing but never assigned a concrete value (at least not publicly, some industries discretely share such data with trusted members of the same industry, but seeing these numbers in the wild? Good luck!) HERE COMES the ALMOST SCIENCE 20% of the businesses that lost data lost customers as a direct result. The impacts were most severe for companies with more than 100 employees. Almost half of them lost sales. Rubicon Survey One of the first things we have to calculate is influence, as that directly impacts reputation. It is the ability of even a single customer to influence a given number of others (negatively or positively) that makes up reputation. It’s word of mouth, what people say about you, after all. If we turn to studies that focus more on marketing and sales and businessy things, we can find a lot of this data. It’s a well-studied area. One study 1 indicates that the reach of a single dissatisfied customer will tell approximately 8-16 people. Each of those people has a circle of influence of about 250, with 25 of those being within an organization's primary target audience. Of all those told 2% (1 in 50) will defect or avoid an organization upon hearing of the victim's dissatisfaction. So for every angry customer, the reputation impact is a loss of anywhere from 40-80 customers, existing and future. So much for thinking 100 records stolen in a breach is small potatoes, eh? Thousands of existing and potential customers loss is nothing to sneeze at. 
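The influence model above is just arithmetic, so it is easy to turn into a quick estimator. The sketch below (my own framing, not from the cited study) uses the same example values the article plugs in below: 100 breached records, a $4.00 annual customer value, a ten-year lifetime, and 40 influenced customers lost per angry customer.

```python
# Sketch of the reputation-loss model described above. The numbers are the
# example values the article works through below; swap in your own CLV,
# lifetime, and influence figures from the business side.

def reputation_loss(breached_customers, annual_value, lifetime_years,
                    influenced_per_customer=40):
    """Estimate revenue lost to a breach: direct defections plus the
    existing/future customers lost through word of mouth."""
    lifetime_value = annual_value * lifetime_years
    affected_loss = breached_customers * lifetime_value
    influenced_loss = breached_customers * influenced_per_customer * lifetime_value
    return affected_loss, influenced_loss, affected_loss + influenced_loss

affected, influenced, total = reputation_loss(100, 4.00, 10)
print(f"Affected customer loss:   ${affected:,.0f}")    # $4,000
print(f"Influenced customer loss: ${influenced:,.0f}")  # $160,000
print(f"Total reputation cost:    ${total:,.0f}")       # $164,000
```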
Now, here’s where it gets a little harder, because you’re going to have to talk to the businessy folks to get some values to attach to those losses. See, there are two numbers you need yet: customer lifetime value (CLV) and the cost to replace a customer (which is higher than the cost to acquire a customer, but don’t ask me why, I’m not a businessy folk). Customer values are highly dependent upon industry. For example, based on 2010 FDIC data, the industry average annual customer value for a banking customer is $209 [2]. Facebook’s annual revenue per user (ARPU) is estimated at $2.00 [3]. Estimates claim Google makes $9.85 annually off each Android user [4]. And Zynga’s ARPU is estimated at $3.96 (based on a reported $0.33 monthly per-user revenue) [5]. This is why you actually have to talk to the businessy guys; they know what these values are, and you’ll need them to plug into the influence calculation to come up with an at-least-it’s-closer-than-guessing value. You also need to ask what the average customer lifetime is, so you can calculate the loss from dissatisfied and defecting customers. Then you just need to start plugging in the numbers.

Remember, too, that it’s a model; an estimate. It’s not a perfect valuation system, but it should give you some kind of idea of what the reputational impact from a breach would be, which is more than most folks have today. Even if you can’t obtain the cost-to-replace value, try the model without it. Try a small breach, just for fun, say of 100 records. Let’s use $4.00 as an annual customer value and a lifetime of ten years as an example.

Affected Customer Loss: 100 × ($4 × 10) = $4,000
Influenced Customer Loss: (100 × 40) × ($4 × 10) = 4,000 × $40 = $160,000
Total Reputation Cost: $164,000

Adding in the cost to replace can only make this larger, and serves very little purpose except to show that even what many consider a relatively small breach (in terms of records lost) can be costly.

WHY is THIS VALUABLE?

The reason this is valuable is two-fold. First, it serves as the basis for a very logical and highly motivating business case for security solutions designed to prevent breaches. The problem with much of security is that it’s intangible and incalculable. It is harder to put monetary value to risk than it is to put monetary value on solutions. Thus, the ability to perform a cost-benefit analysis that is based in part on “reputation loss” is difficult for security professionals and IT in general. The business needs to be able to justify investments, and to do that they need hard numbers that they can balance against. It is the security professionals who so often are called upon to explain the “risk” of a breach and loss of data to the business. Providing them with tangible data based on accepted business metrics and behavior offers a more concrete view of the costs – in money – of a breach. That gives IT the leverage, the justification, for investing in solutions such as web application firewalls and vulnerability scanning services that are designed to detect and ultimately prevent such breaches from occurring. It gives infosec some firm ground upon which to stand and talk in terms the business understands: dollar signs.

[1] PUTTING A PRICE TAG ON A LOST CUSTOMER
[2] Free Checking and Debit Incentives Post-Durbin
[3] Facebook’s Annual Revenue Per User
[4] Each Android User Will Make Google $9.85 per Year in 2012
[5] Zynga Doubled ARPU From Last Year Even as Facebook Platform Changes Slowed Growth

F5 Friday: It is now safe to enable File Upload
Web 2.0 is about sharing content – user generated content. How do you enable that kind of collaboration without opening yourself up to the risk of infection? Turns out developers and administrators have a couple options… The goal of many a miscreant is to get files onto your boxen. The second step after that is often remote execution or merely the hopes that someone else will look at/execute the file and spread chaos (and viruses) across your internal network. It’s a malicious intent, to be sure, and makes developing/deploying Web 2.0 applications a risky proposition. After all, Web 2.0 is about collaboration and sharing of content, and if you aren’t allowing the latter it’s hard to enable the former. Most developers know about and have used the ability to upload files of just about any type through a web form. Photos, documents, presentations – these types of content are almost always shared through an application that takes advantage of the ability to upload data via a simple web form. But if you allow users to share legitimate content, it’s a sure bet (more sure even than answering “yes” to the question “Will it rain in Seattle today?”) that miscreants will find and exploit the ability to share content. Needless to say information security professionals are therefore not particularly fond of this particular “feature” and in some organizations it is strictly verboten (that’s forbidden for you non-German speakers). So wouldn’t it be nice if developers could continue to leverage this nifty capability to enable collaboration? Well, all you really need to do is integrate with an anti-virus scanning solution and only accept that content which is deemed safe, right? After all, that’s good enough for e-mail systems and developers should be able to argue that the same should be good enough for web content, too. The bigger problem is in the integration. Luckily, ICAP (Internet Content Adaptation Protocol) is a fairly ready answer to that problem. SOLUTION: INTEGRATE ANTI-VIRUS SCANNING via ICAP The Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP based protocol specified in RFC 3507 designed to off-load specific content to dedicated servers, thereby freeing up resources and standardizing the way in which features are implemented. ICAP is generally used in proxy servers to integrate with third party products like antivirus software, malicious content scanners and URL filters. ICAP in its most basic form is a "lightweight" HTTP based remote procedure call protocol. In other words, ICAP allows its clients to pass HTTP based (HTML) messages (Content) to ICAP servers for adaptation. Adaptation refers to performing the particular value added service (content manipulation) for the associated client request/response. -- Wikipedia, ICAP Now obviously developers can directly take advantage of ICAP and integrate with an anti-virus scanning solution directly. All that’s required is to extract every file in a multi-part request and then send each of them to an AV-scanning service and determine based on the result whether to continue processing or toss those bits into /dev/null. This is assuming, of course, that it can be integrated: packaged applications may not offer the ability and even open-source which ostensibly does may be in a language or use frameworks that require skills the organization simply does not have. Or perhaps the cost over time of constantly modifying the application after every upgrade/patch is just not worth the effort. 
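For teams that do go the direct-integration route, the logic reduces to something like the sketch below. It deliberately glosses over the ICAP wire protocol itself; scan_file is a stand-in for whatever ICAP/antivirus client you actually use, and the fail-closed behavior mirrors the "guarantee enforcement" idea discussed later.

```python
# Sketch of the "scan before you accept" flow described above. scan_file()
# stands in for a real ICAP/antivirus client (wire-protocol details are out
# of scope here); the framework is assumed to have already extracted the
# uploaded parts from the multipart request.

class ScannerUnavailable(Exception):
    pass

def scan_file(filename: str, content: bytes) -> bool:
    """Return True if clean. Replace with a real ICAP REQMOD/RESPMOD call."""
    raise ScannerUnavailable("wire up your ICAP/AV client here")

def accept_upload(uploads: dict, fail_closed: bool = True) -> bool:
    """Accept the request only if every uploaded file scans clean.

    fail_closed=True mirrors a 'guarantee enforcement' posture: if the
    scanner can't be reached, reject rather than pass content through
    unscanned.
    """
    for filename, content in uploads.items():
        try:
            if not scan_file(filename, content):
                return False          # infected: block (and log)
        except ScannerUnavailable:
            if fail_closed:
                return False          # can't verify: block
    return True                        # all parts clean (or fail-open)
```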
For applications for which you can add this integration, it should be fairly simple as developers are generally familiar with HTTP and RPC and understand how to use “services” in their applications. Of course this being an F5 Friday post, you can probably guess that I have an alternative (and of course more efficient) solution than integration into the code. An external solution that works for custom as well as packaged applications and requires a lot less long term maintenance – a WAF (Web Application Firewall). BETTER SOLUTION: web application firewall INTEGRATION The latest greatest version (v10.2) of F5 BIG-IP Application Security Manager (ASM) included a little touted feature that makes integration with an ICAP-enabled anti-virus scanning solution take approximately 15.7 seconds to configure (YMMV). Most of that time is likely logging in and navigating to the right place. The rest is typing the information required (server host name, IP address, and port number) and hitting “save”. F5 Application security manager (ASM) v10 includes easy integration with a/v solutions It really is that simple. The configuration is actually an HTTP “class”, which can be thought of as a classification of sorts. In most BIG-IP products a “class” defines a type of traffic closely based on a specific application protocol, like HTTP. It’s quite polymorphic in that defining a custom HTTP class inherits the behavior and attributes of the “parent” HTTP class and your configuration extends that behavior and attributes, and in some cases allows you to override default (parent) behavior. The ICAP integration is derived from an HTTP class, so it can be “assigned” to a virtual server, a URI, a cookie, etc… In most ASM configurations an HTTP class is assigned to a virtual server and therefore it sees all requests sent to that server. In such a configuration ASM sees all traffic and thus every file uploaded in a multipart payload and will automatically extract it and send it via ICAP to the designated anti-virus server where it is scanned. The action taken upon a positive result, i.e. the file contains bad juju, is configurable. ASM can block the request and present an informational page to the user while logging the discovery internally, externally or both. It can forward the request to the web/application server with the virus and log it as well, allowing the developer to determine how best to proceed. ASM can be configured to never allow requests to reach the web/application server that have not been scanned for viruses using the “Guarantee Enforcement” option. When configured, if the anti-virus server is unavailable or doesn’t respond, requests will be blocked. This allows administrators to configure a “fail closed” option that absolutely requires AV scanning before a request can be processed. A STRATEGIC POINT of CONTROL Leveraging a strategic point of control to provide AV scanning integration and apply security policies regarding the quality of content has several benefits over its application-modifying code-based integration cousin: Allows integration of AV scanning in applications for which it is not feasible to modify the application, for whatever reason (third-party, lack of skills, lack of time, long term maintenance after upgrades/patches ) Reduces the resource requirements of web/application servers by offloading the integration process and only forwarding valid uploads to the application. 
In a cloud-based or other pay-per-use model this reduces costs by eliminating the processing of invalid requests by the application.
Aggregates logging/auditing and provides consistency of logs for compliance and reporting, especially to prove “due diligence” in preventing infection.

Related Posts:
All F5 Friday Entries on DevCentral
All About ASM

I am in your HTTP headers, attacking your application
Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser-known applications and attack vectors. These exploits are potentially more dangerous because, once proven through a successful attack on these lesser-known applications, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, the SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

```
POST /roundcube/bin/html2text.php HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Host: xx.xx.xx.xx
Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
Content-Length: 54
```

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header in order to retrieve some data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system that could be used in a cross-site scripting attack, or deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult - developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with a new way to exploit some aspect of an application or transport layer protocols.

Think HTTP headers aren't generally used by applications? Consider the use of the custom HTTP header SOAPAction for SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code. So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, the use of it to target specific applications means it is a possible attack vector for the future against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might, and apparently have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
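As a rough illustration of "verify headers like any other input", the sketch below checks that an Accept header at least looks like a list of media ranges before the application ever touches it. The pattern is deliberately conservative and illustrative; it is not a complete RFC 2616 grammar.

```python
# Rough sketch of validating an HTTP header before an application uses it.
# The pattern below is a conservative approximation of what a legitimate
# Accept header looks like -- not a full RFC 2616 grammar.

import re

ACCEPT_PATTERN = re.compile(
    r"^[\w!#$%&'*+.^`|~-]+/[\w!#$%&'*+.^`|~-]+"   # type/subtype (or */*)
    r"(\s*;\s*[\w-]+=[\w.\-]+)*$"                 # optional ;q=0.8 style params
)

def accept_header_is_sane(value: str) -> bool:
    """Reject Accept values that don't look like comma-separated media ranges."""
    if len(value) > 512:                 # arbitrary sanity bound
        return False
    return all(ACCEPT_PATTERN.match(part.strip())
               for part in value.split(",") if part.strip())

print(accept_header_is_sane("text/html,application/xhtml+xml;q=0.9,*/*;q=0.8"))  # True
print(accept_header_is_sane(
    "ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw=="))  # False
```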
Treating any input as suspect is a good rule of thumb for protecting against malicious payloads in general, but it is an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused to the point it's considered typical, of course). Possible ways to prevent the potential exploitation of HTTP headers:

Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers.
Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using and ensure they are legitimate values before using them in any way.

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.

Related articles by Zemanta:
Gmail Is Vulnerable to Hackers
The Concise Guide to Proxies
3 reasons you need a WAF even though your code is (you think) secure
Stop brute forcing listing of HTTP OPTIONS with network-side scripting
What's the difference between a web application and a blog?

Would you risk $31,000 for milliseconds of application response time?
Keep in mind that the time it takes a human being to blink is an average of 300 – 400 milliseconds. I just got back from Houston where I helped present on F5’s integration with web application security vendor White Hat, a.k.a. virtual patching. As almost always happens whenever anyone mentions the term web application firewall the question of performance degradation was raised. To be precise: How much will a web application firewall degrade performance? Not will it, but how much will it, degrade performance. My question back to those of you with the same question is, “How much are you willing to accept to mitigate the risk?” Or perhaps more precisely, how much are your users and customers – and therefore your business - willing to accept to mitigate the risk, because in most cases today that’s really who is the target and thus bearing the risk of today’s web application attacks. As Jeremiah Grossman often points out, mass SQL injection and XSS attacks are not designed to expose your data, they’re designed today to exploit your customers and users, by infecting them with malware designed to steal their personal data. So the people who are really bearing the burden of risk when browsing your site are your customers and users. It’s their risk we’re playing with more than our own. So they question has to be asked with them in mind: how much latency are your users and customers willing to accept in order to mitigate the risk of being infected and the potential for becoming the next statistic in one of the many fraud-oriented organizations tracking identity theft? SIX OF ONE, HALF-DOZEN OF THE OTHER No matter where you implement a security strategy that involves the deep inspection of application data you are going to incur latency. If you implement it in code, you’re increasing the amount of time it takes to execute on the server – which increases response time. If you implement it in a web application firewall, you’re increasing the amount of time it takes to get to the server- which will undoubtedly increase response time. The interesting thing is that time is generally measured in milliseconds, and is barely noticeable to the user. It literally happens in the blink of an eye and is only obvious to someone tasked with reporting on application performance, who is used to dealing with network response times that are almost always sub-second. The difference between 2ms and 5ms is not noticeable to the human brain. The impact of this level of latency is almost unnoticeable to the end-user and does not radically affect his or her experience one way or another. Even 10ms – or 100 ms - is still sub-second latency and is not noticeable unless it appears on a detailed application performance report. But let’s say that a web application firewall did increase latency to a noticeable degree. Let’s say it added 2 seconds to the overall response time. Would the user notice? Perhaps. The question then becomes, are they willing to accept that in exchange for better protection against malicious code? Are they willing to accept that in exchange for not becoming the next victim of identity theft due to malicious code that was inserted into your database via an SQL injection attack and delivered to them the next time they visited your site? Are they willing to accept 5 seconds? 10 seconds? Probably not (I wouldn’t either), but what did they say? If you can’t answer it’s probably because you haven’t asked. That’s okay, because no one has to my knowledge. 
It’s not a subject we freely discuss with customers because we assume they are, for the most part, ignorant of the very risks associated with just visiting our sites. THE MYTH OF SUB-SECOND LATENCY Too, we often forget that sub-second latency does not really matter in a world served up by the Internet. We’re hard-wired to the application via the LAN and expect it to instantly appear on our screens the moment we try to access it or hit “submit”. We forget that in the variable, crazy world of the Internet the user is often subjected to a myriad events of which they are blissfully unaware that affect the performance of our sites and applications. They do not expect sub-second response times because experience tells them it’s going to vary from day to day and hour to hour and much of the reasons for it are out of their – and our – control. Do we want the absolute best performance for our customers and users? Yes. But not necessarily at the risk of leaving them – and our data – exposed. If we were really worried about performance we’d get rid of all the firewalls and content scanners and A/V gateways and IPS and IDS and just deliver applications raw across the Internet, the way nature intended them to be delivered: naked and bereft of all protection. But they’d be damned fast, wouldn’t they? We don’t do that, of course, because we aren’t fruitcakes. We’ve weighed the benefits of the protection afforded by such systems against the inherent latency incurred by the solutions and decided that the benefits outweighed the risk. We need to do the same thing with web application firewalls and really any security solution that needs to sit between the application and the user; we need to weigh the risk against the benefit. We first need to really understand what the risks are for us and our customers, and then make a decision how to address that risk – either by ignoring it or mitigating it. But we need to stop fooling ourselves into discarding possible solutions for what is almost always a non-issue. We may think our customers or users will raise hell if the response time of their favorite site or application increases by 5 or 50 or even 500 ms, but will they really? Will they really even notice it? And if we asked them, would they accept it in exchange for better protection against identity theft? Against viruses and worms? Against key-loggers and the cost of a trip to the hospital when their mother has a heart-attack because she happened to look over as three hundred pop-ups filled with porn images filled their screen because they were infected with malware by your site? We need to start considering not only the risk to our own organizations and the customer data we must protect, but to our customers’ and users’ environments, and then evaluate solutions that are going to effectively address that risk in a way that satisfies everyone. To do that, we need to involve the customer and the business more in that decision making process and stop focusing only on the technical aspects of how much latency might be involved or whether we like the technology or not. Go ahead. Ask your customers and users if they’re willing to risk $31,000 – the estimated cost of identity theft today to an individual – to save 500 milliseconds of response time. And when they ask how long that is, tell them the truth, “about the time it takes to blink an eye”. As potentially one of your customers or visitors, I’ll start out your data set by saying, “No. 
No, I’m not.”

3 reasons you need a WAF even if your code is (you think) secure
Everyone is buzzing and tweeting about the SANS Institute CWE/SANS Top 25 Most Dangerous Programming Errors, many heralding its release as the dawning of a new age in secure software. Indeed, it's already changing purchasing requirements. Byron Acohido reports that the Department of Defense is leading the way by "accepting only software tested and certified against the Top 25 flaws." Some have begun speculating that this list obviates the need for web application firewalls (WAF). After all, if applications are secured against these vulnerabilities, there's no need for an additional layer of security. Or is there? Web application firewalls, while certainly providing a layer of security against the exploitation of the application vulnerabilities resulting from errors such as those detailed by SANS, also provide other benefits in terms of security that should be carefully considered before dismissing their usefulness out of hand.

1. FUTURE PROOF AGAINST NEW VULNERABILITIES

The axiom says the only things certain in life are death and taxes, but in the world of application security a third is just as certain: the ingenuity of miscreants. Make no mistake, there will be new vulnerabilities discovered and they will, eventually, affect the security of your application. When they are discovered, you'll be faced with an interesting conundrum: take your application offline to protect it while you find, fix, and test the modifications, or leave the application online and take your chances. The third option, of course, is to employ a web application firewall to detect and stop attacks targeting the vulnerability. This allows you to keep your application online while mitigating the risk of exploitation, giving developers the time necessary to find, fix, and test the changes necessary to address the vulnerability.

2. ENVIRONMENTAL SECURITY

No application is an island. Applications are deployed in an environment because they require additional moving parts in order to actually be useful. Without an operating system, application or web server, and a network stack, applications really aren't all that useful to the end user. While the SANS list takes a subtle stab at mentioning this with its inclusion of the OS command injection vulnerability, it assumes that all software and systems required to deploy and deliver an application are certified against the list and therefore secure. This is a utopian ideal that is unlikely to become reality, and even if it were to come true, see reason #1 above. Web application firewalls protect more than just your application; they can also provide needed protection against vulnerabilities specific to operating systems, application and web servers, and the rest of the environment required to deploy and deliver your application. The increase in deployment of applications in virtualized environments, too, has security risks. The potential exploitation of the hypervisor is a serious issue that few have addressed thus far in their rush to adopt virtualization technology.

3. THE NINJA ATTACK

There are some attacks that cannot be detected by an application. These usually involve the exploitation of a protocol such as HTTP or TCP and appear to the application to be legitimate requests. These "ninja" style attacks take advantage of the fact that applications are generally only aware of one user session at a time, and not able to make decisions based on the activity of all users occurring at the same time. The application cannot prevent these attacks.
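One illustrative sketch (mine, not from the article) of the kind of cross-client visibility an intermediary has and a single request handler does not: a monitor that watches behavior across all sessions at once, flagging request floods and session cookies that suddenly appear from many source addresses.

```python
# Illustrative sketch: an intermediary that sees all clients can spot
# patterns a per-request application handler cannot, e.g. one session
# cookie replayed from many addresses, or an abnormal request rate.

import time
from collections import defaultdict, deque

class CrossSessionMonitor:
    def __init__(self, window_seconds=60, max_requests=600, max_ips_per_cookie=3):
        self.window = window_seconds
        self.max_requests = max_requests
        self.max_ips = max_ips_per_cookie
        self.requests = defaultdict(deque)      # client_ip -> timestamps
        self.cookie_ips = defaultdict(set)      # session cookie -> source IPs

    def observe(self, client_ip, session_cookie=None, now=None):
        """Record one request; return a list of anomalies, empty if none."""
        now = now if now is not None else time.time()
        q = self.requests[client_ip]
        q.append(now)
        while q and now - q[0] > self.window:   # expire old entries
            q.popleft()

        anomalies = []
        if len(q) > self.max_requests:
            anomalies.append("request flood from %s" % client_ip)
        if session_cookie is not None:
            ips = self.cookie_ips[session_cookie]
            ips.add(client_ip)
            if len(ips) > self.max_ips:
                anomalies.append("session cookie reused from %d addresses" % len(ips))
        return anomalies
```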
Attacks involving the manipulation of cookies, replay, and other application layer logical attacks, too, often go undetected by applications because they appear to be legitimate. SANS offers a passing nod to some of these types of vulnerabilities in its "Risky Resource Management" category, particularly CWE-642 (External Control of Critical State Data). Addressing this category for existing applications will likely require heavy modification to existing applications and new design for new applications. In the meantime, applications remain vulnerable to this category of vulnerability as well as the ninja attacks that are not, and cannot be, addressed by the SANS list. A web application firewall detects and prevents such stealth attacks attempting to exploit the legitimate behavior of protocols and applications.

The excitement with which the SANS list has been greeted is great news for security professionals, because it shows an increased awareness of the importance of secure coding and application security in general. That organizations will demand proof that applications - third-party or in-house - are secure against such a list is sorely needed as a means to ensure better application security across the whole infrastructure. But "the list" does not obviate the usefulness of or need for additional security measures, and web application firewalls have long provided benefits in addition to simply protecting against exploitation of SANS Top 25 errors. Those benefits remain valid and tangible even if your code is (you think) secure. Just because you installed a digital home security system doesn't mean you should get rid of the deadbolts. Or vice versa.

Related articles by Zemanta:
Security is not a luxury item
DoS attack reveals (yet another) crack in net's core
CWE/SANS TOP 25 Most Dangerous Programming Errors
Mike Fratto on CWE/SANS TOP 25 Most Dangerous Programming Errors
Zero Day Threat: One big step toward a safer Internet

The Many Faces of DDoS: Variations on a Theme or Two
Many denial of service attacks boil down to the exploitation of how protocols work and are, in fact, very similar under the hood. Recognizing these themes is paramount to choosing the right solution to mitigate the attack.

When you look across the “class” of attacks used to perpetrate a denial of service attack you start seeing patterns. These patterns are important in determining what resources are being targeted because it provides the means to implement solutions that mitigate the consumption of those resources while under an attack. Once you recognize the underlying cause of a service outage due to an attack you can enact policies and solutions that mitigate that root cause, which better serves to protect against the entire class of attacks rather than employing individual solutions that focus on specific attack types. This is because attacks are constantly evolving, and the attacks solutions protect against today will certainly morph into a variation on that theme, and solutions that protect against specific attacks rather than addressing the root cause will not necessarily be capable of defending against those evolutions.

In general, there are two types of denial of service attacks: those that target the network layers and those that target the application layer. And of course as we’ve seen this past week or so, attackers are leveraging both types simultaneously to exhaust resources and affect outages across the globe.

NETWORK DoS ATTACKS

Network-focused DoS attacks often take advantage of the way network protocols work innately. There’s nothing wrong with the protocols, no security vulnerabilities, nada. It’s just the way they behave and the inherent trust placed in the communication that takes place using these protocols. Still others simply attempt to overwhelm a single host with so much traffic that it falls over. Sometimes successful, other times it turns out the infrastructure falls over before the individual host and results in more a disruption of service than a complete denial, but with similar impact to the organization and customers.

SYN FLOOD

A SYN flood is an attack against a system for the purpose of exhausting that system’s resources. An attacker launching a SYN flood against a target system attempts to occupy all available resources used to establish TCP connections by sending multiple SYN segments containing incorrect IP addresses. Note that the term SYN refers to a type of connection state that occurs during establishment of a TCP/IP connection. More specifically, a SYN flood is designed to fill up a SYN queue. A SYN queue is a set of connections stored in the connection table in the SYN-RECEIVED state, as part of the standard three-way TCP handshake. A SYN queue can hold a specified maximum number of connections in the SYN-RECEIVED state. Connections in the SYN-RECEIVED state are considered to be half-open and waiting for an acknowledgement from the client. When a SYN flood causes the maximum number of allowed connections in the SYN-RECEIVED state to be reached, the SYN queue is said to be full, thus preventing the target system from establishing other legitimate connections. A full SYN queue therefore results in partially-open TCP connections to IP addresses that either do not exist or are unreachable. In these cases, the connections must reach their timeout before the server can continue fulfilling other requests.
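A toy simulation of the mechanism just described makes the math tangible: half-open entries from spoofed sources occupy SYN queue slots until they time out, so even a modest SYN rate starves legitimate clients. The parameters below are illustrative, not measurements.

```python
# Toy model of SYN queue exhaustion: half-open entries from spoofed sources
# never complete the handshake, so they hold queue slots until they time
# out, crowding out legitimate connection attempts.

import random

def simulate_syn_flood(queue_size=256, syn_timeout=30, attack_rate=100,
                       legit_rate=5, duration=120):
    """Return how many legitimate connection attempts were turned away."""
    queue = []            # expiry times for half-open connections
    rejected_legit = 0
    for t in range(duration):
        queue = [expiry for expiry in queue if expiry > t]   # timeouts free slots
        for _ in range(attack_rate):                          # spoofed SYNs
            if len(queue) < queue_size:
                queue.append(t + syn_timeout)                  # never completes
        for _ in range(legit_rate):                            # real clients
            if len(queue) < queue_size:
                queue.append(t + random.uniform(0.1, 1.0))     # completes quickly
            else:
                rejected_legit += 1
    return rejected_legit

print(simulate_syn_flood(), "legitimate connection attempts dropped")
```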
ICMP FLOOD (Smurf)

The ICMP flood, sometimes referred to as a Smurf attack, is an attack based on a method of making a remote network send ICMP Echo replies to a single host. In this attack, a single packet from the attacker goes to an unprotected network’s broadcast address. Typically, this causes every machine on that network to answer with a packet sent to the target.

UDP FLOOD

The UDP flood attack is most commonly a distributed denial-of-service attack (DDoS), where multiple remote systems are sending a large flood of UDP packets to the target.

UDP FRAGMENT

The UDP fragment attack is based on forcing the system to reassemble huge amounts of UDP data sent as fragmented packets. The goal of this attack is to consume system resources to the point where the system fails.

PING of DEATH

The Ping of Death attack is an attack with ICMP echo packets that are larger than 65535 bytes. Since this is the maximum allowed ICMP packet size, this can crash systems that attempt to reassemble the packet.

NETWORK ATTACK THEME: FLOOD

The theme with network-based attacks is “flooding”. A target is flooded with some kind of traffic, forcing the victim to expend all its resources on processing that traffic and, ultimately, becoming completely unresponsive. This is the traditional denial of service attack that has grown into distributed denial of service attacks primarily because of the steady evolution of web sites and applications to handle higher and higher volumes of traffic. These are also the types of attacks with which most network and application components have had long years of experience and are thus well-versed in mitigating.

APPLICATION DoS ATTACKS

Application DoS attacks are becoming the norm primarily because we’ve had years of experience with network-based DoS attacks and infrastructure has come a long way in being able to repel such attacks. That and Moore’s Law, anyway. Application DoS attacks are likely more insidious simply because, like their network-based counterparts, they take advantage of application protocol behaviors, but unlike their network-based counterparts they require far fewer clients to overwhelm a host. This is part of the reason application-based DoS attacks are so hard to detect – because there are fewer clients necessary (owing to the large chunks of resources consumed by a single client) they don’t fit the “blast” pattern that is so typical of a network-based DoS. It can take literally millions of ICMP requests to saturate a host and its network, but it requires only tens of thousands of requests to consume the resources of an application host such that it becomes unreliable and unavailable. And given the ubiquitous nature of HTTP – over which most of these attacks are perpetrated – and the relative ease with which it is possible to hijack unsuspecting browsers and force their participation in such an attack – an attack can be in progress and look like nothing more than a “flash crowd” – a perfectly acceptable and in many industries desirable event.

A common method of attack involves saturating the target (victim) machine with external communications requests, so that the target system cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable.
In general terms, DoS attacks are implemented by forcing the targeted computer to reset, or by consuming its resources so that it can no longer provide its intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.

HTTP GET FLOOD

An HTTP GET flood is exactly as it sounds: a massive influx of legitimate HTTP GET requests that come from large numbers of users, usually connection-oriented bots. These requests mimic legitimate users and are nearly impossible for applications, and even harder for traditional security components, to detect. The result of this attack is similar in effect: server errors, increasingly degraded performance, and resource exhaustion. This attack is particularly dangerous to applications deployed in cloud-based environments (public or private) that are enabled with auto-scaling policies, as the system will respond to the attack by launching more and more instances of the application. Limits must be imposed on auto-scaling policies to ensure the financial impact of an HTTP GET flood does not become overwhelming.

SLOW LORIS

Slowloris consumes resources by “holding” connections open by sending partial HTTP requests. It subsequently sends headers at regular intervals to keep the connections from timing out or being closed due to lack of activity. This causes resources on the web/application servers to remain dedicated to the attacking clients and keeps them unavailable for fulfilling legitimate requests.

SLOW HTTP POST

A slow HTTP POST is a twist on Slowloris in which the client sends POST headers with a legitimate content-length. After the headers are sent, the message body is transmitted at slow speed, thus tying up the connection (server resources) for long periods of time. A relatively small number of clients performing this attack can effectively consume all resources on the web/application server and render it useless to legitimate users.

APPLICATION ATTACK THEME: SLOW

Notice a theme here? That’s because clients can purposefully (and sometimes inadvertently) effect a DoS on a service simply by filling its send/receive queues slowly. The reason this works is similar to the theory behind SYN flood attacks, where all available queues are filled and thus render the server incapable of accepting/responding until the queues have been emptied. Slow pulls or pushes of content keep data in the web/application server queue and thus “tie up” the resources (RAM) associated with that queue. A web/application server has only so much RAM available to commit to queues, and thus a DoS can be effected simply by using a small number of v e r y slow clients that do little other than tie up resources with what are otherwise legitimate interactions. While the HTTP GET flood (page flood) is still common (and works well), the “slow” variations are becoming more popular because they require fewer clients to be successful. Fewer clients makes it harder for infrastructure to determine an attack is in progress because historically flooding using high volumes of traffic is more typical of an attack, and solutions are designed to recognize such events. They are not, however, generally designed to recognize what appears to be a somewhat higher volume of very slow clients as an attack.
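A simple way to make the "slow" theme concrete is to track per-connection progress and reap anything that dribbles data or holds a request open too long, which is the general idea behind slow-attack mitigations. The sketch below is mine and intentionally minimal; real proxies apply far more nuance.

```python
# Sketch: flagging "slow" clients by enforcing a minimum transfer rate and
# a maximum time to deliver a complete request.

import time

class SlowClientReaper:
    def __init__(self, min_bytes_per_sec=50, max_request_seconds=30):
        self.min_rate = min_bytes_per_sec
        self.max_age = max_request_seconds
        self.conns = {}   # conn_id -> (start_time, bytes_received)

    def on_bytes(self, conn_id, nbytes, now=None):
        """Record bytes received on a connection."""
        now = now if now is not None else time.time()
        start, total = self.conns.get(conn_id, (now, 0))
        self.conns[conn_id] = (start, total + nbytes)

    def should_drop(self, conn_id, now=None):
        """True if the connection is dribbling data or has been open too long."""
        now = now if now is not None else time.time()
        start, total = self.conns.get(conn_id, (now, 0))
        age = now - start
        if age > self.max_age:
            return True
        if age > 5 and total / age < self.min_rate:
            return True
        return False
```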
THEMES HELP POINT to a SOLUTION

Recognizing the common themes underlying modern attacks is helpful in detecting the attack and subsequently determining what type of solution is necessary to mitigate such an attack. In the case of flooding, high-performance security infrastructure and policies regarding transaction rates, coupled with rate shaping based on protocols, can mitigate attacks. In the case of slow consumption of resources, it is generally necessary to leverage a high-capacity intermediary that essentially shields the web/application servers from the impact of such requests, coupled with emerging technology that enables a context-aware solution to better detect such attacks and then act upon that knowledge to reject them. When faced with a new attack type, it is useful to try to determine the technique behind the attack – regardless of implementation – as it can provide the clues necessary to implement a solution and address the attack before it can impact the availability and performance of web applications. It is important to recognize that solutions only mitigate denial of service attacks. They cannot prevent them from occurring.

Related:
What We Learned from Anonymous: DDoS is now 3DoS
There Is No Such Thing as Cloud Security
Jedi Mind Tricks: HTTP Request Smuggling
When Is More Important Than Where in Web Application Security
Defeating Attacks Easier Than Detecting Them
The Application Delivery Spell Book: Contingency
What is a Strategic Point of Control Anyway?
Layer 4 vs Layer 7 DoS Attack
Putting a Price on Uptime
Why Single-Stack Infrastructure Sucks

Who Took the Cookie from the Cookie Jar … and Did They Have Proper Consent?
Cookies as a service enabled via infrastructure services provide an opportunity to improve your operational posture.

Fellow DevCentral blogger Robert Haynes posted a great look at a UK law regarding cookies. Back in May a new law went into effect regarding “how cookies and other ‘cookie-like’ objects are stored on users’ devices.” If you haven’t heard about it, don’t panic – there’s a one-year grace period before enforcement begins and those £500 000 fines are being handed out. The clock is ticking, however.

What do the new regulations say? Well essentially whereas cookies could be stored with what I, as a non-lawyer, would term implied consent, i.e. the cookies you set are listed along with their purpose and how to opt out in some interminable privacy policy on your site, you are now going to have to obtain a more active and informed consent to store cookies on a user’s device. -- The UK Cookie Law

Robert goes on to explain that the solution to this problem requires (1) capturing cookies and (2) implementing some mechanism to allow users to grant consent. Mind you, this is not a trivial task. There are logic considerations – not all cookies are set at the same time – as well as logistical issues – how do you present a request for consent? Once consent is granted, where do you store that? In a cookie? That you must gain consent to store? Infinite loops are never good. And of course, what do you do if consent is not granted, but the application depends on that cookie existing? To top it all off, the process of gathering consent requires modification to application behavior, which means new code, testing, and eventually deployment. Infrastructure services may present an alternative approach that is less disruptive technologically, but does not necessarily address the business or logic ramifications resulting from such a change.

COOKIES as a SERVICE

Cookies as a Service, a.k.a. cookie gateways, wherein cookie authorization and management is handled by an intermediate proxy, is likely best able to mitigate the expense and time associated with modifying applications to meet the new UK regulation. As Robert describes, he’s currently working on a network-side scripting solution to meet the UK requirements that leverages a full-proxy architecture’s ability to mediate for applications and communicate directly with clients before passing requests on to the application. Not only is this a valid approach to managing privacy regulations, it’s also a good means of providing additional security against attacks that leverage cookies either directly or indirectly. Cross-site scripting, browser vulnerabilities, and other attacks that bypass the same-origin policy of modern web browsers – sometimes by design to circumvent restrictions on integration methods – as well as piggy-backing on existing cookies as a means to gain unauthorized access to applications are all potential dangers of cookies. By leveraging encryption of cookies in conjunction with transport layer security, i.e. SSL, organizations can better protect both users and applications from unintended security consequences.

Implementing a cookie gateway should make complying with regulations like the new UK policy a less odious task. By centralizing cookie management on an intermediate device, cookies can be easily collected and displayed along with the appropriate opt-out / consent policies without consuming application resources or requiring every application to be modified to include the functionality to do so.
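Robert's iRule is still a work in progress per the post, so the sketch below is not it; it is just a rough Python rendering of the gateway logic described above: hold back Set-Cookie headers until the client carries a consent marker, which the gateway itself sets once the user opts in (the marker being a strictly functional cookie, which is how the chicken-and-egg the article jokes about is commonly sidestepped). Names are illustrative.

```python
# Rough sketch of cookie-gateway logic at an intermediary: strip Set-Cookie
# headers from responses until the client presents a consent marker, which
# the gateway sets after the user accepts its consent prompt.

CONSENT_COOKIE = "cookie_consent"      # hypothetical marker name

def client_has_consented(request_headers):
    """Check the inbound Cookie header for the consent marker."""
    cookies = request_headers.get("Cookie", "")
    return any(c.strip().startswith(CONSENT_COOKIE + "=yes")
               for c in cookies.split(";"))

def filter_response_headers(request_headers, response_headers):
    """Pass Set-Cookie through only for clients that have opted in."""
    if client_has_consented(request_headers):
        return response_headers
    return [(name, value) for name, value in response_headers
            if name.lower() != "set-cookie"]

def record_consent(response_headers):
    """Called when the user accepts the gateway's consent prompt."""
    response_headers.append(("Set-Cookie",
                             CONSENT_COOKIE + "=yes; Path=/; Secure; HttpOnly"))
    return response_headers
```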
AN INFRASTRUCTURE SERVICE

This is one of the (many) ways in which an infrastructure service hosted in “the network” can provide value to both application developers and business stakeholders. Such a reusable infrastructure-hosted service can be leveraged to provide services to all applications and users simultaneously, dramatically reducing the time and effort required to support such a new initiative. Reusing an infrastructure service also reduces the possibility of human error during the application modification, which can drag out the deployment lifecycle and delay time to market. In the case of meeting the requirements of the new UK regulations, that delay could become costly.

According to a poll of CIOs regarding their budgets for 2010, the Society for Information Management found that “Approximately 69% of IT spending in 2010 will be allocated to existing systems, while about 31% will be spent on building and buying new systems. This ratio remains largely the same compared to this year's numbers.” If we are to be more responsive to new business initiatives and flip that ratio such that we are spending less on maintaining existing systems and more on developing new systems and methods of management (i.e. cloud computing), we need to leverage strategic points of control within the network to provide services that minimize the resources, time, and money spent on existing systems. Infrastructure services such as cookie gateways provide the opportunity to enhance security, comply with regulations, and eliminate costs in application development that can be reallocated to new initiatives and projects. We need to start treating the network and its unique capabilities as assets to be leveraged and services to be enabled instead of a fat, dumb pipe.

Related:
Understanding network-side scripting
This is Why We Can’t Have Nice Things
When the Data Center is Under Siege
Don’t Forget to Watch Under the Floor
IT as a Service: A Stateless Infrastructure Architecture Model
You Can’t Have IT as a Service Until IT Has Infrastructure as a Service
F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
The UK Cookie Law