A Billion More Laughs: The JavaScript hack that acts like an XML attack
Don is off in Lowell working on a project with our ARX folks, so I was working late last night (finishing my daily read of the Internet) and ended up reading Scott Hanselman's discussion of threads versus processes in Chrome and IE8. It was a great read, if you like that kind of thing (I do), and it does a great job of digging into some of the RAMifications (pun intended) of the new programmatic models for both browsers. But this isn't about processes or threads, it's about an interesting comment that caught my eye:

This will make IE8 Beta 2 unresponsive ..
t = document.getElementById("test");
while(true) { t.innerHTML += "a"; }

What really grabbed my attention is that this little snippet of code is so eerily similar to the XML "Billion Laughs" exploit, in which an entity is expanded recursively for, well, forever and essentially causes a DoS attack on whatever system (browser, server) was attempting to parse the document. What makes scripts like this scary is that many forums and blogs that are less vehement about disallowing HTML and script can be easily exploited by a code snippet like this, which could cause the browser of every user viewing the infected post to essentially "lock up".

This is one of the reasons why IE8 and Chrome moved to a more segregated tabbed model, with each tab basically its own process rather than a thread - to prevent corruption in one from affecting others. But given the comment, this doesn't seem to be the case with IE8 (there's no indication Chrome was tested with this code, so whether it handles the situation or not is still to be discovered). This is likely because it's not a corruption; it's valid JavaScript. It just happens to consume large quantities of memory very quickly without giving the other processes in other tabs in IE8 a chance to execute.

The reason the JavaScript version was so intriguing is that it's nearly impossible to stop. The XML version can be easily detected and prevented by an XML firewall, and most modern XML parsers can be configured to stop parsing and thus prevent the document from wreaking havoc on a system. But this JavaScript version is much more difficult to detect and thus prevent, because it's code and thus not confined to a specific format with specific syntactical attributes. I can think of about 20 different versions of this script - all valid and all of them different enough to make pattern matching or regular expressions useless for detection. And I'm no evil genius, so you can bet there are many more.

The best option for addressing this problem? Disable scripts. The conundrum is that disabling scripts can cause many, many sites to become unusable because they are taking advantage of AJAX functionality, which requires...yup, scripts. You can certainly enable scripts only on specific sites you trust (which is likely what most security folks would suggest should be default behavior anyway), but that's a PITA and the very users we're trying to protect aren't likely to take the time to do this - or even understand why it's necessary. With the increasing dependence upon scripting to provide functionality for RIAs (Rich Internet Applications), we're going to have to figure out how to address this problem, and address it soon. Eliminating scripting is not an option, and a default deny policy (essentially whitelisting) is unrealistic. Perhaps it's time for signed scripts to make a comeback.
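To make that point concrete, here are two of the many semantically equivalent variants of the same runaway loop - purely illustrative, to show why signature or pattern matching is an unreliable defense:

// Variant 1: bracket notation instead of dot notation, for(;;) instead of while(true)
var el = document["getElementById"]("test");
for (;;) { el.innerHTML = el.innerHTML + "a"; }

// Variant 2: the property name is assembled at runtime and the work is
// scheduled through setInterval, so neither "innerHTML" nor a loop keyword
// appears in a form a simple regex would catch
var p = "inner" + "HTML";
setInterval(function () { document.getElementById("test")[p] += "aa"; }, 0);

Both do exactly what the original snippet does; neither shares enough syntax with it to be caught by a signature written against the original.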
5 Years Later: OpenAJAX Who?

Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of "cloud" today. You couldn't turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I'm well aware of how impactful "standards" and "specifications" really are in the real world, but the problem - interoperability across a growing field of JavaScript libraries - seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard - whether through a standards body or by de facto adoption - and Dojo is one of the favored entrants in the race to become that standard.
-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one "industry standard": OpenAjax Metadata.

OpenAjax Hub
The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata
OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)
OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification given the recent rise to ascendancy of JSON as the preferred format among developers for APIs. Granted, when the alliance was formed XML was all the rage and it was believed it would be the dominant format for quite some time, given the popularity of similar technological models such as SOA, but still - the reliance on XML while the plurality of developers race to JSON may provide some insight on why OpenAjax has received very little notice since its inception.
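For a sense of the gap in question, here is the same toy widget description expressed both ways - the field names are invented for illustration and are not taken from the OAM specification:

// XML, the format OAM standardized on
var asXml = '<widget name="cityPicker">' +
            '  <property name="city" datatype="String"/>' +
            '</widget>';

// JSON, the format developers have largely gravitated to for APIs
var asJson = { widget: "cityPicker", properties: { city: "String" } };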
Ignoring the XML factor (which undoubtedly is a fairly impactful one) there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) - a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it's page-level integration.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going - and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model - always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces, with vendor implementation being decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or by failure - never incompatibility in form or function.

Neither specification has really taken that direction. OAM - as previously noted - standardizes on XML and is primarily used to describe APIs and components; it isn't an API or model itself. The Alliance wiki describes the specification: "The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers." Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM's efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time).
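For those who never ran into it, the Hub's page-level pub/sub model looks roughly like this - a minimal sketch in which two widgets built on different toolkits exchange an event through the shared hub rather than calling each other directly. The topic name and the map widget are hypothetical; the publish/subscribe calls are as I recall them from the Hub 1.0 specification, so treat the whole thing as illustrative:

// Widget B (built on toolkit B, say a map) listens for city selections
OpenAjax.hub.subscribe("myapp.city.selected", function (topic, data) {
  mapWidget.panTo(data.city, data.state);   // hypothetical map API
});

// Widget A (built on toolkit A, say a grid) announces a selection
OpenAjax.hub.publish("myapp.city.selected", { city: "Madison", state: "WI" });

The hub decouples the two libraries from one another, but as noted above it only pays off when multiple libraries actually share a page.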
It appears, simply, that the OpenAjax Alliance - driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) - has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn't - and likely won't ever - exist. So it's no surprise to discover that references to and activity from OpenAjax are nearly zero since 2009. Given the statistics showing the rise of jQuery - both as a percentage of site usage and developer usage - to the top of the JavaScript library heap, it appears that at least the prediction that "one toolkit will become the standard - whether through a standards body or by de facto adoption" was accurate. Of course, since that's always the way it works in technology, it was kind of a sure bet, wasn't it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAjax Alliance several infrastructure vendors - folks who produce application delivery controllers, switches and routers and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors - and increasingly cloud computing providers that offer infrastructure services - to be very aware of the methods and toolkits being used by developers to build applications.

Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based, HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can begin to focus solutions specifically on those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data - attempts at SQL injection, for example - must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.

By understanding jQuery - and other developer toolkits and standards used to exchange data - infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
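As a concrete (and hypothetical - the /cities endpoint is made up) illustration of the two data forms the documentation describes, and of the raw-XML case an intermediary would have to handle differently:

// Both of these arrive at the server as application/x-www-form-urlencoded
// key/value pairs: city=Madison&state=WI
$.ajax({ url: "/cities", type: "POST", data: "city=Madison&state=WI" });
$.ajax({ url: "/cities", type: "POST", data: { city: "Madison", state: "WI" } });

// Sending raw XML instead: skip jQuery's processing and declare the MIME type,
// which means any WAF or ADC in the path must now parse XML, not form data
$.ajax({
  url: "/cities",
  type: "POST",
  processData: false,
  contentType: "text/xml",
  data: "<city><name>Madison</name><state>WI</state></city>"
});

A policy written against one of these shapes will not match the other two, which is exactly why the format choices developers make matter to the infrastructure.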
The New Distribution of The 3-Tiered Architecture Changes Everything

As the majority of an application's presentation layer logic moves to the client, it induces changes that impact the entire application delivery ecosystem.

The increase in mobile clients, in demand for rich, interactive web applications, and the introduction of the API as one of the primary means by which information and content is shared across applications on the web is slowly but surely forcing a change back toward a traditional three-tiered architecture, if not in practice then in theory. This change will have a profound impact on the security, delivery, and scalability of the application, but it also forces changes in the underlying network and application network infrastructure to support what is essentially a very different delivery model.

What began with Web 2.0 - AJAX, primarily - is continuing to push in what seems a backward direction in architecture as a means to move web applications forward. In the old days the architecture was three-tiered, yes, but those tiers were maintained almost exclusively on the server side of the architecture, with the browser acting only as the interpreter of the presentation layer data that was assembled on the server. Early AJAX applications continued using this model, leveraging the out-of-band (asynchronous) access provided by the XMLHttpRequest object in major browsers as a means to dynamically assemble smaller pieces of the presentation layer. The browser was still relegated to providing little more than rendering support.

Enter Web 2.0 and RESTful APIs, and a subtle change occurred. These APIs returned not presentation layer fragments, but data. The presentation layer logic required to display that data in a meaningful way, based on the application, became the responsibility of the browser. This was actually a necessary evolution in web application architecture to support the increasingly diverse set of end-user devices being used to access web applications. Very few people would vote for maintaining separate sets of presentation layer logic for mobile devices and for richer desktop clients like browsers. By forcing the client to assemble and maintain the presentation layer, that complexity on the server side is removed and a single, unified set of application logic resources can be delivered to every device without concern for cross-browser, cross-device support being "built in" to the presentation layer logic. This has a significant impact on the ability to rapidly support emerging clients - mobile and otherwise - that may not support the same robust set of capabilities available in a traditional browser.

By reducing the presentation layer assembly on the server side to little more than layout - if that - the responsibility for assembling all the components, their display, and the routing of data to the proper component is laid on the client. This means one server-side application truly can support both mobile and desktop clients with very little modification. It means an API provided by a web application can not only be used by the provider of that API to build its own presentation layer (client), but third-party developers can also leverage that API and the data it provides in whatever way they need/choose/desire. This is essentially the point we are almost at today.
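A minimal sketch of the shift described above, against a hypothetical /api/cities endpoint: the server returns data, and the client - browser or mobile app alike - owns the presentation logic that turns it into an interface.

var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/cities/Madison/WI", true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var city = JSON.parse(xhr.responseText);          // data, not markup
    document.getElementById("name").textContent = city.name;
    document.getElementById("pop").textContent = city.population;
  }
};
xhr.send();

The same API call can feed a desktop browser, a mobile app, or a third-party mashup; only the client-side rendering differs.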
F5 Friday: Domain Sharding On-Demand

Domain sharding is a well-known practice to improve application performance - and you can implement it automatically, without modifying your applications, today.

If you're a web developer, especially one that deals with AJAX or is responsible for page optimization (aka "Make It Faster or Else"), then you're likely familiar with the technique of domain sharding, if not the specific terminology. For those who aren't familiar with the technique (or the term), domain sharding is a well-known practice used to trick browsers into opening many more connections with a server than are allowed by default. This is important for improving page load times in the face of a page containing many objects. Given that the number of objects comprising a page has more than tripled in the past 8 years, now averaging nearly 85 objects per page, this technique is not only useful, it's often a requirement.

Modern browsers like to limit themselves to 8 connections per host, which means that just to load one page a browser has to not only make 85 requests over 8 connections, it must also receive those responses over those same, limited 8 connections. Consider, too, that the browser only downloads 2-6 objects over a single connection at a time, making this process somewhat fraught with peril when it comes to performance. This is generally why bandwidth isn't a bottleneck for web applications, but rather TCP-related issues such as round trip time (latency) are.

Here are the two main points that need to be understood when discussing bandwidth vs. RTT in regards to page load times:
1.) The average web page has over 50 objects that will need to be downloaded (reference: http://www.websiteoptimization.com/speed/tweak/average-web-page/) to complete page rendering of a single page.
2.) Browsers cannot (generally speaking) request all 50 objects at once. They will request between 2-6 (again, generally speaking) objects at a time, depending on browser configuration.
This means that to receive the objects necessary for an average web page you will have to wait for around 25 round trips to occur, maybe even more. Assuming a reasonably low 150ms average RTT, that's a full 3.75 seconds of page loading time, not counting the time to download a single file. That's just the time it takes for the network communication to happen to and from the server. Here's where the bandwidth vs. RTT discussion takes a turn decidedly in the favor of RTT.
-- RTT (Round Trip Time): Aka - Why bandwidth doesn't matter

So the way this is generally addressed is to "shard" the domain - create many imaginary hosts that the browser views as being separate and thus eligible for their own set of connections. This spreads the object requests and responses over more connections simultaneously, allowing what is effectively parallelization of page loading functions to improve performance. Obviously this requires some coordination, because every host name needs a DNS entry and then you have to ... yeah, modify the application to use those "new" hosts to improve performance. The downside is that you have to modify the application, of course, but also that this results in a static mapping. On the one hand, this can be the perfect time to perform some architectural overhauls and combine domain sharding with creating scalability domains to improve not only performance but scalability (and thus availability). You'll still be stuck with the problem of hosts tightly coupled to content, but hey - you're getting somewhere, which is better than nowhere.
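For illustration, the manual version of that coordination might look something like this - the shard host names are hypothetical and would each need their own DNS entry pointing at the same content:

// Spread object URLs across a few asset hosts so the browser opens a
// separate connection pool for each one
var shards = ["assets1.example.com", "assets2.example.com", "assets3.example.com"];

function shardUrl(path) {
  // hash the path so the same object always maps to the same shard,
  // keeping it cacheable (a simple character-sum hash for illustration)
  var h = 0;
  for (var i = 0; i < path.length; i++) { h = (h + path.charCodeAt(i)) % shards.length; }
  return "http://" + shards[h] + path;
}

document.getElementById("hero").src = shardUrl("/images/hero.png");

Every image, script and stylesheet reference in the application has to go through something like shardUrl() - which is exactly the "modify the application" cost described above.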
Or the better way (this is an F5 Friday, so you knew that was coming) would be to leverage a solution capable of automatically sharding domains for you. No mess, no fuss, no modifying the application. All the benefits at one-tenth the work.

DOMAIN SHARDING with BIG-IP WebAccelerator

What BIG-IP WebAccelerator does, automatically, is shard domains by adding a prefix to the FQDN (Fully Qualified Domain Name). The user would initiate a request for "www.example.com" and WebAccelerator would go about its business of requesting it (or pulling objects from the cache, as per its configuration). Before returning the content to the user, however, WebAccelerator shards the domain, adding prefixes to objects. The browser then does its normal processing and opens the appropriate number of connections to each of the hosts, requesting each of the individual objects. As WebAccelerator receives those requests, it knows to deshard (unshard?) the hosts and make the proper requests to the web or application server, thus ensuring that the application understands the requests. This means no changes to the actual application. The only changes necessary are to DNS, to ensure the additional hosts are recognized appropriately, and to WebAccelerator, to configure domain sharding on-demand.

This technique is useful for improving performance of web applications and is further enhanced with BIG-IP platform technology like OneConnect, which multiplexes (and thus reuses) TCP connections to origin servers. This reduces the round trip time between WebAccelerator and the origin servers by keeping connections open, thus eliminating the overhead of TCP connection management. It improves page load time by allowing the browser to request more objects simultaneously. This particular feature falls into the transformative category of web application acceleration, as it transforms content as a means to improve performance. This is also called FEO (Front End Optimization), as opposed to WPO (Web Performance Optimization), which focuses on optimization and acceleration of delivery channels, such as the use of compression and caching.

Happy Sharding!

Fire and Ice, Silk and Chrome, SPDY and HTTP
F5 Friday: The Mobile Road is Uphill. Both Ways.
F5 Friday: Performance, Throughput and DPS
F5 Friday: Protocols are from Venus. Data is from Mars.
Acceleration is strategic, optimization is a tactic.
Top-to-Bottom is the New End-to-End
As Time Goes By: The Law of Cloud Response Time (Joe Weinman)
All F5 Friday Entries on DevCentral
Network Optimization Won't Fix Application Performance in the Cloud
Fixing Internet Explorer & AJAX

A few weeks ago, as developers are wont to do, I rewrote our online gameroom. Version 1 was getting crusty, and I'd written all the AJAX handlers manually and wanted to clean up the code by using Prototype and Script.aculo.us. You may recall we discussed using these tools to build a Web 2.0 interface to iControl. So I rewrote it and was pretty pleased with myself. Until one of our players asked why it wasn't working in Internet Explorer (IE).

Now Version 1 hadn't worked in IE either, but because I have a captive set of users I ignored the problem and forced them all to use Firefox instead. But this player's wife will be joining us soon and she's legally blind. She uses a screen reader to get around the Internet, and as luck would have it, the reader only works with IE. So I started digging into the problem. I had thought it was my code (silly me), and thus that moving to Prototype would solve the problem. No such luck. Everything but the periodically updated pieces of the application worked fine. The real-time updating components? Broken in IE. I looked around and found this very interesting article on Wikipedia regarding known problems with IE and XMLHttpRequest, the core of AJAX.

From the Wikipedia article:
Most of the implementations also realize HTTP caching. Internet Explorer and Firefox do, but there is a difference in how and when the cached data is revalidated. Firefox revalidates the cached response every time the page is refreshed, issuing an "If-Modified-Since" header with value set to the value of the "Last-Modified" header of the cached response. Internet Explorer does so only if the cached response is expired (i.e., after the date of received "Expires" header).

Basically, the problem lies with IE's caching mechanisms. So if you were trying to build an AJAX application with a real-time updating component and it didn't seem to work in IE, now you may know why that is. There are workarounds:
1. Modify the AJAX call (within the client-side script) to check the response and, if necessary, make a second call with a Date value in the past to force the call to the server.
2. Append a unique query string to the call, for example a timestamp. This makes the URI unique, ensuring it won't be in the cache and forcing IE to call out to the server to get it (see the sketch at the end of this post).
3. Change all requests to use POST instead of GET.
4. Force the "Expires" header to be set in the past (much in the way we expire cookies programmatically).
Setting cache control headers may also help force IE to act according to expectations.

I used option #3, because it was a simple, quick fix for me to search the single script using Ajax.PeriodicalUpdater and automatically change all the GETs to POSTs. That may not be feasible for everyone, hence the other available options. Option #4 could easily be achieved using iRules, and could be coded such that only requests sent via IE are modified. In fact, Joe has a great post on how to prevent caching on specific file types that can be easily modified to solve the problem with IE. First we want to know if the browser is IE, and if so, we want to modify the caching behavior on the response. Don't forget that IE7 uses a slightly different User-Agent header than previous versions of IE. Don't look for specific versions, just try to determine if the browser is a version of IE.
when HTTP_REQUEST {
   # assume this is not IE until the User-Agent says otherwise
   # (initializing the variable also avoids a runtime error in
   # HTTP_RESPONSE for non-IE requests)
   set foundmatch 0
   if {[string tolower [HTTP::header "User-Agent"]] contains "msie"} {
      set foundmatch 1
   }
}
when HTTP_RESPONSE {
   if {$foundmatch == 1} {
      # strip cacheability from the response so IE revalidates every time
      HTTP::header replace Cache-Control no-cache
      HTTP::header replace Pragma no-cache
      HTTP::header replace Expires -1
   }
}

You could also use an iRule to accomplish #3 dynamically, changing the code only for IE browsers instead of all browsers. This requires a bit more work, as you'll have to search through the response payload for 'GET' and replace it with 'POST'. It's a good idea to make the search string as specific as possible to ensure that only the HTTP methods are replaced in the Ajax.PeriodicalUpdater calls and not every place the letters may appear in the document, hence the inclusion of the quotes around the methods.

Happy Coding!

Imbibing: Coffee
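As promised above, here's what option #2 looks like on the client side - a minimal sketch using Prototype's Ajax.Request against a hypothetical endpoint. The timestamp makes every URI unique, so IE has nothing in cache to (incorrectly) serve:

function fetchUpdates() {
  new Ajax.Request("/gameroom/status?ts=" + new Date().getTime(), {
    method: "get",
    onSuccess: function (response) {
      // update the relevant pieces of the page here
    }
  });
}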
iRules: Simulating RESTful Behavior

One of the premises of REST (Representational State Transfer) is that it is simpler to use well-known HTTP methods (PUT, DELETE, GET, POST) to perform actions upon resources than it is to construct complex SOAP or traditional HTTP-based application messages. REST resources are identified by URIs (Uniform Resource Identifiers) that are specific to the resource. For example, instead of retrieving information about a city with a URI something like this:
http://www.example.com/getcityinformation.php?city=Madison&state=WI
you would use the GET HTTP method along with a URI that looks more like this:
http://www.example.com/Madison/WI
You could also (ostensibly) use the PUT method to add a new city, the POST method to update an existing city, and the DELETE method to remove the resource entirely. All assuming you had permission to do so, of course.

The problem is that many browsers do not support PUT and DELETE. Some even *gasp* fail to properly support POST, but in most instances you can rely upon GET and POST support at the very least. Also problematic are the security concerns around allowing PUT and DELETE methods on a web or application server. For years we've warned and even threatened administrators with horror stories about the potential vulnerabilities that can be exposed by enabling these methods, so it's highly unlikely that you could convince that admin to enable them now. But what if you really, really want to support REST-like URIs? All you need is a BIG-IP and some iRule fu.

Rewrite that Request

iRules can help here because they enable you to manipulate just about every aspect of an HTTP request, including rewriting the URI and transforming the actual request message. This means you can use URIs like:
http://www.example.com/put/Madison/WI or http://www.example.com/delete/Madison/WI
in order to simulate REST-like behavior without encoding the resource values in hidden form fields. This is especially nice for AJAX-based applications, which can rely more upon URI query parameters to perform actions than form fields. To enable this transformation, encode the appropriate REST action and subsequent resource identifiers in the URI path:
http://www.example.com/put/resource/identifier/
Use the POST method if you're sending along a message that will modify the resource using PUT (add) or POST (update). The following example shows a form action indicating a REST POST operation (update) where the resource identifier is a city-state combination, in this case Madison, WI:
http://www.example.com/post/Madison/WI
The actual request and its associated data could also be constructed dynamically using JavaScript and the XMLHttpRequest object. The important thing is to encode the REST-like parameters in the URI path in such a way that they can be easily deconstructed by an iRule and are consistent across the application.

Next, write an iRule to extract the method and resource identifiers from the URI path, making sure to (1) append the appropriate resource identifiers and parameter names within the HTTP::payload and (2) update HTTP::uri so your request is properly handled by the receiving server. This rule assumes a very simple application approach - each REST action corresponds to a PHP script bearing the same name as the action. It also assumes the client is using POST. We can parse out the parameters from a GET query string just as easily in an iRule, but I prefer POST because it alleviates any potential issues arising from very long URIs.
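Before getting to the iRule itself, here is roughly what that dynamically constructed client request (mentioned above) might look like - a minimal sketch in which the form field is hypothetical:

var xhr = new XMLHttpRequest();
xhr.open("POST", "/post/Madison/WI", true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    // update the page with the response
  }
};
xhr.send("population=233209");  // hypothetical field; the iRule appends the rest

With the request on the wire in that shape, the iRule below can pull the action and resource identifiers out of the path: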
when HTTP_REQUEST {
   # parse out the REST-like parms to build a POST body
   set pathvalues [split [HTTP::path] "/"]
   set action [lindex $pathvalues 1]
   set city [lindex $pathvalues 2]
   set state [lindex $pathvalues 3]

   # create the new POST body to include the rewritten parameters
   set payload [HTTP::payload]
   set newpayload "$payload&action=$action&city=$city&state=$state"

   # replace the existing payload with the expanded one
   if {$action eq "put"} {
      HTTP::payload replace 0 [HTTP::payload length] $newpayload
   }

   HTTP::uri "/city/$action.php"
}

Disclaimer: there is no error checking in this rule; it assumes everything is perfect, which of course is unlikely to be the case in the real world. I am the archetypal developer, so I'll just say "it works on MY BIG-IP". :-) Obviously this particular iRule is highly customized for a specific application, so YMMV. The basic concept of using the URI path and rewriting it is not only applicable to REST-like applications, but can also be used to support vanity URIs and other customizable path-based applications.

The question certainly arises: why would I use REST-like behavior when I could achieve the same functionality using traditional web-based forms and eliminate the transformation step? That's a good question, and there are several benefits to this approach:
- It completely eliminates the path to the application/script, which hides the implementation language from the client. This is often referred to as resource cloaking or service virtualization and is a security mechanism designed to hide as much information from attackers as possible.
- The resulting code required to support AJAX-based requests and applications is much cleaner and somewhat smaller, resulting in a better performing application and less application logic on the client that can be manipulated (or exploited!).
- For Web 2.0 applications it can provide a simpler URI to be shared with other members of the community and a URI that is more easily recalled by users.
- It lays the groundwork for further transformations that enable integration between WOA (Web Oriented Architecture) and SOA (Service-Oriented Architecture). This would require rewriting the request in a SOAP-XML format for exchange with a Web Service instead of the conversion to a simple POST structure, and a subsequent rewrite of the response.

Given the flexibility of iRules there are likely many other ways in which you could implement REST-like application support. That's what makes iRules so cool - there's always one more way to solve any given problem, and it's likely that one of those ways is appropriate for your unique environment.

Imbibing: Pink Lemonade

Technorati tags: F5, MacVittie, iRules, REST, application delivery, Web 2.0, AJAX
APM Disable JavaScript Patching

I want to disable JavaScript Patching for a portal access resource. The web site contains JavaScript that uses AJAX (there is no link content in the JavaScript code). The web site should still function fully after JavaScript Patching is disabled. I know that there is a need to enable JavaScript Patching for some of the links (XMLHttpRequests, *.js files). How can I know which links I need to patch? Many thanks for an answer!
Web 2.0 Security Part 3: A MASHup of Problems

This is Part 3 of a series on Web 2.0 Security. A good way to remember things is to use mnemonics, so when you're trying to list the security issues relevant to Web 2.0, just remember this: it's a MASHup.
More of everything
Asymmetric data formats
Scripting based
Hidden URLs and code
This episode is brought to you by the letter "S".

Scripting-based

Web 2.0 technologies, specifically AJAX, are based on the execution of scripts. As we mentioned in Part 1 of this series, there are a lot more scripts than are typically found in a traditional web-based application. While on the server side this is often alleviated by combining multiple scripts into a single application that takes advantage of parameter-based execution - more closely related to SOA than not - there are also scripts on the client that open up new security threats. In fact, here are a few client-side scripting vulnerabilities that have been discovered - and subsequently exploited:
Yahoo Worm
MySpace Worm
AJAX-Spell HTML Tag Script Injection Vulnerability

These vulnerabilities only scratch the surface of how JavaScript might be exploited. One of the problems with JavaScript is that it's interpreted on the client, and there are no validation mechanisms. That is, malicious JavaScript looks just like valid JavaScript. You can't just examine the script for specific keywords or patterns and determine that the script is malicious.

JavaScript is also self-extensible. That is to say that you can modify existing JavaScript objects - like the XMLHttpRequest object - by forcing the browser to evaluate new JavaScript that extends and adds functionality to the object. And by "forcing" I really mean by delivering a script to the client; the browser will gleefully interpret any script in the page as long as it's in a language it understands. JavaScript is also dynamic. It can evaluate code that extends itself, which in turn evaluates more code, and so on. The possibilities are limited only by the creativeness of the author. Where the sandbox was supposed to - and for the most part does - protect the client from most of the really horrible possible exploits, such as destruction of your files, it doesn't prevent some of the more subtle exploits dealing with sensitive data, such as cookie theft or just generally grabbing data from your global clipboard.

The Risks
There is no way to distinguish malicious script from valid script, leaving attackers free to inject scripts into the client via infected web sites or other techniques that modify the core behavior of Web 2.0 applications
Developers don't "own" the client (browser), so it's difficult to enforce specific security policies on users that might assist in protecting them from scripting-based vulnerabilities
Sensitive data can easily be retrieved
JavaScript is often used to construct URLs for communication; most vulnerability assessment scanners cannot interpret JavaScript and therefore cannot validate the constructed URLs

The issue of hidden URLs is the subject of the letter "H", which we'll discuss in the next part of this series.

Next: Hidden URLs

Imbibing: Apple Juice (no, I'm not kidding)

Technorati tags: F5, MacVittie, Web 2.0, AJAX, security, application security, JavaScript
As Client-Server Style Applications Resurface Performance Metrics Must Include the API

Mobile and tablet platforms are hyping HTML5, but many applications are still bound to a traditional client-server model, making API performance a top concern for organizations.

I recently received an e-mail from Strangeloop Networks with a subject of: "The quest for the holy grail of Web speed: 2-second page load times". Being focused on optimizing the user interface, they appropriately quoted usability expert Jakob Nielsen, but also included some interesting statistics:
57% of site visitors will bounce after waiting 3 seconds or less for a page to load.
Aberdeen Group surveyed 160 companies and discovered that, on average, slowing down a site by just one second results in a 7% reduction in conversions.
Shopzilla accelerated its average page load time from 6 seconds to 1.2 seconds and experienced a 12% increase in revenue and a 25% increase in page views.
Amazon performed its own page speed optimization and announced that, for every 100 milliseconds of improvement, revenues increased by 1%.
Microsoft slowed down its Bing site by two seconds, which led to a 4.3% loss in revenue per visitor.

The problem is not that this information is inaccurate in any way. It's not that I don't agree that performance is a top concern for organizations - especially those for whom web applications directly generate revenue. It's that "applications" are quickly becoming a mash-up of architectural models, not all of which leverage the ubiquitous web browser as the client. This is particularly true on mobile and tablet platforms, but increasingly true of web-delivered applications as well. Too, many applications are dependent upon third-party services via the use of Web 2.0 APIs that can compromise the performance of any application, browser-based or not.

API PERFORMANCE WILL BECOME CRITICAL

I was browsing Blackberry's App World on my Playbook with my youngest the other day, looking for some games appropriate for a 3-year old. He was helping, navigating like a pro, and pulling up descriptions of applications he found interesting based on their icon. When the application descriptions started loading slowly, i.e. took more than about 3 seconds to pop up on the screen, he started hitting the "back" button and trying another one. And another one. And another one. Ultimately he became quite frustrated with the situation and decided his Transformers were more interesting, as they were more interactive at the moment. Turns out I was having some connectivity issues that, in turn, impacted the Playbook's performance. I took away two things from the experience:
1. A three-year old's patience with application load times is approximately equal to the "ideal" load time for adults. Interesting, isn't it?
2. These mobile and tablet-deployed "applications" often require server-side API access. Therefore, API performance is critical to maintaining a responsive application.

It is further unlikely that all applications will simply turn to HTML5 as the answer to the challenges inherent in application platform deployment diversity. APIs have become staple traffic on the Internet as a means to interconnect and integrate disparate services, and it is folly to think they are going anywhere. What's more, if you know a thing or three about web applications, APIs are enabling real-time updating in record numbers today, with more and more application logic responsible for parsing and displaying data returned from those API calls residing on the client.
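What that means in practice is that measuring only page load misses the metric that actually frustrates the user. A minimal sketch of timing an individual API call from the client - the endpoint and the threshold are purely illustrative:

var start = new Date().getTime();
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/apps/1234/description", true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    var elapsed = new Date().getTime() - start;
    // much past ~3000ms and the user (of any age) has already hit "back"
    console.log("API call took " + elapsed + "ms");
  }
};
xhr.send();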
Consider, if you will, the data from the "API Billionaires Club" presented last year in "Open API Madness: The Party Has Just Begun for Cloud Developers". These are not just API calls coming from external sources; these are API calls coming from each organization's own applications as well as integrated, external sources. These APIs are generally calls for data in JSON or XML formats, not pre-formatted user interface markup (HTML). No amount of HTML manipulation is likely to improve the performance of API calls because there is no HTML to optimize. It's data, pure and simple, which means the bulk of the responsibility for ensuring wicked fast performance suitable to a three-year old's patience is going to land squarely on the application delivery chain and the application developer. That means minimizing processing and delivery time through carefully optimizing code (developers) and the delivery chain (operations).

OPTIMIZING the DELIVERY CHAIN

When the web first became popular, anyone who could use a scripting language and spit out HTML called themselves a "web developer." The need for highly optimized code to support the demanding performance requirements of end-users means that it's no longer enough to be able to spit out HTML or even JSON. It means developers need to be highly skilled in optimizing code on the server side such that processing times are as tight as they can be. Calculating Big O may become a valued skill once again.

But applications are not islands, and even the most highly optimized function in the world can be negatively impacted by factors outside the developer's control. The load on the application server - whether physical or virtual - can have a deleterious effect on application performance. Higher loads, less available memory and fewer CPU cycles translate into slower-executing code - no matter how optimized it may be. Processing cryptographic operations of any kind, be it for compression or security purposes, can consume resources and introduce latency into processing times when performed on the server. And the overhead of managing connections, usually TCP, can take as much time as processing a request. All those operations add up to latency that can drive the end-user response time over the patience threshold that results in an aborted transaction. And when I say transaction I mean request-reply transaction, not necessarily those that involve money.

Aborted transactions are also bad for application performance because they waste resources. That connection is held open based on the web or application server's configuration, and if the end-user aborted the transaction, it's ignoring the response but tying up resources that could be used elsewhere.

Application delivery chain optimization is a critical component of improving response time. Offloading cryptographic processing and protocol management can alleviate much of the load that negatively impacts application processing times and shifts the delivery-time component of application performance management from the developer to operations, where optimization and acceleration technologies can be applied regardless of data format. Whether it's HTML or JSON or XML is irrelevant; compression, caching and cryptographic offload can benefit both end-users and developers by mitigating those factors outside the developer's demesne that impact performance negatively.

THE WHOLE is GREATER than the SUM of its PARTS

It is no longer enough to measure the end-user experience based on load times in a browser.
The growing use of mobile devices - whether phones or tablets - and the increasingly interconnected web of integrated applications means the performance of an application is more complicated than it was in the past. The performance of an application today is highly dependent on the performance of APIs, and thus testing APIs specifically, from a variety of devices and platforms, is critical to understanding the impact high volume and load have on overall application performance. Testing API performance is critical to ensuring the end-user experience is acceptable regardless of the form factor of the client. If you aren't sure what acceptable performance might be, grab the nearest three-year old; they'll let you know loud and clear.

How to Earn Your Data Center Merit Badge
The Stealthy Ascendancy of JSON
Cloud Testing: The Next Generation
Data Center Feng Shui: Architecting for Predictable Performance
Now Witness the Power of this Fully Operational Feedback Loop
On Cloud, Integration and Performance
The cost of bad cloud-based application performance
I Find Your Lack of Win Disturbing
Operational Risk Comprises More Than Just Security
Challenging the Firewall Data Center Dogma
50 Ways to Use Your BIG-IP: Performance
That's Not Always an Option

Improving the performance of AJAX applications by switching servers isn't always feasible in a real environment.

It's nice to see the analysis of AJAX I did last year being validated, especially by one of the creators of the popular AJAX-focused toolkit, Dojo. While I agree with Dylan's assessment of where to begin the "search & destroy mission" and the reasons behind poor performance of AJAX-based applications, I just can't get behind his suggestion to switch Web servers simply to resolve highly aggressive polling-based applications.

The best place to begin a thorough search & destroy mission is with HTTP-level performance problems that can be resolved in server configuration and fine-tuning. Is the caching configured properly? Are there issues with load balancing? How many concurrent requests per server before performance suffers? AJAX applications typically reduce the data size per request, but highly aggressive polling can saturate your servers with too many requests. As AJAX applications become increasingly common, users expect real-time updates to accompany their real-time experience. To be effective, real-time or highly collaborative applications require a significant amount of AJAX polling to make the application work. If this is simply too demanding on your Web server, you may want to consider switching to a Comet server implementation such as Cometd, Lightstreamer, KnowNow, or lighttpd. Comet servers are optimized for longer-lived connections and higher volumes of concurrency than typical Web servers.

There are several reasons why this option simply isn't feasible. First and most obvious would be the availability of such servers if you're hosting an application. The hosting provider is going to determine what servers are available, and even they are going to consider stability and the potential costs of maintenance (in terms of skills, training, and hardware required) before simply deploying yet another web server. Within the enterprise, the chances of deploying another web server are even slimmer and will certainly take more time. Similar factors must be considered, as well as the stability and reliability of the software and a cost-benefit analysis of whether another server is really worth the investment. It often simply isn't as simple as switching to a new web server.

In large scale environments it's almost certainly more advantageous to implement a known, proven method for addressing the issues associated with highly aggressive polling applications such as AJAX. Application delivery network infrastructure handles these issues with more than just old fashioned load balancing. TCP multiplexing has long been an advanced feature of application delivery controllers like BIG-IP and helps to alleviate the burden on servers caused by excessive connections, often by up to 33%. Additionally, advanced features and product modules that support caching - even of so-called dynamic content - like WebAccelerator can further keep web servers from becoming unduly challenged by this new breed of applications. Employing an application delivery network solution has additional benefits over tuning the cache on a web server or switching to a new web server, such as compression.
WebAccelerator specifically goes one step further and, through its unique MultiConnect technology, improves the handling of connections for AJAX applications on the browser as well, where connections are typically limited by default configurations. Increasing the connections available on the client further improves the performance of AJAX applications by allowing requests to be delivered as soon as possible, rather than whenever a connection becomes available.

Certainly changing your web server of choice is an option, but there are too many situations and environments where such a choice is not feasible. In such situations it might be wiser to consider a transparently deployed option such as an application delivery controller that can provide benefits above and beyond even the most highly tuned web server.

Imbibing: Water

Technorati tags: MacVittie, F5, BIG-IP, application delivery, AJAX, Web 2.0, application acceleration