ajax

5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I recently stumbled over a nearly five-year-old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of “cloud” today: you couldn’t turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I clearly remembered writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade, I’m well aware of how impactful “standards” and “specifications” really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed real and imminent at the time, so there was a need for someone to address it before it got completely out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long-term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard.

-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability.
The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one “industry standard”: OpenAjax Metadata.

OpenAjax Hub

The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata

OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)

OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification given the recent rise to ascendancy of JSON as developers’ preferred format for APIs. Granted, when the alliance was formed XML was all the rage, and it was believed it would remain the dominant format for quite some time given the popularity of similar technological models such as SOA. Still, the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.

Ignoring the XML factor (which is undoubtedly a fairly impactful one), there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page.
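The publish-subscribe pattern at the heart of such a hub can be sketched in a few lines. This is a generic sketch, not the actual OpenAjax Hub API – names like createHub, subscribe and publish are illustrative:

```javascript
// Minimal publish/subscribe mediator: toolkits on the same page
// exchange messages through the hub instead of calling each other.
function createHub() {
  const subscribers = {}; // topic -> array of callbacks

  return {
    // Register a callback for a topic; returns an unsubscribe function.
    subscribe(topic, callback) {
      (subscribers[topic] = subscribers[topic] || []).push(callback);
      return () => {
        subscribers[topic] = subscribers[topic].filter(cb => cb !== callback);
      };
    },
    // Deliver a message to every current subscriber of the topic.
    publish(topic, message) {
      (subscribers[topic] || []).forEach(cb => cb(message));
    }
  };
}

// Two "toolkits" coordinate without direct references to one another.
const hub = createHub();
hub.subscribe("cart.updated", msg => console.log("toolkit B saw:", msg.count));
hub.publish("cart.updated", { count: 3 });
```

The point of the pattern is exactly the page-level integration discussed below: the hub decouples who publishes from who listens, at the cost of introducing a secondary event framework.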
Don summed it up nicely during a discussion on the topic: it’s page-level integration. This is a very different approach to the problem than it first appeared the alliance would take.

The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry-standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces, with vendor implementations being decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes traffic go over, under, around and through the various routers and switches and components that make up the Internet with alacrity. Routing problems today are caused by human error in configuration or by failure – never by incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn’t an API or model itself. The Alliance wiki describes the specification: “The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers.” Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center.
Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but OAM does not appear to be gaining widespread support or mindshare despite IBM’s efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub/integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time). It appears, simply, that the OpenAjax Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field, one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn’t – and likely won’t ever – exist. So it’s no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009.
Given the statistics showing the rise of jQuery – both as a percentage of site usage and of developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that “one toolkit will become the standard—whether through a standards body or by de facto adoption” was accurate. Of course, since that’s always the way it works in technology, it was kind of a sure bet, wasn’t it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAjax Alliance several infrastructure vendors – folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than traditional form-based HTML applications do. The type of content, as well as the usage patterns for applications, can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application. As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically on those choices.
Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}. If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application-layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed.
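The serialization the excerpt describes – a flat map becoming key1=value1&key2=value2 – is straightforward to sketch, along with the inverse parsing an intermediary has to perform. This is a simplified sketch: real jQuery.param() also handles nested structures and arrays.

```javascript
// Simplified sketch of what jQuery.param() does to a flat data map
// before an Ajax request is sent.
function toQueryString(data) {
  return Object.keys(data)
    .map(key => encodeURIComponent(key) + "=" + encodeURIComponent(data[key]))
    .join("&");
}

// This is the form a WAF or load balancer must be able to pull back
// apart in order to inspect individual values.
function parseQueryString(qs) {
  const out = {};
  qs.split("&").forEach(pair => {
    const [k, v] = pair.split("=");
    out[decodeURIComponent(k)] = decodeURIComponent(v);
  });
  return out;
}

console.log(toQueryString({ key1: "value1", key2: "value2" }));
// key1=value1&key2=value2
```

An intermediary that understands this format can examine each decoded value individually – which is exactly what application-layer switching or injection detection requires.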
By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.

APM Disable JavaScript Patching
I want to disable JavaScript patching for a portal access resource. The web site contains JavaScript using AJAX (there is no link content in the JavaScript code). The web site should be fully functional after disabling JavaScript patching. I know that there is a need to enable JavaScript patching for some of the links (XMLHttpRequests, *.js). How can I know which links I need to patch? Many thanks for an answer!

The Next Evolution of Application Architecture is Upon Us
And what it means to #devops

One of the realities of application development is that there are a lot of factors that go into its underlying architecture. I came of age during the Epoch of Client-Server Architecture (also known as the 1990s and, later, the .com era) and we were taught, very firmly, that despite the implication of "client-server" that there were but two distinct "tiers" comprising an application, there were actually three. The presentation layer, the GUI, resided on the client. All business logic resided on the server, and a third, data access tier completed the trifecta. This worked well because developers tailored each application to a specific business purpose. Thus, implementation of business logic occurred along with application logic. There was no real reason to separate them.

As applications grew in complexity and use, SOA came of age. SOA introduced the principles of reuse we still adhere to today, and the idea that business logic should be consistent and shared across all applications that might need it. I know, revolutionary, wasn't it? Before SOA could complete its goal of world domination, however, the Internet and deployment models changed. Dramatically. [Interestingly enough, a post two years ago on this topic was fairly accurate about this migration.]

THE NEW APP WORLD ORDER

Today, an "application" is no longer defined necessarily by business function, but by a unique combination of client and business function. It's not just the business logic; it's the delivery and presentation mechanism that make it unique and, of course, more challenging for operations. Business logic has moved into a converged business logic-data tier more commonly known as The API. The client is still, of course, responsible for the presentation. But application logic - the domain of state and access - is in flux for some.
As illustrated by the ill-advised decision to place application logic in the presentation layer, some developers haven't quite adopted the practice of deploying an application (or domain) logic tier to intermediate and maintain consistent application behavior. But there always exists unique processing that must occur based on context. In some cases, some data might be marked cacheable as a means to achieve better performance for mobile clients communicating over a mobile network, while in others it might not be, because the client is a web-based application running on a PC over a LAN. A native mobile application deals with the state of the user - "are they logged in?" - differently than a web application relying on cookies. The business logic should not be impacted by this. Ultimately, the business logic for retrieving order X for customer Y does not inherently change based on client characteristics. It cannot, in fact, be shared - reused - if it contains application-specific details regarding the validation of state, unless such an implementation uses a lot of conditional statements (which must be modified every time a new method is introduced, by the way) to determine whether a user is logged in or not. Thus, we move the application-specific logic - the domain logic - to its own tier, usually implemented by or on a proxy that intermediates between the API and the client.
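A minimal sketch of that separation, with all names hypothetical: the business function stays client-agnostic and reusable, while the proxy layer owns the "is the user logged in" concern.

```javascript
// Business logic tier: reusable, knows nothing about clients,
// sessions or cookies. "Retrieve order X for customer Y."
function getOrder(orders, customerId, orderId) {
  const order = orders.find(o => o.id === orderId);
  return order && order.customerId === customerId ? order : null;
}

// Domain logic tier: proxies for the client, enforcing the
// application-specific state/access rule before delegating.
function makeOrderProxy(orders, isLoggedIn) {
  return {
    getOrder(session, customerId, orderId) {
      if (!isLoggedIn(session)) {
        return { error: "not authenticated" };
      }
      return { order: getOrder(orders, customerId, orderId) };
    }
  };
}
```

Swapping how isLoggedIn works (cookie vs. native-app token) changes only the domain tier; the business function is untouched, which is the whole argument for the split.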
Domain Logic Tier
• Maintains an internally consistent model representation on both sides of the app (client and server)
• Is ontological
• Often involves application state and access requirements such as "the user must be logged-in and an admin" or "this object is cacheable"
• Consists of elements and functions that are specific to this application

Business Logic Tier
• Concerned with coordinating valid interactions between presentation and data (client and data)
• Is teleological
• Often involves direct access to data or application of business requirements such as "if order is > 1000 apply discount X"
• Consists of elements that are common to all applications, i.e. does not rely on a given UI or interface

What's most fascinating about this change is that a "proxy" tier traditionally proxies for the server-side application, but in this new model it is proxying for the client. That's not odd for developers, because if you break down the traditional model, the "server" piece of the three-tiered architecture really was just a big application-specific proxy to the data. But it is odd for operations, because the new model takes advantage of a converged application-network proxy that is capable of performing tasks like load balancing and authentication and caching as well as mediating for an API, which may include transformative and translative functions.

IMPACT ON DEVOPS

What this ultimately means for devops is an increasing role in application architecture, from inception to production. It means devops will need to go beyond concerns of web performance or application deployment lifecycle management and into the realm of domain logic implementation and deployment. That may mean a new breed of developer: one that is still focused on development but does so in a primarily operational environment, in the network.
It means enterprise architects will need to extend their view into operations, into the network, and codify for developers the lines of demarcation between domain and business logic. It means a very interesting new application model that basically adopts the premise of application delivery but adds domain logic to its catalog of services. It means devops is going to get even more interesting as more applications adopt this new, three-tiered architecture.

Programmable Cache-Control: One Size Does Not Fit All
#webperf For addressing challenges related to the performance of #mobile devices and networks, caching is making a comeback.

It's interesting - and almost amusing - to watch the circle of technology run around best practices with respect to performance over time. Back in the day, caching was the ultimate means by which web application performance was improved, and there was no lack of solutions and techniques that manipulated caching capabilities to achieve optimal performance. Then it was suddenly in vogue to address the performance issues associated with JavaScript on the client. As Web 2.0 ascended and AJAX-based architectures ruled the day, JavaScript was Enemy #1 of performance (and security, for that matter). Solutions and best practices began to arise to address when JavaScript loaded, from where, and whether or not it was even active. And now, once again, we're back at the beginning with caching. In the interim years, it turns out, developers have not become better about how they mark content for caching, and with the proliferation of access from mobile devices over sometimes constrained networks, it has once again come to the attention of operations (who are ultimately responsible, for some reason, for the performance of web applications) that caching can dramatically improve the performance of web applications. [ Excuse me while I take a breather - that was one long thought to type. ]

Steve Souders, web performance engineer extraordinaire, gave a great presentation at HTML5DevCon that was picked up by High Scalability: Cache is King! The aforementioned article notes:

Use HTTP cache control mechanisms: max-age, etag, last-modified, if-modified-since, if-none-match, no-cache, must-revalidate, no-store. Want to prevent HTTP sending conditional GET requests, especially over high latency mobile networks. Use a long max-age and change resource names any time the content changes so that it won't be cached improperly.
-- Better Browser Caching Is More Important Than No Javascript Or Fast Networks For HTTP Performance

The problem is, of course, that developers aren't putting all these nifty-neato-keen tags and meta-data in their content, and the cost of modifying existing applications to do so may result in a prioritization somewhere right below having an optional, unnecessary root canal. In other cases, the way in which web applications are built today - we're still using AJAX-based, real-time updates of chunks of content rather than whole pages - means simply adding tags and meta-data to the HTML isn't necessarily going to help, because they refer to the page and not to the data/content being retrieved and updated for that "I'm a live, real-time application" feel that everyone has to have today. Too, caching tags and meta-data in HTML don't address every type of data. JSON, for example, commonly returned as the response to an API call (used as the building blocks for web applications more and more frequently these days), isn't going to be impacted by the HTML caching directives. That has to be addressed in a different way, either on the server side (think Apache mod_expires) or on the client (HTML5 contains new capabilities specifically for this purpose, and there are usually cache directives hidden in AJAX frameworks like jQuery).

The Programmable Network to the Rescue

What you need is the ability to insert the appropriate tags, on the appropriate content, in such a way as to make sure whatever you're about to do (a) doesn't break the application and (b) is actually going to improve the performance of the end-user experience for that specific request. Note that (b) is pretty important, actually, because there are things you do to content being delivered to end users on mobile devices over mobile networks that might make things worse if you do them to the same content being delivered to the same end user on the same device over the wireless LAN.
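A minimal sketch of that kind of context-dependent decision, assuming a hypothetical intermediary that chooses a Cache-Control header per response. The context fields and lifetimes here are illustrative, not any product's actual policy language:

```javascript
// Choose a Cache-Control value based on request/response context
// rather than a single "all on" or "all off" setting.
function cacheHeaderFor(context) {
  const { cacheable, mobileNetwork, contentType } = context;

  if (!cacheable) {
    return "no-store";
  }
  // JSON API responses: short-lived, and always revalidated.
  if (contentType === "application/json") {
    return "max-age=30, must-revalidate";
  }
  // Static content over a high-latency mobile network: cache
  // aggressively to avoid conditional GET round trips.
  if (mobileNetwork) {
    return "max-age=86400";
  }
  // Same content over a fast LAN: a short lifetime is fine.
  return "max-age=300";
}
```

The same object gets different directives depending on who is asking and over what network - the "if X and Y then ON else if Z then OFF" logic described below.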
Network capabilities matter, so it's important to remember that. To avoid rewriting applications (and perhaps changing the entire server-side architecture by adding modules) you could just take advantage of programmability in the network. When enabled as part of a full-proxy network intermediary, the ability to programmatically modify content in flight becomes invaluable as a mechanism for improving performance, particularly with respect to adding (or modifying) cache headers, tags, and meta-data. By allowing the intermediary to cache the cacheable content while simultaneously inserting the appropriate cache control headers to manage the client-side cache, performance is improved. By leveraging programmability, you can start to apply device or network or application (or any combination thereof) logic to manipulate the cache as necessary while also using additional performance-enhancing techniques like compression (when appropriate) or image optimization (for mobile devices). The thing is that a generic "all on" or "all off" for caching isn't always going to result in the best performance. There's logic to it that says you need the capability to say "if X and Y then ON else if Z then OFF". That's the power of a programmable network: the ability to write the kind of logic that takes into consideration the context of a request and takes the appropriate actions in real time. Because one size (setting) simply does not fit all.

F5 Friday: Domain Sharding On-Demand
Domain sharding is a well-known practice to improve application performance – and you can implement it automatically, without modifying your applications, today.

If you’re a web developer, especially one that deals with AJAX or is responsible for page optimization (aka “Make It Faster or Else”), then you’re likely familiar with the technique of domain sharding, if not the specific terminology. For those who aren’t familiar with the technique (or the term), domain sharding is a well-known practice used to trick browsers into opening many more connections with a server than is allowed by default. This is important for improving page load times when a page contains many objects. Given that the number of objects comprising a page has more than tripled in the past 8 years, now averaging nearly 85 objects per page, this technique is not only useful, it’s often a requirement. Modern browsers like to limit themselves to 8 connections per host, which means that just to load one page a browser has to not only make 85 requests over 8 connections, but must also receive those responses over those same, limited 8 connections. Consider, too, that the browser only downloads 2-6 objects over a single connection at a time, making this process somewhat fraught with peril when it comes to performance. This is generally why bandwidth isn’t a bottleneck for web applications; rather, it’s TCP-related issues such as round trip time (latency).

Here are the two main points that need to be understood when discussing bandwidth vs. RTT in regards to page load times:

1.) The average web page has over 50 objects that will need to be downloaded (reference: http://www.websiteoptimization.com/speed/tweak/average-web-page/) to complete page rendering of a single page.

2.) Browsers cannot (generally speaking) request all 50 objects at once. They will request between 2-6 (again, generally speaking) objects at a time, depending on browser configuration.
This means that to receive the objects necessary for an average web page you will have to wait for around 25 round trips to occur, maybe even more. Assuming a reasonably low 150ms average RTT, that’s a full 3.75 seconds of page loading time, not counting the time to download a single file. That’s just the time it takes for the network communication to happen to and from the server. Here’s where the bandwidth vs. RTT discussion takes a turn decidedly in the favor of RTT.

-- RTT (Round Trip Time): Aka – Why bandwidth doesn’t matter

So the way this is generally addressed is to “shard” the domain – create many imaginary hosts that the browser views as being separate and thus eligible for their own set of connections. This spreads the object requests and responses over more connections simultaneously, allowing what is effectively parallelization of page loading functions to improve performance. Obviously this requires some coordination, because every host name needs a DNS entry, and then you have to … yeah, modify the application to use those “new” hosts to improve performance. The downside is that you have to modify the application, of course, but also that this results in a static mapping. On the one hand, this can be the perfect time to perform some architectural overhauls and combine domain sharding with creating scalability domains to improve not only performance but scalability (and thus availability). You’ll still be stuck with the problem of hosts tightly coupled to content, but hey – you’re getting somewhere, which is better than nowhere. The better way (this is an F5 Friday, so you knew that was coming) would be to leverage a solution capable of automatically sharding domains for you. No mess, no fuss, no modifying the application. All the benefits at one-tenth the work.

DOMAIN SHARDING with BIG-IP WebAccelerator

What BIG-IP WebAccelerator does, automatically, is shard domains by adding a prefix to the FQDN (Fully Qualified Domain Name).
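A minimal sketch of the prefixing idea follows. The hostname scheme and hash are illustrative only – this is not WebAccelerator’s actual algorithm:

```javascript
// Deterministically map each object path to one of N sharded
// hostnames so the browser opens a separate connection pool per host.
function shardHost(path, baseHost, shardCount) {
  // Simple stable hash of the path: the same object always maps to
  // the same shard, which keeps browser caches effective.
  let hash = 0;
  for (let i = 0; i < path.length; i++) {
    hash = (hash * 31 + path.charCodeAt(i)) >>> 0;
  }
  return "wa" + (hash % shardCount) + "." + baseHost;
}

// Rewrite an object reference before the page is returned to the
// client; the intermediary reverses this mapping on the way back in.
function shardUrl(path, baseHost, shardCount) {
  return "http://" + shardHost(path, baseHost, shardCount) + path;
}
```

With 4 shards, 85 objects spread across roughly 32 connections instead of 8, which is where the parallelization win comes from.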
The user would initiate a request for “www.example.com” and WebAccelerator would go about its business of requesting it (or pulling objects from the cache, as per its configuration). Before returning the content to the user, however, WebAccelerator shards the domain, adding prefixes to objects. The browser then does its normal processing and opens the appropriate number of connections to each of the hosts, requesting each of the individual objects. As WebAccelerator receives those requests, it knows to deshard (unshard?) the hosts and make the proper requests to the web or application server, thus ensuring that the application understands the requests. This means no changes to the actual application. The only changes necessary are to DNS, to ensure the additional hosts are recognized appropriately, and to WebAccelerator, to configure domain sharding on demand.

This technique is useful for improving the performance of web applications and is further enhanced with BIG-IP platform technology like OneConnect, which multiplexes (and thus reuses) TCP connections to origin servers. This reduces the round trip time between WebAccelerator and the origin servers by keeping connections open, thus eliminating the overhead of TCP connection management. It improves page load time by allowing the browser to request more objects simultaneously. This particular feature falls into the transformative category of web application acceleration, as it transforms content as a means to improve performance. This is also called FEO (Front End Optimization), as opposed to WPO (Web Performance Optimization), which focuses on optimization and acceleration of delivery channels, such as the use of compression and caching.

Happy Sharding!

Fire and Ice, Silk and Chrome, SPDY and HTTP
F5 Friday: The Mobile Road is Uphill. Both Ways.
F5 Friday: Performance, Throughput and DPS
F5 Friday: Protocols are from Venus. Data is from Mars.
Acceleration is strategic, optimization is a tactic.
Top-to-Bottom is the New End-to-End
As Time Goes By: The Law of Cloud Response Time (Joe Weinman)
All F5 Friday Entries on DevCentral
Network Optimization Won’t Fix Application Performance in the Cloud

F5 Friday: If Only the Odds of a Security Breach were the Same as Being Hit by Lightning
#v11 AJAX, JSON and ever-increasing web application spread increase the odds of succumbing to a breach. BIG-IP ASM v11 reduces those odds, making it more likely you’ll win at the security table.

When we use an analogy often enough it becomes pervasive, to the point of becoming an idiom. One such idiom is expressing the unlikelihood of an event by comparing it to being hit by lightning. The irony is that the odds of being hit by lightning are actually fairly significant – about 1:576,000. Too many organizations view their risk of a breach as being akin to being hit by lightning because they’re small, or don’t have a global presence, or what have you. The emergence years ago of “mass” web attacks rendered – or should have rendered - such arguments ineffective. Given the increasing number of web transactions on the Internet and the success of web-based attacks in enacting a breach, even comparing the risk to the odds of being hit by lightning does little but prove that eventually, you’re going to get hit. Research by Zscaler earlier this year indicated an average (median) number of web transactions per day, per user of 1912. Analysts put the number of Internet users at about two billion. That translates into more than three trillion web transactions per day. Every day, three trillion transactions are flying around the web. Based on the odds of being hit by lightning, that means over 6 million of those transactions would breach an organization. The odds suddenly aren’t looking as good as they might seem, are they? If you think that’s bad, you ain’t read the most recent Ponemon results, which recently concluded that the odds of being breached in the next year were a “statistical certainty.” No, it’s not paranoia if they really are out to get you, and guess what? Apparently they are out to get you.
Truth be told, I’m not entirely convinced of the certainty of a breach, because it assumes precautionary measures and behavior are not modified in the face of such a dire prediction. If organizations were to, say, change their strategy as a means to get better odds, then the only statistical certainty would likely be that a breach would be attempted – but not necessarily be successful. The bad news is that even if you have protections in place, the bad guys’ methods are evolving. If your primary means of protection are internal to your applications, the possibility remains that a new attack will require a rewrite – and redeployment. And even if you are taking advantage of external protection such as a web application firewall like BIG-IP ASM (Application Security Manager), it’s possible that it hasn’t provided complete coverage or accounted for what are misconfiguration errors: typographical case-sensitivity errors that can effectively erode protections. The good news is that even as the bad guys are evolving, so too are those external protective mechanisms like BIG-IP ASM. BIG-IP ASM v11 introduced significant enhancements that provide better protection for emerging development format standards as well as address those operational oops that can leave an application vulnerable to being breached.

BIG-IP v11 Enhancements

AJAX and JSON Support

AJAX’s growth over the past few years has established it as the status quo for building interactive web applications. Increasingly, these interaction exchanges via AJAX use JSON as their data format of choice. Previous versions of BIG-IP ASM were unable to properly parse and therefore secure JSON payloads. A secondary issue with AJAX is related to the blocking pages generally returned by web application firewalls. For example, a BIG-IP ASM blocking page is HTML-based.
When an embedded AJAX control triggers a policy violation, the client can’t present the blocking page because it expects to receive back JSON, not HTML. This leaves operators in the dark and makes troubleshooting AJAX issues very difficult. To address both these issues, BIG-IP ASM v11 can now parse JSON payloads and enforce proper security policies. This is advantageous not only for protecting AJAX-exchanged payloads, but for managing integration via JSON-based APIs from external sources. Being able to secure what is essentially third-party content is paramount to ensuring a positive security posture regardless of external providers’ level of security. BIG-IP ASM v11 can now also display a blocking page by injecting JavaScript into the response that will pop up a window with a support ID, traceable by operators for easier troubleshooting. The ability to display a blocking page and ID is unique to BIG-IP ASM v11.
Case Insensitivity
Case sensitivity in general is derived from the underlying web server OS. While a case-sensitive policy is an advantage on Unix/Linux platforms, it can be painful to manage on other platforms, because developers often write code without considering case. For example, a web server configured to serve a single file type, “html”, may also need to be configured for Html, hTml, HTml, etc., because a developer may have fat-fingered links in the code with these typographical errors. On Windows platforms this is not a problem for the application, but it becomes an issue for the web application firewall because it is necessarily case-sensitive. BIG-IP ASM v11 now includes a simple checkbox-style flag that tells it to ignore case, making it more adaptable to Windows-based platforms where case may be variable. This is important in reducing false positives – situations where the security device thinks a request is malicious when in reality it is not.
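To see why a case-sensitive policy generates false positives for Windows-hosted applications, consider a minimal sketch of an allowed-file-type check. The function and flag names here are illustrative only, not BIG-IP ASM’s actual configuration API:

```javascript
// Hypothetical WAF-style check: is the requested URI's extension on the
// allowed list? The ignoreCase flag mirrors the checkbox described above.
function uriAllowed(uri, allowedExtensions, ignoreCase) {
  const ext = uri.split('.').pop();
  return allowedExtensions.some(allowed =>
    ignoreCase ? allowed.toLowerCase() === ext.toLowerCase()
               : allowed === ext
  );
}

// A fat-fingered link like "index.Html" is a false positive when case matters:
console.log(uriAllowed('/index.Html', ['html'], false)); // false – request blocked
console.log(uriAllowed('/index.Html', ['html'], true));  // true – request allowed
```

On Windows the server would have served `/index.Html` without complaint; only the case-sensitive policy check stood in the way.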
As web application firewalls generally contain very granular, URI-level policies to better protect against injection-style attacks, they often flag case differences as “errors” or “possible attacks.” If configured to block such requests, the web application firewall would incorrectly reject requests for pages or URIs with case differences caused by typographical errors. This enhancement allows operators to ignore case and focus on securing the payload.
BIG-IP ASM VE
BIG-IP ASM is now available in a virtual form factor, ASM VE. A virtual form factor makes it easier to evaluate and test in lab environments, and enables developers to assist in troubleshooting when vulnerabilities or issues arise that involve the application directly. Virtual patching, too, is better enabled by a virtual form factor, as is the ability to deploy remotely in cloud computing environments. There is no solution short of scissors that can reduce your risk of breach to 0. But there are solutions that can reduce that risk to a more acceptable level, and one of those solutions is BIG-IP ASM. Getting hit by lightning on the Internet is a whole lot more likely than the idiom makes it sound, and anything that can reduce the odds is worth investigating sooner rather than later. More BIG-IP ASM v11 Resources: Application Security in the Cloud with BIG-IP ASM Securing JSON and AJAX Messages with F5 BIG-IP ASM BIG-IP Application Security Manager Page Audio White Paper - Application Security in the Cloud with BIG-IP ASM F5 Friday: You Will Appsolutely Love v11 SQL injection – past, present and future Introducing v11: The Next Generation of Infrastructure BIG-IP v11 Information Page F5 Monday? The Evolution To IT as a Service Continues … in the Network F5 Friday: The Gap That become a Chasm All F5 Friday Posts on DevCentral ABLE Infrastructure: The Next Generation – Introducing v11
Dynamic Application Control and Attack Protection
If you’ve perused any media outlet of late, you know the barrage of cyber threats is unrelenting, and protecting your networks and applications continues to be a never-ending task. Organizations are making significant investments in IT security to improve their attack protection, but still need to control costs and keep systems running efficiently. Since these attacks target multiple layers of the infrastructure, both the network and applications, it is increasingly difficult to properly reduce the risk of exposure. Silos of protection and network firewalls alone cannot do the trick. Add to that the dynamic nature of today’s infrastructures, especially cloud environments, and the job is even tougher. Federal mandates and standards for government agencies, contractors and the public sector add to an organization’s growing list of concerns. DNS can be vulnerable to attacks; interactive Web 2.0 applications can be vulnerable; and IT needs analytics and detailed reporting to understand what’s happening within their dynamic data center. On top of that, IPv6 is now a reality and v6-to-v4 translation services are in demand. F5’s most recent release, BIG-IP v11, delivers a unified platform that helps protect Web 2.0 applications and data, secure DNS infrastructures, and establish centralized application access and policy control. In BIG-IP v10, F5 offered the Application Ready Solution Templates to reduce the time, effort, and application-specific knowledge required of administrators to optimally deploy applications. With BIG-IP v11, F5 introduces iApp, a template-driven system that automates application deployment. iApp helps reduce human error by enabling an organization’s IT department to apply preconfigured, approved security policies and repeat and reuse them with each application deployment.
iApp analytics also provides real-time visibility into application performance, which helps IT staff identify the root cause of security and performance issues quickly and efficiently. For DNS, BIG-IP GTM has offered a DNSSEC solution since v10, and with v11 we’ve added DNS Express, a high-speed authoritative DNS delivery solution. DNS query response performance can be improved as much as 10x. DNS Express offloads existing DNS servers and absorbs the flood of illegitimate DNS requests during an attack – all while supporting legitimate queries. It’s critical to be able to protect and scale the DNS infrastructure when DoS or DDoS attacks occur, since DNS is just as vulnerable as the web application or service being targeted. For interactive web applications, BIG-IP ASM v11 can parse JSON (JavaScript Object Notation) payloads and protect AJAX (Asynchronous JavaScript and XML) applications that use JSON for data transfer between the client and server. AJAX, which is a mix of technologies, is becoming more pervasive since it allows developers to deliver content without having to reload the entire HTML page in which the AJAX objects are embedded. Unfortunately, poor AJAX code can allow an attacker to modify the application and prevent a user from seeing their customized content, or even initiate an XSS attack. Some developers are also using JSON payloads, a lightweight data-interchange format that is understood by most modern programming languages and used to exchange information between browser and server. If JSON traffic is unsecured and carries sensitive information, there is the potential for data leakage. BIG-IP ASM can enforce the proper security policy and can even display an embedded blocking alert message. Very few WAF vendors are capable of enforcing policy on JSON (other than the XML gateways), and no other vendor can display an embedded blocking alert message.
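As a rough illustration of the XSS risk in poor AJAX code – this sketches the client-side failure mode being guarded against, not ASM’s enforcement mechanism – a handler that injects JSON fields straight into markup lets attacker-supplied script reach the page, while one that escapes them renders it as inert text:

```javascript
// Minimal HTML-escaping helper (illustrative; real apps should use a
// well-tested library or DOM APIs such as textContent).
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
          .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

// Attacker-controlled field arriving in a JSON payload:
const payload = JSON.parse('{"name":"<script>alert(1)</script>"}');

const unsafe = '<div>' + payload.name + '</div>';            // script would execute
const safe   = '<div>' + escapeHtml(payload.name) + '</div>'; // rendered as text

console.log(unsafe);
console.log(safe);
```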
F5 is the only WAF vendor that fully supports AJAX, which is becoming more and more common even within enterprises. Also with v11, BIG-IP ASM is now available in a Virtual Edition (BIG-IP ASM VE), either as a stand-alone appliance or as an add-on module for BIG-IP Local Traffic Manager Virtual Edition (LTM VE). BIG-IP ASM VE delivers the same functionality as the physical edition and helps companies maintain compliance, including PCI DSS, when they deploy applications in the cloud. If an organization discovers an application vulnerability, BIG-IP ASM VE can quickly be deployed in a cloud environment, enabling organizations to virtually patch vulnerabilities immediately until the development team can permanently fix the application. Additionally, organizations are often unable to fix applications developed by third parties, and this lack of control prevents many of them from considering cloud deployments. But with BIG-IP ASM VE, organizations have full control over securing their cloud infrastructure. After about five years of IPv4 depletion stories, exhaustion finally seems imminent, and IPv6 is starting to be deployed. The problem is that most enterprise networks are not yet ready to handle IPv4 and IPv6 at the same time. BIG-IP v11 provides advanced support for IPv6 with built-in DNS 6-to-4 translation services and the ability to direct traffic to any server in mixed (IPv4 and IPv6) environments. This gives organizations the flexibility to support IPv6 devices today while transitioning their backend servers to IPv6 over time. Many more new features are available across all F5 solutions, including BIG-IP APM, which added support for site-to-site IPsec tunnels, AppTunnels, Kerberos ticketing, enhanced virtual desktops, Android and iOS clients, and multi-domain single sign-on. These are just a few of the many new enhancements available in BIG-IP v11.
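BIG-IP’s translation internals aren’t detailed here, but the general mechanism behind DNS 6-to-4 services is standardized (RFC 6052): an IPv4 address is embedded in an IPv6 prefix – commonly the well-known 64:ff9b::/96 – so that v6-only clients can be pointed at v4-only servers. A minimal sketch of that address synthesis:

```javascript
// Sketch of RFC 6052-style address synthesis used by DNS64/NAT64 gateways:
// the 32-bit IPv4 address becomes the low 32 bits of a 64:ff9b::/96 address.
function synthesizeIPv6(ipv4) {
  const o = ipv4.split('.').map(Number);
  const hi = ((o[0] << 8) | o[1]).toString(16); // octets 1-2 as one hex group
  const lo = ((o[2] << 8) | o[3]).toString(16); // octets 3-4 as one hex group
  return '64:ff9b::' + hi + ':' + lo;
}

console.log(synthesizeIPv6('192.0.2.33')); // "64:ff9b::c000:221"
```

A DNS64-style resolver would return such a synthesized AAAA record, and the translation gateway would unpack the embedded IPv4 address to reach the real server.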
ps
Resources: The BIG-IP v11 System F5 Delivers on Dynamic Data Center Vision with New Application Control Plane Architecture F5’s Enhanced BIG-IP Security Solutions Thwart Multilayer Cyber Attacks ABLE Infrastructure: The Next Generation – Introducing v11 The Evolution To IT as a Service Continues … in the Network iApp Wiki on DevCentral Whitepapers: High-Performance DNS Services in BIG-IP Version 11 Symmetric Optimization in the Cloud with BIG-IP WOM VE Secure, Optimized Global Access to Corporate Resources Maximizing the Strategic Point of Control in the Application Delivery Network Application Security in the Cloud with BIG-IP ASM F5 iApp: Moving Application Delivery Beyond the Network Simplifying Single Sign-On with F5 BIG-IP APM and Active Directory BIG-IP Virtual Edition Products, The Virtual ADCs Your Application Delivery Network Has Been Missing
As Client-Server Style Applications Resurface Performance Metrics Must Include the API
Mobile and tablet platforms are hyping HTML5, but many applications are still bound to a traditional client-server model, making API performance a top concern for organizations. I recently received an e-mail from Strangeloop Networks with the subject: “The quest for the holy grail of Web speed: 2-second page load times”. Being focused on optimizing the user interface, they appropriately quoted usability expert Jakob Nielsen, but also included some interesting statistics:
57% of site visitors will bounce after waiting 3 seconds or less for a page to load.
Aberdeen Group surveyed 160 companies and discovered that, on average, slowing down a site by just one second results in a 7% reduction in conversions.
Shopzilla accelerated its average page load time from 6 seconds to 1.2 seconds and experienced a 12% increase in revenue and a 25% increase in page views.
Amazon performed its own page speed optimization and announced that, for every 100 milliseconds of improvement, revenues increased by 1%.
Microsoft slowed down its Bing site by two seconds, which led to a 4.3% loss in revenue per visitor.
The problem is not that this information is inaccurate in any way. It’s not that I don’t agree that performance is a top concern for organizations – especially those for whom web applications directly generate revenue. It’s that “applications” are quickly becoming a mash-up of architectural models, not all of which leverage the ubiquitous web browser as the client. That is particularly true on mobile and tablet platforms, but increasingly true of web-delivered applications as well. Too, many applications are dependent upon third-party services via Web 2.0 APIs, which can compromise the performance of any application, browser-based or not.
API PERFORMANCE WILL BECOME CRITICAL
I was browsing BlackBerry’s App World on my PlayBook with my youngest the other day, looking for some games appropriate for a 3-year-old.
He was helping, navigating like a pro, and pulling up descriptions of applications he found interesting based on their icons. When the application descriptions started loading slowly, i.e. took more than about 3 seconds to pop up on the screen, he started hitting the “back” button and trying another one. And another one. And another one. Ultimately he became quite frustrated with the situation and decided his Transformers were more interesting, as they were more interactive at the moment. It turns out I was having some connectivity issues that, in turn, impacted the PlayBook’s performance. I took away two things from the experience: 1. A three-year-old’s patience with application load times is approximately equal to the “ideal” load time for adults. Interesting, isn’t it? 2. These mobile- and tablet-deployed “applications” often require server-side API access. Therefore, API performance is critical to maintaining a responsive application. It is further unlikely that all applications will simply turn to HTML5 as the answer to the challenges inherent in application platform deployment diversity. APIs have become a staple of Internet traffic as a means to interconnect and integrate disparate services, and it is folly to think they are going anywhere. What’s more, if you know a thing or three about web applications, you know that APIs are enabling real-time updating in record numbers today, with more and more of the application logic responsible for parsing and displaying data returned from those API calls residing on the client. Consider, if you will, the data from the “API Billionaires Club” presented last year in “Open API Madness: The Party Has Just Begun for Cloud Developers”. These are not just API calls coming from external sources; they are API calls coming from each organization’s own applications as well as integrated, external sources. These APIs are generally calls for data in JSON or XML formats, not pre-formatted user interface markup (HTML*).
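A toy sketch of that pattern – the API returns raw JSON and the client assembles the markup, so perceived load time hinges directly on API response time. The data and function here are invented purely for illustration:

```javascript
// The client, not the server, turns a raw API response into display markup.
// Until the JSON arrives, there is nothing to render – API latency IS the
// user-visible latency.
function renderAppList(jsonText) {
  const apps = JSON.parse(jsonText); // API returns data, not HTML
  return apps
    .map(a => '<li>' + a.name + ' (' + a.rating + '\u2605)</li>')
    .join('');
}

const response = '[{"name":"Blocks","rating":4},{"name":"Shapes","rating":5}]';
console.log(renderAppList(response));
// "<li>Blocks (4★)</li><li>Shapes (5★)</li>"
```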
No amount of HTML manipulation is likely to improve the performance of API calls, because there is no HTML to optimize. It’s data, pure and simple, which means the bulk of the responsibility for ensuring wicked-fast performance suitable to a three-year-old’s patience is going to land squarely on the application delivery chain and the application developer. That means minimizing processing and delivery time through carefully optimizing code (developers) and the delivery chain (operations).
OPTIMIZING the DELIVERY CHAIN
When the web first became popular, anyone who could use a scripting language and spit out HTML called themselves a “web developer.” The need for highly optimized code to support the demanding performance requirements of end-users means that it’s no longer enough to be able to spit out HTML, or even JSON. It means developers need to be highly skilled in optimizing code on the server side so that processing times are as tight as they can be. Calculating Big O may become a valued skill once again. But applications are not islands, and even the most highly optimized function in the world can be negatively impacted by factors outside the developer’s control. The load on the application server – whether physical or virtual – can have a deleterious effect on application performance. Higher loads mean less available RAM and fewer CPU cycles, which translates into slower-executing code – no matter how optimized it may be. Processing compute-intensive operations of any kind, be it compression or cryptography, consumes resources and introduces latency when performed on the server. And the overhead of managing connections, usually TCP, can take as much time as processing a request. All those operations add up to latency that can drive end-user response time over the patience threshold and result in an aborted transaction. And when I say transaction I mean request-reply transaction, not necessarily one that involves money.
Aborted transactions are also bad for application performance because they waste resources. The connection is held open based on the web or application server’s configuration, and if the end-user aborted the transaction, the client is ignoring the response while tying up resources that could be used elsewhere. Application delivery chain optimization is a critical component of improving response time. Offloading cryptographic processing and protocol management can alleviate much of the load that negatively impacts application processing times, and it shifts the delivery-time component of application performance management from the developer to operations, where optimization and acceleration technologies can be applied regardless of data format. Whether it’s HTML or JSON or XML is irrelevant; compression, caching and cryptographic offload can benefit both end-users and developers by mitigating those factors outside the developer’s demesne that negatively impact performance.
THE WHOLE is GREATER than the SUM of its PARTS
It is no longer enough to measure the end-user experience based on load times in a browser. The growing use of mobile devices – whether phones or tablets – and the increasingly interconnected web of integrated applications mean the performance of an application is more complicated than it was in the past. The performance of an application today is highly dependent on the performance of its APIs, and thus testing APIs specifically, from a variety of devices and platforms, is critical to understanding the impact high volume and load have on overall application performance. Testing API performance is critical to ensuring the end-user experience is acceptable regardless of the form factor of the client. If you aren’t sure what acceptable performance might be, grab the nearest three-year-old; they’ll let you know loud and clear.
How to Earn Your Data Center Merit Badge The Stealthy Ascendancy of JSON Cloud Testing: The Next Generation Data Center Feng Shui: Architecting for Predictable Performance Now Witness the Power of this Fully Operational Feedback Loop On Cloud, Integration and Performance The cost of bad cloud-based application performance I Find Your Lack of Win Disturbing Operational Risk Comprises More Than Just Security Challenging the Firewall Data Center Dogma 50 Ways to Use Your BIG-IP: Performance
Extend Cross-Domain Request Security using Access-Control-Allow-Origin with Network-Side Scripting
The W3C specification now offers a means by which cross-origin AJAX requests can be achieved. Leveraging network and application network services in conjunction with application-specific logic improves the security of allowing cross-domain requests – and has some hidden efficiency benefits, too. The latest version of the W3C working draft on “Cross-Origin Resource Sharing” lays out the means by which a developer can use XMLHttpRequest (in Firefox) or XDomainRequest (in IE8) to make cross-site requests. As is often the case, the solution is implemented by extending HTTP headers, which makes the specification backward-compatible and cross-platform even where the client-side implementation is not. While this sounds like a good thing, forcing changes to HTTP headers is often thought to require changes to the application. In many cases, that’s absolutely true. But there is another option: network-side scripting. There are several benefits to using network-side scripting to implement this capability, but more importantly, the use of a mediating system (proxy) enables more granular security than is currently offered by the Cross-Origin Resource Sharing specification.
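A hedged sketch of what the mediating proxy contributes – here a minimal JavaScript stand-in for network-side scripting, with a hypothetical origin whitelist. The proxy decides per-origin whether to emit the Access-Control-Allow-Origin header, so the application itself never changes:

```javascript
// Illustrative mediation logic: only whitelisted origins receive CORS
// headers on the response; everyone else gets none and the browser blocks
// the cross-origin read. The whitelist and function names are invented.
const allowedOrigins = ['https://app.example.com']; // hypothetical whitelist

function corsHeaders(requestOrigin) {
  if (!allowedOrigins.includes(requestOrigin)) return {}; // no header, no access
  return {
    'Access-Control-Allow-Origin': requestOrigin,
    'Access-Control-Allow-Methods': 'GET, POST',
  };
}

console.log(corsHeaders('https://app.example.com')); // headers granted
console.log(corsHeaders('https://evil.example.net')); // {} – denied
```

Because the policy lives on the proxy, it can be tightened or extended (per-URI, per-method, per-client) without touching, rebuilding, or redeploying the application behind it.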