5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like mentions of "cloud" today. You couldn't turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I'm well aware of how impactful "standards" and "specifications" really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it got completely out of hand.

    With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard.
    -- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one "industry standard": OpenAjax Metadata.

OpenAjax Hub

    The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata

    OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)

    OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see XML called out as the format of choice in the OpenAjax Metadata (OAM) specification given the recent ascendancy of JSON as developers' preferred format for APIs. Granted, when the alliance was formed XML was all the rage, and it was believed it would remain the dominant format for quite some time given the popularity of similar technological models such as SOA. Still, the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.
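To make that XML-versus-JSON friction concrete, here is a small, purely illustrative sketch of consuming a widget descriptor in each format from browser-side JavaScript. The element and property names here are hypothetical, not the actual OAM schema; the point is the ceremony, not the schema. JSON maps directly onto JavaScript objects, while XML requires an extra parse-and-traverse step.

    // Hypothetical widget descriptor in JSON – one call and you have an object.
    var jsonDescriptor = '{ "widget": { "id": "example.chart", "jsClass": "example.Chart" } }';
    var widget = JSON.parse(jsonDescriptor).widget;
    console.log(widget.jsClass); // "example.Chart"

    // The same (hypothetical) descriptor in XML – parse it, then walk the DOM.
    var xmlDescriptor = '<widget id="example.chart" jsClass="example.Chart"></widget>';
    var doc = new DOMParser().parseFromString(xmlDescriptor, "application/xml");
    console.log(doc.documentElement.getAttribute("jsClass")); // "example.Chart"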
Ignoring the XML factor (which is undoubtedly a fairly impactful one), there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates between the various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it's page-level integration; a sketch of what that looks like in code appears below.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces with vendor implementations decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that lets traffic go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or by component failure – never by incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn't an API or model itself. The Alliance wiki describes the specification: "The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers." Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM's efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax – which implements a secondary event-driven framework – seems like overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and plain simplicity tend to drive developers to a single library when possible (which is most of the time).
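For reference, this is roughly what that page-level, hub-mediated integration looks like. It's a minimal sketch in the style of the original OpenAjax Hub publish/subscribe API, assuming the Hub script is already loaded on the page; the topic name and the widget objects are hypothetical, so consult the Hub specification for the exact signatures.

    // Widget A (built on one toolkit) subscribes to a topic on the shared hub.
    OpenAjax.hub.subscribe("myapp.customer.selected", function (topic, data) {
      // React to an event raised by a widget built on a different toolkit.
      grid.loadOrdersFor(data.customerId);   // 'grid' is a hypothetical widget
    });

    // Widget B (built on another toolkit) publishes to the same topic.
    // Neither widget calls the other directly; the hub mediates on the page.
    OpenAjax.hub.publish("myapp.customer.selected", { customerId: 42 });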
It appears, simply, that the OpenAjax Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn't – and likely won't ever – exist. So it's no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009.

Given the statistics showing the rise of jQuery – both as a percentage of site usage and developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that "one toolkit will become the standard—whether through a standards body or by de facto adoption" was accurate. Of course, since that's always the way it works in technology, it was kind of a sure bet, wasn't it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAjax Alliance several infrastructure vendors: folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges.

In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than traditional form-based HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically on those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

    By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}. If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.
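In code, that difference looks roughly like the following minimal sketch (the URLs and field names are made up for illustration). The first call lets jQuery serialize a data map into the key1=value1&key2=value2 shape an intermediary would need to parse; the second disables that processing to send raw XML with an explicit content type.

    // Typical case: jQuery serializes the data map via jQuery.param(),
    // so the request body arrives as "user=jsmith&action=update".
    $.ajax({
      url: "/orders/update",          // hypothetical endpoint
      type: "POST",
      data: { user: "jsmith", action: "update" },
      success: function (result) {
        console.log("updated", result);
      }
    });

    // Sending raw XML instead: skip serialization and set the MIME type,
    // so any intermediary (WAF, ADC) sees an XML body, not form-encoded pairs.
    $.ajax({
      url: "/orders/import",          // hypothetical endpoint
      type: "POST",
      data: "<order><id>42</id></order>",
      processData: false,
      contentType: "text/xml"
    });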
Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or the submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, so that it can be parsed, interpreted and processed.
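Conceptually, the first step an intermediary has to perform on a jQuery-style form-encoded body looks something like this sketch. It is written in JavaScript purely for illustration (a WAF or ADC would do this natively, and the field names are made up); the point is that the values must be split and decoded before any injection pattern can be detected.

    // Split a form-encoded body into decoded name/value pairs.
    function parseFormEncoded(body) {
      var result = {};
      body.split("&").forEach(function (pair) {
        var parts = pair.split("=");
        result[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || "");
      });
      return result;
    }

    var fields = parseFormEncoded("user=jsmith&comment=1%27%20OR%20%271%27%3D%271");
    // fields.comment is now "1' OR '1'='1" – the decoded value an inspection
    // device would examine for SQL injection patterns before forwarding.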
By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.

20 Lines or Less #8

What could you do with your code in 20 Lines or Less? That's the question I ask every week, and every week I go looking to find cool new examples that show just how flexible and powerful iRules can be without getting in over your head.

For this week's 20LoL sampling I've dipped into my own private stash of iRule goodness. Some of these are oldies but goodies; one of them I actually just wrote yesterday as an example for Lori's blog. As such, the newly written example is the only one with a URI. The others will just have a description and the iRule source. I'm sure I'll be diving back into the Forums and CodeShare in the coming weeks, as there just seems to be an endless stream of cool stuff to dig through out there, but I wanted to toss up a few of my own rules this week. Be gentle with comments, some of these are old as I said. ;)

Content Scrubbing for Adobe Flash Exploit
http://devcentral.f5.com/s/weblogs/macvittie/archive/2008/05/29/3309.aspx

This iRule digs through the contents of the HTTP responses being sent out from your servers and looks for known exploit sites, then blocks those responses from going to your users. In this way it attempts to help protect them from the spread of the Adobe Flash exploit Lori's been talking about.

    when HTTP_RESPONSE {
      HTTP::collect
    }
    when HTTP_RESPONSE_DATA {
      switch -glob [string tolower [HTTP::payload]] {
        "*0novel.com*" -
        "*dota11.cn*" -
        "*wuqing17173.cn*" -
        "*woai117.cn*" -
        "*guccime.net*" -
        "*play0nlnie.com*" {
          HTTP::respond 200 content "The server is currently unable to serve the requested content. Please try again later."
          log local0. "Adobe Flash exploit infected Server IP: [IP::server_addr]."
        }
      }
      HTTP::release
    }

IP Client Limiting via Array

This iRule was written to deal with a very high-volume need for client limiting. By storing the IPs in an array and accessing them in the most optimized format I could come up with, this rule was able to stand up to some pretty impressive numbers. If memory serves it was somewhere near 200K connections per second with nearly 3 million concurrent connections. Not too shabby!

    when RULE_INIT {
      array set connections { }
    }
    when CLIENT_ACCEPTED {
      if { [info exists ::connections([IP::client_addr])] } {
        if { [incr ::connections([IP::client_addr])] > 1000 } {
          reject
        }
      } else {
        set ::connections([IP::client_addr]) 1
      }
    }
    when CLIENT_CLOSED {
      if { [incr ::connections([IP::client_addr]) -1] <= 0 } {
        unset ::connections([IP::client_addr])
      }
    }

Selective HTTPS Redirect

This is a slight variant on a popular concept. This iRule does a selective redirect to HTTPS by checking a given class to see if the incoming URI is one that should be served via HTTPS. The neat part here is that it also does a port check and a preventative else statement, meaning this iRule should be able to be deployed on a global virtual, serving all ports, where most examples like this require the traffic to be broken up into two VIPs, port 80 and port 443, to avoid infinite looping.

    when HTTP_REQUEST {
      if { [TCP::local_port] == 80 } {
        log local0. "connecting on HTTP server"
        if { [matchclass [HTTP::uri] starts_with $::secure_uris] } {
          HTTP::redirect "https://[HTTP::host][HTTP::uri]"
        }
      } else {
        if { !([matchclass [HTTP::uri] starts_with $::secure_uris]) } {
          HTTP::redirect "http://[HTTP::host][HTTP::uri]"
        }
      }
    }

So there you have it, another few examples of what can be done via iRules in less than 21 lines of code. This 20 LoL brought to you from my personal vault, so I hope you enjoy. As always, please let me know if you have any feedback, comments, questions, suggestions, musical recommendations or other pertinent information to share. See you next week.
#Colin

Turn Your Podcast Into An Interactive Live Streaming Experience
The folks here on the DevCentral team have been producing a weekly podcast for a while now. Trying to keep our budget low, we opted to copy what some other podcasters were doing by making use of Skype for our audio communication, and found a great little Skype add-on called Pamela which creates high-quality WAV files from Skype conversations. We created a dedicated Skype account on an old machine here in the office that will auto-record whenever that account is added to a conversation. We would occasionally have differences in audio levels between the callers, so we incorporated the awesome Levelator (from the Conversations Network) into the post production and we were set.

A few weeks ago we broke through our one hundredth podcast, and along with that milestone we decided to expand things a bit by making the podcast more interactive. I spent a few weeks investigating how best to accomplish this and ultimately decided on a setup. I figured I'd go ahead and share it with you all so you can see how we do it on our end. Here's the high-level diagram:

I experimented with a mixing board but ended up with a software-only solution that anyone can implement. Everything besides the top left three items in the image was added to enable us to record our Skype podcast as well as stream it to UStream and allow callers with TalkShoe.

The Components

Here's a list of the components and how we used them.

Skype - http://www.skype.com - $0
Skype, for those that don't know, is a software program that allows you to make video and voice calls across the internet. Calls are free to other Skype users, and you can purchase the ability to call out to land line numbers or to get your own number so you can accept phone calls. We have a dedicated server with Skype and Pamela running to do the recording, and each person on the podcast is connected through the main Skype instance on our publishing system.

Pamela Skype Recorder - http://www.pamela.biz/en/ - $24
Pamela is an add-on program for Skype that will allow you to record your Skype conversations. It features Skype call, video, and chat recording, an answering machine, video mail, and Skype-based publishing. We use it primarily for its ability to automatically record Skype calls to either our Skype account or our Skype dial-in number. WAV files are created and we are emailed when a recording is completed.

Windows Media Player - $0
Windows Media Player is the default music/video player on the Windows platforms. We use it to play background music into our Skype and UStream sessions. You could just as easily use another audio player as long as it supports the ability to customize which output audio device it uses.

Virtual Audio Cable - http://software.muzychenko.net/eng/vac.html - $30
This little gem really saved the day with our setup. There are various Mac programs that allow you to map audio streams from one program to another, but this is the best one I could find for Windows. Virtual Audio Cable allows you to transfer audio (wave) streams between applications and/or devices. In our setup, it allows us to map Windows Media Player back into Skype and our multiple audio streams back into our live video processing through VidBlaster.

VidBlaster - http://vidblaster.com/ - $0
VidBlaster is a powerful, economical way to record, stream, and produce high quality videos. VidBlaster comes in three versions: Home, Pro, and Studio, with the only difference being the number of "modules" you can use at one time.
What's best about this product is that you can use it for free if you can live with a "VidBlaster" ad in the top right corner of your video stream. We'll likely put the couple hundred bucks down for the Pro version, but as of now it hasn't cost us a cent.

Adobe Flash Media Encoder - http://www.adobe.com/products/flashmediaserver/flashmediaencoder/ - $0
The Adobe Flash Media Encoder allows you to capture live audio and video while streaming it in real time to a Flash media server. While we could stream directly from VidBlaster to UStream, the quality is not as good as moving some of the processing down to the client.

UStream - http://www.ustream.tv - $0
UStream is a live interactive video broadcast platform that enables anyone with a camera (or VidBlaster!) and an internet connection to quickly and easily broadcast to a global audience of unlimited size. Best of all, it's free!

TalkShoe - http://www.talkshoe.com - $0
We struggled for a while trying to figure out the best way to include a "live" audience in our recording. TalkShoe is a service that enables anyone to easily create, join, or listen to live interactive discussions, conversations, podcasts, and audioblogs. It fit the bill of just what we needed, and since we already had our Skype conversation going, bolting on TalkShoe was as simple as adding the conference number to our group Skype conversation. TalkShoe also has some pretty nifty user controls allowing the moderator to control muting of each of the participants. Another great free service!

Production Setup Walkthrough

Here are the steps I go through in preparation for our weekly podcast:

1. Create 3 Virtual Audio Cables with the Virtual Audio Cable Control Panel (VAC #1, VAC #2, VAC #3).
2. Start up 2 instances of the Virtual Audio Cable Audio Repeater. Set the first one (R1) from VAC #3 to VAC #1 and the second from VAC #3 to VAC #2.
3. Start and log in to Skype. Go into the audio settings and change the audio input to VAC #1 and the output to VAC #2.
4. Start Windows Media Player and change the speaker device settings to output to VAC #3.
5. Start VidBlaster and set up the various screen captures, set the video resolution to 640x480 and the frame rate to 15fps (much higher than that drives the CPU way up). Finally, click "Start" on the Streamer module.
6. Start the Adobe Flash Media Encoder. Load the stream configuration that you can download from your UStream show's advanced settings, then select VidBlaster for the video device and "Line 2 (Virtual Audio Cable)" (i.e. VAC #2) for the audio device. Click "Start" to begin streaming.
7. Log in to our UStream.tv account and click "Broadcast Now" for your show. Another browser window will come up detecting the media stream. Click "Start Broadcast" to start the stream and click "Start Record" to begin recording your stream on the server.
8. Bring up the Skype window and create a conference with the podcast members, including our account that auto-records the conversations. Then finally, call the TalkShoe conference bridge and initiate the meeting on their side.
9. Log in to the TalkShoe account on their website and click the option to join the meeting. When the admin console is loaded, you can optionally select "Record" from the top left to have TalkShoe make an alternate recording.

At this point we are recording our Skype session in audio, UStream is recording the video stream, and TalkShoe is making a secondary audio recording. I then cue up the intro music in Windows Media Player and start into the podcast.
When we are finished, I reverse the process above by stopping and saving the various recordings. I take our Pamela-based recording, run it through The Levelator, convert it to an MP3 with Audacity, edit the ID3 tags, and publish it to our media server. At that point, the blog post is created and we are done for the week.

Reflections

There are a few things that are still causing some issues. The main one is horsepower on my media system. The desktop I'm using for all of this was not meant to run these types of CPU-intensive applications. While it works, my dual-core CPU system is hovering at 99-100% CPU usage during the podcast, which is a bit worrisome. Depending on the success of the podcast, we may invest in a new desktop to run this on. But, seeing how we've spent well under $100 for the entire software suite, I don't think we'll have much trouble justifying it! Hopefully this helps some of you out there with some ideas, and I'd love to hear feedback on how I could do things better!

-Joe

The Great Client-Server Architecture Myth
The webification of applications over the years has led to the belief that client-server as an architecture is dying. But very few beliefs about architecture have been further from the truth.

The belief that client-server was dying – or at least falling out of favor – was primarily due to the fact that early browser technology was used only as a presentation mechanism. The browser did not execute application logic, did not participate in application logic, and acted more or less like a television: smart enough to know how to display data but not smart enough to do anything about it. But the sudden explosion of Web 2.0 style applications and REST APIs has changed all that, and client-server is very much in style again, albeit with a twist.

Developers no longer need to write the core of a so-called "fat client" from the ground up. The browser or a framework such as Adobe AIR or Microsoft's Silverlight provides the client-side platform on which applications are developed and deployed. These client-side platforms have become very similar in nature to their server-side cousins, application servers, taking care of the tedious tasks associated with building and making connections to servers, parsing data, and even storing user-specific configuration data. Even traditional thin-client applications are now packing on the pounds, using AJAX and various JavaScript libraries to provide both connectivity and presentation components to developers in the same fashion that AIR and Silverlight provide a framework for developers to build richer, highly interactive applications. These so-called RIAs (Rich Internet Applications) are, in reality, thin clients that are rapidly gaining weight.

One of the core reasons client-server architecture is being reinvigorated is the acceptance of standards. As developers have settled not only on HTTP as the de facto transport protocol but on HTML, DHTML, CSS, and JavaScript as the primary client-side technologies, so have device makers accepted these technologies as the "one true way" to deliver applications to multiple clients from a single server-side architecture. It's no longer required that a client be developed for every possible operating system and device combination. A single server-side application can serve any and all clients capable of communicating via HTTP, rendering HTML, DHTML, and CSS, and executing client-side scripts. Standards, they are good things after all.

Client-server architectures are not going away. They have simply morphed from an environment-specific model to an environment-agnostic model that is much more efficient in terms of development costs and ability to support a wider range of users, but they are still based on the same architectural principles. Client-server as a model works and will continue to work as long as the infrastructure over which such applications are delivered continues to mature and recognizes that while one application may be capable of being deployed and utilized from any device, the environments over which it is delivered may impact the performance and security of that application. The combination of fatter applications and increasing client-side application logic execution means more opportunities for exploitation as well as the potential for degradation of performance.
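To ground the point about application logic moving into the client, here is a minimal sketch (the endpoint, field names, and element id are hypothetical): instead of the server rendering a finished page, the browser fetches raw data from a REST API and executes the presentation and application logic itself.

    // The "client" half of modern client-server: the browser requests raw data
    // from a REST API and applies application logic locally.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/orders?status=open", true);   // hypothetical endpoint
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var orders = JSON.parse(xhr.responseText);
        // Client-side logic: filter and render without another round trip.
        var overdue = orders.filter(function (o) { return o.daysOpen > 30; });
        document.getElementById("order-count").textContent =
          overdue.length + " overdue orders";
      }
    };
    xhr.send();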
Because client-server applications are now agnostic and capable of being delivered and used on a variety of devices and clients, they are not specifically optimized for any given environment, and developers do not necessarily have access to the network and transport layer components they would need in order to optimize them. These applications are written specifically not to care, and yet the device, the location of the user, and the network over which the application is delivered are all relevant to application performance and security. The need for context-aware application delivery is more important now than ever, as the same application may be served to the same user but rendered in a variety of different client environments and in a variety of locations. All these variables must be accounted for in order to deliver these fat-client RIAs in the most secure, performant fashion regardless of where the user may be, over what network the application is being delivered, and what device the user may be using at the time.

iRules: Content Scrub rule for the Adobe Flash Exploit
After reading most of what's available on the Adobe Zero Day Exploit, and getting an idea of how it propagates (Flash and JavaScript inserted via an SQL injection attack), I turned to iRules guru Colin for some help crafting an iRule that might stop a site from serving up infected content to a user. This is particularly helpful for those who are running a BIG-IP but who aren't running a web application firewall like ASM (Application Security Manager) and may have been inadvertently infected.

After looking through the screen capture of some JavaScript that attempts to load the malware from one of several sites, it appears that scanning the response for one of the malicious domains would be the easiest way to catch a page that's been infected with this code. (This is similar to the way in which we can use iRules to prevent sensitive data from being served up to users.) From another blog on the subject:

    Google search reports approximately 20,000 web pages (not necessarily distinct servers or domains) injected with a script redirecting users to this malicious site. A wide variety of legitimate third-party sites appear to be affected. The code then redirects users to sites hosting malicious Flash files exploiting this issue. [emphasis added]

Dancho Danchev's most recently updated post on the exploit suggests blocking the following domains known to be hosting the malicious Flash files (as of 5/28):

    tongji123.org
    bb.wudiliuliang.com
    user1.12-26.net
    user1.12-27.net
    ageofconans.net
    lkjrc.cn
    psp1111.cn
    zuoyouweinan.com
    user1.isee080.net
    guccime.net
    woai117.cn
    wuqing17173.cn
    dota11.cn
    play0nlnie.com
    0novel.com

Blocking the domains is great for your internal users, but because the exploit is run from the client's browser, this isn't much help for your external customers, particularly if you happen to be a Web 2.0 site. What we want to do is (a) find out if a page references one of the domains hosting malicious content (which would indicate the page is likely infected) and then (b) do something about it. In this case, we're just going to tell the user we can't deliver the page and write a log message to get the attention of the security guys.

    when HTTP_RESPONSE {
      HTTP::collect
    }
    when HTTP_RESPONSE_DATA {
      switch -glob [string tolower [HTTP::payload]] {
        "*0novel.com*" -
        "*dota11.cn*" -
        "*wuqing17173.cn*" -
        "*woai117.cn*" -
        "*guccime.net*" -
        "*play0nlnie.com*" {
          HTTP::respond 200 content "The server is currently unable to serve the requested content. Please try again later."
          log local0. "Adobe Flash exploit infected page found. Server IP: [IP::server_addr]"
        }
      }
      HTTP::release
    }

I've not included every domain in the iRule, but you get the picture. If we detect a malicious domain in the page, we're not going to serve that page. Period. This won't stop the original SQL injection attack that infects your site; that'd be handled by a different iRule or product (BIG-IP ASM – if you're running a BIG-IP and don't have this product module, you should seriously consider it, as it can prevent SQL injection attacks like the one being used to infect sites with this exploit). What this will do is tell you if you're one of the sites currently infected, and prevent your site from passing it on to your users. Because despite what our mothers told us, it isn't always nice to share.

Imbibing: Coffee

Thanks again to Colin for applying his iRule ninja skillz to this problem.