5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five-year-old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like those of “cloud” today. You couldn’t turn around without hearing someone promoting their solution by associating it with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I’m well aware of how impactful “standards” and “specifications” really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard.
-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one “industry standard”: OpenAjax Metadata.

OpenAjax Hub
The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata
OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)

OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification given the recent rise to ascendancy of JSON as developers’ preferred format for APIs. Granted, when the alliance was formed XML was all the rage and it was believed it would be the dominant format for quite some time given the popularity of similar technological models such as SOA. But still – the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.
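To make the format debate concrete, here is a small sketch of the kind of information a toolkit-metadata file carries, expressed once as XML and once as JSON. The element and property names are invented for illustration and are not the actual OpenAjax Metadata schema; the point is simply that the same API description can be carried in either format, and the real question is which one toolkit authors and tooling vendors are willing to emit and consume.

```python
# A hypothetical widget/API description (NOT the actual OpenAjax Metadata
# schema), expressed once as XML and once as JSON, to illustrate the debate.
import json
import xml.etree.ElementTree as ET

XML_DOC = """
<widget name="weather" version="1.0">
  <method name="refresh"/>
  <method name="setLocation">
    <param name="city" type="String"/>
  </method>
</widget>
"""

JSON_DOC = """
{
  "widget": {
    "name": "weather",
    "version": "1.0",
    "methods": [
      {"name": "refresh", "params": []},
      {"name": "setLocation", "params": [{"name": "city", "type": "String"}]}
    ]
  }
}
"""

def methods_from_xml(doc: str) -> list:
    root = ET.fromstring(doc)
    return [m.get("name") for m in root.findall("method")]

def methods_from_json(doc: str) -> list:
    widget = json.loads(doc)["widget"]
    return [m["name"] for m in widget["methods"]]

if __name__ == "__main__":
    # Both parses yield the same information; the argument is purely about
    # which serialization the ecosystem will rally around.
    print(methods_from_xml(XML_DOC))    # ['refresh', 'setLocation']
    print(methods_from_json(JSON_DOC))  # ['refresh', 'setLocation']
```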
Ignoring the XML factor (which undoubtedly is a fairly impactful one) there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it’s page-level integration.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry-standard model and/or set of APIs to which other toolkit developers would design and write such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces, with vendor implementations decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or by failure – never by incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn’t an API or model itself. The Alliance wiki describes the specification: “The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers.” Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but it does not appear to be gaining widespread support or mindshare despite IBM’s efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems like overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and the desire for simplicity tend to drive developers to a single library when possible (which is most of the time).
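For readers unfamiliar with the pattern, the sketch below shows the kind of page-level mediation a publish/subscribe hub performs: two widgets that know nothing about each other's internals agree only on topic names and payload shape, and the hub owns the event plumbing. It is written in Python purely for illustration and is not the OpenAjax Hub API (which is JavaScript); it is the generic mediator idea the Hub builds on.

```python
# Minimal publish/subscribe "hub" sketch -- an illustration of page-level
# mediation between toolkits, not the OpenAjax Hub API itself.
from collections import defaultdict
from typing import Any, Callable

class Hub:
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[str, Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, data: Any) -> None:
        # The hub, not the toolkits, owns the plumbing; each "library" only
        # needs to agree on topic names and payload shape.
        for callback in self._subscribers[topic]:
            callback(topic, data)

if __name__ == "__main__":
    hub = Hub()
    # Imagine these callbacks living in two different JavaScript toolkits.
    hub.subscribe("user.selected", lambda t, d: print(f"[grid widget] reloading rows for {d}"))
    hub.subscribe("user.selected", lambda t, d: print(f"[chart widget] redrawing chart for {d}"))
    hub.publish("user.selected", {"id": 42})
```

The overhead is obvious even in a toy: every cross-toolkit interaction now rides a secondary event framework, which is exactly why it feels like overkill for the common single-library page.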
It appears, simply, that the OpenAJAX Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn’t – and likely won’t ever – exist. So it’s no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009. Given the statistics showing the rise of jQuery – both as a percentage of site usage and developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that “one toolkit will become the standard—whether through a standards body or by de facto adoption” was accurate. Of course, since that’s always the way it works in technology, it was kind of a sure bet, wasn’t it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAJAX Alliance several infrastructure vendors – folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically on those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, such that it can be parsed, interpreted and processed. By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
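As a rough illustration of the parsing burden this places on an intermediary, the sketch below normalizes the two body formats the jQuery documentation describes (a UTF-8 urlencoded query string, or a JSON payload) into name/value pairs before running a deliberately naive injection check. This is a toy, not a BIG-IP ASM policy; real WAFs use far more sophisticated signatures and normalization.

```python
# Simplified sketch of the parsing an intermediary (WAF, ADC) must do before
# it can inspect POST data produced by jQuery.ajax(); not an actual ASM policy.
import json
from urllib.parse import parse_qs

def parse_body(content_type: str, body: bytes) -> dict:
    """Normalize a request body into name/value pairs for inspection."""
    if content_type.startswith("application/x-www-form-urlencoded"):
        # jQuery's default: key1=value1&key2=value2, UTF-8 encoded.
        return {k: v[0] for k, v in parse_qs(body.decode("utf-8")).items()}
    if content_type.startswith("application/json"):
        return json.loads(body)
    # XML or anything else would need its own parser before inspection.
    raise ValueError(f"no parser for {content_type}")

def looks_like_sql_injection(value: str) -> bool:
    # Toy heuristic only -- real WAFs use far richer signature sets.
    suspicious = ("' or '1'='1", "union select", "--")
    return any(s in value.lower() for s in suspicious)

if __name__ == "__main__":
    fields = parse_body("application/x-www-form-urlencoded",
                        b"user=alice&comment=1%27+or+%271%27%3D%271")
    for name, value in fields.items():
        print(name, "suspicious" if looks_like_sql_injection(value) else "ok")
```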
What Do Database Connectivity Standards and the Pirate’s Code Have in Common?

A: They’re both more what you’d call “guidelines” than actual rules.

An almost irrefutable fact of application design today is the need for a database, or at a minimum a data store – i.e. a place to store the data generated and manipulated by the application. A second reality is that despite the existence of database access “standards”, no two database solutions support exactly the same syntax and protocols. Connectivity standards like JDBC and ODBC exist, yes, but like SQL they are variable, resulting in implementations just slightly different enough to effectively cause vendor lock-in at the database layer. You simply can’t take an application developed to use an Oracle database and point it at a Microsoft or IBM database and expect it to work. Life’s like that in the development world. Database connectivity “standards” are a lot like the pirate’s Code, described well by Captain Barbossa in Pirates of the Caribbean as “more what you’d call ‘guidelines’ than actual rules.”

It shouldn’t be a surprise, then, to see the rise of solutions that address this problem, especially in light of an increasing awareness of (in)compatibility at the database layer and its impact on interoperability, particularly as it relates to cloud computing. Forrester analyst Noel Yuhanna recently penned a report on what is being called Database Compatibility Layers (DCL). The focus of DCLs at the moment is on migration across database platforms because, as pointed out by Noel, such migrations are complex, time-consuming and costly.

Database migrations have always been complex, time-consuming, and costly due to proprietary data structures and data types, SQL extensions, and procedural languages. It can take up to several months to migrate a database, depending on database size, complexity, and usage of these proprietary features. A new technology has recently emerged for solving this problem: the database compatibility layer, a database access layer that supports another database management system’s (DBMS’s) proprietary extensions natively, allowing existing applications to access the new database transparently.
-- Simpler Database Migrations Have Arrived (Forrester Research Report)

Anecdotally, having been on the implementation end of such a migration, I can’t disagree with the assessment. Whether the right answer is to sit down and force some common standards on database connectivity or to build a compatibility layer is a debate for another day. Suffice to say that right now the former is unlikely given the penetration and pervasiveness of existing database connectivity, so the latter is probably the most efficient and cost-effective solution. After all, any changes in the core connectivity would require the same level of application modification as a migration; not an inexpensive proposition at all.

According to Forrester, a Database Compatibility Layer (DCL) is a “database layer that supports another DBMS’s proprietary SQL extensions, data types, and data structures natively. Existing applications can transparently access the newly migrated database with zero or minimal changes.” By extension, this should also mean that an application could easily access one database and a completely different one using the same code base (assuming zero changes, of course). For the sake of discussion let’s assume that a DCL exists that exhibits just that characteristic – complete interoperability at the connectivity layer. Not just for migration, which is of course the desired use, but for day-to-day use.
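A tiny example makes the "guidelines, not rules" point concrete: the same logical query has to be written three different ways for three mainstream engines, and ODBC/JDBC will happily carry any of them without making the text portable. The dialect strings below are illustrative; a real compatibility layer would perform this kind of translation (and far harder ones, for data types and procedural code) automatically.

```python
# The same logical query -- "give me the ten most recent orders" -- written
# the way three different engines expect it. Connectivity standards carry the
# text; they don't make the text portable.
TOP_TEN_ORDERS = {
    "mysql":     "SELECT * FROM orders ORDER BY created_at DESC LIMIT 10",
    "sqlserver": "SELECT TOP 10 * FROM orders ORDER BY created_at DESC",
    "oracle":    "SELECT * FROM (SELECT * FROM orders ORDER BY created_at DESC) "
                 "WHERE ROWNUM <= 10",
}

def query_for(dialect: str) -> str:
    """What a compatibility layer would hide: picking the vendor-specific text."""
    try:
        return TOP_TEN_ORDERS[dialect]
    except KeyError:
        raise ValueError(f"unsupported dialect: {dialect}") from None

if __name__ == "__main__":
    for dialect in TOP_TEN_ORDERS:
        print(f"{dialect:10} {query_for(dialect)}")
```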
What would that mean for cloud computing providers – both internal and external?

ENABLING IT as a SERVICE

Based on our assumption that a DCL exists and is implemented by multiple database solution vendors, a veritable cornucopia of options becomes a lot more available for moving enterprise architectures toward IT as a Service than might be at first obvious. Consider that applications have variable needs in terms of performance, redundancy, disaster recovery, and scalability. Some applications require higher performance, others just need a nightly or even weekly backup, and some, well, some just aren’t important enough that you can’t rely on routine IT operations backups to restore them if something goes wrong. In some cases the applications might have varying needs based on the business unit deploying them. The same application used by finance, for example, might have different requirements than the same one used by developers. How could that be? Because the developers may only be using that application for integration or testing while finance is using it for realz. It happens.

What’s more interesting, however, is how a DCL could enable a more flexible, service-oriented buffet of database choices, especially if the organization used different database solutions to support different transactional, availability, and performance goals. If a universal DCL (or a near-universal one, at least) existed, business stakeholders – together with their IT counterparts – could pick and choose the database “service” they wished to employ based not only on technical characteristics and operational support but also on costs and business requirements. It would also allow them to “migrate” over time as applications became more critical, without requiring a massive investment in upgrading or modifying the application to support a different back-end database. Obviously I’m picking just a few examples that may or may not be applicable to every organization. The bigger thing here, I think, is the flexibility in architecture and design that is afforded by such a model – one that balances costs with operational characteristics. Monitoring of database resource availability, too, could be greatly simplified by such a layer, providing solutions that are natively supported by upstream devices responsible for availability at the application layer. That layer ultimately depends on the database, but the database is often an ignored component because of the complexity currently inherent in supporting such a varied set of connectivity standards.

It should also be obvious that this model would work for a PaaS-style provider that is not tied to any given database technology. A PaaS-style vendor today must either invest effort in developing and maintaining a services layer for database connectivity or restrict customers to a single database service. The latter is fine if you’re creating a single-stack environment such as Microsoft Azure, but not so fine if you’re trying to build a more flexible set of offerings to attract a wider customer base. Again, same note as above: providers would have a much more flexible set of options if they could rely upon what is effectively a single database interface regardless of the specific database implementation. More importantly for providers, perhaps, is the migration capability noted by the Forrester report in the first place, as one of the inhibitors of moving existing applications to a cloud computing provider is support for the same database across both enterprise and cloud computing environments.
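As a purely hypothetical sketch of the "pick a database service by business requirement" idea above, consider a small service catalog keyed on recovery objectives and cost. The tiers, numbers and attributes are invented for illustration; a real catalog would reflect IT's actual offerings and many more dimensions (performance, compliance, support).

```python
# Hypothetical "database as a service" catalog: business requirements in,
# service selection out. Tier names and figures are invented for illustration.
CATALOG = [
    {"tier": "gold",   "rpo_minutes": 5,    "replicas": 3, "cost_per_month": 900},
    {"tier": "silver", "rpo_minutes": 60,   "replicas": 2, "cost_per_month": 300},
    {"tier": "bronze", "rpo_minutes": 1440, "replicas": 1, "cost_per_month": 75},
]

def choose_service(max_rpo_minutes: int, budget: int) -> dict:
    """Pick the cheapest catalog entry that still meets the recovery objective."""
    candidates = [s for s in CATALOG
                  if s["rpo_minutes"] <= max_rpo_minutes
                  and s["cost_per_month"] <= budget]
    if not candidates:
        raise LookupError("no catalog entry satisfies the requirements")
    return min(candidates, key=lambda s: s["cost_per_month"])

if __name__ == "__main__":
    # Finance's copy of the app vs. the developers' integration copy.
    print(choose_service(max_rpo_minutes=15, budget=1000))    # gold
    print(choose_service(max_rpo_minutes=1440, budget=100))   # bronze
```

The point isn't the lookup itself; it's that a common connectivity layer is what would let the application move between those tiers without being rewritten.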
While services layers are certainly a means to the same end, such layers are not universally supported. There’s no “standard” for them, not even a set of best practice guidelines, and the resulting application code suffers exactly the same issues as with the use of proprietary database connectivity: lock-in. You can’t pick one up and move it to the cloud, or to another database, without changing some code. Granted, a services layer is more efficient in this sense as it serves as an architectural strategic point of control at which connectivity is aggregated and thus database implementation and specifics are abstracted from the application. That means the database can be changed without impacting end-user applications; only the services layer need be modified. But even that approach is problematic for packaged applications that rely upon database connectivity directly and do not support such service layers. A DCL, ostensibly, would support packaged and custom applications if it were implemented properly in all commercial database offerings.

CONNECTIVITY CARTEL

And therein lies the problem – if it were implemented properly in all commercial database offerings. There is a risk here of a connectivity cartel arising, where database vendors form alliances with other database vendors to support a DCL while “locking out” vendors who they have decided do not belong. Because the DCL depends on supporting “proprietary SQL extensions, data types, and data structures natively” there may be a need for database vendors to collaborate as a means to properly support those proprietary features. If collaboration is required, it is possible to deny that collaboration as a means to control who plays in the market. It’s also possible for a vendor to slightly change some proprietary feature in order to “break” the others’ support. And of course the sheer volume of work necessary for a database vendor to support all other database vendors could overwhelm smaller database vendors, leaving them with no real way to support everyone else.

The idea of a DCL is an interesting one, and it has its appeal as a means to forward compatibility for migration – both temporary and permanent. Will it gain in popularity? For the latter, perhaps, but for the former? Less likely. The inherent difficulties and scope of supporting such a wide variety of databases natively will certainly inhibit any such efforts. Solutions such as a RESTful interface, a la PHP REST SQL, or a JSON-over-HTTP solution like DBSlayer may be more appropriate in the long run if they were to be standardized (a rough sketch of that idea appears below). And by standardized I mean standardized with industry-wide and agreed-upon specifications. Not more of the “more what you’d call ‘guidelines’ than actual rules” that we already have.

Database Migrations are Finally Becoming Simpler Enterprise Information Integration | Data Without Borders Review: EII Suites | Don't Fear the Data The Database Tier is Not Elastic Infrastructure Scalability Pattern: Sharding Sessions F5 Friday: THE Database Gets Some Love The Impossibility of CAP and Cloud Sessions, Sessions Everywhere Cloud-Tiered Architectural Models are Bad Except When They Aren’t
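Here is the rough sketch promised above of a JSON-over-HTTP database interface in the spirit of DBSlayer or PHP REST SQL. The envelope and endpoint are hypothetical — this is not either project's actual wire format — the point is that a standardized request shape, rather than vendor-specific connectivity, is what would make such an interface portable.

```python
# Sketch of a JSON-over-HTTP database interface. The envelope and endpoint are
# hypothetical, not DBSlayer's or PHP REST SQL's actual wire format.
import json
from urllib import request

def build_query_request(endpoint: str, sql: str, params: list) -> request.Request:
    """Package a parameterized query as a JSON POST the service could accept."""
    body = json.dumps({"sql": sql, "params": params}).encode("utf-8")
    return request.Request(endpoint, data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")

if __name__ == "__main__":
    req = build_query_request(
        "http://db-service.example.com/query",           # placeholder endpoint
        "SELECT id, total FROM orders WHERE customer_id = ?",
        [42],
    )
    # request.urlopen(req) would send it; here we just show what goes on the wire.
    print(req.full_url)
    print(req.data.decode("utf-8"))
```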
Defense in Depth in Context

In the days of yore, a military technique called Defense-in-Depth was used to protect kingdoms, castles, and other locations where you might be vulnerable to attack. It's a layered defense strategy where the attacker would have to breach several layers of protection to finally reach the intended target. It allows the defender to spread their resources and not put all of the protection in one location. It's also a multifaceted approach to protection in that there are other mechanisms in place to help; and it's redundant, so if a component fails or is compromised, there are others ready to step in to keep the protection intact. Information technology also recognizes this technique as one of the 'best practices' when protecting systems. The infrastructure and the systems it supports are fortified with a layered security approach. There are firewalls at the edge and often, security mechanisms at every segment of the network. Circumvent one, the next layer should net them.

There is one little flaw with the Defense-in-Depth strategy - it is designed to slow down attacks, not necessarily stop them. It gives you time to mobilize a counter-offensive, and it's an expensive and complex proposition if you are an attacker. It's more of a deterrent than anything and ultimately, the attacker could decide that the benefits of continuing the attack outweigh the additional costs. In the digital world, it is also interpreted as redundancy: place multiple iterations of a defensive mechanism within the path of the attacker. The problem is that the only way to increase the cost and complexity for the attacker is to raise the cost and complexity of your own defenses. Complexity is the kryptonite of good security, and what you really need is security based on context.

Context takes into account the environment or conditions surrounding an event to make an informed decision about how to apply security. This is especially true when protecting a database. Database firewalls are critical components to protecting your valuable data and can stop a SQL injection attack, for instance, in an instant. What they lack is the ability to decipher contextual data like userid, session, cookie, browser type, IP address, location and other metadata about who or what actually performed the attack. While a database firewall can see that a particular SQL query is invalid, it cannot decipher who made the request. Web Application Firewalls, on the other hand, can gather user-side information, since many of their policy decisions are based on the user's context. A WAF monitors every request and response from the browser to the web application and consults a policy to determine if the action and data are allowed. It uses such information as user, session, cookie and other contextual data to decide if it is a valid request.

Independent technologies that protect against web attacks or database attacks are available, but they have not been linked to provide unified notification and reporting. Now imagine if your database was protected by a layered, defense-in-depth architecture along with the contextual information to make informed, intelligent decisions about database security incidents. The integration of BIG-IP ASM with Oracle's Database Firewall offers the database protection that Oracle is known for and the contextual intelligence that is baked into every F5 solution. Unified reporting for both the application firewall and database firewall provides more convenient and comprehensive security monitoring.
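The value of pairing the two layers is easier to see with a simplified sketch: attach the HTTP-layer context a WAF can observe (user, session, client IP, user agent) to a database-layer alert, so the combined record answers both "what happened" and "who did it". This is only an illustration of the reporting shape, not the actual BIG-IP ASM / Oracle Database Firewall integration, and the injection check is deliberately naive.

```python
# Toy illustration of correlated reporting: a database-layer alert enriched
# with the web-tier context only a WAF can supply. Not the actual product
# integration -- just the shape of a unified incident record.
import re
from datetime import datetime, timezone

SQLI_PATTERN = re.compile(r"('\s*or\s*'1'\s*=\s*'1|union\s+select|;--)", re.IGNORECASE)

def correlate(http_context: dict, sql_statement: str):
    """Return a unified incident record if the SQL looks like an injection."""
    if not SQLI_PATTERN.search(sql_statement):
        return None
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "alert": "possible SQL injection",
        "statement": sql_statement,
        # Context only the web tier can supply:
        "user": http_context.get("user"),
        "session": http_context.get("session"),
        "client_ip": http_context.get("client_ip"),
        "user_agent": http_context.get("user_agent"),
    }

if __name__ == "__main__":
    ctx = {"user": "alice", "session": "9f2c", "client_ip": "203.0.113.7",
           "user_agent": "Mozilla/5.0"}
    print(correlate(ctx, "SELECT * FROM users WHERE name = '' OR '1'='1'"))
```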
Integration between the two security solutions offers a holistic approach to protecting the web and database tiers from SQL injection-type attacks. The integration gives you the layered protection many security professionals recognize as a best practice, plus the contextual information needed to make intelligent decisions about what action to take. This solution provides improved SQL injection protection to F5 customers and correlated reporting for richer forensic information on SQL injection attacks to Oracle database customers. It’s an end-to-end web application and database security solution to protect data, customers, and their businesses.

ps

Resources: F5 Joins with Oracle to Offer Enhanced Security for Web-Based Database Applications Security for Web-Based Database Applications Enhanced With F5 and Oracle Product Integration Using Oracle Database Firewall with BIG-IP ASM F5 Networks Adds To Oracle Database Oracle Database Firewall BIG-IP Application Security Manager The “True Security Company” Red Herring F5 Friday: Two Heads are Better Than One
The STAR of Cloud Security

The Cloud Security Alliance (CSA), a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within cloud computing, recently announced that it is launching (in Q4 of 2011) a publicly accessible registry that will document the security controls provided by various cloud computing offerings. The idea is to encourage transparency of security practices within cloud providers and help users evaluate and determine the security of their current cloud provider or a provider they are considering. The service will be free.

CSA STAR (Security, Trust and Assurance Registry) is open to all cloud providers, whether they offer SaaS, PaaS or IaaS, and allows them to submit self-assessment reports that document compliance in relation to the CSA published best practices. The CSA says that the searchable registry will allow potential cloud customers to review the security practices of providers, accelerating their due diligence and leading to higher-quality procurement experiences. There are two different types of reports that a cloud provider can submit to indicate its compliance with CSA best practices. The first is the Consensus Assessments Initiative Questionnaire (CAIQ), a 140-question document which provides industry-accepted ways to document what security controls exist in IaaS, PaaS, and SaaS offerings; the second is the Cloud Controls Matrix (CCM), which provides a controls framework that gives a detailed understanding of security concepts and principles aligned to the Cloud Security Alliance guidance in areas like ISACA COBIT, PCI, and NIST. Providers who choose to take part and submit the documents are on the ‘honor system’ since this is a self-assessment, and users will need to trust that the information is accurate. CSA is encouraging providers to participate and says that, in doing so, they will address some of the most urgent and important security questions buyers are asking, and can dramatically speed up the purchasing process for their services. In addition to self-assessments, CSA will provide a list of providers who have integrated CAIQ, CCM and other components from CSA’s Governance, Risk Management and Compliance (GRC) stack into their compliance management tools.

This should help those who are still a bit hesitant about cloud services. The percentage of those claiming ‘security issues’ as a deterrent for cloud deployments has steadily dropped over the last year. Last year around this time, on any given survey, anywhere from 42% to 73% of respondents said cloud technology does not provide adequate security safeguards and that security concerns had prevented their adoption of cloud computing. In a recent cloud computing study from TheInfoPro, only 13% cited security worries as a cloud roadblock, behind up-front costs at 15%. A big difference from a year ago. In that most recent survey, they found ‘fear of change’ to be the biggest hurdle for cloud adoption. Ahhhh, change. One of the things most difficult for humans. Change is constant, yet the basics are still the same - education, preparation, and anticipation of what cloud is about and what it can offer is a necessity for success.
ps

References: CSA focuses best-practice lens on cloud security Assessing the security of cloud providers CSA Registry Strives for Security Transparency of Providers Cloud Security Alliance Introduces Provider Trust and Assurance Registry Transparency Key To Cloud Security Cloud Security Alliance launches registry: not a moment too soon Fear of Change Impedes Cloud Adoption for Many Companies F5 Cloud Computing Solutions

Interoperability between clouds requires more than just VM portability
The issue of application state and connection management is one often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure due to the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real-time. The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been subsequently addressed in a variety of articles and blogs since the concepts of cloud computing and virtualization have gained momentum. The Burton Group has addressed this issue with regards to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the blog supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models and also posits that interoperability is essentially enabled simply by virtual machine portability.

From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service":

First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while its probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms. While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to other, thereby enabling interoperability. ... Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS use to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP -- another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The "ID" and "locator" are separated so that the packet with the "ID" can be sent to the "locator" for address resolution. The "locator" can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the "locator". [emphasis added]

If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service end-points in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center and not simply a new location within the same data center; an issue LISP does not even consider.
That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but it is not a roadblock; it's merely a speed-bump in the road to implementation. We'll come back to that later; first let's examine the emphasized statement that seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds - specifically IaaS clouds.

I'm sure the author didn't mean to imply that it's that simple; that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interests of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself. Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in such a way as to be peculiar to the application - and vice-versa. We call it an "ecosystem" for a reason: because there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application. One cannot simply move a virtual machine from one location to another, regardless of the interoperability of virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment. Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, as the components must be able to collaborate, which requires connectivity information. Which brings us back to the problems associated with LISP and its focus on external discovery and location.

There's just a lot more to interoperability than pushing around virtual images, regardless of what those images contain: application, data, identity, security, or networking. Portability between virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure the seamless transition from one IaaS cloud environment to another.

RELATED ARTICLES & BLOGS Who owns application delivery meta-data in the cloud? More on the meta-data menagerie The Feedback Loop Must Include Applications How VM sprawl will drive the urgency of the network evolution The Diseconomy of Scale Virus Flexibility is Key to Dynamic Infrastructure The Three Horsemen of the Coming Network Revolution As a Service: The Many Faces of the Cloud

Cloud Connect 2010: Got a Question About Infrastructure Interoperability But You Can’t Attend? I Got Your Back…
Hey there! CloudConnect is next week (already?) and while some of us are already on a plane heading to the Bay area to kick things off (Shlomo Swidler is already on his way, according to his tweets at 36,000 feet) some of us will be lounging preparing for our various workshops and panels until early next week. That being the case, if you’re not going to be attending and thus missing the panel I’m moderating (what? How could you miss that?) but had a burning question you wanted to ask one of the panelists, let me know. Leave a comment, send a tweet, compose an e-mail, write me a letter (better hurry, Green Bay is a long way from everywhere). If we can fit it in (how many people actually ask live questions during the Q&A, right?) we’ll get it answered, and tweet or post a follow-up next week. Come to think of it, even if you are attending and just can’t make the panel, or you’re like me and don’t like asking questions in public, send your question anyway.

INFRASTRUCTURE INTEROPERABILITY in a CLOUDY WORLD

Interoperability between networks has fueled the growth of applications that has in turn spurred the growth of networks and internetworking. This cycle has led to increasing strain on networks, applications, and the people who manage them. The 'virtualization' of networks, servers, storage, and applications as well as cloud computing not only quickens growth rates, but changes the nature of demand placed on infrastructure. Virtualization and cloud computing ultimately require new kinds of interoperability to reduce the burdens imposed by these technologies. This panel of cloud computing vendors and users will review the challenges of dynamic infrastructure design -- infrastructure capable of sustaining growth while relieving stress -- and will suggest the types of standards necessary to make those infrastructures a reality.

Our notable panel includes (in order of the value of the bribes they sent to ensure I was nice to them*): Rich Miller, General Manager and Principal, Telematica Inc. Surendra Reddy, Vice President, Cloud Computing, Yahoo Glenn Dasmalchi, Technical Chief of Staff, Cisco Bob Grossman, Managing Partner, Open Data Group Apurva Dave, Vice President, Product Marketing and Alliances, Riverbed Technology

So if you’ve got a question, let me know! I’ll do my best to get it answered.

*Not really true, the order is from the panel listing on the CloudConnect site. No one has bribed me to be nice. Yet. :-)

F5 Friday: Thanks for calling... please press 1 for IPv6 or 2 for IPv4.
World IPv6 Day is June 8. We’re ready, how about you?

World IPv6 Day, scheduled for 8 June 2011, is a global-scale test flight of IPv6 sponsored by the Internet Society. On World IPv6 Day, major web companies and other industry players will come together to enable IPv6 on their main websites for 24 hours. The goal is to motivate organizations across the industry — Internet service providers, hardware makers, operating system vendors and web companies — to prepare their services for IPv6 to ensure a successful transition as IPv4 address space runs out.

This is more than a marketing play to promote IPv6 capabilities; it’s a serious test to ensure that services are prepared to meet the challenge of a dual-stack environment of the kind that will be necessary to support the migration from IPv4 to IPv6. Such a migration is not a trivial task, as it requires more than simply flipping a switch in the billions of components, applications and services that make up what we call “The Internet”. That’s because IPv6 shares basic concepts like routing, switching and internetworking communication with IPv4, but the technical bits that describe hosts, services and endpoints on the Internet and in the data center are different enough to make cross-protocol communication challenging. Supporting IPv6 is easy; supporting communication between IPv6 and IPv4 during such a massive migration is not. If you consider how tightly coupled not only routing and switching but also applications and myriad security, acceleration, access and application-centric networking policies are to IP, you start to see how large a task such a migration really will be. Cloud computing hasn’t helped there, relying as it does on IP addresses as the primary mechanism for identifying instances of applications as they are provisioned and decommissioned throughout the day. All that eventually needs to change, to be replaced with IPv6-compatible systems, components and management frameworks, and it’s not going to happen in a single day.

FIRST THINGS FIRST

The first step is simply to lay the foundation for services and core Internet communications to support IPv6, and that’s what World IPv6 Day is promoting – an IPv6 Internet with IPv6-capable services on the outside interacting with other IPv6-capable services, networking components and clients. In many ways, World IPv6 Day will illustrate the power of loose coupling, of service-oriented networking and architectures. Most organizations aren’t ready for the gargantuan task of migrating their data centers to IPv6, nor for the investment that may be required in upgrading or replacing core infrastructure to support the new standard. The beauty of loose coupling and translative gateways, however, is that they don’t have to – yet. As part of our participation in World IPv6 Day, F5’s IT team worked hard - and ate a whole lot of our own dog food - to ensure that users have a positive experience while browsing our sites from an IPv6 device. That means you don’t have to press “1” for IPv6 or “2” for IPv4, as you do when communicating with organizations that support multiple languages. Like our own customers, we have an organizational reliance on IP addresses in the network and application infrastructure that thoroughly permeates configurations and even application logic. But leveraging our own BIG-IP Local Traffic Manager (LTM) with the IPv6 Gateway Module means we don’t have to perform a mass IPectomy on our entire internal infrastructure.
Using the IPv6 gateway we’re able to maintain our existing infrastructure – all talking IPv4 – while providing IPv6 interfaces to Internet-facing infrastructure and clients. Both our corporate site (www.f5.com) and our community site (devcentral.f5.com) have been “migrated” to IPv6 and stand ready to speak what will one day be the lingua franca of the Internet. Granted, we had some practice at Interop 2011 supporting the Interop NOC IPv6 environment. F5 provided network and DNS translations and facilitated access and functionality for both IPv4 and IPv6 clients to resources on the Interop network. F5 also provided an IPv6 gateway to the www.interop.com website. Because organizations can continue to leverage IPv4 internally with an IPv6 gateway – and thus make no changes to their internal architecture, infrastructure, and applications – there is less pressure to migrate immediately, which can reduce the potential for introducing problems that cause downtime and outages. As Mike Fratto mentioned when describing Network Computing’s IPv6 enablement using BIG-IP:

Like many other organizations, we have to migrate to IPv6 at some point, and this is the first step in the process--getting our public-facing servers ready. There is no rush to roll out massive changes, and by taking the transition in smaller bits, you will be able to manage the transition seamlessly.

A planned, conscious effort to move to IPv6 internally in stages will reduce the overall headaches and inevitable outages caused by issues sure to arise during the process.

F5 and IPv6

F5 BIG-IP supports IPv6, but more importantly its IPv6 Gateway Module supports efforts to present an IPv6 interface to the public-facing world while maintaining an existing IPv4-based infrastructure. Deploying a gateway can provide the translation necessary to enable the entire organization to communicate with IPv6 regardless of the IP version utilized internally. A gateway translates between IP versions rather than leveraging tunneling or other techniques that can cause confusion to IP-version-specific infrastructure and applications. Thus if an IPv6 client communicates with the gateway and the internal network is still completely IPv4, the gateway performs a full translation of the requests bi-directionally to ensure seamless interoperation. This allows organizations to continue utilizing their existing investments – including network management software and packaged applications that may be under the control of a third party and are not yet IPv6-aware – while publicly moving to support IPv6.

Additionally, F5 BIG-IP Global Traffic Manager (GTM) handles IPv6 integration natively when answering AAAA (IPv6) DNS requests and includes a checkbox feature to reject IPv6 queries for Wide IPs that only have IPv4 addresses, which forces the client DNS resolver to re-request asking for the IPv4 address. This solves a common problem with deployment of dual-stack IPv6 and IPv4 addressing: operating systems try to query for an IPv6 address first and will hang or delay if they don’t get a rejection. GTM solves this problem by supporting both address schemes simultaneously.

Learn More

For Enterprises: Controlling Your Migration to IPv6: A Gateway to Tomorrow IPv6—101: Introduction IPv6 - Bridging the Gap to Tomorrow

For Service Providers: Managing IPv6 in Service Provider Networks with BIG-IP Devices Service Provider Series: Managing the IPv6 Migration

If you’ve got an IPv6-enabled device, give the participating sites on June 8 a try.
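To see the dual-stack behavior described above from the client side — ask for IPv6 (AAAA) answers first, then fall back to IPv4 (A) — a few lines against the system resolver are enough. This is a sketch of client behavior, not of how GTM is configured; www.f5.com is used simply because it is one of the sites mentioned above.

```python
# Quick look at what a dual-stack client sees: query for IPv6 answers first,
# then IPv4, falling back the way a well-behaved OS/application should.
import socket

def resolve_dual_stack(host: str, port: int = 443):
    """Return (family, address) pairs, IPv6 answers first."""
    results = []
    for family, label in ((socket.AF_INET6, "IPv6"), (socket.AF_INET, "IPv4")):
        try:
            for info in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
                results.append((label, info[4][0]))
        except socket.gaierror:
            # No record of this family -- e.g. an IPv4-only site on World IPv6 Day.
            pass
    return results

if __name__ == "__main__":
    for label, addr in resolve_dual_stack("www.f5.com"):
        print(label, addr)
```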
While we’ll all learn a lot about IPv6 and any potential pitfalls with a rollout throughout the day just by virtue of the networking that’s always going on under the hood, without client participation it’s hard to gauge whether there’s more work to be done on that front. Even if your client isn’t IPv6-enabled, give these sites a try – they should be supporting both IPv6 and IPv4, and thus you should see no discernible difference when connecting. If you do, let us (or the site you’re visiting) know – it’s important for everyone participating in World IPv6 Day to hear about any unexpected issues or problems so we can all work to address them before a full IPv6 migration gets under way. You can also participate on DevCentral: post your IPv6 questions and our DevCentral team members will do their best to answer them. Join our live roundtable podcast on June 8 at 11:00 a.m. (Pacific) to hear your IPv6 questions answered and get professional tips from F5's IPv6 expert guests. So give it a try and participate if you can, and make it a great day!

The Inevitable Eventual Consistency of Cloud Computing
An IDC survey highlights the reasons why private clouds will mature before public, leading to the eventual consistency of public and private cloud computing frameworks.

Network Computing recently reported on a very interesting research survey from analyst firm IDC. This one was interesting because it delved into concerns regarding public cloud computing in a way that most research surveys haven’t, including asking respondents to weight their concerns as they relate to application delivery from a public cloud computing environment. The results? Security, as always, tops the list. But close behind are application delivery related concerns such as availability and performance.

Network Computing – IDC Survey: Risk In The Cloud
While growing numbers of businesses understand the advantages of embracing cloud computing, they are more concerned about the risks involved, as a survey released at a cloud conference in Silicon Valley shows. Respondents showed greater concern about the risks associated with cloud computing surrounding security, availability and performance than support for the pluses of flexibility, scalability and lower cost, according to a survey conducted by the research firm IDC and presented at the Cloud Leadership Forum IDC hosted earlier this week in Santa Clara, Calif. “However, respondents gave more weight to their worries about cloud computing: 87 percent cited security concerns, 83.5 percent availability, 83 percent performance and 80 percent cited a lack of interoperability standards.”

The respondents rated the risks associated with security, availability, and performance higher than the always-associated benefits of public cloud computing: lower costs, scalability, and flexibility. That ultimately results in a reluctance to adopt public cloud computing and is likely driving these organizations toward private cloud computing, because public cloud can’t or won’t at this point address these challenges, but private cloud computing can and is – by architecting a collection of infrastructure services that can be leveraged by (internal) customers on an application-by-application (and sometimes request-by-request) basis.

PRIVATE CLOUD will MATURE FIRST

What will ultimately bubble up and become more obvious to public cloud providers is customer demand. Clouderati like James Urquhart and Simon Wardley often refer to this process as commoditization or standardization of services. These services – at the infrastructure layer of the cloud stack – will necessarily be driven by customer demand; by the market. Because customers right now are not fully exercising public cloud computing as they would their own private implementation – replete with infrastructure services, business-critical applications, and adherence to business-focused service level agreements – public cloud providers are at a bit of a disadvantage. The market isn’t telling them what they want and need, and thus public cloud providers are left to fend for themselves. Or they may be pandering necessarily to the needs and demands of a few customers that have fully adopted their platform as their data center du jour. Internal to the organization there is a great deal more going on than some would like to admit.
Organizations have long since abandoned even the pretense of caring about the definition of “cloud” and whether or not there exists such a thing as “private” cloud, and have forged their way forward past “virtualization plus” (a derogatory and dismissive term often used to describe such efforts by some public cloud providers) and into the latter stages of the cloud computing maturity model. Internal IT organizations can and will solve the “infrastructure as a service” conundrum because they necessarily have a smaller market to address. They have customers, but it is a much smaller and well-defined set of customers which they must support, and thus they are able to iterate over the development processes and integration efforts necessary to get there much quicker and without as much disruption. Their goal is to provide IT as a service, offering a repertoire of standardized application and infrastructure services that can easily be extended to support new infrastructure services. They are, in effect, building their own cloud frameworks (stack) upon which they can innovate and extend as necessary. And as they do so they are standardizing, whether by conscious effort or as a side-effect of defining their frameworks. But they are doing it, regardless of those who might dismiss their efforts as “not real cloud.” When you get down to it, enterprise IT isn’t driven by adherence to some definition put forth by pundits. It’s driven by a need to provide business value to its customers at the best possible “profit margin” it can. And it’s doing so faster than public cloud providers because it can.

WHEN CLOUDS COLLIDE - EVENTUAL CONSISTENCY

What that means is that in a relatively short amount of time, as measured by technological evolution at least, the “private clouds” of customers will have matured to the point they will be ready to adopt a private/public (hybrid) model and really take advantage of that public, cheap, compute-on-demand that’s so prevalent in today’s cloud computing market. Not just use it as an inexpensive development or test playground, but integrate it as part of their global application delivery strategy. The problem then is aligning the models and APIs and frameworks that have grown up in each of the two types of clouds. Like the concept of “eventual consistency” with regard to data, databases and replication across clouds (intercloud), the same “eventual consistency” theory will apply to cloud frameworks. Eventually there will be a standardized (consistent) set of infrastructure services and network services and frameworks through which such services are leveraged. Oh, at first there will be chaos and screaming and gnashing of teeth as the models bump heads, but as more organizations and providers work together to find that common ground between them they’ll find that, just like the peanut butter and chocolate in a Reese’s Peanut Butter Cup, the two disparate architectures can “taste better together.”

The question that remains is which standardization will be the one with which others must become consistent. Without consistency, interoperability and portability will remain little more than a pipe dream. Will it be standardization driven by the customers, a la the Enterprise Buyer’s Cloud Council? Or will it be driven by providers in an “if you don’t like what we offer go elsewhere” market? Or will it be driven by a standards committee comprised primarily of vendors with a few “interested third parties”?
The Inter-Cloud: Will MAE become a MAC?
If public, private, hybrid, cumulus and stratus weren’t enough, the ‘Inter-Cloud’ concept came up again at the Cloud Connect gathering in San Jose last week. According to the Wikipedia entry, the term was first introduced in 2007 by Kevin Kelly; both Lori MacVittie and Greg Ness wrote about the Intercloud last June, and many credit James Urquhart with bringing it to everyone’s attention. Since there is no real interoperability between clouds, what happens when one cloud instance wants to reference a service in another cloud? Enter the Inter-Cloud. As with most things related to cloud computing, there has been lots of debate about exactly what it is, what it’s supposed to do and when its time will come. In the ‘Infrastructure Interoperability in a Cloudy World’ session at Cloud Connect, the Inter-Cloud was referenced as the ‘transition point’ when applications in a particular cloud need to move. Application mobility comes into play with cloud balancing, cloud bursting, disaster recovery, sensitive data in private/application in public, and any other scenario where application fluidity is desired and/or required. An Inter-Cloud is, in essence, a mesh of different cloud infrastructures governed by standards that allow them to interoperate.

As ISPs were building out their own private backbones in the 1990s, the Internet needed a way to connect all the autonomous systems to exchange traffic. The Network Access Points (NAPs) and Metropolitan Area Ethernets (now Exchanges – MAE East/MAE West/etc.) became today’s Internet Exchange Points (IXPs). Granted, the agreed standard for interoperability – TCP/IP and specifically BGP – made that possible, and we’re still waiting on something like that for the cloud; plus we’re now dealing with huge chunks of data (images, systems, etc.) rather than simple email or light web browsing. I would imagine that the major cloud providers already have connections to the major peering points, and someday there just might be Metro Area Clouds (MAC West, MAC East, MAC Central) and other cloud peering locations for application mobility. Maybe cloud providers with similar infrastructures (running a particular hypervisor on certain hardware with specific services) will start with private peering, like the ISPs of yore. The reality is that it probably won’t happen that way, since clouds are already part of the Internet, the needs of the cloud are different, and an agreed method is far from completion. It is still interesting to envision, though. I also must admit, I had completely forgotten about the Inter-Cloud and you hear me calling it the ‘Intra-Cloud’ in this interview with Lori at Cloud Connect. Incidentally, it’s fun to read articles from 1999 talking about the Internet’s ‘early days’ of ISP peering and those from today on how it has changed over the years.

ps

F5 BIG-IP Access Policy Manager and Oracle Access Manager Integration - Part 2
A demonstration of how BIG-IP Access Policy Manager and Oracle Access Manager are integrated, along with steps on how to configure the BIG-IP APM. If you missed it, Part 1 can be viewed: F5 Access Policy Manager & Oracle Access Manager Integration - Part 1