5 Years Later: OpenAJAX Who?
Five years ago the OpenAjax Alliance was founded with the intention of providing interoperability between what was quickly becoming a morass of AJAX-based libraries and APIs. Where is it today, and why has it failed to achieve more prominence?

I stumbled recently over a nearly five year old article I wrote in 2006 for Network Computing on the OpenAjax initiative. Remember, AJAX and Web 2.0 were just coming of age then, and mentions of Web 2.0 or AJAX were much like that of "cloud" today. You couldn't turn around without hearing someone promoting their solution by associating with Web 2.0 or AJAX. After reading the opening paragraph I remembered clearly writing the article and being skeptical, even then, of what impact such an alliance would have on the industry. Being a developer by trade I'm well aware of how impactful "standards" and "specifications" really are in the real world, but the problem – interoperability across a growing field of JavaScript libraries – seemed at the time real and imminent, so there was a need for someone to address it before it completely got out of hand.

With the OpenAjax Alliance comes the possibility for a unified language, as well as a set of APIs, on which developers could easily implement dynamic Web applications. A unified toolkit would offer consistency in a market that has myriad Ajax-based technologies in play, providing the enterprise with a broader pool of developers able to offer long term support for applications and a stable base on which to build applications. As is the case with many fledgling technologies, one toolkit will become the standard—whether through a standards body or by de facto adoption—and Dojo is one of the favored entrants in the race to become that standard.
-- AJAX-based Dojo Toolkit, Network Computing, Oct 2006

The goal was simple: interoperability. The way in which the alliance went about achieving that goal, however, may have something to do with its lackluster performance lo these past five years and its descent into obscurity.

5 YEAR ACCOMPLISHMENTS of the OPENAJAX ALLIANCE

The OpenAjax Alliance members have not been idle. They have published several very complete and well-defined specifications, including one "industry standard": OpenAjax Metadata.

OpenAjax Hub
The OpenAjax Hub is a set of standard JavaScript functionality defined by the OpenAjax Alliance that addresses key interoperability and security issues that arise when multiple Ajax libraries and/or components are used within the same web page. (OpenAjax Hub 2.0 Specification)

OpenAjax Metadata
OpenAjax Metadata represents a set of industry-standard metadata defined by the OpenAjax Alliance that enhances interoperability across Ajax toolkits and Ajax products. (OpenAjax Metadata 1.0 Specification)
OpenAjax Metadata defines Ajax industry standards for an XML format that describes the JavaScript APIs and widgets found within Ajax toolkits. (OpenAjax Alliance Recent News)

It is interesting to see the calling out of XML as the format of choice in the OpenAjax Metadata (OAM) specification, given the recent rise to ascendancy of JSON as developers' preferred format for APIs. Granted, when the alliance was formed XML was all the rage, and it was believed it would remain the dominant format for quite some time given the popularity of similar technological models such as SOA. But still – the reliance on XML while the plurality of developers race to JSON may provide some insight into why OpenAjax has received very little notice since its inception.
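As a rough illustration of that format gap (this is not the actual OAM schema, just a made-up widget descriptor), here is the same metadata expressed as XML and converted to the JSON that developers now gravitate toward:

```python
import json
import xml.etree.ElementTree as ET

# A made-up, OAM-flavored XML descriptor for a widget API (not the real OAM schema).
xml_descriptor = """
<widget name="weather" jsClass="example.WeatherWidget">
  <property name="zipCode" datatype="String" required="true"/>
  <method name="refresh"/>
</widget>
"""

# The same information as most developers would rather consume it today: JSON.
root = ET.fromstring(xml_descriptor)
as_json = {
    "name": root.get("name"),
    "jsClass": root.get("jsClass"),
    "properties": [dict(p.attrib) for p in root.findall("property")],
    "methods": [m.get("name") for m in root.findall("method")],
}
print(json.dumps(as_json, indent=2))
```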
Ignoring the XML factor (which undoubtedly is a fairly impactful one), there is still the matter of how the alliance chose to address run-time interoperability with OpenAjax Hub (OAH) – a hub. A publish-subscribe hub, to be more precise, in which OAH mediates for various toolkits on the same page. Don summed it up nicely during a discussion on the topic: it's page-level integration.

This is a very different approach to the problem than it first appeared the alliance would take. The article on the alliance and its intended purpose five years ago clearly indicates where I thought this was going – and where it should go: an industry standard model and/or set of APIs to which other toolkit developers would design and write, such that the interface (the method calls) would be unified across all toolkits while the implementation would remain whatever the toolkit designers desired. I was clearly under the influence of SOA and its decouple-everything premise. Come to think of it, I still am, because interoperability assumes such a model – always has, likely always will. Even in the network, at the IP layer, we have standardized interfaces, with vendor implementations decoupled and completely different at the code base. An Ethernet header is always in a specified format, and it is that standardized interface that makes the Net go over, under, around and through the various routers and switches and components that make up the Internets with alacrity. Routing problems today are caused by human error in configuration or by failure – never by incompatibility in form or function.

Neither specification has really taken that direction. OAM – as previously noted – standardizes on XML and is primarily used to describe APIs and components; it isn't an API or model itself. The Alliance wiki describes the specification: "The primary target consumers of OpenAjax Metadata 1.0 are software products, particularly Web page developer tools targeting Ajax developers." Very few software products have implemented support for OAM. IBM, a key player in the Alliance, leverages the OpenAjax Hub for secure mashup development and also implements OAM in several of its products, including Rational Application Developer (RAD) and IBM Mashup Center. Eclipse also includes support for OAM, as does Adobe Dreamweaver CS4. The IDE working group has developed an open source set of tools based on OAM, but what appears to be missing is adoption of OAM by producers of favored toolkits such as jQuery, Prototype and MooTools. Doing so would certainly make development of AJAX-based applications within development environments much simpler and more consistent, but OAM does not appear to be gaining widespread support or mindshare despite IBM's efforts.

The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule, and the pub/sub basis of OpenAjax, which implements a secondary event-driven framework, seems like overkill. Conflicts between libraries, performance issues with load times dragged down by the inclusion of multiple files, and simplicity tend to drive developers to a single library when possible (which is most of the time).
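For reference, the hub approach amounts to a shared publish/subscribe broker that mediates between toolkits on the same page. OAH defines this as a JavaScript API; the following is only a minimal sketch of the pattern (in Python, with invented topic and handler names) to show what "page-level integration" means in practice:

```python
from collections import defaultdict

class Hub:
    """A minimal publish/subscribe mediator, standing in for the OpenAjax Hub idea."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, data):
        for callback in self._subscribers[topic]:
            callback(data)

hub = Hub()

# Two "toolkits" that know nothing about each other, only about the hub.
hub.subscribe("cart.updated", lambda data: print("toolkit A re-renders cart:", data))
hub.subscribe("cart.updated", lambda data: print("toolkit B updates badge:", data))

hub.publish("cart.updated", {"items": 3})
```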
It appears, simply, that the OpenAjax Alliance – driven perhaps by active members for whom solutions providing integration and hub-based interoperability are typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights) – has chosen a target in another field; one on which developers today are just not playing. It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn't – and likely won't ever – exist. So it's no surprise to discover that references to and activity from OpenAjax have been nearly zero since 2009. Given the statistics showing the rise of jQuery – both as a percentage of site usage and developer usage – to the top of the JavaScript library heap, it appears that at least the prediction that "one toolkit will become the standard—whether through a standards body or by de facto adoption" was accurate. Of course, since that's always the way it works in technology, it was kind of a sure bet, wasn't it?

WHY INFRASTRUCTURE SERVICE PROVIDERS and VENDORS CARE ABOUT DEVELOPER STANDARDS

You might notice in the list of members of the OpenAjax Alliance several infrastructure vendors – folks who produce application delivery controllers, switches and routers, and security-focused solutions. This is not uncommon, nor should it seem odd to the casual observer. All data flows, ultimately, through the network, and thus every component that might need to act in some way upon that data needs to be aware of and knowledgeable regarding the methods used by developers to perform such data exchanges. In the age of hyper-scalability and über security, it behooves infrastructure vendors – and increasingly cloud computing providers that offer infrastructure services – to be very aware of the methods and toolkits being used by developers to build applications. Applying security policies to JSON-encoded data, for example, requires very different techniques and skills than would be the case for XML-formatted data. AJAX-based applications, a.k.a. Web 2.0, require different scalability patterns to achieve maximum performance and utilization of resources than is the case for traditional form-based HTML applications. The type of content as well as the usage patterns for applications can dramatically impact the application delivery policies necessary to achieve operational and business objectives for that application.

As developers standardize through selection and implementation of toolkits, vendors and providers can then begin to focus solutions specifically on those choices. Templates and policies geared toward optimizing and accelerating jQuery, for example, are possible and probable. Being able to provide pre-developed and tested security profiles specifically for jQuery reduces the time to deploy such applications in a production environment by eliminating the test-and-tweak cycle that occurs when applications are tossed over the wall to operations by developers. For example, the jQuery.ajax() documentation states:

By default, Ajax requests are sent using the GET HTTP method. If the POST method is required, the method can be specified by setting a value for the type option. This option affects how the contents of the data option are sent to the server. POST data will always be transmitted to the server using UTF-8 charset, per the W3C XMLHTTPRequest standard. The data option can contain either a query string of the form key1=value1&key2=value2, or a map of the form {key1: 'value1', key2: 'value2'}.
If the latter form is used, the data is converted into a query string using jQuery.param() before it is sent. This processing can be circumvented by setting processData to false. The processing might be undesirable if you wish to send an XML object to the server; in this case, change the contentType option from application/x-www-form-urlencoded to a more appropriate MIME type.

Web application firewalls that may be configured to detect exploitation of such data – attempts at SQL injection, for example – must be able to parse this data in order to make a determination regarding the legitimacy of the input. Similarly, application delivery controllers and load balancing services configured to perform application layer switching based on data values or submission URI will also need to be able to parse and act upon that data. That requires an understanding of how jQuery formats its data and what to expect, so that it can be parsed, interpreted and processed.

By understanding jQuery – and other developer toolkits and standards used to exchange data – infrastructure service providers and vendors can more readily provide security and delivery policies tailored to those formats natively, which greatly reduces the impact of intermediate processing on performance while ensuring the secure, healthy delivery of applications.
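To make that parsing requirement concrete: with processData left at its default, the map form of the data option is serialized into an ordinary application/x-www-form-urlencoded body, which is exactly what an intermediary such as a WAF or ADC sees on the wire. A minimal sketch follows (Python is used here purely to illustrate the wire format; the keys and values are made up):

```python
from urllib.parse import urlencode, parse_qs

# What jQuery produces from a data map such as {key1: 'value1', key2: 'value2'}
# when processData is left at its default of true.
body = urlencode({"key1": "value1", "key2": "value2"})
print(body)  # key1=value1&key2=value2

# What an intermediary (WAF, ADC) has to do before it can inspect or route
# on individual values: parse the urlencoded body back into fields.
fields = parse_qs(body)
print(fields)  # {'key1': ['value1'], 'key2': ['value2']}
```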
XML Threat Prevention

Where should security live? The divide between operations and application developers is pretty wide, especially when it comes to defining who should be ultimately responsible for application security.

Mike Fratto and I have often had lively discussions (read: arguments) about whether security is the responsibility of the developer or of the network and security administrators. It's wholly inappropriate to recreate any of those discussions here, as they often devolve to including the words your mother said not to use in public. 'Nuff said. The truth is that when XML enters the picture, the responsibility for securing that traffic has to be borne by both the network/security administrators and the developers. While there is certainly good reason to expect that developers are doing simple security checks for buffer overflows, length restrictions on incoming data, and strong typing, the fact is that some XML attacks are completely impractical to check for in code. Let's take a couple of attack types as examples.

XML Entity Expansion

This attack is a million laughs, or at least a million or more bytes of memory. Applications need to parse XML in order to manipulate it, so the first thing that happens when XML hits an application is that it is parsed – before the developer even has a chance to check it. In an application server this is generally done before the arguments to the specific operation being invoked are marshaled – meaning it is the application server, not the developer, that is responsible for handling this type of attack. These messages can be used to force recursive entity expansion or other repeated processing that exhausts server resources. The most common example of this type of attack is the "billion laughs" attack, which is widely available. The CPU is monopolized while the entities are being expanded, and each entity takes up X amount of memory – eventually consuming all available resources and effectively preventing legitimate traffic from being processed. It's essentially a DoS (Denial of Service) attack. The payload looks something like this (middle entity declarations elided):

    <!DOCTYPE root [
      <!ENTITY ha "Ha!">
      <!ENTITY ha2 "&ha; &ha;">
      <!ENTITY ha3 "&ha2; &ha2;">
      ...
      <!ENTITY ha128 "&ha127; &ha127;">
    ]>
    <root>&ha128;</root>

It is accepted that almost all traditional DoS attacks (ping of death, teardrop, etc.) should be handled by a perimeter security device – a firewall or an application delivery controller – so why should a DoS attack that is perpetrated through XML be any different? It shouldn't. This isn't a developer problem, it's a parser problem, and for the most part developers have little or no control over the behavior of the parser used by the application server. The application admin, however, can configure most modern parsers to prevent this type of attack – but that assumes their parser is modern and can be configured to handle it. Of course, then you have to wonder what happens if that arbitrary limit inhibits processing of valid traffic. Yeah, it's a serious problem.

SQL Injection

SQL injection is one of the most commonly perpetrated attacks via web-based applications. It consists basically of inserting SQL code into string-based fields which the attacker thinks (or knows) will be passed to a database as part of an SQL query. This type of attack can easily be accomplished via XML as well, simply by inserting the appropriate SQL code into a string element. Aha! The developer can stop this one, you're thinking. After all, the developer has the string and builds the SQL that will be executed, so he can just check for it before he builds the string and sends it off for execution.
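That developer-side check is easy enough to sketch. The following is only an illustration of the kind of per-parameter inspection described above; the XML element name, the helper and especially the pattern blacklist are all hypothetical and deliberately tiny:

```python
import re
import xml.etree.ElementTree as ET

# Deliberately tiny, illustrative blacklist. A real check would need to cover
# far more SQL keywords, comment styles and encodings.
SQL_PATTERNS = [r"('|--|;)", r"\b(union|select|drop|insert|delete)\b"]

def looks_like_sql_injection(value: str) -> bool:
    return any(re.search(p, value, re.IGNORECASE) for p in SQL_PATTERNS)

def extract_username(xml_payload: str) -> str:
    # Assume the incoming document carries a <username> string element.
    doc = ET.fromstring(xml_payload)
    value = doc.findtext("username", default="")
    if looks_like_sql_injection(value):
        raise ValueError("rejected: suspicious input")
    return value

print(extract_username("<request><username>bob</username></request>"))
# extract_username("<request><username>' OR 1=1 --</username></request>")
# would raise ValueError
```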
While this is certainly true, there are myriad combinations of SQL commands that might induce the database to return more data than it should, or to return sensitive data not authorized to the user. That extensive list of commands and combinations of commands would need to be searched for in each and every parameter used to create an SQL query, on every call to the database. That's a lot of extra code and a lot of extra processing – which is going to slow down the application and impede performance. And when a new attack is discovered, each and every function and application needs to be updated, tested, and re-deployed. I'm fairly certain developers have better ways to spend their time than updating parameter checking in every function in every application they have in production. And we won't even talk about third-party applications and the dangers inherent in that scenario.

One of the goals of SOA is engendering reuse, and this is one of the best examples of taking advantage of reuse in order to ensure consistent behavior between applications and to reduce the lengthy development cycle required to update, test, and redeploy whenever a new attack is discovered. By placing the onus for keeping this kind of attack from reaching the server on an edge device such as F5's application firewall, updates to address new attacks are immediately applied to all applications, and there is no need to recode and redeploy applications.

Although there are some aspects of security that are certainly best left to the developer, there are other aspects of security that are better deployed in the network. It's the most effective plan in terms of effort, cost, and consistent behavior where applications are concerned.

Imbibing: Mountain Dew

IT as a Service: A Stateless Infrastructure Architecture Model
The dynamic data center of the future, enabled by IT as a Service, is stateless.

One of the core concepts associated with SOA – and one that failed to really take hold, unfortunately – was the ability to bind, i.e. invoke, a service at run-time. WSDL was designed to loosely couple services to clients, whether they were systems, applications or users, in a way that was dynamic. The information contained in the WSDL provided everything necessary to interface with a service on-demand without requiring the hard-coded integration techniques used in the past. The theory was you'd find an appropriate service, hopefully in a registry (UDDI-based), grab the WSDL, set up the call, and then invoke the service. In this way, the service could "migrate" because its location and invocation-specific meta-data was in the WSDL, not hard-coded in the client, and the client could "reconfigure", as it were, on the fly. There are myriad reasons why this failed to really take hold (notably that IT culture inhibited the enforcement of a strong and consistent governance strategy), but the idea was and remains sound. The goal of a "stateless" architecture, as it were, remains a key characteristic of what is increasingly being called IT as a Service – or "private" cloud computing.

TODAY: STATEFUL INFRASTRUCTURE ARCHITECTURE

The reason the concept of a "stateless" infrastructure architecture is so vital to a successful IT as a Service initiative is the volatility inherent in both the application and network infrastructure needed to support such an agile ecosystem. IP addresses, often used to bypass the latency induced by resolution of host names at run-time from DNS calls, tightly couple systems together – including network services. Routing and layer 3 switching use IP addresses to create a virtual topology of the architecture and ensure the flow of data from one component to the next, based on policy or pre-determined routes, as meets the needs of the IT organization. It is those policies that in many cases can be eliminated, replaced with a more service-oriented approach that provisions resources on-demand, in real-time. This eliminates the "state" of an application architecture by removing delivery dependencies on myriad policies hard-coded throughout the network. Policies are inexorably tied to configurations, which are the infrastructure equivalent of state in the infrastructure architecture.

Because of the reliance on IP addresses imposed by the very nature of network and Internet architectural design, we'll likely never reach full independence from IP addresses. But we can move closer to a "stateless" run-time infrastructure architecture inside the data center by considering which policies can be eliminated and instead invoked at run-time. Not only would such an architecture remove the tight coupling between policies and infrastructure, but also between applications and the infrastructure tasked with delivering them. In this way, applications could more easily be migrated across environments, because they are not tightly bound to the networking and security policies deployed on infrastructure components across the data center. The pre-positioning of policies across the infrastructure requires codifying topological and architectural meta-data in a configuration. That configuration requires management; it requires resources on the infrastructure – storage and memory – while the device is active. It is an extra step in the operational process of deploying, migrating and generally managing an application.
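For contrast, recall the run-time binding model described at the top of this post: everything needed to set up the call travels with the service description, so nothing about the service's location has to be pre-positioned in the client. A minimal sketch of that idea, assuming a hypothetical WSDL URL and operation name and using the third-party zeep library (neither is from the original post):

```python
import zeep

# The WSDL location is the only thing the client is told. Hypothetical URL.
WSDL_URL = "http://registry.example.com/services/inventory?wsdl"

# zeep reads the WSDL at run-time and builds the bindings on the fly,
# so if the service "migrates", only the WSDL needs to change.
client = zeep.Client(wsdl=WSDL_URL)

# Hypothetical operation defined by that WSDL.
result = client.service.GetStockLevel(sku="ABC-123")
print(result)
```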
That configuration is "state", and it can be reduced – though not eliminated – in such a way as to make the run-time environment, at least, stateless and thus more motile.

TOMORROW: STATELESS INFRASTRUCTURE ARCHITECTURE

What's needed to move from a state-dependent infrastructure architecture to one that is more stateless is to start viewing infrastructure functions as services. Services can be invoked, they are loosely coupled, and they are independent of solution and product. Much in the same way that stateless application architectures address the persistence problems that impede real-time migration of applications across disparate environments, so too do stateless infrastructure architectures address the same issue inherent in policy-based networking – policy persistence. While standardized APIs and common meta-data models can alleviate much of the pain associated with migration of architectures between environments, they still assume the existence of specific types of components (unless, of course, a truly service-oriented model is used in which services, not product functions, are encapsulated). Such a model extends the coupling between components and in fact can "break" if said service does not exist. Conversely, a stateless architecture assumes nothing; it does not assume the existence of any specific component but merely indicates the need for a particular service as part of the application session flow, a need that can be fulfilled by any appropriate infrastructure providing such a service. This allows the provider more flexibility, as they can implement the service without exposing the underlying implementation – exactly as a service-oriented architecture intended. It further allows providers – and customers – to move fluidly between implementations without concern, as only the service need exist.

The difficulty is determining which services can be de-coupled from infrastructure components and invoked on-demand, at run-time. This is not just an application concern; it becomes an infrastructure component concern as well, as each component in the flow might invoke an upstream – or downstream – service depending on the context of the request or response being processed. Assuming that such services exist and can be invoked dynamically through a component- and implementation-agnostic mechanism, it is then possible to eliminate many of the pre-positioned, hard-coded policies across the infrastructure and instead invoke them dynamically. Doing so reduces the configuration management required to maintain such policies, as well as eliminating complexity in the provisioning process, which must, necessarily, include policy configuration across the infrastructure in a well-established and integrated enterprise-class architecture. Assuming as well that providers have implemented support for similar services, one can begin to see that the migratory issues are more easily redressed, and that the complications caused by needing to pre-provision services and address policy persistence during migration are mostly eliminated.

SERVICE-ORIENTED THINKING

One way of accomplishing such a major transformation in the data center – from policy to service-oriented architecture – is to shift our thinking from functions to services. It is not necessarily efficient to simply transplant a software service-oriented approach to infrastructure, because the demands on performance and the aversion to latency make a dynamic, run-time binding to services unappealing.
It also requires a radical change in infrastructure architecture by adding the components and services necessary to support such a model – registries and the ability of infrastructure components to take advantage of them. An in-line, transparent invocation method for infrastructure services offers the same flexibility and motility for applications and infrastructure without imposing performance or additional dependency constraints on implementers. But to achieve a stateless infrastructure architectural model, one must first shift their thinking from functions to services and begin to visualize a data center in which application requests and responses carry with them the need for particular downstream and upstream services, rather than encoding it completely in hard-coded policies stored in component configurations. It is unlikely that in the near term we can completely eliminate the need for hard-coded configuration; we're just nowhere near that level of dynamism and may never be. But for many services – particularly those associated with run-time delivery of applications – we can achieve the stateless architecture necessary to realize a more mobile and dynamic data center.

Related reading:
Now Witness the Power of this Fully Operational Feedback Loop
Cloud is the How not the What
Challenging the Firewall Data Center Dogma
Cloud-Tiered Architectural Models are Bad Except When They Aren't
Cloud Chemistry 101
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
Let's Face It: PaaS is Just SOA for Platforms Without the Baggage
The New Distribution of The 3-Tiered Architecture Changes Everything

Wide IP and SOA query
Hi, that is probably an obvious question but I am not sure if I am getting things right. All of this relates to version 13.1.0.6.

Setup:
Wide IP: created automatically
ZoneRunner zone wip.exmaple.com created with: SOA, NS, and A records for the Wide IP
DNS profile with: only GSLB enabled, Unhandled Query Actions: Reject

When performing an A query for the Wide IP, an answer with the correct IP is returned. When performing an SOA query for wip.exmaple.foo, a reply with REFUSED status is returned. The only way I figured out to make the SOA query work is:

Unhandled Query Actions: Allow
Use BIND Server on BIG-IP: Enabled

I wonder if the above is really the only way to have SOA queries working? Are the SOA and NS RRs created just because the bind zone file format (db.external.wip.example.foo.) requires them, and are those RRs not really necessary for any real-life scenarios? Sure, this is just a W2K8 implementation, but to create a delegation (using the wizard) without error for the configured NS (for the wip.example.foo subdomain), the SOA query has to work. That is not a big deal because even if there is an error in the wizard, name resolution is working. Still, I am a bit curious whether the inability to answer SOA queries can be important?

Piotr
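To reproduce the behavior described above from a client, a quick check of the rcode returned for A and SOA queries can be done with the third-party dnspython package (the listener address below is a placeholder, not from the original post):

```python
import dns.message
import dns.query
import dns.rcode

LISTENER = "192.0.2.10"   # placeholder BIG-IP DNS listener address
NAME = "wip.exmaple.foo"  # the Wide IP name from the post

for rdtype in ("A", "SOA"):
    query = dns.message.make_query(NAME, rdtype)
    response = dns.query.udp(query, LISTENER, timeout=2)
    # With "Unhandled Query Actions: Reject" the SOA query is expected to
    # come back REFUSED while the A query returns NOERROR.
    print(rdtype, dns.rcode.to_text(response.rcode()))
```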
Script for validating BIG-IP DNS records before and after cutover
Does anyone have a great script to dig all existing DNS records before and after a BIG-IP DNS cutover? Then maybe I could just do a diff on both files. Or are there other ways anyone has found that have proven to be a good way to validate a BIG-IP DNS cutover? Thanks!
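A rough sketch of the kind of script being asked for, using the third-party dnspython package: it resolves a fixed list of names and record types against a given server and writes one sorted line per answer, so the pre- and post-cutover output files can be compared with diff. The server address, name list and record types are placeholders to adapt.

```python
import sys
import dns.resolver

SERVER = "192.0.2.10"                            # placeholder: BIG-IP DNS listener
NAMES = ["www.example.com", "app.example.com"]   # placeholder: names to validate
RDTYPES = ["A", "AAAA", "CNAME", "MX", "TXT"]

def dump_records(outfile: str) -> None:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [SERVER]
    lines = []
    for name in NAMES:
        for rdtype in RDTYPES:
            try:
                answer = resolver.resolve(name, rdtype)
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                continue
            for rdata in answer:
                lines.append(f"{name} {rdtype} {rdata.to_text()}")
    with open(outfile, "w") as fh:
        fh.write("\n".join(sorted(lines)) + "\n")

if __name__ == "__main__":
    # Run once before the cutover and once after, then: diff before.txt after.txt
    dump_records(sys.argv[1] if len(sys.argv) > 1 else "records.txt")
```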
Late in 2008, I posited that the death of SOA had been greatly exaggerated. A few months later we pointed out that Anne Thomas Manes of the Burton Group declared SOA officially dead on January 1, 2009, but maintains that "although the word “SOA” is dead, the requirement for service-oriented architecture is stronger than ever." Back then, I was firmly of the opinion that the gleam of SOA was certainly gone, but that it had “reached the beginning of its productive life and if the benefits of SOA are real (and they are) then organizations are likely to start truly realizing the return on their investments.” Well, it’s been almost ten years, so it seems appropriate to look at whether or not SOA has finally stopped kicking. From the dearth of commentary on the topic, it would appear so. Microservices and REST have long captured the bulk of our attention, eclipsing SOA and other XML-related architectures for many years now. A quick look at some stats on the tag “SOA” at Stack Overflow shows 1.7k followers and a suspiciously equal number of questions. Which, when compared to the tens of millions of developers in the world that use the site on a daily basis, certainly seems to show a lack of robust life. REST, on the hand, shows an active, vibrant life with more than 53K questions and 13.5K followers. SOAP can only boast 2.9k followers with about 21K questions. Interestingly, a peek at COBOL shows nearly as many followers (1.5k) as SOA, just with fewer questions. Interestingly, all are active, with questions posted within the last 30 minutes of my queries. Which shows life. Faltering, shaky life for some of the technologies in question, perhaps, but life nonetheless. But let’s take a step back and remember that SOA is not a language like COBOL, nor is it an interface style like REST or SOAP. It is also not equal to its typically associated standards of SOAP and WSDL. SOA is, as it represents, a service oriented architecture. One that can make use of SOAP or REST, and in some cases, both. Its services can be implemented in COBOL or Go or node.js. SOA isn’t concerned with the implementation details, but rather the design and architecture of the system as a whole. SOA is primarily about services, and architecting systems based on the interaction of those services. Whether they are implemented using REST or SOAP is no more relevant today than was in 2009. One could argue (and I might) that where “web services” based on SOAP and WSDL were only the penultimate manifestation of SOA, and that microservices and APIs are the ultimate and perhaps ideal form of the concept brought to life. There are few other examples of such a service oriented architecture in existence today, though I would be foolish to discount the possibility of a better one in the future. In fact, without the concept of “service oriented” anything, cloud would not exist. Its entire premise is based on exposing interfaces between services that enable provisioning and management of, well, services automatically. Whether they are SOAP or REST is really irrelevant, except to the implementers and invokers of said services. Like COBOL, SOAP and XML may be waning, but they are both still in active use. Even APIs published yesterday tend to support both SOAP and REST (and we are no exception with iControl and iControl REST), given that many enterprises that invested in the architectural shift to SOA when SOAP was the thing to use are as unlikely to re-architect as those who are still actively developing mainframe applications using COBOL. 
And they are, trust me. So SOA is very much alive. It is thriving in cloud and within containerized architectures that now put the paradigm into practice using microservices and APIs built on RESTful principles, rather than standardized SOAPy ones. The death of SOA was (and sometimes still is) greatly exaggerated. Long live SOA.

Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Modern load balancers (application delivery controllers) blend traditional load-balancing capabilities with advanced, application-aware layer 7 switching to support the design of a highly scalable, optimized application delivery network. Here's the difference between the two technologies, and the benefits of combining them into a single application delivery controller.

LOAD BALANCING

Load balancing is the process of balancing load (application requests) across a number of servers. The load balancer presents to the outside world a "virtual server" that accepts requests on behalf of a pool (also called a cluster or farm) of servers and distributes those requests across all servers based on a load-balancing algorithm. All servers in the pool must contain the same content. Load balancers generally use one of several industry-standard algorithms to distribute requests. Some of the most common standard load-balancing algorithms are:

round-robin
weighted round-robin
least connections
weighted least connections

Load balancers are used to increase the capacity of a web site or application, ensure availability through failover capabilities, and improve application performance.

LAYER 7 SWITCHING

Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content based routing". A layer 7 switch presents to the outside world a "virtual server" that accepts requests on behalf of a number of servers and distributes those requests based on policies that use application data to determine which server should service which request. This allows the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one server can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

Unlike load balancing, layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, layer 7 switching expects that servers will have different content, thus the need to more deeply inspect requests before determining where they should be directed. Layer 7 switches are capable of directing requests based on URI, host, HTTP headers, and anything in the application message. The latter capability is what gives layer 7 switches the ability to perform content-based routing for ESBs and XML/SOAP services.

LAYER 7 LOAD BALANCING

By combining load balancing with layer 7 switching, we arrive at layer 7 load balancing, a core capability of all modern load balancers (a.k.a. application delivery controllers). Layer 7 load balancing combines the standard load-balancing features of a load balancer with the content awareness of layer 7 switching to provide failover and improved capacity for specific types of content. This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available. Layer 7 load balancing also allows additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies that are applicable to the content. For example, data security in the form of data scrubbing is likely not necessary on JPG or GIF images, so it need only be applied to HTML and PHP.
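As a toy illustration of the combination (not how any particular ADC implements it): layer 7 switching picks a pool based on request data, and load balancing then picks a member within that pool. The pool names, member addresses and round-robin choice below are all made up for the example.

```python
import itertools

# Content-specific pools, each tuned for one type of content (members are made up).
POOLS = {
    "images": ["10.0.1.10", "10.0.1.11"],
    "php":    ["10.0.2.10", "10.0.2.11", "10.0.2.12"],
    "static": ["10.0.3.10", "10.0.3.11"],
}

# Round-robin load balancing within each pool.
_rotations = {name: itertools.cycle(members) for name, members in POOLS.items()}

def select_pool(uri: str) -> str:
    # Layer 7 switching: inspect the request (here just the URI) to pick a pool.
    if uri.endswith((".jpg", ".gif", ".png")):
        return "images"
    if uri.endswith(".php"):
        return "php"
    return "static"

def route(uri: str) -> str:
    pool = select_pool(uri)
    server = next(_rotations[pool])  # load balancing: next member in rotation
    return f"{uri} -> pool '{pool}' -> {server}"

for uri in ["/logo.png", "/index.php", "/app.js", "/cart.php"]:
    print(route(uri))
```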
Layer 7 load balancing also allows for increased efficiency of the application infrastructure. For example, only two highly tuned image servers may be required to meet application performance and user concurrency needs, while three or four optimized servers may be necessary to meet the same requirements for PHP or ASP scripting services. Being able to separate out content based on type, URI, or data allows for better allocation of physical resources in the application infrastructure.

Simplifying Application Architecture in a Dynamic Data Center through Virtualization
Application architecture has never really been easy, but the introduction of virtualization may make it even less easy – unless you plan ahead.

Most applications today maintain at least two if not three or more "tiers" within their architecture. Web (presentation), application server (business logic), and database (data) are the three most common "tiers" of an application, web-based or otherwise, with the presence of middleware (queues and buses) being an optional fourth tier, depending on the application and its integration needs. It's never been an easy task to ensure that the web servers know where the application servers are, that the application servers know where the middleware is, and that the middleware knows where the databases are during a deployment. Most of this information is manually configured as pools of connections defined by IP addresses. IP addresses that, with the introduction of virtualization and cloud computing, may be dynamic. Certainly not as dynamic as one would expect in a public cloud environment, but dynamic nonetheless. A high frequency of change is really no more disruptive to such an IP-dependent environment than a single change, as each one requires specific configuration modifications, additions or, in the case of elasticity, removals. It is only somewhat ironic that the same technology that introduces the potential for problems is the same technology that offers a simple solution: virtualization. Only this virtualization is not the one-to-many server-style virtualization, but rather the many-to-one network virtualization that has existed since the advent of proxy-based solutions.

VIRTUAL SERVICE ENDPOINTS

If considered before deployment – such as during design and implementation – then the use of virtual service endpoints, as implemented through network virtualization techniques, can dramatically improve the ability of application architectures to scale up and down and deal with any mobility within the infrastructure that might occur. Rather than directing any given tier to the next directly, all that is necessary is to direct the tier to a virtual service endpoint (an IP-port combination) on the appropriate infrastructure and voila! Instant scalability. The endpoint on the infrastructure, such as an application delivery controller, can ensure scalability and availability (as well as security and other operationally necessary functions) by acting on that single "virtual service endpoint". Each tier scales elastically of its own accord, based on demand, without disrupting the connectivity and availability of other tiers. No other tier need care about or even be aware of changes in the IP address assignments of other tiers, because it sees the entire tier as being the "virtual service endpoint" all the time. This stabilizes integration challenges by eliminating the problems associated with changing IP addresses in each tier of the architecture. It also has the benefit of eliminating the service-IP sprawl that is often associated with clustering or other proxy-based solutions that must also scale along with demand and consume more and more (increasingly) valuable IP addresses. This decoupling is also an excellent form of abstraction that enables versioning and upgrades to occur without disruption, and it can be used to simultaneously support multiple versions of the same interface simply by applying some application (page-level) routing at the point of ingress (the virtual service endpoint) rather than using redirects or rewrites in the application itself.
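To illustrate the decoupling (the addresses and tier names are invented for the example): with virtual service endpoints, each tier's configuration holds exactly one stable address per downstream tier, so pool membership can change without touching application configuration.

```python
# Tightly coupled (shown only for contrast): every tier lists the current member
# IPs of the next tier, so any scale-out/in or migration means editing app config.
COUPLED_CONFIG = {
    "web_to_app": ["10.0.2.10", "10.0.2.11", "10.0.2.12"],
    "app_to_db":  ["10.0.3.10", "10.0.3.11"],
}

# Decoupled: each tier points at a single virtual service endpoint (IP:port)
# on the ADC; the ADC owns the (changing) pool membership behind it.
VIRTUAL_ENDPOINTS = {
    "app_tier": "10.1.0.5:8080",
    "db_tier":  "10.1.0.6:5432",
}

def upstream(tier: str) -> str:
    """Return the one address a client tier needs, regardless of pool size."""
    return VIRTUAL_ENDPOINTS[tier]

print(upstream("app_tier"))  # stays 10.1.0.5:8080 whether 2 or 20 app servers exist
```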
The ability to leverage network virtualization to create virtual service endpoints, through which application architectures can be simplified and scaled, should be seriously considered during the architectural design phase to ensure applications take full advantage of its benefits. Once virtual service endpoints are employed, there are a variety of other functions and capabilities that may be able to further simplify or extend application architectures, such as two-factor authentication and OTP (one time password) options. The future is decoupled – from internal application design to application integration to the network. Employing a decoupling-oriented approach to enterprise architecture should be a focus for architects to enable IT to take advantage of the benefits and eliminate potential operational challenges.

F5 Friday: Programmability and Infrastructure as Code
#SDN #ADN #cloud #devops What does that mean, anyway?

SDN and devops share some common themes. Both focus heavily on the notion of programmability in network devices as a means to achieve specific goals. For SDN it's flexibility and rapid adaptation to changes in the network. For devops, it's more a focus on the ability to treat "infrastructure as code" as a way to integrate into automated deployment processes. Each of these notions is just different enough to mean that systems supporting one don't automatically support the other. An API focused on management or configuration doesn't necessarily provide the flexibility of execution exhorted by SDN proponents as a significant benefit to organizations. And vice versa.

INFRASTRUCTURE as CODE

Devops is a verb, it's something you do. Optimizing application deployment lifecycle processes is a primary focus, and to do that many would say you must treat "infrastructure as code." Doing so enables integration and automation of deployment processes (including configuration and integration) that allow operations to scale along with the environment and demand. The result is automated best practices, the codification of policy and process that assures repeatable, consistent and successful application deployments. F5 supports the notion of infrastructure as code (and has since 2003 or so) in two ways: iControl and iApp.

iControl: iControl, the open, standards-based API for the entire BIG-IP platform, remains the primary integration point for partners and customers alike. Whether it's inclusion in Opscode Chef recipes or pre-packaged solutions with systems from HP, Microsoft, or VMware, iControl offers the ability to manage the control plane of BIG-IP from just about anywhere. iControl is service-enabled and has been accessed and integrated through more programmatic languages than you can shake a stick at. Python, PERL, Java, PHP, C#, PowerShell… if it can access web-based services, it can communicate with BIG-IP via iControl.

iApp: A later addition to the BIG-IP platform, iApp is best-practice application delivery service deployment, codified. iApps are service- and application-oriented, enabling operations and consumers of IT as a Service to more easily deploy requisite application delivery services without requiring intimate knowledge of the hundreds of individual network attributes that must be configured. iApp is also used in conjunction with iControl to better automate and integrate application delivery services into an IT as a Service environment. Using iApp to codify performance and availability policies based on application and business requirements, consumers – through pre-integrated solutions – can simply choose an appropriate application delivery "profile" along with their application to ensure not only deployment but also production success.

Infrastructure as code is an increasingly important view to take of the provisioning and deployment processes for network and application delivery services, as it enables more consistent, accurate policy configuration and deployment. Consider research from Dimension Data that found the "total number of configuration violations per device has increased from 29 to 43 year over year -- and that the number of security-related configuration errors (such as AAA Authentication, Route Maps and ACLS, Radius and TACACS+) also increased. AAA Authentication errors in particular jumped from 9.3 per device to 13.6, making it the most frequently occurring policy violation." The ability to automate a known "good" configuration and policy when deploying application and network services can decrease the risk of these violations and ensure a more consistent, stable (and ultimately secure) network environment.
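As a small illustration of treating BIG-IP configuration as code, the sketch below uses the iControl REST interface (rather than the original SOAP-based iControl API); the host, credentials, pool name and member are placeholders:

```python
import requests

BIGIP = "https://192.0.2.245"   # placeholder management address
AUTH = ("admin", "admin")       # placeholder credentials

# List existing LTM pools - GET /mgmt/tm/ltm/pool returns a JSON collection.
# verify=False only because the management interface typically has a self-signed cert.
resp = requests.get(f"{BIGIP}/mgmt/tm/ltm/pool", auth=AUTH, verify=False)
resp.raise_for_status()
for pool in resp.json().get("items", []):
    print(pool["name"])

# Create a pool declaratively - the same JSON could live in version control
# and be applied by a Chef (or similar) run instead of being hand-entered.
payload = {
    "name": "web_pool",                      # placeholder pool name
    "monitor": "http",
    "members": [{"name": "10.0.1.10:80"}],   # placeholder member
}
requests.post(f"{BIGIP}/mgmt/tm/ltm/pool", json=payload, auth=AUTH, verify=False)
```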
PROGRAMMABILITY

Less with "infrastructure as code" (devops) and more so with SDN comes the notion of programmability. On the one hand, this notion squares well with the "infrastructure as code" concept, as it requires infrastructure to be enabled in such a way as to provide the means to modify behavior at run time, most often through support for a common standard (OpenFlow is the darling standard du jour for SDN). For SDN, this tends to focus on the forwarding information base (FIB), but broader applicability has been noted at times and will no doubt continue to gain traction. The ability to "tinker" with emerging and experimental protocols, for example, is one application of programmability of the network. Rather than wait for vendor support, it is proposed that organizations can deploy and test support for emerging protocols through OpenFlow-enabled networks. While this capability is likely not something large production networks would undertake, still, the notion that emerging protocols could be supported on demand, rather than on a vendor-driven timeline, is often desirable.

Consider support for SIP, before UCS became nearly ubiquitous in enterprise networks. SIP is a message-based protocol, requiring deep content inspection (DCI) capabilities to extract AVP codes as a means to determine routing to specific services. Long before SIP was natively supported by BIG-IP, it was supported via iRules, F5's event-driven network-side scripting language. iRules enabled customers requiring support for SIP (for load balancing and high-availability architectures) to program the network by intercepting, inspecting, and ultimately routing based on the AVP codes in SIP payloads. Over time, this functionality was productized and became a natively supported protocol on the BIG-IP platform. Similarly, iRules enables a wide variety of dynamism in application routing and control by providing a robust environment in which to programmatically determine which flows should be directed where, and how. Leveraging programmability in conjunction with DCI affords organizations the flexibility to do – or do not – as they so desire, without requiring them to wait for hot fixes, new releases, or new products.

SDN and ADN – BIRDS of a FEATHER

The very same trends driving SDN at layers 2-3 have been driving ADN (application delivery networking) for nearly a decade:

Five trends in network are driving the transition to software defined networking and programmability. They are:
• User, device and application mobility;
• Cloud computing and service;
• Consumerization of IT;
• Changing traffic patterns within data centers;
• And agile service delivery.
The trends stretch across multiple markets, including enterprise, service provider, cloud provider, massively scalable data centers -- like those found at Google, Facebook, Amazon, etc. -- and academia/research. And they require dynamic network adaptability and flexibility and scale, with reduced cost, complexity and increasing vendor independence, proponents say.
-- Five needs driving SDNs

Each of these trends applies equally to the higher layers of the networking stack, and each is addressed by a fully programmable ADN platform like BIG-IP. Mobile mediation, cloud access brokers, cloud bursting and balancing, context-aware access policies, granular traffic control and steering, and a service-enabled approach to application delivery are all part and parcel of an ADN. From devops to SDN to mobility to cloud, the programmability and service-oriented nature of the BIG-IP platform enables them all.

Related reading:
The Half-Proxy Cloud Access Broker
Devops is a Verb
SDN, OpenFlow, and Infrastructure 2.0
Devops is Not All About Automation
Applying 'Centralized Control, Decentralized Execution' to Network Architecture
Identity Gone Wild! Cloud Edition
Mobile versus Mobile: An Identity Crisis