SOA Delivery

Layer 7 Switching + Load Balancing = Layer 7 Load Balancing
Modern load balancers (application delivery controllers) blend traditional load-balancing capabilities with advanced, application-aware layer 7 switching to support the design of a highly scalable, optimized application delivery network. Here's the difference between the two technologies, and the benefits of combining them in a single application delivery controller.

LOAD BALANCING

Load balancing is the process of distributing load (application requests) across a number of servers. The load balancer presents to the outside world a "virtual server" that accepts requests on behalf of a pool (also called a cluster or farm) of servers and distributes those requests across all servers based on a load-balancing algorithm. All servers in the pool must contain the same content. Load balancers generally use one of several industry-standard algorithms to distribute requests. Some of the most common standard load balancing algorithms are:

- round-robin
- weighted round-robin
- least connections
- weighted least connections

Load balancers are used to increase the capacity of a web site or application, to ensure availability through failover capabilities, and to improve application performance.

LAYER 7 SWITCHING

Layer 7 switching takes its name from the OSI model, indicating that the device switches requests based on layer 7 (application) data. Layer 7 switching is also known as "request switching", "application switching", and "content-based routing". A layer 7 switch presents to the outside world a "virtual server" that accepts requests on behalf of a number of servers and distributes those requests based on policies that use application data to determine which server should service which request. This allows the application infrastructure to be specifically tuned/optimized to serve specific types of content. For example, one server can be tuned to serve only images, another for execution of server-side scripting languages like PHP and ASP, and another for static content such as HTML, CSS, and JavaScript.

Unlike load balancing, layer 7 switching does not require that all servers in the pool (farm/cluster) have the same content. In fact, layer 7 switching expects that servers will have different content, thus the need to more deeply inspect requests before determining where they should be directed. Layer 7 switches are capable of directing requests based on URI, host, HTTP headers, and anything in the application message. The latter capability is what gives layer 7 switches the ability to perform content-based routing for ESBs and XML/SOAP services.

LAYER 7 LOAD BALANCING

By combining load balancing with layer 7 switching, we arrive at layer 7 load balancing, a core capability of all modern load balancers (a.k.a. application delivery controllers). Layer 7 load balancing combines the standard load-balancing features of a load balancer with the policy-based routing of a layer 7 switch to provide failover and improved capacity for specific types of content. This allows the architect to design an application delivery network that is highly optimized to serve specific types of content but is also highly available. Layer 7 load balancing allows additional features offered by application delivery controllers to be applied based on content type, which further improves performance by executing only those policies that are applicable to the content. For example, data security in the form of data scrubbing is likely not necessary on JPG or GIF images, so it need only be applied to HTML and PHP.
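To make the combination concrete, here is a minimal sketch of a layer 7 load balancer selecting a pool by inspecting the request and then a server within that pool by round-robin. The pool names and routing rules are hypothetical, purely for illustration:

```javascript
// Hypothetical pools, each tuned for a specific type of content.
const pools = {
  images:  { servers: ["img1", "img2"], next: 0 },
  scripts: { servers: ["php1", "php2", "php3"], next: 0 },
  static:  { servers: ["web1", "web2"], next: 0 },
};

// Layer 7 switching: inspect application data (here, just the URI)
// to decide which pool should service the request.
function selectPool(uri) {
  if (/\.(jpg|gif|png)$/i.test(uri)) return pools.images;
  if (/\.(php|asp)$/i.test(uri)) return pools.scripts;
  return pools.static; // HTML, CSS, JavaScript
}

// Load balancing: distribute requests across the chosen pool,
// here with a simple round-robin algorithm.
function selectServer(pool) {
  const server = pool.servers[pool.next];
  pool.next = (pool.next + 1) % pool.servers.length;
  return server;
}

// Layer 7 load balancing is the two applied together.
console.log(selectServer(selectPool("/gallery/photo.jpg"))); // "img1"
```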
Layer 7 load balancing also allows for increased efficiency of the application infrastructure. For example, only two highly tuned image servers may be required to meet application performance and user concurrency needs, while three or four optimized servers may be necessary to meet the same requirements for PHP or ASP scripting services. Being able to separate out content based on type, URI, or data allows for better allocation of physical resources in the application infrastructure.

The API is the Center of the Application (Integration) Universe
#mobile #fasterapp #ccevent Today, at least. Tomorrow, who knows?

Some have tried to distinguish between "mobile cloud" and "cloud" by claiming the former is the use of the web browser on a mobile device to access services while the latter uses device-native applications. Like all things cloud, the marketing fluff is purposefully obfuscating and sweeping under the rug the technology required to make things work for consumers, whether those consumers be your kids or IT professionals. Infrastructure is not eliminated when organizations take to the cloud, nor do the constraints of web-based protocols and methodologies become irrelevant when Bob uses a service to store photos of his kid's piano recital on Flickr. The applications and web browsers on a mobile device are using the same technology, the same protocols, and suffering under the same constraints as the rest of us in wireline land.

If developers are as smart as they are lazy (and I say that as a compliment, because it is the laziness of developers that more often than not leads to innovation) they have already moved to an API-centric model in which web site and device-native app interfaces both leverage the same APIs. This isn't just a social integration phenomenon – it isn't just about Twitter and Facebook and Google. API usage and demand is growing, and it is not expected to stop any time soon. When asked about their desire to connect to services (assuming service = API), developers overwhelmingly responded that they would like to connect to "everything, if it were easy" (API Integration Pain Survey Results). The API is rapidly becoming (if it isn't already) the center of the application (integration) universe.

This unfortunately has the potential to cause confusion and chaos in the data center. When a single API is consumed by multiple clients – mobile, remote, applications, partners, etc. – solutions unique to each quickly seem to make their way into the code to deal with "exceptions" and "peculiarities" inherent to the client platform. That's inefficient and, when one considers the growing number of platforms and form factors associated with mobile communications alone, it is not scalable from a people and process perspective. But the reality is that these exceptions and peculiarities – often caused by a lack of feature parity across form factors and platforms – must be addressed somewhere, and that somewhere is unfortunately almost unilaterally determined to be the application.

Do we need to treat mobile devices differently? In terms of performance and delivery concerns, yes. But that's where we leverage the application delivery tier to differentiate by device to ensure delivery. That's the beauty of an abstracted, service-enabled data center – there's an intelligent and agile layer of application delivery services that mediates between clients (regardless of their form factor) and services to ensure that delivery needs (security, performance, and availability) are met, in part by addressing the unique characteristics and reality of access via mobile devices.

ABSTRACT and ISOLATE

This is exactly the type of problem application delivery is designed to address: multiple clients, multiple networks, all accessing the same application service or API but requiring specific authentication, security, and delivery characteristics to ensure that operational risk is mitigated in the most efficient manner possible.
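Concretely, that mediation might look something like the following sketch. The policy table, client classification, and request shape are all invented; a real delivery tier expresses this as configuration, not application code:

```javascript
// A sketch of per-client mediation at the delivery tier, in front of
// a single, unmodified API. All names and limits are hypothetical.
const policies = {
  mobile:  { maxPerMinute: 60,  compress: true  },
  partner: { maxPerMinute: 600, compress: false },
  web:     { maxPerMinute: 300, compress: true  },
};

const counters = new Map(); // clientId -> requests (per-minute reset elided)

function classify(headers) {
  if (/iPhone|iPad|Android/i.test(headers["user-agent"] || "")) return "mobile";
  if (headers["x-api-key"]) return "partner";
  return "web";
}

// Stand-in for proxying the request to the one backend API.
function forwardToApi(request, options) {
  return { status: 200, compressed: options.compress };
}

function mediate(request) {
  const policy = policies[classify(request.headers)];
  const count = (counters.get(request.clientId) || 0) + 1;
  counters.set(request.clientId, count);
  if (count > policy.maxPerMinute) return { status: 429 }; // throttled
  return forwardToApi(request, { compress: policy.compress });
}
```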
This includes the ability to throttle services based on user and client, a common approach used by mega-sites such as Twitter. This includes the ability to provide single sign-on capabilities to all clients, regardless of platform and form factor, and support for enterprise-grade authentication integration to the same API or application service. This includes leveraging the appropriate security policies to ensure inbound and outbound security of data regardless of client, such that corporate data is not infected and spread to other consumers.

A flexible, scalable application delivery tier addresses the problem of a single API being utilized by a variety of clients in a way that precludes the need to codify specific functionality on a per-platform or per-form-factor basis in the application logic itself, making the API simpler and easier to maintain as well as test and upgrade. It makes APIs and application services more scalable in terms of people and processes, which in turn makes the development and deployment process more efficient and able to focus on new services rather than constantly modifying and updating existing ones.

Service-oriented architecture may have begun in the application demesne as a means to abstract and isolate services such that they could more easily be integrated, maintained, and changed without disruption, but the concept is applicable to the data center as a whole. By leveraging SOA concepts at the data center architecture level, the entire technological landscape of the business can be transformed into one that is ultimately more adaptable, more scalable, and more secure.

I'll be at CloudConnect 2012 and we'll discuss the subject of cloud and performance a whole lot more at the show!

Related blogs & articles:
- Facebook Wins "Worst API" in Developer Survey
- API Integration Pain Survey Results
- IT Survey: Businesses Embrace APIs for Apps Integration, Not Social
- The Pythagorean Theorem of Operational Risk
- At the Intersection of Cloud and Control…
- Operational Risk Comprises More Than Just Security
- IT Services: Creating Commodities out of Complexity
- The Three Axioms of Application Delivery
- The Magic of Mobile Cloud

F5 Friday: Performance Analytics – More Than Eye-Candy Reports
#v11 Application-centric analytics provide better visibility into performance, capacity and infrastructure utilization.

Maintaining the performance and capacity of web sites and critical applications – especially those of the revenue-generating ilk – can be particularly difficult in complex environments. The mix of infrastructure and integration can pose problems when trying to determine exactly where capacity may be constrained or from where performance troubles are originating. Visibility into the application delivery chain is critical if we are to determine where and at what points in the chain performance is being impaired or constraints on capacity imposed, perhaps artificially.

The trouble is that the end-user view of application performance tells you only that a site is or isn't performing up to expectations. It's myopic in that sense, and provides little to no assistance in determining why an application may be performing poorly. Is it the Internet? An external component? An integration point? A piece of the infrastructure? Is it because of capacity (load on servers) or a lack thereof? It's easy to point out that a site is performing poorly or has unacceptable downtime; it's quite another thing to determine why, such that operations and developers can resolve the problem.

Compounding the difficulties inherent in application performance monitoring is the way in which network infrastructure – including application delivery controllers – traditionally reports statistics related to performance. It's very, well, network-focused – with reports that provide details on network-oriented objects such as virtual IP addresses and network segments. This is problematic because when a specific application is having issues, you want to drill down into the application, not a shared IP address or network segment. And you wouldn't be digging into performance data if you didn't already know there was a problem. So providing a simple "total response time" report isn't very helpful at all in pinpointing the root cause. It's important to be able to break out, by application, a more granular view of performance that provides insight into client, network, and server-side performance.

iApp Analytics

BIG-IP v11 introduced iApp, which addresses a real need to provision and manage application delivery services with an application-centric perspective. Along with iApp comes iApp Analytics, a more application-centric view of performance and capacity-related data. This data provides a more holistic view of performance across network, client and server-side components that allows operations to drill down into a specific application and dig around in the data to determine from where performance problems may be originating. This includes per-URI reporting, which is critical for understanding API usage and impact on overall capacity.

For organizations considering more advanced architectural solutions to addressing performance and capacity, this information is critical. For example, architecting scalability domains as part of a partitioning or hyper-local scalability pattern requires a detailed understanding of per-URI utilization. Being able to tie metrics back to a specific business application enables IT to provide a more accurate operational cost to business stakeholders, which enables a more accurate ROI analysis and a proactive approach to addressing potential growth issues before they negatively impact performance or, worse, availability.

Transactions can be captured for diagnosing customer or application issues. Filters make it easy to narrow down the results to specific user agents, geographies, pools or even a client IP. The ability to view the headers and the response data can shave valuable time off identifying problems. Thresholds on a per-application, virtual server or pool member basis can be configured to identify if transactions or throughput levels increase or decrease significantly. This can be an early warning sign that problems are occurring. An alert can be delivered via syslog, SNMP or email when these thresholds are exceeded.
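A rough sketch of that threshold logic, with application names, limits, and the alert sink all invented for illustration:

```javascript
// Hypothetical per-application thresholds; in practice these map to
// per-application, virtual-server or pool-member configuration.
const thresholds = {
  "store-checkout": { minTps: 50, maxTps: 2000 },
  "catalog-search": { minTps: 10, maxTps: 5000 },
};

// Evaluate a measured transactions-per-second sample against its
// application's thresholds and emit an alert when breached.
function checkThreshold(app, measuredTps, alert) {
  const t = thresholds[app];
  if (!t) return;
  if (measuredTps < t.minTps) {
    alert(`${app}: throughput dropped to ${measuredTps} TPS`); // early warning
  } else if (measuredTps > t.maxTps) {
    alert(`${app}: throughput spiked to ${measuredTps} TPS`);
  }
}

// The alert sink could be syslog, SNMP or email; here, the console.
checkThreshold("store-checkout", 12, console.warn);
```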
Viewing of analytics can be accomplished through an iApp application-specific view, which provides the context of the associated business application. Metrics can also be delivered to an off-box SIEM solution such as Splunk.

While detailed, per-application performance and usage data does, in fact, make for very nice eye-candy reports, such data is critical to reducing the time associated with troubleshooting and to enabling more advanced, integrated, scalability-focused architectures to be designed and deployed. Because without the right data, it's hard to make the right decisions for your applications and your infrastructure.

Happy Performance Monitoring!

Related blogs & articles:
- Infrastructure Scalability Pattern: Partition by Function or Type
- Lots of Little Virtual Web Applications Scale Out Better than Scaling Up
- Forget Hyper-Scale. Think Hyper-Local Scale.
- F5 Friday: You Will Appsolutely Love v11
- Introducing v11: The Next Generation of Infrastructure
- BIG-IP v11 Information Page
- F5 Monday? The Evolution To IT as a Service Continues … in the Network
- F5 Friday: The Gap That Became a Chasm
- All F5 Friday Posts on DevCentral
- ABLE Infrastructure: The Next Generation – Introducing v11

Forget Hyper-Scale. Think Hyper-Local Scale.
It’s kind of like thinking globally but acting locally…

While I rail against the use of the too-vague and cringe-inducing descriptor "workload" with respect to scalability and cloud computing, it is perhaps at least bringing to the fore an important distinction that needs to be made: that of the impact of different compute resource utilization patterns on scalability. What categorizing workloads has done is to separate "types" of processing and resource needs: some applications require more I/O, some less. Others are CPU hogs while others chew up memory at an alarming rate. Applications have different resource utilization needs across the network, storage and compute spectrum that have a profound impact on their scalability. This leads to models in which some applications scale better horizontally and others vertically.

Unfortunately, there are very few "pure" applications that can be dissected down to a simple model in which it is simply a case of providing more "X" as a means to scale. It is more often the case that some portions of the application are more network-intensive while others require more compute. Functional partitioning is certainly a better option for scaling out such applications, but it is an impractical design methodology during development, as the overhead resulting from separation of duties at the functional level requires a more service-oriented approach, one that is not currently aligned with modern web application development practices.

Yet we see the need on a daily basis for hyper-scalability of applications. Applications are being pushed to their resource limits with more users, more devices, more environments in which they must perform without delay. The "one size fits all" scalability model offered by cloud computing providers today is inadequate as a means to scale out rapidly and nearly infinitely without overrunning budgets. This is because along with resource consumption patterns come constraints on concurrency. Cloud computing offers an easy button to this problem – auto-scalability. Concurrency demands are easily met: just spin up another instance. While certainly one answer, it can be an expensive one, and it's absolutely an inefficient model of scalability.

HYPER-LOCAL SCALE

The lesson we should learn from cloud computing and hyper-scalability demands is that different functional processing scales, well, differently. The resource consumption patterns of one functional process may differ dramatically from another, and both need to be addressed in order to efficiently scale the application. If it is impractical to functionally separate "workloads" in the design and development process, then it is necessary to do so during the deployment phase, leveraging those identifying contextual clues indicating the type of workload being invoked, i.e. hyper-local scale.

Hyper-local scalability requires leveraging scalability domains: those functional workload divisions that are similar in nature and require similar resources to scale. Scalability domains allow functional partitioning as a scalability pattern to be applied to an application without requiring function-level separation (and all the management, maintenance and deployment headaches that go along with it). Scalability domains are discrete pools of similar processing workloads, partitioned as part of the architecture, that allow specific configuration and architectural techniques to be applied to the underlying network and platform that specifically increase the performance and scalability of those workloads.
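A sketch of how scalability domains might be expressed, with classification by contextual clues; every name and number here is hypothetical:

```javascript
// Hypothetical scalability domains: pools of similar workloads,
// each with its own capacity ceiling and scaling policy.
const domains = {
  media:   { pool: ["media1", "media2"],             maxInstances: 4  },
  compute: { pool: ["app1", "app2", "app3", "app4"], maxInstances: 12 },
  static:  { pool: ["web1"],                         maxInstances: 2  },
};

// Classify a request into a domain using contextual clues
// (here, just the URI) rather than function-level separation.
function domainFor(uri) {
  if (uri.startsWith("/video/") || uri.startsWith("/images/")) return domains.media;
  if (uri.startsWith("/api/")) return domains.compute;
  return domains.static;
}

// Each domain scales independently: a hot compute domain adds
// instances without touching the media or static domains.
function scaleIfNeeded(domain, load) {
  if (load > 0.8 && domain.pool.length < domain.maxInstances) {
    domain.pool.push(`instance${domain.pool.length + 1}`); // spin up a clone
  }
}
```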
This is the notion of hyper-local scalability: an architectural scaling strategy that leverages scalability domains to isolate similar functional partitions requiring hyper-scale from those partitions that do not. In doing so, highly efficient scalability domains can be used to scale up or out those functional partitions requiring it, while allowing other functional partitions to scale at a more nominal rate, incurring less cost. Consider the notion a form of offload, where high-resource-impact processing is offloaded to another instance, thereby increasing the available resources on the first instance, which results in higher concurrency. The offloaded processing can hyper-scale as necessary in a purpose-configured instance at higher efficiency, resulting in better concurrency. Where a traditional scalability pattern – effectively replication – may have required ten instances to meet demand, a hyper-localized scalability pattern may require only six or seven, with the plurality of those serving the high-resource-consuming processing. Fewer instances results in lower costs whilst simultaneously improving performance and efficiency.

INFRASTRUCTURE REQUIRED

Hyper-localized scalability architectures can leverage a variety of infrastructure scalability patterns, but they are dependent upon infrastructure, and its ability to perform application-layer routing and switching is paramount to success. Today's demanding business and operational requirements cannot be met with the simple scalability strategies of yesterday. Not only are legacy strategies based on infinite resources and budgets, they are inherently based on legacy application design in which functional partitioning was not only difficult, it was nearly impossible without the aid of methodologies like SOA. Web applications are uniquely positioned such that they are perfectly suited to partitioning strategies, whether at the functional, type, or session layer. The contextual data shared by web applications with infrastructure capable of intercepting, inspecting and acting upon that data means more modern, architectural scaling strategies can be put into play. Doing so affords organizations the means to achieve higher efficiency and utilization rates while improving the performance, resiliency and availability of those applications.

These strategies require infrastructure services and an understanding of the resource needs and usage patterns of the application, as well as the ability to implement that architecture using infrastructure services and platform optimization. It's a job for devops.

Related blogs & articles:
- Service Virtualization Helps Localize Impact of Elastic Scalability
- Intercloud: Are You Moving Applications or Architectures?
- IT as a Service: A Stateless Infrastructure Architecture Model
- Load Balancing Fu: Beware the Algorithm and Sticky Sessions
- Infrastructure Scalability Pattern: Sharding Sessions
- Infrastructure Scalability Pattern: Partition by Function or Type
- It’s 2am: Do You Know What Algorithm Your Load Balancer is Using?
- Lots of Little Virtual Web Applications Scale Out Better than Scaling Up
- Sessions, Sessions Everywhere
- Choosing a Load Balancing Algorithm Requires DevOps Fu

As Client-Server Style Applications Resurface, Performance Metrics Must Include the API
Mobile and tablet platforms are hyping HTML5, but many applications are bound to a traditional client-server model, making API performance a top concern for organizations.

I recently received an e-mail from Strangeloop Networks with a subject of: "The quest for the holy grail of Web speed: 2-second page load times". Being focused on optimizing the user interface, they appropriately quoted usability expert Jakob Nielsen, but also included some interesting statistics:

- 57% of site visitors will bounce after waiting 3 seconds or less for a page to load.
- Aberdeen Group surveyed 160 companies and discovered that, on average, slowing down a site by just one second results in a 7% reduction in conversions.
- Shopzilla accelerated its average page load time from 6 seconds to 1.2 seconds and experienced a 12% increase in revenue and a 25% increase in page views.
- Amazon performed its own page speed optimization and announced that, for every 100 milliseconds of improvement, revenues increased by 1%.
- Microsoft slowed down its Bing site by two seconds, which led to a 4.3% loss in revenue per visitor.

The problem is not that this information is inaccurate in any way. It's not that I don't agree that performance is a top concern for organizations – especially those for whom web applications directly generate revenue. It's that "applications" are quickly becoming a mash-up of architectural models, not all of which leverage the ubiquitous web browser as the client. This is particularly true on mobile and tablet platforms, but increasingly true of web-delivered applications as well. Too, many applications are dependent upon third-party services via the use of Web 2.0 APIs that can compromise the performance of any application, browser-based or not.

API PERFORMANCE WILL BECOME CRITICAL

I was browsing Blackberry's App World on my Playbook with my youngest the other day, looking for some games appropriate for a 3-year-old. He was helping, navigating like a pro, and pulling up descriptions of applications he found interesting based on their icon. When the application descriptions started loading slowly, i.e. took more than about 3 seconds to pop up on the screen, he started hitting the "back" button and trying another one. And another one. And another one. Ultimately he became quite frustrated with the situation and decided his Transformers were more interesting, as they were more interactive at the moment. Turns out I was having some connectivity issues that, in turn, impacted the Playbook's performance. I took away two things from the experience:

1. A three-year-old's patience with application load times is approximately equal to the "ideal" load time for adults. Interesting, isn't it?
2. These mobile and tablet-deployed "applications" often require server-side API access. Therefore, API performance is critical to maintaining a responsive application.

It is further unlikely that all applications will simply turn to HTML5 as the answer to the challenges inherent in application platform deployment diversity. APIs have become staple traffic on the Internet as a means to interconnect and integrate disparate services, and it is folly to think they are going anywhere. What's more, if you know a thing or three about web applications, APIs are enabling real-time updating in record numbers today, with more and more application logic responsible for parsing and displaying data returned from those API calls residing on the client.
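That client-resident logic is typically little more than this pattern: fetch from an API, parse JSON, render. A minimal sketch, against an endpoint and fields invented for illustration:

```javascript
// Client-side application logic: call the API, parse the JSON it
// returns, and render it. The endpoint and fields are hypothetical.
async function loadAppDescription(appId) {
  const started = Date.now();
  const response = await fetch(`https://api.example.com/apps/${appId}`);
  const app = await response.json(); // data, not pre-formatted HTML
  document.getElementById("title").textContent = app.name;
  document.getElementById("desc").textContent = app.description;
  // The user's perception of "speed" is dominated by this number:
  console.log(`API round trip + render: ${Date.now() - started} ms`);
}
```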
Consider, if you will, the data from the "API Billionaires Club" presented last year in "Open API Madness: The Party Has Just Begun for Cloud Developers". These are not just API calls coming from external sources; these are API calls coming from each organization's own applications as well as integrated, external sources. These APIs are generally calls for data in JSON or XML formats, not pre-formatted user interface markup (HTML). No amount of HTML manipulation is likely to improve the performance of API calls because there is no HTML to optimize. It's data, pure and simple, which means the bulk of the responsibility for ensuring wicked-fast performance suitable to a three-year-old's patience is going to land squarely on the application delivery chain and the application developer. That means minimizing processing and delivery time through carefully optimizing code (developers) and the delivery chain (operations).

OPTIMIZING the DELIVERY CHAIN

When the web first became popular, anyone who could use a scripting language and spit out HTML called themselves a "web developer." The need for highly optimized code to support the demanding performance requirements of end-users means that it's no longer enough to be able to spit out HTML or even JSON. It means developers need to be highly skilled in optimizing code on the server side such that processing times are as tight as they can be. Calculating Big O may become a valued skill once again.

But applications are not islands, and even the most highly optimized function in the world can be negatively impacted by factors outside the developer's control. The load on the application server – whether physical or virtual – can have a deleterious effect on application performance. Higher loads, less available RAM, fewer CPU cycles translate into slower-executing code – no matter how optimized it may be. Processing cryptographic operations of any kind, be it for compression or security purposes, can consume resources and introduce latency into processing times when performed on the server. And the overhead from managing connections, usually TCP, can take as much time as processing a request. All those operations add up to latency that can drive the end-user response time over the patience threshold that results in an aborted transaction. And when I say transaction I mean request-reply transaction, not necessarily one that involves money. Aborted transactions are also bad for application performance because they waste resources. That connection is held open based on the web or application server's configuration, and if the end-user aborted the transaction, it's ignoring the response but tying up resources that could be used elsewhere.

Application delivery chain optimization is a critical component of improving response time. Offloading cryptographic processing and protocol management can alleviate much of the load that negatively impacts application processing times, and it shifts the delivery-time component of application performance management from the developer to operations, where optimization and acceleration technologies can be applied regardless of data format. Whether it's HTML or JSON or XML is irrelevant; compression, caching and cryptographic offload can benefit both end-users and developers by mitigating those factors outside the developer's demesne that negatively impact performance.

THE WHOLE is GREATER than the SUM of its PARTS

It is no longer enough to measure the end-user experience based on load times in a browser.
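A measurement harness has to hit the APIs directly, the way a native app experiences them; a minimal sketch against a hypothetical endpoint:

```javascript
// Time an API endpoint directly -- no HTML, no browser rendering.
async function timeApiCall(url, samples = 10) {
  const times = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    await fetch(url, { headers: { Accept: "application/json" } });
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return {
    median: times[Math.floor(samples / 2)],
    worst: times[samples - 1],
  };
}

// A three-year-old's threshold, give or take: ~3000 ms.
timeApiCall("https://api.example.com/apps/123").then((r) =>
  console.log(`median ${r.median} ms, worst ${r.worst} ms`)
);
```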
The growing use of mobile devices – whether phones or tablets – and the increasingly interconnected web of integrated applications means the performance of an application is more complicated than it was in the past. The performance of an application today is highly dependent on the performance of APIs, and thus testing APIs specifically, from a variety of devices and platforms, is critical to understanding the impact high volume and load have on overall application performance. Testing API performance is critical to ensuring the end-user experience is acceptable regardless of the form factor of the client. If you aren't sure what acceptable performance might be, grab the nearest three-year-old; they'll let you know loud and clear.

Related blogs & articles:
- How to Earn Your Data Center Merit Badge
- The Stealthy Ascendancy of JSON
- Cloud Testing: The Next Generation
- Data Center Feng Shui: Architecting for Predictable Performance
- Now Witness the Power of this Fully Operational Feedback Loop
- On Cloud, Integration and Performance
- The cost of bad cloud-based application performance
- I Find Your Lack of Win Disturbing
- Operational Risk Comprises More Than Just Security
- Challenging the Firewall Data Center Dogma
- 50 Ways to Use Your BIG-IP: Performance

JSON versus XML: Your Choice Matters More Than You Think
Should the enterprise standardize on JSON or XML as its lingua franca for Web 2.0 integration? Or should it use both, as best fits the application? The decision impacts more than just integration – it resounds across the entire infrastructure and impacts everything from security to performance to availability of those applications.

One of the things a developer may or may not have control over when building enterprise applications is the format of the data used to communicate (integrate) with other applications. Increasingly, services external to the enterprise are very Web 2.0 in that they provide HTTP-based APIs for integration that exchange data in one of a couple of standard formats: XML and JSON. While RSS and ATOM are also seen as options in APIs, these are generally used only when the data being presented is frequently updated and of a "listing" style nature. XML and JSON are used to deliver more complex structures that do not fit well into the paradigm described by RSS- and ATOM-formatted information.

Increasingly, libraries or toolkits are used to build interactive Web 2.0 style applications – XAJAX, SAJAX, Dojo, Prototype, script.aculo.us – and these, too, generally default to XML or JSON, though other formats are often supported as well. So as you're building out that Web 2.0 style application and thinking about the API you're going to offer to make it easier for partners/customers/other departments to handle integration with their Web 2.0 style applications – or even thinking about the way in which data will be exchanged with the client (browser) – you need to think carefully about the choice you're making.

There are pros and cons to both JSON and XML, and the choice has implications outside the confines of application development in your organization. The debate on which is "best" or "optimal" is far from over, and it's likely to eclipse – for developers, anyway – the religious-style wars over the choice of browser. Even mainstream technology coverage is taking an interest in the subject. A recent piece from C|NET on "NoSQL and the future of cloud databases" says "Mapping object data to JSON, a JavaScript data interchange format, is far less complex. The 'schemaless' nature of many of these products is an excellent fit with agile development methodologies." Indeed, schemaless data formats are certainly more flexible, but that flexibility has a price that may need to be paid by the rest of the infrastructure.
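For a sense of what the choice looks like on the wire, here is the same invented record in both formats, wrapped in a little JavaScript to show why developers lean toward JSON:

```javascript
// The same hypothetical record, XML versus JSON.
const asXml = `
<order id="1138">
  <customer>Ada</customer>
  <items>
    <item sku="F5-101" qty="2"/>
  </items>
</order>`;

const asJson = `{
  "order": { "id": 1138, "customer": "Ada",
             "items": [ { "sku": "F5-101", "qty": 2 } ] }
}`;

// JSON maps directly onto language objects -- no schema, no parser
// configuration -- which is a big part of its appeal to developers...
const order = JSON.parse(asJson).order;
console.log(order.customer); // "Ada"
// ...while XML's verbosity buys standardized metadata (schemas,
// security, identity) that JSON largely lacks.
```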
The Stealthy Ascendancy of JSON

While everyone was focused on cloud, JSON has slowly but surely been taking over the application development world.

It looks like the debate between XML and JSON may be coming to a close, with JSON poised to take the title of preferred format for web applications. If you don't consider these statistics to be impressive, consider that ProgrammableWeb indicated that its "own statistics on ProgrammableWeb show a significant increase in the number of JSON APIs over 2009/2010. During 2009 there were only 191 JSON APIs registered. So far in 2010 [August] there are already 223!" Today there are 1262 JSON APIs registered, which means a growth rate of 565% in the past eight months, nearly catching up to XML, which currently lists 2162 APIs. At this rate, JSON will likely overtake XML as the preferred format by the end of 2011.

This is significant to both infrastructure vendors and cloud computing providers alike, because it indicates a preference for a programmatic model that must be accounted for when developing services, particularly those in the PaaS (Platform as a Service) domain. PaaS has yet to grab developer mindshare, and it may be that support for JSON will be one of the ways in which that mindshare is attracted. Consider the results of the "State of Web Development 2010" survey from Web Directions, in which developers were asked about their cloud computing usage; only 22% responded in the affirmative to utilizing cloud computing. But of those 22% that do leverage cloud computing, the providers they use are telling: PaaS represents a mere 7.35% of developers' use of cloud computing, with storage (Amazon S3) and IaaS (Infrastructure as a Service) garnering 26.89% of responses. Google App Engine is the dominant PaaS platform at the moment, most likely owing to the fact that it is primarily focused on JavaScript, UI, and other utility-style services, as opposed to Azure's middleware and definitely more enterprise-class focused services.

SaaS, too, is failing to recognize the demand from developers and the growing ascendancy of JSON. Consider this exchange on the Salesforce.com forums regarding JSON: "Come on salesforce lets get this done. We need to integrate, we need this [JSON]."

If JSON continues its steady rise into ascendancy, PaaS and SaaS providers alike should be ready to support JSON-style integration, as its growth pattern indicates it is not going away but is instead picking up steam. Providers able to support JSON for PaaS and SaaS will have a competitive advantage over those that do not, especially as they vie for the hearts and minds of developers, which are, after all, their core constituency.

THE IMPACT

What the steady rise of JSON should trigger for providers and vendors alike is a need to support JSON as the means by which services are integrated, invoked, and data exchanged. Application delivery, service-provider and Infrastructure 2.0 focused solutions need to provide APIs that are JSON-compatible and which are capable of handling the format to provide core infrastructure services such as firewalling and data-scrubbing duties. The increasing use of JSON-based APIs to integrate with external, third-party services continues to grow, and the demand for enterprise-class services to support JSON as well will continue to rise.
There are drawbacks, and this steady movement toward JSON has in some cases a profound impact on the infrastructure and architectural choices made by IT organizations, especially in terms of providing for consistency of services across what is likely a very mixed-format environment. Identity and access management and security services may not be prepared to handle JSON APIs, nor to provide the same services as they have for XML, which through long-established usage and efforts comes with its own set of standards. Including social networking "streams" in applications and web sites is now as common as including images, but changes to APIs may make basic security chores difficult. Consider that Twitter – very quietly – has moved to supporting JSON only for its Streaming API. Organizations that were, as well they should, scrubbing such streams to prevent both embarrassing and malicious code from being integrated unknowingly into their sites may have suddenly found that infrastructure providing such services no longer worked:

"API providers and developers are making their choice quite clear when it comes to choosing between XML and JSON. A nearly unanimous choice seems to be JSON. Several API providers, including Twitter, have either stopped supporting the XML format or are even introducing newer versions of their API with only JSON support. In our ProgrammableWeb API directory, JSON seems to be the winner. A couple of items are of interest this week in the XML versus JSON debate. We had earlier reported that come early December, Twitter plans to stop support for XML in its Streaming API."
-- JSON Continues its Winning Streak Over XML, ProgrammableWeb (Dec 2010)

Similarly, caching and acceleration services may be confused by a change from XML to JSON: from a format that was well understood, and for which solutions were enabled with parsing capabilities, to one that is not.

IT'S THE DATA, NOT the API

The fight between JSON and XML is one we continue to see in a general sense. See, it isn't necessarily the API that matters, in the end, but the data format (the semantics) used to exchange that data. XML is considered unstructured, though in practice it's far more structured than JSON in the sense that there are meta-data standards for XML that constrain security, identity, and even application formats. JSON, however, although having been included natively in ECMA v5 (JSON data interchange format gets ECMA standards blessing), has very few standards aside from those imposed by frameworks and toolkits such as jQuery. This will make it challenging for infrastructure vendors to support services targeting application data – data scrubbing, web application firewall, IDS, IPS, caching, advanced routing – and to continue to effectively deliver such applications without recognizing JSON as an option. The API has become little more than a set of URIs, and nearly all infrastructure directly related to application delivery is more than capable of handling them. It is the data, however, that presents a challenge, and which makes the developers' choice of formats so important in the big picture. It isn't just the application and integration that are impacted; it's the entire infrastructure and architecture that must adapt to support the data format. The World Doesn't Care About APIs – but it does care about the data, about the model. Right now, it appears that model is more than likely going to be presented in a JSON-encoded format.
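What "recognizing JSON as an option" might mean for a data-scrubbing service can be sketched simply: walk a parsed JSON document and filter its string values. The filter rule here is deliberately naive, for illustration only:

```javascript
// Naive data-scrubbing pass over a parsed JSON document: walk the
// structure and strip script tags from every string value before
// the stream is integrated into a page.
function scrubJson(value) {
  if (typeof value === "string") {
    return value.replace(/<\s*script[^>]*>.*?<\s*\/\s*script\s*>/gis, "");
  }
  if (Array.isArray(value)) return value.map(scrubJson);
  if (value && typeof value === "object") {
    const clean = {};
    for (const [k, v] of Object.entries(value)) clean[k] = scrubJson(v);
    return clean;
  }
  return value; // numbers, booleans, null pass through
}

const tweet = JSON.parse(
  '{"text": "hi <script>alert(1)</script>", "user": {"name": "eve"}}'
);
console.log(scrubJson(tweet).text); // "hi "
```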
Related blogs & articles:
- JSON data interchange format gets ECMA standards blessing
- JSON Continues its Winning Streak Over XML
- JSON versus XML: Your Choice Matters More Than You Think
- I am in your HTTP headers, attacking your application
- The Web 2.0 API: From collaborating to compromised
- Would you risk $31,000 for milliseconds of application response time?
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- The New Distribution of The 3-Tiered Architecture Changes Everything
- Are You Scrubbing the Twitter Stream on Your Web Site?

Cloud-Tiered Architectural Models are Bad Except When They Aren’t
Database as a service is part of an emerging model that should be evaluated as an architecture, not based on where it might be deployed.

These days everything is being delivered "as a Service": compute, storage, platforms, IT, databases. The concept, of course, is sound, and it is generally speaking a good one. If you're going to offer an environment in which applications can be deployed, you'd best offer the services appropriate to the deployment and delivery of that application. And that includes data services; some kind of database.

Shortly after the announcement by Salesforce.com of its database-as-a-platform service – database.com – Phil Wainewright posited that such an offering might "squish" smaller providers. Some vehemently disagreed, and Mr. Wainewright recently published a guest post, written by Matt McAdams, CEO of TrackVia, regarding the offering that disputes that prediction:

"Rather, Salesforce.com is making the existing database that currently underlies its CRM and Force.com platforms accessible to subscribers who don't have accounts on one of those two platforms. The target audience is programmers who want to build an application outside of Force.com, but want a hosted database. Unfortunately, web application developers will find the idea of hosting their data outside their application platform severely unappealing. The reason is latency."
-- Database.com: nice name, shame about the platform

Mr. McAdams goes into more detail about the architecture of modern web applications and explains the logic behind his belief. He's right about latency being an issue, and he is backed up by research conducted by developer-focused Evans Data Corporation:

"Developers report that performance is the second-most important attribute found in frameworks and platforms. The ability of a framework or a platform to deliver high throughput, minimal latency and efficient use of computing resources is a major factor in decisions regarding which application frameworks to use for development." [emphasis added]
-- Evans Data Corporation, Users' Choice: 2010 Frameworks (April 2010)

So performance is definitely a factor, but there's more going on here than just counting ticks, and some of what's happening completely obviates his concerns. That's because he ignored the architectural model in favor of narrowing in on a specific deployment model. There are a few critical trends that may ultimately make the Database as a Service (DaaS) architectural model (not necessarily the provider-based deployment model) a success.

The CLIENT-DATABASE ARCHITECTURAL MODEL

The consumerization of IT is well underway. No one lightly dismisses the impact on application platform development of consumer-oriented gadgets like the iPad and the iPhone. More and more vendors as well as organizations are taking these "toys" seriously and subsequently targeting these environments as clients for what have traditionally been considered enterprise-class applications. But the application architecture of the applications deployed on mobile devices is different from traditional web-based applications. In most cases an "app" for a mobile device employs a modernized version of the client-server model, with nearly all client and application logic deployed on the device and only the data store – the database – residing on the "server".
It's more accurate to call these "client-database" models than "client-server", as it is often the case that the "server" portion of these applications is little more than a set of services encapsulating database-focused functions. In other words, it's leveraging data as a service. It's still a three-tiered application architecture, to be sure, but the tiers involved have slightly different responsibilities than a modern web application. A modern web application generally "hosts" all three tiers – web (interface, or "presentation" in developer-speak), application, and data – in one location. For all the reasons cited by Mr. McAdams in his guest post, developers and subsequently organizations are loath to break apart those tiers and distribute them around the "cloud". I'd add that reliability and availability, as well as security (in terms of access control and the enforcement of roles in a complex, enterprise-class application), play a part in that decision, but it's the latency that really is the deal-breaker, especially between the application and the database.

But Lori, you say, HTML5 and tablets are going to change all that. Applications will all be HTML5, and even mobile devices will return to the more comfortable three-tiered architecture. Will they? HTML5 has some interesting changes and additions that make it more compatible with a mobile application architecture than earlier versions of the specification: offline storage, more application logic capabilities, more control. HTML5 is a step toward a fatter, more robust, more complete application client platform. Couple that with the fact that web applications have been moving toward a more client-centric deployment model for years and you've got a trend toward a more client-database application architectural model. Web applications have been getting fatter and fatter, with more and more application logic being pushed onto the client. HTML5 appears to support and even encourage that trend. Consider, for a moment, this web application written nearly entirely in JavaScript. Yes, that's right. Nearly all of the functionality in this application is contained within the client, in 80 lines of JavaScript code. That's a client-database model.

Even on mobile devices, on tablets designed to better support "traditional" web applications, browsers are there almost as an afterthought. The browser is used for reading articles and watching video – not interacting with data. It's the applications, the ones developed specifically for that device, that make or break the platform these days. If that weren't the case, you wouldn't need separate "applications" for Facebook, or Twitter, or Salesforce.com. You'd just leverage the existing web application, wouldn't you? And you certainly wouldn't need APIs upon which applications could be built, would you? Of course not.

So what, you might be asking, does all this have to do with the success or failure of database as a service? The answer is this: developers are lazy, and because we're lazy we're masters at architecting solutions that can be reused as much as possible to limit the amount of tedious and mundane coding we have to do. Innovation and change are often driven not by inspiration, but by the inherent laziness of developers looking for an "easier" way to do a thing. And supporting two completely separate application architectures is certainly in the category of "tedious".
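A sketch of where that laziness leads – one shared data service consumed by both the web and device-native versions of an application. The endpoint and schema are invented:

```javascript
// One data service, many clients: the browser app and the
// device-native app share this module instead of duplicating
// data-access logic per platform. The endpoint is hypothetical.
const DATA_SERVICE = "https://data.example.com/v1";

async function getRecord(collection, id) {
  const res = await fetch(`${DATA_SERVICE}/${collection}/${id}`);
  if (!res.ok) throw new Error(`data service: ${res.status}`);
  return res.json();
}

async function saveRecord(collection, record) {
  const res = await fetch(`${DATA_SERVICE}/${collection}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
  return res.json();
}

// Web UI and mobile app alike: all client, plus a database service.
getRecord("customers", 42).then((c) => console.log(c.name));
```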
WHEN BEING LAZY is a BONUS

As more and more organizations decide to support both mobile and web versions of their critical business applications, it's the development staff that's going to be called upon to provide them. And developers are – no offense intended, as I still self-identify as a developer – lazy. We don't like the tedious and the mundane; we don't like to repeat the same code over and over and over. We look for ways to reuse and simplify such that we can concentrate on those pieces of development that are exciting to us – and that doesn't include anything repetitious.

So as developers look at the two models, they're eventually going to abstract them, especially as they explore HTML5 and find more and more ways to align the two models. They're going to note the differences and the similarities in architecture and come to the conclusion that it is inefficient and potentially risky to support two completely different versions of the same application, especially when the option exists to simplify them. An architectural model in which the data access portions are shared – hosted as a service – for both mobile and traditional desktop clients makes a lot more sense in terms of cost to develop and maintain than does attempting to support two completely separate architectural models. Given the movement toward more client-centric applications – whether because of platform restrictions (mobile devices) or increasing demand for functionality that only comes from client-deployed application logic (Web 2.0) – it is likely we'll see a shift in deployment models toward client-database models more and more often in the future.

That means data as a service is going to be an integral part of developers' lives. That's really the crux of what Mr. McAdams is arguing against – that a data-as-a-service deployment model is untenable due to the latency. But what he misses is that we're already half-way there with mobile device applications, and we've been moving in that direction for several years with web-based applications anyway. APIs for web applications today exist to provide access to what – data. Even when they're performing what appear to be application "tasks" – say, following a Twitter user – they're really just manipulating data in the database. There is no real difference between accessing a data service over the Internet that is deployed in the enterprise data center versus in a cloud computing provider's data center. So the real question is not whether such an architectural model will be employed – it will – it's where those data services will reside. And the answer to that question doesn't necessarily require a lot of technical discussion; ultimately that has to be a business decision.

Related blogs & articles:
- The Database Tier is Not Elastic
- 80-line JavaScript Web Application
- Does This Application Make My Browser Look Fat?
- HTTP Now Serving … Everything
- The New Distribution of The 3-Tiered Architecture Changes Everything
- The Great Client-Server Architecture Myth
- Infrastructure Scalability Pattern: Sharding Sessions
- Infrastructure Scalability Pattern: Partition by Function or Type
- Applying Scalability Patterns to Infrastructure Architecture
- Sessions, Sessions Everywhere

Applying Scalability Patterns to Infrastructure Architecture
Too often, software design patterns are overlooked by network and application delivery network architects, but these patterns are often equally applicable to addressing a broad range of architectural challenges in the application delivery tier of the data center.

The "High Scalability" blog is fast becoming one of my favorite reads. Last week did not disappoint, with a post highlighting a set of scalability design patterns that was, apparently, inspired by yet another High Scalability post on "6 Ways to Kill Your Servers: Learning to Scale the Hard Way."

This particular post caught my attention primarily because although I've touched on many of these patterns in the past, I've never thought to call them what they are: scalability patterns. That's probably a side-effect of forgetting that building an architecture of any kind is at its core computer science, and thus algorithms and design patterns are applicable to both micro- and macro-architectures, such as those used when designing a scalable architecture. This is actually more common than you'd think, as it's rarely the case that a network guy and a developer sit down and discuss scalability patterns over beer and deep-fried cheese curds (hey, I live in Wisconsin and it's my blog post, so just stop making faces until you've tried it). Developers and architects sit over there and think about how to design a scalable application from the perspective of its components – databases, application servers, middleware, etc. Network architects sit over here and think about how to scale an application from the perspective of network components – load balancers, trunks, VLANs, and switches. The thing is that the scalability patterns leveraged by developers and architects can almost universally be abstracted and applied to the application delivery network – the set of components integrated as a means to ensure availability, performance, and security of applications. That's why devops is so important and why devops has to bring dev into ops as much as it's necessary to bring some ops into dev. There needs to be more cross-over, more discussion, between the two groups – if not an entirely new group – in order to leverage the knowledge and skills that each has in new and innovative ways.

ABSTRACT and APPLY

So the aforementioned post is just a summary of a longer and more detailed post, but for purposes of this post I think the summary will do, with the caveat that the original, "Scalability patterns and an interesting story..." by Jesper Söderlund, is a great read that should definitely be on your "to read" list in the very near future. For now, let's briefly touch on the scalability patterns and sub-patterns Jesper described, with some commentary on how they fit into scalability from a network and application delivery network perspective. The pattern definitions below are quoted from the High Scalability blog.

Load distribution - Spread the system load across multiple processing units

This is a horizontal scaling strategy that is well understood. It may take the form of "clustering" or "load balancing", but in both cases it is essentially an aggregation coupled with a distributed processing model. The secret sauce is almost always in the way in which the aggregation point (strategic point of control) determines how best to distribute the load across the "multiple processing units."
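To ground the pattern, here is a minimal sketch of two common decisions that aggregation point might make; the servers and their state are invented:

```javascript
// Two ways a strategic point of control can pick the next
// processing unit. Servers, weights and state are hypothetical.
const servers = [
  { name: "app1", weight: 3, connections: 0 },
  { name: "app2", weight: 1, connections: 0 },
];

// Weighted round-robin: app1 is selected 3x as often as app2.
let wheel = [];
function weightedRoundRobin() {
  if (wheel.length === 0)
    wheel = servers.flatMap((s) => Array(s.weight).fill(s));
  return wheel.shift();
}

// Least connections: pick the unit with the fewest active requests.
function leastConnections() {
  return servers.reduce((a, b) => (a.connections <= b.connections ? a : b));
}
```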
load balancing / load sharing - Spreading the load across many components with equal properties for handling the request

This is what most people think of when they hear "load balancing"; it's just that at the application delivery layer we think in terms of directing application requests (usually HTTP, but this can be just about any application protocol) to equal "servers" (physical or virtual) that handle the request. This is a "scaling out" approach that is most typically associated today with cloud computing and auto-scaling: launch additional clones of applications as virtual instances in order to increase the total capacity of an application. The load balancer distributes requests across all instances based on the configured load balancing algorithm.

Partitioning - Spreading the load across many components by routing an individual request to a component that owns that specific data

This is really where the architecture comes in and where efficiency and performance can be dramatically increased in an application delivery architecture. Rather than each instance of an application being identical to every other one, each instance (or pool of instances) is designated as the "owner". This allows devops to tweak configurations of the underlying operating system and web and application server software for the specific type of request being handled. This is also where the difference between "application switching" and "load balancing" becomes abundantly clear, as "application switching" is used as a means to determine where to route a particular request, which is/can be then load balanced across a pool of resources. It's a subtle distinction, but an important one when architecting not only efficient and fast but resilient and reliable delivery networks.

Vertical partitioning - Spreading the load across the functional boundaries of a problem space, separate functions being handled by different processing units

When it comes to routing application requests, we really don't separate by function unless that function is easily associated with a URI. The most common implementation of vertical partitioning at the application switching layer will be by content. Example: creating resource pools based on the Content-Type HTTP header: images in pool "image servers" and content in pool "content servers". This allows for greater optimization of the web/application server based on the usage pattern and the content type, which can often also be related to a range of sizes. This also, in a distributed environment, allows architects to leverage, say, cloud-based storage for static content while maintaining dynamic content (and its associated data stores) on-premise. This kind of hybrid cloud strategy has been postulated as one of the most common use cases since the first wispy edges of cloud were seen on the horizon.

Horizontal partitioning - Spreading a single type of data element across many instances, according to some partitioning key, e.g. hashing the player id and doing a modulus operation, etc. Quite often referred to as sharding.

This sub-pattern is in line with the way in which persistence-based load balancing is accomplished, as well as the handling of object caching. This also describes the way in which you might direct requests received from specific users to designated instances that are specifically designed to handle their unique needs or requirements, such as the separation of "gold" users from "free" users based on some partitioning key, which in HTTP land is often a cookie containing the relevant data. A sketch of the partitioning-key mechanics follows.
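The hash, shard names, and key below are invented for illustration:

```javascript
// Horizontal partitioning (sharding): route each data element to
// the shard that owns its key range.
const shards = ["db0", "db1", "db2", "db3"];

function hash(key) {
  let h = 0;
  for (const ch of String(key)) h = (h * 31 + ch.charCodeAt(0)) | 0;
  return Math.abs(h);
}

// Player 1138 always lands on the same shard, so the component
// that "owns" that data services every request for it.
function shardFor(playerId) {
  return shards[hash(playerId) % shards.length];
}

// The same key could come from a cookie in HTTP land, e.g. to pin
// "gold" users to instances tuned for their requirements.
console.log(shardFor(1138));
```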
Queuing and batch - Achieve efficiencies of scale by processing batches of data, usually because the overhead of an operation is amortized across multiple requests

I admit defeat in applying this sub-pattern to application delivery. I know, you're surprised, but this really is very specific to middleware, and aside from the ability to leverage queuing for Quality of Service (QoS) at the delivery layer, this one is just not fitting in well. If you have an idea how this fits, feel free to let me know – I'd love to be able to apply all the scalability patterns and sub-patterns to a broader infrastructure architecture.

Relaxing of data constraints - Many different techniques and trade-offs with regards to the immediacy of processing / storing / access to data fall in this strategy

This one takes us to storage virtualization and tiering and the way in which data storage and access are intelligently handled with varying properties based on usage and prioritization of the content. If one relaxes the constraints around access times for certain types of data, it is possible to achieve a more efficient use of storage by subjugating some content to secondary and tertiary tiers which may not have the same performance attributes as your primary storage tier. And make no mistake, storage virtualization is a part of the application delivery network – it has been since its inception – and as cloud computing and virtualization have grown, so has the importance of a well-defined storage tiering strategy.

We can bring this back up to the application layer by considering that a relaxation of data constraints with regards to immediacy of access can be applied by architecting a solution that separates data reads from writes. This implies eventual consistency, as data updated/written to one database must necessarily be replicated to the databases from which reads are, well, read, but that's part of relaxing a data constraint. This is a technique used by many large social sites such as Facebook and Plenty of Fish in order to scale the system to the millions upon millions of requests it handles in any given hour.

Parallelization - Work on the same task in parallel on multiple processing units

I'm not going to be able to apply this one either, unless it was in conjunction with optimizing something like MapReduce and SPDY. I've been thinking hard about this one, and the problem is the implication that "same task" is really the "same task", and that processing is distributed. That said, if the actual task can be performed by multiple processing units, then an application delivery controller could certainly be configured to recognize that a specific URL should be essentially sent to some other proxy/solution that performs the actual distribution, but the processing model here deviates sharply from the request-reply paradigm under which most applications today operate.

DEVOPS CAN MAKE THIS HAPPEN

I hate to sound off too much on the "devops" trumpet, but one of the primary ways in which devops will be of significant value in the future is exactly in this type of practical implementation.
DEVOPS CAN MAKE THIS HAPPEN

I hate to sound off too much on the "devops" trumpet, but one of the primary ways in which devops will be of significant value in the future is exactly in this type of practical implementation. Only by recognizing that many architectural patterns are applicable not only to application but also to infrastructure architecture can we start to apply a whole lot of "lessons that have already been learned" by developers and architects to emerging infrastructure architectural models. This abstraction and application of well-understood patterns from application design and architecture will be invaluable in designing the new network: the next iteration of network theory and implementation that will allow it to scale along with the applications it is delivering.

Related blogs & articles:
Cloud is not Rocket Science but it is Computer Science
Implementing SOA Patterns: The Router
Implementing SOA Patterns: The Service Firewall
Implementing SOA Patterns: Input/Output Validator
Lori MacVittie - interstitial request pattern (AJAX)
Business-Layer Load Balancing
I Find Your Lack of Win Disturbing
Cloud Computing: Vertical Scalability is Still Your Problem
Vertical Scalability Cloud Computing Style
Scalability Only One Half the Reliability Equation
Automating scalability and high availability services
Service Virtualization Helps Localize Impact of Elastic Scalability
Web 2.0: Integration, APIs, and Scalability
To Take Advantage of Cloud Computing You Must Unlearn, Luke.
Statistics Collection and Management Pack Scalability

Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
Infrastructure 2.0 ≠ cloud computing ≠ IT as a Service. There is a difference between Infrastructure 2.0 and cloud. There is also a difference between cloud and IT as a Service. But they do go together, like a parfait. And everybody likes a parfait…

The introduction of the newest member of the cloud computing buzzword family is "IT as a Service." It is understandably causing some confusion because, after all, isn't that just another way to describe "private cloud"? No, actually it isn't. There's a lot more to it than that, and it's very applicable to both private and public models. Furthermore, equating "cloud computing" with "IT as a Service" does both as big a disservice as treating "Infrastructure 2.0" and "cloud computing" as synonyms. These three [ concepts | models | technologies ] are highly intertwined and in some cases even interdependent, but they are not the same. In the simplest explanation possible: infrastructure 2.0 enables cloud computing, which enables IT as a service. Now that we've got that out of the way, let's dig in.

ENABLE DOES NOT MEAN EQUAL TO

One of the core issues seems to be the rush to equate "enable" with "equal". There is a relationship between these three technological concepts, but they are in no wise equivalent, nor should they be treated as such. Like SOA, the differences between them revolve primarily around the level of abstraction and the layers at which they operate. Not the layers of the OSI model or the technology stack, but the layers of a data center architecture. Let's start at the bottom, shall we?

INFRASTRUCTURE 2.0

At the very lowest layer of the architecture is Infrastructure 2.0. Infrastructure 2.0 is focused on enabling dynamism and collaboration across the network and application delivery network infrastructure. It is the way in which traditionally disconnected (from a communication and management point of view) data center foundational components are imbued with the ability to connect and collaborate. This is primarily accomplished via open, standards-based APIs that provide a granular set of operational functions that can be invoked from a variety of programmatic methods such as orchestration systems, custom applications, and integration with traditional data center management solutions. Infrastructure 2.0 is about making the network smarter, both from a management and a run-time (execution) point of view, but in the case of its relationship to cloud and IT as a Service the view is primarily focused on management. Infrastructure 2.0 includes the service-enablement of everything from routers to switches, from load balancers to application acceleration, from firewalls to web application security components to server (physical and virtual) infrastructure. It is, distilled to its core essence, API-enabled components.

CLOUD COMPUTING

Cloud computing is the closest to SOA in that it is about enabling operational services in much the same way as SOA was about enabling business services. Cloud computing takes the infrastructure layer services and orchestrates them together to codify an operational process that provides a more efficient means by which compute, network, storage, and security resources can be provisioned and managed. This, like Infrastructure 2.0, is an enabling technology. Alone, these operational services are generally discrete and are packaged up specifically as the means to an end – on-demand provisioning of IT services. Cloud computing is the service-enablement of operational services and also carries along the notion of an API. In the case of cloud computing, this API serves as a framework through which specific operations can be accomplished in a push-button-like manner. A sketch of how these two layers relate follows this section.
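To make the relationship between the two layers concrete, here is a hedged Python sketch: a class standing in for granular, Infrastructure 2.0 style component operations, and a function standing in for a cloud-layer operational service that orchestrates those granular calls in the right order. Every name, endpoint, and call here is hypothetical, not any vendor's actual API.

```python
# Layer one: granular, API-enabled operations on a single component
# (Infrastructure 2.0). Layer two: a cloud-layer service that composes
# those operations into one on-demand provisioning action.

class InfrastructureAPI:
    """Hypothetical granular operations on an API-enabled component."""

    def authenticate(self, user, secret):
        print(f"authenticated as {user}")

    def create_pool(self, name):
        print(f"created pool {name}")

    def add_pool_member(self, pool, member):
        print(f"added {member} to {pool}")

    def create_virtual_server(self, name, address):
        print(f"created virtual server {name} at {address}")


def provision_web_app(api, members):
    """Cloud-layer operational service: one call, many granular steps,
    invoked in the right order per local operational policy."""
    api.authenticate("ops", "placeholder-secret")
    api.create_pool("web_pool")
    for member in members:
        api.add_pool_member("web_pool", member)
    api.create_virtual_server("web_vip", "192.0.2.10")


provision_web_app(InfrastructureAPI(), ["10.0.0.1:80", "10.0.0.2:80"])
```

The design point is the separation itself: the orchestration function never needs to know how each granular call is implemented, only the order and policy in which to invoke them.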
IT as a SERVICE

At the top of our technology pyramid – as is likely obvious at this point, we are building up to the "pinnacle" of IT by laying ever more narrowly focused layers atop one another – we have IT as a Service. IT as a Service, unlike cloud computing, is designed to be consumed not only by other IT-minded folks, but also by (allegedly) business folks. IT as a Service broadens the provisioning and management of resources and begins to include not only operational services but also those services that are more, well, businessy, such as identity management and access to resources.

IT as a Service builds on the services provided by cloud computing, which is often called a "cloud framework" or a "cloud API", and provides the means by which resources can be provisioned and managed. Now that sounds an awful lot like "cloud computing", but the abstraction is a bit higher than what we expect with cloud. Even in a cloud computing API we are still interacting more directly with operational and compute-type resources. We're provisioning, primarily, infrastructure services, but we are doing so at a much higher layer and in a way that makes it easy for both business and application developers and analysts to do so. An example is probably in order at this point.

THE THREE LAYERS in the ARCHITECTURAL PARFAIT

Let us imagine a simple "application" which itself requires only one server and which must be available at all times. That's the "service" IT is going to provide to the business. In order to accomplish this seemingly simple task, there's a lot that actually has to go on under the hood, within the bowels of IT.

LAYER ONE

Consider, if you will, what fulfilling that request means. You need at least two servers and a load balancer to ensure availability, you need storage, and you need – albeit unknown to the business user – firewall rules to ensure the application is only accessible to those whom you designate. So at the bottom layer of the stack (Infrastructure 2.0) you need a set of components that match these functions, and they must all be enabled with an API (or at a minimum be automatable via traditional scripting methods). Now the actual task of configuring a load balancer is not just a single API call. Ask RackSpace, or GoGrid, or Terremark, or any other cloud provider. It takes multiple steps to authenticate and configure – in the right order – that component. The same is true of many components at the infrastructure layer: the APIs are necessarily granular enough to provide the flexibility needed to combine them in a way that is customizable for each unique environment in which they may be deployed. So what you end up with is a set of infrastructure services that comprise the appropriate API calls for each component based on the specific operational policies in place.

LAYER TWO

At the next layer up you're providing even more abstract frameworks. The "cloud API" at this layer may provide services such as "auto-scaling" that require a great deal of configuration and registration of components with other components. There's automation and orchestration occurring at this layer of the IT Service Stack, as it were, that is much more complex, but more narrowly focused, than at the infrastructure layer below. It is at this layer that the services become more customized and able to provide business- and customer-specific options. It is also at this layer where things become more operationally focused, with the provisioning of "application resources" comprising perhaps the provisioning of both compute and storage resources. This layer also lays the foundation for metering and monitoring ('cause you want to provide visibility, right?), which essentially overlays – i.e. makes a service of – multiple infrastructure resource monitoring services. A sketch of such a layer-two auto-scaling service follows.
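Here is a hedged sketch of what such a layer-two "auto-scaling" service might look like: one operational decision composed from several granular layer-one calls, including registration with the monitoring overlay. The threshold and the function names are hypothetical stand-ins for real infrastructure APIs.

```python
# Hypothetical layer-one calls the auto-scaling service composes.

def launch_instance(image):
    print(f"launching instance from {image}")
    return "10.0.0.42"

def configure_monitoring(instance):
    print(f"monitoring enabled for {instance}")  # feeds the metering overlay

def add_pool_member(pool, instance):
    print(f"{instance} registered in pool {pool}")

# The layer-two service: one decision, several granular steps in order.

def autoscale(pool, avg_connections, threshold=100.0):
    """Add capacity when load on the pool crosses a threshold."""
    if avg_connections > threshold:
        instance = launch_instance("web-server-template")  # compute resource
        configure_monitoring(instance)                     # visibility first
        add_pool_member(pool, instance)                    # take traffic last

autoscale("web_pool", avg_connections=150)
```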
LAYER THREE

At the top layer is IT as a Service, and this is where systems become very abstracted and get turned into the IT King "A La Carte" Menu that is the ultimate goal according to everyone who's anyone (and a few people who aren't). This layer offers an interface to the cloud in such a way as to make self-service possible. It may not be Infrabook or even very pretty, but as long as it gets the job done, cosmetics are just enhancing the value of what exists in the first place. IT as a Service is the culmination of all the work done at the previous layers to fine-tune services until they are at the point where they are consumable – in the sense that they are easy to understand and require no real technical understanding of what's actually going on. After all, a business user or application developer doesn't really need to know how the server and storage resources are provisioned, just in what sizes and how much it's going to cost. IT as a Service ultimately enables the end user – whomever that may be – to easily "order" IT services to fulfill the application-specific requirements associated with an application deployment. That means availability, scalability, security, monitoring, and performance.

A DYNAMIC DATA CENTER ARCHITECTURE

One of the first questions that should come to mind is: why does it matter? After all, one could cut out the "cloud computing" layer and go straight from infrastructure services to IT as a Service. While that's technically true, it eliminates one of the biggest benefits of a layered and highly abstracted architecture: agility. By presenting each layer to the layer above as services, we are effectively employing the principles of a service-oriented architecture and separating the implementation from the interface. This provides the ability to modify the implementation without impacting the interface, which means less downtime and very little – if any – modification in the layers above the layer being modified.

This translates into, at the lowest level, vendor agnosticism and the ability to avoid vendor lock-in. If two components, say a Juniper switch and a Cisco switch, are enabled with the means by which they can be exposed as services, then it becomes possible to swap the two at the implementation layer without requiring the changes to trickle upward through the interface and into the higher layers of the architecture (see the sketch below). It's polymorphism applied to a data center operation rather than a single object's operations, to put it in developer's terms. It's SOA applied to a data center rather than an application, to put it in an architect's terms. It's an architectural parfait and, as we all know, everybody loves a parfait, right?
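Here is a minimal Python sketch of that idea, polymorphism applied to a data center operation: the upper layers consume an interface, so the vendor underneath can change without any modification trickling upward. The method names and the CLI strings printed here are illustrative assumptions, not the devices' actual configuration syntax.

```python
# Separating interface from implementation for a data center operation:
# swap vendors at the implementation layer, leave the layers above alone.

from abc import ABC, abstractmethod

class SwitchService(ABC):
    """The interface the cloud and IT-as-a-Service layers consume."""

    @abstractmethod
    def create_vlan(self, vlan_id):
        ...

class JuniperSwitch(SwitchService):
    def create_vlan(self, vlan_id):
        print(f"Junos-style: set vlans vlan{vlan_id} vlan-id {vlan_id}")

class CiscoSwitch(SwitchService):
    def create_vlan(self, vlan_id):
        print(f"IOS-style: vlan {vlan_id}")

def provision_network(switch):
    # The caller never changes, regardless of the vendor underneath.
    switch.create_vlan(100)

provision_network(JuniperSwitch())
provision_network(CiscoSwitch())
```

Swapping JuniperSwitch for CiscoSwitch changes nothing above the interface, which is exactly the vendor agnosticism described above.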
Related blogs & articles:
Applying Scalability Patterns to Infrastructure Architecture
The Other Hybrid Cloud Architecture
The New Distribution of The 3-Tiered Architecture Changes Everything
Infrastructure 2.0: Aligning the network with the business (and ...
Infrastructure 2.0: As a matter of fact that isn't what it means
Infrastructure 2.0: Flexibility is Key to Dynamic Infrastructure
Infrastructure 2.0: The Diseconomy of Scale Virus
Lori MacVittie - Infrastructure 2.0
Infrastructure 2.0: Squishy Name for a Squishy Concept
Pay No Attention to the Infrastructure Behind the Cloudy Curtain
Making Infrastructure 2.0 reality may require new standards
The Inevitable Eventual Consistency of Cloud Computing
Cloud computing is not Burger King. You can't have it your way. Yet.