The Stealthy Ascendancy of JSON
While everyone was focused on cloud, JSON has slowly but surely been taking over the application development world. It looks like the debate between XML and JSON may be coming to a close, with JSON poised to take the title of preferred format for web applications.

The statistics are impressive. ProgrammableWeb indicated that its "own statistics on ProgrammableWeb show a significant increase in the number of JSON APIs over 2009/2010. During 2009 there were only 191 JSON APIs registered. So far in 2010 [August] there are already 223!" Today there are 1262 JSON APIs registered, which means a growth rate of 565% in the past eight months, nearly catching up to XML, which currently lists 2162 APIs. At this rate, JSON will likely overtake XML as the preferred format by the end of 2011.

This is significant to both infrastructure vendors and cloud computing providers alike, because it indicates a preference for a programmatic model that must be accounted for when developing services, particularly those in the PaaS (Platform as a Service) domain. PaaS has yet to grab developers' mindshare, and it may be that support for JSON will be one of the ways in which that mindshare is attracted. Consider the results of the "State of Web Development 2010" survey from Web Directions, in which developers were asked about their cloud computing usage; only 22% responded in the affirmative to utilizing cloud computing. But of those 22% that do leverage cloud computing, the providers they use are telling: PaaS represents a mere 7.35% of developers' use of cloud computing, with storage (Amazon S3) and IaaS (Infrastructure as a Service) garnering 26.89% of responses. Google App Engine is the dominant PaaS platform at the moment, most likely owing to the fact that it is primarily focused on JavaScript, UI, and other utility-style services, as opposed to Azure's middleware and decidedly more enterprise-class services.

SaaS, too, is failing to recognize the demand from developers and the growing ascendancy of JSON. Consider this exchange on the Salesforce.com forums regarding JSON:

"Come on Salesforce, let's get this done. We need to integrate, we need this [JSON]."

If JSON continues its steady rise into ascendancy, PaaS and SaaS providers alike should be ready to support JSON-style integration, as its growth pattern indicates it is not going away but is instead picking up steam. Providers able to support JSON for PaaS and SaaS will have a competitive advantage over those that do not, especially as they vie for the hearts and minds of developers who are, after all, their core constituency.

THE IMPACT

What the steady rise of JSON should trigger for providers and vendors alike is a need to support JSON as the means by which services are integrated and invoked, and data exchanged. Application delivery, service provider, and Infrastructure 2.0-focused solutions need to provide APIs that are JSON compatible and that are capable of handling the format in order to provide core infrastructure services such as firewalling and data-scrubbing duties. The use of JSON-based APIs to integrate with external, third-party services continues to grow, and the demand for enterprise-class services that support JSON will continue to rise with it.
There are drawbacks, and this steady movement toward JSON has in some cases a profound impact on the infrastructure and architectural choices made by IT organizations, especially in terms of providing for consistency of services across what is likely a very mixed-format environment. Identity and access management and security services may not be prepared to handle JSON APIs, nor to provide the same services as they have for XML, which through long-established usage and standards efforts comes with its own set of specifications.

Including social networking "streams" in applications and web sites is now as common as including images, but changes to APIs may make basic security chores difficult. Consider that Twitter – very quietly – has moved to supporting JSON only for its Streaming API. Organizations that were, as well they should, scrubbing such streams to prevent both embarrassing and malicious code from being integrated unknowingly into their sites may have suddenly found that the infrastructure providing such services no longer worked:

"API providers and developers are making their choice quite clear when it comes to choosing between XML and JSON. A nearly unanimous choice seems to be JSON. Several API providers, including Twitter, have either stopped supporting the XML format or are even introducing newer versions of their API with only JSON support. In our ProgrammableWeb API directory, JSON seems to be the winner. A couple of items are of interest this week in the XML versus JSON debate. We had earlier reported that come early December, Twitter plans to stop support for XML in its Streaming API."
-- JSON Continues its Winning Streak Over XML, ProgrammableWeb (Dec 2010)

Similarly, caching and acceleration services may be confused by a change from XML to JSON: from a format that was well understood, and for which solutions were enabled with parsing capabilities, to one that is not.

IT'S THE DATA, NOT THE API

The fight between JSON and XML is one we continue to see in a general sense. See, it isn't necessarily the API that matters in the end, but the data format (the semantics) used to exchange that data. XML is considered unstructured, though in practice it's far more structured than JSON in the sense that there are metadata standards for XML that constrain security, identity, and even application formats. JSON, however, although having been included natively in ECMAScript v5 (JSON data interchange format gets ECMA standards blessing), has very few standards aside from those imposed by frameworks and toolkits such as jQuery. This will make it challenging for infrastructure vendors supporting services that target application data – data scrubbing, web application firewall, IDS, IPS, caching, advanced routing – to continue to effectively deliver those services without recognizing JSON as an option. The API has become little more than a set of URIs, and nearly all infrastructure directly related to application delivery is more than capable of handling them. It is the data, however, that presents a challenge, and which makes the developers' choice of formats so important in the big picture. It isn't just the application and integration that are impacted; it's the entire infrastructure and architecture that must adapt to support the data format. The World Doesn't Care About APIs – but it does care about the data, about the model. Right now, it appears that model is more than likely going to be presented in a JSON-encoded format.
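The data-scrubbing concern is easy to make concrete. Below is a minimal Python sketch of the kind of sanitization an infrastructure service would need to perform on a JSON "stream" before embedding it in a page; the field names and sample payload are hypothetical, and a production web application firewall would cover far more than script elements:

```python
import json
import re

# The most obvious embedded-markup attack: whole script elements inside
# string values. Real scrubbing would also cover event handlers,
# data: URIs, broken/partial tags, and so on.
SCRIPT_BLOCK = re.compile(r"<script[^>]*>.*?</script>", re.IGNORECASE | re.DOTALL)

def scrub(value):
    """Recursively walk a decoded JSON structure and strip script
    elements from every string value it contains."""
    if isinstance(value, str):
        return SCRIPT_BLOCK.sub("", value)
    if isinstance(value, list):
        return [scrub(item) for item in value]
    if isinstance(value, dict):
        return {key: scrub(item) for key, item in value.items()}
    return value  # numbers, booleans, null pass through untouched

# A hypothetical status object, shaped like what a social stream API might return.
raw = '{"user": "mallory", "text": "hi <script>alert(1)</script> there"}'
clean = scrub(json.loads(raw))
print(json.dumps(clean))  # {"user": "mallory", "text": "hi  there"}
```

Note what the sketch cannot know on its own: which fields may legitimately contain markup. An XML schema could declare that; nothing in the JSON format does. That is precisely the standards gap the infrastructure has to absorb.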
Related blogs & articles:
- JSON data interchange format gets ECMA standards blessing
- JSON Continues its Winning Streak Over XML
- JSON versus XML: Your Choice Matters More Than You Think
- I am in your HTTP headers, attacking your application
- The Web 2.0 API: From collaborating to compromised
- Would you risk $31,000 for milliseconds of application response time?
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- The New Distribution of The 3-Tiered Architecture Changes Everything
- Are You Scrubbing the Twitter Stream on Your Web Site?

Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
Infrastructure 2.0 ≠ cloud computing ≠ IT as a Service. There is a difference between Infrastructure 2.0 and cloud. There is also a difference between cloud and IT as a Service. But they do go together, like a parfait. And everybody likes a parfait…

The newest introduction to the cloud computing buzzword family is "IT as a Service." It is understandably causing some confusion because, after all, isn't that just another way to describe "private cloud"? No, actually it isn't. There's a lot more to it than that, and it's very applicable to both private and public models. Furthermore, equating "cloud computing" to "IT as a Service" does both as big a disservice as making synonyms of "Infrastructure 2.0" and "cloud computing." These three [ concepts | models | technologies ] are highly intertwined and in some cases even interdependent, but they are not the same. In the simplest explanation possible: Infrastructure 2.0 enables cloud computing, which enables IT as a Service. Now that we've got that out of the way, let's dig in.

ENABLE DOES NOT MEAN EQUAL TO

One of the core issues seems to be the rush to equate "enable" with "equal". There is a relationship between these three technological concepts, but they are in no wise equivalent, nor should they be treated as such. Like SOA, the differences between them revolve primarily around the level of abstraction and the layers at which they operate. Not the layers of the OSI model or the technology stack, but the layers of a data center architecture. Let's start at the bottom, shall we?

INFRASTRUCTURE 2.0

At the very lowest layer of the architecture is Infrastructure 2.0. Infrastructure 2.0 is focused on enabling dynamism and collaboration across the network and application delivery network infrastructure. It is the way in which traditionally disconnected (from a communication and management point of view) data center foundational components are imbued with the ability to connect and collaborate. This is primarily accomplished via open, standards-based APIs that provide a granular set of operational functions that can be invoked from a variety of programmatic methods such as orchestration systems, custom applications, and integration with traditional data center management solutions. Infrastructure 2.0 is about making the network smarter, both from a management and a run-time (execution) point of view, but in the case of its relationship to cloud and IT as a Service the view is primarily focused on management. Infrastructure 2.0 includes the service-enablement of everything from routers to switches, from load balancers to application acceleration, from firewalls to web application security components to server (physical and virtual) infrastructure. It is, distilled to its core essence, API-enabled components.

CLOUD COMPUTING

Cloud computing is the closest to SOA in that it is about enabling operational services in much the same way as SOA was about enabling business services. Cloud computing takes the infrastructure-layer services and orchestrates them together to codify an operational process that provides a more efficient means by which compute, network, storage, and security resources can be provisioned and managed. This, like Infrastructure 2.0, is an enabling technology. Alone, these operational services are generally discrete and are packaged up specifically as the means to an end – on-demand provisioning of IT services. Cloud computing is the service-enablement of operational services and also carries along the notion of an API. In the case of cloud computing, this API serves as a framework through which specific operations can be accomplished in a push-button-like manner.
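To make "API-enabled components" concrete, here is a hedged sketch in Python of the kind of granular, ordered conversation an orchestration system might have with a single Infrastructure 2.0 component. The management URL, endpoints, payloads, and token scheme are all invented for illustration (every vendor's actual API differs, which is rather the point); the third-party requests library is assumed to be installed:

```python
import requests  # third-party HTTP client; assumed available

BASE = "https://lb.example.com/api/v1"  # hypothetical load balancer management API

# Step 1: authenticate. Most infrastructure APIs hand back a token that
# must accompany every subsequent call.
token = requests.post(f"{BASE}/auth",
                      json={"user": "admin", "key": "..."}).json()["token"]
headers = {"Authorization": f"Bearer {token}"}

# Steps 2..n: granular operations, invoked in the right order. Create the
# pool before adding members, add members before exposing a virtual server.
requests.post(f"{BASE}/pools",
              json={"name": "web-pool"}, headers=headers).raise_for_status()
requests.post(f"{BASE}/pools/web-pool/members",
              json={"address": "10.0.0.11", "port": 80},
              headers=headers).raise_for_status()
requests.post(f"{BASE}/virtuals",
              json={"name": "web-vip", "destination": "203.0.113.10:80",
                    "pool": "web-pool"}, headers=headers).raise_for_status()
```

Granularity is the key property here: each call does one small thing, so the calls can be combined and ordered differently for each unique environment.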
IT as a SERVICE

At the top of our technology pyramid – as is likely obvious at this point, we are building up to the "pinnacle" of IT by laying ever more narrowly focused layers atop one another – we have IT as a Service. IT as a Service, unlike cloud computing, is designed to be consumed not only by other IT-minded folks, but also by (allegedly) business folks. IT as a Service broadens the provisioning and management of resources and begins to include not only operational services but those services that are more, well, businessy, such as identity management and access to resources. IT as a Service builds on the services provided by cloud computing – what is often called a "cloud framework" or a "cloud API" – which provides the means by which resources can be provisioned and managed. Now that sounds an awful lot like "cloud computing", but the abstraction is a bit higher than what we expect with cloud. Even in a cloud computing API we are still interacting fairly directly with operational and compute-type resources. With IT as a Service we are provisioning, primarily, infrastructure services, but we are doing so at a much higher layer and in a way that makes it easy for business and application developers and analysts alike to do so. An example is probably in order at this point.

THE THREE LAYERS in the ARCHITECTURAL PARFAIT

Let us imagine a simple "application" which must be available at all times. That's the "service" IT is going to provide to the business. In order to accomplish this seemingly simple task, there's a lot that actually has to go on under the hood, within the bowels of IT.

LAYER ONE

Consider, if you will, what fulfilling that request means. You need at least two servers and a load balancer for availability, you need some storage, and you need – albeit unknown to the business user – firewall rules to ensure the application is only accessible to those whom you designate. So at the bottom layer of the stack (Infrastructure 2.0) you need a set of components that match these functions, and they must all be enabled with an API (or at a minimum be able to be automated via traditional scripting methods). Now, the actual task of configuring a load balancer is not just a single API call. Ask Rackspace, or GoGrid, or Terremark, or any other cloud provider. It takes multiple steps to authenticate and configure – in the right order – that component. The same is true of many components at the infrastructure layer: the APIs are necessarily granular enough to provide the flexibility required to be combined in such a way as to be customizable for each unique environment in which they may be deployed. So what you end up with is a set of infrastructure services that comprise the appropriate API calls for each component, based on the specific operational policies in place.

LAYER TWO

At the next layer up you're providing even more abstract frameworks. The "cloud API" at this layer may provide services such as "auto-scaling" that require a great deal of configuration and registration of components with other components. There's automation and orchestration occurring at this layer of the IT Service Stack, as it were, that is much more complex but more narrowly focused than at the previous infrastructure layer. It is at this layer that the services become more customized and able to provide business- and customer-specific options. It is also at this layer where things become more operationally focused, with the provisioning of "application resources" comprising perhaps the provisioning of both compute and storage resources. This layer also lays the foundation for metering and monitoring (because you want to provide visibility, right?), which essentially overlays, i.e. makes a service of, multiple infrastructure resource monitoring services.
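A sketch may help make layer two concrete: one operational service that composes the granular layer-one calls into a codified, ordered process. Every function name below is a hypothetical stand-in (the stubs simply print or return labels), but the shape is the point – one "provision" button hiding a multi-component sequence:

```python
# Hypothetical layer-one wrappers; in a real system each would issue the
# ordered, authenticated component API calls sketched in the previous example.
def launch_server(name):              return f"vm:{name}"
def allocate_storage(name, size_gb):  return f"vol:{name}:{size_gb}gb"
def create_lb_pool(name, members):    return f"pool:{name}({len(members)} members)"
def open_firewall(name, allow, port): print(f"fw: {name} allow {allow} on port {port}")
def register_monitoring(name, items): print(f"mon: watching {items}")

def provision_web_application(app_name, instance_count=2):
    """A cloud-layer operational service: one call codifies the whole
    ordered process -- compute, storage, load balancing, security, visibility."""
    servers = [launch_server(f"{app_name}-{i}") for i in range(instance_count)]
    storage = allocate_storage(app_name, size_gb=50)
    pool = create_lb_pool(app_name, members=servers)
    open_firewall(app_name, allow=["203.0.113.0/24"], port=80)
    register_monitoring(app_name, servers + [pool, storage])
    return {"servers": servers, "pool": pool, "storage": storage}

print(provision_web_application("crm"))
```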
LAYER THREE

At the top layer is IT as a Service, and this is where systems become very abstracted and get turned into the IT King "A La Carte" Menu that is the ultimate goal according to everyone who's anyone (and a few people who aren't). This layer offers an interface to the cloud in such a way as to make self-service possible. It may not be Infrabook or even very pretty, but as long as it gets the job done, cosmetics merely enhance the value of what exists in the first place. IT as a Service is the culmination of all the work done at the previous layers to fine-tune services until they are at the point where they are consumable – in the sense that they are easy to understand and require no real technical understanding of what's actually going on. After all, a business user or application developer doesn't really need to know how the server and storage resources are provisioned, just in what sizes and how much it's going to cost. IT as a Service ultimately enables the end-user – whoever that may be – to easily "order" IT services to fulfill the application-specific requirements associated with an application deployment. That means availability, scalability, security, monitoring, and performance.

A DYNAMIC DATA CENTER ARCHITECTURE

One of the first questions that should come to mind is: why does it matter? After all, one could cut out the "cloud computing" layer and go straight from infrastructure services to IT as a Service. While that's technically true, it eliminates one of the biggest benefits of a layered and highly abstracted architecture: agility. By presenting each layer to the layer above as services, we are effectively employing the principles of a service-oriented architecture and separating the implementation from the interface. This provides the ability to modify the implementation without impacting the interface, which means less downtime and very little – if any – modification in the layers above the layer being modified. This translates into, at the lowest level, vendor agnosticism and the ability to avoid vendor lock-in. If two components, say a Juniper switch and a Cisco switch, are equipped with the means by which they can be exposed as services, then it becomes possible to swap the two at the implementation layer without requiring the change to trickle upward through the interface and into the higher layers of the architecture. It's polymorphism applied to a data center operation rather than a single object's operations, to put it in a developer's terms. It's SOA applied to a data center rather than an application, to put it in an architect's terms. It's an architectural parfait and, as we all know, everybody loves a parfait, right?
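The polymorphism analogy can be written down directly. A minimal Python sketch, with invented method names and configuration strings (the real contract would be the component's actual service API): the layers above program against the interface, so swapping the Juniper implementation for the Cisco one touches nothing above it.

```python
from abc import ABC, abstractmethod

class SwitchService(ABC):
    """The interface the layers above consume; the method name is a
    hypothetical stand-in for whatever the real service contract exposes."""
    @abstractmethod
    def create_vlan(self, vlan_id: int, name: str) -> None: ...

class JuniperSwitch(SwitchService):
    def create_vlan(self, vlan_id, name):
        print(f"junos-style: set vlans {name} vlan-id {vlan_id}")  # implementation detail

class CiscoSwitch(SwitchService):
    def create_vlan(self, vlan_id, name):
        print(f"ios-style: vlan {vlan_id} ; name {name}")          # implementation detail

def provision_tenant_network(switch: SwitchService, tenant_vlan: int):
    # The orchestration layer neither knows nor cares which vendor this is.
    switch.create_vlan(tenant_vlan, f"tenant-{tenant_vlan}")

provision_tenant_network(JuniperSwitch(), 101)
provision_tenant_network(CiscoSwitch(), 101)  # swap implementations freely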
Related blogs & articles:
- Applying Scalability Patterns to Infrastructure Architecture
- The Other Hybrid Cloud Architecture
- The New Distribution of The 3-Tiered Architecture Changes Everything
- Infrastructure 2.0: Aligning the network with the business (and ...
- Infrastructure 2.0: As a matter of fact that isn't what it means
- Infrastructure 2.0: Flexibility is Key to Dynamic Infrastructure
- Infrastructure 2.0: The Diseconomy of Scale Virus
- Lori MacVittie - Infrastructure 2.0
- Infrastructure 2.0: Squishy Name for a Squishy Concept
- Pay No Attention to the Infrastructure Behind the Cloudy Curtain
- Making Infrastructure 2.0 reality may require new standards
- The Inevitable Eventual Consistency of Cloud Computing
- Cloud computing is not Burger King. You can't have it your way. Yet.

Cloud Computing: Will data integration be its Achilles Heel?
Wesley: Now, there may be problems once our app is in the cloud.
Inigo: I'll say. How do I find the data? Once I do, how do I integrate it with the other apps? Once I integrate it, how do I replicate it?

If you remember this somewhat altered scene from the Princess Bride, you also remember that no one had any answers for Inigo. That's apropos of this discussion, because no one has any good answers for this version of Inigo either. And no, a holocaust cloak is not going to save the day this time.

If you've been considering deploying applications in a public cloud, you've certainly considered what must be the Big Hairy Question regarding cloud computing: how do I get at my data? There's very little discussion about this topic, primarily because at this point there's no easy answer. Data stored in the cloud is not easily accessible for integration with applications not residing in the cloud, which can definitely be a roadblock to adopting public cloud computing. Stacey Higginbotham at GigaOM had a great post on the topic of getting data into the cloud, and while the conclusion that bandwidth is necessary is also applicable to getting your data out of the cloud, the details are left in your capable hands.

We had this discussion when SaaS (Software as a Service) first started to pick up steam. If you're using a service like salesforce.com to store business-critical data, how do you integrate that back into other applications that may need it? Web services were the first answer, followed by integration appliances and solutions that included custom-built adapters for salesforce.com to more easily enable access to and integration of data stored "out there", in the cloud. Amazon offers URL-based and web services access to data stored in its SimpleDB offering, but that doesn't help folks who are using Oracle, SQL Server, or MySQL offerings in the cloud. And SimpleDB is appropriately named; it isn't designed to be an enterprise-class service – caveat emptor is in full force if you rely upon it for critical business data.

RDBMSs have their own methods of replication and synchronization, but mirroring and real-time replication methods require a lot of bandwidth and very low-latency connections – something not every organization can count on having. Of course you can always deploy custom triggers and services that automatically replicate back into the local data center, but that, too, is problematic, depending on bandwidth availability and the accessibility of applications and databases inside the data center. The reverse scenario is much more likely, with a daemon constantly polling the cloud computing data and pulling updates back into the data center (an approach sketched below). You can also just leave that data out there in the cloud, implement – or take advantage of, if they exist – service-based access to the data, and integrate it with business processes and applications inside the data center. You're relying on the availability of the cloud, the Internet, and all the infrastructure in between, but like the solution for integrating with salesforce.com and other SaaS offerings, this is likely the best of a set of "will have to do" options.

The issue of data and its integration has not yet raised its ugly head, mostly because very few folks are moving critical business applications into the cloud and, admittedly, cloud computing is still in its infancy. But even non-critical applications are going to use or create data, and that data will, invariably, become important or need to be accessed by folks in the organization, which means access to that data will – probably sooner rather than later – become a monkey on the backs of IT.
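That polling approach is simple enough to sketch. Below is a minimal Python daemon that periodically asks a hypothetical cloud-hosted REST service for records changed since the last pull and applies them locally. The endpoint, the record schema (id, updated_at as ISO timestamps), and the apply step are all invented for illustration, and the requests library is assumed to be installed:

```python
import time
import requests  # third-party HTTP client; assumed available

CLOUD_API = "https://app.example-cloud.com/api/records"  # hypothetical endpoint

def apply_locally(record):
    # Stand-in for the real work: an INSERT/UPDATE against the
    # data center's own database.
    print("replicating", record["id"])

def poll_forever(interval_seconds=300):
    last_sync = "1970-01-01T00:00:00Z"
    while True:
        # Ask only for what changed since the last successful pull,
        # to keep bandwidth use proportional to churn, not data size.
        resp = requests.get(CLOUD_API, params={"updated_since": last_sync})
        resp.raise_for_status()
        for record in resp.json():  # assume the service returns a JSON list
            apply_locally(record)
            # ISO-8601 timestamps compare correctly as strings.
            last_sync = max(last_sync, record["updated_at"])
        time.sleep(interval_seconds)  # the freshness vs. bandwidth trade-off
```

The interval is the knob: shorten it and you pay in bandwidth and cloud API calls; lengthen it and the data center's copy grows staler.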
The availability of, and ease of access to, data stored in the public cloud for integration, data mining, business intelligence, and reporting – all common enterprise application uses of data – will certainly affect adoption of cloud computing in general. The benefits of saving dollars on infrastructure (management, acquisition, maintenance) aren't nearly as compelling a reason to use the cloud when those savings would quickly be eaten up by the extra effort necessary to access and integrate data stored in the cloud.

Related articles by Zemanta:
- SQL-as-a-Service with CloudSQL bridges cloud and premises
- Amazon SimpleDB ready for public use
- Blurring the functional line - Zoho CloudSQL merges on-site and on-cloud
- As a Service: The many faces of the cloud
- A comparison of major cloud-computing providers (Amazon, Mosso, GoGrid)
- Public Data Goes on Amazon's Cloud

The Future of Hybrid Cloud
You keep using that word. I do not think it means what you think it means...

An interesting and almost ancillary point was made during a recent #cloudtalk hosted by VMware vCloud with respect to the definition of "hybrid" cloud. Sure, it implies some level of integration, but how much integration is required for an architecture to be considered a hybrid cloud?

The way I see it, there has to be some level of integration that supports the ability to automate something – either resources or a process – in order for an architecture to be considered a hybrid cloud. A "hybrid" anything, after all, is based on the premise of joining two things together to form something new. Simply using Salesforce.com and Ceridian for specific business functions doesn't seem to qualify. They aren't necessarily integrated (joined) in any way to the corporate systems, or even to processes that execute within the corporate environs. Thus, it seems to me that in order to truly be a "hybrid" cloud, there must be some level of integration. Perhaps that's simply at the process level, as is the case with SaaS providers when identity is federated as a means to reassert control over access as well as potentially provide single sign-on services.

Similarly, merely launching a development or test application in a public IaaS environment doesn't really "join" anything, does it? To be classified as "hybrid" one would expect there to be network or resource integration, via such emerging technologies as cloud bridges and gateways. The same is true internally with SDN and existing network technologies. Integration must be more than "able to run in the environment". There must be some level of orchestration and collaboration between the networking models in order to consider it "hybrid".

From that perspective, the future of hybrid cloud seems to rely upon the existence of a number of different technological solutions:

- Cloud bridges
- Cloud gateways
- Cloud brokers
- SDN (both application layer and network layer)
- APIs (to promote the integration of networks and resources)
- Standards (such as SAML, to enable orchestration and collaboration at the application layer)

Put these technologies all together and you get what seems to be the "future" of a hybrid cloud: SaaS, IaaS, SDN, and traditional technology integrated at some layer that enables both the business and operations to choose the right environment for the task at hand at the time they need it. In other words, our "network" diagrams of the future will necessarily need to extend beyond the traditional data center perimeter and encompass both SaaS and IaaS environments. That means that as we move forward, IT and operations need to consider how such environments will fit into and with existing solutions, as well as how emerging solutions will enable this type of hybrid architecture to come to fruition.

Yes, you did notice I left out PaaS. Isn't that interesting?

The Future of Cloud: Infrastructure as a Platform
Cloud needs to become a platform, and that means the infrastructure comprising it must also embrace the platform paradigm.

There's been a spate of articles, blogs, and mentions of OpenFlow in the past few months. IBM was the latest entry into the OpenFlow game, releasing an OpenFlow-enabled RackSwitch G8264, an update of a 64-port, 10 Gigabit Ethernet switch IBM put out a year ago. Interest in the specification appears to be growing, and not just because it's got the prefix-du-jour as part of its name, implying everything to everyone – free, extensible, interoperable, etc… While all those modifiers are indeed interesting and, to some, a highly important facet of the would-be standard, there's something else about it that is driving its popularity. That something else can be summed up with the statement: "infrastructure as a platform."

THE WEB 2.0 LESSON. AGAIN.

The importance of turning infrastructure into a platform can be evidenced by noting commentary on Web 2.0, a.k.a. social networking, applications and their failure/success to garner mindshare. Recently, a high-profile engineer at Google mistakenly posted a lengthy and refreshingly blunt commentary on what he views as Google's failure to recognize the importance of platform to successful offerings in today's demanding marketplace. To Google's credit, once the erroneous posting was discovered, it decided to "let it stand", and thus we are able to glean some insight about the importance of platform to today's successful offerings:

"While Yegge doesn't have a lot of good things to say about Amazon and its founder Jeff Bezos, he does note that Bezos – unlike Google – understands that it's not just about developing interesting products, but that it takes a platform to create a great product."
-- SiliconFilter, "Google Engineer: Google+ is a Prime Example of Our Complete Failure to Understand Platforms"

This insight is not restricted to software developers and engineers at all; the rising interest in PaaS (Platform as a Service) and the continued siren's song that it will dominate the cloud landscape in the future are tied to the same premise: it is the availability of a robust platform that makes or breaks solutions today, not features or functions or price. It is the ability to be successful by building, as Yegge says in his post, "an entire constellation of products by allowing other people to do the work."

Lest you think this concept applicable only to software, let me remind you of Nokia CEO Stephen Elop's somewhat blunt assessment of his company's failure to recognize this truth:

"The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren't taking our market share with devices; they are taking our market share with an entire ecosystem. This means we're going to have to decide how we either build, catalyse or join an ecosystem."
-- DevCentral F5 Friday, "A War of Ecosystems"

Interestingly, 47% of respondents surveyed by Zenoss/Cloud.com for its Cloud Computing Outlook 2011 indicated use of PaaS in 2011. Like SaaS, PaaS has some wiggle room in its definition, but its general popularity seems to indicate that yes, indeed, platform is an important factor. OpenFlow essentially provides this capability, turning infrastructure into a platform and enabling extensibility and customization that could not be achieved otherwise. It basically turns a piece of infrastructure into a giant backplane for new functions, features, and services. It introduces, allegedly, dynamism into what is typically a static network. It is what IaaS had the promise to be, but has as of yet failed to achieve.
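OpenFlow's core abstraction is small enough to sketch. The controller programs match/action entries into a switch's flow table; the switch consults the table per packet and punts misses back to the controller. The toy Python below models that division of labor conceptually; it is not the OpenFlow wire protocol or any real controller's API, and the field names are simplified for illustration:

```python
# A toy model of the OpenFlow idea: the control plane installs
# match -> action rules; the data plane just looks them up.
flow_table = []  # ordered list of (match_dict, action) pairs

def install_flow(match, action):
    """What a controller does: push a rule down to the switch."""
    flow_table.append((match, action))

def handle_packet(packet):
    """What the switch does per packet: first matching rule wins."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send_to_controller"  # table miss: ask the control plane

# Policy decided centrally, in software -- traffic steering without
# touching each box's configuration by hand.
install_flow({"tcp_dst": 80, "ip_dst": "10.0.0.5"}, "forward:port2")
install_flow({"ip_dst": "10.0.0.9"}, "drop")

print(handle_packet({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # forward:port2
print(handle_packet({"ip_dst": "192.0.2.1"}))                # send_to_controller
```

The platform quality comes from that last return value: anything the table doesn't cover flows up to software someone else can extend.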
CLOUD as a PLATFORM

The takeaway for cloud and infrastructure providers is that organizations want platforms. Developers want platforms. Operations wants platforms (see Puppet and Chef as examples of operational platforms). It's about enabling an ecosystem that encourages innovation, i.e. new features and functions and services, without requiring the wheel to be reinvented. It's about drag and drop, figuratively speaking, in the realm of infrastructure. Bringing the ability to deploy new services atop a platform that provides the basics. OpenFlow promises just such capabilities for infrastructure, much in the same way Facebook provides these basics for game and application developers. Mobile platforms offer the same for devices and operating systems. It's about enabling an ecosystem in which organizations can focus not on the core infrastructure, but on the custom functionality and process automation that delivers efficiency to IT across operations and development alike.

"The beauty of this is it gives more flexibility and control to the network," said Shaughnessy [marketing manager for system networking at IBM], "so you could actually adjust the way the traffic flows go through your network dynamically based on what's going on with your applications."
-- IBM releases OpenFlow-enabled switch

It enables flexibility in the network, the means to deploy more dynamism in traffic policy enforcement and shaping, and it ties back to cloud with its ability to impart multi-tenant capabilities to infrastructure without completely modifying the internal architecture of components – a major obstacle for many network-focused devices. OpenFlow is not a panacea; there are myriad reasons why it may not be appropriate as the basis for architecting the cloud platform foundation required to support future initiatives. But it is a prime example of the kind of platform-focused capabilities organizations desire as they move ahead in their journey to IT as a Service. The cloud on which organizations will be able to build their future data center architecture will be a platform, and that means from the bottom (infrastructure) to the middle (development) to the top (operations). What cloud and infrastructure providers must do is simulate the Facebook experience at the infrastructure layer. Infrastructure as a platform is the next step in the evolution of cloud computing.

Related blogs & articles:
- IT Services: Creating Commodities out of Complexity
- IBM releases OpenFlow-enabled switch
- The Cloud Configuration Management Conundrum
- IT as a Service: A Stateless Infrastructure Architecture Model
- If a Network Can't Go Virtual Then Virtual Must Come to the Network
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- This is Why We Can't Have Nice Things
- WILS: Automation versus Orchestration
- The Infrastructure Turk: Lessons in Services
- Putting the Cloud Before the Horse

Infrastructure 2.0: As a matter of fact that isn't what it means
We've been talking a lot about the benefits of Infrastructure 2.0, or Dynamic Infrastructure; a lot about why it's necessary; and a lot about what's required to make it all work. But we've never really laid out what it is, and that's beginning to lead to some misconceptions. As Daryl Plummer of Gartner pointed out recently, the definition of cloud computing is still, well, cloudy. Multiple experts can't agree on the definition, and the same is quickly becoming true of dynamic infrastructure. That's no surprise; we're at the beginning of what Gartner would call the hype cycle for both concepts, so there's some work to be done on fleshing out exactly what each means.

That dynamic infrastructure is tied to cloud computing is no surprise either, as dynamic infrastructure is very much an enabler of such elastic models of application deployment. But dynamic infrastructure is applicable to all kinds of models of application deployment: so-called legacy deployments, cloud computing and its many faces, and likely new models that have yet to be defined.

The biggest confusion out there seems to be that dynamic infrastructure is being viewed as Infrastructure as a Service (IaaS). Dynamic infrastructure is not the same thing as IaaS. IaaS is a deployment model in which application infrastructure resides elsewhere, in the cloud, and is leveraged by organizations desiring an affordable option for scalability that reduces operating and capital expenses by sharing compute resources "out there" somewhere, at a provider. Dynamic infrastructure is very much a foundational technology for IaaS, but it is not, in and of itself, IaaS. Indeed, simply providing network or application network solution services "as a service" has never required dynamic infrastructure. CDNs (Content Delivery Networks), managed VPNs, secure remote access, and DNS services have long been available as services to be used by organizations as a means by which they can employ a variety of "infrastructure services" without the capital expenditure in hardware and the time and effort required to configure, deploy, and maintain such solutions.

Simply residing "in the cloud" is not enough. A CDN is not "dynamic infrastructure", nor are hosted DNS servers. They are Infrastructure 1.0, legacy infrastructure, whose very nature is such that physical location has never been important to their deployment. Indeed, these services were designed without physical location as a requirement, necessarily, as their core functions are supposed to work in a distributed, location-agnostic manner.

Dynamic infrastructure is an evolution of traditional network and application network solutions to be more adaptable, to support integration with its environment and other foundational technologies, and to be aware of context (connectivity intelligence).

Adaptable

It is able to understand its environment and react to conditions in that environment in order to provide scale, security, and optimal performance for applications. This adaptability comes in many forms, from the ability to make management and configuration changes on the fly as necessary, to providing the means by which administrators and developers can manually or automatically make changes to the way in which applications are being delivered. The configuration and policies applied by dynamic infrastructure are not static; they are able to change based on predefined criteria or events that occur in the environment, such that the security, scalability, or performance of an application and its environs are preserved.
Some solutions implement this capability through event-driven architectures, with events such as "IP_ADDRESS_ASSIGNED" or "HTTP_REQUEST_MADE". Some provide network-side scripting capabilities to extend the ability to react and adapt to situations requiring flexibility, while others provide the means by which third-party solutions can be deployed on the solution to address the need for application- and user-specific capabilities at specific touch-points in the architecture.

Context Aware

Dynamic infrastructure is able to understand the context that surrounds an application, its deployment environment, and its users, and to apply relevant policies based on that information. Being context aware means being able to recognize that a user accessing Application X from a coffee shop has different needs than the same user accessing Application X from home or from the corporate office. It is able to recognize that a user accessing an application over a WAN or high-latency connection requires different policies than one accessing that application via a LAN or from close physical proximity over the Internet. Being context aware means being able to recognize the current conditions of the network and the application, and then leveraging its adaptable nature to choose the right policies at the time the request is made, such that the application is delivered most efficiently and quickly.

Collaborative

Dynamic infrastructure is capable of integrating with other application network and network infrastructure, as well as the management and control solutions required to manage both the infrastructure and the applications it is tasked with delivering. The integration capabilities of dynamic infrastructure require that the solution be able to direct, and take direction from, other solutions, such that changes in the infrastructure at all layers of the stack can be recognized and acted upon. This integration allows network and application network solutions to leverage their awareness of context in a way that ensures they are adaptable and can support the delivery of applications in an elastic, flexible manner. Most solutions use a standards-based control plane through which they can be integrated with other systems to provide the connectivity intelligence necessary to implement IaaS, virtualized architectures, and other cloud computing models in such a way that the perceived benefits of reduced operating expenses and increased productivity through automation can actually be realized.

These three properties of dynamic infrastructure work together, in concert, to provide the connectivity intelligence and the ability to act on information gathered through that intelligence. All three together form the basis for a fluid, adaptable, dynamic application infrastructure foundation on which emerging compute models such as cloud computing and virtualized architectures can be implemented. But dynamic infrastructure is not exclusively tied to emerging compute models and next-generation application architectures. Dynamic infrastructure can be leveraged to provide benefit to traditional architectures as well: its connectivity intelligence and adaptable nature improve the security, availability, and performance of applications in so-called legacy architectures, too. Dynamic infrastructure is a set of capabilities implemented by network and application network solutions that provide the means by which an organization can improve the efficiency of its application delivery and network architecture.
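To make the event-driven, context-aware model concrete, here is a minimal sketch in Python rather than in any vendor's actual network-side scripting language. Handlers register against named events (the names echo the examples quoted above), and the context carried with the event decides which policy applies; all event and field names are illustrative:

```python
# Toy event-driven policy engine, loosely modeled on the
# "HTTP_REQUEST_MADE"-style events described above.
handlers = {}

def on(event_name):
    """Register a handler function for a named infrastructure event."""
    def register(fn):
        handlers.setdefault(event_name, []).append(fn)
        return fn
    return register

def fire(event_name, context):
    for fn in handlers.get(event_name, []):
        fn(context)

@on("HTTP_REQUEST_MADE")
def choose_policy(ctx):
    # Context awareness: the same user gets different treatment
    # depending on where and how the request arrives.
    if ctx["client_network"] == "public_wifi":
        ctx["policy"] = "strict-auth+compression"
    elif ctx["latency_ms"] > 100:
        ctx["policy"] = "wan-optimized"
    else:
        ctx["policy"] = "default"

ctx = {"client_network": "public_wifi", "latency_ms": 35}
fire("HTTP_REQUEST_MADE", ctx)
print(ctx["policy"])  # strict-auth+compression
```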
That's why it's just not accurate to equate Infrastructure 2.0/Dynamic Infrastructure with Infrastructure as a Service cloud computing models. The former is a description of the next generation of network and network application infrastructure solutions: the evolution from static, brittle solutions to fluid, dynamic, adaptable ones. The latter is a deployment model that, while likely built atop dynamic infrastructure solutions, is not wholly comprised of dynamic infrastructure. IaaS is not a product, it's a service. Dynamic infrastructure is a product that may or may not be delivered "as a service". Glad we got that straightened out.

Does Cloud Solve or Increase the 'Four Pillars' Problem?
It has long been said – often by this author – that there are four pillars to application performance:

- Memory
- CPU
- Network
- Storage

As soon as you resolve one in response to application response times, another becomes the bottleneck, even if you are not hitting that bottleneck yet. For a bit more detail:

- "Memory consumption" – because this impacts swapping in modern operating systems.
- "CPU utilization" – because regardless of OS, there is a magic line after which performance degrades radically.
- "Network throughput" – because applications have to communicate over the network, and blocking or not (almost all code written for networks today is blocking), the information requested over the network is necessary and will eventually block code from continuing to execute.
- "Storage" – because IOPS matter when writing to or reading from disk (or when the OS swaps memory out and back in).

These four have long been relatively easy to track. The relationship is pretty easy to spot: when you resolve one problem, one of the others becomes the "most dangerous" to application performance. But historically, you've always had access to the hardware. Even in highly virtualized environments, these items could be considered both at the host and guest level – because both individual VMs and the entire system matter.

When moving to the cloud, the four pillars become much less manageable. How much less depends a lot upon your cloud provider, and how you define "cloud". Put in simple terms, if you are suddenly struck blind, that does not change what's in front of you, only your ability to perceive it. In the PaaS world, you have only the tools the provider offers to measure these things, and you are urged not to think of the impact that host machines may have on your app. But they do have an impact. In an IaaS world you have somewhat more insight but, as others have pointed out, less control than in your own data center.

(Picture courtesy of Stanley Rabinowitz, Math Pro Press.)

In the SaaS world, assuming you include that in "cloud", you have zero control and very little insight. If your app is not performing, you'll have to talk to the vendor's staff to (hopefully) get them to resolve issues. But is the problem any worse in the cloud than in the data center? I would have to argue no. Your ability to touch and feel the bits is reduced, but the actual problems are not. In a pure-play public cloud deployment, the performance of an application is heavily dependent upon your vendor, but the top-tier vendors (Amazon springs to mind) can spin up copies as needed to reduce workload. This is not a far cry from one common performance trick used in highly virtualized environments – bring up another VM on another server and add it to load balancing. If the app is poorly designed, the net result is not that you're buying servers to host instances; it is instead that you're buying instances directly. This has implications for IT. The reduced up-front cost of using an inefficient app – no matter which of the four pillars it is inefficient in – means that IT shops are more likely to tolerate inefficiency, even though in the long run the cost of paying monthly may be far more than the cost of purchasing a new server was, simply because the budget pain is reduced.

There are a lot of companies out there offering information about cloud deployments that can help you see if you feel blind. Fair disclosure: F5 is one of them; I work for F5. That's all you're going to hear on that topic in this blog. While knowing does not always directly correlate to taking action, and there is some information that only the cloud provider could offer you, knowing where performance bottlenecks are does at least give some level of decision-making back to IT staff. If an application is performing poorly, looking into what appears to be happening (you can tell network bandwidth, VM CPU usage, VM IOPS, etc., but not what's happening on the physical hardware) can inform decision-making about how to contain the OpEx costs of cloud.
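Sampling all four pillars at once is straightforward wherever you can run your own code. The sketch below uses Python's third-party psutil library (assumed installed via pip) to take one reading of each pillar. Inside a guest VM it reports the guest's view only, which is exactly the blindness described above: the host's contention never shows up here, and in a PaaS or SaaS environment you typically could not run it at all.

```python
import psutil  # third-party: pip install psutil

def sample_pillars(interval=1.0):
    """One sample per pillar. Network and disk counters are cumulative,
    so take two readings and divide by the interval to get rates."""
    net0, io0 = psutil.net_io_counters(), psutil.disk_io_counters()
    cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval`, measuring CPU
    net1, io1 = psutil.net_io_counters(), psutil.disk_io_counters()
    mem = psutil.virtual_memory().percent        # pillar: memory pressure
    net_bps = (net1.bytes_sent + net1.bytes_recv
               - net0.bytes_sent - net0.bytes_recv) / interval
    iops = (io1.read_count + io1.write_count
            - io0.read_count - io0.write_count) / interval
    return {"cpu_pct": cpu, "mem_pct": mem,
            "net_bytes_per_s": net_bps, "iops": iops}

print(sample_pillars())
```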
Internal cloud is a much easier play: you still have access to all the information you had before cloud came along, and generally the investigation is similar to that used in a highly virtualized environment. From a troubleshooting-performance-problems perspective, it's much the same. The key with both virtualization and internal (private) clouds is that you're aiming for maximum utilization of resources, so you will have to watch for the bottlenecks more closely – you're "closer to the edge" of performance problems, because you designed it that way. A comprehensive logging and monitoring environment can go a long way, in all cloud and virtualization environments, toward keeping on top of issues that crop up – particularly in a large data center with many apps running. And developer education on how not to be a resource hog is helpful for internally developed apps. For externally developed apps the best you can do is ask for sizing information and then test their assumptions before buying.

Sometimes, cloud simply is the right choice. If network bandwidth is the prime limiting factor, and your organization can accept the perceived security/compliance risks, for example, the cloud is an easy solution – bandwidth in the cloud is either not limited, or limited only by your willingness to write a monthly check to cover usage. Either way, it's not an Internet connection upgrade, which can be dastardly expensive not just at install, but month after month. Keep rocking it. Get the visibility you need, don't worry about what you don't need.

Related Articles and Blogs:
- Don MacVittie - Load Balancing For Developers
- Advanced Load Balancing For Developers. The Network Dev Tool
- Load Balancers for Developers – ADCs Wan Optimization ...
- Intro to Load Balancing for Developers – How they work
- Intro to Load Balancing for Developers – The Gotchas
- Intro to Load Balancing for Developers – The Algorithms
- Load Balancing For Developers: Security and TCP Optimizations
- Advanced Load Balancers for Developers: ADCs - The Code
- Advanced Load Balancing For Developers: Virtual Benefits
- Don MacVittie - ADCs for Developers
- Devops Proverb: Process Practice Makes Perfect
- Devops is Not All About Automation
- 1024 Words: Why Devops is Hard
- Will DevOps Fork?
- DevOps. It's in the Culture, Not Tech.
- Lori MacVittie - Development and General
- Devops: Controlling Application Release Cycles to Avoid the ...
- An Aristotlean Approach to Devops and Infrastructure Integration
- How to Build a Silo Faster: Not Enough Ops in your Devops

Let's Face It: PaaS is Just SOA for Platforms Without the Baggage
At some point in the past few years SOA apparently became a four-letter word (as opposed to just a TLA that leaves a bad taste in your mouth), or folks are simply unwilling – or unable – to recognize the parallels between SOA and cloud computing. This is mildly amusing given the heavy emphasis on services in all things now under the "cloud computing" moniker.

Simeon Simeonov was compelled to pen an article for GigaOM on the evolution/migration of cloud computing toward PaaS after an experience playing around with some data from CrunchBase. He came to the conclusion that if only there were REST-based web services (note the use of the term "web services" here; it matters later in this discussion) for both MongoDB and CrunchBase, his life would have been a whole lot easier:

"For an application developer, as opposed to an infrastructure developer, all these vestiges of decades-old operating system architecture add little value. In fact, they cause deployment and operational headaches—lots of them. If I had taken almost any other approach to the problem using the tools I'm familiar with I would have performed HTTP operations against the REST-based web services interface for CrunchBase and then used HTTP to send the data to MongoDB. My code would have never operated against a file or any other OS-level construct directly. […] Most assume that server virtualization as we know it today is a fundamental enabler of the cloud, but it is only a crutch we need until cloud-based application platforms mature to the point where applications are built and deployed without any reference to current notions of servers and operating systems."
-- Simeon Simeonov, "The next reincarnation of cloud computing"

Now I'm certainly not going to disagree with Simeon on his point that REST-based web services for data sources would make life a whole lot easier for a whole lot of people. I'm not even going to disagree with his assertion that PaaS is where cloud is headed. What needs to be pointed out is that what he (and a lot of other people) are describing is essentially SOA minus the standards baggage. You've got the notion of abstraction, in the maturation of platforms removing the need for developers to reference servers or operating systems (and thus files). You've got ubiquity, in a standards-based transport protocol (HTTP) through which such services are consumed. You've got everything except the standards baggage. You know them, the real four-letter words of SOA: SOAP, WSDL, WSIL and, of course, the stars of the "we hate SOA" show, WS-everything. But the underlying principles that were the foundation and the vision of SOA – abstraction of interface from implementation, standards-based communication channels, discrete chunks of reusable logic – are all present in Simeon's description. If they are not spelled out, they are certainly implied by his frustration with a required interaction with file system constructs, desiring instead some higher-level abstracted interface through which the underlying implementation is obscured from view.

CLOUDS AREN'T CALLED "as a SERVICE" for NOTHING

Whether we're talking about compute, storage, platform, or infrastructure as a service, the operative word is service. It's a services-based model, a service-oriented model. It's a service-oriented architecture that's merely moved down the stack a bit, into the underlying and foundational technologies upon which applications are built.
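Simeonov's wish is easy to express in code. Below is a hedged sketch: pull records from a hypothetical REST-based, CrunchBase-style API and push them into a hypothetical REST front end for a document store, HTTP end to end, with no file or other OS-level construct in sight. Both URLs and the payload shape are invented for illustration (the real CrunchBase and MongoDB interfaces of the time differed), and the requests library is assumed installed:

```python
import requests  # third-party HTTP client; assumed available

SOURCE = "https://api.example-data.com/companies"     # hypothetical REST data source
SINK = "https://mongo-http.example.com/db/companies"  # hypothetical REST store front end

# Everything is an HTTP operation against a service -- the developer never
# touches a file, a socket configuration, or a server by name.
companies = requests.get(SOURCE, params={"category": "cloud"}).json()
for company in companies:  # assume the source returns a JSON array
    requests.post(SINK, json=company).raise_for_status()
```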
Instead of building business services, we're talking about building developer services – messaging services, data services, provisioning services. Services, services, and more services. Move down the stack again, and when we talk about devops and automation, or cloud and orchestration, we're talking about leveraging services – whether RESTful or SOAPy – to codify operational and data center-level processes as a means to shift the burden of managing infrastructure from people to technology. Infrastructure services that can be provisioned on-demand, that can be managed on-demand, that can apply policies on-demand. PaaS is no different. It's about leveraging services instead of libraries or adapters or connectors. It's about platforms – data, application, messaging – as a service.

And here's where I'll diverge from agreeing with Simeon, because it shouldn't matter to PaaS how the underlying infrastructure is provisioned or managed, either. I agree that virtualization isn't necessary to build a highly scalable, elastic, and on-demand cloud computing environment. But whether that data service is running on bare metal, on a physical server supported by an operating system, or on a virtual server should not be the concern of the platform services. Whether elastic scalability of a RabbitMQ service is enabled via virtualization or not is irrelevant. It is exactly that level of abstraction that makes it possible to innovate at the next layer: for PaaS offerings to focus on platform services and not the underlying infrastructure, and for developers to focus on application services and not the underlying platforms. Thus his musings on the migration of IaaS into PaaS ignore that for most people, "cloud" is essentially a step pyramid, with each "level" in that pyramid founded upon a firm underlying layer that exposes itself as services.

SOA IS ALIVE and LIVING UNDER an ASSUMED NAME for ITS OWN PROTECTION

If we return to the early days of SOA, you'll find this is exactly the same prophetic message offered by proponents riding high on the "game changing" technology of that time. SOA promised agility through abstraction, reuse through a services-oriented approach to composition, and relieving developers of the need to be concerned with how and where a service was implemented so they could focus instead on innovating new solutions. That's the same thing all the *aaS offerings are trying to provide – and with many of the same promises. The "cloud" plays into the paradigm by introducing elastic scalability, multi-tenancy, and the notion of self-service provisioning that brings the financial incentives to the table. The only thing missing from the "as a service" paradigm is a plethora of standards and the bad taste they left in many a developer's mouth. And it is that facet of SOA that is likely the impetus for refusing to say the "S" word in close proximity to cloud and *aaS. The conflict, the disagreement, the confusion, the difficulties, the lack of interoperability that nearly destroyed the interoperability designed in the first place – all the negatives associated with SOA come to the fore upon hearing that TLA, rather than its underlying concepts and architectural premises. Premises which, if you look around hard enough, you'll find still very much in use and successfully doing exactly what they promised to do. Simeon himself does not appear to disagree with the SOA-aaS connection. In a Twitter conversation he said, "I still have scars from the early #SOA days.
Shouldn't we start with something simpler for PaaS?" To which I would now say: but we are. After all, it wasn't – and isn't – SOA that was so darn complex; it was its myriad complex and often competing standards. A rose by any other name, and all that. We can refuse to use the acronym, but that doesn't change the fact that the core principles we're applying (successfully, I might add) are, in fact, service-oriented.

Is PaaS Just Outsourced Application Server Platforms?
There's a growing focus on PaaS (Platform as a Service), particularly as Microsoft has been rolling out Azure and VMware continues to push forward with its SpringSource acquisition. Amazon, though generally labeled as IaaS (Infrastructure as a Service), is also a "player" with its SimpleDB and SQS (Simple Queue Service) and, more recently, its SNS (Simple Notification Service). But there's also Force.com, SaaS (Software as a Service) giant Salesforce.com's incarnation of a "platform", as well as Google's App Engine. As is the case with "cloud" in general, the definition of PaaS is varied and depends entirely on to whom you're speaking at the moment.

What's interesting about SpringSource and Azure and many other PaaS offerings is that, as far as the customer is concerned, they're very much like an application server platform. The biggest difference is, of course, that the customer need not concern themselves with the underlying management and scalability. The application, however, is still the customer's problem. That's not that dissimilar from what enterprise-class organizations build out in their own data centers using traditional application server platforms like .NET and JavaEE. The application server platform is, well, a platform, in which multiple applications are deployed in their own cozy little isolated containers. You might even recall that JavaEE containers are called, yeah, "virtual machines." And even though Force.com and Google App Engine are proprietary platforms (and generally unavailable for deployment elsewhere), they still bear many of the characteristic marks of an application server platform.

Cloud Computing: It's the destination, not the journey that is important
How the cloud acts and is used is more important than where it physically resides.

Cloud computing and SOA suffer from the same lack of prescriptive architectures. They are defined by how they act rather than by what they are, or from what they are composed. They are, in a way, existential technology that cannot be confined to a simple architectural diagram but requires instead a set of properties, or ways of acting, in order to be recognized. To oversimplify and paraphrase Jean-Paul Sartre's concepts of existentialism, we define ourselves (mankind) through our actions. To apply this to technology is a fairly easy thing: some technology is defined through what it does rather than what it is. Cloud computing is nothing but the way in which an infrastructure deploys and delivers applications.

That will surely irritate cloud purists, as much as the impure use of object-oriented principles used to annoy me when I was first developing applications. But with age and experience comes wisdom, and the hindsight to see that there are many roads which lead to the same end. Unlike many philosophical theories, with technology it often is the destination and not the journey that is important.

Definition of Cloud Computing
"Gartner defines cloud computing (hereafter referred to as "cloud") as a style of computing where massively scalable IT-related functions and information are provided as a service across the Internet, potentially to multiple external customers, where the consumers of the services need only care about what the service does for them, not how it is implemented. Cloud is not an architecture, a platform, a tool, an infrastructure, a Web site or a vendor. It is a style of computing. Many architectures can be used to support its implementation and use. For example, it is possible to use cloud in private enterprises to build private clouds, but there is only one public cloud based on the Internet."
-- Gartner Research, ID G00157908, 28 May 2008

The First Principle of Existentialism
"Man is nothing but what he makes of himself."
-- Jean-Paul Sartre, "Essays in Existentialism", 1965

The First Principle of Cloud Computing
Cloud computing is nothing but the way in which an infrastructure deploys and delivers applications.

Many pundits argue that the "cloud" in "cloud computing" is the Internet, and only the Internet. But it's telling that almost every application architecture diagram offered up uses the same "cloud" to represent the network, whether it's internal or external to the organization. That "cloud" represents abstraction, obfuscation; it is meant to show that there is a network responsible for delivering the applications depicted, it's just too complex (or sensitive) to be depicted in a diagram – or, as is more often the case, the folks responsible for application infrastructure aren't concerned about the network infrastructure supporting the applications. Which brings us back full circle to the definition of cloud computing, which includes a lack of concern regarding the implementation details of how applications get delivered.

As Gartner has posited, the cloud is not an architecture, a platform, a tool, or an infrastructure. It's not a web site, it's not a vendor. "It is a style of computing." It is a deployment model, much in the same way SOA is a style of computing; it is a deployment model and not a prescriptive architecture. It defines itself by how it acts, not what it is.
If it is used to deliver applications in a way that is transparent, that does not require the end-user to understand (or concern themselves with) the underlying infrastructure – application and network – then it likely fits under the moniker "cloud computing". If the same principles used by vendors like Amazon, BlueLock, Joyent, and Microsoft are used by organizations to implement a dynamic, scalable, on-demand application and network infrastructure, does it really matter where that infrastructure physically resides? If Microsoft deploys an application in its own cloud, in its data center, and then makes use of that cloud for internal organizational applications, is it still cloud computing? Yes, of course it is. So why should it matter if an enterprise does the same thing? It's still cloud computing, based on how the infrastructure acts and what it delivers, not where it is or who uses it.

What's important is what the cloud infrastructure does. Scalability, transparency, supporting an on-demand computing model. That's what cloud computing is, whether it's implemented as SaaS (Software as a Service) or as IaaS (Infrastructure as a Service) or using automated virtualization solutions within the data center. As long as the infrastructure you build out is capable of providing the benefits of cloud computing – efficiency, scalability, and agility – you're doing cloud computing. That sounds a lot easier than it is, because you have to scale while being efficient, you have to be agile without sacrificing scalability, and you have to do it all in a way that the end-user (who may be a developer) doesn't need to know how it's implemented.

So rather than worry about how, worry about what. At the end of your implementation, is your infrastructure agile? Is it scalable (transparently)? Is it efficient? Is it abstracted? Does it support on-demand computing without sacrificing those properties? If it is, then you've reached your cloud computing destination.