Cloud Computing Infrastructure
Is Your Cloud Opaque or Transparent?
Cloud computing promises customers the ability to deliver scalable applications on demand without the overhead of a massive data center. How much visibility, flexibility, and control you have into and over the cloud computing environment depends on whether the provider you select offers an opaque or a transparent cloud computing environment.

OPAQUE CLOUD COMPUTING MODEL

In an opaque cloud computing model all details are hidden from the organization. The hardware and software infrastructure details are not necessarily known or controlled by the organization but are completely managed by the cloud computing provider. This allows for a completely on-demand environment in which all resources necessary to deliver applications according to service level agreements are automatically provisioned, de-provisioned, and managed by the cloud computing provider. The organization need only develop and deploy the application to the cloud; the rest of the details are opaque and handled for the organization by the cloud computing provider. Most opaque cloud computing providers currently allow the organization to determine how many "instances" of their application are running at any given time, with the details of provisioning the appropriate resources to support those instances hidden from the customer's view.

In many ways, SaaS (Software as a Service) providers such as Salesforce.com have been using an opaque cloud computing model for many years, as the implementation details in terms of hardware and software infrastructure are completely hidden from and unavailable to the customer (the organization). The difference between a SaaS offering and an opaque cloud computing model is in the application development and deployment processes. A SaaS offering such as Salesforce.com or Google Apps requires development of applications on a specific platform, almost always proprietary to the SaaS provider and non-negotiable. An opaque cloud computing provider allows the organization to determine the platform upon which applications will be deployed, more akin to some of Amazon's cloud offerings, such as EC2, though the underlying operating system and hardware may not be known due to the extensive use of operating system virtualization by the cloud computing provider. This is why virtualization is inexorably tied to cloud computing: it is the most efficient way to deploy an application to a remote, virtual data center without the overhead of configuring and managing the incredibly high number of possible application platform combinations. Opaque cloud computing providers like Joyent have an infrastructure already constructed to scale. Customers may be aware of what that infrastructure comprises, but cannot necessarily specify the choice of switches, routers, or application delivery infrastructure.

TRANSPARENT CLOUD COMPUTING MODEL

In a transparent cloud computing model the organization is left to determine its specific needs. The organization decides how much computing power it requires and what hardware and software solutions it will require, and it manages its provisioned resources in the cloud. The transparent cloud computing model is more akin to an outsourced data center - a virtual data center, if you will - than it is to an on-demand opaque cloud computing model. The acquisition and provisioning of resources becomes much more difficult in a transparent cloud computing model.
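To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. The field names are invented for the example, not any provider's actual API; the point is simply how much more the customer must specify (and subsequently manage) in the transparent model.

```python
from dataclasses import dataclass

@dataclass
class OpaqueDeployment:
    """Opaque model: the customer states what to run and how many copies;
    the provider decides every infrastructure detail."""
    application_image: str
    instance_count: int

@dataclass
class TransparentDeployment:
    """Transparent model: the customer also decides how it runs --
    sizing, platform, and supporting infrastructure choices."""
    application_image: str
    instance_count: int
    cpu_cores: int
    memory_gb: int
    operating_system: str
    load_balancer: str      # e.g. a virtual appliance the customer selects
    firewall_policy: str
    storage_gb: int = 100
    managed_by_customer: bool = True

# The opaque request is short because everything else is the provider's problem.
opaque = OpaqueDeployment(application_image="shop-web:1.4", instance_count=4)

# The transparent request is longer: more control, more to acquire and manage.
transparent = TransparentDeployment(
    application_image="shop-web:1.4", instance_count=4,
    cpu_cores=2, memory_gb=4, operating_system="linux",
    load_balancer="virtual-adc", firewall_policy="web-default",
)
print(opaque, transparent, sep="\n")
```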
The prospect of automated, on-demand computing in a transparent cloud computing model is good, and in some cases already available for some functions. The same mechanisms used to manage the opaque cloud computing environment could become customer-facing ones. Some management and configuration of cloud computing resources is currently being offered to customers by providers like RightScale, though which infrastructure functions can be delegated to the organization varies greatly from provider to provider. Rackspace is a good example of a transparent cloud computing model, as are many of the traditional hosting providers.

The transparent cloud computing environment is still evolving, currently comprising several different models for architecting your infrastructure. Some, like Areti, offer flexibility in infrastructure choices by taking advantage of virtual appliances. This allows the customer to choose from a number of application and network infrastructure solutions while keeping the cost of acquisition and management down for both the provider and the customer. Other providers continue to focus on physical infrastructure deployment in a fully or collaboratively managed environment, understanding that some organizations will require dedicated, proven solutions if they move into the cloud.

THE HYBRID MODEL

CIO.com recently offered a list of 11 cloud computing vendors to watch, culled from a Forrester Research report. The list comprises a mix of opaque and transparent providers, with a tendency to lean toward the opaque. A few of the providers on the list are leaning toward a hybrid cloud computing model, with the ability to specify a choice of infrastructure devices from those supported by the provider while providing fully managed services.

A hybrid model is likely where providers will eventually converge, as it promises to provide the greatest flexibility for customers without sacrificing some of the control necessary on the part of the provider. After all, while customers may be allowed to manage and configure some components of the application delivery network, other components (routers, switches) are unlikely to require such hands-on management by customers. In fact, such mucking around at layers 2 and 3 could very well do more harm than good, as most value-added features and functionality for application delivery come at layer 4 and above and are best handled by a solution separate from the core network infrastructure. And while many customers will be comfortable testing the waters with virtual appliances or open-source solutions for application delivery infrastructure, eventually more proven solutions will be required as customers begin to demand more flexibility and functionality in the cloud.

Cloud computing providers who evolve quickly and take advantage of componentized application delivery infrastructure will be able to better differentiate their offerings with additional features such as acceleration, security, rate shaping, and advanced authentication. These advanced features are one of the primary reasons that "Option #3" will be increasingly important for application delivery and networking infrastructure providers. Cloud computing providers appear willing to support such features, but require that the solutions can be integrated and remotely managed, on demand, before they will do so.

Dynamic Infrastructure: The Cloud within the Cloud
When folks are asked to define the cloud they invariably, somewhere in the definition, bring up the point that "users shouldn't care" about the actual implementation. When asked to diagram a cloud environment we end up with two clouds: one representing the "big cloud" and one inside the cloud, representing the infrastructure we aren't supposed to care about, usually with some pretty graphics representing applications being delivered out of the cloud over the Internet. But some of us do need to care what's obscured; the folks tasked with building out a cloud environment need to know what's hidden in the cloud in order to build an infrastructure that will support such a dynamic, elastic environment.

It is the obscuring of the infrastructure that makes cloud seem so simple. Because we're hiding all the moving parts that need to work in concert to achieve such a fluid environment, it appears as if all you need is virtualization and voila! The rest will take care of itself. But without a dynamic infrastructure supporting all the virtualized applications - and, in many cases, virtualized infrastructure - such an environment is exceedingly difficult to build.

WHAT'S HIDDEN IN THE CLOUD

Inside the "cloud within the cloud" there are a great number of pieces of infrastructure working together. Obviously there are the core networking components: routers, switches, DNS, and DHCP, without which connectivity would be impossible. Moving up the stack we find load balancing and application delivery infrastructure: the core application networking components that enable the dynamism promised by virtualized environments to be achieved. Without a layer of infrastructure bridging the gap between the network and the applications, virtualized or not, it is difficult to achieve the kind of elasticity and dynamism necessary for the cloud to "just work" for end users.

It is the application networking layer that is responsible for ensuring availability, properly routing requests, and applying application-level policies such as security and acceleration. This layer must be dynamic, because the virtualized layers of web and application servers are themselves dynamic. Application instances may move from IP to IP across hours or days, and the application networking layer must be able to adapt to that change without requiring manual intervention in the form of configuration modification.

Storage virtualization, too, resides in this layer of the infrastructure. Storage virtualization enables a dynamic infrastructure by presenting a unified view of storage to the applications and internal infrastructure, ensuring that the application need not be modified in order to access file-based resources. Storage virtualization can further be the means through which cloud control mechanisms manage the myriad virtual images required to support a cloud computing infrastructure.

The role of the application networking layer is to mediate, or broker, between clients and the actual applications to ensure a seamless access experience regardless of where the actual application instance might be running at any given time. It is the application networking layer that provides network and server virtualization such that the actual implementation of the cloud is hidden from external constituents. Much like storage virtualization, application networking layers present a "virtual" view of the applications and resources requiring external access.
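As a rough, purely illustrative sketch of that brokering role (the class and method names below are invented for the example, not any product's API), the application networking layer can be thought of as a registry that maps a stable virtual endpoint to whatever instances happen to be alive right now:

```python
class VirtualApplication:
    """Stable, externally visible endpoint for an application whose
    instances come, go, and change IP addresses underneath it."""

    def __init__(self, name, virtual_ip):
        self.name = name
        self.virtual_ip = virtual_ip      # what clients see; never changes
        self.instances = {}               # instance_id -> current address

    # Events the surrounding infrastructure would feed in as instances
    # are provisioned, de-provisioned, or moved.
    def instance_online(self, instance_id, address):
        self.instances[instance_id] = address

    def instance_offline(self, instance_id):
        self.instances.pop(instance_id, None)

    def instance_moved(self, instance_id, new_address):
        self.instances[instance_id] = new_address

    def route(self, request_id):
        """Broker a request to one live instance; clients only ever know the VIP."""
        if not self.instances:
            raise RuntimeError(f"{self.name}: no healthy instances available")
        # Trivial selection policy for illustration; a real application delivery
        # layer would weigh health, load, and application-level context.
        instance_id = sorted(self.instances)[request_id % len(self.instances)]
        return self.instances[instance_id]

app = VirtualApplication("crm", virtual_ip="203.0.113.10")
app.instance_online("vm-1", "10.0.1.21")
app.instance_online("vm-2", "10.0.1.22")
print(app.route(1))                      # -> 10.0.1.22
app.instance_moved("vm-2", "10.0.4.7")   # instance migrated; clients never notice
print(app.route(1))                      # -> 10.0.4.7
```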
This is why dynamism is such an integral component of a cloud computing infrastructure: the application networking layer must, necessarily, keep tabs on application instances and be able to associate them with the appropriate "virtual" application it presents to external users. Classic load balancing solutions are incapable of such dynamic, near real-time reconfiguration and discovery, and almost always require manual intervention. Dynamic application networking infrastructure is not only capable of this type of autonomous function but excels at it, integrating with the systems necessary to become aware of changes within the application infrastructure and to act upon them.

The "cloud within the cloud" need only be visible to implementers; but as we move forward and more organizations attempt to act on a localized cloud computing strategy, it becomes necessary to peer inside the cloud and understand how the disparate pieces of technology combine. This visibility is a requirement if organizations are to achieve the goals desired through the implementation of a cloud computing-based architecture: efficiency and scalability.

Bursting the Cloud
The cloud computing craze is leading to some interesting new terms. Cloudware and cloudbursting are two terms I particularly like for their ability to describe specific computing models based on cloud computing. Today we're going to look at cloudbursting, which is basically a new twist on an old concept. Cloudbursting appears to marry the traditional, safe enterprise computing model with cloud computing; in essence, bursting into the cloud when necessary, or using the cloud when additional compute resources are required temporarily. Jeff at the Amazon Web Services Blog talks about the inception of this term as applied to the latter, and describes it in his blog post as a method used by Thomas Brox Røst to regenerate a number of dynamic pages in 5 hours rather than the 7 hours that would have been required had he attempted such a feat internally. His approach is further described on the High Scalability Blog.

Cloudbursting can also be used to shoulder the burden of some of an application's processing. For example, basic application functionality could be provided from within the cloud while more critical (e.g. revenue-generating) applications continue to be served from within the controlled enterprise data center. This assumes that only a portion of consumers will actually be interacting with the data-driven side of a web site (customer management, process visibility, etc.) while the greater portion will simply be browsing around on the non-interactive, as it were, side of the site.

Bursting has traditionally been applied to resource allocation and automated provisioning/de-provisioning of resources, historically focused on bandwidth. Today, in the cloud, it is being applied to resources such as servers, application servers, application delivery systems, and other infrastructure required to provide on-demand computing environments that expand and contract as necessary, without manual intervention. This requires the ability to automate the cloud's data center. Data center automation in a cloud computing environment, regardless of the opacity of the model, requires more than simple workflow systems. It requires on-demand control and management over all devices in the delivery chain, from the storage to the application and web servers to the load balancers and acceleration offerings that deliver the applications to end users. This is more akin to data center orchestration than automation, as it requires that many moving parts and pieces be coordinated in order to perform a highly complex set of tasks seamlessly and with as little manual intervention as possible. This is one of the foundational requirements of a cloud computing infrastructure: on-demand, automated scalability.

Data center automation is nothing new. Hosting and service providers have long automated their data centers in order to reduce the cost of customer acquisition and management, and to improve the efficiency of provisioning and de-provisioning processes. These benefits can also be realized inside the data center, regardless of the model being employed. The same automation required for smooth, cost-effective management of a cloud computing data center can be used to achieve smooth, cost-effective management of an enterprise data center.

The hybrid application deployment model involving cloud computing requires additional intelligence on the part of the application delivery network.
The application delivery network must be able to understand what is being requested and where it resides; it must be able to intelligently route requests. This, too, is a fundamental attribute of cloud computing infrastructure: intelligence. When distributing an application across multiple locations - whether local servers, remote data centers, or "in the cloud" - it becomes necessary for a controlling node to properly route requests based on application data. In a less sophisticated model, global load balancing could be substituted as a means of directing requests to the appropriate site, a task for which global load balancers seem a perfect fit.

A hybrid approach like cloudbursting seems particularly appealing. Enterprises seem reluctant to move business-critical applications into the cloud at this juncture, but are likely more willing to assign responsibility to an outsourced provider for less critical application functionality with variable volume requirements, which fits well with an on-demand resource bursting model. Cloudbursting may be one solution that makes everyone happy.
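A minimal sketch of the bursting decision described above might look like the following. This is pure illustration in Python; the thresholds, site names, and helper functions are invented, not any vendor's API. Requests are served locally until local capacity is exhausted, then spill over to cloud-hosted instances:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    capacity: int          # max concurrent requests this site should carry
    active: int = 0        # requests currently in flight

    def has_headroom(self) -> bool:
        return self.active < self.capacity

def route_request(local: Site, cloud: Site) -> Site:
    """Prefer the enterprise data center; burst to the cloud only when the
    local site is at capacity. A real controller would also weigh request
    type, e.g. keep revenue-generating transactions local."""
    site = local if local.has_headroom() else cloud
    site.active += 1
    return site

datacenter = Site("enterprise-dc", capacity=3)
cloud = Site("cloud-provider", capacity=100)

for i in range(5):
    chosen = route_request(datacenter, cloud)
    print(f"request {i} -> {chosen.name}")
# requests 0-2 stay local; requests 3-4 burst into the cloud
```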
Load balancing is key to successful cloud-based (dynamic) architectures

Much of the dialogue today surrounding cloud computing and virtualization is still taking the 50,000-foot view. It's all conceptual; it's all about business value, justification, interoperability, and use cases. These are all good conversations that need to happen in order for cloud computing and virtualization-based architectures to mature, but as is often the case, that leaves the folks tasked with building something right now a bit on their own. So let's ignore the high-level view for just a bit and talk reality.

Many folks are being tasked, now, with designing or even implementing some form of a cloud computing architecture - usually based around virtualization technology like VMware (a March 2008 Gartner Research report predicted VMware would likely hold 85% of the virtualization market by the end of 2008). But architecting a cloud-based environment requires more than just deploying virtual images and walking away. Cloud-based computing is going to require that architects broaden their understanding of the role that infrastructure like load balancers plays in enterprise architecture, because load balancers are a key component of a successful cloud-based implementation, whether that's a small proof of concept or a complex, enterprise-wide architectural revolution.

The goal of a cloud-based architecture is to provide some form of elasticity: the ability to expand and contract capacity on demand. The implication is that at some point additional instances of an application will be needed in order for the architecture to scale and meet demand. That means there needs to be some mechanism in place to balance requests between two or more instances of that application. The mechanism most likely to succeed at such a task is a load balancer.

The challenges of attempting to build such an architecture without a load balancer are staggering. There's no other good way to take advantage of the additional capacity introduced by multiple instances of an application that is also efficient in terms of configuration and deployment. All other methods require modifications and changes to multiple network devices in order to properly distribute requests across multiple instances of an application. Likewise, when the additional instances of that application are de-provisioned, the changes to the network configuration need to be reversed. Obviously a manual process would be time-consuming and inefficient, effectively erasing the benefits gained by introducing a cloud-based architecture in the first place.

A load balancer provides the means by which instances of applications can be provisioned and de-provisioned automatically, without requiring changes to the network or its configuration. It automatically handles the increases and decreases in capacity and adapts its distribution decisions based on the capacity available at the time a request is made. Because the end user is always directed to a virtual server, or IP address, on the load balancer, the increase or decrease of capacity provided by the provisioning and de-provisioning of application instances is non-disruptive. As even the most basic definitions of cloud computing require, the end user is abstracted by the load balancer from - and need not care about - the actual implementation.
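To illustrate that behavior, here is a hedged, highly simplified sketch in Python rather than any particular load balancer's API: a single virtual server whose backing pool can grow and shrink at runtime, with distribution decisions made against whatever capacity exists at the moment a request arrives.

```python
class LoadBalancer:
    """One stable virtual server in front of an elastic pool of instances."""

    def __init__(self, virtual_ip):
        self.virtual_ip = virtual_ip
        self.pool = {}                    # address -> active connection count

    def provision(self, address):
        """Called when a new application instance comes online."""
        self.pool.setdefault(address, 0)

    def deprovision(self, address):
        """Called when an instance is torn down; no network reconfiguration,
        no client-visible change -- the virtual IP stays the same."""
        self.pool.pop(address, None)

    def handle_request(self):
        """Least-connections selection against current capacity."""
        if not self.pool:
            raise RuntimeError("no application instances available")
        address = min(self.pool, key=self.pool.get)
        self.pool[address] += 1
        return address

lb = LoadBalancer("203.0.113.20")
lb.provision("10.0.2.11")
lb.provision("10.0.2.12")
print(lb.handle_request(), lb.handle_request())  # spread across both instances

lb.provision("10.0.2.13")        # demand spikes: a third instance is added
lb.deprovision("10.0.2.11")      # later, capacity contracts again
print(lb.handle_request())       # clients still only ever see 203.0.113.20
```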
The load balancer makes one, two, or two hundred resources - whether physical or virtual - appear to be one resource; this decouples the user from the physical implementation of the application and allows the internal implementation to grow, shrink, and change without any obvious effect on the user.

Choosing the right load balancer at the beginning of such an initiative is imperative to the success of more complex implementations later. The right load balancer will be able to provide the basics required to lay the foundation for more advanced cloud computing architectures in the future, while supporting even the most basic architectures today. The right load balancer will be extensible. When first implementing a cloud-based architecture you need simple load balancing capabilities, and little more. But as your environment grows more complex there will likely be a need for more advanced features, like layer 7 switching, acceleration, optimization, SSL termination and redirection, application security, and secure access. The right load balancing solution will allow you to start with the basics but easily add more advanced functionality as you need it - without requiring new devices or solutions that often force a re-architecture of the network or application infrastructure.

A load balancer is a key component of any cloud computing architecture, whether it's just a simple proof of concept or an advanced, provider-oriented implementation.

Infrastructure 2.0: The Feedback Loop Must Include Applications
Greg Ness calls it "connectivity intelligence," but what we're really talking about is the ability of network infrastructure both to be agile itself and to enable IT agility at the same time. Brittle, inflexible infrastructures - whether they are implemented in hardware or software or both - are not agile enough to deal with an evolving, dynamic application architecture. Greg says in a previous post:

The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence, delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends. Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale.

The issue I see is this: it's all too network focused. Knowing that a virtual machine instance came online and needs an IP address, security policies, and to be added to a VLAN on the switch is very network-centric. Necessary, but network-centric. The VM came online for a reason, and that reason is most likely an application-specific one.

Greg has referred several times to the Trusted Computing Group's IF-MAP specification, which provides the basics through which connectivity intelligence could certainly be implemented if vendors could all agree to implement it. The problem with IF-MAP and, indeed, most specifications that come out of a group of network-focused organizers is that they are, well, network-focused. In fact, reading through IF-MAP I found many similarities between its operations (functions) and those found in the more application-focused security standard, SAML. While IF-MAP allows for custom data to be included - which application vendors could use to IF-MAP-enable application servers and so include more application-specific details in the dynamic infrastructure feedback loop - that's not as agile as it could be, because it doesn't provide a simple, standard mechanism through which application developers can integrate application-specific details into that feedback loop.

And yet that's exactly what we need to complete this dynamic feedback loop and create a truly flexible, agile infrastructure, because applications are endpoints too; they also need to be managed, secured, and integrated into the Infrastructure 2.0 world. While I agree with Greg that IP address management in general, and managing a constantly changing heterogeneous infrastructure, is a nightmare that standards like IF-MAP might certainly help IT wake up from, there's another level of managing the dynamic environments associated with cloud computing and virtualization that generally isn't addressed by very network-specific standards like IF-MAP: the application layer.

In order for a specification like IF-MAP to address the application layer, application developers would need to integrate (become an IF-MAP client) the code necessary to act as part of an IF-MAP-enabled infrastructure. That's because knowing that a virtual machine just came online is one thing; understanding which application it is, what application policies need to be applied, and what application-specific processing might be necessary in the rest of the infrastructure is another. It's all contextual, and based on variables we can't know ahead of time.
This can't be determined before the application is actually written, so it can't be something written by vendors and shipped as a "value add." Application security and switching policies are peculiar to the application; they're unique, and the only way we, as vendors, can provide that integration without foreknowledge of that uniqueness is to abstract applications to a general use case. That completely destroys the concept of agility, because it doesn't take into consideration the application environment as it is at any given moment in time. It results in static, brittle integration that is essentially no more useful than SNMP would be if it were integrated into an application.

We can all sit around and integrate with VMware, and Hyper-V, and Xen. We can learn to speak IF-MAP (or some other common standard) and integrate with DNS and DHCP servers, with network security devices, and with layer 2-3 switches. But we are still going to have to manually manage the applications that are ultimately the reason for the existence of such virtualized environments. Getting our infrastructure up to speed so that it is easier and less costly to manage is necessary, but let's not forget about the applications we also still have to manage.

Dynamic feedback is great, and we have, today, the ability to enable pieces of that dynamic feedback loop. Customers can, today, use tools like iControl and iRules to build a feedback loop between their application delivery network and applications, regardless of whether those applications are in a VM or a Java EE container, or on a Microsoft server. But this feedback is specific to one vendor, and doesn't necessarily include the rest of the infrastructure.

Greg is talking about general dynamic feedback at the network layer. He's specifically (and understandably) concerned with network agility, not application agility. That's why he calls it Infrastructure 2.0 and not application-something 2.0. Greg points as an example to the constant levels of change introduced by virtual machines coming on and off line and the difficulties inherent in trying to manage that change via static, Infrastructure 1.0 products. That's all completely true and needs to be addressed by infrastructure vendors. But we also need to consider how to enable agility at the application layer, so the feedback loop that drives security and routing and switching and acceleration and delivery configurations in real time can adapt to conditions within and around the applications we are trying to manage in the first place.

It's all about the application in the end. Endpoints - whether internal or external to the data center - are requesting access and IP addresses for one reason: to get a resource served by an application. That application may be TCP-based, it may be HTTP-based, it may be riding on UDP. Regardless of the network-layer transport mechanisms, it's still an application - a browser, a server-side web application, a SOA service - and its unique needs must be considered in order for the feedback loop to be complete. How else will you know which application just came online or went offline? How do you know what security to apply if you don't know what you might be trying to secure?

Somehow the network-centric standards that might evolve from a push to a more agile infrastructure must broaden their focus and consider how an application might integrate with such standards, or what information applications might provide as part of the dynamic feedback loop that will drive a more agile infrastructure.
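As a hedged illustration of what that application-side contribution might look like (the event schema and publish function below are invented for the sake of the example, not part of IF-MAP or any vendor's API), the idea is simply that the application announces itself, and its policy needs, into the same feedback loop that today carries only network-level facts:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ApplicationEvent:
    """What the network layer can't know on its own: which application this
    instance is, and which policies it expects the infrastructure to apply."""
    event: str               # e.g. "application_online" / "application_offline"
    application: str
    version: str
    address: str
    port: int
    protocol: str            # HTTP, TCP, UDP ...
    security_policy: str     # application-specific WAF / auth policy
    delivery_policy: str     # caching, compression, acceleration hints

def publish(event: ApplicationEvent) -> str:
    """Stand-in for pushing the event onto a shared feedback channel that
    switches, firewalls, and application delivery controllers subscribe to."""
    payload = json.dumps(asdict(event))
    print("publishing:", payload)
    return payload

# Emitted by the application (or its container) at startup, alongside the
# purely network-level "VM online, needs an IP and a VLAN" events.
publish(ApplicationEvent(
    event="application_online",
    application="order-service", version="2.3.1",
    address="10.0.3.14", port=8443, protocol="HTTP",
    security_policy="order-service-waf", delivery_policy="compress+cache-static",
))
```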
Any emerging standard upon which Infrastructure 2.0 is built must somehow be accessible and developer-friendly, take into consideration application-specific resources as well as network resources, and provide a standard means by which information about the application - information that can drive the infrastructure to adapt to its unique needs - can be shared. If it doesn't, we're going to end up with the same fractured, "us versus them," siloed infrastructure we've had for years.

That's no longer reasonable. The network and the application are inexorably linked now, thanks to cloud computing and the Internet in general. Managing thousands of instances of an application will be as painful as managing thousands of IP addresses. As Greg points out, that doesn't work very well right now, and it's costing us a lot of money, time, and effort. We know where this ends up, because we've seen it happen already. The same diseconomies of scale that affect TCP/IP are going to affect application management. We should be proactive in addressing the management issues that will arise from trying to manage thousands of applications and services, rather than waiting until that problem, too, can no longer be ignored.

Cloud Computing and Infrastructure 2.0
Not every infrastructure vendor needs new capabilities to support cloud computing and Infrastructure 2.0. Greg Ness of Infoblox has an excellent article, "The Next Tech Boom: Infrastructure 2.0," that is showing up everywhere. That's because it raises some interesting questions and points out some real problems that will need to be addressed as we move further into cloud computing and virtualized environments. What is really interesting, however, is the fact that some infrastructure vendors are already there, and have been for quite some time.

One thing Greg mentions that's not quite accurate (at least in the case of F5) regards the ability of "appliances" to "look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors". From Greg's article:

The appliances that have been deployed across the last thirty years simply were not architected to look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors powering servers on and off on demand and moving them around with mouse clicks. Enterprises already incurring dis-economies of scale today will face sheer terror when trying to manage and secure the dynamic environments of tomorrow. Rising management costs will further compromise the economics of static network infrastructure.

I must disagree. Not with the sheer terror statement - that's almost certainly true - but with the characterization of infrastructure devices' ability to handle a virtualized environment. Some appliances and network devices have long been able to look inside servers and dynamically keep up with the rapid changes occurring in a hypervisor-driven application infrastructure. We call one of those capabilities "intelligent health monitoring", for example, and others certainly have their own name for a similar capability. On the dynamic front, when you combine an intelligent application delivery controller with the ability to be orchestrated from within applications or within the OS, you get the ability to dynamically modify the configuration of application delivery in real time based on current conditions within the data center. And if your monitoring is intelligent enough, you can sense within seconds when an application - whether virtualized or not - has disappeared or, conversely, when it has come back online. F5 has been supporting this kind of dynamic, flexible application infrastructure for years. It's not really new, except that its importance has suddenly skyrocketed due to exactly the scenario Greg points out using virtualization.

WHAT ABOUT THE VIRTSEC PIECE?

There has never been a better case for centralized web application security through a web application firewall and an application delivery controller. The application delivery controller - which necessarily sits between clients and those servers - provides security at layers 2 through 7. The full stack. There's nothing really special about a virtualized environment as far as the architecture for delivering applications running on those virtual servers goes; the protocols are still the same, and the same vulnerabilities that have plagued non-virtualized applications will also plague virtualized ones. That means existing solutions can address those vulnerabilities in either environment, or a mix. Add in a web application firewall to centralize application security, and it really doesn't matter whether applications are going up and down like the stock market over the past week.
By deploying security at the edge, rather than within each application, you can let the application delivery controller manage the availability state of the application and concentrate on cleaning up and scanning requests for malicious content. Centralizing security for those applications - again, whether they are deployed on a "real" or a "virtual" server - has a wealth of benefits, including improving performance and reducing the very complexity Greg points out that makes information security folks reach for a valium.

BUT THEY'RE DYNAMIC!

Yes, yes they are. The assumption is that, given the opportunity to move virtual images around, organizations will do so - and do so on a frequent basis. I think that assumption is likely a poor one for the enterprise, and movement probably isn't as willy-nilly for cloud computing providers, either. Certainly there will be some movement, some changes, but it's not likely to be every few minutes, as is often implied. Even if it were, some infrastructure is already prepared to deal with that dynamism. Dynamism is just another term for agility, and it makes the case well for loose coupling of security and delivery with the applications living in the infrastructure. If we just apply the lessons we've learned from SOA to virtualization and cloud computing, 90% of the "Big Hairy Questions" can be answered by existing technology. We just may have to change our architectures a bit to adapt to these new computing models.

Network infrastructure, specifically application delivery, has had to deal with applications coming online and going offline since its inception. It's the nature of applications to have outages, and application delivery infrastructure, at least, already deals with those situations. It's merely the frequency of those "outages" that is increasing, not the general concept.

But what if they change IP addresses? That would indeed make things more complex. It requires even more intelligence, but again, we've got that covered. While the functionality necessary to handle this kind of scenario is not "out of the box" (yet), it is certainly not that difficult to implement if the infrastructure vendor provides the right kind of integration capability. Most already do.

Greg isn't wrong in his assertions. There are plenty of pieces of network infrastructure that need to take a look at these new environments and adjust how they deal with the dynamic nature of virtualization and cloud computing in general. But not all infrastructure needs to "get up to speed". Some infrastructure has been ready for this scenario for years, and it's just now that application infrastructure and deployment models (SOA, cloud computing, virtualization) have actually caught up and made those features even more important to a successful application deployment. Application delivery in general has stayed ahead of the curve and is already well suited to cloud computing and virtualized environments. So I guess some devices are already "Infrastructure 2.0" ready. I guess what we really need is a sticker to slap on the product that says so.

Infrastructure 2.0: As a matter of fact that isn't what it means
We've been talking a lot about the benefits of Infrastructure 2.0, or dynamic infrastructure: a lot about why it's necessary and what's required to make it all work. But we've never really laid out what it is, and that's beginning to lead to some misconceptions.

As Daryl Plummer of Gartner pointed out recently, the definition of cloud computing is still, well, cloudy. Multiple experts can't agree on the definition, and the same is quickly becoming true of dynamic infrastructure. That's no surprise; we're at the beginning of what Gartner would call the hype cycle for both concepts, so there's some work to be done on fleshing out exactly what each means. That dynamic infrastructure is tied to cloud computing is no surprise, either, as dynamic infrastructure is very much an enabler of such elastic models of application deployment. But dynamic infrastructure is applicable to all kinds of application deployment models: so-called legacy deployments, cloud computing and its many faces, and likely new models that have yet to be defined.

The biggest confusion out there seems to be that dynamic infrastructure is being viewed as Infrastructure as a Service (IaaS). Dynamic infrastructure is not the same thing as IaaS. IaaS is a deployment model in which application infrastructure resides elsewhere, in the cloud, and is leveraged by organizations desiring an affordable option for scalability that reduces operating and capital expenses by sharing compute resources "out there" somewhere, at a provider. Dynamic infrastructure is very much a foundational technology for IaaS, but it is not, in and of itself, IaaS. Indeed, simply providing network or application network services "as a service" has never required dynamic infrastructure. CDNs (content delivery networks), managed VPNs, secure remote access, and DNS services have long been available as services through which organizations can employ a variety of "infrastructure services" without the capital expenditure in hardware and the time and effort required to configure, deploy, and maintain such solutions. Simply residing "in the cloud" is not enough. A CDN is not "dynamic infrastructure," nor are hosted DNS servers. They are Infrastructure 1.0 - legacy infrastructure - whose very nature is such that physical location has never been important to their deployment. Indeed, these services were necessarily designed without physical location as a requirement, as their core functions are supposed to work in a distributed, location-agnostic manner.

Dynamic infrastructure is an evolution of traditional network and application network solutions to be more adaptable, to support integration with its environment and other foundational technologies, and to be aware of context (connectivity intelligence).

Adaptable

Dynamic infrastructure is able to understand its environment and react to conditions in that environment in order to provide scale, security, and optimal performance for applications. This adaptability comes in many forms, from the ability to make management and configuration changes on the fly as necessary, to providing the means by which administrators and developers can manually or automatically change the way in which applications are being delivered. The configuration and policies applied by dynamic infrastructure are not static; they are able to change based on predefined criteria or events that occur in the environment, such that the security, scalability, or performance of an application and its environs are preserved.
Some solutions implement this capability through event-driven architectures, with events such as "IP_ADDRESS_ASSIGNED" or "HTTP_REQUEST_MADE". Some provide network-side scripting capabilities to extend the ability to react and adapt to situations requiring flexibility, while others provide the means by which third-party solutions can be deployed on the platform to address the need for application- and user-specific capabilities at specific touch points in the architecture.

Context Aware

Dynamic infrastructure is able to understand the context that surrounds an application, its deployment environment, and its users, and to apply relevant policies based on that information. Being context aware means being able to recognize that a user accessing Application X from a coffee shop has different needs than the same user accessing Application X from home or from the corporate office. It means being able to recognize that a user accessing an application over a WAN or high-latency connection requires different policies than one accessing that application via a LAN or from close physical proximity over the Internet. Being context aware means being able to recognize the current conditions of the network and the application, and then leveraging the infrastructure's adaptable nature to choose the right policies at the time the request is made, such that the application is delivered most efficiently and quickly.

Collaborative

Dynamic infrastructure is capable of integrating with other application network and network infrastructure, as well as with the management and control solutions required to manage both the infrastructure and the applications it is tasked with delivering. This integration requires that the solution be able to direct, and take direction from, other solutions, such that changes in the infrastructure at all layers of the stack can be recognized and acted upon. It also allows network and application network solutions to leverage their awareness of context in a way that ensures they are adaptable and can support the delivery of applications in an elastic, flexible manner. Most solutions use a standards-based control plane through which they can be integrated with other systems to provide the connectivity intelligence necessary to implement IaaS, virtualized architectures, and other cloud computing models in such a way that the perceived benefits of reduced operating expenses and increased productivity through automation can actually be realized.

These three properties of dynamic infrastructure work together, in concert, to provide connectivity intelligence and the ability to act on information gathered through that intelligence. All three together form the basis for a fluid, adaptable, dynamic application infrastructure foundation on which emerging compute models such as cloud computing and virtualized architectures can be implemented. But dynamic infrastructure is not exclusively tied to emerging compute models and next-generation application architectures; it can be leveraged to benefit traditional architectures as well. The connectivity intelligence and adaptable nature of dynamic infrastructure improve the security, availability, and performance of applications in so-called legacy architectures, too.

Dynamic infrastructure is a set of capabilities implemented by network and application network solutions that provide the means by which an organization can improve the efficiency of its application delivery and network architecture.
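A hedged, highly simplified sketch of how these three properties might combine in practice follows. The event names are borrowed from the examples above, while the handler registry and context fields are invented for illustration; this is not any product's network-side scripting API.

```python
from collections import defaultdict

class DynamicInfrastructure:
    """Tiny event bus: infrastructure and applications publish events,
    and registered handlers adapt policy based on the context they carry."""

    def __init__(self):
        self.handlers = defaultdict(list)
        self.policies = {}                      # client address -> policy name

    def on(self, event_name):
        """Register a handler for an event such as HTTP_REQUEST_MADE."""
        def register(func):
            self.handlers[event_name].append(func)
            return func
        return register

    def publish(self, event_name, **context):
        for handler in self.handlers[event_name]:
            handler(self, **context)

infra = DynamicInfrastructure()

@infra.on("IP_ADDRESS_ASSIGNED")
def track_new_instance(infra, address, application):
    # Collaborative: react to a change another system announced.
    print(f"{application} instance appeared at {address}; updating pool")

@infra.on("HTTP_REQUEST_MADE")
def choose_delivery_policy(infra, client, network, latency_ms):
    # Context aware: same application, different policy per access conditions.
    if network == "wan" or latency_ms > 100:
        infra.policies[client] = "compress+accelerate"
    else:
        infra.policies[client] = "default"

infra.publish("IP_ADDRESS_ASSIGNED", address="10.0.5.9", application="portal")
infra.publish("HTTP_REQUEST_MADE", client="198.51.100.7", network="wan", latency_ms=180)
print(infra.policies)    # {'198.51.100.7': 'compress+accelerate'}
```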
That's why it's just not accurate to equate Infrastructure 2.0/dynamic infrastructure with the Infrastructure as a Service cloud computing model. The former is a description of the next generation of network and application network infrastructure solutions: the evolution from static, brittle solutions to fluid, dynamic, adaptable ones. The latter is a deployment model that, while likely built atop dynamic infrastructure solutions, is not wholly comprised of dynamic infrastructure. IaaS is not a product, it's a service. Dynamic infrastructure is a product that may or may not be delivered "as a service". Glad we got that straightened out.

Managing Virtual Infrastructure Requires an Application Centric Approach
Thanks to a tweet from @Archimedius, I found an insightful blog post from cloud computing provider startup Kaavo that essentially makes the case for a move to application-centric management rather than the traditional infrastructure-centric systems on which we've always relied:

"We need to have an application centric approach for deploying, managing, and monitoring applications. A software which can provisions optimal virtual servers, network, storage (storage, CPU, bandwidth, Memory, alt.) resources on-demand and provide automation and ease of use to application owners to easily and securely run and maintain their applications will be critical for the success of virtualization and cloud computing. In short we need to start managing distributed systems for specific applications rather than managing servers and routers." [emphasis added]

This is such a simple statement that it gets right to the heart of the problem: when applications are decoupled from the servers on which they are deployed and from the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves. Traditional infrastructure and its associated management intrinsically ties applications to servers, servers to IP addresses, and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of de-coupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers, tied to IP addresses, tied to switches and routers. That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful.

It's all about the application. Finally. And that's what makes application delivery-focused solutions so important to both virtualization and the cloud computing models in which virtualization plays a large enabling role. Because virtualization and cloud computing, like application delivery solution providers, are application-centric. Because these solutions are, and have been for years, focused on application awareness and on the ability of infrastructure solutions to be adaptable - to be agile. Because they have long since moved beyond simple load balancing and into application delivery, where the application is what is delivered, not bits, bytes, and packets. Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure. Where simple load balancing fails to adapt dynamically to the ever-changing internal network of applications both virtual and non-virtual, application delivery excels.
It is capable of monitoring, intelligently, the availability of applications - not only in terms of whether an application is up or down, but where it currently resides within the data center. Application delivery solutions are loosely coupled and, like SOA-based solutions, they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or across fifteen data centers. Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network. They are the solution that bridges the gap between applications and network infrastructure, and they enable the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing.

There's still work to be done, but for many vendors, at least, the framework already exists for managing the complexity of a dynamic, virtual environment.

Interoperability between clouds requires more than just VM portability
The issue of application state and connection management is often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure by the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "Infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real time.

The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been addressed in a variety of articles and blogs since the concepts of cloud computing and virtualization gained momentum. The Burton Group has addressed this issue with regard to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the blog supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models, and also posits that interoperability is essentially enabled simply by virtual machine portability. From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service":

First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while its probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms. While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to other, thereby enabling interoperability. ... Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS use to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP -- another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The "ID" and "locator" are separated so that the packet with the "ID" can be sent to the "locator" for address resolution. The "locator" can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the "locator". [emphasis added]

If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service endpoints in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center and not simply a new location within the same data center - a case LISP does not even consider.
That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but not a roadblock; it's merely a speed bump on the road to implementation. We'll come back to that later; first let's examine the emphasized statement, which seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds - specifically IaaS clouds.

I'm sure the author didn't mean to imply that it's that simple; that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interests of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself. Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in ways peculiar to the application, and vice versa. We call it an "ecosystem" for a reason: there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application. One cannot simply move a virtual machine from one location to another, regardless of the interoperability of the virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment.

Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, because the components must be able to collaborate, which requires connectivity information. This brings us back to the problems associated with LISP and its focus on external discovery and location. There's just a lot more to interoperability than pushing around virtual images, regardless of what those images contain: application, data, identity, security, or networking. Portability between virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure a seamless transition from one IaaS cloud environment to another.

Cloud Computing: The Last Definition You'll Ever Need
The VirtualDC has asked the same question that's been roaming about in every technophile's head since the beginning of the cloud computing craze: what defines a cloud? We've chatted internally about this very question, which led to Alan's questions in a recent blog post:

Lori and others have suggested that the cloud comes down to how a service is delivered rather than what is delivered, and I'm fine with that as a long term definition or categorization. I don't think it's narrow enough, though, to answer the question "Is Gmail a cloud service?" because if Gmail is delivered over the web, my internet connection is my work infrastructure, so therefore…Gmail is a cloud service for me?

No, it's not. It may be for the developers, if they're using cloud computing to develop and deploy Gmail, but for you it's naught but cloudware: an application accessed through the cloud. From the end-user perspective it's a hosted application, it's software as a service (SaaS), but it isn't cloud computing or a cloud service.

The problem here, I think, is that we're using the same terms to describe two completely different things - and perspectives. The real users of cloud computing are IT folks: developers, architects, administrators. Unfortunately, too many definitions include verbiage indicating that the "user" should not need any knowledge of the infrastructure. Take, for example, Wikipedia's definition:

It is a style of computing in which IT-related capabilities are provided "as a service", allowing users to access technology-enabled services from the Internet ("in the cloud") without knowledge of, expertise with, or control over the technology infrastructure that supports them.

It's the use of "user" that's problematic. I would argue that it is almost never the case that the end user of an application has knowledge of the infrastructure. Ask your mom, ask your dad, ask any Internet neophyte and you'll quickly find that they probably have no understanding or knowledge (and certainly no control) of the underlying infrastructure of any application. If we used the term "user" to mean the traditional end user, then every application and web site on the Internet is "cloud computing" and has been for more than a decade.

FINALLY, IT REALLY IS ALL ABOUT *US*

The "users" in cloud computing definitions are developers, administrators, and IT folks - folks who are involved in the development and deployment of applications, not necessarily those using them. It is from IT's perspective, not that of the end user or consumer of the application, that cloud computing can be - and must be - defined. We are the users, the consumers, of cloud computing services; not our customers or consumers. We are the center of the vortex around which cloud computing revolves, because we are the ones who will consume and make use of those services in order to develop and deploy applications.

Cloud computing is not about the application itself; it is about how the application is deployed and how it is delivered. Cloud computing is a deployment model leveraged by IT in order to reduce infrastructure costs and/or address capacity and scalability concerns. Just as an end user cannot "do" SOA, they can't "do" cloud computing. End users use applications, and an application is not cloud computing. It is the infrastructure and model of deployment that defines whether it is cloud computing - and even then, it's never cloud computing to the end user, only to the folks involved in developing and deploying that application.
Cloud computing is about how an application or service is deployed and delivered. But defining how it is deployed and delivered could be problematic, because when we talk about "how" we often tend to get prescriptive and start talking in absolute checklists. With a fluid concept like cloud computing that doesn't work. There's just not one single model nor one single architecture that you can definitively point to and say "We are doing that, ergo we are doing cloud computing."

THE FOUR BEHAVIORS THAT DEFINE CLOUD COMPUTING

It's really the behavior of the entire infrastructure - how the cloud delivers an application - that's important. The good thing is that we can define that behavior; we can determine whether an application infrastructure is behaving in a cloud computing manner in order to categorize it as cloud computing or something else. This is not dissimilar to SOA (Service Oriented Architecture), a deployment model in which we look at the way applications are architected and subsequently delivered to determine whether we are or are not "doing SOA."

DYNAMISM. Amazon calls this "elasticity", but it means the same thing: the ability of the application delivery infrastructure to expand and contract automatically based on capacity needs. Note that this does not require virtualization technology, though many providers are using virtualization to build this capability. There are other means of implementing dynamism in an architecture.

ABSTRACTION. Do you need to care about the underlying infrastructure when developing an application for deployment in the cloud? If you have to care about the operating system or any piece of the infrastructure, it's not abstracted enough to be cloud computing.

RESOURCE SHARING. The architecture must be such that the compute and network resources of the cloud infrastructure are sharable among applications. This ties back to dynamism and the ability to expand and contract as needed. If an application's method of scaling is simply to add more servers on which it is deployed, rather than being able to consume resources on other servers as needed, the infrastructure is not capable of resource sharing.

PROVIDES A PLATFORM. Cloud computing is essentially a deployment model. If it provides a platform on which you can develop and/or deploy an application and meets the other three criteria, it is cloud computing.

Dynamism and resource sharing are the key architectural indicators of cloud computing. Without these two properties you're simply engaging in remote hosting and outsourcing, which is not a bad thing; it's just not cloud computing. Hosted services like Gmail are cloudware, but not necessarily cloud computing, because they are merely accessed through the cloud and don't actually provide a platform on which applications can be deployed. Salesforce.com, however, which provides such a platform - albeit a somewhat restricted one - fits the definition of cloud computing. Cloudware is an extension of cloud computing, but cloudware offerings do not enable businesses to leverage cloud computing in the same way as an Amazon or BlueLock or Joyent. Cloudware may grow into cloud computing, as Salesforce.com has done over the years. Remember, when Salesforce.com started it was purely SaaS - it simply provided a hosted CRM (Customer Relationship Management) solution. Over the years it has expanded and begun to offer a platform on which organizations can develop and deploy their own applications.
Cloud computing, as Gartner analysts have recently put forth, is a "style of computing". That style of computing is defined from the perspective of IT, and it has specific properties that make something cloud computing - or not cloud computing, as the case may be.