cloud computing infrastructure
Load balancing is key to successful cloud-based (dynamic) architectures
Much of the dialogue today surrounding cloud computing and virtualization is still taking the 50,000-foot view. It's all conceptual; it's all about business value, justification, interoperability, and use cases. These are all good conversations that need to happen in order for cloud computing and virtualization-based architectures to mature, but, as is often the case, that leaves the folks tasked with building something right now a bit on their own. So let's ignore the high-level view for a bit and talk reality.

Many folks are being tasked, now, with designing or even implementing some form of cloud computing architecture, usually based around virtualization technology like VMware (a March 2008 Gartner Research report predicted VMware would likely hold 85% of the virtualization market by the end of 2008). But architecting a cloud-based environment requires more than just deploying virtual images and walking away. Cloud-based computing is going to require that architects broaden their understanding of the role that infrastructure like load balancers plays in enterprise architecture, because it is a key component of a successful cloud-based implementation, whether that's a small proof of concept or a complex, enterprise-wide architectural revolution.

The goal of a cloud-based architecture is to provide some form of elasticity: the ability to expand and contract capacity on demand. The implication is that at some point additional instances of an application will be needed in order for the architecture to scale and meet demand. That means there needs to be some mechanism in place to balance requests between two or more instances of that application. The mechanism most likely to be successful in performing such a task is a load balancer. The challenges of attempting to build such an architecture without a load balancer are staggering.
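To make "expand and contract capacity on demand" concrete, here's a minimal sketch of the sizing decision an elastic architecture has to automate. The function name, thresholds, and per-instance capacity figure are illustrative assumptions for this example, not any particular product's behavior:

```python
import math

def desired_instances(current_rps, rps_per_instance=100,
                      min_instances=2, max_instances=20):
    """Elasticity in a nutshell: size the pool to current demand.

    Hypothetical thresholds; a real environment would also smooth the
    demand signal to avoid thrashing on short traffic spikes.
    """
    needed = math.ceil(current_rps / rps_per_instance)
    # Clamp between a floor (for redundancy) and a ceiling (for cost).
    return max(min_instances, min(max_instances, needed))
```

The interesting part isn't the arithmetic; it's that something in the infrastructure has to act on this number automatically, provisioning and de-provisioning instances without a human in the loop.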
There's no other good way to take advantage of the additional capacity introduced by multiple instances of an application that's also efficient in terms of configuration and deployment. All other methods require modifications and changes to multiple network devices in order to properly distribute requests across multiple instances of an application. Likewise, when the additional instances of that application are de-provisioned, the changes to the network configuration need to be reversed. Obviously a manual process would be time-consuming and inefficient, effectively erasing the benefits gained by introducing a cloud-based architecture in the first place.

A load balancer provides the means by which instances of applications can be provisioned and de-provisioned automatically, without requiring changes to the network or its configuration. It automatically handles the increases and decreases in capacity and adapts its distribution decisions based on the capacity available at the time a request is made. Because the end user is always directed to a virtual server, or IP address, on the load balancer, the increase or decrease in capacity provided by the provisioning and de-provisioning of application instances is non-disruptive. As is required by even the most basic definitions of cloud computing, the end user is abstracted by the load balancer from the actual implementation and need not care about it. The load balancer makes one, two, or two hundred resources, whether physical or virtual, appear to be one resource; this decouples the user from the physical implementation of the application and allows the internal implementation to grow, shrink, and change without any obvious effect on the user.

Choosing the right load balancer at the beginning of such an initiative is imperative to the success of more complex implementations later.
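A rough sketch of the virtual-server idea described above: clients only ever see the virtual IP, while members join and leave the pool behind it without any client-visible change. The class and method names here are invented for illustration and don't reflect any vendor's API:

```python
class LoadBalancerPool:
    """A virtual server fronting a dynamic pool of application instances."""

    def __init__(self, virtual_ip):
        self.virtual_ip = virtual_ip
        self.members = {}  # instance address -> active connection count

    def provision(self, address):
        """Add a newly provisioned instance; clients see no change."""
        self.members[address] = 0

    def deprovision(self, address):
        """Remove an instance; clients still target the virtual IP."""
        self.members.pop(address, None)

    def route(self):
        """Pick the least-loaded member for the next request."""
        if not self.members:
            raise RuntimeError("no capacity available")
        address = min(self.members, key=self.members.get)
        self.members[address] += 1
        return address

pool = LoadBalancerPool("203.0.113.10")
pool.provision("10.0.0.1")
pool.provision("10.0.0.2")
first = pool.route()   # least-connections picks an idle member
second = pool.route()  # ...and then the other idle member
```

Note that `provision` and `deprovision` touch only the load balancer's own state; nothing upstream of the virtual IP has to be reconfigured, which is the whole point of the argument above.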
The right load balancer will be able to provide the basics required to lay the foundation for more advanced cloud computing architectures in the future, while supporting even the most basic architectures today. The right load balancer will be extensible. When first implementing a cloud-based architecture you need simple load balancing capabilities, and little more. But as your environment grows more complex there will likely be a need for more advanced features, like layer 7 switching, acceleration, optimization, SSL termination and redirection, application security, and secure access. The right load balancing solution will allow you to start with the basics but easily provide more advanced functionality as you need it, without requiring new devices or solutions that often force a re-architecture of the network or application infrastructure. A load balancer is a key component in building out any cloud computing architecture, whether it's just a simple proof of concept or an advanced, provider-oriented implementation.

Related articles by Zemanta
Managing Virtual Infrastructure Requires an Application Centric Approach
Infrastructure 2.0: The Diseconomy of Scale Virus
The next tech boom is already underway
The Context-Aware Cloud
Battle brewing over next-generation private clouds
4 Things You Need in a Cloud Computing Infrastructure

Dynamic Infrastructure: The Cloud within the Cloud
When folks are asked to define the cloud they invariably, somewhere in the definition, bring up the point that “users shouldn’t care” about the actual implementation. When asked to diagram a cloud environment we end up with two clouds: one representing the “big cloud” and one inside the cloud, representing the infrastructure we aren’t supposed to care about, usually with some pretty graphics representing applications being delivered out of the cloud over the Internet. Yet some of us do need to care about what’s obscured; the folks tasked with building out a cloud environment need to know what’s hidden in the cloud in order to build an infrastructure that will support such a dynamic, elastic environment.

It is the obscuring of the infrastructure that makes the cloud seem so simple. Because we’re hiding all the moving parts that need to work in concert to achieve such a fluid environment, it appears as if all you need is virtualization and voila! The rest will take care of itself. But without a dynamic infrastructure supporting all the virtualized applications and, in many cases, infrastructure, such an environment is exceedingly difficult to build.

WHAT’S HIDDEN IN THE CLOUD

Inside the “cloud within the cloud” there are a great number of pieces of infrastructure working together. Obviously there are the core networking components: routers, switches, DNS, and DHCP, without which connectivity would be impossible. Moving up the stack we find load balancing and application delivery infrastructure: the core application networking components that enable the dynamism promised by virtualized environments to be achieved. Without a layer of infrastructure bridging the gap between the network and the applications, virtualized or not, it is difficult to achieve the kind of elasticity and dynamism necessary for the cloud to “just work” for end users.
It is the application networking layer that is responsible for ensuring availability, proper routing of requests, and applying application-level policies such as security and acceleration. This layer must be dynamic, because the actual virtualized layers of web and application servers are themselves dynamic. Application instances may move from IP to IP across hours or days, and the application networking layer must be able to adapt to that change without requiring manual intervention in the form of configuration modification.

Storage virtualization, too, resides in this layer of the infrastructure. Storage virtualization enables a dynamic infrastructure by presenting a unified view of storage to the applications and internal infrastructure, ensuring that the application need not be modified in order to access file-based resources. Storage virtualization can further be the means through which cloud control mechanisms manage the myriad virtual images required to support a cloud computing infrastructure.

The role of the application networking layer is to mediate, or broker, between clients and the actual applications to ensure a seamless access experience regardless of where the actual application instance might be running at any given time. It is the application networking layer that provides network and server virtualization such that the actual implementation of the cloud is hidden from external constituents. Much like storage virtualization, application networking layers present a “virtual” view of the applications and resources requiring external access. This is why dynamism is such an integral component of a cloud computing infrastructure: the application networking layer must, necessarily, keep tabs on application instances and be able to associate them with the appropriate “virtual” application it presents to external users.
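The "keep tabs on application instances" requirement can be sketched as a simple registry that maps each virtual application to wherever its instances currently run. This is an illustrative sketch only; a real application delivery controller would learn these mappings from health monitors or the cloud control system rather than explicit calls:

```python
class ApplicationRegistry:
    """Maps a 'virtual' application to its current instance addresses."""

    def __init__(self):
        self.apps = {}  # app name -> set of live instance addresses

    def announce(self, app, address):
        """An instance came up (possibly on a brand-new IP)."""
        self.apps.setdefault(app, set()).add(address)

    def withdraw(self, app, address):
        """An instance moved or was torn down."""
        self.apps.get(app, set()).discard(address)

    def resolve(self, app):
        """Return the live addresses behind the virtual application."""
        return sorted(self.apps.get(app, set()))

registry = ApplicationRegistry()
registry.announce("crm", "10.0.1.5")
registry.announce("crm", "10.0.1.6")
registry.withdraw("crm", "10.0.1.5")   # instance migrated...
registry.announce("crm", "10.0.9.22")  # ...and reappeared on a new IP
```

External clients only ever ask for "crm"; the fact that one of its instances just changed IP addresses is invisible to them, which is exactly the abstraction the paragraph above describes.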
Classic load balancing solutions are incapable of such dynamic, near real-time reconfiguration and discovery, and almost always require manual intervention. Dynamic application networking infrastructure is not only capable of but excels at this type of autonomous function, integrating with the systems necessary to become aware of changes within the application infrastructure and act upon them. The “cloud within the cloud” need only be visible to implementers; but as we move forward and more organizations attempt to act on a localized cloud computing strategy, it becomes necessary to peer inside the cloud and understand how the disparate pieces of technology combine. This visibility is a requirement if organizations are to achieve the goals desired through the implementation of a cloud computing-based architecture: efficiency and scalability.

Bursting the Cloud
The cloud computing craze is leading to some interesting new terms. Cloudware and cloudbursting are two terms I particularly like for their ability to describe specific computing models based on cloud computing. Today we're going to look at cloudbursting, which is basically a new twist on an old concept. Cloudbursting appears to be an attempt to marry the traditional safe enterprise computing model with cloud computing; in essence, bursting into the cloud when necessary, or using the cloud when additional compute resources are required temporarily. Jeff at the Amazon Web Services Blog talks about the inception of this term as applied to the latter and describes it in his blog post as a method used by Thomas Brox Røst to regenerate a number of dynamic pages in 5 hours rather than the 7 hours that would have been required had he attempted such a feat internally. His approach is further described on the High Scalability Blog.

Cloudbursting can also be used to shoulder the burden of some of an application's processing. For example, basic application functionality could be provided from within the cloud while more critical (e.g. revenue-generating) applications continue to be served from within the controlled enterprise data center. This assumes that only a portion of consumers will actually be interacting with the data-driven side of a web site (customer management, process visibility, etc.) while the greater portion will simply be browsing around on the non-interactive, as it were, side of the site.

Bursting has traditionally been applied to resource allocation and automated provisioning/de-provisioning of resources, historically focused on bandwidth. Today, in the cloud, it is being applied to resources such as servers, application servers, application delivery systems, and other infrastructure required to provide on-demand computing environments that expand and contract as necessary, without manual intervention. This requires the ability to automate the cloud's data center.
Data center automation in a cloud computing environment, regardless of the opacity of the model, requires more than simple workflow systems. It requires on-demand control and management over all devices in the delivery chain, from the storage to the application and web servers to the load balancers and acceleration offerings that deliver the applications to end users. This is more akin to data center orchestration than automation, as it requires that many moving parts and pieces be coordinated in order to perform a highly complex set of tasks seamlessly and with as little manual intervention as possible. This is one of the foundational requirements of a cloud computing infrastructure: on-demand, automated scalability.

Data center automation is nothing new. Hosting and service providers have long automated their data centers in order to reduce the cost of customer acquisition and management, and to improve the efficiency of provisioning and de-provisioning processes. These benefits can also be realized inside the data center, regardless of the model being employed. The same automation required for smooth, cost-effective management of a cloud computing data center can be used to achieve smooth, cost-effective management of an enterprise data center.

The hybrid application deployment model involving cloud computing requires additional intelligence on the part of the application delivery network. The application delivery network must be able to understand what is being requested and where it resides; it must be able to intelligently route requests. This, too, is a fundamental attribute of cloud computing infrastructure: intelligence. When distributing an application across multiple locations, whether local servers, remote data centers, or "in the cloud", it becomes necessary for a controlling node to properly route those requests based on application data.
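The "intelligently route requests" idea in a cloudbursting model can be sketched as a tiny routing policy. The path prefixes and the capacity threshold below are invented for the example; the point is only that the decision is made per request, on application data, not per server:

```python
def route_request(path, local_capacity_pct):
    """Decide where a request should be served in a cloudbursting model.

    Illustrative policy only: revenue-generating paths stay in the
    enterprise data center; everything else may burst to the cloud
    once local capacity runs low.
    """
    CRITICAL_PREFIXES = ("/checkout", "/account", "/api/orders")
    if path.startswith(CRITICAL_PREFIXES):
        return "datacenter"
    if local_capacity_pct < 20:   # local pool nearly exhausted: burst
        return "cloud"
    return "datacenter"
```

A real controlling node would weigh far more context (user identity, session state, data residency), but the shape of the decision is the same: application-level data in, destination out.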
In a less sophisticated model, global load balancing could be substituted as a means of directing requests to the appropriate site, a task for which global load balancers seem a perfect fit. A hybrid approach like cloudbursting seems particularly appealing. Enterprises seem reluctant to move business-critical applications into the cloud at this juncture, but are likely more willing to assign responsibility to an outsourced provider for less critical application functionality with variable volume requirements, which fits well with an on-demand resource bursting model. Cloudbursting may be one solution that makes everyone happy.

Cloud Computing: The Last Definition You'll Ever Need
The VirtualDC has asked the same question that's been roaming about in every technophile's head since the beginning of the cloud computing craze: what defines a cloud? We've chatted internally about this very question, which led to Alan's questions in a recent blog post. Lori and others have suggested that the cloud comes down to how a service is delivered rather than what is delivered, and I’m fine with that as a long-term definition or categorization. I don’t think it’s narrow enough, though, to answer the question “Is Gmail a cloud service?” because if Gmail is delivered over the web, and my internet connection is my work infrastructure, then…Gmail is a cloud service for me? No, it's not. It may be for the developers, if they're using cloud computing to develop and deploy Gmail, but for you it's naught but cloudware, an application accessed through the cloud. From the end-user perspective it's a hosted application, it's software as a service (SaaS), but it isn't cloud computing or a cloud service.

The problem here, I think, is that we're using the same terms to describe two completely different things, and perspectives. The real users of cloud computing are IT folks: developers, architects, administrators. Unfortunately, too many definitions include verbiage indicating that the "user" should not need any knowledge of the infrastructure. Take, for example, Wikipedia's definition: It is a style of computing in which IT-related capabilities are provided “as a service”, allowing users to access technology-enabled services from the Internet ("in the cloud") without knowledge of, expertise with, or control over the technology infrastructure that supports them. It's the use of "user" that's problematic. I would argue that it is almost never the case that the end user of an application has knowledge of the infrastructure.
Ask your mom, ask your dad, ask any Internet neophyte and you'll quickly find that it's probably the case that they have no understanding or knowledge (and certainly no control) of the underlying infrastructure for any application. If we used the term "user" to mean the traditional end user, then every application and web site on the Internet is "cloud computing" and has been for more than a decade.

FINALLY, IT REALLY IS ALL ABOUT *US*

The "users" in cloud computing definitions are developers, administrators, and IT folks: folks who are involved in the development and deployment of applications, not necessarily using them. It is from IT's perspective, not that of the end user or consumer of the application, that cloud computing can be, and must be, defined. We are the users, the consumers, of cloud computing services; not our customers or consumers. We are the center of the vortex around which cloud computing revolves, because we are the ones who will consume and make use of those services in order to develop and deploy applications.

Cloud computing is not about the application itself; it is about how the application is deployed and how it is delivered. Cloud computing is a deployment model leveraged by IT in order to reduce infrastructure costs and/or address capacity and scalability concerns. Just as an end user cannot "do" SOA, they can't "do" cloud computing. End users use applications, and an application is not cloud computing. It is the infrastructure and model of deployment that defines whether it is cloud computing, and even then, it's never cloud computing to the end user, only to the folks involved in developing and deploying that application.

Cloud computing is about how an application or service is deployed and delivered. But defining how it is deployed and delivered could be problematic, because when we talk about how, we often tend to get prescriptive and start talking in absolute checklists. With a fluid concept like cloud computing, that doesn't work.
There's just not one single model, nor is there one single architecture, that you can definitively point to and say "We are doing that, ergo we are doing cloud computing."

THE FOUR BEHAVIORS THAT DEFINE CLOUD COMPUTING

It's really the behavior of the entire infrastructure, how the cloud delivers an application, that's important. The good thing is that we can define that behavior; we can determine whether an application infrastructure is behaving in a cloud computing manner in order to categorize it as cloud computing or something else. This is not dissimilar to SOA (Service Oriented Architecture), a deployment model in which we look to the way in which applications are architected and subsequently delivered to determine whether we are or are not "doing SOA."

DYNAMISM. Amazon calls this "elasticity", but it means the same thing: this is the ability of the application delivery infrastructure to expand and contract automatically based on capacity needs. Note that this does not require virtualization technology, though many providers are using virtualization to build this capability. There are other means of implementing dynamism in an architecture.

ABSTRACTION. Do you need to care about the underlying infrastructure when developing an application for deployment in the cloud? If you have to care about the operating system or any piece of the infrastructure, it's not abstracted enough to be cloud computing.

RESOURCE SHARING. The architecture must be such that the compute and network resources of the cloud infrastructure are sharable among applications. This ties back to dynamism and the ability to expand and contract as needed. If an application's method of scaling is simply to add more servers on which it is deployed, rather than consuming resources on other servers as needed, the infrastructure is not capable of resource sharing.

PROVIDES A PLATFORM. Cloud computing is essentially a deployment model.
If it provides a platform on which you can develop and/or deploy an application and meets the other three criteria, it is cloud computing. Dynamism and resource sharing are the key architectural indicators of cloud computing. Without these two properties you're simply engaging in remote hosting and outsourcing, which is not a bad thing; it's just not cloud computing. Hosted services like Gmail are cloudware, but not necessarily cloud computing, because they are merely accessed through the cloud and don't actually provide a platform on which applications can be deployed. Salesforce.com, however, which provides such a platform, albeit a somewhat restricted one, fits the definition of cloud computing.

Cloudware is an extension of cloud computing, but cloudware offerings do not enable businesses to leverage cloud computing in the same way as an Amazon or BlueLock or Joyent. Cloudware may grow into cloud computing, as Salesforce.com has done over the years. Remember that when Salesforce.com started it was purely SaaS: it simply provided a hosted CRM (Customer Relationship Management) solution. Over the years it has expanded and begun to offer a platform on which organizations can develop and deploy their own applications.

Cloud computing, as Gartner analysts have recently put forth, is a "style of computing". That style of computing is defined from the perspective of IT, and has specific properties which make something cloud computing, or not cloud computing, as the case may be.

The Rubik’s Cube of IT
Rubik’s Cube was first patented in 1974. The first book describing a solution algorithm was published in 1981. In 2007, computers were used to deduce the smallest number of moves needed to solve a cube, and in 2008 that number was further reduced. That’s 34 years after it was invented. And it’s just a toy.

Related Articles and Blogs
The Unvarnished Truth About VDI
Desktop Virtualization VDI Image Layering
Cross-VM Vulnerabilities in Cloud Computing (pdf)
Lab Report – The Deduplication of Primary Storage
Damned if you do, damned if you don't
There has been much fervor around the outages of cloud computing providers of late, which seems to be leading to an increased and perhaps unwarranted emphasis on SLAs, the likes of which we haven't seen since...well, the last time IT saw anything outsourced reach the hype level of cloud computing. Consider this snippet of goodness for a moment, and pay careful attention to the last paragraph. From Five Key Challenges of Enterprise Cloud Computing:

I won’t beat the dead “Gmail down, EC2 down, etc down” horse here. But the truth of the matter is enterprises today cannot reasonably rely on the cloud infrastructures/platforms to run their business. There’s almost no SLAs provided by the cloud providers today. Even Jeff Barr from Amazon said that AWS only provides SLA for their S3 service. [...] Can you imagine enterprises signing up cloud computing contracts without SLAs clearly defined? It’s like going to host their business critical infrastructure in a data center that doesn’t have clearly defined SLA. We all know that SLAs really doesn’t buy you much. In most cases, enterprises get refunded for the amount of time that the network was down. No SLA will cover business loss. However, as one of the CSOs I met said, it’s about risk transfer. As long as there’s a defined SLA on paper, when the network/site goes down, they can go after somebody. If there’s no SLA, it will be the CIO/CSO’s head that’s on the chopping block.

Let's look at this rationally for a moment. SLAs really don't buy you much. True. True of cloud computing providers, true of the enterprise. No SLA covers business loss. True. True of cloud computing providers, true of the enterprise. What I find amusing about this article is that the author asks if we can imagine "signing up cloud computing contracts without SLAs clearly defined?" Well, why not? Businesses do it every day when IT deploys the latest "Business App v4.5.3.2a".
Microsoft Office 2007 relies heavily on online components, but we don't demand an SLA from Microsoft for it. Likewise, the anti-phishing capabilities of IE7 don't necessarily come with an SLA, and businesses don't shy away from making it their corporate standard anyway. In fact, I'd argue that most cloudware today comes with an anti-SLA: use at your own risk, we don't guarantee anything.

The CIO/CSO's head is on the chopping block if he does have an SLA, because there's no guarantee that IT can meet it. Oh, usually they do, because the SLA is broadly defined for all of IT in terms of "we'll have 5 9's of availability for the network" and "applications will have less than an X second response time" and so on. But it isn't as if IT and the business sit down and negotiate SLAs for every single application they deploy into the enterprise data center. If they do, then they're the exception, not the rule. And the applications for which this is true are so time-sensitive and mission-critical that it's unlikely the responsibility for them will ever be outsourced. Financial services and brokerages are a good example of this. Outsourced? Unlikely. The IT folks responsible for the applications and networks in those industries are probably laughing uproariously at the idea.

The argument that an SLA simply places a target on someone's head regarding responsibility for the uptime and performance of applications is largely true. But that would seem to indicate that if you're a CIO/CSO and can wrangle any SLA out of a cloud computing provider, you should immediately use them for everything, because you can pass the mantle of responsibility for failing to meet SLAs to them instead of shouldering it yourself. This isn't a cloud computing problem; this is a problem of responsibility and managing expectations.
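The "5 9's" shorthand hides how quickly availability erodes when many components sit in series, which is worth a quick back-of-the-envelope check. The component count and per-component figure below are made up for illustration:

```python
def serial_availability(component_availabilities):
    """End-to-end availability of a chain of dependent components.

    Each component must be up for the service to be up, so the
    availabilities multiply, and the nines erode quickly.
    """
    total = 1.0
    for a in component_availabilities:
        total *= a
    return total

# Ten serial components, each individually at "three nines" (99.9%):
chain = serial_availability([0.999] * 10)
downtime_hours = (1 - chain) * 24 * 365  # expected downtime per year
```

Ten components at three nines apiece leave the end-to-end service at roughly 99.0%, which is on the order of several whole days of downtime per year; promising five nines across "a million moving parts" is arithmetic working against you.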
It's a problem with expecting that a million moving parts, hundreds of connections, routers, switches, intermediaries, servers, operating systems, libraries, and applications will somehow always manage to be available. Unpossible, I say, and unrealistic regardless of whether we're talking cloud computing or enterprise infrastructure. Basically, the CIO/CSO is damned if he has an SLA, because chances are IT is going to fail to meet it at some point, and he's damned if he doesn't have an SLA, because that means he's solely responsible for the reliability and performance of all of IT. And people wonder why C-level execs command the compensation levels they do. It's to make sure they can afford the steady stream of antacids they need just to get through the day.

Managing Virtual Infrastructure Requires an Application Centric Approach
Thanks to a tweet from @Archimedius, I found an insightful blog post from cloud computing provider startup Kaavo that essentially makes the case for a move to application-centric management rather than the traditional infrastructure-centric systems on which we've always relied.

We need to have an application centric approach for deploying, managing, and monitoring applications. Software which can provision optimal virtual server, network, and storage (storage, CPU, bandwidth, memory, etc.) resources on-demand and provide automation and ease of use to application owners to easily and securely run and maintain their applications will be critical for the success of virtualization and cloud computing. In short we need to start managing distributed systems for specific applications rather than managing servers and routers. [emphasis added]

This is such a simple statement that gets right to the heart of the problem: when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves. Traditional infrastructure and its associated management intrinsically ties applications to servers, servers to IP addresses, and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of decoupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers tied to IP addresses tied to switches and routers.
That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful. It's all about the application. Finally.

And that's what makes application delivery focused solutions so important to both virtualization and cloud computing models, in which virtualization plays a large enabling role. Because virtualization and cloud computing, like application delivery solution providers, are application-centric. Because these solutions are, and have been for years, focused on application awareness and on the ability of the infrastructure solutions to be adaptable; to be agile. Because they have long since moved beyond simple load balancing and into application delivery, where the application is what is delivered, not bits, bytes, and packets. Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure.

Where simple load balancing fails to adapt dynamically to the ever-changing internal network of applications both virtual and non-virtual, application delivery excels. It is capable of intelligently monitoring the availability of applications, not only in terms of whether an application is up or down, but where it currently resides within the data center. Application delivery solutions are loosely coupled, and like SOA-based solutions they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or fifteen data centers.
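The distinction between a server being up and an application being available can be sketched simply. The state names and inputs below are invented for illustration; real application-aware monitors inspect actual response content, not booleans handed to them:

```python
def classify_member(tcp_ok, http_status, body_marker_found):
    """Application-aware health: a server can be 'up' while the app is down.

    A plain port check only proves the server answers; an application-
    aware monitor also inspects the response before sending traffic.
    """
    if not tcp_ok:
        return "down"              # server unreachable at the network level
    if http_status != 200 or not body_marker_found:
        return "up-but-unhealthy"  # server answers, application is broken
    return "healthy"
```

The middle state is the one simple load balancing misses: a TCP handshake succeeds, so traffic keeps flowing to an application that is returning errors.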
Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network. They are the solution that bridges the gap between applications and network infrastructure, and they enable the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing. There's still work to be done, but for many vendors, at least, the framework already exists for managing the complexity of a dynamic, virtual environment.

Related articles by Zemanta
Cloud Computing: Is your cloud sticky? It should be.
Infrastructure 2.0: The Diseconomy of Scale Virus
Cloud Computing: Vertical Scalability is Still Your Problem
Gartner picks tech top 10 for 2009
Clouding over the issues

Infrastructure 2.0: As a matter of fact that isn't what it means
We've been talking a lot about the benefits of Infrastructure 2.0, or dynamic infrastructure: about why it's necessary and what's required to make it all work. But we've never really laid out what it is, and that's beginning to lead to some misconceptions. As Daryl Plummer of Gartner pointed out recently, the definition of cloud computing is still, well, cloudy. Multiple experts can't agree on the definition, and the same is quickly becoming true of dynamic infrastructure. That's no surprise; we're at the beginning of what Gartner would call the hype cycle for both concepts, so there's some work to be done on fleshing out exactly what each means.

That dynamic infrastructure is tied to cloud computing is no surprise, either, as dynamic infrastructure is very much an enabler of such elastic models of application deployment. But dynamic infrastructure is applicable to all kinds of models of application deployment: so-called legacy deployments, cloud computing and its many faces, and likely new models that have yet to be defined.

The biggest confusion out there seems to be that dynamic infrastructure is being viewed as Infrastructure as a Service (IaaS). Dynamic infrastructure is not the same thing as IaaS. IaaS is a deployment model in which application infrastructure resides elsewhere, in the cloud, and is leveraged by organizations desiring an affordable option for scalability that reduces operating and capital expenses by sharing compute resources "out there" somewhere, at a provider. Dynamic infrastructure is very much a foundational technology for IaaS, but it is not, in and of itself, IaaS. Indeed, simply providing network or application network services "as a service" has never required dynamic infrastructure.
CDNs (Content Delivery Networks), managed VPNs, secure remote access, and DNS services have long been available as services to be used by organizations as a means by which they can employ a variety of "infrastructure services" without the capital expenditure in hardware and the time and effort required to configure, deploy, and maintain such solutions. Simply residing "in the cloud" is not enough. A CDN is not "dynamic infrastructure", nor are hosted DNS servers. They are infrastructure 1.0, legacy infrastructure, whose very nature is such that physical location has never been important to their deployment. Indeed, these services were necessarily designed without physical location as a requirement, as their core functions are supposed to work in a distributed, location-agnostic manner. Dynamic infrastructure is an evolution of traditional network and application network solutions to be more adaptable, to support integration with its environment and other foundational technologies, and to be aware of context (connectivity intelligence).

Adaptable

Dynamic infrastructure is able to understand its environment and react to conditions in that environment in order to provide scale, security, and optimal performance for applications. This adaptability comes in many forms, from the ability to make management and configuration changes on the fly as necessary to providing the means by which administrators and developers can manually or automatically make changes to the way in which applications are being delivered. The configuration and policies applied by dynamic infrastructure are not static; they are able to change based on predefined criteria or events that occur in the environment such that the security, scalability, or performance of an application and its environs are preserved. Some solutions implement this capability through event-driven architectures, with events such as "IP_ADDRESS_ASSIGNED" or "HTTP_REQUEST_MADE".
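As a much-simplified sketch of that event-driven model: the event names below mirror the examples above, but the engine itself and its API are hypothetical, not any vendor's actual implementation.

```python
# Toy event-driven policy engine; the event names mirror the examples
# in the text, but this API is invented purely for illustration.

class PolicyEngine:
    def __init__(self):
        self.handlers = {}

    def on(self, event, handler):
        # Register a policy action to run when a named event occurs.
        self.handlers.setdefault(event, []).append(handler)

    def fire(self, event, **context):
        # Invoke every policy registered for the event, passing its context.
        return [handler(**context) for handler in self.handlers.get(event, [])]


engine = PolicyEngine()
engine.on("IP_ADDRESS_ASSIGNED", lambda ip, **_: f"added {ip} to pool")
engine.on("HTTP_REQUEST_MADE", lambda path, **_: f"routing policy applied to {path}")
```

Handlers registered this way can be added or swapped at runtime, which is the point: the configuration is not static but reacts to conditions as they occur.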
Some provide network-side scripting capabilities to extend the ability to react and adapt to situations requiring flexibility, while others provide the means by which third-party solutions can be deployed on the solution to address the need for application- and user-specific capabilities at specific touch-points in the architecture.

Context Aware

Dynamic infrastructure is able to understand the context that surrounds an application, its deployment environment, and its users, and to apply relevant policies based on that information. Being context aware means being able to recognize that a user accessing Application X from a coffee shop has different needs than the same user accessing Application X from home or from the corporate office. It is able to recognize that a user accessing an application over a WAN or high-latency connection requires different policies than one accessing that application via a LAN or from close physical proximity over the Internet. Being context aware means being able to recognize the current conditions of the network and the application, and then leveraging its adaptable nature to choose the right policies at the time the request is made such that the application is delivered most efficiently and quickly.

Collaborative

Dynamic infrastructure is capable of integrating with other application network and network infrastructure, as well as with the management and control solutions required to manage both the infrastructure and the applications it is tasked with delivering. The integration capabilities of dynamic infrastructure require that the solution be able to direct and take direction from other solutions such that changes in the infrastructure at all layers of the stack can be recognized and acted upon. This integration allows network and application network solutions to leverage their awareness of context in a way that ensures they are adaptable and can support the delivery of applications in an elastic, flexible manner.
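A minimal sketch of that collaboration, assuming a hypothetical orchestration system that calls provisioning hooks on a load balancer's pool; the class and method names are invented for illustration, and distribution simply adapts to whatever capacity exists at the moment a request arrives:

```python
# Hypothetical pool behind a single virtual server: an orchestration
# system calls provision()/deprovision() as instances come and go, and
# round-robin distribution adapts without any client-visible change.

class Pool:
    def __init__(self):
        self.members = []
        self._next = 0

    def provision(self, addr):
        # Control plane signals that a new application instance is available.
        self.members.append(addr)

    def deprovision(self, addr):
        # Instance torn down; clients still address only the virtual server.
        self.members.remove(addr)

    def next_member(self):
        # Round-robin over whatever capacity exists right now.
        if not self.members:
            return None
        member = self.members[self._next % len(self.members)]
        self._next += 1
        return member


pool = Pool()
pool.provision("10.0.0.10")
pool.provision("10.0.0.11")  # capacity added, no network reconfiguration
```

The design choice worth noting is that clients never see the membership change; only the pool's internal distribution decisions do.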
Most solutions use a standards-based control plane through which they can be integrated with other systems to provide the connectivity intelligence necessary to implement IaaS, virtualized architectures, and other cloud computing models in such a way that the perceived benefits of reduced operating expenses and increased productivity through automation can actually be realized. These three properties of dynamic infrastructure work together, in concert, to provide connectivity intelligence and the ability to act on information gathered through that intelligence. Together they form the basis for a fluid, adaptable, dynamic application infrastructure foundation on which emerging compute models such as cloud computing and virtualized architectures can be implemented. But dynamic infrastructure is not exclusively tied to emerging compute models and next-generation application architectures; it can be leveraged to benefit traditional architectures as well. The connectivity intelligence and adaptable nature of dynamic infrastructure improve the security, availability, and performance of applications in so-called legacy architectures, too. Dynamic infrastructure is a set of capabilities implemented by network and application network solutions that provide the means by which an organization can improve the efficiency of its application delivery and network architecture. That's why it's just not accurate to equate Infrastructure 2.0/Dynamic Infrastructure with Infrastructure as a Service cloud computing models. The former is a description of the next generation of network and application network infrastructure solutions: the evolution from static, brittle solutions to fluid, dynamic, adaptable ones. The latter is a deployment model that, while likely built atop dynamic infrastructure solutions, is not wholly comprised of dynamic infrastructure. IaaS is not a product; it's a service.
Dynamic infrastructure is a product that may or may not be delivered "as a service". Glad we got that straightened out.

Interoperability between clouds requires more than just VM portability
The issue of application state and connection management is one often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure due to the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real-time. The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been subsequently addressed in a variety of articles and blogs since the concepts of cloud computing and virtualization have gained momentum. The Burton Group has addressed this issue with regard to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the blog supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models and also posits that interoperability is essentially enabled simply by virtual machine portability. From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service": First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while it's probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms.
While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to the other, thereby enabling interoperability. ... Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS use to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP -- another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The "ID" and "locator" are separated so that the packet with the "ID" can be sent to the "locator" for address resolution. The "locator" can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the "locator". [emphasis added] If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service end-points in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is essentially a physically disparate data center and not simply a new location within the same data center - a scenario LISP does not even consider. That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but it is not a roadblock; it's merely a speed bump on the road to implementation.
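To make the ID/locator idea concrete, here is a toy mapping table in the spirit of LISP; this sketches only the concept, not the actual protocol or its packet format, and the identifiers and addresses are illustrative.

```python
# Conceptual ID/locator separation: a workload keeps a stable endpoint ID
# while its locator (the routable address) changes as the workload moves.
# IDs and addresses below are made up (RFC 5737 documentation ranges).

mapping = {}  # endpoint ID -> current locator

def register(endpoint_id, locator):
    # The workload (or something acting on its behalf) updates the
    # mapping when it moves to a new location.
    mapping[endpoint_id] = locator

def resolve(endpoint_id):
    # Senders address traffic by ID; the mapping system supplies the locator.
    return mapping[endpoint_id]


register("app-x", "192.0.2.10")    # workload running in data center A
register("app-x", "203.0.113.7")   # same workload after migrating to provider B
```

The sender's view ("app-x") never changes; only the mapping does - which is precisely why the rest of the infrastructure that depends on the workload's location needs access to that same information.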
We'll come back to that later; first let's examine the emphasized statement, which seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds - specifically IaaS clouds. I'm sure the author didn't mean to imply that it's that simple; that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interest of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself. Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in a way that is peculiar to the application, and vice versa. We call it an "ecosystem" for a reason: there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application. One cannot simply move a virtual machine from one location to another, regardless of the interoperability of the virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has been migrated just as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment. Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, as the components must be able to collaborate, which requires connectivity information.
Which brings us back to the problems associated with LISP and its focus on external discovery and location. There's just a lot more to interoperability than pushing around virtual images, regardless of what those images contain: application, data, identity, security, or networking. Portability between virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure the seamless transition from one IaaS cloud environment to another.