cloud computing infrastructure
Is Your Cloud Opaque or Transparent?
Cloud computing promises customers the ability to deliver scalable applications on-demand without the overhead of a massive data center. The visibility you have into the cloud computing environment - and the flexibility and control you have over it - depends on whether the provider you select offers an opaque or a transparent cloud computing environment.

OPAQUE CLOUD COMPUTING MODEL

In an opaque cloud computing model all details are hidden from the organization. The hardware and software infrastructure details are not necessarily known or controlled by the organization but are completely managed by the cloud computing provider. This allows for a completely on-demand environment in which all resources necessary to deliver applications according to service level agreements are automatically provisioned, de-provisioned, and managed by the cloud computing provider. The organization need only develop and deploy the application to the cloud; the rest of the details are opaque and handled for the organization by the cloud computing provider. Most opaque cloud computing providers currently allow the organization to determine how many "instances" of their application are running at any given time, with the details of provisioning the appropriate resources to support those instances hidden from the customer's view.

In many ways, SaaS (Software as a Service) providers such as Salesforce.com have been using an opaque cloud computing model for many years, as the implementation details in terms of hardware and software infrastructure are completely hidden from and unavailable to the customer (the organization). The difference between a SaaS offering and an opaque cloud computing model lies in the application development and deployment processes. A SaaS offering such as Salesforce.com or Google Apps requires development of applications on a specific platform, almost always proprietary to the SaaS provider and non-negotiable. An opaque cloud computing provider allows the organization to determine the platform upon which applications will be deployed - more akin to some of Amazon's cloud offerings, such as EC2 - though the underlying operating system and hardware may not be known due to the extensive use of operating system virtualization by the cloud computing provider. This is why virtualization is inextricably tied to cloud computing: it is the most efficient way to deploy an application to a remote, virtual data center without the overhead of configuring and managing the incredibly high number of possible application platform combinations. Opaque cloud computing providers like Joyent have an infrastructure already constructed to scale. Customers may be aware of what that infrastructure comprises, but cannot necessarily specify the choice of switches, routers, or application delivery infrastructure.

TRANSPARENT CLOUD COMPUTING MODEL

In a transparent cloud computing model the organization is left to determine its specific needs. The organization decides how much computing power it requires and what hardware and software solutions it will need, and it manages its provisioned resources in the cloud. The transparent cloud computing model is more akin to an outsourced data center - a virtual data center, if you will - than it is to an on-demand opaque cloud computing model. The acquisition and provisioning of resources becomes much more difficult in a transparent cloud computing model.
The prospect of automated on-demand computing in a transparent cloud computing model is promising, and in some cases it is already available for certain functions. The same mechanisms used to manage the opaque cloud computing environment could become customer-facing ones. Some management and configuration of cloud computing resources is currently being offered to customers by providers like RightScale, though which infrastructure functions can be delegated to the organization varies greatly from provider to provider. Rackspace is a good example of a transparent cloud computing model, as are many of the traditional hosting providers.

The transparent cloud computing environment is still evolving, and currently comprises several different models for architecting your infrastructure. Some, like Areti, offer flexibility in infrastructure choices by taking advantage of virtual appliances. This allows the customer to choose from a number of application and network infrastructure solutions while keeping the cost of acquisition and management down for both the provider and the customer. Other providers continue to focus on physical infrastructure deployment in a fully or collaboratively managed environment, understanding that some organizations will require dedicated, proven solutions if they move into the cloud.

THE HYBRID MODEL

CIO.com recently offered a list of 11 cloud computing vendors to watch, culled from a Forrester Research report. The list comprises a mix of opaque and transparent providers, with a tendency to lean toward the opaque. A few of the providers on the list are leaning toward a hybrid cloud computing model, with the ability to specify a choice of infrastructure devices from those supported by the provider while providing fully managed services. A hybrid model is likely where providers will eventually converge, as it promises the greatest flexibility for customers without sacrificing some of the control necessary on the part of the provider. After all, while customers may be allowed to manage and configure some components of the application delivery network, others (routers, switches) are unlikely to require such hands-on management by customers. In fact, such mucking around at layers 2 and 3 could very well do more harm than good, as most value-added features and functionality for application delivery come at layer 4 and above and are best handled by a solution separate from the core network infrastructure. And while many customers will be comfortable testing the waters with virtual appliances or open-source solutions for application delivery infrastructure, eventually more proven solutions will be required as customers begin to demand more flexibility and functionality in the cloud. Cloud computing providers who evolve quickly and take advantage of componentized application delivery infrastructure will be able to better differentiate their offerings by adding features such as acceleration, security, rate shaping, and advanced authentication. These advanced features are one of the primary reasons that "Option #3" will be increasingly important for application delivery and networking infrastructure providers. Cloud computing providers appear willing to support such features, but require that the solutions be able to be integrated and remotely managed, on-demand, before they will do so.
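To make the contrast concrete, below is a minimal, purely illustrative sketch of the kind of customer-facing deployment request an opaque or hybrid provider might expose. The provider class, field names, and managed features are hypothetical and do not represent any vendor's actual API.

```python
# Illustrative sketch only: how little a customer specifies in an opaque model
# (just the application and a desired instance count) versus the managed
# features a hybrid provider might additionally expose. No real API is implied.
from dataclasses import dataclass, field


@dataclass
class DeploymentRequest:
    application_image: str            # what the customer supplies
    desired_instances: int            # the only capacity knob in an opaque model
    managed_features: dict = field(default_factory=dict)  # hybrid-model extras


class HypotheticalCloudProvider:
    """Stands in for an opaque or hybrid provider's customer-facing API."""

    def deploy(self, request: DeploymentRequest) -> None:
        # In an opaque model everything below this line - servers, switches,
        # load balancing - is provisioned and managed by the provider.
        print(f"Deploying {request.application_image} "
              f"with {request.desired_instances} instance(s)")
        for feature, enabled in request.managed_features.items():
            print(f"  managed feature '{feature}': {'on' if enabled else 'off'}")


if __name__ == "__main__":
    provider = HypotheticalCloudProvider()
    provider.deploy(DeploymentRequest(
        application_image="widget-shop-v1",
        desired_instances=3,
        managed_features={"acceleration": True, "web_app_firewall": True},
    ))
```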
Dynamic Infrastructure: The Cloud within the Cloud

When folks are asked to define the cloud they invariably, somewhere in the definition, bring up the point that "users shouldn't care" about the actual implementation. When asked to diagram a cloud environment we end up with two clouds: one representing the "big cloud" and one inside the cloud, representing the infrastructure we aren't supposed to care about, usually with some pretty graphics representing applications being delivered out of the cloud over the Internet. Yet some of us do need to care about what's obscured; the folks tasked with building out a cloud environment need to know what's hidden in the cloud in order to build an infrastructure that will support such a dynamic, elastic environment. It is the obscuring of the infrastructure that makes the cloud seem so simple. Because we're hiding all the moving parts that need to work in concert to achieve such a fluid environment, it appears as if all you need is virtualization and voila! The rest will take care of itself. But without a dynamic infrastructure supporting all the virtualized applications - and, in many cases, virtualized infrastructure - such an environment is exceedingly difficult to build.

WHAT'S HIDDEN IN THE CLOUD

Inside the "cloud within the cloud" there are a great number of pieces of infrastructure working together. Obviously there are the core networking components: routers, switches, DNS, and DHCP, without which connectivity would be impossible. Moving up the stack we find load balancing and application delivery infrastructure: the core application networking components that enable the dynamism promised by virtualized environments to be achieved. Without a layer of infrastructure bridging the gap between the network and the applications, virtualized or not, it is difficult to achieve the kind of elasticity and dynamism necessary for the cloud to "just work" for end users. It is the application networking layer that is responsible for ensuring availability, properly routing requests, and applying application-level policies such as security and acceleration. This layer must be dynamic, because the virtualized layers of web and application servers are themselves dynamic. Application instances may move from IP to IP across hours or days, and the application networking layer must adapt to that change without requiring manual intervention in the form of configuration modification.

Storage virtualization, too, resides in this layer of the infrastructure. Storage virtualization enables a dynamic infrastructure by presenting a unified view of storage to the applications and internal infrastructure, ensuring that the application need not be modified in order to access file-based resources. Storage virtualization can further be the means through which cloud control mechanisms manage the myriad virtual images required to support a cloud computing infrastructure.

The role of the application networking layer is to mediate, or broker, between clients and the actual applications to ensure a seamless access experience regardless of where the actual application instance might be running at any given time. It is the application networking layer that provides the network and server virtualization through which the actual implementation of the cloud is hidden from external constituents. Much like storage virtualization, application networking layers present a "virtual" view of the applications and resources requiring external access.
This is why dynamism is such an integral component of a cloud computing infrastructure: the application networking layer must, necessarily, keep tabs on application instances and be able to associate them with the appropriate "virtual" application it presents to external users. Classic load balancing solutions are incapable of such dynamic, near real-time reconfiguration and discovery, and almost always require manual intervention. Dynamic application networking infrastructure is not only capable of this type of autonomous function but excels at it, integrating with the systems necessary to become aware of changes within the application infrastructure and act upon them.

The "cloud within the cloud" need only be visible to implementers, but as we move forward and more organizations attempt to act on a localized cloud computing strategy it becomes necessary to peer inside the cloud and understand how the disparate pieces of technology combine. This visibility is a requirement if organizations are to achieve the goals sought through the implementation of a cloud computing-based architecture: efficiency and scalability.
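As a rough illustration of that bookkeeping - a sketch, not any vendor's implementation - the following shows the mapping an application networking layer must maintain between the "virtual" application it presents externally and the instances currently backing it. The class and method names are invented for the example.

```python
# A minimal sketch of the bookkeeping an application networking layer performs:
# it keeps tabs on application instances as they appear, disappear, or move,
# and maintains the mapping from the "virtual" application presented to users
# to the instances currently backing it.
from collections import defaultdict


class ApplicationNetworkingLayer:
    def __init__(self):
        # virtual application name -> set of live instance addresses
        self._instances = defaultdict(set)

    def instance_online(self, app: str, address: str) -> None:
        """Called (by a monitor or orchestration hook) when an instance appears."""
        self._instances[app].add(address)

    def instance_offline(self, app: str, address: str) -> None:
        """Called when an instance disappears or moves to a new IP."""
        self._instances[app].discard(address)

    def resolve(self, app: str) -> list:
        """Where should requests for this virtual application go right now?"""
        return sorted(self._instances[app])


if __name__ == "__main__":
    anl = ApplicationNetworkingLayer()
    anl.instance_online("crm", "10.0.1.20:8080")
    anl.instance_online("crm", "10.0.1.21:8080")
    anl.instance_offline("crm", "10.0.1.20:8080")  # instance de-provisioned or moved...
    anl.instance_online("crm", "10.0.2.5:8080")    # ...and came back at a new address
    print(anl.resolve("crm"))                      # ['10.0.1.21:8080', '10.0.2.5:8080']
```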
Load balancing is key to successful cloud-based (dynamic) architectures

Much of the dialogue today surrounding cloud computing and virtualization is still taking the 50,000 foot view. It's all conceptual; it's all about business value, justification, interoperability, and use cases. These are all good conversations that need to happen in order for cloud computing and virtualization-based architectures to mature, but as is often the case, that leaves the folks tasked with building something right now a bit on their own. So let's ignore the high-level view for just a bit and talk reality.

Many folks are being tasked, now, with designing or even implementing some form of a cloud computing architecture - usually based around virtualization technology like VMWare (a March 2008 Gartner Research report predicted VMWare would likely hold 85% of the virtualization market by the end of 2008). But architecting a cloud-based environment requires more than just deploying virtual images and walking away. Cloud-based computing is going to require that architects broaden their understanding of the role that infrastructure like load balancers play in enterprise architecture, because they are a key component of a successful cloud-based implementation, whether that's a small proof of concept or a complex, enterprise-wide architectural revolution.

The goal of a cloud-based architecture is to provide some form of elasticity: the ability to expand and contract capacity on-demand. The implication is that at some point additional instances of an application will be needed in order for the architecture to scale and meet demand. That means there needs to be some mechanism in place to balance requests between two or more instances of that application. The mechanism most likely to be successful in performing such a task is a load balancer.

The challenges of attempting to build such an architecture without a load balancer are staggering. There's no other good way to take advantage of the additional capacity introduced by multiple instances of an application that is also efficient in terms of configuration and deployment. All other methods require modifications and changes to multiple network devices in order to properly distribute requests across multiple instances of an application. Likewise, when the additional instances of that application are de-provisioned, the changes to the network configuration need to be reversed. Obviously a manual process would be time consuming and inefficient, effectively erasing the benefits gained by introducing a cloud-based architecture in the first place.

A load balancer provides the means by which instances of applications can be provisioned and de-provisioned automatically, without requiring changes to the network or its configuration. It automatically handles the increases and decreases in capacity and adapts its distribution decisions based on the capacity available at the time a request is made. Because the end user is always directed to a virtual server, or IP address, on the load balancer, the increase or decrease of capacity provided by the provisioning and de-provisioning of application instances is non-disruptive. As is required by even the most basic of cloud computing definitions, the end user is abstracted by the load balancer from the actual implementation and need not care about it.
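A minimal sketch of that behavior follows: clients always see a single virtual server while instances are provisioned and de-provisioned behind it. The class, addresses, and instance names are illustrative only; a real load balancer would also weigh health and capacity.

```python
# Illustrative sketch of a virtual server whose pool grows and shrinks without
# any client-visible change; not a representation of any product.
class VirtualServer:
    def __init__(self, virtual_ip: str):
        self.virtual_ip = virtual_ip     # the one address clients ever see
        self._pool = []                  # current application instances

    def provision(self, instance: str) -> None:
        self._pool.append(instance)      # capacity added, no client-side change

    def deprovision(self, instance: str) -> None:
        self._pool.remove(instance)      # capacity removed, still non-disruptive

    def handle_request(self) -> str:
        # Simple round-robin; a real load balancer would also consider capacity,
        # health, and response times when making this decision.
        if not self._pool:
            raise RuntimeError("no application instances available")
        instance = self._pool[0]
        self._pool.append(self._pool.pop(0))  # rotate for the next request
        return f"{self.virtual_ip} -> {instance}"


if __name__ == "__main__":
    vs = VirtualServer("203.0.113.10")
    vs.provision("app-instance-1")
    vs.provision("app-instance-2")
    print(vs.handle_request())        # 203.0.113.10 -> app-instance-1
    vs.provision("app-instance-3")    # demand spikes, a new instance comes online
    print(vs.handle_request())        # 203.0.113.10 -> app-instance-2
```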
The load balancer makes one, two, or two hundred resources - whether physical or virtual - appear to be one resource; this decouples the user from the physical implementation of the application and allows the internal implementation to grow, to shrink, and to change without any obvious effect on the user.

Choosing the right load balancer at the beginning of such an initiative is imperative to the success of more complex implementations later. The right load balancer will be able to provide the basics required to lay the foundation for more advanced cloud computing architectures in the future, while supporting even the most basic architectures today. The right load balancer will be extensible. When first implementing a cloud-based architecture you need simple load balancing capabilities, and little more. But as your environment grows more complex there will likely be a need for more advanced features, like layer 7 switching (sketched after the related links below), acceleration, optimization, SSL termination and redirection, application security, and secure access. The right load balancing solution will allow you to start with the basics but easily provide more advanced functionality as you need it - without requiring new devices or solutions that often require re-architecture of the network or application infrastructure. A load balancer is a key component in building out any cloud computing architecture, whether it's just a simple proof of concept or an advanced, provider-oriented implementation.

Related articles by Zemanta
Managing Virtual Infrastructure Requires an Application Centric Approach
Infrastructure 2.0: The Diseconomy of Scale Virus
The next tech boom is already underway
The Context-Aware Cloud
Battle brewing over next-generation private clouds
4 Things You Need in a Cloud Computing Infrastructure
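As promised above, here is a minimal sketch of layer 7 switching: requests are inspected at the application layer and directed to different pools based on host and path. The pool names and rules are purely illustrative.

```python
# A minimal sketch of a layer 7 switching rule: the routing decision is made by
# looking at the HTTP request itself rather than only at IP addresses and ports.
def choose_pool(host: str, path: str) -> str:
    """Pick a back-end pool by inspecting the request at the application layer."""
    if path.startswith("/api/"):
        return "api-pool"             # application servers
    if path.startswith("/static/") or path.endswith((".css", ".js", ".png")):
        return "static-content-pool"  # optimized for cacheable content
    if host.startswith("admin."):
        return "admin-pool"           # locked-down management instances
    return "default-web-pool"


if __name__ == "__main__":
    print(choose_pool("shop.example.com", "/api/orders"))       # api-pool
    print(choose_pool("shop.example.com", "/static/site.css"))  # static-content-pool
    print(choose_pool("admin.example.com", "/dashboard"))       # admin-pool
```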
Infrastructure 2.0: The Feedback Loop Must Include Applications

Greg Ness calls it "connectivity intelligence," but it seems what we're really talking about is the ability of network infrastructure to both be agile itself and enable IT agility at the same time. Brittle, inflexible infrastructures - whether they are implemented in hardware or software or both - are not agile enough to deal with an evolving, dynamic application architecture. Greg says in a previous post:

The static infrastructure was not architected to keep up with these new levels of change and complexity without a new layer of connectivity intelligence, delivering dynamic information between endpoint instances and everything from Ethernet switches and firewalls to application front ends. Empowered with dynamic feedback, the existing deployed infrastructure can evolve into an even more responsive, resilient and flexible network and deliver new economies of scale.

The issue I see is this: it's all too network focused. Knowing that a virtual machine instance came online and needs an IP address, security policies, and to be added to a VLAN on the switch is very network-centric. Necessary, but network-centric. The VM came online for a reason, and that reason is most likely an application-specific one.

Greg has referred several times to the Trusted Computing Group's IF-MAP specification, which provides the basics through which connectivity intelligence could certainly be implemented if vendors could all agree to implement it. The problem with IF-MAP and, indeed, most specifications that come out of a group of network-focused organizers is that they are, well, network-focused. In fact, reading through IF-MAP I found many similarities between its operations (functions) and those found in the more application-focused security standard, SAML. IF-MAP does allow for custom data to be included, which application vendors could use to IF-MAP enable application servers and thereby include more application-specific details in the dynamic infrastructure feedback loop, but that's not as agile as it could be because it doesn't provide a simple, standard mechanism through which application developers can integrate application-specific details into that feedback loop. And yet that's exactly what we need to complete this dynamic feedback loop and create a truly flexible, agile infrastructure, because the applications are endpoints; they, too, need to be managed and secured and integrated into the Infrastructure 2.0 world.

While I agree with Greg that IP address management in general, and managing a constantly changing heterogeneous infrastructure in particular, is a nightmare that standards like IF-MAP might certainly help IT wake up from, there's another level of managing the dynamic environments associated with cloud computing and virtualization that generally isn't addressed by very network-specific standards like IF-MAP: the application layer. In order for a specification like IF-MAP to address the application layer, application developers would need to integrate the code necessary for the application to act as an IF-MAP client within an IF-MAP enabled infrastructure. That's because knowing that a virtual machine just came online is one thing; understanding which application it is, what application policies need to be applied, and what application-specific processing might be necessary in the rest of the infrastructure is another. It's all contextual, and based on variables we can't know ahead of time.
This can't be determined before the application is actually written, so it can't be something written by vendors and shipped as a "value add." Application security and switching policies are peculiar to the application; they're unique, and the only way we, as vendors, can provide that integration without foreknowledge of that uniqueness is to abstract applications to a general use case. That completely destroys the concept of agility because it doesn't take into consideration the application environment as it is at any given moment in time. It results in static, brittle integration that is essentially no more useful than SNMP would be if it were integrated into an application.

We can all sit around and integrate with VMWare, and Hyper-V, and Xen. We can learn to speak IF-MAP or some other common standard and integrate with DNS and DHCP servers, with network security devices, and with layer 2-3 switches. But we are still going to have to manually manage the applications that are ultimately the reason such virtualized environments exist. While getting our infrastructure up to speed so that it is easier and less costly to manage is necessary, let's not forget about the applications we also still have to manage.

Dynamic feedback is great, and we have, today, the ability to enable pieces of that dynamic feedback loop. Customers can, today, use tools like iControl and iRules to build a feedback loop between their application delivery network and applications, regardless of whether those applications are in a VM or a Java EE container, or on a Microsoft server. But this feedback is specific to one vendor, and doesn't necessarily include the rest of the infrastructure.

Greg is talking about general dynamic feedback at the network layer. He's specifically (and understandably) concerned with network agility, not application agility. That's why he calls it infrastructure 2.0 and not application something 2.0. Greg points as an example to the constant levels of change introduced by virtual machines coming on and off line and the difficulties inherent in trying to manage that change via static, infrastructure 1.0 products. That's all completely true and needs to be addressed by infrastructure vendors. But we also need to consider how to enable agility at the application layer, so the feedback loop that drives security and routing and switching and acceleration and delivery configurations in real-time can adapt to conditions within and around the applications we are trying to manage in the first place.

It's all about the application in the end. Endpoints - whether internal or external to the data center - are requesting access and IP addresses for one reason: to get a resource served by an application. That application may be TCP-based, it may be HTTP-based, it may be riding on UDP. Regardless of the network-layer transport mechanisms, it's still an application - a browser, a server-side web application, a SOA service - and its unique needs must be considered in order for the feedback loop to be complete. How else will you know which application just came online or went offline? How do you know what security to apply if you don't know what you might be trying to secure? Somehow the network-centric standards that might evolve from a push to a more agile infrastructure must broaden their focus and consider how an application might integrate with such standards, or what information applications might provide as part of this dynamic feedback loop that will drive a more agile infrastructure.
Any emerging standard upon which Infrastructure 2.0 is built must be accessible and developer-friendly, take into consideration application-specific resources as well as network resources, and provide a standard means by which information about the application - information that can drive the infrastructure to adapt to its unique needs - can be shared. If it doesn't, we're going to end up with the same fractured "us versus them" siloed infrastructure we've had for years. That's no longer reasonable. The network and the application are inextricably linked now, thanks to cloud computing and the Internet in general.

Managing thousands of instances of an application will be as painful as managing thousands of IP addresses. As Greg points out, that doesn't work very well right now and it's costing us a lot of money, time, and effort. We know where this ends up, because we've seen it happen already. The same diseconomies of scale that affect TCP/IP are going to affect application management. We should be proactive in addressing the management issues that will arise from trying to manage thousands of applications and services, rather than waiting until they, too, can no longer be ignored.
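To give a sense of what application-level participation in that feedback loop might look like, here is a minimal sketch. The payload fields and the publish mechanism are hypothetical; they stand in for whatever integration (iControl, IF-MAP extensions, or a future standard) would actually carry the information.

```python
# Sketch only: the application itself supplies the context that network-only
# standards lack - what it is, where it is, how to probe it, and which policies
# it needs. The fields and publish mechanism are hypothetical.
import json


def build_announcement(app_name: str, version: str, address: str) -> dict:
    """Context only the application can provide about itself."""
    return {
        "application": app_name,
        "version": version,
        "address": address,                   # where this instance is reachable right now
        "health_check": "/healthz",           # how the infrastructure should probe it
        "policies": ["web-app-firewall", "compression"],  # policy hints, not packets
        "event": "online",
    }


def publish(announcement: dict) -> None:
    # Stand-in for pushing the event to a delivery controller, orchestration
    # system, or a future Infrastructure 2.0 style registry.
    print(json.dumps(announcement, indent=2))


if __name__ == "__main__":
    publish(build_announcement("widget-shop", "1.4.2", "10.0.1.21:8080"))
```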
Bursting the Cloud

The cloud computing craze is leading to some interesting new terms. Cloudware and cloudbursting are two terms I particularly like for their ability to describe specific computing models based on cloud computing. Today we're going to look at cloudbursting, which is basically a new twist on an old concept. Cloudbursting appears to marry the traditional, safe enterprise computing model with cloud computing; in essence, bursting into the cloud when necessary, or using the cloud when additional compute resources are required temporarily. Jeff at the Amazon Web Services Blog talks about the inception of this term as applied to the latter and describes it in his blog post as a method used by Thomas Brox Røst to regenerate a number of dynamic pages in 5 hours rather than the 7 hours that would have been required had he attempted such a feat internally. His approach is further described on The High Scalability Blog.

Cloudbursting can also be used to shoulder the burden of some of an application's processing. For example, basic application functionality could be provided from within the cloud while more critical (e.g. revenue-generating) applications continue to be served from within the controlled enterprise data center. This assumes that only a portion of consumers will actually be interacting with the data-driven side of a web site (customer management, process visibility, etc.) while the greater portion will simply be browsing around on the non-interactive, as it were, side of the site.

Bursting has traditionally been applied to resource allocation and automated provisioning/de-provisioning of resources, historically focused on bandwidth. Today, in the cloud, it is being applied to resources such as servers, application servers, application delivery systems, and other infrastructure required to provide on-demand computing environments that expand and contract as necessary, without manual intervention. This requires the ability to automate the cloud's data center. Data center automation in a cloud computing environment, regardless of the opacity of the model, requires more than simple workflow systems. It requires on-demand control and management over all devices in the delivery chain, from the storage to the application and web servers to the load balancers and acceleration offerings that deliver the applications to end users. This is more akin to data center orchestration than automation, as it requires that many moving parts and pieces be coordinated in order to perform a highly complex set of tasks seamlessly and with as little manual intervention as possible. This is one of the foundational requirements of a cloud computing infrastructure: on-demand, automated scalability.

Data center automation is nothing new. Hosting and service providers have long automated their data centers in order to reduce the cost of customer acquisition and management, and to improve the efficiency of provisioning and de-provisioning processes. These benefits can also be realized inside the data center, regardless of the model being employed. The same automation required for smooth, cost-effective management of a cloud computing data center can be utilized to achieve smooth, cost-effective management of an enterprise data center. The hybrid application deployment model involving cloud computing requires additional intelligence on the part of the application delivery network.
The application delivery network must be able to understand what is being requested and where it resides; it must be able to intelligently route requests. This, too, is a fundamental attribute of cloud computing infrastructure: intelligence. When distributing an application across multiple locations, whether local servers or remote data centers or "in the cloud," it becomes necessary for a controlling node to properly route those requests based on application data. In a less sophisticated model, global load balancing could be substituted as a means of directing requests to the appropriate site, a task for which global load balancers seem a perfect fit.

A hybrid approach like cloudbursting seems particularly appealing. Enterprises seem reluctant to move business-critical applications into the cloud at this juncture, but are likely more willing to assign responsibility to an outsourced provider for less critical application functionality with variable volume requirements, which fits well with an on-demand resource bursting model. Cloudbursting may be one solution that makes everyone happy.
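As a rough illustration of the routing intelligence described above, here is a minimal cloudbursting sketch with hypothetical capacity numbers: requests stay in the enterprise data center until it nears capacity, then overflow is directed to a cloud-resident pool.

```python
# A minimal cloudbursting sketch, assuming hypothetical capacity figures:
# prefer the enterprise data center, burst to the cloud when it nears capacity.
from dataclasses import dataclass


@dataclass
class Pool:
    name: str
    active_connections: int
    capacity: int

    @property
    def utilization(self) -> float:
        return self.active_connections / self.capacity


def route_request(local: Pool, cloud: Pool, burst_threshold: float = 0.85) -> str:
    """Serve locally until utilization crosses the threshold, then burst."""
    if local.utilization < burst_threshold:
        return local.name
    return cloud.name


if __name__ == "__main__":
    local = Pool("enterprise-dc", active_connections=600, capacity=1000)
    cloud = Pool("cloud-pool", active_connections=50, capacity=10_000)
    print(route_request(local, cloud))   # enterprise-dc

    local.active_connections = 950       # Black Friday arrives
    print(route_request(local, cloud))   # cloud-pool
```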
Compliance in the Cloud

Who is responsible for security in the cloud? Let's say you just developed a web app through which customers can order widgets. You're pretty sure your widgets are going to be the hit of the year, and you want to make sure that you don't suffer the outages and performance issues many retailers have in the past, especially around Black Friday. So you've decided to take advantage of the fact that a cloud computing provider can and will shoulder the responsibility for scaling your application, even in the face of hundreds of thousands of customers knocking on your web site to order your widgets. The question is: who is responsible for worrying about compliance with regulations that may be pertinent to your application and its infrastructure? You? The provider? And if you're running in a cloud like Amazon or Joyent but using a third party like RightScale to provide additional features, which one of them is responsible for compliance? Both? Neither? Just you?

Really, it's not just a question of compliance; it's a question of responsibility for security. You have control over ... the application. That's it. So you can use secure coding techniques and perform code reviews and make sure that your application is secure, but what about the rest of the infrastructure? If you're employing a cloud so that you don't have to worry about all the moving parts that go into scaling up an application - or even if you aren't, but just don't want the headache and cost of building out a massive data center to host that start-up - you may have no idea what kind of server OS is actually running the virtual machines upon which your images are deployed. And you probably don't know what the underlying infrastructure might be, or how secure it is.

There are still questions that have yet to be addressed with cloud computing, such as compliance with regulations like Sarbanes-Oxley (SOX), PCI DSS, HIPAA, and SB 1386. Before any cloud computing model can be fully adopted, compliance with regulations regarding the security and transport of sensitive corporate data such as financial information, personal identification data, and credit information must be carefully considered and addressed, especially as failure to do so is no longer a matter of a simple slap on the wrist but can involve large fines and even jail time for responsible executives. It's nice to not have to worry about the infrastructure that's delivering your applications "out there in the cloud," but there still needs to be an awareness of what that infrastructure is in order to rest a bit easier at night.

Even without the prospect of regulatory fines and punishment looming over your head, there's still the question of basic security that needs to be addressed. You may not be worried about HIPAA or SOX, or even PCI DSS, but core security of all the components of the infrastructure used to deliver your applications is paramount to ensuring the safety of your applications and the data they manipulate. Ultimately it's your application being delivered, so you'll have to shoulder the lion's share of responsibility for ensuring it is secure, even if that simply entails asking some basic questions of your cloud computing provider about its security and what it has put in place to ensure your applications are delivered not only as fast as possible, but as securely as possible. So maybe the better question is: who will shoulder the responsibility for the "big picture"?
Or perhaps more appropriately, who are the regulatory commissions going to blame if and when there is a breach?
Cloud Computing and Infrastructure 2.0

Not every infrastructure vendor needs new capabilities to support cloud computing and infrastructure 2.0. Greg Ness of Infoblox has an excellent article, "The Next Tech Boom: Infrastructure 2.0," that is showing up everywhere. That's because it raises some interesting questions and points out some real problems that will need to be addressed as we move further into cloud computing and virtualized environments. What is really interesting, however, is the fact that some infrastructure vendors are already there, and have been for quite some time. One thing Greg mentions that's not quite accurate (at least in the case of F5) regards the ability of "appliances" to "look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors." From Greg's article:

The appliances that have been deployed across the last thirty years simply were not architected to look inside servers (for other servers) or dynamically keep up with fluid meshes of hypervisors powering servers on and off on demand and moving them around with mouse clicks. Enterprises already incurring dis-economies of scale today will face sheer terror when trying to manage and secure the dynamic environments of tomorrow. Rising management costs will further compromise the economics of static network infrastructure.

I must disagree. Not with the sheer terror statement - that's almost certainly true - but with the characterization of infrastructure devices as unable to handle a virtualized environment. Some appliances and network devices have long been able to look inside servers and dynamically keep up with the rapid changes occurring in a hypervisor-driven application infrastructure. We call one of those capabilities "intelligent health monitoring," for example, and others certainly have their own name for a similar capability. On the dynamic front, when you combine an intelligent application delivery controller with the ability to be orchestrated from within applications or within the OS, you get the ability to dynamically modify the configuration of application delivery in real-time based on current conditions within the data center. And if your monitoring is intelligent enough, you can sense within seconds when an application - whether virtualized or not - has disappeared or, conversely, when it has come back online. F5 has been supporting this kind of dynamic, flexible application infrastructure for years. It's not really new, except that its importance has suddenly skyrocketed due to exactly the scenario Greg points out using virtualization.

WHAT ABOUT THE VIRTSEC PIECE?

There has never been a better case for centralized web application security through a web application firewall and an application delivery controller. The application delivery controller - which necessarily sits between clients and those servers - provides security at layers 2 through 7. The full stack. There's nothing really that special about a virtualized environment as far as the architecture goes for delivering applications running on those virtual servers; the protocols are still the same, and the same vulnerabilities that have plagued non-virtualized applications will also plague virtualized ones. That means existing solutions can address those vulnerabilities in either environment, or a mix. Add in a web application firewall to centralize application security and it really doesn't matter whether applications are going up and down like the stock market over the past week.
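It doesn't matter because intelligent health monitoring notices those transitions as they happen. Here is a simplified sketch of the idea, with the actual probe stubbed out and all addresses hypothetical:

```python
# A simplified sketch of intelligent health monitoring: instances are checked
# on each pass and enabled or disabled in the active pool the moment their
# state changes, so the delivery layer adapts without manual reconfiguration.
from typing import Callable


class HealthMonitor:
    def __init__(self, probe: Callable[[str], bool]):
        self._probe = probe          # e.g. an HTTP GET against a health URL
        self._healthy = set()

    def check(self, instances: list) -> set:
        """Run one monitoring pass and return the instances safe to route to."""
        for instance in instances:
            if self._probe(instance):
                if instance not in self._healthy:
                    print(f"{instance} came online -- adding to pool")
                self._healthy.add(instance)
            else:
                if instance in self._healthy:
                    print(f"{instance} disappeared -- removing from pool")
                self._healthy.discard(instance)
        return set(self._healthy)


if __name__ == "__main__":
    # Simulated probe: pretend the second instance has gone down.
    down = {"10.0.1.22:8080"}
    monitor = HealthMonitor(probe=lambda addr: addr not in down)
    print(monitor.check(["10.0.1.21:8080", "10.0.1.22:8080"]))
```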
By deploying the security at the edge, rather than within each application, you can let the application delivery controller manage the availability state of the application and concentrate on cleaning up and scanning requests for malicious content. Centralizing security for those applications - again, whether they are deployed on a "real" or a "virtual" server - has a wealth of benefits, including improving performance and reducing the very complexity Greg points out that makes information security folks reach for a valium.

BUT THEY'RE DYNAMIC!

Yes, yes they are. The assumption is that, given the opportunity to move virtual images around, organizations will do so - and do so on a frequent basis. I think that assumption is likely a poor one for the enterprise, and probably not nearly as willy-nilly for cloud computing providers, either. Certainly there will be some movement, some changes, but it's not likely to be every few minutes, as is often implied. Even if it were, some infrastructure is already prepared to deal with that dynamism. Dynamism is just another term for agility, and it makes the case well for loose coupling of security and delivery with the applications living in the infrastructure. If we just apply the lessons we've learned from SOA to virtualization and cloud computing, 90% of the "Big Hairy Questions" can be answered by existing technology. We just may have to change our architectures a bit to adapt to these new computing models.

Network infrastructure, specifically application delivery, has had to deal with applications coming online and going offline since its inception. It's the nature of applications to have outages, and application delivery infrastructure, at least, already deals with those situations. It's merely the frequency of those "outages" that is increasing, not the general concept. But what if they change IP addresses? That would indeed make things more complex. It requires even more intelligence, but again, we've got that covered. While the functionality necessary to handle this kind of scenario is not "out of the box" (yet), it is certainly not that difficult to implement if the infrastructure vendor provides the right kind of integration capability. Most already do.

Greg isn't wrong in his assertions. There are plenty of pieces of network infrastructure that need to take a look at these new environments and adjust how they deal with the dynamic nature of virtualization and cloud computing in general. But not all infrastructure needs to "get up to speed." Some infrastructure has been ready for this scenario for years, and it's just now that the application infrastructure and deployment models (SOA, cloud computing, virtualization) have actually caught up and made those features even more important to a successful application deployment. Application delivery in general has stayed ahead of the curve and is already well suited to cloud computing and virtualized environments. So I guess some devices are already "Infrastructure 2.0" ready. I guess what we really need is a sticker to slap on the product that says so.

Related Links
Are you (and your infrastructure) ready for virtualization?
Server virtualization versus server virtualization
Automating scalability and high availability services
The Three "Itys" of Cloud Computing
4 things you need in a cloud computing infrastructure
Making Infrastructure 2.0 reality may require new standards

Managing a heterogeneous infrastructure is difficult enough, but managing a dynamic, ever-changing heterogeneous infrastructure that must be stable enough to deliver dynamic applications makes the former look like a walk in the park. Part of the problem is certainly the inability to manage heterogeneous network infrastructure devices from a single management system. SNMP (Simple Network Management Protocol), the only truly interoperable network management standard used by infrastructure vendors for over a decade, is not robust enough to deal with the management nightmare rapidly emerging for cloud computing vendors. It's called "Simple" for a reason, after all. And even if it weren't, SNMP, while interoperable with network management systems like HP OpenView and IBM's Tivoli, is not standardized at the configuration level. Each vendor generally provides its own customized MIB (Management Information Base). Customized, which roughly translates to "proprietary" - if not in theory, then in practice. MIBs are not interchangeable, they aren't interoperable, and they aren't very robust. Generally they're used to share information and cannot be used to modify device configuration. In other words, SNMP and customized MIBs are just not enough to support efficient management of a very large heterogeneous data center.

As Greg Ness pointed out in his latest blog post on Infrastructure 2.0, the diseconomies of scale in the IP address management space apply more generally to the network management space. There's just no good way today to efficiently manage the kind of large, heterogeneous environment required of cloud computing vendors. SNMP wasn't designed for this kind of management any more than TCP/IP was designed to handle the scaling needs of today's applications.

While some infrastructure vendors, F5 among them, have seen fit to provide a standards-based management and configuration framework, none of us is really compatible with the others in terms of methodology. The way in which we, for example, represent a pool, a VIP (Virtual IP address), or a VLAN (Virtual LAN) is not the same way Cisco or Citrix or Juniper represents the same network objects. Indeed, our terminology may even be different; we use "pool," while other ADC vendors use "farm" or "cluster" to represent the same concept. Add virtualization and yet another set of terms enters the mix, often conflicting with those used by network infrastructure vendors. "Virtual server" means something completely different when used by an application delivery vendor than it does when used by a virtualization vendor like VMWare or Microsoft. And the same tasks must be accomplished regardless of which piece of the infrastructure is being configured. VLANs, IP addresses, gateways, routes, pools, nodes, and other common infrastructure objects must be managed and configured across a variety of implementations.

Scaling the management of these disparate devices and solutions is quickly becoming a nightmare for those trying to build out large-scale data centers, whether they are large enterprises, cloud computing vendors, or service providers. In a response to Cloud Computing and Infrastructure 2.0, "johnar" points out:

Companies are forced to either roll the dice on single-vendor solutions for simplicity, or fill the voids with their own home-brew solutions and therefore assume responsibility for a lot of very complex code that is tightly coupled with ever-changing vendor APIs and technology.
The same technology that vendors tout as their differentiator is what is giving integrators grey hair. Because we all "do it different" with our modern-day equivalents of customized MIBs, it is difficult to integrate all the disparate nodes that make up a full application delivery network and infrastructure into a single, cohesive, efficient management mechanism. We're standards-based, but we aren't based on a single management standard. And as "johnar" points out, it seems unlikely that we'll "unite for data center peace" any time soon: "Unlike ratifying a new Ethernet standard, there's little motivation for ADC vendors to play nice with each other."

I think there is motivation and reason for us to play nice with each other in this regard. Disparate, competitive vendors came together in the past to ratify Ethernet standards, which led to interoperability and simpler management as we built out the infrastructure that makes the web work today. If we can all agree that application delivery controllers (ADCs) are an integral part of Infrastructure 2.0 (and I'm betting we all can), then in order to further adoption of ADCs in general and make it possible for customers to choose based on features and functionality, we must make an effort to come together and consider standardizing a management model across the industry. And if we're really going to do it right, we need to encourage other infrastructure vendors to agree on a common base network management model to further simplify management of large heterogeneous network infrastructures. A VLAN is a VLAN regardless of whether it's implemented in a switch, an ADC, or on a server. If a lack of standards might hold back adoption or prevent vendors from competing for business, then that's a damn good motivating factor right there for us to unite for data center peace. If Microsoft, IBM, BEA, and Oracle were able to unite and agree upon a single web services interoperability standard (which they were; the result is WS-I), then it is not crazy to think that F5 and its competitors can come together and agree upon a single, standards-based management interface that will make Infrastructure 2.0 a reality.

Major shifts in architectural paradigms often require new standards. That's where we got all the WS-* specifications and that's where we got all the 802.x standards: major architectural paradigm shifts. Cloud computing and the pervasive webification of, well, everything is driving yet another major architectural paradigm shift. And that may very well mean we need new standards to move forward and make the shift as painless as possible for everyone.
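Purely as a thought experiment - no such standard exists today, which is the point of this post - a shared description of common objects with per-vendor translation of terminology might look something like this sketch:

```python
# Hypothetical sketch of a vendor-neutral description of common infrastructure
# objects, with per-vendor adapters translating the shared model into each
# product's own vocabulary (pool vs. farm vs. cluster). Not an actual standard.
COMMON_MODEL = {
    "vlan": {"id": 120, "name": "app-tier"},
    "pool": {
        "name": "widget-shop-pool",
        "members": ["10.0.1.21:8080", "10.0.1.22:8080"],
        "monitor": "http",
    },
    "virtual_server": {"address": "203.0.113.10:443", "pool": "widget-shop-pool"},
}


def translate(model: dict, vendor_terms: dict) -> dict:
    """Map the shared object names onto one vendor's vocabulary."""
    return {vendor_terms.get(obj, obj): definition for obj, definition in model.items()}


if __name__ == "__main__":
    # Two illustrative vocabularies for the same objects.
    vendor_a = {"pool": "pool", "virtual_server": "virtual_server"}
    vendor_b = {"pool": "farm", "virtual_server": "vip"}
    print(list(translate(COMMON_MODEL, vendor_a)))  # ['vlan', 'pool', 'virtual_server']
    print(list(translate(COMMON_MODEL, vendor_b)))  # ['vlan', 'farm', 'vip']
```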
Managing Virtual Infrastructure Requires an Application Centric Approach

Thanks to a tweet from @Archimedius, I found an insightful blog post from cloud computing provider startup Kaavo that essentially makes the case for a move to application-centric management rather than the traditional infrastructure-centric systems on which we've always relied:

We need to have an application centric approach for deploying, managing, and monitoring applications. A software which can provision optimal virtual servers, network, storage (storage, CPU, bandwidth, memory, etc.) resources on-demand and provide automation and ease of use to application owners to easily and securely run and maintain their applications will be critical for the success of virtualization and cloud computing. In short we need to start managing distributed systems for specific applications rather than managing servers and routers. [emphasis added]

This is such a simple statement that gets right to the heart of the problem: when applications are decoupled from the servers on which they are deployed and the network infrastructure that supports and delivers them, they cannot be effectively managed unless they are recognized as individual components themselves. Traditional infrastructure and its associated management intrinsically tie applications to servers, servers to IP addresses, and IP addresses to switches and routers. This is a tightly coupled model that leaves very little room to address the dynamic nature of a virtual infrastructure such as those most often seen in cloud computing models.

We've watched as SOA was rapidly adopted and organizations realized the benefits of a loosely coupled application architecture. We've watched the explosion of virtualization and the excitement of decoupling applications from their underlying server infrastructure. But in the network infrastructure space, we still see applications tied to servers, servers tied to IP addresses, and IP addresses tied to switches and routers. That model is broken in a virtual, dynamic infrastructure because applications are no longer bound to servers or IP addresses. They can be anywhere at any time, and infrastructure and management systems that insist on binding the two together are simply going to impede progress and make managing that virtual infrastructure even more painful.

It's all about the application. Finally. And that's what makes application delivery focused solutions so important to both virtualization and cloud computing models in which virtualization plays a large enabling role. Because virtualization and cloud computing, like application delivery solution providers, are application-centric. Because these solutions are, and have been for years, focused on application awareness and on the ability of the infrastructure to be adaptable; to be agile. Because they have long since moved beyond simple load balancing and into application delivery, where the application is what is delivered, not bits, bytes, and packets. Because application delivery controllers are more platforms than they are devices; they are programmable, adaptable, and internally focused on application delivery, scalability, and security. They are capable of dealing with the demands that a virtualized application infrastructure places on the entire delivery infrastructure. Where simple load balancing fails to adapt dynamically to the ever-changing internal network of applications both virtual and non-virtual, application delivery excels.
It is capable of intelligently monitoring the availability of applications, not only in terms of whether they are up or down, but where they currently reside within the data center. Application delivery solutions are loosely coupled, and like SOA-based solutions they rely on real-time information about infrastructure and applications to determine how best to distribute requests, whether that's within the confines of a single data center or across fifteen data centers. Application delivery controllers focus on distributing requests to applications, not servers or IP addresses, and they are capable of optimizing and securing both requests and responses based on the application as well as the network. They are the solution that bridges the gap between applications and network infrastructure, and they enable the agility necessary to build a scalable, dynamic delivery system suitable for virtualization and cloud computing. There's still work to be done, but for many vendors, at least, the framework already exists for managing the complexity of a dynamic, virtual environment.

Related articles by Zemanta
Cloud Computing: Is your cloud sticky? It should be.
Infrastructure 2.0: The Diseconomy of Scale Virus
Cloud Computing: Vertical Scalability is Still Your Problem
Gartner picks tech top 10 for 2009
Clouding over the issues.
Interoperability between clouds requires more than just VM portability

The issue of application state and connection management is one often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure by the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real-time. The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been subsequently addressed in a variety of articles and blogs as the concepts of cloud computing and virtualization have gained momentum. The Burton Group has addressed this issue with regard to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the post supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models, and also posits that interoperability is essentially enabled simply by virtual machine portability. From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service":

First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while its probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms. While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to other, thereby enabling interoperability. ... Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS use to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP -- another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The "ID" and "locator" are separated so that the packet with the "ID" can be sent to the "locator" for address resolution. The "locator" can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the "locator". [emphasis added]

If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service endpoints in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center and not simply a new location within the same data center - an issue LISP does not even consider.
That it also ignores other application networking infrastructure that requires the same information - that is, the new location of the application or resource - is also disconcerting, but it's not a roadblock; it's merely a speed bump on the road to implementation. We'll come back to that later; first let's examine the emphasized statement, which seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds - specifically IaaS clouds.

I'm sure the author didn't mean to imply that it's that simple; that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interest of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself. Applications, and therefore the virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure - application and network - and some of that infrastructure is necessarily configured in a way that is peculiar to the application, and vice-versa. We call it an "ecosystem" for a reason: because there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application. One cannot simply move a virtual machine from one location to another, regardless of the interoperability of virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated just as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment. Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, as the components must be able to collaborate, which requires connectivity information. Which brings us back to the problems associated with LISP and its focus on external discovery and location.

There's just a lot more to interoperability than pushing around virtual images, regardless of what those images contain: application, data, identity, security, or networking. Portability between virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure a seamless transition from one IaaS cloud environment to another.

RELATED ARTICLES & BLOGS
Who owns application delivery meta-data in the cloud?
More on the meta-data menagerie
The Feedback Loop Must Include Applications
How VM sprawl will drive the urgency of the network evolution
The Diseconomy of Scale Virus
Flexibility is Key to Dynamic Infrastructure
The Three Horsemen of the Coming Network Revolution
As a Service: The Many Faces of the Cloud