Gaming the System: The $23,698,655.93 per hour Cloud Computing Instance?
An interesting look at how automation combined with cloud computing resource brokering could go very, very wrong.

Automation is not a new concept. People – regular old people – have been using it for years for tasks that require specific timing or reaction to other actions, like bidding on eBay or other auction-focused sites. The general concept is pretty simple: it's an event-driven system that automatically performs an action when the specified trigger occurs. Usually, at least when money is concerned, there's an upper limit; the action can't be completed if the resulting total would be above a specified maximum amount. Sometimes, however, things go horribly wrong.

THE MOST EXPENSIVE BOOK IN HISTORY

I was out trolling Facebook and happened to see a link to an article claiming a book was actually listed on Amazon for – wait for it, wait for it – $23,698,655.93. Seriously, it was listed for that much for a short period of time. There's a lengthy explanation of why, and it turns out that an automated "pricing war" of sorts was to blame. Two competing sellers each set their price as a percentage of the other's: one priced slightly below 100% of its competitor's price, while the other priced slightly above 100% of its competitor's. The mathematically astute can figure out what happens when the differences are not equal – specifically, when the markup above 100% used by the higher-priced seller is larger than the discount used by the seller trying to stay below him. Stair-step increases over time ultimately pushed the price to over $23 million before someone noticed what was going on. Needless to say, neither seller found a buyer at that price, more's the pity for them.

THE POTENTIAL DANGER for CLOUD BROKERS

The concept of cloud brokers – services that provide competitive bidding and essentially auctioning of cloud resources – is one that plays well into the demesne of commoditized resource services. Commoditization, after all, engenders an environment in which the consumer indicates the value, and therefore the price they will pay, for a resource, and providers of that resource generally respond. Allowing consumers to "bid" on the resource allows the market to determine the value in a very agile manner. Seasonal or event-driven spikes in capacity needs, for example, could allow those resources that are most valuable in those moments to rise in price, while at other times demand may drive the price downward. While making it difficult, perhaps, to budget properly across a financial reporting period, such volatility can be positive as it also indicates to the market the price consumers will bear in general.

But assume that, like the Amazon marketplace, two such brokers begin setting prices based on each other rather than through market participation: two brokers that wish to remain competitive, each with a different value proposition, such that one automatically sets its price slightly lower than the other's, while the other automatically sets its price slightly higher than the other's. Indeed, you could arrive at the nearly $24 million per hour cloud computing instance. Or the nearly $24 million block storage, or gigabit per second of bandwidth, or whatever resource the two brokers are offering.
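To make the feedback loop concrete, here is a minimal sketch in Python of two automated pricers that key off each other's last price with no upper bound. The starting price and multipliers are illustrative assumptions, not the sellers' actual values.

```python
# Two automated pricers, each setting its price as a fixed multiple of the
# competitor's last observed price. Neither enforces an upper bound, so any
# pair of multipliers whose product exceeds 1.0 ratchets both prices upward
# without limit.

def price_war(price_a, price_b, ratio_a=0.9983, ratio_b=1.2706, rounds=60):
    """Alternating repricing. The ratios are illustrative assumptions:
    seller A undercuts B slightly; seller B marks up over A by more."""
    for day in range(1, rounds + 1):
        price_a = round(price_b * ratio_a, 2)   # A reprices just below B
        price_b = round(price_a * ratio_b, 2)   # B reprices well above A
        if day % 10 == 0:
            print(f"round {day:3d}: A=${price_a:,.2f}  B=${price_b:,.2f}")
    return price_a, price_b

# Starting from a modest list price, the combined ratio (~1.27 per round)
# yields stair-step exponential growth: the price crosses $23 million in
# under 60 rounds, because nothing ever says "that's out of bounds."
price_war(price_a=35.00, price_b=35.00)
```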
THE POTENTIAL DANGER for DATA CENTERS

Now certainly this is an extreme – and unlikely – scenario. But if we apply the same concept to a dynamic, integrated infrastructure tasked with delivering applications based on certain business and operational parameters, you can see how the same scenario could become reality, with slightly different impacts on the data center and the business it serves. While not directly related to pricing, it is other policies – those governing the security, availability, and performance of applications – that could be impacted, and the problems compounded, if controls and limitations are not clearly set upon automated responses to conditions within the data center.

Policies that govern network speeds and feeds, for example, could impose limitations on users or applications based on prioritization or capacity. Other policies regarding performance might react to the initiation of those policies in an attempt to counter a degradation of performance, which again triggers a tightening of network consumption, which again triggers… You get the picture. Circular references – whether in a book market, a cloud computing resource market, or internal to the data center infrastructure – can cascade such that the inevitable result is a negative impact on availability and performance.

Limitations, thresholds, and clear controls are necessary in any automated system. In programming we use the term "terminal condition" to indicate at what point a given piece of code should terminate, or exit, a potentially infinite loop. Such terminal conditions must be present in data center automation as a means to combat a potentially infinite loop between two or more pieces of infrastructure that control the flow of application data.

Collaboration, not just integration, is critical. While Infrastructure 2.0 enables the integration necessary to support a context-aware data center architecture capable of adapting on-demand to conditions as a means of ensuring availability, security, and performance goals are met, that integration requires collaboration across people – across architects and devops and admins – who can recognize such potential infinite loops and address them by implementing the proper terminal conditions in those processes.

COLLABORATION without CONTROL is BAD, M'KAY?

Whether the implementation is focused on automating a pricing process or enabling a security or performance policy in the data center, careful attention to controls is necessary to avoid an infinite regression of policies that counteract one another. Terminal conditions, limitations, thresholds: these are the necessary implements to ensure that the efficiencies gained through automation do not negatively impact application delivery. The slow but steady increase of a book beyond a "normal" price should have been recognized as being out of bounds in context – the context of the market, of activity, of the pricing of other similar books. In the data center, the same contextual awareness is necessary to understand why more capacity may be needed or why performance may be degrading. Is it a multi-layer (modern) attack? Is it a legitimate flash crowd of traffic? These questions must be answerable in order to properly adjust policies and ensure the right folks are notified in the event that changes in the volume being handled by the data center may be detrimental not only to the security but to the budget of the data center and the applications it is delivering. Collaboration and integration go hand in hand, as do automation and control.
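The "terminal condition" point deserves a concrete shape. Below is a minimal sketch, in Python, of what bounded automation might look like; the policy names, limits, and notify() hook are hypothetical, but the pattern – an iteration cap, hard floors and ceilings, and escalation to a human when a bound is hit – is the point.

```python
# Minimal sketch of a bounded automation loop. The policy names, limits, and
# notify() hook are hypothetical; the point is the terminal conditions that
# keep two reactive policies from chasing each other indefinitely.

MAX_ADJUSTMENTS_PER_HOUR = 5          # terminal condition: iteration cap
BANDWIDTH_FLOOR_MBPS = 100            # terminal condition: lower bound
BANDWIDTH_CEILING_MBPS = 1000         # terminal condition: upper bound

def adjust_bandwidth_cap(current_mbps, performance_degraded, adjustments_made):
    """Return a new bandwidth cap, or None to stop and escalate to a human."""
    if adjustments_made >= MAX_ADJUSTMENTS_PER_HOUR:
        notify("automation suspended: adjustment limit reached")   # escalate
        return None
    proposed = current_mbps * (1.25 if performance_degraded else 0.9)
    # Clamp to explicit bounds instead of trusting the feedback loop.
    proposed = max(BANDWIDTH_FLOOR_MBPS, min(BANDWIDTH_CEILING_MBPS, proposed))
    if proposed == current_mbps:
        notify("policy already at its limit; manual review required")  # escalate
        return None
    return proposed

def notify(message):
    print(f"[ops alert] {message}")    # stand-in for a real alerting system
```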
RELATED ARTICLES & BLOGS

Amazon's $23,698,655.93 book about flies
Solutions are Strategic. Technology is Tactical.
What CIOs Can Learn from the Spartans
What is a Strategic Point of Control Anyway?
Cloud is the How not the What
Cloud Control Does Not Always Mean 'Do it yourself'
The Strategy Not Taken: Broken Doesn't Mean What You Think It Means
Data Center Feng Shui: Process Equally Important as Preparation
Some Services are More Equal than Others
The Battle of Economy of Scale versus Control and Flexibility

Provisioning a Virtual Network is Only the Beginning
Deploying a virtual network appliance is the easy part; it's the operational management that's hard.

The buzz and excitement over VMware's announcement of its new products at VMworld was high, and for a brief moment there was a return to focusing on the network. You know, the large portion of the data center that provides connectivity and enables collaboration; the part that delivers applications to users (which really is the point of all architectures). Unfortunately the buzz reared up and overtook that focus with yet another round of double rainbow guy commentary regarding how cool and great it's going to be when the network is virtualized and is "flexible" and "rapidly provisioned" and "cheap." Two out of three ain't bad, I guess.

As noted by open-source management provider Zenoss in a forthcoming survey, a lot of the folks (more than 70 percent) actually doing the work of managing a virtualized environment "prefer tools that manage their entire infrastructure as opposed to a virtualization-specific solution." Interestingly, the author of the aforementioned article echoes the belief that the "killer" application for cloud computing is tooling, i.e. management. So let's get our heads out of the clouds for a minute and think about this realistically. There are, after all, two different sets of concerns regarding network and application network infrastructure in the data center, and only one of them is addressed by the vision of a completely virtualized data center. The other requires a deeper management strategy and dynamic infrastructure components.

THE TWO FACES of CLOUD and DYNAMIC DATA CENTERS

There are two parts to a dynamic data center:

1. Deployment
2. Execution

What virtualization of the infrastructure makes easy is the tasks associated with number one: deployment. The "flexibility" touted by proponents of virtual network appliance-comprised architectures speaks only to the deployment flexibility of traditional hardware-based network components. There's almost no discussion of the flexibility of the network infrastructure component itself (which is just as, if not more, important to a dynamic data center) or of the way in which components (virtual or iron) will be integrated with the rest of the infrastructure.

No, no they don't – and "integration" with a management or orchestration system for the purposes of provisioning and initial configuration is (again) only half (or less) of the picture. The use of something like VMware's vCloud API will certainly get a virtual network appliance deployed (a.k.a. rapidly provisioned), and if you need to move it or launch more, there's no better way to integrate the operational procedures associated with those tasks. But if you need to deploy a new web application firewall policy, that's a way different story. The vCloud API is generalized; it's not necessarily going to have the specific means by which you can deploy – and subsequently codify the appropriate application of – that policy on any given network component. And even if it did, at this point it's going to be very general, as in the policy will still have to be specific to the component.

Managing a network infrastructure component is not the same as managing a virtual machine. Moving around a VM is easy; moving around what's in that VM is not. If you move across network partitions (VLANs, subnets, networks) you're going to have to reconfigure the component. Period.
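To make the deployment-versus-execution split concrete, here is a deliberately tiny sketch (hypothetical names, plain Python dicts standing in for real objects – not the vCloud API or any vendor's interface). Relocating the container is one line; everything after it is the component-specific reconfiguration that the generic provisioning call knows nothing about.

```python
# Illustrative sketch only (hypothetical names, not a real orchestration or
# vendor API): a generic "move the VM" step does not reconfigure what's inside.

def migrate_appliance(appliance, new_segment):
    """appliance and new_segment are plain dicts standing in for real objects."""
    # Step 1: deployment. A generalized provisioning API can relocate the VM
    # container to a new host / VLAN / subnet.
    appliance["segment"] = new_segment["name"]

    # Step 2: execution. The component inside the VM now sits on a different
    # network partition and must be reconfigured via its own management
    # interface: new self-IP, new gateway, and its policies re-applied.
    appliance["self_ip"] = new_segment["free_ips"].pop()
    appliance["gateway"] = new_segment["gateway"]
    appliance["policies_reapplied"] = ["web-app-firewall-policy"]
    return appliance

waf = {"name": "waf-01", "segment": "vlan-10", "self_ip": "10.0.10.5"}
target = {"name": "vlan-20", "gateway": "10.0.20.1", "free_ips": ["10.0.20.5"]}
print(migrate_appliance(waf, target))
```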
It is the management of that aspect of a network infrastructure component – physical or virtual – that is problematic and that is not addressed by virtualization.

ADAPTATION at RUN-TIME

The second face of a dynamic data center is the adaptability, and the capability to collaborate (share context), of the infrastructure itself. Without the ability of the infrastructure to essentially "reconfigure" itself during execution, what you end up with is the means to rapidly deploy and migrate a static infrastructure. If the behavior of each of the networking components deployed as virtual network appliances is codified in a rigid manner and does not automatically adapt based on the context of the environment and the applications it is delivering, it is static. It is the adaptability of the infrastructure that makes it dynamic, not the way in which it is deployed.

Example: you deployed a load balancer as a virtual network appliance. It can be migrated and even scaled out using virtual data center management technology. It can be deployed on-demand. It is now flexible. But while it's running, how do you add new resources to a pool? Remove resources from a pool? How do you ensure that users accessing applications in that environment over a high-latency WAN experience the best possible response time? How do you virtually patch a platform-level vulnerability to prevent exploitation while the defect is addressed? How do you marry the very different delivery requirements of a mobile device with those of a LAN-attached desktop browser? How do you integrate it with the management platform so you can manage it and not just its virtual container? More important, perhaps, to the operational (devops) folks: how do you adapt to the changing application environment? As applications are being launched and decommissioned, how does one instruct the infrastructure to modify its configuration to the "new" environment?

It is this adaptation, this automation, that provides the greatest value in a highly virtualized or cloud computing environment, because this is where the rubber meets the road. The rapid provisioning of components requires the rapid adaptation of supporting network and application network infrastructure as a means to eliminate the cost – in dollars and time – of manually adapting infrastructure configuration to the "real-time" configuration and needs of applications.
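As a rough illustration of what run-time adaptation means in practice, the sketch below uses a toy, hypothetical management interface (not any specific product's API) to reconcile a load balancer pool with the application instances that actually exist at a given moment.

```python
# Minimal sketch of run-time adaptation (hypothetical management interface):
# as application instances are launched or decommissioned, the load balancer's
# pool is updated to match, without redeploying anything.

class LoadBalancerPool:
    """Toy stand-in for a load balancer's management interface."""
    def __init__(self, name):
        self.name = name
        self.members = set()

    def sync(self, live_instances):
        """Reconcile pool membership with the instances that actually exist."""
        desired = {f"{ip}:80" for ip in live_instances}
        for member in desired - self.members:
            self.members.add(member)            # new instance -> add to pool
            print(f"{self.name}: added {member}")
        for member in self.members - desired:
            self.members.discard(member)        # decommissioned -> remove
            print(f"{self.name}: removed {member}")

pool = LoadBalancerPool("web-app-pool")
pool.sync(["10.0.20.11", "10.0.20.12"])   # scale-out event
pool.sync(["10.0.20.12"])                 # an instance was decommissioned
```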
CLOUD is ABOUT APPS and OPS

Rich Miller summed it up well when he said "cloud is all about ops and apps." It is not about any single technology; it's a means to deliver and scale applications in a way that is more efficient and more affordable than it's ever been. But in order to achieve that efficiency and that reduction in costs, the focus is necessarily on ops and the infrastructure they are tasked with deploying and subsequently managing. While virtualization certainly addresses many of the challenges associated with the former, it does almost nothing to ease the costs and effort required for the latter.

Consider the additional layer of networking abstraction introduced by virtualization. That has to be managed. For every IP address you add in the virtualization layer (and that's in addition to the IP addresses already used and required by the applications and infrastructure), the cost of managing every other IP address already in service also increases. The cost of IP address management is a linear function of the number of IP addresses in use. And if you're going to be managing that virtual machine via a management system, it's going to have at least one IP address itself.

Increasing the cost of IP address management is exactly the opposite of what the new network and a dynamic infrastructure, a.k.a. Infrastructure 2.0, is supposed to be producing. This is not solving the diseconomy-of-scale problem introduced by virtualization and cloud computing so often referenced by Greg Ness; it is the problem. Virtualization is making it easier to deploy and even scale applications and is lowering CAPEX, but in doing so it is introducing additional complexity that can only be addressed by a solid, holistic management strategy – one that embraces integration across the entire infrastructure. That does not, by the way, yet exist. But it's coming.

In a services-based infrastructure, which is what a dynamic infrastructure strategy is trying to achieve, the "platform" is less important than the services that are provided (and how they are integrated). It is not virtualization that makes a network fluid instead of brittle; it is the services and the way in which they adapt to the environment to ensure availability, security, and a high-performing delivery system. Virtualization is a means to an end, not the end itself. It is not addressing the operational needs of a highly fluid and volatile environment. Virtualization is not making it any easier to manage the actual components or behavior of the network; it's just making it easier to deploy them.

RELATED ARTICLES & BLOGS

Survey: Virtualization and cloud need management
The IT-as-a-Service Evolution: What Does it Mean for Your Job?
And the Killer App for Private Cloud Computing Is…
Infrastructure 2.0: Aligning the network with the business (and ...

Call Me Crazy but Application-Awareness Should Be About the Application
I recently read a strategic article about how networks were getting smarter. The deck of the article claimed, "The app-aware network is advancing. Here's how to plan for a network that's much more than a dumb channel for data." So far, so good. I agree with this wholeheartedly and sat back, expecting to read something astoundingly brilliant regarding application awareness. I was, to say the least, not just disappointed but really disappointed by the time I finished the article.

See, I expected that at some point applications would enter the picture. But they didn't. Oh, there was a paragraph on application monitoring and its importance to app-aware networks, but it was almost an offhanded commentary that was out of place in a discussion described as being about the "network." There was, however, a discussion of 10 Gbps networking, then some other discussion of CPU and RAM and memory (essentially server or container concerns, not the application), and finally some words on the importance of automation and orchestration. Applications and application-aware networking were largely absent from the discussion. That makes baby Lori angry.

Application-aware networking is about being able to understand an application's data and its behavior. It's about recognizing that some data is acceptable for an application and some data is not – at the parameter level. It's about knowing the application well enough to make adjustments to the way in which the network handles requests and responses dynamically, to ensure the performance and security of that application. It's about making the network work with and for, well, the application.
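The "parameter level" point is the crux. As a minimal illustration (a generic sketch, not any particular vendor's implementation), an application-aware policy knows which parameters a request may carry and what values are acceptable for each – something a port-and-protocol view of the network can never express. The parameter names below are hypothetical.

```python
# Generic sketch of parameter-level awareness: the policy encodes what the
# application actually expects, so anything outside it can be rejected or
# flagged before it reaches the application. Parameter names are hypothetical.
import re

POLICY = {
    "account_id": re.compile(r"^\d{1,10}$"),        # numeric, bounded length
    "sort":       re.compile(r"^(asc|desc)$"),      # enumerated values only
    "q":          re.compile(r"^[\w\s\-]{0,64}$"),  # constrained search string
}

def inspect_request(params):
    """Return a list of violations for a dict of query/form parameters."""
    violations = [f"unexpected parameter: {name}" for name in params if name not in POLICY]
    for name, pattern in POLICY.items():
        if name in params and not pattern.match(params[name]):
            violations.append(f"bad value for {name}: {params[name]!r}")
    return violations

print(inspect_request({"account_id": "42", "sort": "asc"}))          # []
print(inspect_request({"account_id": "42 OR 1=1", "debug": "true"})) # flagged
```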
Now is the conference of our discontent…

Talking about standards apparently brings out some very strong feelings in a whole lot of people. From "it's too early" to "we need standards now" to "meh, standards will evolve where they are necessary," some of the discussions at CloudConnect this week were tinged with a bit of hostility toward, well, standards in general and the folks trying to define them. In some cases the hostility was directed toward the fact that we don't have any standards yet. [William Vambenepe has a post on the subject, having been one of the folks hostility was directed toward during one session.]

Lee Badger, Computer Scientist at NIST, during a panel on "The Standards Real Users Need Now," offered a stark reminder that standards take time. He pointed out the 32 months it took to define and reach consensus on the ASCII standard, and the more than ten years it took to complete POSIX. Then Lee reminded us that "cloud" is more like POSIX than ASCII. Do we have ten years? Ten years ago we couldn't imagine that we'd be here with Web 2.0 and cloud computing, so should we expect that in ten years we'll still be worried about cloud computing? Probably not.

The problem isn't that people don't agree standards are a necessary thing; the problem appears to be agreeing on what needs to be standardized and when and, in some cases, who should have input into those standards. There are at least three different constituents interested in standards, and they are all interested in standards for different reasons, which of course leads to different views on what should be standardized.

Mobility Can Be a Pain in the aaS
What do a 2-year-old and cloud-based applications have in common?

The Toddler has recently decided that he can navigate the stairs by himself. Insists on it, in fact. That's a bit nerve-wracking, especially when he decides that 2:30am is a good time to get up, have a snack, and recreate a Transformers battle in the family room. It's worse when you're asleep and don't know about it. Oh, eventually you hear him, and you get up and try to convince him it's time for sleep (see? all the grown-ups are doing it), but it takes a while before he finally agrees and you can climb back into bed yourself.

Mobility. It's a double-edged sword that can bite not only parents of Toddlers testing out their newly discovered independence but also the operators and administrators trying to deal with applications that, thanks to virtualization, have discovered they have wings – and they want to use them.

Interoperability between clouds requires more than just VM portability
The issue of application state and connection management is one often discussed in the context of cloud computing and virtualized architectures. That's because the stress placed on existing static infrastructure by the potentially rapid rate of change associated with dynamic application provisioning is enormous and, as is often pointed out, existing "infrastructure 1.0" systems are generally incapable of reacting in a timely fashion to such changes occurring in real-time. The most basic of concerns continues to revolve around IP address management. This is a favorite topic of Greg Ness at Infrastructure 2.0 and has been subsequently addressed in a variety of articles and blogs since the concepts of cloud computing and virtualization gained momentum.

The Burton Group has addressed this issue with regard to interoperability in a recent post, positing that perhaps changes are needed (agreed) to support emerging data center models. What is interesting is that the blog supports the notion of modifying existing core infrastructure standards (IP) to support the dynamic nature of these new models and also posits that interoperability is essentially enabled simply by virtual machine portability.

From The Burton Group's "What does the Cloud Need? Standards for Infrastructure as a Service":

    First question is: How do we migrate between clouds? If we're talking System Infrastructure as a Service, then what happens when I try to migrate a virtual machine (VM) between my internal cloud running ESX (say I'm running VDC-OS) and a cloud provider who is running XenServer (running Citrix C3)? Are my cloud vendor choices limited to those vendors that match my internal cloud infrastructure? Well, while it's probably a good idea, there are published standards out there that might help. Open Virtualization Format (OVF) is a meta-data format used to describe VMs in standard terms. While the format of the VM is different, the meta-data in OVF can be used to facilitate VM conversion from one format to the other, thereby enabling interoperability.

    ...

    Another biggie is application state and connection management. When I move a workload from one location to another, the application has made some assumptions about where external resources are and how to get to them. The IP address the application or OS use to resolve DNS names probably isn't valid now that the VM has moved to a completely different location. That's where Locator ID Separation Protocol (LISP – another overloaded acronym) steps in. The idea with LISP is to add fields to the IP header so that packets can be redirected to the correct location. The "ID" and the "locator" are separated so that the packet with the "ID" can be sent to the "locator" for address resolution. The "locator" can change the final address dynamically, allowing the source application or OS to change locations as long as they can reach the "locator". [emphasis added]

If LISP sounds eerily familiar to some of you, it should. It's the same basic premise behind UDDI and the process of dynamically discovering the "location" of service end-points in a service-based architecture. Not exactly the same, but the core concepts are the same. The most pressing issue with proposing LISP as a solution is that it focuses only on the problems associated with moving workloads from one location to another, with the assumption that the new location is, essentially, a physically disparate data center and not simply a new location within the same data center – a case LISP does not even consider.
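The locator/ID indirection the Burton Group describes is easier to see in miniature. The sketch below is a conceptual illustration only – a toy mapping service in Python, not LISP's actual header manipulation – showing how a stable identifier can keep resolving to a workload whose locator changes as it moves.

```python
# Toy illustration of locator/ID separation: consumers hold a stable ID and
# ask a mapping service for the current locator; when the workload moves,
# only the mapping changes. This mimics the concept, not the LISP protocol.

class MappingService:
    def __init__(self):
        self._locators = {}                 # stable ID -> current locator

    def register(self, workload_id, locator):
        self._locators[workload_id] = locator

    def resolve(self, workload_id):
        return self._locators[workload_id]  # callers never hard-code this

maps = MappingService()
maps.register("app-123", "10.1.1.10")       # running in data center A
print("send to", maps.resolve("app-123"))   # -> 10.1.1.10

maps.register("app-123", "172.16.5.20")     # workload migrated to provider B
print("send to", maps.resolve("app-123"))   # -> 172.16.5.20; ID unchanged
```

Note that this only solves reachability for the thing that moved; as argued below, the rest of the supporting infrastructure still has to learn the new locator too.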
That it also ignores other application networking infrastructure that requires the same information – that is, the new location of the application or resource – is also disconcerting, but not a roadblock; it's merely a speed bump on the road to implementation. We'll come back to that later. First, let's examine the emphasized statement, which seems to imply that simply migrating a virtual image from one provider to another equates to interoperability between clouds – specifically IaaS clouds.

I'm sure the author didn't mean to imply that it's that simple – that all you need is to be able to migrate virtual images from one system to another. I'm sure there's more to it, or at least I'm hopeful that this concept was expressed so simply in the interests of brevity rather than completeness, because there's a lot more to porting any application from one environment to another than just the application itself. Applications, and therefore virtual images containing applications, are not islands. They are not capable of doing anything without a supporting infrastructure – application and network – and some of that infrastructure is necessarily configured in such a way as to be peculiar to the application, and vice versa. We call it an "ecosystem" for a reason: because there's a symbiotic relationship between applications and their supporting infrastructure that, when separated, degrades or even destroys the usability of that application.

One cannot simply move a virtual machine from one location to another, regardless of the interoperability of virtualization infrastructure, and expect things to magically work unless all of the required supporting infrastructure has also been migrated as seamlessly. And this infrastructure isn't just hardware and network infrastructure; authentication and security systems, too, are an integral part of an application deployment. Even if all the necessary components were themselves virtualized (and I am not suggesting this should be the case at all), simply porting the virtual instances from one location to another is not enough to assure interoperability, as the components must be able to collaborate, which requires connectivity information. Which brings us back to the problems associated with LISP and its focus on external discovery and location.

There's just a lot more to interoperability than pushing around virtual images, regardless of what those images contain: application, data, identity, security, or networking. Portability of virtual images is a good start, but it certainly isn't going to provide the interoperability necessary to ensure the seamless transition from one IaaS cloud environment to another.

RELATED ARTICLES & BLOGS

Who owns application delivery meta-data in the cloud?
More on the meta-data menagerie
The Feedback Loop Must Include Applications
How VM sprawl will drive the urgency of the network evolution
The Diseconomy of Scale Virus
Flexibility is Key to Dynamic Infrastructure
The Three Horsemen of the Coming Network Revolution
As a Service: The Many Faces of the Cloud