private cloud
Behind the Scenes: The F5 Private Cloud Solution Package for Red Hat OpenStack – Measuring Success
F5 and Red Hat have just released our Private Cloud Solution Package for OpenStack. There are all kinds of marketing pieces that describe this package, what it includes, how to buy it, and what the target market looks like. I wanted to take a few minutes and tell you why I think this is such a cool offering. But first, let me steal the description paragraph from the Deployment Guide:

F5 Private Cloud Solution Package for OpenStack

“The F5 private cloud solution package for OpenStack provides joint certification and testing with Red Hat to orchestrate F5® BIG-IP® Application Delivery Controllers (ADCs) with OpenStack Networking services. The validated solutions and use cases are based on customer requirements utilizing BIG-IP ADC and OpenStack integrations. F5’s OpenStack LBaaSv2 integration provides under-the-cloud L4–L7 services for OpenStack Networking tenants. F5’s OpenStack orchestration (HEAT) templates provide over-the-cloud, single-tenant onboarding of BIG-IP virtual edition (VE) ADC clusters and F5 iApps® templating for application services deployment.”

So why did we spend the time to develop this Private Cloud Solution Package for Red Hat OpenStack Platform v9? And why do I think it is valuable? Well, for several different reasons.

First, if you are like me, six months ago I had no idea where to start. How do I build an OpenStack cloud so I can test and understand OpenStack? Do I build a Faux-penStack in a single VM on my laptop? Do I purchase a lab’s worth of machines to build it out? Those were my initial questions, and they seem to be the questions a lot of enterprises are also facing. In a study commissioned by SUSE, 50% of those who had started an OpenStack initiative reported that it had failed. The fact is, OpenStack is difficult. There are many, many options. There are many, many configuration files. Until you are grounded in what each does and how they interact, it all seems like a bunch of gibberish.

So, we first created this Private Cloud Solution Package with Red Hat to provide that starting point. It is intended to be the ‘You Are Here’ marker for a successful deployment of a real-world production cloud that meets the needs of an enterprise. The deployment guide marries the Red Hat install guide with specific instruction gained through setting up our Red Hat OpenStack Platform many times. The aim isn’t to provide answers to questions about each configuration option, or to provide multiple paths with a decision tree for options that differ and often conflict with each other. Our guidance is specific and prescriptive. We wanted a documentation path that ensures a functioning cloud: follow it step by step, using the variable inputs we did, and you will end up with a validated, known-good cloud. I hope you will find the effort we put into it to be of value.

John, Mark, Dave, and I (our “Pizza team,” as John called it, although we didn’t eat any pizza since we were all working remotely from one another) spent many hours getting to the point where we could create documentation that, when followed, produces a reproducible, functioning, and validated Red Hat OpenStack Platform v9 overcloud with F5 LBaaS v2 functionality connected to a pair of F5’s new iSeries 5800 appliances. Get that: REPRODUCIBLE and VALIDATED. We can wipe our overcloud away and redeploy it in our environment in 44 minutes. That includes reinstalling the LBaaS components and validating that they are working using OpenStack community tests. VALIDATED.
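To make the LBaaS v2 piece concrete, here is a minimal sketch of what the tenant-facing flow looks like once an LBaaS v2 provider such as F5’s agent and driver is in place: the tenant talks only to the Neutron LBaaS v2 API, and the integration realizes the objects on BIG-IP behind the scenes. The sketch drives the Mitaka-era neutron CLI from Python; the subnet name, member addresses, and ports are hypothetical, and it assumes an overcloudrc has already been sourced.

```python
#!/usr/bin/env python
"""Sketch: provision a Neutron LBaaS v2 load balancer as a tenant would.

Assumes a Mitaka-era cloud with an LBaaS v2 provider configured (here,
F5's agent and driver) and OS_* credentials already sourced in the shell.
Subnet and member details below are made up for illustration.
"""
import subprocess


def neutron(*args):
    # Shell out to the Mitaka-era neutron CLI; check=True raises on failure.
    subprocess.run(("neutron",) + args, check=True)


# 1. The load balancer itself (realized on BIG-IP by the F5 driver).
neutron("lbaas-loadbalancer-create", "--name", "web-lb", "web-subnet")

# 2. An HTTP listener on port 80.
neutron("lbaas-listener-create", "--name", "web-listener",
        "--loadbalancer", "web-lb",
        "--protocol", "HTTP", "--protocol-port", "80")

# 3. A round-robin pool attached to the listener.
neutron("lbaas-pool-create", "--name", "web-pool",
        "--listener", "web-listener",
        "--protocol", "HTTP", "--lb-algorithm", "ROUND_ROBIN")

# 4. Two hypothetical application servers as pool members.
for address in ("10.0.0.11", "10.0.0.12"):
    neutron("lbaas-member-create", "--subnet", "web-subnet",
            "--address", address, "--protocol-port", "80", "web-pool")
```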
That is the second reason we spent all this time creating this solution package. We wanted to define a way for our customers, for the Red Hat and F5 support organizations, and for our two awesome Professional Services organizations to KNOW that what has been built is validated and will function.

To that end, as part of the package development we created a test and troubleshooting Docker container, and have released it on GitHub. This container bundles up all the software requirements, at the specific versions required, to run the community OpenStack tempest tests against any newly installed Red Hat OpenStack Platform environment. These tests let you know definitively whether the cloud was built correctly. We’ll run a set of tests against your cloud installation, assuring networking is working, before we install any F5 service components. We’ll run a set of tests after we install the F5 service components, assuring the proper functionality of the services we provide. We’ll leave you with the test results, in a common testing format, as documentation that your cloud tenants should be good to go. As we develop certified use cases of our technologies with our customers, we’ll write tests for those and run them too. Cool, huh? This is DevOps after all. It’s all about tested solutions.

You don’t have to wait for our professional services to test your own cloud. By default, the test client includes all of the community tempest tests needed to validate LBaaS v2 on Liberty Red Hat OSP v8 (just for fun, and to demonstrate the extensibility of the toolset) and Mitaka Red Hat OSP v9. Not only does it include the tests and the toolsets to run them; John even created a script that generates the required tempest configuration files on the fly. Simply provide your overcloudrc file and the environment will be interrogated and the proper settings added to the config files. Again, cool. Testing is king, and we’re doing our best to hand out crowns.
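The idea behind that on-the-fly config generation can be approximated with nothing but the standard library: read the OS_* variables an overcloudrc exports and write them into the identity settings tempest expects. This is only a hedged sketch of the concept, not the actual script from the container; the option names shown assume Mitaka-era tempest, and the real tool interrogates the cloud for far more (networks, images, enabled services).

```python
#!/usr/bin/env python
"""Sketch: derive a minimal tempest config from a sourced overcloudrc.

Illustrative only -- the real generator in the F5 test container discovers
much more about the environment. Option names assume Mitaka-era tempest.
"""
import configparser
import os

conf = configparser.ConfigParser()
conf["identity"] = {
    # Map the variables every overcloudrc exports onto tempest options.
    "uri": os.environ["OS_AUTH_URL"],
    "admin_username": os.environ["OS_USERNAME"],
    "admin_password": os.environ["OS_PASSWORD"],
    "admin_tenant_name": os.environ.get("OS_TENANT_NAME", "admin"),
}

with open("tempest.conf", "w") as out:
    conf.write(out)
print("Wrote tempest.conf")
```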
Private Cloud is not a Euphemism for Managing Hardware

#cloud #infosec #devops Private cloud is about management, but not about hardware.

As with every technology, definitions almost immediately become muddled when it becomes apparent that the technology is going to "change the world." SDN is currently suffering from this phenomenon, and it appears that cloud continues to suffer from it as well. Let me present Exhibit AAAA, from "Which Cloud Delivery Model is Right for Your Business?": "Private clouds are great solutions for organizations looking to keep their hardware locally managed."

The association of "private cloud" with "hardware" is misguided and, in most instances, just plain wrong. Organizations implementing or planning on implementing private cloud (or on-premise cloud) are not doing so because they can't bear to part with their hardware. What they can't bear to part with is control. Control over security, over performance, over availability. Control over data and access. Control over their own destiny. Private cloud, assuming an organization adopts the model fully, affords IT the same benefits of a service-focused approach to resource management as does public cloud. The difference is solely in who incurs the expense of maintaining and managing the hardware.

In fact, the statement above would be more properly expressed as "private clouds are great solutions for organizations in spite of keeping hardware locally managed." It is unlikely any organization wants to continue maintaining and managing hardware. In addition to management overhead, it comes with other baggage such as heating, cooling, and power costs (not to mention the liability insurance against tripping over cables in the data center). But when measured against the weight of losing control over policies (particularly those related to compliance and security) as well as access to what most would consider standard application services (acceleration, optimization, programmability, identity management), the overhead from managing hardware locally just can't win.

And no matter how ambitious, no organization with an existing data center is going to initiate a project that includes a wholesale transition to public cloud. Whether it's integration concerns, costs associated with transitioning legacy applications to a cloud-friendly architecture, or other application-related issues arising from such a transition, non-green-field organizations are simply not going to wholesale pick up their toys and move to a public cloud. But that doesn't mean those organizations or their leadership are immune to recognizing and desiring the value inherent in the cloud computing model. Private cloud models can afford organizations the same benefits as public cloud, minus the cost savings that come from economy of scale. And even if private cloud can only realize half the cost savings of public cloud, so what? When measured against potential losses from unacceptable risk, that's likely quite the deal.

Private cloud is simply not a euphemism for managing hardware, any more than public cloud is a euphemism for not managing hardware.
Totally Unscientific SDN and Cloud Survey Results

#SDN #Cloud #F5 #agility2013 #SDDC #devops We asked some questions during SDN sessions at F5's Agility conference. Here's what we discovered about ... everything that's a buzzword.

Before you get any further, let me reiterate: these results are totally unscientific, and the sample size is rather small, as it was taken from two sessions at the conference focusing on SDN and application services. The good news is that the results pretty much mirror every other survey and research effort with respect to devops, cloud, and SDN.

One of my favorite questions is meant to determine whether cloud is being adopted because it makes sense or because it's a mandate from on high. As expected, the majority (62%) of respondents indicated the adoption of cloud in their organization was... a company mandate.

Some interesting points I pulled out specifically around current data center initiative priorities:

- 25% of respondents are migrating to a devops model of application deployment
- 50% of respondents are incorporating SaaS offerings into solutions sold by the organization
- 65% of respondents are focused on implementing a software defined data center (SDDC) using virtualized resources
- Despite all the hype around mobile applications, only 20% of respondents have a mobile "first" strategy for applications and infrastructure

When looking at responses with regard to private cloud deployments:

- 24% say a full service deployment is in place
- 17% of respondents have a pilot deployment in place
- 19% of respondents are putting plans together with no solid timeline for deployment
- Only 12% had no plans at all for private cloud

Of the popular private cloud platforms:

- 86% indicated they were using VMware
- 22% are adopting OpenStack
- 11% have elected to use CloudStack

And of course the one you've been waiting for: SDN

- A whopping 41% have no plans to deploy SDN
- 36% of respondents are putting plans together but have no solid timeline for deployment
- A somewhat surprising 10% of respondents have a pilot SDN deployment in place
The Inevitable Eventual Consistency of Cloud Computing

An IDC survey highlights the reasons why private clouds will mature before public, leading to the eventual consistency of public and private cloud computing frameworks.

Network Computing recently reported on a very interesting research survey from analyst firm IDC. This one was interesting because it delved into concerns regarding public cloud computing in a way that most research surveys haven’t, including asking respondents to weight their concerns as they relate to application delivery from a public cloud computing environment. The results? Security, as always, tops the list. But close behind are application delivery related concerns such as availability and performance.

Network Computing – IDC Survey: Risk In The Cloud: “While growing numbers of businesses understand the advantages of embracing cloud computing, they are more concerned about the risks involved, as a survey released at a cloud conference in Silicon Valley shows. Respondents showed greater concern about the risks associated with cloud computing surrounding security, availability and performance than support for the pluses of flexibility, scalability and lower cost, according to a survey conducted by the research firm IDC and presented at the Cloud Leadership Forum IDC hosted earlier this week in Santa Clara, Calif. ... However, respondents gave more weight to their worries about cloud computing: 87 percent cited security concerns, 83.5 percent availability, 83 percent performance and 80 percent cited a lack of interoperability standards.”

The respondents rated the risks associated with security, availability, and performance higher than the always-associated benefits of public cloud computing: lower costs, scalability, and flexibility. That ultimately results in a reluctance to adopt public cloud computing, and is likely driving these organizations toward private cloud computing, because public cloud can’t or won’t at this point address these challenges, but private cloud computing can and is – by architecting a collection of infrastructure services that can be leveraged by (internal) customers on an application-by-application (and sometimes request-by-request) basis.

PRIVATE CLOUD will MATURE FIRST

What will ultimately bubble up and become more obvious to public cloud providers is customer demand. Clouderati like James Urquhart and Simon Wardley often refer to this process as commoditization or standardization of services. These services – at the infrastructure layer of the cloud stack – will necessarily be driven by customer demand; by the market. Because customers right now are not fully exercising public cloud computing as they would their own private implementation – replete with infrastructure services, business-critical applications, and adherence to business-focused service level agreements – public cloud providers are at a bit of a disadvantage. The market isn’t telling them what they want and need, so public cloud providers are left to fend for themselves. Or they may be pandering necessarily to the needs and demands of the few customers that have fully adopted their platform as their data center du jour.

Internal to the organization there is a great deal more going on than some would like to admit.
Organizations have long since abandoned even the pretense of caring about the definition of “cloud” and whether or not there exists such a thing as “private” cloud, and have forged their way forward past “virtualization plus” (a derogatory and dismissive term often used by some public cloud providers to describe such efforts) and into the latter stages of the cloud computing maturity model.

Internal IT organizations can and will solve the “infrastructure as a service” conundrum because they necessarily have a smaller market to address. They have customers, but it is a much smaller and better-defined set of customers they must support, and thus they are able to iterate over the development processes and integration efforts necessary to get there much more quickly and without as much disruption. Their goal is to provide IT as a service, offering a repertoire of standardized application and infrastructure services that can easily be extended to support new infrastructure services. They are, in effect, building their own cloud frameworks (stacks) upon which they can innovate and extend as necessary. And as they do so they are standardizing, whether by conscious effort or as a side effect of defining their frameworks. But they are doing it, regardless of those who might dismiss their efforts as “not real cloud.”

When you get down to it, enterprise IT isn’t driven by adherence to some definition put forth by pundits. They’re driven by a need to provide business value to their customers at the best possible “profit margin” they can. And they’re doing it faster than public cloud providers, because they can.

WHEN CLOUDS COLLIDE - EVENTUAL CONSISTENCY

What that means is that in a relatively short amount of time, as measured by technological evolution at least, the “private clouds” of customers will have matured to the point that they are ready to adopt a private/public (hybrid) model and really take advantage of the public, cheap, compute-on-demand that’s so prevalent in today’s cloud computing market. Not just use public clouds as inexpensive development or test playgrounds, but integrate them as part of a global application delivery strategy.

The problem then is aligning the models and APIs and frameworks that have grown up in each of the two types of clouds. Like the concept of “eventual consistency” with regard to data, databases, and replication across clouds (intercloud), the same “eventual consistency” theory will apply to cloud frameworks. Eventually there will be a standardized (consistent) set of infrastructure services and network services and frameworks through which such services are leveraged. Oh, at first there will be chaos and screaming and gnashing of teeth as the models bump heads, but as more organizations and providers work together to find the common ground between them, they’ll find that just like the peanut butter and chocolate in a Reese’s Peanut Butter Cup, the two disparate architectures can “taste better together.”

The question that remains is which standardization will be the one with which others must become consistent. Without consistency, interoperability and portability will remain little more than a pipe dream. Will it be standardization driven by the customers, à la the Enterprise Buyer’s Cloud Council? Or will it be driven by providers in an “if you don’t like what we offer, go elsewhere” market? Or will it be driven by a standards committee comprised primarily of vendors with a few “interested third parties”?
Public, Private and Enterprise Cloud: Economy of Scale versus Efficiency of Scale
What distinguishes these three models of cloud computing is the set of business and operational goals for which they were implemented, and the benefits derived.

A brief Twitter conversation recently asked how one would distinguish between the three emerging dominant cloud computing models: public, private, and enterprise. Interestingly, if you were to take a "public cloud" implementation and transplant it into the enterprise, it is unlikely to deliver the value IT was expecting. Conversely, transplanting a private cloud implementation to a public provider would similarly fail to achieve the desired goals. When you dig into it, the focus of the implementation – the operational and business goals – plays a much larger role in distinguishing these models than any technical architecture could.

Public cloud computing is also often referred to as "utility" computing. That's because its purpose is to reduce the costs associated with deployment and subsequent scalability of an application. It's about economy of scale – for the customer, yes, but even more so for the provider. The provider is able to offer commoditized resources at a highly affordable rate because of the scale of its operations. The infrastructure – from the network to the server to the storage – is commoditized. It's all shared resources that combine to form the basis for an economically viable business model in which resources are scaled out on demand with very little associated effort. There is very little or no customization (read: alignment of process with business/operational goals) available, because economy of scale is achieved by standardizing as much as possible and limiting interaction.

Enterprise cloud computing is not overly concerned with scalability of resources but is rather more focused on the efficiency of resources, both technological and human. An enterprise cloud computing implementation has the operational and business goal of enabling a more agile IT that serves its customers (business and IT) more efficiently and with greater alacrity. Enterprise cloud computing focuses on efficient provisioning of resources and on automating operational processes such that deployment of applications is repeatable and consistent. IT wants to lay the foundation for IT as a Service. Public cloud computing wants to lay the foundation for resources as a service. Nowhere is that difference more apparent than when viewed within the scope of the data center as a whole.

Private cloud computing, if we're going to differentiate, is the hybrid model: the model wherein IT incorporates public cloud computing as an extension of its data center and, one hopes, its own enterprise cloud computing initiative. It's the use of economy of scale to offset costs associated with new initiatives and scalability of existing applications without sacrificing the efficiency of scale afforded by process automation and integration efforts. It's the best of both worlds: utility computing resources that can be incorporated and managed as though they are enterprise resources.

Public and enterprise cloud computing have different goals and therefore different benefits. Public cloud computing is about economy of scale of resources and commoditized operational processes. Forklifting a model such as AWS into the data center would be unlikely to succeed. The model assumes no integration or management of resources via traditional or emerging means, and in fact the model as implemented by most public cloud providers would inhibit such efforts.
Public cloud computing assumes that scale of resources is king, and at that it excels. Enterprise cloud computing, on the other hand, assumes that efficiency is king, and at that, public cloud computing is fair to middling at best. Enterprise cloud computing implementations recognize that enterprise applications are holistic units comprising all of the resources necessary to deploy, deliver, and secure that application. Infrastructure services from the network to the application delivery network to storage and security are not adjunct to the application but are a part of it. Integration with identity and access management services is not an afterthought, but an architectural design. Monitoring and management is not a "green is good, red is bad" icon on a web application, but an integral part of the overall data center strategy.

Enterprise cloud computing is about efficiency of scale: a means of managing growth in ways that reduce the burden placed on people and leverage technology, through process automation and devops, to improve the operational posture of IT in such a way as to enable repeatable, rapid deployment of applications within the enterprise context. That means integration, management, and governance are considered part and parcel of any application deployment. The processes and automation that enable repeatable deployments and dynamic, run-time management (including the proper integration and assignment of operational and business policies to newly provisioned resources) are unique to each organization, because the infrastructure and services comprising the architectural foundation of the data center are unique.

These are two very different sets of goals and benefits and, as such, cannot easily be substituted. They can, however, be conjoined into a broader architectural strategy known as private (hybrid) cloud computing.

PRIVATE CLOUD: EFFICIENT ECONOMY of SCALE

There are, for every organization, a number of applications that are in fact drivers of the need for economy of scale, i.e. a public cloud computing environment. Private (hybrid) cloud computing is a model that allows enterprise organizations to leverage the power of utility computing while addressing the very real organizational need for, at a minimum, architectural control over those resources for integration, management, and cost-containment governance. It is the compromise of cheap resources coupled with control that affords organizations the flexibility and choice required to architect a data center solution that can meet the increasing demand for self-service from its internal customers while addressing ever higher volumes of demand on external-facing applications without substantially increasing costs.

Private (hybrid) cloud computing is not a panacea; it's not the holy grail of cloud computing. But it is the compromise many require to simultaneously address both a need for economy and a need for efficiency of scale. Both goals are of interest to enterprise organizations – as long as their basic needs are met. Chirag Mehta summed it up well in a recent post on CloudAve: "It turns out that IT doesn't mind at all if business can perform certain functions in a self-service way, as long as the IT is ensured that they have underlying control over data and (on-premise) infrastructure." See: Cloud Control Does Not Always Mean 'Do it yourself'.

Control over infrastructure.
It may be that these three simple words are the best way to distinguish between public and enterprise cloud computing after all, because that's ultimately what it comes down to. Without control over infrastructure, organizations cannot effectively integrate and manage their application deployments. Without control over infrastructure, organizations cannot achieve the agility necessary to leverage a dynamic, services-based governance strategy over the performance, security, and availability of applications. Public cloud computing requires that control be sacrificed on the altar of cheap resources. Enterprise and private (hybrid) cloud computing do not. That means the latter are more likely to empower IT to realize the operational and business goals for which it undertook a cloud computing initiative in the first place.
F5 Friday: The Evolution of Reference Architectures to Repeatable Architectures

A reference architecture is a solution with the “some assembly required” instructions missing.

As a developer and later an enterprise architect, I evaluated and leveraged an untold number of “reference architectures.” Reference architectures, in and of themselves, are a valuable resource for organizations, as they provide a foundational framework around which a concrete architecture can be derived and ultimately deployed. As data center architecture becomes more complex, employing emerging technologies like cloud computing and virtualization, this process becomes fraught with difficulty. The sheer number of moving parts and building blocks upon which such a framework must be laid is growing, and it is rarely the case that a single vendor has all the components necessary to implement such an architecture. Integration and collaboration across infrastructure solutions alone, a necessary component of a dynamic data center capable of providing the economy of scale desired, becomes a challenge on top of the expected topological design and configuration of the individual components required to successfully deploy an enterprise infrastructure architecture from the blueprint of a reference architecture.

It is becoming increasingly important to provide not only reference architectures, but repeatable architectures: architectural guidelines that not only provide the abstraction of a reference architecture but offer the kind of detailed topological and integration guidance necessary for enterprise architects to move from concept to concrete implementation. Andre Kindness of Forrester Research said it well in a recent post titled “Don’t Underestimate The Value Of Information, Documentation, And Expertise!”: “Support documentation and availability to knowledge is especially critical in networking design, deployment, maintenance, and upgrades. Some pundits have relegated networking to a commodity play, but networking is more than plumbing. It’s the fabric that supports a dynamic business connecting users to services that are relevant to the moment, are aggregated at the point of use, and originate from multiple locations. The complexity has evolved from designing in a few links to tens of hundreds of relationships (security, acceleration, prioritization, etc.) along the flow of apps and data through a network. Virtualization, convergence, consolidation, and the evolving data center networks are prime examples of today’s network complexity.”

REPEATABLE ARCHITECTURE

For many years one of F5’s differentiators has been the development and subsequent offering of “Application Ready Solutions.” The focus early on was on providing optimal deployment configurations of F5 solutions for specific applications, including IBM, Oracle, Microsoft and, more recently, VMware. These deployment guides are step-by-step, detailed documentation developed through collaborative testing with the application provider; they offer the expertise of both organizations in deploying F5 solutions for optimal performance and efficiency.

As the data center grows more complex, so do the challenges associated with architecting a firm foundation. It requires more than application-specific guidance; it now requires architectural guidance. While reference architectures are certainly still germane and useful, there also needs to be an evolution toward repeatable architectures, such that the replication of proposed solutions derived from the collaborative efforts of vendors is achievable.
It’s not enough to throw up an architecture comprised of multiple solutions from multiple vendors without providing the insight and guidance necessary to actually replicate that architecture in the data center. That’s why it’s exciting to see our collaborative efforts with vendors of key data center solutions like IBM and VMware result in what are “repeatable architectures.” These are not simply white papers and PowerPoint decks that came out of joint meetings; these are architectural blueprints that can be repeated in the data center. These are the missing instructions for the “some assembly required” architecture. These jointly designed and developed architectures have already been implemented and tested – and then tested again and again. The repeatable architecture that emerges from such efforts is based on the combined knowledge and expertise of the engineers involved from both organizations, providing insight normally not discovered – and certainly not validated – by an isolated implementation.

This same collaboration, this cooperative and joint design and implementation of architectures, is required within the enterprise as well. It’s not enough for architects to design and subsequently “toss over the wall” an enterprise reference architecture. It’s not enough for application specialists in the enterprise to toss a deployment over the wall to the network and security operations teams. Collaboration across compute, network, and storage infrastructure requires collaboration across the teams responsible for their management, implementation, and optimal configuration.

THE FUTURE is REPEATABLE

This F5-IBM solution is the tangible representation of an emerging model of collaborative, documented, and repeatable architectures. It’s an extension of an existing model F5 has used for years to provide the expertise and insight of the engineers and architects inside the organization who know the products best and understand how to integrate, optimize, and successfully deploy such joint efforts. Repeatable architectures are as important an evolution in the support of jointly developed solutions as APIs and dynamic control planes are to the successful implementation of data center automation.

More information on the F5-IBM repeatable enterprise cloud architecture:

- Why You Need a Cloud to Call Your Own – F5 and IBM White Paper
- Building an Enterprise Cloud with F5 and IBM – F5 Tech Brief
- SlideShare Presentation
- F5 and IBM: Cloud Computing Architecture – Demo
Focus of Cloud Implementation Depends on the Implementer

Public cloud computing is about capacity and scale on demand; private cloud computing, however, is not.

Legos. Nearly every child has them, and nearly every parent knows that giving a child a Lego “set” is going to end the same way: the set will be put together according to instructions exactly once (usually by the parent), and then the blocks will be incorporated into the large collection of other Lego sets to become part of something completely different. This is a process we actually encourage as parents: the ability to envision an end result and to execute on that vision by using the tools at hand to realize it. A child “sees” an end product, a “thing” they wish to build, and they have no problem using pieces from disparate “sets” to build it. We might call that creativity, innovation, and ingenuity. We are proud when our children identify a problem – how do I build this thing – and are able to formulate a plan to solve it. So why is it that when we grow up and start talking about cloud computing, we suddenly abhor those same characteristics in IT?

RESOURCES as BUILDING BLOCKS

That’s really what’s happening right now within our industry. Cloud computing providers and public-only pundits have a set of instructions that define how the building blocks of cloud computing (compute, network, and storage resources) should be put together to form an end product. But IT, like our innovative and creative children, has a different vision; they see those building blocks as capable of serving other purposes within the data center. They are the means to an end, a tool, a foundation. Judith Hurwitz recently explored the topic of private clouds in “What’s a private cloud anyway?” and laid out some key principles of cloud computing:

“There are some key principles of the cloud that I think are worth recounting:
1. A cloud is designed to optimize and manage workloads for efficiency. Therefore repeatable and consistent workloads are most appropriate for the cloud.
2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand.
3. A cloud environment needs to be economically viable.
Why aren’t traditional data centers private clouds? What if a data center adds some self-service and virtualization? Is that enough? Probably not.”
– “What’s a private cloud anyway?”, Judith Hurwitz’s Cloud-Centric Weblog

What’s common to these “key principles” is that they assume an intent that may or may not be applicable to the enterprise. Judith lays this out in key principle number two and makes the assumption that “cloud” is all about auto-scaling services. Herein lies the disconnect between public and private cloud computing. While public cloud computing focuses on providing resources as a utility, private cloud computing is more about efficiency in resource distribution and processes. The resource model, and the virtualization and integrated infrastructure supporting the rapid provisioning and migration of workloads around an environment, are the building blocks upon which a cloud computing model is built. The intended use and purpose to which the end product is ultimately put is different. Public cloud puts those resources to work generating revenue by offering them up cheaply (er, affordably) to other folks, while private cloud puts those resources to work generating efficiency and time savings for enterprise IT staff.

IS vs DOES

What is happening is that the focus of cloud computing is evolving; it’s moving from “what is it” to “what does it do”.
And it is the latter that is much more important in the big scheme of things than the former. Public cloud provides resources on demand, primarily compute or storage resources. Private cloud provides flexibility and efficiency and process automation. Public cloud resources may be incorporated into a private cloud as part of the flexibility and efficiency goals, but it is not a requirement. The intent behind a private cloud is in fact not capacity on demand, but more efficient usage and management of resources.

The focus of cloud is changing from what it is to what it does, and the intention behind cloud computing implementations is highly variable and dependent on the implementers. Private cloud computing is implemented for different reasons than public cloud computing. Private cloud implementations are not focused on economy of scale or cheap resources; they are focused on efficiency and processes. Private cloud implementers are not trying to be Amazon or Google or Salesforce.com. They’re trying to be a more efficient, leaner version of themselves: IT as a Service. They’ve taken the building blocks – the resources – and are putting them together in a way that makes it possible for them to achieve their goals, not the goals of public cloud computing. If that efficiency sometimes requires the use of external, public cloud computing resources, then that’s where the two meet, and the result is often considered “hybrid” cloud computing.

The difference between what a cloud “is” and what it “does” is an important distinction, especially for those who want to “sell” a cloud solution. Enterprises aren’t trying to build a public cloud environment, so trying to sell the benefits of a solution based on its ability to mimic a public cloud in a private data center is almost certainly a poor strategy. Similarly, trying to “sell” public cloud computing as the answer to all IT’s problems when you haven’t ascertained what it is the enterprise is trying to do with cloud computing is also going to fail. Rather, we should take a lesson from our own experiences outside IT with our children: stop trying to force IT into a mold based on some set of instructions someone else put together, and listen to what it is they are trying to do.

The intention of a private cloud computing implementation is not the same as that of a public cloud computing implementation. Which ultimately means that “success” or “failure” of such implementations will be measured by a completely different set of measuring sticks.

We’ll debate private cloud and dig into the obstacles (and solutions to them) enterprises are experiencing in moving forward with private cloud computing in the Private Cloud Track at CloudConnect 2011. Hope to see you there!