Active/Active load balancing examples with F5 BIG-IP and Azure load balancer
Background

A couple of years ago I wrote an article about some practical considerations using Azure Load Balancer. Over time it's been used by customers, so I thought to add a further article that specifically discusses Active/Active load balancing options. I'll use Azure's standard load balancer as an example, but you can apply this to other cloud providers. In fact, the customer I helped most recently with this very question was running in Google Cloud. This article focuses on using standard TCP load balancers in the cloud.

Why Active/Active?

Most customers run 2x BIG-IPs in an Active/Standby cluster on-premises, and it's extremely common to do the same in public cloud. Since simplicity and supportability are key to successful migration projects, it's often best to stick with architectures you know and can support. However, if you are confident in your cloud engineering skills, or if you want more than 2x BIG-IPs processing traffic, you may consider running them all Active. Of course, if your total throughput for N BIG-IPs exceeds the throughput that N-1 can support, the loss of a single VM will leave you with more traffic than the remaining device(s) can handle. I recommend choosing Active/Active only if you're confident in your purpose and skillset.

Let's define Active/Active

Sometimes this term is used with ambiguity. I'll cover three approaches using Azure load balancer, each slightly different:

- multiple standalone devices
- Sync-Only group using Traffic Group None
- Sync-Failover group using Traffic Group None

Each of these will use a standard TCP cloud load balancer. This article does not cover other ways to run multiple Active devices, which I've outlined at the end for completeness.

Multiple standalone appliances

This is a straightforward approach and an ideal target for cloud architectures. When multiple devices each receive and process traffic independently, the overhead work of disaggregating traffic to spread between the devices can be done by other solutions, like a cloud load balancer. (Other out-of-scope solutions could be ECMP, BGP, DNS load balancing, or gateway load balancers.) Scaling out horizontally can be a matter of simple automation, and there is no cluster configuration to maintain. The only limit to the number of BIG-IPs will be any limits of the cloud load balancer.

The main disadvantage to this approach is the fear of misconfiguration by human operators. Often a customer is not confident that they can configure two separate devices consistently over time. This is why automation for configuration management is ideal. In the real world, it's also a reason customers consider our next approach.
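Whichever approach you choose, all three sit behind a standard TCP cloud load balancer. Below is a minimal sketch of what that could look like with Azure PowerShell (the Az module). Treat it as an illustration, not a definitive build: the resource group, names, region, and ports are hypothetical, and each BIG-IP's NIC still needs to be associated with the backend pool separately.

```powershell
# Hedged sketch: a Standard SKU Azure load balancer that spreads TCP/443
# across multiple standalone BIG-IP instances. Resource names, region, and
# ports below are hypothetical; assumes the Az PowerShell module.

# Public frontend IP for the load balancer
$pip = New-AzPublicIpAddress -ResourceGroupName bigip-rg -Name bigip-lb-pip `
    -Location eastus -Sku Standard -AllocationMethod Static

$frontend = New-AzLoadBalancerFrontendIpConfig -Name bigip-frontend -PublicIpAddress $pip

# Backend pool that will hold each standalone BIG-IP's NIC
$pool = New-AzLoadBalancerBackendAddressPoolConfig -Name bigip-pool

# TCP health probe; point this at a port every BIG-IP answers on (for
# example, a dedicated health-check virtual server) so an unhealthy
# device is taken out of rotation
$probe = New-AzLoadBalancerProbeConfig -Name bigip-probe -Protocol Tcp -Port 443 `
    -IntervalInSeconds 5 -ProbeCount 2

# Load-balancing rule: TCP/443 in, TCP/443 out to the BIG-IP virtual servers
$rule = New-AzLoadBalancerRuleConfig -Name bigip-https -Protocol Tcp `
    -FrontendPort 443 -BackendPort 443 `
    -FrontendIpConfiguration $frontend -BackendAddressPool $pool -Probe $probe

New-AzLoadBalancer -ResourceGroupName bigip-rg -Name bigip-lb -Location eastus `
    -Sku Standard -FrontendIpConfiguration $frontend -BackendAddressPool $pool `
    -Probe $probe -LoadBalancingRule $rule
```

Each BIG-IP's NIC is then added to the backend pool (for example, by adding the pool to the NIC's IP configuration and calling Set-AzNetworkInterface), so scaling out is a matter of deploying another BIG-IP VM and joining its NIC to the pool.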
Clustering with a sync-only group

A Sync-Only device group allows us to sync some configuration data between devices, but not fail over configuration objects in floating traffic groups between devices, as we would in a Sync-Failover group. With this approach, we can sync traffic objects between devices, assign them to Traffic Group None, and both devices will be considered Active. Both devices will process traffic, but changes only need to be made on a single device in the group.

In the example pictured above:

- The 2x BIG-IP devices are in a Sync-Only group called syncGroup
- The /Common partition is not synced between devices
- The /app1 partition is synced between devices
- The /app1 partition has Traffic Group None selected
- The /app1 partition has the Sync-Only group syncGroup selected
- Both devices are Active and will process traffic received on Traffic Group None

The disadvantage to this approach is that you can create an invalid configuration by referring to objects that are not synced. For example, if Nodes are created in /Common, they will exist on the device on which they were created, but not on other devices. If a Pool in /app1 then references Nodes from /Common, the resulting configuration will be invalid on devices that do not have those Nodes configured. Another consideration is that an operator must use and understand partitions. These are simple and should be embraced; however, not all customers understand the use of partitions, and many prefer to use /Common only, if possible.

The big advantage here is that changes only need to be made on a single device, and they will be replicated to other devices (up to 32 devices in a Sync-Only group). The risk of inconsistent configuration due to human error is reduced. Each device has a small green "Active" icon in the top left-hand corner of the console, reminding operators that each device is Active and will process incoming traffic on Traffic Group None.

Failover clustering using Traffic Group None

Our third approach is very similar to the second. However, instead of a Sync-Only group, we will use a Sync-Failover group. A Sync-Failover group will sync all traffic objects in the default /Common partition, allowing us to keep all traffic objects in the default partition and avoid the use of additional partitions. This creates a traditional Active/Standby pair for a failover traffic group, and a Standby device will not respond to data plane traffic. So how do we make this Active/Active? When we create our VIPs in Traffic Group None, all devices will process traffic received on these Virtual Servers. One device will show "Active" and the other "Standby" in the console, but this is only the status of the floating traffic group. We don't need to use the floating traffic group, and by using Traffic Group None we have an Active/Active configuration in terms of traffic flow.

The advantage here is similar to the previous example: human operators only need to configure objects on a single device, and all changes are synced between device group members (up to 8 in a Sync-Failover group). Another advantage is that you can use the /Common partition, which was not possible with the previous example. The main disadvantage is that the console will show "Active" on one device and "Standby" on the other, and this can confuse an operator who is familiar only with Active/Standby clusters using traffic groups for failover. While this third approach is legitimate and technically sound, it's worth considering whether your daily operations and support teams have the knowledge to support it.

Other considerations

Source NAT (SNAT)

It is almost always a requirement that you SNAT traffic when using an Active/Active architecture, and this especially applies to the public cloud, where our options for other networking tricks are limited. If you have a requirement to see true source IP and need to use multiple devices in Active/Active fashion, consider using Azure or AWS Gateway Load Balancer options. Alternative solutions like NGINX and F5 Distributed Cloud may also be worth considering in high-value, hard-requirement situations.
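Configuration consistency

As noted under the standalone approach, automation is what keeps independently configured devices consistent. One way to do that is to push the same declarative configuration to every device. Below is a minimal, hedged sketch that POSTs one AS3 declaration to several standalone BIG-IPs; the management addresses, credentials, and declaration file are hypothetical, it assumes the AS3 extension is installed on each device (AS3's documented endpoint is /mgmt/shared/appsvcs/declare), and it uses PowerShell 7+ for -SkipCertificateCheck (lab use only).

```powershell
# Hedged sketch: push the same AS3 declaration to each standalone BIG-IP so
# their traffic configuration stays identical. Assumes the AS3 extension is
# installed on every device; IPs and the file name below are hypothetical.
# Requires PowerShell 7+ for -SkipCertificateCheck (lab use only).

$cred = Get-Credential                      # BIG-IP admin credentials
$declaration = Get-Content -Raw ./app1-as3.json

foreach ($mgmtIp in @('10.0.0.11', '10.0.0.12')) {
    Invoke-RestMethod -Method Post `
        -Uri "https://$mgmtIp/mgmt/shared/appsvcs/declare" `
        -Credential $cred -Authentication Basic `
        -ContentType 'application/json' `
        -Body $declaration `
        -SkipCertificateCheck
}
```

Because the same declaration lands on every device, the drift that makes operators nervous about standalone BIG-IPs largely disappears.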
Alternatives to a cloud load balancer

This article is not referring to F5 with Azure Gateway Load Balancer, or to F5 with AWS Gateway Load Balancer. Those gateway load balancer solutions are another way for customers to run appliances as multiple standalone devices in the cloud. However, they typically require routing, not proxying, the traffic (i.e., they don't allow destination NAT, which many customers intend with BIG-IP). This article is also not referring to other ways you might achieve Active/Active architectures, such as DNS-based high availability, or routing protocols like BGP or ECMP. Note that using multiple traffic groups to achieve Active/Active BIG-IPs - the traditional approach on-prem or in private cloud - is not practical in public cloud, as briefly outlined below.

Failover of traffic groups with Cloud Failover Extension (CFE)

One option for Active/Standby high availability of BIG-IP is to use the CFE, which can programmatically update IP addresses and routes in Azure at the time of device failure. Since CFE does not support Active/Active scenarios, it is appropriate only for failover of a single traffic group (i.e., Active/Standby).

Conclusion

Thanks for reading! In general I see that Active/Standby solutions work for many customers, but if you are confident in your skills and have a need for Active/Active F5 BIG-IP devices in the cloud, please reach out if you'd like me to walk you through these options and explore any other possibilities.

Related articles

- Practical Considerations using F5 BIG-IP and Azure Load Balancer
- Deploying F5 BIG-IP with Azure Cross-Region Load Balancer
Securing your applications with F5 Virtual Editions in Microsoft Azure

In November 2015, F5 announced the availability of F5 BIG-IP virtual editions in the Microsoft Azure cloud. What this meant for our enterprise customers is the ability to create advanced networking and security policies in Azure. More importantly, our customers were able to achieve consistency in services across their on-premises and Azure environments.

When we talk about security, incorporating a network firewall is critical. Typical network firewall functions are facilitated by attaching L2 interfaces to separate security zones. This would mean setting up a dedicated virtual NIC interface (VNIC) for each VLAN on a virtual machine. In Azure, the number of VNICs supported differs by compute instance type; larger instances support more VNIC interfaces but also incur a higher cost. However, there is another way to segment zones in Azure: user-defined routing (UDR). In a nutshell, UDR provides a means for enterprises to design, and secure, the Azure networking infrastructure. For further information on UDR, please refer to the link below.

https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-udr-overview/

Using UDR along with the F5 Advanced Firewall Manager (AFM) service, you can implement advanced firewall protection for both N-S and E-W data traffic. Here are a few key capabilities of F5 AFM:

· Policy-based access control to and from address/port pairs;
· Network firewall rules and logging at a global context level, or at a virtual server level;
· Stateful and full proxy architecture: a flow from the client is passed to the backend only if it is deemed secure;
· IP Intelligence, plus global and virtual-server-based Denial of Service (DoS) attack protection that can be configured with thresholds on multiple network parameters; and
· Programmability through iRules, offering dynamic packet filtering capability.

To get you started with using UDR and F5 AFM, here is an example scenario. We have an F5 ADC to manage traffic to the backend tiers. We have two backend tiers (a database tier and an application tier). Our goal is to stop traffic from the database tier connecting to the application tier, while allowing data to flow in the opposite direction. The network topology is illustrated below.

Here are the steps required to implement this scenario:

Step 1: Create the UDR
Step 2: Enable IP forwarding in Azure
Step 3: Create an IP forwarding virtual server in BIG-IP
Step 4: Create an AFM policy in BIG-IP

Step 1: Create the UDR

There are different ways to create a UDR:

1. PowerShell
2. Azure CLI
3. Template

While creating the UDR, you must provide the next-hop address, which is the IP address of your BIG-IP. The example below shows a UDR created for the application tier subnet. Here, the address prefix is 10.2.2.0/24, which is the destination CIDR (of the database tier). The next-hop address is 10.2.0.4, which is the private IP address of the BIG-IP. This route is associated with the subnet WebAppSubnet, which has an address range of 10.2.1.0/24. With this, packets from the WebApp subnet destined for the database subnet will be routed through the BIG-IP.

Prerequisites: Create a Virtual Network in a new Resource Group in Azure. When you do this, a default subnet will be created automatically. Create two additional subnets, for the WebApp and database tiers, in your Virtual Network.

Create the UDR for WebAppSubnet:

1. Create the route 'RouteToDatabase', which directs traffic destined for the database subnet to the BIG-IP.

```powershell
$route = New-AzureRmRouteConfig -Name RouteToDatabase `
    -AddressPrefix 10.2.2.0/24 -NextHopType VirtualAppliance `
    -NextHopIpAddress 10.2.0.4
```
2. Create the route table in the deployed region (this example uses the westus region).

```powershell
$routeTable = New-AzureRmRouteTable -ResourceGroupName Group2 -Location westus `
    -Name UDR1 -Route $route
```

3. Retrieve the virtual network that contains this subnet and store it in a variable, $vnet.

```powershell
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName Group2 -Name Group2-vnet
```

4. Associate the route table with WebAppSubnet.

```powershell
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name WebAppSubnet `
    -AddressPrefix 10.2.1.0/24 -RouteTable $routeTable
```

5. Save the configuration in Azure.

```powershell
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```

Once created, the table looks like the following. Create a similar UDR for the database tier. Once done, the table looks like the following.

Step 2: Enable IP forwarding on the NIC associated with the BIG-IP

1. Get the network interface created for the BIG-IP (bigip1234 in this example).

```powershell
$bigipnic = Get-AzureRmNetworkInterface -ResourceGroupName Group2 -Name bigip1234
```

2. Enable IP forwarding.

```powershell
$bigipnic.EnableIPForwarding = 1
```

3. Save the NIC configuration.

```powershell
Set-AzureRmNetworkInterface -NetworkInterface $bigipnic
```

Step 3: Create an IP forwarding virtual server in BIG-IP

To enable IP forwarding in BIG-IP, you need to create an IP forwarding virtual server. To create one, log in to the Configuration Utility:

1. Local Traffic > Virtual Server
2. Click 'Create' to create the virtual server and fill in the required details.

For additional information on creating forwarding virtuals, refer to this solution article.

Step 4: Create an AFM policy in BIG-IP

To create AFM policies you must have AFM provisioned on your BIG-IP. Browse to Security > Options > Network Firewall > Active Rule and click 'Add' to create a new policy. Here you need to select 'Virtual Server' as the context, as shown below. As mentioned above, this example illustrates a scenario that blocks traffic from the database tier to the web tier. The policy settings are shown below.

With this step, we have accomplished the example scenario!
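As a final sanity check, you can confirm that the route and IP forwarding actually took effect. A quick sketch using the same AzureRM cmdlets as above; the web-tier NIC name ('webappvm-nic') is hypothetical, standing in for whatever NIC sits in WebAppSubnet.

```powershell
# Hedged sketch: verify the UDR is in effect. Get-AzureRmEffectiveRouteTable
# shows the routes actually applied to a NIC. Expect a VirtualAppliance route
# for 10.2.2.0/24 with next hop 10.2.0.4 (the BIG-IP). 'webappvm-nic' is a
# hypothetical NIC in WebAppSubnet.
Get-AzureRmEffectiveRouteTable -ResourceGroupName Group2 `
    -NetworkInterfaceName webappvm-nic | Format-Table

# Confirm IP forwarding is enabled on the BIG-IP NIC from Step 2
(Get-AzureRmNetworkInterface -ResourceGroupName Group2 -Name bigip1234).EnableIPForwarding
```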
What distinguishes these three models of cloud computing are the business and operational goals for which they were implemented and the benefits derived. A brief Twitter conversation recently asked the question how one would distinguish between the three emerging dominant cloud computing models: public, private and enterprise. Interestingly, if you were to take a "public cloud" implementation and transplant it into the enterprise, it is unlikely to deliver the value IT was expecting. Conversely, transplanting a private cloud implementation to a public provider would also similarly fail to achieve the desired goals. When you dig into it, the focus of the implementation – the operational and business goals – play a much larger role in distinguishing these models than any technical architecture could. Public cloud computing is also often referred to as "utility" computing. That's because its purpose is to reduce the costs associated with deployment and subsequent scalability of an application. It's about economy of scale – for the customer, yes, but even more so for the provider. The provider is able to offer commoditized resources at a highly affordable rate because of the scale of its operations. The infrastructure – from the network to the server to the storage – is commoditized. It's all shared resources that combine to form the basis for a economically viable business model in which resources are scaled out on-demand with very little associated effort. There is very little or no customization (read: alignment of process with business/operational goals) available because economy of scale is achieved by standardizing as much as possible and limiting interaction. Enterprise cloud computing is not overly concerned with scalability of resources but is rather more focused on the efficiency of resources, both technological and human. An enterprise cloud computing implementation has the operational and business goal of enabling a more agile IT that serves its customers (business and IT) more efficiently and with greater alacrity. Enterprise cloud computing focuses on efficient provisioning of resources and automating operational processes such that deployment of applications is repeatable and consistent. IT wants to lay the foundation for IT as a Service. Public cloud computing wants to lay the foundation for resources as a service. No where is that difference more apparent than when viewed within the scope of the data center as a whole. Private cloud computing, if we're going to differentiate, is the hybrid model; the model wherein IT incorporates public cloud computing as an extension of its data center and, one hopes, its own enterprise cloud computing initiative. It's the use of economy of scale to offset costs associated with new initiatives and scalability of existing applications without sacrificing the efficiency of scale afforded by process automation and integration efforts. It's the best of both worlds: utility computing resources that can be incorporated and managed as though they are enterprise resources. Public and enterprise cloud computing have different goals and therefore different benefits. Public cloud computing is about economy of scale of resources and commoditized operational processes. Forklifting a model such as AWS into the data center would be unlikely to succeed. The model assumes no integration or management of resources via traditional or emerging means and in fact the model as implemented by most public cloud providers would inhibit such efforts. 
Public cloud computing assumes that scale of resources is king, and at that it excels. Enterprise cloud computing, on the other hand, assumes that efficiency is king, and at that, public cloud computing is fair to middling at best. Enterprise cloud computing implementations recognize that enterprise applications are holistic units comprising all of the resources necessary to deploy, deliver and secure that application. Infrastructure services from the network to the application delivery network to storage and security are not adjunct to the application but are a part of the application. Integration with identity and access management services is not an afterthought, but an architectural design. Monitoring and management is not a "green is good, red is bad" icon on a web application, but an integral part of the overall data center strategy.

Enterprise cloud computing is about efficiency of scale: a means of managing growth in ways that reduce the burden placed on people and leverage technology through process automation and devops to improve the operational posture of IT in such a way as to enable repeatable, rapid deployment of applications within the enterprise context. That means integration, management, and governance are considered part and parcel of any application deployment. The processes and automation that enable repeatable deployments and dynamic, run-time management - including the proper integration and assignment of operational and business policies to newly provisioned resources - are unique, because the infrastructure and services comprising the architectural foundation of the data center are unique.

These are two very different sets of goals and benefits and, as such, cannot easily be substituted. They can, however, be conjoined into a broader architectural strategy known as private (hybrid) cloud computing.

PRIVATE CLOUD: EFFICIENT ECONOMY of SCALE

There are, for every organization, a number of applications that are in fact drivers of the need for economy of scale, i.e. a public cloud computing environment. Private (hybrid) cloud computing is a model that allows enterprise organizations to leverage the power of utility computing while addressing the very real organizational need for, at a minimum, architectural control over those resources for integration, management and cost-containment governance. It is the compromise of cheap resources coupled with control that affords organizations the flexibility and choice required to architect a data center solution that can meet the increasing demand for self-service from its internal customers, while addressing ever higher volumes of demand on external-facing applications without substantially increasing costs.

Private (hybrid) cloud computing is not a panacea; it's not the holy grail of cloud computing. But it is the compromise many require to simultaneously address both a need for economy and efficiency of scale. Both goals are of interest to enterprise organizations - as long as their basic needs are met. Chirag Mehta summed it up well in a recent post on CloudAve: "It turns out that IT doesn't mind at all if business can perform certain functions in a self-service way, as long as the IT is ensured that they have underlying control over data and (on-premise) infrastructure." See: Cloud Control Does Not Always Mean 'Do it yourself'.

Control over infrastructure.
It may be that these three simple words are the best way to distinguish between public and enterprise cloud computing after all, because that's ultimately what it comes down to. Without control over infrastructure, organizations cannot effectively integrate and manage their application deployments. Without control over infrastructure, organizations cannot achieve the agility necessary to leverage a dynamic, services-based governance strategy over performance, security and availability of applications. Public cloud computing requires that control be sacrificed on the altar of cheap resources. Enterprise and private (hybrid) cloud computing do not. Which means the latter is more likely to empower IT to realize the operational and business goals for which it undertook a cloud computing initiative in the first place.

Related articles

- Selling To Enterprise - Power Struggle Between IT And Line Of Business
- Cloud Control Does Not Always Mean 'Do it yourself'
- Cloud is the How not the What
- What CIOs Can Learn from the Spartans
- Hybrid Cloud: Fact, Fiction or Future?
- Data Center Feng Shui: Process Equally Important as Preparation
- Putting the Cloud Before the Horse
- If You Focus on Products You'll Miss the Cloud
- The Zero-Product Property of IT
- What is a Strategic Point of Control Anyway?
- Why You Need a Cloud to Call your Own | F5 White Paper
- The New Network
The Inevitable Eventual Consistency of Cloud Computing

An IDC survey highlights the reasons why private clouds will mature before public, leading to the eventual consistency of public and private cloud computing frameworks.

Network Computing recently reported on a very interesting research survey from analyst firm IDC. This one was interesting because it delved into concerns regarding public cloud computing in a way that most research surveys haven't, including asking respondents to weight their concerns as they relate to application delivery from a public cloud computing environment. The results? Security, as always, tops the list. But close behind are application-delivery-related concerns such as availability and performance.

Network Computing - IDC Survey: Risk In The Cloud

"While growing numbers of businesses understand the advantages of embracing cloud computing, they are more concerned about the risks involved, as a survey released at a cloud conference in Silicon Valley shows. Respondents showed greater concern about the risks associated with cloud computing surrounding security, availability and performance than support for the pluses of flexibility, scalability and lower cost, according to a survey conducted by the research firm IDC and presented at the Cloud Leadership Forum IDC hosted earlier this week in Santa Clara, Calif. ... However, respondents gave more weight to their worries about cloud computing: 87 percent cited security concerns, 83.5 percent availability, 83 percent performance and 80 percent cited a lack of interoperability standards."

The respondents rated the risks associated with security, availability, and performance higher than the always-associated benefits of public cloud computing: lower costs, scalability, and flexibility. This ultimately results in a reluctance to adopt public cloud computing, and is likely driving these organizations toward private cloud computing, because public cloud can't or won't at this point address these challenges, but private cloud computing can and is - by architecting a collection of infrastructure services that can be leveraged by (internal) customers on an application-by-application (and sometimes request-by-request) basis.

PRIVATE CLOUD will MATURE FIRST

What will ultimately bubble up and become more obvious to public cloud providers is customer demand. Clouderati like James Urquhart and Simon Wardley often refer to this process as commoditization or standardization of services. These services - at the infrastructure layer of the cloud stack - will necessarily be driven by customer demand; by the market. Because customers right now are not fully exercising public cloud computing as they would their own private implementation - replete with infrastructure services, business-critical applications, and adherence to business-focused service level agreements - public cloud providers are at a bit of a disadvantage. The market isn't telling them what they want and need, so public cloud providers are left to fend for themselves. Or they may be pandering, necessarily, to the needs and demands of the few customers that have fully adopted their platform as their data center du jour.

Internal to the organization there is a great deal more going on than some would like to admit.
Organizations have long since abandoned even the pretense of caring about the definition of "cloud" and whether or not there exists such a thing as "private" cloud, and have forged their way forward past "virtualization plus" (a derogatory and dismissive term often used by some public cloud providers to describe such efforts) and into the latter stages of the cloud computing maturity model.

Internal IT organizations can and will solve the "infrastructure as a service" conundrum because they necessarily have a smaller market to address. They have customers, but it is a much smaller and well-defined set of customers they must support, and thus they are able to iterate over the development processes and integration efforts necessary to get there much more quickly and with less disruption. Their goal is to provide IT as a service, offering a repertoire of standardized application and infrastructure services that can easily be extended to support new infrastructure services. They are, in effect, building their own cloud frameworks (stacks) upon which they can innovate and extend as necessary. And as they do so they are standardizing, whether by conscious effort or as a side effect of defining their frameworks. But they are doing it, regardless of those who might dismiss their efforts as "not real cloud." When you get down to it, enterprise IT isn't driven by adherence to some definition put forth by pundits. It's driven by a need to provide business value to its customers at the best possible "profit margin" it can. And it's doing so faster than public cloud providers, because it can.

WHEN CLOUDS COLLIDE - EVENTUAL CONSISTENCY

What that means is that in a relatively short amount of time, as measured by technological evolution at least, the "private clouds" of customers will have matured to the point where they are ready to adopt a private/public (hybrid) model and really take advantage of the public, cheap, compute-on-demand that's so prevalent in today's cloud computing market - not just using it as an inexpensive development or test playground, but integrating it as part of a global application delivery strategy.

The problem then is aligning the models and APIs and frameworks that have grown up in each of the two types of clouds. Like the concept of "eventual consistency" with regard to data, databases and replication across clouds (intercloud), the same "eventual consistency" theory will apply to cloud frameworks. Eventually there will be a standardized (consistent) set of infrastructure services and network services, and frameworks through which such services are leveraged. At first there will be chaos and screaming and gnashing of teeth as the models bump heads, but as more organizations and providers work together to find the common ground between them, they'll find that, just like the peanut butter and chocolate in a Reese's Peanut Butter Cup, the two disparate architectures can "taste better together."

The question that remains is which standardization will be the one with which others must become consistent. Without consistency, interoperability and portability will remain little more than a pipe dream. Will it be standardization driven by the customers, a la the Enterprise Buyer's Cloud Council? Or will it be driven by providers in a "if you don't like what we offer, go elsewhere" market? Or will it be driven by a standards committee comprised primarily of vendors with a few "interested third parties"?
Focus of Cloud Implementation Depends on the Implementer
Public cloud computing is about capacity and scale on demand; private cloud computing, however, is not.

Legos. Nearly every child has them, and nearly every parent knows that giving a child a Lego "set" is going to end the same way: the set will be put together according to instructions exactly once (usually by the parent), and then the blocks will be incorporated into the large collection of other Lego sets to become part of something completely different. This is a process we actually encourage as parents - the ability to envision an end result and to execute on that vision by using the tools at hand to realize it. A child "sees" an end product, a "thing" they wish to build, and they have no problem using pieces from disparate "sets" to build it. We might call that creativity, innovation, and ingenuity. We are proud when our children identify a problem - how do I build this thing? - and are able to formulate a plan to solve it. So why is it that when we grow up and start talking about cloud computing, we suddenly abhor those same characteristics in IT?

RESOURCES as BUILDING BLOCKS

That's really what's happening right now within our industry. Cloud computing providers and public-only pundits have a set of instructions that define how the building blocks of cloud computing (compute, network, and storage resources) should be put together to form an end product. But IT, like our innovative and creative children, has a different vision; they see those building blocks as capable of serving other purposes within the data center. They are the means to an end, a tool, a foundation.

Judith Hurwitz recently explored the topic of private clouds in "What's a private cloud anyway?" and laid out some key principles of cloud computing:

"There are some key principles of the cloud that I think are worth recounting:

1. A cloud is designed to optimize and manage workloads for efficiency. Therefore repeatable and consistent workloads are most appropriate for the cloud.

2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand.

3. A cloud environment needs to be economically viable.

Why aren't traditional data centers private clouds? What if a data center adds some self-service and virtualization? Is that enough? Probably not."

-- "What's a private cloud anyway?", Judith Hurwitz's Cloud-Centric Weblog

What's common to these "key principles" is that they assume an intent that may or may not be applicable to the enterprise. Judith lays this out in key principle number two and makes the assumption that "cloud" is all about auto-scaling services. Herein lies the disconnect between public and private cloud computing. While public cloud computing focuses on providing resources as a utility, private cloud computing is more about efficiency in resource distribution and processes. The resource model - the virtualization and integrated infrastructure supporting the rapid provisioning and migration of workloads around an environment - is the set of building blocks upon which a cloud computing model is built. The intended use and purpose to which the end product is ultimately put is different. Public cloud puts those resources to work generating revenue by offering them up affordably to other folks, while private cloud puts those resources to work generating efficiency and time savings for enterprise IT staff.

IS vs DOES

What is happening is that the focus of cloud computing is evolving; it's moving from "what it is" to "what it does."
And it is the latter that is much more important in the big scheme of things than the former. Public cloud provides resources on demand - primarily compute or storage resources. Private cloud provides flexibility, efficiency and process automation. Public cloud resources may be incorporated into a private cloud as part of its flexibility and efficiency goals, but that is not a requirement. The intent behind a private cloud is in fact not capacity on demand, but more efficient usage and management of resources.

The focus of cloud is changing from what it is to what it does, and the intention behind cloud computing implementations is highly variable and dependent on the implementers. Private cloud computing is implemented for different reasons than public cloud computing. Private cloud implementations are not focused on economy of scale or cheap resources; they are focused on efficiency and processes. Private cloud implementers are not trying to be Amazon or Google or Salesforce.com. They're trying to be a more efficient, leaner version of themselves - IT as a Service. They've taken the building blocks - the resources - and are putting them together in a way that makes it possible for them to achieve their goals, not the goals of public cloud computing. If that efficiency sometimes requires the use of external, public cloud computing resources, then that's where the two meet, and this is often considered "hybrid" cloud computing.

The difference between what a cloud "is" and what it "does" is an important distinction, especially for those who want to "sell" a cloud solution. Enterprises aren't trying to build a public cloud environment, so trying to sell the benefits of a solution based on its ability to mimic a public cloud in a private data center is almost certainly a poor strategy. Similarly, trying to "sell" public cloud computing as the answer to all of IT's problems when you haven't ascertained what the enterprise is trying to do with cloud computing is also going to fail. Rather, we should take a lesson from our own experiences outside IT with our children, stop trying to force IT into a mold based on a set of instructions someone else put together, and listen to what it is they are trying to do.

The intention of a private cloud computing implementation is not the same as that of a public cloud computing implementation. Which ultimately means that the "success" or "failure" of such implementations will be measured by completely different yardsticks.

We'll debate private cloud and dig into the obstacles (and solutions to them) enterprises are experiencing in moving forward with private cloud computing in the Private Cloud Track at CloudConnect 2011. Hope to see you there!

Related articles

- Cloud Chemistry 101
- Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
- What's a private cloud anyway?
- It's Called Cloud Computing not Cheap Computing
- Public Cloud Computing is NOT For Everyone
- If a Cat has Catness Does a Cloud have Cloudness?
- The Three Reasons Hybrid Clouds Will Dominate
- The Other Hybrid Cloud Architecture
- The Cloudy Enterprise: Hours More Important Than Dollars
- Don't Throw the Baby out with the Bath Water
- Why IT Needs to Take Control of Public Cloud Computing
- Multi-Tenant Security Is More About the Neighbors Than the Model
- The Battle of Economy of Scale versus Control and Flexibility