cloud bursting
Cloud bursting, the hybrid cloud, and why cloud-agnostic load balancers matter
Cloud Bursting and the Hybrid Cloud

When researching cloud bursting, a search engine may take you in many directions. Perhaps you come across services for airplanes that attempt to turn cloudy wedding days into memorable events. Perhaps you'd rather opt for a service that helps your IT organization avoid rainy days. Enter cloud bursting ... yes, the one involving computers and networks instead of airplanes. Cloud bursting is a term that has been around in the tech realm for quite a few years. In essence, it is the ability to allocate resources across various public and private clouds as an organization's needs change. These needs could be economic drivers, such as Cloud 2 having lower cost than Cloud 1, or capacity drivers, where additional resources are needed during business hours to handle traffic. For intelligent applications, other interesting things are possible with cloud bursting: for example, demand in a geographical region may suddenly need capacity that is not local to the primary, private cloud. Here, one can spin up resources to serve the demand locally and provide a better user experience.

Nathan Pearce summarizes some of the aspects of cloud bursting in this minute-long video, which is a great resource to remind oneself of some of the nuances of this architecture. While cloud bursting is a term that is generally accepted by the industry as an "on-demand capacity burst," Lori MacVittie points out that this architectural solution eventually leads to a hybrid cloud, where multiple compute centers are employed to serve demand among both private-based and public-based resources, or clouds, all the time. The primary driver for this: practically speaking, there are limitations around how fast data that is critical to one's application (think databases, for example) can be replicated across the internet to different data centers. Thus, the promises of "on-demand" cloud bursting scenarios may be short-lived, eventually leaning in favor of multiple "always-on compute capacity centers" as loads increase for a given application. In any case, it is important to understand that multiple locations, across multiple clouds, will ultimately be serving application content in the not-too-distant future.

An example hybrid cloud architecture where services are deployed across multiple clouds. The "application stack" remains the same, using LineRate in each cloud to balance the local application, while a BIG-IP Local Traffic Manager balances application requests across all of the clouds.

Advantages of Cloud-Agnostic Load Balancing

As one might conclude from the cloud bursting and hybrid cloud discussion above, having multiple clouds running an application creates a need for user requests to be distributed among the resources and for automated systems to be able to control application access and flow. To provide the best control over how one's application behaves, it is optimal to use a load balancer to serve requests. No DNS or network routing changes need to be made, and clients continue using the application as they always did as resources come online or go offline; many times, too, these load balancers offer advanced functionality alongside the load balancing service that provides additional value to the application. Having a load balancer that operates the same way no matter where it is deployed becomes important when resources are distributed among many locations.
Understanding expectations around configuration, management, reporting, and behavior of a system limits issues for application deployments and discrepancies between how one platform behaves versus another. With a load balancer like F5's LineRate product line, anyone can programmatically manage the servers providing an application to users. Leveraging this programmatic control, application providers have an easy way to spin capacity up and down in any arbitrary cloud, retain a familiar yet powerful feature set for their load balancer, redistribute resources for an application, and provide a seamless experience back to the user. No matter where the load balancer is deployed, LineRate can work hand-in-hand with any web service provider, whether considered a cloud or not. Your data, and perhaps more importantly your cost centers, are no longer locked down to one vendor or one location. With the right application logic paired with LineRate Precision's scripting engine, an application can dynamically react to take advantage of market pricing or general capacity needs.

Consider the following scenarios where cloud-agnostic load balancers have advantages over vendor-specific ones (a minimal sketch of the orchestration pattern follows the list):

Economic Drivers

- Time-dependent instance pricing: spot instances with much lower cost become available at night. Example: my startup's billing system can take advantage of better pricing per unit of work in the public cloud at night versus the private datacenter.
- Multiple-vendor instance pricing: Cloud 2 just dropped their high-memory instance pricing lower than Cloud 1's, which is useful for your workload during normal business hours. Example: my application's primary workload is migrated to Cloud 2 with a simple config change.
- Competition: having multiple cloud deployments simultaneously increases competition, and thus your organization's negotiated pricing contracts become more attractive over time.

Computational Drivers

- Traffic spikes: someone in marketing just tweeted about our new product. All of a sudden, the web servers that traditionally handled all the loads thrown at them just fine are getting slashdotted by people all around North America placing orders. Instead of having humans react to the load and spin up new instances to handle it - or even worse, doing nothing - your LineRate system and application worked hand-in-hand to spin up a few instances in Microsoft Azure's Texas location and a few more in Amazon's Virginia region. This helps you distribute requests from geographically diverse locations: your existing datacenter in Oregon, the central US Microsoft cloud, and the east-coast-based Amazon cloud. Orders continue to pour in without any system downtime or, worse, lost customers.
- Compute orchestration: a mission-critical application in your organization's private cloud unexpectedly needs extra compute power but must stay internal for compliance reasons. Fortunately, your application can spin up public cloud instances and migrate traffic out of the private datacenter without affecting any users or data integrity. Your LineRate instance reaches out to Amazon to boot instances and migrate important data. More importantly, application developers and system administrators don't even realize the application has migrated, since everything behaves exactly the same in the cloud location. Once the cloud systems boot, F5's LTM and LineRate instances are alerted and migrate traffic to the new servers, allowing the mission-critical app to compute away. You just saved the day!
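To ground the compute-orchestration scenario, here is a minimal sketch of the pattern in Node.js (the language of LineRate Precision's scripting engine). Everything specific in it is an assumption made for illustration: the /metrics, /instances, and /pool/members endpoints, the hostnames, and the threshold are invented stand-ins, not LineRate's or any cloud vendor's actual REST API.

```javascript
// Burst-orchestration sketch: poll a load metric; when the pool is
// saturated, boot a cloud instance and register it with the load
// balancer. All endpoints, hosts, and numbers are invented stand-ins.
var http = require('http');

var LB = 'lb.example.com';       // load balancer management host (assumed)
var CLOUD = 'cloud.example.com'; // cloud provisioning API host (assumed)
var MAX_CONNS = 5000;            // burst threshold for this sketch

// Tiny JSON-over-HTTP helper used for all three calls below.
function call(host, method, path, body, cb) {
  var req = http.request({ host: host, method: method, path: path,
    headers: { 'Content-Type': 'application/json' } }, function (res) {
    var data = '';
    res.on('data', function (c) { data += c; });
    res.on('end', function () { cb(data ? JSON.parse(data) : {}); });
  });
  req.end(body ? JSON.stringify(body) : undefined);
}

// Every 30 seconds: check load, and burst if over the threshold.
setInterval(function () {
  call(LB, 'GET', '/metrics', null, function (stats) {
    if (stats.activeConnections > MAX_CONNS) {
      call(CLOUD, 'POST', '/instances', { image: 'app-v1' }, function (inst) {
        call(LB, 'POST', '/pool/members', { ip: inst.ip, port: 80 },
             function () { console.log('added ' + inst.ip + ' to pool'); });
      });
    }
  });
}, 30000);
```

Scaling back down is the mirror image: when the metric stays under a lower threshold, remove the member from the pool, wait for its connections to drain, then release the instance.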
The benefit of having a cloud-agnostic load balancing solution for connecting users with an organization's applications is not only a unified user experience, but also a powerful, unified way of controlling the application for its administrators. If all of a sudden an application needs to be moved from, say, a private datacenter with a 100 Mbps connection to a public cloud with a GigE connection, this can easily be done without having to relearn a new load balancing solution. F5's LineRate product is available for bare-metal deployments on x86 hardware and virtual machine deployments, and has recently released an Amazon Machine Image (AMI). All of these deployment types leverage the same familiar, powerful tools that LineRate offers: lightweight and scalable load balancing, modern management through its intuitive GUI or the industry-standard CLI, and automated control via its comprehensive REST API. LineRate Point Load Balancer provides hardened, enterprise-grade load balancing and availability services, whereas LineRate Precision Load Balancer adds powerful Node.js programmability, enabling developers and DevOps teams to leverage thousands of Node.js modules to easily create custom controls for application network traffic. Learn about some of LineRate's advanced scripting and functionality here, or try it out for free to see if LineRate is the right cloud-agnostic load balancing solution for your organization.
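As a footnote to the Node.js programmability mentioned above, here is a rough sketch of the kind of custom control such a scripting engine makes expressible: traffic that follows market pricing. It uses plain Node core modules, not LineRate's actual scripting API, and the pricing feed, hostnames, and pool names are all invented for illustration.

```javascript
// Price-reactive pool selection sketch. Plain Node core only; this is
// NOT LineRate's actual scripting API. The pricing feed and cloud
// hostnames are invented.
var http = require('http');

var pools = {
  cloud1: { host: 'app.cloud1.example.com', pricePerUnit: 1.0 },
  cloud2: { host: 'app.cloud2.example.com', pricePerUnit: 1.0 }
};

// Refresh per-unit prices each minute from a hypothetical pricing feed.
setInterval(function () {
  http.get({ host: 'pricing.example.com', path: '/spot' }, function (res) {
    var body = '';
    res.on('data', function (c) { body += c; });
    res.on('end', function () {
      var p = JSON.parse(body); // e.g. {"cloud1": 0.12, "cloud2": 0.08}
      pools.cloud1.pricePerUnit = p.cloud1;
      pools.cloud2.pricePerUnit = p.cloud2;
    });
  });
}, 60000);

// Called per request by a (hypothetical) scripting hook: prefer the
// cheaper cloud, so workload follows market pricing automatically.
function cheapestPool() {
  return pools.cloud1.pricePerUnit <= pools.cloud2.pricePerUnit
    ? pools.cloud1 : pools.cloud2;
}
```

A real deployment would also want hysteresis (don't flip pools on every price tick) and a health check on the chosen pool; both are omitted to keep the sketch short.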
Cloud bursting: what you need to know

It's the stuff of nightmares: after months of preparation and marketing, the launch day finally arrives, the "Go" button is pressed, and your creation is released into the wild. Shortly after that moment your website starts failing; too many people are trying to log on and the site simply cannot cope with the extra traffic. Whether it's a consumer electronics company announcing its latest smartphone or tablet, a band streaming its new album or an online retailer announcing a massive sale, spikes in traffic and usage can cause all sorts of problems. Sales can be lost and reputations can suffer. UK broadcaster ITV, for example, has suffered a couple of times in recent years with streaming content online - there was so much demand for the first episode of the second series of Downton Abbey that ITV's online player simply couldn't cope.

When infrastructure is pushed over its capacity it can be frustrating for all involved, but the simple answer of buying more and better infrastructure isn't really suitable. Traffic spikes are generally one-off or very rare events such as those listed above and do not justify the cost of additional hardware that would sit unused when the traffic spike recedes, particularly as tightening budgets and increased competition are forcing many businesses to sweat their assets and get more from existing infrastructure. This doesn't just apply to websites, though; any application or service is at risk of collapsing if it hits peak capacity.

That's where cloud bursting comes in. During peak periods an application that is running in a corporate data centre or in a private cloud can "burst" into a public cloud, providing the extra capacity needed to keep services running smoothly. It also means the company will only pay for this extra capacity when it is used, keeping costs down. This works by abstracting the application delivery requirements from the underlying infrastructure, enabling applications to span physical and virtual infrastructure in the data centre and in the cloud as demand dictates. Increasing an application's available resources by dynamically redirecting workloads as needed results in a more stable and reliable service for end users, which benefits all parties.

However, cloud bursting relies on data centre agility, which can sometimes be impacted by the network. Using something like F5's Cloud Bursting technology can eliminate those network bottlenecks by using metrics of real-time service behaviour to deliver demand-based workflow routing. Doing this over public and private data centres eliminates the restrictions of physical device, connectivity and capacity experienced within data centre silos. F5's Cloud Bursting solution also ensures that all relevant security policies are enforced when an application is burst to the cloud, meaning regulatory requirements will still be met.

Cloud bursting is a great way of ensuring applications can keep running through huge spikes in demand without forcing a business to pay for infrastructure that does nothing for long periods at a time. The business will only pay for the additional capacity it uses, and end users will be able to access what they want whenever they want to, even if huge numbers of others are doing exactly the same thing at the same time.
The Three Reasons Hybrid Clouds Will Dominate

In the short term, hybrid cloud is going to be the cloud computing model of choice. Amidst all the disconnect at CloudConnect regarding standards and where "cloud" is going was an undercurrent of adoption of what most have come to refer to as a "hybrid cloud computing" model. This model essentially "extends" the data center into "the cloud" and takes advantage of less expensive compute resources on-demand. What's interesting is the granularity of that on-demand use of cheaper compute. The time interval for which resources are utilized is measured more in project timelines than in minutes or even hours. Organizations need additional compute for lab and quality assurance efforts, for certification testing, for production applications for which budget is limited. These are not snap decisions but rather methodically planned steps along the project management lifecycle. It is on-demand in the sense that it's "when the organization needs it", and in the sense that it's certainly faster than the traditional compute resource acquisition process, which can take weeks or even months.

Also mentioned more than once by multiple panelists and speakers was the notion of separating workloads such that corporate data remains in the local data center while presentation layers and GUIs move into the cloud computing environment for optimal use of available compute resources. This model works well and addresses issues with data security and privacy, a constant top concern in surveys and polls regarding inhibitors of cloud computing.

It's not just the talk at the conference that makes such a conclusion probable. An Evans Data developer survey last year indicated that more than 60 percent of developers would be focusing on hybrid cloud computing in 2010. Results of the Evans Data Cloud Development Survey, released Jan. 12, show that 61 percent of the more than 400 developers polled said some portion of their organizations' IT resources "will move to the public cloud within the next year," Evans Data said. "However, over 87 percent [of the developers] say half or less than half of their resources will move ... As a result, the hybrid cloud is set to dominate the coming IT landscape."

There are three reasons why this model will become the de facto standard strategy for leveraging cloud computing, at least in the short term and probably for longer than some pundits (and providers) hope.
Cloud Bursting: Gateway Drug for Hybrid Cloud

The first hit's cheap, kid ... Recently Ben Kepes started a very interesting discussion on cloud bursting by asking whether or not it was real. This led to Christofer Hoff pointing out that "true" cloud bursting required routing based on business parameters. That needs to be extended to operational parameters, but in general, Hoff's on the mark in my opinion.

The core of the issue with cloud bursting, however, is not that requests must be magically routed to the cloud in an overflow situation (that seems to be universally accepted as part of the definition), but the presumption that the content must also be dynamically pushed to the cloud as part of the process, i.e. live migration. If we accept that presumption, then cloud bursting is nowhere near reality. Not because live migration can't be done, but because the time it requires prohibits a successful "just in time" bursting approach. There is already a requirement that provisioning of resources in the cloud in preparation for a bursting event happen well before the event; it's a predictive, proactive process, not a reactionary one, and the inclusion of live migration as part of the process would likely result in false provisioning events (where content is migrated prematurely based on historical trending which fails to continue and therefore does not result in an overflow situation). So this leaves us with cloud bursting as a viable architectural solution to scale on-demand only if we pre-position content in the cloud, with the assumption that provisioning is a less time-intensive process than migration plus provisioning. This results in a more permanent, hybrid cloud architecture.

THE ROAD to HYBRID

The constraints on the network today force organizations that wish to address their seasonal or periodic need for "overflow" capacity to pre-position the content in demand at a cloud provider. This isn't as simple as dropping a virtual machine in EC2; it also requires DNS modifications to be made and the implementation of the policy that will ultimately trigger the routing to the cloud campus. Equally important - actually, perhaps more important - is having the process in place that will actually provision the application at the cloud campus. In other words, the organization is building out the foundation for a hybrid cloud architecture. But in terms of real usage, the cloud-deployed resources may only be used when overflow capacity is required. So it's only used periodically. But as the user base grows, so does the need for that capacity, and organizations will see those resources provisioned more and more often, until they're virtually always on. There's obviously an inflection point at which the use of cloud-based resources moves out of the realm of "overflow capacity" and into the realm of "capacity", period. At that point, the organization is in possession of a full, hybrid cloud implementation.

LIMITATIONS IMPOSE the MODEL

Some might argue - and I'd almost certainly concede the point - that a cloud bursting model that requires pre-positioning in the first place is a hybrid cloud model and not the original intent of cloud bursting. The only substantive counterargument I could offer is that cloud bursting focuses more on the use of the resources than on the model by which they are used. It's the on-again, off-again nature of the resources deployed at the cloud campus that makes it cloud bursting, not the underlying model.
Regardless, existing limitations on bandwidth force the organization's hand; there's virtually no way to avoid implementing what is a foundation for hybrid cloud as a means to execute on a cloud bursting strategy (which is probably a more accurate description of the concept than tying it to a technical implementation, but I'm getting off on a tangent now). The decision to embark on a cloud bursting initiative, therefore, should be made with the foresight that it requires essentially the same effort and investment as a hybrid cloud strategy. Recognizing that up front enables a broader set of options for using those cloud campus resources, particularly the ability to leverage them as true "utility" computing rather than an application-specific (i.e. dedicated) set of resources. Because of the requirement to integrate and automate to achieve either model, organizations can architect both with an eye toward future integration needs - such as those surrounding identity management, which continues to balloon as a source of concern for those focusing in on SaaS and PaaS integration.

Whether or not we'll solve the issues with live migration as a barrier to "true" cloud bursting remains to be seen. As we've never managed to adequately solve the database replication issue (aside from accepting eventual consistency as reality), however, it seems likely that a "true" cloud bursting implementation may never be possible for organizations who aren't mainlining the Internet backbone.
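The "on-again, off-again" use of pre-positioned cloud resources comes down to a steering decision at the delivery tier: send overflow to the cloud campus while the local pool is saturated, and let the cloud side drain naturally when demand recedes. A minimal sketch of that decision follows, with all capacities and addresses invented for illustration.

```javascript
// Overflow steering sketch: new requests go to a pre-positioned cloud
// pool only while the local pool is over capacity. Existing cloud users
// finish their requests naturally, so "scale-down" needs no migration.
// All addresses and numbers below are invented for illustration.
var http = require('http');

var LOCAL = { host: '10.0.0.5', port: 80 };          // datacenter VIP (assumed)
var CLOUD = { host: 'burst.example.com', port: 80 }; // cloud campus VIP (assumed)
var LOCAL_CAPACITY = 1000;
var localActive = 0;

http.createServer(function (req, res) {
  // Burst only when the local pool is saturated.
  var target = (localActive < LOCAL_CAPACITY) ? LOCAL : CLOUD;
  if (target === LOCAL) {
    localActive++;
    res.on('finish', function () { localActive--; });
  }
  var upstream = http.request({
    host: target.host, port: target.port,
    path: req.url, method: req.method, headers: req.headers
  }, function (up) {
    res.writeHead(up.statusCode, up.headers);
    up.pipe(res);
  });
  req.pipe(upstream);
}).listen(8080);
```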
Bursting the Cloud

The cloud computing craze is leading to some interesting new terms. Cloudware and cloudbursting are two terms I particularly like for their ability to describe specific computing models based on cloud computing. Today we're going to look at cloudbursting, which is basically a new twist on an old concept. Cloudbursting appears to marry the traditional safe enterprise computing model with cloud computing; in essence, bursting into the cloud when necessary or using the cloud when additional compute resources are required temporarily. Jeff at the Amazon Web Services Blog talks about the inception of this term as applied to the latter and describes it in his blog post as a method used by Thomas Brox Røst to regenerate a number of dynamic pages in 5 hours rather than the 7 hours that would be required if he had attempted such a feat internally. His approach is further described on the High Scalability Blog.

Cloudbursting can also be used to shoulder the burden of some of an application's processing. For example, basic application functionality could be provided from within the cloud while more critical (e.g. revenue-generating) applications continue to be served from within the controlled enterprise data center. This assumes that only a portion of consumers will actually be interacting with the data-driven side of a web site (customer management, process visibility, etc...) while the greater portion will simply be browsing around on the non-interactive, as it were, side of the site.

Bursting has traditionally been applied to resource allocation and automated provisioning/de-provisioning of resources, historically focused on bandwidth. Today, in the cloud, it is being applied to resources such as servers, application servers, application delivery systems, and other infrastructure required to provide on-demand computing environments that expand and contract as necessary, without manual intervention. This requires the ability to automate the cloud's data center. Data center automation in a cloud computing environment, regardless of the opacity of the model, requires more than simple workflow systems. It requires on-demand control and management over all devices in the delivery chain, from the storage to the application and web servers to the load balancers and acceleration offerings that deliver the applications to end users. This is more akin to data center orchestration than automation, as it requires that many moving parts and pieces be coordinated to perform a highly complex set of tasks seamlessly and with as little manual intervention as possible. This is one of the foundational requirements of a cloud computing infrastructure: on-demand, automated scalability.

Data center automation is nothing new. Hosting and service providers have long automated their data centers in order to reduce the cost of customer acquisition and management, and to improve the efficiency of provisioning and de-provisioning processes. These benefits can also be realized inside the data center, regardless of the model being employed. The same automation required for smooth, cost-effective management of a cloud computing data center can be utilized to achieve smooth, cost-effective management of an enterprise data center. The hybrid application deployment model involving cloud computing requires additional intelligence on the part of the application delivery network.
The application delivery network must be able to understand what is being requested and where it resides; it must be able to intelligently route requests. This, too, is a fundamental attribute of cloud computing infrastructure: intelligence. When distributing an application across multiple locations, whether local servers or remote data centers or "in the cloud", it becomes necessary for a controlling node to properly route those requests based on application data. In a less sophisticated model, global load balancing could be substituted as a means of directing requests to the appropriate site, a task for which global load balancers seem a perfect fit.

A hybrid approach like cloudbursting seems particularly appealing. Enterprises seem reluctant to move business-critical applications into the cloud at this juncture but are likely more willing to assign responsibility to an outsourced provider for less critical application functionality with variable volume requirements, which fits well with an on-demand resource bursting model. Cloudbursting may be one solution that makes everyone happy.
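A toy illustration of routing on application data, as described above: interactive, data-driven paths stay in the enterprise data center, while read-mostly browsing is served from the cloud. The path rules and site names are made up for illustration.

```javascript
// Request-aware site selection sketch: data-driven, revenue-critical
// paths stay in the enterprise data center; read-mostly browsing goes
// to the cloud. Path rules and hostnames are invented.
var DATACENTER = 'dc.example.com';
var CLOUD = 'browse.cloud.example.com';

function pickSite(url) {
  // Interactive paths touch customer data and stay internal.
  return /^\/(cart|account|checkout|admin)/.test(url) ? DATACENTER : CLOUD;
}

console.log(pickSite('/checkout/confirm')); // -> dc.example.com
console.log(pickSite('/products/12345'));   // -> browse.cloud.example.com
```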
The Conspecific Hybrid Cloud

Operational consistency and control continue to be a driving force in hybrid cloud architectures. When you're looking to add new tank mates to an existing aquarium ecosystem, one of the concerns you must have is whether a particular breed of fish is amenable to conspecific cohabitants. Many species are not, which means if you put them together in a confined space, they're going to fight. Viciously. To the death. Responsible aquarists try to avoid such situations, so careful attention to the conspecificity of animals is a must. Now, while in many respects the data center ecosystem correlates well to an aquarium ecosystem, in this case it does not. That's what you usually get today, but it's not actually the best model. That's because what you want in the data center ecosystem - particularly when it extends to include public cloud computing resources - is conspecificity in infrastructure. This desire and practice is being seen both in enterprise data center decision making and in startups suddenly dealing with massive growth and increasingly encountering performance bottlenecks over which IT has no control.

OPERATIONAL CONSISTENCY

One of the biggest negatives of a hybrid architectural approach to cloud computing is the lack of operational consistency. While enterprise systems may be unified and managed via a common platform, resources and delivery services in the cloud are managed using very different systems and interfaces. This poses a challenge for all of IT, but is particularly an impediment to those responsible for devops - for integrating and automating provisioning of the application delivery services required to support applications. It requires diverse sets of skills - often those peculiar to developers, such as programming and standards knowledge (SOAP, XML) - as well as those traditionally found in the data center.

"We own the base, rent the spike. We want a hybrid operation. We love knowing that shock absorber is there." - Allan Leinwand, Zynga's Infrastructure CTO

Other bottlenecks were found in the networks to storage systems, Internet traffic moving through Web servers, firewalls' ability to process the streams of traffic, and load balancers' ability to keep up with constantly shifting demand. Zynga uses Citrix Systems CloudStack as its virtual machine management interface superimposed on all zCloud VMs, regardless of whether they're in the public cloud or private cloud. - Inside Zynga's Big Move To Private Cloud, by InformationWeek's Charles Babcock

This operational inconsistency also poses a challenge in the codification of policies across the security, performance, and availability spectrum, as diverse systems often require very different methods of encapsulating policies. Amazon security groups are not easily codified in enterprise-class systems, and vice versa. Similarly, the options available to distribute load across instances required to achieve availability and performance goals are impeded by lack of consistent support for algorithms across load balancing services, as well as differences in visibility and health monitoring that prevent a cohesive set of operational policies from governing the overall architecture. Thus, if hybrid cloud is to become the architectural model of choice, it becomes necessary to unify operations across all environments - whether public or enterprise.
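To make the policy-codification problem concrete, consider one canonical rule rendered into two provider-specific shapes. Both output formats below are deliberately simplified stand-ins, loosely shaped like a security-group rule and a firewall ACL line; neither is Amazon's or any vendor's real schema.

```javascript
// One canonical policy, two provider dialects (both simplified
// stand-ins, not any vendor's real schema). Intent is authored once
// and rendered per environment.
var policy = { name: 'web-in',
               allow: { proto: 'tcp', port: 443, from: '0.0.0.0/0' } };

function toPublicCloudRule(p) {
  // Shaped loosely like a security-group rule (illustrative only).
  return {
    GroupName: p.name,
    IpPermissions: [{ IpProtocol: p.allow.proto,
                      FromPort: p.allow.port, ToPort: p.allow.port,
                      IpRanges: [{ CidrIp: p.allow.from }] }]
  };
}

function toEnterpriseAcl(p) {
  // Shaped loosely like a firewall ACL line (illustrative only).
  return 'permit ' + p.allow.proto + ' ' + p.allow.from +
         ' any eq ' + p.allow.port;
}

console.log(JSON.stringify(toPublicCloudRule(policy), null, 2));
console.log(toEnterpriseAcl(policy));
```

The point is not the formats themselves but the direction of authorship: capture intent once and render it per environment, which is what operational consistency across a hybrid cloud requires.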
UNIFIED OPERATIONS

We are seeing this demand more and more as enterprise organizations seek out ways to integrate cloud-based resources into existing architectures to support a variety of business needs - disaster recovery, business continuity, and spikes in application demand. What customers are demanding is a unified approach to integrating those resources, which means infrastructure providers must be able to offer solutions that can be deployed both in a traditional enterprise-class model and in a public cloud environment. This is also true for organizations that may have started in the cloud but are now moving to a hybrid model in order to seize control of the infrastructure as a means to address performance bottlenecks that simply cannot be addressed by cloud providers due to the innate nature of a shared model.

This ability to invoke and coordinate both private and public clouds is "the hidden jewel" of Zynga's success, says Allan Leinwand, CTO of infrastructure engineering at the company. - Lessons From FarmVille: How Zynga Uses The Cloud

While much is made of Zynga's "reverse cloud-bursting" business model, what seems to be grossly overlooked is the conspecificity of infrastructure required to move seamlessly between the two worlds. Whether at the virtualization layer or at the delivery infrastructure layer, a consistent model of operations is a must to transparently take advantage of the business benefits inherent in a cross-environment, aka hybrid, cloud model of deployment. As organizations converge on a hybrid model, they will continue to recognize the need for and advantages of an operationally consistent model - and they are demanding it be supported. Whether it's Zynga imposing CloudStack on its own infrastructure to maintain compatibility and consistency with its public cloud deployments or enterprise IT requiring public-cloud-deployable equivalents of traditional enterprise-class solutions, the message is clear: operational consistency is a must when it comes to infrastructure.

H/T @Archimedius, "The Hybrid Cloud is the Future of IT Infrastructure"
F5 Friday: Elastic Applications are Enabled by Dynamic Infrastructure

You really can't have the one without the other. VMware enables the former, F5 provides the latter.

The use of public cloud computing as a means to expand compute capacity on-demand, a la during a seasonal or unexpected spike in traffic, is often called cloud bursting, and we've been talking about it (at least in the hypothetical sense) for some time now. When we first started talking about it, the big question was, of course: but how do you get the application into the cloud in the first place? Everyone kind of glossed over that because there was no real way to do it on-demand.

OVERCOMING the OBSTACLES BIT by BIT and BYTE by BYTE

The challenges associated with dynamically moving a live, virtually deployed application from one location to another were not trivial, but neither were they insurmountable. Early on these challenges were directly associated with differences in networking and issues with the distances over which a virtual image could be successfully transferred. As the industry began to address those challenges, others came to the fore. It's not enough, after all, to just transfer a virtual machine from one location to another - especially if you're trying to do so on-demand, in response to some event. You want to migrate that application while it's live and in use, and you don't want to disrupt service to do it, because no matter what optimizations and acceleration techniques are used to mitigate the transfer time between locations, it's still going to take some time. The whole point of cloud bursting is to remain available, and if the process to achieve that dynamic growth defeats the purpose, well, it seems like a silly thing to do, doesn't it? Now that we've gotten past that problem, another one rears its head: the down side. Not the negatives, no, the other down side - the scaling-down side of cloud bursting. Remember, the purpose of performing this technological feat in the first place is dynamic scalability, to enable an elastic application that scales up and down on-demand. We want to be able to leverage the public cloud when we need it but not when we don't, to really realize the benefits of cloud and its lower cost of compute capacity.

FORGING AHEAD

F5 has previously proven that a live migration of an application is not only possible, but feasible. This week at VMworld we took the next step: elastic applications. Yes, we not only proved you can burst an application into the cloud and scale up while live and maintaining availability, but that you can also scale back down when demand decreases. The ability to also include a BIG-IP LTM Virtual Edition with the cloud-deployed application instance means you can consistently apply any application delivery policies necessary to maintain security, consistent application access policies, and performance. The complete solution relies on products from F5 and VMware to monitor application response times and expand into the cloud when they exceed predetermined thresholds. Once in the cloud, the solution can further expand capacity as needed based on application demand. The solution comprises:

- VMware vCloud Director: a manageable, scalable platform for cloud services, along with the necessary APIs to provision capacity on demand.
- F5 BIG-IP® Local Traffic Manager™ (LTM): one in each data center and/or cloud, providing management and monitoring to ensure application availability.
  Application conditions are reported to the orchestration tool of choice, which then triggers actions (scale up or down) via the VMware vCloud API. Encryption and WAN optimization for SQLFabric communications between the data center and the cloud are also leveraged for security and performance.
- F5 BIG-IP® Global Traffic Manager™ (GTM): determines when and how to direct requests to the application instances in different sites or cloud environments based on pre-configured policies that dynamically respond to application load patterns. Global application delivery (load balancing) is critical for enabling cloud bursting when public-cloud-deployed applications are not integrated via a virtual private cloud architecture.
- VMware GemStone SQLFabric: provides the distributed caching and replication of database objects between sites (cloud and/or data center) necessary to keep application content localized and thereby minimize the performance impact of latency between the application and its data.

I could talk and talk about this solution, but if a picture is worth a thousand words then this video ought to be worth at least that much in demonstrating the capabilities of this joint solution. If you're like me and not into video (I know, heresy, right?) then I invite you to take a gander at some more traditional content describing this and other VMware-related solutions:

- A Hybrid Cloud Architecture for Elastic Applications with F5 and VMware - Overview
- Hybrid Cloud Application Architecture for Elastic Java-Based Web Applications - Deployment Guide
- F5 and VMware Solution Guide

If you do like video, however, enjoy this one explaining cloud bursting for elastic applications in a hybrid cloud architecture.

Related blogs and articles:

- Bursting the Cloud
- vMotion Layer 2 Adjacency Requirements
- Cloud-bursting and the Database
- Cloud Balancing, Cloud Bursting, and Intercloud
- Cloud Balancing, Reverse Cloud Bursting, and Staying PCI-Compliant
- Virtual Private Cloud (VPC) Makes Internal Cloud bursting Reality
- How Microsoft is bursting into the cloud with BizTalk
- So You Put an Application in the Cloud. Now what?
- Migrate a live application across clouds with no downtime? Sure ...
- Just in Case. Bring Alternate Plans to the Cloud Party
- CloudFucius Asks: Will Open Source Open Doors for Cloud Computing?
- The Three Reasons Hybrid Clouds Will Dominate
- Pursuit of Intercloud is Practical not Premature
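Reduced to its control loop, the solution above watches application response time and triggers scale actions when thresholds are crossed. The sketch below is only the shape of that loop: the thresholds, the probe URL, and the /provision endpoint are invented stand-ins for the real BIG-IP monitoring and VMware vCloud API integration described above.

```javascript
// Response-time-driven scale out/in sketch. Thresholds, probe URL, and
// the /provision endpoint are invented; the actual solution drives the
// VMware vCloud API from BIG-IP health metrics.
var http = require('http');

var SCALE_UP_MS = 800;   // add capacity above this response time
var SCALE_DOWN_MS = 200; // remove capacity below this; the gap between
                         // the two thresholds prevents flapping
var cloudInstances = 0;

function probe(cb) {
  var started = Date.now();
  http.get({ host: 'app.example.com', path: '/health' }, function (res) {
    res.resume(); // drain the body; we only care about elapsed time
    res.on('end', function () { cb(Date.now() - started); });
  });
}

function orchestrate(action) {
  // Stand-in for a vCloud-style provisioning call.
  http.request({ host: 'orchestrator.example.com', path: '/provision',
                 method: 'POST' }).end(JSON.stringify({ action: action }));
}

setInterval(function () {
  probe(function (ms) {
    if (ms > SCALE_UP_MS) { cloudInstances++; orchestrate('scale-up'); }
    else if (ms < SCALE_DOWN_MS && cloudInstances > 0) {
      cloudInstances--; orchestrate('scale-down');
    }
  });
}, 15000);
```

The gap between the two thresholds is deliberate hysteresis: a response time hovering near a single threshold would otherwise cause the loop to flap between scale-up and scale-down.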
Live Migration versus Pre-Positioning in the Cloud

The secret to live migration isn't just a fat, fast pipe - it's a dynamic infrastructure.

Very early on in the cloud computing hype cycle we posited about different use cases for the "cloud". One that remains intriguing, and increasingly possible thanks to a better understanding of the challenges associated with the process, is cloud bursting. The first time I wrote about cloud bursting and detailed the high-level process, the inevitable question that remained was, "Well, sure, but how did the application get into the cloud in the first place?" Back then there was no good answer because no one had really figured it out yet. Since that time, however, many niche solutions have grown up that provide just that functionality, in addition to the ability to achieve such a "migration" using virtualization technologies. You just choose a cloud and click a button and voila! Yeah. Right. It may look that easy, but under the covers a lot more detail is required than might at first meet the eye. Especially when we're talking about live migration.

LIVE MIGRATION versus PRE-POSITIONING

Many architectural cloud bursting solutions require pre-positioning of the application. In other words, the application must have been transferred into the cloud before it was needed to fulfill additional capacity demands on applications experiencing suddenly high volume. It assumed, in a way, that operators were prescient and budgets were infinite. While it's true you only pay when an image is active in the cloud, there can be storage costs associated with pre-positioning, as well as the inevitable wait time between seeing the need and filling the need for additional capacity. That's because launching an instance in a cloud computing environment is never immediate. It takes time, sometimes as long as ten minutes or more. So either your operators must be able to see ten minutes into the future, or it's possible that the challenge for which you're implementing a cloud bursting strategy (handling overflow) won't be addressed at all.

Enter live migration. Live migration of applications attempts to remove the issues inherent in pre-positioning (or no positioning at all) by migrating on-demand to a cloud computing environment while maintaining availability of the application. That means the architecture must be capable of (see the state-machine sketch below):

- Transferring a very large virtual image across a constrained WAN connection in a relatively short period of time
- Launching the cloud-hosted application
- Recognizing the availability of the cloud-hosted application and somehow directing users to it
- Siphoning users off (quiescing) the cloud-hosted application when demand decreases
- Taking the cloud-hosted application down once no more users are connected to it

Reading between the lines you should see a common theme: collaboration. The ability to recognize and act on what are essentially "events" occurring in the process requires awareness of the process and a level of collaboration traditionally not found in infrastructure solutions.

CLOUD is an EXERCISE in INFRASTRUCTURE INTEGRATION

Sound familiar? It should. Live migration, and even the ability to leverage pre-positioned content in a cloud computing environment, is at its core an exercise in infrastructure integration.
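The five capabilities above amount to a small lifecycle state machine. Here is a sketch; the state names and transition signals are invented for illustration, and in practice each signal would come from infrastructure telemetry (transfer status, health monitors, connection counts).

```javascript
// Burst lifecycle as a state machine, mirroring the five capabilities
// listed above. States and transition checks are invented; real
// transitions would be driven by infrastructure telemetry.
var states = {
  idle:         function (s) { return s.loadHigh ? 'transferring' : 'idle'; },
  transferring: function (s) { return s.imageReady ? 'launching' : 'transferring'; },
  launching:    function (s) { return s.healthCheckPassing ? 'serving' : 'launching'; },
  serving:      function (s) { return s.loadHigh ? 'serving' : 'quiescing'; },
  quiescing:    function (s) { return s.activeUsers === 0 ? 'teardown' : 'quiescing'; },
  teardown:     function ()  { return 'idle'; }
};

var current = 'idle';
function step(signals) {
  var next = states[current](signals);
  if (next !== current) console.log(current + ' -> ' + next);
  current = next;
}

// Example drive-through of one burst event:
step({ loadHigh: true });                           // idle -> transferring
step({ loadHigh: true, imageReady: true });         // transferring -> launching
step({ loadHigh: true, healthCheckPassing: true }); // launching -> serving
step({ loadHigh: false });                          // serving -> quiescing
step({ loadHigh: false, activeUsers: 0 });          // quiescing -> teardown
step({});                                           // teardown -> idle
```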
There must be collaboration and sharing of context, automation as well as orchestration of processes, to realize the benefits of applications deployed in "the cloud." Global application delivery services must be able to monitor and infer health at the site level, and in turn local application delivery services must monitor and infer the health and capacity of the application if cloud bursting is to successfully support the resiliency and performance requirements of application stakeholders, i.e. the business. The relationship between capacity, location, and performance of applications is well-known. The problem is pulling together all the disparate variables from the client, application, and network components, each of which individually holds some of the necessary information - but not all of it. These variables comprise context, and it requires collaboration across all three "tiers" of an application interaction to determine on-demand where any given request should be directed in order to meet service level expectations. That sharing, that collaboration, requires integration of the infrastructure components responsible for directing, routing, and delivering application data between clients and servers, especially when they may be located in physically diverse locations.

As customers begin to really explore how to integrate and leverage cloud computing resources and services with their existing architectures, it will become more and more apparent that at the heart of cloud computing is a collaborative and much more dynamic data center architecture - and that without the ability not just to automate and orchestrate, but to integrate and collaborate infrastructure across highly diverse environments, cloud computing - aside from SaaS - will not achieve the successes predicted for it.

- Cloud is an Exercise in Infrastructure Integration
- IT as a Service: A Stateless Infrastructure Architecture Model
- Cloud is the How not the What
- Cloud-Tiered Architectural Models are Bad Except When They Aren't
- Cloud Chemistry 101
- You Can't Have IT as a Service Until IT Has Infrastructure as a Service
- Cloud Computing Making Waves
- All Cloud Computing Posts on DevCentral
How Sears Could Have Used the Cloud to Stay Available Black Friday

The predictions of the death of online shopping this holiday season were, apparently, greatly exaggerated. As has been reported, Sears, along with several other well-known retailers, was a victim of heavy traffic on Black Friday. One wonders if the reports of a dismal shopping season this year due to economic concerns led retailers to believe that there would be no seasonal rush to online sites and therefore that preparation to deal with sudden spikes in traffic was unnecessary.

Most of the 63 objects (375 KB of total data) comprising the sears.com home page are served from sears.com and are either images, scripts, or stylesheets. The rest of their site is similar, with static data comprising a large portion of the objects. That's a lot of static data being served, and a lot of connections required on the servers for just one page. Not knowing Sears' internal architecture, it's quite possible they are already using application delivery and acceleration solutions to ensure availability and responsiveness of their site. If they aren't, they should, because even the simple connection optimizations available in today's application delivery controllers would likely have drastically reduced the burden on servers and increased the capacity of their entire infrastructure.

But let's assume they are already using application delivery to its fullest and simply expended all possible capacity on their servers, despite their best efforts, due to the unexpectedly high volume of visitors. It happens. After all, server resources are limited in the data center, and when the servers are full up, they're full up. Assuming that Sears, like most IT shops, isn't willing to purchase additional hardware and incur the associated management, power, and maintenance costs over the entire year simply to handle a seasonal rush, they still could have prepared for the onslaught by taking advantage of cloud computing.

Cloudbursting is an obvious solution: visitors who pushed Sears' servers over capacity would have been automatically directed via global load balancing techniques to a cloud-hosted version of their site. Not only could they have managed to stay available, this would also have improved the performance of their site for all visitors, as cloudbursting can use a wide array of variables to determine when requests should be directed to the cloud, including performance-based parameters.

A second option would have been a hybrid cloud model, where certain files and objects are served from the local data center while others are served from the cloud. Instead of serving up static stylesheets and images from Sears' internal servers, they could easily have been hosted in the cloud. Doing so would translate into fewer requests to Sears' internal servers, which reduces the processing power required and results in higher server capacity.

A third option would have been to commit fully to the cloud and move their entire application infrastructure there, but even though adoption appears to be imminent for many enterprises according to attendees at the Gartner Data Center Conference, 2008 is certainly not "the year of the cloud", and there are still quite a few kinks in full adoption plans that need to be ironed out before folks can commit fully, such as compliance and integration concerns. Still, there are ways that Sears, and any organization with a web presence, could take advantage of the cloud without committing fully, to ensure availability under exceedingly high volume.
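The hybrid option can be as simple as rewriting static asset references at the delivery tier so browsers fetch images, scripts, and stylesheets from a cloud origin instead of the origin servers. A toy sketch, with the cloud hostname made up:

```javascript
// Hybrid static-offload sketch: rewrite references to images, scripts,
// and stylesheets so browsers fetch them from a cloud origin, cutting
// connections to the origin servers. The cloud hostname is made up.
var CLOUD_ORIGIN = 'http://static.cloud.example.com';

function offloadStatics(html) {
  // Point root-relative asset URLs at the cloud origin.
  return html.replace(/(src|href)="\/([^"]+\.(?:png|jpg|gif|css|js))"/g,
                      '$1="' + CLOUD_ORIGIN + '/$2"');
}

var page = '<img src="/images/logo.png">' +
           '<link href="/css/site.css" rel="stylesheet">';
console.log(offloadStatics(page));
// -> <img src="http://static.cloud.example.com/images/logo.png">...
```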
It just takes some forethought and planning. Yeah, I'm thinking it too, but I'm not going to say it either.

Related articles by Zemanta:

- Online retailers overloaded on Black Friday
- Online-only outlets see Black Friday boost over 2007
- Sears.com out on Black Friday [Breakdowns]
- The Context-Aware Cloud
- Top 10 Reasons for NOT Using a Cloud
- Half of Enterprises See Cloud Presence by 2010
Cloud computing conundrum causing confusion

It seems that every time a new technology breaks through the surface, a hundred "experts", vendors, and standards bodies appear like moths to a flame, attempting to define the term such that only "they" have the answer, the solution, the standard, or the product. When my son mentioned a research paper he wrote on cloud computing (which you still haven't sent me, by the way) he did so while disagreeing with a previous post of mine on the subject. He was quite vehement that grid computing did not equal cloud computing, and seemed almost shocked that I would dare to associate the two in any way. I tell you, if a family can't agree on the definition of cloud computing, the industry certainly won't do so any time soon.

There are a lot of other folks out there trying to specify what you can and cannot call cloud computing - and arguing with them is like poking a badger. They get real mean and ornery when you disagree, and if you aren't careful, you might get your eyes scratched out. The first bit of sanity I've seen in these discussions comes from Gartner fellow David Mitchell Smith, quoted in a Data Center Knowledge post:

"The term cloud computing has come to mean two very different things: a broader use that focuses on 'cloud,' and a more-focused use on system infrastructure and virtualization," said David Mitchell Smith, vice president and Gartner Fellow. "Mixing the discussion of 'cloud-enabling technologies' with 'cloud computing services' creates confusion."

But even as I nod in agreement with the rationality of his statement, I have to ask: if you use cloud-enabling technologies to build out an infrastructure that delivers services, what do you end up with? Yeah, sure seems like you'd end up with cloud computing services, wouldn't you? Looks to me like we end up right back at square one, regardless of whether we separate the two concepts or not.

I see cloud computing architecture in the same light as I see SOA. It's conceptual, it's a reference architecture, it's a set of best practices. There isn't an RFC specifying what you MUST or SHOULD implement - and how - in order to qualify as a cloud computing architecture. I'm not going to try to define cloud computing, or determine what is or is not cloud computing, because in the end I don't think it really matters all that much to the folks in the trenches who have a job to get done. And whether we're talking about Google App style cloud computing services (cloudware), enterprise cloud computing architectures, hybrid cloudbursting architectures, or cloud computing service providers, there's one thing that remains certain in my mind: without the right infrastructure, cloud computing won't work, no matter where it is or what it's called.