on 15-May-2015 12:10
When researching cloud bursting, there are many directions Google may take you. Perhaps you come across services for airplanes that attempt to turn cloudy wedding days into memorable events. Perhaps you'd rather opt for a service that helps your IT organization avoid rainy days. Enter cloud bursting ... yes, the one involving computers and networks instead of airplanes.
Cloud bursting is a term that has been around in the tech realm for quite a few years. In essence, it is the ability to allocate resources across various public and private clouds as an organization's needs change. Those needs could be economic (Cloud 2 costing less than Cloud 1, for example) or capacity-driven, where additional resources are needed during business hours to handle traffic. For intelligent applications, cloud bursting makes other interesting things possible: for example, when demand in a geographic region suddenly requires capacity that is not local to the primary, private cloud, one can spin up resources near that demand and provide a better user experience. Nathan Pearce summarizes some of the aspects of cloud bursting in this minute-long video, which is a great resource for reminding oneself of the nuances of this architecture.
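The capacity-driven case above can be sketched in a few lines of Node.js-style JavaScript. This is only an illustration: the function names, thresholds, and the request-per-second units are assumptions for the sketch, not part of any LineRate API.

```javascript
// Decide whether to burst into a secondary cloud based on current load.
// All names and thresholds are illustrative assumptions, not a real API.
function shouldBurst(currentRps, privateCapacityRps, headroom) {
  // Burst once the private cloud exceeds its headroom target,
  // e.g. headroom = 0.8 means burst at 80% utilization.
  return currentRps > privateCapacityRps * headroom;
}

function planCapacity(currentRps, privateCapacityRps, headroom) {
  if (!shouldBurst(currentRps, privateCapacityRps, headroom)) {
    return { burst: false, extraRps: 0 };
  }
  // Request enough secondary-cloud capacity to restore headroom locally.
  const extraRps = Math.ceil(currentRps - privateCapacityRps * headroom);
  return { burst: true, extraRps: extraRps };
}
```

For instance, `planCapacity(900, 1000, 0.8)` asks the secondary cloud for 100 requests per second of extra capacity, since the private cloud's 80% comfort level is 800.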
While Cloud Bursting is generally accepted by the industry as an "on-demand capacity burst," Lori MacVittie points out that this architectural solution eventually leads to a Hybrid Cloud, in which multiple compute centers, spanning both private and public resources (or clouds), serve demand all the time. The primary driver for this: practically speaking, there are limits on how fast data that is critical to one's application (think databases, for example) can be replicated across the internet to different data centers. Thus, the promise of "on-demand" cloud bursting scenarios may be short-lived, eventually giving way to multiple "always-on compute capacity centers" as loads increase for a given application. In any case, it is important to understand that multiple locations, across multiple clouds, will ultimately be serving application content in the not-too-distant future.
As one might conclude from the Cloud Bursting and Hybrid Cloud discussion above, running an application across multiple clouds creates a need to distribute user requests among the resources and to let automated systems control application access and flow. To provide the best control over how one's application behaves, it is optimal to serve requests through a load balancer: no DNS or network routing changes need to be made, and clients continue using the application as they always did while resources come online or go offline. These load balancers also often offer advanced functionality alongside the load balancing service itself that provides additional value to the application. When resources are distributed among many locations, it becomes important to have a load balancer that operates the same way no matter where it is deployed. Consistent expectations around configuration, management, reporting, and behavior of a system limit issues for application deployments and discrepancies between how one platform behaves versus another.
With a load balancer like F5's LineRate product line, anyone can programmatically manage the servers providing an application to users. Leveraging this programmatic control, application providers have an easy way to spin capacity up and down in any arbitrary cloud, retain a familiar yet powerful feature set for their load balancer, redistribute resources for an application, and provide a seamless experience back to the user. No matter where it is deployed, LineRate can work hand-in-hand with any web service provider, whether considered a cloud or not. Your data, and perhaps more importantly your cost centers, are no longer locked down to one vendor or one location. With the right application logic paired with LineRate Precision's scripting engine, an application can dynamically react to take advantage of market pricing or general capacity needs. Consider the following scenarios where a cloud-agnostic load balancer has advantages over vendor-specific ones:
Economic Drivers
Computational Drivers
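The economic-driver scenario, shifting load toward whichever cloud is currently cheapest while respecting available capacity, might look something like the sketch below. The cloud list, its field names, and the idea of a polled price feed are hypothetical inputs for illustration; they are not part of LineRate's product.

```javascript
// Pick the cheapest cloud that can still absorb the required capacity.
// The cloud objects and their fields are hypothetical, e.g. populated
// by a spot-price feed polled from an orchestration script.
function cheapestCloud(clouds, requiredRps) {
  return clouds
    .filter(function (c) { return c.freeRps >= requiredRps; })
    .sort(function (a, b) { return a.pricePerHour - b.pricePerHour; })[0] || null;
}

var clouds = [
  { name: 'private-dc', pricePerHour: 0.30, freeRps: 200 },
  { name: 'cloud-a',    pricePerHour: 0.12, freeRps: 1000 },
  { name: 'cloud-b',    pricePerHour: 0.09, freeRps: 50 }
];
// cheapestCloud(clouds, 500) skips 'cloud-b' (too little free capacity)
// and 'private-dc', and selects 'cloud-a'.
```

The same selection function covers the computational-driver case as well: setting the price of every cloud equal simply reduces the decision to "who has the capacity."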
A cloud-agnostic load balancing solution for connecting users with an organization's applications not only provides a unified user experience, but also gives administrators a powerful, unified way of controlling the application. If an application suddenly needs to be moved from, say, a private datacenter with a 100 Mbps connection to a public cloud with a GigE connection, the move can easily be made without having to relearn a new load balancing solution.
F5's LineRate product is available for bare-metal deployment on x86 hardware, for virtual machine deployment, and, most recently, as an Amazon Machine Image (AMI). All of these deployment types leverage the same familiar, powerful tools that LineRate offers: lightweight and scalable load balancing, modern management through its intuitive GUI or the industry-standard CLI, and automated control via its comprehensive REST API. LineRate Point Load Balancer provides hardened, enterprise-grade load balancing and availability services, whereas LineRate Precision Load Balancer adds powerful Node.js programmability, enabling developers and DevOps teams to leverage thousands of Node.js modules to easily create custom controls for application network traffic.
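To make the REST-driven automation concrete, the sketch below builds the kind of JSON payload an orchestration script might PUT to a load balancer's REST API as backend servers come and go. The endpoint path and field names here are assumptions invented for illustration, not LineRate's actual schema; consult the product's REST API documentation for the real resource layout.

```javascript
// Build a backend-server registration for a hypothetical REST call.
// The path and body schema are illustrative assumptions only.
function buildRealServer(name, ip, port) {
  return {
    method: 'PUT',
    path: '/config/realServer/' + name,              // hypothetical path
    body: JSON.stringify({ ip: ip, port: port })     // hypothetical schema
  };
}

// An orchestration script would send this request (e.g. with Node's
// https module) whenever a new cloud instance comes online, and issue
// a corresponding DELETE when capacity is torn back down.
var req = buildRealServer('web1', '10.0.0.5', 8080);
```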
Learn about some of LineRate's advanced scripting and functionality here, or try it out for free to see if LineRate is the right cloud-agnostic load balancing solution for your organization.