intercloud
Cloud bursting, the hybrid cloud, and why cloud-agnostic load balancers matter
Cloud Bursting and the Hybrid Cloud

When researching cloud bursting, there are many directions Google may take you. Perhaps you come across services for airplanes that attempt to turn cloudy wedding days into memorable events. Perhaps you'd rather opt for a service that helps your IT organization avoid rainy days. Enter cloud bursting ... yes, the one involving computers and networks instead of airplanes.

Cloud bursting is a term that has been around in the tech realm for quite a few years. In essence, it is the ability to allocate resources across various public and private clouds as an organization's needs change. These needs could be economic drivers, such as Cloud 2 having lower cost than Cloud 1, or capacity drivers, where additional resources are needed during business hours to handle traffic. For intelligent applications, other interesting things are possible with cloud bursting: for example, when demand in a geographical region suddenly needs capacity that is not local to the primary, private cloud, one can spin up resources to serve that demand locally and provide a better user experience.

Nathan Pearce summarizes some of the aspects of cloud bursting in this minute-long video, which is a great resource to remind oneself of some of the nuances of this architecture. While cloud bursting is a term that is generally accepted by the industry as an "on-demand capacity burst," Lori MacVittie points out that this architectural solution eventually leads to a hybrid cloud, where multiple compute centers are employed to serve demand among both private-based and public-based resources, or clouds, all the time. The primary driver for this: practically speaking, there are limitations around how fast data that is critical to one's application (think databases, for example) can be replicated across the internet to different data centers. Thus, the promises of "on-demand" cloud bursting scenarios may be short-lived, eventually leaning in favor of multiple "always-on compute capacity centers" as loads increase for a given application. In any case, it is important to understand that multiple locations, across multiple clouds, will ultimately be serving application content in the not-too-distant future.

An example hybrid cloud architecture where services are deployed across multiple clouds. The "application stack" remains the same, using LineRate in each cloud to balance the local application, while a BIG-IP Local Traffic Manager balances application requests across all of the clouds.

Advantages of cloud-agnostic Load Balancing

As one might conclude from the cloud bursting and hybrid cloud discussion above, having multiple clouds running an application creates a need for user requests to be distributed among the resources and for automated systems to be able to control application access and flow. In order to provide the best control over how one's application behaves, it is optimal to use a load balancer to serve requests. No DNS or network routing changes need to be made, and clients continue using the application as they always did as resources come online or go offline; many times, too, these load balancers offer advanced functionality alongside the load balancing service that provides additional value to the application. Having a load balancer that operates the same way no matter where it is deployed becomes important when resources are distributed among many locations.
Understanding expectations around configuration, management, reporting, and behavior of a system limits issues for application deployments and discrepancies between how one platform behaves versus another. With a load balancer like F5's LineRate product line, anyone can programmatically manage the servers providing an application to users. Leveraging this programmatic control, application providers have an easy way to spin capacity up and down in any arbitrary cloud, retain a familiar yet powerful feature set for their load balancer, ultimately redistribute resources for an application, and provide a seamless experience back to the user. No matter where the load balancer deployment is, LineRate can work hand-in-hand with any web service provider, whether considered a cloud or not. Your data, and perhaps more importantly your cost centers, are no longer locked down to one vendor or one location. With the right application logic paired with LineRate Precision's scripting engine, an application can dynamically react to take advantage of market pricing or general capacity needs.

Consider the following scenarios where cloud-agnostic load balancers have advantages over vendor-specific ones:

Economic Drivers

Time-dependent instance pricing: Spot instances with much lower cost become available at night. Example: my startup's billing system can take advantage of better pricing per unit of work in the public cloud at night versus the private datacenter.

Multiple-vendor instance pricing: Cloud 2 just dropped their high-memory instance pricing lower than Cloud 1's. Example: useful for your workload during normal business hours; my application's primary workload is migrated to Cloud 2 with a simple config change.

Competition: Having multiple cloud deployments simultaneously increases competition, and thus your organization's negotiated pricing contracts become more attractive over time.

Computational Drivers

Traffic Spikes: Someone in marketing just tweeted about our new product. All of a sudden, the web servers that traditionally handled all the loads thrown at them just fine are getting slashdotted by people all around North America placing orders. Instead of having humans react to the load and spin up new instances to handle it - or even worse, doing nothing - your LineRate system and application worked hand-in-hand to spin up a few instances in Microsoft Azure's Texas location and a few more in Amazon's Virginia region. This helps you distribute requests from geographically diverse locations: your existing datacenter in Oregon, the central US Microsoft cloud, and the east-coast-based Amazon cloud. Orders continue to pour in without any system downtime, or worse: lost customers.

Compute Orchestration: A mission-critical application in your organization's private cloud unexpectedly needs extra compute power, but needs to stay internal for compliance reasons. Fortunately, your application can spin up public cloud instances and migrate traffic out of the private datacenter without affecting any users or data integrity. Your LineRate instance reaches out to Amazon to boot instances and migrate important data. More importantly, application developers and system administrators don't even realize the application has migrated, since everything behaves exactly the same in the cloud location. Once the cloud systems boot, alerts are made to F5's LTM and LineRate instances that migrate traffic to the new servers, allowing the mission-critical app to compute away. You just saved the day! (A sketch of this kind of orchestration loop follows below.)
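To make the compute-orchestration scenario concrete, here is a minimal sketch of the kind of control loop an application could run against a programmable load balancer. Every endpoint path, payload field, and threshold below is invented for illustration; this is not LineRate's or any cloud provider's actual API.

```typescript
// Hypothetical sketch of the "burst" workflow described above. The endpoints
// stand in for whatever provisioning and load-balancer APIs you actually use.

interface CapacityReport {
  activeConnections: number;
  connectionLimit: number;
}

const LB_API = "https://lb.example.com/api";        // assumed load balancer REST endpoint
const CLOUD_API = "https://cloud.example.com/api";  // assumed cloud provider REST endpoint

async function currentCapacity(): Promise<CapacityReport> {
  const res = await fetch(`${LB_API}/stats`);
  return (await res.json()) as CapacityReport;
}

async function burstIfNeeded(): Promise<void> {
  const cap = await currentCapacity();
  const utilization = cap.activeConnections / cap.connectionLimit;

  if (utilization < 0.8) return; // plenty of local headroom, do nothing

  // 1. Boot an instance in the public cloud (region chosen by the business rules above).
  const boot = await fetch(`${CLOUD_API}/instances`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: "app-server-v42", region: "us-east" }),
  });
  const { privateIp } = (await boot.json()) as { privateIp: string };

  // 2. Register the new instance with the load balancer so traffic flows to it
  //    without any DNS or client-side change.
  await fetch(`${LB_API}/real-servers`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: `burst-${Date.now()}`, ip: privateIp, port: 80 }),
  });
}

// Poll every minute; a production system would also scale back down.
setInterval(() => burstIfNeeded().catch(console.error), 60_000);
```

The essential point is that the burst happens entirely behind the load balancer, so clients never see a DNS or routing change.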
The benefit of having a cloud-agnostic load balancing solution for connecting users with an organization's applications is not only a unified user experience, but also a powerful, unified way of controlling the application for its administrators. If all of a sudden an application needs to be moved from, say, a private datacenter with a 100 Mbps connection to a public cloud with a GigE connection, this can easily be done without having to relearn a new load balancing solution. F5's LineRate product is available for bare-metal deployments on x86 hardware and virtual machine deployments, and has recently been released as an Amazon Machine Image (AMI). All of these deployment types leverage the same familiar, powerful tools that LineRate offers: lightweight and scalable load balancing, modern management through its intuitive GUI or the industry-standard CLI, and automated control via its comprehensive REST API. LineRate Point Load Balancer provides hardened, enterprise-grade load balancing and availability services, whereas LineRate Precision Load Balancer adds powerful Node.js programmability, enabling developers and DevOps teams to leverage thousands of Node.js modules to easily create custom controls for application network traffic. Learn about some of LineRate's advanced scripting and functionality here, or try it out for free to see if LineRate is the right cloud-agnostic load balancing solution for your organization.
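Because LineRate Precision's scripting engine is Node.js-based, the flavor of such custom controls can be sketched in ordinary Node-style code. The event hook, pool names, and header below are invented for illustration and are not LineRate's documented scripting API; the sketch only shows the shape of a per-request, price-aware routing decision like the ones described above.

```typescript
// Illustrative only: the hook and pool names below are invented, not a real
// product API. The point is the shape of the logic: per-request decisions
// written in ordinary Node.js-style code.

type Pool = "private-datacenter" | "public-cloud-spot";

interface HttpRequestEvent {
  headers: Record<string, string>;
  selectPool(pool: Pool): void;
}

// Assumption for the example: between 01:00 and 05:00 local time, spot
// capacity in the public cloud is cheaper than the private datacenter,
// so batch-class traffic goes there.
function choosePool(req: HttpRequestEvent): void {
  const hour = new Date().getHours();
  const isBatchTraffic = req.headers["x-workload-class"] === "batch";

  if (isBatchTraffic && hour >= 1 && hour < 5) {
    req.selectPool("public-cloud-spot");
  } else {
    req.selectPool("private-datacenter");
  }
}

// A real scripting engine would invoke this on each request event; here we
// exercise it once to show the decision being made.
choosePool({
  headers: { "x-workload-class": "batch" },
  selectPool: (pool) => console.log(`routing to ${pool}`),
});
```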
Intercloud: Are You Moving Applications or Architectures?

The former is easy. The latter? Not so much.

In the many, many – really, many – posts I've penned regarding cloud computing, and in particular the notion of Intercloud, I've struggled to come up with a way to simply articulate the problem inherent in current migratory and, for that matter, interoperability models. Recently I found the word I had long been groping for: architecture. Efforts from various working groups, standards bodies and even individual vendors still remain focused on an application; a packaged-up application with a sprinkling of meta-data designed to make a migration from data center A to data center B less fraught with potential disaster. But therein lies the continuing problem – it is focused on the application as a discrete entity, with very little consideration for the architecture that enables it, delivers it and supports it. The underlying difficulty is not just that most providers simply don't offer the services necessary to replicate the infrastructure required, it's that there's an inherent reliance on network topology built into those services in the first place. That dependency makes it a non-trivial task to move even the simplest of architectures from one location to another, and introduces even more complexity when factoring in the dynamism inherent in cloud computing environments.

"Virtualization will radically change how you secure and manage your computing environment," Gartner analyst Neil MacDonald said this week at the annual Gartner Security and Risk Management Summit. "Workloads are more mobile, and more difficult to secure. It breaks the security policies tied to physical location. We need security policies independent of network topology." [emphasis added] -- Gartner: New security demands arising for virtualization, cloud computing, InfoWorld

We could – and probably should – expand that statement to say, "We need application delivery policies independent of network topology", where "application delivery policies" include security but also encompass access management, load balancing, acceleration and optimization profiles. We desperately need to decouple architecture and services from network topology if we're to ever evolve to a truly dynamic and mobile data center – to realize IT as a Service.

JIT DEPLOYMENT

The best solution we have, now, is scripting. Scripting usually involves devops who use agile development technologies to create scripts that essentially reconfigure the entire environment "just in time" for actual deployment. This includes things like reconfiguring IP address dependencies. Once the environment is "up", scripts can be utilized to insert and update the appropriate policies that ultimately define the architecture's topology. Scripting performs some other environment-dependent settings as well, but most important perhaps is getting those IP addresses re-linked such that traffic flows in the expected topology from one end to the other. But in many ways this can be as frustrating as waiting for a neural network to converge, as dependencies can actually inhibit an image from achieving full operational status. If you've ever booted a web server that relies upon an NFS or SMB mount on another machine for its file system, you know that if the server upon which the file system resides is not booted, it can cause excessive delays in boot time for the web server as well as rendering it inoperable – a.k.a. unavailable.
That's a simple problem to fix – unlike some configurations and policies that rely heavily upon having the IP address of other interconnected systems. These are well-known problems with not-so-best-practices solutions today. This complicates the process of moving an "application" from one location to another, because you aren't moving just an application, you're moving an architecture. Relying upon "the cloud" to provide the same infrastructure services is a gambling proposition. Even if the same services are available, you still run the (very high) risk of a less-than-stellar migratory experience due to the differences in provisioning methods. The APIs used to provision ELB are not the same as those used to provision a load balancing service in another provider's environment, and certainly aren't the same as what you might be using in your own data center. That makes migration a multi-layer process, with logically moving images just the beginning of a long process that may take weeks or months to straighten out.

SNS (SERVICE NAME SYSTEM)

We really do need to break free from the IP-address chains that bind architectures today. The design of a stateless infrastructure is one way to achieve that, but certainly there are other means by which we can make this process a smoother one. DNS effectively provides that layer of abstraction for the network. One can query a domain name at any time, and even if the domain moves from IP to IP, we are still able to find it. In fact we leverage that dynamism every day to provide services like Global Application Delivery in our quest for multi-site resilience. We rely upon that dynamism as the primary means by which disaster recovery processes actually work as expected in the event of a disaster. We need something similar to DNS for services; something that's universal and ubiquitous and allows configurations to reference services by name and not IP address. We need a service registry, a service name system if you will, in which we can define for each environment a set of services available and the means by which they can be integrated. Rather than relying upon scripts to reconfigure components and services, a component would need only learn the location of the SNS service and from there could determine – based on service names – the location of dependent services and components without requiring additional reconfiguration or deployment of policies. A more dynamic, service-oriented system that decouples IP from services would enable greater mobility not only across environments but within environments, enabling higher levels of resiliency in the event of inevitable failure.

Purpose-built cloud services often already take this into consideration, but many of the infrastructure components – regardless of form factor – do not. This means moving an architecture from the data center to a cloud computing environment is a nearly Herculean task today, requiring sacrifice of operationally critical services in exchange for cheaper compute and faster provisioning times. If we're serious about moving toward IT as a Service – whether that leverages public or private cloud computing models – then we need to get serious about how to address the interdependencies inherent in enterprise infrastructure architecture that make such a goal more difficult to reach. Services will not empower migratory cloud computing behavior unless they are unchained from the network topology. That's true for security, and it's true for other application delivery concerns as well.
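A minimal sketch of the service-name-system idea, assuming an invented registry interface: each environment runs its own registry, and components resolve dependencies by name rather than by hard-coded IP address.

```typescript
// A minimal sketch of the "service name system" idea: components ask a
// registry for dependencies by name instead of hard-coding IP addresses.
// The registry here is an in-memory map; in practice it would be a shared,
// replicated service reachable in every environment.

interface ServiceRecord {
  host: string;
  port: number;
}

class ServiceRegistry {
  private records = new Map<string, ServiceRecord>();

  register(name: string, record: ServiceRecord): void {
    this.records.set(name, record);
  }

  resolve(name: string): ServiceRecord {
    const record = this.records.get(name);
    if (!record) throw new Error(`no service registered under "${name}"`);
    return record;
  }
}

// In the private datacenter:
const dcRegistry = new ServiceRegistry();
dcRegistry.register("orders-db", { host: "10.1.4.20", port: 5432 });
dcRegistry.register("load-balancer", { host: "10.1.0.5", port: 443 });

// The same application image, booted in a cloud environment, points at that
// environment's registry and gets that environment's addresses -- no script
// has to rewrite IP addresses "just in time".
const cloudRegistry = new ServiceRegistry();
cloudRegistry.register("orders-db", { host: "172.16.9.7", port: 5432 });
cloudRegistry.register("load-balancer", { host: "172.16.0.2", port: 443 });

function connectToDatabase(registry: ServiceRegistry): void {
  const db = registry.resolve("orders-db");
  console.log(`connecting to ${db.host}:${db.port}`);
}

connectToDatabase(dcRegistry);    // connecting to 10.1.4.20:5432
connectToDatabase(cloudRegistry); // connecting to 172.16.9.7:5432
```

The design choice is the same one DNS makes for hostnames: the name stays stable while the address behind it is free to change per environment.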
A core principle of a service-oriented anything is decoupling interface from implementation. We need to apply that core principle to infrastructure architecture in order to move forward – and outward.

It's Called Cloud Computing not Cheap Computing
Operational Risk Comprises More Than Just Security
Data Center Feng Shui: Reliability is not the Absence of Failure
About that 'Unassailable Economic Argument' for Public Cloud Computing
IT as a Service: A Stateless Infrastructure Architecture Model
Challenging the Firewall Data Center Dogma
Cloud-Tiered Architectural Models are Bad Except When They Aren't
Cloud Chemistry 101
You Can't Have IT as a Service Until IT Has Infrastructure as a Service

1024 Words: Scale Fail
If you're scaling applications and not architectures, you're doing it wrong.

Intercloud: Are You Moving Applications or Architectures?
The Secret to Doing Cloud Scalability Right
At the Intersection of Cloud and Control…
1024 Words: Only Skin Deep
1024 Words: Ch-ch-chain of Fools

1024 Words: Only Skin Deep
VM interoperability promotes inter-environment portability about as well as a wig would fool anyone into believing these two girls are identical twins. That level of interoperability is like beauty – it's only skin deep. Image by Darren Kelly via Flickr.

1024 Words: Insane Clowd Posse
1024 Words: Distortion of Magnitude
All 1024 Word entries
A Call to Action for Virtual Machine Interoperability
Cloud interoperability must dig deeper than the virtualization layer

The World Doesn't Care About APIs
Bottles, birds, and packets: how the message is exchanged is less important than what the message is, as long as it gets there.

I heard it said the other day, regarding the OpenStack announcement, that "the world does not care about APIs." Unpossible! How could the world not care about APIs? After all, it is APIs that make the Web (2.0) go around. It is APIs that drive the automation of infrastructure from static toward dynamic. It is APIs that drive self-service and thin-provisioning of compute and storage in the cloud. It is APIs that make cross-environment integration of SaaS possible. In general, without APIs we'd be very unconnected, un-integrated, un-collaborative, and in many cases, uninformed. Now, it could be said that the world doesn't care about APIs until they're highly adopted, but unlike the chicken and the egg question (which may very well have been answered, in case you weren't paying attention), it is still questionable whether the success of sites like Facebook and Twitter and the continued growth of SaaS darlings like Salesforce.com are dependent upon exactly that: their API.

The API is the new CLI in the network. The API is the web's version of EAI (Enterprise Application Integration), without which we wouldn't have interesting interactions between our favorite sites and applications. The API is the cloud's version of the ATM (Automated Teller Machine) through which services are provisioned with just a few keystrokes and a valid credit card. The API is the means by which interoperability of cloud computing will be enabled, because to do otherwise is to create the mother-of-all hub-and-spoke integration points. As Jen Harvey (co-founder of Voxilate, developer of voice-related mobile applications) pointed out, an API makes it possible to develop a user interface that essentially obscures the underlying implementation. It's the user interface the users care about, and if you don't have to change it as your application takes advantage of different clouds or services or technologies, you ensure that productivity and user adoption – two frequently cited negative impacts of changes to applications in any organization – are not impacted at all. It's a game-changer, to be sure. So the API is the world in technology today; how could we not care about it?

WE DO. WE JUST CARE MORE ABOUT the MODEL

Maybe the point is that we shouldn't, because the API today is just a URI and URIs are nearly interchangeable. Face it, if interoperability between anything were simply about the API then we'd have already solved this puppy and put it to sleep. Permanently. But it's not about the API per se; it is, as William Vambenepe is wont to say, "it's the model, stupid" [Edited per William's comment below, 7-27-2010]. Unfortunately, when most people stand up and cheer "the API" what they're really cheering is "the model." They aren't making the distinction between the interface and the data (or meta-data) exchanged. It's what's inside the message that enables interoperability, because it is through the message, the model, that we are able to exchange the information necessary to do whatever it is the API call is supposed to do. Without a meaningful, shared model the API is really not all that important. The API is how, not what, and unfortunately even if everyone agreed on how, we'd still have to worry about what and that, as anyone who has ever worked with EAI systems can tell you, is the really, really, super hard part of integration.
And it is integration that we're really looking for when we talk about cloud and interoperability or portability or mobility, because what we want is to be able to share data (configuration, architectures, virtual machines, hypervisors, applications) across multiple programmatic systems in a meaningful way.

COMMODITIZATION REALLY MEANS NORMALIZATION

Here's the rub: having an API is important, but the actual API itself is not nearly as important as what it's used to exchange. APIs, at least those on the web and taking advantage of HTTP, are little more than URIs. Doesn't matter if it's REST or SOAP, the end-point is still just a URI. The URI is often somewhat self-descriptive, and in the case of true REST (which doesn't really exist) it would be nearly completely self-documenting, but it's still just a URI. That means it is a nearly trivial exercise to map "/start/myresource" to "/myresource/start". But when the data, the model, is expressed as the payload of that API call, then things get … ugly. Is one using JSON? Or is it XML? Is that XML OVF? Schemaless? Bob's Homegrown Format? Does it use common descriptors? Is a load balancer in Cloud A described as an application delivery controller in Cloud B? Is the description of a filter required in Cloud A using iptables semantics, or some obscure format the developer made up on the fly because it made sense to her? Mapping the data, the model, isn't a trivial exercise. In fact, without a common semantic model it would require not the traditional one-chicken sacrifice but probably a whole flock in order to get it working, and you'd essentially be locked in for all the same reasons you end up locked in today: it costs too much and takes too much effort to change.

When pundits and experts talk about commoditization of cloud computing, and they do often, it's not an attempt to minimize the importance of the model, of the infrastructure, but rather it's a necessary step toward providing services in a consistent manner across implementations; across clouds. By defining core cloud computing services in a consistent manner and describing them in similar terms, advanced services can then be added atop those services in a like manner without negatively impacting the ability to migrate between implementations. If the underlying model is consistent, commoditized if you will, this process becomes much easier for everyone involved.

Consider HTTP headers. There is a common set, a standard set of headers used to describe core functions and capabilities. They have a common model and use consistent semantics through which the name-value pairs are described. Then there are custom headers; headers that follow the same model but which are peculiar to the service being invoked. In a cloud model these are the differentiated value-added cloud services (VACS) Randy Bias mentions in his post regarding the announcement of OpenStack and the ensuing cries of "it will be the standard! it will save the world!". The most important aspect of custom HTTP headers that we must keep in any cloud API or stack is that if they aren't supported, they do not negatively impact the ability to invoke the service. They are ignored by applications which do not support them. Only through commoditization and a common model can this come to fruition. Having an API is important. It's what makes integration of applications, infrastructure, and ultimately clouds possible.
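To make the interface-versus-model distinction concrete, here is a hedged sketch with two invented providers. Remapping the URI is the trivial part; translating each provider's payload vocabulary into a shared model is where the real integration work lives.

```typescript
// Both providers below are invented. The URI difference is cosmetic; the work
// is translating each provider's payload shape into a shared model.

// The shared (normalized) model the rest of the system reasons about.
interface PoolMember {
  address: string;
  port: number;
  weight: number;
}

// Provider A: REST-ish path, JSON with its own field names.
function toProviderA(member: PoolMember): { url: string; body: string } {
  return {
    url: "/pools/web/members",
    body: JSON.stringify({ ip: member.address, svc_port: member.port, ratio: member.weight }),
  };
}

// Provider B: a different path *and* a different vocabulary for the same concepts.
function toProviderB(member: PoolMember): { url: string; body: string } {
  return {
    url: "/loadbalancer/web/addNode",
    body: JSON.stringify({ node: `${member.address}:${member.port}`, weighting: member.weight / 100 }),
  };
}

const member: PoolMember = { address: "192.0.2.10", port: 8080, weight: 50 };
console.log(toProviderA(member)); // trivial URI difference, non-trivial model difference
console.log(toProviderB(member));
```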
But it isn't the definition of that interface across disparate implementations of similar technology that will make or break intercloud. What will make or break intercloud is the definition of a consistent semantic model for core services and components that can be used to describe the technologies and policies and meta-data necessary to enable interoperability.

Related Posts
Cloud, Standards, and Pants
Infrastructure 2.0: Squishy Name for a Squishy Concept
Despite Good Intentions PaaS Interoperability Still Only Skin Deep

The Skeleton in the Global Server Load Balancing Closet
Like urban legends, every few years this one rears its head and makes its rounds. It is certainly true that everyone who has an e-mail address has received some message claiming that something bad is going on, or someone said something they didn't, or that someone influential wrote a letter that turns out to be wishful thinking. I often point the propagators of such urban legends to Snopes, because the folks who run Snopes are dedicated to hunting down the truth regarding these tidbits that make their way to the status of urban legend. It would be nice, wouldn't it, if there was such a thing for technical issues; a technology-focused Snopes, if you will. But there isn't, and every few years a technical urban legend rears its head again and sends some folks into a panic. And we, as an industry, have to respond and provide some answers. This is certainly the case with Global Server Load Balancing (GSLB) and Round-Robin DNS (RR DNS).

CLAIM: DNS Based Global Server Load Balancing (GSLB) Doesn't Work
STATUS: Inaccurate

ORIGINS

The origin of this skeleton in GSLB's closet is a 2004 paper written by Pete Tenereillo, "Why DNS Based Global Server Load Balancing (GSLB) Doesn't Work." It is important to note that at the time of the writing Pete was not only very experienced with these technologies but was also considered an industry expert. Pete was intimately involved in the early days of load balancing and global server load balancing, being an instrumental part of projects at both Cisco and Alteon (Nortel, now Radware). So his perspective on the subject certainly came from experience and even "inside" knowledge about the way in which GSLB worked and was actually deployed in the "real" world. The premise upon which Pete bases his conclusion, i.e. GSLB doesn't work, is that the features and functionality over and above those offered by standard DNS servers are inherently flawed and in theory sound good, but don't work. His ultimate conclusion is that the only way to implement true global high availability is to use multiple A records, which are already a standard function of DNS servers.

DNS based Global Server Load Balancing (GSLB) solutions exist to provide features and functionality over and above what is available in standard DNS servers. This paper explains the pitfalls in using such features for the most common Internet services, including HTTP, HTTPS, FTP, streaming media, and any other application or protocol that relies on browser based client access. … An Axiom: The only way to achieve high-availability GSLB for browser based clients is to include the use of multiple A records.

It would be easy to dismiss Pete's concerns based on the fact that his commentary is nearly seven years old at this point. But the basic principles upon which DNS and GSLB are implemented today are still based on the same theories with which Pete takes issue. What Pete missed in 2004, and what remains missing from this treatise, is twofold. First, GSLB implementations at that time, and today, do in fact support returning multiple A records, a.k.a. Round-Robin DNS. Second, the features and functionality provided over and above standard DNS do, in fact, address the issues raised, and these features and functionality have evolved over the past seven years.

Is returning multiple A records to LDNS the only way of achieving high availability? How is advanced health checking an important component of providing high availability?
Many people misuse the term 'high availability' by indicating that it only equates to whether a site is up or down. This type of binary thinking is misguided and is purely technical in focus. Our customers have all indicated that high availability also includes performance of the application or site. The reason is that, by business definitions, if a site or application is too slow it is unavailable. Poor performance directly impacts productivity, one of the key performance indicators used to measure the effectiveness of business employees and processes. As a result, high availability can be achieved in a number of different ways. Intelligent GSLB solutions, through advanced monitoring and statistical correlation, take into account not only whether the site is up or down, but such details as hop count, packet loss, round-trip time, and data-center capacity, to name a few. These metrics then transparently provide users with the most efficient and intelligent way of steering traffic and achieving high availability. Geolocation is another means of steering traffic to the appropriate service location, as are any number of client and business-specific variables. Context is important to application delivery in general, but is a critical component of GSLB to maintain availability – including performance.

The round-robin handling of the A records by the Local DNS (LDNS) is a well-known problem in the industry. When multiple A records are handed back to the LDNS for an address resolution, the LDNS shuffles the list and returns the A records list back to the client without honoring the order in which it received it. The next time the client requests an address, the LDNS responds with a differently ordered list of A records. This LDNS behavior makes it very difficult to predict the order in which A records are being returned to the client. In order to overcome this problem, many prefer to configure a GSLB solution to send back one A record to the LDNS. When compared with just a 'plain' DNS server that would send back any one of the site addresses with a TTL value, an intelligent GSLB sends back the address of the best performing site, based on the metrics that are important to the business, and sets the TTL value. A majority of the LDNS that are RFC-compliant will honor the TTL value and resolve again after the TTL value has expired. The GSLB performs advanced health checking and sends back the address of the best performing site, taking into account metrics like application availability, site capacity, round-trip time, hops and completion rate, hence providing the best user experience and meeting applicable business service level agreements.

In the event of a site failure (when the link is down or because of a catastrophic event), existing clients would connect to the unavailable site for a period of time equal to the TTL value. The GSLB sets a TTL value of 30 seconds when returning an A record back to the LDNS. As soon as the 30-second time period expires, the LDNS resolves again and the GSLB uses its advanced health checking capability to determine that one of the multiple sites is unavailable. The GSLB then starts to direct users transparently to the best performing site by returning the address of that site back to the LDNS. A flexible GSLB will also provide a Manual Resume option that gives administrators the choice of letting the unavailable site stay down, to mitigate the commonly known back-end database synchronization problem.
An intelligent GSLB also has the option of sending multiple A records, which allows delivery of content from the best performing sites. For example, let's say an enterprise wants to deliver their content using 10 sites and wants to provide high availability. Using sophisticated health checking, the GSLB can determine the two best performing sites and return their addresses to the users. The GSLB would track each site's performance and send back the best sites based on current network and site conditions (context) for every resolution. Slow sites or sites that are down would never be sent back to the user.

What about the issues with DNS browser caching?

Of all the issues raised by Pete in his seminal work of 2004, this is likely the one that is still relevant. Browser technology has evolved, yes, but the underlying network stack and functionality has not, mainly because DNS itself has not changed much in the past ten years. Most modern browsers may or may not (evidence is conflicting and documentation nailing it down scant) honor DNS TTL, but they have, at least, reduced the caching interval on the client side. This may or may not - depending on timing - result in a slight delay in the event of a failure while resolution catches up, but it does not have nearly the dramatic negative impact it once had. In early days, a delay of 15 minutes could be expected. Today that delay can generally be counted in seconds. It is, admittedly, still a delay, but one that is certainly more acceptable to most business stakeholders and customers alike. And yet while the issue of DNS browser caching is still technically accurate, it is not all that relevant; the same solution Pete suggests to address the issue – RR DNS – has always been available as an option for GSLB. Any technology, when not configured to address the issue for which it was implemented, can be considered a failure. This is not a strike against the technology, but the particular implementation. The instances of browser caching impacting site availability and performance are minimal in most cases, and for those organizations for which such instances would be completely unacceptable it is simply a matter of mitigation using the proper policies and configuration.

SUMMARY

What it comes down to is that Pete, in his paper, is pushing for the use of Round-Robin DNS (RR DNS). Modern Global Server Load Balancing (GSLB) solutions fully support this option today and, generally speaking, always have. However, the focus on the technical aspects completely ignores the impact of business requirements and agreements, and does not take into consideration the functions and features over and above standard DNS that assist in supporting those requirements. For example, health checking has come a long way since 2004, and includes not only simple up-down indicators or even performance-based indicators but is now able to incorporate a full range of contextual variables into the equation. Location, client type, client network, data center network, capacity… all these parameters can be leveraged to perform "health" checks that enable a more accurate and ultimately adaptable decision. Interestingly, standard DNS servers leveraged to implement a GSLB solution are neither capable of performing such checks nor do they provide the means by which they can be implemented. Such "health monitoring" is, however, a standard offering for GSLB solutions.
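A toy sketch of the selection logic described above, with invented metrics, weights, and addresses: score each healthy site, return the best one (or the best two) as A records, and attach a short TTL so the LDNS re-resolves quickly after a failure.

```typescript
// A toy version of the selection step described above. The metrics, weights,
// and addresses are illustrative, not a real GSLB's algorithm.

interface SiteHealth {
  address: string;       // the A record for this site
  up: boolean;
  roundTripMs: number;   // measured RTT toward the requesting LDNS's region
  capacityUsed: number;  // 0..1, fraction of site capacity in use
}

function score(site: SiteHealth): number {
  // Lower is better: weight latency against remaining capacity.
  return site.roundTripMs + site.capacityUsed * 100;
}

function answer(sites: SiteHealth[], count = 1): { records: string[]; ttl: number } {
  const best = sites
    .filter((s) => s.up)                    // down sites are never returned
    .sort((a, b) => score(a) - score(b))
    .slice(0, count)
    .map((s) => s.address);
  return { records: best, ttl: 30 };        // short TTL so the LDNS re-asks soon
}

const sites: SiteHealth[] = [
  { address: "198.51.100.10", up: true,  roundTripMs: 20, capacityUsed: 0.9 },
  { address: "203.0.113.10",  up: true,  roundTripMs: 45, capacityUsed: 0.2 },
  { address: "192.0.2.10",    up: false, roundTripMs: 10, capacityUsed: 0.1 },
];

console.log(answer(sites));     // one best site, TTL 30
console.log(answer(sites, 2));  // the two best sites, for the multi-A-record case
```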
NEW FACTORS to CONSIDER

Given the dynamism inherent not only in local data centers but in global implementations, and the inclusion of cloud computing and virtualization, GSLB must also provide the means by which management, maintenance and process automation can be accomplished. Traditional DNS solutions like BIND do not provide such means of control, nor are they enabled with the ability to participate in the collaborative processes necessary to automate the migration and capacity fulfillment functions for which virtualization and cloud computing are and will be used. Thus a simple RR DNS implementation may be desirable, but the solution through which it is implemented must be more modern and capable of addressing management and business concerns as well. These are the "functions and features" over and above standard DNS servers that provide value to organizations regardless of the technical details of the algorithms and methods used to distribute DNS records. Additionally, traditional DNS solutions – while supporting new security initiatives like DNSSEC – are less able to handle such initiatives in a dynamic environment. A GSLB must be able to provide dynamic signing of records to enable global server load balancing as a means to support DNSSEC. DNSSEC introduces a variety of challenges associated with GSLB that cannot be easily or efficiently addressed by standard DNS services. Modern GSLB solutions can and do address these challenges while enabling integration and support for other emerging data center models that make use of cloud computing and virtualization.

This skeleton is sure to creep out of the closet yet again in a few years, primarily because DNS itself is not changing. Extensions such as DNSSEC occasionally crop up to address issues that arise over time, but the core principles upon which DNS has always operated are still true and are likely to remain true for some time. What has changed are the data center architectures, technology, and business requirements that IT organizations must support, in part through the use of DNS and GSLB. The fact is that GSLB does work, and modern GSLB solutions do provide the means by which both technical and business requirements can be met while simultaneously addressing new and emerging challenges associated with the steady march of technology toward a dynamic data center.

So You Put an Application in the Cloud. Now what?
F5 Friday: Hyperlocalize Applications for Everyone
Location-Aware Load Balancing
Cloud Needs Context-Aware Provisioning
What is a Strategic Point of Control Anyway?
German DPA Issues Legal Opinion on Cloud Computing
As Cloud Computing Goes International, Whose Laws Matter?
Load Balancing in a Cloud

F5 Friday: Seconds Too Late and More Than a Few Bytes Short
Correcting some misperceptions regarding ADCs, virtualization, and the use of Cisco as the definitive yardstick for measuring the ADC market.

A recent article penned by analyst Jim Metzler asks "Can application delivery controllers support virtualization?" A fair question, especially when one digs into the eventual migration and portability of virtual machines across disparate cloud computing deployments based on just such support. But the conclusion reached is misleading and does a disservice to the entire load balancing/application delivery controller industry.

Caveat: Having been under fire from vendors and readers alike in the past due to editorial discretion regarding content changes before publishing, I must note that the misleading subhead "where application delivery controllers for virtualization fall short" was not Jim's creation. I am, however, taking him to task for using Cisco as the "yardstick" for application delivery controller capabilities, in part because it provided the rationale from which the misleading subhead was derived. The overall suggestion of evaluating application delivery controllers with an eye toward integration with an organization's virtualized provisioning system of choice is a good one, but one section of the aforementioned article leaps out as inaccurate and misleading.

Where application delivery controllers for virtualization fall short [registration required]: Moving a VM between servers in disparate data centers is very challenging, and in some cases there is nothing that an ADC can do to respond to these challenges. Cisco and VMware, for example, have stated that when moving VMs between servers in disparate data centers, the maximum roundtrip latency between the source and destination VMware ESX servers cannot exceed 5 milliseconds. The speed of light in a combination of copper and fiber is roughly 120,000 miles per second. In 5 ms, light can travel about 600 miles. Since the 5 ms is roundtrip delay, the data centers can be at most 300 miles apart. That 300-mile figure assumes that the WAN link is a perfectly straight line between the source and destination ESX servers and that the data being transmitted does not spend any time at all in a queue in a router or other device. Both of those assumptions are unlikely to be the case.

Using Cisco as your yardstick for application delivery controller capabilities leads to inaccurate analysis of the market and is patently unfair. When writing an article on the state of networking, would you use Juniper or Extreme as your measuring stick for switches instead of Cisco? Unlikely. If you are evaluating routers, switches, telepresence or a variety of other network-focused solutions, then you use Cisco as your yardstick. Cisco is the leading provider in most things layer 2-3 and certainly defines that industry, setting the bar for the state of the market and its capabilities. But when you start looking at layer 4-7 (application delivery), then it is Cisco's yardstick, and not application delivery controllers in general, that falls short. Most of the industry has long since moved past the capabilities of Cisco's application delivery solutions, and Cisco is not the leader in the market. Therefore using its technology as a yardstick for the rest of the players, and then generalizing its inability to perform in this given scenario across the entire market, is misleading. It results in an inaccurate depiction of what the market is capable of doing.
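For reference, the arithmetic behind the quoted 300-mile figure works out as follows (a quick sketch using only the numbers from the excerpt):

```typescript
// The arithmetic behind the 300-mile figure quoted above.

const propagationMilesPerSec = 120_000; // approx. speed of light in copper/fiber (from the quote)
const rttBudgetMs = 5;                  // VMware/Cisco round-trip latency ceiling

const oneWayMs = rttBudgetMs / 2;
const maxMiles = propagationMilesPerSec * (oneWayMs / 1000);

console.log(`theoretical maximum separation: ${maxMiles} miles`); // 300 miles
// Real paths are not straight lines and packets queue in routers, so the
// practical distance is shorter still -- which is the excerpt's point.
```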
IT'S THE INTEGRATION – INSIDE and OUT

F5 handles this scenario just fine, thank you very much. While it is true that a simple load balancer would likely find this scenario difficult to support, an advanced application delivery controller that incorporates WAN optimization integrated with the provisioning systems has no problem whatsoever. I submit a variety of documentation and demonstration of such functionality for your perusal:

Long Distance vMotion Demo
F5's vSphere Solution
vMotion Deployment Guide
vMotion Deployment Guide v10

The reason an F5 BIG-IP has no problem with this scenario is its integration – both with the provisioning systems and internal to its core platform. F5 doesn't just have a WAN optimization solution; it has a fully integrated WAN optimization feature set that is part of a larger unified application delivery network. It is contextually aware of where data is coming from and going to, and can apply a broad set of policies to accelerate and optimize data traversing disparate networks. More important, perhaps, is the external integration of BIG-IP with orchestration/provisioning systems. Between the two, an F5-VMware solution has no problem traversing WAN links, because F5 not only manages the migration from one data center to another (be it cloud or traditional) but simultaneously manages application connectivity and ensures continuity of use before, during, and after such a migration. This becomes important when addressing this next point from the article:

To support VMs moving between servers in disparate data centers, one can extend the VLAN between the VM and the ADC in the originating data center to the ADC in the receiving data center and then proceed as if the VM were being moved between servers in the same data centers. How this is done tends to be specific to each ADC vendor.

There is no arguing that migrating a virtual machine between two disparate locations is not complex; it is. There's an entire industry of solutions cropping up to deal with managing the complex networking necessary to essentially bridge the local data center with remote cloud computing environments (CohesiveFT, for example, and I'm certain Cisco with its layer 2-3 expertise has this well in hand). But this statement focuses altogether too tightly on the network layer and fails to consider the application layer and the complexity associated with that migratory process. Most ADCs cannot simultaneously move a virtual machine across a less-than-optimal WAN link and maintain application availability. Live migration? F5 can and does, using global application delivery and application delivery controllers. And if you don't think that's useful, then consider what it will take to implement dynamic cloud-bursting capability, i.e. on-demand, cross-cloud elastic scalability with live migration and no downtime. Exactly. You have to manage the WAN link, ensure the VM can move from one physical location to another successfully while maintaining application availability until the move is complete, then distribute requests across the two live instances until such time as the secondary instance is no longer necessary based on real-time demand, and then return to a single instance. All without disrupting service. Jim is right in that there is no solution today that can do that – but that's today, not tomorrow or next week or next month. But if you're watching the number two in the market, you'll probably miss when number one announces it can.
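The paragraph above is effectively a workflow, and it can be sketched as one. Every function below is a placeholder for a real provisioning, vMotion, or GSLB operation; none of these are actual F5 or VMware interfaces, and the site names and weights are invented.

```typescript
// A sketch of the cloud-bursting workflow described above. Each function is a
// stand-in for a real orchestration call, not an actual product API.

async function demandExceedsLocalCapacity(): Promise<boolean> {
  return true; // stand-in for real monitoring
}

async function triggerLongDistanceMigration(vm: string, targetSite: string): Promise<void> {
  console.log(`migrating ${vm} to ${targetSite} over the optimized WAN link`);
}

async function setSiteWeight(site: string, weight: number): Promise<void> {
  console.log(`GSLB weight for ${site} set to ${weight}`);
}

async function burst(): Promise<void> {
  if (!(await demandExceedsLocalCapacity())) return;

  // 1. Move (or clone) the workload while the primary site keeps serving traffic.
  await triggerLongDistanceMigration("app-vm-01", "cloud-east");

  // 2. Once the secondary instance is live, split requests across both sites.
  await setSiteWeight("datacenter-west", 50);
  await setSiteWeight("cloud-east", 50);

  // 3. When demand subsides, drain the cloud site and return to a single instance.
  await setSiteWeight("cloud-east", 0);
}

burst().catch(console.error);
```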
F5 clobbers Cisco in application delivery controller market share
F5 Networks: Charging Ahead
GARTNER REPORT: F5 Commands Worldwide Market Share Lead for Application Delivery Controllers
Learning from F5's Success Against Cisco

It doesn't make sense to watch number two in the market to see where it's going, because it's always going to be seconds too late and more than a few bytes short in delivering the solutions to the day's most challenging problems. Most of the application delivery controllers on the market can't support the scenario described in the article. True. But that's just most, not all, and certainly not F5. F5 isn't leading the application delivery market without good reason. It's exactly because we understand the complex relationship between applications and networking, and the value of integration with the broader data center ecosystem, that we have the edge to develop the solutions necessary not just for traditional application availability and performance challenges, but for the challenges emerging from the rapid adoption of virtualization and cloud computing.