Keep the Old, In With the New

For decades, the infrastructure needed to keep your public-facing websites online has had a relatively simple design. After all, it was all contained in one or a few datacenters under your control, or under the control of a contractor who did your bidding where network architecture was concerned. It looked, in its simplest form, something like this:

Requests came in, DNS resolved them, and the correct server or pool of servers handled them. There was security, and sometimes a few other steps in between, but it was still a fairly simple architecture. Even after virtualization took hold in the datacenter, the servers were still the same; only where they sat became more fluid.
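That traditional flow can be sketched in a few lines. This is a hypothetical illustration, not any real DNS or load balancer implementation; the hostname, VIP, and pool addresses are made up for the example.

```python
import random

# Hypothetical sketch of the traditional flow: one authoritative zone,
# one fixed answer, and a local pool of servers behind that answer.
ZONE_RECORDS = {"www.example.com": "203.0.113.10"}     # the datacenter's public VIP
SERVER_POOL = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # servers behind the VIP

def resolve(hostname):
    """DNS simply maps the name to the single public address."""
    return ZONE_RECORDS[hostname]

def handle_request(hostname):
    """The VIP's load balancer picks any server in the pool."""
    vip = resolve(hostname)
    server = random.choice(SERVER_POOL)
    return vip, server
```

The key point is that DNS has exactly one answer; all the routing intelligence lives behind the VIP, inside a datacenter you control.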

And then cloud computing became all the rage. Business users started demanding that IT look into it, asking hard questions about whether it would be more cost-effective to run some applications from the cloud, where the up-front costs are lower and the overall costs might be higher than an internal deployment, or might not, but are spread out over time. All valid questions that will have different answers depending upon the variables in question. And that means you have to worry about it.

The new world looks a little different. It’s more complex, and thus more in need of a makeover. DNS can no longer authoritatively say “oh, that server is over there,” because “over there” might be outside the DNS server’s zone. In really complex environments, the answer DNS gives out may vary widely depending upon a variety of factors. If the same application is running in two places, DNS has to direct users to the one best suited to them at that moment. Not so simple, really, but that is the current state of the world.
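To make that concrete, here is a minimal sketch of an application running in two places, where the answer depends on who is asking. The hostname, regions, and addresses are assumptions invented for illustration.

```python
# Hypothetical sketch: the same application runs in two places, and the
# answer DNS hands out depends on where the client is asking from.
DEPLOYMENTS = {
    "app.example.com": {
        "us": "203.0.113.10",   # corporate datacenter (made-up address)
        "eu": "198.51.100.20",  # public-cloud instance (made-up address)
    }
}

def resolve_for_client(hostname, client_region):
    """Return the deployment best suited to this client right now."""
    answers = DEPLOYMENTS[hostname]
    # Fall back to the datacenter if there is no regional match.
    return answers.get(client_region, answers["us"])
```

A plain zone file cannot express this: the two answers live in different places, possibly under different administrative control, which is exactly the gap a global layer has to fill.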

In the new world, the picture is a little different: the same application may live in the corporate datacenter and in one or more public clouds at the same time, with users scattered across the globe.


In this scenario, a more global solution than just DNS is required. If a user in one country can be sent to an app in a public cloud while a user in another is sent to the corporate datacenter, then localized DNS services alone fall short of a full solution.

That’s where GDNS – Global DNS – comes into play. It directs users between local DNS copies to get the correct response. Combined with something like F5 GTM’s Wide IPs, it can direct people to shifting locations by offering a single address that is understood at each location – indeed, it isn’t even a real address at each location, it’s a proxy for a local one. Use a Wide IP to give users (and, more importantly, your apps) a single place to go when GDNS responds, and then choose from a group of local DNS resolutions to direct the user where they need to go.
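The Wide IP idea can be sketched roughly like this. This is a conceptual illustration of the resolution logic only, not F5’s actual implementation; the name, regions, addresses, and health flags are all assumptions for the example.

```python
# Hypothetical sketch of the Wide IP idea: clients always ask for one
# global name; GDNS answers with whichever local address is best.
WIDE_IP = "app.example.com"

# Each location registers a local answer plus a health flag that a
# monitor would keep up to date.
LOCATIONS = [
    {"region": "us", "address": "203.0.113.10", "healthy": True},
    {"region": "eu", "address": "198.51.100.20", "healthy": True},
]

def gdns_resolve(hostname, client_region):
    """Prefer a healthy answer in the client's region, else any healthy one."""
    if hostname != WIDE_IP:
        raise KeyError(hostname)
    healthy = [loc for loc in LOCATIONS if loc["healthy"]]
    for loc in healthy:
        if loc["region"] == client_region:
            return loc["address"]
    return healthy[0]["address"]  # fail over across regions
```

The useful property is that the client-facing name never changes: take a site down, mark it unhealthy, and the same lookup quietly starts returning a different location.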

It’s another layer on top of DNS, but it is an idea whose time has come. You can only go so far with zone-based local DNS before you’re outside the zone, or directing users inefficiently. Global DNS is here, and the need grows more pressing every day. Check it out. Call your ADC vendor to find out what they have on offer, or check out the Infoblox and F5 information on F5.com if your ADC vendor doesn’t have a Global DNS solution.

Cloud does have its uses; you just have to plan for the issues it creates. This is an easy one to overcome, and you might just get increased agility out of GDNS and Global Server Load Balancing (GSLB).

Published Sep 11, 2012
Version 1.0