Using BIG-IP GTM to Integrate with Amazon Web Services

This is the latest in a series of DNS articles that I've been writing over the past couple of months.  This article is taken from a fantastic solution that Joe Cassidy developed.  So, thanks to Joe for developing this solution, and thanks for the opportunity to write about it here on DevCentral.  As a quick reminder, my first six articles are:

  1. Let's Talk DNS on DevCentral
  2. DNS The F5 Way: A Paradigm Shift
  3. DNS Express and Zone Transfers
  4. The BIG-IP GTM: Configuring DNSSEC
  5. DNS on the BIG-IP: IPv6 to IPv4 Translation
  6. DNS Caching

The Scenario

Let's say you are an F5 customer who has external GTMs and LTMs in your environment, but you are not leveraging them for your main website (example.com).  Your website's zone sits on Windows DNS servers in your DMZ that round-robin load balance to some backend web servers.

You've heard all about the benefits of the cloud (and rightfully so), and you want to move your web content to the Amazon cloud.  Nice choice!  As you were making the move, Amazon instructed you to simply CNAME your domain to two unique Amazon Elastic Load Balancing (ELB) domains.  Amazon's request was not feasible for a few reasons...one of which is that a CNAME at the zone apex violates the DNS RFCs.  So, you engage in a series of architecture meetings to figure all this stuff out.

Amazon told your Active Directory/DNS team to CNAME www.example.com and example.com to two AWS clusters: us-east.elb.amazonaws.com and us-west.elb.amazonaws.com.  You couldn't do this with Microsoft DNS: a single name cannot point to multiple CNAME targets, and the zone apex (example.com) cannot be a CNAME at all.  Additionally, you couldn't point to IPs because Amazon said they will be using dynamic IPs for your platform.  So, what to do, right?
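
In zone-file terms, here is an illustrative sketch of what Amazon's request would require (the ELB names are from this scenario), and why classic DNS rejects it:

```dns
; Illegal: a CNAME cannot coexist with the SOA/NS records at the zone apex (RFC 1034)
example.com.      IN CNAME  us-east.elb.amazonaws.com.

; Also illegal: one owner name cannot carry two CNAME records
www.example.com.  IN CNAME  us-east.elb.amazonaws.com.
www.example.com.  IN CNAME  us-west.elb.amazonaws.com.
```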

The Solution

The good news is that you can use the functionality and flexibility of your F5 technology to easily solve this problem.  Here are a few steps that will guide you through this specific scenario:

  • Redirect requests for http://example.com to http://www.example.com, applied on your virtual server (1.2.3.4:80).  You can redirect using HTTP Class profiles (v11.3 and prior), using a policy with Centralized Policy Matching (v11.4 and newer), or you can always write an iRule to redirect!
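
As a sketch of the iRule option (the rule name is an assumption, and quoting may need adjustment at a bash prompt; the redirect body is the standard pattern):

```shell
# Hypothetical iRule "redirect_to_www" created via tmsh -- attach it to the
# 1.2.3.4:80 virtual server afterward. HTTP::respond 301 issues a permanent
# redirect, matching the 301 described in the flow later in this article.
tmsh create /ltm rule redirect_to_www {
    when HTTP_REQUEST {
        HTTP::respond 301 Location "http://www.example.com[HTTP::uri]"
    }
}
```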

  • Make www.example.com a CNAME record pointing to example.lb.example.com, where *.lb.example.com is a sub-delegated zone of example.com that resides on your BIG-IP GTM.
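
On the Windows DNS side, the CNAME and the sub-delegation might look like this with dnscmd (the GTM listener name gtm1 and its IP are assumptions):

```shell
# Run on the Windows DNS server hosting example.com (names/IPs are assumptions)
dnscmd /RecordAdd example.com www  CNAME example.lb.example.com
dnscmd /RecordAdd example.com lb   NS    gtm1.example.com
dnscmd /RecordAdd example.com gtm1 A     1.2.3.10
```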

  • Create a global traffic pool “aws_us_east” that contains no members but rather a CNAME to us-east.elb.amazonaws.com.
  • Create another global traffic pool “aws_us_west” that contains no members but rather a CNAME to us-west.elb.amazonaws.com. 

The following screenshot shows the details of creating the global traffic pools (using v11.5).  Notice you have to select the "Advanced" configuration to add the CNAME.
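
On TMOS v12 and later the GUI changed (pools are typed by DNS record type and the "Advanced" CNAME field is gone), so as a tmsh sketch using this article's pool names (static-target marks the CNAME as a literal answer rather than a reference to another wide IP):

```shell
# v12+ syntax sketch; on v11.x the "cname" type keyword does not exist
tmsh create /gtm pool cname aws_us_east members add { us-east.elb.amazonaws.com { static-target yes } }
tmsh create /gtm pool cname aws_us_west members add { us-west.elb.amazonaws.com { static-target yes } }
```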

  • Create a global traffic Wide IP example.lb.example.com with the two pools “aws_us_east” and “aws_us_west”.  The following screenshot shows the details.
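
In tmsh terms (v12+ typed syntax; on pre-v12 systems the wide IP is untyped), this step is roughly:

```shell
# A-type wide IP holding the two CNAME pools created above
tmsh create /gtm wideip a example.lb.example.com pools-cname add { aws_us_east aws_us_west }
```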

  • Create two global traffic regions: “eastern” and “western”.  The screenshot below shows the details of creating the traffic regions.
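
A tmsh sketch of the two regions; the actual membership shown in the screenshot is not recoverable here, so the hemisphere split below is an assumption:

```shell
# Membership is illustrative -- substitute your own states/countries/continents
tmsh create /gtm region western region-members add { continent NA continent SA }
tmsh create /gtm region eastern region-members add { not region western }
```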

  • Create global traffic topology records using "Request Source: Region is eastern" and "Destination Pool is aws_us_east".  Repeat this for the western region using the aws_us_west pool.  The screenshot below shows the details of creating these records.
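
The equivalent tmsh sketch (the score value is an assumption; when multiple topology records match, the higher score wins):

```shell
tmsh create /gtm topology ldns: region eastern server: pool aws_us_east score 100
tmsh create /gtm topology ldns: region western server: pool aws_us_west score 100
```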

  • Modify the Pool settings under Wide IP example.lb.example.com to use "Topology" as the load balancing method.  See the screenshot below for details.
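
In tmsh, this final step is roughly (the `a` type keyword applies to v12+; omit it on earlier versions):

```shell
tmsh modify /gtm wideip a example.lb.example.com pool-lb-mode topology
```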

How it all works...

Here's the flow of events that take place as a user types in the web address and ultimately receives the correct IP address.

  • External client types http://example.com into their web browser

  • Internet DNS resolution takes place and maps example.com to your Virtual Server address:  IN A 1.2.3.4

  • An HTTP request is directed to 1.2.3.4:80

  • Your LTM checks the virtual server's profiles; the HTTP profile is enabled, the redirect is applied, and the user's request is answered with a 301 response code

  • External client receives 301 response code and their browser makes a new request to http://www.example.com

  • Internet DNS resolution takes place and maps www.example.com to IN CNAME example.lb.example.com

  • Internet DNS resolution continues: the delegation of lb.example.com hands the query to your GTM, which matches the Wide IP example.lb.example.com

  • The Wide IP load balances the request to one of the pools based on the configured logic:  Round Robin, Global Availability, Topology, or Ratio (we chose "Topology" for our solution)

  • The selected GTM pool returns a CNAME pointing to either the us-east or us-west AWS data center

  • Internet DNS resolution takes place, mapping the request to the ELB hostname (e.g., us-west.elb.amazonaws.com) and returning two A records

  • The external client's HTTP request is sent to one of the returned IP addresses

And, there you have it.  With this solution, you can integrate AWS using your existing LTM and GTM technology!  I hope this helps, and I hope you can implement this and other solutions using all the flexibility and power of your F5 technology.

Published Jun 18, 2014
Version 1.0

  • IMHO, the original article definitely contains a wealth of information on how to utilize F5 BIG-IP GTM (and BIG-IP LTM also) to coordinate Global Server Load Balancing (GSLB) with a cloud service or content distribution network (CDN) provider that has requirements that are not compatible with classic DNS (i.e., BIND). Thus, the initial steps of the article that use:

    o an HTTP re-direct for FQDN example.com (via an LTM virtual server)

    o a CNAME record for FQDN www.example.com so that it resolves to canonical name FQDN example.lb.example.com

    o the DNS delegation of the subdomain FQDN lb.example.com to the GTM

    all act to nicely set up the remaining steps that use the wide-IP, GTM pool, and Topology features of BIG-IP DNS.

    However, as noted by a couple of previous commenters, since the article was written prior to TMOS v12, the rest of the article, where these GTM features are used, suffers from incompatibility with today's BIG-IP DNS versions. Specifically:

    o The wide-IP and GTM pools at v12 and later releases need to have DNS record type declarations.

    o Furthermore, the DNS record type declarations between wide-IPs and their pools and pool members must adhere to certain rules.

    [NOTE: For an excellent treatment of how these v12+ features are utilized, see Lee Orrick's BIG-IP DNS v12 series of articles and videos at
    https://community.f5.com/t5/technical-articles/big-ip-dns-resource-record-types-architecture-design-and/ta-p/288804.]

    Hopefully, the following revised steps based on the original article will demonstrate how the remaining steps work with BIG-IP DNS today. I used tmsh commands because, given that the pre-v12 and v12+ version wide-IP and GTM pool GUI screens are quite similar looking, it seemed more meaningful to use the tmsh commands as they explicitly document at the "detail level" the specifics of what objects with what record types go where.

    [NOTE: Although the example scenario of the original article was based on one cloud service provider in particular, I thought I would make what follows a little more generic. So in the following scenario, it is the fictional cloud service provider "glowebsvc.com" that is used. And when it comes to "west" and "east" in a topological sense, I am simply going to consider the former to mean the Western Hemisphere and the latter to be the Eastern Hemisphere, just to make things simple.]

    So, using the original article as our guide ...

    o The first step will be to create a GSLB pool named "glo_east" of type CNAME that contains the static-target canonical name east.elb.glowebsvc.com:

    # tmsh create /gtm pool cname glo_east members add { east.elb.glowebsvc.com { static-target yes } }
    # tmsh list /gtm pool cname glo_east
    gtm pool cname glo_east {
        members {
            east.elb.glowebsvc.com {
                member-order 0
                static-target yes
            }
        }
    }

    o Then, we create another GSLB pool named "glo_west" of type CNAME that contains the static-target canonical name west.elb.glowebsvc.com:

    # tmsh create /gtm pool cname glo_west members add { west.elb.glowebsvc.com { static-target yes } }
    # tmsh list /gtm pool cname glo_west
    gtm pool cname glo_west {
        members {
            west.elb.glowebsvc.com {
                member-order 0
                static-target yes
            }
        }
    }

    o Next, we create a wide-IP of type A for the FQDN example.lb.example.com that contains the two GSLB pools of type CNAME that we just created above:

    # tmsh create /gtm wideip a example.lb.example.com pools-cname add { glo_west glo_east }
    # tmsh list /gtm wideip a example.lb.example.com
    gtm wideip a example.lb.example.com {
        pools-cname {
            glo_east {
                order 1
            }
            glo_west {
                order 0
            }
        }
    }

    o Next, we create the topological region named "western_hemisphere", defining it to be North America ("NA") and South America ("SA"):

    # tmsh create /gtm region western_hemisphere region-members add { continent NA continent SA }
    # tmsh list /gtm region western_hemisphere
    gtm region western_hemisphere {
        region-members {
            continent NA { }
            continent SA { }
        }
    }

    o Then, we create the topological region named "eastern_hemisphere", defining it to simply be anywhere not in the region "western_hemisphere":

    # tmsh create /gtm region eastern_hemisphere region-members add { not region western_hemisphere }
    # tmsh list /gtm region eastern_hemisphere
    gtm region eastern_hemisphere {
        region-members {
            not region /Common/western_hemisphere { }
        }
    }

    o Next we create the associated topology records so that the requests from the two defined regions will go to their applicable GTM pools:

    # tmsh create /gtm topology ldns: region western_hemisphere server: pool glo_west score 100
    # tmsh create /gtm topology ldns: region eastern_hemisphere server: pool glo_east score 100
    # tmsh list /gtm topology
    gtm topology ldns: region /Common/western_hemisphere server: pool /Common/glo_west {
        order 1
        score 100
    }
    gtm topology ldns: region /Common/eastern_hemisphere server: pool /Common/glo_east {
        order 2
        score 100
    }

    o Finally, we modify the pool settings under the wide-IP example.lb.example.com to use "Topology" as the load balancing method.

    # tmsh modify /gtm wideip a example.lb.example.com pool-lb-mode topology
    # tmsh list /gtm wideip a example.lb.example.com
    gtm wideip a example.lb.example.com {
        pool-lb-mode topology
        pools-cname {
            glo_east {
                order 1
            }
            glo_west {
                order 0
            }
        }
    }

  • Can anyone share an updated document for creating CNAME pools on versions later than 11.5?

      • BullWeivel

        Not sure how this article is relevant, since the CNAME pool would need to point at an FQDN.  For example, with anything you do in AWS, the pool members would need to be FQDNs, since the IPs change based upon whatever AWS wants to do.

        The original article for 11.x was perfect.  Right now, though, that Advanced button is gone in the newer versions, so no clue how to do this strictly using a GTM.

        What is odd: on the LTM you can create pool members with an FQDN, but not with GTM pool members; those must be an IP or a server.

  • Hello, "Entry could not be matched against existing objects." appears when I add members of the pool, and I don't know how to solve it.

  • How can we implement this in BIG-IP 14.1.2.3 Build 0.0.5 Point Release 3?

  • Is there an updated document which reflects the changes made in 12.1 and later? This configuration is no longer relevant past 11.5.

  • In this solution, the GTM sits outside of AWS and the LTMs are deployed in each AZ, correct? If anyone has any drawings of this deployment, it would be appreciated.

  • Not true, Joe. When combined with LTM, it can provide failover as good as your typical non-cloud GTM/LTM solution, even with non-static IPs.

  • Joe_M

    In reality, the GTM is a bad choice for global load balancing for cloud services that don't have static IPs, simply because the GTM is completely unable to monitor an endpoint based on hostname (i.e., Amazon's elastic IPs). This one thing completely negates automatic region or data center failover. When SLAs are involved, GTM can't even be put on the table as an option in this scenario.

  • Is there any information or deployment docs out there for using GTM & LTM VEs that have been ported into AWS?