Forum Discussion
pierrevocat_109 (Nimbostratus)
Oct 30, 2008
DNS_Request redirects traffic slowly
Hi,
I'm using the following iRule (v9.3.1) to direct traffic between two physical sites/locations (A and B) and to route requests based on whether the client is internal or external.
when DNS_REQUEST {
    # Internal clients (192.168.0.0/16): prefer site A's internal pool and
    # fall back to site B's internal pool if site A has no active members.
    # External clients: prefer site B's external pool, fall back to site A's.
    if { [IP::addr [IP::remote_addr]/16 equals 192.168.0.0] } {
        if { [active_members gtm-pool-siteA-internal] > 0 } {
            pool gtm-pool-siteA-internal
        } else {
            pool gtm-pool-siteB-internal
        }
    } elseif { [active_members gtm-pool-siteB-external] > 0 } {
        pool gtm-pool-siteB-external
    } else {
        pool gtm-pool-siteA-external
    }
}
I've set up four GTM pools and it seems to be working. However, when a pool is marked down, it takes clients a while before they get redirected to an alternative pool; they keep hitting the pool that is marked down until I flush the client's DNS cache or the browser is closed and reopened.
Is there an alternative to DNS_REQUEST?
Thanks
Pierre
- JRahm (Admin): What TTL is set on the pools? Most LDNS servers out there should honor the TTL, as do most operating systems. If you are using Internet Explorer 4, 5, or 7, DNS records are cached for 30 minutes regardless of the TTL, so the browser will need to be closed and reopened if a stale record is present. This can be changed in the registry, but that's not always practical. IE6 only caches CNAMEs, so if you are using A records it uses the system cache and your TTL will be honored. Firefox 1.5 and later cache for 1 minute, so recovery is much faster.
- pierrevocat_109 (Nimbostratus): Thanks citizen_elah,
- JRahm (Admin): You can confirm that the GTM is making the switch immediately by adding log entries to your rule and making a request from a new browser window on a machine that isn't caching at all. Unfortunately, the failover time is dependent on the LDNS infrastructure of your clients and their choice of browser, so topology won't speed things up.
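For reference, here is a rough sketch of what the rule might look like with log entries added, assuming the standard iRule log command is available in your GTM rule context and using the pool names from above (the message text is purely illustrative):
when DNS_REQUEST {
    if { [IP::addr [IP::remote_addr]/16 equals 192.168.0.0] } {
        if { [active_members gtm-pool-siteA-internal] > 0 } {
            # Log the requesting LDNS address and which pool was chosen
            log local0. "LDNS [IP::remote_addr]: internal, siteA up -> gtm-pool-siteA-internal"
            pool gtm-pool-siteA-internal
        } else {
            log local0. "LDNS [IP::remote_addr]: internal, siteA down -> gtm-pool-siteB-internal"
            pool gtm-pool-siteB-internal
        }
    } elseif { [active_members gtm-pool-siteB-external] > 0 } {
        log local0. "LDNS [IP::remote_addr]: external, siteB up -> gtm-pool-siteB-external"
        pool gtm-pool-siteB-external
    } else {
        log local0. "LDNS [IP::remote_addr]: external, siteB down -> gtm-pool-siteA-external"
        pool gtm-pool-siteA-external
    }
}
Watching the system log while you mark pools down should then show exactly which pool the GTM hands out for each request, independent of any client-side caching.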
- pierrevocat_109 (Nimbostratus): Great, thanks for your help!
- JRahm (Admin): No problem. There's always a tradeoff when planning for failure. How badly do you want to impact the customer experience to plan for the occasional outage? If you keep TTLs really low, particularly in a delegated scenario, each request must be made twice (once to the authority, then to the GTM), and in high-latency environments this can hurt the perceived overall performance. A higher TTL may result in a longer outage, but won't impact day-to-day normal operations. Knowing your audience and the landscape in which they reach you will help in finding that sweet spot.