Forum Discussion

ltp_55848
Nimbostratus
Jan 17, 2012

GTM sorry server/last resort best practices

Hi All,

Is there an article that outlines the best practices or guidelines for implementing a GTM sorry server?

The problem I am facing is that we currently use an iRule to reselect the sorry server pool (just web servers displaying an outage page) in the event of an LTM service failing. This is used in preference to a fallback host because it allows the client to keep the original URL.
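
For illustration, a minimal sketch of that reselection iRule, assuming a sorry server pool named sorry_pool (hypothetical):

    when HTTP_REQUEST {
        # If every member of the virtual's default pool is down, reselect
        # the sorry server pool; no redirect is issued, so the client
        # keeps the original URL
        if { [active_members [LB::server pool]] < 1 } {
            pool sorry_pool
        }
    }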

However, if this iRule is used in virtuals for GTM services, the virtuals are never marked as failed because the iRule ensures that the sorry server page is still served.

For a GTM service using the Global Availability algorithm, this means that the GTM service fails to fail over to a remote site if the first virtual in the GTM pool fails.

I can forgo the use of an LTM iRule for a sorry server, but this means using a last resort pool in the wide IP configuration. The problem then becomes that, as our organisation uses a combination of private and public services, both a public and a private sorry server GTM pool are required. Additionally, both an HTTP and an HTTPS service are required for each public and private sorry server, as the services are a mixture of both.
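
For reference, a hedged tmsh sketch of the last resort pool approach (the wide IP and pool names are hypothetical, and exact syntax may vary by TMOS version):

    # Requests only reach the last resort pool when every ordinary
    # pool in the wide IP is unavailable
    tmsh modify gtm wideip www.example.com last-resort-pool sorry_pool_public_http

One such sorry pool is needed per combination (public/private, HTTP/HTTPS), which is the configuration sprawl described above.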

Is there any way to simplify the configuration of a GTM sorry server or fallback host?

  • Anthony_7417
    Historic F5 Account

    Do your GTMs monitor the LTM using the "bigip" monitor?

    If so, the LTM will report the status of all its virtuals to the GTM sync group. (You don't need the GTMs to fire a separate HTTP health check at the LTM virtual server.)
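
    For illustration, a hedged tmsh sketch of that monitor assignment (the GTM server object name ltm-site-a is hypothetical):

    # The iQuery-based "bigip" monitor lets the GTM learn virtual
    # server status directly from the LTM
    tmsh modify gtm server ltm-site-a monitor bigip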

    After this, all you have to do is ensure that your LTM health checks appropriately mark the pool members up/down when they are available/unavailable. Given a properly functioning GTM sync group (emphasis on properly), if the virtual is Red on the LTM, it will be Red on the GTMs too.

    Let me know if I missed something.

  • Thanks for the reply Anthony.

    The problem is that if an iRule is used to implement a "sorry server" service (i.e. if all pool members are down, then use the sorry server pool), the LTM virtual is still marked as in service when all pool members are down, because the request is rebalanced to the sorry server.

    It is possible to use an LTM fallback host, but this is undesirable as we would prefer to serve a sorry server response directly rather than issue a redirect (which is what the fallback host functionality does).
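
    For comparison, a hedged sketch of the fallback host behaviour in iRule terms (the URL is hypothetical); note that the client's URL changes, which is what we want to avoid:

    when HTTP_REQUEST {
      if { [active_members [LB::server pool]] < 1 } {
        # Equivalent of a fallback host: a 302 that rewrites the client's URL
        HTTP::redirect "http://sorry.example.com/outage.html"
      }
    }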

    From a GTM perspective, this means that as long as a sorry server is available, the LTM virtual is still in service. If that virtual is used in a GTM service with the Global Availability LB algorithm, the service will never fail over to the next virtual in the pool.

    A possible workaround may be to remove the sorry server iRule at the LTM level, then create a GTM sorry server virtual/pool and specify it as the last virtual/pool where Global Availability is used.
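
    A hedged tmsh sketch of that ordering (the wide IP and pool names are hypothetical, and syntax may vary by version):

    # Global Availability walks the pools in order, so the sorry pool
    # only receives traffic once the real site pools are down
    tmsh modify gtm wideip www.example.com pool-lb-mode global-availability \
        pools replace-all-with { pool_site_a { order 0 } sorry_pool { order 1 } }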

  • Hi ltp,

    If you host the sorry server ON the bigip in an iRule instead of as a pool behind the virtual, the virtual will be marked down as expected, but the sorry page will be returned in the meantime as desired. Simplest implementation:

    when HTTP_REQUEST {
      # active_members needs a pool name; use the virtual's default pool
      if { [active_members [LB::server pool]] < 1 } {
        # All members down: answer in place with no redirect, while the
        # virtual itself still goes red for GTM purposes
        HTTP::respond 200 content "Well...that's embarrassing. We seem to have misplaced our resources."
      }
    }

    I recently did an article on the iFiles feature, added to the product in 11.1, which makes hosting a sorry page in iRules even easier:

    External File Access from iRules via iFiles
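
    For illustration, a hedged sketch of the iFiles approach (the iFile name sorry_html is hypothetical and must first be imported on the BIG-IP; requires v11.1 or later):

    when HTTP_REQUEST {
      if { [active_members [LB::server pool]] < 1 } {
        # Serve a full HTML maintenance page stored on-box as an iFile
        HTTP::respond 200 content [ifile get sorry_html] "Content-Type" "text/html"
      }
    }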