Forum Discussion

Brian_Kinsey_10
Nimbostratus
Mar 28, 2007

Forcing down a pool member on a GTM

We have GTMs at two sites: our main production site and a DR site. We also have LTMs at these sites. I have a requirement to send all traffic to the production site as long as it is available, and automatically fail all traffic over to the DR site if the production site is not available. The problem comes in when the production site comes back online. We do not want traffic to automatically fail back to the production site from the DR site when the production site comes back online. We want this to be manual, so that we don't get traffic bouncing back and forth and so that we can ensure the production site is completely back before sending traffic to it again.

Is there a way to automatically force a pool member down on the GTM if the monitor (which checks to see if a page is returned from the VIP on the LTM) shows that it is unavailable?

I tried the following:

when CLIENT_ACCEPTED {
    if { [LB::status pool test_pool member 10.10.10.10] eq down }{
        set [LB::status pool test_pool member 10.10.10.10] session_disable
    }
}

and got this error:

01070151:3: Rule [DisableDownMember] error:
line 1: [unknown event (CLIENT_ACCEPTED)] [when CLIENT_ACCEPTED {
if { [LB::status pool test_pool member 10.10.10.10] eq down }{
set [LB::status pool test_pool member 10.10.10.10] session_disable
}
}]
  • This could also be done on the LTM with the same effect. If a monitor on the LTM marks down all of the pool members for a Virtual Server, is it possible to have the Virtual Server forced down so that I have to go into the LTM and enable it when I am ready for traffic to start flowing back to that site?
  • Not sure about the GTM, but you may want to investigate the 'manual resume' feature. In essence, when it is enabled and the server goes down, the server stays disabled and has to be manually re-enabled.
  • Deb_Allen_18
    Historic F5 Account
    Actually, you shouldn't need an iRule to accomplish any of your goals...

    "I have a requirement to send all traffic to the production site as long as it is available, and automatically fail all traffic over to the DR site if the production site is not available."

    For this, use "Global Availability" as the wide IP (WIP) load balancing mode, numbering each pool/member in the order in which you'd like them to be handed out, with 1 being the most preferred.

    "We do not want traffic to automatically fail back to the production site from the DR site when the production site comes back online."

    As claretian mentioned, GTM's "Manual Resume" feature, enabled on at least the primary DC pool member, should do the trick (Global Traffic / Pools / [pool name] / Configuration / Advanced).

    "Is there a way to automatically force a pool member down on the GTM if the monitor (which checks to see if a page is returned from the VIP on the LTM) shows that it is unavailable?"

    All you should have to do there is apply an appropriate monitor to the pool members. Make sure you have a Receive string defined, or any response, even a 404 Not Found error, will cause the pool member to be marked UP. If the expected response is not received, the pool member will be marked DOWN after the timeout expires, and, if Manual Resume is configured, it should stay DOWN until manually re-enabled. (A configuration sketch pulling these pieces together appears at the end of the thread.)

    If you use an LTM monitor to mark the real servers down, but no content monitor on GTM looking for a specific response string, the LTM-generated DOWN status will propagate to GTM via iQuery, but then I don't believe Manual Resume on GTM would work as expected, since GTM didn't explicitly mark the node down.

    If you are seeing behavior that doesn't match what I've described, I'd open a Support case and review your observations and configuration with them.

    HTH

    /deb
  • Deb_Allen_18
    Historic F5 Account
    (Sorry, that last section didn't make sense as first written -- edited above for clarity.)
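
For reference, here is a minimal sketch of the setup described in the accepted reply, written as tmsh commands from a much later TMOS release than the one in this 2007 thread (v9.x predates tmsh; there the same options live under the GUI paths Deb gives). Every name is a placeholder -- the monitor, pool, server:virtual-server, and wide IP names, the /health URI, and the HEALTHY receive string -- and the exact property names can vary between versions, so treat this as an outline rather than copy-paste configuration.

    # Content monitor with an explicit Receive string, so a 404 or error page
    # is not treated as UP
    create gtm monitor https app_health_mon defaults-from https send "GET /health HTTP/1.0\r\n\r\n" recv "HEALTHY"

    # Primary-site pool: manual-resume keeps it marked DOWN after the site
    # recovers, until someone re-enables it by hand
    create gtm pool a prod_pool monitor app_health_mon manual-resume enabled members add { prod-ltm:prod_vs }

    # DR-site pool: no manual resume, so it can come and go automatically
    create gtm pool a dr_pool monitor app_health_mon members add { dr-ltm:dr_vs }

    # Wide IP using Global Availability: dr_pool is handed out only while
    # prod_pool is unavailable
    create gtm wideip a www.example.com pool-lb-mode global-availability pools add { prod_pool { order 0 } dr_pool { order 1 } }

On the v9.x units discussed above, the same three knobs (the wide IP LB mode, Manual Resume on the pool, and the monitor's Send/Receive strings) are set through the GUI rather than the CLI.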