I have a scenario and would like to get some input on it.
1 GSLB Pool
2 Pool Members
Load Balancing Method - Global Availability
Both Pool Members are Enabled and Available
Of course, Member 0 will receive all DNS requests until it goes down, and then Member 1 will take over from there.
I'm trying to determine what triggers the automatic global availability failover in the below scenario:
2 Pool Members - Health Monitor Configuration - Inherit from Pool
1 GSLB Pool - Health Monitors - Selected (None)
Availability Requirements - All Health Monitors
In this scenario, the pool members are set to inherit from the pool, but at the pool level no health monitors are selected.
How does Global Availability work with the above configuration? Does it automatically inherit from the Server Name level (LTM)? Let's say the Server Name health monitor is "bigip". Is this now the "default" health monitor? Does the server health check need to fail in order for the pool member to failover? I'm curious how this setup would function, because I see a lot of setups this way.
The bigip server monitor for LTMs reports the status of the LTM virtual server to the GTM via iQuery.
When the LTM pool members go down due to failing monitors, the status of the virtual server changes on the LTM, and the GTM is alerted via iQuery.
Other GTM monitors probing the GTM pools are in addition to the server object level bigip monitor.
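For reference, a quick way to confirm what's actually assigned at each level is to list the monitor property on each object. This is just a sketch; the object names are placeholders, and the `gtm pool a` syntax assumes a v11.x+ tmsh (adjust for your version):

```shell
# my_gtm_server and my_pool are placeholder names.
# Server object -- this is where the bigip monitor typically lives:
tmsh list gtm server my_gtm_server monitor

# Pool level -- "Selected (None)" in the GUI shows up as no monitor here,
# so the effective health status comes from the server-level bigip monitor:
tmsh list gtm pool a my_pool monitor
```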
Thanks, this is what I originally suspected.
I tried searching around and couldn't find an answer that explains how the bigip server monitor functions. Does the GTM simply do a ping request to the LTM virtual servers every 30 seconds to check availability, and is the hold-down timer 90 seconds before they're marked as Down?
> Does the GTM simply do a ping request to the LTM Virtual Servers every
> 30 seconds to check availability status
It's more complex than that: the GTM schedules probe requests to big3d processes on LTMs in the datacenter, and those big3d instances in turn probe the other LTMs via iQuery. Virtual server status is determined directly from the LTM's internal state tracking (based on the LTM pool monitors).
> and is the hold down timer 90 seconds before it's marked as Down?
This is controlled by the "ignore-down-response" setting, which is disabled by default.
When disabled, the GTM marks LTM virtual servers down immediately (the defined LTM pool monitor has already expired in order to mark the virtual server down in the first place).
If "ignore-down-response" is enabled, the monitor's timeout period has to expire before the GTM marks the server object down.
Got it, thanks for the clear explanation.
One last thing I'm curious about. If an LTM virtual server goes down (with a standard http health monitor, that's 3 failed checks, i.e. at the 16s mark), at what point is the corresponding pool member on the GTM marked down? Is it immediate, with the LTM notifying the GTM of its Down status right at the 16s mark?
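One way to measure this empirically is to fail the LTM pool member and watch status on both boxes. A sketch, with placeholder object names:

```shell
# On the LTM: check when the virtual server transitioned to down
# (status changes when the http monitor's 16s timeout expires):
tmsh show ltm virtual my_virtual

# On the GTM: check the pool member status as reported over iQuery:
tmsh show gtm pool a my_pool
```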