Forum Discussion
Craig_Petty_115
Nimbostratus
Apr 20, 2013
Down due to failure vs administratively down
Is there a way in an iRule to differentiate between members that are down due to a failure versus administratively down?
In an LB_FAILED event I would like to redirect to a sorry page if any members are down due to monitor failures, however, if all members are down because someone took them out of service (disabled them), then I would like to redirect to a maintenance page.
3 Replies
- What_Lies_Bene1
Cirrostratus
Craig, this isn't possible. I'd suggest you look for a process-related solution anyway. If you're performing maintenance, why not manually force the redirection to the maintenance page as part of the task?
- nitass
Employee
"Is there a way in an iRule to differentiate between members that are down due to a failure versus administratively down?"
Is LB::status pool usable?
LB::status wiki:
https://devcentral.f5.com/wiki/iRules.lb__status.ashx
- Kevin_Stewart
Employee
There are a few ways to handle this, I think. Sending a user to a sorry page in an LB_FAILED event is pretty straightforward, as nitass states, using the LB::status command:

    when LB_FAILED {
        if { [LB::status pool $poolname member $ip $port] eq "down" } {
            log "Server $ip $port down!"
        }
    }
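Building on that snippet, a minimal sketch of the redirect itself might look like the following. The LB::server commands supply the pool, address, and port of the member the connection was headed to; the /sorry.html path is a placeholder, not something from this thread:

```tcl
when LB_FAILED {
    # Look up the status of the member that just failed.
    # /sorry.html is an assumed placeholder path for the sorry page.
    set poolname [LB::server pool]
    if { [LB::status pool $poolname member [LB::server addr] [LB::server port]] eq "down" } {
        HTTP::respond 302 Location "/sorry.html"
    }
}
```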
You may also want to consider looping through the nodes in the pool to find a good one before simply sending a sorry page. Take a look at the HTTP::retry page for a few examples:
https://devcentral.f5.com/wiki/irules.http__retry.ashx
This will loop through nodes that are technically available but are returning "bad" responses (4xx or 5xx messages). In the event that all of the nodes are down, however, if you want to differentiate between all nodes down because of monitor failures (a sorry page) versus all nodes down because they were disabled (a maintenance page), there are at least two options that I can think of:
1. As you loop through the pool members in your LB_FAILED event (using the members command to enumerate the pool members), take note of their status. At the end of the loop, if more are disabled than failed, send the maintenance page.
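Option 1 could be sketched roughly as below. This assumes LB::status reports "session_disabled" for administratively disabled members and "down" for monitor-failed ones (check the LB::status wiki for the exact strings on your version); the /maintenance.html and /sorry.html paths are placeholders:

```tcl
when LB_FAILED {
    set down 0
    set disabled 0
    set poolname [LB::server pool]
    # members -list returns one {addr port} pair per pool member
    foreach member [members -list $poolname] {
        set addr [lindex $member 0]
        set port [lindex $member 1]
        switch [LB::status pool $poolname member $addr $port] {
            "down"             { incr down }
            "session_disabled" { incr disabled }
        }
    }
    # More disabled than failed: treat this as planned maintenance
    if { $disabled > $down } {
        HTTP::respond 302 Location "/maintenance.html"
    } else {
        HTTP::respond 302 Location "/sorry.html"
    }
}
```

Note that looping over a large pool on every failed request adds overhead, so you may want to cache the result (e.g. in a table or static variable) rather than recount each time.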
2. Configure an external monitor script that queries the monitor status of all of the pool members, and then writes the down or disabled status to a data group. I'd use a "phantom" monitor, one that is attached to a pool, but not a pool that is assigned to a virtual server, or that cares about the actual pool members. Assigning a monitor to a pool simply activates the monitor. Then use a tmsh command like the following to record the status of a given pool and its members:
    tmsh show ltm pool local-pool members { 10.70.0.4:http } detail | grep Reason | grep -v pool | awk -F": " '{ print $2 }'
This command will report one of the following states:
- "Pool member is available"
- "Pool member is available, user disabled"
- "Pool member has been marked down by a monitor"
- "Forced down"
So like before, if ALL of the members are down or disabled, and the majority are disabled, mark the data group entry accordingly. On an LB_FAILED event, read the data group and respond to the client request.
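The iRule side of option 2 could then be as simple as a data group lookup. This sketch assumes the external monitor script maintains a string data group (here called pool_state_dg, a hypothetical name) keyed by pool name, with the value "disabled" or "failed"; the redirect paths are again placeholders:

```tcl
when LB_FAILED {
    # pool_state_dg is an assumed data group maintained by the
    # external monitor script: key = pool name, value = overall state
    set state [class match -value [LB::server pool] equals pool_state_dg]
    if { $state eq "disabled" } {
        HTTP::respond 302 Location "/maintenance.html"
    } else {
        HTTP::respond 302 Location "/sorry.html"
    }
}
```

This keeps the per-request work to a single lookup and moves the expensive status polling out of the data path.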