Forum Discussion
Chad_Roberts_21
Nimbostratus
Aug 10, 2006
Check pool status in an iRule
I am trying to write an iRule that will monitor the status of many devices before forwarding to one device. I'll try to skip the details unless it becomes necessary to explain, but is it possible to define multiple pools, each with their own custom monitor, and check in an iRule to ensure that all of them are up before forwarding to the default pool? I thought perhaps LB::status would come in to play here, but I'm not having much luck finding sample code that helps me understand how it works.
5 Replies
- Terje_Gravvold
Nimbostratus
I don't really know exactly what you are trying to do, but here is some simple code that does a status check on the members of a pool.
# the pool and a static list of its members (IP address / port pairs)
set POOL01 "pool_my-pool"
array set POOL01_MEMBERS {
    "10.1.1.40" "80"
    "10.1.1.41" "80"
    "10.1.1.42" "80"
    "10.1.1.43" "80"
}
# build a comma-separated string of "IP = status" for each member
set LBSTAT ""
foreach {IP PORT} [array get POOL01_MEMBERS] {
    if { $LBSTAT equals "" } {
        set LBSTAT "$IP = [LB::status pool $POOL01 member $IP $PORT]"
    } else {
        set LBSTAT "$LBSTAT, $IP = [LB::status pool $POOL01 member $IP $PORT]"
    }
}
log local0. "$LBSTAT"
As far as I know, you have to define the pool members both in the iRule and as members of a regular pool in the GUI. I don't know of any way to extract the pool members from the BigIP config from within an iRule.
You may also consider using [active_members <pool_name>] if the only thing you want to check is how many members are active compared to the total number of members the pool has.
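For example, a minimal sketch (assuming the same pool name as above, a hard-coded expected member count of 4, and the CLIENT_ACCEPTED event; adjust to taste):
when CLIENT_ACCEPTED {
    # count how many members of the pool are currently up
    set up_count [active_members pool_my-pool]
    if { $up_count < 4 } {
        log local0. "pool_my-pool is degraded: only $up_count of 4 members are up"
    }
}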
- Terje -
- Terje_Gravvold
Nimbostratus
Hi Collin,
Is there any way of automatically listing the pool members and their respective monitored ports from a configured pool? That would be a really nice feature ;-).
- Terje -
- Chad_Roberts_21
Nimbostratus
Thanks to all for the responses. I believe I have accomplished what I was looking for with "active_members", since I can simply check whether there is at least 1 member up in a pool, and it doesn't need to be updated when nodes are added to or removed from pools.
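For reference, a minimal sketch of that kind of check (the pool names pool_monitor_a, pool_monitor_b, and pool_default are hypothetical, and CLIENT_ACCEPTED is just one possible event to do it in):
when CLIENT_ACCEPTED {
    # forward to the default pool only if every monitored pool has at least one member up
    if { [active_members pool_monitor_a] > 0 && [active_members pool_monitor_b] > 0 } {
        pool pool_default
    } else {
        # one or more monitored pools have no active members; log and reset the connection
        log local0. "a monitored pool has no active members; rejecting connection"
        reject
    }
}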
- Deb_Allen_18
Historic F5 Account
No rule is required to generate "Node Down" traps.
LTM by default logs "Node Down" messages, and there is a defined alert for those messages, so if SNMP trapping is configured, traps will be sent to the defined trapsink(s).
HTH
/deb
- Chad_Roberts_21
Nimbostratus
I shouldn't admit this, but for some odd reason a thought occurred to me about this thread while I was sleeping last night. Yeah, yeah... I need some time off.
Anyway, as pointed out above, the SNMP trap mentioned is a default trap, so in theory all you should have to do is give LTM an IP address of an SNMP server. However, if you are running the latest LTM release, version 9.2.3, there is a bug that will prevent SNMP traps from working:
https://tech.f5.com/home/solutions/sol6249.html
Easy fix, and it may not apply to you, but I thought I should bring it up. I ran into it a few weeks back, and it drove me insane for a while, so maybe this will prevent someone else from running into the same problem.
-chad