Forum Discussion
James_Yang_9981
Altostratus
Mar 11, 2009
how to use persist lookup command
In one case I need to manually select the node depending on the status of the server in the persistence table. If the node is down, LTM needs to reject client connections that are already in the persistence table instead of selecting a new pool member. If the node recovers within the persistence table timeout, LTM needs to follow the persistence table and select the old server for new connections from the same client. Source address persistence is used in this case.
I figured out a rule to accomplish this, but I'm not sure how to use the persist lookup command.
The rule is like below:
when CLIENT_ACCEPTED {
    # pseudo-code: look up this client's persistence record
    set all_status [persist lookup source_addr [IP::remote_addr]]
    # pseudo-code: if the persisted node is up, send the connection to it,
    # otherwise reject the client
    if { [LB::status $all_status:nodename] } {
        node $all_status:nodename
    } else {
        reject
    }
}
The question is how to extract the node name from LTM's persistence table. From the iRules wiki I only found the following:
persist lookup <mode> <key> [all|node|port|pool]
"all" or no specification returns a list containing the node, port and pool name.
Specifying any of the other return types will return the specified item only.
<key> = <value> | { <value> [any virtual|service|pool] [pool <name>] }
The latter key specification is used to access persistence entries across virtuals, services, or pools.
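(For reference, a minimal sketch, not from the original post, of the two ways the wiki text above gives for pulling the node out of a source-address persistence record:)
when CLIENT_ACCEPTED {
    # No return type: get the full "node port pool" list, then take the node.
    set prec  [persist lookup source_addr [IP::remote_addr]]
    set pnode [lindex $prec 0]

    # Or ask for just the node by passing the "node" return type.
    set pnode [persist lookup source_addr [IP::remote_addr] node]

    if { $pnode ne "" } {
        log local0. "persisted node for [IP::remote_addr] is $pnode"
    }
}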
12 Replies
- Deb_Allen_18
Historic F5 Account
I think you can do what you need without an iRule using the "Action on Service Down" setting in the pool configuration. (You have to select "Advanced" from the Configuration dropdown to see it.)
If you set "Action on Service Down" to "Reject", a RST will be sent on the connection if client traffic arrives and the pool member has gone down. The persistence record will be retained until it times out, but if the client attempts to reconnect and the persisted-to pool member is still unavailable, a new pool member will be chosen and the original persistence entry overwritten.
HTH
/deb
- James_Yang_9981
Altostratus
I understand the behavior of the "Action on Service Down" setting.
The key point is that when the client reconnects and sends a new connection to BIG-IP, with any choice of the "Action on Service Down" setting BIG-IP will choose a new pool member. That is NOT the behavior the customer wants: they want BIG-IP to "block" the client when it tries to reconnect after a dead connection and wait for server recovery, instead of selecting a new pool member.
If the failed server recovers within a certain time (the persistence timeout), then BIG-IP needs to send the "old" client's new connection to the "old" server.
This requirement goes against the high-availability design of BIG-IP, but it maintains the message integrity of a series of transactions.
- Deb_Allen_18
Historic F5 Account
There really is no way to queue a connection until a specific server becomes available, but to simply keep rejecting it until the persistence record times out, I suppose something like this might work:
when CLIENT_ACCEPTED {
    # Look up the node from any existing source-address persistence record
    # for this client (the "node" return type gives just the node address).
    set pserver [persist lookup source_addr [IP::remote_addr] node]
    # If a record exists but the persisted node is not up, reject the client
    # instead of letting LTM pick a new pool member.
    if { ($pserver ne "") && ([LB::status node $pserver] ne "up") } {
        reject
    }
}
/d
- James_Yang_9981
Altostratus
Good rule. Before I do some testing, there is one more question about the persistence table:
If the same client's new connection is rejected by LTM, will the timeout value of the persistence table entry be refreshed by the new connection sent from a client that is already in the persistence table? This affects how the server recovery time is counted.
- Deb_Allen_18
Historic F5 Account
Not sure, but I doubt the persistence timeout is updated unless the system is able to send traffic to the persistence target. Let us know if you see that it does update the timeout during your testing.
/deb
- Nat_Thirasuttakorn
Employee
how about using persist delete?
- Deb_Allen_18
Historic F5 Account
If the goal is to allow the node to recover for the length of the persistence timeout before selecting a new target, the persistence record offers an easy check for that without having to do clock arithmetic.
It may be necessary to fall back to a clock-based approach, though, if the persist timer is updated even when the target is down.
/d
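(Not part of the original thread: a minimal sketch of the clock-based fallback mentioned above. It assumes the "table" command is available, a hypothetical 300-second recovery window standing in for the persistence timeout, and the "LB::status node" and "persist delete" command forms.)
when CLIENT_ACCEPTED {
    # Grab the node from any existing source-address persistence record.
    set pnode [persist lookup source_addr [IP::remote_addr] node]
    if { ($pnode ne "") && ([LB::status node $pnode] ne "up") } {
        set key "down_since_[IP::remote_addr]"
        set since [table lookup $key]
        if { $since eq "" } {
            # First time the persisted node is seen down for this client:
            # start the recovery window.
            table set $key [clock seconds] 300
            reject
        } elseif { [clock seconds] - $since < 300 } {
            # Still inside the recovery window: keep rejecting.
            reject
        } else {
            # Window expired: drop the stale state and load balance normally.
            persist delete source_addr [IP::remote_addr]
            table delete $key
        }
    }
}
The point of the sketch is only that the decision is driven by elapsed time tracked in the iRule rather than by the persistence record's own timer.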
- Deb_Allen_18
Historic F5 Account
Just ran into an old email that mentioned the PERSIST_DOWN event, which I had forgotten about.
So this iRule should do the same thing, but much more efficiently:
when PERSIST_DOWN {
    reject
}
- James_Yang_9981
Altostratus
Good, the rule keeps getting shorter.
I opened a case with support asking about the persistence table; the reply is that the persistence table entry for a failed server will be deleted once the monitor marks the server down.
That's bad news, so I'm not sure the rules will work if the entry gets deleted.
- hoolio
Cirrostratus
I was going to try using the PERSIST_DOWN event to trigger persistence off of a second token in the client request. But I found, as James described, that the persistence record is removed once the pool member is marked down. So how does the PERSIST_DOWN event ever get triggered?! Is it there just to handle the instant between a pool member being marked down and the persistence table entry being removed?
http://devcentral.f5.com/wiki/default.aspx/iRules/persist_down
PERSIST_DOWN is triggered when LTM is ready to send the request to a particular node or pool member via persistence and it has been marked down.
Anyone have ideas?
Thanks, Aaron
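(Not part of the thread: a small diagnostic sketch for testing when PERSIST_DOWN actually fires relative to the record being removed; it only logs and rejects.)
when CLIENT_ACCEPTED {
    # Log whatever source-address persistence record still exists, if any.
    log local0. "persist record for [IP::remote_addr]: [persist lookup source_addr [IP::remote_addr]]"
}
when PERSIST_DOWN {
    # If this ever logs, the event fired before the record was removed.
    log local0. "PERSIST_DOWN fired for [IP::remote_addr]"
    reject
}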