Forum Discussion
Wil_Schultz_101 (Nimbostratus)
Aug 09, 2007

LB_FAILED behavior, expected or not?
I have the following iRule; I use it so that when my nodes are down, connections go to a different page.
when LB_FAILED {
    switch [LB::server pool] {
        default {
            # Log the client and request, then send the client to the maintenance page
            set remoteip [IP::remote_addr]
            set uri [HTTP::uri]
            set hostname [HTTP::host]
            log local0. "$remoteip is looking up Hostname $hostname and URI $uri"
            HTTP::redirect "http://maint.my.com"
        }
    }
}
I found something today that behaves differently than I would have expected. I have 3 servers in my pool, and when one of them fails for whatever reason, the rule above sends 1/3 of my traffic to the maintenance page until the BIG-IP marks that server down. My health check runs at 5-second intervals with a 16-second timeout, so all the traffic sent to the down server before it is marked down hits the redirect.
Is this expected behavior? It sounds to me like LB_FAILED should actually be called LB_SERVER_FAILED.
- hoolio (Cirrostratus):
Hi,

- Al_Carandang_11 (Nimbostratus):
Yes, this is expected behaviour. The BIG-IP will keep sending traffic to the down server until it is marked down, which happens after your health checks fail for the configured number of retries.
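One way to narrow this window at the iRule level (a minimal sketch, not from the original thread, assuming an HTTP virtual server; the reselect_count variable name is illustrative) is to have LB_FAILED retry another pool member with LB::reselect before falling back to the redirect:

when CLIENT_ACCEPTED {
    # Track how many times this connection has been re-load-balanced
    set reselect_count 0
}
when LB_FAILED {
    if { $reselect_count < 2 } {
        # Pick a new pool member and retry; reselection may land on the
        # same still-marked-up server, hence allowing a couple of tries
        incr reselect_count
        LB::reselect
    } else {
        # Retries exhausted: fall back to the maintenance page
        HTTP::redirect "http://maint.my.com"
    }
}

With this, a request that happens to hit the not-yet-marked-down server gets another chance at a healthy member instead of going straight to the maintenance page.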
- JRahm (Admin):
FYI, if you tune your TCP profile to limit the SYN retransmissions to 2, you can get an LB_FAILED event at around 9 seconds, which would occur before your monitor timeout of 16 seconds. Please see this thread for a more detailed discourse on this from deb:
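The SYN retransmission limit lives in the TCP profile. A minimal sketch in tmsh syntax (tmsh postdates this 2007-era thread, which would have used bigpipe; the profile name tcp_fastfail is illustrative):

    tmsh create ltm profile tcp tcp_fastfail defaults-from tcp syn-max-retrans 2

Attach the profile to the virtual server; with the initial SYN retransmit timeout doubling on each attempt, the connection attempt is abandoned and LB_FAILED fires after roughly the 9 seconds mentioned above, well inside the 16-second monitor timeout.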