Forum Discussion
Jamie_Cravens
Nimbostratus
Feb 28, 2007
Point to different Pools w/ different persistence
I have the following iRule configured, but each pool needs to use a different Persistence Profile. How can I accomplish this?
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/member" } {
        pool POOL_A
    } else {
        pool POOL_B
    }
}
Thank you!
3 Replies
- JRahm
Admin
when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/member" } {
        pool POOL_A
        persist xxx
    } else {
        pool POOL_B
        persist yyy
    }
}
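For illustration, the xxx/yyy placeholders would be replaced with actual persistence methods. A possible concrete version might look like this; the cookie and source-address choices below are hypothetical examples, not something specified in the thread:

when HTTP_REQUEST {
    if { [HTTP::uri] starts_with "/member" } {
        pool POOL_A
        # hypothetical choice: cookie insert persistence for the member area
        persist cookie insert
    } else {
        pool POOL_B
        # hypothetical choice: source-address persistence, /32 mask, 1800s timeout
        persist source_addr 255.255.255.255 1800
    }
}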
Greg_Burch
Nimbostratus
I'm attempting to do the same thing, but it doesn't seem to be working reliably at all. The following iRule is specified within a persistence profile that is assigned as the Default Persistence Profile of a virtual server (logging statements were added for debugging purposes):
when HTTP_REQUEST {
    if { [string tolower [HTTP::uri]] contains "tcgiservlet" } {
        persist source_addr 255.255.255.255 120
        log local0. " [IP::client_addr] - [HTTP::uri] - SOURCE persistence selected "
    } else {
        persist none
        log local0. " [IP::client_addr] - [HTTP::uri] - NO persistence selected"
    }
}
when LB_SELECTED {
    log local0. " [IP::client_addr] - [HTTP::uri] - to node [LB::server]"
}
______________________________________
A sample of the logging that is produced is as follows:
Feb 22 14:07:38 tmm tmm[1036]: Rule XXXXXXXX_PERSIST : y.y.y.y - /live/servlet/GetLeastLoadedComponent?xxxxxx - NO persistence selected
Feb 22 14:07:38 tmm tmm[1036]: Rule XXXXXXXX_PERSIST : y.y.y.y - /live/servlet/GetLeastLoadedComponent?xxxxxx - to node POOLNAME-80 z.z.z.z 80
Feb 22 14:07:38 tmm tmm[1036]: Rule XXXXXXXX_PERSIST : y.y.y.y - /live/servlet/tcgiServlet - SOURCE persistence selected
Feb 22 14:07:38 tmm tmm[1036]: Rule XXXXXXXX_PERSIST : y.y.y.y - /live/servlet/tcgiServlet - SOURCE persistence selected
Feb 22 14:07:39 tmm tmm[1036]: Rule XXXXXXXX_PERSIST : y.y.y.y - /live/servlet/tcgiServlet - SOURCE persistence selected
Feb 22 14:07:39 tmm tmm[1036]: Rule XXXXXXXX_PERSIST : y.y.y.y - /live/servlet/tcgiServlet - to node POOLNAME-80 167.z.z.z 80
The problem is that persistence doesn't seem to take effect immediately: there will be several requests to one back-end node, and then all subsequent requests will go to the other back-end node. One thing I especially don't understand is that the LB_SELECTED event only seems to fire sporadically, and there doesn't seem to be any rhyme or reason to it. It doesn't fire for every request, but it fires much more often than once every 120 seconds, which is the timeout value of the persistence.
Has anyone seen behavior like this, or can anyone shed light on what might be happening?
unRuleY_95363
Historic F5 Account
That's classic non-OneConnect behavior. The LB_SELECTED event is only fired when a load-balancing decision is made. And of course, persistence only plays a factor when the load-balancing decision is made.
As a side effect of OneConnect, load-balancing will occur on every request. However, without OneConnect, load-balancing only occurs when the pool changes. Any subsequent requests that happen to go to the same pool will stay connected to the last pool member.
So, you have two ways you can change this behavior: 1) add a OneConnect profile to the virtual server; 2) call LB::detach at the beginning of HTTP_REQUEST.
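For option 2, a minimal sketch adapting Greg's rule above (untested, and assuming it replaces that rule as-is):

when HTTP_REQUEST {
    # Detach any existing server-side connection so a fresh load-balancing
    # decision is made for this request; the persist command then applies
    # on every request rather than only when the pool changes.
    LB::detach
    if { [string tolower [HTTP::uri]] contains "tcgiservlet" } {
        persist source_addr 255.255.255.255 120
    } else {
        persist none
    }
}

Option 1 requires no iRule change at all: attaching a OneConnect profile to the virtual server has the same effect of forcing a load-balancing decision per request.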
HTH