Forum Discussion
iRule persistence based on HTTP header
After reading the F5 doc, this seemed simple enough:
I created a universal persistence profile with the following rule:
when HTTP_REQUEST {
    if { [HTTP::header exists "X-DeviceKey"] } {
        persist uie [HTTP::header "X-DeviceKey"]
    }
}
Then I created a virtual server with this persistence profile, the default http profile, and a OneConnect profile. The pool itself is set up for Round Robin load balancing.
But it doesn't seem to be working. When I send HTTP requests for different devices from a single client, I am seeing all these requests sent to the same node in the pool (via "persist show all"). This is not what I expected. I'm testing with a small pool with 2 nodes and was expecting to see at least one device "stick" to a different node in the pool than the others.
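For reference, the same rule with debug logging added (the log lines and the LB_SELECTED handler are illustrative additions, not part of the rule above) would show the persist key seen on each request and the pool member that actually gets selected:
when HTTP_REQUEST {
    if { [HTTP::header exists "X-DeviceKey"] } {
        # Persist on the device key and log it so each lookup shows up in /var/log/ltm
        persist uie [HTTP::header "X-DeviceKey"]
        log local0. "Persist key: [HTTP::header X-DeviceKey]"
    }
}
when LB_SELECTED {
    # Log the pool member chosen for this load-balancing decision
    log local0. "Selected member: [LB::server addr]:[LB::server port]"
}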
What am I missing here?
Thanks in advance for any advice you can give me.
--Greg Williams
- hooleylist (Cirrostratus): Hi Greg,
- mistergreg_6218 (Nimbostratus): Yes. I am seeing all of these requests going to the same pool member. For my testing, I am sending a small number of requests from different devices on the same persistent connection. Given that my pool has 2 members in it and is configured as Round Robin, I expected that I would see these different requests spread across both members.
One thing to note is that the nodes in my new pool also belong to another pool that is being used by a different VS at the same time. The other VS is for any TCP traffic and uses source addr persistence, whereas my VS is only capturing HTTP traffic on port 80. In addition, the other VS does not have a OneConnect profile, whereas my VS does. Could that other VS be affecting how my VS is processing traffic?
My VS is using source address persistence as the fallback persistence method. However, when I send 4 requests from different devices and do "persist all show all", I see 4 universal entries in addition to other entries for requests to the other pool.
I've attached a small file with the "persist all show all" output for the nodes in my pool. You'll see other source addr entries for these nodes in there, due to these nodes being actively used by another pool. Before I ran "persist all show all", I had just sent 4 requests with different device IDs on the same connection. In this output, you can see 4 universal entries that correspond to those requests. So that makes me think that my persist rule is triggering correctly. However, they are all mapped to the same member.
Thanks.
--Greg W
- hooleylist (Cirrostratus): Hi Greg,
- mistergreg_6218 (Nimbostratus): Hi Aaron,
There is no "match across" option enabled on any of the persistence profiles I'm using, but I was using the same source address persistence profile on both virtual servers.
So I created a different source address persistence profile for my virtual server and set it as the fallback persistence profile. I still saw the same behavior as before.
However, the good news is that when I set the fallback persistence profile to None on my virtual server, it started working as I expected: 2 of my 4 test requests went to one node and the other 2 went to the other.
I guess I don't understand why it is working this way. I thought the fallback persistence profile only took effect if my iRule did not match. Is there something I need to do in my iRule to prevent it from using the fallback persistence profile?
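One thing I've been wondering about (just an untested sketch on my part) is whether I could handle the fallback inside the rule itself instead of relying on the VS fallback setting, something like:
when HTTP_REQUEST {
    if { [HTTP::header exists "X-DeviceKey"] } {
        # Persist on the device key when the header is present
        persist uie [HTTP::header "X-DeviceKey"]
    } else {
        # Otherwise fall back to source address persistence from within the rule
        persist source_addr
    }
}
Would that sidestep the interaction I'm seeing?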
Thanks.
--Greg
- mistergreg_6218 (Nimbostratus): Aaron,