Persistence across VS with different Poolmembers
New problem, new post.
We have 4 websites pointing to 4 different VS IPs, and each VS is load balanced to 8 pool members.
Physically we have 8 different ESX machines, each with 4 virtual IPs (one per website), so there are 32 different pool member IPs in total.
Now we have the following requirement: if a client connects to website A and is balanced to the corresponding virtual IP on ESX server 3, the same client must also end up on ESX server 3 when it connects to website B, C or D.
Currently we are solving this with an iRule, which is why I'm posting in this forum and hope it's the right place.
To summarize the iRule:
- all 4 virtual IPs per ESX server are grouped into an instance
- in the LB_SELECTED event the instance number of the chosen pool member IP is stored in the session table (using the table command), with the client's source IP as key
- in the CLIENT_ACCEPTED event we do a table lookup on the incoming source IP to get the correct instance number (if one already exists)
- based on that instance number, additional logic chooses the correct pool member
- this iRule is assigned to all 4 VS
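For reference, the steps above can be sketched roughly like this (this is not our actual rule; the subtable name esx_instance and a data group esx_instance_map that maps pool member IPs to instance numbers are just illustrative placeholders):

```
when CLIENT_ACCEPTED {
    # Look up a previously stored instance number for this client IP
    set instance [table lookup -subtable esx_instance [IP::client_addr]]
    if { $instance ne "" } {
        # instance found: select the pool member that belongs to this
        # ESX instance (member-selection logic omitted here)
    }
}

when LB_SELECTED {
    # Map the chosen pool member IP to its ESX instance number via a
    # (hypothetical) data group, then remember it for this client IP
    set instance [class match -value [LB::server addr] equals esx_instance_map]
    table set -subtable esx_instance [IP::client_addr] $instance 1800
}
```

The 1800 second timeout is only an example value.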
The logic of this iRule works pretty well, but not 100% perfectly. In some very rare situations, 2 requests from the same client arrive at the load balancer at exactly the same time, but on 2 different VS. From the logging we could verify that neither lookup found a match in the session table, although the second request should have found the entry written by the first one. Our first idea was that this might be related to CMP, because we saw that the first request was handled by tmm and the second one by tmm1, and the two instances didn't know about each other. After some research here on DevCentral we found a way to disable CMP for these 4 VS by using the "old" DGL reference with "$::" in front of the DGL name. But the problem still occurs.
So our latest assumption is that the two iRule executions (one per VS) are so close together in time that they don't see each other's table entries.
Now I have two questions:
1. How can I change the iRule so that parallel executions know about each other, or at least delay one of them (maybe with priorities???)
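One idea for question 1 that I'm considering: as far as I know the table command is CMP-aware and shared across TMMs, so the misses are more likely a timing race than a CMP issue. The table command has an add subcommand that only writes if the key does not already exist, so two parallel executions could "elect" a single winner instead of overwriting each other. A rough sketch, again with a hypothetical esx_instance_map data group mapping member IPs to instance numbers:

```
when LB_SELECTED {
    # Instance number of the member that load balancing just picked
    set my_instance [class match -value [LB::server addr] equals esx_instance_map]

    # "table add" is a no-op if the key already exists, so whichever
    # connection writes first wins the race
    table add -subtable esx_instance [IP::client_addr] $my_instance 1800

    # Read back the winning value (ours, or the other VS's)
    set instance [table lookup -subtable esx_instance [IP::client_addr]]

    if { $instance ne $my_instance } {
        # the other VS won the race; reselect a member on that
        # instance instead (reselection logic omitted here)
    }
}
```

I haven't verified this under load, so treat it as a sketch, not a tested fix.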
2. Or should I change the whole setup and establish the required persistence in a different way? (Would it bring any benefit to use this iRule within a UIE persistence profile and assign that to each VS?)
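If I understand the UIE approach correctly, the iRule would then only supply the persistence key, and the records could be shared between the 4 VS by enabling "Match Across Virtual Servers" in the persistence profile, roughly:

```
when CLIENT_ACCEPTED {
    # Use the client IP as the universal persistence key; the 1800 s
    # timeout here is illustrative, normally it comes from the profile
    persist uie [IP::client_addr] 1800
}
```

I'm not sure this alone solves my case, though: since each ESX host has a different member IP per website, a shared persistence record would point at the "wrong" member IP on the other VS, so the instance-to-member mapping would still need to be handled somewhere.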
As the customer is facing an incident with this setup (it could not be reproduced in UAT or during the go-live change because of the very low load there), any fast help, tips or ideas are much appreciated.
Ciao Stefan :)