Forum Discussion

Stefan_Klotz
Dec 01, 2011

Persistence across VS with different Poolmembers

Hi again,


new problem, new post.


We have 4 websites pointing to 4 different VS IPs. Each VS is load balanced across 8 pool members.


Physically we have 8 different ESX machines, and each of them hosts 4 virtual IPs, one per website, so there are 32 different pool member IPs in total.


Now we have the following requirement: if a client connects to website A and is balanced to the corresponding virtual IP on ESX server 3, the same client also needs to end up on ESX server 3 when connecting to website B, C, or D.


Currently we are using an iRule for this, which is why I'm posting here and hope this forum is the right place.


To summarize the iRule:


- all 4 virtual IPs per ESX server are grouped into an "instance"

- in the LB_SELECTED event, the instance number of the chosen pool member IP is stored in the session table (via the table command), keyed on the client's source IP

- in the CLIENT_ACCEPTED event, we do a table lookup on the incoming source IP to get the correct instance number (if one already exists)

- based on that instance number, the correct pool member is chosen by additional logic

- this iRule is assigned to all 4 VS
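A rough sketch of that logic might look like the following. This is not the actual iRule from this thread: "member_for_instance" and "instance_of" are hypothetical helpers standing in for the site-specific mapping between instance numbers and pool member IPs (e.g. data-group lookups), and the pool name, port, and 1800-second timeout are placeholder values.

```tcl
when CLIENT_ACCEPTED {
    # If this client already has an instance recorded, reuse it.
    set instance [table lookup "inst_[IP::client_addr]"]
    if { $instance ne "" } {
        # Hypothetical selection of this VS's member on that instance.
        pool website_pool member [member_for_instance $instance] 80
    }
}
when LB_SELECTED {
    # Remember which ESX instance this client landed on (30 min timeout).
    table set "inst_[IP::client_addr]" [instance_of [LB::server addr]] 1800
}
```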


From its logic this iRule works pretty well, but not 100% perfectly. In some very rare situations, 2 requests from the same client arrive at the load balancer at exactly the same time (but on 2 different VS). From the logging we could verify that neither lookup found a match in the session table, although the second request should have found the entry written by the first. Our first idea was that this might be related to CMP, because we saw that the first request was handled by tmm and the second one by tmm1, and the two TMM instances didn't know about each other's state. After some research here on DevCentral we found a way to disable CMP for these 4 VS by using the "old" data-group (DGL) reference with "$::" in front of the DGL name. But the problem still occurs.
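For reference, the CMP-demotion trick mentioned above relies on the legacy global data-group syntax; a minimal sketch, where my_datagroup is a placeholder name:

```tcl
# Referencing a data group with the old "$::" global syntax causes the
# virtual server to be demoted from CMP (handled by a single TMM).
when CLIENT_ACCEPTED {
    if { [matchclass [IP::client_addr] equals $::my_datagroup] } {
        log local0. "client is in my_datagroup"
    }
}
```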


So our latest assumption is that the two iRule executions (one per VS) run so close together in time that they don't see each other's table entries.


Now I have two questions:


1. How can I change the iRule so that parallel executions are aware of each other, or at least delay one of them (maybe with priorities)?


2. Or should we change the whole setup and establish the required persistence in a different way? (Would it bring any benefit to use this iRule within a UIE persistence profile and assign that to each VS?)
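For context, a UIE (universal) persistence iRule keyed on the client source IP would look roughly like this sketch; it is attached via a Universal persistence profile on each VS, and the 30-minute timeout is an example value:

```tcl
when CLIENT_ACCEPTED {
    # Persist this client on the selected member for 1800 seconds.
    persist uie [IP::client_addr] 1800
}
```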



As the customer is facing an incident with this setup (it could not be caught in UAT or during the go-live change because of the very small load there), any fast help, tips, or ideas are much appreciated.


Thank you!



Ciao Stefan :)


3 Replies

  • Hi Stefan,



    There is no way that I am aware of to make one Virtual Server aware of another's actions, unless you are talking about using global variables stored in a table that both of them reference before making any kind of "decision".



    You could try looking at this Tech Tip to see if it might solve your issue and make it easier to support:


    Persisting Across Virtual Servers



    Hope this helps.
  • "From the logging we could verify that both lookups didn't find a match in the session table although the second request should find the entry of the first request."

    I have never come across this situation, but I think it may be possible if two requests come really, really close together.



    By the way, are all 4 websites under the same domain name (different subdomains)? If yes, would it be possible to configure a static cookie on the pools? The first request would be load balanced, and subsequent requests would be directed to a pool member based on the static cookie value.



    Or am I totally lost on what you are asking? :-)
  • Hi nitass,


    no, cookies aren't possible, because we have 4 completely different domains.


    @Michael: I also checked the persistence-across-VS options, but those always require the same node IP in each pool, which is not the case in our setup.


    The customer's old setup used a software load balancer with static balancing (client IP modulo 8 to get the instance number). But we are very unhappy with such a solution, because it's ugly and has some drawbacks (unequal load, no automatic failover if an instance fails).
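    That old static scheme could be expressed in an iRule roughly like this sketch (deriving the instance from the last octet modulo 8 is my assumption of how it worked):

```tcl
when CLIENT_ACCEPTED {
    # Static mapping: instance number 0-7 from the client IP's last octet.
    set instance [expr {[lindex [split [IP::client_addr] "."] end] % 8}]
    # $instance would then select the member on the matching ESX server.
}
```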


    Any other ideas or possibilities to redesign this setup?



    Ciao Stefan :)