Forum Discussion

Prakin
Mar 01, 2019

Persistence state changes in Load balancing method.

Hi all, I have an issue.

 

A script is used to exercise the functionality of an application in which users log in from 4 different client machines, and each client sends a request every 10 seconds. The persistence timeout is set to 300 seconds, the TCP idle timeout is 300 seconds, and the OneConnect profile is disabled. The load balancing method is Least Sessions and persistence is Source Address Affinity. With these settings it works as expected for a while, but after 15-20 minutes the problem appears: the client's persistence state changes.

 

Have you encountered this problem before? Please let me know. The persistence/TCP idle timeout has definitely not expired, and the pool members never went down or came back up, so what could be the cause?

Thu Feb 26 10:15:28 GMT 2019 Sys::Persistent Connections

 

source-address - 10.10.1.100:443 - 172.16.10.101:443

TMM 1 Mode source-address Value 192.168.10.100 Age (sec.) 1 Virtual Name /Common/Testing_VS Virtual Addr 10.10.1.100:443 Node Addr 172.16.10.101:443 Pool Name /Common/Testing_Pool Client Addr 192.168.10.100 Local entry

 

Thu Feb 26 10:15:40 GMT 2019 Sys::Persistent Connections

 

source-address - 10.10.1.100:443 - 172.16.10.102:443 ------------> client connected to a different pool member

TMM 1 Mode source-address Value 192.168.10.100 Age (sec.) 1 Virtual Name /Common/Testing_VS Virtual Addr 10.10.1.100:443 Node Addr 172.16.10.102:443 Pool Name /Common/Testing_Pool Client Addr 192.168.10.100

 

Local entry
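One way to catch these flips automatically is to poll the persistence table and compare successive records for the same source address. A minimal sketch (the tmsh command output layout is assumed to match the listing above; `persist.log` is a hypothetical capture file):

```shell
# Capture the persistence table periodically (run on the BIG-IP):
#   tmsh show ltm persistence persist-records all-properties >> persist.log
#
# Then flag any record whose Node Addr differs from the previous record
# seen for the same persistence Value (the source address):
awk '{
    for (i = 1; i <= NF; i++) {
        if ($i == "Value") client = $(i + 1)
        if ($i == "Addr" && $(i - 1) == "Node") node = $(i + 1)
    }
    if (client in last && last[client] != node)
        print client, "moved from", last[client], "to", node
    last[client] = node
}' persist.log
```

Against the two records above this reports 192.168.10.100 moving from 172.16.10.101:443 to 172.16.10.102:443.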

6 Replies

  • Any particular reason you are using the least connections method? Is it because your back-end servers have different configurations (CPU, memory, etc.)?

     

  • This does seem like strange behaviour.

     

    Do you by any chance have connection limits set on the pool members? If you do, enable the *Override Connection Limit* setting on the source IP persistence profile.

     

    Override Connection Limit:

     

    Specifies, when checked (enabled), that pool member connection limits are overridden for persisted clients. Per-virtual connection limits remain hard limits and are not overridden.
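    If you do need it, the setting can also be toggled from tmsh. A hedged sketch, assuming a source-addr persistence profile named /Common/src_persist (the profile name is hypothetical; confirm the exact property name on your TMOS version):

    ```shell
    # Enable Override Connection Limit on a source-address persistence
    # profile (/Common/src_persist is an assumed name for this example)
    tmsh modify ltm persistence source-addr /Common/src_persist \
        override-connection-limit enabled

    # Verify the change
    tmsh list ltm persistence source-addr /Common/src_persist
    ```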

     

  • No reason. I have tried round robin, as well as least connections with source address persistence, with the same result: the persistence state changes. I chose least sessions because it seemed somewhat more stable, but the issue still appears.

     

  • You have a persistence and TCP idle timeout of 5 minutes so after 5 minutes the TCP connection will drop if the client doesn't send any traffic. Once this drops, there will be no persistence record to send the follow-on TCP connection to the same server.

     

    It is working as I would expect it to: if you want to maintain persistence for follow-on TCP connections, extend the persistence timeout to 1 hour or so.
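    For reference, both timeouts mentioned in this thread can be raised from tmsh. A minimal sketch, assuming profile names /Common/src_persist and /Common/custom_tcp (both hypothetical):

    ```shell
    # Extend the source-address persistence timeout to 1 hour (3600 s)
    tmsh modify ltm persistence source-addr /Common/src_persist timeout 3600

    # Raise the TCP idle timeout to match, so idle connections are not
    # torn down before the persistence record expires
    tmsh modify ltm profile tcp /Common/custom_tcp idle-timeout 3600
    ```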

     

  • Hi Prakash,

     

    Did you see my earlier comment? Have you got connection limits set on the pool members (this is not configured by default)?

     

    Do you get the same issues when using the least connections (member) load balancing method?

     

  • No connection limit is set and no connection override is enabled in the persistence settings. Yes, this strange behavior is seen when using least connections (member) as well.