
Forum Discussion

nov1ce_120072
Sep 09, 2014

Question about load distribution

Hello,


We have a pair of BIG-IP appliances (10.2.2 Build 763.3 Final) in an active/standby configuration with one virtual server serving 22/tcp. The pool consists of two nodes with Round Robin load balancing. There are no iRules deployed, the OneConnect profile is set to None, the default persistence profile is set to source_addr, and the SNAT pool is set to Auto Map.
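For reference, a virtual server matching that description might look roughly like this in tmsh output (names and addresses here are made up, this uses v11-style syntax, and v10.2.x output differs slightly):

    ltm virtual vs_ssh {
        destination 192.0.2.10:22
        ip-protocol tcp
        pool ssh_pool
        persist { source_addr { default yes } }
        source-address-translation { type automap }
    }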


I noticed that one of the customers (out of ~100) always ends up on the second node. The other customers seem to be distributed fairly evenly between the two nodes; only this one always lands on the same node. Is this something I should be worried about?


I found the following article on F5: http://support.f5.com/kb/en-us/solutions/public/10000/400/sol10430.html?sr=40145321 but our setup is quite simple, and both nodes are indeed up.


Grateful for any advice.


Thank you.


2 Replies

  • Either blind luck, or the persistence entry is being constantly refreshed for this client. A likely reason is that the client always creates new connections within the source_addr persistence timeout (default 180 seconds) of another connection closing. Client use of TCP keepalives could also cause this (or else they are constantly sending traffic).

    You can view the persistence records like this:

    tmsh show ltm persistence persist-records
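    On recent versions you can also filter to a single client and, if necessary, remove its record by hand. The IP below is just a placeholder, and the exact options vary by version (check `tmsh help ltm persistence persist-records` on your unit, since 10.2.x may differ):

        tmsh show ltm persistence persist-records client-addr 203.0.113.5
        tmsh delete ltm persistence persist-records client-addr 203.0.113.5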

    • nov1ce_120072
      Thank you! I think it's the second option, because from the firewall logs I can see them connecting every 2 minutes, and they are constantly sending traffic. I also see the customer IP in Statistics > Local Traffic > Persistence Records with an Age of ~5-10 seconds. Does that mean I won't be able to disable this node for maintenance without them noticing? Since they always end up on one node, if I disable it in the pool, will they time out and get transferred to the other node?
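      For what it's worth, Disabled and Forced Offline behave differently here. A Disabled member (session user-disabled) still accepts persistent and active connections, so a client with a fresh persistence record keeps landing on it; a Forced Offline member (state user-down) accepts only traffic for connections that are already open, so new connections from that client should move to the other member. The pool and member names below are placeholders, and this is v11-style tmsh syntax:

          tmsh modify ltm pool ssh_pool members modify { 10.0.0.2:22 { session user-disabled } }
          tmsh modify ltm pool ssh_pool members modify { 10.0.0.2:22 { state user-down } }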