Forum Discussion

alexandre_girau
Dec 22, 2017
Solved

Persistence issue with TCP

Hi all,

 

We have a very simple configuration with multiple nodes in a pool for a specific TCP protocol (port 6000). In front of the F5 farm there is an additional network device that NATs the source IP, so every connection arrives with the same source address and only the source port differs.

 

When connections are initiated, all traffic is sent to a single pool member instead of being load balanced across all the nodes in the pool. However, when that node goes down, all traffic is correctly sent to the second one.

 

In short, we are unable to load balance the traffic correctly across all the nodes.

 

We tried an iRule with persist set to none, and numerous other options, but unfortunately we had no luck getting it to work.
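
The iRule we tried was essentially just disabling persistence, something along these lines:

    when CLIENT_ACCEPTED {
        # Disable any persistence for this connection
        persist none
    }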

 

Can you please point me to the correct way to fix it? Thank you. Regards

 


10 Replies

  • You need to use application-layer persistence. What protocol is used at layer 7?

     

  • Hi

     

    Thank you for your response. The protocol is not a well-known one; it was developed by a company and is based on a raw TCP socket. So I'm not sure how to answer you about layer 7 for this traffic.

     

    But if I understand you correctly, I shouldn't be using a standard-type VS, correct?

     

    • Leonardo_Souza

      OK, basically, as you already figured out, source address persistence will not work in your case, so you need to look at something in the upper layers. If you were using HTTP, for example, cookie persistence would easily fix the problem.

       

      Here is the list of persistence profiles for 13.1.0:

       

      https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm-profiles-reference-13-1-0/4.html

       

      If the other persistence profiles can't be used, you can collect the TCP data, find something in the TCP payload that uniquely identifies each user, and use that for universal persistence.

       

      See this link for universal persistence:

       

      https://support.f5.com/csp/article/K7392

       

      See these links about TCP::collect and TCP::payload:

       

      https://clouddocs.f5.com/api/irules/tcp__collect.html

       

      https://clouddocs.f5.com/api/irules/tcp__payload.html
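
      For example, assuming (hypothetically) that your proprietary protocol sends a fixed-length client identifier in the first bytes of each connection, an iRule along these lines could key universal persistence off it. The virtual server would also need a universal persistence profile assigned:

      when CLIENT_ACCEPTED {
          # Hold the connection and collect the first bytes of TCP payload
          TCP::collect 16
      }

      when CLIENT_DATA {
          # Hypothetical: the first 8 bytes uniquely identify the client
          set client_id [string range [TCP::payload] 0 7]

          # Persist on that identifier for 30 minutes (universal persistence)
          persist uie $client_id 1800

          # Release the payload and let load balancing continue
          TCP::release
      }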

       

    • alexandre_girau

      OK, I'm beginning to understand better with these explanations. Checking information in the TCP payload could be very useful. Thanks, I will keep it in mind, but I'm not sure it fits this special case.

       

      In fact, here we don't need that kind of information; there's no need for persistence or affinity. It's for an IoT project where devices connect to the server farm and establish a socket (and keep it open). If a device gets disconnected, it can reconnect to any node; it doesn't have to reconnect to the previous one.

       

      So, which settings do I need to load balance TCP connections without any affinity, persistence, etc.? I just want traffic distributed across the nodes per TCP connection. For example, with 10k devices connected to a 3-node farm, roughly 3,333 TCP connections should end up on each node.

       

      I totally agree that, since these are long-lived TCP sockets, if a node fails and then comes back, the existing sockets will stay connected to the other nodes in the farm and only new connections will be load balanced to it. We already have a plan to kill TCP sockets on the surviving servers after a node failure, to rebalance the load.

       

      Thank you again, Alex

       

    • Leonardo_Souza

      In that case, yes, you don't need persistence.

       

      The TCP connection will stay open until something happens: the connection is closed, it times out, the server goes down, etc.

       

      You can change the timeout in the TCP profile; the default is 300 seconds. Create a new profile based on the default TCP profile and change the idle timeout value.
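
      For example, from tmsh (the profile name here is just an example):

      # Custom TCP profile inheriting from the default, idle timeout raised to 30 minutes
      create ltm profile tcp tcp_long_idle defaults-from tcp idle-timeout 1800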

       

      There is also the action on service down, which you can configure in the pool settings. Basically, it controls what happens to a connection that is already open when the server is marked down by the monitor.
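
      For example, to reset existing connections when a member is marked down (the pool name is a placeholder; possible values are none, reset, drop, and reselect):

      # Send a RST to clients on connections to a member that the monitor marks down
      modify ltm pool your_pool service-down-action reset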

       

  • I hope you're not using any persistence, and the pool is configured with round robin?

     

    Thx, Srini

     

    • alexandre_girau

      Hi,

       

      No, I don't use any persistence

       

      But yes, I use Round Robin

       

      Is the problem coming from round robin?

       

      Thanks, Alex

       

  • OK, I finally fixed it ^^

    The issue was that I had created a pool with 4 members: the same 2 IPs but with different service ports, because we have 2 versions of the application using different sockets.

    And I had 2 different Virtual Servers (one per version), each with a different iRule to select the specific nodes on the target port. The iRule was like this:

    when CLIENT_ACCEPTED {
        set xxx_node1 "13.x.x.183"
        set xxx_node2 "40.x.x.221"
        set xxx_pool  "pool_vm_xxx_prod"
        set xxx_port  yyyy

        if { ([LB::status pool $xxx_pool member $xxx_node1 $xxx_port] eq "up") and ([LB::status pool $xxx_pool member $xxx_node2 $xxx_port] eq "up") } {
            # Note: a second node command simply overrides the first, so when
            # both members were up, every connection was sent to node2.
            node $xxx_node1 $xxx_port
            node $xxx_node2 $xxx_port
        } elseif { ([LB::status pool $xxx_pool member $xxx_node1 $xxx_port] eq "up") and ([LB::status pool $xxx_pool member $xxx_node2 $xxx_port] eq "down") } {
            node $xxx_node1 $xxx_port
        } elseif { ([LB::status pool $xxx_pool member $xxx_node1 $xxx_port] eq "down") and ([LB::status pool $xxx_pool member $xxx_node2 $xxx_port] eq "up") } {
            node $xxx_node2 $xxx_port
        } else {
            log "Error: pool $xxx_pool is down"
        }
    }

    Finally, I just recreated 2 new pools, one per version, associated each VS with the matching pool, and deleted the associated iRules.
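
    In tmsh, the end state looked roughly like this (pool names, virtual server names, and ports are placeholders):

    # One pool per application version, each on its own service port
    create ltm pool pool_vm_xxx_v1 members add { 13.x.x.183:6001 40.x.x.221:6001 }
    create ltm pool pool_vm_xxx_v2 members add { 13.x.x.183:6002 40.x.x.221:6002 }

    # Attach each pool to the matching virtual server and drop the iRules
    modify ltm virtual vs_xxx_v1 pool pool_vm_xxx_v1 rules none
    modify ltm virtual vs_xxx_v2 pool pool_vm_xxx_v2 rules none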

    And now, my traffic is correctly load balanced 😉

    Thanks to all who helped me in this situation.

    Have a nice day. Regards, Alex