
Forum Discussion

Michael_107360
Oct 02, 2013

Priority Groups

A BIG-IP is configured with 4 nodes in Priority Group 20 and 2 nodes in Priority Group 10; Priority Group Activation is set to Less than 1. All 4 nodes get shut down and traffic moves over to Priority Group 10. Then one node comes back in Priority Group 20, followed by the others. When does the traffic switch back to Priority Group 20?
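For reference, a pool along the lines described could be built with tmsh roughly as follows. This is only a sketch; the pool name and member addresses are placeholders, not taken from the post:

    # Priority Group Activation "Less than 1": the lower group activates only
    # when no priority-20 member is available.
    tmsh create ltm pool app_pool min-active-members 1 \
        members add { 192.0.2.1:80 { priority-group 20 } 192.0.2.2:80 { priority-group 20 } \
                      192.0.2.3:80 { priority-group 20 } 192.0.2.4:80 { priority-group 20 } \
                      192.0.2.5:80 { priority-group 10 } 192.0.2.6:80 { priority-group 10 } }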

 

7 Replies

  • Traffic should move back to the higher-priority members as soon as the activation criterion is met (in this case, one or more available members). Are you seeing something different?
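    One way to confirm both the configured threshold and the current member availability (pool name below is a placeholder):

        tmsh list ltm pool app_pool min-active-members
        tmsh show ltm pool app_pool members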

     

  • The 180-second setting in the source address persistence profile is a timeout value. The BIG-IP maintains a persistence table entry that maps a source address to the chosen pool member/port. Only if that entry times out from inactivity will a new load-balancing decision be made.
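    As an illustrative sketch, a 180-second source-address profile and a look at the current persistence table might look like this (the profile name is a placeholder):

        tmsh create ltm persistence source-addr app_src_persist defaults-from source_addr timeout 180
        tmsh show ltm persistence persist-records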

     

  • That's the nature of source-based persistence. I'm assuming you want existing sessions (persisted by source address) to remain on the priority 10 servers. Realize that a single client can spawn many TCP sessions over the life of an application session, so if you're using source-based persistence and all requests come from the same IP, every new session looks like it's coming from the same client. Can you use a different form of persistence?
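    If the application can work with it, cookie persistence is one common alternative to source-address persistence; switching a virtual server over might look like the following (the virtual server name is a placeholder, and cookie persistence requires an HTTP profile on the virtual server):

        tmsh modify ltm virtual app_vs persist replace-all-with { cookie }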

     

  • This issue seems specific to maintenance, or to a server going down abruptly. If you would like connections to be dropped completely when using source address persistence, you might want to use the "go offline" option on the node (server), i.e. force it offline. There is a caveat: this drops all the active connections to the lower-priority-group server. All fresh connections will then route to the highest-priority server(s)/node(s). This could be useful for non-critical applications hosted on the F5.
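    For illustration, forcing a node offline and later returning it to service can be done from tmsh roughly like this (the node address is a placeholder):

        tmsh modify ltm node 192.0.2.5 state user-down   # GUI: Forced Offline
        tmsh modify ltm node 192.0.2.5 state user-up     # return it to service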

     

  • Can you please help with the "go offline" option on the node (server)?

     

    We are facing a similar issue where traffic is intermittently routed to the wrong priority pool member in the pool. Typically 10.10.71.247:80 is online but should not serve traffic unless the two priority 2 pool members go down; however, it has been intermittently serving traffic even when we can see that the other two priority 2 pool members are shown as up on the load balancers.

     

    We have source address persistence configured because the application (hosted on the priority group 2 servers) requires it. The priority group 1 and priority group 3 servers basically host a maintenance page: priority 3 for scheduled maintenance, and priority 1 for unplanned downtime. Normally the priority 2 and priority 1 servers are both up, but no traffic should be routed to the priority 1 server unless both priority 2 servers are down, so traffic stays on the priority 2 servers with persistence (as required by the application). If we have planned maintenance, we bring up the priority 3 server. In all cases, persistence lasts 180 seconds only, after which the customer's request is re-balanced; this keeps persistence for short-lived transactions while still allowing re-balancing in the less frequent case where we bring up the priority 3 server.
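    As a rough sketch, the member layout described above might be expressed in tmsh along the following lines. The pool name, persistence profile name, and the priority 2/3 member addresses are placeholders (only 10.10.71.247:80 comes from this post), and the activation threshold shown assumes the lower groups should only activate when no higher-priority member is available:

        tmsh modify ltm pool app_pool min-active-members 1
        tmsh modify ltm pool app_pool members modify { 10.10.71.247:80 { priority-group 1 } }
        tmsh modify ltm pool app_pool members modify { 192.0.2.21:80 { priority-group 2 } 192.0.2.22:80 { priority-group 2 } }
        tmsh modify ltm pool app_pool members modify { 192.0.2.31:80 { priority-group 3 } }
        tmsh create ltm persistence source-addr app_src_persist defaults-from source_addr timeout 180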

     

    We typically get 1-2 calls a day from customers landing on the priority 1 pool member, even though both priority 2 pool members are up. Even when a user sees the issue, we cannot reproduce it, and the F5 shows the priority 2 pool members as active. What other options do we have for troubleshooting?
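    Two things possibly worth capturing at the moment a customer reports the problem are the pool member status and the persistence table (pool name below is a placeholder):

        tmsh show ltm pool app_pool members
        tmsh show ltm persistence persist-records

    A persistence record pointing at 10.10.71.247:80 while both priority 2 members show as available would suggest that an old persistence entry, rather than priority group activation, is steering that client.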

     

    • Michael_107360
      My issue was only resolved by removing source_addr persistence; that was what was causing it. As soon as I removed source_addr from the virtual server, the nodes would switch back and forth immediately.
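      For illustration, removing persistence from a virtual server as Michael describes could look like this (the virtual server name is a placeholder); every new connection is then subject to a fresh load-balancing decision:

          tmsh modify ltm virtual app_vs persist none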
  • Although I agree with Michael, if the application requires returning clients to persist, removing source address persistence would not address the issue. That solution is viable for most scenarios where persistence is not required.

     

    Even with priority group activation configured, there is still a chance of traffic flowing to pool members in a lower priority group, for example via existing persistence entries.

     

    There is another setting for priority group activation at the top of the pool configuration, which reads "Less than". In your case, if you set it to less than 2, traffic will stop flowing to priority group 1. This is a way to hard-code the traffic flow. Please try it and post whether it works for you.
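    If you try that, the GUI "Priority Group Activation: Less than 2" setting corresponds to the pool's min-active-members property in tmsh (the pool name below is a placeholder):

        tmsh modify ltm pool app_pool min-active-members 2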