Single Node Persistence
Problem this snippet solves:
A really slick & reliable way to stick to one and only one server in a pool.
Requirement: Direct traffic to only a single node in a pool at a time. Initially, traffic should always go to Node A. If Node A fails, traffic goes to Node B. When Node A comes back online, traffic should continue to go to Node B. When Node B fails, traffic should go back to Node A.
To send traffic to only 1 pool member at a time, you can use an iRule and Universal Persistence to set a single persistence record that applies to all connections.
- Create a virtual server.
- Create a pool with the real servers in it.
- Create an iRule like the one shown in the Code section below.
- Create a Persistence profile of type Universal which uses the iRule you just created. Set the timeout high enough so it will never expire under typical traffic conditions.
- In the virtual server definition, apply the pool as the default pool and the new persistence profile as the default persistence profile (both on the virtual server "Resources" screen).
The first connection will create a single universal persistence record with a key of "1". All subsequent connections will look up persistence using "1" as the key, resulting in truly universal persistence for all connections. (You can use 1 or any other constant value; 0 has the same effect as 1. One of my customers uses "persist uie [TCP::local_port]", which keys the record on the virtual server's port instead.)
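For illustration, a sketch of that per-port variant (not part of the original snippet):

    when CLIENT_ACCEPTED {
        # Key the universal persistence record on the virtual server's
        # client-side (listening) port instead of a constant, so each
        # virtual server using this profile gets its own record.
        persist uie [TCP::local_port]
    }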
When one node fails, the other is persisted to by all comers. When the 2nd node fails, the 1st again becomes the preferred node for all, ad infinitum.
This doesn't offer the capability of manual resume after failure, or a true designation of a "primary" and "secondary" instance (sometimes required for database applications), but it does solve the problem of "only use one node at a time, I don't care which one, please." (You can use priority to gravitate toward the top of a list.)
Note: Priority-based load balancing, with or without dynamic persistence, doesn't quite address this requirement. Priority load balancing allows you to set a preferred server to which traffic should return once it recovers. With Priority and dynamic persistence of any kind enabled, when a higher-priority node comes back up after failing, you will see traffic distributed across multiple pool members until the old connections/sessions die off. With Priority and no persistence, existing sessions will break as soon as the preferred node becomes available again.
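For reference, the static priority behavior described in this note is configured on the pool; a tmsh sketch with assumed object names and addresses:

    modify ltm pool single_node_pool min-active-members 1
    modify ltm pool single_node_pool members modify { 10.0.0.1:80 { priority-group 10 } 10.0.0.2:80 { priority-group 5 } }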
Code:

rule PriorityFailover {
   when CLIENT_ACCEPTED {
      persist uie 1
   }
}
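The supporting objects from the steps above might look like this in tmsh (a sketch only; the object names, member addresses, and 86400-second timeout are assumptions, and the iRule is assumed to be saved as ltm rule PriorityFailover):

    create ltm pool single_node_pool members add { 10.0.0.1:80 10.0.0.2:80 } monitor tcp
    create ltm persistence universal single_node_persist rule PriorityFailover timeout 86400
    create ltm virtual single_node_vs destination 192.0.2.10:80 ip-protocol tcp profiles add { tcp } pool single_node_pool persist replace-all-with { single_node_persist }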
- Leonardo_Souza
I was reading an F5 configuration and saw the iRule in this codeshare link.
I knew it had to have come from DevCentral. :D
The solution proposed here works fine.
However, as indicated in previous comments, destination address persistence does the same job.
I just want to add more information about the performance side.
I hope this will make clear that destination address persistence is a better option.
As a general rule, only use an iRule if there is no built-in functionality for what you want to do.
Destination address persistence is built-in functionality; universal persistence is also built-in, but it triggers an iRule.
I don't have access to the F5 source code, so I will make assumptions about how the persistence table lookups work.
The iRule is using "1" as the key, so if you use the same persistence profile on multiple virtual servers, you will end up with multiple persistence records with the key "1".
The persistence table does show the virtual server IP, so it lists the following:
universal (persistence type) - 1 (key) - virtual server IP and port - pool member IP and port - TMM number
I assume the system first does a lookup for key "1", and from that list does another lookup for the virtual server IP and port.
As someone said in the comments, using something unique to the virtual server, like virtual IP and port, should remove the need for the second lookup.
All that to say, it is better to use destination address persistence than the solution proposed in this code share.
Hi Dominique, I would recommend creating a new dest address affinity persistence profile with an appropriate timeout. It will be used as the default persistence profile and replaces the iRule logic. Please set Action On Service Down in the advanced pool configuration to "Reject" to terminate current connections when a pool member's state changes. With a state change, the persistence record will be deleted and a new incoming connection will be balanced to another available pool member. A new persistence record will then be created. Please keep in mind that the record is updated by new incoming connections only; that's why you will notice a reset of the remaining time only with newly established connections. Cheers, Stephan
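A tmsh sketch of this suggestion (object names are assumptions; to my understanding, the GUI "Reject" action corresponds to the reset value here):

    create ltm persistence dest-addr single_node_dst_persist timeout indefinite
    modify ltm pool single_node_pool service-down-action reset
    modify ltm virtual single_node_vs persist replace-all-with { single_node_dst_persist }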
- Dominique_Peti1
Could someone be more explicit about how to configure "Destination address affinity" like suggested by Stephan Manthey? It is not clear to me if one should
- just replace persist uie 1 by persist dest_addr in the iRule code example above?
- or configure "Destination address affinity" persistence on the Default Persistence Profile of the virtual server?
- or still something else?
Also, in a case where the connections are in principle permanent (e.g. to a database master node):
- should the timeout be unset (Indefinite)?
- In case a server node is temporarily inaccessible or administratively forced offline, the TCP connections to that node might survive, but during that time new connections could be established with other nodes, resulting, when the node is accessible again, in a state where there are active connections to more than one node. How can this be avoided? E.g. how can all connections to other nodes be cut when a new server node is chosen by the persistence?
- would a custom destination affinity persistence with a CARP hash algorithm work like it does for source address persistence, i.e. always selecting the same server node when all nodes are available, even for the very first connection (e.g. after a reboot)? cf. How Carp algorithm with source address persistence works?
Thanks in advance for your explanations!
- Stanislas_Piro2
Client side connection is the connection between the client and the F5!!!
- k20
I'm sorry you are confusing me. "Destination address affinity uses client side destination address... which is virtual server address for a standard vs" Are you talking about the connection between the F5 and the real servers? If so, it doesn't apply to my environment because we are using SNAT. So the servers in our case can only see the SNAT address, NOT the virtual address.
- Stanislas_Piro2
Destination address affinity will create one single persistence record: the virtual server address.
Destination address affinity uses the client-side destination address... which is the virtual server address for a standard VS.
So the first connection will select a pool member, and all other connections will use the same pool member.
- k20
So my question remains, how do we make sure that all connections will get sent to a single node when both nodes are up and no persistence records exist yet for those connections?
Does dynamic priority for pool members exist? So far, I only heard about static priority using Priority Group Activation.
That's true. If there is no persistence record, the pool's load balancing method will pick a member. The persistence record will be created, and every new connection to this virtual will then be balanced to the same pool member. We are not talking about prioritizing a specific pool member.
- k20
Stephan, I understand how persistence works. Did you forget that at T=0 there is nothing in the persistence table and both nodes are online? What decision will dest addr persistence make to ensure that all clients get sent to a single node and not to both? You can't make a persistence decision yet because the record doesn't exist.