Single Node Persistence
Problem this snippet solves:
A really slick & reliable way to stick to one and only one server in a pool.
Requirement: Direct traffic to only a single node in a pool at a time. Initially, traffic should always go to Node A. If Node A fails, traffic should go to Node B. When Node A comes back online, traffic should continue to go to Node B. When Node B fails, traffic should go back to Node A.
To send traffic to only 1 pool member at a time, you can use an iRule and Universal Persistence to set a single persistence record that applies to all connections.
- Create a virtual server.
- Create a pool with the real servers in it.
- Create an iRule like the one shown in the Code section below.
- Create a Persistence profile of type Universal which uses the iRule you just created. Set the timeout high enough so it will never expire under typical traffic conditions.
- In the virtual server definition, apply the pool as the default pool, and the new persistence profile as the default persistence profile (both on the virtual server "Resources" screen).
The first connection will create a single universal persistence record with a key of "1". All subsequent connections will look up persistence using "1" as the key, resulting in truly universal persistence for all connections. (Use 1 or any constant value; 0 will have the same effect as using 1. One of my customers uses "persist uie [TCP::local_port]".)
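For illustration only, here is a sketch of that local-port variant (not part of the original snippet): it keys the record on the virtual server's listening port, so every client hitting that port shares a single record.

when CLIENT_ACCEPTED {
    # TCP::local_port in CLIENT_ACCEPTED is the client-side local port,
    # i.e. the port the virtual server listens on, so all clients of this
    # virtual share one universal persistence record
    persist uie [TCP::local_port]
}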
When one node fails, all clients are persisted to the other. When the second node fails, the first again becomes the selected node for all, ad infinitum.
This doesn't offer manual resume after failure, or a true designation of a "primary" and "secondary" instance (sometimes required for database applications), but it does solve the problem of "only use one node at a time, I don't care which one, please". (You can use priority to gravitate towards the top of a list...)
Note: Priority-based load balancing, with or without dynamic persistence, doesn't quite address this requirement. Priority load balancing allows you to set a preferred server to which traffic should return once it recovers. With Priority and dynamic persistence of any kind enabled, when a higher-priority node comes back up after failing, you will see traffic distributed across multiple pool members until old connections/sessions die off. With Priority and no persistence, existing sessions will break once the preferred node becomes available again.
Code:

rule PriorityFailover {
    when CLIENT_ACCEPTED {
        # every connection persists on the same constant key ("1")
        persist uie 1
    }
}
- Daniel_Gonzalez (Nimbostratus)
Hi Stanislas, Michael
I understand that with a persistence profile of destination address you'll need to take care of the load balancing method to have the requests going to the same node, especially new requests.
I cannot see how it will work out for new requests reaching the LTM that are not in the persistence table, in the event one of the nodes comes back online.
From the original Codecentral post, the requirement is that in the event of a node coming back online, traffic keeps going to the same node.
Thanks
- mderanek_60004 (Nimbostratus)
We have been using this for years with no problems. It's based on the VS name. Using uie 1 is not a good idea, especially if you are using the iRule for multiple VSs.
We create a universal persistence profile that calls the iRule instead of assigning the iRule to the virtual server.
when CLIENT_ACCEPTED { persist uie [virtual name] }
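Spelled out with comments, that per-virtual-server variant would look something like the sketch below (the universal persistence profile still has to reference the iRule, as in the original steps):

when CLIENT_ACCEPTED {
    # [virtual name] returns the name of the virtual server that accepted
    # this connection, so each virtual server sharing this iRule keeps its
    # own single persistence record instead of all of them colliding on
    # the constant key "1"
    persist uie [virtual name]
}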
Just use destination address affinity instead, please. It results in a single persistence record applicable to all clients requesting the virtual. The record actually contains the virtual server's IP address (destination address affinity) and will be deleted/replaced in case the mapped pool member fails and a re-selection happens. In the end, all traffic sticks to a single pool member as long as it is available. If it fails, the persistence record will be replaced with the next incoming connection. This is an alternative to using priority groups. Priority groups may tend to flap between pool members in case the high-priority member is not stable. Cheers, Stephan
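For anyone who prefers to express that destination-address idea as an iRule rather than through a destination address affinity profile, a rough equivalent is sketched below (an illustration only, not Stephan's profile-based setup; it assumes a standard virtual server, where the client-side local address is the virtual's own IP):

when CLIENT_ACCEPTED {
    # IP::local_addr in CLIENT_ACCEPTED is the client-side local address,
    # which for a standard virtual server is the virtual's IP address, so
    # every client shares one persistence record per virtual server
    persist uie [IP::local_addr]
}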
- k20 (Nimbostratus)
OK people, I still don't know exactly how dst addr persistence or the iRule will help. Here's where I believe both methods will fail to deliver.
Scenario 1:
T=0 (where T = time): nodes A and B are both online, and the persistence table is empty. T=1: PC1 and PC2 start to connect. One of the four mappings below could happen as persistence entries are created:
- PC1-A and PC2-B
- PC1-B and PC2-A
- PC1-A and PC2-A
- PC1-B and PC2-B
As you can see, we don't want 1 or 2 to happen. Neither dst addr persistence nor the iRule can help when persistence entries don't exist yet.
Scenario 2:
T=0: node A is online, node B is offline, and the persistence table is empty. T=1: PC1 and PC2 start to connect. The following mappings will be created in the persistence table:
PC1-A and PC2-A
T=3: node B comes online; PC1, PC2 and PC3 connect. The following mappings could happen:
PC1-A, PC2-A, and PC3-A or PC3-B
As you can see, PC1 and PC2 don't change. However, the new PC3 could go to node A or node B. This will result in some new PCs going to A or B while the existing PCs still stick to A, because their persistence entries already exist.
Before dst addr persistence or the iRule even kicks in, we have to make sure that only ONE node is taking the traffic. How can we accomplish that? Only when this first step is accomplished will dst addr persistence or the iRule help.
Hi k20, destination address affinity does not care about your client. The only thing of interest is the destination IP your clients are targeting, and this will be the IP address of your virtual server. Whenever a client establishes a connection, the virtual will create a persistence table entry containing the virtual server's IP as the key and the pool member as the value. With a new incoming connection (within the persistence timeout) it will look up the table. The key is the virtual's IP address and the value is exactly the same pool member. And this results in selecting the same pool member for all clients. Cheers, Stephan
- k20 (Nimbostratus)
Stephan, I understand how persistence works. Did you forget that at T=0 there is nothing in the persistence table and both nodes are online? What decision will dst addr persistence make to ensure that all clients get sent to a single node and not to both? You can't make a persistence decision yet, because the record doesn't exist.
That's true. If there is no persistence record, the pool-based load balancing method will pick a member. The persistence record will be created, and every new connection to this virtual will now be balanced to the same pool member. We are not talking about prioritizing a specific pool member.
- k20 (Nimbostratus)
So my question remains, how do we make sure that all connections will get sent to a single node when both nodes are up and no persistence records exist yet for those connections?
Does dynamic priority for pool members exist? So far, I have only heard about static priority using Priority Group Activation.
- Stanislas_Piro2 (Cumulonimbus)
Destination address affinity will create one single persistence record: the virtual server address.
Destination address affinity uses the client-side destination address... which is the virtual server address for a standard VS.
So the first connection will select a pool member; all other connections will use the same pool member.
- k20 (Nimbostratus)
I'm sorry, you are confusing me. "Destination address affinity uses the client-side destination address... which is the virtual server address for a standard VS." Are you talking about the connection between the F5 and the real servers? If so, it doesn't apply to my environment, because we are using SNAT. So the servers in our case can only see the SNAT address, NOT the virtual address.