Forum Discussion
Marco_Bayarena_
Altostratus
Apr 02, 2008
Failover instead of Load Balancing Pool members?
Is it possible to configure a 2 member pool to have one member receive all the connections until it becomes unavailable then send the traffic to the other member?
I have two mail servers for which I would like to provide failover capability. I would prefer that only one mail server receive mail at a time. When that server fails, I would like the other server to start receiving mail.
I created a pool that contains both mail servers and a virtual server that points to the pool. None of the load balancing methods of the pool seem to meet my needs. Is there a way to accomplish this?
Thanks.
5 Replies
- Steve_Brown_882
Historic F5 Account
You need to use priority group activation. Set it up so that it activates the second group when fewer than 1 server is available. Then set the server you want to be primary with a priority of 1 and the backup with a priority of 2. You may be able to find a doc on AskF5 that explains it better, but that is the basic way to do what you are looking for. I do this for mail servers too and it works well.
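A minimal tmsh sketch of this setup (pool name, member addresses, and priority numbers are illustrative; note that in tmsh the member with the *higher* priority-group number is preferred, so the primary gets the larger number):

```
# Sketch only -- names and addresses are made up.
# min-active-members 1 tells LTM to activate the next priority
# group when fewer than 1 member of the current group is available.
tmsh create ltm pool mail_pool \
    min-active-members 1 \
    members add { 10.0.0.10:25 { priority-group 10 } \
                  10.0.0.11:25 { priority-group 5 } }
```

With this, 10.0.0.10 receives all traffic; 10.0.0.11 only takes over while 10.0.0.10 is down.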
- Daniel_55334
Altostratus
For this setup, can the backup server remain active even after the primary server comes back up?
- kbose_49650
Nimbostratus
Perhaps not, because the moment the server with the higher priority comes back up, it takes precedence. I also think that existing connections to the lower-priority server will be held until they terminate, but new connections will go to the higher-priority server that just came back up. For what you want, you would need an iRule, as described by hoolio.
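For reference, a minimal sketch of that kind of "single node" iRule (written from memory, not the exact Codeshare entry; it assumes a universal persistence profile is attached to the virtual server):

```
when CLIENT_ACCEPTED {
    # Persist every client on the same fixed key, so all traffic
    # follows whichever pool member was selected first and stays
    # there until that member goes down.
    persist uie 1
}
```

Because the persistence record outlives the primary's outage, traffic stays on the backup even after the primary returns, which addresses Daniel's question above.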
- hoolio
Cirrostratus
There is a nice example of single node load balancing in the iRules Codeshare:
- StephanManthey
Nacreous
The easiest way to get it done from my perspective works without an iRule.
Just apply destination address affinity to the virtual server, then check the persistence table:
watch -d -n 1 tmsh show ltm persist persist-records
You will notice only a single entry. All traffic to the virtual server (the persistence key) is sent to the selected pool member, no matter who the client is.
As soon as that pool member fails, all incoming traffic is directed to the alternative pool member and the persistence record is updated. That's it. Obviously the feature was intended for another context, but it works pretty well for this particular purpose. I actually started using it this way back in BIG-IP v4.2. 🙂 Thanks, Stephan
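To set this up from tmsh, something like the following should work (a sketch; the virtual server name is illustrative, and dest_addr is the built-in destination address affinity persistence profile):

```
# Attach destination address affinity persistence to the virtual server.
# Since the virtual's destination address never changes, a single
# persistence record pins all traffic to one pool member.
tmsh modify ltm virtual vs_mail persist replace-all-with { dest_addr }
```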