Forum Discussion
Can LTM be used to configure Active and Passive Servers?
1. Define an active pool of servers for a VIP
2. Define a passive pool of servers for the same VIP
3. When all of the members in the active pool go down, make the passive pool active
Is it possible to do that in LTM?
If it is possible, then when one of the previously active pool members comes back up, does it switch back?
- shawn306_84070 (Nimbostratus): Ok so here is the setup....
- Michael_Yates (Nimbostratus): "Less than 1" would be the Active / Hot Standby configuration that you are looking for.
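In tmsh terms, a minimal sketch of that kind of active/standby pool might look like the following. The pool name, member addresses, and monitor here are hypothetical placeholders; the key pieces are the higher priority-group on the active member and min-active-members 1, which is the CLI equivalent of setting Priority Group Activation to "Less than 1 Available Member" in the GUI.

# Hypothetical example only: names and addresses are placeholders
ltm pool /Common/pool-example_active-standby {
    load-balancing-mode round-robin
    members {
        /Common/192.0.2.10:8080 {
            address 192.0.2.10
            # active member: higher priority group
            priority-group 2
        }
        /Common/192.0.2.11:8080 {
            address 192.0.2.11
            # standby member: lower priority group
            priority-group 1
        }
    }
    # activate the lower priority group when fewer than 1
    # higher-priority member is available
    min-active-members 1
    monitor /Common/tcp
}

With something like this in place, new connections go to the higher-priority member while it is up, fail over to the lower-priority member when it is not, and move back once the higher-priority member recovers; what happens to already-established connections is a separate question, as discussed further down.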
- shawn306_84070 (Nimbostratus): Michael,
- james_lee_31100 (Nimbostratus):
I have a similar requirement:
1) When node 10.0.21.201 is up, it always takes the traffic.
2) If node 10.0.21.201 is down, node 10.0.21.255 takes the traffic.
3) When node 10.0.21.201 comes back online, the load balancer directs traffic back to node 10.0.21.201.
I tested this: it works for cases 1 and 2, but not for case 3.
Any suggestions?
ltm pool /Common/pool-ambari_8441 {
    load-balancing-mode least-connections-member
    members {
        /Common/10.0.21.201:8441 {
            address 10.0.21.201
            priority-group 3
        }
        /Common/10.0.21.255:8441 {
            address 10.0.21.255
        }
    }
    min-active-members 1
    monitor /Common/MON-ambari-8441
}
- DR_A__18839 (Historic F5 Account):
Getting a stateful load balancer to do this will be "challenging". It is also worth noting that in all but a handful of scenarios (one, maybe?) it is completely undesirable.
To know whether it is even possible, we would need the Virtual config to see whether persistence is enabled and which profiles are in use (possibly their configs as well; iRules too?) in order to assess whether a Performance Layer 4 type Virtual could be used.
Best case scenario, this might be possible by following the nPath configuration instructions (a search on AskF5 should bring them up; worth noting that the name "nPath" has fallen out of favor, so it is commonly referred to as DSR, or Direct Server Return). Note that you aren't really doing nPath, but configuring the F5 for nPath allows it to not be stateful. This means each packet will be considered for load balancing individually, and that the F5 will no longer be stateful at all beyond the response packet for a given packet egressed to the pool member (basically, you're configuring us as a dumb router, not a load balancer).
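For reference, the rough shape of that kind of configuration is a FastL4 virtual with address and port translation disabled and no SNAT. The sketch below uses hypothetical names and addresses, and the real AskF5 nPath procedure also requires loopback and ARP configuration on the servers themselves, so treat it strictly as an untested illustration.

# Hypothetical sketch of an nPath/DSR-style virtual (names and addresses are placeholders)
ltm virtual /Common/vs-example-npath_8441 {
    destination /Common/192.0.2.100:8441
    ip-protocol tcp
    mask 255.255.255.255
    pool /Common/pool-example
    profiles {
        # FastL4 keeps the BIG-IP out of the TCP state machine
        /Common/fastL4 { }
    }
    source 0.0.0.0/0
    # no address/port translation and no SNAT, so servers see and
    # answer the client directly
    translate-address disabled
    translate-port disabled
}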
Coming back from the "how" to briefly visit the "whether": configuring an F5 Virtual to behave like a dumb L3 device is typically impractical and/or undesirable in all but a gateway-pool type scenario. The antithesis of this type of configuration would be a secure web shopping cart. As always, I highly recommend testing in a lab or, if need be, via a "test" virtual that impacts only test traffic, to ensure proper operation.
- james_lee_31100 (Nimbostratus):
Thanks Don, I am not sure why nPath is in the picture; could you give an example?
Here is the VIP configuration; no persistence.
ltm virtual /Common/priv-np-int-sjambari_8441 {
    destination /Common/10.0.22.6:8441
    ip-protocol tcp
    mask 255.255.255.255
    pool /Common/pool-int-sjambari_8441
    profiles {
        /Common/tcp { }
    }
    source 0.0.0.0/0
    source-address-translation {
        type automap
    }
    translate-address enabled
    translate-port enabled
}
- james_lee_31100 (Nimbostratus): And the client is on the same subnet as those servers.
- DR_A__18839 (Historic F5 Account):
Maybe it is also possible that slow ramp + PGA (Priority Group Activation) is affecting distribution expectations? Review SOL16242 for applicability to your situation: https://support.f5.com/kb/en-us/solutions/public/16000/200/sol16242.html
(I never saw any reference to instability of your alternate pool member, so it is not a direct match unless that was omitted.)
Or, perhaps you just didn't wait more than 10s?
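If you want to quickly rule slow ramp in or out in the lab, the pool's slow-ramp-time can be checked and temporarily zeroed from tmsh; the commands below assume the pool name posted earlier and are intended for testing only.

# show the current slow ramp setting (default is 10 seconds)
tmsh list ltm pool /Common/pool-ambari_8441 slow-ramp-time

# lab test only: disable slow ramp so a returning member is
# immediately eligible for its full share of new connections
tmsh modify ltm pool /Common/pool-ambari_8441 slow-ramp-time 0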
nPath has nothing to do with this. I noted that an nPath config on the F5 would allow active traffic to shift between pool members, but also noted that this is nonsensical unless the pool members are L3 gateway/router type devices (something I find unlikely given the provided config).
I can only think of a few possible causes for the described behavior: active traffic, a form of persistence, slow ramp, or perhaps even a bug.
Persistence is stated as a non-issue (not in use, and my cursory glance at the config agrees). Slow ramp may be an issue (if you didn't wait 10s), but the bug involving slow ramp + PGA doesn't seem to fit (see the final requirement in the SOL). I don't know the environment, what you're trying to pass, or your level of expertise, so I'm tossing out more-or-less random ideas.
FWIW: A traffic capture during the 3rd case scenario would answer a lot of questions and garner significant understanding.
If I were trying to figure this out, I'd use a traffic capture command similar to the following to get further insight:
    tcpdump -nnei0.0:p -c1000 host 10.0.22.6 and tcp port 8441
Assuming a more modern TMOS version, :p should adequately fetch the related SNAT'd server-side connections for the client-side virtual traffic matched by the filter.
In your lab repro, stop the primary pool member, start the tcpdump on the active F5, start the client test traffic, restore service to the primary pool member, wait ~15s (slow ramp + 5s), then stop the capture.
Good luck!
If you choose to post the results of the traffic capture to this thread, I highly suggest scrubbing the output to meet your security standards (as I'm sure you've already done with the provided configuration). I personally don't see a problem with posting this sort of thing, but I'm not your security policy administrator. ;)
If you do want to scrub, I suggest saving the txt output capture to a file, and then running a sed line similar to: sed 's/[mac1]/F5_server_MAC/g;s/[mac2]/primary_pool_MAC/g;s/[ip1]/primary_pool_ip/g' capture.txt
Where "[mac1]" is the literal server-side MAC address of the F5, and so on.
Beyond that, if my tcpdump or sed command are incorrect, you have my apologies. I didn't test them.
- james_lee_31100 (Nimbostratus):
No luck...
I am thinking of going this way instead; any sharp eyes that could help spot issues with this iRule?
when CLIENT_ACCEPTED {
    # per-connection flag tracking whether the primary pool was down
    set primary_down 0
    TCP::collect 15
}
when CLIENT_DATA {
    if { [active_members pool-ambari_8441] < 1 } {
        pool pool-ambari-standby_8441
        set primary_down 1
    } elseif { [active_members pool-ambari_8441] == 1 && $primary_down == 1 } {
        # primary is back: select the primary pool again
        pool pool-ambari_8441
        set primary_down 0
    } else {
        pool pool-ambari_8441
    }
    TCP::release
}
- nitass (Employee):
"1) When node 10.0.21.201 is up, it always takes the traffic."
"I tested this: it works for cases 1 and 2, but not for case 3."
I might be lost, but isn't that how priority group activation works? For case 3, as long as it is a new connection, it should be forwarded to the higher-priority pool member, i.e. 10.0.21.201.
Or do you want to forward existing connections to the higher-priority pool member when it comes back online?
- james_lee_31100 (Nimbostratus):
You are right, Nitass. We need to forward existing connections to the higher-priority pool member when it comes back online.