I am quite lost as to how Gateway Failsafe (GF) can be used to monitor the default gateway in a cluster.
The default gateway is an object that is synced between nodes, so I can't see a way to set a different default gateway on each node.
GF is based on monitoring two different gateways (or other objects). Each device has to use a completely separate configuration: a separate pool with separate pool members.
That makes sense, because in the case of a failover triggered by GF, the new Active device should have its own pool UP. If it were the same gateway device, it would most probably be Down on the new Active as well, just as it was on the old one.
Sure, both devices could have different network paths to the same gateway, but that is probably the less frequent setup.
Maybe it is the name of this feature that suggested to me that it can be used to monitor the default gateway, when in fact it can't at all?
The final question is whether there is a way to have a separate default gateway per node in a cluster - I mean using the Routes configuration, not tricks with VSs. And can GF be used to actually monitor Internet access from the nodes, via separate gateways?
I modified the pool (per the article), and when tab-completing, the options offered are the device names of the devices in the HA pair. This seems to be how the system differentiates them:
modify ltm pool ssh_pool gateway-failsafe-device
Configuration Items:
ltm2.test.net ltm1.test.net
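Putting that together, the per-device assignment could be sketched like this (pool names are assumptions, only the device names above are from the tab completion):

```
# Config is synced across the HA pair, but each gateway pool
# is tied to one device via gateway-failsafe-device.
# Pool names are hypothetical.
modify ltm pool gateway_pool1 gateway-failsafe-device ltm1.test.net
modify ltm pool gateway_pool2 gateway-failsafe-device ltm2.test.net
```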
I can do everything described in the article, and it works: if my gateway pool goes down, a failover occurs.
That is not the issue here. I am wondering how to define separate default gateways per node using Routes.
If I define a default gateway on one device, after a sync it will be propagated to the other, so both will have the same one.
Now, if I define a wildcard VS (Forwarding IP) for internal hosts, this VS will use the same gateway no matter which node is active.
But my goal is to be able to use a different default gateway depending on which node is Active - assume that each node is connected to the external network using a different subnet. So one goes out via VLAN1 -> GW1 -> Internet and the second via VLAN2 -> GW2 -> Internet, based purely on the routing table.
That seems to be impossible - or am I wrong?
I know that I can monitor a separate GW on each device, but those GWs cannot be the default gateways, because only one entry in Routes defines the default gateway, and it is shared by both nodes.
This is an example using route domains based on this description (I believe):
"So one is going out VLAN1 -> GW1 -> Internet second VLAN2 -> GW2 -> Internet based just on routing table."
Two route domains:
net route-domain 0 {
id 0
vlans {
internal
}
}
net route-domain rd2 {
id 2
vlans {
external
}
}
Two pools based on the route domains:
ltm pool pool_0 {
members {
10.12.23.27:any {
address 10.12.23.27
session monitor-enabled
state up
}
}
monitor gateway_icmp
}
ltm pool pool_2 {
members {
10.11.23.27%2:any {
address 10.11.23.27%2
session monitor-enabled
state up
}
}
monitor gateway_icmp
}
Now the route table, showing two default gateways. Any packet in VLAN internal will route via pool_0 and any packet in VLAN external will go via pool_2:
show net route
----------------------------------------------------------------------------------------
Net::Routes
Name Destination Type NextHop Origin
----------------------------------------------------------------------------------------
gateway_0 default pool /Common/pool_0 static 1500
gateway_2 default%2 pool /Common/pool_2 static 1500
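The two routes above could be created along these lines (a sketch; the route names and pools match the example output):

```
# Default route in route domain 0 (VLAN internal) via pool_0
create net route gateway_0 network default pool /Common/pool_0
# Default route in route domain 2 (VLAN external) via pool_2
create net route gateway_2 network default%2 pool /Common/pool_2
```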
There are so many options for managing pools and routes that I would like to think this won't be impossible. It may just take a few tries.
I am not sure route domains are exactly what solves the case here. Maybe it will be easier with a diagram. This is based on my understanding of the description from https://support.f5.com/csp/article/K15367
Especially:
The first step of configuring the gateway fail-safe feature is to create one gateway pool for each BIG-IP system in the failover pair. Each gateway pool must consist of the upstream gateway(s) the system is connected to. For example, if bigip1 is connected to upstream gateway 10.10.1.1 and bigip2 is connected to upstream gateway 10.20.2.2, then you must configure two gateway pools; gateway_pool1 consists of pool member 10.10.1.1:any and gateway_pool2 consists of pool member 10.20.2.2:any.
Configuration:
For simplicity there is no redundancy, and tagged VLANs are used to conserve interfaces.
Both devices have VLAN ext defined on interface 1.1, but BIG-IP1 with tag 100 and BIG-IP2 with tag 200.
There is no floating IP configured for VLAN ext - that would be difficult, as each device uses a different subnet for its VLAN ext Self IP.
The VSs are all defined in a separate subnet - different from both devices' Self IP subnets.
R1 has a route to 10.30.1.0/24 set to the BIG-IP1 Self IP 10.10.1.10.
R2 has a route to 10.30.1.0/24 set to the BIG-IP2 Self IP 10.20.2.20.
gateway_pool1, attached to BIG-IP1, is set with pool member 10.10.1.1:any.
gateway_pool2, attached to BIG-IP2, is set with pool member 10.20.2.2:any.
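Following the K15367 excerpt, the two gateway pools for this topology could be sketched as below (the monitor choice is an assumption, and "BIG-IP1"/"BIG-IP2" stand in for the actual device object names in the device group):

```
# One gateway pool per device, each containing only that device's upstream gateway
create ltm pool gateway_pool1 members add { 10.10.1.1:any } monitor gateway_icmp
create ltm pool gateway_pool2 members add { 10.20.2.2:any } monitor gateway_icmp
# Tie each pool to its device so GF fails over only when the local gateway is down
modify ltm pool gateway_pool1 gateway-failsafe-device BIG-IP1
modify ltm pool gateway_pool2 gateway-failsafe-device BIG-IP2
```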
I can't see any problem with traffic coming from the Internet (assuming the external configuration directs traffic to R1 when BIG-IP1 is ACTIVE and to R2 when BIG-IP2 is ACTIVE).
R1 will route it to the BIG-IP1 Self IP, then internally it will be directed to the given VS in the 10.30.1.0/24 subnet, then to the node via VLAN int.
Returning traffic (assuming Auto Last Hop is enabled) will ignore any routing entries and be directed back to R1's MAC.
In the case of an R1 failure discovered by BIG-IP1, a failover to BIG-IP2 will be performed. Sure, all TCP sessions will be terminated, but...
Now the external mechanism will direct Internet traffic to R2 and then to BIG-IP2.
Great for incoming traffic, but what about traffic sourced from VLAN int?
Assuming some kind of wildcard Forwarding IP VS enabled on VLAN int, we have a problem: if the default gateway is configured as 10.10.1.1, it will be the same on both devices (it will be synced).
So it will work when BIG-IP1 is ACTIVE, but not when BIG-IP2 is ACTIVE.
I can imagine a setup where we have a def_gateway_pool containing:
10.10.1.1:any
10.20.2.2:any
and a wildcard Performance (Layer 4) VS on VLAN int.
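That idea could be sketched roughly as follows (the VS name, monitor, and exact options are assumptions; the intent is that each member is only marked UP by the device that can actually reach it):

```
# Pool with both upstream gateways; local monitoring should mark
# only the reachable one UP on each device
create ltm pool def_gateway_pool members add { 10.10.1.1:any 10.20.2.2:any } monitor gateway_icmp
# Wildcard Performance (Layer 4) VS on VLAN int forwarding via the pool,
# with no address or port translation
create ltm virtual wildcard_fwd destination 0.0.0.0:any mask any ip-protocol any \
    profiles add { fastL4 } vlans add { int } vlans-enabled \
    pool def_gateway_pool translate-address disabled translate-port disabled
```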
Assuming that when BIG-IP1 is ACTIVE only 10.10.1.1 is marked UP, traffic should reach the Internet.
When BIG-IP2 is ACTIVE, only 10.20.2.2 will be UP, so again traffic should reach the Internet.
But it seems a bit flawed concerning communication sourced from the BIG-IP itself, like update checking, NTP, IP Intelligence updates, etc.
BIG-IP1 should not have a problem - its default gateway is OK - but BIG-IP2 will: its default gateway will not work.
So is there another way to solve this? Am I missing or misunderstanding something?