Forum Discussion
LTM and ASM configuration synchronization without Sync-Failover enabled
Hi everybody,
My question concerns synchronizing the LTM and ASM configuration between two VE appliances. We have two VE VMs in two geographically dispersed DC environments (one VM per DC). The software versions are identical - Version 15.0.0 build 0.0.39 - and L2 connectivity between the VMs is already in place. The goal is to keep the appliance configuration synchronized between them (vservers, nodes, pools, ASM policies, bot defense configuration, etc.).
Unfortunately, due to routing design constraints we cannot deploy Sync-Failover device groups, so each appliance operates in standalone mode. Also, as far as I understand, at least vserver, node and pool configuration is not synchronized when using Sync-Only device groups. I am wondering whether it is possible to customize the Sync-Only mechanism (or achieve this by other means) and get something like manual, incremental LTM/ASM configuration synchronization, but without the failover feature enabled. I have found information that it is possible to customize the UCS configuration backup and include additional files/folders in the archive, but is it possible to do something similar with configuration synchronization?
Thank you!
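(For reference, a Sync-Only device group between two units that already trust each other is normally created along these lines. This is a minimal tmsh sketch, not taken from the thread; the group and device names are placeholders.)
    # Prerequisite: device trust is already established between the two units
    # (Device Management > Device Trust, or the equivalent tmsh/cm trust setup).
    tmsh create cm device-group dc-sync-group devices add { bigip-dc1.example.com bigip-dc2.example.com } type sync-only auto-sync enabled
    tmsh save sys config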
- Simon_Blakely (Employee)
Can you explain how your configuration is currently working - is external traffic distributed between the two DCs, so that each BigIP carries part of the traffic?
Are the devices being synced so that as a fall-back, all the traffic can be passed through one working DC?
In general, it does not make sense to build LTM object synchronization groups that are not also failover groups. LTM synchronization includes things like IP addresses and VLANs, which would normally be expected to differ between failover zones (i.e. DCs), so syncing them across isn't really useful.
Such restrictions are not required for things like ASM, APM and GTM, which is why they have separate sync groups, and that config can be synced across disparate data centers.
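(To illustrate that point: ASM policy synchronization can be attached to a Sync-Only device group. A hedged tmsh sketch, assuming device trust and a Sync-Only group named dc-sync-group already exist; the group name is a placeholder.)
    # Allow ASM policies to be synchronized over the Sync-Only group
    tmsh modify cm device-group dc-sync-group asm-sync enabled
    # Push the local configuration to the other member(s) of the group
    tmsh run cm config-sync to-group dc-sync-group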
- Oleksandr_Malo1 (Altostratus)
Hi Simon,
Thank you for the reply.
External traffic is distributed based on destination networks with an approximate 50/50 ratio, and the DCs operate in active/active mode (a fairly standard active/active scenario, but failover is based on dynamic routing - BGP).
This means we have two lists of vserver subnets - range "A" and range "B". Range "A" subnets are active in DC1 and announced via BGP towards the upstream peers. At the same time, range "A" subnets are also announced via BGP at DC2, but with a worse priority. Thus, in case of a network issue in DC1, range "A" subnets remain reachable via DC2. Accordingly, range "B" subnets are announced via both DCs, but at DC2 the BGP priority for this range is better than at DC1.
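(In ZebOS/imish terms, that kind of preference split is often expressed roughly as below. This is a hedged sketch only; the ASNs, prefixes and neighbor addresses are placeholders, not the poster's actual values.)
    ! On the DC2 unit: advertise range "A" with a longer AS path so upstreams prefer DC1
    ip prefix-list RANGE-A seq 5 permit 198.51.100.0/24
    route-map TO-UPSTREAM permit 10
     match ip address prefix-list RANGE-A
     set as-path prepend 65020 65020
    route-map TO-UPSTREAM permit 20
    router bgp 65020
     neighbor 203.0.113.1 remote-as 65000
     network 198.51.100.0/24
     neighbor 203.0.113.1 route-map TO-UPSTREAM out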
The list of vserver IPs is identical for each DC. The target (aka real) servers are reachable from both the DC1 and DC2 F5 appliances (an extended/stretched VLAN is used between the DCs), hence the list of real VM IPs and hostnames (as well as pools) is also the same on both F5s.
We do not require session mirroring, and the BGP convergence delay is acceptable for us if a network failure occurs and all traffic goes to either DC1 or DC2. The main reason I asked my question is that it makes sense to me to perform the LTM (node, pool, vserver) and ASM (policy) setup just once, on either the DC1 or the DC2 node, and simply replicate the configuration to the neighbor node without doing the same job twice.
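(One way to avoid doing that work twice, even without any device group, is to save the relevant config from one unit and merge it on the other. A hedged sketch; the file name is illustrative, the merged file would need to be edited down to only the objects you actually want to copy, and ASM policies generally travel separately (XML export/import or ASM sync), so this covers the LTM side only.)
    # On the source unit: save the running config as a single configuration file (SCF)
    tmsh save sys config file dc1-objects.scf
    # Copy /var/local/scf/dc1-objects.scf to the peer (e.g. scp), trim it to the wanted
    # ltm node/pool/virtual stanzas, then on the peer:
    tmsh load sys config merge file /var/local/scf/dc1-objects.scf verify   # dry run
    tmsh load sys config merge file /var/local/scf/dc1-objects.scf
    tmsh save sys config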
I hope my explanation was straightforward enough :)
- Simon_Blakely (Employee)
Great explanation.
I'd recommend building an Active/Active Sync-Failover group with one traffic group per DC.
Manual Chapter: Creating an Active-Active Configuration using the Configuration Utility
Split the floating IPs and VIPs into traffic groups using the relevant subnets, and optionally enable BGP Route advertisement.
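(A hedged tmsh sketch of the per-DC traffic-group idea; the traffic-group names and the 192.0.2.x virtual addresses are placeholders, not values from this thread.)
    # One floating traffic group per DC
    tmsh create cm traffic-group traffic-group-dc1
    tmsh create cm traffic-group traffic-group-dc2
    # Pin each virtual address (range "A" vs range "B") to the matching traffic group
    tmsh modify ltm virtual-address 192.0.2.10 traffic-group traffic-group-dc1
    tmsh modify ltm virtual-address 192.0.2.138 traffic-group traffic-group-dc2
    # Optionally advertise a VIP route only where its traffic group is active
    tmsh modify ltm virtual-address 192.0.2.10 route-advertisement selective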