Forum Discussion
How to add Active Rules on AFM modules in batch using Command Line
- Oct 30, 2022
Hi davidy2001,
You must enter "default" (current password) at the second red arrow. Then it will ask you to enter the new password.
Each rule has to have a name, for example:
create security firewall policy Policy_val_vs_To_vlan_inweb rules add { NewRule1 { action accept destination {....
This will be helpful, https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-tmsh-reference-11-6-0.pdf
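If you need to add a whole batch of rules, one way (just a rough sketch - the policy name, the CSV layout and the rule properties here are only examples, and exact property names vary a bit between TMOS versions, so double-check them against the tmsh reference above) is to keep the rules in a file and loop over it from the bash shell, calling tmsh once per rule:

#!/bin/bash
# rules.csv layout assumed for this example: name,action,destination-address
POLICY="Policy_val_vs_To_vlan_inweb"

while IFS=',' read -r NAME ACTION DEST; do
    # append one rule to the existing policy per CSV line
    tmsh modify security firewall policy "$POLICY" rules add \
        { "$NAME" { action "$ACTION" destination { addresses add { "$DEST" } } } }
done < rules.csv

# write the changes to the stored configuration
tmsh save sys config

Depending on the version you may also want an ordering keyword such as place-after inside each rule, and you can equally put several rules inside a single rules add { ... } block instead of looping.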
- Simon_Blakely (Employee), Feb 02, 2020
Can you explain how your configuration is currently working - is external traffic distributed between the two DCs, so that each BigIP carries part of the traffic?
Are the devices being synced so that as a fall-back, all the traffic can be passed through one working DC?
In general, it does not make sense to build LTM object synchronization groups that are not also failover groups. LTM synchronization includes things like IP addresses and VLANs, which would normally be expected to be different in different failover zones (i.e. DCs), so syncing them across isn't really useful.
Such restrictions are not required for things like ASM, APM and GTM, which is why they have separate sync groups, and config can be synced across disparate data centers.
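For ASM, for instance, that separate sync group is typically just a sync-only device group, which can be stretched across DCs with no failover objects attached. Roughly, with placeholder device names:

tmsh create cm device-group dg-asm-sync devices add { bigip-dc1.example.com bigip-dc2.example.com } type sync-only auto-sync enabled

while a sync-failover group carries the floating objects along with it.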
- Oleksandr_Malo1 (Altostratus), Feb 03, 2020
Hi Simon,
Thank you for the reply.
External traffic is distributed based on destination networks with an approximate ratio of 50/50, and the DCs operate in active/active mode (a fairly usual active/active scenario, except that failover is based on dynamic routing - BGP).
It means that we have a list of vserver subnets - ranges "A" and "B". Range "A" subnets are active in DC1 and announced by BGP towards upstream peers. At the same time, range "A" subnets are also announced, with a worse priority, via BGP at DC2. Thus, in case of DC1 network issues, range "A" subnets will still be reachable via DC2. Accordingly, we have range "B" subnets that are announced via both DCs, but at DC2 the BGP priority for this range is better than at DC1.
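Just to illustrate the idea, the DC1 side boils down to something like this in the ZebOS/imish routing config on the BIG-IP (made-up ASNs, prefixes and neighbor IPs; I'm showing AS-path prepend here, but the same priority effect can be achieved with local-preference or MED):

router bgp 65010
 neighbor 192.0.2.1 remote-as 65000
 ! VIP subnets: range "A" and range "B"
 network 198.51.100.0/24
 network 203.0.113.0/24
 neighbor 192.0.2.1 route-map DC1-OUT out
!
ip prefix-list RANGE-B seq 10 permit 203.0.113.0/24
!
! range "B" gets prepended at DC1, so upstreams prefer the DC2 path for it
route-map DC1-OUT permit 10
 match ip address prefix-list RANGE-B
 set as-path prepend 65010 65010
!
route-map DC1-OUT permit 20

DC2 mirrors this with range "A" prepended instead.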
The list of vserver IPs for each DC is identical. Target (aka real) servers are reachable from both the DC1 and DC2 F5 appliances (extended or stretched VLAN capability is utilized between the DCs). Hence the list of real VM IPs and hostnames (as well as pools) is also the same for both F5s.
We do not require session mirroring, and the BGP convergence delay is fine for us if a network failure occurs and all traffic goes either to DC1 or DC2. The main reason I asked my question was that, to me, it makes sense to perform the LTM (node, pool, vserver) and ASM (policy) setup just once on either the DC1 or DC2 node and simply replicate the configuration to the neighbor node without doing the same job twice.
I hope my explanation was straightforward enough :)
- Simon_Blakely (Employee), Feb 03, 2020
Great explanation.
I'd recommend building an Active/Active Sync-Failover group with one traffic group per DC.
Manual Chapter: Creating an Active-Active Configuration using the Configuration Utility
Split the floating IPs and VIPs into traffic groups using the relevant subnets, and optionally enable BGP route advertisement.
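In tmsh terms it ends up looking something like this (device names, traffic-group names and addresses are just placeholders; the manual chapter above walks through the same steps in the GUI):

# sync-failover device group containing both units
tmsh create cm device-group dg-failover devices add { bigip-dc1.example.com bigip-dc2.example.com } type sync-failover auto-sync enabled

# one floating traffic group per DC
tmsh create cm traffic-group tg-dc1
tmsh create cm traffic-group tg-dc2

# pin each virtual-address to the traffic group of its "home" DC and, optionally,
# advertise the route from whichever unit currently holds that traffic group
tmsh modify ltm virtual-address 198.51.100.10 traffic-group tg-dc1 route-advertisement enabled
tmsh modify ltm virtual-address 203.0.113.10 traffic-group tg-dc2 route-advertisement enabled

# push the configuration to the peer
tmsh run cm config-sync to-group dg-failover

The route-advertisement values vary between TMOS versions (enabled/disabled on older code, selective/always/any on newer), so check what your version offers.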