dsc
16 Topics

Issues with incremental config sync cache || Unable to do incremental sync, reverting to full load for device group
I received an error similar to the one below:

notice mcpd[2789]: 0107168e:5: Unable to do incremental sync, reverting to full load for device group /Common/syncgroup1 device%cmi-mcpd peer-/Common/ltm1.example.com from commit id { 4 6390393316259868817 /Common/ltm1.example.com } to commit id { 3 6391877370007482801 /Common/ltm2.example.com }

Here, the changes pertaining to commit id 3 got executed on the peer device. An undesired change (a disabled pool member became enabled) caused impact to the business. The recommended action says to reduce the size and frequency of the configuration changes made to the BIG-IP system, and that the issue may also be mitigated by increasing the size of the incremental ConfigSync cache. However, the explanation below says that if the incremental sync cache size exceeds 1024, the BIG-IP performs a full sync, which is not happening in my case:

"In the Maximum Incremental Sync Size (KB) field, retain the default value of 1024, or type a different value. This value specifies the total size of configuration changes that can reside in the incremental sync cache. If the total size of the configuration changes in the cache exceeds the specified value, the BIG-IP system performs a full sync whenever the next config sync operation occurs."

Can anyone help me understand the concerns below?

Q. Why does the full sync not happen when the incremental sync cache size goes beyond 1024? Instead, the changes specific to commit id 3 were applied, which caused an impact to the traffic.

I also checked the command below; it shows multiple commit ids and the related configuration:

show cm device-group <sync_group> incremental-config-sync-cache

Q. Is there a procedure to keep only the most recent commit id and flush the old ones, so the cache doesn't go beyond the default 1024 KB?

Q. Can we modify the cache value to the suggested 2048, and will there be any impact from it? And will it require increasing again in the future if the cache fills up once more?
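One rough way to watch this proactively is to parse the output of the show command above and flag when the cached commits approach the configured maximum. The sketch below is illustrative only: the sample output layout and the `size ...KB` field format are assumptions, not the exact tmsh output, so the regex would need adjusting against a real device.

```python
import re

# Hypothetical sample of `show cm device-group <sync_group> incremental-config-sync-cache`
# output. NOTE: field names and layout here are assumptions for illustration;
# the real tmsh output format differs by TMOS version.
SAMPLE_OUTPUT = """\
commit-id 3  size 412.5KB
commit-id 4  size 380.0KB
"""

def cached_kb(tmsh_output):
    """Sum the per-commit cache sizes (in KB) found in the output."""
    return sum(float(s) for s in re.findall(r"size\s+([\d.]+)KB", tmsh_output))

def full_sync_risk(tmsh_output, max_kb=1024, warn_ratio=0.8):
    """Flag when the cached commits approach incremental-config-sync-size-max,
    i.e. when the next ConfigSync is likely to fall back to a full load."""
    return cached_kb(tmsh_output) >= max_kb * warn_ratio

print(cached_kb(SAMPLE_OUTPUT))       # 792.5
print(full_sync_risk(SAMPLE_OUTPUT))  # False (792.5 < 819.2)
```

Run periodically (e.g. from cron, feeding it the tmsh command's captured output), it could alert before the cache overflows rather than after a surprise full load.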
For reference, the tmsh help for the relevant property:

modify cm device-group <sync_group> incremental-config-sync-size-max ?
Specifies the maximum size (in KB) to devote to incremental config sync cached transactions. Range: 128-10240. Default: 1024.

Q. Is there a way we can monitor this proactively (leaving aside the preventive measures of reducing the size and frequency of config changes)?

Hope I will get answers to the above concerns. Thank you, DevCentral Community, in advance!

GTM and cluster
Hi, I wonder what the relation is between DSC and a GTM sync group. Let's assume I have a cluster of two BIG-IPs, both with LTM and GTM, and I create a DSC cluster (Active-Passive or Active-Active). How does this relate to GTM? I assume that Sync-Failover will sync the DNS listener objects used by GTM, but not the GTM config itself - am I right?

If so, how should the sync group be configured when GTM runs on devices in a DSC? Should I add the second device as a BIG-IP-type server object or not? Assuming that all server objects only reference Generic Host type objects, is adding the other BIG-IP to the sync group necessary at all?

What if I need a separate DNS listener IP that is active on both devices - will creating two traffic groups work? That is, one listener assigned to traffic-group-1, active by default on DeviceA, and a second listener assigned to traffic-group-2, active by default on DeviceB - will that work? Will both listeners serve the same Wide IPs, and will the sync group synchronize all GSLB settings between the two devices?

Piotr

Gateway Failsafe and default gateways
Hi, I am quite lost concerning how Gateway Failsafe (GF) can be used to monitor the default gateway in a cluster. The default gateway object is synced between nodes, so I can't see a way to set two different default gateways on the nodes. GF is based on monitoring two different gateways (or other objects): each device has to use a completely separate pool with separate pool members, like:

DeviceA - gf1_pool - 10.10.10.1:0
DeviceB - gf2_pool - 10.10.11.1:0

That makes sense, because after a failover triggered by GF, the new Active unit should have its own pool UP; if both monitored the same device, it would most probably be Down on the new Active as well. Sure, both devices can have different network paths to the same device, but that is probably less common. Maybe it is just the name of this feature that suggested to me it could be used to monitor the default gateway, when in fact it cannot?

The final questions: is there a way to have a separate default gateway per node in a cluster - I mean using the routes config, not magic with VSs? And can GF actually be used to monitor Internet access from the nodes, via separate gateways?

Piotr

Cannot get vip secondary IPs to transfer over to other virtual appliance in AWS.
Afternoon, I have set up clustered VE BIG-IP appliances that are successfully synced, but the issue concerns the floating IPs for VIP gateways and virtual servers: as I understand it, they have to transfer over to the other device, which takes around 10 seconds because it relies on AWS API calls to move the IPs. I've contacted AWS support to check whether they can see any API calls, and they have confirmed that the devices are not making any. After exhausting Google, AWS, and F5 support, I'm here to ask my questions:

1) How do I even begin to troubleshoot API calls from the F5? Where do I go? Which logs do I check? GUI or CLI?
2) How can I test the failover properly so that it uses the API call?
3) Why would the device not make an API call even though the permissions on the access key grant admin rights?
4) Is it possible to manually trigger this call? Where?

Someone please get me started, as I'm running out of options. Thanks in advance.

Device Service Clustering (vCMP guests not failing over when physical vCMP host interface fails)
Hello, today we have the following physical setup for two 5250v F5s that we are using as vCMP hosts, with 5 vCMP guests on each of them. These vCMP guests are configured in HA pairs using Device Service Clustering with load-aware failover; they are homogeneous. Last week we had failover tests in the data center: we manually brought down, from the switch side, the physical interfaces that are members of Port-Channel107 shown in the picture, and the failover did not occur. While on the vCMP host we noticed the port status of the physical interface change to down, this was not passed on to the vCMP guests, and I am assuming that is why the failover did not happen. We did see all the VIPs, nodes, and pools going down on the active unit, but the failover was not triggered, which caused all the applications to fail.

Questions:
- Is it expected behavior that the status of the physical interfaces on the vCMP host is not passed to the vCMP guests?
- Is there something missing in my current configuration that is causing these vCMP guests not to fail over when the physical interfaces on the vCMP host fail?
- Do I need to use HA groups to monitor the trunks, instead of load-aware, for the failover to be triggered?

Remote Logging Configuration in DSC
Hi, I'm encountering an issue while configuring remote logging in a DSC. While I can optionally set the local IP, I cannot define which interface to use for remote logging. When no local IP is configured, the logs are sent according to the TMOS routing table, but I need to send the logs through the management interface instead of the traffic interfaces.

I can reach my goal by configuring the local IP as the management interface's IP. The problem is that this configuration needs to be synchronized afterwards. When I then synchronize the configuration, the other node's configuration does not get its own management IP set; instead, no local IP is configured anymore, and the traffic interfaces are used to send out syslog traffic.

Is there any way to configure remote logging in a DSC without synchronizing this part of the configuration, or is there a way to change the routing of syslog-ng to use the management interface by default? I have seen very many users modifying the syslog-ng configuration itself instead of using the built-in configuration. Unfortunately, the documentation only says to set the local IP to a non-floating self IP in an HA configuration (https://support.f5.com/csp/article/K13080):

Note: For BIG-IP systems in a high availability (HA) configuration, the non-floating self IP address is recommended if using a Traffic Management Microkernel (TMM) based IP address.

From my understanding and experience, this would end in the same issue, because the non-floating self IP is not synchronized, while the remote logging configuration needs to be synchronized. I'm very thankful for every hint.

Greets, svs

Complex HA scenario with 3 Big IPs
Hi all, I am trying to implement a complex scenario with 3 LTMs and 3 partitions:

- For the first partition (Partition A), I want to be able to fail over to all three LTMs (LTM_1, LTM_2, and LTM_3).
- For the second partition (Partition B), I want to fail over only to LTM_1 and LTM_2, and NEVER to LTM_3.
- The third partition (Partition C) should be served only from LTM_3, and NEVER from LTM_1 or LTM_2.

Since Partition A must be served from all units, I created a Sync-Failover device group with all three in it. The only workaround I have found so far is to mess with VLAN IDs. So, for case 2, what I did is alter the VLAN IDs on LTM_3: if LTM_1 and LTM_2 fail, LTM_3 becomes active, but the VIPs reside on an isolated VLAN, preventing them from being advertised to the rest of the network. Any ideas?

vCMP guests logs "Skipping heartbeat check on peer slot X: Slot is DOWN."
We have 2 VIPRION C2400 chassis with 2 B2250 blades in each. On these we have vCMP guests in several active/standby clusters (DSC) between the 2 chassis: Device-group-A: guest1/chassis1, guest1/chassis2; Device-group-B: guest2/chassis1, guest2/chassis2; and so on. Recently we found that some (not all) of these clusters (DSC) are reporting/logging: "info clusterd: Skipping heartbeat check on peer slot X: Slot is DOWN", where X is every slot except the one the vCMP guest itself is running on. Two slots (3-4) in each chassis are empty, so yes, one could say those are "DOWN". But the second installed blade (slot 1 or slot 2) is not down - it is up and running with vCMP guests - so we don't understand why it is reported as down. Neither do we understand why we have this behavior on some of our clusters (DSC) but not all. We've tried to compare them, but haven't found differences that can explain it. Do any of the experts in here have a clue what this can be?

PS! BIG-IP version is 12.1.2 HF1 on both chassis (hypervisor) and all vCMP guests.

Scaling Out a VIP across the cluster video question
Hi, there is a very nice video about spanned VIPs and ECMP by Alex Applebaum (ScaleN Part 5: Scaling Out a VIP across the cluster). I think I figured out the packet flow, but I would like to confirm that I am not wrong. It seems the example config uses a one-leg setup, and the packet flow is like this:

client (10.128.1.1 -> 10.0.0.1) -> router with ECMP (10.128.255.251, 10.1.255.251) -> BIG-IP 1/2/3 external self IP (10.1.50.201/202/203) -> VS (10.0.0.1) -> SNAT (10.0.0.11/12/13) - src IP 10.0.0.11/12/13, dst IP 10.100.30.3 -> router with ECMP (10.1.255.251, 10.100.30.251) -> server (10.100.30.3)

The above assumes that 10.1.255.251 is the default route on the BIG-IP devices. The returning packet follows the same path, with the BIG-IP doing the proper address translation. Is the above correct?

I am no expert in the ECMP area, so I wonder how TCP connection persistence can be preserved with such a config. I assume the only way to not break a given TCP session is to always route packets with the same srcIP:port-dstIP:port via the same BIG-IP - am I right? If so, can this be done on the ECMP routers?

Then I wonder how to provide persistence at the L7 level. There is one VIP defined that accepts traffic on all BIG-IPs in the cluster, but the instance of the VIP on each BIG-IP is separate, so each creates its own persistence records (PR), and this info is not shared between them. So I assume that persistence based on persistence records will not work. Example:

A first TCP session from IP 10.128.1.1 is directed to BIG-IP1; BIG-IP1 chooses a member and creates a PR. A second TCP session from the same IP is directed to BIG-IP3; BIG-IP3 has no PR for this src IP, so it chooses a member (which could be the same as on BIG-IP1, but could be another) and creates its own PR. Persistence for the client is broken - except, of course, if the ECMP router always directs traffic from a given IP to the same BIG-IP (so ports are not used for making the decision), but that could really mess up load balancing among the BIG-IPs in the cluster.

However, persistence based on a cookie should work.
No matter which BIG-IP receives an HTTP request carrying the bigip cookie, it will direct it to the member specified in the cookie. But that is a solution that works only for HTTP. Is there anything for protocols other than HTTP? Could CARP work here - I guess it is the only one not using PRs?

Piotr
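To illustrate why a CARP-style scheme can give persistence without shared persistence records, here is a rough sketch of highest-random-weight hashing: every device computes the same per-(client, member) score independently, so all BIG-IPs agree on the member with no state to sync. The member IPs and the SHA-256 scoring below are illustrative assumptions, not the actual TMOS CARP implementation.

```python
import hashlib

# Hypothetical pool members (placeholders, matching the 10.100.30.0/24 server subnet above)
MEMBERS = ["10.100.30.3", "10.100.30.4", "10.100.30.5"]

def carp_pick(client_ip, members=MEMBERS):
    """Highest-random-weight selection: score each (client, member) pair with a
    hash and pick the member with the highest score. Any device running this
    function picks the same member for a given client, with no shared records."""
    def score(member):
        return hashlib.sha256(f"{client_ip}|{member}".encode()).hexdigest()
    return max(members, key=score)

# Two independent "devices" running the same function agree on the member:
print(carp_pick("10.128.1.1") == carp_pick("10.128.1.1"))  # True
```

A side effect worth noting: because selection is purely hash-based, removing one member only remaps the clients that hashed to it, which is the usual argument for CARP-style hashing over modulo hashing.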