dsc
Issues with incremental config sync cache || Unable to do incremental sync, reverting to full load for device group
I received an error similar to the one below: notice mcpd[2789]: 0107168e:5: Unable to do incremental sync, reverting to full load for device group /Common/syncgroup1 device%cmi-mcpd peer-/Common/ltm1.example.com from commit id { 4 6390393316259868817 /Common/ltm1.example.com } to commit id { 3 6391877370007482801 /Common/ltm2.example.com }.

Here, the changes pertaining to commit id 3 got executed on the peer device. An undesired change (a disabled pool member was enabled) caused a business impact. The recommended action says to reduce the size and frequency of the configuration changes made to the BIG-IP system, and that you may also be able to mitigate this issue by increasing the size of the incremental ConfigSync cache. Yet the explanation below says that if the incremental sync cache size exceeds 1024 KB, the BIG-IP performs a full sync, which is not happening in my case: "In the Maximum Incremental Sync Size (KB) field, retain the default value of 1024, or type a different value. This value specifies the total size of configuration changes that can reside in the incremental sync cache. If the total size of the configuration changes in the cache exceeds the specified value, the BIG-IP system performs a full sync whenever the next config sync operation occurs."

Can anyone help me understand the concerns below?

Q. Why does the full sync not happen when the incremental sync cache size goes beyond 1024 KB? Instead, traffic was impacted by the configuration changes specific to commit id 3. I also checked the command show cm device-group <sync_group> incremental-config-sync-cache, and it shows multiple commit ids and their related configuration.

Q. Is there a procedure to keep only the most recent commit id and flush the old ones, so the cache doesn't grow beyond the default 1024 KB?

Q. Can we modify the cache value to the suggested 2048 KB, and will there be any impact from doing so? Will it need to be increased again in the future if the cache fills up again? The option is: modify cm device-group <sync_group> incremental-config-sync-size-max — Specifies the maximum size (in KB) to devote to incremental config sync cached transactions. Range: 128-10240. Default: 1024.

Q. Is there a way to monitor this proactively (leaving aside the preventive measures of reducing the size and frequency of config changes)?

Hope I will get answers to the above concerns. Thanks, DevCentral Community, in advance!
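As a rough tmsh sketch built only from the commands already quoted above (the device-group name sync_group1 is a placeholder), one way to inspect the cache, raise the limit, and then push a sync might look like this; whether a manual sync also flushes the older cached commit ids is worth confirming in a maintenance window:

    # Inspect cached incremental transactions and their commit ids
    tmsh show cm device-group sync_group1 incremental-config-sync-cache

    # Raise the cache limit from the default 1024 KB to 2048 KB (valid range 128-10240)
    tmsh modify cm device-group sync_group1 incremental-config-sync-size-max 2048

    # Push a sync to the group from the device whose configuration should win
    tmsh run cm config-sync to-group sync_group1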
Remote Logging Configuration in DSC

Hi, I'm encountering an issue while configuring remote logging in a DSC. While I can optionally set the local IP, I cannot define which interface to use for remote logging. When no local IP is configured, the logs are sent according to the TMOS routing table. I need to send the logs through the management interface instead of the traffic interfaces. I can reach my goal by configuring the local IP as the management interface address. The problem is that this configuration needs to be synchronized afterwards, and once I synchronize, the other node's configuration doesn't have its own management IP set; instead there is no local IP configured anymore, and the traffic interfaces are used to send out syslog traffic. Is there any way to configure remote logging in a DSC without synchronizing this part of the configuration, or is there a way to change the routing of syslog-ng to use the management interface by default? I have seen many users modifying the syslog-ng configuration itself instead of using the built-in configuration. Unfortunately the documentation only says to set the local IP to a non-floating self IP in an HA configuration (https://support.f5.com/csp/article/K13080): Note: For BIG-IP systems in a high availability (HA) configuration, the non-floating self IP address is recommended if using a Traffic Management Microkernel (TMM) based IP address. From my understanding and experience this would end in the same issue, because the non-floating self IP is not synchronized, but the remote logging configuration needs to be synchronized. I'm very thankful for every hint. Greets, svs
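For reference, a minimal tmsh sketch of the local-IP approach described above; the server address 192.0.2.10 and local address 10.10.10.5 are placeholders, and the exact property names should be checked against the version's tmsh reference. Whether the local-ip value survives a config sync unchanged on the peer is exactly the open question here:

    # Define a remote syslog destination and pin the source (local) IP
    tmsh modify sys syslog remote-servers add { remotesyslog1 { host 192.0.2.10 remote-port 514 local-ip 10.10.10.5 } }

    # Check what each unit ends up with after a sync
    tmsh list sys syslog remote-servers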
Device Service Clustering (vCMP guests not failing over when physical vCMP host interface fails)

Hello, today we have the following physical setup: two 5250v F5s that we use as vCMP hosts, with 5 vCMP guests on each of them. These vCMP guests are configured as HA pairs using Device Service Clustering with load-aware failover, and they are homogeneous. Last week we ran failover tests in the data center and manually brought down, from the switch side, the physical interfaces that are members of Port-Channel107 shown in the picture, and the failover did not occur. While on the vCMP host we noticed that the port status of the physical interface changed to down, this was not passed on to the vCMP guests, which I assume is why the failover did not happen. We did see all the VIPs, nodes, and pools go down on the active unit, but the failover was not triggered, which caused all the applications to fail.

Questions:
- Is it expected behavior not to pass the status of the physical interfaces on the vCMP host to the vCMP guests?
- Is there something missing in my current configuration that is causing these vCMP guests not to fail over when the physical interfaces on the vCMP host fail?
- Do I need to use HA groups to monitor the trunks, instead of load-aware, for the failover to be triggered?
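A minimal tmsh sketch of the trunk-monitoring HA group being asked about; the trunk and group names are placeholders, whether a trunk is even visible from inside an appliance vCMP guest (rather than only on the host) is part of what needs confirming, and older releases attach the HA group at the device level rather than to a traffic group:

    # HA group whose score drops when trunk_107 loses member links
    tmsh create sys ha-group ha_trunk_107 trunks add { trunk_107 { weight 10 threshold 2 } }

    # Attach the HA group to the floating traffic group so the healthier unit goes active
    tmsh modify cm traffic-group traffic-group-1 ha-group ha_trunk_107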
vCMP guests log "Skipping heartbeat check on peer slot X: Slot is DOWN."

We have 2 VIPRION C2400 chassis with 2 B2250 blades in each. On these we have vCMP guests in several active/standby clusters (DSC) between the 2 chassis: Device-group-A: guest1/chassis1, guest1/chassis2; Device-group-B: guest2/chassis1, guest2/chassis2; and so on. Recently we found that some (not all) of these clusters (DSC) are reporting/logging: "info clusterd: Skipping heartbeat check on peer slot X: Slot is DOWN", where X is every slot except the one the vCMP guest itself is running on. Two slots (3-4) in each chassis are empty, so yes, one could say those are "DOWN". But the second installed blade (slot 1 or slot 2) is not down - it is up and running with vCMP guests, so we don't understand why it is reported as down. Neither do we understand why we see this behavior on some of our clusters (DSC) but not all. We've tried to compare them, but haven't found differences that can explain it. Do any of the experts in here have a clue what this could be? PS! BIG-IP version is 12.1.2 HF1 on both chassis (hypervisor) and all vCMP guests.
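Purely as an illustration of where to start comparing the clusters that log this message against the ones that don't (the output fields vary by version), the slot view can be dumped from both the guest and the hypervisor:

    # On each vCMP guest: which cluster slots does the guest consider enabled/available?
    tmsh show sys cluster
    tmsh list sys cluster

    # On each VIPRION host: blade/slot state as the hypervisor sees it
    tmsh show sys cluster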
vCMP and VIPRION systems in HA

Hi experts, I just want to know if we can set up vCMP and VIPRION systems in HA. For example, I have 2 VIPRION chassis: one is vCMP-enabled (Chassis1) and one is running VIPRION LTM directly (Chassis2). Is it possible to configure an LTM guest on Chassis1 together with the VIPRION LTM on Chassis2 in HA? Ty
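For orientation only, the generic tmsh steps for joining any two BIG-IP instances into a sync-failover device group look roughly like this; the hostnames, address, and credentials are placeholders, and whether this particular guest/bare-metal combination is supported is the question being asked:

    # On the vCMP guest of Chassis1: add the peer to the device trust
    tmsh modify cm trust-domain Root ca-devices add { 10.10.10.2 } name viprion-ltm.example.com username admin password <password>

    # Create a sync-failover device group containing both devices
    tmsh create cm device-group dg_failover type sync-failover devices add { vcmp-guest.example.com viprion-ltm.example.com }

    # Run the initial sync
    tmsh run cm config-sync to-group dg_failover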
VIPRION vCMP and DSC with appliance

Hi, I wonder if it is possible (as a temporary solution for a short period of time) to add vGuests from a VIPRION to an existing cluster based on 2000S appliances. My main concern is how to configure failover. According to articles and KB documents, best practice for VIPRION vGuest failover configuration is as follows. The two options for failover configuration are multicast failover and a unicast failover mesh: "Using the multicast failover or unicast full mesh failover feature on VIPRION platforms requires you to configure valid cluster member IP addresses for all vGuest slots on each redundant vGuest. Important: Failure to configure the cluster member IP address on all vGuest slots may result in failover daemon (sod) communication issues." Of course the 2000S has only a single MGMT IP, so the question is how (if at all possible) to configure failover on the VIPRION vGuest and (if necessary) on the 2000S:

VIPRION vGuest - use unicast mesh; 2000S - just leave the existing configuration with a single MGMT IP
VIPRION vGuest - use multicast failover; 2000S - enable multicast failover
Other combination?

Piotr
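A hedged tmsh sketch of what the unicast entries might look like on each side; the device names and addresses are placeholders, the exact unicast-address sub-properties should be checked per version, and on the vGuest a cluster member IP would still be needed for every slot it can occupy:

    # On the VIPRION vGuest: unicast failover addresses (management IP plus a self IP)
    tmsh modify cm device vguest1.example.com unicast-address { { ip 10.10.10.11 port 1026 } { ip 192.168.10.11 port 1026 } }

    # On the 2000S: a single unicast address, as in the existing configuration
    tmsh modify cm device bigip2000s.example.com unicast-address { { ip 10.10.10.12 port 1026 } }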
GTM and cluster

Hi, I wonder what the relation is between DSC and a GTM sync group. Let's assume I have a cluster of two BIG-IPs, both with LTM and GTM, and I create a DSC cluster (Active-Passive or Active-Active). How does that relate to GTM? I assume that sync-failover will sync the DNS listener objects used by GTM but not the GTM config - am I right? If so, how should the sync group be configured when GTM runs on devices in a DSC? Should I add the second device as a BIG-IP type server object or not? Assuming that all server objects only reference Generic Host type objects, is adding the other BIG-IP to the sync group necessary at all? What if I need a separate DNS listener IP that is active on both devices - will creating two traffic groups work? That is, one listener assigned to traffic-group-1, active by default on DeviceA, and a second listener assigned to traffic-group-2, active by default on DeviceB - will that work? Will both listeners serve the same Wide IPs, and will the sync group synchronize all GSLB settings between the two devices? Piotr
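For the two-listener idea, a rough tmsh sketch of the traffic-group side; the device names and listener address are placeholders, and the assumption that the listener's floating virtual address can simply be moved into the second traffic group should be verified:

    # Second floating traffic group that prefers DeviceB
    tmsh create cm traffic-group traffic-group-2 ha-order { deviceB.example.com deviceA.example.com }

    # Second DNS listener on its own address
    tmsh create gtm listener dns_listener_2 address 10.1.1.2 port 53

    # Move the listener's floating address into traffic-group-2
    tmsh modify ltm virtual-address 10.1.1.2 traffic-group traffic-group-2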
HA Group and pools

Hi, according to what I have read, it seems that the HA Group (HAG) feature is best suited for monitoring trunks and clusters (on VIPRION), but pools can be used as well. My question is about best practice for configuring pools in a HAG. I know the article Best practices for the HA group feature - there is not a lot there beyond avoiding pools with members that are not stable (that can go down and up rapidly). I can see two scenarios where using a pool makes sense:

Same pool used on each device in the HAG - each device has a completely separate network path to the members, so it is possible that a member is down on one device and up on another. Quite simple to configure.

Separate pools on each device pointing to different members - this is the one I am not sure how to implement. This second case is shown in the video Setting up HA Groups (part 2 of 2). One of the conditions is that each device should have not only a separate pool (A, B, C) but also a separate VS using those pools, like this: BIG-IPA - VSA - PoolA, BIG-IPB - VSB - PoolB, and so on. Also easy to configure, but there is one catch - in case of failover the VS IP will change, so connections will be lost and some external method is needed to direct clients to the new VIP - am I right? So not a perfect solution. I can imagine there is a way to switch the pool assigned to the same VS depending on which device is active (using an iRule with HA::status and tcl_platform(machine)) - but is that a good idea? Sure, connections will be reset as well, but there is no need to redirect clients to another VIP. Is there any other way to have the same VIP on every device but use separate pools in the HAG config? Any other scenarios for using pools in a HAG? Piotr
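For the first scenario (the same pool defined on both devices), a minimal tmsh sketch of the HAG wiring; the pool name, weight, and threshold are placeholders, and as above, older releases attach the HA group to the device rather than to a traffic group:

    # HA group scored by member availability of a pool that exists on both devices
    tmsh create sys ha-group ha_pool_score pools add { app_pool { weight 10 threshold 2 } }

    # Attach the HA group to the traffic group carrying the VIP
    tmsh modify cm traffic-group traffic-group-1 ha-group ha_pool_score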
Scaling Out a VIP across the cluster video question

Hi, there is a very nice video about spanned VIP and ECMP by Alex Applebaum (ScaleN Part 5: Scaling Out a VIP across the cluster). I think I figured out the packet flow but would like to confirm that I am not wrong. It seems that the example config uses a one-leg setup and the packet flow is like this:

client (10.128.1.1 -> 10.0.0.1) -> router with ECMP (10.128.255.251 / 10.1.255.251) -> BIG-IP 1/2/3 external self IP (10.1.50.201/202/203) -> VS (10.0.0.1) -> SNAT (10.0.0.11/12/13), so src IP 10.0.0.11/12/13, dst IP 10.100.30.3 -> router with ECMP (10.1.255.251 / 10.100.30.251) -> server (10.100.30.3)

The above assumes that 10.1.255.251 is the default route on the BIG-IP devices, and the returning packet follows the same path with the BIG-IP doing the proper address translation. Is the above correct?

I am not an expert in the ECMP area, so I wonder how TCP connection persistence can be preserved with such a config. I assume the only way not to break a given TCP session is to always route packets with the same srcIP:port-dstIP:port via the same BIG-IP - am I right? If so, can that be done on the ECMP routers?

Then I wonder how to provide persistence at the L7 level. There is one VIP defined that accepts traffic on all BIG-IPs in the cluster, but the instance of the VIP on each BIG-IP is separate, so it creates its own persistence records, and this information is not shared between them. So I assume that persistence based on persistence records (PR) will not work. Example: the first TCP session from IP 10.128.1.1 is directed to BIG-IP1; BIG-IP1 chooses a member and creates a PR. A second TCP session from the same IP is directed to BIG-IP3; BIG-IP3 has no PR for this source IP, so it chooses a member (which could be the same as on BIG-IP1, but could be another) and creates its own PR. Persistence for the client is broken - except, of course, if the ECMP router always directs traffic from a given IP to the same BIG-IP (so ports are not used in the hashing decision), but that could really mess up load balancing among the BIG-IPs in the cluster. However, persistence based on a cookie should work: no matter which BIG-IP receives the HTTP request with the bigip cookie, it will direct it to the member specified in the cookie. But that solution works only for HTTP. Is there anything for protocols other than HTTP? Could CARP work here - I guess that is the only method not using PRs? Piotr
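To make the cookie point concrete, a short tmsh sketch of attaching cookie-insert persistence to a hypothetical spanned virtual server vs_spanned on each BIG-IP; as noted above, the inserted cookie encodes the chosen pool member, so any cluster node that receives it can honor it without shared persistence records:

    # Cookie-insert persistence profile (the cookie identifies the selected pool member)
    tmsh create ltm persistence cookie spanned_cookie defaults-from cookie method insert

    # Attach it as the default persistence on the spanned virtual server
    tmsh modify ltm virtual vs_spanned persist replace-all-with { spanned_cookie { default yes } }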