
Forum Discussion

Shadow
Nov 07, 2025
Solved

Can't change sync type or failover after tenant upgrade.

I made a mistake that I didn't think would matter in the end, but here's what I did.

I had previously upgraded this tenant pair to 17.1.3. Everything was fine, and I intended to install on another pair, but instead I installed on the other boot location of a unit I had already upgraded. I didn't think this was an issue, since I would simply not activate that boot location. However, I couldn't force the Active member to Standby; the option was greyed out.

I thought that maybe I should boot to that new location, because perhaps something needed to complete before I could fail over between the members. That made it worse, because now I couldn't change the sync type back to Automatic with Incremental Sync.

So naturally, I booted to the previous partition because it seemed to be at least better, but now I seem to be digging a hole I can't get out of.

Where it stands now:

  • The pair is set to sync type "Manual with Incremental Sync"
  • Member1 is Standby and says "Not All Devices Synced"
  • Member2 is Active and says "Changes Pending"

On the Standby Member1, I can change the sync type, but I haven't.
On the Active Member2, I can't change the sync type or force it to standby.

 

I have a ticket open, but as this is a live system, I'm pursuing all avenues.
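For reference, here's how I've been checking the same state from the command line, since the GUI options are grayed out. This is just a sketch assuming tmsh access on both members; these commands only read state and change nothing:

```sh
# Show the device-group sync status ("In Sync", "Changes Pending", etc.)
tmsh show cm sync-status

# Show which unit is Active/Standby and whether failover is possible
tmsh show cm failover-status

# Verify the on-disk configuration parses cleanly without loading it
tmsh load sys config verify
```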


7 Replies

  • I know I can push the configuration and try to sync again, but I've done too much already and I'm a bit wary now. Doing that might get the pair in sync, but that doesn't guarantee a failover would work if I still can't manually fail them over.

    • I've just seen your other comment mentioning you can't fail over.
      How many traffic groups are you running? Is there any specific check that may be failing (for example, an HA failsafe configuration) on the other unit?
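      To check these from the CLI, something along these lines should show the traffic groups and any failsafe settings (a sketch, assuming tmsh access):

      ```sh
      # List traffic groups and where each one is currently active
      tmsh show cm traffic-group

      # Check for HA-group and VLAN failsafe settings that could affect failover
      tmsh list sys ha-group
      tmsh list net vlan failsafe
      ```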

  • Hi! A couple questions to help me understand this process better. 

    1. You mention "another pair" of tenants. So you did the same process (upgrading to 17.1.3) on a different unit? Or is it the same one? What models are we talking about? Are these vCMP tenants? Are they hosted on the same vCMP host that successfully upgraded (itself, or a different guest, I presume)?
    2. You mention you installed software on the Active partition. I've personally never tried this, but I'm pretty sure it would fail: it's impossible to select it in the GUI, and I strongly believe the CLI command would return an error.
      Do you mean that you installed 17.1.3 on a partition where software already existed? This isn't a problem in most scenarios: when F5 performs this operation, it unpacks the .iso on that partition and creates a new configuration schema using the current config files from your currently active partition. So the configuration will be up to date and on the new schema. If the process completed with no errors, it's pretty safe to assume you can boot it. The only likely error is a license check date, but it's easy to spot and you can resolve it without having to reactivate the old partition.
      The only other time I had an issue with an old schema not being wiped, I simply rolled back to the previously active location, deleted the whole partition from the GUI "disk" menu, and created it again. I believe this dated back to BIG-IP v12, though.
    3. After you upgraded the unit, the GUI showed the "Disconnected" state and sync was disabled. This is expected behavior while the Active and Standby run different software versions. It's also best practice to manually turn off auto-sync before performing a cluster upgrade, and I believe auto-sync was disabled for the same reason: you shouldn't sync configuration (and rightfully shouldn't even be able to) while the config schema is different.
      Nothing to worry about! As soon as the versions match, full functionality will be restored.

    4. Regarding your current state: again, nothing to worry about. F5 keeps track of configuration changes using an increment counter. The MCPD daemons talk to each other and say, "Hey, my config version is 101, dated 07.11.2025@17:30, what's yours?" "Mine's version 102, dated 07.11.2025@17:35." "Oh well, guess I should warn the user to perform a sync."

      What happened is that you rebooted one unit, and that reverts the counter back to 1. Again, this is expected! Nothing to worry about at all.
      They're in this state because the two units now have different values for the version counter (the old one will be higher) and the date (the rebooted one will be more recent).
    5. What you should do:
      Step 1 - Revert the upgraded unit to the previous boot location; both units should be running the same software version.
      Step 2 - Make sure you can sync: perform a MANUAL sync from the node that stayed active this time (i.e., it was keeping user traffic alive and no errors were seen) to the other node. At this point, the top-left indicators in the GUI should be green.
      Step 3 - Make sure the unit you need to upgrade is Standby.
      Step 4 - DISABLE all auto-sync.
      Step 5 - Delete the 17.1.3 software partition you created from the standby node. Check whether you need to reactivate the license; if you do, do it on the standby node. Then create a new disk partition with version 17.1.3 and the current configuration (the automatic process will do it).
      Step 6 - Upgrade the standby node. There's no need to check the "install configuration" tick box unless you made changes to this single node since the 17.1.3 partition was created.
      Step 7 - Don't panic when it says you can't sync them; it's expected. Now perform a cluster failover so that the new node starts to manage user traffic. Check for problems and roll back if needed.
      Step 8 - Repeat steps 5 and 6 on the other node.
      Step 9 - After the unit completes the upgrade, you should again see a manual sync to perform. This is expected, because both units have the same config version counter ("1") but different dates. Sync from the ACTIVE to the STANDBY node; again, I'm assuming that users connecting to the Active node haven't raised issues.
      Step 10 - Restore auto-sync.
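      If the GUI stays grayed out, most of these steps can also be driven from tmsh. A rough sketch; "device_group_failover" and the .iso filename are placeholders for your actual device-group name and image file:

      ```sh
      # Step 4: disable auto-sync on the sync-failover device group
      tmsh modify cm device-group device_group_failover auto-sync disabled

      # Step 5: delete the stale 17.1.3 boot location on the standby,
      # then reinstall the image into a fresh volume
      tmsh delete sys software volume HD1.2
      tmsh install sys software image BIGIP-17.1.3.iso volume HD1.2 create-volume

      # Step 9: after both units are on 17.1.3, push config from the Active unit
      tmsh run cm config-sync to-group device_group_failover

      # Step 10: restore auto-sync
      tmsh modify cm device-group device_group_failover auto-sync enabled
      ```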
  • I was not very clear with my language in my original message. When I said active pair, I meant a pair that is currently in production and fully upgraded.

    Instead of upgrading the intended pair, I upgraded the one that I had already done, but on the other partition, which left me with 17.1.3 on both HD1.1 and HD1.2.

    I can't explain what the issue is/was, but after some time, I am able to proceed as I typically would, as all the previously mentioned grayed-out options are now available to me.

  • When I ran tmsh load sys config verify, I did not see any errors. I changed the sync type to Auto-Sync and then ran the sync. They are now synced with no pending changes. 

    The only thing I saw was that "Force to Standby" was grayed out on the active member, as was the sync option. This lasted for a while and then became active.

    I forced the active to standby and verified that it was working; it seemed to work fine. However, once again, the Force to Standby button was grayed out on the newly active member.
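    If it grays out again, I assume I can still force the failover from tmsh. A sketch; "traffic-group-1" is the default traffic group name and may differ in your setup:

    ```sh
    # Force this unit to standby for every traffic group it currently holds
    tmsh run sys failover standby

    # Or fail over just one traffic group
    tmsh run sys failover standby traffic-group traffic-group-1

    # Confirm the roles switched
    tmsh show cm failover-status
    ```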

  • It turns out that this problem isn't related to installing the same version on different boot locations. I had one more pair to upgrade. The upgrade went smoothly. I had no issues at all until I went to change the sync type. The option is grayed out on both the STANDBY and the ACTIVE.

    • Shadow

      Sometimes the simplest solution is the overlooked one. 

       

      When I tried this in Edge instead of Chrome, it worked just fine. I was able to complete the upgrade without issue.