Forum Discussion
A couple questions regarding LTM 10.2.3 to 11.3 upgrade
Hello all, looking to upgrade two separate LTM active/standby pairs from 10.2.3 to 11.3 HF7. I have a VE pair that I have run through this process on twice, using the steps defined here: http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/bigip-upgrade-active-standby-11-3-0/1.html

Prior to the upgrade, I merged in "sanitized" config entries from our production pairs containing class, monitor, pool, and rule objects. I did not run into any issues with the upgrade on the VEs, but after upgrading machine A I found that the Device Trust - Peer List was empty on both the A and B machines. When I attempted to add the B machine to the peer list on the A machine, I received an error that a device of the same name already existed, asking if I wanted to overwrite. I answered yes, which updated the peer lists and allowed me to do the initial sync.

The main config difference between our VE pair and our physical pairs is that we use a private VLAN for ConfigSync and Network Mirroring, and we do not have HA Network Failover configured since the devices are physically connected.

Finally, here are the questions I still have before moving forward:

1.) Does anyone know of any gotchas I may run into due to differences between our VE and physical machines?

2.) We typically upgrade the B machine and run for a few days, making sure we don't run into issues, before upgrading the A machine. If we find an issue, we force B to standby and run on the A machine while we fix things. Due to the nature of this particular upgrade, is this a bad idea?

3.) If we upgrade the B machine and then immediately upgrade the A machine, and find out the next day there is some issue that needs to be addressed, can we just fail back to version 10.2.3 by booting off of that partition, or would we also need to load the config saved off prior to the upgrade? (A sketch of what I mean by failing back is below.)
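For question 3, each boot location keeps its own installation and configuration, so my understanding is the 10.2.3 volume should still hold the pre-upgrade config. Here is roughly what I expect the fail-back to look like, assuming HD1.1 is the old 10.2.3 location (volume names will differ on your units):

    switchboot -l          # list boot locations; confirm which one holds 10.2.3
    switchboot -b HD1.1    # set the 10.2.3 volume as the default boot location
    reboot                 # boot back into the pre-upgrade installation

We would still keep the saved UCS as a safety net in case the old volume was touched during the upgrade.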
1 Reply
I have been working through a version of this for the past few days. In our case, I am migrating from 10.2.4 to 11.3.0 by loading the UCS from our source systems onto VEs running 10.2.4, then upgrading them to 11.3.0. This has the benefit of keeping the failover peering intact and requires only minor fixups to the config prior to upgrading to 11.3.0.
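Roughly, the upgrade portion on each unit looks like this. This is only a sketch (the upgrade guide linked in the original post has the authoritative steps), and the UCS filename, ISO filename, and volume name are examples:

    # On the 10.2.4 system, save a safety-net UCS first:
    bigpipe config save /var/local/ucs/pre-11.3-upgrade.ucs
    # Install 11.3.0 to a new volume from the downloaded ISO:
    image2disk --instslot=HD1.2 /shared/images/BIGIP-11.3.0.iso
    # Once the install completes, boot into the new volume:
    switchboot -b HD1.2 && reboot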
Both times I ran this upgrade, as long as the 10.2.4 VEs successfully clustered together after loading the UCS, they successfully peered in a DSC on 11.3.0. If your ConfigSync configuration works correctly, you should not have major issues. You will need to configure network-based failover if you are not using it on the source cluster; if you do not, I do not believe the DSC will be built correctly on the 11.3.0 instances.
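Since your physical pairs rely on hard-wired failover today, that means adding unicast failover addresses as part of the move. A minimal sketch in 11.x tmsh, where the device name and self IP are hypothetical (use "tmsh list cm device" to find yours):

    # Tell DSC which address this device listens on for network failover:
    tmsh modify /cm device bigip-b.example.com unicast-address { { ip 10.10.10.2 port 1026 } }
    # Verify failover and sync state afterwards:
    tmsh show /sys failover
    tmsh show /cm sync-status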
One note: prior to loading the UCS, I activate the license on the VE, and when I load the UCS I pass in the "rma" option.
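Put together, that step looks roughly like this. The UCS filename is an example, and the "rma" option is as I described above; verify its exact behavior with "tmsh help sys ucs" on your version:

    # License the VE first (via the Setup utility or your usual method),
    # then restore the source pair's configuration from the uploaded UCS:
    tmsh load /sys ucs /var/local/ucs/source-pair.ucs rma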