Forum Discussion
smp_86112
Cirrostratus
Aug 28, 2009
v10 upgrade - partitions vs. volumes
I'm starting to read through the v10 upgrade docs, and came across the references to partitions versus volumes. A couple of questions come to mind that I can't find addressed in the doc. I will be upgrading from 9.3.1 to 10 on LTM 6400s. Currently my LTMs have two available boot images according to switchboot:
HD1.1 - BIG-IP 9.3.0 Build 178.5
HD1.2 - BIG-IP 9.3.1 Build 69.0
Historically our practice has been to install upgrades in the "unused" partition in case we need a quick backoff. In this case, the unused partition is HD1.1 since we have been running 9.3.1 for a long time.
For my v10 upgrade, I envisioned a similar backoff strategy where I could simply reboot and select to boot back into the 9.3.1 partition.
My understanding of the v10 upgrade guide is that I need to maintain the old partitioning scheme if I want the ability to boot back into my 9.3.1 image should the upgrade go bad. So I will be using the image2disk utility without the --format option, and with the switch --instslot=HD1.1.
Do I have that right?
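For reference, the install command I have in mind would look something like this (the image path and ISO filename are just placeholders for wherever I end up staging the v10 image):

    image2disk --instslot=HD1.1 /shared/images/BIGIP-10.x.x.iso

The intent being that it writes v10 into the unused HD1.1 slot and, without --format, leaves the existing partition scheme and the 9.3.1 image in HD1.2 alone.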
Also, from what I read, the volume scheme allows you to create multiple places to hold v10 installation images. Is this analogous to the partition scheme in 9.x? How are they different?
Also, how many boot partitions can you create in 9.x? Ideally I would like to create a third partition to hold v10, but I can't seem to find the answer to that question in the docs...
Thanks.
- smp_86112
Cirrostratus
I don't like the idea, either, of having to boot to 9.x to install a v10 hotfix, since those two configs will likely have diverged quite a bit by the time I get around to installing our first v10 hotfix. Who knows what could happen if I temporarily start load-balancing on an old config.
- ntwrkurwrld_683
Nimbostratus
When I did the upgrade, it got rid of all of the other partitions using the command above. Now I only have one partition with Version 10 on it.
- hoolio
Cirrostratus
I think the restriction of having to keep a 9.x slot to install hotfixes for the 10.x slot is enough to force you to use LVM. It's also nice to be able to run more than two installations using LVM. The downside is that LVM doesn't make for an easy transition to 10.x. I suppose you could keep the standard partitions and install 10 on the second slot. Once you're confident in the 10.x installation, you could wipe the disk and use LVM.

Another option which would require less work but be less supported would be to keep one unit on 9.x and the other on 10.x with LVM. I'd try to use hardwire failover exclusively to ensure the standby unit stays in standby (though I haven't tested a 9.x and 10.x mixed pair with network failover, so this is conjecture...). Once you're happy with the 10.x installation, you could upgrade the peer to 10.x with LVM.
- smp_86112
Cirrostratus
I suppose you could keep the standard partitions and install 10 on the second slot. Once you're confident in the 10.x installation, you could wipe the disk and use LVM.
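If I follow, that later cut-over would be a full reinstall that repartitions the drive, something like this (I'm assuming --format takes a "volumes" argument on v10, and I'd verify the exact syntax against the install guide before running it; the ISO path is just a placeholder):

    image2disk --format=volumes /shared/images/BIGIP-10.x.x.iso

with the understanding that formatting to LVM wipes the existing slots, so the 9.x backoff image would be gone at that point.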