Forum Discussion
Installation of new blade in VIPRION/vCMP System
I went through this for the first time recently. The clustering software takes care of synchronizing the new blade's software revision to match the master: it first rsyncs the contents of /shared/images to the new blade and then issues the install. The new blade will likely arrive with version 10.x software, so the install will take a few reboots. You can track progress via the AOM or the logs.
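As a sketch of how you might watch that sync and install from the primary blade's console (standard tmsh command and log location; the image path is the one mentioned above):

```
# Show each slot's state and software version (run from the primary blade)
tmsh show sys cluster

# Follow the log while the new blade installs and reboots
tail -f /var/log/ltm

# The images the cluster rsyncs to the new blade live here
ls /shared/images
```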
You can successfully install the new blade without cabling its own network interfaces; it will send traffic over the backplane, and sync traffic also uses the backplane. That said, in my opinion it is best to cable all your blades identically. I typically run two bonded ports on each blade and then bond across the blades (i.e., 2x 10 Gb + 2x 10 Gb). You obviously need to verify that your switch setup can handle this; we use Nexus 5Ks, which are a perfect fit for VIPRION.
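A minimal tmsh sketch of that cross-blade bond, assuming blades in slots 1 and 2 (the trunk name and interface numbers are illustrative, not from the original post):

```
# Create an LACP trunk spanning ports on both blades.
# Interface names are slot/port, e.g. 1/1.1 = blade in slot 1, port 1.1.
create net trunk external_trunk interfaces add { 1/1.1 2/1.1 } lacp enabled
```

On the switch side, the corresponding ports would need to be in a matching LACP port channel for the bond to come up.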
You should configure cluster member IPs on the master in addition to the management IPs, and the new blade will assume its designated IP when it joins. I allocate a block of 5 IPs per chassis for this purpose, regardless of how many blades are installed at the time. Configuring the cluster addresses is covered in F5's documentation, and you should have them whether you're running vCMP or not.
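For reference, the cluster addresses can also be set from tmsh; this is a sketch with example addresses, and the exact syntax can vary by version:

```
# Floating management IP for the chassis cluster
modify sys cluster default address 192.0.2.10/24

# Per-slot member addresses, allocated up front whether or not the slot is populated
modify sys cluster default members modify { 1 { address 192.0.2.11 } 2 { address 192.0.2.12 } }
```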
vCMP (to my knowledge) will only automatically sync an all-slots guest to the new blade; it will not sync single-slot guests unless you provision the guest on the other blade.