Forum Discussion
Viprion Blade Management IP Addresses
Thanks, all.
Looks like I will need to cable all the management ports: whenever I migrate a guest to one of the blades that does not have a cable, I am unable to access the guest.
Yes, I found you have to cable all the management ports in order for it to work. These do not go across the backplane.
I can understand the vCMP host configuration needing one IP for the chassis/cluster (I consider this the 'float', as it seems to follow whichever blade is the master) and then one IP address for each blade. They must ALL be connected to the network in order to be reachable, as each blade is independent.
Now, what I don't understand is the requirement for the same on the vCMP guests. Is this required, and if so, how is this address used other than to reach the specific blade? I wouldn't normally use these, as I always connect to the guest's 'global' management IP address.
Thanks in advance. I'm busy mapping out 8 blade + 1 cluster IP addresses for each guest on each Viprion, along with the 8 + 1 vCMP host IP addresses for each Viprion, and trying to get a better understanding of why and how. The SOL article offers best practices, but it does not say why the guest needs these 5 IP addresses; it only states that the IP used for management should be the cluster IP.
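Since the host and every guest repeat the same "1 cluster + 1 per blade" pattern, the mapping exercise is easy to script. Below is a minimal sketch of that bookkeeping using Python's standard ipaddress module; the chassis names, guest names, and the 10.0.0.0/24 management subnet are hypothetical placeholders, not values from any F5 documentation.

```python
# Sketch: enumerate management IPs for vCMP hosts and guests on a
# fully populated 8-blade Viprion. Each host/guest gets one floating
# cluster IP plus one IP per blade slot, all drawn sequentially from
# a hypothetical management subnet.
from ipaddress import ip_network

BLADES = 8  # slots per chassis; each needs its own management IP


def plan(chassis, guests, subnet="10.0.0.0/24"):
    """Return {(chassis, owner, role): ip} for every required address."""
    hosts = ip_network(subnet).hosts()  # iterator of usable addresses
    allocations = {}
    for ch in chassis:
        # vCMP host: the floating cluster IP first, then one per blade.
        allocations[(ch, "host", "cluster")] = next(hosts)
        for slot in range(1, BLADES + 1):
            allocations[(ch, "host", f"slot{slot}")] = next(hosts)
        # Each guest repeats the same 1 + 8 pattern.
        for guest in guests:
            allocations[(ch, guest, "cluster")] = next(hosts)
            for slot in range(1, BLADES + 1):
                allocations[(ch, guest, f"slot{slot}")] = next(hosts)
    return allocations


if __name__ == "__main__":
    # Two guests on one chassis: (1 + 8) host + 2 x (1 + 8) guest = 27 IPs.
    for key, ip in plan(["viprion1"], ["guestA", "guestB"]).items():
        print(key, ip)
```

Running it makes the scale of the exercise obvious: every additional guest on an 8-blade chassis consumes 9 more management addresses, on top of the host's own 9.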