Forum Discussion
Viprion Blade Management IP Addresses
Thanks, all.
It looks like I will need to cable all the management ports, as whenever I migrate a guest to a blade that does not have a cable, I am unable to access the guest.
- brad_11480Jul 12, 2016
Nimbostratus
Yes, I found you have to cable all the management ports in order for this to work; management traffic does not go across the backplane.
I can understand the vCMP host configuration needing one IP for the chassis/cluster (I consider this the 'floating' address, as it seems to follow the blade that is primary), plus one IP address for each blade, and they must all be connected to the network in order to be reachable, since each blade is independent.
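As a rough sketch of what that host-side addressing looks like from the TMOS shell (the addresses, netmask, and slot numbers below are placeholders, and the exact nested-object syntax should be verified against your TMOS version's tmsh reference):

```shell
# vCMP host: one floating cluster management IP for the whole chassis
tmsh modify sys cluster default { address 192.0.2.10/24 }

# ...plus one per-blade member IP; each blade's mgmt port must be cabled,
# since management traffic does not traverse the backplane
tmsh modify sys cluster default members modify { 1 { address 192.0.2.11 } 2 { address 192.0.2.12 } }
```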
Now, what I don't understand is the requirement for the same on the vCMP guests. Is this required, and if so, how is this address used other than to reach a specific blade? I wouldn't normally use these, as I always connect to the guest's 'global' (cluster) management IP address.
Thanks in advance. I'm busy mapping out 8 blade + 1 cluster IP addresses for each guest on each VIPRION, along with the 8 + 1 vCMP host IP addresses for each VIPRION, and trying to build a better understanding of why and how. The SOL article offers best practices, but it does not say why the guest needs these five IP addresses; it only states that the address used for management should be the cluster IP.
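The address math being mapped out above adds up quickly. A minimal sketch of the count (assuming the scheme described in this thread: one cluster IP plus one IP per blade, for the vCMP host and for every guest):

```python
def mgmt_ips_per_chassis(blades: int, guests: int) -> int:
    """Total management IPs for one VIPRION chassis, assuming each
    entity (the vCMP host plus every guest) needs one cluster IP
    plus one per-blade IP."""
    per_entity = blades + 1          # cluster IP + one IP per blade
    return per_entity * (guests + 1)  # vCMP host + each guest

# A fully populated 8-blade chassis running 4 guests would consume
# 9 addresses for the host and 9 for each guest.
print(mgmt_ips_per_chassis(blades=8, guests=4))
```

This is why the per-blade guest addresses sting: every additional guest costs another full set of blade addresses, not just one.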
- Rick_Sidwell_79Dec 05, 2016
Nimbostratus
I have the same question. Why are blade-specific addresses needed on the vCMP guests? You want them for the vCMP hosts for the rare times you may need to access a specific blade (e.g., for troubleshooting). But you can access a specific blade on a guest using vconsole; you don't need separate IP addresses for that. IPv4 addresses are becoming very precious...