Forum Discussion
Viprion Blade Management IP Addresses
Not sure why I could not edit my initial comment. Anyway, I worked around it by deleting and re-posting with the edits.
Let me answer the multiple questions in this topic about VIPRION and vCMP. I hope this is also helpful to whoever finds this topic later.
Firstly, VIPRION. As someone correctly said, think of the cluster IP as the floating IP you configure in LTM. Using the same analogy, think of the blade IPs as non-floating IPs. The same way you have a non-floating IP on each unit in an HA pair so you can manage that unit directly, you need a blade IP for each blade so you can manage each blade independently. The cluster IP in a VIPRION system is mainly used to make sure you are making changes on the primary blade, as opposed to an HA pair, where you can make changes on either unit (as long as you sync in the correct order).
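For reference, a minimal tmsh sketch of how the cluster and per-blade member addresses are configured (the addresses are illustrative, and the exact nested syntax can vary slightly between TMOS versions):

```
# Show the current cluster state, including the primary slot and member IPs
tmsh show sys cluster

# The cluster (floating) management IP -- always answered by the primary blade
tmsh modify sys cluster default address 192.0.2.10/24

# Per-blade (non-floating) member IPs, one per slot
tmsh modify sys cluster default members modify { 1 { address 192.0.2.11 } }
tmsh modify sys cluster default members modify { 2 { address 192.0.2.12 } }
```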
Secondly, vCMP hosts and guests. I am not sure there is documentation about this, and I do not have a multi-blade VIPRION to test with right now, but I will explain what makes sense to me and you can confirm it in your setup. I know your question was about management, but let me start with the other interfaces first.

In a VIPRION multi-blade setup, the recommendation is to create a trunk with at least one interface from each blade. You don't assign interfaces to guests, you assign VLANs, so in theory you can assign a VLAN that uses only an interface on another blade. In a VIPRION system, TMM interfaces (not management) can handle traffic from any interface, no matter which VLAN it belongs to, so traffic can enter blade 1 interface 1.1, be processed by a TMM instance on blade 2, and leave via blade 1 interface 1.1. For a vCMP guest the system creates virtual interfaces (0.x), and my expectation is that the host will map the interfaces correctly, so it should allow traffic to enter blade 1 interface 1.1, reach a vCMP guest deployed only on blade 2, and return via blade 1. A sketch of a guest definition along these lines is shown after this paragraph.

For the vCMP guest management interface, the traffic is handled by the Linux system in the guest, and based on the type of management interface you can configure when creating the guest (bridged or isolated), my guess is that it is a direct (virtual) link to the management port on the blade where the guest is deployed. If you have a guest that spans 2 blades, that is in theory the same as a VIPRION with 2 blades, so the guest instance on blade 1 is linked to the management port on blade 1, and the guest instance on blade 2 is linked to the management port on blade 2.
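Here is a hedged tmsh sketch of creating such a guest with a bridged management network and VLANs assigned (the guest name, image, IPs, and VLAN names are illustrative, not from the original question):

```
# Create a vCMP guest spanning 2 slots; 'management-network bridged' gives the
# guest a virtual link to the physical management port of each blade it runs on
tmsh create vcmp guest guest1 \
    management-network bridged \
    management-ip 192.0.2.50/24 \
    management-gw 192.0.2.1 \
    initial-image BIGIP-12.1.2.iso \
    slots 2 \
    vlans add { external internal } \
    state deployed
```

Note that the guest receives VLANs, not interfaces; the host works out how traffic on those VLANs reaches the blades the guest actually occupies.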
Anyway, to summarize: the best practice is to cable all blade management ports and to create a trunk with at least one interface from each blade. If you follow these best practices, the question of how the management port is linked to the guest management interface should not arise, nor should the question about the TMM interfaces used by the vCMP guests.
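A sketch of that trunk best practice in tmsh, assuming a 2-blade chassis (interface, trunk, and VLAN names are illustrative; VIPRION interfaces follow the slot/port naming convention):

```
# Trunk with one member interface per blade, so traffic survives a blade failure
tmsh create net trunk chassis_trunk interfaces add { 1/1.1 2/1.1 }

# VLANs ride on the trunk, and it is the VLANs that get assigned to vCMP guests
tmsh create net vlan external interfaces add { chassis_trunk { tagged } } tag 100
```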
Edit 06/12/2016, thanks to Josh: It is also important to say that you can allocate all management IPs in advance, so when you add new blades, the IPs already exist in the configuration.
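A hedged example of that pre-allocation, assuming slots 3 and 4 are still empty (addresses illustrative):

```
# Assign member IPs to unpopulated slots now; a blade inserted later into slot 3
# or 4 should pick up its management address from the existing configuration
tmsh modify sys cluster default members modify { 3 { address 192.0.2.13 } }
tmsh modify sys cluster default members modify { 4 { address 192.0.2.14 } }
```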