Forum Discussion

SL
Jun 29, 2016

Viprion Blade Management IP Addresses

I recently installed 2 additional blades into an existing VIPRION chassis.

 

My question is: does each blade need to have a management cable and IP address assigned so that the guests configured on that blade can be accessed? I am currently unable to access the guests on the new blades, but when I migrate them to the existing blades, each of which has its own cable and management IP, I am able to access the vCMP guests.

 

Thanx

 

7 Replies

  • nathe

    Sulaiman,

     

    F5 best practice is to allocate IP addresses to all blades, irrespective of whether the slots are populated. Also, any new guests should get IP addresses across all blades too. This is so you don't lose address space further down the line when you add new blades/guests.
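
 

    For example, something along these lines in tmsh should pre-allocate the addresses (this is from memory rather than a tested config, so verify the exact syntax against the tmsh reference for your version; the addresses are just placeholders):

        # Cluster (floating) management IP for the chassis
        tmsh modify sys cluster default address 192.0.2.10/24

        # Per-slot member IPs, including slots that are still empty
        tmsh modify sys cluster default members { 1 { address 192.0.2.11 } 2 { address 192.0.2.12 } 3 { address 192.0.2.13 } 4 { address 192.0.2.14 } }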

     

    Cabling is preferred but not essential, if I'm right on this, of course. There is a management backplane, so communication to other mgmt IPs can go via this route if a dedicated cable doesn't exist.

     

    Bear in mind that if new blades aren't given a management address and the chassis reboots, and one of those blades becomes primary, then you'll lose access to the primary management IP address. You can't dictate which blade becomes primary, so this is a risk.

     

    The same goes for guests: they have a concept of a primary blade. If no IP exists on a blade, the address can't "float" to that blade.

     

    Hope this helps,

     

    N

     

  • Thanx All

     

    Looks like I will need to cable all the management ports, as whenever I migrate a guest to one of the blades that does not have a cable, I am unable to access the guest.

     

    • brad_11480

      Yes, I found you have to cable all the management ports in order for it to work. This traffic does not go across the backplane.

       

      I can understand the vCMP host configuration needing one IP for the chassis/cluster (I consider this the 'float', as it seems to follow the blade that is the master) and then one IP address for each blade, and they must ALL be connected to the network in order to be reachable, as each blade is independent.

       

      Now, what I don't understand is the requirement/need for the same on the vCMP guests. Is this required, and if so, how is this address used other than to reach the specific blade? I wouldn't normally use these, as I always connect to the guest's 'global' management IP address.

       

      Thanks in advance. I'm busy mapping out 8 blade + 1 cluster IP addresses for each guest on each VIPRION, along with the 8 + 1 vCMP host addresses for each VIPRION, and trying to get a better understanding of why and how. The SOL article offers best practices, but it does not say why the guest needs these 5 IP addresses, and it states that the IP used for management should be the cluster IP.

       

    • Rick_Sidwell_79

      I have the same question. Why are blade-specific addresses needed on the vCMP guests? You want them for the vCMP hosts for the rare times you may need to access a specific blade (e.g., for troubleshooting). But you can access a specific blade on a guest using vconsole; you don't need separate IP addresses for them. IPv4 addresses are becoming very precious...

       

  • I believe (I can't test this) that the technology that lets you stripe a vCMP instance across blades in a chassis is similar to the technology that lets the VIPRION chassis connect blades in a non-vCMP situation.

     

    Therefore, as others stated, you must configure all management and cluster addresses.

     

    Doing it ahead of time should save some headaches down the road.

     

  • Not sure why I could not edit my initial comment. Anyway, I worked around it by deleting the comment and adding the edited version again.

     

    Let me answer the multiple questions in this topic about VIPRION and vCMP. I hope this is also helpful for whoever finds this topic.

     

    Firstly, VIPRION. As someone correctly said, think of the cluster IP as a floating IP you configure in LTM. Using the same analogy, think of the blade IPs as non-floating IPs. In the same way that you have a non-floating IP on each unit in an HA pair so you can manage each unit, you must have a blade IP for each blade so you can manage each blade independently. The cluster IP in a VIPRION system is mainly used to make sure you are making changes on the primary blade, as opposed to an HA pair, where you can make changes on either unit (as long as you sync in the correct order).
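
 

    To make the analogy concrete, "tmsh list sys cluster" on a multi-blade VIPRION shows roughly the structure below (trimmed, with made-up addresses, so treat it as a sketch rather than exact output). The cluster "address" is the floating one that follows the primary blade, and each entry under "members" is the non-floating per-blade IP:

        sys cluster default {
            address 192.0.2.10/24
            members {
                1 {
                    address 192.0.2.11
                }
                2 {
                    address 192.0.2.12
                }
            }
        }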

     

    Secondly, vCMP hosts and guests. I'm not sure if there is any documentation about this, nor do I have a multi-blade VIPRION to test with right now, but I will explain what makes sense to me and you can confirm it in your setup. I know your question was about management, but let me start with the other interfaces first. In a VIPRION multi-blade setup, the recommendation is to create a trunk with at least one interface from each blade. You don't assign interfaces to guests, you assign VLANs, so in theory you can assign a VLAN that only uses an interface on another blade. In a VIPRION system, TMM interfaces (not management) can handle traffic from any interface, no matter which VLAN it belongs to, so traffic can enter blade 1 interface 1.1, be processed by a TMM instance on blade 2, and leave via blade 1 interface 1.1. For a vCMP guest the system will create virtual interfaces (0.x), and my expectation is that the host will map these interfaces correctly, so traffic should be able to enter blade 1 interface 1.1, enter a vCMP guest deployed only on blade 2, and return via blade 1.

 

    For the vCMP guest management interface, the traffic is handled by the Linux system in the guest, and based on the type of management interface you can configure when creating the guest (bridged or isolated), I guess there is a direct (virtual) link to the management port on the blade where the guest is deployed. If you have a guest that uses 2 blades, that is in theory the same as a VIPRION with 2 blades, so the guest instance on blade 1 is linked to management on blade 1, and the guest instance on blade 2 is linked to management on blade 2.
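
 

    As a rough illustration of that best practice in tmsh (names, VLAN tag and slot counts are just examples, and the vcmp guest options can differ slightly between versions, so check the reference for yours):

        # Trunk with one interface from each of two blades (slot/port notation)
        tmsh create net trunk multi_blade_trunk interfaces add { 1/1.1 2/1.1 }

        # VLAN carried on that trunk
        tmsh create net vlan external interfaces add { multi_blade_trunk { tagged } } tag 1101

        # Guest with bridged management and the VLAN assigned (run on the vCMP host)
        tmsh create vcmp guest guest1 management-ip 192.0.2.21/24 management-gw 192.0.2.1 management-network bridged vlans add { external } slots 2 cores-per-slot 2 state deployed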

     

    Anyway, to sum up, the best practice is to cable all the blade management ports and also to create a trunk with at least one interface from each blade. If you follow these best practices, the question of how the management port is linked to the guest management, or how the TMM interfaces are used by the vCMP guests, should not arise.

     

    Edit 06/12/2016, thanks to Josh: It is also important to say that you can allocate all management IPs in advance, so when you add new blades, the IPs already exist in the configuration.