Forum Discussion

Sona (Nimbostratus)
Feb 07, 2025

F5 Self IP is not reachable over the network. New F5 virtual env.

Hi All,

We are migrating F5 LTM from a physical appliance to a virtual environment, running version 16.1.2. On the F5 VE we have three interfaces in a trunk (tagged), created multiple VLANs with self IPs associated to those VLANs, and configured a gateway and routes on the new F5 VE. On the VM side, the three interfaces are in trunk mode (VLANs 0-4094 allowed), and on the ACI side the VLANs are allowed per the F5 VE requirement (multiple VLANs). LACP is disabled on all sides (VMs, F5 VE, and ACI).

This is the architecture: Nutanix/VMware end --> F5 VE --> ACI. The issue is that we are unable to ping any VLAN gateway IP from the F5 VE, the self IPs are not reachable from ACI, and ARP is not being learned.

Do you have a solution for this issue? Specifically, can we pass multiple VLANs from the F5 to the Nutanix VMs?

Thanks,

Sona.

 

  • Hello Sona, while I have not run into this scenario myself, here are some things to take a look at (compiled with the help of GenAI):

    1. VLAN Configuration: Ensure that the VLANs are correctly configured on both the F5 VE and the ACI side. Since you mentioned that multiple VLANs are allowed, double-check that the VLAN IDs match on both sides and that they are properly tagged.
    2. Trunk Mode: Verify that the trunk mode settings are consistent across all devices. Since you have allowed VLANs 0-4094, make sure that this range is correctly configured on the F5 VE, VMware, and ACI sides.
    3. LACP Disabled: Since LACP is disabled on all sides, ensure that the interfaces are correctly configured for static trunking. Sometimes, issues can arise if the interfaces expect LACP but it's disabled.
    4. ARP Learning: The failure to learn ARP may point to a misconfiguration in the network settings. Ensure that ARP is correctly configured on the F5 VE and that no firewall rules are blocking ARP traffic. A quick verification sketch follows this list.
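    As a starting point, here is a minimal verification sketch run from the BIG-IP VE shell (the gateway address is a placeholder; substitute your own):

    tmsh list net trunk           # confirm trunk members and LACP state
    tmsh list net vlan            # confirm VLAN tags and tagged trunk membership
    tmsh show net arp             # check whether any ARP entries are being learned
    ping -c 3 <vlan-gateway-ip>   # test reachability of a VLAN gateway from the VE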

    Regarding your specific question about passing multiple VLANs from F5 to Nutanix VMs, it is indeed possible. The key is to ensure that the VLANs are correctly tagged and that the trunk ports are properly configured to allow the VLAN traffic. Here are some steps to consider:

    • VLAN Tagging: Ensure that the VLANs are tagged correctly on the F5 VE and that the Nutanix VMs are configured to recognize these VLAN tags.
    • Trunk Ports: Verify that the trunk ports on the F5 VE and the Nutanix VMs are configured to allow the necessary VLANs.
    • Network Policies: Check the network policies on both the F5 VE and the Nutanix VMs to ensure that they allow the VLAN traffic. A configuration sketch follows this list.
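    As a point of comparison, here is a minimal tmsh sketch of a tagged-VLAN-over-trunk setup of the kind described above (the trunk name F5Trunk, the VLAN tag 50, and the self IP address are assumptions; adjust them to your environment):

    tmsh create net trunk F5Trunk interfaces add { 1.1 1.2 1.3 } lacp disabled
    tmsh create net vlan vlan50 tag 50 interfaces add { F5Trunk { tagged } }
    tmsh create net self self_vlan50 address 10.50.0.5/24 vlan vlan50 allow-service default

    On the hypervisor side, the port group attached to the VE interfaces must pass those tags through (for example, VLAN ID 4095 / trunking mode on a VMware port group).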

      If these steps do not resolve the issue, it might be helpful to review the specific configurations and logs on the F5 VE and the ACI to identify any discrepancies or errors.
    • Sona (Nimbostratus)

      Thanks for the response. We are not using any IP addresses; we just trunked the interfaces and passed the particular VLANs over the same trunked interfaces. For example: an F5 trunk named F5Trunk (with interfaces 1.1, 1.2, and 1.3 added), with VLANs 50, 60, and 70 passed through F5Trunk.

      On the VM side they have allowed VLANs 0-4094 (all VLANs on the same interfaces).

      Today I observed a single MAC address on all interfaces at the F5 VE end, while on the VM side the MAC addresses differ; the F5 MAC matches the NTNX 1.3 interface. As per my understanding, the MAC address should be different for each interface.

      We will check it and get back to you next week.

      Have a nice weekend!!!

      Thanks,

      Sona.

  • Hi Sona,

     

    Can you check in tmsh on your VE why you are getting the same MAC address on all interfaces? If the value below is "global" instead of "unique", that could be the reason you are seeing the same MAC address:

    (tmos)# list sys db vlan.macassignment
    sys db vlan.macassignment {
        value "unique"
    }
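    In addition, two standard tmsh commands for inspecting which MAC addresses are actually in use (output layout may vary by version):

    tmsh list net interface all-properties   # shows the mac-address property for each interface
    tmsh show sys mac-address                # lists MAC addresses and the objects (interface/VLAN/trunk) they belong to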

    The possible database values for vlan.macassignment are:

    • unique - Each VLAN is assigned a unique MAC address. This is the default value for vlan.macassignment. Note that this holds only as long as there are MAC addresses available to assign; if you create more VLANs than there are available MAC addresses, the BIG-IP system starts assigning the same MAC address to multiple VLANs.
    • global - All VLANs are assigned the same MAC address.
    • vmw-compat - Specific to VE systems, only one interface is allowed per VLAN, and the VLAN will use the MAC address of its corresponding interface. No trunks may be attached to these VLANs.

    Note: Multiple VLANs can safely share the same MAC address as long as your other network devices support a per-VLAN MAC address table, which lets those devices learn the same MAC address on multiple VLANs without issue. This is a common feature on modern switches.

    Note: VLAN MAC assignment may change after an upgrade, a restart, or a restart of the mcpd process.

    Viewing the current database variable

    To view the current value of the vlan.macassignment database variable, enter the following command:

    tmsh list /sys db vlan.macassignment

    Changing the database variable using tmsh

    To change the database variable with tmsh, use the following syntax:

    tmsh modify /sys db vlan.macassignment value <unique | global | vmw-compat>
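    For example, to set per-VLAN unique MAC assignment (the default) and persist it, something like the following; note the earlier caveat that a restart of mcpd may be required before existing VLANs pick up new MACs, and that restart is disruptive:

    tmsh modify /sys db vlan.macassignment value unique
    tmsh save /sys config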

    ====================================

    Also, can you make sure you are not using the default vmxnet3 driver?

    Trunking is supported on BIG-IP Virtual Edition (VE) from version 13.1.0 onward, but it is intended to be used with Single Root I/O Virtualization (SR-IOV) interfaces, not with the default vmxnet3 driver.

    Note: You can enter tmsh list net trunk to display the trunks configured on your BIG-IP system.

    Trunking of the interfaces with the vmxnet3 driver may result in unstable network traffic.
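    To confirm which driver TMM has actually bound on the VE, F5's KB articles point to the tmctl device table; treat the exact table name as an assumption, as it may vary by version:

    tmctl -d blade tmm/device_probed    # lists probed NICs and the driver in use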

    Environment

    • BIG-IP Virtual Edition running on the VMware ESXi hypervisor
    • A trunk is configured with interfaces that use the vmxnet3 driver

    Cause

    Trunking is not supported for the default vmxnet3 driver.


    https://my.f5.com/manage/s/article/K14513

    https://my.f5.com/manage/s/article/K97995640

     

    Please rate this reply if it helps isolate your issue.

    HTH

    F5 Design Engineer