Forum Discussion

Jonathan_c
Jul 20, 2025

No traffic on network card in virtual Big-IP

Hi,

We have two virtual deployments of F5 version 17.1.2.2 in a VMware environment.

Each deployment has 3 network cards:

  1. MGMT
  2. external (for VSs)
  3. internal (for backend pool)

Lately we have started to experience very strange behavior: at random, the external NIC stops receiving traffic.

To resolve the issue we can restart the BIG-IP, or disable/enable the NIC on the VM side.

When taking pcaps on the BIG-IP, there is simply nothing coming in. Because there is no traffic to analyze, F5 support said they can't troubleshoot further and pointed us to VMware support.
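For reference, this is roughly how we capture on the BIG-IP side. The VLAN name "external" and the filter address are assumptions based on our setup; the `:nnn` suffix is the BIG-IP-specific modifier that adds TMM metadata to the capture, which F5 support tooling can interpret.

```shell
# Capture on the external VLAN from the BIG-IP bash shell.
# "external" is the assumed VLAN name; 192.0.2.10 is a placeholder VS address.
# -s0 captures full packets; :nnn adds TMM/flow metadata to each frame.
tcpdump -ni external:nnn -s0 -w /var/tmp/external.pcap host 192.0.2.10
```

During the outage the resulting pcap is empty, which is why the investigation moved to the hypervisor side.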

We took pcaps and logs on the VMware side, and it looks like traffic is not passing from the hypervisor to the VM, but support doesn't see any errors.
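For anyone hitting the same thing: a sketch of how the hypervisor-side capture can be done with pktcap-uw on the ESXi host, assuming shell access. Capturing at both the VM's switchport and the physical uplink shows at which point frames stop; the switchport ID and uplink name below are placeholders.

```shell
# On the ESXi host: list switchport IDs to find the BIG-IP's external vNIC.
net-stats -l

# Capture at that switchport (replace 50331660 with the real port ID).
# --dir 2 captures both directions (ESXi 6.7+).
pktcap-uw --switchport 50331660 --dir 2 -o /tmp/vnic.pcap

# In parallel, capture at the physical uplink (replace vmnic1 as needed).
pktcap-uw --uplink vmnic1 --dir 2 -o /tmp/uplink.pcap
```

If frames appear in the uplink capture but not at the switchport, the drop is inside the vSwitch/hypervisor path rather than in the guest.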

Because the issue is random, we can't reproduce it on demand.

We tried to correlate the issue with VM migrations or external backups but couldn't pinpoint anything.

We have a large virtual environment, and this issue only occurs on the two F5 machines (out of 400+ other VMs).

 

I'm not entirely sure, but I think the issue started after upgrading from version 16.1.4.1 to 17.1.2.2.

I was wondering if anyone stumbled upon something similar?

 

Thanks

4 Replies

  • Hi VGF5,

    I already saw this post, and although it looks similar, it refers to a VMware NSX environment, which we don't use.

    Today it happened again, and I noticed that if I try to ping one of the VSs, it doesn't work, but after a few seconds it somehow "refreshes" the NIC and everything starts working again. Just like disabling/enabling the NIC in the VM.
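Since the outages are random, one thing that helped us was leaving a small watchdog running that timestamps exactly when the VS stops and starts responding, so the events can be correlated with vMotion or backup windows. A minimal sketch; VS_IP is a placeholder for one of your virtual server addresses:

```shell
#!/bin/sh
# Hypothetical watchdog: ping a VS every 5 seconds and log state changes
# with timestamps, for later correlation with hypervisor-side events.
VS_IP="192.0.2.10"   # placeholder; replace with a real VS address
STATE="up"
while true; do
    if ping -c 1 -W 2 "$VS_IP" >/dev/null 2>&1; then
        [ "$STATE" = "down" ] && echo "$(date -Is) $VS_IP recovered"
        STATE="up"
    else
        [ "$STATE" = "up" ] && echo "$(date -Is) $VS_IP stopped responding"
        STATE="down"
    fi
    sleep 5
done
```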

    • VGF5

      Thank you for the update. If this issue persists, I recommend creating new instances, testing them, and observing the traffic. If everything works as expected, add those two devices to the sync group and delete the existing instances. 

      • Maybe I wasn't clear, but these are two standalone deployments and it happens on both of them randomly, not at the same time.

         

  • VGF5

    According to https://my.f5.com/manage/s/article/K000092620 this behavior is linked to a bug in the VMware hypervisor, particularly when a Transport Node Profile (TNP) is applied to a cluster without first detaching a pre-existing TNP. This can cause intermittent connectivity issues where:

      • The vNIC stops processing traffic.

      • You can't ping the self-IP associated with the affected NIC.

      • Traffic forwarding fails intermittently.

    Restarting the VM or toggling the NIC temporarily restores connectivity.