
Forum Discussion

SMilanic
Sep 19, 2017

Traffic to a vCMP guest gets marked on vCMP host with CoS 3?

Hi,

 

  1. Through testing, we have discovered that traffic to a self IP of one of the vCMP guests apparently gets marked with Class of Service (CoS) 3 on the vCMP host. For example, if I ping the self IP 10.82.27.130 from a VMware guest at 10.82.27.150 and capture this traffic on a network switch, I see the echo request with CoS priority 0 and the echo reply from the self IP with CoS 3. This is undesirable because it interferes with our Cisco UCS settings, which set a jumbo frame MTU of 2200+ for CoS 3.

     

    Looking through the configuration of the vCMP host, all CoS settings are at their defaults (unconfigured), so I suspect this is some internal setting. Can someone confirm this behaviour? Is there a way to disable CoS marking? (A capture sketch showing how the on-wire priority can be checked follows the second question below.)

     

  2. Is there a way to capture traffic on vCMP host interfaces for VLANs that go to vCMP guests? The VLANs are in different configuration partitions on the vCMP host. I have tried the following command on the vCMP host, but it captures no data: tcpdump -s0 -nn -i /Partition-01/VLAN_1880. Running tcpdump -s0 -i 0.0 captures STP, ARP, and syslog, but no traffic to the guests. Using tcpdump inside tmsh does not change anything.
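
    For question 1, in case it helps reproduce the observation, here is a minimal capture sketch for a point outside the BIG-IP (a SPAN/mirror port on a Linux host; eth0 is just a placeholder interface), assuming the frames are still VLAN-tagged at that point so the 802.1p priority bits are visible:

    # Print link-level headers (-e) so the VLAN tag and its priority bits show up;
    # 10.82.27.130 is the guest self IP from question 1, eth0 is a placeholder.
    tcpdump -e -nn -s0 -i eth0 'vlan and host 10.82.27.130 and icmp'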

     

Thanks for your thoughts!

 

Srecko

 

1 Reply

  • As it turns out, TMM-generated traffic is marked with CoS 3. You can change the CoS value starting with version 11.5.4 HF4. Details here: https://support.f5.com/kb/en-us/products/big-ip_ltm/releasenotes/related/relnote-supplement-hotfix-bigip-11-5-4.html

     

    Component: Local Traffic Manager

     

    Symptoms: For TMM-generated packets (such as ICMP requests), the existing behavior is that TMM uses a hard-coded value of 3 for the packet priority.

     

    Conditions: Packets are generated internally by TMM.

     

    Impact: There is no way to control the priority of those packets.

     

    Fix: A new db variable, tm.egress.pktpriority, has been added to set the packet priority of TMM-generated egress packets. The default is 3, with a range of 0-7.
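
    A quick sketch of how that change would be applied, assuming a BIG-IP running 11.5.4 HF4 or later (where, per the release note above, the variable exists); this uses the standard tmsh syntax for viewing and modifying a sys db variable:

    # Show the current value (default 3 per the release note above)
    tmsh list sys db tm.egress.pktpriority
    # Mark TMM-generated egress packets with priority 0 instead of 3
    tmsh modify sys db tm.egress.pktpriority value 0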