Forum Discussion

SSHSSH_97332
Nimbostratus
Oct 05, 2013

F5 Throughput

From the dashboard I can see the throughput graph of the box reaching 5 Gig, but we forwarded only 2 Gig of traffic from the upstream router. How can I check real-time interface traffic to compare it against that throughput figure? I'd think the sum of the interface traffic should match the throughput shown on the dashboard, right?

 

  • Just as an aside for others, the graph in the dashboard shows bps combined, in and out. That is, if you have 1Gbps of input (requests) and 4Gbps of output (responses), it'll show 5Gbps of throughput (the combination of in and out).
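
    To put the original numbers into that frame (a purely hypothetical split, since only the 2Gbps inbound figure is known and any BIG-IP-to-backend traffic would change the breakdown):

      requests_in_gbps=2     # what the upstream router actually forwarded
      responses_out_gbps=3   # hypothetical response volume, for illustration only
      echo "$((requests_in_gbps + responses_out_gbps)) Gbps shown on the graph (in + out combined)"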

     

  • And when you have multiple backend networks that communicate across the BIG-IP, that traffic will also add to the value.
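
    To check this from the CLI rather than the dashboard, something along these lines should work (the sub-command names are from memory, so treat them as a starting point and check tmsh's built-in help on your version):

      # Same counters the dashboard graph is drawn from (bits in / out / service)
      tmsh show sys performance throughput

      # Per-interface packet and bit counters, to sum up and compare against the graph
      tmsh show net interface

      # Re-run the commands a few seconds apart (or wrap them in 'watch') and diff
      # the counters to get an approximate real-time rate per interface.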

     

  • The graph in the dashboard shows a peak of 30.5Mb over a week, so hardly any traffic. Some new findings:

    1. When I run a tcpdump capturing all traffic to and from the server during a file transfer, I can see a 30-second pause at one stage where there is no traffic at all for that node on the F5.
    2. I tried a continuous ping with the packet size set to 1500 and it failed straight away. The maximum packet size I could use was 1454; anything above that fails, even when the node is pinging its own gateway on the F5 (a DF-bit ping test, sketched a little further down, makes this easy to reproduce). I have checked the F5 VLAN settings and the MTU is 1500, so I would assume this should just work.
    3. I have verified the ESXi server and switch MTU sizes, which are at their defaults, and a VM on the ESXi host can ping the switch with a packet size of 1500. So this is definitely not related to the Cisco switch.

     

    I am leaning towards the large packet size causing this issue. Any ideas?

     

    PS: Thanks guys for your valuable input as always.
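
    For reference, this is the kind of ping test involved. It's a sketch for a Linux host only; the gateway address is a placeholder, and on Windows the equivalent switches are "ping -f -l <size>":

      # 1472 bytes of ICMP payload + 28 bytes of ICMP/IP header = a 1500-byte packet
      ping -c 3 -M do -s 1472 192.168.1.1

      # 1426 + 28 = 1454, the packet size that reportedly still gets through
      ping -c 3 -M do -s 1426 192.168.1.1

      # -M do sets the Don't Fragment bit, so the largest payload that still succeeds
      # reveals the effective MTU of the path being tested.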

     

  • Looks like there is a lower MTU somewhere in the path. When I had a similar issue (a while back now) I found around 8 packets were being dropped (with the interval doubling between each) before the F5 started sending responses with the DF (Don't Fragment) flag NOT set.

     

    In the end I reduced the VLAN MTU (which by definition reduced the MSS the F5 would send on connection establishment) to overcome the issue. I never had any problems afterwards, but I've had some quite in-depth discussions around that change, and the consensus seems to be that I shouldn't have done it, as it isn't 'standards compliant'.

     

    PMTUD should help with this, but as firewalls tend to block ICMP traffic it can be an issue. Are you able to modify the MSS on the host sending/receiving the file(s)? That's the ideal solution.
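
    If the host is Linux, a couple of common ways to pin the MSS there look like this (the subnet, gateway and interface names are placeholders, and 1414 simply mirrors the value suggested later in this thread):

      # Per-route: advertise a smaller MSS for connections using this route
      ip route change 10.10.20.0/24 via 10.10.10.1 dev eth0 advmss 1414

      # Or clamp every outgoing SYN with netfilter instead
      iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1414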

     

    Alternatively, you can speed things up a bit by adjusting the Maximum Segment Retransmissions value in the TCP profile assigned to the VS.

     

    Also, in the same TCP profile, you could bump the Initial Receive Window Size to a value around 10 if the network between the F5 and the end host is reliable (especially if the end host runs a Linux kernel v2.6.38 or higher).
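
    In tmsh those two tweaks would look roughly like the following. The attribute names are from memory and shift a little between TMOS versions, so confirm them with "tmsh list ltm profile tcp all-properties" before relying on this:

      # Custom TCP profile based on the default 'tcp' parent; the values are examples only
      tmsh create ltm profile tcp my_custom_tcp defaults-from tcp \
          max-retrans 3 \
          init-rwnd 10

      # Then assign my_custom_tcp to the virtual server in place of its current TCP profile
      # and save the configuration: tmsh save sys config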

     

  • Sriram, if the VLAN MTU is set to 1454 then I'd expect PINGs with a larger packet size to fail. Any idea why the MTU is set to this value? Is the same value configured on the virtual machines/guests? Is VXLAN in play here?

     

    I'd suggest you change the VLAN MTU back to 1500. If you won't/can't and are using a FastL4 profile, I would suggest you do two things (there's a tmsh sketch after the list):

     

    • Enable/tick Reassemble IP Fragments
    • Enable Maximum Segment Size Override and specify a value of 1414 (1454 - 40)
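
    In tmsh, that pair of changes would look something like this (again, the attribute names are from memory; "tmsh list ltm profile fastl4 all-properties" will show the exact keys on your version):

      # Custom FastL4 profile with fragment reassembly and the MSS override suggested above
      tmsh create ltm profile fastl4 my_custom_fastl4 defaults-from fastl4 \
          reassemble-fragments enabled \
          mss-override 1414

      # Assign my_custom_fastl4 to the virtual server, then: tmsh save sys config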