Forum Discussion

Steve_87971
Jul 03, 2012

Proxy Maximum Segment not working as expected

Hi all,



I'm running a web-facing (as opposed to internal) IIS / ASP application behind an HA pair of 3900s running 10.2.3. We'd like to use the F5s to squeeze every last drop of performance from the application. I guess we all do...



One thing we've noted from tcpdumps on the client and server side is that about 60% of our clients perform their TCP three-way handshake advertising a maximum segment size (MSS) *other* than 1460, while *all* responses from the servers appear to come back at 1460.



I figure this means the F5 must chunk those responses into two packets before sending them on to the client, e.g. if a client has an MSS of 1400 but the server responds with 1460-byte segments, the F5 must be splitting each response from the server into two packets - one at 1400 bytes and another at 60 - before sending them on. The F5s are fast, but surely there's a time penalty?
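The re-segmentation arithmetic described above can be sketched as follows (this is just an illustration of the splitting, not how the F5 implements it; the function name is made up):

```python
def segment_sizes(payload_len, mss):
    """Split a payload into TCP segments no larger than the MSS."""
    sizes = []
    remaining = payload_len
    while remaining > 0:
        take = min(mss, remaining)
        sizes.append(take)
        remaining -= take
    return sizes

# A full 1460-byte server segment, re-segmented for a client whose MSS is 1400:
print(segment_sizes(1460, 1400))  # [1400, 60]
```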



We've modified both the client and server TCP protocol profiles to "Proxy maximum segment", yet the tcpdumps we perform still show 1460 sized packets coming from the server.



Our production VS uses OneConnect with the IIS servers having "HTTP keep-alives", but even on a POC VS without OneConnect and HTTP keep-alives turned on we still don't see the proxy maximum segment setting behaving as we expect, i.e. from the example above the server does not respond with 1400 sized packets.



My question for the group - is my understanding of the setting incorrect, or do we have an issue here?



The details of the TCP profile (and its parents) are below:


profile tcp PROD_Profile_TCP-WAN {
   defaults from tcp-wan-optimized
   proxy mss enable
   nagle disable
}

profile tcp tcp-wan-optimized {
   defaults from tcp
   selective acks enable
   nagle enable
   proxy buffer low 131072
   proxy buffer high 131072
   send buffer 65535
   recv window 65535
}

profile tcp tcp {
   reset on timeout enable
   time wait recycle enable
   delayed acks enable
   selective acks enable
   proxy max segment disable
   proxy options disable
   deferred accept disable
   ecn disable
   limited transmit enable
   nagle enable
   rfc1323 enable
   slow start enable
   bandwidth delay enable
   ack on push disable
   idle timeout 300
   time wait 2000
   fin wait 5
   close wait 5
   send buffer 32768
   recv window 32768
   keep alive interval 1800
   max retrans syn 3
   max retrans 8
   congestion control highspeed
   zero window timeout 20000
}

4 Replies

  • These are the first lines of the TCP dumps when my client downloaded the webpage from the POC server, with "Proxy Maximum Segment" enabled on the client TCP profile.



    The server is ..., my workstation is ... - these IPs and some names have been changed to protect the incompetent:



    [me@ltm3:Active] config tcpdump -ni VLAN_external dst host and src host


    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode


    listening on VLAN_external, link-type EN10MB (Ethernet), capture size 108 bytes
    15:11:00.809566 IP > S 1715574144:1715574144(0) win 65535 <mss 1360,nop,nop,sackOK>



    [me@ltm3:Active] ~ tcpdump -ni VLAN_external src host


    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode


    listening on VLAN_external, link-type EN10MB (Ethernet), capture size 108 bytes


    15:11:00.809580 IP > S 2543644914:2543644914(0) ack 1715574145 win 4080 <mss 1460,sackOK,eol>




    It looks like the client is sending an MSS of 1360, yet the server is responding with 1460.
  • MSS is not a negotiated parameter; each side merely announces the MSS it is capable of receiving. So the BIG-IP, when it is proxying MSS, will simply pass the client's MSS as advertised to the server, and the server's MSS as advertised to the client.
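To put that announcement rule in code (a toy model, not BIG-IP internals; the function name and the 40-byte header assumption are mine): each side announces what it can receive, and a sender then caps its segments at the smaller of its own link limit and the peer's advertisement.

```python
def effective_send_mss(local_mtu, peer_advertised_mss):
    """Segment size a sender will actually use: capped by its own link
    (MTU minus 20-byte IP and 20-byte TCP headers) and by the MSS the
    peer announced it can receive."""
    local_limit = local_mtu - 40
    return min(local_limit, peer_advertised_mss)

# Client advertised 1360; a sender on a 1500-byte-MTU link sends <= 1360 to it:
print(effective_send_mss(1500, 1360))  # 1360
```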
  • Hi Jason,



    I'd completely misunderstood MSS as is pretty obvious now.



    With proxy maximum segment on, a tcpdump on the internal interface (towards the member server) shows the same MSS as the client advertised - this is what changes, not the return traffic from the member server as I'd originally expected!



    Thanks for the speedy response and confirmation.



    Cheers, Steve
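Steve's observation above reduces to a one-line rule, sketched here as a toy model (purely illustrative; the names are made up, this is not BIG-IP code): with Proxy Maximum Segment enabled, the MSS the proxy advertises in its server-side SYN is the one the client advertised; with it disabled, the proxy advertises its own.

```python
def server_side_syn_mss(client_advertised_mss, proxy_default_mss, proxy_mss_enabled):
    """MSS the proxy advertises when opening the server-side connection."""
    return client_advertised_mss if proxy_mss_enabled else proxy_default_mss

print(server_side_syn_mss(1360, 1460, True))   # 1360 - client's MSS passed through
print(server_side_syn_mss(1360, 1460, False))  # 1460 - proxy's own MSS
```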
  • I completely misunderstand TCP parameters all the time. I keep a good reference handy. :)