Tuning the TCP Profile, Part One

A few months ago I pointed out some problems with the existing F5-provided TCP profiles, especially the default one. Today I'll begin a pass through the (long) TCP profile to point out the latest thinking on how to get the most performance for your applications. We'll go in the order you see these profile options in the GUI.

But first, a note about programmability: in many cases below, I'm going to ask you to generalize about the clients or servers you interact with, and the nature of the paths to those hosts. In a perfect world, we'd detect that stuff automatically and set it for you, and in fact we're rolling that out setting by setting. In the meantime, you can customize your TCP parameters on a per-connection basis using iRules for many of the settings described below, something I'll explain further where applicable.

In general, when I refer to "performance" below, I'm referring to the speed at which your customer gets her data. Performance can also refer to the scalability of your application delivery due to CPU and memory limitations, and when that's what I mean, I'll say so.

Timer Management

The setting here with the biggest performance impact is Minimum RTO. When TCP computes its Retransmission Timeout (RTO), it takes the average measured Round Trip Time (RTT) and adds a few standard deviations to make sure it doesn't falsely detect loss. (False detections have very negative performance implications.) But if RTT is low and stable, that RTO may still be too low; the minimum is there to catch known fluctuations in RTT that the connection may not yet have observed.
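
To make this concrete, the standard calculation (RFC 6298) looks roughly like the sketch below, with Minimum RTO acting as a floor. This is illustrative Tcl only, not BIG-IP's actual implementation, and the example numbers are made up.

    # Illustrative sketch of the RFC 6298 RTO calculation with a Minimum RTO floor.
    proc compute_rto {srtt rttvar min_rto} {
        # srtt:   smoothed round-trip time, in ms
        # rttvar: smoothed RTT variation, in ms (the "few standard deviations")
        set rto [expr {$srtt + 4 * $rttvar}]
        return [expr {max($rto, $min_rto)}]
    }

    puts [compute_rto 50 5 200]   ;# 50 + 4*5 = 70 ms, raised to the 200 ms floor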

Set Minimum RTO too low, and TCP may improperly enter congestion response and reduce the sending rate all the way down to one packet per round trip. Set it too high, and TCP sits idle when it ought to retransmit lost data.

So what's the right value? Obviously, if you have a sense of the maximum RTT to your clients (which you can get with the ping command), that's a floor for your value. Furthermore, many clients and servers implement some form of Delayed ACK, which reduces ACK volume by sometimes holding ACKs back for up to 200ms to see if more data can be aggregated into them. RFC 5681 actually allows delays of up to 500ms, but this is less common. So take the maximum RTT and add 200 to 500ms.
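
For example (with made-up numbers), if ping shows a worst-case client RTT of about 80ms, the arithmetic is simply:

    set max_rtt 80   ;# worst-case RTT to clients, in ms, measured with ping
    puts "Minimum RTO: [expr {$max_rtt + 200}] to [expr {$max_rtt + 500}] ms"
    # -> Minimum RTO: 280 to 580 ms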

Another group of settings isn't really about throughput; instead, these settings help clients and servers close connections gracefully, at the cost of consuming some system resources. Long Close Wait, Fin Wait 1, Fin Wait 2, and Time Wait timers keep connection state alive to make sure the remote host got all of the connection-close messages. Enabling Reset On Timeout sends a reset (RST) when one of these timers expires, telling the peer to tear down the connection. Similarly, disabling Time Wait Recycle prevents new connections from reusing the same address/port combination, making sure that the old connection with that combination gets a full close.

The last group of settings keeps possibly dead connections alive, using system resources to maintain state in case they come back to life. Idle Timeout and Zero Window Timeout commit resources until the timer expires. If you set Keep Alive Interval to a value less than the Idle Timeout, then on the clientside BIG-IP will keep the connection alive as long as the client keeps responding to keepalives and the server doesn't terminate the connection itself. In theory, this could be forever!
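
As a side note on the programmability mentioned earlier: you can override the profile's Idle Timeout for individual connections from an iRule. Here's a minimal sketch; the event, the subnet, and the 300-second value are just illustrative assumptions, and IP::idle_timeout is documented in the iRules wiki.

    when CLIENT_ACCEPTED {
        # Hypothetical: give clients on a known long-polling subnet a longer
        # idle timeout than the profile default.
        if { [IP::addr [IP::client_addr] equals 10.0.0.0/8] } {
            IP::idle_timeout 300
        }
    }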

Memory Management

In terms of high throughput performance, you want all of these settings to be as large as possible up to a point. The tradeoff is that setting them too high may waste memory and reduce the number of supportable concurrent connections. I say "may" waste because these are limits on memory use, and BIG-IP doesn't allocate the memory until it needs it for buffered data. Even so, the trick is to set the limits large enough that there are no performance penalties, but no larger.

Send Buffer and Receive Window are easy to set in principle, but can be tricky in practice. For both, answer these questions:

  1. What is the maximum bandwidth (Bytes/second) that BIG-IP might experience sending or receiving?
  2. Of all the paths data might travel, which one has the highest minimum delay? (What is the "maximum of the minimums"?)

Then you simply multiply Bytes/second by seconds of delay to get a number of bytes. This is the maximum amount of data that TCP ought to have in flight at any one time, which should be enough to prevent TCP connections from idling for lack of memory. If your application doesn't involve sending or receiving much data on that side of the proxy, you can probably get away with lowering the corresponding buffer size to save on memory. For example, a traditional HTTP proxy's clientside probably can afford to have a smaller receive buffer if memory-constrained.
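
Here's a worked example with made-up numbers: suppose the virtual server might send at up to 200 Mbit/s, and the highest minimum path delay you expect is about 40ms.

    set bandwidth_Bps [expr {200 * 1000 * 1000 / 8}]   ;# 25,000,000 Bytes/second
    set delay_s 0.040                                   ;# 40 ms: the "maximum of the minimums"
    puts [expr {int($bandwidth_Bps * $delay_s)}]        ;# 1,000,000 Bytes -- roughly a 1 MB buffer

In that case, a Send Buffer and Receive Window of about 1 MB should keep connections from stalling for lack of buffer space.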

There are three principles to follow in setting Proxy Buffer Limits:

  1. Proxy Buffer High should be at least as big as the Send Buffer. Otherwise, if a large ACK clears the send buffer all at once, there may be less data available than TCP can send.
  2. Proxy Buffer Low should be at least as big as the Receive Window on the peer TCP profile (i.e. for the clientside profile, use the receive window on the serverside profile). If not, when the peer connection exits the zero-window state, new data may not arrive before BIG-IP sends all the data it has.
  3. Proxy Buffer High should be significantly larger than Proxy Buffer Low (we like to use a 64 KB gap) to avoid constant flapping to and from the zero-window state on the receive side.
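
Putting these three principles together, here's a minimal sketch with hypothetical numbers: a 1 MB Send Buffer on this side and a 64 KB Receive Window on the peer profile.

    set sendbuf       1048576   ;# this side's Send Buffer
    set peer_recvwnd  65536     ;# the peer profile's Receive Window
    set gap           65536     ;# the ~64 KB gap we like between high and low

    set proxybuf_low  $peer_recvwnd                                  ;# principle 2
    set proxybuf_high [expr {max($sendbuf, $proxybuf_low + $gap)}]   ;# principles 1 and 3
    puts "Proxy Buffer High: $proxybuf_high, Proxy Buffer Low: $proxybuf_low"
    # -> Proxy Buffer High: 1048576, Proxy Buffer Low: 65536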

Obviously, figuring out bandwidth and delay before a deployment can be tricky. This is a place where some iRule mojo can really come in handy. The TCP::rtt and TCP::bandwidth* commands can give you estimates of both quantities you need, even though the RTT isn't a minimum RTT. Alternatively, if you've enabled cmetrics-cache in the profile, you can also obtain historical data for a destination using the ROUTE::cwnd* command, which is a good (possibly low) guess at the value you should plug into the send and receive buffers.
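
One low-risk way to use these commands is simply to log what BIG-IP measures during a pilot period, then choose static profile values from the results. Here's a rough sketch; the event choice and the units of the reported values are my assumptions, so check the iRules wiki entries for TCP::rtt and TCP::bandwidth on your TMOS version.

    when CLIENT_CLOSED {
        # Record this connection's estimates so buffer sizes can be chosen later.
        log local0. "client [IP::client_addr]: rtt [TCP::rtt], bandwidth [TCP::bandwidth]"
    }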

You can then set buffer limits directly using TCP::sendbuf**, TCP::recvwnd**, and TCP::proxybuffer**. Getting this to work very well will be difficult, and I don't have any examples where someone worked it through and proved a benefit. But if your application travels highly varied paths and you have the inclination to tinker, you could end up with an optimized configuration. If not, set the buffer sizes using conservatively high inputs and carry on.
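
If you do decide to tinker, a per-connection sketch might look something like the following. Treat it as a starting point only: the event, the fallback size, and the argument forms for TCP::sendbuf, TCP::recvwnd, TCP::proxybuffer, and ROUTE::cwnd are my assumptions, so verify them against the iRules wiki before relying on it.

    when SERVER_CONNECTED {
        # Start from the cached congestion window for this destination (requires
        # cmetrics-cache); fall back to a conservative 128 KB if it isn't available.
        set bytes 131072
        catch { set bytes [ROUTE::cwnd] }

        # Size the send buffer and receive window to the estimate, and keep the
        # proxy buffer consistent with the principles above (assuming
        # TCP::proxybuffer accepts a high and a low value).
        TCP::sendbuf $bytes
        TCP::recvwnd $bytes
        TCP::proxybuffer [expr {$bytes + 65536}] $bytes
    }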

*These iRule commands are supported only in TMOS® version 12.0.0 and later.

**These iRule commands are supported only in TMOS® version 11.6.0 and later.

Published Jan 29, 2016
Version 1.0
  • Varma_Dantuluri (Historic F5 Account):
    In 'Proxy Buffer Limits', (3) recommends that 'Proxy Buffer Low' be significantly lower than 'Proxy Buffer High'. The default tcp profile 'tcp-mobile-optimized', recommended in https://devcentral.f5.com/articles/stop-using-the-base-tcp-profile has the same value for 'Proxy Buffer High' and 'Proxy Buffer Low'. Wondering if we should add a note in the above article to suggest creating a custom profile based on 'tcp-mobile-optimized' with a suggested gap between high and low?
  • And what is the reason the default tcp profile's predefined buffer values go against the rules mentioned?

    tcp:
      Proxy Buffer High: 49152 bytes
      Proxy Buffer Low: 32768 bytes
      Receive Window: 65535 bytes
      Send Buffer: 65535 bytes

    Also, the tcp-wan-optimized and tcp-lan-optimized profiles do not follow the last rule mentioned:

    "Proxy Buffer High should be significantly larger than Proxy Buffer Low (we like to use a 64 KB gap) to avoid constant flapping to and from the zero-window state on the receive side."