on 22-Nov-2016 05:00
The proxy buffer is probably the least intuitive of the three TCP buffer sizes that you can configure in F5's TCP Optimization offering. Today I'll describe what it does, and how to set the "high" and "low" buffer limits in the profile.
The proxy buffer is the place BIG-IP stores data that isn't ready to go out to the remote host. The send buffer, by definition, is data already sent but unacknowledged. Everything else is in the proxy buffer. That's really all there is to it.
From this description, it should be clear why we need limits on the size of this buffer. Probably the most common deployment of a BIG-IP has a connection to the server that is way faster than the connection to the client. In these cases, data will simply accumulate at the BIG-IP as it waits to pass through the bottleneck of the client connection. This consumes precious resources on the BIG-IP, rather than on commodity servers.
So proxy-buffer-high is simply a limit where the BIG-IP will tell the server, "enough." proxy-buffer-low is when it will tell the server to start sending data again. The gap between the two is simply hysteresis: if proxy-buffer-high were the same as proxy-buffer-low, we'd generate tons of start/stop signals to the server as the buffer level bounced above and below the threshold. We like that gap to be about 64KB, as a rule of thumb.
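As a concrete illustration, the two limits can be set in a custom TCP profile via tmsh, keeping roughly that 64KB gap between them. The profile name and the specific values here are examples only, not recommendations:

```shell
# Create a custom TCP profile with a 128KB high-water mark and a
# low-water mark 64KB below it (illustrative values)
tmsh create ltm profile tcp example-tcp-wan \
    defaults-from tcp \
    proxy-buffer-high 131072 \
    proxy-buffer-low 65536
```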
So how does it tell the server to stop? TCP simply stops increasing the receive window: once the advertised bytes available have been sent, TCP will advertise a zero receive window. This stops server transmissions (except for some probes) until the BIG-IP signals it is ready again by sending an acknowledgment with a non-zero receive window advertisement.
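As a toy model (my own simplification, not BIG-IP's actual implementation), you can think of the window offered to the server as shrinking with proxy buffer occupancy and hitting zero at the high-water mark:

```python
def advertised_window(buffered_bytes: int, proxy_buffer_high: int) -> int:
    """Toy model: the window offered to the server shrinks as the proxy
    buffer fills, reaching zero once proxy-buffer-high is hit."""
    return max(0, proxy_buffer_high - buffered_bytes)

# Empty buffer: the full window is available.
print(advertised_window(0, 131072))       # 131072
# Buffer at the high-water mark: zero window, server stops sending
# until a window update reopens it.
print(advertised_window(131072, 131072))  # 0
```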
Setting a very large proxy-buffer-high will obviously increase the potential memory footprint of each connection. But what is the impact of setting a low one?
On the sending side, the worst-case scenario is that a large chunk of the send buffer clears at once, probably because a retransmitted packet allows acknowledgement of a missing packet and a bunch of previously received data. At worst, this could cause the entire send buffer to empty and cause the sending TCP to ask the proxy buffer to accept a whole send buffer's worth of data. So if you're not that worried about the memory footprint, the safe thing is to set proxy-buffer-high to the same size as the send buffer.
The limits on proxy-buffer-low are somewhat more complicated to derive, but the issue is that if a proxy buffer at proxy-buffer-low suddenly drains, it will take one serverside Round Trip Time (RTT) to send the window update and start getting data again. So the total amount of data that has to be in the proxy buffer at the low point is the RTT of the serverside times the bandwidth of the clientside. If the proxy buffer is filling up, the serverside rate generally exceeds the clientside data rate, so that will be sufficient.
If you're not deeply concerned about the memory footprint of connections, the minimum proxy buffer settings that will prevent any impairment of throughput are as follows for the clientside: proxy-buffer-high no smaller than the send buffer size, and proxy-buffer-low no smaller than the serverside RTT multiplied by the clientside bandwidth.
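To make the arithmetic concrete, here is a small sketch (my own helper, not an F5 tool) that computes the bandwidth-delay product underlying the proxy-buffer-low floor, treating 1Kb as 1024b:

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bytes in flight for a given bandwidth and round-trip time."""
    bytes_per_second = bandwidth_mbps * 1024 * 1024 / 8
    return int(bytes_per_second * rtt_ms / 1000)

# 25 Mbps at 100 ms RTT -> 327,680 bytes
print(bdp_bytes(25, 100))
# 300 Mbps at 4 ms RTT -> 157,286 bytes
print(bdp_bytes(300, 4))
```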
If you are running up against memory limits, then cutting back on these settings will only hurt you in the cases above. Economizing on proxy buffer space is definitely preferable to limiting the send rate by making the send buffer too small.
First of all, great article! However, I still have some doubts about how exactly the flow of traffic works.
Let's assume we have a fresh TCP connection between BIG-IP and the backend server. To make things easier, let's assume Slow Start is disabled.
My assumption of the process is:
BIG-IP sends a SYN with window 64k (the size of the Receive Window in the serverside profile)
Server sends a packet with 1000B (after the 3WHS is finished)
BIG-IP passes the data to the clientside profile; it lands in the Proxy Buffer
BIG-IP transfers the data from the Proxy Buffer to the Send Buffer; the Proxy Buffer is now empty
BIG-IP sends the data to the client; the data is still in the Send Buffer as the client has to ACK it, so Send Buffer capacity is decreased by 1000B. Let's assume Nagle is enabled, so BIG-IP will not send more data until the ACK from the client arrives
BIG-IP does not change the window size on the serverside; there is plenty of space in the Proxy Buffer
Now there is a question: what happens when the next 1000B arrives from the server?
Is it placed in the Proxy Buffer?
Is it passed to the Send Buffer, or, because of Nagle, does BIG-IP know that if there is data in the Send Buffer it will not be able to send more to the client, so it keeps the data in the Proxy Buffer?
Then, assuming we still have no ACK from the client, the process is repeated until Proxy Buffer High is reached
Will BIG-IP start to reduce the window size according to the decreasing capacity of the Proxy Buffer?
At this time BIG-IP sends an ACK on the serverside with a zero window size
This will continue until the Send Buffer is empty (because of Nagle)
Then data is passed from the Proxy Buffer to the Send Buffer until Proxy Buffer Low is reached.
At this point BIG-IP sends an ACK with a non-zero window size. What will this window be? The difference between Proxy Buffer High and the number of bytes currently kept in the Proxy Buffer?
Then process continues.
I assume that when Nagle is not used, BIG-IP will pass the maximum allowed un-ACKed bytes from the Proxy Buffer to the Send Buffer and then start to accumulate data in the Proxy Buffer until Proxy Buffer High is reached.
Does the above make any sense, or am I completely wrong?
Nagle has no effect on delivering data to the send buffer. If the congestion and peer receive windows allow it, the data arrives in the Send Buffer and only then does the Nagle logic apply.
Thanks for the reply. So when does the flow of data from the Proxy Buffer to the Send Buffer stop? From some other articles I was under the impression that the Send Buffer only contains data that was sent to the client but unacknowledged; is that not true?
Is the Send Buffer filled with both sent-but-unacknowledged data and data not yet sent at all?
The Nagle usage in this question was only an attempt to simplify the data flow :-), so I am more interested in how and when data is passed from the Proxy Buffer to the Send Buffer.
Data moves to the send buffer when the receive window, congestion window, and configured send buffer limit are all larger than the current send buffer size.
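A rough sketch of that rule (my own simplification, not the actual TCP Express code) could be:

```python
def send_buffer_space(rwnd: int, cwnd: int, sndbuf_limit: int,
                      in_send_buffer: int) -> int:
    """Bytes the send buffer can still accept from the proxy buffer:
    the tightest of the peer's receive window, the congestion window,
    and the configured send buffer size, minus what is already queued."""
    return max(0, min(rwnd, cwnd, sndbuf_limit) - in_send_buffer)

# 8KB receive window, 8KB already sent and unacknowledged:
# no room for more data.
print(send_buffer_space(8192, 65536, 65536, 8192))   # 0
# Window grows to 16KB with 4KB still unacknowledged:
# 12KB can move over from the proxy buffer.
print(send_buffer_space(16384, 65536, 65536, 4096))  # 12288
```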
Well, that is trivial for you, but not for me 😞
So let's say the client advertised a receive window of 8KB; BIG-IP then can send (let's simplify this) 8KB of data without an ACK from the client.
So the Send Buffer can accept up to 8KB from the Proxy Buffer, assuming that 8KB was already sent and no ACK arrived?
Or can the Send Buffer accept data up to its size configured in the TCP profile?
If the receive window is 8KB and BIG-IP has already sent that much without acknowledgment, then the send buffer cannot accept any more data.
Thanks a lot. Then all the traffic from the backend server will be accumulated in the Proxy Buffer (until Proxy Buffer High is reached); am I right?
As soon as BIG-IP receives an ACK for part or all of the data sent, the appropriate amount of data will be passed from the Proxy Buffer. Let's say the client sends an ACK for 4KB; then 4KB will be moved from the Proxy Buffer to the Send Buffer, of course assuming that the client is still advertising an 8KB window.
If with the ACK the client advertises a window of 16KB, then I assume 12KB can be moved from Proxy to Send (16KB window - 4KB of unacknowledged data); is that more or less correct?
Thanks again, now this is much clearer for me 🙂
Sorry for more questions, but your articles and knowledge are the best source of info about the internal intricacies of the TCP Express stack.
First of all, it would be really great if you would consider creating an article with some real-life examples of how to analyze traffic processed by BIG-IP and then how to convert the results into fine-tuned TCP profile settings. Generic info about the TCP profile is a great base to start from, but it is still such a complicated matter that an average person (like me) quite soon gets lost 😞
Considering this article, I wonder if my understanding of the initial settings for the buffers is correct:
proxy-buffer-high - it's based on an assumption about how much data BIG-IP will theoretically send before there is a chance for an ACK to arrive. I guess it's theoretical because the receiver's window size can force BIG-IP to stop sending more data before this value is reached?
So for a client with bandwidth 25 Mbps and RTT 100 ms, the result will be 3,276,800 B/s × 0.1 s (bandwidth converted to bytes, assuming 1Kb = 1024b) = 327,680 bytes.
proxy-buffer-low - it's based on the assumption that the window update will first reach the backend server after one RTT, so BIG-IP should be able to buffer at least this amount of data.
So for a server bandwidth of 300 Mbps and RTT 4 ms, the result will be 39,321,600 B/s × 0.004 s = 157,286 bytes.
I saw advice stating to set it to 32k less than the high, but no more than 64k less. Why? If this is true, then the low calculated above is way too small, isn't it?
If the serverside profile receive window is 64k, then the backend server can theoretically send up to 64k of data without an ACK. Is that the reason to set the low 64k less than the high?
What, then, is the relation between the high-low gap and the receive window advertised on the serverside? Is it something like this: if the difference is bigger than the configured receive window, advertise the configured receive window; if it is less, advertise a receive window equal to the difference? It's probably more complicated, but is this close to what happens?
I am not sure why proxy-buffer-high = send-buffer-size. Is that because of the situation where all data sent to the client is ACKed (so the Send Buffer is empty), and then we should have enough data in the Proxy Buffer to fill up the Send Buffer again?
The whole purpose of this article is to provide suggestions on setting the proxy buffer values, with an explanation of why. I'm not sure how to explain it without repeating the post above.
If you would like to analyze flows to modify settings, the TCP Analytics delay state measurement will give you clues as to how.
Is there a typo in this sentence?
"The send buffer, by definition, is data already sent but unacknowledged, so it can't be in the send buffer."
Thanks, I clarified it.