TCP Pace Yourself
TCP congestion control strives to optimize network goodput while minimizing packet loss by moderating transmission speed. Unfortunately, the way congestion control moderates transmission often results in microbursts that can overflow buffers on switches and routers. The resulting packet loss in turn triggers congestion control to slow down by reducing the congestion window; TCP then ramps back up and the cycle repeats. This oscillation between under- and over-utilization of the network, combined with retransmission delays due to packet loss, causes application performance problems and a poor user experience.

How does congestion control work?

TCP moderates its transmission speed by calculating a target number of packets to keep in flight. This target is the congestion window (cwnd). Typically, the congestion window starts out small and is increased each time the remote system acknowledges receipt of data. In addition to adjusting the congestion window, TCP compares the congestion window to the number of packets currently in flight on the network. If the congestion window is greater than the number of packets in flight, the stack transmits the difference between the two.

When do packet bursts occur?

Stretch ACKs are acknowledgements that cover two or more segments of unacknowledged data. Depending on network conditions, a single stretch ACK can acknowledge all of the in-flight packets at once. When this happens, the TCP stack's count of packets in flight drops to zero, and the stack transmits a sudden burst of traffic to bring the in-flight total back up to the calculated congestion window value.

The exponential growth phase of TCP can also trigger packet bursts. During this phase, the TCP stack doubles the number of packets in flight each round-trip time. On high-bandwidth, high-delay networks, these packets travel as a tightly packed group. Once they arrive at the receiver, ACKs are generated in response, and because the data packets arrived close together, the ACKs also travel as a group. When this tight cluster of returning ACKs arrives back at the sending system, a burst twice as large as the previous one is generated.

Rate Pacing to the Rescue

To mitigate this bursty behavior, F5 introduced rate pacing to TCP Express in v11.5 of BIG-IP. Rate pacing analyzes traffic on a per-flow basis to determine the best speed at which to transmit packets. Rate pacing sends packets at the rate of the slowest-draining buffer, resulting in much smoother packet transmission. By sending data at a steady pace, large bursts of packets are prevented from traversing the network and overflowing buffers.
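To make the congestion-window accounting and the stretch-ACK burst concrete, here is a minimal Python sketch. It is not BIG-IP or kernel code; the Sender class and its growth rule are invented purely for illustration of the bookkeeping described above.

```python
# Minimal sketch of sender-side congestion-window accounting.
# All names and the growth rule are illustrative, not the BIG-IP implementation.

class Sender:
    def __init__(self, cwnd=10):
        self.cwnd = cwnd          # target number of packets to keep in flight
        self.in_flight = 0        # packets sent but not yet acknowledged

    def on_ack(self, packets_acked):
        """Process an ACK covering one or more segments.
        A stretch ACK acknowledges two or more segments at once."""
        self.in_flight -= packets_acked
        self.cwnd += packets_acked        # growth policy varies by phase
        return self.send_allowed()

    def send_allowed(self):
        """Transmit the difference between cwnd and what is in flight."""
        burst = max(0, self.cwnd - self.in_flight)
        self.in_flight += burst
        return burst

s = Sender(cwnd=10)
print(s.send_allowed())   # initial transmission: 10 packets at once
# A stretch ACK covering all 10 in-flight packets empties the pipe,
# so the very next send is a burst of the entire (now larger) cwnd.
print(s.on_ack(10))       # burst of 20 packets back-to-back
```

The key point is the last line: because nothing is left in flight after the stretch ACK, the whole window is released onto the network in a single burst.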
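The doubling during the exponential growth phase can be seen with a trivial loop. The numbers are illustrative only; real stacks cap this growth (for example at the slow-start threshold) and the exact behavior differs between implementations.

```python
# Illustration only: each round trip the ACK cluster clocks out a burst
# roughly twice the size of the previous one.
cwnd, rtt = 1, 0
while cwnd <= 64:
    print(f"RTT {rtt}: burst of {cwnd} segment(s) arrives as a tight group")
    cwnd *= 2
    rtt += 1
```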
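As a rough illustration of the pacing idea (not the actual TCP Express algorithm), a paced sender spreads the congestion window across the round-trip time instead of emitting it as one burst. The cwnd-over-RTT rate below is the standard approximation; the constants are made up.

```python
# Rough sketch of rate pacing: spread the window evenly across the RTT
# instead of sending a cwnd-sized burst the moment the pipe empties.
# The cwnd/RTT pacing rule and the numbers are illustrative only.

MSS = 1460          # bytes per segment
cwnd = 20           # segments the sender may keep in flight
rtt = 0.050         # seconds

pacing_rate = cwnd * MSS / rtt    # bytes per second the path is absorbing
interval = rtt / cwnd             # gap between individual segments

print(f"pacing rate: {pacing_rate * 8 / 1e6:.1f} Mbit/s")
print(f"send one segment every {interval * 1000:.1f} ms "
      f"instead of {cwnd} segments back-to-back")
```

Spacing the segments this way keeps the queue in the slowest buffer along the path shallow, which is exactly the burst-smoothing effect described in the post.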