
Forum Discussion

Pavo_177645
Jun 18, 2015

Intermittent TCP reset from pool member when large header + SSL

Hi,

 

I'm struggling to find the cause of TCP RSTs from pool members. It happens only for some otherwise identical requests: the request carries a >4 kB cookie and goes over HTTPS. The very same request sent over plain HTTP never gets a reset. I'm testing GET and HEAD requests with a browser and with curl, so even the simplest HEAD request fails when it goes over SSL and the headers exceed 4 kB. I've checked the maximum header size on the F5 profiles; it's 32 kB, so that's not the cause.

I've sniffed some traffic and I can see one difference in how the TCP stream is segmented after leaving the F5 (between the F5 and the pool member) when using SSL. The sequence of packet sizes with HTTPS is usually 1514, 1514, 1514, 102, 1280. "Usually" because sometimes it differs: 1514, 1514, 1514, 1316. With HTTP it is always the same: 1514, 1514, 1514, 1290.

 

The TCP RST seems to happen only when the small chunk (102 B) is present.

 

Is there any configuration setting in F5 to control this segmentation?
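To make the comparison concrete, here is a minimal sketch of the kind of test described above: build a cookie value large enough to push the request headers past 4 kB, then send the same HEAD request over HTTP and HTTPS. The hostname `vip.example.test` is a placeholder, not from this thread.

```shell
# Build a cookie value of ~4200 bytes so the request headers exceed 4 kB.
COOKIE=$(printf 'a%.0s' $(seq 1 4200))
echo "cookie bytes: ${#COOKIE}"

# Then compare the same HEAD request over HTTP and HTTPS
# (vip.example.test is a placeholder for your virtual server):
#   curl -sS  -I -H "Cookie: big=$COOKIE" http://vip.example.test/
#   curl -sSk -I -H "Cookie: big=$COOKIE" https://vip.example.test/
```

In the scenario described above, only the HTTPS variant would intermittently hit a TCP RST.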

 

4 Replies

  • I have seen similar behaviour when the certificate is spread over multiple packets. I believe there is even a SOL article on it, but I'm unable to find it now. I don't believe you can influence this behaviour; it is simply how the system works.

     

    You could try a support case, or just stick with HTTP.

     

  • I'm considering giving up and switching over to terminating SSL on Nginx, since with Nginx the issue does not occur.

     

  • Contact support; they can provide you with the exact cause and a possible fix.

     

  • Opened a support case, and they didn't help. We've spent many hours on the phone and over email with them. While they refrained from offering solutions, since they didn't want to acknowledge that a problem exists, we kept running our own investigation. As of today, after many tests, I can say that disabling OneConnect solves the problem completely. I don't fully understand why OneConnect would cause this pattern in TCP segmentation, but I know it goes away when OneConnect is disabled.
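    For reference, if OneConnect is attached to the virtual server as a profile, it can be detached with tmsh roughly as follows. The virtual-server and profile names here are placeholders, not from this thread; verify the exact names on your own system first.

    ```shell
    # List the profiles currently attached to the virtual server
    # (my_https_vs is a placeholder name).
    tmsh list ltm virtual my_https_vs profiles

    # Detach the OneConnect profile (my_oneconnect is a placeholder).
    tmsh modify ltm virtual my_https_vs profiles delete { my_oneconnect }
    ```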