Forum Discussion
Time_wait - why so short?
First, you need to understand that no default setting, for anything, will ever fit all scenarios. I don't know why F5 Product Development chose 2 seconds, but I can give you some technical background on why I think it is a reasonable choice.
You also need to understand that TCP's RFC 793 is from 1981. Network speeds, and network quality, were a lot different then than they are today.
https://tools.ietf.org/html/rfc793
“For this specification the MSL is taken to be 2 minutes. This is an engineering choice, and may be changed if experience indicates it is desirable to do so.”
If you read that RFC, you will get a very good explanation of why TCP needs TIME_WAIT.
The two reasons it gives are mainly related to delayed packets still arriving after the connection closes. If you compare the speeds of 1981 with the speeds of today, I am sure you will agree they are different, so we can't blindly apply a default defined in 1981 to the networks we have in 2016.
If you search for Linux TCP TIME_WAIT, you will see that Linux has kernel settings to deal with this (for example net.ipv4.tcp_tw_reuse and net.ipv4.tcp_fin_timeout), and most Linux admin guides will tell you to tune them. Note that the TIME_WAIT length itself is hard-coded to 60 seconds in the Linux kernel (TCP_TIMEWAIT_LEN), rather than the RFC's 2 x MSL = 4 minutes.
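As a quick way to inspect those knobs, here is a minimal sketch. It assumes a Linux host with the standard /proc/sys layout; on other systems it simply reports the values as missing. The helper name is mine, not a standard API.

```python
# Hypothetical helper: read the Linux TIME_WAIT-related sysctls, if present.
from pathlib import Path

SYSCTLS = [
    "net/ipv4/tcp_fin_timeout",  # FIN_WAIT_2 timeout (often confused with TIME_WAIT)
    "net/ipv4/tcp_tw_reuse",     # allow reusing TIME_WAIT sockets for new outgoing connections
]

def read_tw_sysctls(base="/proc/sys"):
    """Return {sysctl_name: value_or_None} for the TIME_WAIT-related knobs."""
    result = {}
    for rel in SYSCTLS:
        p = Path(base) / rel
        # read_text() returns e.g. "60\n"; strip the trailing newline
        result[rel.replace("/", ".")] = p.read_text().strip() if p.exists() else None
    return result

print(read_tw_sysctls())
```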
Microsoft also has a solution for this (the TcpTimedWaitDelay registry value), and its documentation says there are benefits in reducing TIME_WAIT.
https://technet.microsoft.com/en-us/library/cc938217.aspx
The F5 is still a network device, used by different customers on different networks. You can't have a 4-minute TIME_WAIT on, for example, an ISP network, as you will run out of either ports or memory on the F5 unit, in the same way that other TCP default settings would not work well for an ISP network.
Also, my expectation is that a connection held on the server uses a lot less memory than a connection held on the F5, because the F5 probably allocates much more state per connection, not just TCP information. On any very busy device, if you increase TIME_WAIT to a very large value, you will end up without either available ports or memory for new connections.
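The port-exhaustion point is easy to put numbers on. A back-of-the-envelope sketch, assuming roughly 64,000 usable ephemeral ports for one client/VIP pair (the exact count depends on the configured port range):

```python
# Each closed connection holds its ephemeral port for the TIME_WAIT duration,
# so the sustainable connection rate for one source/destination pair is
# roughly ports / time_wait.
EPHEMERAL_PORTS = 64000  # assumed usable range; real value depends on config

def max_conn_rate(time_wait_seconds):
    """Connections/second before TIME_WAIT sockets exhaust the ports."""
    return EPHEMERAL_PORTS / time_wait_seconds

print(max_conn_rate(240))  # classic 2*MSL / Windows default: ~266 conns/sec
print(max_conn_rate(2))    # F5's 2-second default: 32000 conns/sec
```

With the 4-minute RFC value a single pair tops out at a few hundred new connections per second, while the 2-second default raises that ceiling by a factor of 120, which is why an aggressive default makes sense on a device terminating ISP-scale traffic.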
Anyway, this is just to give you an idea of why the 2 seconds. You can simply create a new TCP profile on the F5 unit with a different TIME_WAIT (anywhere from 0 to 600 seconds) and apply it to the virtual servers that connect to those Windows servers. You will probably see memory use increase over time.
I prefer the option of reducing the Windows setting to its minimum (30 seconds) and creating a new TCP profile with 30 seconds on the F5. This is the option I have used in the past for this type of issue with Windows servers.
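A tmsh sketch of that setup, assuming the `time-wait-timeout` option takes milliseconds (check the tmsh reference for your TMOS version, as option names and units can differ):

```shell
# Create a TCP profile inheriting from the default, with a 30 s TIME_WAIT
# (30000 ms, assuming milliseconds as the unit)
tmsh create ltm profile tcp tcp-timewait-30s defaults-from tcp time-wait-timeout 30000

# Attach it to the virtual server fronting the Windows servers
# ("my_windows_vs" is a placeholder name)
tmsh modify ltm virtual my_windows_vs profiles replace-all-with { tcp-timewait-30s }
```

On the Windows side you would pair this with TcpTimedWaitDelay set to its minimum of 30 seconds.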