Forum Discussion
astokes_6920
Nimbostratus
Nov 10, 2009
HTTP monitor with HEAD, not GET
While using the following monitor, I'm finding that the web servers are keeping TCP sessions in the TIME_WAIT state, rather than closing them outright.
GET /serverin.html HTTP/1.1\r\nConnection: Close\r\nHost: \r\n\r\n
My understanding was that by specifying HTTP/1.1 with the Connection: Close header and trailing carriage returns, I'd be forcing the FIN/FIN-ACK exchange and thereby closing the connection. Unfortunately, at any given time, I've got 3000-plus TIME_WAIT states on my web server courtesy of the F5. Obviously I'm missing something here.
Is it possible to use HEAD instead of GET within the send string to correct this? Or is there something else I need to add to the existing string to force an outright closing of the connection?
I've checked the archives here and am not finding a whole lot on the use of HEAD within a monitor.
Any assistance is greatly appreciated.
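For anyone who wants to reproduce the probe outside the LTM, here is a minimal Python sketch of the same request; the host, port, and /serverin.html path come from the monitor above, and the rest is illustrative. It reads until the server sends its FIN (recv() returning b''), so if it hangs until the timeout, the server isn't closing the connection itself:

```python
import socket

def probe(host, port, path="/serverin.html", timeout=5.0):
    """Send the monitor-style request and read until the server closes."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # b'' means the server sent FIN
                break
            chunks.append(data)
    return b"".join(chunks)
```

A well-behaved server returns the response and closes immediately; if the read only ends when the timeout fires, the close isn't happening on the server side.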
15 Replies
- hoolio
Cirrostratus
I thought the Connection header is a suggestion to the destination host that the sender doesn't intend to reuse the TCP connection. I think it might be up to the recipient to actually close it. RFC 2616 doesn't seem to state this clearly, but I've read it elsewhere:
http://www.jmarshall.com/easy/http/http1.1s4
If a request includes the "Connection: close" header, that request is the final one for the connection and the server should close the connection after sending the response.
Also, the monitoring daemon, bigd, may append one or more CRLFs to the end of the send string in HTTP/S monitors, depending on which LTM version you're using. This is described in SOL2167:
SOL2167: Constructing HTTP requests for use with the HTTP or HTTPS application health monitor
https://support.f5.com/kb/en-us/solutions/public/2000/100/sol2167.html
If you're on 9.4 or higher, can you remove the two \r\n's from the end of the send string and see if the connections are still left open for the full TIME_WAIT period?
I don't think using HEAD instead of GET will change the LTM or server behavior. It would also limit you to checking only the response headers (the server won't respond with a payload to a HEAD request).
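To see that difference concretely, here is a small sketch using Python's standard http.client (the server and path are placeholders): a HEAD response carries the same status line and headers as the GET, but no body, so a receive string could only match against those.

```python
import http.client

def check(host, port, method="HEAD", path="/serverin.html"):
    """Return (status, body) for a single non-keep-alive request."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request(method, path, headers={"Connection": "close"})
        resp = conn.getresponse()
        return resp.status, resp.read()  # body is b"" for a HEAD request
    finally:
        conn.close()
```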
If this still doesn't work after fixing the CRLFs, can you clarify what type/version of OS the server is?
Aaron
- astokes_6920
Nimbostratus
Thanks for that advice. I applied your suggestion of removing the two explicit CRLFs earlier today, but checking back three hours later, a "netstat -n" on the IIS server still shows over 3000 connections in TIME_WAIT.
GET /serverin.html HTTP/1.1\r\nConnection: Close\r\nHost:
The receive string is still simply a 200 OK.
HTTP/1.1 200 OK
My LTM is currently running 9.4. Our web servers are on Windows 2K3, with IIS 6.0.
I've done a TCP dump on the LTM and it looks like the FIN bit is set in the server response.
I guess my next step is to do a packet capture with Wireshark for more detailed output, and confirm that the Close is in the HTTP header sent by the LTM. If I see anything out of the ordinary, I'll post it here.
Thanks again.
- astokes_6920
Nimbostratus
I did get output from an ethereal packet capture.
The "close\r\n" is included in both the HTTP request header from the LTM and the response from the IIS server. It's obvious, however, that the IIS box isn't doing what it "agreed" to do and closing the connection outright. It's still leaving over 3000 connections in TIME_WAIT.
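A quick way to keep an eye on counts like these is to tally states straight from netstat output; here is a Python sketch assuming the Windows "netstat -n" column layout (Proto, Local Address, Foreign Address, State) — adjust the parsing for other platforms:

```python
from collections import Counter

def count_states(netstat_output):
    """Tally TCP connection states from `netstat -n` text output."""
    counts = Counter()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Windows data rows have exactly four columns and start with TCP
        if len(fields) == 4 and fields[0].upper() == "TCP":
            counts[fields[3].upper()] += 1
    return counts
```

Feeding it a saved capture, e.g. count_states(open("netstat.txt").read()), gives a per-state breakdown you can watch over time.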
Does anyone else have this issue using the built-in HTTP monitor template? I'm inclined to ask for help with an advanced monitor, but don't really see where that would help, as I believe I have evidence in hand that it's an IIS issue.
But I certainly can't be the only one experiencing this. Or are hundreds, if not thousands, of TIME_WAIT connections to be expected between the LTM and IIS?
- L4L7_53191
Nimbostratus
This really isn't a monitor issue. I always recommend adding a Connection: close header, which tells the web server to close the socket and be ready to accept another client request, rather than maintaining a keep-alive connection to the LTM (and occupying a socket that could happily service requests) even though it won't be reused.
TIME_WAIT is an artifact of the web server's TCP/IP stack and is common. For single-purpose systems - i.e., a web or app server that does one task, serving HTTP/S pages - it's common to tune the TIME_WAIT_REUSE and TIME_WAIT_RECYCLE timers on the server's OS to help minimize sockets sitting in this state. I don't know how to do this in Windows, but I'm sure some of the good folks out there will be able to chime in...
-Matt
- Justinian_48178
Nimbostratus
We're having this exact same issue. We did tune our TIME_WAIT timers (and, for good measure, increased our max ports), but I'd still love to find a way around this if anyone has one.
For the record, the registry changes for TIME_WAIT and max ports are below (obviously, use these at your own risk; I place no warranty on registry suggestions).
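The steps below can also be captured as a single importable .reg file; a sketch derived from those same values (DWORDs in .reg files are hexadecimal, so 65534 becomes fffe and 60 becomes 3c; the same at-your-own-risk caveat applies):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fffe
"TcpTimedWaitDelay"=dword:0000003c
```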
To add and configure the MaxUserPort parameter:
1. Start Registry Editor (Regedt32.exe).
2. Locate the following key in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
3. Right-click on Parameters and select New DWORD Value. Type MaxUserPort in the Name data box, type 65534 (Decimal) in the Value data box, and then click OK.
NOTE: The default setting for the MaxUserPort value is 5000 (Decimal).
4. Quit Registry Editor.
To add and configure the TcpTimedWaitDelay parameter:
1. Start Registry Editor (Regedt32.exe).
2. Locate the following key in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
3. Right-click on Parameters and select New DWORD Value. Type TcpTimedWaitDelay in the Name data box, type 60 (Decimal) in the Value data box, and then click OK.
NOTE: The default setting for the TcpTimedWaitDelay value is 240 (Decimal), which equals four minutes. Never set this value below 30.
4. Quit Registry Editor.
- L4L7_53191
Nimbostratus
FWIW I consider this to be a best practice: tuning your server stack for a specific workload.
There's no workaround for TIME_WAIT from the BigIP standpoint, really; the server's stack is going to do what it's configured to do. Most of the time servers are set up in a general way to try to accommodate the majority of use cases. If you've got a specific task a farm of servers will be performing, tune your stack in the optimal way for that workload.
To me, it's sort of the same as tuning BigIP's TCP profiles for a specific virtual server. It just happens to be a little more difficult to do, and it locks you into a set of server-wide behaviors, TCP/IP-wise.
From the workload standpoint, in the case of web servers, TIME_WAIT is a prime candidate for tuning (downward, in almost every case). Thanks for sharing the information above; as a *NIX-ish type of guy, I had no clue how to do this on Windows. This will be of great value.
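For the *NIX side of that comparison, the rough Linux analogue is a sysctl fragment; the values below are hypothetical, and note that on Linux the TIME_WAIT interval itself is fixed at 60 seconds, so tuning targets socket reuse and the adjacent FIN-WAIT-2 timer (verify the names against your kernel before applying):

```
# How long sockets may linger in FIN-WAIT-2
# (the TIME_WAIT interval itself is not tunable on Linux)
net.ipv4.tcp_fin_timeout = 30

# Allow the stack to reuse TIME_WAIT sockets for new outgoing connections
net.ipv4.tcp_tw_reuse = 1
```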
-Matt
- Daniel_23711
Nimbostratus
Has anyone gotten any further on this issue? I was running version 9.3 and recently upgraded to version 10.1, and immediately my HTTP monitors were causing issues on our web servers. I was seeing a lot of connections in TIME_WAIT, and the httpd processes' virtual memory continued to steadily increase until the host would eventually run out of swap space.
My current setup is Apache on the front end and Tomcat on the back end using AJP connectors. I have a JSP script that does some health checks and then responds with a simple 'STATUS=OK'. I have since removed all the JSP code and left just 'STATUS=OK' in mystatus.jsp, and I am still having the same issue where HTTP sessions never expire; however, this only happens on requests generated by the F5 HTTP monitors. I can use 'curl' all day long from my desktop and never see the httpd processes' memory fill up.
I did do a tcpdump, and I am seeing the FIN responses from the F5; all of the traffic viewed in Wireshark looks as expected, including when compared to the same requests/captures made from my desktop directly to the web server. I am at a loss here.
I have changed my HTTP monitor to a TCP monitor with the same GET request "GET /mystatus.jsp HTTP/1.1\r\n\r\n" and I continue to have the same issue, so the problem is not just in the HTTP monitor template. Definitely something changed from version 9.3 to version 10.1 that is causing this. When I disable the HTTP monitors, the web servers' httpd processes behave as expected, and I have a lot more traffic flowing through the F5 to my web servers than what the monitors are producing.
Any suggestions will be greatly appreciated. I am in the process of opening a ticket with F5, but I know that is going to be a longggg process of supplying tcpdumps, qkviews, and WebEx sessions, only for F5 to tell me either to upgrade to version 10.2 or that it's an issue with my web servers and to deal with it at that level :) But who knows, I might get lucky and they will have an answer for me.
- hoolio
Cirrostratus
Hi Daniel,
Do you see an HTTP 200 status logged in the access logs for the monitor requests? If not, can you add a blank Host header and Connection: close to the send string (GET /mystatus.jsp HTTP/1.1\r\nHost: \r\nConnection: close\r\n\r\n)?
Do you still have a 9.x LTM available? Can you compare the requests from a 9.x unit to the 10.2.x ones?
Aaron
- L4L7_53191
Nimbostratus
I second Aaron's comment about adding Connection: close - give this a try first. That should explicitly tell the server to close the connection rather than assume a keep-alive request.
-Matt
- Dazzla_20011
Nimbostratus
Hi,
Just wondered if there have been any further developments with this? I'm experiencing the same problems.
Thanks
Darren