Forum Discussion
wtwagon_99154
Nimbostratus
Dec 05, 2008
Any simple / quick ways to improve web traffic
This may be an off-the-wall question, but I wanted to see if anyone had any recommendations for improving web traffic. We use pretty much the standard settings of the HTTP profile, and I'd like to know about any tweaks that improve web performance (even if only slightly).
Thanks in advance.
11 Replies
- JRahm
Admin
What kind of traffic is it? Are you also using the base tcp profile? What LTM version are you running?
- smp_86112
Cirrostratus
I'd have to agree with dennypayne. I have been studying this type of traffic for quite a few months now, and most of my efforts have been related to tweaking the tcp profile. I've learned quite a bit about the settings and how they affect HTTP traffic, so I'd like to see you post the TCP profile settings that are applied to the VIP you are referring to. Maybe there's something pretty simple you could tweak right off the bat.
- wtwagon_99154
Nimbostratus
Hey guys -
Thanks. I can confirm that I am using the standard HTTP profile and the standard TCP profile that comes with the F5. I am running the F5s on v9.4.3, but plan to upgrade to 9.4.5 in the near future. Clients are all external. Here is the http profile:
profile http http {
   basic auth realm none
   oneconnect transformations enable
   header insert none
   header erase none
   fallback "xxx"
   compress disable
   compress prefer gzip
   compress min size 1024
   compress buffer size 4096
   compress vary header enable
   compress http 1.0 disable
   compress gzip memory level 8k
   compress gzip window size 16k
   compress gzip level 1
   compress keep accept encoding disable
   compress browser workarounds disable
   compress cpu saver enable
   compress cpu saver high 90
   compress cpu saver low 75
   response selective chunk
   lws width 80
   lws separator none
   redirect rewrite none
   max header size 32768
   max requests 0
   pipelining enable
   insert xforwarded for disable
   adaptive parsing enable
   ramcache disable
   ramcache size 100mb
   ramcache max entries 10K
   ramcache max age 3600
   ramcache min object size 500
   ramcache max object size 50K
   ramcache ignore client cache control all
   ramcache aging rate 9
   ramcache insert age header enable
   compress content type include
      "text/"
      "application/(xml|x-javascript)"
}
TCP profile is also the standard one:
profile tcp tcp {
   reset on timeout enable
   time wait recycle enable
   delayed acks enable
   selective acks enable
   proxy max segment disable
   proxy options disable
   deferred accept disable
   ecn disable
   limited transmit enable
   nagle enable
   rfc1323 enable
   slow start enable
   bandwidth delay enable
   ack on push disable
   idle timeout 300
   time wait 2000
   fin wait 5
   close wait 5
   send buffer 32768
   recv window 32768
   keep alive interval 1800
   max retrans syn 3
   max retrans 8
}
Thanks for the help!
- smp_86112
Cirrostratus
What are the Proxy Buffer Low and High values?
First thing I'd do is bump both the send buffer and the receive window up to 65535.
Next I'd enable Proxy Maximum Segment, and probably Proxy Options too. I had a situation a couple of weeks ago where I discovered that our firewall was reducing the MSS advertised by the client (1460 bytes) down to 1380 bytes before it hit the LTM. But without Proxy Maximum Segment, the LTM advertised a full segment size (1460) to the server. The server sent 1460-byte response packets to the LTM, and the LTM split each of them into two packets before sending them on to the client - one of 1380 bytes and another of 80 bytes. I had assumed that the LTM would re-assemble the packets into full 1460-byte packets, but it did not; it simply split up each packet it received from the server. Enabling the Proxy Maximum Segment option made the LTM advertise the client's MSS (1380) to the server, so the server sent 1380-byte packets and the LTM just forwarded them on.
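Putting those tweaks together, a derived profile might look something like this (untested sketch - the profile name is just an example):
profile tcp tcp_tweaked {
   defaults from tcp
   send buffer 65535
   recv window 65535
   proxy max segment enable
   proxy options enable
}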
Next thing I see is Nagle. When you have Nagle enabled on the LTM, it will conflict with the Delayed Acknowledgements setting on clients (which is enabled on Windows and every other OS I've seen). When the LTM is sending data and has a segment pending which is not full-sized, it will sit on it until a) it has enough data to make a full-sized packet or b) it receives an ack from the client. But the client may be holding its ack while waiting for more data from the LTM, because of the Delayed Acknowledgement algorithm - causing a temporary deadlock. This is a good description of the problem:
http://www.stuartcheshire.org/papers/NagleDelayedAck/
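If that's what you're hitting, the fix on the LTM side is just the nagle setting in a derived tcp profile (again only a sketch, hypothetical profile name):
profile tcp tcp_nonagle {
   defaults from tcp
   nagle disable
}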
Also, over the past couple of days I've been studying the effects of disabling Slow Start. It appears to me that with Slow Start enabled, the LTM will only allow a small, limited number (3-4) of outstanding packets on the wire before it sits and waits for an acknowledgement. This slows down throughput tremendously. With Slow Start disabled, however, the LTM looks like it tries to completely fill the client's receive window before stopping and waiting. Someone will probably correct my description of this particular setting, but that sure looks like the effect to me in the tcpdumps I've done.
- JRahm
Admin
I'm with Denny on this one; I'd start with the tcp-lan-optimized profile (applied both client- and server-side, even on the WAN) and tune from there.
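One way to do that without touching the canned profile is to inherit from it and override selectively, e.g. (sketch only - the profile name and the override shown are hypothetical):
profile tcp tcp_lan_custom {
   defaults from tcp-lan-optimized
   idle timeout 300
}
- JRahm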
Admin
Sorry, that sounded awfully dismissive, smp. Several of the settings you've addressed are already in the optimized profile that comes canned in the 9.4.3 version, which is why I pointed wtwagon in that direction.
- smp_86112
Cirrostratus
Thanks, I was wondering about that. I am running 9.3.1. I thought I put that at the top of my post, but it must have gotten deleted during my editing.
citizen, I've also been planning to study the effect of disabling Bandwidth Delay, but I haven't had a chance yet. If you are familiar with the technical details of that particular setting, would you mind providing some info about how traces might look with it enabled versus disabled? It strikes me as a little proprietary, so I presumed detailed technical information might be difficult to track down.
- wtwagon_99154
Nimbostratus
Both the Proxy Buffer Low & High are set at 131072.
- JRahm
Admin
Your observations on slow-start are expected, as the LTM adheres to the specification for it in RFC 3390.
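For what it's worth, the RFC 3390 initial window formula lines up with the 3-4 packets you observed; assuming a 1460-byte MSS (my arithmetic, not from your dumps):
IW = min(4*MSS, max(2*MSS, 4380 bytes))
   = min(4*1460, max(2*1460, 4380))
   = min(5840, 4380)
   = 4380 bytes, i.e. 3 full-sized segments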
Regarding Bandwidth Delay, this setting enables the LTM to attempt to calculate the optimal bandwidth per client based on the bandwidth-delay product.
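The product itself is straightforward; for example, on a 10 Mbps path with a 50 ms RTT (illustrative numbers only):
BDP = bandwidth * RTT
    = 10,000,000 bits/s * 0.050 s
    = 500,000 bits = 62,500 bytes in flight to keep the pipe full
- wtwagon_99154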
Nimbostratus
Thanks for all the suggestions, that is much appreciated.
With that being said, what are the implications and drawbacks (if any) of moving to the tcp_lan_optimized profile? We deliver a large amount of dynamic content to the world, so I just want to make sure we don't impact that specific traffic.
I am running a beta test using the tcp_lan_optimized profile and everything seems fine thus far - but just wanted to make sure I am not impacting some of the dynamic services that we provide.
Thanks again.