Forum Discussion
Matt_Breedlove_
Nimbostratus
Jan 22, 2011
TCP and HTTP optimization help
We serve a website from North America that has 26 images on each page. We are on v10 on a VIPRION.
The clients are in Asia and number in the tens of millions; however, they sit behind a full TCP mega-proxy that fronts only a few public IPs. The North American site has two pools of servers serving just the HTML markup, and another cluster serving the images referenced by that markup.
Markup VIP
The markup VIP is in standard mode with an HTTP profile and iRule cookie-based multi-pool persistence (using a "cookie" parent profile), with round robin once a pool is selected. The iRule runs in the HTTP_REQUEST event and diverts GET requests to one or the other markup server pool based on whether a pool-specific cookie exists on each GET request.
Image VIP
The image VIP is also in standard mode with an HTTP profile and an iRule doing URI-based pool selection, but it uses no persistence, just pool-based round robin.
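For reference, this is roughly how I picture the image virtual in tmsh terms; the names and address are made up, and the attribute syntax is from the newer tmsh shell (on our v10 box most of this is actually done through bigpipe/the GUI):

# hypothetical names: standard virtual, HTTP profile, iRule for URI-based pool
# selection, no persistence profile attached
create ltm virtual vs_images destination 192.0.2.20:80 ip-protocol tcp profiles add { tcp http } rules { irule_images_by_uri }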
When the clients connect to the site, they first hit the markup VIP, receive the markup over a TCP connection, and then come back to the image VIP with 26 separate TCP connections to load each referenced image/asset.
The 26 connections are somewhat parallelized, but the page load time is still inadequate because the clients sit behind a slow, full TCP mega-proxy in Asia.
We run a lot of sites, but not many where the clients are all in Asia and the site is in North America.
Our typical config on the markup and image web servers themselves is to have keep-alive disabled in the web server, as well as the Nagle algorithm.
LTM TCP profile config
Then on each VIP we have a client-side TCP profile that also has keep-alive, slow start, and Nagle all disabled. The server-side TCP profile is set to use the client TCP profile.
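In tmsh terms, the client-side profile is roughly the following (hypothetical profile name, newer tmsh attribute names; I've left the keep-alive tuning out of the sketch and only shown the options I'm sure of):

# client-side TCP profile based on the default tcp profile,
# with Nagle and slow start turned off
create ltm profile tcp tcp_client_asia defaults-from tcp nagle disabled slow-start disabled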
The clients in Asia all come from a couple of mega-proxy IPs, which is why we use cookie persistence on the markup VIP: once a pool is persisted to, we want them to select only from that same pool of markup servers, and likewise the same server in that pool, until persistence times out. For the images VIP, however, we don't use cookie persistence and just rely on round robin.
What I would like to do is have the actual clients in Asia open a single TCP connection to the images VIP for all 26 images, and then have the images VIP transparently round-robin those 26 image GET requests across the servers behind the VIP, while leaving keep-alive off on the servers themselves behind the VIP.
So basically, the clients get a faster experience by not having to open 26 TCP connections to their mega-proxy, which would then open another 26 connections in turn to our images VIP. Between our own images VIP and the privately addressed image servers behind it, however, I have no concern about TCP connection overhead or server load, and I actually want more than one image server helping to serve the images on a single page for a single actual Asian client. I just want to simplify the TCP connections for the client while allowing as many image servers as possible to contribute to fielding the image elements on a page.
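If OneConnect turns out to be the way to get that, I assume the profile and its attachment would look roughly like this (hypothetical names, newer tmsh attribute syntax; the mask comments reflect my current understanding, which is part of what I'm asking about below):

# source-mask 0.0.0.0: any client-side connection may reuse any idle server-side connection
# source-mask 255.255.255.255: reuse only among connections from the same client source IP
create ltm profile one-connect oc_images source-mask 0.0.0.0

# attach it to the (hypothetical) image virtual alongside the existing HTTP profile
modify ltm virtual vs_images profiles add { oc_images }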
So here are the questions:
Questions on the Images VIP
On the images VIP, should I just enable a OneConnect profile? If so, what do I do with the mask in my situation? This seems more like a reverse-OneConnect situation. Do I need to enable keep-alive on the VIP's TCP profile? Is there any risk in enabling OneConnect for the images VIP in my situation? I do not want to use keep-alive at all between the LTM and the backend image servers; I would be okay with enabling keep-alive between the clients/mega-proxy and the images LTM... if that really is the right way to do this and OneConnect is not a good fit.
I want to maintain the true client's source IP in the web server logs (even though it's one of a few mega-proxy IPs).
We don't want to interfere with the accuracy of the internal image servers seeing the correct User-Agent, source IP, timestamp, etc., as we do now.
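If connection reuse would otherwise hide the original source address from the image servers, I assume the usual mitigation is X-Forwarded-For insertion in the HTTP profile, roughly like this (hypothetical profile name; the servers would then log the header instead of the TCP peer address):

# HTTP profile that adds an X-Forwarded-For header carrying the client source IP
create ltm profile http http_images_xff defaults-from http insert-xforwarded-for enabled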
Questions on the markup VIP
On the markup VIP, am I really getting my per-GET-request multi-pool persistence granularity without OneConnect? Is there any functional benefit in enabling OneConnect on the markup VIP if I already have per-GET-request granularity to divert requests between server pools based on cookie/pool persistence?
This is a summary of the markup iRule (it relies on the HTTP profile and the cookie parent profile) to give the basic idea; the pools contain different servers:
when HTTP_REQUEST {
    if { [HTTP::cookie exists "poolacookie"] } {
        set pool poola
        set cookie "poolacookie"
    } elseif { [HTTP::cookie exists "poolbcookie"] } {
        set pool poolb
        set cookie "poolbcookie"
    } else {
        # placeholder: some function assigns this client to pool A or pool B
        # and sets the matching variables; shown here defaulting to pool A
        set pool poola
        set cookie "poolacookie"
    }
    # persist on the chosen cookie for 1020 seconds, then send to the chosen pool
    persist cookie insert $cookie 1020
    pool $pool
}

Thanks for the help