Forum Discussion

Eridano_Di_Piet
Nimbostratus
Mar 25, 2010

Bad HTTP LB performance

Hi guys,

 

I have an F5 LTM pair used to load balance HTTP requests (actually HTTP/XML POSTs, since we're using SOAP).

 

Traffic comes from one client directly connected to the LB, hits the Virtual Server, and is then load balanced round-robin to a pool of 2 servers. We're using SNAT to change the client's source IP.

 

If the client is connected directly to a server (no LB), the server by itself can handle about 100 transactions per second. When we go through the load balancer, with 2 servers in the pool, we get only 80 tps! So it's actually worse than with one server alone... (see the measurement sketch at the end of this post).

 

I've tried to use:

 

1) a Standard virtual server with an HTTP profile and OneConnect

 

2) a Performance (HTTP) virtual server type

 

3) a Fast L4 type.

 

No significant variation with any of them.

 

I've also tried manipulating the profiles; nothing helps...

 

Is there anyone who has an idea of what the hell is going on?

 

Thanks in advance for your help.
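
For reference, here is the kind of single-connection POST loop that would take this sort of measurement (a minimal sketch only; the address, port, path, and payload are placeholders, not our actual SOAP client):

    import http.client
    import time

    HOST = "10.0.0.100"   # placeholder: a pool member or the virtual server
    PORT = 8080
    BODY = "<?xml version='1.0'?><Envelope/>"  # stand-in for a real SOAP payload

    count = 0
    start = time.time()
    while time.time() - start < 10.0:                  # measure for 10 seconds
        conn = http.client.HTTPConnection(HOST, PORT)  # new connection per transaction
        conn.request("POST", "/service", BODY, {"Content-Type": "text/xml"})
        conn.getresponse().read()                      # full reply = one transaction
        conn.close()
        count += 1
    print("%.1f tps" % (count / (time.time() - start)))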
  • Disable Nagle: create a custom tcp profile (or fastl4) with Nagle turned off and you'll probably see your performance increase. It's a common issue for this type of traffic (depending upon your message sizes). A sketch of what disabling Nagle means at the socket level follows below.

     

    -Matt
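
    On the BIG-IP this is a profile setting, but for illustration, here is the same idea at the endpoint socket level (just a sketch of the concept; the address and port are placeholders):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Disable Nagle: push small writes out immediately instead of coalescing
        # them while waiting for ACKs. Short request/response messages (like
        # small SOAP posts) can otherwise stall for a delayed-ACK interval each.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        sock.connect(("10.0.0.100", 8080))  # placeholder address and port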
  • Hi Matt,

     

    thanks for your suggestion; I've actually already done that. At the moment I'm using the tcp-lan-optimized profile (on both the client and server side), which has Nagle disabled, and besides that I also have an HTTP profile + OneConnect.

     

    I saw some improvement, but not that much...
  • Jesse_42849
    Historic F5 Account

     

    Have you tried disabling one server at a time and testing? Is it possible that one of them is running slow, but not the one you've tested directly?

     

    --jesse
  • Hi,

     

    yes, I tried that, without success.

     

    Actually, capturing one request on both the client side and the server side, I noticed that even though the server replies almost instantly, the LB waits before forwarding the traffic back... (The timing comparison below shows the same gap from the client's point of view.)
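
    One way to put a number on that gap from the client side (a hypothetical sketch; both addresses are placeholders for a pool member and the VIP):

        import http.client
        import time

        def one_post(host, port):
            # Time a single POST round trip; path and body are placeholders.
            conn = http.client.HTTPConnection(host, port)
            t0 = time.time()
            conn.request("POST", "/service", "<Envelope/>",
                         {"Content-Type": "text/xml"})
            conn.getresponse().read()
            conn.close()
            return time.time() - t0

        direct = one_post("10.0.1.11", 8080)    # pool member, bypassing the LB
        via_vip = one_post("10.0.0.100", 8080)  # through the virtual server
        print("direct %.1f ms, via VIP %.1f ms, LB adds ~%.1f ms"
              % (direct * 1e3, via_vip * 1e3, (via_vip - direct) * 1e3))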
  • Odd. I just did some testing with my profiles using a little XML-RPC server and client I hacked together (roughly like the sketch at the end of this post). The testing was all on Windows (I didn't have time to try this on *nix). By far the most important setting I toggled is "Ack on Push", which is apparently a big help for Mac and Windows stacks; if anyone knows the details on this I'd love to hear them. Nagle made a difference too, but not nearly as dramatic a one. Ack on Push is already enabled in the lan-optimized profile, so I'd expect we're dealing with something else here...

     

    What operating systems are involved here? Have you by chance looked at this thread? http://devcentral.f5.com/Default.aspx?tabid=53&forumid=31&tpage=1&view=topic&postid=11706051170771

     

    -Matt
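
    For reference, a throwaway XML-RPC echo pair along these lines, run as two separate scripts (a reconstruction using Python's standard library, not the exact test code; address, port, and payload size are placeholders):

        # --- server script: trivial echo service ---
        from xmlrpc.server import SimpleXMLRPCServer

        server = SimpleXMLRPCServer(("0.0.0.0", 8080), logRequests=False)
        server.register_function(lambda s: s, "echo")  # returns its argument
        server.serve_forever()

        # --- client script: hammer the echo method, report transactions/sec ---
        import time
        import xmlrpc.client

        proxy = xmlrpc.client.ServerProxy("http://10.0.0.100:8080/")  # VIP placeholder
        count, start = 0, time.time()
        while time.time() - start < 10.0:   # run for 10 seconds
            proxy.echo("x" * 200)           # small payload, like a short SOAP message
            count += 1
        print("%.1f tps" % (count / (time.time() - start)))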
  • Well, actually we're using Sun Solaris on both the client and server sides, so this should rule out the NetBIOS issue.

     

    One more thing I can add is that the serving pool is kind of a duplicated pool; let me explain.

     

    The VS having problems uses pool "A", composed of servers x and y listening on port 8080.

     

    Another VS uses another pool, let's say "B", composed of the same servers x and y listening on the same port 8080, but with a priority assigned to each member.

     

    I don't know if this can create problems; I don't think so, but you never know...

     

    Thanks for your help.