Forum Discussion
Mike_Lowell_108
Sep 11, 2007 (Historic F5 Account)
Any questions? Post'em
Hi everyone,
If you have any questions or comments about the performance report or its supporting documents, please feel free to post them here.
I'm one of the engineers who helpe...
Mike_Lowell_108
Apr 16, 2009 (Historic F5 Account)
Hmmm. Sounds like a pretty good setup to me. :) It's probably just a matter of tuning to get what you need.
Going only somewhat above 24 simusers could actually reduce throughput, but a lot more than 24 should increase it by ensuring that both Ixia and BIG-IP constantly have something to do. I definitely suggest trying 1020 (you have 1x 12-port client blade and 1x 12-port server blade, right?), since I've run similar tests in the past and had good luck with equivalent settings.
One challenge with tests that have only large responses is that it's hard to keep BIG-IP/Ixia busy with enough work. Like you say, it's only 30% utilized. :) BIG-IP/Ixia doesn't have to do much more work for 1500-byte packets compared to 64-byte packets, but it takes ~24x longer to send/receive the bigger ones (12.1us vs 0.51us, as mentioned above; that's just the speed of Ethernet). In the end it means you need a lot more concurrency to make sure there's always a queue of work waiting, otherwise BIG-IP/Ixia will be underutilized. On a much bigger scale it's the same reason that throughput on WAN links is often substantially lower than the available bandwidth: latency is the killer. Ethernet is obviously worlds faster than a WAN connection across the country, but the same principle applies.
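To make that concrete, here's a quick back-of-envelope sketch in Python (my own illustration, not from the report; the response size, per-transaction time, and 90% target below are made-up placeholders). It reproduces the per-frame serialization times quoted above and uses Little's law to estimate how many transactions need to be in flight at all times to keep a gigabit link busy:

# Back-of-envelope: why large frames need much more concurrency.
LINK_BPS = 1e9                       # gigabit Ethernet

def frame_time_us(frame_bytes):
    # Time to serialize one frame onto the wire, in microseconds.
    return frame_bytes * 8 / LINK_BPS * 1e6

print(frame_time_us(64))             # ~0.51us, the small-packet figure above
print(frame_time_us(1514))           # ~12.1us, the large-packet figure (roughly 24x longer)

# Little's law: concurrency needed = transaction rate x time per transaction.
resp_bytes = 16 * 1024               # hypothetical response size
txn_time_s = 0.020                   # hypothetical end-to-end time per transaction (20ms)
target_bps = 0.9 * LINK_BPS          # aim to fill ~90% of the link
txn_per_s  = target_bps / (resp_bytes * 8)
print(round(txn_per_s * txn_time_s)) # simusers that must be busy at every instant (~137 here)

The exact numbers don't matter much; the point is that the required concurrency scales directly with latency, which is why a handful of simusers can leave the box mostly idle even on a fast network.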
With Ethernet you're often better off having more physically unique clients -- this makes it easier to generate a truly concurrent workload. I can't say that I've tried a test with just 2x cards, but I'm guessing it'll still achieve the goal with some tuning, though it would be easier to ensure the needed parallelism with more blades (more unique Ethernet clocks means a greater chance of keeping a stream of "back-to-back" packets flowing). The smaller the number of unique Ethernet interfaces on the client/server, the harder you have to push them to ensure they're generating a constant stream of traffic that'll keep BIG-IP busy.
The most common issues I've run into with throughput tests that customers run are:
1) Not enough concurrency (i.e., see above)
2) Intermediate switch connecting test equipment to BIG-IP can't do line-rate (dropping packets...)
3) Switch and/or BIG-IP are using flow control too early (try manually disabling flow control on both sides instead of using auto)
4) Not enough client/server capacity (I don't think this could apply to you with 2x 12-port Ixia blades)
5) Bad cables/optics (rather unlikely, but not impossible, given the performance you're already getting)
6) Uneven distribution of clients/servers, causing one BIG-IP CPU or switch uplink to get overwhelmed (this typically only happens with L2 testing equipment where there's a small number of hard-coded MAC/IP/ports -- not likely to be your issue). You can check this by running "tmstat" and making sure the various links have roughly the same throughput (they're usually within 1%, but being within 10% is still not a problem in most cases); there's a quick way to eyeball that in the sketch after this list.
7) Some odd bug. It's always a good idea to make sure you're running the latest version. :)
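For #6, here's the sort of quick sanity check I mean -- just a sketch, assuming you've copied the per-link throughput figures out of "tmstat" into a dict by hand (the link names and numbers below are made up):

# Hypothetical per-link throughput samples (Mbit/s), copied from "tmstat" by hand.
link_mbps = {"1.1": 912.0, "1.2": 908.5, "1.3": 915.2, "1.4": 910.8}

worst = min(link_mbps.values())
best = max(link_mbps.values())
spread_pct = (best - worst) / best * 100

print("spread between busiest and idlest link: %.1f%%" % spread_pct)
if spread_pct > 10:
    print("uneven distribution -- check the variety of client/server MACs/IPs/ports")
else:
    print("looks evenly distributed (within 10%)")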
Mike Lowell