In the history of technology, we have found time and again that multiple variables define any given problem domain. As militaries became more and more reliant on firearms and vehicles, “living off the land” became impossible, because gasoline and bullets, let alone the spare parts for everything that makes an army run, don’t come from the nearest farm. The same is true of hard drive performance: eventually disk speed was recognized as only half of the problem, with IOPS the other defining factor. More recently, VMs were powerful, but VM sprawl made careful accounting of resource usage imperative. Even though analysts and testing houses had looked closely and assured us that things like IOPS were not an issue in virtualized environments, they became a huge issue through device contention and saturation once we started hosting too many active VMs on the available hardware.
And always, we have worked through it. New ideas, better technology, standardized ways of approaching problems: all of these have resolved the issues at hand, once we were able to look at all the variables through a single lens. Often, the idea that drives a solution comes not only from looking at the problem differently, but from bringing in expertise from a completely different field of study. Von Moltke applied philosophical concepts to military actions, for example, to express how to wage war in an increasingly automated age. While there was much to his writings, perhaps the most philosophical bit was his (paraphrased) observation that “All sides must agree to an abiding peace; only one side need choose war.”
While working through the results from our most recent Performance Report, one of the very smart engineers we had running tests and analyzing results realized that there is a better way to look at the performance of a given Application Delivery Controller (ADC). A number of variables impact the performance of an ADC, and while publishing configurations and leveling the playing field by avoiding tests obviously slanted in one direction or the other are all good things, presenting the data in a way that helps IT staff determine which vendor, and even which device, is best for them is more difficult. Networks have variables that are not adequately represented in tests that evaluate only throughput or connections per second, and yet networks are precisely where any given ADC is designed to reside. In short, saying “this ADC can handle 150 billion connections per second!” is meaningless if those connections are not doing a thing. It’s more meaningful if those connections are doing something, but could be more so. If a test evaluates connections per second for files averaging 128 bytes, but your application is streaming video, the results will not offer a realistic view of what you can expect on your network.
Thinking about this topic, the engineer came up with an idea that, as hinted at above, was heavily influenced by another field of study entirely. To quote him:
In aviation, the performance envelope graph will show the performance you will get in a given configuration. A major feature of an aviation performance (flight) envelope graph is the outer edge that is normally outlined. This outline on the chart indicates the best performance that the aircraft can do without something interesting happening... (for various measurements of "interesting" from "huh, that's interesting" to "oh god, oh god, we're all going to die") hence the term "pushing the envelope".
This thought from another field led him to develop “ADC Performance Envelopes”. Sizing an ADC is not a precise science, simply because the number of variables is large in an actual network. How many connections at peak? How much data per connection, and how much does that number vary by usage? What is the growth pattern likely to look like for the foreseeable future? These questions and more must be answered to get a realistic picture of what you will need in an ADC moving forward. But the ADC Performance Envelope idea makes this work just a little bit easier.
Take the number of connections per second on one axis and the amount of data transferred on the other, then use tests designed to find the maximum functionality of the devices under test, and a picture of the performance potential of a given device begins to emerge. The result is a closed geometric shape that shows the boundaries of expected performance for the device in question. Normal functioning of that device on your network will fall somewhere within those bounds, as long as you understand the parameters of the bounds: that your transfer sizes are within the tested limits, for example.
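As a rough sketch of the idea, the upper edge of an envelope can be represented as a handful of measured maxima and interpolated between them. Everything below is illustrative: the numbers are invented rather than real test results, and `max_throughput_at` is a hypothetical helper name, not anything from the report.

```python
# Hypothetical boundary points from load tests on one device:
# (connections/sec in thousands, max sustained throughput in Gbps).
# These numbers are made up for illustration only.
BOUNDARY = [(0, 40.0), (200, 35.0), (500, 20.0), (800, 5.0), (850, 0.0)]

def max_throughput_at(cps, boundary=BOUNDARY):
    """Linearly interpolate the envelope's upper edge to estimate the
    best throughput the device sustained at a given connection rate."""
    if cps <= boundary[0][0]:
        return boundary[0][1]
    for (x1, y1), (x2, y2) in zip(boundary, boundary[1:]):
        if x1 <= cps <= x2:
            t = (cps - x1) / (x2 - x1)
            return y1 + t * (y2 - y1)
    return 0.0  # beyond the tested connections-per-second ceiling
```

For instance, at 350k connections/second this sketch estimates the edge halfway between the 200k and 500k measurements; the real curve between test points is unknown, which is exactly why understanding the tested boundaries matters.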
This creates a useful tool not only for sizing discussions, but also for resolving network bottlenecks. Given the envelope concept, it is possible to determine that a given device (assuming you have accurate test data for it) is unlikely to perform at the level you need, before ever putting it into the network.
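To make that pre-deployment check concrete, here is a minimal sketch, again with invented numbers and a hypothetical `fits_envelope` helper: treat the tested maxima as the envelope's edge and ask whether your expected workload point falls inside it.

```python
# Hypothetical envelope edge for one device: (connections/sec in
# thousands, max sustained throughput in Gbps). Invented numbers.
BOUNDARY = [(0, 40.0), (200, 35.0), (500, 20.0), (800, 5.0), (850, 0.0)]

def fits_envelope(cps, gbps, boundary=BOUNDARY):
    """True if a workload point (cps, gbps) lies inside the tested envelope."""
    if cps < boundary[0][0] or cps > boundary[-1][0]:
        return False  # outside the tested connection-rate range
    for (x1, y1), (x2, y2) in zip(boundary, boundary[1:]):
        if x1 <= cps <= x2:
            # interpolate the envelope's edge at this connection rate
            edge = y1 + (cps - x1) / (x2 - x1) * (y2 - y1)
            return gbps <= edge
    return False
```

A workload of 350k connections/second at 20 Gbps would fit this imaginary device; the same connection rate at 30 Gbps would not, flagging the device as a likely bottleneck before it ever reaches the rack.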
All test data has caveats, and we strongly suggest you understand the boundaries used and how they relate to your actual network usage. With that said, performance envelopes offer a more complete picture of how a device will perform for you.
Layer 4 Performance Envelopes
The area inside the graph is the likely boundary of performance for a given device. In our Performance Report, we offer Performance Envelopes for Layer 4, Layer 7, and SSL communications. Since L4 performance is about the best an ADC is going to offer, L7 is the normal use case, and SSL demonstrates the efficiency of off-loading encryption, together the envelopes form a decent picture of any tested ADC’s performance.
We believe these are a powerful tool for determining the best ADC for your network needs. Hopefully you find that to be true in real-life evaluations. Since they’re easy enough to create from test data, we’d love to hear if you develop your own, and how well they helped you visualize your needs.