#sdas #webperf Sometimes it's not whether you use hardware, but which hardware you choose to use
Everybody talks a good game with respect to changing the economy of scale, but very few explain how that's actually going to happen.
The reality in most enterprise data centers today is that services are expensive. Traditional architectures prescribe that a redundant set of network devices be deployed in order to preserve operational reliability (performance and availability). But given the hundreds of applications being delivered every day (and that number is growing), the cost associated with such architectures can quickly become prohibitive for most applications.
Yet all applications are important. If someone in the business took the time to procure or develop an application, then it's important to their specific function or to the business' bottom line. Adoption, and ultimately success, are at least partially dependent on making sure that application is secure, fast, and reliable.
But the business may not be able to afford to implement the network and application services required to achieve that, potentially dooming the application to a lackluster life of minimal use and a steady stream of frustrated user comments.
Changing the economy of scale means making the services required to keep an application secure, fast, and reliable affordable to the applications that need them - even the ones that aren't considered "critical" today (because they could be tomorrow). One way to achieve this is by sharing more of the infrastructure costs across more applications.
Virtualization achieved this by making it possible to deploy many "servers" on a single, shared hardware platform. The cost savings were dramatic - both in capital expenses (the hardware and software necessary) and the operational expenses (administrators could manage more "boxes" than ever before).
Certainly these lessons can be - and are being - applied to the network. But there are challenges in this approach, as services in the network have very different workload profiles than the applications being deployed on virtualized servers.
For example, stateful services in the network (those that operate at layers 4-7, such as load balancing, acceleration, and security) consume more disk and memory resources than their layer 2-3 counterparts. This is due in part to the need to store state (sessions) as well as data (caching, compression, and the like) and the policies that direct how to interact with devices and applications.
Traditional spinning-disk (HDD) storage is not necessarily well-suited to providing a high-performance platform for multiple services with these consumption profiles. It introduces a bottleneck at the storage I/O layer that can significantly limit the number of virtualized services that can be deployed without impacting performance. That reduces guest density - how many virtualized ADCs you can realistically support on a single hardware platform - which makes it harder to achieve the economy of scale needed to make sure all applications can use the services they need to be successful.
Enter the SSD (solid-state drive). SSDs are known to perform better (and last longer) than their HDD predecessors. Their performance characteristics relieve the pressure at the I/O bottleneck, making it possible to provision more guests on the same (shared) hardware without impacting performance. That means more services, available at lower cost (economy of scale), to more applications.
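To see why storage I/O caps guest density, a quick back-of-envelope calculation helps. All of the numbers below are illustrative assumptions (the per-guest I/O load of a virtual ADC varies widely in practice); only the rough 200x IOPS ratio comes from the figure cited later in this post.

```python
# Back-of-envelope guest-density estimate.
# These figures are illustrative assumptions, not vendor specifications.
HDD_IOPS = 150        # assumed: typical 7.2k-rpm spinning disk
SSD_IOPS = 30_000     # assumed: roughly 200x the HDD figure
IOPS_PER_GUEST = 500  # hypothetical steady-state I/O load of one virtual ADC

def max_guests(disk_iops: int, iops_per_guest: int) -> int:
    """How many guests fit before storage I/O saturates."""
    return disk_iops // iops_per_guest

print(max_guests(HDD_IOPS, IOPS_PER_GUEST))  # 0 - the HDD can't sustain even one
print(max_guests(SSD_IOPS, IOPS_PER_GUEST))  # 60
```

The absolute numbers matter less than the shape of the result: because density scales linearly with available IOPS, a 200x jump in storage throughput translates almost directly into more guests per box, which is exactly the economy-of-scale lever described above.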
Estimate provided by rideshareonline.com
It's like carpooling, only for the network. Instead of driving your car alone, you pool resources with three or four other folks (pretend they're ADCs) and voila! Not only do you reduce the cost of commuting to work, you also get there faster because you're using the express lane (SSDs deliver approximately 200x the IOPS of an HDD, so they're screaming fast).
F5 Synthesis 1.5: SSD-enabled High Performance Services Fabric
That translates into faster applications: the faster we can apply security, performance, or identity and access control services, the faster the application can be delivered. It's a win-win, with greater guest density across the services fabric supporting more applications that need a little boost in performance, tighter security, or additional controls on access.