3 Things Software and Hardware in the Data Path Must Have
#SDN #Devops #LineRate When you put software into the data path, there are some things it had better have to make sure it doesn't become a liability
The software-everywhere drumbeat continues to resound across the entire industry. Experts assure us that software can perform as well as hardware thanks to Moore's law and other advances in chip technology.
Let's assume that's true, and that the performance of software components is acceptable.
Now that that's out of the way, let's talk about some of the just-as-critical-but-less-mentioned capabilities that software must have if it's going to be running in the data path.
First, let's define what "in the data path" means, because some folks may not be familiar with the term.
You can think of "the data path" as the set of routers, switches, and network and application infrastructure through which data has to travel to get from a client to an application, and vice versa.
For example, this diagram has a red line depicting the data path from the client through the application server tier. Every element that path traverses is "in the data path".
In the past, generally speaking, everything in the data path (aside from the client and the application) was running on purpose-built hardware, designed to deal with failure in a way that ensured continued access (availability) to the application.
Moving to software does not mean the abrogation of such capabilities. Every element in the data path should provide three core capabilities, regardless of whether it resides on hardware, software or, as is increasingly the case, in the cloud.
The Three Things
1. Lights Out Manageability
First and foremost is management interface availability. More commonly referred to as "lights out management" in data center grade elements, this is the ability to log in and manage an element in the data path regardless of utilization on the element. This is critical in situations where elements in the data path might be overwhelmed by attack traffic, such as a SYN flood, which can drive resource utilization to 100% and effectively stop traffic in its tracks. In such an event it is crucial that operators and administrators be able to log in and do whatever needs doing to address the situation.
Software-deployed solutions that cannot support this requirement but are expected to reside in the data path should be viewed with skepticism.
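To make the requirement concrete, here's a minimal Python sketch of an external health check that verifies the management plane stays reachable independently of data-plane load. The address and port here are hypothetical placeholders; true lights-out management typically rides on a dedicated interface or service processor, which is exactly why it should be polled separately from the data-plane address.

```python
import socket

def management_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if the management interface accepts a TCP connection
    within `timeout` seconds. A sustained False while the data plane is
    saturated is exactly the failure mode lights-out management exists
    to prevent."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: poll a (hypothetical) dedicated management address rather than
# the data-plane VIP, so the check is independent of traffic load.
# management_reachable("10.0.254.10", 443)
```

An external monitor running this check on an interval gives operators early warning that an element has lost its management plane, before they discover it mid-incident.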
2. Management APIs
In a world inundated with a need for automation, orchestration and remote management, a set of accessible management APIs is an imperative. These APIs can be leveraged to pre-package integration with data center management (orchestration and automation) systems, used by devops practitioners to automate via popular toolsets like Puppet and Chef, or as a mechanism to enable custom integration and management solutions.
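As an illustration, here's a sketch (Python, standard library only) of the kind of REST call an orchestration system or a Puppet/Chef provider might make against an element's management API. The endpoint path, payload fields, and hostname are hypothetical; actual management APIs vary by vendor.

```python
import json
import urllib.request

def build_pool_request(base_url: str, pool_name: str, members: list[str]):
    """Construct a (hypothetical) REST request that provisions a load
    balancing pool through an element's management API."""
    url = f"{base_url}/api/v1/pools/{pool_name}"
    payload = json.dumps({"members": members, "monitor": "tcp"}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

def provision_pool(req: urllib.request.Request) -> int:
    """Send the request; in practice an automation toolchain, not a
    human, would invoke this as part of a provisioning workflow."""
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

req = build_pool_request("https://mgmt.example.net", "app-pool",
                         ["10.0.0.10:80", "10.0.0.11:80"])
```

The point is not this particular payload but that the operation is expressible as an API call at all, which is what lets it be wrapped in a Puppet resource, a Chef recipe, or a custom orchestration step.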
The ability to easily automate and orchestrate provisioning and management of systems in the data path is critical to maintaining an acceptable service velocity within the data center.
3. Programmability
One of the tenets of modern architectures is that every environment is unique. Furthermore, vendor refresh cycles for elements that reside in the data path tend to be longer than the industry's pace of change, particularly with respect to applications and security. This is a side effect of being in the data path: reliability and stability of such solutions are a must, and thus longer cycles are necessary to ensure proper testing and certification can be completed.
Thus, elements in the data path need some form of programmability that enables customization and rapid response to security and business events - especially those that operate at higher layers of the stack, such as layer 4-7 service solutions.
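As a sketch of what layer 4-7 programmability can look like, here's a hypothetical request-inspection hook written in Python. Real products expose this differently (F5 iRules use Tcl, for instance), and the header names and scanner signatures below are illustrative, but the shape is the same: operator-authored logic evaluated per request in the data path.

```python
def inspect_request(headers: dict,
                    blocked_agents=("sqlmap", "nikto")) -> str:
    """Decide how to handle a request based on its HTTP headers.

    Returns "forward", "reject", or "redirect". A programmable data path
    element lets operators deploy a rule like this immediately in
    response to a security event, without waiting for a vendor release
    cycle.
    """
    agent = headers.get("User-Agent", "").lower()
    if any(tool in agent for tool in blocked_agents):
        return "reject"      # drop known scanner signatures
    if headers.get("X-Forwarded-Proto", "https") == "http":
        return "redirect"    # push plaintext clients onto TLS
    return "forward"         # default: pass through to the pool
```

A rule like this is the "rapid response" in practice: when a new attack tool signature appears, the operator updates the blocked list and pushes it, rather than filing a feature request.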
It doesn't matter whether an element in the data path is deployed on hardware or software or cloud or a hypervisor or a rainbow. These three capabilities are critical for any element that resides in the data path, lest they end up impeding - or cutting off - the data path.