#SDN #DevOps API design best practices apply to the network, too.
We (as in the industry at large) don't talk enough about applying architectural best practices to emerging API and software-defined models of networking. But we should. As we continue down the path of software-defining the network - using APIs and software development methodologies to simplify and speed the provisioning of network services - we run into, if not rules, then best practices that should be considered before we willy-nilly start integrating all the network things.
Because that's really what we're doing - integration. We just don't like to call it that because we've seen those developers curled up in fetal positions in the corner upon learning of a new software upgrade that will require extensive updates to the fifty other apps integrated with it. We don't want to end up like that.
Yet that's where we're headed, because we aren't paying attention to the lessons learned by enterprise architects over the years with respect to integration and, in particular, the design of APIs that enable integration and orchestration of processes.
Martin Fowler touches on this in a recent post "Microservices and the First Law of Distributed Objects":
The consequence of this difference is that your guidelines for APIs are different. In process calls can be fine-grained, if you want 100 product prices and availabilities, you can happily make 100 calls to your product price function and another 100 for the availabilities. But if that function is a remote call, you're usually better off to batch all that into a single call that asks for all 100 prices and availabilities in one go. The result is a very different interface to your product object.
Martin's premise is based primarily on the increased impact of performance and the possibility of failure on remote (distributed) calls.
The answer, of course, is coarser-grained calls across the network than those used in-process.
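Fowler's point is easy to sketch in code. The client classes and method names below (`get_price`, `get_prices`) are illustrative assumptions, not a real API; the thing to watch is the round-trip count:

```python
# Hypothetical product-catalog clients contrasting fine- and coarse-grained
# remote interfaces. Names here are assumptions for illustration only.

class FineGrainedClient:
    """One remote call per price lookup: 100 products = 100 round trips."""
    def __init__(self, catalog):
        self.catalog = catalog
        self.round_trips = 0

    def get_price(self, product_id):
        self.round_trips += 1  # each call crosses the network
        return self.catalog[product_id]


class CoarseGrainedClient:
    """Batched lookup: 100 products = 1 round trip."""
    def __init__(self, catalog):
        self.catalog = catalog
        self.round_trips = 0

    def get_prices(self, product_ids):
        self.round_trips += 1  # one call for the whole batch
        return {pid: self.catalog[pid] for pid in product_ids}


catalog = {f"sku-{i}": 9.99 + i for i in range(100)}

fine = FineGrainedClient(catalog)
prices_fine = {pid: fine.get_price(pid) for pid in catalog}

coarse = CoarseGrainedClient(catalog)
prices_coarse = coarse.get_prices(list(catalog))
```

Same data comes back either way; the coarse-grained interface just pays the network penalty (latency, failure probability) once instead of a hundred times.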
Which applies perfectly to automating the network.
Most network devices are API-enabled, yes, but they expose fine-grained APIs. Every option, every object has its own API call. Simply automating the provisioning of even something as basic as a load balancing service requires a whole lot of API calls. There's one to set up the virtual server (VIP) and one to create the pool that goes behind it. There's one to create a node (the physical host) and another to create a member of the pool (the virtual representation of a service). Then there's another call to add the member to the pool. Oh, and don't forget the calls to choose the load balancing algorithm, create a health monitor (and configure it), and then attach that health monitor to the member. And the calls to set up the load balancing algorithm metrics, too. And persistence - don't forget that for a stateful app.
I'll stop there before you throw rotten vegetables at the screen to get me to stop. The point is made, I think: the number of discrete API calls generally required to configure even the simplest of network services is pretty intimidating. It also introduces a significant number of potential failure points, which means whatever is driving the automated provisioning and configuration of this service must do a lot more than make API calls; it must also catch and handle errors (exceptions) and decide whether to roll back on failure, retry, or both.
Coarser-grained (and application-driven) API calls and provisioning techniques reduce this risk to minimal levels. By requiring fewer calls and leveraging innate programmability driven by a holistic application approach, the potential for failure is much lower and the interaction is much simpler.
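A coarse-grained, application-centric interface might instead accept a single declarative description of the whole service. The `deploy_service` endpoint and the document schema below are assumptions for illustration, not a real controller API, but they show the shape of the interaction - one round trip carrying desired state:

```python
# Sketch of a coarse-grained, declarative provisioning call.
# Endpoint name and schema are hypothetical.
import json


class StubClient:
    """Counts remote calls; stands in for a device or controller API."""
    def __init__(self):
        self.round_trips = 0

    def deploy_service(self, body):
        self.round_trips += 1  # the entire service in one round trip
        return json.loads(body)


service = {
    "virtual_server": {"address": "203.0.113.10", "port": 443},
    "pool": {
        "algorithm": "least-connections",
        "members": [{"host": "10.0.0.11", "port": 8443}],
        "health_monitor": {"type": "https", "interval": 5},
    },
    "persistence": "cookie",
}

client = StubClient()
deployed = client.deploy_service(json.dumps(service))
```

With the whole configuration in one call, the transaction either succeeds or fails as a unit - the rollback logic lives behind the API, with the party that actually understands the device's state, instead of in every integration that touches it.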
This is why it's imperative to carefully consider which software-defined model you'll transition to for the future. A model that centralizes control and configuration can be a boon, but it can also be a negative if it imposes a heavy API tax on the integration necessary to automate and orchestrate the network. A centralized control model that focuses on state rather than on execution of policy automation and orchestration offers the benefits of increased flexibility and service provisioning velocity while maintaining more stable integration methods.
The focus on improving operational consistency and predictability, and on introducing agility into the network, is a good one - one that will help address the increasing difficulty of scaling the network both topologically and operationally to meet demands imposed by mobility, security and business opportunities. But choose wisely: the means by which you implement the much-vaunted software-defined architecture of the future matters a great deal to how much success - and what portion of those benefits - you'll actually achieve.