on 23-Sep-2013 06:06
#SDN #OpenFlow They aren't, seriously. They are not synonyms. Stop conflating them.
New technology always runs into problems with terminology if it's lucky enough to become the "next big thing." SDN is currently in that boat, with a nearly cloud-like variety of definitions and accompanying benefits. I've seen SDN defined so tightly as to exclude any model that doesn't include OpenFlow. Conversely, I've seen it defined so vaguely as to include pretty much any network that might have a virtual network appliance deployed somewhere in the data path.
It's important to remember that SDN and OpenFlow are not synonymous. SDN is an architectural model; OpenFlow is an implementation API. It is one possible southbound protocol, admittedly one that is rapidly becoming the favored son of SDN. So is XMPP, which Arista's CloudVision solution uses as its southbound protocol. So are the potentially vendor-specific southbound protocols that might be included in OpenDaylight's model.
It's certainly gaining mindshare: in a recent InformationWeek survey on SDN, a plurality of respondents had at least a general idea what OpenFlow is all about, and nearly half indicated familiarity with the protocol.
The reason it is important not to conflate OpenFlow with SDN is that the API and the architecture are each beneficial on their own. There is no requirement that an OpenFlow-enabled network infrastructure be part of an SDN, for example. Organizations looking for benefits around management and automation of the network might simply implement an OpenFlow-based management framework using custom scripts or software, without adopting an SDN architecture wholesale.
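As a minimal sketch of that "custom scripts" approach, the snippet below renders static forwarding rules as `ovs-ofctl add-flow` commands for an Open vSwitch bridge, with no controller involved. The bridge name, priorities, and the MAC-to-port policy are invented for illustration; `ovs-ofctl` and its flow syntax are real Open vSwitch tooling.

```python
# Generate Open vSwitch static-flow commands from a simple policy,
# no SDN controller required. Bridge "br0" and the policy are
# hypothetical examples.

def flow_command(bridge, priority, dl_dst, out_port):
    """Render one static L2 forwarding rule as an ovs-ofctl command."""
    match = f"priority={priority},dl_dst={dl_dst}"
    return f'ovs-ofctl add-flow {bridge} "{match},actions=output:{out_port}"'

# Hypothetical MAC-to-port mapping for a single bridge.
policy = [
    ("aa:bb:cc:00:00:01", 1),
    ("aa:bb:cc:00:00:02", 2),
]

commands = [flow_command("br0", 100, mac, port) for mac, port in policy]
for cmd in commands:
    print(cmd)
```

A cron job or config-management run pushing commands like these is OpenFlow-enabled automation, but nobody would call it an SDN architecture.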
Conversely, there are plenty of examples of SDN offerings that do not rely on OpenFlow, but rather on some other protocol of choice. OpenFlow is, after all, a work in progress, and some capabilities organizations require simply don't exist yet in the current specification - and thus in implementations.
Even ignoring the scalability issues with OpenFlow, there are other reasons why OpenFlow might not be THE protocol - or the only protocol - used in SDN implementations. Certainly for layers 2-3, OpenFlow makes a lot of sense: it is designed specifically to carry L2-3 forwarding information from the controller to the data plane.
What it is not designed to do is convey forwarding information from the higher layers of the stack, L4-7, which might require application-specific details on which the data plane will make forwarding decisions. That means there's room for another protocol, or an extension of OpenFlow, to enable the inclusion of critical L4-7 data path elements in an SDN architecture.
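To make the scope concrete, here is a toy flow table that only accepts the kinds of fields an OpenFlow-style match exposes. The field names follow OpenFlow conventions (`dl_` for L2, `nw_` for L3, `tp_` for L4), but the table itself is an illustration, not a real switch.

```python
# A toy flow table: matchable fields stop at L4 port numbers.
# Anything deeper in the payload (an HTTP Host header, say) has
# no field to match on - which is the gap described above.

MATCH_FIELDS = {"dl_src", "dl_dst", "nw_src", "nw_dst", "nw_proto",
                "tp_src", "tp_dst"}

class FlowTable:
    def __init__(self):
        self.flows = []  # (match_dict, action), in insertion order

    def add_flow(self, match, action):
        unknown = set(match) - MATCH_FIELDS
        if unknown:
            raise ValueError(f"not expressible in an L2-4 match: {unknown}")
        self.flows.append((match, action))

    def lookup(self, packet):
        for match, action in self.flows:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "flood"  # toy table-miss behaviour

table = FlowTable()
table.add_flow({"nw_dst": "10.0.0.5", "tp_dst": 80}, "output:3")

print(table.lookup({"nw_dst": "10.0.0.5", "tp_dst": 80}))  # output:3
```

Trying `table.add_flow({"http_host": "example.com"}, "drop")` raises a `ValueError` - there is simply no L7 field to hang the rule on, which is exactly why room remains for another protocol.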
The fact that OpenFlow does not address L4-7 (and is not likely to anytime soon) can be seen in the recent promulgation of service chaining proposals. Service chaining is emerging as the way L4-7 services will be included in SDN architectures.
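At its core, a service chain is just an ordered sequence of L4-7 functions a flow is steered through before normal forwarding. The sketch below models that idea; the specific services (a firewall and a load balancer) and their policies are invented stand-ins, not any vendor's implementation.

```python
# A service chain as an ordered list of L4-7 functions. Each service
# takes a packet (a dict of header fields here) and returns it,
# possibly modified, or None to drop it.

def firewall(pkt):
    """Hypothetical policy: drop telnet, pass everything else."""
    if pkt.get("tp_dst") == 23:
        return None
    return pkt

def load_balancer(pkt):
    """Hypothetical L4 load balancer: rewrite the destination to a
    pool member, chosen deterministically from the source port."""
    pool = ["10.0.1.10", "10.0.1.11"]
    pkt["nw_dst"] = pool[pkt["tp_src"] % len(pool)]
    return pkt

def apply_chain(chain, pkt):
    for service in chain:
        pkt = service(pkt)
        if pkt is None:
            return None  # dropped mid-chain
    return pkt

chain = [firewall, load_balancer]
out = apply_chain(chain, {"tp_src": 5001, "tp_dst": 80, "nw_dst": "10.0.0.100"})
print(out["nw_dst"])  # 10.0.1.11
```

In a real SDN deployment the "steering" between services is done with forwarding rules rather than function calls, but the composition model is the same.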
Lest we lay all the blame on OpenFlow for this direction, remember that there are issues around scaling and depth of visibility with SDN controllers directing L4-7 traffic, so the SDN architecture would likely have evolved to alleviate those issues anyway. But the lack of L4-7 support in OpenFlow is another line-item justification for extending the architecture, because the protocol lacks the scope to deal with the more granular, application-focused rules required.
Thus, it is important to recognize that SDN is an architectural model and OpenFlow an implementation detail. The two are not interchangeable, and as SDN matures we will see more changes to the core assumptions on which the architecture is based - changes that will require adaptation.
As an example of the power of SDN, consider the default OpenStack OVS plugin. In response to cloud management platform messages, it uses ovsdb agents to configure the switches (with overlay tunnelling, your network topology changes every time a tenant adds a network and plugs a guest VM port into it). OpenFlow agents are then used to inject flows - at network creation time and every time a guest VM is created on a compute node - which form a creative mesh of MAC-based security rules and forwarding flows, eventually pushing the frames into a more standard dynamic MAC-learning bridge adjacent to the guest VM. There is no flood-to-controller once the provisioning is done, because the controller is a 'canned application' in the core OVS plugin. Make no mistake.. it's SDN to the core!
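The per-port rules the comment describes can be sketched roughly as follows: when a guest VM port is plugged in, flows are injected that only admit frames sourced from the port's assigned MAC (anti-spoofing) and deliver frames addressed to that MAC. The rule format, priorities, and port numbers are invented for illustration; the real plugin's rules are more involved.

```python
# Rough sketch of MAC-based security and forwarding flows generated
# per guest VM port, in the spirit of the OVS plugin behaviour
# described above. All values are hypothetical.

def rules_for_port(port_no, mac):
    return [
        # anti-spoofing: only the assigned source MAC may enter from
        # the VM's port; hand conforming frames to the learning bridge
        {"priority": 100, "in_port": port_no, "dl_src": mac,
         "actions": "normal"},
        # everything else arriving on that port is dropped
        {"priority": 90, "in_port": port_no, "actions": "drop"},
        # deliver frames addressed to this VM out its port
        {"priority": 100, "dl_dst": mac, "actions": f"output:{port_no}"},
    ]

flows = rules_for_port(7, "fa:16:3e:00:00:01")
for f in flows:
    print(f)
```

Generating three small rules per port is all the "controller" has to do here - the flooding and learning is left to the standard bridge, which is precisely the layering the next paragraph argues for.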
For the above example, SDN builds the ports, SDN builds the security flow rules, and then SDN forwards frames to standard flooding-based Ethernet switching, which we know works with the guests. None of these stages has to be "all things to all networks," but when they are "flowed" together, we have a ton of functionality and scale. SDN allows you to layer your design for the specific application needs (in this case, OpenStack L2 tenant-defined networks). Vertically integrated network stacks hide all the layers, and you live with the limitations of the weakest link. The vertically integrated stack says, "you'll accept the networking you get and like it"; the SDN controller says, "I'll layer you a solution across network functions that scales and responds the way your application needs." Granted, 'what you get' from the vertically integrated network devices has been flexible enough to get us here, but now it is time to let the application designers work through the networking models they need.
With this layered SDN approach, the silicon-based wonders in the network can still be used for what they do best, via their management protocols, while the more nuanced flows, when required, get pushed to clusters of the "monsters of compute" we call servers (or VIPRIONs!!). That's why SDN is mentioned a ton alongside NFV. The magic is that the SDN application looks like a vertical network stack from the deployed application's perspective - but one that does exactly what it wants.
Just a point of clarity... OpenFlow supports L4 tp_src and tp_dst matching with masks to push/pop flows to the next hop. The real limitation of OpenFlow is that it is point-to-point.. as it is supposed to be. You might cripple the capacity of a current-generation merchant silicon TOR switch by using matching rules for fields deeper in the frame, but that's the fun you push out and do on the clustered server side. We need to recognize that switch silicon is rising to meet the market too.. at a price. So too is the server's network silicon getting smarter. All the core networking gear has to do is pass out the flows with the best data it can for an application. For most of that work, you can a priori inject a base set of flow policies that are as good as, if not better than, what we are doing with flooding in Ethernet switches today.
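The masked port matching mentioned here works like any other bitmask match: a value matches when it agrees with the pattern on every bit the mask keeps. The arithmetic below is standard masking semantics; note that support for arbitrary masks on L4 port fields varies by OpenFlow version and switch, so treat the specific ranges as illustrative.

```python
# Masked matching: one rule can cover a whole range of ports when
# the range aligns to a bit boundary. This is plain bitmask logic,
# the same semantics a TCAM entry implements in hardware.

def masked_match(value, pattern, mask):
    """True when value matches pattern under the given bitmask."""
    return (value & mask) == (pattern & mask)

# tp_dst=0x1f90/0xfff8 covers ports 8080-8087 in a single entry
# (0x1f90 == 8080; the mask ignores the low 3 bits).
print(masked_match(8083, 0x1f90, 0xfff8))  # True
print(masked_match(8088, 0x1f90, 0xfff8))  # False
```

One aligned masked rule in place of eight exact-match rules is exactly the kind of flow-table economy that keeps merchant silicon TCAMs from filling up.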