Which SDN protocol is right for you?
SDN's biggest threat is all us people talking about it!! In a recent article titled something along the lines of "Which SDN protocol should I use?", I found myself totally confused... Not the same kind of confused as entering a turnstile in the southern hemisphere (did you know they spin the other way down there?). No, I found myself wondering if any of us can agree on what SDN is.

A common comparison that has me scratching my big, shiny head is OpenFlow versus VXLAN or NVGRE. This is like comparing a Transformer (the shapeshifting robot, not the power supply) and a family sedan. What do I mean? Well, if you squint your eyes really hard and look at them from a long way away, then yes, there's a small relationship (both have wheels), but they do very different things. VXLAN and NVGRE are encapsulation protocols. They don't provide "programmable flow instantiation", which is what OpenFlow does, and that is SDN. If we are to label VXLAN and NVGRE as SDN, then we must also accept that older encapsulation protocols are SDN, too: 802.1Q VLAN tagging, GRE, etc. It pains me to even make this suggestion.

Let's say I put the Oxford English Dictionary down for a moment, while I climb off my high horse, and we do loosen the SDN term to include non-OpenFlow programmable flow instantiation (I prefer to call this simply "Programmable Networking"). Even then, this still doesn't include encapsulation protocols, for there is no programmable element to them. May I humbly suggest that dynamic routing protocols are closer to SDN than encapsulation protocols? At least dynamic routing protocols do alter flow in packet forwarding devices! That said, I would then have to add advanced ADCs to the mix, as they evaluate real-time state (performance/security/app experience) and use this to make constant, flow-altering decisions. This example is even closer to SDN than dynamic routing... it's just not using OpenFlow.

I'm all for abandoning the SDN acronym altogether. Its broad uses are far too ambiguous, and with ambiguity comes confusion, followed shortly by fear and uncertainty. That's it. I'm taking a stand. SDN has been struck from my autocorrect!!
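To put a finer point on the encapsulation argument, here's a minimal sketch (Python with scapy, assumed installed) of everything VXLAN does. Notice what's missing: no controller, no match/action rules, no programmable anything – it's just a wrapper:

```python
# VXLAN as it actually is: one frame wrapped inside UDP with a segment ID.
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The tenant frame we want to carry across the underlay.
inner = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / IP(dst="10.0.0.2")

# Encapsulation: outer Ethernet/IP/UDP headers plus a 24-bit VNI.
# There is no decision logic here -- no match, no action, no controller.
outer = (Ether() / IP(src="192.168.1.1", dst="192.168.1.2")
         / UDP(sport=49152, dport=4789)   # 4789 = IANA-assigned VXLAN port
         / VXLAN(vni=5000)                # segment identifier, nothing more
         / inner)

outer.show()  # inspect the stacked headers
```

An OpenFlow flow rule, by contrast, is a forwarding decision pushed into a device by a controller. Wrapping is not deciding.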
Stop Conflating Software-Defined with Software-Deployed

#SDN #SDDC Can we stop confusing these two, please? Kthanx.

For some reason, when you start applying "software" as a modifier to anything traditionally deployed as hardware, folks start thinking that means the hardware is going away. Software-defined networking (SDN) is no exception. There's a big difference between being software-defined and software-deployed. The former implies the use of software – APIs and the like – to configure and manage something (in this case, the network). The latter implies service functionality deployed in a software form factor, which could mean pure software or virtualized appliances. These two are not the same by any stretch of the imagination.

Software-defined networking – the use of software to control network devices – is not a new concept. It's been around for a long time. What is new with SDN is the demand to physically separate control and data planes and the use of a common, shared control-plane protocol or API (such as OpenFlow) through which to manage all network devices, regardless of vendor origins.

This abstraction is not new. If you look at SOA (Service-Oriented Architectures) or OO (Object-Oriented) anything, you'll see the same concepts as promoted by SDN: separation of implementation (data plane) from interface (control plane). The reason behind this model is simple: if you abstract interface from implementation, it is easier to change the implementation without impacting anything that touches the interface. In a nutshell, it's a proxy system that puts something between the consumer and the producer of the service. Usually this is transparent to the consumer, but in some cases wholesale change is necessary, as is true with SDN.

The reality is that SDN does not require the use of software-deployed data path elements. What it requires is support for a common, shared software-defined mechanism for interacting with and controlling the configuration of the data path.

Are there advantages to a software-deployed network element strategy? Certainly, especially when combined with a software-defined data center strategy. Agility, the ability to move toward a true utility model (cloud, anyone?), and rapid provisioning are among the benefits (though none of these is peculiar to software; all can also be achieved using hardware elements, just not without a bit more planning and forethought). The reason software-deployed seems to make more sense today is that it's usually associated with the ability to leverage resources laying around the data center on commodity hardware. Need more network? Grab some compute from that idle server over there and voila! More network.

The only difference, however, between this approach and a hardware-based approach is where the resources come from. Resources can be – and should be – abstracted such that whether they reside on commodity or purpose-built hardware shouldn't matter to the provisioning and management systems. The control system (the controller, in an SDN architecture) should be blind to the source of those resources. All it cares about is being able to control those resources the same way as all other resources it controls, whether derived from hardware or software.

So let us not continue to conflate software-defined with software-deployed. There is a significant difference between them and what they mean with respect to network and data center architecture.
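If the interface/implementation separation still sounds abstract, here's a minimal sketch of the pattern in Python. Every class and method name below is illustrative – this is the concept, not any product's API:

```python
# The control layer codes against an abstract interface, so the data-plane
# implementation (hardware or software) can change underneath it.
from abc import ABC, abstractmethod

class NetworkService(ABC):
    """The 'interface' -- the control-plane contract."""
    @abstractmethod
    def add_forwarding_rule(self, src: str, dst: str, port: int) -> None: ...

class PurposeBuiltAppliance(NetworkService):
    """A hardware-deployed implementation."""
    def add_forwarding_rule(self, src, dst, port):
        print(f"[ASIC] program TCAM: {src} -> {dst} via port {port}")

class VirtualAppliance(NetworkService):
    """A software-deployed implementation on commodity compute."""
    def add_forwarding_rule(self, src, dst, port):
        print(f"[VM] update software forwarding table: {src} -> {dst} via port {port}")

def provision(service: NetworkService) -> None:
    # The controller is blind to where the resource comes from.
    service.add_forwarding_rule("10.0.0.0/24", "10.1.0.0/24", 3)

provision(PurposeBuiltAppliance())  # swap implementations freely;
provision(VirtualAppliance())       # nothing touching the interface changes
```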
F5 Friday: SDN, Layer 123 SDN & OpenFlow World Congress and LineRate Systems (A Chat with F5's John Giacomoni)

#SDN #OpenFlow John G (that's what we call him here on the inside) answers some burning questions about F5 and SDN before jetting off to SDN & OpenFlow World Congress....

We get a lot of questions about not just F5 and what we're doing with SDN these days, but also about LineRate and where it fits in. So we thought we'd chat with our resident expert, John Giacomoni. John not only co-founded LineRate and joined us after the acquisition, but has since then emerged as one of our key subject matter experts on SDN in general. I caught up with John via e-mail just before he left for Germany to attend Layer 123 SDN & OpenFlow World Congress where he'll be presenting and mingling with other SDN folks.

Q: I heard that you'll be speaking at the Layer 123 SDN & OpenFlow World Congress in Germany next week, can you tell us a little bit about that?

Sure, I'll be presenting a talk on October 17th focusing on Application Delivery and SDN, which gives an overview of how SDN architectures embrace both applications and Layer 4-7 services, and presents a few Layer 7 use cases that bring powerful traffic management into data centers built around the SDN architecture. I'll also be participating on the 16th in the lunchtime debating table focused on "Transitioning from a Connection/Transport Model to an Application Delivery Model using Application Layer SDN" hosted by F5's VP for Service Provider Solutions, Dr. Mallik Tatipamula.

Q: So you recently joined us in February as part of our acquisition of LineRate Systems. Can you tell us a little bit about your role at F5?

Since transitioning to F5, my role has been an evolution of my former LineRate role such that I now wear two very different hats at F5. The most visible hat that I wear is that of the "lead" SDN strategist and evangelist. I have been evangelizing our vision through presentations at conferences, participation in the ONF's L4-7 working group, and authoring white papers and case studies. The less visible hat is my dual role as architect for the LineRate kernel and member of the LineRate go-to-market leadership team.

Q: Can you briefly summarize F5's SDN story?

Sure. The most important thing to understand is that SDN is an architecture and not any collection of technologies. That is to say, the central idea behind SDN is to realize operational benefits by centralizing network control into a single entity, typically referred to as an SDN Controller. It is the job of the SDN Controller, along with support from SDN plug-ins (applications), to do the work of implementing the architect's intent throughout the network. The SDN Controller accomplishes this by using open APIs that allow for programmatic configuration and by extending the data path element's directly programmable data path (extensibility).

F5 is extending SDN architectural discussions by introducing the concept of stateful packet forwarding data path elements that complement the much discussed stateless data path elements. There are a number of reasons, as I presented at the Layer 123 SDN & OpenFlow APAC Congress, for needing stateful L4-7 data path elements. The biggest reason is that to handle all the state transitions needed for L4 and L7 services, one effectively makes the SDN Controller an integral part of the data path, creating scalability issues for the controller, latency issues for traffic, and violating the core architectural principle of separation of the data and control planes.

Q: Can you give us a sense of how else you've been promoting F5's SDN vision?
I've been presenting at conferences, participating in the ONF's L4-7 working group, and authoring printed marketing collateral. My evangelism has been most noticeable at conferences, beginning in Singapore at the Layer 123 SDN & OpenFlow APAC Congress back in June, where I discussed how an SDN architecture is incomplete without application layer SDN, that is, stateful data path elements operating at Layers 4-7. I've also provided primary coverage for our booths at the Open Networking Summit and at Intel IDF in the Software Defined Infrastructure pavilion.

Q: So how does LineRate fit into SDN?

LineRate fits into SDN the same way as the rest of the F5 portfolio; that is, with APIs that allow it to be fully automated by a controller (programmable configuration) and to extend the data path in novel ways without vendor support (directly programmable). F5 has supported programmable configuration with its iControl API since its introduction in 2001 and has been directly programmable since 2004 with our introduction of iRules. Both APIs have been fully published and open since launch. F5 has also demonstrated integration with VMware vShield Manager and IBM SmartCloud Orchestration (SCO). The F5 SDC product has a SOAP API for configuration and a Groovy variant of Java for extensibility. The LineRate products have a REST API and a node.js API for JavaScript. The point is that F5 has a history of providing products that seamlessly integrate with implementations of the SDN architecture, and LineRate is no different.

Q: So how does Network Functions Virtualization (NFV) fit into F5's vision?

NFV is an interesting addition to the landscape, added by service providers at last year's Layer 123 SDN & OpenFlow World Congress in Germany. Since then, NFV has become a pseudo-standards process in the care of ETSI, in which F5 is a member. The core idea is to virtualize all network functions in their networks so that they can be dynamically provisioned and scaled on commodity hardware. This has the potential to lead to significant efficiencies in terms of both CAPEX and OPEX. CAPEX savings would be realized by avoiding the capacity planning trap, as services can be scaled as fast as a computer program can detect the need for additional capacity and order it. OPEX savings come in the form of being able to run all data centers in a lights-out model.

So NFV is a closely related sibling to SDN: SDN is focused on optimizing "topology" issues while NFV is focused on optimizing the nodes in the network. Working together they give rise to a fully software-defined data center/network that can be completely orchestrated from a central point of control. It is also worth noting that all the principles of NFV apply in other data centers, and there has been a long-standing movement toward moving everything to software.

Q: For a bit of historical context, can you tell us a bit about the genesis and motivation behind LineRate Systems?

Certainly. In 2008 I cofounded LineRate Systems with my co-founder Manish Vachharajani on the then-disruptive idea of replacing "big-iron" network appliances with a pure software solution. The goal was to deliver all the flexibility advantages of a software application with the streamlined manageability of a turn-key network appliance. We also made the decision to build the entire product around the idea of a REST API so that our system could be easily integrated into remote configuration management and orchestration systems without the users needing to ever touch the box.
Eventually the space we had entered would be called Software Defined Networking (SDN) and Network Functions Virtualization (NFV).

So that was the motivation; the genesis was rooted in a research project that I began as a professional research assistant at CU Boulder back in 2003, in high-performance intrusion detection on commodity multi-core x86 servers. Later, as an MS and PhD student, I connected with my future co-founder Manish Vachharajani as my PhD advisor, and we advanced the techniques from research into practice and founded LineRate in 2008.

Q: What about your previous role at LineRate? You mentioned that your role is similar but evolved.

At LineRate, Manish and I split our duties, with Manish biased towards the technical as our Chief Software Architect, responsible for overall architecture with a fair amount of business responsibilities as well, while I began skewed towards the business side as founding CEO, eventually transitioning to a more balanced business/technical position as CTO. As founding CEO I led the company's raise of our seed round of capital and implemented our high-performance kernel, which gave us a hardware-class platform in pure software. As CTO I spent a lot of time with customers, driving our SDN messaging, and leading kernel architecture.
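A quick editorial aside on that "never touch the box" theme: REST-driven configuration looks roughly like the sketch below. To be clear, the management address, resource path, payload, and credentials are entirely hypothetical – this is not the actual LineRate API, just the general shape of driving an appliance programmatically:

```python
# Hedged sketch: configure a virtual server on an appliance over REST.
# Endpoint, path, payload, and credentials are all invented for illustration.
import requests

BASE = "https://203.0.113.10:8443/api/v1"   # hypothetical management address

resp = requests.put(
    f"{BASE}/virtual-servers/web-vip",       # hypothetical resource path
    json={"ip": "198.51.100.20", "port": 443, "pool": "web-pool"},
    auth=("admin", "changeme"),
    verify=False,  # lab-only shortcut; verify certificates in production
)
resp.raise_for_status()
print("virtual server configured without ever touching the box")
```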
If you're attending Layer 123 SDN & OpenFlow World Congress, take a moment to track John G down and have a chat or attend his session. If you're not, you can track down John on Twitter.

SDN and OpenFlow are not Interchangeable

#SDN #OpenFlow They aren't, seriously. They are not synonyms. Stop conflating them.

New technology always runs into problems with terminology if it's lucky enough to become the "next big thing." SDN is currently in that boat, with a nearly cloud-like variety of definitions and accompanying benefits. I've seen SDN defined so tightly as to exclude any model that doesn't include OpenFlow. Conversely, I've seen it defined so vaguely as to include pretty much any network that might have a virtual network appliance deployed somewhere in the data path.

It's important to remember that SDN and OpenFlow are not synonymous. SDN is an architectural model. OpenFlow is an implementation API. So is XMPP, which Arista's CloudVision solution uses as a southbound protocol. So are the potentially vendor-specific southbound protocols that might be included in Open Daylight's model.

SDN is an architectural model. OpenFlow is an implementation API. It is one possible southbound API protocol, admittedly one that is rapidly becoming the favored son of SDN. It's certainly gaining mindshare, with a plurality of respondents to a recent InformationWeek survey on SDN having at least a general idea what OpenFlow is all about, and nearly half indicating familiarity with the protocol.

The reason it is important not to conflate OpenFlow with SDN is that both the API and the architecture are individually beneficial on their own. There is no requirement that an OpenFlow-enabled network infrastructure must be part of an SDN, for example. Organizations looking for benefits around management and automation of the network might simply choose to implement an OpenFlow-based management framework using custom scripts or software, without adopting wholesale an SDN architecture. Conversely, there are plenty of examples of SDN offerings that do not rely on OpenFlow, but rather some other protocol of choice. OpenFlow is, after all, a work in progress, and there are capabilities required by organizations that simply don't exist yet in the current specification – and thus implementation.

OpenFlow Lacks Scope

Even ignoring the scalability issues with OpenFlow, there are other reasons why OpenFlow might not be THE protocol – or the only protocol – used in SDN implementations. Certainly for layers 2-3, OpenFlow makes a lot of sense. It is designed specifically to carry L2-3 forwarding information from the controller to the data plane. What it is not designed to do is transport or convey forwarding information that occurs in the higher layers of the stack, such as L4-7, that might require application-specific details on which the data plane will make forwarding decisions.

That means there's room for another protocol, or an extension of OpenFlow, in order to enable inclusion of critical L4-7 data path elements in an SDN architecture. The fact that OpenFlow does not address L4-7 (and is not likely to anytime soon) is seen in the recent promulgation of service chaining proposals. Service chaining is rising as the way in which L4-7 services will be included in SDN architectures. Lest we lay all the blame on OpenFlow for this direction, remember that there are issues around scaling and depth of visibility with SDN controllers as they relate to directing L4-7 traffic, and thus it was likely that the SDN architecture would evolve to alleviate those issues anyway.
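To ground the "implementation API" point, here's a minimal sketch of a controller application pushing a single L2 forwarding rule, written against the open-source Ryu framework (assumed installed) and OpenFlow 1.3. Note that the rule's entire vocabulary lives at L2-4 – there is no way to express "match this HTTP header" here:

```python
# Minimal Ryu app: when a switch connects, install one L2 forwarding rule.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class L2Forwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match: frames for one MAC address arriving on port 1 (pure L2).
        match = parser.OFPMatch(in_port=1, eth_dst="00:00:00:bb:bb:bb")
        # Action: forward out port 2. Match fields in, forwarding decision
        # out -- that is the whole story; L7 details are simply out of scope.
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```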
But lack of support in OpenFlow for L4-7 is another line-item justification for why the architecture is being extended: it lacks the scope to deal with the more granular, application-focused rules required. Thus, it is important to recognize that SDN is an architectural model, and OpenFlow an implementation detail. The two are not interchangeable, and as SDN itself matures we will see more changes to core assumptions on which the architecture is based that will require adaptation.
It's On: Stacks versus Flows

#OpenStack #CloudStack #OpenFlow #SDN It's a showdown of model versus control – or is it?

There's a lot of noise about "wars" in the networking world these days: OpenStack versus CloudStack versus OpenFlow-based SDN. But while there are definitely aspects of "stacks" that share similarities with "flows", they are not the same model, and ultimately they aren't even necessarily attempting to solve the same problems. Understanding the two models and what they're intended to do can go a long way toward resolving any perceived conflicts.

The Stack Model

Stack models, such as CloudStack and OpenStack, are more accurately placed in the category of "cloud management frameworks" because they are designed for provisioning and management of the infrastructure services that comprise a cloud computing (or highly dynamic) environment. Stacks are aptly named, as they attempt to provide management and, specifically, automation of provisioning for the complete network stack. Both CloudStack and OpenStack, along with Eucalyptus, Amazon, and VMware vCloud, provide a framework API that can (ostensibly) be used to provision infrastructure services irrespective of vendor implementation.

The vision is (or should be) to enable implementers (whether service provider or enterprise) to be able to switch out architectural elements (routers, switches, hypervisors, load balancers, etc.) transparently*. That is, moving from Dell to HP to Cisco (or vice versa) as an environment's switching fabric should not be disruptive. Physical changes should be able to occur without impacting the provisioning and management of the actual services provided by the infrastructure. And yes, such a strategy should also allow heterogeneity of infrastructure. In many ways, such "stacks" are the virtualization of the data center, enabling abstraction of the actual implementation from the configuration and automation of the hardware (or software) elements. This, more than anything, is what enables a comparison with flow-based models.
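Before moving on to flows, here's a quick sketch of the stack model's "push" direction, using openstacksdk (assumed installed and configured via clouds.yaml; the cloud name and addresses are illustrative). The framework initiates every change; no network element ever phones home asking what to do:

```python
# Hedged sketch: a management framework *pushes* network provisioning
# into the environment through the stack's API.
import openstack

conn = openstack.connect(cloud="mycloud")   # named cloud from clouds.yaml

# Provision a network and subnet for a new application tier. The devices
# realizing these resources -- hardware or software -- are invisible here.
net = conn.network.create_network(name="app-tier")
subnet = conn.network.create_subnet(network_id=net.id, ip_version=4,
                                    cidr="10.20.0.0/24", name="app-subnet")
print(f"provisioned {net.name} / {subnet.cidr}")
```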
"Applications" deployed on an SDN controller platform (for lack of a better term) can extend existing services or add new ones and there is no need to change anything in the network fabric, because ultimately every "application" distills flows into a simple forwarding decision that can then be applied like a pattern to future flows by the switches. The Differences This is markedly different from the focus of a stack, which is on provisioning and management, even though both may be occurring in real-time. While it's certainly the case that through the CloudStack API you can create or delete port forwarding rules on a firewall, these actions are pushed (initiated) external to the firewall. It is not the case that the firewall receives a packet and asks the cloud framework for the appropriate action, which is the model in play for a switch in an OpenFlow-based SDN. Another (relatively unmentioned but important) distinction is who bears responsibility for integration. A stack-based model puts the onus on the stack to integrate (via what are usually called "plug-ins" or "drivers") with the component's existing API (assuming one exists). A flow-based model requires the vendor to take responsibility for enabling OpenFlow support natively. Obviously the ecosystem of available resources to perform integration is a magnitude higher with a stack model than with a flow model. While vendors are involved in development of drivers/plug-ins for stacks now, the impact on the product itself is minimal, if any at all, because the integration occurs external to the component. Enabling native OpenFlow support on components requires a lot more internal resources be directed at such a project. Do these differences make for an either-or choice? Actually, they don't. The models are not mutually exclusive and, in fact, might be used in conjunction with one another quite well. A stack based approach to provisioning and management might well be complemented by an OpenFlow SDN in which flows through the network can be updated in real time or, as is often proffered as a possibility, the deployment of new protocols or services within the network. The War that Isn't While there certainly may be a war raging amongst the various stack models, it doesn't appear that a war between OpenFlow and *-Stack is something that's real or ever will be The two foci are very different, and realistically the two could easily be deployed in the same network and solve multiple problems. Network resources may be provisioned and initially configured via a stack but updated in real-time or extended by an SDN controller, assuming such network resources were OpenFlow-enabled in the first place. * That's the vision (and the way it should be) at least. Reality thus far is that the OpenStack API doesn't support most network elements above L3 yet, and CloudStack is tightly coupling API calls to components, rendering this alleged benefit well, not a benefit at all, at least at L4 and above.284Views0likes1CommentCommunity: Force Multiplier for Network Programmability
Community: Force Multiplier for Network Programmability

#SDN Programmability on its own is not enough; a strong community is required to drive the future of network capabilities.

One of the lesser mentioned benefits of an OpenFlow SDN as articulated by ONF is the ability to "customize the network":

It promotes rapid service introduction through customization, because network operators can implement the features they want in software they control, rather than having to wait for a vendor to put it in place in their proprietary products.
-- Key Benefits of OpenFlow-Based SDN

This ability is not peculiar to SDN or OpenFlow; rather, it's tied to the concept of a programmable, centralized control model architecture. It's an extension of the decoupling of control and data planes, as doing so affords an opportunity to insert a programmable layer or framework at a single, strategic point of control in the network. It's ostensibly going to be transparent and non-disruptive to the network because any extension of functionality will be deployed in a single location in the network rather than on every network element in the data center.

This is actually a much more powerful benefit than it is often given credit for. The ability to manipulate data in-flight is the foundation for a variety of capabilities – from security to acceleration to load distribution. Being able to direct flows in real-time has become for many organizations a critical capability in enabling the dynamism required to implement modern solutions, including cloud computing. This is very true at layers 4-7, where ADN provides the extensibility of functionality for application-layer flows, and it will be true at layers 2-3, where SDN will ostensibly provide the same for network-layer flows.

One of the keys to success in real-time flow manipulation, a.k.a. network programmability, will be a robust community supporting the controller. Community is vital to such efforts because it provides organizations with broader access to experts across various domains as well as to the controller's programmatic environment. Community experts will be vital to assisting in optimization, troubleshooting, and even development of the customized solutions for a given controller.

THE PATH to PRODUCTIZATION

What ONF does not go on to say about this particular benefit is that eventually customizations end up incorporated into the controller as native functionality. That's important, because no matter how you measure it, software-defined flow manipulation will never achieve the same level of performance as the same manipulations implemented in hardware. And while many organizations can accept a few milliseconds of latency, others cannot or will not. Also true is that some customized functionality eventually becomes so broadly adopted that it requires a more turn-key solution: one that does not require the installation of additional code to enable.

This was the case, for example, with session persistence – the capability of an ADC (application delivery controller) to ensure session affinity with a specific server. Such a capability is considered core to load balancing services and is required for a variety of applications, including VDI. Originally, this capability was provided via real-time flow manipulation. It was code that extended the functionality of the ADC, and it had to be implemented individually by every organization that needed it – which was most of them.
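For the curious, the logic in question boils down to something like this minimal, purely illustrative sketch (the concept, not F5's actual iRule code):

```python
# Session persistence in a nutshell: pin each session to the server that
# first handled it, so affinity survives later load-balancing decisions.
import random
from typing import Optional

servers = ["app-01", "app-02", "app-03"]
persistence_table = {}   # session id -> server

def pick_server(session_id: Optional[str]) -> str:
    if session_id in persistence_table:
        return persistence_table[session_id]      # affinity hit: same server
    server = random.choice(servers)               # ordinary LB decision
    if session_id:
        persistence_table[session_id] = server    # remember the binding
    return server

print(pick_server("JSESSIONID=abc123"))   # first request: picks and records
print(pick_server("JSESSIONID=abc123"))   # same session: same server
```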
The code providing this functionality was shared and refined over and over by the community and eventually became so in demand that it was rolled into the ADC as a native capability. This improved performance, of course, but it also offered a turn-key "checkbox" configuration for something that had previously required code to be downloaded and "installed" on the controller.

The same path will need to be available for SDN as has been afforded for ADN, to mitigate complexity of deployment as well as address potential performance implications coming from the implementation of network functionality in software. That path will be a powerful one, if it is leveraged correctly. While organizations always maintain the ability to extend network services through programmability, if community support exists to assist in refinement and optimization and, ultimately, a path to productization, the agility of network services increases ten- or hundred-fold over the traditional vendor-driven model.

There are four requirements to enable such a model to be successful for customers and vendors alike:

1. A community that encourages sharing and refinement of "applications".
2. A repository of "applications" that is integrated with the controller and enables simple deployment of "applications". Such a repository may require oversight to certify or verify applications as being non-malicious or error-free.
3. A means by which applications can be rated by consumers. This is the feedback mechanism through which the market indicates to vendors which features and functionality are in high demand and would be valuable if implemented as native capabilities.
4. A basic level of configuration management control that enables roll-back of "applications" on the controller. This affords protection against the introduction of applications with errors or that interact poorly when deployed in a given environment.

The programmability of the network, like programmability of the application delivery network, is a powerful capability for customers and vendors alike. Supporting a robust, active community of administrators and operators who develop, share, and refine "control-plane applications" that manipulate flows in real-time to provide additional value and functionality when it's needed is critical to the success of such a model. Building and supporting such a community should be a top priority, and integrating it into the product development cycle should be right behind it.

Related:
- HTML5 WebSockets Illustrates Need for Programmability in the Network
- Midokura – The SDN with a Hive Mind
- Reactive, Proactive, Predictive: SDN Models
- SDN is Network Control. ADN is Application Control.
- F5 Friday: Programmability and Infrastructure as Code
- Integration Topologies and SDN
HTML5 WebSockets Illustrates Need for Programmability in the Network

#HTML5 #SDN The increasing use of HTML5 WebSockets illustrates one of the lesser mentioned value propositions of SDN – and ADN: extensibility.

It's likely that IT network and security staff would agree that HTML5 WebSockets has the potential for high levels of disruption (and arguments) across the data center. Developers want to leverage the ability to define their own protocols while reaping the benefits of the HTTP-as-application-transport paradigm. Doing so, however, introduces security risks and network challenges, as never-before-seen protocols start streaming through firewalls, load balancers, caches and other network-hosted intermediaries – something IT network and security pros are likely to balk at, usually because they're the last to know, and by the time they do, it's already too late to raise objections.

Aside from the obvious "you folks need to talk more" (because that's always been the answer and as of yet has failed to actually occur), there are other answers. Perhaps not turn-key, perhaps not easy, but there are other answers. One of them points to a rarely discussed benefit of SDN that has long been true for ADN but is often overlooked: extensibility through programmability.

In addition, leveraging the SDN controller's centralized intelligence, IT can alter network behavior in real-time and deploy new applications and network services in a matter of hours or days, rather than the weeks or months needed today. By centralizing network state in the control layer, SDN gives network managers the flexibility to configure, manage, secure, and optimize network resources via dynamic, automated SDN programs. Moreover, they can write these programs themselves and not wait for features to be embedded in vendors' proprietary and closed software environments in the middle of the network.
-- ONF, Software-Defined Networking: The New Norm for Networks

The ability to alter the behavior of any network component in real-time, to make what has been traditionally static dynamic enough to adapt to changing conditions, is the goal of many modern technology innovations, including SDN (the network) and cloud computing (applications and services). When developers and vendors can create and deploy new protocols and toss them over the wall into a production environment, operations needs the ability to adapt the network and delivery infrastructure to ensure the continued enforcement of security policies as well as provide support to assure availability and performance expectations are met. Doing so requires extensibility in the network. Ultimately that means programmability.

EXTENSIBILITY through PROGRAMMABILITY

While most of the networking world is focused on OpenFlow and VXLAN and NVGRE and virtual network gateways, the value of the ability to extend SDN through applications seems to be grossly underestimated. The premise of SDN is that the controller's functionality can be extended through specific applications that provide for handling of new protocols, provide new methods of managing flows, and do other nifty things that likely only network geeks would truly appreciate. The ability to extend packet processing and add new functions or support for new protocols rapidly, through software, is a significant part of the value proposition of SDN. Likewise, it illustrates the value of the same capabilities that currently exist in ADN solutions. ADN, too, enables extensibility through programmability.
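Here's a minimal sketch of what that plug-in style of extensibility looks like in principle. The registry, the subprotocol name, and the inspection rule are all invented for illustration:

```python
# An intermediary that knows nothing about a proprietary WebSocket
# subprotocol lets operators register their own frame inspectors --
# protocol support added without waiting for a vendor upgrade.
from typing import Callable

inspectors: dict = {}   # subprotocol name -> inspection function

def register_inspector(subprotocol: str, fn: Callable[[bytes], bool]) -> None:
    inspectors[subprotocol] = fn

def allow_frame(subprotocol: str, payload: bytes) -> bool:
    inspector = inspectors.get(subprotocol)
    if inspector is None:
        return False          # unknown protocol: fail closed
    return inspector(payload)

# A toy in-house protocol: the first byte is a message type, and type 0x7F
# is administratively forbidden.
register_inspector("x-acme-feed", lambda p: len(p) > 0 and p[0] != 0x7F)

print(allow_frame("x-acme-feed", b"\x01hello"))    # True: passes inspection
print(allow_frame("x-acme-feed", b"\x7fdrop me"))  # False: blocked
```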
While varying degrees of control and capabilities exist across the ADN spectrum, at least some provide complete programmatic control over traffic management by offering the ability to "plug in" applications (of a sort) that provide support for application-specific handling or new (and often proprietary) protocols, like those used to exchange data over WebSockets-transported connections.

What both afford is the ability to extend the functionality of the network (SDN) or application traffic management (ADN) without requiring upgrades or new products. This has been a significant source of value with respect to security for organizations, who often turn to the ADN solutions topologically positioned at a strategic point of control within the network to address zero-day or emerging exploits for which there are no quick fixes. When it comes to something like dealing with custom (proprietary) application protocols and the use of WebSockets, for which network infrastructure services naturally have no support, the extensibility of SDN and ADN is a boon to network and security staff looking for ways to secure and address the operational risk associated with new and heretofore unknown protocols.

Related:
- The Need for (HTML5) Speed
- SPDY versus HTML5 WebSockets
- Oops! HTML5 Does It Again
- Reactive, Proactive, Predictive: SDN Models
- The Next IT Killer Is… Not SDN
- SDN is Network Control. ADN is Application Control.
Reactive, Proactive, Predictive: SDN Models

#SDN #openflow A session at #interop sheds some light on SDN operational models.

One of the downsides of speaking at conferences is that your session inevitably conflicts with another session that you'd really like to attend. Interop NY was no exception, except I was lucky enough to catch the tail end of a session I was interested in after finishing my own. I jumped into "OpenFlow and Software Defined Networks: What Are They and Why Do You Care?" just as a discussion about an SDN implementation at CERN labs was going on, and was quite happy to sit through the presentation.

CERN labs has implemented an SDN, focusing on the use of OpenFlow to manage the network. They partner with HP for the control plane, and use a mix of OpenFlow-enabled switches for their very large switching fabric. All that's interesting, but what was really interesting (to me anyway) was the answer to my question with respect to the rate of change and how it's handled. We know, after all, that there are currently limitations on the number of inserts per second into OpenFlow-enabled switches, and CERN's environment is generally considered pretty volatile. The response became a discussion of SDN models for handling change. The speaker presented three approaches that essentially describe SDN models for OpenFlow-based networks:

Reactive

Reactive models are those we generally associate with SDN and OpenFlow. Reactive models are constantly adjusting and are in flux, as changes are made immediately in reaction to current network conditions. This is the base volatility-management model, in which there is a high rate of change in the location of end-points (usually virtual machines) and OpenFlow is used to continually update the location of, and path through the network to, each end-point. The speaker noted that this model is not scalable for any organization, and certainly not CERN.

Proactive

Proactive models anticipate issues in the network and attempt to address them before they become a real problem (which would require reaction). Proactive models can be based on details such as increasing utilization in specific parts of the network, indicating potential forthcoming bottlenecks. Making changes to the routing of data through the network before utilization becomes too high can mitigate potential performance problems. CERN takes advantage of sFlow and NetFlow to gather this data.

Predictive

A predictive approach uses historical data regarding the performance of the network to adjust routes and flows periodically. This approach is less disruptive, as it occurs with less frequency than a reactive model, but it still allows for trends in flow and data volume to inform appropriate routes.

CERN uses a combination of proactive and predictive methods for managing its network and indicated satisfaction with current outcomes. I walked out with two takeaways. First was validation that a reactive, real-time network operational model based on OpenFlow is inadequate for managing high rates of change. Second was that the use of OpenFlow as more of an operational management toolset than an automated, real-time, self-routing network system is certainly a realistic option to address the operational complexity introduced by virtualization, cloud and even very large traditional networks.
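A minimal sketch of the three models as control-loop logic – every function body below is an illustrative placeholder, not a real controller API:

```python
# Reactive, proactive, and predictive adjustment, side by side.
from statistics import mean

history = []   # per-interval link utilization samples, 0.0-1.0

def reactive(moved_flow: str) -> None:
    # React to a change that already happened (e.g., a VM migrated).
    # Every move costs an immediate flow-table insert -- the model the
    # speaker called unscalable under a high rate of change.
    print(f"immediate reroute of {moved_flow}")

def proactive(current_utilization: float, threshold: float = 0.8) -> None:
    # Act on leading indicators (sFlow/NetFlow counters) before a
    # bottleneck actually materializes.
    if current_utilization > threshold:
        print("shift flows off the hot link before it saturates")

def predictive(window: int = 288) -> None:
    # Re-plan routes periodically from historical trend data; least
    # disruptive because it runs on a schedule rather than per event.
    if len(history) >= window and mean(history[-window:]) > 0.6:
        print("apply the recomputed routing plan this interval")
```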
Related:
- The Future of Cloud: Infrastructure as a Platform
- SDN, OpenFlow, and Infrastructure 2.0
- Applying ‘Centralized Control, Decentralized Execution’ to Network Architecture
- Integration Topologies and SDN
- SDN is Network Control. ADN is Application Control.
- The Next IT Killer Is… Not SDN
- How SDN Is Defined Within ADN Architectures
WILS: The Data Center API Compass Rose

#SDN #cloud North, South, East, West. Defining directional APIs.

There's an unwritten rule that says when describing a network architecture, the perimeter of the data center is at the top. Similarly, application data flow begins at the UI (presentation) layer and extends downward, toward the data tier. This directional convention has led to the use of the terms "northbound" and "southbound" to describe API responsibility within SDN (Software Defined Network) architectures, and it is likely to continue to expand to encompass, in general, increasingly API-driven data center models. But while network aficionados may use these terms with alacrity, they are not always well described, or described in a way that a broad spectrum of IT professionals will immediately understand. Too, these terms are increasingly used by systems other than those directly related to SDN to describe APIs and how they integrate with other systems within the data center. So let's set about rectifying that, shall we?

NORTHBOUND

The northbound API in an SDN architecture describes the APIs used to communicate with the controller. In a general sense, the northbound API is the interconnect with the management ecosystem. That is, with systems external to the device responsible for instructing, monitoring, or otherwise managing the device in some way. Examples in the enterprise data center would be integration with HP, VMware, and Microsoft management solutions for purposes of automation and orchestration and the sharing of actionable data between systems.

SOUTHBOUND

The southbound API interconnects with the network ecosystem. In an SDN this would be the switching fabric. In other systems this would be those network devices with which the device integrates for the purposes of routing, switching and otherwise directing traffic. Examples in the enterprise data center would be the use of OpenFlow to communicate with the switch fabric, network virtualization protocols, or the integration of a distributed delivery network.

EASTBOUND

Eastbound describes APIs used to integrate the device with external systems, such as cloud providers and cloud-hosted services. An example in the enterprise data center would be a cloud gateway taking advantage of a cloud provider's API to enable a normalized network bridge that extends the data center eastward, into the cloud.

WESTBOUND

Westbound APIs are used to enable integration with the device, a la plug-ins to a platform. These APIs are internally focused, enabling a platform upon which third-party functionality can be developed and deployed. Examples in the enterprise data center would be proprietary APIs for network operating systems that enable a plug-in architecture for extending device capabilities beyond what is available "out of the box."

Certainly others will have a slightly different take on directional API definitions, though north- and southbound API descriptions are generally similar throughout the industry at this time. However, you can assume these definitions are applicable if and when I use them in future blogs.
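As a compact recap, here's the compass rose captured as data – a toy sketch of the sort you might use when tagging a device's APIs in an inventory system:

```python
# The four API directions as a simple taxonomy.
from enum import Enum

class ApiDirection(Enum):
    NORTHBOUND = "management ecosystem (orchestration, monitoring)"
    SOUTHBOUND = "network ecosystem (switch fabric, e.g. OpenFlow)"
    EASTBOUND = "external systems (cloud provider APIs)"
    WESTBOUND = "platform plug-ins (extending the device itself)"

device_apis = {
    "controller REST API": ApiDirection.NORTHBOUND,
    "OpenFlow channel": ApiDirection.SOUTHBOUND,
    "cloud gateway calls": ApiDirection.EASTBOUND,
    "plug-in SDK": ApiDirection.WESTBOUND,
}

for api, direction in device_apis.items():
    print(f"{api}: {direction.name} -> {direction.value}")
```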
Related:
- Architecting Scalable Infrastructures: CPS versus DPS
- SDN is Network Control. ADN is Application Control.
- The Cloud Integration Stack
- Hybrid Architectures Do Not Require Private Cloud
- Identity Gone Wild! Cloud Edition
- Cloud Bursting: Gateway Drug for Hybrid Cloud
- The Conspecific Hybrid Cloud

All Your Packets Are Belong to … You?

Yes, even the ones over there, in that there cloud, can be yours.

No one argues that networks have not exploded in terms of speeds and feeds in the past decade. What with more consumers (and cows), more companies going "online", and more content, it'd be hard to argue that there's less traffic out there today than there was even a mere four or five years ago. The increasing pressure put on the network is often mentioned almost in passing, as though merely moving from 10Gbps to 40Gbps to 100Gbps will solve the problem. Move along now, nothing to see here but a higher flow of packets. But it's that higher density of packets, along with greater diversity of content, coupled with distribution through cloud computing, that's creating other issues for network services whose purpose it is to collect, analyze, and act upon those packets.

IDS, IPS, secure web gateways, voice analyzers, honeypots: there are myriad network infrastructure devices tasked with analyzing the content of packets flowing in and out of the data center that find it more and more difficult to scale along with the rapid growth of data on the network. Application Performance Monitoring (APM) systems, as well, often take advantage of port mirroring as a way to collect and analyze intra-system traffic to pinpoint configuration or network issues that may cause performance degradation. These systems need one thing: all your (relevant) packets. The problem is that on most switches you can designate only a couple of ports as egress span ports, and you may have three, four or more devices and systems that need those packets. And Heaven forbid you have a desperate need to later tap into the switch to troubleshoot an urgent issue.

The answer in the past has been some highly complex network topologies that are difficult to maintain and not easy to extend when the next system needing all your packets is deployed. Additionally, cloud-deployed applications and systems are not easily included, even though organizations desire the same level of visibility and analysis of those packets as is found in the data center. One answer to these issues is found in what Gartner is calling Network Packet Brokers. One such provider in this space is VSS Monitoring, which recently introduced a new set of solutions to resolve this lack of visibility both in the data center and within the cloud.

VSS MONITORING

VSS Monitoring has been around since 2006, shipping aggregation and related management products. Now it's introduced several new products that assist in the goal of collecting packets across the increasingly cloudy landscape and getting them to the right place at the right time, a market being referred to as "Network Packet Brokers (NPB)". Gartner analysts describe these solutions as consisting of "devices that facilitate monitoring and security technologies to see the traffic which is required for those solutions to work more effectively." They could be called "monitoring switches" or "matrix switches" (Application Aware Network Performance Monitoring (NPM) and Network Packet Broker (NPB) research). NPB solutions must be able to perform many-to-many port mapping using a GUI or CLI, filter packets at L2-4, and perform packet slicing and deduplication as well as aggregation and intelligent distribution. This last criterion is an important one, as it allows operators to filter out noise when directing packets, reducing the requirement that analyzers and systems process (and ultimately discard) irrelevant traffic.
VSS Monitoring has introduced a set of solutions that meet (and in some cases exceed) the requirements laid out by Gartner (VSS supports L2-7 filtering) and that further expand the scope of such solutions into cloud computing environments:

- New packet broker appliances – vBrokers™
- Expanded system-level scalability – vMesh™
- Topology-level unified management console – vMC™

VSS achieves this inter-cloud monitoring capability by leveraging a proprietary L2 bi-directional protocol for its interconnects called vMesh. Its vBrokers are purpose-built appliances that can interconnect with one another using vMesh to form a virtual network tool optimization fabric. These vBrokers can be deployed across LAN and WAN segments and in a wide variety of cloud network infrastructure environments using the vMesh architecture, effectively forming an overlay network over which packets are shared. From there, it's a matter of dragging and dropping policies and configuration via the vMC unified management console to access network packets on demand and properly direct them based on organizational needs. VSS's new vMesh technology can scale out to 256 devices and more than 10,000 ports.

VSS also provides an Open XML API that encourages integration. Configuration, remote management, metrics, etc. can be achieved via this API. VSS solutions today are not supported by common provisioning and automation frameworks (Chef, Puppet, OpenStack), although that is something that may very well be supported in the future. Still, the ability to reach out into the cloud and direct packets to DC-hosted infrastructure services providing analysis, security, or other functions solves a major issue with managing cloud-deployed applications: visibility.

SDN versus NETWORK PACKET BROKERS

At first read, this sounds a lot like a suggested SDN (Software-Defined Networking) use case (found on SDN Central) that posits the use of OpenFlow as a virtual patch panel. However, on deeper inspection there are some distinct differences between the two solutions. While both are focused on solving what is essentially a port forwarding problem (port spanning is really just a case of directing ingress packets on one port to more than one egress port), SDN is (today) the more disruptive solution, both in the enterprise and in the cloud. While it's true that with both solutions you need some means to direct ingress packets to the desired egress port, VSS's solution does not require that the switches in question be OpenFlow-enabled (which may be problematic in cloud environments).

Additionally, the forwarding mechanism available with OpenFlow is simple forwarding – packet in, packet out. While a more sophisticated forwarding algorithm could certainly be employed, this would require specific code. VSS, on the other hand, enables intelligent forwarding of actionable packets, reducing the amount of irrelevant traffic any given infrastructure solution might need to process. Voice analyzers, for example, need only see VoIP, SIP and related traffic. Such a system doesn't need to inspect a JSON exchange, nor will it – the packets will be inspected and discarded. Using a more intelligent approach, VSS can intervene and eliminate the overhead associated with inspecting and discarding non-actionable traffic. This offload-like capability improves the capacity and performance of packet-analyzing systems.
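Here's a minimal sketch of that intelligent-distribution idea (Python with scapy, assumed installed and run with capture privileges; the tool names and dispatch rules are invented). Each tool registers what it cares about, and the broker logic copies a packet only to the tools whose filter matches:

```python
# Toy packet broker: per-tool filters decide which span copies get made,
# so the voice analyzer never sees -- or discards -- a JSON exchange.
from scapy.all import IP, TCP, UDP, sniff

TOOL_FILTERS = {
    "voice-analyzer": lambda p: UDP in p and p[UDP].dport in (5060, 5061),  # SIP
    "ids":            lambda p: TCP in p,   # all TCP sessions
    "flow-recorder":  lambda p: IP in p,    # anything with an IP header
}

def broker(packet) -> None:
    for tool, wants in TOOL_FILTERS.items():
        if wants(packet):
            print(f"-> span copy of {packet.summary()} to {tool}")

# Run the broker logic over 10 live packets (requires root/admin rights).
sniff(prn=broker, count=10)
```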
Furthermore, VSS offers a single-pane-of-glass management system for monitoring and managing its packet brokers, while an OpenFlow-enabled solution currently does not. This is certainly an area of exploration for SDN and OpenFlow-enabled devices, and future value-add for those banking on SDN; admittedly, the technology is still very much in its nascent phase, and maturation will bring more robust solutions, not only in core device support but in management and niche-market solutions.

The other issue is deployment in the cloud, as a virtual device. The good news is that Open vSwitch is embedded in many hypervisors and is available as a package for a variety of Linux-based systems. The bad news is that in some cloud environments (like Amazon) these approaches may not be possible to deploy and/or take advantage of, thus rendering an SDN-OpenFlow approach more or less toothless. VSS's packet broker, vBroker, supports a broad set of physical and virtual environments (e.g., physical and virtual span ports, the ability to filter and remove VN-Tags, etc.), which enables a wider set of cloud environments to take advantage of its capabilities.

That's not to say the two couldn't be combined, either. In fact, VSS could be described as "SDN for network monitoring", though VSS itself has not chosen to represent its solution this way. But essentially it's acting in the same manner as SDN – simply confined to a specific area of functionality: monitoring. As I posited in the past, I suspect we'll continue to see these kinds of "pockets of SDN" capabilities pop up to resolve some pressing issues that simply can't be addressed by traditional networking methods – or at least can't be addressed efficiently or in an acceptably rapid manner. In such an architecture (one composed of controllers at strategic points of control), VSS Monitoring is certainly positioned to act as the control point for managing a broadly distributed monitoring network.