Reactive, Proactive, Predictive: SDN Models
#SDN #openflow A session at #interop sheds some light on SDN operational models.

One of the downsides of speaking at conferences is that your session inevitably conflicts with another session you'd really like to attend. Interop NY was no exception, except I was lucky enough to catch the tail end of a session I was interested in after finishing my own. I jumped into "OpenFlow and Software Defined Networks: What Are They and Why Do You Care?" just as a discussion of an SDN implementation at CERN was under way, and was quite happy to sit through the presentation.

CERN has implemented an SDN, focusing on the use of OpenFlow to manage the network. They partner with HP for the control plane and use a mix of OpenFlow-enabled switches for their very large switching fabric. All that's interesting, but what was really interesting (to me anyway) was the answer to my question about the rate of change and how it's handled. We know, after all, that there are currently limitations on the number of inserts per second into OpenFlow-enabled switches, and CERN's environment is generally considered pretty volatile. The response became a discussion of SDN models for handling change. The speaker presented three approaches that essentially describe SDN models for OpenFlow-based networks:

Reactive
Reactive models are those we generally associate with SDN and OpenFlow. They are constantly adjusting and in flux, with changes made immediately as a reaction to current network conditions. This is the base volatility-management model, in which there is a high rate of change in the location of end-points (usually virtual machines) and OpenFlow is used to continually update the location of, and path through the network to, each end-point. The speaker noted that this model is not scalable for any organization, and certainly not for CERN.

Proactive
Proactive models anticipate issues in the network and attempt to address them before they become a real problem (which would require reaction). Proactive models can be based on details such as increasing utilization in specific parts of the network, indicating potential forthcoming bottlenecks. Making changes to the routing of data through the network before utilization becomes too high can mitigate potential performance problems. CERN takes advantage of sFlow and NetFlow to gather this data.

Predictive
A predictive approach uses historical data regarding the performance of the network to adjust routes and flows periodically. This approach is less disruptive, as it occurs with less frequency than a reactive model, but still allows trends in flow and data volume to inform appropriate routes.

CERN uses a combination of proactive and predictive methods to manage its network and indicated satisfaction with current outcomes. I walked out with two takeaways. First was validation that a reactive, real-time network operational model based on OpenFlow is inadequate for managing high rates of change. Second was that using OpenFlow as an operational management toolset, rather than as an automated, real-time self-routing network system, is certainly a realistic option for addressing the operational complexity introduced by virtualization, cloud, and even very large traditional networks.
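To make the distinction between the three models concrete, here is a minimal Python sketch. It is illustrative only: the function names, the event shapes, and the utilization threshold are hypothetical, not part of OpenFlow or any particular controller's API.

```python
def install_flow(switch, match, out_port):
    """Hypothetical southbound call; a real controller would speak OpenFlow here."""
    print(f"{switch}: forward {match} -> port {out_port}")

# Reactive: recompute and push a flow every time an end-point moves.
def on_endpoint_moved(switch, vm_mac, new_port):
    install_flow(switch, {"dst_mac": vm_mac}, new_port)   # one table insert per event

# Proactive: watch utilization samples (e.g., gathered via sFlow/NetFlow) and
# re-route busy paths before they become bottlenecks.
def rebalance(samples, alt_port=2, threshold=0.8):
    for (switch, port), utilization in samples.items():
        if utilization > threshold:
            install_flow(switch, {"in_port": port}, alt_port)

# Predictive: periodically recompute routes from historical averages instead of
# reacting to every event, trading immediacy for far fewer table updates.
def periodic_recompute(history):
    averages = {key: sum(vals) / len(vals) for key, vals in history.items()}
    rebalance(averages)

if __name__ == "__main__":
    on_endpoint_moved("switch-1", "00:0c:29:aa:bb:cc", new_port=7)
    periodic_recompute({("switch-1", 1): [0.6, 0.9, 0.95]})
```

The point of the contrast is volume: the reactive path pushes a table update on every end-point move, while the proactive and predictive paths batch decisions around utilization data and history.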
Related links:
The Future of Cloud: Infrastructure as a Platform
SDN, OpenFlow, and Infrastructure 2.0
Applying ‘Centralized Control, Decentralized Execution’ to Network Architecture
Integration Topologies and SDN
SDN is Network Control. ADN is Application Control.
The Next IT Killer Is… Not SDN
How SDN Is Defined Within ADN Architectures

WILS: The Data Center API Compass Rose
#SDN #cloud North, South, East, West. Defining directional APIs.

There's an unwritten rule that says when describing a network architecture, the perimeter of the data center is at the top. Similarly, application data flow begins at the UI (presentation) layer and extends downward, toward the data tier. This directional flow has led to the use of the terms "northbound" and "southbound" to describe API responsibility within SDN (Software Defined Network) architectures, and it is likely to expand to encompass increasingly API-driven data center models in general. But while network aficionados may use these terms with alacrity, they are not always well described, or described in a way that a broad spectrum of IT professionals will immediately understand. Too, these terms are increasingly used by systems other than those directly related to SDN to describe APIs and how they integrate with other systems within the data center. So let's set about rectifying that, shall we?

NORTHBOUND
The northbound API in an SDN architecture describes the APIs used to communicate with the controller. In a general sense, the northbound API is the interconnect with the management ecosystem. That is, with systems external to the device responsible for instructing, monitoring, or otherwise managing the device in some way. Examples in the enterprise data center would be integration with HP, VMware, and Microsoft management solutions for purposes of automation and orchestration and the sharing of actionable data between systems.

SOUTHBOUND
The southbound API interconnects with the network ecosystem. In an SDN this would be the switching fabric. In other systems this would be those network devices with which the device integrates for the purposes of routing, switching and otherwise directing traffic. Examples in the enterprise data center would be the use of OpenFlow to communicate with the switch fabric, network virtualization protocols, or the integration of a distributed delivery network.

EASTBOUND
Eastbound describes APIs used to integrate the device with external systems, such as cloud providers and cloud-hosted services. Examples in the enterprise data center would be a cloud gateway taking advantage of a cloud provider's API to enable a normalized network bridge that extends the data center eastward, into the cloud.

WESTBOUND
Westbound APIs are used to enable integration with the device, a la plug-ins to a platform. These APIs are internally focused and enable a platform upon which third-party functionality can be developed and deployed. Examples in the enterprise data center would be proprietary APIs for network operating systems that enable a plug-in architecture for extending device capabilities beyond what is available "out of the box."

Certainly others will have a slightly different take on directional API definitions, though northbound and southbound API descriptions are generally similar throughout the industry at this time. However, you can assume these definitions are applicable if and when I use them in future blogs.
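As a rough illustration of how the four directions might show up in code, here is a hedged Python sketch of a hypothetical controller-like device. None of the method names map to a real vendor API; each one simply stands in for the kind of system that direction talks to.

```python
class HypotheticalController:
    """Illustrative only: each method stands in for one 'direction' of API."""

    def __init__(self):
        self.plugins = {}

    # Northbound: called BY management/orchestration systems (automation, monitoring).
    def apply_policy(self, tenant, policy):
        print(f"northbound: {tenant} requested policy {policy}")

    # Southbound: calls INTO the network fabric (e.g., OpenFlow-style flow installs).
    def push_flow(self, switch, match, action):
        print(f"southbound: {switch} gets {match} -> {action}")

    # Eastbound: integration with external systems such as a cloud provider's API.
    def extend_to_cloud(self, provider_url, network):
        print(f"eastbound: bridging {network} to {provider_url}")

    # Westbound: third-party plug-ins register new functionality on the platform.
    def register_plugin(self, name, handler):
        self.plugins[name] = handler
        print(f"westbound: plug-in '{name}' registered")


controller = HypotheticalController()
controller.apply_policy("tenant-a", {"qos": "gold"})
controller.push_flow("switch-3", {"vlan": 10}, "output:4")
controller.extend_to_cloud("https://api.example-cloud.test", "10.1.0.0/16")
controller.register_plugin("waf", handler=lambda request: request)
```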
Related links:
Architecting Scalable Infrastructures: CPS versus DPS
SDN is Network Control. ADN is Application Control.
The Cloud Integration Stack
Hybrid Architectures Do Not Require Private Cloud
Identity Gone Wild! Cloud Edition
Cloud Bursting: Gateway Drug for Hybrid Cloud
The Conspecific Hybrid Cloud

SDN and OpenFlow are not Interchangeable
#SDN #OpenFlow They aren't, seriously. They are not synonyms. Stop conflating them.

New technology always runs into problems with terminology if it's lucky enough to become the "next big thing." SDN is currently in that boat, with a nearly cloud-like variety of definitions and accompanying benefits. I've seen SDN defined so tightly as to exclude any model that doesn't include OpenFlow. Conversely, I've seen it defined so vaguely as to include pretty much any network that might have a virtual network appliance deployed somewhere in the data path.

It's important to remember that SDN and OpenFlow are not synonymous. SDN is an architectural model. OpenFlow is an implementation API. So is XMPP, which Arista's CloudVision solution uses as a southbound protocol. So are the potentially vendor-specific southbound protocols that might be included in OpenDaylight's model.

SDN is an architectural model. OpenFlow is an implementation API. It is one possible southbound API protocol, admittedly one that is rapidly becoming the favored son of SDN. It's certainly gaining mindshare, with a plurality of respondents to a recent InformationWeek survey on SDN having at least a general idea what OpenFlow is all about, and nearly half indicating familiarity with the protocol.

The reason it is important not to conflate OpenFlow with SDN is that both the API and the architecture are individually beneficial on their own. There is no requirement that an OpenFlow-enabled network infrastructure must be part of an SDN, for example. Organizations looking for benefits around management and automation of the network might simply choose to implement an OpenFlow-based management framework using custom scripts or software, without adopting an SDN architecture wholesale. Conversely, there are plenty of examples of SDN offerings that do not rely on OpenFlow, but rather on some other protocol of choice. OpenFlow is, after all, a work in progress, and there are capabilities required by organizations that simply don't exist yet in the current specification - and thus in implementations.

OpenFlow Lacks Scope
Even ignoring the scalability issues with OpenFlow, there are other reasons why OpenFlow might not be THE protocol - or the only protocol - used in SDN implementations. Certainly for layer 2-3, OpenFlow makes a lot of sense. It is designed specifically to carry L2-3 forwarding information from the controller to the data plane. What it is not designed to do is transport or convey forwarding information for the higher layers of the stack, such as L4-7, which might require application-specific details on which the data plane will make forwarding decisions. That means there's room for another protocol, or an extension of OpenFlow, to enable inclusion of critical L4-7 data path elements in an SDN architecture.

The fact that OpenFlow does not address L4-7 (and is not likely to anytime soon) is seen in the recent promulgation of service chaining proposals. Service chaining is rising as the way in which L4-7 services will be included in SDN architectures. Lest we lay all the blame on OpenFlow for this direction, remember that there are issues around scaling and depth of visibility with SDN controllers as they relate to directing L4-7 traffic, and thus it was likely that the SDN architecture would evolve to alleviate those issues anyway.
But lack of support in OpenFlow for L4-7 is another line-item justification for why the architecture is being extended, because it lacks the scope to deal with the more granular, application-focused rules required.

Thus, it is important to recognize that SDN is an architectural model, and OpenFlow an implementation detail. The two are not interchangeable, and as SDN itself matures we will see more changes to the core assumptions on which the architecture is based that will require adaptation.
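The scope limitation is easier to see with a deliberately simplified sketch. The dictionary below approximates the kind of L2-L4 header tuple an OpenFlow-style match can express (treat the field names as illustrative, not the actual wire format); the function that follows is the sort of application-layer rule it cannot express, which is why service chaining or another protocol ends up carrying that load.

```python
# Roughly the kind of tuple an OpenFlow-style match covers: L2-L4 header fields.
l2_l3_match = {
    "in_port": 1,
    "eth_src": "00:0c:29:aa:bb:cc",
    "eth_dst": "00:0c:29:dd:ee:ff",
    "vlan_id": 100,
    "ipv4_src": "10.0.0.5",
    "ipv4_dst": "10.0.1.9",
    "ip_proto": 6,          # TCP
    "tcp_dst": 80,
}
l2_l3_action = {"output_port": 4}

# An L4-7 ("application") rule needs to see into the payload -- for example,
# routing HTTP requests by URI or SIP messages by the codes in their payloads.
# That isn't expressible as a header-field match, so it can't be pushed down
# as a plain flow entry.
def l7_rule(http_request: dict) -> str:
    if http_request.get("uri", "").startswith("/api/"):
        return "api-pool"
    return "web-pool"

print(l2_l3_match, "->", l2_l3_action)
print(l7_rule({"uri": "/api/orders", "host": "example.test"}))
```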
Which SDN protocol is right for you?
SDN's biggest threat is all of us talking about it!!

In a recent article titled something along the lines of "Which SDN protocol should I use?", I found myself totally confused... Not the same kind of confused as entering a turnstile in the southern hemisphere (did you know they spin the other way down there?). No, I found myself wondering if any of us can agree on what SDN is.

A common comparison that has me scratching my big, shiny head is OpenFlow versus VXLAN or NVGRE. This is like comparing a Transformer (the shapeshifting robot, not the power supply) and a family sedan. What do I mean? Well, if you squint your eyes really hard and look at them from a long way away, then yes, there is a small relationship (both have wheels), but they do very different things. VXLAN and NVGRE are encapsulation protocols. They don't provide "programmable flow instantiation", which is what OpenFlow does, and that is SDN. If we are to label VXLAN and NVGRE as SDN, then we must also accept that older encapsulation protocols are SDN, too - for example, 802.1Q VLAN tagging, GRE, etc. It pains me to even make this suggestion.

Let's say I put the Oxford English Dictionary down for a moment, while I climb off my high horse, and we do loosen the SDN term to include non-OpenFlow programmable flow instantiation (I prefer to call this simply "Programmable Networking"). This still doesn't include encapsulation protocols, for there is no programmable element to them. May I humbly suggest that dynamic routing protocols are closer to SDN than encapsulation protocols? At least dynamic routing protocols do alter flows in packet-forwarding devices! That said, I would then have to add advanced ADCs to the mix, as they evaluate real-time state (performance/security/app experience) and use it to make constant, flow-altering decisions. This example is even closer to SDN than dynamic routing... it's just not using OpenFlow.

I'm all for abandoning the SDN acronym altogether. Its broad uses are far too ambiguous, and with ambiguity comes confusion, followed shortly by fear and uncertainty. That's it. I'm taking a stand. SDN has been struck from my autocorrect!!

F5 Friday: Programmability and Infrastructure as Code
#SDN #ADN #cloud #devops What does that mean, anyway?

SDN and devops share some common themes. Both focus heavily on the notion of programmability in network devices as a means to achieve specific goals. For SDN it's flexibility and rapid adaptation to changes in the network. For devops, it's more a focus on the ability to treat "infrastructure as code" as a way to integrate into automated deployment processes. Each of these notions is just different enough that systems supporting one don't automatically support the other. An API focused on management or configuration doesn't necessarily provide the flexibility of execution exhorted by SDN proponents as a significant benefit to organizations. And vice-versa.

INFRASTRUCTURE as CODE
Devops is a verb, it's something you do. Optimizing application deployment lifecycle processes is a primary focus, and to do that many would say you must treat "infrastructure as code." Doing so enables integration and automation of deployment processes (including configuration and integration) that enables operations to scale along with the environment and demand. The result is automated best practices, the codification of policy and process that assures repeatable, consistent and successful application deployments. F5 supports the notion (and has since 2003 or so) of infrastructure as code in two ways:

iControl
iControl, the open, standards-based API for the entire BIG-IP platform, remains the primary integration point for partners and customers alike. Whether it's inclusion in Opscode Chef recipes, or pre-packaged solutions with systems from HP, Microsoft, or VMware, iControl offers the ability to manage the control plane of BIG-IP from just about anywhere. iControl is service-enabled and has been accessed and integrated through more programmatic languages than you can shake a stick at. Python, PERL, Java, PHP, C#, PowerShell... if it can access web-based services, it can communicate with BIG-IP via iControl.

iApp
A later addition to the BIG-IP platform, iApp is best-practice application delivery service deployment, codified. iApps are service- and application-oriented, enabling operations and consumers of IT as a Service to more easily deploy requisite application delivery services without requiring intimate knowledge of the hundreds of individual network attributes that must be configured. iApp is also used in conjunction with iControl to better automate and integrate application delivery services into an IT as a Service environment. Using iApp to codify performance and availability policies based on application and business requirements, consumers - through pre-integrated solutions - can simply choose an appropriate application delivery "profile" along with their application to ensure not only deployment but production success.

Infrastructure as code is an increasingly important view to take of the provisioning and deployment processes for network and application delivery services, as it enables more consistent, accurate policy configuration and deployment. Consider research from Dimension Data that found the "total number of configuration violations per device has increased from 29 to 43 year over year -- and that the number of security-related configuration errors (such as AAA Authentication, Route Maps and ACLS, Radius and TACACS+) also increased.
AAA Authentication errors in particular jumped from 9.3 per device to 13.6, making it the most frequently occurring policy violation." The ability to automate a known "good" configuration and policy when deploying application and network services can decrease the risk of these violations and ensure a more consistent, stable (and ultimately secure) network environment.

PROGRAMMABILITY
Less with "infrastructure as code" (devops) and more so with SDN comes the notion of programmability. On the one hand, this notion squares well with the "infrastructure as code" concept, as it requires infrastructure to be enabled in such a way as to provide the means to modify behavior at run time, most often through support for a common standard (OpenFlow is the darling standard du jour for SDN). For SDN, this tends to focus on the forwarding information base (FIB), but broader applicability has been noted at times, and no doubt will continue to gain traction. The ability to "tinker" with emerging and experimental protocols, for example, is one application of programmability of the network. Rather than wait for vendor support, it is proposed that organizations can deploy and test support for emerging protocols through OpenFlow-enabled networks. While this capability is likely not something large production networks would undertake, still, the notion that emerging protocols could be supported on demand, rather than on a vendor-driven timeline, is often desirable.

Consider support for SIP, before UCS became nearly ubiquitous in enterprise networks. SIP is a message-based protocol, requiring deep content inspection (DCI) capabilities to extract AVP codes as a means to determine routing to specific services. Long before SIP was natively supported by BIG-IP, it was supported via iRules, F5's event-driven network-side scripting language. iRules enabled customers requiring support for SIP (for load balancing and high-availability architectures) to program the network by intercepting, inspecting, and ultimately routing based on the AVP codes in SIP payloads. Over time, this functionality was productized and became a natively supported protocol on the BIG-IP platform. Similarly, iRules enables a wide variety of dynamism in application routing and control by providing a robust environment in which to programmatically determine which flows should be directed where, and how. Leveraging programmability in conjunction with DCI affords organizations the flexibility to do – or do not – as they so desire, without requiring them to wait for hot fixes, new releases, or new products.

SDN and ADN – BIRDS of a FEATHER
The very same trends driving SDN at layer 2-3 are the same ones that have been driving ADN (application delivery networking) for nearly a decade.

"Five trends in networking are driving the transition to software-defined networking and programmability. They are:
• User, device and application mobility;
• Cloud computing and service;
• Consumerization of IT;
• Changing traffic patterns within data centers;
• And agile service delivery.
The trends stretch across multiple markets, including enterprise, service provider, cloud provider, massively scalable data centers -- like those found at Google, Facebook, Amazon, etc. -- and academia/research. And they require dynamic network adaptability and flexibility and scale, with reduced cost, complexity and increasing vendor independence, proponents say."
-- Five needs driving SDNs

Each of these trends applies equally to the higher layers of the networking stack, and all of them are addressed by a fully programmable ADN platform like BIG-IP. Mobile mediation, cloud access brokers, cloud bursting and balancing, context-aware access policies, granular traffic control and steering, and a service-enabled approach to application delivery are all part and parcel of an ADN. From devops to SDN to mobility to cloud, the programmability and service-oriented nature of the BIG-IP platform enables them all.
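To ground the "infrastructure as code" theme in something concrete, here is a hedged Python sketch of creating a pool through an iControl REST-style call. The management address, credentials, and exact endpoint path are assumptions for illustration (verify paths against your BIG-IP version, and use proper certificate validation in practice); the point is that a known-good service configuration becomes a repeatable, reviewable artifact rather than a manual configuration session.

```python
import requests

BIGIP = "https://192.0.2.10"          # assumed management address
AUTH = ("admin", "admin")             # placeholder credentials

def create_pool(name, members):
    """Codify a 'known good' pool definition so every deployment is identical."""
    payload = {
        "name": name,
        "monitor": "http",
        "members": [{"name": m, "address": m.split(":")[0]} for m in members],
    }
    # Endpoint path is illustrative of an iControl REST-style interface;
    # verify=False is only acceptable in a throwaway lab sketch like this one.
    resp = requests.post(f"{BIGIP}/mgmt/tm/ltm/pool",
                         json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    create_pool("app_pool", ["10.1.10.11:80", "10.1.10.12:80"])
```

Because the definition lives in a script (or a Chef recipe calling the same API), it can be versioned, reviewed, and replayed, which is where the reduction in configuration violations comes from.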
Related links:
The Half-Proxy
Cloud Access Broker
Devops is a Verb
SDN, OpenFlow, and Infrastructure 2.0
Devops is Not All About Automation
Applying ‘Centralized Control, Decentralized Execution’ to Network Architecture
Identity Gone Wild! Cloud Edition
Mobile versus Mobile: An Identity Crisis

SDN, OpenFlow, and Infrastructure 2.0
#infra2 #openflow #sdn #devops As cloud recedes, it reveals what it hid when the hype took it big: a focus on the network.

Like cloud two or three years ago, SDN and OpenFlow dominated the talk at Interop. During a show that's (in theory at least) dedicated to networking, this should be no surprise. Is it making networking sexy again? Yes, insomuch as we're at least talking about networking again, which is about it, considering that the network is an integral component of all the other technology and models that took the spotlight from it in the first place. Considering recent commentary on SDN * and OpenFlow, it seems folks are still divided on OpenFlow and SDN and are trying to figure out where it fits – or if it fits – in modern data center architectures.

From "Prediction: OpenFlow Is Dead by 2014; SDN Reborn in Network Management":
"Of course, many of the problems that the SDN vendors state – VM mobility, the limited range of VLAN IDs, the inability to move L2/L3 networking among data centers and the inflexibility of current networking command and control -- are problems faced only by cloud providers and a handful of large, large companies with big, global data centers. In other words: a rather small number of customers."

I think Mike is pretty spot on with this prediction. Essentially, the majority of organizations will end up leveraging SDN for something more akin to network management in a hybrid network architecture, though not necessarily for the reasons he cites. It won't necessarily be a lack of need; it'll be a lack of universal need and the cost of such a massive disruption to the data center. With that in mind, we need to spend some time thinking about where SDN fits in the overall data center architecture. Routing and switching is only one part of the puzzle that is the dynamic data center, after all, and while its target problems include the dynamism inherent in on-demand provisioning of resources, alone it cannot solve this problem. Its current focus lies most often on solving how to get from point A to point B through the network when point B is a moving target – and doing so dynamically, to adjust flows in a way that's optimal given… well, given basic networking principles like shortest-path routing. Not that it will remain that way, mind you, but for the nonce that's the way it is.

Greg Ferro sums up and explains, in his typical straight-to-the-point manner, the core concepts behind OpenFlow and SDN in a recent post, "OpenFlow and Software Defined Networking: Is It Routing or Switching?":
"OpenFlow defines a standard for sending flow rules to network devices so that the Control Plane can add them to the forwarding table for the Data Plane. These flow rules contain fields for elements such as source & destination MAC, source & destination IP, source and destination TCP, VLAN, QoS and MPLS tags and more. The flow rules are then added to the existing forwarding table in the network device. The forwarding table is what all routers and switches use to dispatch frames and packets to their egress ports. OpenFlow value is realised in the Controller, and the most interesting changes are because the Controller will get new capabilities and truly granular control of the traffic flows. Therefore, OpenFlow is neither routing nor switching, it's about forwarding."
It’s About Forwarding
This simple statement is central to the "big picture" when you step back and try to put SDN and OpenFlow into perspective within an existing, modern data center architecture, because OpenFlow is designed to solve specific problems, not necessarily replace the entire network (if you're starting from scratch, that's likely a different story). It's about forwarding and, in particular, it's about forwarding in a dynamic, volatile environment such as exists in cloud computing models. Where SDN and OpenFlow appear to offer the most value to existing data centers experiencing this problem is in the network pockets that must deal with the volatility inside the data center at the application infrastructure (server) tiers, where resource lifecycle management in large installations is likely to cause the most disruption.

The application delivery tier already includes the notion of separation of control from data plane. That's the way it's been for many years, though the terminology did not always exist to describe it as such. That separation has always been necessary to abstract the notion of an "application" or "service" from its implementation and allow the implementation of reliability and availability strategies, through technology like load balancing and failover, to be transparent to the consumer. The end-point in the application delivery tier is static; it's not rigid, but it is static because there's no need for it to change dynamically. What was dynamic were the resources, which have become even more dynamic today: specifically, the resources that comprise the abstracted application, the various application instances (virtual machines) that make up the "application". Elasticity is implemented in the application delivery tier by seamlessly ensuring that consumers are able to access resources whether demand is high or low. In modern data center models, the virtualization management systems – orchestration, automation, provisioning – are part of the equation, ensuring elasticity is possible by managing the capacity of resources in a manner commensurate with demand seen at the application delivery tier.

As resources in the application infrastructure tier are launched and shut down, as they move from physical location to physical location across the network, there is chaos. The diseconomy of scale that has long been mentioned in conjunction with virtualization and cloud computing happens here, inside the bowels of the data center. It is the network that connects the application delivery tier to the application infrastructure tier that is constantly in motion in large installations and private cloud computing environments, and it is here that SDN and OpenFlow show the most promise to achieve the operational efficiencies needed to contain costs and reduce potential errors due to overwhelmingly high volumes of changes in network configurations.

What's missing is how that might happen. While the mechanisms and protocols used to update forwarding and routing tables on switches and routers are well discussed, the impetus for such updates and changes is not. From where do such changes originate? In a fully automated, self-aware data center (one that does not exist and may never do so), the mere act of provisioning a virtual machine (application) would trigger such changes.
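As a rough sketch of that idea, a provisioning event arriving over a northbound interface and fanning out into forwarding updates, consider the Python fragment below. The event shape, the class, and install_path() are all hypothetical; a real integration would sit between an orchestration system (the northbound caller) and an OpenFlow-speaking controller (the southbound half).

```python
def install_path(switch, dst_ip, out_port):
    """Stand-in for a southbound (e.g., OpenFlow) flow-table update."""
    print(f"{switch}: {dst_ip} -> port {out_port}")

class NorthboundAPI:
    """Hypothetical endpoint the orchestration/provisioning system calls."""

    def __init__(self, topology):
        self.topology = topology   # {switch: {vm_ip: port}}

    def vm_provisioned(self, vm_ip, host_switch, host_port):
        # One provisioning event triggers updates on every switch that needs
        # to reach the new end-point -- the diseconomy of scale lives here.
        self.topology.setdefault(host_switch, {})[vm_ip] = host_port
        for switch in self.topology:
            out_port = host_port if switch == host_switch else self.uplink(switch)
            install_path(switch, vm_ip, out_port)

    def uplink(self, switch):
        return 48   # assumed uplink port toward the rest of the fabric

api = NorthboundAPI(topology={"tor-1": {}, "tor-2": {}})
api.vm_provisioned("10.1.20.5", host_switch="tor-1", host_port=7)
```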
In more evolutionary data centers (which are more likely), such changes will be initiated by provisioning system events, whether triggered automatically or at the behest of a user (in IT as a Service scenarios) – perhaps through data or options contained in existing network discovery protocols, or through integration between the virtualization management systems and the SDN management plane. One of the core value propositions of SDN and OpenFlow being centralized control, one assumes that such functionality would be realized via integration between the two and not through modification and extension of existing protocols (although both methods would be capable, if we're careful, of maintaining compatibility with non-SDN-enabled networking components). This is being referred to in some cases as the "northbound" API, while the connectivity between the controller and the network components is referred to as the "southbound" API.

"OpenFlow, the southbound API between the controller and the switch, is getting most of the attention in the current SDN hype-fest, but the northbound API, between the controller and the data center automation system (orchestration) will yield the biggest impact for users. SDN has the potential to be extremely powerful because it provides a platform to develop new, higher level abstractions. The right abstraction can free operators from having to deal with layers of implementation detail that are not scaling well as networks increasingly need to support 'Hyper-Scale' data centers."
-- A change is blowing in from the North (-bound API)

In this way, SDN and OpenFlow provide the means by which the diseconomy of scale and volatility inherent in cloud computing and optimized resource utilization models can be brought under control and even reversed.

Infrastructure 2.0
Isn't that the problem Infrastructure 2.0 has been, in part, trying to address? Early on we turned to a similar, centralized model in which IF-MAP provided the controller necessary to manage changes in the underlying network. An SDN/OpenFlow-based model simply replaces that central controller with another, and distributes the impact of the scale of change across all network devices by delegating responsibility for implementation to individual components upon an "event" that changes the network topology.

"Dynamic infrastructure [aka Infrastructure 2.0] is an evolution of traditional network and application network solutions to be more adaptable, support integration with its environment and other foundational technologies, and to be aware of context (connectivity intelligence)."
-- Infrastructure 2.0: As a matter of fact that isn't what it means

What some SDN players are focusing on is a more complete architecture – one that's entirely SDN and, unfortunately, only likely to happen in greenfield environments, or over time. That model, too, is interesting in that traditional data center tiers will still "exist" but would not necessarily be hierarchical, and would instead use the programmable nature of the network to ensure proper forwarding within the data center. Which is why this is ultimately going to fall into the realm of expertise owned by devops. But all this is conjecture at this point, with the only implementations truly out there still housed in academia. Whether it will make it into the data center depends on how disruptive and difficult it will be to integrate with existing network architectures. Because just like cloud, enterprises don't rip and replace – they move cautiously in a desired direction.
As in the case of cloud computing, strategies will likely revolve around hybrid architectures enabled by infrastructure integration and collaboration. Which is what Infrastructure 2.0 has been about from the beginning.

* Everyone right now is fleshing out definitions of SDN and jockeying for position, each to their own benefit, of course. How it plays out remains to be seen, but I'm guessing we're going to see a cloud-like evolution - in other words, a chaos of definitions. I don't have a clear one as of yet, so I'm content (for now) to take at face value the definition offered by ONS and pursue how – and where – that might benefit the enterprise. I don't see that it has to be all or nothing, obviously.

Related links:
Searching for an SDN Definition: What Is Software-Defined Networking?
OpenFlow/Software-Defined Networking (SDN)
A change is blowing in from the North (-bound API)
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
Will DevOps Fork?
The World Doesn’t Care About APIs

The Cloud Configuration Management Conundrum
Making the case for a stateless infrastructure model.

Cloud computing appears to have hit a plateau with respect to infrastructure services. We simply aren't seeing even a slow and steady offering by providers of the infrastructure services needed to deploy mature enterprise-class applications. An easy answer as to why this is the case can be found in the fact that many infrastructure services, while themselves commoditized, are not standardized. That is, while the services are common to just about every data center infrastructure, the configuration, policies and APIs are not. But this is somewhat analogous to applications, in that the OS and platforms are common but the applications deployed on them are not. And we've solved that problem, so why not the problem of deploying unique infrastructure policies and configuration too?

It's that gosh-darned multi-tenant thing again. It's not that infrastructure can't or doesn't support a multi-tenant model; plenty of infrastructure providers are moving to enable or already support a multi-tenant deployment model. The problem is deeper than that, at the heart of the infrastructure. It's configuration management or, more precisely, the scale of configuration management necessary in a cloud computing environment.

CONFIGURATION MANAGEMENT
It's one thing to enable the isolation of network device configurations – whether in hardware or software form factor – as a means to offer infrastructure services in a multi-tenant environment. But each of the services requires a customer-specific configuration and, in many cases, a per-application one. Add to that the massive number of "objects" that must be managed on the infrastructure for every configuration, and it takes little math skill to realize that the more customers and instances, the more configuration "objects". At some point, it becomes too unwieldy and expensive a way for the provider to manage infrastructure services.

A shared resource requires a lot of configuration regardless of the multi-tenant model. If you use virtualization to create a multi-tenant environment on a device, then the support is either broad (one instance per customer) or deep (multiple configurations per instance). In either case, you end up with configurations that are almost certainly going to increase in parallel with the increase in customers. Thousands of customers = many thousands of configuration objects. Many infrastructure devices have either imposed or best-practice limitations on the number of configuration "objects" they can support. As a simple, oft-referenced example, consider the VLAN limitation. There is a "theoretical" maximum of 4096 VLANs per trunk. It is theoretical because of vendor-imposed limitations that are generally lower due to performance and other configuration-related constraints. The 4096, like most upper limiting numbers in networking, is based on primitive numerical types in programming languages comprising a certain number of bits and/or bytes that impose a mathematical upper bound. These bounds are not easily modified without changes to the core software/firmware. These limitations are also, when related to transmission of data, constrained by specifications such as those maintained by the IEEE that define the data format of information exchanged via the network.
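The VLAN ceiling is a good example of how those primitive types bound a configuration space: the 802.1Q tag reserves 12 bits for the VLAN ID, so the arithmetic tops out at 4096 values before reserved IDs and vendor constraints are subtracted. A couple of lines make the point (the reserved IDs noted in the comment follow common convention; check the specification for the authoritative list):

```python
vlan_id_bits = 12                 # width of the 802.1Q VLAN ID field
theoretical_max = 2 ** vlan_id_bits
print(theoretical_max)            # 4096
# IDs 0 and 4095 are conventionally reserved, and vendors often carve out more,
# which is why the usable number per trunk is lower than the theoretical bound.
print(theoretical_max - 2)        # 4094 usable in the best case
```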
Changing any one of those fields would obviously change the size of the data (because fields are strictly defined in terms of the ranges of bit/byte locations in which specific data will be found) and thus require every vendor to change their solutions to be able to process the new format. This is at the heart of the problem with IPv4-to-IPv6 migration: the size of IP packet headers changes, as does the location of data critical to the processing of packets. This is also true of configuration stores, particularly on networking devices where size and performance are critical. The smallest data type possible for the platform is almost always used to ensure performance is maintained. Thus, configuration and processing are highly dependent on tightly constrained specifications and highly considerate of speed, making changes to these protocols and configuration systems fraught with risk. This is before we even begin considering the run-time impact in terms of performance and potential disruption due to inherent volatility in the cloud "back plane", a.k.a. the server/instance infrastructure.

Consider the way in which iptables is used in some cloud computing environments. The iptables configuration is dynamically updated based on customer configuration. As the configuration grows and it takes longer for the system to match traffic against the "rules" and apply them, performance degrades. It's also an inefficient means of managing a multi-tenant environment, as the process of inserting and removing rules dynamically is also a strain on the entire system, because software-only solutions rarely differentiate between management and data plane activity.

So basically what we have is a configuration management problem that may be stifling movement forward on more infrastructure services being offered as, well, services. The load from management of the massive configurations necessary would almost certainly overwhelm the infrastructure. So what's the solution?

OPTIONS
The most obvious and easiest option, from a provider standpoint, is architectural multi-tenancy: virtual network appliances. It decreases the configuration management burden and places the onus on the customer. This option is, however, more complex because of the topological constraints imposed by most infrastructure, a.k.a. the dependency on a stable IP address space, and thus becomes a management burden on the customer, making it less appealing from their perspective to adopt. A viable solution, to be sure, but one that imposes management burdens that may negatively impact operational and financial benefits.

A second solution is potentially found in OpenFlow. Using its programmable foundation to direct flows based on some piece of data other than the IP address could alleviate the management burden imposed by topological constraints and allow architectural multi-tenancy to become the norm in cloud computing environments. This solution may, however, require some new network or application layer protocol – or an extension of an existing protocol – to allow OpenFlow to identify, well, the flow and direct it appropriately. This model does not necessarily address the issue of configuration management. If one is merely using OpenFlow to direct traffic to a controller of some sort, one assumes the controller would need per-flow or per-application or per-customer configuration. Thus such a model is likely to run into the same issues with scale of configuration management unless such extensions or new protocols are self-describing.
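As a hedged illustration of what "self-describing" might mean in practice, the sketch below contrasts a forwarding element that must look up per-customer configuration with one that simply interprets policy metadata carried with the flow itself. The metadata format is invented for the example; no such standard exists, which is precisely the gap being described.

```python
# Stateful model: the device must hold (and keep synchronized) per-tenant config.
tenant_config = {
    "tenant-001": {"service": "web-pool-a", "qos": "gold"},
    # ...thousands more entries, one source of the configuration-management burden
}

def forward_stateful(flow):
    policy = tenant_config[flow["tenant"]]            # lookup against local state
    return policy["service"]

# Stateless model: the flow carries self-describing metadata; the device only
# needs to know how to interpret the fields, not who every tenant is.
def forward_stateless(flow):
    return flow["meta"]["service"]                    # no per-tenant table needed

print(forward_stateful({"tenant": "tenant-001"}))
print(forward_stateless({"meta": {"service": "web-pool-a", "qos": "gold"}}))
```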
Self-describing metadata of this kind, included in the protocol, would allow interpretation on the OpenFlow-enabled component rather than lookup of configuration data. Basically, any extension of a protocol, or new protocol, used to dynamically direct flows to the appropriate services would need to be as stateless as possible, i.e. no static configuration required on the infrastructure components beyond service configuration. A model based on this concept does not necessarily require OpenFlow, although it certainly appears that a framework such as that offered by the nascent specification is well suited to enabling and supporting such an architecture with the least amount of disruption. However, as with other suggested specification-based solutions, no such commoditized or standardized meta-data model exists.

The "control plane" of a data center architecture, whether in cloud computing or enterprise-class environments, will need a more scalable configuration management approach if it is to become service-oriented whilst simultaneously supporting the dynamism inherent in emerging data center models. We know that stateless architectures are the pièce de résistance of a truly scalable application architecture. RESTful application models, while not entirely stateless yet, are modeled on that premise and do their best to replicate a non-dependence on state as a means to achieve highly scalable applications. The same model and premise should also be applied to infrastructure to achieve the scale of configuration management necessary to support Infrastructure as a Service, in the pure sense of the term. OpenFlow appears to have the potential to support a stateless infrastructure, if a common model can be devised upon which an OpenFlow-based solution could act independently. A stateless infrastructure, freed of its dependency on traditional configuration models, would ultimately scale as required by a massively multi-tenant environment and could lay the foundation for true portability of application architectures across cloud and data center environments.

Related links:
The IP Address – Identity Disconnect
The World Doesn’t Care About APIs
IT as a Service: A Stateless Infrastructure Architecture Model
Multi-Tenancy Requires More Than Just Isolating Customers
Architectural Multi-tenancy
Web 2.0 Killed the Middleware Star
OpenFlow Site

It's On: Stacks versus Flows
#OpenStack #CloudStack #OpenFlow #SDN It's a showdown of model versus control – or is it?

There's a lot of noise about "wars" in the networking world these days: OpenStack versus CloudStack versus OpenFlow-based SDN. But while there are definitely aspects of "stacks" that share similarities with "flows", they are not the same model, and ultimately they aren't even necessarily attempting to solve the same problems. Understanding the two models and what they're intended to do can go a long way toward resolving any perceived conflicts.

The Stack Model
Stack models, such as CloudStack and OpenStack, are more accurately placed in the category of "cloud management frameworks" because they are designed for provisioning and management of the infrastructure services that comprise a cloud computing (or highly dynamic) environment. Stacks are aptly named, as they attempt to provide management, and specifically automation of provisioning, for the complete network stack. Both CloudStack and OpenStack, along with Eucalyptus, Amazon, and VMware vCloud, provide a framework API that can (ostensibly) be used to provision infrastructure services irrespective of vendor implementation. The vision is (or should be) to enable implementers (whether service provider or enterprise) to switch out architectural elements (routers, switches, hypervisors, load balancers, etc.) transparently*. That is, moving from Dell to HP to Cisco (or vice versa) as an environment's switching fabric should not be disruptive. Physical changes should be able to occur without impacting the provisioning and management of the actual services provided by the infrastructure. And yes, such a strategy should also allow heterogeneity of infrastructure. In many ways, such "stacks" are the virtualization of the data center, enabling abstraction of the actual implementation from the configuration and automation of the hardware (or software) elements. This, more than anything, is what enables a comparison with flow-based models.

The Flow Model
Flow-based models, in particular OpenFlow-based SDN, also abstract implementation from configuration by decoupling the control plane from the data plane. This allows any OpenFlow-enabled device (mostly switches today, as SDN and OpenFlow focus on the network layers) to be configured and managed via a centralized controller using a common API. Flows are "installed" or "inserted" into OpenFlow-enabled elements via OpenFlow, an open protocol designed for this purpose, and support real-time updates that enable on-demand optimization or fault isolation of flows through the network. OpenFlow and SDN are focused on managing the flow of traffic through a network. Flow-based models purport to offer the same benefits as a stack model in terms of heterogeneity and interoperability. Moving from one OpenFlow-enabled switch to another (or mixing and matching) should ostensibly have no impact on the network whatsoever. What flow-based models offer above and beyond a stack model is extensibility. OpenFlow-based SDN models using a centralized controller also carry with them the premise of being able to programmatically add new services to the network without vendor assistance.
"Applications" deployed on an SDN controller platform (for lack of a better term) can extend existing services or add new ones and there is no need to change anything in the network fabric, because ultimately every "application" distills flows into a simple forwarding decision that can then be applied like a pattern to future flows by the switches. The Differences This is markedly different from the focus of a stack, which is on provisioning and management, even though both may be occurring in real-time. While it's certainly the case that through the CloudStack API you can create or delete port forwarding rules on a firewall, these actions are pushed (initiated) external to the firewall. It is not the case that the firewall receives a packet and asks the cloud framework for the appropriate action, which is the model in play for a switch in an OpenFlow-based SDN. Another (relatively unmentioned but important) distinction is who bears responsibility for integration. A stack-based model puts the onus on the stack to integrate (via what are usually called "plug-ins" or "drivers") with the component's existing API (assuming one exists). A flow-based model requires the vendor to take responsibility for enabling OpenFlow support natively. Obviously the ecosystem of available resources to perform integration is a magnitude higher with a stack model than with a flow model. While vendors are involved in development of drivers/plug-ins for stacks now, the impact on the product itself is minimal, if any at all, because the integration occurs external to the component. Enabling native OpenFlow support on components requires a lot more internal resources be directed at such a project. Do these differences make for an either-or choice? Actually, they don't. The models are not mutually exclusive and, in fact, might be used in conjunction with one another quite well. A stack based approach to provisioning and management might well be complemented by an OpenFlow SDN in which flows through the network can be updated in real time or, as is often proffered as a possibility, the deployment of new protocols or services within the network. The War that Isn't While there certainly may be a war raging amongst the various stack models, it doesn't appear that a war between OpenFlow and *-Stack is something that's real or ever will be The two foci are very different, and realistically the two could easily be deployed in the same network and solve multiple problems. Network resources may be provisioned and initially configured via a stack but updated in real-time or extended by an SDN controller, assuming such network resources were OpenFlow-enabled in the first place. * That's the vision (and the way it should be) at least. Reality thus far is that the OpenStack API doesn't support most network elements above L3 yet, and CloudStack is tightly coupling API calls to components, rendering this alleged benefit well, not a benefit at all, at least at L4 and above.275Views0likes1CommentStop Conflating Software-Defined with Software-Deployed
Stop Conflating Software-Defined with Software-Deployed
#SDN #SDDC Can we stop confusing these two, please? Kthanx.

For some reason, when you start applying "software" as a modifier to anything traditionally deployed as hardware, folks start thinking that means hardware is going away. Software-defined networking (SDN) is no exception. There's a big difference between being software-defined and software-deployed. The former implies the use of software - APIs and the like - to configure and manage something (in this case, the network). The latter implies service functionality deployed in a software form factor, which could mean pure software or virtualized appliances. These two are not the same by any stretch of the imagination.

Software-defined networking - the use of software to control network devices - is not a new concept. It's been around for a long time. What is new with SDN is the demand to physically separate control and data planes and the use of a common, shared control-plane protocol or API (such as OpenFlow) through which to manage all network devices, regardless of vendor origins. This abstraction is not new. If you look at SOA (Service-Oriented Architecture) or OO (Object-Oriented) anything, you'll see the same concepts as promoted by SDN: separation of implementation (data plane) from interface (control plane). The reason behind this model is simple: if you abstract interface from implementation, it is easier to change the implementation without impacting anything that touches the interface. In a nutshell, it's a proxy system that puts something between the consumer and the producer of the service. Usually this is transparent to the consumer, but in some cases wholesale change is necessary, as is true with SDN.

The reality is that SDN does not require the use of software-deployed data path elements. What it requires is support for a common, shared software-defined mechanism for interacting with and controlling the configuration of the data path. Are there advantages to a software-deployed network element strategy? Certainly, especially when combined with a software-defined data center strategy. Agility, the ability to move toward a true utility model (cloud, anyone?), and rapid provisioning are among the benefits (though none of these are peculiar to software, and they can also be achieved using hardware elements, just not without a bit more planning and forethought). The reason software-deployed seems to make more sense today is that it's usually associated with the ability to leverage resources lying around the data center on commodity hardware. Need more network? Grab some compute from that idle server over there and voila! More network. The only difference, however, between this approach and a hardware-based approach is where the resources come from. Resources can be - and should be - abstracted such that whether they reside on commodity or purpose-built hardware shouldn't matter to the provisioning and management systems. The control system (the controller, in an SDN architecture) should be blind to the source of those resources. All it cares about is being able to control those resources the same way as all other resources it controls, whether derived from hardware or software.

So let us not continue to conflate software-defined with software-deployed. There is a significant difference between them and what they mean with respect to network and data center architecture.
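A minimal sketch of the point, with invented names: the control interface stays the same whether the data-path resource underneath is purpose-built hardware or a software instance on commodity compute. "Software-defined" lives in the interface; "software-deployed" is just one possible implementation behind it.

```python
from abc import ABC, abstractmethod

class NetworkResource(ABC):
    """The software-DEFINED part: one control interface for all implementations."""
    @abstractmethod
    def provision(self, capacity_gbps: int) -> str: ...

class HardwareAppliance(NetworkResource):
    def provision(self, capacity_gbps):
        return f"allocated {capacity_gbps} Gbps on purpose-built hardware"

class VirtualAppliance(NetworkResource):   # the software-DEPLOYED option
    def provision(self, capacity_gbps):
        return f"spun up instances totalling {capacity_gbps} Gbps on commodity compute"

def scale_network(resource: NetworkResource):
    # The controller/orchestrator doesn't care where the capacity comes from.
    print(resource.provision(10))

scale_network(HardwareAppliance())
scale_network(VirtualAppliance())
```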
F5 Friday: ADN = SDN at Layer 4-7
The benefits of #SDN have long been realized from #ADN architectures.

There are, according to the ONF, three core characteristics of SDN:
"In the SDN architecture, the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications. As a result, enterprises and carriers gain unprecedented programmability, automation, and network control, enabling them to build highly scalable, flexible networks that readily adapt to changing business needs."
-- Software-Defined Networking: The New Norm for Networks, ONF

Let's enumerate them so we're all on the same page:
1. Control and data planes are decoupled
2. Intelligence and state are logically centralized
3. Underlying network infrastructure is abstracted from applications

Interestingly enough, these characteristics – and benefits – have existed for quite some time at layers 4-7 in something known as an application delivery network.

ADN versus SDN
First, let's recognize that the separation of the control and data planes is not a new concept. It's been prevalent in application architecture for decades. It's a core premise of SOA, where implementation and interface are decoupled and abstracted, and it's even in application delivery networking – and has been for nearly a decade now. When F5 redesigned the internal architecture of BIG-IP back in the day, the core premise was separation of the control plane from the data plane as a means to achieve a more programmable, flexible, and extensible solution. The data plane and control plane are completely separate, and it's part of the reason F5 is able to "plug in" modules and extend the functionality of the core platform.

Now, one of the benefits of this architecture is programmability – the ability to redefine how flows, if you will, are handled as they traverse the system. iRules acts in a manner similar to the goals of OpenFlow, in that it allows the implementation of new functionality down to the packet level. You can, if you desire, implement new protocols using iRules. In many cases, this is how F5 engineers develop support for emerging protocols and technologies in between major releases. BIG-IP support for SIP, for example, was initially handled entirely by iRules. Eventually, demand and need resulted in that functionality being moved into the core control plane. Which is part of the value proposition of SDN + OpenFlow – the ability to "tinker" with and "test" experimental and new protocols before productizing them.

So the separation of control from data is not new, though BIG-IP certainly doesn't topologically separate the two as is the case in SDN architectures. One could argue physical separation exists internal to the hardware, but that would be splitting hairs. Suffice it to say that the separation of the two exists on BIG-IP platforms, but it is not the physical (topological) separation described by SDN definitions. Intelligence and control are logically centralized in the application delivery (wait for it… wait for it…) controller. Agility is realized through the ability to dynamically adjust flows and policy on demand. Adjustments in application routing are made based on policy defined in the control plane, and as an added bonus, context is shared across components focusing on specific application delivery domain policy.
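To make "dynamically adjusting application routing based on policy" concrete, here is a hedged Python sketch of the kind of per-request decision an ADN applies in its data path (the pool names and header logic are invented; on a BIG-IP this sort of logic would typically be expressed as an iRule rather than Python):

```python
def pick_pool(request, pools):
    """Applied per request, thousands of times per second, not per topology change."""
    if request["path"].startswith("/api/"):
        return pools["api"]
    if "Mobile" in request.get("user_agent", ""):
        return pools["mobile"]
    return pools["default"]

pools = {
    "api": ["10.1.10.11:8080"],
    "mobile": ["10.1.20.11:80"],
    "default": ["10.1.30.11:80"],
}
print(pick_pool({"path": "/api/cart", "user_agent": "curl"}, pools))
print(pick_pool({"path": "/", "user_agent": "Mobile Safari"}, pools))
```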
Offering "unprecedented programmability, automation, and network control" that enables organizations "to build highly scalable, flexible [application] networks that readily adapt to changing business needs" is exactly what an application delivery network-based architecture does – at least for that sub-section of the network dedicated to delivering applications.

WOULD ADN BENEFIT FROM SDN?
There is not much SDN can provide to improve ADN. As mentioned before, there may be advantages to implementing SDN topologically downstream from an ADN, to manage the volatility and scale needed in the application infrastructure, and that might require support for protocols like OpenFlow to participate at layers 2-3, but at layers 4-7 there is really no significant play for SDN that has an advantage over existing ADN thus far. SDN focuses on core routing and switching, and while that's certainly important for an ADN, which has to know where to forward flows to reach the appropriate resource after application layer routing decisions have been made, it's not a core application delivery concern.

The argument could (and can and probably will) be made that SDN is not designed to perform the real-time updates required to implement layer 4-7 routing capabilities. SDN is designed to address network routing and forwarding automatically, but such changes in the network topology are (or hopefully are, at least) minimal. Certainly such a change is massive in scale at the time it happens, but it does not happen for every request, as is the case at layer 4-7. Load balancing necessarily makes routing decisions for every session (flow) and, in some cases, every request. That means thousands upon thousands of application routing decisions per second. In SDN terms, this would require a similar scale of forwarding-table updates, a capability SDN has been judged unable to accomplish at this time. Making this more difficult, forwarding of application layer requests is accomplished on a per-session basis, not unilaterally across all sessions, whereas forwarding information in switches is generally more widely applicable to the entire network. In order for per-session or per-request routing to be scalable, it must be done dynamically, in real time. Certainly SDN could manage the underlying network for the ADN, but above layer 3 SDN will not suffice to support the scale and performance required to ensure fast and available applications.

When examining the reasons for implementing SDN, it becomes apparent that most of the challenges addressed by the concept have already been addressed at layer 4-7 by a combination of ADN and integration with virtualization management platforms. The benefits afforded at layers 2-3 by SDN are already realized at layers 4-7 with a unified ADN. In practice, if not definition, ADN is SDN at layer 4-7.

Related links:
OpenFlow/SDN Is Not A Silver Bullet For Network Scalability
F5 Friday: Performance, Throughput and DPS
Cyclomatic Complexity of OpenFlow-Based SDN May Drive Market Innovation
QoS without Context: Good for the Network, Not So Good for the End user
The Full-Proxy Data Center Architecture
SDN, OpenFlow, and Infrastructure 2.0
Searching for an SDN Definition: What Is Software-Defined Networking?
OpenFlow/Software-Defined Networking (SDN)
A change is blowing in from the North (-bound API)