Just what does 'operationalize' mean anyway?
#DevOps #SDN We keep saying that, but does it mean what you think it means? Operationalization (which is really hard to say; go ahead, try it a few times) is a concept that crosses the lines between trends and technologies. Both SDN and DevOps share the notion of "operationalization" as a means to achieve the goal of aligning IT with business priorities, like accelerating time to market for all-important applications. But what does it really mean to operationalize the network, or app deployments, or really, anything?

Operationalization is a lot like DevOps in that it's more of an approach to how you deploy and manage operations than it is some concrete, tangible thing. It is a verb; it's something you do that has concrete, measurable impacts on the application environment (aka the data center) and on the processes that move an application from development into the hands of its intended consumers, whether internal or external. When we say "operationalize the network", what we mean is to apply a systematic approach to automating network tasks and orchestrating operational processes in a way that meets measurable, defined goals that align with business priorities.

Consider the business priority to deliver projects on time. You know, get projects to market before the competition (to meet the business concern of revenue growth) or roll out internal apps faster (to meet the business concern of productivity improvements). The top CIO priorities are intertwined, and IT is in the business of applications as much as it is about technology.

Automate all the network things

Accelerating the time to market (or time to roll out for internal applications) is an imperative that enables IT to meet several business and IT-related goals simultaneously. But to do that, IT has to operationalize all the things - including the network.
Operations (whether network or security or application) has to focus on automating tasks and orchestrating processes to achieve the speed, scale, and stability necessary to roll out new or improved apps faster and, in some cases, more frequently. That means taking advantage of programmability (APIs, app templates and even the data path) to integrate and automate the provisioning, configuration and elasticity of applications and the services that deliver them. Does that mean you have to become a coder? Not necessarily. Much of the automation and orchestration of the network is being made available through ecosystems (like those around VMware, Cisco, OpenDaylight and OpenStack) that enable the necessary integration to occur through plug-ins, policies or templates rather than requiring network engineers to become developers. No doubt some organizations will choose a more hands-on approach, in which case the answer becomes yes, yes you will have to become familiar with scripting tools, languages and APIs to enable the automation and, ultimately, orchestration required to achieve alignment with business and operational goals.

Measure all the deployment things

Automation and orchestration alone aren't enough, though, to operationalize the network. Measures must be put into place that span the entire application deployment process. Those measures should align with other operations groups and align better with the business - measures that are typically associated with DevOps but are directly relatable to the network, too:

· Deploy frequency
· Volume of defects
· MTTR
· Number & frequency of outages
· Number & frequency of performance issues
· Time/cost per release (deployment)

Automation certainly impacts some of these measures, but not all. Process optimization is a critical component of DevOps and of operationalization as well; it impacts many measures but is people and analysis driven.
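Several of these measures reduce to simple arithmetic over deployment records. A minimal sketch in Python (the record fields and values here are invented for illustration; real data would come from your deployment pipeline and monitoring systems):

```python
from datetime import datetime, timedelta

# Illustrative deployment records; outage fields are set only when a
# deployment caused an outage that later had to be restored.
deployments = [
    {"deployed_at": datetime(2024, 1, 1), "outage_start": None, "outage_end": None},
    {"deployed_at": datetime(2024, 1, 8),
     "outage_start": datetime(2024, 1, 8, 10), "outage_end": datetime(2024, 1, 8, 12)},
    {"deployed_at": datetime(2024, 1, 15), "outage_start": None, "outage_end": None},
]

# Deploy frequency: deployments per week over the observed window
window = deployments[-1]["deployed_at"] - deployments[0]["deployed_at"]
deploys_per_week = len(deployments) / (window.days / 7)

# MTTR: mean time to restore, over deployments that caused an outage
outages = [d["outage_end"] - d["outage_start"]
           for d in deployments if d["outage_start"]]
mttr = sum(outages, timedelta()) / len(outages)

print(deploys_per_week)  # 1.5 deploys per week over a 14-day window
print(mttr)              # 2:00:00
```

The point is less the arithmetic than the discipline: these numbers only become useful when they are collected consistently across groups and across the whole deployment lifecycle.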
Optimize all the process things

Optimization requires understanding the processes that have likely ossified over time and re-evaluating each and every step to improve not just the speed but the efficiency, too (no, they aren't the same, Virginia). Optimization of processes is about measuring and mapping processes to find the bottlenecks and idle time that cause the entire app deployment train to slow to a crawl. The reality is that orchestrating poor processes just lets you fail faster and more often. So identifying those processes (including handoffs between silos) causing bottlenecks in the deployment process (or where errors seem to constantly be introduced) is a critical component of successfully operationalizing the network (and other operations, for that matter). Giving the app infrastructure operations group an "easy" button to deploy the appropriate network services isn't going to improve the process if that process is itself broken, after all. The measures let you ascertain whether changes in the process are going to help or not. Modeling and math can do wonders to help determine where changes must be made to improve the overall results, but both require measurement first - and consistent measurement across groups and the deployment lifecycle.

Share all the app things

All of which requires collaboration. You can automate individual tasks and gain some improvements, yes, but you can't orchestrate a provisioning and configuration process related to a given application or type of application unless you first understand what that application needs. And to do that you've got to talk to the people who develop it and deploy its infrastructure. You have to understand its architecture - is it three-tier? Two-tier? Microservice? Does it present APIs and take advantage of an app proxy, or are the integrations and interactions all internal? How is success for this app measured? Productivity improvement? Revenue growth? User adoption?
The answers to these questions are imperative to understanding just what network services need to be deployed, and how. It isn't enough to just give the app an IP address and put it on a VLAN. You've got to deliver value out of the network, and that means providing services that will help that application meet its business goals, whatever they might be.

Operationalize. Everything.

Whether you're approaching operationalization of the network from the perspective of implementing an SDN architecture or by applying the principles associated with DevOps, you're essentially going to have to embrace and adopt the same basic tenets: automation, sharing and common measurements that result in a cultural change across all of IT's operational groups. To succeed in an application world you're going to have to operationalize all the things. And that includes the network. More in a presentation dedicated to this topic: Operationalize all the Network Things!

Back to Basics: The Many Modes of Proxies
The simplicity of the term "proxy" belies the complex topological options available. Understanding the different deployment options will enable your proxy deployment to fit your environment and, more importantly, your applications. It seems so simple in theory. A proxy is a well-understood concept that is not peculiar to networking. Indeed, some folks vote by proxy, they speak by proxy (translators), and even, on occasion, marry by proxy. A proxy, regardless of its purpose, sits between two entities and performs a service.

In network architectures the most common use of a proxy is to provide load balancing services to enable scale, reliability and even performance for applications. Proxies can log data exchanges, act as a gatekeeper (authentication and authorization), scan inbound and outbound traffic for malicious content and more. Proxies are a key strategic point of control in the data center because they are typically deployed as the go-between for end-users and applications. These go-between services are often referred to as virtual services, and for purposes of this blog that's what we'll call them. It's an important distinction because a single proxy can actually act in multiple modes on a per-virtual-service basis.

That's all pretty standard stuff. What's not simple is when you start considering how you want your proxy to act. Should it be a full proxy? A half proxy? Should it route or forward? There are multiple options for these components and each has its pros and cons. Understanding each proxy "mode" is an important step toward architecting a suitable solution for your environment, as the mode determines the behavior of traffic as it traverses the proxy.

Standard Virtual Service (Full Application Proxy)

The standard virtual service provided by a full proxy fully terminates the transport layer connections (typically TCP) and establishes completely separate transport layer connections to the applications.
This enables the proxy to intercept, inspect and ultimately interact with the data (traffic) as it's flowing through the system. Any time you need to inspect payloads (JSON, HTML, XML, etc.) or steer requests based on HTTP headers (URI, cookies, custom variables) on an ongoing basis, you'll need a virtual service in full proxy mode. A full proxy is able to perform application layer services. That is, it can act on protocol and data transported via an application protocol, such as HTTP.

Performance Layer 4 Service (Packet-by-Packet Proxy)

Before application layer proxy capabilities came into being, the primary model for proxies (and load balancers) was layer 4 virtual services. In this mode, a proxy can make decisions and interact with packets up to layer 4 - the transport layer. For web traffic this almost always equates to TCP. This is the highest layer of the network stack at which SDN architectures based on OpenFlow are able to operate. Today this is often referred to as flow-based processing, as TCP connections are generally considered flows for purposes of configuring network-based services. In this mode, a proxy processes each packet and maps it to a connection (flow) context. This type of virtual service is used for traffic that requires simple load balancing, policy-based network routing or high availability at the transport layer. Many proxies deployed on purpose-built hardware take advantage of FPGAs that make this type of virtual service execute at wire speed.

A packet-by-packet proxy is able to make decisions based on information related to layer 4 and below. It cannot interact with application-layer data. The connection between the client and the server is actually "stitched" together in this mode, with the proxy primarily acting as a forwarding component after the initial handshake is completed rather than as an endpoint or originating source, as is the case with a full proxy.
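The contrast between the two modes can be sketched as simple decision functions. This is an illustration of the concepts only - the server address, pool names and packet fields are invented for the example, not any product's API:

```python
# Packet-by-packet (layer 4): each packet is mapped to a flow context by its
# 5-tuple; the proxy forwards it without ever reassembling application data.
flow_table = {}

def pick_server():
    return "10.0.0.10:80"   # stand-in for a transport-layer load-balancing pick

def handle_packet_l4(pkt):
    key = (pkt["src_ip"], pkt["src_port"],
           pkt["dst_ip"], pkt["dst_port"], pkt["proto"])
    if key not in flow_table:
        # New flow: decide once, then reuse that decision for every packet
        flow_table[key] = {"server": pick_server(), "packets": 0}
    flow_table[key]["packets"] += 1
    return flow_table[key]["server"]   # forwarding decision; payload untouched

# Full proxy (layer 7): terminate the client connection, buffer until a full
# HTTP request is available, then steer on application-layer data (the URI).
def handle_request_l7(raw_request: bytes):
    request_line = raw_request.split(b"\r\n", 1)[0]
    method, uri, _version = request_line.split(b" ")
    # This decision uses data a layer 4 proxy can never see
    return "api-pool" if uri.startswith(b"/api/") else "web-pool"
```

Note how the layer 4 handler never touches the payload, while the layer 7 handler cannot decide anything until it has buffered enough of the connection to reconstruct the request - which is exactly why it must be a stateful endpoint.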
IP Forwarding Virtual Service (Router)

For simple packet forwarding where the destination is based not on a pooled resource but simply on a routing table, an IP forwarding virtual service turns your proxy into a packet layer forwarder. An IP forwarding virtual server can be provisioned to rewrite the source IP address as the traffic traverses the service. This is done to force data to return through the proxy and is referred to as SNATing traffic. It uses transport layer (usually TCP) port multiplexing to accomplish stateful address translation. The address it chooses can be load balanced from a pool of addresses (a SNAT pool) or you can use an automatic SNAT capability.

Layer 2 Forwarding Virtual Service (Bridge)

For situations where a proxy should be used to bridge two different Ethernet collision domains, a layer 2 forwarding virtual service can be used. It can be provisioned to be an opaque, semi-opaque, or transparent bridge. Bridging two Ethernet domains is like an old-timey water brigade. One guy fills a bucket of water (the client) and hands it to the next guy (the proxy) who hands it to the destination (the server/service) where it's thrown on the fire. The guy in the middle (the proxy) just bridges the gap (you're thinking what I'm thinking - that's where the term came from, right?) between the two Ethernet domains (networks).

Reactive, Proactive, Predictive: SDN Models
#SDN #openflow A session at #interop sheds some light on SDN operational models. One of the downsides of speaking at conferences is that your session inevitably conflicts with another session that you'd really like to attend. Interop NY was no exception, except I was lucky enough to catch the tail end of a session I was interested in after finishing my own. I jumped into OpenFlow and Software Defined Networks: What Are They and Why Do You Care? just as discussion about an SDN implementation at CERN labs was going on, and was quite happy to sit through the presentation.

CERN labs has implemented an SDN, focusing on the use of OpenFlow to manage the network. They partner with HP for the control plane, and use a mix of OpenFlow-enabled switches for their very large switching fabric. All that's interesting, but what was really interesting (to me anyway) was the answer to my question with respect to the rate of change and how it's handled. We know, after all, that there are currently limitations on the number of inserts per second into OpenFlow-enabled switches, and CERN's environment is generally considered pretty volatile. The response became a discussion of SDN models for handling change. The speaker presented three approaches that essentially describe SDN models for OpenFlow-based networks:

Reactive

Reactive models are those we generally associate with SDN and OpenFlow. Reactive models are constantly adjusting and are in flux, as changes are made immediately in reaction to current network conditions. This is the base volatility management model, in which there is a high rate of change in the location of end-points (usually virtual machines) and OpenFlow is used to continually update the location and path through the network to each end-point. The speaker noted that this model is not scalable for any organization, and certainly not CERN.
Proactive

Proactive models anticipate issues in the network and attempt to address them before they become a real problem (which would require reaction). Proactive models can be based on details such as increasing utilization in specific parts of the network, indicating potential forthcoming bottlenecks. Making changes to the routing of data through the network before utilization becomes too high can mitigate potential performance problems. CERN takes advantage of sFlow and NetFlow to gather this data.

Predictive

A predictive approach uses historical data regarding the performance of the network to adjust routes and flows periodically. This approach is less disruptive, as it occurs with less frequency than a reactive model but still allows trends in flow and data volume to inform appropriate routes. CERN uses a combination of proactive and predictive methods for managing its network and indicated satisfaction with current outcomes.

I walked out with two takeaways. First was validation that a reactive, real-time network operational model based on OpenFlow is inadequate for managing high rates of change. Second was that the use of OpenFlow as more of an operational management toolset than an automated, real-time self-routing network system is certainly a realistic option to address the operational complexity introduced by virtualization, cloud and even very large traditional networks.

Related reading:
· The Future of Cloud: Infrastructure as a Platform
· SDN, OpenFlow, and Infrastructure 2.0
· Applying 'Centralized Control, Decentralized Execution' to Network Architecture
· Integration Topologies and SDN
· SDN is Network Control. ADN is Application Control.
· The Next IT Killer Is… Not SDN
· How SDN Is Defined Within ADN Architectures

F5 Synthesis: Software Defined Application Services
#SDN #Devops #Cloud #SDAS Completing the #SDDC stack. Everything as a Service has become nearly a synonym for cloud computing. That's unsurprising, as the benefits of cloud - from economy of scale to increased service velocity - are derived mainly from the abstraction of network, compute and storage resources into services that can be rapidly provisioned, elastically scaled and intelligently orchestrated. We've come to use "Everything as a Service" and "Software Defined Data Center" nearly interchangeably, because the goal of both is really to drive toward IT as a Service: a world in which end-users (application owners and administrators) can provision, scale and orchestrate the resources necessary to deliver applications from app dev through devops to the network.

This journey, in part, gave rise to SDN as a means to include the network in this new service-oriented data center. But SDN focused on only a subset of the network stack, leaving critical layer 4-7 services behind. Needless to say, such services are critical. Elastic scale is impossible without a combination of visibility and load balancing, after all, and a plethora of performance, security and even identity management focused services have become integral to modern data center architectures attempting to address pressures arising from trends like the Internet of Things, mobility, a steady stream of attacks, and an increasingly impatient consumer base.

The problem of what to do about layer 4-7 services has been bandied about and given a lot of lip service, but no one really had a good solution - not one that integrated with both application (cloud and virtualization orchestration) and network orchestration solutions. Given F5's long-standing leadership in the realm of layer 4-7 services, we bent our heads together and found a solution. One that integrates and interoperates with SDN and compute orchestration solutions. One that applies SDN principles to application service networks.
One that abstracts the application network resources necessary to deliver application services in a way that fills the gap between the network and compute orchestration layers. That solution is F5 Synthesis, and what it enables is Software Defined Application Services (SDAS).

SDAS is the result of delivering highly flexible and programmatic application services from a unified, high-performance application service fabric. Orchestrated intelligently by BIG-IQ, SDAS can be provisioned and architected to solve significant pain points resulting from the whirling maelstrom of trends driving IT today. SDAS relies on abstraction: on the ability to take advantage of resources pooled from any combination of physical, virtual and cloud-deployed F5 platforms. End-users are empowered to provision services instead of devices, and to leverage the visibility inherent in F5's full proxy-based platform approach. SDAS is highly flexible, owing to programmatic interfaces not just at the control plane (iControl, REST APIs) and data plane (iRules, node.js, Groovy), but also at the configuration layer (iCall and iApps), enabling real-time, event-driven changes in behavior at the service level.

SDAS is the next phase in the lengthy evolution of application delivery. As the approach to data centers becomes increasingly software-defined, so must the components that comprise the data center. That certainly must include the application services that have become so critical to ensuring the reliability, security and performance of the growing catalog of applications delivered to employees and consumers and, of course, the "things" that make up the Internet today.

Additional Resources: F5 Synthesis: The Time is Right

Operationalizing The Network now available with F5 and Cisco ACI [End of Life]
The F5 and Cisco APIC integration based on the device package and iWorkflow is End of Life. The latest integration is based on the Cisco AppCenter app named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

#ACI #SDN With the availability of the F5 device package for Cisco APIC, you can now rapidly provision the (entire) network from top to bottom. A key concern of IT continues to center on provisioning of network services. Whether eliminating the time-consuming task of manually provisioning network attributes device by device or trying to eliminate inefficiencies within the broader "network" service provisioning process, the goal is the same: increase the speed and accuracy with which network services are provisioned. SDN and related technologies promise to do just that by operationalizing the network.

Last fall Cisco announced its Application Centric Infrastructure (ACI) vision and then focused on the network by bringing its Application Policy Infrastructure Controller™ (APIC) to fruition. Seeing our visions align fully on the need for operationalization of the network with a focus on the application, F5 committed to supporting Cisco APIC by delivering a device package to enable the rapid provisioning of F5's Software Defined Application Services (SDAS). Today we're pleased to announce the availability - at no charge - of that device package. The availability lets customers configure application policies and requirements for F5 services across the L2-7 fabric and subsequently automatically provision the services necessary to ensure the entire stack of network services - from top to bottom, layer 2 through layer 7 - is available when applications need them.

You can download the F5 Device Package for Cisco APIC today and learn more about how F5 and Cisco work together: Cisco Alliance, Cisco Resources on DevCentral.

SDN Prerequisite: Stateful versus Stateless
#SDN #SDAS #cloud Things you need to know before diving into SDN... We've talked before about the bifurcation of the network, which is driven as much by the evolution of network services from "nice to have" to "critical" as it is by emerging architectures. The demarcation line in the network stack has traditionally been - and remains - between layers 3 and 4 in the OSI model. The reason for this is that there is a transition as you move from layer 3 to layer 4: from stateless networking to stateful networking. This is important to emerging architectures like SDN because this characteristic determines what level of participation in the data path is required.

Stateless networking requires very little participation. It's limited to evaluating network protocol frames and headers for the purpose of determining where to forward any given packet. The information extracted from the packet is not saved; it is not compared to previous packets. This is why it's stateless: no information regarding the state of the communication is retained. The packet is evaluated and forwarded out the appropriate port based on what's in the FIB (Forwarding Information Base), more commonly referred to as the "forwarding table."

Stateful networking, which begins at layer 4, retains certain information extracted from frames and packets and, as you move up the stack, from the application layer. It does this because protocols like TCP are connection-oriented and try to maintain guaranteed delivery. This is achieved through the use of sequence numbers in the TCP headers that, when out of order or lost, cause the network to retransmit the packets. There is state associated with TCP, i.e. "I have received packet 1 and am waiting for packet 2 in this connection." This is readily seen in the use of ACKnowledgment packets associated with TCP. There is a pre-designated flow associated with TCP that depends on the state of the end-points involved in the connection.
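The distinction can be made concrete with two small sketches: a stateless longest-prefix-match lookup that retains nothing between packets, and a stateful tracker that remembers the next expected TCP sequence number per connection. The addresses, interface names and connection keys are invented for illustration:

```python
import ipaddress

# Stateless (layer 3): each packet is matched against the forwarding table
# independently; nothing about previous packets is retained.
fib = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",   # more specific route
}

def forward(dst_ip):
    matches = [(net.prefixlen, port) for net, port in fib.items()
               if ipaddress.ip_address(dst_ip) in net]
    return max(matches)[1] if matches else None    # longest prefix match wins

# Stateful (layer 4): the device keeps per-connection context and updates it
# as packets arrive, e.g. the next TCP sequence number it expects to see.
connections = {}

def track_segment(conn_key, seq, payload_len):
    state = connections.setdefault(conn_key, {"next_seq": seq})
    in_order = (seq == state["next_seq"])
    if in_order:
        state["next_seq"] = seq + payload_len
    return in_order   # an out-of-order segment would trigger retransmission

print(forward("10.1.2.3"))   # eth2: the /16 beats the /8
```

The forwarding function could be called on packets in any order with the same result; the tracker cannot, because its answer depends on everything it has seen before - which is the whole point of "stateful."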
When a networking service operating at layer 4 or higher is inserted into this communication flow, it must also maintain the connection state. This is particularly true of staple services such as security and load balancing, which rely on state to provide stateful failover (i.e. without simply dropping connections) or to detect attacks based on state, such as SYN floods. The higher a network service operates in the network stack, the more participation is required. For example, application routing based on HTTP headers (the URI, the hostname, cookie values, etc.) relies on an intermediate network device maintaining state as well as extracting data from within the payload of a message (which is not the same as a packet). A message might actually require 2 or 3 or more packets, as data transferred by modern web applications is often larger than the network MTU of 1500 bytes. This means the intermediate device operating at the application layer must be stateful, as it must act as the end point for the connection in order to gather all the packets that make up a message before it can extract the data and then execute its policies.

This is why we also emphasize that layer 2-3 is "fixed" and layer 4-7 is "variable." Networking protocols at layer 2-3 are governed by standards that clearly define the layout of Ethernet frames and IP packets. Devices operating at those layers have highly optimized algorithms for extracting the information needed from frames and packet headers in order to determine how to forward the packet. TCP affords the same luxury at layer 4, but as networking moves up the stack, the exact location of the information necessary to make a forwarding decision becomes highly variable. Even with a clearly defined protocol like HTTP, there is wide variation in where certain data might be in the header.
This is because not all headers are required and, unlike Ethernet and IP and even TCP - where, even if options are not specified, there is still room reserved for those values - HTTP does not require that space be reserved for optional headers. They are simply left out, which can dramatically change the location (and thus the method of extraction by the intermediate device) of the data necessary to formulate a forwarding decision. Say you had a form to fill out and, depending on the answer to question 2, you might go on to question 3 or skip to question 8. If that form were layer 2 or 3, each question would be clearly numbered. Skipping to question 8 would be quick and easy. But if that form were layer 7, the questions are not labeled, and to get to question 8 you have to count each of the questions manually. That's the difference between "fixed" and "variable". It's why compute resource requirements matter more at layer 7 than they do at layer 2 or 3.

Why this matters to SDN

This matters a great deal to SDN architectures because of how it impacts the control-data plane separation architecture. Stateless networking is perfectly suited to an architecture that places responsibility for making forwarding decisions on a centralized controller, because the frequency with which those decisions must be made is relatively low. Conversely, stateful networking requires more participation and more frequent decisions, as well as the maintenance of state for each and every connection. This has serious implications for the controller in such a model, as it forces issues of controller scalability and resource requirements into the equation: the controller participates more actively (and stores more information) with stateful networking than it does with stateless networking. This is not to say that SDN architecture is incompatible with higher-order network services.
It just means that the SDN solution you choose for stateless networking will almost certainly not be the same SDN solution you choose for stateful networking. That means it's important to investigate solutions that address both of your "networks" with an eye toward integration and interoperability.

Should you focus on consolidation or standardization?
There is a difference. One is strategic, one is tactical. One lays the foundation for the future; the other sweeps the past under the rug.

There is a move (again) in technology that pits consolidation as the be-all and end-all of tactical maneuvers in the data center to reduce operating expenses. The hue and cry is not surprising. Many of us have seen it before. An onslaught of technology in the form of solutions and services rains down upon the data center, promising to solve this pain point and the next. Eventually, overwhelmed by the unmanageable number of devices, appliances and servers needed to support all these disparate solutions, consolidation arrives on a white horse to save the day. Get rid of all the boxen! Save money on power, on cooling, on management. Simplify your architecture! It's a message that does not go unheard in the upper ranks of IT. Reducing operating expenses is a C-level priority, taking number 7 on the CIO top ten according to the "35th Annual SIM IT Trends Study" announced by the Society for Information Management and discussed on CIOinsight.com.

Consolidation isn't a bad approach at all. It certainly will reduce the pain point footprint, which in turn reduces operating costs. But it's a tactical approach, not a strategic one. When you consolidate functions onto a single device, you're essentially just making room for more boxes that, one day, will have to be consolidated again. It's transferring architectural debt to a single system - like moving the debt from multiple credit cards to another one. Sure, it has a lower interest rate, but you're still just kicking the can down the road so you can (more than likely) make the same architectural choices that got you into debt in the first place.

Standardization, on the other hand, is based on a platform approach, which is strategic (i.e. forward-looking). A platform is built to be expanded; to include related and adjacent service deployment now and in the future.
You aren't just transferring the debt; you're laying a foundation for keeping that debt down in the future by ensuring operational commonality across services now and in the future. Standardization says "let's use the same platform, the same management, the same systems to support as many services as possible." That means lower operational costs across the entire network, because skills and knowledge become applicable to a wider range of technologies. It isn't about cramming as much as you can onto a single box; it's about optimizing skills, systems and knowledge across broader sets of services.

"That being said, one solution to this dilemma [the need to show real value to the business while reducing operating costs] is for IT organizations to focus more on enterprise optimization: standardizing processes, data, technology and applications wherever possible across the enterprise. Benchmarking data shows that standardization can reduce the support and operations cost associated with the IT landscape, and the impact can be dramatic. For instance, top performing IT organizations that drive enterprise optimization have on average a 26 percent higher allocation of their IT spend toward strategic initiatives." -- Monday Metric: Benefits of IT Standardization

SDN and SDAS offer standardization at the technology layers. Not just consolidation - standardization. DevOps encourages standardization across processes and (if it's paying attention) at the technology layer responsible for automation and orchestration. Combining these technologies and approaches to architecture, management and automation offers IT the means to optimize the data center and reduce operating costs, ultimately enabling IT to focus more on strategic initiatives that offer value to (and in many cases, enable) the business.
It isn't that there's a single box upon which to consolidate all these services (there is, but that's not the strategic value); it's that the underlying platform is standardized and provides the same APIs, templates and command and control console through which all the services are provisioned, configured and managed across their respective lifecycles. That commonality, that standardization, means improved efficiency that translates into reduced operating costs. But more than that, it's a platform that can (and will) be extended to include additional services, making it possible to consume new technology as it's introduced without new training, skills or management systems to learn. That's what makes it strategic: a platform approach to standardization. You can certainly gain some of these same benefits by simply consolidating services and functions onto a box. But to realize benefits today and in the future, you should really be looking at standardizing on a platform instead. Otherwise you're just robbing Peter to pay Paul.

BIG-IQ Grows UP [End of Life]
The F5 and Cisco APIC integration based on the device package and iWorkflow is End Of Life. The latest integration is based on the Cisco AppCenter app named 'F5 ACI ServiceCenter'. Visit https://f5.com/cisco for updated information on the integration.

Today F5 is announcing the new F5® BIG-IQ™ 4.5. This release includes a new BIG-IQ component: BIG-IQ ADC.

Why is 4.5 a big deal?

This release introduces a critical new BIG-IQ component, BIG-IQ ADC. With ADC management, BIG-IQ can finally control basic local traffic management (LTM) policies for all your BIG-IP devices from a single pane of glass. Better still, BIG-IQ's ADC function has been designed with the concept of "roles" deeply ingrained. In practice, this means that BIG-IQ offers application teams a self-serve portal through which they can manage load balancing of just the objects they are authorized to access and update. Their changes can be staged so that they don't go live until the network team has approved them. We will post follow-up blogs that dive into the new functions in more detail.

In truth, there are a few caveats around this release. Namely, BIG-IQ requires that customers be using BIG-IP 11.4.1 or above, and many functions require 11.5 or above. Customers with older TMOS versions still require F5's legacy central management solution, Enterprise Manager. BIG-IQ still can't do some of the functions Enterprise Manager provides, such as iHealth integration and advanced analytics. And BIG-IQ can't yet manage some advanced LTM options. Nevertheless, this release will be an essential component of many F5 deployments. And since BIG-IQ is a rapidly growing platform, the feature gaps will be filled before you know it. Better still, we have big plans for adding additional components to the BIG-IQ framework over the coming year. In short, it's time to take a long hard look at BIG-IQ.

What else is new?

There are hundreds of new or modified features in this release.
Let me list a few of the highlights by component:

1. BIG-IQ ADC - Role-based central management of ADC functions across the network
· Centralized basic management of LTM configurations
· Monitoring of LTM objects
· High availability and clustering support for BIG-IP devices and application-centric manageability services
· Pool member management (enable/disable)
· Centralized iRules management (though not editing)
· Role-based management
· Staged deployments with manual approval

2. BIG-IQ Cloud - Enhanced connectivity and partner integration
· Expanded orchestration and management of cloud platforms via 3rd-party developers
· Connector for VMware NSX and (early access) connector for Cisco ACI
· Improved customer experience via workflows and integrations
· Improved tenant isolation on device and deployment

3. BIG-IQ Device - Manage physical and virtual BIG-IP devices from a single pane of glass
· Support for VE volume licensing
· Management of basic device configuration & templates
· UCS backup scheduling
· Enhanced upgrade advisor checks

4. BIG-IQ Security - Centralized security policy deployment, administration, and management
· Centralized feature support for BIG-IP AFM
· Centralized policy support for BIG-IP ASM
· Consolidated DDoS and logging profiles for AFM/ASM
· Enhanced visibility and notifications
· API documentation for ASM
· UI enhancements for AFM policy management

My next blog will include a video demonstrating the new BIG-IQ ADC component and showing how it enhances collaboration between the networking and application teams with fine-grained RBAC.
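To give a feel for what role-scoped pool member management could look like in practice, here is a minimal sketch. The REST endpoint path, payload fields, and role model below are illustrative assumptions for this post, not the documented BIG-IQ API; the point is simply that an app team's self-serve actions are constrained to the objects their role authorizes.

```python
# Hedged sketch: role-scoped pool member enable/disable against a
# BIG-IQ-style REST API. The URL path and payload shape are invented
# for illustration and do not reflect the actual product API.

def build_member_update(base_url, pool, member, enabled):
    """Construct the URL and PATCH payload to enable/disable a pool member."""
    url = f"{base_url}/mgmt/cm/shared/config/pools/{pool}/members/{member}"
    payload = {"state": "user-up" if enabled else "user-down",
               "session": "user-enabled" if enabled else "user-disabled"}
    return url, payload

def authorized(role_scope, pool):
    """A self-serve portal only lets app teams touch pools in their scope."""
    return pool in role_scope

# Example: this app team's role covers only its own application's pools.
app_team_pools = {"web_pool", "api_pool"}

if authorized(app_team_pools, "web_pool"):
    url, body = build_member_update("https://bigiq.example.com",
                                    "web_pool", "10.1.1.10:80", enabled=False)
    print(url)
    print(body["session"])
```

In a real deployment the staged change would then wait for network-team approval before being pushed to the managed BIG-IP devices.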
#SDN #SDAS How does Synthesis impact and interact with SDN architectures?

With SDN top of mind (or at least top of news feeds) of late, it's natural to wonder how F5's architecture, Synthesis, relates to SDN. You may recall that SDN - or something like it - was inevitable due to increasing pressure on IT to improve service velocity in the network to match that of development (agile) and operations (devops). The "network" really had no equivalent until SDN came along. But SDN did not - and still does not - address service velocity at layers 4-7: the application layers where application services like load balancing, acceleration, and access and identity (you know, all the services F5 platforms are known for providing) live.

This is not because SDN architectures don't want to provide those services; it's because technically they can't. They're impeded from doing so because the network is naturally bifurcated. There are actually two networks in the data center: the layer 2-3 switching and routing fabric and the layer 4-7 services fabric.

One of the solutions for incorporating application (layer 4-7) services into SDN architectures is service chaining. Now, this works well as a solution for extending the data path to application services, but it does very little in terms of addressing the operational aspects which, of course, are where service velocity is either improved or not. It's the operational side - the deployment, the provisioning, the monitoring and management - that directly impacts service velocity. Service chaining is focused on how the network makes sure application data traversing the data path flows to and from application services appropriately. All that operational stuff is not necessarily addressed by service chaining. There are good reasons for that, but we'll save enumerating them for another day in the interest of getting to an answer quickly. Suffice to say that service chaining is about execution, not operation.
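To make the execution-versus-operation distinction concrete, a service chain can be pictured as an ordered pipeline the network steers traffic through. The sketch below is a toy illustration (the service names, request shape, and blocklist are invented for the example); notice that it models only execution - none of the provisioning, monitoring or management work appears here, which is exactly the operational gap described above.

```python
# Hedged illustration: service chaining as an ordered pipeline of L4-7
# services. In a real network the chain is steered by the data path,
# not a Python loop; this only models the execution order.

def firewall(request):
    if request.get("src") in {"10.0.0.66"}:   # toy blocklist
        request["dropped"] = True
    return request

def load_balancer(request):
    # Pick a pool member deterministically from the source address.
    members = ["10.1.1.10", "10.1.1.11"]
    request["member"] = members[sum(request["src"].encode()) % len(members)]
    return request

def accelerator(request):
    request["compressed"] = True              # e.g. response compression
    return request

chain = [firewall, load_balancer, accelerator]

def apply_chain(request, chain):
    """Pass the request through each service in order; stop if dropped."""
    for service in chain:
        request = service(request)
        if request.get("dropped"):
            break
    return request

result = apply_chain({"src": "192.168.1.5"}, chain)
blocked = apply_chain({"src": "10.0.0.66"}, chain)
```

A dropped request never reaches the later services, which is the data-path behavior service chaining guarantees; what it does not guarantee is how each service got deployed and configured in the first place.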
So something had to fill the gap and make sure that while SDN is improving service velocity for network (layer 2-3) services, the velocity of application services (layer 4-7) is also being improved. That's where F5 Synthesis comes in. We're not replacing SDN; we're not an alternative architecture. Synthesis is completely complementary to SDN and in fact interoperates with a variety of architectures falling under the "SDN" moniker, as well as with traditional network fabrics.

Ultimately, F5's vision is to provide application-protocol-aware data path elements (BIG-IP, LineRate, etc.) that can execute programmatic rules pushed by a centralized control plane (BIG-IQ): a centralized control-decentralized execution model implementing a programmatic application control plane and an application-aware data plane architecture. Bringing the two together offers a comprehensive, dynamic software-defined architecture for the data center that addresses service velocity challenges across the entire network stack (layers 2-7). SDN automates and orchestrates the network and passes the right traffic to the Synthesis High-Performance Services Fabric, which then does what it does best: apply the application services critical to ensuring apps are fast, secure and reliable.

In addition to service chaining scenarios there are orchestration integrations (such as that with VMware's NSX) as well as network integrations such as a cooperative effort between F5, Arista and VMware. You might have noticed that we specifically integrate with leading SDN architectures and partners like Cisco/Insieme, VMware, HP, Arista, Dell and Big Switch. We're participating in all the relevant standards organizations to help find additional ways to integrate both network and application services in SDN architectures.
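The centralized control-decentralized execution model described above can be sketched in miniature: a controller pushes declarative rules once, and each data-path element then evaluates them locally per request with no controller round-trip. The class names, rule format, and policies below are invented for the example and do not reflect the BIG-IQ API.

```python
# Hedged sketch of "centralized control, decentralized execution".
# All names and rule shapes here are illustrative assumptions.

class Controller:
    """Control plane: holds policy and distributes it to elements."""
    def __init__(self):
        self.elements = []

    def register(self, element):
        self.elements.append(element)

    def push_rules(self, rules):
        # Distribute the same policy to every registered element once.
        for element in self.elements:
            element.install(rules)

class DataPathElement:
    """Data plane: executes installed rules locally, per request."""
    def __init__(self, name):
        self.name = name
        self.rules = []

    def install(self, rules):
        self.rules = list(rules)

    def handle(self, request):
        # No controller round-trip here: first matching rule wins.
        for rule in self.rules:
            if rule["match"](request):
                return rule["action"]
        return "forward"

controller = Controller()
edge = DataPathElement("edge-1")
controller.register(edge)

controller.push_rules([
    {"match": lambda r: r["path"].startswith("/admin"), "action": "deny"},
    {"match": lambda r: r["proto"] != "https", "action": "redirect-https"},
])
```

The design choice this illustrates: policy changes happen in one place (the controller), while per-packet decisions stay fast because they run entirely on the element.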
We see SDN as validating what we (and that's the corporate we) have always believed: networks need to be dynamic, services need to be extensible and programmable, and all the layers of the network stack need to be as agile as the business they support. We're fans, in other words, and our approach is to support and integrate with SDN architectures to enable customers to deploy a fully software-defined stack of network and application services.

Related reading:
F5 Synthesis: The Time is Right
F5 and Cisco: Application-Centric from Top to Bottom and End to End
F5 Synthesis: Software Defined Application Services
F5 Synthesis: Integration and Interoperability
F5 Synthesis: High-Performance Services Fabric
F5 Synthesis: Leave no application behind
F5 Synthesis: The Real Value of Consolidation Revealed
F5 Synthesis: Reference Architectures - Good for What Ails Your Apps