If apps incur technical debt then networks incur architectural debt
#devops #sdn #SDDC #cloud

72%. That's an estimate of how much of the IT budget is allocated to simply keeping the lights on (a euphemism for everything from actually keeping the lights on to cooling, heating, power, maintenance, upgrades, and day-to-day operations) in the data center. In a recent Forrester Research survey of IT leaders at more than 3,700 companies, respondents estimated that they spend an average of 72% of their budgets on keep-the-lights-on functions such as replacing or expanding capacity and supporting ongoing operations and maintenance, while only 28% goes toward new projects (see How to Balance Maintenance and IT Innovation). This number will not, unfortunately, significantly improve without intentionally attacking it at its root cause: architectural debt.

Data Center Debt

The concept of "debt" is not a foreign one; we've all incurred debt in the form of credit cards, car loans and mortgages. In the data center, this concept is applied in much the same way as our more personal debt - as the need to "service" the debt over time. Experts on the topic of technical debt point out that this "debt" is chiefly a metaphor for the long-term repercussions arising from choices made in application architecture and design early on.

Technical debt is a neologistic metaphor referring to the eventual consequences of poor software architecture and software development within a codebase. The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy.
-- Wikipedia

This conceptual debt also occurs in other areas of IT, particularly in the infrastructure and networking groups, where architectural decisions have long-lasting repercussions in the form of not only the cost to perform day-to-day operations but also the impact on future choices and operational concerns. The choice of a specific point product today to solve a particular pain point, for example, has an impact on future product choices. The more we move toward software-defined architectures - heavily reliant on integration to achieve efficiencies through automation and orchestration - the more interdependencies we build. Those interdependencies cause considerable complexity in the face of changes that must be made to support such a loosely coupled but highly integrated data center architecture. We aren't just maintaining configuration files and cables anymore; we're maintaining the equivalent of code - the scripts and methods used to integrate, automate and orchestrate the network infrastructure.

Steve McConnell has a lengthy blog entry examining technical debt. The perils of not acknowledging your debt are clear:

One of the important implications of technical debt is that it must be serviced, i.e., once you incur a debt there will be interest charges. If the debt grows large enough, eventually the company will spend more on servicing its debt than it invests in increasing the value of its other assets.

Debt must be serviced, which is why the average organization dedicates so much of its budget to simply "keeping the lights on." It's servicing the architectural debt incurred by a generation of architectural decisions.
Refinancing Your Architectural Debt

In order to shift more of the budget toward the innovation necessary to realize the more agile and dynamic architectures required to support more things and the applications that go with them, organizations need to start considering how to shed their architectural debt.

First and foremost, software-defined architectures like cloud, SDDC and SDN enable organizations to pay down their debt by automating a variety of day-to-day operations as well as traditionally manual and lengthy provisioning processes. But it would behoove organizations to pay careful attention to the choices made in this process, lest architectural debt shift to the technical debt associated with programmatic assets. Scripts are, after all, a simple form of application, and thus bring with them all the benefits and burdens of an application.

For example, the choice between feature-driven and application-driven orchestration can be critical to the long-term costs associated with that choice. Feature-driven orchestration necessarily requires more steps and results in more tightly coupled systems than an application-driven approach. Loose coupling ensures easier future transitions and reduces the impact of interdependencies on the complexity of the overall architecture. This is because feature-driven orchestration (integration, really) is highly dependent on specific sets of API calls to achieve provisioning. Even minor changes in those APIs can be problematic in the future and cause compatibility issues. Application-driven orchestration, on the other hand, presents a simpler, more flexible interface between provisioning systems and the solution. Implementation through features can change from version to version without impacting that interface, because the interface is decoupled from the actual API calls required. (A sketch contrasting the two approaches appears below.)

Your choice of scripting languages, too, can have much more of an impact than you might think. Consider that a significant contributor to operational inefficiencies today stems from the reality that organizations have an L4-7 infrastructure composed of not just multiple vendors, but a wide variety of domain specificity. That means a very disparate set of object models and interfaces through which such services are provisioned and configured. When automating such processes, it is important to standardize on a minimal set of environments. Using bash, Python, Perl and Juju all at once, for example, simply adds complexity and begins to fall under the Law of Software Entropy as described by Ivar Jacobson et al. in "Object-Oriented Software Engineering: A Use Case Driven Approach":

The second law of thermodynamics, in principle, states that a closed system's disorder cannot be reduced, it can only remain unchanged or increased. A measure of this disorder is entropy. This law also seems plausible for software systems; as a system is modified, its disorder, or entropy, always increases. This is known as software entropy.

Entropy is the antithesis of what we're trying to achieve with automation and orchestration, namely the acceleration of application deployment. Entropy impedes this goal and introduces yet another set of systems requiring day-to-day operational attention.

Other considerations include deciding which virtual overlay network will be your data center standard, as well as the choice of cloud management platform for data center orchestration.
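To make the feature-driven versus application-driven distinction concrete, here is a minimal sketch in Python. The management host, endpoint paths, payload fields and function names are hypothetical placeholders rather than any particular product's API; the point is the shape of the integration, not the specific calls.

```python
# Hypothetical sketch: feature-driven vs. application-driven provisioning of
# a load balancing service. Endpoints and payloads are illustrative only.
import requests

BASE = "https://adc.example.com/api"   # hypothetical management endpoint
AUTH = ("admin", "secret")             # placeholder credentials

def provision_feature_driven(app, members):
    """Feature-driven: one call per feature, tightly coupled to the API."""
    requests.post(f"{BASE}/pools", json={"name": f"{app}-pool"}, auth=AUTH)
    for host in members:
        requests.post(f"{BASE}/pools/{app}-pool/members",
                      json={"address": host, "port": 80}, auth=AUTH)
    requests.post(f"{BASE}/monitors",
                  json={"name": f"{app}-http", "type": "http"}, auth=AUTH)
    requests.patch(f"{BASE}/pools/{app}-pool",
                   json={"monitor": f"{app}-http",
                         "lb_method": "least-connections"}, auth=AUTH)
    requests.post(f"{BASE}/virtuals",
                  json={"name": f"{app}-vip", "destination": "203.0.113.10:443",
                        "pool": f"{app}-pool"}, auth=AUTH)
    # ...plus persistence, SSL, error handling and rollback for each step.

def provision_application_driven(app, members):
    """Application-driven: one declarative document describing the outcome."""
    service = {
        "application": app,
        "virtual_address": "203.0.113.10",
        "port": 443,
        "members": [{"address": h, "port": 80} for h in members],
        "health_check": "http",
        "lb_method": "least-connections",
        "persistence": "cookie",
    }
    requests.post(f"{BASE}/app-services", json=service, auth=AUTH)

if __name__ == "__main__":
    provision_application_driven("storefront", ["10.1.10.11", "10.1.10.12"])
```

Note how the application-driven version exposes a single, relatively stable interface: the provider can change how individual features are implemented underneath without breaking the caller, which is precisely the loose coupling argued for above.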
While decisions like these - orchestration style, scripting environments, overlay networks, cloud management platforms - seem, on the surface, to be innocuous, they are in fact significant contributors to the architectural debt associated with the data center architecture.

Shifting to Innovation

Every decision brings with it debt; that cannot be avoided. The trick is to reduce the interest payments, if you will, on that debt as a means to reduce its impact on the overall IT budget and enable a shift to funding innovation. Software-defined architectures are, in a way, an opportunity for organizations to refinance their architectural debt. They cannot forgive the debt (unless you rip and replace), but these architectures and methodologies like devops can help reduce the operational expenses the organization is obliged to pay on a day-to-day basis.

It's necessary to recognize, up front, that the architectural choices you make today do, in fact, have a significant impact on the business's ability to take advantage of the emerging app economy. Consider the options carefully and weigh the costs - including the need to service the debt incurred by those options - before committing to a given solution. Your data center credit score will thank you for it.
Software Defined Data Center Made Easy with F5 and VMware

Jared Cook, VMware's Lead Strategic Architect, Office of the CTO, visits #F5Agility15 and shares how F5 and VMware solutions can be used together in an orchestrated fashion - enabling customers to spin up applications on demand and provision the F5 software-defined application services those applications need to run successfully, with greater ease and automation than before in the Software Defined Data Center.

ps

Related:
F5Agility15 - The Preview Video
Welcome to F5 Agility 2015
Innovate, Expand, Deliver with F5 CEO Manny Rivelo
Get F5 Certified at F5 Agility 2015
F5 Agility 2015
F5 YouTube Channel

Technorati Tags: f5, agility, f5agility15, vmware, sddc, silva, video, orchestration, automation, euc, cloud
The Five Requirements for Application Services in a Software-Defined Data Center

Data center models are changing. A variety of technical trends and business demands are forcing that change, most of them centered on the explosive growth of applications. That means, in turn, that the requirements for application delivery are changing. Certainly application delivery needs to be agile, not waterfall. It needs to deliver services in hours, not weeks or months. It needs to be more cost-efficient. And more than anything else, it needs to be really, really, super focused on applications. Especially those services that are particularly application-affine. You know the ones - caching, load balancing, web app security, and performance. These are the services whose configurations are tied (tightly coupled) to the applications they deliver. Thus, they need to be closer to the app. Topologically, if not physically.

These are the application services described as "per-app." As you might infer (and I would imply), these services really are deployed and configured on a per-application basis. That means they must be as agile and orchestratable as the applications (or microservices) they're delivering. That means more than just being software (or virtual); it also means fitting in with the increasingly DevOpsy environment that's currently taking over dev and ops in the software-defined data center (SDDC). There are five key requirements for services to both fit in and enable the transition from a traditional data center to a software-defined, more cloudy, data center model.

1. Per-application Services
Services like load balancing, caching and performance enhancement need to fit into a highly distributed, application-focused environment. This ensures isolation, such that failure is limited to an individual application stack. A per-application model also ensures granular control and monitoring of the application in question. This further enables greater visibility into application performance, particularly when applications are composed of multiple microservice instances.

2. Lightweight Footprint
The incredible growth of applications - both from mobile application demand and microservice architectures - means organizations have to do more with less. There are fewer resources available, and thus a lightweight service model is necessary to make the most efficient use of what is available. A lightweight footprint for application services also serves to increase service density and enable all applications and services to receive the attention to security, scalability and performance they not only deserve but the business demands.

3. Orchestration Friendly
In an increasingly automated environment, driven by a DevOps approach, it is critical that application services present orchestration-friendly APIs and templates. This is needed to ensure easy integration with the tools and frameworks used to automate and orchestrate deployments driven by a continuous delivery (CD) approach to application development. These APIs also enable automatic scaling up and back down, which supports the need for efficient resource use in these increasingly dense deployments. (A sketch of what such a call might look like appears at the end of this article.)

4. Multiple VM support
In addition to being pure software, application services that aim to fit into a software-defined data center must support the widest set of hypervisors possible. VMware, Citrix, Microsoft, and KVM are non-negotiable in terms of support. Even organizations that have standardized on one platform today may migrate or expand their use to others in the future.
5. Cost-effectiveness
The number of services and applications needing per-application services in a software-defined data center can reach into the thousands. The service platform used to deliver complementary application services must scale economically to meet that demand, which means subscription- and consumption-based licensing models that reach new economies of scale for application services.

The world of applications is expanding. Mobile, microservices, and, sooner rather than later, the Internet of Things are creating explosive growth in applications, which in turn is driving demand for DevOps and Software-Defined Data Center models. Application delivery models must also adapt and ensure that all applications and services can be scaled and secured in the most efficient, cost-effective way possible. For application delivery, that means programmable software solutions.
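Requirement 3 above called for orchestration-friendly APIs and templates; as a rough illustration of requirements 1 and 3 together, here is a minimal sketch of a per-application service template pushed through a single declarative call from a CD pipeline. The endpoint, field names and helper function are hypothetical assumptions for illustration, not a specific product's API.

```python
# Hypothetical sketch: a per-application service template that a CD pipeline
# fills in and applies idempotently through one declarative API call.
import copy
import requests

SERVICE_TEMPLATE = {
    "application": None,          # filled in per app
    "virtual_address": None,
    "members": [],                # one entry per app/microservice instance
    "services": {
        "load_balancing": {"method": "round-robin", "health_check": "http"},
        "caching": {"enabled": True},
        "web_app_security": {"policy": "baseline"},
    },
}

def deploy_per_app_service(app_name, vip, instances,
                           api="https://services.example.com/api/app-services"):
    """Render the template for one application and apply it idempotently.

    Called from a CD pipeline after the app (or microservice) instances are
    up; calling it again with a new instance list scales the service up or
    down without touching any other application's services.
    """
    doc = copy.deepcopy(SERVICE_TEMPLATE)
    doc["application"] = app_name
    doc["virtual_address"] = vip
    doc["members"] = [{"address": ip, "port": 8080} for ip in instances]
    # PUT by app name so re-running the pipeline converges on the same state.
    resp = requests.put(f"{api}/{app_name}", json=doc, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    deploy_per_app_service("orders", "203.0.113.20", ["10.2.0.11", "10.2.0.12"])
```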
F5 Synthesis: F5 brings Scale and Security to EVO:RAIL Horizon Edition

The goal of F5 Synthesis is to deliver the app services that deliver the apps the business relies on today for productivity and for profit. That means not just delivering SDAS (Software Defined Application Services) themselves, but delivering them in all the ways IT needs to meet and exceed business expectations. Sometimes that's in the cloud marketplace, and other times it's as a cloud service. Sometimes it's as an integratable on-premises architecture and other times, like now, it's as part of a hyper-converged system. As part of a full stack in a rack, if you will.

EVO:RAIL is a partnership between VMware and Dell that offers a simplified, hyper-converged infrastructure. In a nutshell, it's a single, integrated rack designed to address the headaches often caused by virtual machine sprawl and heterogeneous hypervisor support, as well as to provide the means by which expanding deployments can be accelerated. Converged infrastructure is increasingly popular as a means to accelerate the deployment and growth of virtualized solutions such as virtual desktop delivery. Converged infrastructure solutions like EVO:RAIL abstract compute, network and storage resources from the CPUs, cables, controllers and switches beneath them, making them all usable as a foundation for private cloud or, as is more often the case, highly virtualized environments.

By validating F5 VE (Virtual Edition) to deliver app services in an EVO:RAIL Horizon Edition, the infrastructure gains key capabilities to assure the availability, security and performance of the applications that will ultimately be deployed and delivered by it. Including F5 brings capabilities critical to seamlessly scaling VMware View by providing Global Namespace and User Name Persistence support. Additionally, F5 iApps accelerates implementation by operationalizing the deployment of SDAS with simple, menu-driven provisioning.

You can learn more about Dell's VMware EVO:RAIL solution here, and more on how F5 and VMware are delivering the Software Defined Data Center here.
Getting better mileage on your journey to an SDDC with VMware and F5

It's a foregone conclusion that many organizations have hit the road in their transformational journey to achieve a Software-Defined Data Center (SDDC). While mileage may vary based on the degree to which an organization has committed to operationalizing its environments, the reality is that organizations adopting operationally efficient, software-defined approaches like DevOps and SDN have seen positive results in stability, agility and the speed with which they are able to traverse the application deployment pipeline.

That deployment pipeline crosses traditional silos within IT, including infrastructure and network services alike. It requires a cooperative, workflow-oriented approach to not just deploying an app but also addressing its infrastructure and networking needs. That requires collaboration, not just among the disparate teams within your organization dedicated to provisioning and managing those services, but between the vendors responsible for delivering the solutions that enable that collaborative approach.

That's why we explored 23 different possible methods of deploying F5 BIG-IP alongside VMware NSX in a non-integrated, co-existence model. Because we wanted to address the challenges faced by joint customers when they decide to embark on their journey to an SDDC, such as rolling out NSX without putting applications currently delivered by BIG-IP at risk. We selected three of the explored topologies as archetypes and developed detailed design and recommended-practice guides for those deployment topologies. These topologies cover both big iron and software, including F5 BIG-IP Local Traffic Manager (LTM) in both physical and virtual editions (VE). They include guidance for environments that use VXLAN encapsulation and those that don't, as well as a variety of BIG-IP architectural placement options.

Contact F5 to explore the guides, and you can always find the latest information on VMware and F5 solutions here on f5.com.
Security as Code

One of the most difficult things to do today, given the automated environments in which we operate, is to identify a legitimate user. Part of the problem is that the definition of a legitimate user depends greatly on the application. Your public-facing website, for example, may loosely define legitimate as "can open a TCP connection and send an HTTP request," while a business-facing ERP or CRM system requires valid credentials and group membership as well as device or even network restrictions.

This task is made more difficult by the growing intelligence of bots. It's not just that they're masquerading as users of IE or Mozilla or Chrome; they're beginning to act like they're users of IE and Mozilla and Chrome. Impersonators are able to fool systems into believing they are "real" users - human beings, if you will, and not merely computerized artifacts. They are, in effect, attempting to (and in many cases do) pass the Turing Test. In the case of bots, particularly the impersonating kind, they are passing. With flying colors.

Now, wait a minute, you might say. They're passing by fooling other systems, which was not the test Turing devised in the first place - that one required fooling a human being. True, but the concept of a system being able to determine the humanness (or lack thereof) of a connected system may, in the realm of security, be more valuable than the traditional Turing Test. After all, this is security we're talking about. Corporate assets, resources, access... this is no game, this is the real world, where bonuses and paychecks rely on getting it right. So let's just move along then, shall we?

The problem is that bots are getting smarter and more "human" over time. They're evolving and adapting their behavior to be like the users they know will be allowed to access sites and applications and resources. That means the systems responsible for detecting and blocking bot activity (or at least restricting it) have to evolve and get smarter too. They need to get more "human-like" and be able to adapt. They have to evaluate a connection and request within the context in which it is made, which includes all the "normal" factors like agent and device and application but also a broader set of variables that can best be described as "behavioral."

This includes factors like pulling data from an application more slowly than the network connection allows. Yes, systems are capable of detecting this situation, and that's a good thing, as it's a red flag for a slow-and-low application DDoS attack. It also includes factors like making requests too close together, which is a red flag for a flood-based application DDoS attack. Another indicator, perhaps, is time of day. Yes, that's right. Bots are apparently more time-sensitive than even we are, according to research that shows very specific patterns of bot attacks during different time intervals:

According to Distil Networks, the United States accounted for 46.58 percent, with Great Britain and Germany coming in second or third with 19.43 percent and 9.65 percent, respectively. Distil Networks' findings are based on activity that occurred between January and December of 2013. Among its customers in the United States, bot attacks occurred most between 6 pm and 9 pm EST, when nearly 50 percent of all bad bot traffic hit sites. The period between 6 pm and 2 am EST was home to 79 percent of all attacks. By comparison, the 14-hour time span from 3 am to 5 pm EST saw just 13.8 percent of all malicious bot traffic.
-- Bad Bot Percentage of Web Traffic Nearly Doubled in 2013: Report

So what does that mean for you, Security Pro? It means you may want to be more discriminating after official business hours than you are during the work day. Tightening up bot-detection policies during these known, bot-dense hours may help detect and prevent an attack from succeeding. So all you have to do is implement policies based on date and time of day. What? That's not that hard if you're relying on programmability.

Security as Code: Programmability

We make a big deal of programmability of APIs and in the data path as a means to achieve greater service velocity, but we don't always talk about how that same automation and programmability is also good for enabling a more adaptive infrastructure. Consider that if you can automatically provision and configure a security service, you should be able to repeat that process again and again and again. And if you're treating infrastructure like code, you can use simple programmatic techniques to pull the appropriate piece of code (like a template or a different configuration script) and deploy it on a schedule - say, at the end of business hours or over the weekend.

By codifying the policy into a template or configuration script you ensure consistency, and by using automation to deploy it automatically at predetermined times of the day you don't have to task someone with manually pushing buttons to get it done. That means no chance to "forget" and no scrambling to find someone to push the buttons when Bob is out sick or on vacation or leaves the organization. (A minimal sketch of this scheduled approach appears at the end of this post.) Consistent, repeatable and predictable deployments are as much a part of automation as speed and time to market. In fact, if you look at the origins of lean manufacturing - upon which agile is based - the goal wasn't faster, it was better. It was to reduce defects and variation in the end product. It was about quality. That's the goal with this type of system: consistent and repeatable results. Quality results.

Now, you could also certainly achieve similar results with data path programmability by simply basing policy enforcement on a single time-based conditional statement (if time > 5pm and time < 8am then [block of code] else [block of code]). A data path programmatic approach means no need to worry about the command and control center losing connectivity or crashing or rebooting at the wrong time, and no need to worry about the repository being offline or disk degradation causing data integrity issues. But changing the policy directly in the data path also has potential impacts, especially if you need to change it. It's in the data path, after all.

Your vehicle of implementation is really up to you. The concept is what's important - and that's using programmability (in all its forms) to improve agility without compromising on stability, even in the security realm. Because when it comes to networks and security, the blast radius when you get something wrong is really, really big. And not being able to adapt in the realm of security means you fall further and further behind the attackers, who are adapting every single day.
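Here is a minimal sketch of the scheduled approach described above, intended to be run from cron or an orchestration framework at the boundary of each time window. The API endpoint, policy names and thresholds are hypothetical placeholders, not any particular product's interface.

```python
# Hypothetical sketch: pick a stricter bot-detection policy during the
# bot-dense evening hours flagged by the Distil report and a baseline policy
# otherwise, then push the selection through a management API.
from datetime import datetime

import requests

API = "https://security.example.com/api/bot-defense/policy"  # hypothetical

POLICIES = {
    # 6 pm through 2 am: the window the report identifies as bot-dense
    "after_hours_strict": {"challenge": "javascript+captcha",
                           "max_requests_per_minute": 30},
    # normal business hours: lighter-touch checks
    "business_hours_baseline": {"challenge": "javascript",
                                "max_requests_per_minute": 120},
}

def select_policy(now=None):
    """Return the policy name for the current hour (local time)."""
    hour = (now or datetime.now()).hour
    in_bot_window = hour >= 18 or hour < 2   # 6 pm through 2 am
    return "after_hours_strict" if in_bot_window else "business_hours_baseline"

def apply_policy():
    name = select_policy()
    resp = requests.put(API, json={"name": name, **POLICIES[name]}, timeout=30)
    resp.raise_for_status()
    print(f"applied bot-defense policy: {name}")

if __name__ == "__main__":
    apply_policy()
```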
F5 Synthesis: Now with more VMware than ever!

Whether you've bought into DevOps or NetOps or SDN or SDDC (or all of them) as a way to operationalize the data center, one thing is clear: organizations are desirous of ways to deliver applications and their infrastructure faster, with fewer disruptions and with greater consistency. Automation and orchestration frameworks offer IT operations groups of all kinds - from storage to compute to network to security - a means to accomplish that task. That means everyone who provides enterprise solutions in one of those four IT areas needs to support those efforts through APIs and, more importantly, pre-validated integration with the frameworks and tools customers want to use.

When we asked over 300 customers about the tools and frameworks they use and want to use to automate and orchestrate their application infrastructure deployment experience, the overwhelming answer was VMware. That was a pleasant result to hear, given that we've been partnering with VMware for many years, bringing to market architectures, automation packages and integration with our F5 Synthesis architecture. So it's probably no surprise that this post brings news of new and expanded offerings with VMware, available in VMware's vCloud Marketplace.

Availability and security services delivered through BIG-IP Local Traffic Manager™, Global Traffic Manager™, and Application Security Manager™ software are now available, having achieved vCloud Air "Elite" certification for all three. With the F5 Synthesis Simplified Business Model, customers can take a "bring your own license" approach when deploying VMware vCloud with F5's Good, Better, and Best packages.

F5 also continues its history of contributing at VMware industry and partner events by sponsoring and exhibiting (booth #209) at this year's VMware Partner Exchange (PEX) conference. We'll be participating in a variety of activities, including:

Providing Automated Failover and Reliable Business Continuity with F5 and VMware vCloud Air
vCloud Air Product Line Manager Yatin Chalke and F5 Business Development Manager Matt Quill discuss how F5 brings advanced application services to vCloud Air to automate disaster recovery/business continuity, hybrid cloud application deployments, cloud bursting, and global application availability.

F5 and VMware's Horizon – Perfect Together
In this breakout session, F5 Sr. Solution Architect Justin Venezia explains integration methods between F5 and each of VMware's End User Computing products. The program includes a deep dive into sample BIG-IP configurations and a demo of combined load balancing and remote access solutions with VMware Horizon.

Boost Your Opportunity by Selling Integrated Solutions with VMware and F5
F5 Regional VP of Channel Sales Keith McManigal and VMware Channel Sales Director Troy Wright outline the ways that F5's BIG-IP application delivery platform and VMware solutions complement each other. Real-world examples are used to clearly demonstrate the benefits of combining F5 with VMware products such as NSX, Horizon, AirWatch, vRealize, and vCloud Air.

If you're attending VMware PEX, be sure to stop by (that's booth #209). If you aren't (or even if you are), you can follow along on Twitter @f5networks for live updates.
SDDC: The Best Answer for Openness, Security, and Lower Operational Pressure

Please find the English language post, by Lori MacVittie, from which this was adapted here.

Can the same job be done faster simply by adding more people? Perhaps in the realm of mathematics, but in the real world that isn't necessarily so - and in some cases, the more people you add, the more clearly the answer is no.

Developers who have read The Mythical Man-Month will be well acquainted with the Brooks' Law it advances: adding people to a project already under way not only fails to speed it up, but past a certain point actually makes the project take longer. The author captured it perfectly with the analogy that even nine women cannot produce a baby in one month.

It follows that the operational pressure created by mobile devices, applications, brand-new services, and the corresponding growth in deployment and management work cannot be relieved simply by assigning more staff. Synchronizing the schedules of that additional staff and tracking their progress introduces significant communication overhead which, in some situations, becomes a major obstacle to making progress at all.

Fortunately, this is exactly the problem software-defined technologies attempt to solve through automation and orchestration. By supporting open standards (APIs and protocols), software-defined technologies standardize cross-system interfaces so that operations and network teams can seamlessly orchestrate the provisioning steps in their workflows. This relieves the pressure on both teams at once, enabling them to improve overall deployment capacity and quality more efficiently without sacrificing stability or security.

The Software Defined Data Center (SDDC) offers the best answer to these problems. Endorsed by virtually the entire industry and analyst community, the SDDC is an ideal architecture purpose-built for private, public and hybrid clouds. It extends familiar virtualization concepts such as abstraction, pooling and automation to every resource and service in the data center.

Extending virtualization to the network is not merely a matter of turning network services into virtual machines; it is more about abstracting the network into composable services that can be rapidly provisioned, easily moved and defined in software. Virtualization is fundamentally about abstraction: decoupling hardware from software, applications from the network, and the business from physical location.

Delivering secure, integrated and seamless solutions that enable next-generation architectures to meet the challenge - now and in the future - of delivering applications from anywhere to anyone at any time is surely among the most pressing expectations enterprises hold. F5, for example, offers a ready-to-deploy L2-L7 SDDC approach that does not compromise critical IT concerns such as security and control.

The SDDC requires a collaborative industry ecosystem encompassing all the network services needed to deliver applications successfully - F5 Synthesis and VMware NSX, for example. A defining characteristic of the SDDC is its requirement for an open, standards-based mechanism, so that enterprises need only choose the solutions and services they want, without worrying about how to integrate them into the data center's systems.
Cloud Strategy Morsels: Understand your unmet IT demand

On the twenty-fourth of April 1949, the children of the United Kingdom rejoiced as the wartime rationing of sweets (candy) finally ended. Four months later, rationing had to be reintroduced. This is an important lesson for those of us involved in the evolution of cloud, orchestration and self-service infrastructures.

I've just returned from Las Vegas, where F5 were participating in the 'Gartner Data Center, Infrastructure and Operations Management Conference' (try saying that after a few mojitos). As well as participating in a fun panel discussion, getting to talk to some of our customers and partners while doing my turn on our stand, and playing the awkward straight man to Peter Silva's one-man charisma show while filming for the F5 YouTube channel, I was able to attend some of the speaker sessions. I particularly enjoyed a session on building private and hybrid clouds from Tom Bittman (@tombitt), a Vice President and Distinguished Analyst at Gartner. As you might expect, it was packed with information, advice and direction. One small tidbit, almost mentioned in passing, really resonated with me: understand your unmet IT demand before opening the floodgates of self-service.

I think it's true to say that for many parts of an organization (including within the IT department itself), the provision of IT infrastructure never really happens as fast as anyone would like. Once you turn over the ability to provision servers, storage, networking, and application services like load balancing or acceleration to the people who actually want to consume the infrastructure, you had better be ready to meet their needs, because their consumption of your infrastructure could wreak havoc on your carefully predicted ROI (although most organizations are realizing that a more important benefit of cloud-like infrastructure is agility, not economy). All the agility and efficiency in the world won't help if a year's worth of infrastructure spend is consumed in four months.

In addition to understanding the organization's appetite for infrastructure, you need to let people know how much they are consuming by implementing at least 'showback' - regular reporting on IT cost apportioned by user. By understanding your organization's potential usage, and at least giving people the chance to read the label before they open that second bag of candy, you can avoid repeating what must have been one mighty sugar crash back in the summer of '49.
How you integrate all the network things matters

#SDN #DevOps API design best practices apply to the network, too.

We (as in the industry at large) don't talk enough about applying architectural best practices with respect to emerging API and software-defined models of networking. But we should. The further we continue down the path of software-defining the network - using APIs and software development methodologies to simplify and speed the provisioning of network services - the more we run into, if not rules, then best practices that should be considered before we willy-nilly start integrating all the network things. Because that's really what we're doing: integration. We just don't like to call it that, because we've seen those developers curled up in fetal positions in the corner upon learning of a new software upgrade that will require extensive updates to the fifty other apps integrated with it. We don't want to end up like that. Yet that's where we're headed, because we aren't paying attention to the lessons learned by enterprise architects over the years with respect to integration and, in particular, the design of APIs that enable integration and orchestration of processes.

Martin Fowler touches on this in a recent post, "Microservices and the First Law of Distributed Objects":

The consequence of this difference is that your guidelines for APIs are different. In process calls can be fine-grained, if you want 100 product prices and availabilities, you can happily make 100 calls to your product price function and another 100 for the availabilities. But if that function is a remote call, you're usually better off to batch all that into a single call that asks for all 100 prices and availabilities in one go. The result is a very different interface to your product object.

Martin's premise is based primarily on the increased impact of performance and the possibility of failure on remote (distributed) calls. The answer, of course, is coarser-grained calls across the network than those used in-process. Which applies perfectly to automating the network.

How you integrate all the network things matters

Most network devices are API-enabled, yes, but they expose fine-grained APIs. Every option, every thing has its own API call. And simply automating the provisioning of even something as simple as a load balancing service requires a whole lot of API calls. There's one to set up the virtual server (VIP) and one to create a pool to go behind it. There's one to create a node (the physical host) and another to create a member of the pool (the virtual representation of a service). Then there's another call to add the member to the pool. Oh, and don't forget the calls to choose the load balancing algorithm, create a health monitor (and configure it), and then attach that health monitor to the member. And don't you forget the calls to set up the load balancing algorithm metrics, too. And persistence. Don't forget that for a stateful app. I'll stop there before you throw rotten vegetables at the screen to get me to stop.

The point is made, I think: the number of discrete API calls generally required to configure even the simplest of network services is pretty intimidating. It also introduces a significant number of potential failure points, which means whatever is driving the automated provisioning and configuration of this service must do a lot more than make API calls; it must also catch and handle errors (exceptions) and determine whether to roll back on error, try again, or both.
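To give a feel for what that looks like in practice, here is a rough sketch of that fine-grained sequence driven from a script, including the rollback bookkeeping each step drags along. The endpoints and payloads are hypothetical, not any specific device's actual API.

```python
# Hypothetical sketch: provisioning one load balancing service through many
# fine-grained calls, with manual rollback if any step fails.
import requests

BASE = "https://lb.example.com/api"   # hypothetical device management API

def create(path, payload, undo_stack):
    """POST one object and remember how to delete it if a later step fails."""
    resp = requests.post(f"{BASE}/{path}", json=payload, timeout=30)
    resp.raise_for_status()
    undo_stack.append(f"{BASE}/{path}/{payload['name']}")
    return resp

def provision_lb_service(app, node_ip, vip):
    undo = []
    try:
        create("nodes", {"name": f"{app}-node", "address": node_ip}, undo)
        create("monitors", {"name": f"{app}-http", "type": "http"}, undo)
        create("pools", {"name": f"{app}-pool", "monitor": f"{app}-http",
                         "lb_method": "least-connections"}, undo)
        create(f"pools/{app}-pool/members",
               {"name": f"{app}-node:80", "port": 80}, undo)
        create("persistence", {"name": f"{app}-cookie", "type": "cookie"}, undo)
        create("virtuals", {"name": f"{app}-vip", "destination": f"{vip}:443",
                            "pool": f"{app}-pool",
                            "persistence": f"{app}-cookie"}, undo)
    except requests.RequestException:
        # Any one of the six calls can fail; unwind whatever already succeeded.
        for url in reversed(undo):
            requests.delete(url, timeout=30)
        raise

if __name__ == "__main__":
    provision_lb_service("storefront", "10.1.20.11", "203.0.113.30")
```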
Coarser-grained (and application-driven) API calls and provisioning techniques reduce this risk to minimal levels. By requiring fewer calls and leveraging innate programmability driven by a holistic application approach, the potential for failure is much lower and the interaction is made much simpler.

This is why it's imperative to carefully consider which software-defined model you'll transition to for the future. A model that centralizes control and configuration can be a boon, but it can also be a negative if it forces a heavy API tax on the integration necessary to automate and orchestrate the network. A centralized control model that focuses on state rather than on the execution of policy automation and orchestration offers the benefits of increased flexibility and service provisioning velocity while maintaining more stable integration methods. (A sketch of this state-focused approach appears at the end of this post.)

The focus on improving operational consistency and predictability and introducing agility into the network is a good one, one that will help address the increasing difficulty of scaling the network both topologically and operationally to meet demands imposed by mobility, security and business opportunities. But choose wisely, as the means by which you implement the much-vaunted software-defined architecture of the future matters a great deal to how much success - and what portion of those benefits - you'll actually achieve.
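For contrast with the fine-grained sequence sketched earlier, here is what the coarser-grained, state-focused model described above might look like: submit one declarative description of the application service, then wait for the control point to report convergence. Again, the endpoint and field names are hypothetical placeholders rather than any particular controller's API.

```python
# Hypothetical sketch: hand a controller the desired state of an application
# service in a single call, then poll until it reports convergence.
import time

import requests

CONTROLLER = "https://controller.example.com/api/desired-state"  # hypothetical

DESIRED_STATE = {
    "application": "storefront",
    "virtual_address": "203.0.113.30",
    "port": 443,
    "members": [{"address": "10.1.20.11", "port": 80},
                {"address": "10.1.20.12", "port": 80}],
    "health_check": "http",
    "persistence": "cookie",
}

def apply_desired_state(doc, timeout_s=120):
    """Submit the whole service definition once, then wait for convergence."""
    resp = requests.put(f"{CONTROLLER}/{doc['application']}", json=doc, timeout=30)
    resp.raise_for_status()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{CONTROLLER}/{doc['application']}/status",
                              timeout=30).json()
        if status.get("converged"):
            return status
        time.sleep(5)
    raise TimeoutError("service definition did not converge in time")

if __name__ == "__main__":
    apply_desired_state(DESIRED_STATE)
```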