4 Things You Need in a Cloud Computing Infrastructure
Cloud computing is, at its core, about delivering applications or services in an on-demand environment. Cloud computing providers will need to support hundreds of thousands of users and applications/services and ensure that they are fast, secure, and available. To accomplish this, they'll need to build a dynamic, intelligent infrastructure with four core properties in mind: transparency, scalability, monitoring/management, and security.

Transparency

One of the premises of cloud computing is that services are delivered transparently, regardless of the physical implementation within the "cloud": the actual implementation of the service is obscured from the user. This is really another form of virtualization, where multiple resources appear to the user as a single resource. It is unlikely that a single server or resource will always be enough to satisfy demand for a given provisioned resource, which means transparent load balancing and application delivery will be required to enable the transparent horizontal scaling of applications on demand. The application delivery solution used to provide transparent load-balancing services will need to be automated and integrated into the provisioning workflow process so that resources can be provisioned on demand at any time.

For example, when a service is provisioned to a user or organization, it may need only a single server (real or virtual) to handle demand. But as more users access that service, it may require the addition of more servers (real or virtual). Transparency allows those additional servers to be added to the provisioned service without interrupting the service or requiring reconfiguration of the application delivery solution. If the application delivery solution is integrated via a management API with the provisioning workflow system, then transparency is also achieved through the automated provisioning and de-provisioning of resources.

Scalability

Obviously cloud computing service providers are going to need to scale up and build out "mega data centers". Scalability is easy enough if you've deployed the proper application delivery solution, but what about scaling the application delivery solution itself? That's often not so easy, and it usually isn't a transparent process; there's configuration work and, in many cases, re-architecting of the network. The potential to interrupt services is huge and, given that cloud computing service providers will be servicing hundreds of thousands of customers, unacceptable. The application delivery solution will need to transparently scale not only the service infrastructure but itself as well. That's a tall order, and something very rarely seen in an application delivery solution. Making things even more difficult is the need to scale on demand, in real time, in order to make the most efficient use of application infrastructure resources. Many postulate that this will require a virtualized infrastructure in which resources can be provisioned and de-provisioned quickly, easily and, one hopes, automatically. The "control node" often depicted in high-level diagrams of the "cloud computing mega data center" will need to provide on-demand dynamic application scalability.
This means integration with the virtualization solution and the ability to be orchestrated into a workflow or process that manages provisioning.

Intelligent Monitoring

In order to achieve the on-demand scalability and transparency required of a mega data center in the cloud, the control node, i.e. the application delivery solution, will need intelligent monitoring capabilities. It will need to understand when a particular server is overwhelmed and when network conditions are adversely affecting application performance. It needs to know the applications and services being served from the cloud and understand when behavior is outside accepted norms. While this functionality can certainly be implemented externally in a massive management and monitoring system, the control node sees clients, the network, and the state of the applications, so it is in the best position to understand the real-time conditions and performance of all involved parties without the heavy lifting of correlation that an external monitoring system would require. But more than just knowing when an application or service is in trouble, the application delivery mechanism should be able to take action based on that information. If the monitoring mechanism detects that an application is responding slowly, the delivery solution should adjust application requests accordingly. If the number of concurrent users accessing a service is reaching capacity, the application delivery solution should not only detect that through intelligent monitoring but also participate in provisioning another instance of the service in order to ensure service to all clients.

Security

Cloud computing is somewhat risky in that if the security of the cloud is compromised, potentially all services and associated data within the cloud are at risk. That means the mega data center must be architected with security in mind, and security must be considered a priority for every application, service, and network infrastructure solution that is deployed. The application delivery solution, as the "control node" in the mega data center, is necessarily one of the first entry points into the cloud data center and must itself be secure. It should also provide full application security - from layer 2 to layer 7 - in order to thwart potential attacks at the edge. Network security, protocol security, transport layer security, and application security are prime candidates for implementation at the edge of the cloud, in the control node. While there certainly will be, and should be, additional security measures deployed within the data center, stopping as many potential threats as possible at the edge of the cloud will alleviate much of the risk to the internal service infrastructure.

What are your plans for cloud computing?
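To make the intelligent-monitoring and provisioning integration described above more concrete, here is a minimal sketch of a scale-out loop. The functions for querying connection counts, provisioning a new instance, and registering it with the load-balancing pool are hypothetical placeholders, not any vendor's API; the thresholds are invented for illustration.

```python
# Conceptual sketch only: the monitoring, provisioning, and pool calls below are
# hypothetical placeholders standing in for real management APIs.
import time

CONNECTION_LIMIT = 1000          # assumed per-instance capacity threshold
CHECK_INTERVAL_SECONDS = 30

def get_active_connections(instance):
    """Placeholder: query the delivery tier's monitoring data for one instance."""
    raise NotImplementedError

def provision_instance(service):
    """Placeholder: ask the provisioning workflow for a new (real or virtual) server."""
    raise NotImplementedError

def add_to_pool(service, instance):
    """Placeholder: register the new server with the load-balancing pool."""
    raise NotImplementedError

def watch(service, instances):
    """Scale the service out when every instance is near its connection limit."""
    while True:
        loads = [get_active_connections(i) for i in instances]
        if loads and min(loads) >= CONNECTION_LIMIT:
            new_instance = provision_instance(service)   # provisioning workflow
            add_to_pool(service, new_instance)            # transparent to users
            instances.append(new_instance)
        time.sleep(CHECK_INTERVAL_SECONDS)
```

The point of the sketch is the integration, not the loop itself: the monitoring view and the provisioning workflow are wired together so new capacity joins the pool without interrupting the service.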
F5 and Versafe: Because Mobility Matters

#F5 #security #cloud #mobile #context Context means more visibility into devices, networks, and applications even when they're unmanaged.

Mobility is a significant driver of technology today. Whether it's mobility of applications between data center and cloud, between web and mobile device platforms, or of users moving from corporate to home to publicly available networks, mobility is a significant factor impacting all aspects of application delivery, but security in particular.

Server virtualization, BYOD, SaaS, and remote work all create new security problems for data center managers. No longer can IT build a security wall around its data center; security must be provided throughout the data and application delivery process. This means that the network must play a key role in securing data center operations as it "touches" and "sees" all traffic coming in and out of the data center.
-- Lee Doyle, GigaOM, "Survey: SDN benefits unclear to enterprise network managers" 8/29/2013

It's a given that corporate data and access to applications need to be protected when delivered to locations outside corporate control. Personal devices, home networks, and cloud storage all introduce the risk of information loss through a variety of attack vectors. But that's not all that poses a risk. Mobility of customers, too, is a potential disaster waiting to happen, as control over behavior as well as technology is completely lost. Industries that serve consumers and use technology to facilitate business transactions are particularly at risk from consumer mobility and, more importantly, from the attackers that target them. If the risk posed by successful attacks - phishing, pharming and social engineering - isn't enough to give the CISO an ulcer, the cost of supporting sometimes technically challenged consumers will. Customer service and support has become in recent years not only a help line for the myriad web and mobile applications offered by an organization, but a security help desk as well, as consumers confused by e-mail and web attacks turn to those support lines.

F5 and Security

F5 views security as a holistic strategy that must be able to dig into not just the application and the corporate network, but into the device and application as well as the networks over which users access both mobile and web applications. That's where Versafe comes in, with its unique combination of client-side intelligent visibility and subscription-based security service. Versafe's technology combines client-side visibility and logic with expert-driven security operations to ensure real-time detection of a variety of threat vectors common to web and mobile applications alike. Its coverage of browsers, devices and users is comprehensive. Every platform, every user and every device can be protected from a vast array of threats, including those not covered by traditional solutions, such as session hijacking. Versafe approaches web fraud by monitoring the integrity of the session data that the application expects to see between itself and the browser. This method isn't vulnerable to 'zero-day' threats: malware variants, new proxy/masking techniques, or fraudulent activity originating from devices, locations or users who haven't yet accumulated digital fraud fingerprints.

Continuous Delivery Meets Continuous Security

Versafe's solution can accomplish such comprehensive coverage because it's clientless, relying instead on injection into web content in real time. That's where F5 comes in.
Using F5 iRules, the appropriate Versafe code can be injected dynamically into web pages to scan for and detect potential application threats, including script injection, trojans, and pharming attacks. Injection in real time through F5 iRules eliminates reliance on scanning and updating heterogeneous endpoints and, of course, on consumers installing and maintaining such agents. This allows the delivery process to scale seamlessly along with users and devices and reasserts control over processes and devices not under IT control, essentially securing unsecured devices and lines of communication.

Injection-based delivery also means no impact on application developers or applications, which means it won't reduce application development and deployment velocity. It also enables real-time, up-to-the-minute detection and protection against threats, because the injected Versafe code is always communicating with the latest security information maintained by Versafe at its cloud-based Security Operations Center. User protection is always on, no matter where the user might be or on what device, and it doesn't require updating or action on the part of the user. The clientless approach also means no impact on user experience: Versafe takes advantage of modern browser technology to execute with no performance hit. That's a big deal, because a variety of studies on real behavior indicate that a performance hit of even a second on load times can impact revenue and user satisfaction with web applications.

Both the web and mobile offerings from Versafe further ensure transaction integrity by assessing a variety of device-specific and behavioral variables such as device ID, mouse and click patterns, sequencing of and timing between actions, and continuous monitoring of JavaScript functions. These checks are a sort of automated Turing test: a system able to determine whether an end user is really a human being or a bot bent on carrying out malicious activity. But it's not just about the mobility of customers; it's also about the mobility - and versatility - of modern attackers. To counter a variety of brand, web and domain abuse, Versafe's cloud-based 24x7x365 Security Operations Center and Malware Analysis Team proactively monitors for organization-specific fraud and attack scheming across all major social and business networks, enabling rapid detection and real-time alerting of suspected fraud.

EXPANDING the F5 ECOSYSTEM

The acquisition of Versafe and its innovative security technologies expands the F5 ecosystem by exploiting the programmable nature of its platform. Versafe technology supports and enhances F5's commitment to delivering context-aware application services by further extending our visibility into the user and device domain. Its cloud-based subscription service complements F5's IP Intelligence Service, which provides a variety of similar service-based data that augments F5 customers' ability to make context-aware decisions based on security and location data. Coupled with existing application security services such as web application and application delivery firewalls, Versafe adds to the existing circle of F5 application security services comprising user, network, device and application, while adding brand and reputation protection to an already robust security service catalog. We're excited to welcome Versafe into the F5 family and look forward to the opportunity to expand our portfolio of delivery services.
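As a rough illustration of the injection-based delivery described above, the sketch below (written in Python for readability, not in the TCL that actual iRules use) shows the general pattern: an HTML response is rewritten at the delivery tier so a monitoring script is included before the page reaches the browser. The script URL is a made-up placeholder.

```python
# Conceptual illustration of response-body script injection at a proxy/delivery tier.
# This is not an F5 iRule; the snippet URL below is a hypothetical placeholder.
MONITOR_SNIPPET = '<script src="https://example.invalid/monitor.js"></script>'

def inject_monitoring(html_response: str) -> str:
    """Insert a monitoring script into an HTML response before it reaches the client."""
    marker = "</head>"
    if marker in html_response:
        return html_response.replace(marker, MONITOR_SNIPPET + marker, 1)
    return html_response  # non-HTML or unexpected content passes through untouched

page = "<html><head><title>demo</title></head><body>hello</body></html>"
print(inject_monitoring(page))
```

Because the rewrite happens in the data path, nothing has to be installed or maintained on the endpoint, which is the property the article highlights.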
More information on Versafe:

Versafe
Versafe | Anti-Fraud Solution (Anti Phishing, Anti Trojan, Anti Pharming)
Versafe Identifies Significant Joomla CMS Vulnerability & Corresponding Spike in Phishing, Malware Attacks
Joomla Exploit Enabling Malware, Phishing Attacks to be Hosted from Genuine Sites
'Eurograbber' online banking scam netted $47 million
SDN and OpenFlow are not Interchangeable

#SDN #OpenFlow They aren't, seriously. They are not synonyms. Stop conflating them.

New technology always runs into problems with terminology if it's lucky enough to become the "next big thing." SDN is currently in that boat, with a nearly cloud-like variety of definitions and accompanying benefits. I've seen SDN defined so tightly as to exclude any model that doesn't include OpenFlow. Conversely, I've seen it defined so vaguely as to include pretty much any network that might have a virtual network appliance deployed somewhere in the data path. It's important to remember that SDN and OpenFlow are not synonymous. SDN is an architectural model. OpenFlow is an implementation API. So is XMPP, the southbound protocol used by Arista's CloudVision solution. So are the potentially vendor-specific southbound protocols that might be included in OpenDaylight's model. OpenFlow is one possible southbound API protocol, admittedly one that is rapidly becoming the favored son of SDN. It's certainly gaining mindshare: a plurality of respondents to a recent InformationWeek survey on SDN had at least a general idea of what OpenFlow is all about, with nearly half indicating familiarity with the protocol.

The reason it is important not to conflate OpenFlow with SDN is that both the API and the architecture are beneficial on their own. There is no requirement that an OpenFlow-enabled network infrastructure must be part of an SDN, for example. Organizations looking for benefits around management and automation of the network might simply choose to implement an OpenFlow-based management framework using custom scripts or software, without adopting an SDN architecture wholesale. Conversely, there are plenty of examples of SDN offerings that do not rely on OpenFlow, but rather on some other protocol of choice. OpenFlow is, after all, a work in progress, and there are capabilities required by organizations that simply don't exist yet in the current specification - and thus in implementations.

OpenFlow Lacks Scope

Even ignoring the scalability issues with OpenFlow, there are other reasons why OpenFlow might not be THE protocol - or the only protocol - used in SDN implementations. Certainly for layers 2-3, OpenFlow makes a lot of sense. It is designed specifically to carry L2-3 forwarding information from the controller to the data plane. What it is not designed to do is transport or convey forwarding information at the higher layers of the stack, such as L4-7, where the data plane might require application-specific details on which to make forwarding decisions. That means there's room for another protocol, or an extension of OpenFlow, to enable inclusion of critical L4-7 data path elements in an SDN architecture. The fact that OpenFlow does not address L4-7 (and is not likely to anytime soon) is evident in the recent promulgation of service chaining proposals. Service chaining is rising as the way in which L4-7 services will be included in SDN architectures. Lest we lay all the blame on OpenFlow for this direction, remember that there are issues around scaling and depth of visibility with SDN controllers as they relate to directing L4-7 traffic, and thus the SDN architecture would likely have evolved to alleviate those issues anyway.
But lack of support in OpenFlow for L4-7 is another line-item justification for why the architecture is being extended: it lacks the scope to deal with the more granular, application-focused rules required. Thus, it is important to recognize that SDN is an architectural model, and OpenFlow an implementation detail. The two are not interchangeable, and as SDN itself matures we will see more changes to the core assumptions on which the architecture is based that will require adaptation.
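To make the scope difference concrete, here is a small, purely illustrative contrast between a header-based L2-3 forwarding rule and the kind of content-based decision an L4-7 service has to make. The structures and pool names are invented for the example; they are not actual OpenFlow messages or any product's configuration.

```python
# Illustrative only: simplified rule structures, not real OpenFlow messages.

# An L2-3 style rule matches on packet headers the data plane can see directly.
l2_3_rule = {
    "match": {"ip_dst": "10.0.0.5", "tcp_dst_port": 80},
    "action": {"forward_to_port": 3},
}

# An L4-7 decision needs application content (here, part of a SIP or HTTP payload),
# which requires deep content inspection before a forwarding choice can be made.
def l4_7_route(payload: bytes) -> str:
    if b"INVITE" in payload:          # e.g. a SIP method extracted from the message
        return "sip-service-pool"
    if b"/api/" in payload:           # e.g. an HTTP URI prefix
        return "api-pool"
    return "default-pool"

print(l2_3_rule["action"])
print(l4_7_route(b"INVITE sip:alice@example.com SIP/2.0"))
```

The first rule can be expressed in header match fields; the second cannot, which is the gap the article argues service chaining and L4-7-aware platforms are filling.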
The Problem with Consumer Cloud Services...

…is that they're consumer #cloud services.

While we're all focused heavily on the challenges of managing BYOD in the enterprise, we should not overlook or understate the impact of consumer-grade services within the enterprise. Just as employees bring their own devices to the table, so too do they bring a smattering of consumer-grade "cloud" services to the enterprise. Such services are generally woefully inappropriate for enterprise use. They are focused on serving a single consumer, with authentication and authorization models that support that focus. There are no roles, generally no group membership, and there's certainly no oversight from any mediating authority other than the service provider. This is problematic for enterprises: it eliminates the ability to manage access for large groups of people, to ensure authority to access based on employee role and status, and it provides no means of integration with existing ID management systems.

Integrating consumer-oriented cloud services into enterprise workflows and systems is a Sisyphean task. Cloud services replicating what have traditionally been considered enterprise-class services, such as CRM and ERP, are designed with the need to integrate. Consumer-oriented services are designed with the notion of integrating with other consumer-grade services, not enterprise systems. They lack even the most rudimentary enterprise-class concepts such as RBAC, group-based policy and managed access. SaaS supporting traditionally enterprise-class concerns such as CRM and e-mail has begun to enable the integration with the enterprise necessary to overcome what is, according to a survey conducted by CloudConnect and Everest Group, the number two inhibitor of cloud adoption amongst respondents. The lack of integration points into consumer-grade services is problematic for both IT and the service provider. For the enterprise, there is a need to integrate with, and to control the processes associated with, consumer-grade cloud services. As with many SaaS solutions, the ability to collaborate with data-center-hosted services as a means to integrate with existing identity and access control services is paramount to assuaging the concerns that currently exist, given the more lax approach to access and identity in consumer-grade services. Integration capabilities - APIs - that enable enterprises to exert even rudimentary control over access are a must for consumer-grade SaaS looking to find a path into the enterprise. Not only is it a path to monetization (enterprise organizations are a far more consistent source of revenue than ads or income derived from the sale of personal data), but it also provides the opportunity to overcome the stigma associated with consumer-grade services that has already resulted in "bans" on such offerings within large organizations.

There are fundamentally three functions consumer-grade SaaS needs to offer to entice enterprise customers:

1. Control over AAA. Enterprises need the ability to control who accesses services and to correlate access with authoritative sources of identity and role. That means the ability to coordinate a log-in process that primarily relies upon corporate IT systems to assert access rights, and the capability of the cloud service to accept that assertion as valid. APIs, SAML, and other identity management techniques are invaluable tools in enabling this integration.
Alternatively, enterprise-grade management within the tools themselves can provide the level of control required by enterprises to ensure compliance with a variety of security and business-oriented requirements.

2. Monitoring. Organizations need visibility into what employees (or machines) may be storing "in the cloud" and what data is being exchanged with what system. This visibility is necessary for a variety of reasons, with regulatory compliance most often cited.

3. Mobile Device Management (MDM) and Security. Because one of the most alluring aspects of consumer cloud services is nearly ubiquitous access from any device and any location, the ability to integrate #1 and #2 via MDM and mobile-friendly security policies is paramount to enabling (willing) enterprise adoption of consumer cloud services.

While most of the "consumerization" of IT tends to focus on devices, "bring your own services" should also be a very real concern for IT. And if consumer cloud service providers think about it, they'll realize there's a very large market opportunity for them to support the needs of enterprise IT while maintaining their gratis offerings to consumers.
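As a sketch of the first item, control over AAA: the snippet below shows, in drastically simplified form, what it means for a cloud service to accept an identity assertion from the corporate side rather than run its own consumer login. A real integration would use SAML or OpenID Connect; the shared secret, claims, and function names here are invented for illustration only.

```python
# Simplified stand-in for a federated login flow. Real deployments would rely on
# SAML or OpenID Connect; the shared secret and claims below are illustrative.
import base64
import hashlib
import hmac
import json

SHARED_SECRET = b"example-key-agreed-with-the-enterprise"   # hypothetical

def issue_assertion(user: str, role: str) -> str:
    """Corporate IdP side: assert identity and role for the SaaS to consume."""
    claims = base64.urlsafe_b64encode(json.dumps({"user": user, "role": role}).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SHARED_SECRET, claims, hashlib.sha256).digest())
    return (claims + b"." + sig).decode()

def accept_assertion(token: str) -> dict:
    """SaaS side: trust the corporate assertion instead of a consumer login."""
    claims, sig = token.encode().split(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SHARED_SECRET, claims, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("assertion not issued by a trusted IdP")
    return json.loads(base64.urlsafe_b64decode(claims))

print(accept_assertion(issue_assertion("alice@corp.example", "finance-admin")))
```

The point is the division of responsibility: the enterprise asserts identity and role from its authoritative systems, and the service merely verifies and honors that assertion.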
Cloud: Commoditizing End-Users

It's not just commoditization of business functions (SaaS) or IT infrastructure (IaaS) - it's the users, too.

Prioritization. It's built into nearly every technology, particularly technology that services network traffic. Rate shaping. Queuing. Coloring bits. We do a lot of interesting gyrations with technology to ensure that some user traffic and requests are more equal than others. Today we still do the same thing, but it's done in different ways. Software as a Service charges a premium for "extra" API calls, for example, and if you want access to premium content there's sure to be a paywall in front of it. But that's at the service level. It's not the same as prioritization of individual users; of affording specific users privileges of some kind based either on their position (no, no, the CEO can't have his e-mail delayed; never apply bandwidth-limiting policies to him) or on their customer status (they're a "gold" customer, so make sure their requests go to the fastest application instance). These kinds of customer privileges have always existed and in some industries remain a staple reward or an operational requirement. Cloud, however, commoditizes users, affording operations no way to distinguish between traffic from the CEO and traffic from, well, me.

IT'S THE NETWORK

That's because the mechanisms by which traffic and requests are prioritized exist in the network, in the data path. By the time the request gets to the Exchange server, it's already too late. The Exchange server doesn't know that three upstream switches and routers have queued the packets comprising the CEO's request, causing a slight but noticeable delay. It is the infrastructure - the network - that necessarily provides this service. Prioritization of traffic through a series of tubes interconnected by what are essentially processing centers has to occur at those processing centers, before the traffic arrives at its destination. The effect is commoditization of users. Every user is the same; every request, equal. There is no special treatment for anyone, period. Part of this is due to the relinquishment of control over the network inherent in a cloud-based environment; part of it is due to the failure of that same network to pass on awareness of the user and the context in which requests are made. The inability to deploy policies designed to give preference to some requests over others - for whatever reason the business thinks necessary - means users are commoditized. They become a sequence number, nothing more, nothing less.

For many applications and business models this may be a non-issue. But for industries and organizations that in part monetize (or have monetized in the past) the ability to offer "better or faster" service on an individual basis, moving to cloud will have a significant impact and may require changes not only to operations but to the business. Some capability to differentiate levels of service on a per-user basis may return as more mature services are offered by cloud providers, but the level of differentiation and prioritization IT has known in the data center will never completely return in the cloud. Organizations that may be impacted by this commoditization, in the form of frustrated users or churning customers, will need to consider other ways to decommoditize their users.
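For a concrete picture of the per-user prioritization being lost, here is a toy sketch of the idea: requests tagged with a user tier are dequeued in priority order, the way upstream network devices can favor some traffic over the rest. The tiers and ordering are invented for the example and stand in for the rate-shaping and queuing policies described above.

```python
# Toy illustration of per-user prioritization; tiers are invented for the example.
import heapq
from itertools import count

PRIORITY = {"executive": 0, "gold": 1, "standard": 2}   # lower number = served first
_order = count()                                          # stable FIFO within a tier
queue = []

def enqueue(user_tier: str, request: str) -> None:
    heapq.heappush(queue, (PRIORITY.get(user_tier, 2), next(_order), request))

def dequeue() -> str:
    return heapq.heappop(queue)[2]

enqueue("standard", "GET /report")
enqueue("gold", "GET /dashboard")
enqueue("executive", "GET /mail")
print(dequeue())   # the executive's request is handled first
```

In a commodity cloud data path there is no place to hang this kind of policy, which is precisely the article's point.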
In the Cloud Differentiation Means Services

And integration. Don't forget the integration.

Scott Bils has a post on the "Five Mistakes that Enterprise Cloud Service Providers are Making" over on Leverhawk. Points four and five were particularly interesting because it seems there's a synergistic opportunity there. Point number four from Scott:

Omitting SaaS and PaaS: Cloud infrastructure service providers have little incentive to migrate customers to public cloud SaaS offerings such as Salesforce.com or Workday. For many customers, migrating legacy apps to SaaS models will be the right answer. Many enterprise cloud service providers conveniently omit this lever from their transformation story and lose customer credibility as a result.

And point number five:

Failing to differentiate: Many vendors position themselves as providing managed services that make cloud models "enterprise ready." The problem is that every other vendor is saying the exact same thing. Enterprise cloud service providers need to think harder about what their distinctive customer value proposition really is.

I will not, for the sake of brevity and out of consideration for your time, offer an extensive list of my own posts on this very point, save this one. Suffice to say, differentiation of services is something I've noted in the past and continue to note, mostly because, as Scott points out, it's a problem. What caught my eye here is the relationship between these two points, specifically the relationship between SaaS and services in IaaS. Scott is right when he points out the hyper-adoption rates of SaaS as compared to IaaS. As has often been pointed out, SaaS enjoys higher adoption rates than any other cloud model. A Gartner/Goldman Sachs Cloud CIO survey in 2011 noted that 67% of respondents "already do" SaaS. The survey indicated that 75% would be using SaaS by 2017, a modest number, I think, if you look at the rates of adoption over the past few years.

Combined with this is the interest in hybrid cloud models. While the term usually points to the marriage of on- and off-premise cloud environments, hybrid cloud is more generically the joining of two disparate cloud environments. That could also be the joining of two public providers irrespective of model (SaaS, IaaS, PaaS). What IaaS providers can do to address both points four and five simultaneously is offer services specifically designed to integrate with a variety of at least SaaS offerings. Services that provide federation and/or SSO for and with Salesforce.com or Google, for example. Services that differentiate the IaaS provider simply by making integration easier for the ever-increasing number of enterprises adopting SaaS solutions. IaaS differentiation is not going to come through more varied instance sizes and configurations or through price wars. Enterprise customers understand the value - the business and operational value - of services, and pricing is less a deciding factor than it is just one more factor in the overall equation. The value offered by pre-integrated services that make building a hybrid cloud easier, faster and more reliable carries greater weight than "is it cheap." Vendors who offer APIs for the purposes of external control and ultimately integration know this already. While having an API has become table stakes, what is more valued by the customer is the availability of pre-integrated, pre-tested, and validated integration with other enterprise-class systems.
Having an API is great, but having existing, validated integration with VMware vCD, for example, is of considerable value and differentiates one solution from another. IaaS providers would do well to consider how providing similar services - pre-integrated and validated - would immediately differentiate their entire offering and provide the confidence and incentive for customers to choose their service over another.
Curing the Cloud Performance Arrhythmia

#cloud #webperf Maintaining Consistent Performance of Elastic Applications in the Cloud Requires the Right Mix of Services

Arrhythmias are most often associated with the human heart. The heart beats in a specific, known and measurable rhythm to deliver oxygen to the entire body in a predictable fashion. Arrhythmias occur when the heart beats irregularly. Some arrhythmias are little more than annoying, such as PVCs, but others can be life-threatening, such as ventricular fibrillation. All arrhythmias should be actively managed. Inconsistent application performance is much like a cardiac arrhythmia. Users may experience a sudden interruption in performance at any time, with no real rhyme or reason. In cloud computing environments this is more likely, because there are relatively few, if any, means of managing these incidents. A 2011 global study on cloud conducted on behalf of Alcatel-Lucent showed that while security is still top of mind for IT decision makers considering cloud computing, performance - in particular reliable performance - ranks higher on the list of demands than security or costs.

THE PERFORMANCE PRESCRIPTION

One of the underlying reasons for performance arrhythmias in the cloud is a lack of attention paid to TCP management at the load balancing layer. TCP has not gotten any lighter during our migration to cloud computing, and while most enterprise implementations have long since taken advantage of TCP management capabilities in the data center to redress inconsistent performance, these techniques are either not available or simply not enabled in cloud computing environments. Two capabilities critical to managing performance arrhythmias of web applications are caching and TCP multiplexing. These two technologies, enabled at the load balancing layer, reduce the burden of delivering content on web and application servers by offloading those tasks to a service specifically designed to perform them, fast and reliably. In doing so, the load balancer is able to process the 10,000th connection with the same vim and verve as the first. This is not true of servers, whose ability to process connections degrades as load increases, which in turn raises latency in response times and manifests as degrading performance to the end user. Failure to cache HTTP objects outside the web or application server has a similarly negative impact, because the same static content must be repetitively served to every user, chewing up valuable resources in a way that eventually burdens the server and degrades performance. Caching such objects at the load balancing layer offloads the burden of processing and delivering them, enabling servers to more efficiently process those requests that require business logic and data.

FAILURE in the CLOUD

Interestingly, customers are very aware of the disparity between cloud computing and data center environments in terms of the services available. In a recent article on this topic from Shamus McGillicuddy, Tom Hollingsworth, a senior network engineer with United Systems, an Oklahoma City-based value-added reseller (VAR), put it this way: "I want to replicate [in the cloud with] as much functionality [customers] have for load balancers, firewalls and things like that." So why are cloud providers resistant to offering such services? Shamus offered some insight in the aforementioned article, citing maintenance and scalability as inhibitors to cloud provider offerings in the L4-7 service space.
Additionally, the reality is that such offload technologies, while improving performance and making it more consistent, also have the side effect of making more efficient use of the resources available to the application. This ultimately means a single virtual instance can scale more efficiently, which means the customer needs fewer instances to support the same user base. That translates into fewer instances for the provider, which negatively impacts their ARPU (average revenue per user), one of the key metrics used to evaluate the health and growth of providers today. But the reality is that providers will need to start addressing these concerns if they are to woo enterprise customers and convince them the cloud is where it's at. Enabling consistent performance is a requirement, and a decade of experience has shown customers that consistent performance in a scalable environment requires more than simple load balancing - it requires the very L4-7 services that do not exist in provider environments today.

Referenced blogs & articles:

Layer 4-7 cloud networking still scarce in IaaS market
Understanding the market opportunity for carrier cloud services
The Need for (HTML5) Speed
SPDY versus HTML5 WebSockets
QoS without Context: Good for the Network, Not So Good for the End user
The Cloud Integration Stack
HTML5 WebSockets: High-Speed Infrastructure Integration Bus?
Cloud Delivery Model is about Ops, not Apps
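To illustrate the caching half of the offload story discussed above, here is a minimal sketch of the pattern: a proxy in front of the pool answers repeat requests for static objects itself, so origin servers only see requests that need business logic. The cacheability policy and origin call are stand-ins invented for the example, not any particular product's behavior.

```python
# Conceptual sketch of static-object offload at the load-balancing layer.
# The cacheability rule and origin fetch below are illustrative placeholders.
CACHE: dict = {}
CACHEABLE_SUFFIXES = (".css", ".js", ".png", ".jpg")

def fetch_from_origin(path: str) -> bytes:
    """Placeholder for proxying the request to a pool member."""
    return b"<origin response for %s>" % path.encode()

def handle_request(path: str) -> bytes:
    if path.endswith(CACHEABLE_SUFFIXES):
        if path not in CACHE:
            CACHE[path] = fetch_from_origin(path)   # first request populates the cache
        return CACHE[path]                          # later requests never touch the origin
    return fetch_from_origin(path)                  # dynamic content still goes upstream

print(handle_request("/static/app.js"))
print(handle_request("/static/app.js"))  # served from the proxy's cache
```

TCP multiplexing works on the same offload principle, but at the connection layer: many short-lived client connections are funneled over a smaller set of long-lived connections to the servers.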
F5 Friday: Programmability and Infrastructure as Code

#SDN #ADN #cloud #devops What does that mean, anyway?

SDN and devops share some common themes. Both focus heavily on the notion of programmability in network devices as a means to achieve specific goals. For SDN it's flexibility and rapid adaptation to changes in the network. For devops, it's more a focus on the ability to treat "infrastructure as code" as a way to integrate into automated deployment processes. Each of these notions is just different enough that systems supporting one don't automatically support the other. An API focused on management or configuration doesn't necessarily provide the flexibility of execution exhorted by SDN proponents as a significant benefit to organizations. And vice versa.

INFRASTRUCTURE as CODE

Devops is a verb; it's something you do. Optimizing application deployment lifecycle processes is a primary focus, and to do that many would say you must treat "infrastructure as code." Doing so enables integration and automation of deployment processes (including configuration and integration), which allows operations to scale along with the environment and demand. The result is automated best practices: the codification of policy and process that assures repeatable, consistent and successful application deployments. F5 supports the notion of infrastructure as code (and has since 2003 or so) in two ways: iControl and iApp.

iControl: iControl, the open, standards-based API for the entire BIG-IP platform, remains the primary integration point for partners and customers alike. Whether it's inclusion in Opscode Chef recipes or pre-packaged solutions with systems from HP, Microsoft, or VMware, iControl offers the ability to manage the control plane of BIG-IP from just about anywhere. iControl is service-enabled and has been accessed and integrated through more programmatic languages than you can shake a stick at. Python, Perl, Java, PHP, C#, PowerShell... if it can access web-based services, it can communicate with BIG-IP via iControl.

iApp: A later addition to the BIG-IP platform, iApp is best-practice application delivery service deployment, codified. iApps are service- and application-oriented, enabling operations and consumers of IT as a Service to more easily deploy requisite application delivery services without requiring intimate knowledge of the hundreds of individual network attributes that must be configured. iApp is also used in conjunction with iControl to better automate and integrate application delivery services into an IT as a Service environment. Using iApp to codify performance and availability policies based on application and business requirements, consumers - through pre-integrated solutions - can simply choose an appropriate application delivery "profile" along with their application to ensure not only deployment but production success.

Infrastructure as code is an increasingly important view to take of the provisioning and deployment processes for network and application delivery services, as it enables more consistent, accurate policy configuration and deployment. Consider research from Dimension Data that found the "total number of configuration violations per device has increased from 29 to 43 year over year -- and that the number of security-related configuration errors (such as AAA Authentication, Route Maps and ACLS, Radius and TACACS+) also increased.
AAA Authentication errors in particular jumped from 9.3 per device to 13.6, making it the most frequently occurring policy violation." The ability to automate a known "good" configuration and policy when deploying application and network services can decrease the risk of these violations and ensure a more consistent, stable (and ultimately secure) network environment.

PROGRAMMABILITY

Less with "infrastructure as code" (devops) and more so with SDN comes the notion of programmability. On the one hand, this notion squares well with the "infrastructure as code" concept, as it requires infrastructure to be enabled in such a way as to provide the means to modify behavior at run time, most often through support for a common standard (OpenFlow is the darling standard du jour for SDN). For SDN, this tends to focus on the forwarding information base (FIB), but broader applicability has been noted at times and will no doubt continue to gain traction. The ability to "tinker" with emerging and experimental protocols, for example, is one application of programmability of the network. Rather than wait for vendor support, it is proposed that organizations can deploy and test support for emerging protocols through OpenFlow-enabled networks. While this capability is likely not something large production networks would undertake, the notion that emerging protocols could be supported on demand, rather than on a vendor-driven timeline, is often desirable.

Consider support for SIP, before UCS became nearly ubiquitous in enterprise networks. SIP is a message-based protocol, requiring deep content inspection (DCI) capabilities to extract AVP codes as a means to determine routing to specific services. Long before SIP was natively supported by BIG-IP, it was supported via iRules, F5's event-driven network-side scripting language. iRules enabled customers requiring support for SIP (for load balancing and high-availability architectures) to program the network by intercepting, inspecting, and ultimately routing based on the AVP codes in SIP payloads. Over time, this functionality was productized and became a natively supported protocol on the BIG-IP platform. Similarly, iRules enable a wide variety of dynamism in application routing and control by providing a robust environment in which to programmatically determine which flows should be directed where, and how. Leveraging programmability in conjunction with DCI affords organizations the flexibility to do - or do not - as they desire, without requiring them to wait for hot fixes, new releases, or new products.

SDN and ADN – BIRDS of a FEATHER

The very same trends driving SDN at layers 2-3 are the ones that have been driving ADN (application delivery networking) for nearly a decade:

Five trends in networking are driving the transition to software defined networking and programmability. They are:
• User, device and application mobility;
• Cloud computing and services;
• Consumerization of IT;
• Changing traffic patterns within data centers;
• And agile service delivery.
The trends stretch across multiple markets, including enterprise, service provider, cloud provider, massively scalable data centers -- like those found at Google, Facebook, Amazon, etc. -- and academia/research. And they require dynamic network adaptability and flexibility and scale, with reduced cost, complexity and increasing vendor independence, proponents say.
-- Five needs driving SDNs

Each of these trends applies equally to the higher layers of the networking stack, and they are addressed by a fully programmable ADN platform like BIG-IP. Mobile mediation, cloud access brokers, cloud bursting and balancing, context-aware access policies, granular traffic control and steering, and a service-enabled approach to application delivery are all part and parcel of an ADN. From devops to SDN to mobility to cloud, the programmability and service-oriented nature of the BIG-IP platform enables them all.

The Half-Proxy Cloud Access Broker
Devops is a Verb
SDN, OpenFlow, and Infrastructure 2.0
Devops is Not All About Automation
Applying 'Centralized Control, Decentralized Execution' to Network Architecture
Identity Gone Wild! Cloud Edition
Mobile versus Mobile: An Identity Crisis
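As a rough sketch of what treating "infrastructure as code" looks like in practice, the snippet below captures an application delivery service as a declarative definition that a deployment pipeline could apply repeatably. The payload shape, names, and the apply step are hypothetical placeholders; in a real pipeline the definition would be pushed through a management API such as iControl, whose actual interface is not shown here.

```python
# Sketch of "infrastructure as code": a delivery service captured as data and applied
# by an automated process. The structure and apply step are illustrative placeholders.
import json

SERVICE_DEFINITION = {
    "virtual_server": {"name": "www_vip", "destination": "203.0.113.10:443"},
    "pool": {
        "name": "www_pool",
        "load_balancing": "least-connections",
        "members": ["10.0.0.11:8080", "10.0.0.12:8080"],
        "health_monitor": "https",
    },
}

def apply(definition: dict) -> None:
    """Placeholder: a real pipeline would push this through the management API;
    here we only show the shape of a repeatable, codified deployment."""
    print("deploying", definition["virtual_server"]["name"])
    print(json.dumps(definition, indent=2))

apply(SERVICE_DEFINITION)
```

Because the same definition is applied every time, the "known good" configuration travels with the application instead of living in someone's head.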
Ecosystems are Always in Flux

#devops An ecosystem-based data center approach means accepting the constancy of change.

It is an interesting fact of life for aquarists that the term "stable" does not actually mean a lack of change. On the contrary, it means that the core system is maintaining equilibrium at a constant rate. That is, the change is controlled and managed automatically, either by the system itself or through the use of mechanical and chemical assistance. Sometimes those systems need modification or break (usually when you're away from home, don't know it, and couldn't do anything about it if you did, but when you come back, whoa, you're in a state of panic about it) and must be repaired or replaced and then reinserted into the system. The removal and subsequent replacement introduces more change as the system attempts to realign itself to the temporary measures put into place, and then again when the permanent solution is reintroduced. A recent automatic top-off system failure reminded me of this valuable lesson as I tried to compensate for the loss while waiting for a replacement. This 150-gallon tank is its own ecosystem, and it tried to compensate for the fluctuations in salinity (the salt-to-water ratio) caused by a less-than-perfect stop-gap measure I used while waiting for a more permanent solution. As I was checking things out after the replacement pump had been put in place, it occurred to me that the data center is in a similar position: an ecosystem constantly in flux, and one in which devops needs to automate as much as possible in a repeatable fashion to avoid incurring operational risk.

PROCESS is KEY

The reason my temporary, stop-gap measure was less than perfect was that the pump I used to simulate the auto top-off process was not the same as the one that failed. The two systems were operationally incompatible. One monitored the water level and automatically pumped fresh water into the tank to keep the level stable, while the other required an interval-based cycle that pumped fresh water for a specified period of time and then shut off. Configuring it correctly meant determining the actual flow rate (as opposed to the stated maximum flow rate) and doing some math to figure out how much water was actually lost on a daily basis (which is variable) and how long to run the pumps to replace that loss over a 24-hour period. Needless to say, I did not get this right, and it had unintended consequences. Because the water level rose too far, a siphon break failed, which resulted in even more water being pumped into the system, driving it close to hypo-salinity (not enough salt in the water) and threatening the creatures sensitive to salinity levels (many corals and some invertebrates are particularly sensitive to fluctuations in salinity, among other variables). The end result? By not nailing down the process, I'd opened a hole through which the stability of the ecosystem could be compromised. Luckily, I discovered it quickly because I monitor the system on a daily basis, but if I'd been away, well, disaster may have greeted me on return. The key failure in this tale of near-disaster was process: the poor automation of (what should be) a simple process. This is not peculiar to the ecosystem of an aquarium, a fact of which Tufin Technologies recently reminded us when it published the results of a survey focused on change management.
The survey found that organizations are acutely aware of the impact of poorly implemented processes and the (often negative) impact of manual processes in the realm of security:

66% of the sample felt their change management processes do or could place the organization at risk of a breach. The main reasons cited were lack of formal processes (56%), followed by manual processes with too many steps or people in the process (29%).
-- Tufin Technologies Survey Reveals Most Organizations Believe Their Change Management Processes Could Lead to a Network Security Breach

DEVOPS is CRITICAL to MAINTAINING a HEALTHY DATA CENTER ECOSYSTEM

The Tufin survey focused on security change management (it is a security-focused organization, so no surprise there), but as security, performance, and availability are intimately related, it seems logical to extrapolate that similar results would emerge if we surveyed folks on whether their change management processes might incur some form of operational risk. One of the goals of devops is to enable successful and repeatable application deployments through automation of the operational processes associated with a deployment. That means provisioning the appropriate security, performance, and availability services and policies required to support the delivery of the application. Change management processes are a part of the deployment process - or if they aren't, they should be - to ensure success and avoid the risks associated with a lack of formal processes or too many cooks in the kitchen following highly complex manual recipes. Automation of configuration and policy-related tasks, as well as orchestration of accepted processes, is critical to maintaining a healthy data center ecosystem in the face of application updates, changes in security and access policies, and the adjustments necessary to combat attacks or absorb legitimate sudden spikes in demand. More focus on services and policy as a means to not only deploy but maintain application deployments is necessary to enable IT to continue transforming from its traditional static, manual environment into a dynamic and more fluid ecosystem able to adapt to the natural fluctuations that occur in any ecosystem, including that of the data center.

The Pythagorean Theorem of Operational Risk
At the Intersection of Cloud and Control…
Cloud Computing and the Truth About SLAs
IT Services: Creating Commodities out of Complexity
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility
The Future of Cloud: Infrastructure as a Platform
The Secret to Doing Cloud Scalability Right
Operational Risk Comprises More Than Just Security
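As a small sketch of what codifying a change process can look like, the snippet below validates a proposed access-rule change against a couple of policy checks before it is allowed to be applied automatically. The rules, fields, and ticket requirement are invented for illustration and are not drawn from any particular product.

```python
# Sketch of codified change management: a proposed change is checked against policy
# before the (automated) configuration push. All rules and fields are illustrative.
FORBIDDEN_SOURCES = {"0.0.0.0/0"}            # e.g. no "allow from anywhere" to admin ports
ADMIN_PORTS = {22, 3389}

def validate(change: dict) -> list:
    violations = []
    if change["port"] in ADMIN_PORTS and change["source"] in FORBIDDEN_SOURCES:
        violations.append("admin port exposed to any source")
    if not change.get("ticket"):
        violations.append("no change ticket referenced")   # formal process required
    return violations

def apply_change(change: dict) -> None:
    problems = validate(change)
    if problems:
        raise ValueError("; ".join(problems))
    # placeholder for the actual automated configuration push
    print(f"applied: allow {change['source']} -> port {change['port']}")

apply_change({"source": "10.0.0.0/8", "port": 443, "ticket": "CHG-1234"})
```

The value is not in these particular checks but in the fact that the policy is executable: every change passes through the same gate, which is exactly what manual, many-handed processes fail to guarantee.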
Cloud Computing: Architectural Limbo

When abstraction becomes a distraction, cloud computing becomes a realm of architectural limbo.

Cloud. It sounds so grand in NIST's description, full of promises with respect to the ability to provision and manage resources without having to muck around in the trenches. Compute! Network! Storage! Cheap, efficiently provisioned resources in minutes, not months! The siren call of cloud continues to lure many a curious folk, only to trap them in what is rapidly becoming architectural limbo.

Differing slightly from the original meaning, in colloquial speech, "limbo" is any status where a person or project is held up, and nothing can be done until another action happens.
-- Wikipedia

The problem is, unfortunately, at the root of all architectures: the network.

ARCHITECTURE and the NETWORK

Architecturally, from a "stack" point of view, the network always resides at the bottom. Like other forms of architecture, that shouldn't be taken to mean it's of less importance than the upper layers of the stack, but rather that it is the foundation upon which all other layers are ultimately laid. A strong foundation is critical to the resilience of the rest of the architecture. That is not to say that cloud computing environments have weak foundations. On the contrary, they have very firm network foundations that make the rest of the stack possible. The problem is that cloud promises us provisioning and management of resources, including the network, and yet many cloud providers stop short of offering this capability in a way that meets the needs of enterprise-class architectures. Instead, providers encourage (read: require) a change in the way network resources are architected and ultimately managed. Consider, for a moment, the stark reality of a realm with no real network boundaries, described by AWS in "Building three-tier architectures with security groups":

Unlike with traditional on-premise physical deployments, AWS's virtualization of compute, storage, and network elements requires that you think differently about how to build network segregation into your projects. There are no distinct physical networks, no VLANs, and no DMZs.

The post goes on to describe the means by which a secure, traditional three-tiered application architecture can be deployed using AWS security groups. This architecture is a fine approximation of the traditional, data-center-deployed architecture based on the abstractions AWS makes available. Note the use of the term "approximation". That's important, because it's indicative of one of the core issues with cloud today: the inability to replicate architecture. You might be thinking that's okay as long as you can replicate it using available services. No, actually, it's not necessarily okay, especially when you consider the close relationship between architecture and operational process and the implication of radically changing either one.

ARCHITECTURE and OPERATIONAL PROCESS

The problem is that in order to fully deploy in the cloud, you have to deploy an architecture that will be different from the one you currently maintain in the data center. What that ultimately entails is a separate and environment-specific set of processes as well, which could quickly become operationally expensive. This is especially true when compliance enters the picture, and even more so when the regulations in question focus on process (think SOX) and not just technological implementation.
By encouraging (read: requiring) changes in the core architecture of applications, cloud computing introduces another set of processes and challenges, many of which have already been faced and overcome in the data center through careful application of infrastructure architecture principles. Those processes must be managed, they must adhere to regulations and comply with requirements, and they must be integrated into and with existing data center operational processes. Because the tools and mechanisms by which those processes are managed are likely very different from those used to manage the data center, IT organizations run the risk of needing two separate but equally important sets of operations teams, each focused on its own area of responsibility. Operational silos, by necessity. Architectural limbo. Neither fully here (the data center) nor there (the cloud), such approaches risk minimizing the benefits associated with cloud computing by wrapping it in another set of mostly manual IT processes.

A more operationally consistent approach is necessary, one that does not require new architectures (and the operational processes to manage them) but instead incorporates cloud computing resources as part of an existing data center model: a hybrid cloud model based on the premise of operational consistency. If we treat cloud computing as it was originally described, as utility computing, and view it as a vast pool of readily available compute and storage resources to be integrated into existing architecture and processes, the biggest benefits are likely to be realized. Cloud computing is unlikely to mature fast enough to provide the full range of infrastructure services required to properly transplant enterprise-class applications and systems into the public cloud, but it can be a great asset in implementing the proven, trusted and controllable architectures that exist inside the data center today. The cloud can be a vital asset if it's viewed with the proper perspective of what it is, not what it could be or might be one day. Avoid architectural limbo. Leverage - but don't live in - the cloud.

Related blogs & articles:

Resolution to the Case (For & Against) X-Driven Scalability in Cloud Computing Environments
Intercloud: Are You Moving Applications or Architectures?
The Cloud Configuration Management Conundrum
IT as a Service: A Stateless Infrastructure Architecture Model
You Can't Have IT as a Service Until IT Has Infrastructure as a Service
This is Why We Can't Have Nice Things
The Consumerization of IT: The OpsStore
An Aristotlean Approach to Devops and Infrastructure Integration
The Impact of Security on Infrastructure Integration
Infrastructure 2.0 + Cloud + IT as a Service = An Architectural Parfait
The Infrastructure Turk: Lessons in Services
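To picture the "approximation" of a three-tier architecture discussed earlier under ARCHITECTURE and the NETWORK, here is an illustrative sketch of that segregation expressed as security-group-style rules. It is plain data for explanation only, not AWS API calls or their actual rule syntax; the group names and ports are invented for the example.

```python
# Illustrative approximation of three-tier segregation via security-group-style rules.
# Plain data for explanation only; not actual AWS syntax or API calls.
security_groups = {
    "web-tier": [
        {"allow_port": 443, "from": "0.0.0.0/0"},         # internet-facing HTTPS
    ],
    "app-tier": [
        {"allow_port": 8080, "from": "sg:web-tier"},      # only the web tier may connect
    ],
    "db-tier": [
        {"allow_port": 3306, "from": "sg:app-tier"},      # only the app tier may connect
    ],
}

def is_allowed(dest_group: str, source: str, port: int) -> bool:
    return any(rule["allow_port"] == port and rule["from"] in (source, "0.0.0.0/0")
               for rule in security_groups[dest_group])

print(is_allowed("db-tier", "sg:app-tier", 3306))   # True
print(is_allowed("db-tier", "sg:web-tier", 3306))   # False: no path around the app tier
```

The segmentation intent survives, but the VLANs, DMZs, and the operational processes built around them do not, which is the article's argument about approximation versus replication.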