The Cloud Integration Stack
#cloud Integrating environments occurs in layers…

We use the term “hybrid cloud” to indicate a joining together of two disparate environments. We often simplify “the cloud” to encompass public IaaS, PaaS, SaaS, and private cloud. But even though the adoption of such hybrid architectures may be a foregone conclusion, the devil is, as they say, in the details, and how that adoption will be executed is not so easily concluded.

At its core, cloud is about integrating infrastructure. We integrate infrastructure from the application and networking domains to enable elasticity and scalability. We integrate infrastructure from the security and delivery realms to ensure a comprehensive, secure delivery chain that promises performance and reliability. We integrate infrastructure to manage these disparate worlds in a unified way, to reduce the burden on operations imposed by the necessarily disconnected systems created by integrating environments. How these integrations are realized can be broken down into a fairly simple stack comprising the network, resources, elasticity, and control.

The NETWORK INTEGRATION LAYER

At the network layer, the goal is to normalize connectivity and optimize network traffic between two disconnected environments. This is generally applicable only to the integration of IaaS environments, where connectivity today is achieved primarily through secured network tunnels. These tunnels provide secure communications over which data and applications may be transferred between environments (which is why optimization for performance’s sake may be desired) and over which management can occur. The most basic network integration enabling a hybrid cloud environment is often referred to as bridging, after the common networking term. Bridging does not necessarily imply layer 3 normalization, however; some sort of overlay networking technology will be required to achieve that normalization (this is often cited as a use case for emerging technology like SDN). Look for solutions in this layer to be included in cloud “bridge” or “bridging” offerings.

The RESOURCE INTEGRATION LAYER

At the resource layer, integration occurs at the virtualization layer. Resources such as compute and storage are integrated with systems residing in the data center in such a way as to be included in provisioning processes. This integration enables visibility into the health and performance of those resources, providing the means to collect actionable performance and status metrics for everything from capacity planning to redistribution of clients to the provisioning of performance-related services such as acceleration and optimization. This layer of integration is also heavily invested in the notion of maintaining operational consistency. One way this is achieved is by integrating remote resources into existing delivery network architectures that allow the enforcement of policy to ensure compliance with operational and business requirements. Another means of achieving operational consistency through resource integration is to integrate remotely deployed infrastructure solutions providing application delivery services. Such resources can be integrated with data center-deployed management systems in such a way as to enforce operational consistency through synchronization of policies across all managed environments, cloud or otherwise. Look for solutions in this layer to be included in cloud “gateway” offerings.
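To make that resource-layer integration concrete, here is a minimal sketch of pushing a single source-of-truth delivery policy from a data center management system out to remotely deployed instances. Everything in it – the endpoint URLs, the policy schema, the API shape – is a hypothetical illustration, not any particular vendor’s interface:

```python
# Minimal sketch: synchronizing a delivery policy from a data center
# management system to remotely deployed (cloud) delivery instances.
# All endpoint URLs and the policy schema are hypothetical.

import json
import urllib.request

# A single source-of-truth policy, maintained in the data center.
policy = {
    "name": "web-app-delivery",
    "version": 7,
    "tls_min_version": "1.2",
    "health_check": {"path": "/healthz", "interval_s": 10},
    "rate_limit_rps": 500,
}

# Hypothetical management endpoints: one local, two cloud-deployed.
managed_endpoints = [
    "https://adc.dc.example.com/api/policies",
    "https://adc1.cloud.example.net/api/policies",
    "https://adc2.cloud.example.net/api/policies",
]

def push_policy(endpoint: str, doc: dict) -> bool:
    """PUT the policy document to a managed endpoint; True on success."""
    body = json.dumps(doc).encode("utf-8")
    req = urllib.request.Request(
        f"{endpoint}/{doc['name']}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False  # a real system would queue a retry for this endpoint

if __name__ == "__main__":
    results = {ep: push_policy(ep, policy) for ep in managed_endpoints}
    out_of_sync = [ep for ep, ok in results.items() if not ok]
    print("in sync" if not out_of_sync else f"out of sync: {out_of_sync}")
```

The design point is that the policy has exactly one authoritative definition; the cloud-deployed instances are kept consistent with it rather than being managed separately.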
The ELASTICITY INTEGRATION LAYER

Elasticity integration is closely related to resource integration but not wholly dependent upon it. Elasticity is the notion of expanding or contracting the capacity of resources (whether storage, network, or compute) to meet demand. That elasticity requires visibility into demand (not as easy as it sounds, by the way) as well as integration with the broader systems that provision and de-provision resources. Consider a hybrid cloud in which there is no network or resource integration, but rather systems are in place to aggregate demand metrics from both cloud- and data center-deployed applications. When some defined threshold is met, a trigger occurs that instructs the system to interact with the appropriate control-plane API to provision or de-provision resources (a minimal sketch of this trigger pattern appears at the end of this post).

Elasticity requires not only elastic compute capacity; it may also require that network or storage capacity be adjusted. This is the primary reason why simple “launch a VM” or “stop a VM” responses to changes in demand are wholly inadequate to achieve true elasticity – such simple responses do not take into consideration the ecosystem that is cloud, whether it is confined to a single public provider or spread across multiple public and private locations. True elasticity requires integration of the broader application delivery ecosystem to ensure consistent performance and security across all related applications. Look for solutions in this layer to be included in cloud “gateway” offerings.

The CONTROL INTEGRATION LAYER

Finally, the control integration layer is particularly useful when attempting to integrate SaaS with private cloud or traditional data center models. This is primarily because integration at the other layers is virtually non-existent (this is also true of PaaS environments, which are often highly self-contained and only truly enable integration and control at the application layer). The control layer is focused on integrating processes, such as access and authentication, for purposes of maintaining control over security and delivery policies. This often involves some system under the organization’s control (i.e., in the data center) brokering specific functions as part of a larger process. Currently the most common control integration solution is the brokering of access to cloud-hosted resources such as SaaS: the initial authentication and authorization steps of a broader log-in process occur in the data center, with the enterprise-controlled systems then providing assurance, in the form of tokens or assertions (SAML, specifically crafted encrypted tokens, one-time passwords, etc.), to the resource that the user is authorized to access the system. Control integration layers are also used to manage disconnected instances of services across environments for purposes of operational consistency. This control enables the replication and synchronization of policies across environments to ensure security policy enforcement as well as consistent performance. Look for solutions in this layer to be included in cloud “broker” offerings.

Eventually, the entire integration stack will be leveraged to manage hybrid clouds with confidence, eliminating many of the obstacles still cited, even by enthusiastic prospective customers, as reasons they are not fully invested in cloud computing.
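To illustrate why “launch a VM” alone falls short, here is the minimal sketch of the trigger pattern promised above: aggregated demand crosses a threshold, and the response drives both the compute control plane and the delivery tier. The CloudAPI and DeliveryAPI classes are hypothetical stand-ins for real control-plane APIs, not any provider’s actual interface:

```python
# Minimal sketch of threshold-driven elasticity that treats the cloud as
# an ecosystem: provisioning compute AND updating the delivery tier.
# CloudAPI / DeliveryAPI are hypothetical stand-ins for real control planes.

from dataclasses import dataclass

SCALE_UP_RPS = 1000   # aggregate demand above this adds capacity
SCALE_DOWN_RPS = 200  # aggregate demand below this removes capacity

@dataclass
class Instance:
    id: str
    ip: str

class CloudAPI:
    """Stand-in for a provider's compute control-plane API."""
    def __init__(self):
        self._n = 0
    def provision(self) -> Instance:
        self._n += 1
        return Instance(id=f"vm-{self._n}", ip=f"10.0.0.{self._n}")
    def deprovision(self, inst: Instance) -> None:
        pass  # release the VM back to the provider

class DeliveryAPI:
    """Stand-in for the load balancing / delivery service API."""
    def __init__(self):
        self.pool: list[Instance] = []
    def add_member(self, inst: Instance) -> None:
        self.pool.append(inst)
    def drain_and_remove(self, inst: Instance) -> None:
        self.pool.remove(inst)  # a real system drains connections first

def reconcile(demand_rps: float, cloud: CloudAPI, delivery: DeliveryAPI) -> None:
    """One pass of the elasticity loop, driven by aggregated demand."""
    if demand_rps > SCALE_UP_RPS:
        # New capacity isn't usable until the delivery tier knows about it.
        inst = cloud.provision()
        delivery.add_member(inst)
    elif demand_rps < SCALE_DOWN_RPS and len(delivery.pool) > 1:
        inst = delivery.pool[-1]
        delivery.drain_and_remove(inst)
        cloud.deprovision(inst)

if __name__ == "__main__":
    cloud, delivery = CloudAPI(), DeliveryAPI()
    delivery.add_member(cloud.provision())        # baseline capacity
    for rps in [150, 800, 1500, 1800, 900, 100]:  # aggregated demand samples
        reconcile(rps, cloud, delivery)
        print(f"demand={rps:5} rps -> pool={[i.id for i in delivery.pool]}")
```

The point of the sketch is the pairing: every change in compute capacity is mirrored in the delivery tier, which is exactly what a bare “launch a VM” response omits.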
F5 Friday: Avoiding the Operational Debt of Cloud
Cloud Security: It’s All About (Extreme Elastic) Control
Hybrid Architectures Do Not Require Private Cloud
Identity Gone Wild! Cloud Edition
Cloud Bursting: Gateway Drug for Hybrid Cloud
The Conspecific Hybrid Cloud

Getting at the Heart of Security in the Cloud
#infosec #cloud CloudPassage digs a bit deeper into the issue of security and public cloud computing and finds some interesting results.

Security is a pretty big word. It’s used to represent everything from attack prevention to authentication and authorization to securing transport protocols. It’s used as an umbrella term for such a wide variety of concerns that it has become virtually meaningless when applied to technology. For some time, purveyors of security studies have asked the market, “What’s stopping you from adopting cloud?” Invariably, one of the most often cited show-stoppers is “security.” Pundits raced to tell us this, but in no way did they offer deeper insight into what, exactly, security meant.

So it was nice to see CloudPassage dig deeper into “security in the cloud” with a recent survey it conducted. You may recall that CloudPassage has a more than passing interest in cloud-based security, as its focus is cloud-based security with an emphasis on host-based firewalls. Published in February 2012, the survey sheds some light on what IT professionals consider most important with respect to public cloud security. Not surprisingly, “lack of perimeter defenses and/or network control” was the most often cited concern with respect to security in public cloud environments, with 25% of respondents indicating it was troubling. This response would appear to go hand in hand with the 12% who cited an inability to leverage enterprise security tools in public cloud environments. It is no secret that duplicating security architectures and processes in the cloud is not something we see done at this juncture. When you combine an inability to replicate security policy and process in the cloud, due to incompatibilities of infrastructure and software, with a less than robust security service offering in public cloud environments, it makes a lot of sense that “lack of perimeter defenses and/or network control” tops the list.

WHERE ARE WE GOING?

There are myriad surveys indicating that organizations are moving to public cloud computing despite these concerns, and one assumes this means they are finding ways to resolve these issues. Many organizations are turning back the clock and taking advantage of agent-based (host-deployed) solutions to secure their assets in public cloud environments, which affords much better protection than nothing at all; others still are leveraging the tried-and-true “checklist” method: manually securing servers based on best practices and corporate policy. Neither is optimal from an operational perspective. Nor is the use of cloud provider-offered services such as Amazon security groups, because the result is a disjointed set of security policies across multiple environments. Policy languages and implementations – not to mention capabilities – vary widely from service to service. While the most basic of protections – firewalling – is easier to codify across environments, the actual policy language will still differ. These disconnects can lead to gaps in security policies that leave the organization’s assets open to attack. Inconsistent management and deployment processes spanning multiple environments leave open the possibility of human error and misconfiguration, an often cited cause of outages and breaches in general.
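To see how widely policy languages can diverge even for basic firewalling, consider the same intent – allow HTTPS from the office network – rendered in two simplified formats. Both formats below are invented for illustration and are not any provider’s actual schema; the translation function shows where gaps can creep in:

```python
# The same firewall intent, "allow HTTPS from the office network,"
# rendered in two simplified, hypothetical policy formats. Neither is a
# real provider schema; the point is the structural divergence.

# Format A: security-group style (stateful, allow-only rules)
group_rule = {
    "direction": "ingress",
    "protocol": "tcp",
    "port_range": [443, 443],
    "cidr": "203.0.113.0/24",
}

# Format B: enterprise firewall style (ordered rules, explicit action)
fw_rule = "permit tcp from 203.0.113.0/24 to any port 443"

def group_rule_to_fw(rule: dict) -> str:
    """Translate format A into format B. Anything format B expresses
    that format A cannot (deny rules, rule ordering) is a potential gap."""
    if rule["direction"] != "ingress":
        raise ValueError("egress handling differs between formats; gap risk")
    lo, hi = rule["port_range"]
    ports = str(lo) if lo == hi else f"{lo}-{hi}"
    return f"permit {rule['protocol']} from {rule['cidr']} to any port {ports}"

assert group_rule_to_fw(group_rule) == fw_rule
print("rules match:", fw_rule)
```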
Where we are today is a disjointed set of options from which to choose, and the need to somehow cobble together these disparate tools and services into a comprehensive security strategy capable of consistently securing servers, applications, and other resources from attack, exploitation, and breach. It is not really an inspiring view at the moment. Vendors and providers need to work toward a common language and common services that enable consistent replication – and thus enforcement – of the policies that govern access to and protection of all corporate resources, regardless of location. Whether that happens through standards initiatives, brokerage of APIs, or a better ability for organizations to deploy the same security solutions in both data center and public cloud environments is not necessarily the question. The question is: how can enterprises better address the specific security-related concerns they have regarding public cloud deployments in a way that minimizes the risk of misconfiguration or gaps in policy enforcement, while providing the operationally consistent processes that ensure the benefits of public cloud computing are not lost?

REVERSE INTEGRATION

One of the interesting trends we’re seeing is demand for consistency in infrastructure across environments, and this will eventually drive demand for integration of what are today “cloud only” solutions back into data center components. Folks like CloudPassage and other cloud-focused vendors that deliver host-based security coupled with a SaaS management model will eventually need to consider integration with “traditional” enterprise solutions as a means to deliver the consistency necessary to maintain cloud-related operational benefits. Right now we’re seeing a move toward preserving operational consistency through replication of policy from within the data center out to the cloud. But as cloud-hosted solutions continue to mature and evolve, one would expect to see the ability to replicate policy in the other direction – from the cloud back into the data center. This is no trivial task, as it requires the SaaS management component of such solutions to become what might be considered a policy broker; that is, their system becomes the point of policy creation and management, and it is through integration with both cloud and data center infrastructure that such policies are deployed, updated, and managed (a sketch of this pattern follows below).

This is why the notion of API-enabled infrastructure, a.k.a. Infrastructure 2.0, is so important. It’s not just about creating a vibrant and healthy ecosystem of solutions within the data center, but in the cloud and in between as well. It is the glue that will integrate disparate systems and normalize policies across environments, and ultimately provide the market with a broader set of choices that can more efficiently and effectively address the specific security (and other operational) concerns that may be preventing organizations from fully embracing cloud computing.
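One way to picture that policy-broker role is as a single policy store with per-environment adapters, so that one policy definition is rendered into each environment’s native form at deployment time. The sketch below is an illustration built on assumptions – the adapter interfaces and class names are invented, not CloudPassage’s or anyone else’s actual design:

```python
# Sketch of a policy broker: one point of policy creation, with
# per-environment adapters that render the policy into each
# infrastructure's native form. All interfaces here are illustrative.

from abc import ABC, abstractmethod

class EnvironmentAdapter(ABC):
    """Knows how to render and deploy a canonical policy natively."""
    @abstractmethod
    def deploy(self, policy: dict) -> str: ...

class DataCenterAdapter(EnvironmentAdapter):
    def deploy(self, policy: dict) -> str:
        # e.g. push to an on-premises management system
        return f"dc: applied {policy['name']} v{policy['version']}"

class PublicCloudAdapter(EnvironmentAdapter):
    def __init__(self, region: str):
        self.region = region
    def deploy(self, policy: dict) -> str:
        # e.g. call the provider's control-plane API for this region
        return f"cloud[{self.region}]: applied {policy['name']} v{policy['version']}"

class PolicyBroker:
    """The single point of policy creation and management."""
    def __init__(self):
        self.adapters: list[EnvironmentAdapter] = []
        self.policies: dict[str, dict] = {}

    def register(self, adapter: EnvironmentAdapter) -> None:
        self.adapters.append(adapter)

    def publish(self, policy: dict) -> list[str]:
        """Record the policy, then deploy it everywhere it applies."""
        self.policies[policy["name"]] = policy
        return [a.deploy(policy) for a in self.adapters]

if __name__ == "__main__":
    broker = PolicyBroker()
    broker.register(DataCenterAdapter())
    broker.register(PublicCloudAdapter("us-east"))
    for line in broker.publish({"name": "waf-baseline", "version": 3}):
        print(line)
```

The design choice that matters is that policies live in one place; the adapters absorb the per-environment differences, in either direction.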
The Conflation of Pay-as-you-Grow Hardware with On-Demand
The Conspecific Hybrid Cloud
Committing to Overhead: Proceed With Caution.
Why MDM May Save IT from Consumerization
Block Attack Vectors, Not Attackers
Get Your Money for Nothing and Your Bots for Free
Dome9: Closing the (Cloud) Barn Door

The Conspecific Hybrid Cloud

Operational consistency and control continue to be a driving force in hybrid cloud architectures.

When you’re looking to add new tank mates to an existing aquarium ecosystem, one of the concerns you must address is whether a particular breed of fish is amenable to conspecific cohabitants. Many species are not, which means that if you put them together in a confined space, they’re going to fight. Viciously. To the death. Responsible aquarists try to avoid such situations, so careful attention to the conspecificity of animals is a must.

Now, while in many respects the data center ecosystem correlates well to an aquarium ecosystem, in this case it does not: incompatible cohabitants are what you usually get today, but they are not what you want. What you want in the data center ecosystem – particularly when it extends to include public cloud computing resources – is conspecificity in infrastructure. This desire and practice is being seen both in enterprise data center decision making and in startups suddenly dealing with massive growth and increasingly encountering performance bottlenecks over which IT has no control.

OPERATIONAL CONSISTENCY

One of the biggest negatives of a hybrid architectural approach to cloud computing is the lack of operational consistency. While enterprise systems may be unified and managed via a common platform, resources and delivery services in the cloud are managed using very different systems and interfaces. This poses a challenge for all of IT, but it is a particular impediment to those responsible for devops – for integrating and automating provisioning of the application delivery services required to support applications. It requires diverse sets of skills – often those peculiar to developers, such as programming and standards knowledge (SOAP, XML) – as well as those traditionally found in the data center.

“We own the base, rent the spike. We want a hybrid operation. We love knowing that shock absorber is there.” – Allan Leinwand, Zynga’s Infrastructure CTO

“Other bottlenecks were found in the networks to storage systems, Internet traffic moving through Web servers, firewalls’ ability to process the streams of traffic, and load balancers’ ability to keep up with constantly shifting demand. Zynga uses Citrix Systems CloudStack as its virtual machine management interface superimposed on all zCloud VMs, regardless of whether they’re in the public cloud or private cloud.” – Inside Zynga’s Big Move To Private Cloud, by InformationWeek’s Charles Babcock

This operational inconsistency also poses a challenge for the codification of policies across the security, performance, and availability spectrum, as diverse systems often require very different methods of encapsulating policies. Amazon security groups are not easily codified in enterprise-class systems, and vice versa. Similarly, the options available to distribute load across instances – required to achieve availability and performance goals – are impeded by the lack of consistent support for algorithms across load balancing services, as well as by differences in visibility and health monitoring that prevent a cohesive set of operational policies from governing the overall architecture. Thus, if hybrid cloud is to become the architectural model of choice, it becomes necessary to unify operations across all environments, whether public or enterprise.
UNIFIED OPERATIONS

We are seeing this demand more and more as enterprise organizations seek out ways to integrate cloud-based resources into existing architectures to support a variety of business needs: disaster recovery, business continuity, and spikes in application demand. What customers are demanding is a unified approach to integrating those resources, which means infrastructure providers must be able to offer solutions that can be deployed both in a traditional enterprise-class model and in a public cloud environment. This is also true for organizations that may have started in the cloud but are now moving to a hybrid model in order to seize control of the infrastructure, as a means to address performance bottlenecks that simply cannot be resolved by cloud providers due to the innate nature of a shared model.

“This ability to invoke and coordinate both private and public clouds is ‘the hidden jewel’ of Zynga’s success, says Allan Leinwand, CTO of infrastructure engineering at the company.” – Lessons From FarmVille: How Zynga Uses The Cloud

While much is made of Zynga’s “reverse cloud-bursting” business model, what seems to be grossly overlooked is the conspecificity of infrastructure required to move seamlessly between the two worlds. Whether at the virtualization layer or at the delivery infrastructure layer, a consistent model of operations is a must in order to transparently take advantage of the business benefits inherent in a cross-environment, a.k.a. hybrid, cloud model of deployment. As organizations converge on a hybrid model, they will continue to recognize the need for, and advantages of, an operationally consistent model – and they are demanding that it be supported. Whether it’s Zynga imposing CloudStack on its own infrastructure to maintain compatibility and consistency with its public cloud deployments, or enterprise IT requiring public cloud-deployable equivalents of traditional enterprise-class solutions, the message is clear: operational consistency is a must when it comes to infrastructure.

H/T @Archimedius, “The Hybrid Cloud is the Future of IT Infrastructure”

The Cloud API is Pseudo-Consolidation of Infrastructure
It’s about operational efficiency and consistency, emulated in the cloud by an API that creates the appearance of a converged platform.

In most cases, the term “consolidation” implies the aggregation (and subsequent elimination) of like devices. Application delivery consolidation, for example, describes a process of scaling up infrastructure that often occurs during upgrade cycles: many little boxes are exchanged for a few larger ones as a means to simplify the architecture and reduce the overall costs (hard and soft) associated with delivering applications. Consolidation.

But cloud has opened (or should have opened) our eyes to a type of consolidation in which like services are aggregated: a consolidation strategy in which we layer a thin veneer over a set of adjacent functionalities in order to provide a scalable and ultimately operationally consistent experience – an API. A cloud API consolidates infrastructure from an operational perspective. It is the bringing together of adjacent functionalities into a single “entity.” Through a single API, many infrastructure functions and services can be controlled: provisioning, monitoring, security, and load balancing (one part of application delivery) are all available through the same API. Certainly the organization of an API’s documentation segments services into similar containers of functionality, but if you’ve looked at a cloud API you’ll note that it’s all the same API; only the organization of the documentation makes it appear otherwise. This service-oriented approach allows for many of the same benefits as consolidation without actually physically consolidating the infrastructure. Operational consistency is one of the biggest benefits.

OPERATIONAL CONSISTENCY

The ability to consistently manage and monitor infrastructure through the same interface – whether API or GUI or script – is an important factor in data center efficiency. One of the reasons enterprises demand overarching, data center-level monitoring and management systems like HP OpenView, CA, and IBM Tivoli is consistency and an aggregated view of the entire data center. It is no different in the consumer world, where the consistency of a single interface greatly enhances the consumer’s ability to take advantage of underlying services. Convenience, too, plays a role here, as a single device (or API) is ultimately more manageable than several devices used to accomplish the same things. Back in the day I carried a BlackBerry, a mobile phone, and a PDA; each had a specific function, and there was very little overlap among them. Today, a single “smart”phone provides the functions of all three – and then some. The consistency of a single interface, a single foundation, is paramount to the success of such consumer devices. It is the platform, whether consumers realize it or not, that enables their highly integrated and operationally consistent experience.

The same is true in the cloud, and ultimately in the data center. Cloud (pseudo-)consolidates infrastructure the only way it can: through an API that ultimately becomes the platform, analogous to an iPhone or Android-based device. Cloud does not eliminate infrastructure; it merely abstracts it into a consolidated API such that the costs to manage it are greatly reduced due to the multi-tenant nature of the platform.
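A caricature of that pseudo-consolidation: one client object, one credential, and adjacent infrastructure functions exposed as namespaces of the same interface – much as a cloud API’s documentation segments what is really a single API. Every name in this sketch is invented for illustration:

```python
# Caricature of a cloud API's pseudo-consolidation: one client, one
# credential, and adjacent infrastructure functions exposed as
# namespaces of the same interface. Every name here is invented.

class _Service:
    """Each 'service' is just a namespace over the same API core."""
    def __init__(self, client: "CloudClient", prefix: str):
        self._client, self._prefix = client, prefix
    def call(self, action: str, **params) -> dict:
        return self._client._request(f"{self._prefix}.{action}", params)

class CloudClient:
    """Single entry point consolidating provisioning, monitoring,
    security, and load balancing behind one API."""
    def __init__(self, api_key: str):
        self._api_key = api_key
        self.compute = _Service(self, "compute")
        self.monitoring = _Service(self, "monitoring")
        self.security = _Service(self, "security")
        self.load_balancing = _Service(self, "lb")

    def _request(self, method: str, params: dict) -> dict:
        # A real client would sign and send an HTTP request here.
        return {"method": method, "params": params, "ok": True}

if __name__ == "__main__":
    cloud = CloudClient(api_key="demo")
    # Four "different" infrastructure functions, one operational surface:
    print(cloud.compute.call("provision", image="web", count=2))
    print(cloud.monitoring.call("metrics", resource="vm-1"))
    print(cloud.security.call("allow", port=443, cidr="0.0.0.0/0"))
    print(cloud.load_balancing.call("add_member", pool="web", ip="10.0.0.2"))
```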
Infrastructure is still managed, but it’s managed through an API that simplifies and unifies the processes, providing a more consistent approach that benefits the organization in terms of both hard (hardware, software) and soft (time, administration) costs. The cloud and its requisite API provide the consolidation of infrastructure necessary to achieve greater cost savings and higher levels of consistency, both of which are necessary to scale operations in a way that lets IT meet the growing demand on its limited resources.

BFF: Complexity and Operational Risk
The Pythagorean Theorem of Operational Risk
At the Intersection of Cloud and Control…
Cloud Computing and the Truth About SLAs
IT Services: Creating Commodities out of Complexity
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility

F5 Friday: Secure Remote Access versus En Masse Migration to the Cloud
Being too quick to shout “cloud” when the solution may be found elsewhere can lead to unintended consequences.

As with all technology caught up in the hype cycle, cloud computing is often credited with being “the solution” to problems irrespective of reality. Cloud is suddenly endowed with supernatural powers, able to solve every business and operational challenge merely by being what it is. Take, for example, the attribution of cloud as “the solution” to the very real issue of severe snow in the UK:

“Cloud solutions can help businesses to overcome severe weather issues – with your business’ IT in the cloud, 100 percent of your staffs could work from home. Moreover, working in the cloud – anywhere, anytime – is good for your employees’ morale: 76 percent said that off-premise working is great.”
http://www.thecloudinfographic.com/2011/10/27/cloud-computing-solves-severe-snow-problem.html

Now, the premise of this “solution” is that severe snow often prevents employees from working because they can’t get to work or because they lack the means to work remotely. Given. The claim is that putting IT in “the cloud” (narrowly defined here as Software as a Service only) eliminates these issues because employees can access the cloud from anywhere, including their homes during periods of severe weather. One wonders why employees cannot simply access the same applications and resources hosted at their corporate location. Has the business no Internet connectivity? Has it no web applications? Is it, perhaps, the last holdout against the electronic age? The real solution here has nothing to do with cloud; it is enabling remote access. Cloud computing as part of the strategy to enable that solution is certainly valid, but it isn’t the solution. It’s part of a strategy – a remote access strategy.

OPERATIONAL CONSISTENCY

The reason touting “cloud” as the “solution” to snow-bound employee access is misguided is twofold. First, it completely ignores the need for enterprise-grade security. Simply put “IT” (as if one could move an enterprise-grade data center wholesale) in the cloud and voila! Instant, ubiquitous access. Granted, the purported solution is SaaS, which implies some level of credentials is required for access, but this completely ignores the second issue: it assumes all IT functions are commoditized to the point that they are offered “as a service” in the first place, which is utterly untrue at this point in the evolution of any kind of cloud. This conflation further dismisses the costs and importance of integration with the systems being “moved” en masse to the cloud, and seems not at all concerned with the operational cost of now needing to manage not one but perhaps multiple cloud environments. As my toddler would say, “Are you seriously?”

Interestingly, before cloud computing came along and became the answer to life, the universe, and everything, there was a less disruptive solution to the problem of remote access and business continuity: secure remote access. One of the foremost capabilities provided by secure remote access solutions like BIG-IP Access Policy Manager (APM) is support for telecommuting. Having been a telecommuter for over a decade now, I’ve never had access to corporate resources without the assistance of some kind of remote access (VPN) solution.
There are simply too many pieces and parts (resources, applications, and services) that are too sensitive to leave unprotected “in the cloud.” That’s not to say a cloud can’t be secure; it’s to say that it is not (currently) enabled with the same level of security and the same support for secure access best practices required by both operational and business stakeholders. I absolutely agree with the premise that severe weather and other mitigating factors that prevent employees from getting to work are costly to the business. But the solution is not likely to be a wholesale migration of the data center to a cloud; it’s to enable remote access without disrupting existing security and access policies. The bonus is that using BIG-IP APM can actually enable a migration of applications or services to a cloud environment without sacrificing the control necessary to consistently replicate and enforce access policies.

CLOUD is FOR EVERYONE but NOT for EVERYTHING

“Cloud is for everyone, but not for everything.” (Rackspace) That is especially true in this case, but even more so when folks conflate a solution with its model and location, because doing so fails to address the root cause and instead tries to force-fit “cloud” as an integral part of every solution to every IT challenge. Cloud is not a solution; it is a deployment option with both advantages and disadvantages compared to traditional data center-based deployments. While the issue with severe weather is certainly real, claiming “cloud” is the solution is shortsighted and fails to recognize the difficulties inherent in such an en masse migration. Unfortunately, the reply of “cloud” – as though it were answer D (all of the above) to every operational and business challenge we encounter – will continue to be an issue until the hype cycle finally tires of hearing itself talk and we can get down to the real business of exploiting “the cloud” in ways that are not only meaningful but that do not introduce myriad other (costly and potentially risk-inducing) challenges.

In this case, a secure remote access solution – BIG-IP Access Policy Manager – is a much better option for folks who are annually plagued by productivity and cost woes due to severe weather. Rather than transplant applications from the data center to a cloud, likely losing in the process the control and enforcement of security and access policies necessary to comply with regulations and business requirements, enable secure remote access. Keep the control, leverage the flexibility, maximize the benefits.

Happy Working from Home!