Cloud isn't Social, it's Business.
Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS.

Much like devops is more about a culture shift than the technology enabling it, cloud is as much or more about shifts in business models as it is about technology. Just as service providers (and that includes cloud providers) need to move toward a business model based on revenue per application rather than revenue per user, enterprise organizations need to look hard at their own business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a "service oriented" approach, that approach has focused on the customer: a department, a business unit, a project. It is not wholly compatible with a cloud-based approach, because the "tenant" of most enterprise (private) cloud implementations is an application, not a business entity. As a "provider of services," IT should consider adopting a more service-provider view of its business model, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization. By segmenting IT into services, IT can not only transition more effectively toward the goal of ITaaS, but realize additional benefits for both business and operations.

A service subscription business model:

Makes it easier to project costs across the entire infrastructure. Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate costs based on usage not just for application infrastructure but for network infrastructure as well, giving management and executives a clearer view of actual operating costs for a given project and enabling them to essentially line-item veto services based on the value each is projected to add to the business.

Makes it easier to justify the cost of infrastructure. A detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, because it clearly shows how cost is shared across operations and the business. Being able to project usage by application means being able to tie services to projects in earlier phases and clearly show the value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so the business can understand what's working and what's not.

Enables the business to manage costs over time. Instituting a "fee per hour" gives business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time. IT organizations that adopt such a business model will not only encourage business stakeholders to take advantage of such functionality, but will raise awareness of the costs associated with infrastructure services and enable stakeholders to be more critical of what's really needed versus what's not.

Makes it easier to start up a project or application and ramp up over time as associated revenue increases. Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green-field start-up projects manage growth.
If IT operations is service-based, projects can rely on IT for service deployment in an agile fashion, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with its budget.

Enables consistent comparison with off-premise cloud computing. A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, because costs can be determined more accurately based on the services required.

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand. That's cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of the technology and its promised benefits in terms of efficiency, it hasn't necessarily taken the next step and realized that the business model has a great deal to offer IT as well.

One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to demonstrate its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it's the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.
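As a minimal illustration of the usage-metric-driven chargeback described above, the sketch below rolls hypothetical per-service, per-hour usage records up into a cost per application (the "subscriber"). The service names, hourly rates, and record format are assumptions made for illustration, not any particular vendor's billing API.

```python
from collections import defaultdict

# Hypothetical hourly rates for infrastructure services (illustrative only).
HOURLY_RATES = {
    "rate_shaping": 0.05,
    "caching": 0.08,
    "access_control": 0.03,
    "optimization": 0.10,
}

# Usage records as (application, service, hours_active) tuples -- in practice
# these would come from whatever metering the infrastructure exposes.
usage_records = [
    ("order-entry", "caching", 720),
    ("order-entry", "access_control", 720),
    ("marketing-site", "optimization", 200),   # only active during business hours
    ("marketing-site", "caching", 200),
]

def chargeback_by_application(records, rates):
    """Roll hourly service usage up into a cost per application."""
    costs = defaultdict(float)
    for app, service, hours in records:
        costs[app] += hours * rates[service]
    return dict(costs)

if __name__ == "__main__":
    for app, cost in chargeback_by_application(usage_records, HOURLY_RATES).items():
        print(f"{app}: ${cost:,.2f} for the period")
```

Whether numbers like these ever turn into actual invoices matters less than the fact that each application's infrastructure consumption becomes visible as a discrete line item.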
Related blogs and articles:
Curing the Cloud Performance Arrhythmia
The Cloud Integration Stack
Devops is Not All About Automation
1024 Words: The Devops Butterfly Effect
Cloud Delivery Model is about Ops, not Apps
Cloud Bursting: Gateway Drug for Hybrid Cloud

If Security in the Cloud Were Handled Like Car Accidents

Though responsibility for taking precautions may be shared, the risk of an incident is always yours and yours alone, no matter who is driving the car.

Cloud and security still take top billing in many discussions today, perhaps because of the nebulous nature of the topic. If we break down security concerns in a public cloud computing environment, we can separate them into three distinct categories of risk – the infrastructure, the application, and the management framework. Regardless of the model – IaaS, PaaS, SaaS – these categories exist as discrete entities; the differences lie only in what the customer has access to and, ultimately, what they have control over (responsibility for).

A Ponemon study recently reported on by InformationWeek (Cloud Vendors Punt to Security Users) shows a vastly different view of responsibility as it pertains to cloud computing and data security. Whether responsibility is shared, rests mostly with the provider, or rests mostly with the customer is apparently a matter of perspective, but the disagreement is just as likely the result of failing to distinguish between categories of security concerns. Regardless of the category, however, if we apply a couple of legal concepts used to determine "fault" in car accidents in many states, we may find some interesting comparisons and insights into just who is responsible – and for what – when it comes to security in a public cloud computing environment.

A MATTER of NEGLIGENCE

Legalese is legalese, no matter the industry or vertical, and cloud computing is no exception. As noted in the aforementioned InformationWeek article:

"When you read the licensing agreements for cloud providers, they don't need to do anything with security--they take 'best effort,'" said Pironti [John P. Pironti, president of IP Architects]. Best effort means that should a case come to court, "as long as they can show they're doing some effort, and not gross negligence, then they're covering themselves."

In other words, providers accept that they have some level of responsibility for the security of their environments. They cannot disregard that need nor their responsibility for it, and by law they cannot let such efforts fall below a reasonable standard – "reasonable" being defined by what a reasonable person would consider the appropriate level of effort. One would assume, then, that providers are in fact sharing the responsibility of securing their environments by exerting at least 'best effort'. A reasonable person would assume that best effort is comparable to the precautions taken by any organization with a public-facing infrastructure: firewalls, DoS protection, notification systems, and reasonable identity and access management policies.

Now, if we treated cloud computing environments as we do cars, we might use more granular definitions of negligence. If we look at those definitions, we may find the lines of demarcation for security responsibilities in cloud computing environments.
Contributory negligence is a system of fault in which the injured party can only obtain compensation for injuries and damages if he or she did not contribute to the accident in any way. In comparative negligence, the injured party can recover damages even if she was partially at fault in causing the accident. In a pure comparative system, the plaintiff's award is reduced by the amount of her fault in the accident. Some states have what is called modified comparative fault. This is where there is a cap on how much responsibility the injured party can have in the accident. -- Car Accident Fault and Getting What You're Owed

In a nutshell, when it comes to car accidents, "fault" is determined by each party's contribution to the accident, which in turn determines whether or not compensation is due. If Alice did not fulfill her responsibility to stop at the stop sign, but Bob also abdicated his responsibility to obey the speed limit, and the two subsequently crash, one would likely conclude that both contributed to the incident, although with varying degrees of negligence and therefore fault. Similarly, if Alice has fulfilled all her responsibilities and done no wrong, then if Bob barrels into her it is wholly his fault, he having failed his responsibilities.

The same concepts can certainly be applied to security and breaches, with the focus being on the contribution of each party (provider and customer) to the security incident. Using such a model, we can determine responsibility based on the ability to contribute to an incident. For example, a customer has no control over the network and management framework of an IaaS provider. The customer has no authority to modify, change, or configure the network infrastructure to ensure a level of network security suitable for public-facing applications. Only the provider has the means by which such assurances can be made, through policy enforcement and critical evaluation of traffic. Alice cannot control Bob's speed, and therefore if it is Bob's speed that causes an accident, the fault logically falls on Bob's shoulders – wholly. If data security in a cloud computing environment is breached through the exploitation or manipulation of infrastructure and management components wholly under the control of the provider, then the fault for the breach falls solely on the shoulders of the provider. If, however, a breach is enabled by poor coding practices or configuration of application infrastructure that is wholly under the control of the customer, then the customer bears the burden of fault, not the provider.

IT ALWAYS COMES BACK to CONTROL

In almost all cases, a simple test of contributory negligence would allow providers and customers alike to determine not only who had the ability to contribute to a breach but, subsequently, who bears the responsibility for its security. It is an unreasonable notion to claim that a customer – who can neither change, modify, nor otherwise impact the security of a network switch – should be responsible for its security. Conversely, it is wholly unreasonable to claim that a provider should bear the burden of responsibility for securing an application – one over which the provider had no input or control whatsoever. It is also unreasonable to think that providers, though afforded such a luxury by their licensing agreements, are not already aware of such divisions of responsibility and are not taking the appropriate 'best effort' steps to meet that obligation.

The differences in the Ponemon study regarding responsibility for security can almost certainly be explained by applying the standards of contributory negligence. Neither provider nor customer is attempting to abrogate responsibility; in fact, all are clearly indicating varying levels of contribution to security responsibility, almost certainly in portions close to those that would be assigned under a contributory negligence model of fault for their specific cloud computing model.
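As a rough illustration of how the comparative-fault arithmetic quoted above would translate to a breach, the sketch below apportions the cost of an incident by each party's share of fault, with an optional "modified" threshold that bars recovery once the injured party's own share exceeds a cap. The dollar figures and fault percentages are made-up numbers; assigning real fault shares is exactly the hard part the article is arguing about.

```python
def recoverable_loss(total_loss, your_fault_share, cap=None):
    """Pure comparative fault: recovery is reduced by your own share of fault.
    With a 'modified' cap (e.g. 0.5), recovery drops to zero once your share
    exceeds the cap -- mirroring the definitions quoted above."""
    if cap is not None and your_fault_share > cap:
        return 0.0
    return total_loss * (1.0 - your_fault_share)

# Hypothetical breach: $500k in damages, the customer judged 30% at fault
# (say, a weak application configuration) and the provider 70%.
print(recoverable_loss(500_000, 0.30))            # pure comparative: 350000.0
print(recoverable_loss(500_000, 0.30, cap=0.5))   # modified, under the cap: 350000.0
print(recoverable_loss(500_000, 0.60, cap=0.5))   # modified, over the cap: 0.0
```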
Customers of IaaS, for example, would necessarily assign their providers less responsibility for security than would customers of a SaaS provider, because providers are responsible for varying degrees of the moving parts across the models. In a SaaS environment the provider assumes much more responsibility for security because it has control over most of the environment. In an IaaS environment the situation is exactly reversed. In terms of driving on the roads, it's the difference between getting on a bus (SaaS) and driving your own car (IaaS). The degree to which you are responsible for the security of the environment differs based on the model you choose to leverage – on the control you have over the security precautions.

Ultimately, the data is yours; it is your responsibility to see it secured, and the risk of a breach is wholly yours. If you choose to delegate – implicitly or explicitly – portions of the security responsibility to an external party, like the driver of a car service, then you are accepting that the third party has taken reasonable precautions. If the risk is that a provider's "best effort" is not reasonable in your opinion, as it relates to your data, then the choice is obvious: you find a different provider. The end result may be that only your own environment is "safe" enough for your applications and data, given the level of risk you are willing to bear.
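The "it always comes back to control" test lends itself to a simple lookup: for each delivery model, record which party controls each category of risk (infrastructure, management framework, application), and assign fault for a breach to whoever controls the layer that was exploited. The mapping below is a deliberately simplified, illustrative reading of the IaaS/PaaS/SaaS split; real contracts draw these lines in much finer detail.

```python
# Who controls which layer, by delivery model -- a simplified, illustrative mapping.
CONTROL = {
    "IaaS": {"infrastructure": "provider", "management": "provider", "application": "customer"},
    "PaaS": {"infrastructure": "provider", "management": "provider", "application": "customer"},
    "SaaS": {"infrastructure": "provider", "management": "provider", "application": "provider"},
}

def responsible_party(model, exploited_layer):
    """Contributory-negligence test: fault follows control of the exploited layer."""
    return CONTROL[model][exploited_layer]

print(responsible_party("IaaS", "application"))     # customer (their code, their configuration)
print(responsible_party("IaaS", "infrastructure"))  # provider (the customer can't touch the switch)
print(responsible_party("SaaS", "application"))     # provider (it's the provider's application)
```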
Related blogs and articles:
Cloud Vendors Punt to Security Users
The Corollary to Hoff's Law
Operational Risk Comprises More Than Just Security
There Is No Such Thing as Cloud Security
Risk is not a Synonym for "Lack of Security"
Christofer Hoff "Rational Survivability" on "Security"
The Impact of Security on Infrastructure Integration
Authorization is the New Black for Infosec
Six Lines of Code

Operational Risk Comprises More Than Just Security

Recognizing the relationship between – and subsequently addressing – the three core operational risks in the data center will result in a stronger operational posture.

Risk is not a synonym for lack of security. Neither is managing risk a euphemism for information security. Risk – especially operational risk – comprises a lot more than just security. In operational terms, the chance of loss is not just about data and information, but about availability. About performance. About customer perception. About critical business functions. About productivity. Operational risk is not just about security; it's about the potential damage incurred by a loss of availability or performance as measured by the business. Downtime costs the business; both hard and soft costs are associated with downtime, and the numbers can be staggering depending on the particular vertical industry in which a business operates. But in all cases, regardless of industry, the end result is the same: downtime and poor performance are risks that directly impact the bottom line.

Operational risk comprises concerns regarding:
Performance
Availability / reliability
Security

These three concerns are intimately bound up in one another. For example, a denial of service attack left unaddressed and able to penetrate to the database tier in the data center can degrade performance, which may impact availability – whether by directly causing an outage or by deteriorating performance to the point that systems can no longer meet service level agreements mandating specific response times. The danger in assuming operational risk is all about security is that it leads to a tunnel-vision view through which other factors that directly impact operational reliability may be obscured. The notion of operational risk is most often discussed as it relates to cloud computing, but that is only because cloud computing raises the components of operational risk to a level of visibility that puts the two hand-in-hand.

CONSISTENT REPETITION of SUCCESSFUL DEPLOYMENTS

When we talk about repeatable deployment processes and devops, it's not the application deployment itself that we necessarily seek to make repeatable – although where scaling processes can be automated, that certainly aids operational efficiency and addresses all facets of operational risk. It's the processes – the configuration and policy deployment – involving the underlying network and application network infrastructure that we seek to make repeatable, to avoid the inevitable introduction of errors, and subsequently downtime, due to human error. This is not to say that security is not part of that repeatable process, because it is. It is to say that security is only one piece of a much larger set of processes that must be orchestrated in such a way as to provide for the consistent repetition of successful deployments that alleviates the operational risk associated with deploying applications.

Human error by contractor Northrop Grumman Corp. was to blame for a computer system crash that idled many state government agencies for days in August, according to an external audit completed at Gov. Bob McDonnell's request. The audit, by technology consulting firm Agilysis and released Tuesday, found that Northrop Grumman had not planned for an event such as the failure of a memory board, aggravating the failure. It also found that the data loss and the delay in restoration resulted from a failure to follow industry best practices. At least two dozen agencies were affected by the late-August statewide crash of the Virginia Information Technologies Agency. The crash paralyzed the departments of Taxation and Motor Vehicles, leaving people unable to renew drivers licenses. The disruption also affected 13 percent of Virginia's executive branch file servers. -- Audit: Contractor, Human Error Caused Va Outage (ABC News, February 2011)

There are myriad points along the application deployment path at which an error might be introduced: failure to add the application node to the appropriate load balancing pool; failure to properly monitor the application for health and performance; failure to apply the appropriate security and/or network routing policies. A misstep or misconfiguration at any point in this process can result in downtime or poor performance, both of which are also operational risks. Virtualization and cloud computing can complicate this process by adding another layer of configuration and policies that need to be addressed, but even without these technologies the risk remains.
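The deployment steps called out above – add the node to a pool, attach a health monitor, apply security and routing policy – are exactly the kind of thing worth capturing as discrete, repeatable functions rather than a manual checklist. The sketch below shows one way that might look; the Infra client, its methods, and the policy names are hypothetical stand-ins for whatever automation API manages the delivery infrastructure, not a real product's interface.

```python
# 'Infra' is a hypothetical stand-in for an infrastructure automation API; here it
# just records the steps it would take, so the sketch runs as a dry run.
class Infra:
    def __init__(self):
        self.steps = []
    def record(self, action, **details):
        self.steps.append((action, details))

def add_to_pool(infra, pool, node):
    # Register the new application node with the load balancing pool.
    infra.record("pool.add_member", pool=pool, node=node)

def attach_health_monitor(infra, pool, monitor="http-200-check"):
    # Monitor health and performance so a failing node is pulled from rotation.
    infra.record("pool.set_monitor", pool=pool, monitor=monitor)

def apply_policies(infra, node, policies=("web-app-firewall", "default-routing")):
    # Apply the security and network routing policies the application requires.
    for policy in policies:
        infra.record("node.apply_policy", node=node, policy=policy)

def deploy(infra, pool, node):
    """The repeatable deployment sub-process: every step runs the same way every
    time, and scaling out at run time can reuse the same stand-alone functions."""
    add_to_pool(infra, pool, node)
    attach_health_monitor(infra, pool)
    apply_policies(infra, node)

if __name__ == "__main__":
    infra = Infra()
    deploy(infra, pool="web-pool", node="app-node-07")
    for action, details in infra.steps:
        print(action, details)
```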
There are two sides to operational efficiency – the deployment/configuration side and the run-time side. During deployment, configuration and integration are the focus of efforts to improve efficiency; leveraging devops and automation to create a repeatable infrastructure deployment process is critical to achieving operational efficiency there. Achieving run-time operational efficiency often uses a subset of those deployment processes, addressing the critical need to dynamically modify security policies and resource availability based on demand. Many of the same processes that enable a successful deployment can be – and should be – reused as a means to address changes in demand.

Successfully leveraging repeatable sub-processes at run time, dynamically, requires that operational folks – devops – take a development-oriented approach to abstracting processes into discrete, repeatable functions. It requires recognizing that some portions of the process are repeated both at deployment and at run time, and then specifically ensuring that each such sub-process can execute on its own so that it can be invoked as a separate, stand-alone process. This efficiency allows IT to address the operational risks associated with performance and availability by reacting more quickly both to changes in demand that may impact performance or availability and to failures internal to the architecture that may otherwise cause outages or poor performance – which, in business stakeholder speak, can be interpreted as downtime.

RISK FACTOR: Repeatable deployment processes address operational risk by reducing the possibility of downtime due to human error.

ADAPTATION within CONTEXT

Performance and availability are operational concerns, and failure to sustain acceptable levels of either incurs real business loss in the form of lost productivity or, in the case of transaction-oriented applications, revenue. These operational risks are often addressed on a per-incident basis, with reactive solutions rather than proactive policies and processes. A proactive approach combines repeatable deployment processes that enable appropriate auto-scaling policies – to combat the "flash crowd" syndrome that so often overwhelms unprepared sites – with a dynamic application delivery infrastructure capable of automatically adjusting delivery policies based on context to maintain consistent performance levels.

Downtime and slowdown can and will happen to all websites. However, sometimes the timing can be very bad, and a flower website having problems during business hours on Valentine's Day, or even the days leading up to Valentine's Day, is a prime example of bad timing. In most cases this could likely have been avoided if the websites had been better prepared to handle the additional traffic. Instead, some of these sites have ended up losing sales and goodwill (slow websites tend to be quite a frustrating experience). -- Flower sites hit hard by Valentine's Day
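A proactive policy of the kind described above can be caricatured in a few lines: watch demand and measured response time, scale out (or back in) against a pool of on-demand resources, and let delivery policy follow the current context rather than a per-incident fire drill. The thresholds, metric names, per-node capacity, and policy names below are illustrative assumptions, not a recommendation for any particular platform.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A snapshot of current conditions the delivery infrastructure can observe."""
    requests_per_second: float
    avg_response_ms: float
    healthy_nodes: int

# Illustrative operational goals -- the 'expected operational goals' a policy codifies.
MAX_RESPONSE_MS = 500
MIN_NODES = 2
RPS_PER_NODE = 200          # assumed capacity of a single node

def scaling_decision(ctx: Context) -> int:
    """Return how many nodes to add (positive) or remove (negative)."""
    needed = max(MIN_NODES, round(ctx.requests_per_second / RPS_PER_NODE))
    if ctx.avg_response_ms > MAX_RESPONSE_MS:
        needed += 1  # performance is degrading: lean toward more capacity
    return needed - ctx.healthy_nodes

def delivery_policy(ctx: Context) -> str:
    """Pick a delivery/optimization profile based on context, not incident response."""
    return "aggressive-caching" if ctx.avg_response_ms > MAX_RESPONSE_MS else "standard"

flash_crowd = Context(requests_per_second=1800, avg_response_ms=720, healthy_nodes=4)
print(scaling_decision(flash_crowd), delivery_policy(flash_crowd))  # 6 more nodes, aggressive-caching
```

Whether the extra nodes come from a cloud provider or an internal pool of virtualized applications does not change the shape of the decision; what matters is that the policy, not a person, makes it.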
At run time this includes not only auto-scaling, but also load balancing and application request routing algorithms that leverage intelligent, context-aware health monitoring to strike a balance between availability and performance. That balance results in consistent performance and maintained availability even as new resources are added to and removed from the available "pool" from which responses are served. Whether those additional resources are culled from a cloud computing provider or an internal array of virtualized applications is not important; what is important is that the resources can be added and removed dynamically, on demand, and that their "health" is monitored during use to ensure the proper operational balance between performance and availability.

By leveraging a context-aware application delivery infrastructure, organizations can address the operational risk of degrading performance or outright downtime by codifying operational policies that allow components to determine how to apply network- and protocol-layer optimizations to meet expected operational goals. A proactive approach has the "side effect" benefit of shifting the burden of policy management from people to technology, resulting in a more efficient operational posture.

RISK FACTOR: Dynamically applying policies and making request routing decisions based on context addresses operational risk by improving performance and assuring availability.

Operational risk comprises much more than security, and it's important to remember that all three primary components of operational risk – performance, availability, and security – are very much bound up and tied together, like the three strands that come together to form a braid. And for the same reasons a braid is stronger than its composite strands, an operational strategy that addresses all three factors will be far superior to one in which each individual concern is treated as a stand-alone issue.

Related blogs and articles:
It's Called Cloud Computing not Cheap Computing
Challenging the Firewall Data Center Dogma
There Is No Such Thing as Cloud Security
The Inevitable Eventual Consistency of Cloud Computing
The Great Client-Server Architecture Myth
IDC Survey: Risk In The Cloud
Risk is not a Synonym for "Lack of Security"
When Everything is a Threat Nothing is a Threat
The Corollary to Hoff's Law