The Dynamic Data Center: Cloud's Overlooked Little Brother
It may be heresy, but not every organization needs or desires all the benefits of cloud. There are multiple trends putting pressure on IT today to radically change the way it operates. From SDN to cloud, market pressure on organizations to adopt new technological models or utterly fail is immense. That's not to say that new technological models aren't valuable or won't fulfill their promises to add value, but the market often overestimates the urgency with which organizations must view emerging technology. Mired in its own importance and benefits, the market also often overlooks the fact that not every organization has the same needs, goals, or business drivers. After all, everyone wants to reduce costs and simplify provisioning processes! And yet those goals can often be met through the application of other technologies that carry less risk, which is another factor in the overall enterprise adoption formula – and one that's often overlooked.

DYNAMIC DATA CENTER versus CLOUD COMPUTING

There are two models competing for data center attention today: the dynamic data center and cloud computing. They are closely related, and both promise similar benefits, with cloud computing offering "above and beyond" benefits that may or may not be needed or desired by organizations in search of efficiency. The dynamic data center originates from the same premises that drive cloud computing: the static, inflexible data center models of the past inhibit growth, promote inefficiency, and are fraught with operational risk. Both seek to address these issues with more flexible, dynamic models of provisioning, scale, and application deployment.

The differences are actually quite subtle. The dynamic data center is focused on the NOC and administration, on enabling elasticity and shared infrastructure services that improve efficiency and decrease time to market. Cloud computing, even private cloud, is focused on the tenant and on enabling self-service capabilities for them across the entire application deployment lifecycle. A dynamic data center is able to rapidly respond to events because it is integrated and automated to enable responsiveness. Cloud computing is able to rapidly respond to events because it must necessarily provide entry points into the processes that drive elasticity and provisioning, in order to enable the self-service aspects that have become the hallmark of cloud computing.

DATA CENTER TRANSFORMATION: PHASE 4

You may recall the cloud maturity model, comprising five distinct steps of maturation from initial virtualization efforts through a fully cloud-enabled infrastructure. A highly virtualized data center, managed via one of the many available automation and orchestration frameworks, may be considered a dynamic data center. When the operational processes codified by those frameworks are made available as services to consumers (business and developers) within the organization, the model moves from dynamic data center to private cloud. This is where the dynamic data center fits in the overall transformational model. The thing is that some organizations may never desire or need to continue beyond phase 4, the dynamic data center. While cloud computing certainly brings additional benefits to the table, these may be benefits that, when evaluated against the risks and costs to implement (or adopt, if it's public), simply do not measure up. And that's okay.
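To make the phase 4 distinction concrete, here is a minimal sketch (plain Python with hypothetical names; no particular orchestration framework's API is implied) of the same provisioning process consumed two ways: called directly by operations automation in a dynamic data center, and exposed as a self-service entry point for tenants in a private cloud.

```python
# Hypothetical sketch: the same codified provisioning process, consumed two ways.
# Names (provision_app, TenantProvisioningHandler) are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def provision_app(name: str, instances: int) -> dict:
    """The codified operational process: in a dynamic data center,
    NOC/admin automation calls this directly."""
    # ... orchestration framework calls would go here ...
    return {"app": name, "instances": instances, "status": "provisioned"}

class TenantProvisioningHandler(BaseHTTPRequestHandler):
    """Exposing the same process as a self-service entry point is what
    moves the model from dynamic data center to private cloud."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = provision_app(body["app"], body.get("instances", 1))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

if __name__ == "__main__":
    # Ops usage (dynamic data center): call the function from automation.
    print(provision_app("billing", 2))
    # Tenant usage (private cloud): uncomment to expose it as a service.
    # HTTPServer(("0.0.0.0", 8080), TenantProvisioningHandler).serve_forever()
```

The process itself does not change between the two models; what changes is who is allowed to invoke it, and how.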
These organizations are not some sort of technological pariah because they choose not to embark on a journey toward a destination that does not, in their estimation, offer the value necessary to compel an investment. Their business will not, as is too often predicted with an overabundance of hyperbole, disappear or be in danger of being eclipsed by other, more agile, younger versions of themselves who take to cloud like ducks take to water. If you're not sure about that, consider this employment ad from the most profitable insurance company in 2012, United Health Group – also #22 on the Fortune 500 list – which lists among its requirements "3+ years of COBOL programming." Nuff said.

Referenced blogs & articles:
Is Your Glass of Cloud Half-Empty or Half-Full?
Fortune 500 Snapshot: United Health Group
Hybrid Architectures Do Not Require Private Cloud

The Three Axioms of Application Delivery
#fasterapp If you know these three axioms, then you'll know application delivery when you see it.

Like most technology jargon, there are certain terms and phrases that end up mangled, conflated, and generally misapplied as they gain traction in the wider market. Cloud is merely the latest incarnation of this phenomenon, and there will be others in the future. Guaranteed. Of late the term "application delivery" has been creeping into the vernacular. That could be because cloud has pushed it to the fore, necessarily. Cloud purports to eliminate the "concern" of infrastructure and allows IT to focus on … you guessed it, the application. Which in turn means the delivery of applications is becoming more and more pervasive in the strategic vocabulary of the market.

But like cloud and its predecessors, the term application delivery is somewhat vague and without definition. I am not going to define it, in case you were wondering, because quite frankly I've watched its expansion and transformation over the past decade and understand that application delivery is not static. As new technologies and deployment models arise, new techniques and architectures must follow to meet the challenges that come along with those applications.

But how, then, do you know what is and is not application delivery? If it can morph and grow and transform with time and technology, then anything can be considered application delivery, right? Not entirely. Application delivery, after all, is about an end-to-end process. It's about a request that is sent to an application and subsequently fulfilled and returned to the originator of the request. Depending on the application, this process may be simple or exceedingly complex, requiring authentication, logging, verification, interaction of multiple services and, one hopes, a wealth of security services ensuring that what is delivered is what was intended and desired, and is not carrying along something malicious. A definition comprising these concepts would either be so broad as to be meaningless or so narrow that it left no room to adapt to future technologies. Neither is acceptable, in my opinion. A much better way to understand what is (and conversely what is not) application delivery is to learn three simple axioms that define the core concepts upon which application delivery is based.

APPLICATION-CENTRIC

"Applications are not servers, hypervisors, or operating systems."

Applications are not servers. They are not the physical or virtual server upon which they are deployed and from which they draw core resources. They are not the web and application servers on which they rely for application-layer protocol support. They are not the network stack from which they derive their IP address or TCP connection characteristics. They are uniquely separate entities that must be managed individually.

The concrete example of this axiom in action is health monitoring of applications. Too many times we see load balancing services configured with health-checking options that are focused on IP, TCP, or HTTP parameters: ping checks, TCP half-open checks, HTTP status checks. None of these options is relevant to whether or not the application is available and executing correctly. A ping check assures us the network is operating and the OS is responding. A TCP half-open check tells us the network stack is operating properly. An HTTP status check tells us the web or application server is running and accepting requests. But none of these even touches on whether or not the application is executing and responding correctly.
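To illustrate the difference, here is a minimal sketch (plain Python; the URL, port, and expected response fields are hypothetical, and this is not any load balancer's monitor configuration) contrasting a connection-level check with an application-level check that verifies the application actually returns a correct answer.

```python
# Minimal sketch: connection-level vs. application-level health checks.
# The host, URL, and expected payload below are hypothetical.
import json
import socket
from urllib.request import urlopen

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Proves only that the network stack is listening on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def application_check(url: str, timeout: float = 2.0) -> bool:
    """Exercises the application itself and validates the response content,
    not just the transport or the web server's status code."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return False
            body = json.loads(resp.read())
            # Healthy only if the application returns a sane answer.
            return body.get("status") == "ok" and "orders" in body
    except Exception:
        return False

if __name__ == "__main__":
    print("tcp:", tcp_check("www.example.com", 80))
    print("app:", application_check("https://www.example.com/api/health"))
```

The point is not the specific check but that the monitor exercises the application's own logic, so a member is marked down when the application, not merely its server, has failed.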
Similarly, applications are not ports, and security services must be able to secure the application, not merely its operating environment. Applications are not – or should not be – defined by their network characteristics, and neither should they be secured based on those parameters. Applications are not servers, hypervisors, or operating systems. They are individual entities that must be managed individually, from a performance, availability, and security perspective.

MITIGATE OPERATIONAL RISK

"Availability, performance, and security are not separate operational challenges."

In most IT organizations the people responsible for security are not responsible for performance or availability, and vice-versa. While devops tries to bridge the gap between application- and operations-focused professionals, we may need to intervene first and unify operations. These three operational concerns are intertwined and interrelated; they are fraternal triplets. A DDoS attack is security, but it has – or likely will have – a profound impact on both performance and availability. Availability has an impact on performance, both positive and negative. And too often performance concerns result in the avoidance of security that can ultimately return to bite availability in the derriere. Application delivery recognizes that all three components of operational risk are inseparable, and that they must be viewed as a holistic concern. Each challenge should be addressed with the others in mind, and with the understanding that changes in one will impact the others.

OPERATE WITHIN CONTEXT

"Application delivery decisions cannot be made efficiently or effectively in a vacuum."

Finally, application delivery recognizes that decisions regarding application performance, security, and availability cannot be made in a vacuum. What may improve performance for a mobile client accessing an application over the Internet may actually impair performance for a mobile client accessing the application over the internal data center network. The authentication methods appropriate for a remote PC desktop are unlikely to be applicable to the same user requesting access over a smartphone. The various components of context provide the means by which the appropriate policies are enforced and applied at the right time, to the right client, for the right application. It is context that provides the unique set of parameters that enfolds any given request. We cannot base decisions solely on user, because the user may migrate during the day from one client device to another, and from one location to another. We cannot base decisions solely on device, because network conditions and type may change as the user roams from home to the office and out to lunch, moving seamlessly between mobile carrier network and WiFi. We cannot base decisions solely on application, because the means and location of the client may change its behavior and impact delivery in a negative way.
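To make the idea of context concrete, here is a small sketch (plain Python with made-up policy names and thresholds; a real deployment would express this in the delivery platform's own policy language) of a decision that considers user, device, network, and application together rather than any one of them alone.

```python
# Hypothetical sketch of a context-aware delivery decision.
# Policy names and conditions are illustrative, not from any product.
from dataclasses import dataclass

@dataclass
class RequestContext:
    user: str
    device: str        # e.g. "desktop", "smartphone"
    network: str       # e.g. "internal", "internet", "carrier"
    application: str   # e.g. "intranet-portal", "public-api"

def delivery_policy(ctx: RequestContext) -> dict:
    """No single attribute decides anything; the combination does."""
    policy = {"compression": False, "auth": "password", "waf_profile": "standard"}

    # Compression helps on slow, high-latency links but wastes CPU internally.
    if ctx.network in ("internet", "carrier"):
        policy["compression"] = True

    # Stronger authentication for small-screen devices on untrusted networks.
    if ctx.device == "smartphone" and ctx.network != "internal":
        policy["auth"] = "otp"

    # A public API gets a stricter inspection profile than an internal portal.
    if ctx.application == "public-api":
        policy["waf_profile"] = "strict"

    return policy

if __name__ == "__main__":
    print(delivery_policy(RequestContext("alice", "smartphone", "carrier", "public-api")))
    print(delivery_policy(RequestContext("alice", "desktop", "internal", "intranet-portal")))
```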
When you put these axioms into action, the result is application delivery: a comprehensive, holistic, and highly strategic approach to delivering applications. It is impossible to say that application delivery is "these five products, delivered as a solution," because whether or not those products actually comprise an application delivery network depends on whether or not they are able to deliver on the promise of these three axioms of application delivery.

BFF: Complexity and Operational Risk
#adcfw The reason bars place bouncers at the door is because it's easier and less risky to prevent entry than to root troublemakers out later.

No one ever said choosing a career in IT was going to be easy, but no one said it had to be so hard you'd be banging your head on the desk, either. One of the reasons IT practitioners end up with large, red welts on their foreheads is that data centers tend to become more, not less, complex, and along with complexity comes operational risk.

Security, performance, availability. These three inseparable issues often stem not from vulnerabilities or poorly written applications but merely from the complexity of data center network architectures needed to support the varying needs of both the business and IT. Unfortunately, as emerging technologies creep (and sometimes run headfirst) into the data center, the network is often overlooked as a potential source of risk in supporting them. Traditionally, network readiness has entailed some load testing to ensure adequate bandwidth to support a new application, but rarely do we take a look at the actual architecture of the network and its services to determine whether it is able to support new applications and initiatives. It's the old "this is the way we do this" mantra that often ends up being the source of operational failure.

COMPLEXITY MEANS MULTIPLE POINTS of FAILURE

Consider the simple case of using SPAN ports to mirror traffic, a traditional network architecture technique that attempts to support the need for visibility into network traffic for security purposes without impeding performance. SPAN ports are used to clone all traffic, allowing it to traverse its intended path to an application service while simultaneously being examined for malicious and/or anomalous content. This architectural approach can inadvertently cause operational failure under heavy load – whether caused by an attack or a flash mob of legitimate users.

"One of the problems with SPAN ports, which people tend to use because they're cheap, is that you won't get to keep it for your use all the time. Someone will come along and need that because there's a limited number of them," said John Kindervag, senior analyst with Forrester Research. "Whenever you are under attack and need that data, the switch is going to get saturated and the first port that quits functioning is the SPAN port so that it can have some extra compute capacity. So at the exact time that you need it, the whole system is designed not to get that data to you." … Network traffic capture systems can perform media conversion, sending lower bandwidth data streams to these network security appliances. They can also load balance these data streams across multiple appliances.
-- Network traffic capture systems offer broader security visibility

Unfortunately, it isn't only SPAN ports (and thus the systems that rely upon them) that fail under load. Firewalls, too, have consistently failed under the arduous network conditions that occur during an attack. The failure of these components is devastating, disruptive, and unacceptable, and it is caused in part by architectural complexity. It isn't, after all, the IPS or IDS that's failing – it's the port on the switch upon which it relies for data. The applications have not failed, but if the firewall melts down it really doesn't matter – to the end user, that's failure.
One of the ways in which we can redress this operational risk is to simplify – to reduce the number of potential points of failure and more intelligently route traffic through the network.

INSPECT at the EDGE

If you look at the cause of failures in network architectures there are two distinct sources: connections and traffic. The latter is simple and well understood. Too much traffic can overwhelm network components (and you can bet that if it's overwhelming a network component, it can easily overwhelm an application), introducing errors, high latency, lost packets, and more. This trickles up to the application, potentially causing time-outs, unacceptably long response times, or worse – a crash. The answer to this problem is either (1) increase switching capacity or (2) decrease traffic. Both are valid approaches. The latter, however, is rarely the path taken, because it's an accepted fact that traffic is going to increase over time, and there are only so many optimizations you can make in the network architecture to decrease traffic on the wire.

But there are architectural changes that can decrease traffic on the wire. It makes very little sense to expend compute and network processing power on traffic that is malicious in nature. The risk to application servers and network infrastructure is real, but avoidable. You need to check the traffic at the door and only let valid traffic in; otherwise you're going to end up expending a lot more resources tracking it down and figuring out how to kick it out. The problem is that the current network bouncer, the firewall, isn't able to adequately detect the fake IDs presented by application-layer attacks. This traffic simply slips through and ends up causing problems in the application tier.

The closer to the edge of the network you are able to detect – and subsequently reject – malicious traffic, the more operational risk you can mitigate. The more assurances you can provide that security infrastructure won't end up being ineffective due to a SPAN port shutdown, the more operational risk you can mitigate. Leveraging a converged approach will provide that assurance, as well as the ability to sniff out at the door those fake IDs presented by application-layer attacks. Leveraging a network component that is capable of both detecting inbound attacks and cloning traffic to ensure holistic inspection by security infrastructure will reduce complexity in the network architecture and improve the overall security posture. Detecting attacks at the very edge of the network – network- and application-layer attacks alike – means less burden on supporting security infrastructure (like switches with SPAN ports) because less traffic is getting through the door. If the network component is also designed to manage connections at high scale, then the risk of firewall failure from an overwhelming number of inbound connections that appear legitimate but are not is mitigated as well.

Emerging architectural models are based on the premise of leveraging strategic points of control within the network: those places where traffic and flows are naturally aggregated and disaggregated through the use of network virtualization. Leveraging these points of control is critical to ensuring the success of new architectural and operational deployment models in the data center that allow organizations to realize the benefits of cost savings and operational efficiency. The application delivery tier is a strategic point of control in the new data center paradigm. It affords organizations a flexible, scalable tier of control that can efficiently address all three components of operational risk.
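As a rough sketch of the "check traffic at the door" idea (plain Python with illustrative signatures; this is not a description of any particular firewall or application delivery controller), an edge component can reject obviously malicious requests before they consume resources in the application tier or saturate the inspection infrastructure behind it.

```python
# Hypothetical sketch of edge inspection: reject bad traffic at the door
# so neither the application tier nor the mirrored-inspection tier sees it.
import re

# Illustrative signatures only; real inspection is far more sophisticated.
SUSPICIOUS_PATTERNS = [
    re.compile(r"union\s+select", re.IGNORECASE),   # naive SQL injection marker
    re.compile(r"<script\b", re.IGNORECASE),        # naive XSS marker
]

def inspect_at_edge(request_line: str, body: str) -> bool:
    """Return True if the request may pass to the application tier."""
    payload = f"{request_line}\n{body}"
    return not any(p.search(payload) for p in SUSPICIOUS_PATTERNS)

def handle(request_line: str, body: str) -> str:
    if not inspect_at_edge(request_line, body):
        # Rejected here: no backend CPU spent, nothing cloned to the SPAN port.
        return "403 Forbidden"
    # Only clean traffic is forwarded (and, if desired, mirrored for analysis).
    return "forwarded to application tier"

if __name__ == "__main__":
    print(handle("GET /search?q=shoes HTTP/1.1", ""))
    print(handle("GET /search?q=1 UNION SELECT password FROM users HTTP/1.1", ""))
```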
Consolidating inbound security with application delivery at the edge of the network makes good operational sense. It reduces operational risk across a variety of components by eliminating complexity in the underlying architecture. Simplification leads to fewer points of failure, because complexity and operational risk really are BFF – you can't address one without addressing the other.

Related blogs & articles:
F5 Friday: The Art of Efficient Defense
F5 Friday: When Firewalls Fail…
F5 Friday: Performance, Throughput and DPS
The Pythagorean Theorem of Operational Risk
At the Intersection of Cloud and Control…
When the Data Center is Under Siege Don't Forget to Watch Under the Floor
Challenging the Firewall Data Center Dogma
What CIOs Can Learn from the Spartans
What is a Strategic Point of Control Anyway?
Server Virtualization versus Server Virtualization

The Pythagorean Theorem of Operational Risk
#devops It's a simple equation, but one that is easily overlooked.

Most folks recall, I'm sure, the Pythagorean Theorem. If you don't, what's really important about the theorem is that any side of a right triangle can be computed if you know the other two sides, using the simple formula a² + b² = c². The really important thing about the theorem is that it clearly illustrates the relationship between three different pieces of a single entity. The lengths of the legs and hypotenuse of a triangle are intimately related; variations in one impact the others.

The components of operational risk – security, availability, performance – are interrelated in very much the same way. Changes to one impact the others. They cannot be completely separated. Much in the same way that unraveling a braided rope will impact its strength, so too does unraveling the relationship between the three components of operational risk undermine overall operational success.

The OPERATIONAL RISK EQUATION

It is true that the theorem is not an exact match, primarily because concepts like performance and security are not easily encapsulated as concrete numerical values, even if we apply mathematical concepts like Gödel numbering to them. While we could certainly use traditional metrics for availability and even performance (in terms of success at meeting specified business SLAs), it would still be difficult to nail these down in a way that makes the math make sense. But the underlying concept – that the sides of a right triangle are always* interrelated – is equally applicable to operational risk.

Consider that changes in performance impact what is defined as "availability" to end-users. Unacceptably slow applications are often defined as "unavailable" because they render it nearly impossible for end-users to work productively. Conversely, availability issues can negatively impact performance as fewer resources attempt to serve the same or more users. Security, too, is connected to both concepts, though it is more directly an enabler (or disabler) of availability and performance than vice-versa. Security solutions are not known for improving performance, after all, and the number of attacks directly attempting to negate availability is a growing concern for security. So, too, is the impact of attacks relevant to performance, as increasingly application-layer attacks are able to slip by security solutions and directly consume resources on web/application servers, degrading performance and ultimately resulting in a loss of availability. But that is not to say that performance and availability have no impact on security at all. In fact, the claim could easily be made that performance has a huge impact on security, as end-users demand more of the former, and that often results in less of the latter because of the impact traditional security solutions have on performance.

Thus we come back to the theorem, in which the three sides of the data center triangle known as operational risk are, in fact, intimately linked. Changes in one impact the others, often negatively, and all three must be balanced properly in a way that maximizes them all.

WHAT DEVOPS CAN DO

Devops plays (or could and definitely should play) a unique role in organizations transforming from their traditional, static architectures toward the agile, dynamic architectures necessary to successfully execute on cloud and virtualization strategies.
The role of devops focuses on automation and integration of the various delivery concerns that support the availability, performance, and security of applications. While devops may not be the ones who define and implement security policies, they are the ones who should be responsible for assuring those policies are deployed and applied to applications as they move into the production environment. Similarly, devops may not be directly responsible for defining availability and performance requirements, but they are the ones who must configure and implement the appropriate health monitoring and related policies to ensure that all systems involved have the data they need to make the decisions required to meet operational requirements. Devops should be responsible for provisioning the appropriate services related to performance, security, and availability based on their unique role in the emerging data center model.

Devops isn't just scripts and automation; it's a role requiring a unique skill set spanning networking, development, application delivery, and infrastructure integration. Devops must balance availability, performance, and security concerns and understand how the three are interrelated and impact each other. The devops team should be the one that recognizes gaps or failures in policies and is able to point them out quickly. The interrelationship between all three components of operational risk puts devops front and center when it comes to maintaining application deployments. Treating security, performance, and availability as separate and isolated concerns leads to a higher risk that one will negatively impact the others. Devops is the logical point at which these three concerns converge, much in the same way application delivery is the logical data center tier at which all three concerns converge from a services and implementation point of view. Devops, like application delivery, has the visibility and control necessary to ensure that the three sides of the operational risk triangle are balanced properly.
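A small sketch of what that convergence might look like in practice (plain Python with made-up policy and service names; a real environment would use its orchestration framework's own deployment descriptors) treats health monitoring, performance thresholds, and security policy as one deployment step rather than three separate hand-offs.

```python
# Hypothetical sketch: provisioning availability, performance, and security
# policies together as a single devops deployment step. All names are illustrative.
from dataclasses import dataclass

@dataclass
class DeploymentSpec:
    app: str
    health_check_url: str          # availability
    max_response_ms: int = 500     # performance (SLA threshold)
    waf_profile: str = "standard"  # security
    min_instances: int = 2

def deploy(spec: DeploymentSpec) -> list[str]:
    """Apply all three operational-risk concerns in one pass, so none is
    forgotten or bolted on after the application is already in production."""
    steps = [
        f"provision {spec.min_instances} instances of {spec.app}",
        f"attach health monitor: GET {spec.health_check_url}",
        f"alert if p95 latency > {spec.max_response_ms} ms",
        f"apply WAF profile '{spec.waf_profile}'",
    ]
    for step in steps:
        print("applying:", step)   # a real implementation would call APIs here
    return steps

if __name__ == "__main__":
    deploy(DeploymentSpec(app="orders", health_check_url="/api/health"))
```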
* In Euclidean geometry, at least. Yes, other systems exist in which this may not be true, but c'mon, we're essentially applying Gödel numbers to very abstract metrics and assuming axioms that have never been defined, let alone proven, so I think we can just agree to accept that this is always true in this version of reality.

Related blogs & articles:
At the Intersection of Cloud and Control…
Cloud Computing and the Truth About SLAs
IT Services: Creating Commodities out of Complexity
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility
The Future of Cloud: Infrastructure as a Platform
The Secret to Doing Cloud Scalability Right
Operational Risk Comprises More Than Just Security

F5 Friday: Platform versus Product

There's a significant difference between a platform and a product, especially when it comes to architecting a dynamic data center.

In the course of nearly a thousand blogs it's quite likely you've seen BIG-IP referenced as a platform, and almost never as a product. There's a reason for that, and it's one that is becoming increasingly important as organizations begin to look at some major transformations to their data center architecture. It's not that BIG-IP isn't a product. Ultimately, of course, it is in the traditional sense of the word. But it's also a platform, an infrastructure platform, designed specifically to allow the deployment of application delivery-related services in a modular fashion. In the most general way, modern browsers are products and platforms, as they provide an application framework through which additional plug-ins (modules) can be deployed. BIG-IP is similar to this model, with the noted exception that its internal application framework is intended for use by F5 engineers to develop new functionality, and integrate existing functionality, as "plug-ins" within the core architectural framework we call TMOS™.

There are myriad reasons why this distinction is important. Primary among them is that a unified internal architecture implies internal, high-speed interconnects that allow inbound and outbound data to be shared across modules (plug-ins) without incurring the overhead of network-layer communication. Many developers can explain the importance of zero-copy operations as they relate to performance. Those that can't will still likely be able to describe the difference between pass by reference and pass by value, which, in many respects, has similar performance implications: the former simply passes a pointer to a memory location, while the latter makes a copy. It's similar to the difference between collaborative editing in Google Docs and tracking revisions in Word via e-mail – the former acts on a single, shared copy while the latter passes around the entire document. Obviously, working on the same document at the same time is more efficient and ultimately faster than the alternative of passing around a complete copy and waiting for it to return, marked up with changes.

FROM THEORY to PRACTICE

This theory translates well to the architectural principles behind TMOS and the BIG-IP platform: inbound and outbound data is shared across modules (plug-ins) in order to reduce the overhead associated with traditional network-based architectures that chain multiple products together. While the end result may be similar, performance will certainly suffer, and the loss of context incurred by architectural chaining may negatively impact the effectiveness (not to mention capabilities) of security-related functions.
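The copy-versus-reference point can be illustrated with a small sketch (Python, purely as an illustration of the general concept, not of TMOS internals): sharing a view of a buffer lets every "module" inspect the same bytes, while a chain of separate components forces a copy at every hop.

```python
# Illustrative sketch of zero-copy sharing vs. copy-per-hop processing.
# This demonstrates the general concept only, not any product's internals.

payload = bytearray(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 1000)

def inspect(view) -> int:
    """Stand-in for a module (security, acceleration, logging) reading the
    traffic; counts CR bytes and works on any bytes-like object."""
    return sum(1 for byte in view if byte == 13)

def chained_copies(data: bytearray, modules: int) -> int:
    """Copy-based chaining: each module receives its own copy of the stream,
    the way separately chained network products each do."""
    return sum(inspect(bytes(data)) for _ in range(modules))  # bytes() copies

def shared_reference(data: bytearray, modules: int) -> int:
    """Shared-reference processing: every module reads the same underlying
    buffer through a memoryview, so no copies are made between modules."""
    view = memoryview(data)
    return sum(inspect(view) for _ in range(modules))

if __name__ == "__main__":
    print(chained_copies(payload, modules=4))
    print(shared_reference(payload, modules=4))
```

Both functions return the same answer; the difference is that the first allocates a fresh copy of the entire payload for every module in the chain.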
The second piece of the platform puzzle is programmatic interfaces for external, i.e. third-party, development. This is the piece of the puzzle that makes a platform extensible. F5 TMOS provides for this with iRules, a programmatic scripting language that can be used to do, well, just about anything you want to do to inbound and outbound traffic. Whether it's manipulating HTML, JSON, or HTTP headers, or inspecting and modifying IP packets (disclaimer: we are not responsible for the anger of your security and/or network team if you do this without their involvement), iRules allows you to deploy unique functionality for just about any situation you can think of. Most often these capabilities are used to mitigate emergent threats – such as the THC SSL Renegotiation vulnerability – but they are also used to perform a variety of operational and application-specific tasks, such as redirection and holistic error handling. And of course, who could forget my favorite, the random dice roll iRule. While certainly not of value to most organizations, such efforts can be good for learning. (That's my story and I'm sticking to it.)

TMOS is a full proxy, and is unique in its ability to inspect and control entire application conversations. This enables F5 to offer an integrated, operationally consistent solution that can act based on the real-time context of the user, network, and application across a variety of security, performance, and availability concerns. That means access control and application security, as well as load balancing and DNS services, leverage the same operational model, the same types of policies, and the same environment across all services, regardless of location or form factor. iRules can simultaneously interact with DNS and WAF policies, assuming both BIG-IP GTM and BIG-IP ASM are deployed on the same instance. The zero-copy nature of the high-speed bus that acts as the interconnect between the switching backplane and the individual modules ensures the highest levels of performance without requiring a traversal of the network.

Because of the lack of topological control in cloud computing environments – public and private – the need for an application delivery platform is increasing. The volatility in IP topology is true not only for server and storage infrastructure, but increasingly for the network as well, making the architecture of a holistic application delivery network out of individually chained components more and more difficult, if not impossible. A platform with the ability to scale out and across both physical and virtual instances, while simultaneously sharing configuration to ensure operational consistency, is a key component of a successful cloud-based initiative, whether it's private, public, or a combination of both. A platform provides the flexibility and extensibility required to meet head-on the challenges of highly dynamic environments, while ensuring the ability to enforce policies that directly address and mitigate operational risk (security, performance, availability). A product, without the extensibility and programmatic nature of a platform, is unable to meet these same challenges. Context is lost in the traversal of the network, and performance is always negatively impacted when multiple network-based connections must be made. A platform maintains context and performance while allowing the broadest measure of flexibility in deploying the right solutions at the right time.

Related blogs & articles:
At the Intersection of Cloud and Control…
What is a Strategic Point of Control Anyway?
F5 Friday: Performance, Throughput and DPS
Cloud Computing: Architectural Limbo
Operational Risk Comprises More Than Just Security
All F5 Friday Posts on DevCentral
Why Single-Stack Infrastructure Sucks
Your load balancer wants to take a level of fighter and wizard

The Scariest Cloud Security Statistic You'll See This Year
Who is most responsible for determining the adequacy of security in the cloud in your organization?

Dome9, which you may recall is a security management-as-a-service solution that aims to take the complexity out of managing administrative access to cloud-deployed servers, recently commissioned research on the subject of cloud computing and security from the Ponemon Institute, and came up with some interesting results that indicate cloud chaos isn't confined to just its definition. The research, conducted this fall and focusing on the perceptions and practices of IT security practitioners, indicated that 54% of respondents felt IT operations and infrastructure personnel were not aware of the risks of open ports in cloud computing environments.

I found that hard to swallow. After all, we're talking about IT practitioners. Surely these folks recognize the dangers associated with open ports on servers in general. But other data in the survey makes the finding plausible, as 51% of respondents said leaving administrative server ports open in cloud computing environments was very likely or likely to expose the company to increased attacks and risks, with 19% indicating such events had already happened. Yet 30% of those surveyed claimed it was not likely or simply would not happen. At all. I still wish Ponemon had asked the same questions of the same respondents about similar scenarios in their own data centers, as I'm confident the results would be very heavily weighted toward "likely or very likely to happen." It may be time for a reminder of Hoff's law: "If your security practices suck in the physical realm, you'll be delighted by the surprising lack of change when you move to Cloud."

However, digging down into the data one begins to find the real answer to this very troubling statistic in the assignment of responsibility for security of cloud-deployed servers. It is, without a doubt, the scariest statistic with respect to cloud security I've seen all year, and it seems to say that for some organizations, at least, the cloud of Damocles is swinging mightily. If it doesn't scare you that business functions are most often cited as being ultimately responsible for determining the adequacy of security controls in the cloud, it might frighten you to know that 54% of respondents indicated that IT operations and infrastructure personnel were not very knowledgeable, or completely unknowledgeable, with respect to the dangers inherent in open ports on servers in cloud computing environments – and that 35% of those organizations rely on IT operations to determine the adequacy of security in cloud deployments. While IT security is certainly involved in these decisions (at least one hopes that is the case), the troubling part is that the most responsibility is assigned to those least knowledgeable about the risks. That 19% of respondents indicated already experiencing an attack because of open ports on cloud-deployed servers is no longer such a surprising result of the study.

CALL to ACTION

The Ponemon study is very interesting in its results, and indicates that we've got a ways to go when it comes to cloud and security and increasing our comfort level combining the two. Cloud is a transformational and highly disruptive technology, and at times it may be transforming organizations in ways that are perhaps troubling – such as handing responsibility for security to business or non-security practitioners. Or perhaps it's simply exposing weaknesses in current processes that should force change. Or it may be something we have to live with.
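One concrete, if simplistic, way to act on the open-port risk the survey highlights is to routinely audit cloud-deployed servers for administrative ports left exposed. A minimal sketch (plain Python; the host names and port list are hypothetical) might look like this.

```python
# Hypothetical sketch: audit cloud-deployed servers for exposed admin ports.
# Host names and the port list are illustrative only.
import socket

ADMIN_PORTS = {22: "SSH", 3389: "RDP", 3306: "MySQL", 5432: "PostgreSQL"}
HOSTS = ["app1.cloud.example.com", "app2.cloud.example.com"]

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(hosts: list[str]) -> None:
    for host in hosts:
        exposed = [name for port, name in ADMIN_PORTS.items() if is_open(host, port)]
        if exposed:
            print(f"{host}: exposed administrative services: {', '.join(exposed)}")
        else:
            print(f"{host}: no audited administrative ports reachable")

if __name__ == "__main__":
    audit(HOSTS)
```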
It behooves IT security, then, to find ways to address the threats it knows exist in the cloud by educating those responsible for addressing them. It means finding tools like Dome9 to assist the less security-savvy in the organization with ensuring that security policies are consistently applied in cloud environments as well as in the data center. It may require new technology and solutions designed with the capability to easily replicate policies across multiple environments, to ensure that a standard level of security is maintained regardless of where applications are deployed.

As cloud becomes normalized as part of the organization's deployment options, the ability to effectively manage security, availability, and performance across all application deployments becomes critical. The interconnects (integrations) between applications in an organization mean that the operational risk of one is necessarily shared by the others. Consistent enforcement of all delivery-related policies – security, performance, and availability – is paramount to ensuring the successful integration of cloud-based resources, systems, and applications into the IT organization's operational processes.

You can register to read the full report on Dome9's web site.

Related blogs & articles:
The Corollary to Hoff's Law
Dome9: Closing the (Cloud) Barn Door