TraceSecurity: DevOps Meets Compliance
#infosec #devops #bigdata #cloud IT security initiatives can benefit from a devops approach enabled with a flexible framework.

What do you get when you mix some devops with compliance and its resulting big data? Okay, aside from a migraine just thinking about that concept, what do you get? What TraceSecurity got was TraceCSO – its latest compliance, security, and cloud mashup.

BIG (OPERATIONAL) DATA

The concept of big "operational" data shouldn't be new. Enterprise IT deals with enormous volumes of data in the form of logs generated by the hundreds of systems that make up IT. And the same problems that have long plagued APM (Application Performance Management) solutions are a scourge for security and compliance operations as well: disconnected systems produce disconnected data that can make compliance and troubleshooting even more difficult than it already is. Additional data that should be collected as part of compliance efforts – sign-offs, verification, etc. – often isn't or, if it is, is stored in a file somewhere on the storage network, completely disconnected from the rest of the compliance framework.

Now add in the "big data" from regulations and standards that must be factored in.

There are a daunting number of controls we have to manage. And we are all under multiple overlapping jurisdictions. There isn't a regulatory body out there that creates an authority document that doesn't, or hasn't, overlapped an already existing one. The new US HIPAA/HITECH Acts alone have spun a web of almost 60 Authority Documents that need to be followed. Even PCI refers to almost 5 dozen external Authority Documents and there are at least 20 European Data Protection Laws.
-- Information Security Form: Unified Compliance Framework (UCF)

While the UCF (Unified Compliance Framework) provides an excellent way to integrate and automate the matching of controls to compliance efforts and manage the myriad steps that must be completed to realize compliance, it still falls on IT to manage the many manual processes that require sign-off or verification, or steps that simply cannot be automated. But there's still a process that can be followed, a methodology, that makes it a match for devops. The trick is finding a way to codify those processes in such a way as to make them repeatable and successful. That's part of what TraceCSO provides – a framework for process codification that factors in regulations, risk, and operational data to ensure a smoother, simpler implementation.

HOW IT WORKS

TraceCSO is a SaaS solution, comprising a cloud-hosted application and a locally deployed vulnerability scanner, providing the visibility and interconnectivity necessary to enable process automation. Much like BPM (Business Process Management) and IAM (Identity and Access Management) solutions, TraceCSO offers the ability to automate processes that may include manual sign-offs, integrating with local identity stores like Active Directory. The system uses wizards to guide the codification process, with many helpful links to referenced regulatory and compliance documents and associated controls. Initial system setup walks through adding users and departments, defining permissions and roles, coordinating network scanning, and selecting the appropriate authority documents from which compliance levels can be determined.
TraceCSO covers all functional areas necessary to manage an ongoing, risk-based information security program: Risk, Policy, Vulnerability, Training, Vendor, Audit, Compliance, Process, and Reporting.

TraceCSO can be integrated with a variety of GRC solutions, though this may entail work on the part of TraceSecurity, the ISV, or the organization. Integration with MDM, for example, is not offered out of the box; TraceCSO instead approaches compliance with the relevant security policies via an audit process that requires sign-off by responsible parties designated in the system.

Its integrated risk assessment measures against best-practice CIA (Confidentiality, Integrity, Availability) expectations. TraceCSO calculates a unique risk score based on CIA measures as well as compliance with authoritative documentation and selected controls, and offers not just a reported risk score over time but the ability to examine unimplemented controls and best practices against anticipated improvements in the risk score. This gives IT and the business a way to choose those control implementations that will offer the best "bang for the buck" and puts more weight behind risk-benefit analysis.

By selecting regulations and standards applicable to the organization, TraceCSO can map controls identified during the risk assessment phase to its database of authorities. Technical controls can also be derived from vulnerability scans conducted by the TraceCSO appliance component.

TraceCSO is ultimately an attempt to unify the many compliance and risk management functions provided by a variety of disconnected, individual GRC solutions today. By providing a single point of aggregation for risk and compliance management as well as process management, the system enables a more comprehensive view of both risk and compliance across all managed IT systems. It's a framework enabling a more devops approach to compliance, which is certainly an area often overlooked in discussions regarding devops methodologies despite the reality that its process-driven nature makes it a perfect fit. The same efficiencies gained through process and task automation in other areas of IT through devops can also be achieved in the realm of risk and compliance with the right framework in place to assist. TraceCSO looks to be one such framework.
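To make the risk-scoring and "what if" analysis described above a bit more concrete, here is a minimal sketch of how such a score might be derived from CIA ratings and control implementation status. The names, weights, and formula are illustrative assumptions, not TraceCSO's actual model or API.

```python
# Hypothetical illustration only -- not TraceCSO's actual scoring model.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    implemented: bool
    weight: float  # assumed relative importance of the control

def risk_score(cia: dict, controls: list) -> float:
    """Blend CIA exposure (0 = none, 1 = maximum) with the weight of unimplemented controls."""
    cia_exposure = sum(cia.values()) / len(cia)
    gap = sum(c.weight for c in controls if not c.implemented)
    total = sum(c.weight for c in controls) or 1.0
    # 0 = no measurable risk, 100 = maximum exposure with nothing implemented
    return round(100 * (0.5 * cia_exposure + 0.5 * gap / total), 1)

controls = [
    Control("Encrypt backups", implemented=False, weight=3.0),
    Control("Quarterly access review", implemented=True, weight=2.0),
    Control("Patch management policy", implemented=False, weight=5.0),
]
cia = {"confidentiality": 0.6, "integrity": 0.4, "availability": 0.3}

baseline = risk_score(cia, controls)
# "What if" analysis: anticipated improvement from implementing each missing control
for c in [c for c in controls if not c.implemented]:
    c.implemented = True
    print(f"Implementing '{c.name}': score drops from {baseline} to {risk_score(cia, controls)}")
    c.implemented = False
```

The point of the "what if" loop is the same as the capability described above: rank candidate controls by how much each would move the score before committing resources to implementing them.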
Why IT Needs to Take Control of Public Cloud Computing

IT organizations that fail to provide guidance for and governance over public cloud computing usage will be unhappy with the results…

While it is highly unlikely that business users will “control their own destiny” by provisioning servers in cloud computing environments, that doesn’t mean they won’t be involved. In fact, it’s likely that IaaS (Infrastructure as a Service) cloud computing environments will be leveraged by business users to avoid the hassles they perceive (and oftentimes actually do) exist in their quest to deploy a given business application. It’s just that they won’t themselves be pushing the buttons.

There have been many experts who have expounded upon the ways in which cloud computing is forcing a shift within IT and the way in which assets are provisioned, acquired, and managed. One of those shifts is likely to also occur “outside” of IT with external IT-focused services, such as system integrators like CSC and EDS (HP Enterprise Services).
Automating scalability and high availability services

There are a lot of SOA governance solutions out there that fall into two distinct categories of purpose: one is to catalog services and associated security policies, and the other is to provide run-time management for services, including enforcement of security and performance-focused policies. Vendors providing a full "SOA stack" of functionality across the service lifecycle (design, development, testing, production) often integrate their disparate product sets for a more automated (and thus manageable) SOA infrastructure. But very few integrate those same products and functionality with the underlying network and application delivery infrastructure required to provide high availability and scalability for those services. The question should (and must) be asked: why is that?

Today's application delivery infrastructure, a.k.a. application delivery controllers and load balancers, is generally capable of integration via standards-based APIs. These APIs provide complete control over the configuration and management of these platforms, making the integration of application delivery platforms with the rest of the SOA ecosystem a definite reality.

Most registry/repository solutions today offer external applications the ability to subscribe to events. The events vary from platform to platform, but generally include some commonalities such as "artifact published" or "item changed". This means a listening application can subscribe to these events and take appropriate action when an event occurs:

1. A new WSDL describing a service interface (hosted in the service application infrastructure) is published.
2. The listening application is notified of the event and retrieves the new or modified WSDL.
3. The application parses the WSDL and determines the appropriate endpoint information, then automatically configures the application delivery controller to (a) virtualize the service and (b) load balance requests across applicable servers.
4. The application delivery controller begins automatically load balancing service requests and providing high-availability and scalability services.

There's some information missing that has to be supplied either via discovery, policy, or manual configuration. That's beyond the scope of this post, but it would certainly be a part of the controlling application. Conceptually, as long as you have (a) a service-enabled application delivery controller and (b) an application capable of listening for events in the SOA registry/repository, you can automate the process of provisioning high-availability and scalability services for those SOA services (a rough sketch of such a listener appears below). If you combine this with the ability to integrate application delivery control into the application itself, you can provide an even more agile, dynamic application delivery infrastructure than if you just used one concept or the other. And when you get right down to it, this doesn't just work for SOA; it could easily work just as well for any application framework, given the right integration.

There already exist some integrations of application delivery infrastructure with SOA governance solutions, such as AmberPoint, but there could be more. There could be custom solutions for your unique architecture as well, given that the technology exists to build it.
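Here is that rough sketch: a listening application that reacts to an "artifact published" event, retrieves the WSDL, extracts the service endpoints, and pushes configuration to the application delivery controller. The event shape and the configure_virtual_server() call are assumptions for illustration; a real integration would use the registry's own subscription API and the ADC's own management API.

```python
# Hypothetical sketch: registry event -> WSDL -> ADC configuration.
# The event dictionary shape and configure_virtual_server() are assumptions,
# not any specific registry's or ADC's actual API.
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

SOAP_NS = "http://schemas.xmlsoap.org/wsdl/soap/"

def endpoints_from_wsdl(wsdl_xml: str) -> list:
    """Extract the service endpoint addresses advertised in a WSDL document."""
    root = ET.fromstring(wsdl_xml)
    return [
        addr.get("location")
        for addr in root.iter(f"{{{SOAP_NS}}}address")
        if addr.get("location")
    ]

def configure_virtual_server(service_name: str, members: list) -> None:
    """Placeholder for the ADC's management API: virtualize the service and
    load balance requests across the applicable servers."""
    print(f"create pool {service_name}_pool members={members}")
    print(f"create virtual {service_name}_vs pool={service_name}_pool")

def on_artifact_published(event: dict) -> None:
    """Handle an 'artifact published' event from the registry/repository."""
    if event.get("artifact_type") != "wsdl":
        return
    with urllib.request.urlopen(event["artifact_url"]) as resp:   # step 2: retrieve the WSDL
        wsdl_xml = resp.read().decode()
    members = []
    for location in endpoints_from_wsdl(wsdl_xml):                # step 3: parse endpoint info
        parsed = urlparse(location)
        members.append((parsed.hostname, parsed.port or 80))
    if members:
        configure_virtual_server(event["service_name"], members)  # steps 3-4: configure the ADC

# Example usage (assumed event shape); requires a reachable WSDL URL:
# on_artifact_published({
#     "artifact_type": "wsdl",
#     "artifact_url": "http://registry.example.com/artifacts/order-service.wsdl",
#     "service_name": "order_service",
# })
```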
The question is, why aren't folks leveraging this integration capability to support initiatives like SOA and cloud computing that require a high level of agility and extensibility, and whose ROI depends at least partially on the ability to reduce management costs and the length of deployment cycles through automation? It's true that there seems to be an increasing awareness of the importance of application delivery infrastructure to architecting a scalable, highly available cloud computing environment. But we never really managed to focus on the importance of an agile, reusable, intelligent application delivery infrastructure to the success of SOA. Maybe it's time we backtrack a bit and do so, because many of the architectural and performance issues that will arise in the cloud due to poor choices in application delivery infrastructure are the same ones that adversely impact SOA implementations.

Related Links
Why can't clouds be inside (the data center)?
Governance in the Cloud
Reliability does not come from SOA Governance
Building a Cloudbursting Capable Infrastructure
The Next Tech Boom: Infrastructure 2.0
Never attribute to technology that which is explained by the failure of people

#cloud Whether it’s Hanlon or Occam or MacVittie, the razor often cuts both ways.

I am certainly not one to ignore the issue of complexity in architecture, nor do I dismiss lightly the risk introduced by cloud computing through increased complexity. But I am one who will point out absurdity when I see it, and especially when that risk is unfairly attributed to technology. Certainly the complexity introduced by attempts to integrate disparate environments, computing models, and networks will give rise to new challenges and introduce new risk. But we need to carefully consider whether the risk we discover is attributable to the technology or to simple failure by those implementing it.

Almost all of the concepts and architectures being “discovered” in conjunction with cloud computing are far from original. They are adaptations, evolutions, and maturations of existing technology and architectures. Thus, it is almost always the case that when a “risk” of cloud computing is discovered, it is not peculiar to cloud computing at all, and thus likely has its roots in implementation, not the technology. This is not to say there aren’t new challenges or risks associated with cloud computing; there are and will be cloud-specific risks that must be addressed (IP identity theft, for example, was unknown before the advent of cloud computing). But let’s not make mountains out of molehills by failing to recognize those “new” risks that actually aren’t “new” at all, but rather are simply being recognized by a wider audience due to the abundance of interest in cloud computing models.

For example, I found this article particularly apocalyptic with respect to cloud and complexity on the surface. Digging into the “simple scenario”, however, revealed that the meltdown referenced was nothing new, and certainly wasn’t a technological problem – it was another instance of lack of control, of governance, of oversight, and of communication. The risk is being attributed to technology, but is more than adequately explained by the failure of people.

The Hidden Risk of a Meltdown in the Cloud

Ford identifies a number of different possibilities. One example involves an application provider who bases its services in the cloud, such as a cloud-based advertising service. He imagines a simple scenario in which the cloud operator distributes the service between two virtual servers, using a power balancing program to switch the load from one server to the other as conditions demand. However, the application provider may also have a load balancing program that distributes the customer load. Now Ford imagines the scenario in which both load balancing programs operate with the same refresh period, say once a minute. When these periods coincide, the control loops start sending the load back and forth between the virtual servers in a positive feedback loop.

Could this happen? Yes (a toy simulation of this kind of synchronized oscillation appears at the end of this post). But consider for a moment how it could happen. I see three obvious possibilities:

1. IT has completely abdicated its responsibility for governing foundational infrastructure services like load balancing and allowed the business or developers to run amok without regard for existing services.
2. IT has failed to communicate its overarching strategy and architecture with respect to high-availability and scale in inter-cloud scenarios to the rest of the IT organization, i.e. IT has failed to maintain control (governance) over infrastructure services.
3. The left hand of IT and the right hand of IT have been severed from the body of IT and geographically separated, with no means to communicate. Furthermore, each hand of IT wholeheartedly believes that the other is incompetent and will fail to properly architect for high availability and scalability, thus requiring each hand to implement such services itself.

While the third possibility might make a better “made for SyFy” tech-horror flick, the reality is likely somewhere between 1 and 2.

This particular scenario, and likely others, is not peculiar to cloud. The same lack of oversight in a traditional architecture could lead to the same catastrophic cascade described by Ford in the aforementioned article. Given a load balancing service in the application delivery tier and a cluster controller in the application infrastructure tier, the same cascading feedback loop could occur, causing a meltdown and, inevitably, downtime for the application in question. Astute observers will conclude that an IT organization in which both a load balancing service and a cluster controller are used to scale the same application has bigger problems than duplicated services and a failed application. This is not a failure of technology, nor is it caused by excessive complexity or lack of transparency within cloud computing environments. It’s a failure to communicate, to control, to oversee the technical implementation of business requirements through architecture.

That’s a likely conclusion before we even start considering an inter-cloud model with two completely separate cloud providers sharing access to virtual servers deployed in one or the other – maybe both? Still, the same analysis applies – such an architecture would require willful configuration and knowledge of how to integrate the environments. Which ultimately means a failure on the part of people to communicate.

THE REAL PROBLEM

The real issue here is failure to oversee – control – the integration and use of cloud computing resources by the business and IT. There needs to be a roadmap that clearly articulates what services should be used and in what environments. There needs to be an understanding of who is responsible for what services, where they connect, with whom they share information, and by whom they will (and can) be accessed. Maybe I’m just growing jaded – but we’ve seen this lack of roadmap and oversight before. Remember SOA? It ultimately failed to achieve the benefits promised not because the technology failed, but because the implementations were generally poorly architected and governed. A lack of oversight and planning meant duplicated services that undermined the success promised by pundits. The same path lies ahead with cloud. Failure to plan, architect, and clearly articulate proper usage and deployment of services will undoubtedly end with the same disillusioned dismissal of cloud as yet another over-hyped technology. Like SOA, the reality of cloud is that you should never attribute to technology that which is explained by the failure of people.
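To see how the synchronized control loops in Ford's scenario can shuttle load back and forth, here is a toy simulation. The refresh period, gain, and rebalancing rule are invented for illustration; they are not drawn from Ford's work or from any real load balancer.

```python
# Toy model: two load balancing control loops with the same refresh period,
# each acting on the same snapshot of two virtual servers. Either controller
# alone would even out the load in one pass; together they overshoot and the
# load is simply sent back and forth -- the oscillation described above.
def correction(load_a: float, load_b: float, gain: float) -> float:
    """Amount one controller moves from server A to server B (negative = B to A).
    gain=0.5 perfectly evens out the load if this controller acts alone."""
    return gain * (load_a - load_b)

load_a, load_b = 70.0, 30.0   # initial load split, arbitrary units
GAIN = 0.5                    # each controller is individually well tuned

for minute in range(1, 7):    # both loops fire on the same one-minute refresh
    snapshot = (load_a, load_b)          # both read the same measurements
    move = correction(*snapshot, GAIN)   # cloud operator's balancer
    move += correction(*snapshot, GAIN)  # application provider's balancer
    load_a -= move                       # the combined correction overshoots
    load_b += move
    print(f"minute {minute}: A={load_a:5.1f}  B={load_b:5.1f}")
```

Setting either controller's gain to zero, i.e. letting one tier own the job, makes the load settle immediately, which is the governance point of this post: the problem is duplicated control, not the technology.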
Related Links
BFF: Complexity and Operational Risk
The Pythagorean Theorem of Operational Risk
At the Intersection of Cloud and Control…
What is a Strategic Point of Control Anyway?
The Battle of Economy of Scale versus Control and Flexibility
Hybrid Architectures Do Not Require Private Cloud
Control, choice, and cost: The Conflict in the Cloud
Do you control your application network stack? You should.
The Wisdom of Clouds: In Cloud Computing, a Good Network Gives You Control...

Reliability does not come from SOA Governance
An interesting InformationWeek article asks whether SOA intermediaries such as "enterprise service bus, design-time governance, runtime management, and XML security gateways" are required for an effective SOA. It further posits that SOA governance is a must for any successful SOA initiative. As usual, the report (offered free courtesy of IBM) focuses on SOA infrastructure that, while certainly fitting into the categories of SOA intermediary and governance, does very little to assure the stability and reliability of the rich Internet applications and composite mashups being built atop the corporate SOA.

Effective SOA Requires Intermediaries (via InformationWeek)

In addition to attracting new customers with innovative capabilities, it's equally important for businesses to offer stable, trusted services that are capable of delivering the high quality of service that users now demand. Without IT governance, the Web-oriented world of rich Internet applications and composite mashups can easily become unstable and unreliable. To improve your chances for success, establish discipline through a strong IT governance program where quality of service, security, and management issues are of equal importance.

As is often the case, application delivery infrastructure is relegated to "cloud" status; it's depicted as a cloud within the SOA or network and obscured, as though it has very little to do with the successful delivery of services and applications. Application delivery infrastructure is treated on par with layer 2-3 network infrastructure: dumb boxes whose functionality and features have little to do with application development, deployment, or delivery and are therefore beneath the notice of architects and developers alike.

SOA intermediaries, while certainly a foundational aspect of a strong, reliable SOA infrastructure, are only part of the story. Reliability of services can't truly be offered by SOA intermediaries, nor can it be provided by traditional layer 2-3 (switches, routers, hubs) network infrastructure. A dumb load balancer cannot optimize inter-service communication to ensure higher capacity (availability and reliability) and better performance. A traditional layer 2/3 switch cannot inspect XML/SOAP/JSON messages and intelligently direct those messages to the appropriate ESB or service pool. But neither can SOA intermediaries provide reliability and stability of services. Like ESB load-balancing and availability services, SOA intermediaries are largely incapable of ensuring the reliable delivery of SOA applications and services because their tasks are focused on runtime governance (authentication, authorization, monitoring, content-based routing), and their load-balancing and network-focused delivery capabilities are largely on par with those of traditional layer 2-3 network infrastructure. High-availability and failover functionality is rudimentary at best in SOA intermediaries. The author mentions convergence and consolidation of the SOA intermediary market, but that same market has yet to see the issue of performance and reliability truly addressed by any SOA intermediary. Optimization and acceleration services, available to web applications for many years, have yet to be offered to SOA by these intermediaries. That's perfectly acceptable, because it's not their responsibility.
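To make the contrast concrete, here is a minimal sketch of the kind of content-based routing an application-fluent delivery tier can perform and a layer 2-3 device cannot: inspect the SOAP or JSON payload and select a pool of servers accordingly. The pool names and routing rules are hypothetical, and real ADCs express this through their own policy or scripting interfaces rather than standalone application code.

```python
# Hypothetical sketch of content-based routing: inspect the message body and
# choose a pool of servers. Pool names and routing rules are illustrative only.
import json
import xml.etree.ElementTree as ET

POOLS = {
    "orders_pool":  ["10.0.1.10:8080", "10.0.1.11:8080"],
    "billing_pool": ["10.0.2.10:8080"],
    "default_pool": ["10.0.9.10:8080"],
}

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_operation(body: bytes):
    """Return the local name of the first element inside the SOAP Body, if any."""
    try:
        root = ET.fromstring(body)
    except ET.ParseError:
        return None
    soap_body = root.find(f"{{{SOAP_ENV}}}Body")
    if soap_body is None or len(soap_body) == 0:
        return None
    return soap_body[0].tag.split("}")[-1]  # strip any namespace prefix

def select_pool(content_type: str, body: bytes) -> str:
    """Route on message content rather than just IP and port (all an L2-3 switch sees)."""
    if "xml" in content_type:
        operation = soap_operation(body) or ""
        if operation.startswith("Order"):
            return "orders_pool"
        if operation.startswith("Invoice"):
            return "billing_pool"
    elif "json" in content_type:
        if json.loads(body).get("service") == "billing":
            return "billing_pool"
    return "default_pool"

soap_msg = (b'<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">'
            b"<s:Body><OrderCreate/></s:Body></s:Envelope>")
print(select_pool("text/xml", soap_msg))                           # -> orders_pool
print(select_pool("application/json", b'{"service": "billing"}'))  # -> billing_pool
```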
When it comes to increasing the capacity of services, ensuring quality of service, and intelligently managing the distribution of requests, the answer is not a SOA intermediary or a traditional load balancer; that requires an application delivery network with an application-fluent application delivery controller at its core.

The marriage of Web 2.0 and SOA has crossed the threshold. It's reality. SOA intermediaries are not designed with the capacity and reliability needs of a large-scale Web 2.0 (or any other web-based) application in mind. That chore is left to the "network cloud" in which application delivery currently resides. But it should be its own "cloud", its own distinct part of the overall architecture. And it ought to be considered as part of the process rather than an afterthought.

SOA governance solutions can do very little to improve the capacity, reliability, and performance of SOA and the applications built atop that SOA. A successful SOA depends on more than governance and SOA intermediaries; it depends on a well-designed architecture that necessarily includes consideration for the reliability, scalability, and security of both services and the applications - Web 2.0 or otherwise - that will take advantage of those services. That means incorporating an intelligent, dynamic application delivery infrastructure into your SOA before reliability becomes a problem.