iam
The Challenges of SQL Load Balancing
#infosec #iam Load balancing databases is fraught with operational and business challenges. While cloud computing has brought to the forefront of our attention the ability to scale through duplication, i.e. horizontal scaling or "scale out" strategies, this strategy tends to run into challenges the deeper into the application architecture you go. Working well at the web and application tiers, a duplicative strategy tends to fall on its face when applied to the database tier.

Concerns over consistency abound, with many simply choosing to throw out the concept of consistency and adopting instead an "eventually consistent" stance, in which it is assumed that data in a distributed database system will eventually become consistent and cause minimal disruption to application and business processes. Some argue that eventual consistency is not "good enough" and cite additional concerns with respect to the failure of such strategies to adequately address failures. Thus there are a number of vendors, open source groups, and pundits who spend time attempting to address both components: consistency and failure handling. The result is database load balancing solutions.

For the most part such solutions are effective. They leverage master-slave deployments – typically used to address failure and which can automatically replicate data between instances (with varying levels of success when distributed across the Internet) – and attempt to intelligently distribute SQL-bound queries across two or more database systems. The most successful of these architectures is the read-write separation strategy, in which all SQL transactions deemed "read-only" are routed to one database while all "write"-focused transactions are directed to another. Such foundational separation allows higher-layer architectures to be implemented, such as geographically based read distribution, in which read-only transactions are further distributed across geographically dispersed database instances, all of which ultimately act as "slaves" to the single master database that processes all write-focused transactions. This results in an eventually consistent architecture, but one which manages to mitigate the disruptive aspects of eventual consistency by ensuring that the most important transactions – write operations – are, in fact, consistent.

Even so, there are issues, particularly with respect to security.

MEDIATION inside the APPLICATION TIERS

Generally speaking, mediating solutions are a good thing – when they're external to the application infrastructure itself, i.e. the traditional three tiers of an application. The problem with mediation inside the application tiers, particularly at the data layer, is the same for infrastructure as it is for software solutions: credential management.

Databases maintain their own set of users, roles, and permissions. Even as applications have been able to move toward a more shared set of identity stores, databases have not. This is in part due to the nature of data security and the need for granular permission structures – down to the cell, in some cases – including transactional security that allows some users to update, delete, or insert while others are granted a different subset of permissions. But more difficult to overcome is the tight coupling of identity to connection for databases. With web protocols like HTTP, identity is carried along at the protocol level.
This means it can be transient across connections because it is often stuffed into an HTTP header via a cookie or stored server-side in a session – again, tied not to the connection but to identifying information. At the database layer, identity is tightly coupled to the connection. The connection itself carries along the credentials with which it was opened.

This gives rise to problems for mediating solutions – not just load balancers but software solutions such as ESB (enterprise service bus) and EII (enterprise information integration) styled solutions. Any device or software which attempts to aggregate database access for any purpose eventually runs into the same problem: credential management. This is particularly challenging for load balancing when applied to databases.

LOAD BALANCING SQL

To understand the challenges with load balancing SQL you need to remember that there are essentially two models of load balancing: transport and application layer. At the transport layer, i.e. TCP, connections are only temporarily managed by the load balancing device. The initial connection is "caught" by the load balancer and a decision is made, based on transport layer variables, as to where it should be directed. Thereafter, for the most part, there is no interaction at the load balancer with the connection other than to forward it on to the previously selected node. At the application layer the load balancing device terminates the connection and interacts with every exchange. This affords the load balancing device the opportunity to inspect the actual data or application layer protocol metadata in order to determine where the request should be sent.

Load balancing SQL at the transport layer is less problematic than at the application layer, yet it is at the application layer that the most value is derived from database load balancing implementations. That's because it is at the application layer where distribution based on "read" or "write" operations can be made. But to accomplish this requires that the SQL be inline, that is, that the SQL being executed is actually included in the application code and then executed via a connection to the database. If your application uses stored procedures, then this method will not work for you. It is important to note that many packaged enterprise applications rely upon stored procedures and are thus not able to leverage this style of load balancing as a scaling option.

Which of these methods is used to access your databases depends on your application and on how your organization has agreed to protect its data. The use of inline SQL affords the developer greater freedom at the cost of security, increased programming (to prevent the inherent security risks), difficulty in optimizing data and indices to adapt to changes in data volume, and deployment burdens. However, there is lively debate on the value of both access methods and how to overcome their inherent risks. The OWASP group has identified injection attacks as among the easiest to exploit and the most damaging in impact.

Application layer distribution also requires that the load balancing service parse the SQL dialect in use, such as MySQL's SQL or Microsoft's T-SQL (Transact-SQL). Databases, of course, are designed to parse these string-based commands and are optimized to do so. Load balancing services are generally not designed to parse these languages and, depending on the implementation of their underlying parsing capabilities, may actually incur significant performance penalties in doing so.
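To make the read-write separation and the SQL-parsing burden concrete, here is a minimal sketch of application-layer routing logic. It is illustrative only, not how any particular load balancing product works: the endpoints are placeholders and the classification is deliberately naive.

```python
import random

# Hypothetical database endpoints: one write master and two read slaves.
MASTER = "db-master.example.internal:3306"
READ_SLAVES = ["db-slave-1.example.internal:3306",
               "db-slave-2.example.internal:3306"]

# Statement verbs that mutate state and therefore must hit the master.
WRITE_VERBS = {"insert", "update", "delete", "replace", "create",
               "alter", "drop", "truncate", "grant", "revoke"}

def classify(sql: str) -> str:
    """Naively classify an inline SQL statement as a read or a write.

    A real load-balancing service has to fully parse the dialect in use
    (and cope with transactions, multi-statement batches, and so on),
    which is where much of the parsing overhead comes from.
    """
    verb = sql.lstrip().split(None, 1)[0].lower()
    return "write" if verb in WRITE_VERBS else "read"

def route(sql: str) -> str:
    """Pick a target: reads are spread across slaves, writes go to the master."""
    return MASTER if classify(sql) == "write" else random.choice(READ_SLAVES)

if __name__ == "__main__":
    for stmt in ("SELECT name FROM users WHERE id = 42",
                 "UPDATE users SET name = 'bob' WHERE id = 42"):
        print(f"{stmt!r} -> {route(stmt)}")
```

Even this toy version hints at the problem: a single misclassified statement would send a write to a read-only slave, which is why production implementations need a real parser for the dialect in use rather than simple string inspection.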
Regardless of those issues, there are still an increasing number of organizations who view SQL load balancing as a means to achieve a more scalable data tier. Which brings us back to the challenge of managing credentials.

MANAGING CREDENTIALS

Many solutions attempt to address the issue of credential management by simply duplicating credentials locally; that is, they create a local identity store that can be used to authenticate requests against the database. Ostensibly the credentials match those in the database (or in the identity store used by the database, such as can be configured for MSSQL) and are kept in sync. This obviously poses an operational challenge similar to that of any distributed system: synchronization and replication. Such processes are not easily (if at all) automated, and rarely is the same level of security and permissions available in the local identity store as is available in the database. What you generally end up with is a very loose "allow/deny" set of permissions on the load balancing device that actually opens the door for exploitation, as well as caching of credentials that can lead to unauthorized access to the data source.

This also leads to potential security risks from attempting to apply to SQL connections some of the same optimization techniques offered by application delivery solutions for TCP connections. For example, TCP multiplexing (sharing connections) is a common means of reusing web and application server connections to reduce latency (by eliminating the overhead associated with opening and closing TCP connections). Similar techniques at the database layer have been used by application servers for many years; connection pooling is not uncommon and is essentially duplicated at the application delivery tier through features like SQL multiplexing. Both connection pooling and SQL multiplexing incur security risks, because shared connections require shared credentials. So either every access to the database uses the same credentials (a significant negative when considering the loss of an audit trail) or we return to managing duplicate sets of credentials – one set at the application delivery tier and another at the database – which, as noted earlier, incurs additional management and security risks.

YOU CAN'T WIN FOR LOSING

Ultimately the decision to load balance SQL must be a combination of business and operational requirements. Many organizations successfully leverage load balancing of SQL as a means to achieve very high scale. Generally speaking, the resulting solutions – such as those often touted by eBay – are based on sound architectural principles such as sharding, are designed as a strategic solution rather than a tactical response to operational failures, and rarely involve inspection of inline SQL commands. Rather, they are based on the ability to discern which database should be accessed given the function being invoked or the type of data being accessed, and then use a traditional database connection to connect to the appropriate database. This does not preclude the use of application delivery solutions as part of such an architecture, but rather indicates a need to collaborate across the various application delivery and infrastructure tiers to determine a strategy most likely to maintain high availability, scalability, and security across the entire architecture.

Load balancing SQL can be an effective means of addressing database scalability, but it should be approached with an eye toward its potential impact on security and operational management.
Related:
What are the pros and cons to keeping SQL in Stored Procs versus Code
Mission Impossible: Stateful Cloud Failover
Infrastructure Scalability Pattern: Sharding Streams
The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…
SQL injection – past, present and future
True DDoS Stories: SSL Connection Flood
Why Layer 7 Load Balancing Doesn't Suck
Web App Performance: Think 1990s.

The Identity (of Things) Crisis
#IAM #IoT If you listen to the persistent murmur in the market surrounding the Internet of Things right now, you'd believe that it's all about sensors. Sensors and big data. Sensors that monitor everything from entertainment habits to health status to more mundane environmental data about your home and office. To a certain degree this is accurate. The Internet of Things comprises, well, things. But the question that must be asked - and is being asked in some circles - is not only where that data ends up but how organizations are going to analyze it and, more importantly, monetize it.

But there's yet another question that needs to be asked and answered - soon. Assuming these things are talking to applications (whether they reside in the cloud or in the corporate data center) and vice-versa, there must be some way to identify them - and the people to whom they belong.

There is already a significant burden placed on IT and infrastructure to control access to applications. Whether it's employees or customers, the burden is very real and has been increased substantially with the introduction of mobile platforms from which users can now access a variety of applications. A recent Ponemon study conducted on behalf of Netskope revealed an average of 25,180 computing devices connected to networks and/or enterprise systems. Very few of the organizations in the study could claim a comparable number of employees using those devices; it's more the case that there are 2 or 3 or even 4 devices per employee at this point in time. But the identity of the user is still the same, and it's their role and "need to know" upon which application access must be based. The access services which allow those users to engage with an application must be able to take into consideration not only identity but also device and, increasingly, location.

According to enterprise executives in a Vodafone M2M adoption report (2013), 78% of them expect machine-to-machine (M2M) interaction to be core to their successful business initiatives in the future. Even assuming these things do nothing but collect data, you can bet at some point their owners will want to visualize that data; to look at it, examine it, and understand what it's telling them. Which means an application, yes, but more than that it means that the exchange of the sensor data in the first place must be tied to an identity. To a real person. To a customer or employee. And because that data is specific to a person there will be privacy concerns. There's no reason for you to know how hot I like the water in my bath, or how many times I open my refrigerator in the middle of the night. But I may want to know, and thus the things in my home, my car, and my office need to be tied to me and secured against access by others.

It's also naive to think that things will necessarily be peculiar to a specific provider. The app economy will be driven by apps (and services) that interact with things that may be manufactured by one company but will have services provided for them by many others. That's one of the ways in which the Internet of Things is going to drive value for all sorts of organizations - apps that provide value-added services by interacting with things.

All this means that there will be a significant increase in demand on not just identity systems but access services. Already such systems and services are taxed by the increasing need to interpret requests for access within the context of not just identity but device and network as well.
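As a concrete illustration of interpreting an access request in the context of identity, device, and network, the sketch below shows the kind of check an access service might perform before a "thing" is allowed to talk to an application: the device must be registered, must be bound to the owner it claims, and must be connecting over a network that owner's policy allows. All registry entries, policy values, and names here are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical registry binding device identities to their human owners.
DEVICE_OWNERS = {
    "thermostat-9f3a": "alice@example.com",
    "fitness-band-77c2": "bob@example.com",
}

# Hypothetical per-owner policy: which networks a device may report from.
OWNER_POLICY = {
    "alice@example.com": {"allowed_networks": {"home-wifi", "cellular"}},
    "bob@example.com": {"allowed_networks": {"corp-wifi"}},
}

@dataclass
class Request:
    device_id: str
    network: str        # e.g. "home-wifi", "corp-wifi", "cellular"
    claimed_owner: str

def authorize(req: Request) -> bool:
    """Grant access only if the thing is registered, is bound to the owner
    it claims, and is connecting from a network that owner allows."""
    owner = DEVICE_OWNERS.get(req.device_id)
    if owner is None or owner != req.claimed_owner:
        return False  # unknown device, or a broken device-to-owner binding
    policy = OWNER_POLICY.get(owner, {})
    return req.network in policy.get("allowed_networks", set())

if __name__ == "__main__":
    print(authorize(Request("thermostat-9f3a", "home-wifi", "alice@example.com")))  # True
    print(authorize(Request("thermostat-9f3a", "corp-wifi", "bob@example.com")))    # False
```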
Things will need to be identified in such context as well, to ensure that the "thing" should even be talking to the apps you provide in the first place and that the "thing" is owned by an employee or customer of yours. Access and identity services will need to be more scalable, more flexible, and highly dynamic to adapt to the needs of the Internet of Things without buckling under the burden. They will need to be context aware and able to discern at the logical perimeter whether or not access should be granted.

The secret to winning the game of the Internet of Things is not only going to be recognizing the opportunity for a new service or application or thing, but also having an infrastructure in place that's going to be able to meet the sudden and wholly desirable increase in demand on the relevant services and applications. Auto-scaling will be table stakes, not just for apps but for their supporting identity and access services, too. Both will need an adaptable architecture upon which to run.

Internet of Insider Threats
Identify Yourself, You Thing!

Imagine if Ben Grimm, aka The Thing, didn't have such distinctive characteristics as an orange rocky body, blue eyes, or his battle cry, 'It's Clobberin' Time!' and had to provide a photo ID and password to prove he was a founding member of the Fantastic Four. Or if the alien in John Carpenter's The Thing gave each infected life-form the proper credentials to come and go as they please. Today the things we call 'Things' are infiltrating every aspect of society, but how do organizations identify, secure, and determine access for the 15+ connected chips employees will soon be wearing to the office? And what business value do they bring?

Gartner refers to it as the 'Identity of Things' (IDoT), an extension of identity management that encompasses all entity identities, whatever form those entities take. According to Gartner, IoT is part of the larger digital business trend transforming enterprises. It means that the business, the people/employees, and the 'things' are all responsible for delivering business value. The critical part is the relationships between or among those participants, so that business policies and procedures can reflect those relationships. Those relationships can be between a device and a human; a device and another device; a device and an application or service; or a human and an application or service.

For instance, how does the system(s) know that the wearable asking for Wi-Fi access is the one connected to your wrist? It really doesn't, since today's Identity and Access Management (IAM) systems are typically people-based and unable to scale as more entities enter the workplace. Not to mention the complexity involved with deciding if the urine-powered socks the VP is wearing get access. The number of relationships between people and the various entities/things will grow to an almost unmanageable point. Could anyone manage a subset of the expected 50 billion devices over the next 4 years? And set policies for data-sharing permissions? Not without a drastic change to how we identify and integrate these entities. Talk about the Internet of Insider Threats. That's IoIT for those counting.

Gartner suggests that incorporating functional characteristics of existing management systems like IT Asset Management (ITAM) and Software Asset Management (SAM) within the IAM framework might aid in developing a single-system view for IoT. The current static approach of IAM doesn't take into account dynamic relationships, which will be vital to future IAM solutions. Relationships will become as important to the IDoT as the concept of identity is for IAM today, according to Gartner.

My, your, our identities are unique and have been used to verify you-are-you and, based on that, give you access to certain resources, physical or digital. Now our identities are not only intertwined with the things around us, but the things themselves also need to verify their identity and their relationship to ours.

I can hear the relationship woes of the future:

A: 'I'm in a bad relationship…'
B: 'Bad!?! I thought you were getting along?'
A: 'We were until access was denied.'
B: 'What are you talking about? You guys were laughing and having a great time at dinner last night.'
A: 'Not my fiancé…it's my smart-watch, smart-shoes, smart-socks, smart-shirt, smart-pants, smart-belt, smart-glasses, smart-water bottle, smart fitness tracker and smart-backpack.' IT said, 'It's not you, it's me.'
ps

Related:
The Identity of Things for the Internet of Things
IoT Requires Changes From Identity and Access Management Space: Gartner
What is IoT without Identity?
IoT: A new frontier for identity
Health and Finance Mobile Apps Still Incredibly Insecure
Internet of Things 'smart' devices are dumb by design
Authentication in the IoT – challenges and opportunities

The Mounting Case for Cloud Access Brokers
#infosec #cloud #iam Addressing the need for flexible control of access to off-premise applications

Unifying identity and access management has been a stretch goal for IT for nearly a decade. At first it was merely the need to have a single, authoritative source of corporate identity so that risks like orphaned or unauthorized accounts could be addressed within the enterprise. But with a growing number of applications - business applications - being deployed "in the cloud", it's practically a foregone conclusion that organizations are going to need similar capabilities for those applications as well.

It's not easy; there are myriad reasons why unifying identity and access control is a stretch goal and not something easily addressed by simply deploying a solution. Federation of identity and access control requires integration. It may require modification of applications. It may require architectural changes. All of these are disruptive and, ultimately, costly. But the costs of not addressing the issue are likely higher.

Security a Rising Concern for Cloud-Based Application Usage
With access to these applications taking place from a variety of locations including smartphones (80 percent), tablets (71 percent) and non-company computers (80 percent), and with a large percentage of organizations (73 percent) needing to grant temporary access to cloud apps, respondents cited concerns around identity management, governance and complexity. ... Nearly three-quarters (72 percent) of the respondents said they have the need to provide external users, such as consultants, with temporary access to the company's cloud applications, while just under half (48 percent) of respondents said they are still not able to sign in to cloud applications with a single set of credentials. [emphasis mine]

There is a significant loss of control - in terms of governance - occurring, in which the organization no longer has the means by which it can control who has access to applications, from what device or location, and when. That's the downside of cloud, of distributed systems that are not architected with security in mind. Make no mistake, this is not just IT making a power grab for power's sake. This is a real, significant issue for the business side of the house, because it is their applications - and ultimately data - that are at risk when issues of access are not properly addressed.

THE CASE FOR CLOUD ACCESS BROKERS

The least disruptive - and most efficient - means of addressing this disconnect is to insert into the data center architecture an access broker tier: a layer of dynamic access and identity management services designed to provide federation and unification of credentials across cloud and data center resources, based on the organization's authoritative source of identity. The advantages of such a tier are that it is less disruptive, it respects the authoritative source of identity, and it is highly flexible. The same cloud access broker that provides authentication and authorization to internal resources can do so for cloud-based resources. The downside is the need to integrate with a growing variety of SaaS and custom cloud-deployed applications used by the enterprise. A standards-based way of integrating off-premise applications with a cloud access broker is needed, and we find such a standard in SAML 2.0, an increasingly popular means of integrating identity and access management services across the cloudosphere.
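For a sense of what the SAML 2.0 integration looks like in practice, the sketch below builds the skeleton of an assertion a broker acting as identity provider would issue on behalf of a corporate user for a cloud application. It is intentionally minimal and unsigned: the namespaces are the standard SAML 2.0 ones, but the issuer, subject, and attribute values are placeholders, and a real deployment would use a SAML library with XML signatures and audience conditions rather than hand-built XML.

```python
import datetime
import uuid
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion(issuer: str, subject: str, role: str) -> str:
    """Return a minimal, unsigned SAML 2.0 assertion as an XML string.

    A cloud access broker acting as IdP would sign this and deliver it to
    the SaaS provider's assertion consumer service; signing, audience, and
    validity conditions are omitted here for brevity.
    """
    now = datetime.datetime.now(datetime.timezone.utc)
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion", {
        "ID": f"_{uuid.uuid4().hex}",
        "Version": "2.0",
        "IssueInstant": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    })
    ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = issuer

    subj = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    name_id = ET.SubElement(subj, f"{{{SAML_NS}}}NameID", {
        "Format": "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"})
    name_id.text = subject

    stmt = ET.SubElement(assertion, f"{{{SAML_NS}}}AttributeStatement")
    attr = ET.SubElement(stmt, f"{{{SAML_NS}}}Attribute", {"Name": "role"})
    ET.SubElement(attr, f"{{{SAML_NS}}}AttributeValue").text = role
    return ET.tostring(assertion, encoding="unicode")

if __name__ == "__main__":
    # Placeholder issuer and subject; not a real tenant or user.
    print(build_assertion("https://idp.example.com", "jdoe@example.com", "sales"))
```

The point of the example is the division of labor: the corporate side mints and signs the assertion from its authoritative identity store, and the SaaS application only has to trust that issuer rather than maintain a second set of passwords.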
In addition to providing access control through such integration, a cloud access broker also provides the means for IT to address the issue of password security noted in "Security a Rising Concern for Cloud-Based Application Usage":

The survey indicated unsafe password management continues to be a challenge, with 43 percent of respondents admitting that employees manage passwords in spreadsheets or on sticky notes and 34 percent share passwords with their co-workers for applications like FedEx, Twitter, Staples and LinkedIn. Twenty percent of respondents said they experienced an employee still being able to log in after leaving the company.

By enabling federation and single sign-on capabilities, organizations can mitigate this problem by ensuring users have fewer passwords to recall and that they do not share them with off-premise applications like FedEx. Because IT controls the authoritative source of identity, it also governs policies for those credentials, such as password length, history, interval of change, and composition.

FEDERATION MEANS HEIGHTENED (AND ENFORCEABLE) SECURITY

Federation of identity and access management through a cloud access broker can alleviate the loss of control - and thus the expanding security threats. By maintaining the authoritative source of identity on-premise, organizations can enforce security policies regarding password strength and length while improving the overall experience for end-users by reducing the number of credentials they must manage to conduct daily business operations. Issues such as orphaned or rogue accounts having access to critical business applications and data can be more easily - and quickly - addressed, and by using a flexible cloud access broker capable of translating between security protocols, device incompatibility becomes a non-issue.

As more and more organizations recognize the ramifications of unfettered use of cloud services, it is inevitable that cloud access brokers will become a critical component in the data center.

Ask the Expert – Why Identity and Access Management?
Michael Koyfman, Sr. Global Security Solution Architect, shares the access challenges organizations face when deploying SaaS cloud applications. Syncing data stores to the cloud can be risky, so organizations need to utilize their local directories and assert the user identity to the cloud. SAML is a standardized way of asserting trust, and Michael explains how BIG-IP can act either as an identity provider or a service provider so users can securely access their workplace tools. Integration is key to solving common problems for successful and secure deployments.

ps

Related:
Ask the Expert – Are WAFs Dead?
Ask the Expert – Why SSL Everywhere?
Ask the Expert – Why Web Fraud Protection?
Application Availability Between Hybrid Data Centers
F5 Access Federation Solutions
Inside Look - SAML Federation with BIG-IP APM
RSA 2014: Layering Federated Identity with SWG (feat Koyfman)

F5 Friday: Ops First Rule
#cloud #microsoft #iam "An application is only as reliable as its least reliable component"

It's unlikely there's anyone in IT today who doesn't understand the role of load balancing in scale. Whether cloud or not, load balancing is the key mechanism through which load is distributed to ensure horizontal scale of applications. It's also unlikely there's anyone in IT who doesn't understand the relationship between load balancing and high availability (reliability). High-availability (HA) architectures are almost always implemented using load balancing services to ensure seamless transition from one service instance to another in the event of a failure.

What's often overlooked is that scalability and HA aren't important just for applications. Services – whether application or network-focused – must also be reliable. It's the old "only as strong as the weakest link in the chain" argument. An application is only as reliable as its least reliable component – and that includes the services and infrastructure upon which that application relies. It is – or should be – the Ops First Rule, the rule that guides the design of data center architectures.

This requirement becomes more and more obvious as emerging architectures combining the data center and cloud computing are implemented, particularly when federating identity and access services. That's because it is desirable to maintain control over the identity and access management processes that authenticate and authorize use of applications no matter where they may be deployed. Such an architecture relies heavily on the corporate identity store as the authoritative source of both credentials and permissions. This makes the corporate identity store a critical component in the application dependency chain, one that must necessarily be made as reliable as possible. Which means you need load balancing.

A good example of how this architecture can be achieved is found in BIG-IP load balancing support for Microsoft's Active Directory Federation Services (AD FS).

AD FS and F5 Load Balancing

Microsoft's Active Directory Federation Services (AD FS) server role is an identity access solution that extends the single sign-on (SSO) experience for directory-authenticated clients (typically provided on the intranet via Kerberos) to resources outside of the organization's boundaries, such as cloud computing environments. To ensure high availability, performance, and scalability, the F5 BIG-IP Local Traffic Manager (LTM) can be deployed to load balance an AD FS server farm.

There are several scenarios in which BIG-IP can load balance AD FS services:

1. To enable reliability of AD FS for internal clients accessing external resources, such as those hosted in Microsoft Office 365. This is the simplest of the architectures and the most restrictive in terms of access for end-users, as it is limited to internal clients only.

2. To enable reliability of AD FS and AD FS proxy servers, which provide external end-user SSO access to both internal federation-enabled resources and partner resources like Microsoft Office 365. This is a more flexible option, as it serves both internal and external clients.

3. BIG-IP Access Policy Manager (APM) can replace the need for the AD FS proxy servers required for external end-user SSO access, which eliminates another tier and enables pre-authentication at the perimeter, offering both the flexibility required (supporting both internal and external access) and a more secure deployment.
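The following is a rough sketch of the kind of logic a load-balancing tier applies to an AD FS farm in any of these scenarios: probe each farm member's federation metadata endpoint as a health monitor and distribute new connections only across members that respond. It is not an F5 or BIG-IP configuration; the host names are placeholders, though the metadata path shown is the one AD FS conventionally publishes.

```python
import itertools
import urllib.request

# Hypothetical AD FS farm members behind the load-balancing tier.
ADFS_FARM = ["https://adfs1.example.com", "https://adfs2.example.com"]

# AD FS publishes its federation metadata at this well-known path; a simple
# health monitor can request it to decide whether a member is available.
METADATA_PATH = "/FederationMetadata/2007-06/FederationMetadata.xml"

def is_healthy(member: str, timeout: float = 2.0) -> bool:
    """Return True if the farm member answers its metadata endpoint."""
    try:
        with urllib.request.urlopen(member + METADATA_PATH, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def healthy_rotation(pool):
    """Round-robin iterator over only the currently healthy members."""
    healthy = [m for m in pool if is_healthy(m)]
    if not healthy:
        raise RuntimeError("no AD FS farm members available")
    return itertools.cycle(healthy)

if __name__ == "__main__":
    rotation = healthy_rotation(ADFS_FARM)
    for _ in range(4):
        print("forwarding new connection to", next(rotation))
```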
In all three scenarios, F5 BIG-IP serves as a strategic point of control in the architecture, assuring reliability and performance of the services upon which applications are dependent, particularly those of authentication and authorization.

Using BIG-IP APM instead of AD FS proxy servers both simplifies the architecture and makes it more agile, because BIG-IP APM is inherently more programmable and flexible in terms of policy creation. BIG-IP APM, being deployed on the BIG-IP platform, can take full advantage of the context in which requests are made, ensuring that identity and access control go beyond simple credentials and take into consideration device, location, and other contextual clues that enable a more secure system of authentication and authorization.

High availability – and ultimately scalability – is preserved for all services by leveraging the core load balancing and HA functionality of the BIG-IP platform. All components in the chain are endowed with HA capabilities, making the entire application more resilient and able to withstand minor and major failures.

Using BIG-IP LTM for load balancing AD FS serves as an adaptable and extensible architectural foundation for a phased deployment approach. As a pilot phase, rolling out AD FS services for internal clients only makes sense, and is the simplest in terms of implementation. Using BIG-IP as the foundation for such an architecture enables further expansion in subsequent phases, such as introducing BIG-IP APM in a phase-two implementation that brings flexibility of access location to the table. Further enhancements can then be made regarding access when context is included, enabling more complex and business-focused access policies to be implemented. Time-based restrictions on clients or locations can be deployed and enforced, as desired or needed by operations or business requirements.

Reliability is a Least Common Factor Problem

Reliability must be enabled throughout the application delivery chain to ultimately ensure the reliability of each application. Scalability is further paramount for those dependent services, such as identity and access management, that are intended to be shared across multiple applications. While certainly there are many other load balancing services that could be used to enable reliability of these services, an extensible and highly scalable platform such as BIG-IP is required to ensure both reliability and scalability of shared services upon which many applications rely. The advantage of a BIG-IP-based application delivery tier is that its core reliability and scalability services extend to any of the many services that can be deployed. By simplifying the architecture through application delivery service consolidation, organizations further enjoy the benefits of operational consistency that keep management and maintenance costs down.

Reliability is a least common factor problem, and the Ops First Rule should be applied when designing a deployment architecture to assure that all services in the delivery chain are as reliable as they can be.

Related:
F5 Friday: BIG-IP Solutions for Microsoft Private Cloud
BYOD–The Hottest Trend or Just the Hottest Term
The Four V's of Big Data
Hybrid Architectures Do Not Require Private Cloud
The Cost of Ignoring 'Non-Human' Visitors
Complexity Drives Consolidation
What Does Mobile Mean, Anyway?
At the Intersection of Cloud and Control…
Cloud Bursting: Gateway Drug for Hybrid Cloud
Identity Gone Wild! Cloud Edition
Cloud Security: It's All About (Extreme Elastic) Control
#iam #infosec #cloud #mobile Whether controlling access by users or flows of data, control is the common theme in securing "the cloud"

The proliferation of mobile devices, along with the adoption of hybrid cloud architectures that integrate black-box services from external providers, is bringing issues of control back to the fore. Control over access to resources, control over the flow of data into and out of resources, and the ability to exert that control consistently whether the infrastructure is "owned" or "rented".

What mobile and BYOD illustrate is the extreme nature of computing today; the challenge of managing the elasticity inherent in cloud computing. It is from this elasticity that the server side poses its greatest challenges – with mobile IP addresses and locations that can prevent security policies from being efficiently codified, let alone applied consistently. With end-points (clients) we see similar impacts; the elasticity of users lies in their device mobility, in the reality that users move from smartphone to laptop to tablet with equal ease, expecting the same level of access to corporate applications – both on and off-premise. This is extreme elasticity – disrupting both client and server variables.

Given the focus on mobile today it should be no surprise to see the declaration that "cloud security" is all about securing "mobile devices."

"If you want to secure the cloud, you need to secure your mobile devices," he explained. "They are the access points to the cloud -- and from an end-user perspective, the difference between the cloud and the mobile phone is lost." -- BYOD: if you can't beat 'em, secure 'em

If this were to be taken literally, it would be impossible. Without standardization – which runs contrary to a BYOD policy – it is simply not feasible for IT to secure each and every mobile device, let alone all the possible combinations of operating systems and versions of operating systems. To do so is futile, and IT already knows this, having experienced the pain of trying to support just varying versions of one operating system on corporate-owned desktops and laptops. It knows the futility of attempting to do the same with mobile devices, and yet IT is told that this is what it must do if it is to secure the cloud.

Which brings us to the solutions posited by experts and pundits alike: IAM (Identity and Access Management) automation and integration.

IAM + "Single Control Point" = Strategic Point of (Federated Access) Control

IAM is not a new solution, nor is the federation of such services to provide a single control point through which access can be managed. In fact, combining the two beliefs – that control over access to cloud applications matters, and that a "single control point" is important – is exactly what is necessary to address the "great challenge" for the security industry described by Wendy Nather of the 451 Group. It is the elasticity that exists on both sides of the equation – the client and the server – that poses the greatest challenge for IT security (and operations in general, if truth be told). Such challenges can be effectively met through the implementation of a flexible intermediation tier, residing in the data center and taking advantage of infrastructure and application integration techniques through APIs and process orchestration.
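As an illustration of what context-aware intermediation at a single control point can mean in practice, here is a small sketch of an access decision: the same identity may be allowed, challenged for stronger authentication, or denied depending on the device and network from which the request arrives. The policy values are invented for the example; a real policy engine would be driven by the corporate identity store and much richer context.

```python
def access_decision(user_role: str, device: str, network: str) -> str:
    """Return 'allow', 'step-up', or 'deny' for a request, based on who is
    asking, from what kind of device, and over what kind of network.

    The rules below are illustrative only, not a recommended policy.
    """
    managed_device = device in {"corporate-laptop", "managed-tablet"}
    trusted_network = network in {"wired", "corporate-wifi"}

    if user_role == "guest":
        return "deny"            # guests never reach internal apps
    if managed_device and trusted_network:
        return "allow"           # lowest-risk combination
    if managed_device or trusted_network:
        return "step-up"         # partial trust: require stronger authentication
    return "deny"                # unmanaged device on an untrusted network

if __name__ == "__main__":
    print(access_decision("employee", "corporate-laptop", "corporate-wifi"))  # allow
    print(access_decision("employee", "personal-phone", "mobile"))            # deny
    print(access_decision("contractor", "managed-tablet", "mobile"))          # step-up
```

The design point is that the decision is made once, at the intermediation tier, with all of the context in hand, rather than being partially re-implemented in each application and cloud service.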
Intermediation via the application delivery tier, residing in the data center to ensure the control demanded and required (as a strategic point of control), combined with context-awareness, offers the means by which organizations can meet head-on the security challenge of internal and external elasticity.

Identity Gone Wild! Cloud Edition
#IAM #cloud #infosec Identity lifecycle management is out of control in the cloud

Remember the Liberty Alliance? Microsoft Passport? How about the spate of employee provisioning vendors snatched up by big names like Oracle, IBM, and CA? That was nearly ten years ago. That's when everyone was talking about "Making ID Management Manageable" and leveraging automation to broker identity on the Internets. And now, thanks to the rapid adoption of SaaS – driven, so say analysts, by mobile and remote user connectivity – we're talking about it again.

"Approximately 48 percent of the respondents said remote/mobile user connectivity is driving the enterprises to deploy software as a service (SaaS). This is significant as there is a 92 percent increase over 2010." -- Enterprise SaaS Adoption Almost Doubles in 2011: Yankee Group Survey

So what's the problem? Same as it ever was, turns out. The lack of infrastructure integration available with SaaS models means double trouble: two sets of credentials to manage, synchronize, and track.

IDENTITY GONE WILD

Unlike Web 2.0 and its heavily OAuth-based federated identity model, enterprise-class SaaS lacks these capabilities. Users who use Salesforce.com for sales force automation or customer relationship management services have a separate set of credentials they use to access those services, giving rise to perhaps one of the few shared frustrations across IT and users – Yet Another Password. Worse, there's less control over the strength (and conversely the weakness) of those credentials, and there's no way to prevent a user from simply duplicating their corporate credentials in the cloud (a kind of manual single sign-on strategy users adopt to manage their lengthy identity lists). That's a potential attack vector, and one that IT is interested in cutting off sooner rather than later.

The lack of integration forces IT to adopt manual synchronization processes that lag behind reality. Synchronization of accounts often requires manual processes that extract, zip and share corporate identity with SaaS operations as a means to level access on a daily basis. Inefficient at best, dangerous at worst, this process can easily lead to orphaned accounts – even if only for a few weeks – that remain active for the end-user even as they've been removed from corporate identity stores.

"Orphan accounts refer to active accounts belonging to a user who is no longer involved with that organization. From a compliance standpoint, orphan accounts are a major concern since orphan accounts mean that ex-employees and former contractors or suppliers still have legitimate credentials and access to internal systems." -- TEST ACCOUNTS: ANOTHER COMPLIANCE RISK

What users – and IT – want is a more integrated system. For IT it's about control and management; for end-users it's about reducing the impact of credential management on their daily workflows and eliminating the need to remember so many darn passwords.

IDENTITY GOVERNANCE: CLOUD STYLE

From a technical perspective, what's necessary is a better method of integration that puts IT back in control of identity and, ultimately, access to corporate resources wherever they may be. It's less a federated governance model and more a hierarchical, trust-based governance model. Users still exist in both systems – corporate and cloud – but corporate systems act as a mediator between end-users and cloud resources to ensure timely authentication and authorization.
End-users get the benefit of a safer, single sign-on-like experience, and IT sleeps better at night knowing corporate passwords aren't being duplicated in systems over which it has no control and for which quantifying risk is difficult.

Much like the Liberty Alliance's federated model, end-users authenticate to corporate identity management services, and then a corporate identity bridging (or brokering) solution asserts to the cloud resource the rights and role of that user. The corporate system trusts the end-user by virtue of compliance with its own authentication standards (certificates, credentials, etc…) while the SaaS trusts the corporate system. The user still exists in both identity stores – corporate and cloud – but identity and access are managed by corporate IT, not cloud IT.

This problem, by the way, is not specific to SaaS. The nature of cloud is such that almost all models impose the need for a separate set of credentials in the cloud from that of corporate IT. This means an identity governance problem is being created every time a new cloud-based service is provisioned, which increases the risks and costs associated with managing those assets, as they often require manual processes to synchronize.

Identity bridging (or brokering) is one method of addressing these risks. By putting control over access back in the hands of corporate IT, much of the risk of orphan accounts is mitigated. Compliance with corporate credential policies (strength and length of passwords, for example) can be restored because authentication occurs in the data center rather than in the cloud. And perhaps most importantly, if corporate IT is properly set up, there is no lag between an account being disabled in the corporate identity store and access to cloud resources being denied. The account may still exist, but because access is governed by corporate IT, the risk is diminished to nearly nothing; the user cannot gain access to that resource without the permission of corporate IT, which is immediately denied. This is one of the reasons why identity and access management go hand in hand today. The distributed nature of cloud requires that IT be able to govern both identity and access, and a unified set of services enables IT to do just that.
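Because the orphan-account risk described above comes down to the gap between the corporate identity store and the accounts that exist in each SaaS application, a periodic reconciliation is one simple mitigation even where full bridging is not yet in place. The sketch below is illustrative only; the directory contents and SaaS user list are placeholders standing in for whatever export each system actually provides.

```python
# Hypothetical snapshot of the corporate identity store: user -> enabled?
CORPORATE_DIRECTORY = {
    "alice@example.com": True,
    "bob@example.com": False,     # left the company; account disabled on-premise
    "carol@example.com": True,
}

# Hypothetical list of accounts provisioned in one SaaS application.
SAAS_ACCOUNTS = ["alice@example.com", "bob@example.com", "dave@example.com"]

def find_orphans(directory, saas_accounts):
    """Return SaaS accounts whose owner is disabled or missing on-premise.

    These are the orphaned accounts: credentials that still work in the
    cloud even though the corporate identity behind them is gone.
    """
    return [acct for acct in saas_accounts if not directory.get(acct, False)]

if __name__ == "__main__":
    for acct in find_orphans(CORPORATE_DIRECTORY, SAAS_ACCOUNTS):
        print("orphaned SaaS account, should be disabled:", acct)
    # Flags bob@example.com (disabled on-premise) and dave@example.com (unknown).
```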
Complexity Drives Consolidation

The growing complexity of managing more users from more places using more devices will drive consolidation efforts – but maybe not in the way you think.

Pop quiz time. Given three sets of three items each, how many possible combinations are there when choosing only one from each set? Ready? Go. If you said "27" give yourself a cookie. If you said "too [bleep] many", give yourself two cookies, because you recognize that at some point the number of combinations is simply unmanageable, and it really doesn't matter, it's too many no matter how you count it.

This is not some random exercise, unfortunately, designed simply to flex your mathematical mental powers. It's a serious question based on the need to manage an increasing number of variables to ensure secure access to corporate resources. There are currently (at least) three sets of three items that must be considered:

User (employee, guest, contractor)
Device (laptop, tablet, phone)
Network (wired, wireless, mobile)

Now, if you're defining corporate policy based on these variables - and most organizations have, or would like to have, such a level of granularity in their access policies - this is going to grow unwieldy very quickly. These three sets of three quickly turn into 27 different policies. Initially this may not look so bad, until you realize that these 27 policies need to, at least in some part, be replicated across multiple solutions in the data center. There's the remote access solution (VPN), access management (to control access to specific resources and application services), and network control. Complicating deployment of such policies even further (if that were possible) is the possibility that multiple identity stores may be required, as well as the inclusion of mobile device management (MDM). On top of that, there may be a web application firewall (WAF) solution that might need user- or network-specific policies that tighten (or loosen) security based on any one of those variables. We've got not only the original 27 policies, but a variable number of configurations that must codify those policies across a variable number of solutions. That's not scalable; not from a management perspective, and certainly not from an operational perspective.

SCALING ACCESS MANAGEMENT

One solution lies in consolidation. Not necessarily through scaling up individual components as a means to reduce the solution footprint and thus scale back the operational impact, but by consolidating services into an operationally unified tier by taking advantage of a holistic platform approach to (remote) access management. The application delivery tier is an increasingly key tier within the data center for enabling strategic control and flexibility over application delivery. This includes (secure) remote access and resource access management. Consolidating access management and secure remote access onto a unified application delivery platform not only mitigates the problem of replicating partial policies across multiple solutions, but brings to bear the inherent scalability of the underlying platform, which is designed specifically to scale services – whether application, authentication, or access management. This means dependent services can scale on-demand along with the applications and resources they support.
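To put numbers on the combinatorial growth described at the start of this article, the short snippet below simply enumerates the user/device/network combinations; each one is potentially a distinct policy, and the count multiplies again for every solution that has to encode it. The factor of three in the second line is an assumption for illustration, standing in for the VPN, access management, and network control tiers mentioned above.

```python
from itertools import product

users = ["employee", "guest", "contractor"]
devices = ["laptop", "tablet", "phone"]
networks = ["wired", "wireless", "mobile"]

combinations = list(product(users, devices, networks))
print(len(combinations))       # 27 distinct policy combinations
print(len(combinations) * 3)   # 81 configurations if each is replicated across,
                               # say, three separate enforcement points
```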
A consolidated approach also adds value in its ability to preserve context across services, a key factor in effectively managing access in the volatile environment created by the introduction of the multiple devices and connection media leveraged by users today. It is almost always the case in a highly available deployment that the first component to respond to a user request will be the application delivery controller, as these are tasked with high-availability and load balancing duties. When that request is passed on to the application or an access management service, pieces of the contextual puzzle are necessarily lost because most protocols are not designed to carry such information forward. In cases where component-to-component integration is possible, this context can be maintained. But it is more often the case that such integration does not exist or, if it does, is not put to use. Thus context is lost, and decisions made downstream of the application delivery controller are made based on increasingly fewer variables, many of which are necessary to enforce corporate access policies today.

By consolidating these services at the application delivery tier, context is preserved and leveraged, providing not only more complete policy enforcement but simpler policy deployment. This is why it is imperative for application delivery systems to support not just specific applications or protocols, but all applications and protocols. It is also the driving reason why support for heterogeneous virtualization and VDI platforms is so important; consolidation cannot occur if X-specific delivery solutions are required.

As the number of devices, users, and network media continues to expand, it will put more pressure on all aspects of IT operations. That pressure can be alleviated by consolidating disparate but intimately related services into a unified application delivery tier and applying a more holistic, contextually aware solution that is not only ultimately more manageable and flexible, but more scalable as well.
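One common way an application delivery tier preserves context that the protocol would otherwise drop is to inject it into the forwarded request, for example as an X-Forwarded-For header carrying the original client IP, so that downstream access services can still see it. X-Forwarded-For is a widely used convention; the device-class header and the surrounding logic in this sketch are illustrative rather than any particular product's behavior.

```python
def forward_with_context(request_headers: dict, client_ip: str, device_class: str) -> dict:
    """Return the headers to send upstream, with client context preserved.

    X-Forwarded-For is the conventional header for the original client IP;
    the device-class header name used here is made up for the example.
    """
    headers = dict(request_headers)
    prior = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    headers["X-Client-Device-Class"] = device_class   # hypothetical header
    return headers

if __name__ == "__main__":
    upstream = forward_with_context(
        {"Host": "app.example.com", "User-Agent": "Mozilla/5.0 (iPhone)"},
        client_ip="203.0.113.7",
        device_class="mobile",
    )
    for name, value in upstream.items():
        print(f"{name}: {value}")
```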