Authorization is the New Black for Infosec

Authentication is not enough. Authorization is a must for all integrated services – whether infrastructure components, applications, or management frameworks.

If you’ve gone through the process of allowing an application access to Twitter or Facebook then you’ve probably seen OAuth in action. Last week a mini-storm was brewing over such implementations, primarily regarding the “overly-broad permission structure” implemented by Twitter.

Currently Twitter application developers are given two choices when registering their apps – they can either request “read-only” access or “read & write” access. For Twitter, “read & write” means being able to do anything through the API on a user’s behalf.

Twitter’s overly-broad permission structure amplifies the concern around OAuth token security because of what those tokens allow apps to do.

-- Twitter Permissions & Security

Reading that blog post and the referenced articles led to one of those “aha” moments when you realize something is a much larger problem than it first appears. After all, many folks as a general rule do not allow any external applications access to their Facebook or Twitter accounts, so they aren’t concerned by such broad permission structures. But if you step back and think about the problem in more generalized terms, in terms of how we integrate applications and (hopefully) infrastructure to enable IT as a Service, you might start seeing that we have a problem, Houston, and we need to address it sooner rather than later.

Interestingly enough, as I was getting ready to get my thoughts down on this subject I was also keeping track of Joe Weinman and his awesome stream of quotes coming out of sessions he attended at the Enterprise Cloud Summit held at Interop NY. He did not disappoint: suddenly he started tweeting quotes from a talk entitled “Infrastructure and Platforms: A Combined Strategy,” given by Nimbula co-founder and vice president of products Willem van Biljon.

The sentiment exactly mirrored my thoughts on the subject: traditional permissions are “no longer enough” and are highly dependent “on the object being acted upon” as well as the actor. It’s contextual, requiring that the strategic point of control enforcing and applying policies does so at a granular level and in the context of the actor, the object acted upon (the API call, anyone?) and even location. While it may be permissible for an admin to delete an object whilst doing so from within the recognized organization management network, it may not be permissible for the same action to be carried out from an iPad located in Paris. Because that may be indicative of a breach, especially when you don’t have an office in Paris.
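
To make that concrete, here’s a minimal sketch in Python of what such a context-aware check might look like. The management network, identities, and call names are invented for illustration; a real strategic point of control would pull this context from the authenticated session and from the API request itself.

```python
# A minimal sketch of a context-aware authorization decision. All names here
# (MANAGEMENT_NET, Request, authorize) are hypothetical, not any product's API.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

MANAGEMENT_NET = ip_network("10.20.0.0/16")  # assumed recognized management network

@dataclass
class Request:
    actor: str       # authenticated identity, e.g. "jbob"
    role: str        # e.g. "network-admin"
    action: str      # the API call being made, e.g. "delete_object"
    resource: str    # the object acted upon, e.g. "pool/app-servers"
    source_ip: str   # where the call originates

def authorize(req: Request) -> bool:
    """Allow the call only when actor, action, object, and location all line up."""
    if req.role != "network-admin":
        return False
    # Destructive calls are permitted only from the management network; the same
    # admin issuing the same call from an iPad in Paris is refused.
    if req.action.startswith("delete") and ip_address(req.source_ip) not in MANAGEMENT_NET:
        return False
    return True

print(authorize(Request("jbob", "network-admin", "delete_object",
                        "pool/app-servers", "10.20.1.5")))    # True: inside management net
print(authorize(Request("jbob", "network-admin", "delete_object",
                        "pool/app-servers", "203.0.113.7")))  # False: external source
```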

More disturbing, if you think about it, is the nature of infrastructure integration via an API, a la Infrastructure 2.0. Like Twitter’s implementation and many integration solutions that were designed with internal-to-the-organization only use cases in mind, one need only authenticate to the component in order to gain complete access to all functions possible through the API.

The problem, of course, is that while you may want to grant User/Application A access to everything, you might not want to allow User/Application B access to the entire system. Application delivery infrastructure solutions, for example, have a very granular set of API calls that allow third-party applications to control everything. Once authenticated, a user/application can as easily modify the core networking configuration as they can read the status of a server.

There is no real authorization scheme, only authentication. That’s obviously a problem.

THE FIVE Ws of CONTEXT-AWARE AUTHORIZATION

What’s needed is the ability to “tighten” security around an API, because at the core of most management consoles is the concept of integration with the components they manage via an API. Organizations need to be able to apply granular authorization – access to specific API calls/methods based on user or, at the very least, role – so that only those who should be able to tweak the network configuration can do so. This is not only true for infrastructure; it’s especially important for applications with APIs whose very purpose is application integration across the Internet.

Merely obtaining credentials and authenticating them is not enough. Yes, that’s Joe Bob from the network team, that’s nice. What we need to know is what he’s allowed to do and to what, and perhaps from where, and even when, and perhaps how. We need to be able to map out these variables in a way that allows organizations to be as restrictive – or open – as need be to comply with organizational security policies and any applicable regulations. Perhaps Joe Bob can modify the network configuration of component A but only from the web management console and only during specified maintenance windows, say Saturday night. Most organizations would be fine with less detail – Joe Bob can modify the network from any internal device/client at any time, but Mary Jo can only read status information and health check data, and she can do so from any location.
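
Here’s a rough sketch of how those who/what/where/when variables might be mapped into rules for the Joe Bob and Mary Jo examples. The rule format, call names, channels, and maintenance window are assumptions made purely for illustration, not any particular product’s policy language.

```python
# A rough sketch of who/what/where/when rules. The rule format, call names,
# channels, and maintenance window below are made up for illustration.
from datetime import datetime

RULES = [
    # Joe Bob: modify network config, web console only, Saturday 22:00-24:00
    {"who": "joe.bob", "calls": {"modify_network_config"},
     "channels": {"web_console"}, "window": ("Sat", 22, 24)},
    # Mary Jo: read-only status and health data, from any location, any time
    {"who": "mary.jo", "calls": {"read_status", "read_health_checks"},
     "channels": {"any"}, "window": None},
]

def in_window(window, now: datetime) -> bool:
    if window is None:
        return True                                  # no time restriction
    day, start_hour, end_hour = window
    return now.strftime("%a") == day and start_hour <= now.hour < end_hour

def permitted(who: str, call: str, channel: str, now: datetime) -> bool:
    return any(
        rule["who"] == who
        and call in rule["calls"]
        and (channel in rule["channels"] or "any" in rule["channels"])
        and in_window(rule["window"], now)
        for rule in RULES
    )

saturday_night = datetime(2010, 10, 23, 22, 30)  # a Saturday evening
print(permitted("joe.bob", "modify_network_config", "web_console", saturday_night))  # True
print(permitted("joe.bob", "modify_network_config", "ssh", saturday_night))          # False
print(permitted("mary.jo", "read_status", "ssh", saturday_night))                    # True
```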

Whether we’re talking fine-grained or coarse-grained permissions at the API call level is less the issue than simply having some form of authorization scheme beyond system-wide “read/write/execute” permissions, which is where we are today with most infrastructure components and most Web 2.0 sites. We need to drill down into the APIs and start examining authorization on a per-call basis (or, at a minimum, a per-group-of-calls basis).
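
One way to picture per-call (or per-group-of-calls) authorization is to map roles to named groups of API calls rather than granting access to the system as a whole. The groups, calls, and roles in this sketch are invented to show the shape of such a scheme.

```python
# A sketch of authorization at the per-call / per-group-of-calls level instead of
# a single system-wide read/write/execute bit. Group, call, and role names are invented.
CALL_GROUPS = {
    "status.read":    {"get_pool_status", "get_member_health", "get_stats"},
    "config.network": {"create_vlan", "modify_self_ip", "delete_route"},
}

ROLE_GRANTS = {
    "operator":      {"status.read"},
    "network-admin": {"status.read", "config.network"},
}

def call_allowed(role: str, api_call: str) -> bool:
    """An API call is allowed only if one of the role's granted groups contains it."""
    return any(api_call in CALL_GROUPS[group] for group in ROLE_GRANTS.get(role, set()))

print(call_allowed("network-admin", "delete_route"))  # True
print(call_allowed("operator", "delete_route"))       # False - read-only role
print(call_allowed("operator", "get_stats"))          # True
```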

And in the case of cloud computing and multi-tenant architectures, we further need to recognize that it’s not just the API layer that needs authorization but also the “tenant”. After all, we don’t want Joe Bob from Tenant A messing with the configuration for Tenant B, unless of course Joe Bob is part of the operations team for the provider and needs that broad level of access.
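
A tenant-aware check would layer on top of the per-call decision; again, the tenant names and the provider-ops flag in this sketch are hypothetical.

```python
# A minimal sketch of adding tenant scope to the authorization decision in a
# multi-tenant environment. Tenant names and the provider-ops flag are hypothetical.
def tenant_allowed(actor_tenant: str, is_provider_ops: bool, resource_tenant: str) -> bool:
    """Joe Bob from Tenant A must not touch Tenant B's configuration
    unless he is on the provider's operations team."""
    return is_provider_ops or actor_tenant == resource_tenant

print(tenant_allowed("tenant-a", False, "tenant-a"))  # True
print(tenant_allowed("tenant-a", False, "tenant-b"))  # False
print(tenant_allowed("provider", True, "tenant-b"))   # True - provider ops span tenants
```

In practice that tenant check would be combined with the per-call and context checks above, so every API request is evaluated against who is asking, what they’re asking to do, from where, when, and on whose behalf.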

AN INFOSEC STRETCH GOAL

The goal of providing a completely automated and authorization-enabled data center is certainly a “stretch” goal, and one that will likely remain a “work in progress” for the foreseeable future. Many, many components in data centers today are not API-enabled, and there’s no guarantee that they will be any time in the future. Those that are currently so enabled are not necessarily multi-tenant, and there’s no guarantee that they will be any time in the future. And many organizations simply do not have the skills required to perform the integration work necessary to build out a collaborative, dynamic infrastructure. And there’s no guarantee that they will any time in the future. But we need to start planning and viewing security with an eye toward authorization as the goal, recognizing that the authentication schemes of the past were great when access was limited to an operator or admin working from a local console or machine. The highly distributed nature of new data center architectures and the increasing interest in IT as a Service make it necessary to consider not just who can manage infrastructure but what they can manage and from where.

The challenges to addressing the problem of authorization and security are many but not insurmountable. The first step is to recognize that it is necessary and develop a strategic architectural plan for moving forward. Then infosec professionals can assist in developing the incremental tactics necessary to implement such a plan. Just as a truly dynamic data center takes time and will almost certainly evolve through stages, so too will implementing an authorization-focused identity management strategy for both applications and the infrastructure required to deliver them.

Published Oct 20, 2010
Version 1.0

2 Comments

  • Juan,

    Can you expand APM? Do you mean F5 BIG-IP Access Policy Manager or some other solution?

    And by F5 do you mean APM or ASM (Application Security Manager) or just plain old BIG-IP LTM? And are you wondering about authorization for F5 devices or for web applications? The latter we can do in a number of ways (applying context-aware policies to URIs can be accomplished by APM, ASM, and LTM + iRules), and for internal resources, FirePass can play along.

    For the former we'd use the same tools (ASM, APM, or LTM + iRules) to constrain access to iControl by examining the SOAP envelope and headers to find out what API call is being made and then applying the proper authorization policies.

    Lori