on 27-Oct-2011 13:57
There are many things in the history of high technology that are downright conundrums. One of the obvious ones is: given the formats and media currently used to distribute text, music, and video, how do we protect the rights of both legal users and the creators of content? Of course we want people to be able to make a living creating content, which does imply it is not given away at the whim of anyone with a copy, but we also (at least in most modern countries) want to protect the rights of people who have purchased (oh fine, licensed if you prefer) the software/book/music/movie to use their purchase/license freely. There isn’t an easy answer to this problem, because people disagree on the nature of the problem, and existing technology doesn’t support a reasonably sound mechanism for determining on the fly whether a given use is legal.
We suffer a similar conundrum in InfoSec. We need to prevent unauthorized access to an application while not unduly inhibiting authorized access. Part of the problem lies in the definition of “unauthorized”, which changes with every given application and is wildly different between any two points on the Internet. For some government websites, for example, “unauthorized” is, well, nearly everyone in the world. For other government websites, “unauthorized” is either a tiny subset of the world population, or only those who are set on disrupting the website’s normal function. The rest of the problem lies with the definition of “unduly inhibiting”. For most websites, “unduly inhibiting” is anything beyond a simple login. Indeed, for most websites out there, even logging in is delayed as long as possible. You can fill a cart in most web stores and only log in when going to check out, for example. But for some websites, again going to the government for examples (though there are plenty in the commercial world also), a physical security token with a login and a verification of ID are not “unduly inhibiting”, because the nature of the information to be found on the site is that sensitive.
We have traditionally protected our networks with firewalls, placing these stalwart protectors between the application and the world, with rules that limit who can even get to the application. But firewalls were never a perfect solution. Logging in, for example, is not a function a firewall performs for a given application; the application must handle that process itself. For known vulnerabilities, firewalls with advanced features are able to protect your application in the same manner that all others are protected. They do a stand-up job of keeping malcontents at bay, in the generic 90% sense of “stand-up job”. But even modern “Application Layer” firewalls are not “Application Aware”. When they say “Application Layer”, they mean the network stack, standards like TCP and HTTP, not the needs of your actual application.
But every application has its oddities. Be it the login process or the networks you want to allow connectivity from, be it protecting sensitive data from traversing the Internet unencrypted, or protecting a given field on a web page from attacks you know it is vulnerable to. And firewalls aren’t really good at most of these issues when they are issues for only your one application. Indeed, since firewalls are centralized to make management easier, most firewall products become unruly if you do use them to protect against the application-specific things that you know are out there.
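To make that last point concrete, here is a minimal sketch, in Python, of the kind of application-specific check a generic firewall simply can’t know to make. The field name and format below are hypothetical, not from any particular product: the point is that only someone who knows *this* application knows that this one field has a strict expected format, and anything outside it can be rejected outright.

```python
import re

# Hypothetical example: suppose we know that our application's "account_id"
# field is the one input attackers keep probing with injection attempts.
# A generic, application-unaware firewall can't know that; a rule written
# for this specific application can pin the field to its exact format.
ACCOUNT_ID_PATTERN = re.compile(r"^[A-Z]{2}-\d{6}$")  # e.g. "US-123456"

def is_valid_account_id(value: str) -> bool:
    """Accept only values matching the exact format this app expects."""
    return ACCOUNT_ID_PATTERN.fullmatch(value) is not None
```

A check like this lives in a WAF rule (or in the application itself), not in a network firewall, precisely because it encodes knowledge about one field of one application.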
A wonderful thing about history, though, is that we write it forward. The future is coming, and we have the opportunity to make up for the shortcomings of traditional firewalls. We can “fill the gaps”, so to speak, with Web Application Firewalls. These tools are designed to protect your application, and your application specifically, from attacks more targeted than a firewall would normally prevent. Utilizing application profiles (or templates, or whatever your vendor of choice calls them), you can start with generic settings for applications in the same category as yours, or in many purchased-product cases, for the specific product. MS Exchange has enough organizations utilizing OWA, for example, that most web application firewalls offer a canned OWA solution that you can then customize. Protection specific to the application is a far sight better than the generic protections you’ve gotten in the past. Many places have put Web Application Firewalls into place, but aren’t really using them for anything other than to check the box that says “requires a web application firewall” in a standard or regulation. That’s not making efficient use of the tools at hand.
And there’s another reason that focusing security on applications is going to be important in the future. One you really do need to think about if you aren’t already pursuing this approach. Due to the nature of cloud computing, as I’ve mentioned before, your firewall, and all of the rules you’ve built over years of experience, are not going to run in a cloud. At least not right now. Some cloud vendors offer firewall services, but if you have a product like F5’s Application Security Manager (ASM), you can use the same rules in the physical version inside your datacenter and in the virtual edition running in a cloud environment. That’s a big bonus, as it allows you to copy your existing configuration and, with minimal changes to reflect the change in infrastructure, apply it to the virtual edition in the cloud. Your application receives the same exact protection, regardless of where future needs direct you to deploy it.
At least with InfoSec, we are making progress toward solving the problems. Now if only we could do so in the piracy space. Perhaps one day, we’ll have a way, and all agree on what is reasonable. Or at least most of us. Billions of people are highly unlikely to all agree.