The BIG-IP Application Security Manager Part 9: Username and Session Awareness Tracking
This is the penultimate article in a 10-part series on the BIG-IP Application Security Manager (ASM). The first eight articles in this series are:

1. What is the BIG-IP ASM?
2. Policy Building
3. The Importance of File Types, Parameters, and URLs
4. Attack Signatures
5. XML Security
6. IP Address Intelligence and Whitelisting
7. Geolocation
8. Data Guard

Let's say you are building an awesome web application. You want to have as many visitors as possible, but you want to keep your application safe from malicious activity. The BIG-IP ASM can help secure your application by blocking harmful behavior before it ever gets to your app, but it can also help you track down and block the bad guys if they somehow find a way in. This article focuses on the different ways you can track users by session or by username. Using these tracking capabilities, you can find a bad guy and then log his activity or block it altogether.

Session Tracking

A session begins when a user accesses the web application and ends when the user exits the application (or exceeds an established timeout length). When a user accesses a web application, the ASM generates a session ID number for that specific session. You can track a specific session by enabling "Session Awareness" on the ASM. Navigate to Security >> Application Security >> Sessions and Logins >> Session Tracking and you will see the screen shown below. Simply check the "Enabled" box for Session Awareness (and Save and Apply Policy) and the ASM will give you the option to take certain actions on any specific session you choose. So, if you are looking through your ASM event logs and notice that a particular user is doing some bad things, you can start logging all activity for that user session or you can block the session completely.

I used my trusty Hack-it-yourself auction site to do some testing with this feature. I enabled Session Awareness and then accessed the auction site. Then, I reviewed the ASM event logs (Security >> Event Logs >> Application >> Requests) to see the details of the activity for my user session. Notice in the screenshot below that the Request Details show the Session ID as well as a link to Session Tracking Details (it also gives you a link to the APM session details...pretty cool). When you click on the Session Tracking Details link, you are given the option to Log All Requests, Delay Blocking, and/or Block All. I selected Log All Requests and Block All, and then I tried to log back into the auction site. Guess what? Totally got blocked...and it all got logged in the ASM event logs. So, using this blocking feature, I can easily stop a user from accessing the web application simply by blocking the user session.

Now, here's a little more of my story...I accessed the auction site from my Firefox browser and, like I said before, I got blocked. I switched over to Internet Explorer and accessed the site without any problem. Makes sense, though, because I started a totally new session with the other browser. I just wanted to make sure you keep that in mind as you determine the best way to track and stop the bad guys who access your application.
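If you want a mental model of what per-session tracking and blocking looks like, here is a minimal sketch. This is not how the ASM implements Session Awareness internally; it is just an illustration, and the cookie name and session IDs are made up. A tiny WSGI middleware keys off a session cookie and applies a "log all" or "block all" action to specific session IDs.

```python
# Illustrative sketch only -- NOT the ASM's implementation.
# A toy WSGI middleware that applies per-session actions (log all / block all),
# keyed on a hypothetical session cookie named "SESSIONID".
from http.cookies import SimpleCookie

# Actions an operator has assigned to specific session IDs
TRACKED_SESSIONS = {
    "a1b2c3d4": {"log_all": True, "block_all": True},
}

class SessionAwareness:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
        session_id = cookie["SESSIONID"].value if "SESSIONID" in cookie else None
        action = TRACKED_SESSIONS.get(session_id, {})

        if action.get("log_all"):
            # Log every request made within the tracked session
            print(f"[session {session_id}] {environ['REQUEST_METHOD']} {environ['PATH_INFO']}")

        if action.get("block_all"):
            # Block the session outright, no matter what it asks for
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"The requested URL was rejected."]

        return self.app(environ, start_response)
```

Wrap any WSGI app with SessionAwareness(app) and every request carrying a tracked session cookie is logged and/or blocked, which mirrors the behavior described above: block the session, and a new session from a different browser sails right through.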
Username Tracking

Let's talk about your awesome web app for a second. When you created said web app, you no doubt built a login page and established URLs and parameters so that users could authenticate and gain access. By adding these URLs and parameters into your security policy, the ASM can track users by username via the login URL you establish.

In order to configure the appropriate login and access parameters, navigate to Security >> Application Security >> Sessions and Logins >> Login Pages List and create a new login page entry. For the Login URL, select either Explicit or Wildcard, select either HTTP or HTTPS based on the type of traffic the application accepts, and then type in the path of the login page. Next, you can select the Authentication Type (I chose HTML Form based on the structure of the Hack-it-yourself auction site). Next, type in the parameter names for the application's Username and Password parameters (these are case sensitive). When the ASM detects these parameters used together, it knows that a login attempt is taking place. Finally, you need to configure Access Validation criteria that must be satisfied in order for the ASM to allow the user to access the authenticated URL. It's important to note that at least one of these criteria must be selected. If multiple criteria are selected, they must all be satisfied in order for the ASM to allow access to the authenticated URL. The screenshot below shows the settings for my Hack-it-yourself auction site:

After the Login Page Properties are set, you can go back to the Session Tracking settings and add the Login Page to the Application Username settings (see the screenshot below). This tells the ASM to start tracking the username from the login URL you listed.

The Test

I accessed the auction site and logged in with my username (jwagnon) and password. See the screenshot below. Based on all the ASM settings I configured earlier, the ASM should have recognized this login and given me the option of tracking this specific user. Sure enough, the ASM Event Logs show the login action (notice the log entry for the Requested URL of /login.php in the screenshot below). I clicked on this specific log entry, and the request details show that the username "jwagnon" logged in to the site. Much like the session tracking options, the ASM gives you the option to Log All Requests, Delay Blocking, and/or Block All activity from this specific user. I chose to Log All Requests and Block All activity from this username. Then, I attempted to log back into the auction site using the "jwagnon" username and password, and the ASM blocked the request.

Remember how I switched from Firefox to Internet Explorer on the session awareness tracking and was able to access the site because of the new session I started? Well, I tried that same thing with the username tracking, but the ASM blocked it. That's because I used the same username to access the application from both browsers.

I also wanted to show the actual HTTP request from this login action. I clicked on the HTTP Request tab for the /login.php event. The screenshot below shows that the request includes the parameters "username" and "password". These are the parameter names that were identified in the Login Page Properties we saw earlier (which is why the parameter names are case sensitive). The ASM recognized these two parameters in the HTTP request for the /login.php URL, so it gave the option for tracking the username.
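To illustrate the general idea of spotting a login attempt by its parameters (again, just a sketch, not the ASM's internals), the snippet below watches POSTs to a login URL for the username and password parameters appearing together and binds the username to the session, so later actions can be applied per user rather than per session. The URL and parameter names mirror the hypothetical auction-site settings above.

```python
# Illustrative sketch only -- not the ASM's implementation.
# Detect a login attempt by watching for the configured username/password
# parameters appearing together in a POST to the login URL.
from urllib.parse import parse_qs

LOGIN_URL = "/login.php"          # configured login page
USERNAME_PARAM = "username"       # case sensitive, as configured
PASSWORD_PARAM = "password"       # case sensitive, as configured

# username -> actions an operator has assigned (log all / block all)
TRACKED_USERS = {"jwagnon": {"block_all": True}}

# session id -> username learned from the most recent login attempt
session_to_user = {}

def inspect_request(method, path, body, session_id):
    """Return 'block' or 'allow' for this request."""
    if method == "POST" and path == LOGIN_URL:
        params = parse_qs(body)
        if USERNAME_PARAM in params and PASSWORD_PARAM in params:
            # Both parameters present together: this is a login attempt.
            session_to_user[session_id] = params[USERNAME_PARAM][0]

    user = session_to_user.get(session_id)
    if user and TRACKED_USERS.get(user, {}).get("block_all"):
        return "block"   # blocked by username, regardless of browser or session
    return "allow"

# Even a brand new session gets blocked once the tracked username logs in.
print(inspect_request("POST", "/login.php", "username=jwagnon&password=x", "sess-42"))  # block
```

This is why switching browsers didn't help in the test above: the tracking key is the username learned from the login request, not the session the browser happens to be using.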
Session Tracking Status

The last thing I'll mention about all this tracking goodness is that the ASM gives you the option of viewing all the session and/or username tracking details from one screen. Navigate to Security >> Reporting >> Applications >> Session Tracking Status to see all the details regarding the sessions and/or usernames you have taken action on.

Notice in the screenshot below that the details for Block All and Log All Requests for both session and username are listed (it even shows the value of the session and username). You can click on the "View Requests" hyperlink on the right side of the page and the ASM will display all the requests that satisfy that specific criterion. Pretty cool and handy stuff!

Well, I hope this article has helped in your understanding of yet another fantastic capability of the BIG-IP ASM. Here's to tracking down the bad guys and shutting down their nefarious behavior!

Update: Now that the article series is complete, I wanted to share the links to each article. If I add any more in the future, I'll update this list.

1. What is the BIG-IP ASM?
2. Policy Building
3. The Importance of File Types, Parameters, and URLs
4. Attack Signatures
5. XML Security
6. IP Address Intelligence and Whitelisting
7. Geolocation
8. Data Guard
9. Username and Session Awareness Tracking
10. Event Logging

Why you still need layer 7 persistence
Tony Bourke of the Load Balancing Digest points out that mega proxies are largely dead. Very true. He then wonders whether layer 7 persistence is really all that important today, as it was largely implemented to solve the problems associated with mega-proxies - that is, large numbers of users coming from the same IP address:

"Layer 7 persistence is still applicable to situations where you may have multiple users coming from a single IP address (such as a small client base coming from a handful of offices, with each office using one public IP address), but I wonder what doing Layer 4 persistence would do to a major site these days. I'm thinking, not much."

I'm going to say that layer 4 persistence would likely break a major site today. Layer 7 persistence is even more relevant today than it has been in the past for one very good reason: session coherence. Session coherence may not have the performance and availability ramifications of the mega-proxy problem, but it is essential to ensure that applications in a load-balanced environment work correctly.

SESSION COHERENCE

Layer 7 persistence is still heavily used in applications that are session sensitive. The most common example is shopping carts stored in the application server session, but it is also increasingly important to Web 2.0 and interactive applications where state is important. Sessions are used to store that state, and therefore layer 7 persistence becomes important to maintaining that state in a load-balanced environment. It's common to see layer 7 persistence driven by JSESSIONID or PHPSESSID values today; it's a question we see in the forums here on DevCentral quite often. Many applications are rolled out, then inserted into a load-balanced environment, and subsequently break because sessions aren't shared across web application servers and the client isn't always routed to the same server, or the client comes back "later" (after the connections have timed out) and expects the application to continue where it left off. If clients aren't load balanced back to the same server, the session data isn't accessible and the application breaks.

Application server sessions generally persist for hours, as opposed to the minutes or seconds allowed for a TCP connection. Layer 4 (TCP) persistence can't adequately address this problem. Source port and IP address aren't always enough to ensure routing to the correct server because the mapping doesn't persist once the connection is closed, and multiple requests coming from the same browser use multiple connections now, each with a different source port. That means two requests on the same page may not be load balanced to the same server, even though they both may require access to the application session data. These sites and applications are used for hours, often with long periods of time between requests, which means connections have often long since timed out. Could layer 4 persistence work? Probably, but only if the time-out on these connections were set unreasonably high, which would consume a lot more resources on the load balancer and reduce its capacity significantly.
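To make the contrast concrete, here is a minimal sketch of layer 7 persistence keyed on the application's session cookie (server addresses and cookie details are illustrative, and real load balancers do considerably more): the same session ID maps to the same server across any number of TCP connections, which source IP/port-based layer 4 persistence cannot guarantee.

```python
# Minimal sketch of cookie-keyed (layer 7) persistence -- illustrative only.
import re

SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
persistence_table = {}   # session id -> chosen server
rr = 0                   # simple round-robin counter for new sessions

def pick_server(http_headers):
    """Choose a back-end server based on the JSESSIONID in the Cookie header."""
    global rr
    cookie = http_headers.get("Cookie", "")
    match = re.search(r"JSESSIONID=([^;]+)", cookie)
    if not match:
        rr += 1
        return SERVERS[rr % len(SERVERS)]       # no session yet: plain load balancing

    session_id = match.group(1)
    if session_id not in persistence_table:
        rr += 1
        persistence_table[session_id] = SERVERS[rr % len(SERVERS)]
    return persistence_table[session_id]

# Two requests from the same session, on different TCP connections and different
# source ports, still land on the same server.
hdrs = {"Cookie": "JSESSIONID=ABC123"}
assert pick_server(hdrs) == pick_server(hdrs)
```

Because the persistence key lives in the HTTP payload rather than the TCP 4-tuple, it survives connection teardown and long idle periods between requests.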
And let's not forget SaaS (Software as a Service) sites like salesforce.com, where rather than mega-proxy issues cropping up we'd have lots-of-little-proxy issues cropping up as businesses still (thanks to IPv4 and the need to monitor Internet use) employ forward proxies. And SSL, too, is highly dependent upon header data to ensure persistence today.

I agree with Tony's assessment that the mega proxy problem is largely a non-issue today, but session coherence is taking its place as one of the best reasons to implement layer 7 persistence over layer 4 persistence.

Cloud Computing: Is your cloud sticky? It should be.
Load balancing an application should, by now, be a fairly routine scaling exercise. But too often, when an application is moved into a load balanced architecture, it breaks. The reason? Application sessions are often specific to an application server instance. The solution? Persistence, also known as sticky connections.

The use of sessions on application servers to add state to web (HTTP) applications is a common practice. In fact, it's one of the greatest "hacks" in the history of the web. It's an excellent solution to the problem of using a stateless application protocol to build applications for which state is important. But sessions are peculiar to the application server instance on which they were created, and in general are not shared across multiple instances unless you've specifically architected the application infrastructure to do so. Inserting applications into a load balanced environment often ignores this requirement, as load balancing decisions are often made based on server and application load and not on application-specific parameters.

THE PROBLEM

When a client connects the first time, it's directed to Server A and a session is created. Data specific to the application that needs to be persisted over the session of the application, like shopping carts or search parameters, is stored in that session. When the client makes subsequent requests, however, the load balancer doesn't automatically understand this, and may direct the client to Server B, effectively losing the data in the session and hence breaking the application.

This affects cloud computing initiatives because an integral part of cloud computing is load balancing to provide horizontal scalability, and an integral part of applications is session management. Deploying an application that works properly in a test environment into the cloud may break that application because the load balancing solution utilized by the cloud computing provider isn't aware of the importance of that session to the application and, too often, the session variables are unique to the application and the provider's static network infrastructure isn't capable of adapting to those unique needs.

THE SOLUTION

When choosing a cloud provider, it is imperative that the load balancing solution used by the provider support sticky connections (persistence) so that developers don't have to rewrite or re-architect applications for cloud deployment. "Sticky connections", or persistence in the networking vernacular, ensure that client requests are load balanced to the appropriate server on which their application session resides. This maintains the availability of the session and means applications don't "break" when deployed into the cloud or inserted into a load balanced environment.

Sticky (persistent) connections are often implemented in load balancers (application delivery controllers) by using application server session IDs to help make the decision on where requests should be sent. This results in consistent behavior and a well-behaved application. The most common method of implementing sticky (persistent) connections is the use of cookie persistence. Cookie persistence inserts a cookie into the response that is later used to properly direct requests. The cookie often contains the JSESSIONID or PHPSESSID or ASPSESSIONID, but can actually contain any data that would allow the load balancer (application delivery controller) to identify the right application server for any given request.
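Here is a rough sketch of cookie-insert persistence (illustrative only; the cookie name is made up, and real application delivery controllers typically encode and protect this value): on the first response the load balancer inserts its own cookie naming the chosen server, and on subsequent requests it simply honors that cookie.

```python
# Illustrative sketch of cookie-insert persistence -- not any vendor's implementation.
import random

SERVERS = {"srv-a": "10.0.0.11", "srv-b": "10.0.0.12"}
PERSIST_COOKIE = "LB_PERSIST"    # hypothetical cookie name chosen for this sketch

def choose_server(request_cookies):
    """Honor an existing persistence cookie, otherwise pick a server."""
    server_id = request_cookies.get(PERSIST_COOKIE)
    if server_id in SERVERS:
        return server_id, False                  # existing session: keep it sticky
    return random.choice(list(SERVERS)), True    # new client: pick one and mark it

def add_persistence_cookie(response_headers, server_id):
    """Insert the persistence cookie into the response on the way back out."""
    response_headers.append(("Set-Cookie", f"{PERSIST_COOKIE}={server_id}; Path=/; HttpOnly"))

# First request: no cookie, a server is chosen and the cookie is inserted.
server, is_new = choose_server({})
headers = []
if is_new:
    add_persistence_cookie(headers, server)

# Subsequent requests: the browser sends the cookie back, so they stay on the same server.
assert choose_server({PERSIST_COOKIE: server}) == (server, False)
```

The appeal of this approach is that the application doesn't have to change at all; the persistence data rides along in a cookie the load balancer owns.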
ACTION ITEMS

If you are deploying applications into the cloud, or planning on doing so, and those applications are session sensitive, it is important that you determine before you deploy into the cloud whether or not your provider's solution supports sticky (persistent) connections, and how that persistence is implemented. Some solutions are not very flexible and will only provide persistence based on a limited array of variables, and you will need to instrument your application to support the proper variables. Other solutions provide network-side scripting capabilities that will allow the provider - and you - to support persistence on any variable you deem unique enough to identify the proper application server instance.

Note that server FQDN (fully qualified domain name) or IP address will likely not be adequate in a cloud computing environment, as these variables may not be guaranteed to be static. The unique variable should ostensibly be something application specific, as these are likely the only variables you will have control over in the cloud and the only variables guaranteed to be consistent across instances as they are brought on and off-line in response to changes in demand.

If you're currently examining the possibility of deploying applications in the cloud, make sure the cloud you choose is sticky to avoid the possibility that you'll need to rearchitect your application to get it to work properly.
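One practical way to act on the first action item above is to probe a candidate environment before committing to it. The sketch below is hypothetical: the URL, the session cookie value, and the "X-Instance-Id" response header are placeholders, so substitute whatever your own application exposes to identify which instance served a request. It sends repeated requests with the same session cookie and checks whether they keep landing on the same backend instance.

```python
# Hypothetical stickiness probe -- the URL and the "X-Instance-Id" header are
# placeholders; expose and substitute whatever identifies an instance in your app.
import urllib.request

URL = "https://app.example.com/whoami"
SESSION_COOKIE = "JSESSIONID=test-session-123"

def which_instance():
    req = urllib.request.Request(URL, headers={"Cookie": SESSION_COOKIE})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("X-Instance-Id")

def is_sticky(samples=20):
    # If every request with the same session cookie hits the same instance,
    # the environment is honoring persistence for that variable.
    instances = {which_instance() for _ in range(samples)}
    return len(instances) == 1

if __name__ == "__main__":
    print("sticky" if is_sticky() else "NOT sticky -- sessions will break")
```

Running a check like this against a test deployment answers the question empirically, before users discover the answer for you.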
4 Reasons We Must Redefine Web Application Security

Mike Fratto loves to tweak my nose about web application security. He's been doing it for years, so it's (d)evolved to a pretty standard set of arguments. But after he tweaked the debate again in a tweet, I got to thinking that part of the problem is the definition of web application security itself.

Web application security is almost always about the application (I know, duh! but bear with me) and therefore about the developer and secure coding. Most of the programmatic errors that lead to vulnerabilities and subsequent exploitation can be traced to a lack of secure coding practices, particularly around the validation of user input (which should never, ever be trusted). Whether it's XSS (Cross Site Scripting) or SQL injection, the root of the problem is that malicious data or code is submitted to an application and not properly ferreted out by sanitization routines written by developers, for whatever reason.

But there are a number of "web application" attacks that have nothing to do with developers and code, and are, in fact, more focused on the exploitation of protocols. TCP and HTTP can be easily manipulated in such a way as to result in a successful attack on an application without breaking any RFC or W3C standard that specifies the proper behavior of these protocols. Worse, the application developer really can't do anything about these types of attacks because the application isn't aware they are occurring.

It is, in fact, the four layers of the TCP/IP stack - the foundation for web applications - that are the reason we need to redefine web application security and expand it to include the network where it makes sense. Being a "web" application necessarily implies network interaction, which implies reliance on the network stack, which implies potential exploitation of the protocols upon which applications and their developers rely but have little or no control over. Web applications ride atop the OSI stack, above and beyond the network and application model on which all web applications are deployed. But those web applications have very little control over or visibility into that stack, and network-focused security (firewalls, IDS/IPS) has very little awareness of the application.

Let's take a Layer 7 DoS attack as an example. L7 DoS works by exploiting the nature of HTTP 1.1 and its ability to essentially pipeline multiple HTTP requests over the same TCP connection. Ostensibly this behavior reduces the overhead on the server associated with opening and closing TCP connections and thus improves the overall capacity and performance of web and application servers. So far, all good. But this behavior can be used against an application. When a client requests a legitimate web page using HTTP 1.1, TCP connections on the server are held open waiting for additional requests to be submitted. And they'll stay open until the client sends the appropriate HTTP Connection: close header with a request or the TCP session itself times out based on the configuration of the web or application server. Now, imagine a single source opens a lot - say, thousands - of pages. They are all legitimate requests; there's no malicious data involved or incorrect usage of the underlying protocols. From the application's perspective this is not an attack. And yet it is an attack. It's a basic resource consumption attack designed to chew up all available TCP connections on the server such that legitimate users cannot be served.
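To see the protocol behavior being abused here, consider this small sketch; the hostname is a placeholder, and it should only ever be pointed at a test server you own. With HTTP/1.1, one client can send a single, perfectly well-formed request on each of many connections and then simply hold them open; nothing in the requests themselves is malformed.

```python
# Sketch of the HTTP/1.1 behavior described above: a client can send one
# legitimate request per connection and then simply hold the connection open.
# The hostname is a placeholder -- point it only at a test server you own.
import http.client
import time

HOST = "test-app.example.com"
held = []

for _ in range(5):   # an attacker would do this with thousands of connections
    conn = http.client.HTTPConnection(HOST, timeout=10)
    conn.request("GET", "/", headers={"Connection": "keep-alive"})
    conn.getresponse().read()    # a perfectly ordinary, well-formed request
    held.append(conn)            # ...but the connection is never closed

# Every held connection ties up a server-side connection slot until the
# server's own idle timeout finally reaps it.
time.sleep(60)
```

From inside the application, each of these looks like a normal page view; only something watching the aggregate connection behavior can tell the difference.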
The distinction between legitimate users and legitimate requests is paramount to understanding why web application security isn't always just about the application; sometimes it's about protocols and behavior external to the application that cannot be controlled, let alone detected, by the application or its developers.

The developer, in order to detect such a misuse of the HTTP protocol, would need to keep what we in the network world call a "session table". Web and application servers keep this information - they have to - but they don't make it accessible to the developer. Basically, the developer's viewpoint is from inside a single session, dealing with a single request, dealing with a single user. The developer, and the application, have a very limited view of the environment in which the application operates.

A web application firewall, however, necessarily has access to its session table. It operates with a viewpoint external to the application, with the ability to understand the "big picture". It can detect when a single user is opening three or four or thousands of connections and not doing anything with them. It understands that the user is not legitimate because the user's behavior is out of line with how a normal user interacts with an application. A web application firewall has the context in which to evaluate the behavior and requests and realize that a single user opening multiple connections is not legitimate after all.
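As a rough illustration of that session-table viewpoint (not any particular WAF's logic, and the thresholds are invented for the example), the sketch below counts connections per client and how many of them are sitting idle, and flags clients whose behavior is far outside a normal profile.

```python
# Illustrative sketch of a WAF-style "session table" view -- not a real product's logic.
import time
from collections import defaultdict

MAX_CONNS_PER_CLIENT = 50      # thresholds are made up for illustration
MAX_IDLE_SECONDS = 30

# client IP -> {connection id -> timestamp of last request on that connection}
session_table = defaultdict(dict)

def on_request(client_ip, conn_id):
    """Record activity as requests arrive on a connection."""
    session_table[client_ip][conn_id] = time.time()

def on_close(client_ip, conn_id):
    """Forget a connection when it is torn down."""
    session_table[client_ip].pop(conn_id, None)

def suspicious_clients():
    """Clients holding an unusual number of connections that are mostly idle."""
    now = time.time()
    flagged = []
    for ip, conns in session_table.items():
        idle = sum(1 for last in conns.values() if now - last > MAX_IDLE_SECONDS)
        if len(conns) > MAX_CONNS_PER_CLIENT and idle > len(conns) * 0.9:
            flagged.append(ip)
    return flagged
```

The point is not the specific numbers; it is that this kind of cross-connection, cross-session view exists outside the application, which is exactly the context a single request handler never sees.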
Not all web application security is about the application. Sometimes it's about the protocols, languages, and platforms supporting the delivery of that application. And when the application lacks visibility into those supporting infrastructure environments, it's pretty damn difficult for the application developer to use secure coding to protect against exploitation of those facets of the application.

We really have to stop debating where web application security belongs, go back to the beginning, and redefine what it means in a web-driven, distributed world if we're going to effectively tackle the problem any time this century.

The third greatest (useful) hack in the history of the Web

Developers have an almost supernatural ability to work around restrictions, even though some of the restrictions on building applications delivered via the web have been akin to kryptonite. Like Superman fighting through the debilitating effects of the imaginary mineral, they've gotten around those restrictions by coming up with ways to implement functionality and improve the behavior of browsers, and thus web applications, anyway. The first greatest hack was giving HTTP state. The second? Cookie-based persistence. The third? The CNAME trick.

THE PROBLEM

The reason the "CNAME trick" came about was the limitation on connections to a single host imposed by browsers, particularly versions of Internet Explorer prior to IE8. With only 2 connections per host name allowed, and many times that number of objects on a page, the ability of IE in particular - but really all browsers - to quickly retrieve all those objects and render them was hampered. This resulted in the appearance that the application performed poorly, when in reality it wasn't the application but the inherent delivery mechanisms that were slow, due to limitations beyond the user's, the network admin's, and the developer's control.

Users, of course, don't care about any of this. All they know is that the application they are using is slow and they want it fast. And when some of those users are corporate business users, the developers are going to hear about it, because the help desk is going to call them when they get barraged with complaints from users. This is the real reason developers develop nearly supernatural powers of hacking; they'll do anything to stop users from complaining.

THE HACK

Developers all over (including ours inside F5, working on building our application acceleration solution, WebAccelerator) figured that if the browser was going to limit the number of connections to a single host, the answer was simply to trick the browser into thinking it was talking to more than one host. Turns out doing this is rather trivial: simply add multiple CNAMEs for the same host to DNS, and then reference those as the host for some of the objects on the page. So www.example.com becomes www1, www2, www3, and so on. This required changes to the application so that the additional host names were referenced, unless you made use of a proxy-based solution like WebAccelerator and BIG-IP Local Traffic Manager capable of rewriting outbound host names and virtualizing them to appear to the outside world as if they were a single host.
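Here is a minimal sketch of what the application-side change looks like (the shard hostnames are illustrative and assumed to be CNAMEs for the same origin): assets are deterministically spread across the extra CNAMEs, so each asset always maps to the same shard and stays cache-friendly.

```python
# Illustrative sketch of the "CNAME trick" (domain sharding) on the application side.
# The shard hostnames are assumed to be DNS CNAMEs pointing at the same origin.
import zlib

SHARDS = ["www1.example.com", "www2.example.com", "www3.example.com"]

def shard_url(path):
    """Map an asset path to a shard deterministically, so it stays cache-friendly."""
    shard = SHARDS[zlib.crc32(path.encode()) % len(SHARDS)]
    return f"http://{shard}{path}"

# Each asset consistently resolves to the same shard hostname, so the browser
# opens its per-host connection limit to each shard in parallel.
for asset in ["/css/site.css", "/js/app.js", "/img/logo.png"]:
    print(shard_url(asset))
```

The browser now believes it is talking to three different hosts and happily opens its connection limit to each of them, which is the whole trick.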
THE NEW PROBLEMS

This improved application performance, but at the cost of increasing the number of simultaneous connections to the server. This was "bad" in the sense that a web server, even well tuned, can only support X simultaneous connections at a time, and if each user was consuming Y connections per page, the number of concurrent users that could be supported on a single server was decreased by this hack. This made servers less efficient and required additional servers to ensure availability and scale.

Along comes AJAX, and the popularity of the "CNAME trick" rose rapidly. This is because AJAX became a way to provide near real-time updates to web browsers on a per-component basis. The result is a rich, interactive application that ends up maintaining a connection to the server on a nearly continual basis. So not only is a single application using more connections - and thus server resources - to load a page, it is also consuming more resources and connections throughout its execution.

Some web servers aren't good about dealing with virtual hosts, either. Rather than serving all the virtual hosts from a single instance, the server will spawn more and more child instances, each one consuming resources, until there are so many instances running that each one can handle fewer concurrent users than the parent.

The CNAME trick is also difficult to scale. Every time a new CNAME is added, it must be referenced within the application, which often means modifying applications - a costly proposition in terms of time and effort. If the CNAME trick is used as a reactive measure to address poor application performance, the entire application must be modified to reference the new host names.

THE SOLUTIONS

There are several solutions to the problems created by the CNAME trick, but they all take advantage of the same core principle: a proxy-based mediator. The proxy's job is to mediate for the client and aggregate connection requests to the server, making them more efficient, either by reusing connections to servers, by load balancing them more efficiently, or by rewriting requests. The advantage of implementing a proxy-based solution proactively is that applications don't necessarily need to be modified if the solution can rewrite host names in outbound responses. Basically, if the proxy is smart enough, it can change the host names in the page before it reaches the browser, and then aggregate and optimize the requests for each object as they come back in, essentially taking advantage of layer 7 switching to route all the requests to the appropriate web server. The problem of additional connections can't be solved without a proxy of some kind. How efficient a web server is at handling the additional connections can be changed with tuning and optimization, but the additional connections and resulting consequences are going to hit that web server regardless without a mediator to deal with them.

THE FUTURE

This is a great hack, there is no doubt about it. It's one of the best ways to work around the problems with browser limitations. Now that IE8 is increasing the connection limit from 2 to 6, the CNAME hack will be less necessary to combat the perception of poor application performance. But the problem of inefficiency and resource consumption on servers will not go away; in fact, it's likely to get worse as IE8 is adopted over the next year. The increase in connections initiated from browsers will continue to strain the application infrastructure and, in fact, will get worse, primarily because not everyone implemented the "CNAME trick" and thus not everyone's infrastructure felt the strain of those increased connections. With IE8, everyone will feel the impact of additional connections upon their application infrastructure - whether they're a small to medium business, a large enterprise, or a service provider.
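As a closing illustration of the proxy-based approach described under THE SOLUTIONS (a sketch only, with made-up hostnames; this is not a description of how WebAccelerator or BIG-IP is implemented): the proxy rewrites host names in outbound HTML so the application never has to change, and then treats all the shard host names as the same back-end pool when the requests come back in.

```python
# Sketch of proxy-side host rewriting for the CNAME trick -- illustrative only,
# not a description of WebAccelerator or BIG-IP internals.
import re
from itertools import cycle

ORIGIN = "www.example.com"
SHARDS = ["www1.example.com", "www2.example.com", "www3.example.com"]
shard_cycle = cycle(SHARDS)

def rewrite_outbound_html(html):
    """Rewrite references to the origin host so objects spread across the shards."""
    return re.sub(rf"//{re.escape(ORIGIN)}/",
                  lambda m: f"//{next(shard_cycle)}/",
                  html)

def route_inbound(host_header):
    """All shard host names map back to the same back-end pool (layer 7 switching)."""
    return "backend-pool" if host_header in SHARDS or host_header == ORIGIN else None

page = '<img src="http://www.example.com/a.png"><img src="http://www.example.com/b.png">'
print(rewrite_outbound_html(page))        # objects now reference www1/www2/...
print(route_inbound("www2.example.com"))  # -> backend-pool
```

Because the rewriting happens in the proxy, adding or removing shard names is a configuration change rather than an application change, which addresses the scaling complaint above.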