mod_rewrite
I Can Has UR .htaccess File
Notice that isn't a question; it's a statement of fact.

Twitter is having a bad month. After it was blamed, albeit incorrectly, for a breach leading to the disclosure of both personal and corporate information via Google's GMail and Apps, its apparent willingness to allow anyone and everyone access to a .htaccess file ostensibly protecting search.twitter.com made the rounds via, ironically, Twitter.

This vulnerability at first glance appears fairly innocuous, until you realize just how much information can be placed in a .htaccess file that could have been exposed by this technical configuration faux pas. Included in the .htaccess file are a number of URI rewrites, which give an interesting view of the underlying file system hierarchy Twitter is using, as well as a (rather) lengthy list of IP addresses denied access. All in all, not that exciting, because many of the juicy bits that could be configured via .htaccess for any given website are not configured in this easily accessible .htaccess file.

Some things you can do with .htaccess, in case you aren't familiar:

- Create default error documents
- Enable SSI via .htaccess
- Deny users by IP
- Change your default directory page
- Redirects
- Prevent hotlinking of your images
- Prevent directory listing

.htaccess is a very versatile little file, capable of handling all sorts of security and application delivery tasks. Now what's interesting is that the .htaccess file is in the root directory and should not be accessible. Apache configuration files are fairly straightforward, and there is a plethora of examples of how to prevent .htaccess - and its wealth of information - from being viewed by clients.

Obfuscation, of course, is one possibility, as Apache's httpd.conf allows you to specify the name of the access file with a simple directive:

    AccessFileName .htaccess

It is a simple enough thing to change the name of the file, thus making it more difficult for automated scans to discover vulnerable access files and retrieve them. A little addition to httpd.conf regarding the accessibility of such files, too, will prevent curious folks from poking at .htaccess and retrieving it with ease. After all, there is no reason for an access file to be viewed by a client; it's a server-side security configuration mechanism, meant only for the web server, and should not be exposed given the potential for leaking a lot of information that could lead to a more serious breach in security.

    <Files ~ "^\.ht">
        Order allow,deny
        Deny from all
        Satisfy All
    </Files>

Another option, if you have an intermediary enabled with network-side scripting, is to prevent access to any .htaccess file across your entire infrastructure. Changes to httpd.conf must be made on every server, so if you have a lot of servers to manage and protect it's quite possible you'd miss one due to the sheer volume of servers to slog through. Using a network-side scripting solution eliminates that possibility because it's one change that can immediately affect all servers. Here's an example using an iRule, but you should also be able to use mod_rewrite to accomplish the same thing if you're using an Apache-based proxy (a rough mod_rewrite sketch appears after the iRule below):

    when HTTP_REQUEST {
        # Check the requested URI
        switch -glob [string tolower [HTTP::path]] {
            "/.ht*" {
                reject
            }
            default {
                pool bigwebpool
            }
        }
    }

However you choose to protect that .htaccess file, just do it.
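For the Apache-based proxy case mentioned above, a mod_rewrite equivalent of that iRule might look something like the following. This is a rough sketch under my own assumptions, not anyone's actual configuration; where the directives live (httpd.conf, a virtual host) depends on how the proxy is set up.

    RewriteEngine On
    # Refuse any request for a path segment beginning with ".ht"
    # (e.g. .htaccess, .htpasswd) before it reaches the back-end web servers.
    RewriteCond %{REQUEST_URI} "(^|/)\.ht" [NC]
    RewriteRule ^ - [F]

Unlike the iRule, which rejects matching requests and sends everything else to a pool, this simply returns a 403; forwarding the remaining traffic would be handled by your existing proxy configuration.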
This isn't rocket science; it's a straight-up simple configuration error that could potentially lead to more serious breaches in security, especially if your .htaccess file contains more sensitive (and informative) information.

- An Unhackable Server is Still Vulnerable
- Twittergate Reveals E-Mail is Bigger Security Risk than Twitter
- Automatically Removing Cookies
- Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- Jedi Mind Tricks: HTTP Request Smuggling
- I am in your HTTP headers, attacking your application
- Understanding network-side scripting

I am in your HTTP headers, attacking your application
Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser known applications and attack vectors. These exploits are potentially more dangerous because once proven through a successful attack on these lesser known applications, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

    POST /roundcube/bin/html2text.php HTTP/1.1
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
    Host: xx.xx.xx.xx
    Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
    Content-Length: 54

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header in order to retrieve some data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system that could be used in a cross-site scripting attack, or deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult - developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with a new way to exploit some aspect of an application or transport layer protocol.

Think HTTP headers aren't generally used by applications? Consider the use of the SOAPAction HTTP header for SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code. So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, its use to target specific applications means it is a possible attack vector for the future against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might, and apparently have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
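As one illustration of what that verification could look like at the web server tier, here is a minimal mod_rewrite sketch of my own (not from the SANS write-up) that refuses requests whose Accept header doesn't even loosely resemble a media-type list. The pattern is an assumption and would need to be tuned to what your clients legitimately send.

    RewriteEngine On
    # If an Accept header is present but does not start with something shaped
    # like "type/subtype" (e.g. text/html or */*), refuse the request outright.
    RewriteCond %{HTTP_ACCEPT} !^$
    RewriteCond %{HTTP_ACCEPT} !^[a-z*]+/[a-z0-9.+*-]+ [NC]
    RewriteRule ^ - [F]

The base64-encoded payload in the example above fails that check and would be answered with a 403, while a normal browser value such as text/html,application/xhtml+xml,*/*;q=0.8 passes. A network-side scripting or WAF solution can apply the same kind of check once, in front of every application, rather than per server.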
Treating all input as suspect is a good rule of thumb for protecting against malicious payloads anyway, but it's an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused, to the point it's considered typical, of course).

Possible ways to prevent the potential exploitation of HTTP headers:

- Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers.
- Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
- Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
- Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using, and ensure they are legitimate values before using them in any way.

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.

Related articles by Zemanta:

- Gmail Is Vulnerable to Hackers
- The Concise Guide to Proxies
- 3 reasons you need a WAF even though your code is (you think) secure
- Stop brute force listing of HTTP OPTIONS with network-side scripting
- What's the difference between a web application and a blog?

3 Reasons not to use Apache mod rewrite
After reading this discussion on Slashdot regarding an anti-virus agent pretending to be Internet Explorer and flooding sites with requests, I waited to see a response come from an Apache fan on using mod_rewrite to detect and stop the flood of useless traffic coming from these robots. It was sure to come, particularly after the first post in the discussion pointed out how to use an iRule to detect and "nuke from orbit" these nasty little requests. I was not disappointed.

It's not the case that the solution won't work. It will, and it's certainly a viable solution. At least if you're only running 2 or 3 web servers. And you don't care about the need to interrupt service to implement the solution. And you aren't worried about potentially introducing errors into the server configuration. And you aren't running IIS or some other web server. There are a few very good reasons not to use Apache mod_rewrite for this kind of situation (a sketch of what the mod_rewrite approach looks like follows the list, for comparison).

1. It's less efficient. mod_rewrite requires that the server respond to the useless request, which means it's chewing up valuable resources on the server and the network that could be used to handle legitimate requests. Using mod_rewrite gains very little when you're attempting to stop useless traffic and requests from overloading your network or web servers, because the traffic has to reach the servers and the web servers have to respond. Not only is it less efficient in terms of computing resources, it's also less efficient for administrators to develop, test, and deploy the solution on every server. That takes time, time that could certainly be spent on other administrative tasks. By implementing the same "detect and nuke" functionality on the load-balancer (application delivery controller) you (a) prevent the traffic from flooding your network, (b) conserve resources on the server that can be used to serve legitimate requests, and (c) can implement the solution much faster, as it only needs to be implemented once rather than once per server.

2. It only works in homogeneous environments. If you're running all Apache web servers, great. But very few environments are standardized on a single web server platform. By implementing the same "detect and nuke" functionality on the load-balancer (application delivery controller) you can prevent the consumption of resources on all web server platforms at the same time, without needing to implement 2, 3, or more different solutions on the web servers.

3. It requires an interruption to services. In order to deploy a mod_rewrite solution, the Apache configuration file must be modified and Apache restarted. That means all connected clients will have their service interrupted, and if there's an error or some other problem with the configuration it will result in downtime. Either administrators need to quiesce (bleed) the connections off before restarting, or they need to simply hit the reset button without concern for connected clients. Depending on the services running on the affected server, that can be bad news for business. By implementing the same "detect and nuke" functionality on the load-balancer (application delivery controller) you won't need to restart the web servers nor interrupt service. Because the load-balancer is managing the connections and handling the inspection of requests, the servers don't need to be restarted, and connection management can be handled by the application delivery controller.
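For comparison, here is roughly the shape a per-server mod_rewrite "detect and nuke" rule takes. It is a sketch only: the condition is a placeholder, since the agent in question deliberately mimics Internet Explorer, and the real rule would have to key on whatever actually distinguishes its requests rather than the User-Agent string alone.

    RewriteEngine On
    # Placeholder condition: match whatever header value or pattern actually
    # identifies the offending agent, then refuse the request with a 403.
    RewriteCond %{HTTP_USER_AGENT} "SomeTelltalePattern" [NC]
    RewriteRule ^ - [F]

Whatever the condition, this has to be added to httpd.conf (or each virtual host) on every Apache server and the configuration reloaded, which is exactly the per-server effort and service interruption argued against above, and the request still has to reach the server just to be rejected.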
So basically, if you're running a small, homogeneous server farm and don't care about downtime or interruptions to service, then go ahead and use Apache mod_rewrite to solve this - or a similar - problem. But if you're in charge of a lot more servers, and it's a heterogeneous environment, and you can't interrupt service, then you should seriously consider implementing this kind of functionality in your application delivery controller instead. Once again, just because you can doesn't mean you should.