http headers
8 Topics

Using "X-Forwarded-For" in Apache or PHP
An issue that often comes up for users of any full proxy-based product is that the original client IP address is lost to the application or web server. In a full proxy system there are two connections: one between the client and the proxy, and a second between the proxy and the web server. The web server therefore sees the connection as coming from the proxy, not the client. Needless to say, this can cause problems if you want to know the IP address of the real client for logging, for troubleshooting, for tracking down bad guys, or for performing IP-address-specific tasks such as geocoding. Maybe you're just like me and you're nosy, or you're like Don and you want the webalizer graphs to be a bit more interesting (just one host does not a cool traffic graph make, after all!).

That's where the "X-Forwarded-For" HTTP header comes into play. The proxy can, if configured to do so, insert the original client IP address into a custom HTTP header so it can be retrieved by the server for processing. If you've got a BIG-IP you can simply enable insertion of the "X-Forwarded-For" header in the HTTP profile. Check out the screen shot below to see just how easy it is. Yeah, it's that easy. If for some reason you can't enable this feature in the HTTP profile, you can write an iRule to do the same thing:

when HTTP_REQUEST {
    HTTP::header insert "X-Forwarded-For" [IP::client_addr]
}

Yeah, that's pretty easy, too. So now that you're passing the value along, what do you do with it?

Modifying Apache's Log Format

Joe has a post describing how to obtain this value in IIS. But that doesn't really help if you're not running IIS and, like me, have chosen to run a little web server you may have heard of called Apache. Configuring Apache to log the X-Forwarded-For value instead of (or in conjunction with) the usual client address is pretty simple.
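One wrinkle worth knowing before you consume the value (a note and sketch of my own, not from the original article): when a request crosses more than one proxy, X-Forwarded-For can arrive as a comma-separated chain, with the original client left-most.

```python
def client_ip_from_xff(xff: str) -> str:
    """Return the left-most (original client) address from an
    X-Forwarded-For chain like '203.0.113.7, 10.0.0.2, 10.0.0.3'."""
    return xff.split(",")[0].strip()

print(client_ip_from_xff("203.0.113.7, 10.0.0.2, 10.0.0.3"))  # 203.0.113.7
```

Keep in mind the header is client-supplied and trivially spoofed upstream of your own proxy, so only trust the portion your own devices appended.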
ApacheWeek has a great article on how to incorporate custom fields into a log file, but here's the down and dirty. Open your configuration file (usually in /etc/httpd/conf/) and find the section describing the log formats. Then add the following to the log format you want to modify, or create a new one that includes it, to extract the X-Forwarded-For value:

%{X-Forwarded-For}i

That's it. If you don't care about the proxy IP address, you can simply replace the traditional %h in the common log format with this new value, or you can log it as an additional field. Restart Apache and you're ready to go.

Getting the X-Forwarded-For from PHP

If you're like me, you might have written an application or site in PHP, and for some reason you want the real client IP address, not the proxy's. Even though my BIG-IP has the X-Forwarded-For functionality enabled in the HTTP profile, I still need to access that value from my code so I can store it in the database.

$headers = apache_request_headers();
$real_client_ip = $headers["X-Forwarded-For"];

That's it; now I have the real IP address of the client, and not just the proxy's address. Happy Coding & Configuring! Imbibing: Coffee

302 Redirect and http headers
I have an iRule that does a redirect, but I need the redirect to pass HTTP headers to another server (in a different domain). The access policy will authenticate the user and get certain information from AD. This information needs to be sent to the other web application in HTTP headers (not my doing 🙂 ). Can headers be passed this way, and what would be the syntax for HTTP::respond with HTTP headers?

when ACCESS_POLICY_COMPLETED {
    set policy_result [ACCESS::policy result]
    switch $policy_result {
        "allow" {
            set clientid "12345"
            set userid "[ACCESS::session data get session.saml.last.attr.name.employeeID]"
            set timestamp "[ACCESS::session data get session.custom.form.timestamp]"
            log local0. "***** clientId $clientid"
            log local0. "***** userid $userid"
            log local0. "***** timestamp $timestamp"
            HTTP::respond 302 Location "https://xyz.domain.com/" "client_id" $clientid "client_userid" $userid "client_timestamp" $timestamp
            # Trying to do something like this:
            # HTTP::response 302 Location "https://xyz" "http-header client_id $clientid" "http-header client_userid $user ..."
        }
        "deny" {
            ACCESS::respond 401 content "Error: Failure in Authentication" Connection Close
        }
    }
}

Authenticated Sessions at the HTTP level for the iControl API (HTTP Headers?)
We're using iControl 11.2's Interfaces object and doing some serious pounding of the system. BIG-IP has no trouble handling the load; however, we're going through a third-party authentication/authorization application (TACACS+) which is having trouble keeping up. Is there a way to maintain the Axis HTTP session once authenticated? I tried using the SOAP "session" header, but that didn't work; I'm pretty sure that's more of an application-level session. I'm assuming the HTTP X-iControl-Session header will function the same way. I'm wondering whether the BIG-IP web server will respect HTTP session authentication. If anyone out there has any ideas, I'm open to trying them. We may be exploring pooling authenticated TCP connections with keep-alive, but I'm really hoping there's a better solution.

I am in your HTTP headers, attacking your application
Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser-known applications and attack vectors. These exploits are potentially more dangerous because, once proven through a successful attack on these lesser-known applications, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

POST /roundcube/bin/html2text.php HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Host: xx.xx.xx.xx
Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
Content-Length: 54

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header in order to retrieve some data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system that could be used in a cross-site scripting attack, deleting files, or just generally wreaking havoc on the system. This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against?
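That Accept value is nothing more than base64-wrapped PHP. Decoding it (a quick Python check of my own, not part of the SANS write-up) shows what the attacker hoped html2text.php would evaluate:

```python
import base64

# The Accept header value from the captured request above
payload = ("ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7"
           "O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==")
decoded = base64.b64decode(payload).decode("ascii")
print(decoded)  # echo (333212+43245666)." ";;passthru("uname -a;id");
```

The echoed arithmetic acts as a marker so the attacker can confirm the injection actually executed, while passthru("uname -a;id") leaks system details back in the response.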
This is part of what makes secure coding so difficult: developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with a new way to exploit some aspect of an application or transport-layer protocol. Think HTTP headers aren't generally used by applications? Consider the custom SOAPAction header for SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code.

So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, its use to target specific applications means it is a possible attack vector for the future against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might, and apparently they have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect. That's a good rule of thumb for protecting against malicious payloads anyway, but especially a good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused, to the point it's considered typical, of course).
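As a concrete, deliberately conservative illustration of that advice (my own sketch, not from the original post): whitelist what a legitimate header value may contain rather than trying to enumerate attacks. The pattern below accepts plausible media-range lists and rejects the base64 sample above; note that no regex filter substitutes for never evaluating header content in the first place.

```python
import re

# A media range like "text/html" or "*/*", optionally ";q=0.8",
# repeated with commas. Far stricter than the full RFC 2616 grammar.
_ACCEPT = re.compile(
    r"^[A-Za-z0-9*/+.\-]+(;\s*q=\d(\.\d+)?)?"
    r"(\s*,\s*[A-Za-z0-9*/+.\-]+(;\s*q=\d(\.\d+)?)?)*$"
)

def accept_is_plausible(value: str) -> bool:
    """Reject Accept values that don't look like media-range lists."""
    return bool(_ACCEPT.match(value.strip()))
```

For example, accept_is_plausible("text/html, */*; q=0.8") is True, while the padded base64 payload fails the match.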
Possible ways to prevent the potential exploitation of HTTP headers:

- Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers.
- Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
- Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
- Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using, and ensure they are legitimate values before using them in any way.

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.

Related articles by Zemanta
- Gmail Is Vulnerable to Hackers
- The Concise Guide to Proxies
- 3 reasons you need a WAF even though your code is (you think) secure
- Stop brute forcing listing of HTTP OPTIONS with network-side scripting
- What's the difference between a web application and a blog?

20 Lines or Less #10
What could you do with your code in 20 Lines or Less? That's the question I ask every week, and every week I go looking to find cool new examples that show just how flexible and powerful iRules can be without getting in over your head. With another three examples of cool iRules, this week's 20 Lines or Less shows even more things you can do in less than 21 lines of code. I still haven't heard much from you guys as to the kinds of things you want to see, so make sure to get those requests in. I can build all sorts of neat iRules if you just let me know what would be helpful or interesting. Otherwise I might just make iRules that make iRules. Scary. This week we've got a couple forum examples and a contribution to the codeshare. Here's your epic, 10th edition of the 20LoL:

HTTP Headers in the HTTP Response
http://devcentral.f5.com/s/Default.aspx?tabid=53&forumid=5&postid=25423&view=topic

Dealing with HTTP headers is one of the most common tasks we see in iRules. One of the things I've seen floating about the forums and elsewhere lately is the question of how to access request information in the response portion of the HTTP transaction. Some people have had a problem with this, as many of those headers no longer exist by then (like, say, the host). It's a simple solution, though, as you can see below... just use a variable to carry the value across.

when HTTP_REQUEST {
    # Save the URI
    set uri [HTTP::uri]
}
when HTTP_RESPONSE {
    if {([HTTP::header Cache-Control] eq "private, max-age=3600") and ($uri ends_with ".html")} {
        HTTP::header replace Cache-Control "public, max-age=3600"
    }
}

Persistence equality in RDP sessions
http://devcentral.f5.com/s/Default.aspx?tabid=53&forumid=5&postid=25271&view=topic

This example solves an issue with mixing Linux- and Windows-based RDP sessions across a persistence-enabled virtual. Apparently there's an issue with trying to persist based off the user string, as some clients include user@local.host and others just include the username.
That's a bit of an issue. iRules to the rescue, as always.

when CLIENT_ACCEPTED {
    TCP::collect
}
when CLIENT_DATA {
    TCP::collect 25
    binary scan [TCP::payload] x11a* msrdp
    if { [string equal -nocase -length 17 $msrdp "cookie: mstshash="] } {
        set msrdp [string range $msrdp 17 end]
        set len [string first "\n" $msrdp]
        if { $len == -1 } { TCP::collect }
        if { $msrdp contains "@" } {
            if { $len > 5 } {
                incr len -1
                persist uie [getfield $msrdp "@" 1] 10800
            }
        } else {
            persist uie $msrdp 10800
        }
    }
    TCP::release
}

Pool Selection based on File Extension
http://devcentral.f5.com/s/wiki/default.aspx/iRules/PoolBasedOnExtension.html

Taking a page from the codeshare, this iRule lets you build a correlation of file extensions and pools that serve those particular file types. This can be quite handy when dealing with large-scale image servers, media systems, and especially systems that do things like dynamically generate watermarks on images and the like. Take a peek.

when HTTP_REQUEST {
    switch -glob [HTTP::path] {
        "*.jpg" -
        "*.gif" -
        "*.png" { pool image_pool }
        "*.pdf" { pool pdf_pool }
        default { pool web_pool }
    }
}

There you have it; three more examples in less than 60 lines of code. I hope you're still finding this series helpful. As always, feel free to drop me a line with feedback or suggestions. Thanks! #Colin

Working around client-side limitations on custom HTTP headers
One of the most well-kept secrets in technology is the extensibility of HTTP. It's one of the reasons HTTP became the de facto application transport protocol, and it was instrumental in getting SOAP off the ground before SOAP 1.2 and WS-I Basic Profile made the requirement for the SOAPAction header obsolete. Web browsers aren't capable of adding custom HTTP headers on their own; that functionality comes from the use of client-side scripting languages such as JavaScript or VBScript. Other RIA (Rich Internet Application) client platforms such as Adobe AIR and Flash are also capable of adding HTTP headers, though both have limitations on which (if any) custom headers you can use.

There are valid reasons for wanting to set a custom header. The most common is to preserve the source IP address of the client for logging purposes in a load-balanced environment, using the X-Forwarded-For custom header. Custom HTTP headers can also be set by the server or an intermediary (load balancer, application delivery controller, cache), often to indicate that the content has passed through a proxy. A quick perusal of the web shows developers desiring to use custom HTTP headers for a variety of reasons, including security, SSO (single sign-on) functionality, and transferring data between pages/applications.

Unfortunately, a class of vulnerabilities known as "HTTP header injection" often causes platform providers like Adobe to limit or completely remove the ability to manipulate HTTP headers on the client. And adding custom headers using JavaScript or VBScript may require modification of the application and relies on the user allowing scripts to run in the first place, the consistency of which can no longer be relied upon. But what if you really need those custom headers to either address a problem or enable some functionality?
All is not lost; you can generally use an intelligent proxy-based load balancer (application delivery controller) to insert the headers for you. If the load balancer/application delivery controller has the ability to inspect requests and modify requests and responses with a technology like iRules, you can easily add your custom headers at the intermediary without losing the functionality desired or needing to change the request method from GET to POST, as some have done to get around these limitations.

Using your load balancer/application delivery controller to insert, delete, or modify custom HTTP headers has other advantages as well:

- You don't need to modify the client or the server-side application or script that served the client.
- The load balancer can add the required custom HTTP header(s) for all applications at one time, in one place.
- Your application will still work even if the client disables scripting.

Custom HTTP headers are often used for valid reasons when developing applications. The inability to manipulate them easily on the client can interfere with the development lifecycle and make it more difficult to address vulnerabilities and quirks in packaged applications and the platforms on which applications are often deployed. Taking advantage of the more advanced features available in modern load balancers/application delivery controllers makes implementing such workarounds simple.

reverse proxy mapping of server with strict header checking
We are trying to map a number of separately developed apps onto the same domain, with each app in a subdomain, so users can request https://.ourdomain.com/ and get directed to the correct app. Apps we developed are in pools in our own hosting and working fine. We also need to map one app developed by a third party and hosted externally (https://thirdpartyapp.theirdomain.com/). We have the IP address of this third-party app in a pool and the traffic is flowing correctly, but some browsers set headers which cause resource requests that follow the initial connection to receive a 403 FORBIDDEN response. Unfortunately I don't have access to the BIG-IP - it's a managed service, so writing and debugging iRules is a slow process.

What I need help with: does this iRule effectively substitute headers in the outgoing request? I know the replace works, but how do I know these are the headers going over to the other end (I have no access to the F5 or to the third-party server)?

{
    set uri [HTTP::uri]
    set httpver [HTTP::version]
    set headers [HTTP::header names]
    array unset request
    array set request {uri $uri}
    foreach header $headers {
        regsub -all {externalapp.ourdomain.com} [HTTP::header $header] prod-thirdpartyapp.theirdomain.com newheadervalue
        set request($header) $newheadervalue
    }
    set ENCRYPT 1
    pool POOL-thirdparty-443-external
    snat [IP::local_addr]
}

I know the regsub is replacing the headers correctly. Where I am losing confidence is that I can't see the request headers of the outbound connection to the third-party server. Do I need to write the headers back into HTTP::header, or does "set request" do that for the outbound request - i.e., is request a special object on the F5 that automatically sets the server-side HTTPS request? Thanks for your help

HTTP Host Header replacement using AS3
I am using an L7 policy within AS3 to manage my sites. I have a requirement where I need to modify the Host header before forwarding the request to the pool. I know this is easy in the GUI, in the action section, where I can just use "replace HTTP Host". However, I do not see a "replace" action for "Policy_Action_HTTP_Header" in the AS3 schema. Has anybody done this header replacement using AS3? Note: I would rather not use "tcl:.." and am looking for native L7 syntax. Any help would be greatly appreciated.
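Editor's note: whatever mechanism ends up applying it, the behavior being asked for is simple to state: before the request is handed to the pool, the Host value must be swapped while everything else is left alone. A minimal Python sketch of that contract (illustration only - this is not AS3 syntax, and the hostname is hypothetical):

```python
def replace_host(headers: dict, new_host: str) -> dict:
    """Return a copy of the request headers with the Host value
    swapped, i.e. what 'replace HTTP Host' does in the GUI."""
    rewritten = dict(headers)      # leave the original request untouched
    rewritten["Host"] = new_host
    return rewritten

out = replace_host({"Host": "www.example.com", "Accept": "*/*"},
                   "internal.example.com")
```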