(HTTP) Redirection via Arbitrary Host Header
Does that title sound familiar to you? It is something we see in support cases quite often, usually when a customer has had a PCI audit or penetration test conducted against their web properties. It sounds alarming, but it often has a very simple cause, and protecting against it is often also quite simple!

What is the Host header?

If we go way back to the earliest webservers and HTTP/1.0, RFC1945 didn't include a specification for a Host header. Instead, it was assumed that the host (IP address) receiving the request was the only intended destination, and that the server was only serving a single website. It soon became apparent to the architects of the modern world-wide web (Tim Berners-Lee and all the others named in the HTTP RFCs) that more flexibility was required – specifically, the ability for a single target IP address to host more than one website under more than one domain. (OK, there's more to it than that – the role of proxies is also important here, but irrelevant to our current discussion.)

To enable that, the "Host:" header was added to RFC2616, the HTTP/1.1 specification document, which allows a single server to understand which "virtual host" an incoming request is destined for and, through that, serve multiple domains on one system. There are two ways to satisfy that requirement of HTTP/1.1:

- By sending a "Host:" header along with the request, specifying the desired target (see fig. 1.1)
- By sending an "Absolute URI" rather than a relative one, with the URI containing the hostname (see fig. 1.2)

(See Section 19.6.1.1 of RFC2616 for more information.)

GET /index.html HTTP/1.1<CRLF>
Host: www.example.com<CRLF>
<CRLF>

Fig 1.1: An example HTTP/1.1 request with Host header

GET http://www.example.com/index.html HTTP/1.1<CRLF>
<CRLF>

Fig 1.2: An example HTTP/1.1 request with Absolute URI

What could go wrong?

Quite a lot of things, it turns out! There are all sorts of potential problems – many or most of which are now, thankfully, fixed in all of the common webserver and proxy software available today – but still, we must be wary of things like:

Host header confusion
If a request includes both a Host: header and an Absolute URI, which is used (the RFC is clear here), and do all systems in the request path agree?

Server-Side Request Forgery (SSRF) attacks
By including special characters (like @) in a URI, can we coerce a proxy into forwarding a request which has been modified in an unexpected fashion?

Password reset attacks
An attacker might be able to abuse the password reset functionality on a legitimate website by manipulating the Host header, causing the website to send a manipulated, malicious password reset link to the victim's user account contact details, thereby tricking the victim into visiting a phishing website rather than the legitimate site.

Web cache poisoning attacks
This is a large and complex topic and relates to much more than just the Host header, but a system which trusts a manipulated Host header may make cache poisoning easier for an attacker to perform.

Malicious redirects
Finally, we arrive at the topic which started this whole article: malicious redirects to an arbitrary destination. Let's dive into that one more deeply than the others…

Redirection via Arbitrary Host Header
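Before dissecting why this finding is usually less scary than it sounds, here is roughly what a scanner's probe and a vulnerable response look like on the wire (a hedged illustration – the attacker hostname is a placeholder):

GET / HTTP/1.1<CRLF>
Host: attacker.example<CRLF>
<CRLF>

HTTP/1.1 302 Found<CRLF>
Location: https://attacker.example/<CRLF>
<CRLF>

The server has taken the attacker-supplied Host value and reflected it straight into the Location header of its own redirect.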
Let's be honest for a moment – the real problem here isn't that you can cause the target system to generate a redirect to an injected host. That's perhaps not ideal, but it doesn't describe any kind of vulnerability on its own; an attacker can't manipulate the Host header on a victim's system (without having already compromised the victim's system in some way) and can't have the reflected, malicious Host header sent to anyone but themselves…

...Unless they can. In the real world, utilizing such a flaw means carrying out one of the other kinds of attack I mentioned earlier; perhaps you can trigger the server to send a redirect (a 302 response with a Location: header) to your arbitrary malicious destination and cause that response to be cached by an intermediate proxy, to be subsequently served to other users? Now you've poisoned a web cache, and anyone you send to the legitimate site via a phishing attack will ultimately be redirected to your malicious domain. Alternatively, the over-trust in the Host header, shown by its use in the response's Location header, might just be a pointer to an attacker, letting the attacker know that they should try to get the vulnerable system to emit the malicious host in other content, like a password reset email.

So, what am I saying? I'm saying that the "Redirection via Arbitrary Host Header Manipulation" result we commonly see in vulnerability scans is not, in and of itself, necessarily something to be alarmed about. An attacker being able to send a manipulated redirect back to themselves is next to useless, but it's a pointer indicating a system might be vulnerable to other attacks that a scanner can't easily determine in an automated fashion. Unfortunately for us, it's also often a PCI audit failure, even if the application architecture isn't vulnerable in a meaningful way.

How do we fix it?

In part, that depends on why you're seeing the problem in the first place, so let's examine some common scenarios:

iRules

It's quite common to redirect from HTTP to HTTPS using an iRule – there's even a built-in iRule on BIG-IP called _sys_https_redirect for that purpose – and without any other checks, the following kind of rule will result in a redirect being generated for whatever host name was received (in other words, you'll get dinged for "Redirection via Arbitrary Host Header Manipulation" on your audit):

when HTTP_REQUEST {
    HTTP::redirect https://[getfield [HTTP::host] ":" 1][HTTP::uri]
}

You could fix this by hard-coding the redirect response, of course, and having a single iRule per target application; that is the most secure option, assuming each virtual server only handles traffic for one application. Something like this (note there is no slash before [HTTP::uri], since the URI already begins with one):

when HTTP_REQUEST {
    HTTP::redirect https://www.example.com[HTTP::uri]
}

If you need to support multiple applications per virtual server, then your next-best option would be to use a Data Group to define the valid allowed hostnames and then only redirect if the incoming Host header matches one of the hosts in the data group – a sketch of that approach follows below. There's an excellent answer for this by Kai Wilke, here: https://community.f5.com/discussions/technicalforum/handling-www-with-host-name-redirects-in-irule/27048/replies/27050
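Here is a minimal sketch of the data-group approach (the data group name allowed_hosts_dg and the reject-on-mismatch behavior are my illustrative assumptions, not taken from Kai's answer):

# Assumes a string-type data group named allowed_hosts_dg
# containing every hostname this virtual server may redirect to
when HTTP_REQUEST {
    set host [string tolower [getfield [HTTP::host] ":" 1]]
    if { [class match $host equals allowed_hosts_dg] } {
        HTTP::redirect "https://$host[HTTP::uri]"
    } else {
        # Unknown Host header: refuse rather than reflect it
        reject
    }
}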
BIG-IP Local Traffic Policies

It is also quite common to use Local Traffic Policies to redirect HTTP requests, for example to perform an HTTP-to-HTTPS redirect in a more performant way than an iRule. You can still achieve safety here by using the same techniques as for iRules: define the redirect rule to only act when expected host names are received, and drop all other traffic – e.g. one rule matching the known http-host values and issuing the redirect, plus a final rule resetting everything else.

BIG-IP Advanced WAF (ASM)

To make preventing this kind of vulnerability incredibly easy, BIG-IP Advanced WAF has a feature called "HTTP redirection protection" which can be configured and enabled on any ASM policy. Configuring it is quite straightforward and is described in K04211103: Configuring HTTP redirection protection; just remember to make sure you have enabled blocking for the policy and enabled Block for the "Illegal redirection attempt" violation under Policy Building -> Learning and Blocking Settings!

NGINX

For NGINX, you just need to be careful when setting up any redirects and use a hard-coded host element rather than taking the resulting hostname from the incoming (potentially attacker-supplied) Host header. In other words, don't do this:

location / {
    return 302 https://$host$request_uri;
}

Do this instead:

location / {
    return 302 https://example.com$request_uri;
}

Something else to point out here – it's very common for administrators to use '$uri' when constructing redirects, but doing so can open you up to header injection and/or response splitting; be sure to use '$request_uri' instead, whenever possible.

That's all for now!

That's all I'm going to cover in this article – there are other ways you can be vulnerable to open redirects (for example, if you take an HTTP parameter and use it to construct a subsequent redirect) which aren't covered here and are a much broader topic. For this article, I chose to concentrate only on the exact report we see across so many PCI audits and vulnerability scans. I will say, though, that BIG-IP Advanced WAF's HTTP redirect protection will protect you against many, if not all, of the other ways you can be vulnerable, because that protection applies to the redirect itself, i.e., to the HTTP response, rather than the request. For that reason (and many, many others), I'd strongly recommend investigating BIG-IP Advanced WAF if you don't already use it! As always, feel free to leave any comments or questions below and I'll try to get back to everyone, and thanks for reading this far!
Help with iRule Proxy

Hi team, I'm working on an iRule where I need to replace the path /admin with the root / and forward the request to the appropriate pool. However, I'm encountering issues with the rule, and it doesn't seem to work as expected. Here's the first version I implemented:

when HTTP_REQUEST {
    if {[string tolower [HTTP::host]] equals "test.com" and [HTTP::path] starts_with "/admin"} {
        HTTP::path [string map -nocase {"/admin" "/"} [HTTP::path]]
        pool POOL-A
        #log local0.info "Client Address --> [IP::client_addr] | Path: [HTTP::path] | Pool: POOL-A"
    } else {
        pool POOL-B
        #log local0.info "Client Address --> [IP::client_addr] | Path: [HTTP::path] | Pool: POOL-B"
    }
}

After some research, I saw that HTTP::path might need to be changed to HTTP::uri. I tried this version:

when HTTP_REQUEST {
    # Log the original URI for debugging
    log local0. "Original URI: [HTTP::uri]"
    # Check if the URI starts with "/admin"
    if {[HTTP::uri] starts_with "/admin"} {
        # Modify the URI by replacing "/admin" with "/"
        set new_uri [string map {"/admin" "/"} [HTTP::uri]]
        HTTP::uri $new_uri
        # Log the modified URI for debugging
        log local0. "Modified URI: [HTTP::uri]"
        # Forward the request to the appropriate pool
        pool POOL-A
    } else {
        # Log default traffic for debugging
        log local0. "Default traffic - URI: [HTTP::uri], Pool: POOL-B"
        # Forward to the default pool
        pool POOL-B
    }
}

Issue: Neither version seems to work. Either the path replacement does not happen as expected, or the replacement does not allow me to reach any subfolders after the root "/" (e.g. /help, etc.), and on those objects we face a 404 Not Found error. Could someone point out what I might be missing, or any best practices for this kind of path manipulation? Thanks!
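One likely culprit in both versions: mapping "/admin" to "/" turns "/admin/help" into "//help" (a double slash), which many backends treat as a different, nonexistent path – consistent with the 404s on subfolders. A minimal, untested sketch of one way to strip the prefix cleanly (pool and host names reused from the question; note it still matches any path merely beginning with "/admin", such as "/administrator"):

when HTTP_REQUEST {
    if { [string tolower [HTTP::host]] equals "test.com" and [HTTP::path] starts_with "/admin" } {
        # Drop the 6-character "/admin" prefix: "/admin/help" becomes "/help"
        set new_path [string range [HTTP::path] 6 end]
        # A bare "/admin" would leave an empty path, so fall back to "/"
        if { $new_path eq "" } { set new_path "/" }
        HTTP::path $new_path
        pool POOL-A
    } else {
        pool POOL-B
    }
}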
Port Translation & HTTPS -> HTTP

System information: F5 BIG-IP r2600, Version 17.1.1.1 Build 0.0.2

Hello everyone,

We would like to implement the following scenario with the F5 BIG-IP: I call https://server.domain.com on port 443. The BIG-IP should then forward to http://server.domain.com on port 55000. Is this even possible? How did you solve it?

Configuration: For port translation, we entered port 443 in the virtual server and gave the pool member port 55000. For HTTPS to HTTP we used the following iRule:

when HTTP_REQUEST {
    # Extract the host and the URI from the HTTPS request
    set host [HTTP::host]
    set uri [HTTP::uri]
    # Redirect the request to the HTTP version of the same URL
    HTTP::respond 301 Location "http://$host$uri"
    log "iRule_HTTP, HTTPS request was redirected to HTTP: $host$uri, ClientIP: [IP::client_addr], ClientPort: [TCP::client_port]"
}

Is the iRule log entry generated before the port translation? The wrong port is in the logs.

Best regards
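For this scenario, a redirect iRule is usually not needed at all: if the goal is for clients to speak HTTPS to the BIG-IP while the pool member listens on plain HTTP port 55000, standard SSL offload plus port translation does the job, and the HTTP_REQUEST event (and its log line) fires on the client side, before any translation takes place. A rough sketch (addresses and object names are invented for illustration; exact tmsh syntax may vary by version):

# Pool member listens on plain HTTP port 55000
tmsh create ltm pool pool_server_55000 members add { 10.0.0.10:55000 }
# Virtual server terminates TLS on 443; with no server-ssl profile,
# the server side stays plain HTTP
tmsh create ltm virtual vs_server_443 destination 198.51.100.10:443 ip-protocol tcp profiles add { http clientssl } pool pool_server_55000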
What is HTTP Part X - HTTP/2

In the penultimate article in this What is HTTP? series we covered iRules and local traffic policies and the power they can unleash on your HTTP traffic. To date in this series, the content has focused primarily on HTTP/1.1, as that is still the predominant industry standard. But make no mistake, HTTP/2 is here and here to stay, garnering 30% of all website traffic and climbing steadily. In this article, we'll discuss the problems in HTTP/1.1 addressed in HTTP/2 and how BIG-IP supports the major update.

What's So Wrong with HTTP/1.1?

It's obviously a pretty good standard since it's lasted as long as it has, right? So what's the problem? Well, let's set security aside for this article, since the HTTP/2 committee pretty much punted on it anyway, and let's instead talk about performance. Keep in mind that the foundational constructs of the HTTP protocol come from the internet equivalent of the Jurassic age, where the primary function was to get and post text objects. As the functionality stretched from static sites to dynamic, interactive, and real-time applications, the underlying protocols didn't change much to support this departure. That said, the two big issues with HTTP/1.1 as far as performance goes are repetitive metadata and head of line blocking. HTTP was designed to be stateless. As such, all applicable metadata is sent on every request and response, which adds anywhere from minimal to a grotesque amount of overhead.

Head of Line Blocking

For HTTP/1.1, this phenomenon occurs because each request needs a completed response before a client can make another request. Browser hacks to get around this problem involved increasing the number of TCP connections allowed to each host from one to two, and currently to six, as you can see in the image below. More connections, more objects, right? Well, yeah, but you still deal with the overhead of all those connections, and as the number of objects per page continues to grow, the scale doesn't make sense. Other hacks on the server side include things like domain sharding, where you create the illusion of many hosts so the browser creates more connections. This still presents a scale problem eventually. Pipelining was a thing as well, allowing for parallel connections and the utopia of improved performance. But as it turns out, it was not a good thing at all, proving quite difficult to implement properly and brittle at that, resulting in a grand total of ZERO major browsers actually supporting it.

Radical Departures - The Big Changes in HTTP/2

HTTP/2 still has the same semantics as HTTP/1. It still has request/response, headers in key/value format, a body, etc. And the great thing for clients is the browser handles the wire protocols, so there are no compatibility issues on that front. There are many improvements and feature enhancements in the HTTP/2 spec, but we'll focus here on a few of the major changes. John recorded a Lightboard Lesson a while back on HTTP/2 with an overview of more of the features not covered here.

From Text to Binary

With HTTP/2 comes a new binary framing layer, doing away with the text-based roots of HTTP. As I said, the semantics of HTTP are unchanged, but the way they are encapsulated and transferred between client and server changes significantly. Instead of a text message with headers and body in tow, there are clear delineations for headers and data, transferred in isolated binary-encoded frames (photo courtesy of Google). Client and server need to understand this new wire format in order to exchange messages, but the applications need not change to utilize the core HTTP/2 changes. For backwards compatibility, all client connections begin as HTTP/1 requests with an upgrade header indicating to the server that HTTP/2 is possible. If the server can handle it, a 101 response to switch protocols is issued by the server; if it can't, the header is simply ignored and the interaction will remain on HTTP/1. You'll note in the picture above that TLS is optional, and while that's true to the letter of the RFC law (see my punting-on-security comment earlier), the major browsers have not implemented it as optional, so if you want to use HTTP/2, you'll most likely need to do it with encryption.
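As a hedged illustration, that cleartext upgrade dance looks roughly like this (the h2c token only applies without TLS; over TLS, ALPN negotiates the protocol during the handshake instead):

GET / HTTP/1.1
Host: www.example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c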
Multiplexed Streams

HTTP/2 solves the HTTP/1.1 head of line problem by multiplexing requests over a single TCP connection. This allows clients to make multiple requests of the server without requiring a response to earlier requests. Responses can arrive in any order, as the streams all have identifiers (photo courtesy of Google). Compare the image below of an HTTP/2 request to the one from the HTTP/1.1 section above. Notice two things: 1) the reduction of TCP connections from six to one, and 2) the concurrency of all the objects being requested. In the brief video below, I toggle back and forth between HTTP/1.1 and HTTP/2 requests at increasing latencies, thanks to a demo tool on golang.org, and show the associated reductions in page load experience as a result. Even at very low latency there is an incredible efficiency in making the switch to HTTP/2. This one change obviates the need for many of the hacks in place for HTTP/1.1 deployments.

One thing to note on head of line blocking: TCP actually becomes a stumbling block for HTTP/2 due to its congestion control algorithms. If there is any packet loss in the TCP connection, the retransmit has to be processed before any of the other streams are managed, effectively halting all traffic on that connection. Protocols like QUIC are being developed to ride the UDP wave and overcome some of the limitations in TCP holding back even better performance in HTTP/2.

Header Compression

Given that headers and data are now isolated by frame types, the headers can be compressed independently, and there is a new compression utility specifically for this called HPACK. This occurs at the connection level. The improvements are two-fold. First, the header fields are encoded using Huffman coding, thus reducing their transfer size. Second, the client and server maintain a table of previous headers that is indexed. This table has static entries that are pre-defined for common HTTP headers, plus dynamic entries added as headers are seen. Once dynamic entries are present in the table, the index for that dynamic entry will be passed instead of the header values themselves (photo courtesy of amphinicy.com).
BIG-IP Support

F5 introduced the HTTP/2 profile in 11.6 as an early access feature, but it hit general availability in 12.0. The BIG-IP implementation supports HTTP/2 as a gateway, meaning that all your clients can interact with the BIG-IP over HTTP/2, but server-side traffic remains HTTP/1.1. Applying the profile also requires the HTTP and clientssl profiles. If using the GUI to configure the virtual server, the HTTP/2 Profile field will be grayed out until you select an HTTP profile. It will let you try to save at that point even without a clientssl profile, but will complain when saving:

01070734:3: Configuration error: In Virtual Server (/Common/h2testvip) http2 specified activation mode requires a client ssl profile

As far as the profile itself is concerned, the fields available for configuration are shown in the image below. Most of the fields are pretty self-explanatory, but I'll discuss a few of them briefly.

- Insert Header - this field allows you to configure a header to inform the HTTP/1.1 server on the back end that the front-end connection is HTTP/2.
- Activation Modes - The options here are to restrict modes to ALPN only, which would then allow HTTP/1.1 or negotiate to HTTP/2, or Always, which tells BIG-IP that all connections will be HTTP/2.
- Receive Window - We didn't cover the flow control functionality in HTTP/2, but this setting sets the level (HTTP/2 v3+) at which individual streams can be stalled.
- Write Size - This is the size of the data frames in bytes that HTTP/2 will send in a single write operation. A larger size will improve network utilization at the expense of increased buffering of the data.
- Header Table Size - This is the size of the indexed static/dynamic table that HPACK uses for header compression. A larger table size will improve compression, but at the expense of memory.
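For the CLI-inclined, here's a rough sketch of creating a custom HTTP/2 profile and attaching it alongside the required HTTP and client SSL profiles (object names are invented, and the property names are my assumptions mirroring the GUI fields above – verify against your TMOS version's tmsh reference):

# Custom HTTP/2 profile: ALPN-only activation, plus a header to tell
# the back-end HTTP/1.1 server that the client connection was HTTP/2
tmsh create ltm profile http2 h2_gateway activation-modes { alpn } insert-header enabled insert-header-name X-HTTP2-Client
# Attach it to an existing virtual server along with http and clientssl
tmsh modify ltm virtual vs_h2_example profiles add { http clientssl h2_gateway }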
In this article, we covered the basics of the major benefits of HTTP/2. There are more optimizations and features to explore, such as server push, which is not yet supported by BIG-IP. You can read about many of those features in this very excellent article on Google's developers portal, where some of the images in this article came from.

Revocation Status in HTTP Request Header

I'm setting up a web app that will use the EDIPI to validate my users' accounts. I think I have a working understanding of how that'll work – I'm going to be setting up an iRule to forward the user's EDIPI to the server (see here). It dawned on me that I'm not really sure how that process works with the revocation status. If their CAC is revoked, will CLIENTSSL_HANDSHAKE or HTTP_REQUEST_RELEASE fire? I'm picturing still getting their EDIPI off the CAC and setting that in the header, but also getting their revocation status and putting a yes/no in the header for "x-revoked". I could easily then check that in my server code. I believe that's how that works with Cloud 1. Is that the way I'd do that, or would the best practice be to just not send their request at all somehow?
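A minimal sketch of the header-insertion side of this (the header names are invented for illustration, and whether a revoked certificate even completes the handshake depends on how OCSP/CRL checking is configured on the client SSL profile – if verification fails there, none of these HTTP events fire at all):

when HTTP_REQUEST {
    if { [SSL::cert count] > 0 } {
        # The subject DN carries the EDIPI on a CAC; parse it out server-side
        HTTP::header insert "X-Client-Cert-Subject" [X509::subject [SSL::cert 0]]
        # 0 means the presented chain passed the configured verification
        HTTP::header insert "X-Cert-Verify-Result" [SSL::verify_result]
    }
}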
ICAP with iRule Response Page without ASM

Hello, We are running on BIG-IP 13.1.1.4 TMOS code and set up Content Adaptation for HTTP requests to check files uploaded through one of our websites using ICAP. It's working fine, but if any virus is detected, the ICAP server modifies the response and shows its own response page. We would instead like to redirect the end user to a dedicated, corporate web page on our website. I prepared the iRule below, but it's not working:

when ADAPT_REQUEST_RESULT {
    if { ([ADAPT::result] contains "respond") } {
        log local0. "ICAP Response is [ADAPT::result], let's customize reject page"
        set response {
            <html>
            <head>
            <title>Virus Detected</title>
            <meta http-equiv="refresh" content="0;URL='https://int-www-01.citizensfla.com/virus-test'" />
            </head>
            </html>
        }
        HTTP::header remove Content-Length
        #HTTP::payload replace 0 [HTTP::payload length] ""
        HTTP::payload replace 0 0 $response
    }
}

How could we redirect the user's POST to a dedicated page within our website if a virus is found, using the ICAP internal virtual server? Many thanks in advance for any help on this matter.

Regards, Vijay
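One hedged, untested idea: rather than rewriting the payload the ICAP server already modified, tell the ADAPT layer to skip the adapted response and answer the client yourself with a redirect. ADAPT::result can be set to bypass/abort/continue per the iRule docs, but the interaction with HTTP::respond in this event is an assumption worth verifying in a lab first:

when ADAPT_REQUEST_RESULT {
    if { [ADAPT::result] contains "respond" } {
        # Discard the ICAP-provided block page...
        ADAPT::result bypass
        # ...and send our own 302 to the corporate virus page instead
        HTTP::respond 302 noserver Location "https://int-www-01.citizensfla.com/virus-test" Connection Close
    }
}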
Rewrite http:// to https:// in response content

Problem this snippet solves: (Maybe I missed it, but) I didn't see a code share for using a STREAM profile to rewrite content from http to https. This share is just to make it easier to find a simple iRule to replace http:// links in page content with https://. It's taken directly from the STREAM::expression wiki page.

How to use this snippet: You'll need to assign a STREAM profile to your virtual server in order for this to work (just create an empty stream profile and assign it).

Code:

# Example which replaces http:// with https:// in response content
# Prevents server compression in responses
when HTTP_REQUEST {
    # Disable the stream filter for all requests
    STREAM::disable
    # LTM does not uncompress response content, so if the server has compression enabled
    # and it cannot be disabled on the server, we can prevent the server from
    # sending a compressed response by removing the compression offerings from the client
    HTTP::header remove "Accept-Encoding"
}
when HTTP_RESPONSE {
    # Check if response type is text
    if {[HTTP::header value Content-Type] contains "text"}{
        # Replace http:// with https://
        STREAM::expression {@http://@https://@}
        # Enable the stream filter for this response only
        STREAM::enable
    }
}

Tested this on version: 11.5
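A quick sketch of the setup step described above (object names are invented): create an empty stream profile, then attach both it and the iRule to the virtual server:

tmsh create ltm profile stream stream_empty
tmsh modify ltm virtual vs_example profiles add { stream_empty }
tmsh modify ltm virtual vs_example rules { rewrite_http_to_https }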
Persist On Last JSESSIONID in HTTP Parameter

Problem this snippet solves: This iRule was written with the goal of persisting on only the last jsessionid present when multiple jsessionid HTTP parameters appear in the request.

How to use this snippet: I ran into a particular challenge where a customer was receiving requests in which a jsessionid was expected to be set in an HTTP request path parameter. For those of you who are not familiar, here is where the parameters lie within a URL:

<scheme>://<username>:<password>@<host>:<port>/<path>;<parameters>?<query>#<fragment>

REF: http://www.skorks.com/2010/05/what-every-developer-should-know-about-urls/

Here is an example of a full request:

http://www.example.com/my/resource.html;jsessionid=0123456789abcdef;jsessionid=0123456789abcdef?alice=mouse

In the above example, there are two jsessionid parameters, both of which have the same value. This is what was being used to persist:

"*jsessionid*" {
    # Parse the URI and look for jsessionid. Skip 11 characters and match up to the next "?"
    set session_id [findstr [HTTP::uri] jsessionid 11 "?"]
}

Code:

when CLIENT_ACCEPTED {
    set debug 1
}
when HTTP_REQUEST {
    set logTuple "[IP::client_addr]:[TCP::client_port] - [IP::local_addr]:[TCP::local_port]"
    set parameters [findstr [HTTP::path] ";" 1]
    set session_id ""
    foreach parameter [split $parameters ";"] {
        scan $parameter {%[^=]=%s} name value
        if {$debug}{log local0.debug "$logTuple :: Multiple JsessionID in: [HTTP::host][HTTP::uri]"}
        if {$name equals "jsessionid"} {set session_id $value}
    }
    if {$session_id ne ""}{
        # Persist on the parsed session ID for 86400 seconds (24 hours)
        if {$debug}{log local0.debug "$logTuple :: Single JsessionID in: [HTTP::host][HTTP::uri]"}
        persist uie $session_id 86400
    }
}
Tightening the Security of HTTP Traffic part 1

Summary

HTTP is the de facto protocol used to communicate over the internet. The security challenges that exist today were not considered when the HTTP protocol was designed. Web browsers are generally used to interact with web applications, but in most deployments this interaction is not secured, despite the existence of a number of HTTP headers that application developers can leverage to tighten the security of their applications. In this article, I will give an overview of some important headers that can be added to HTTP responses in order to improve the security of web applications. I will also provide sample iRules that can be implemented on F5 BIG-IP to insert those headers into HTTP responses.

Content

This article has been made into a series of 3 parts to make it easier to read and digest. The following headers will be reviewed in the different parts of this article:

Part 1:
- A brief overview of Cross-Site Scripting (XSS) vulnerability
- X-XSS-Protection
- HttpOnly flag for cookies
- Secure flag for cookies

Part 2:
- The X-Frame-Options Header
- HTTP Strict Transport Security
- X-Content-Type-Options
- Content-Security-Policy
- Public Key Pinning Extension for HTTP (HPKP)

Part 3:
- Server and X-Powered-By
- Cookie encryption
- Https (SSL/TLS)

A brief overview of XSS Attacks

Before going further into the details of the headers we need to implement to improve the security of HTTP traffic, it is important to quickly review the Cross-Site Scripting (XSS) attack, which is the most prevalent web application security flaw. This is important because any of the security measures we will review may be completely useless if an XSS flaw exists on the application. The security measures proposed in this article rely on the browser not being infected by any kind of malware (including XSS) and working properly.

A Cross-Site Scripting (XSS) attack is a type of injection attack where a malicious script is injected into a trusted but vulnerable web application. The malicious script is sent to the victim's browser as client-side script when the user uses the vulnerable web app. XSS flaws are generally caused by poor input validation or encoding by web application developers. An XSS attack can load malware (JavaScript) into the browser and make it behave in a completely unpredictable manner, thus making all other protection mechanisms useless. Cross-Site Scripting was ranked third in the OWASP 2013 top 10 security vulnerabilities. Detailed information on XSS can be found on the OWASP website.

1. The X-XSS-Protection header

The X-XSS-Protection header enables the Cross-Site Scripting filter in the browser. When enabled, the XSS filter operates as a browser component with visibility into all requests/responses flowing through the browser. When the filter detects a likely XSS attack in a request or response, it prevents the malicious script from executing. The X-XSS-Protection header is supported by most major browsers.

Usage:
- X-XSS-Protection: 0 — the XSS protection filter is disabled (facebook)
- X-XSS-Protection: 1 — the XSS protection filter is enabled. Upon XSS attack detection, the browser will sanitize the page.
- X-XSS-Protection: 1; mode=block — if an XSS attack is detected, the browser blocks the page in order to stop the attack. (Google, Twitter)

Note: If third-party scripts are allowed to run on your web application, enabling XSS protection might prevent some scripts from working properly. As such, proper consideration should be given before enabling the header.
1.1 Example of how this header is used

The following command can be used to see which HTTP headers are enabled on a web application. It sends a single HTTP request to the destination application and follows redirection; therefore it is completely harmless and is only used for educational purposes in this article. I will use the same command throughout the article against some well-known sites that are built to high security standards, in order to demonstrate how the discussed headers are used in real life.

curl -L -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" -sD - https://www.google.com -o /dev/null

HTTP/1.1 200 OK
Date: Mon, 07 Aug 2017 11:24:08 GMT
Expires: -1
Cache-Control: private, max-age=0
Server: gws
X-XSS-Protection: 1; mode=block
[...]

1.2 iRule to enable XSS-Protection

The following iRule can be used to insert the X-XSS-Protection header into all HTTP responses:

when HTTP_RESPONSE {
    if { !([HTTP::header exists "X-XSS-Protection"]) } {
        HTTP::header insert "X-XSS-Protection" "1; mode=block"
    }
}

2. HttpOnly flag for cookies

When added to cookies in HTTP responses, the HttpOnly flag prevents client-side scripts from accessing cookies. An existing cross-site scripting flaw on the web application cannot be used to steal a cookie carrying this flag. The HttpOnly flag relies on the browser, when the flag is set by the server, to prevent such attacks. This flag is supported by all major browsers in their latest versions. It was introduced in 2002 by Microsoft in IE6 SP1.

Note: Using an older browser exposes you to higher security risk, as older browsers may not support some of the security headers discussed in this article.

2.1 Example

curl -L -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" -sD - https://www.google.com -o /dev/null

HTTP/1.1 200 OK
Date: Wed, 16 Aug 2017 19:34:36 GMT
Expires: -1
[..]
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=110=rKlFElgqL3YuhyKtxcIr0PiSE; expires=Thu, 15-Feb-2018 19:34:36 GMT; path=/; domain=.google.com; HttpOnly
Alt-Svc: quic=":443"; ma=2592000; v="39,38,37,35"
[...]

2.2 iRule to add the HttpOnly flag to all cookies in HTTP responses

If added to a BIG-IP LTM virtual server, the following iRule will add the HttpOnly flag to all cookies in HTTP responses.

when HTTP_RESPONSE {
    foreach mycookie [HTTP::cookie names] {
        HTTP::cookie httponly $mycookie enable
    }
}

3. Secure flag for cookies

The purpose of the Secure flag is to prevent cookies from being observed by unauthorized parties due to transmission of the cookie in clear text. When a cookie has the Secure flag attribute, browsers that support the flag will only send the cookie within a secure session (HTTPS). This means that if the web application accidentally points to a hard-coded http link, the cookie will not be sent. Enabling the Secure flag for cookies is common and recommended.

3.1 Example

curl -L -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0" -s -D - https://www.f5.com -o /dev/null

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
[…]
Set-Cookie: BIGipServer=!M+eTuU0mHEu4R8mBiet1HZsOy/41nDC5VnWBAlE5bvU0446qCYN4w/jKbf2+U8d8EVe+BxHFZ5+UJYg=; path=/; Httponly;Secure
Strict-Transport-Security: max-age=16070400
[…]

3.2 Setting the secure flag with iRule

If added to a BIG-IP LTM virtual server, the following iRule will add the Secure flag to all cookies in HTTP responses.
when HTTP_RESPONSE {
    foreach mycookie [HTTP::cookie names] {
        HTTP::cookie secure $mycookie enable
    }
}
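To close out part 1, here's a consolidated sketch combining the three protections above into a single iRule (the combination is my own suggestion; each piece comes straight from the snippets in this article):

when HTTP_RESPONSE {
    # Enable the browser XSS filter unless the application already sets the header
    if { !([HTTP::header exists "X-XSS-Protection"]) } {
        HTTP::header insert "X-XSS-Protection" "1; mode=block"
    }
    # Mark every cookie HttpOnly (no script access) and Secure (HTTPS only)
    foreach mycookie [HTTP::cookie names] {
        HTTP::cookie httponly $mycookie enable
        HTTP::cookie secure $mycookie enable
    }
}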