Layer 4 vs Layer 7 DoS Attack
Not all DoS (Denial of Service) attacks are the same. While the end result is to consume as much - hopefully all - of a server or site's resources so that legitimate users are denied service (hence the name), there is a subtle difference in how these attacks are perpetrated that makes one easier to stop than the other.

SYN Flood

A Layer 4 DoS attack is often referred to as a SYN flood. It works at the transport protocol (TCP) layer. A TCP connection is established in what is known as a 3-way handshake. The client sends a SYN packet, the server responds with a SYN ACK, and the client responds to that with an ACK. After the "three-way handshake" is complete, the TCP connection is considered established. It is at this point that applications begin sending data using a Layer 7 or application layer protocol, such as HTTP.

A SYN flood uses the inherent patience of the TCP stack to overwhelm a server by sending a flood of SYN packets and then ignoring the SYN ACKs returned by the server. This causes the server to use up resources waiting a configured amount of time for the anticipated ACK that should come from a legitimate client. Because web and application servers are limited in the number of concurrent TCP connections they can have open, if an attacker sends enough SYN packets to a server it can easily chew through the allowed number of TCP connections, thus preventing legitimate requests from being answered by the server.

SYN floods are fairly easy for proxy-based application delivery and security products to detect. Because they proxy connections for the servers, and are generally hardware-based with a much higher TCP connection limit, the proxy-based solution can handle the high volume of connections without becoming overwhelmed. Because the proxy-based solution is usually terminating the TCP connection (i.e. it is the "endpoint" of the connection) it will not pass the connection to the server until it has completed the 3-way handshake. Thus, a SYN flood is stopped at the proxy and legitimate connections are passed on to the server with alacrity. The attackers are generally stopped from flooding the network through the use of SYN cookies. SYN cookies utilize cryptographic hashing and are therefore computationally expensive, making it desirable to let a proxy/delivery solution with hardware-accelerated cryptographic capabilities handle this type of security measure. Servers can implement SYN cookies, but the additional burden placed on the server negates much of the gain achieved by preventing SYN floods and often results in available but unacceptably slow servers and sites.

HTTP GET DoS

A Layer 7 DoS attack is a different beast, and it's more difficult to detect. A Layer 7 DoS attack is often perpetrated through the use of HTTP GET. This means that the 3-way TCP handshake has been completed, thus fooling devices and solutions which are only examining layer 4 and TCP communications. The attacker looks like a legitimate connection, and is therefore passed on to the web or application server. At that point the attacker begins requesting large numbers of files/objects using HTTP GET. They are generally legitimate requests; there are just a lot of them. So many, in fact, that the server quickly becomes focused on responding to those requests and has a hard time responding to new, legitimate requests.
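What makes an HTTP GET flood hard to spot is that every request is well formed; the only signal is volume per client over time. Here is a minimal sketch of that counting logic - the class name, threshold, and window are purely illustrative, not any particular product's implementation:

```python
import time
from collections import defaultdict, deque

class RequestRateTracker:
    """Count requests per client IP over a sliding window (illustrative only)."""

    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests      # how many GETs we tolerate per window
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)     # client_ip -> timestamps of recent requests

    def allow(self, client_ip):
        now = time.monotonic()
        stamps = self.history[client_ip]
        while stamps and now - stamps[0] > self.window_seconds:
            stamps.popleft()                  # forget requests that aged out of the window
        if len(stamps) >= self.max_requests:
            return False                      # legitimate-looking, but far too many of them
        stamps.append(now)
        return True

tracker = RequestRateTracker()
if not tracker.allow("203.0.113.7"):
    print("rate limit exceeded; drop or blacklist this client")
```

Of course, counting per source IP only works while the flood comes from a handful of sources.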
When rate-limiting was used to stop this type of attack, the bad guys moved to using a distributed system of bots (zombies) to ensure that the requests (the attack) were coming from myriad IP addresses and were therefore not only more difficult to detect, but more difficult to stop. The attacker uses malware and trojans to deposit a bot on servers and clients, and then remotely includes them in his attack by instructing the bots to request a list of objects from a specific site or server. The attacker might not use bots at all, but instead might gather enough evil friends to launch an attack against a site that has annoyed them for some reason.

Layer 7 DoS attacks are more difficult to detect because the TCP connection is valid and so are the requests. The trick is to realize when there are multiple clients requesting large numbers of objects at the same time and to recognize that it is, in fact, an attack. This is tricky because there may very well be legitimate requests mixed in with the attack, which means a "deny all" philosophy will result in the very situation the attackers are trying to force: a denial of service.

Defending against Layer 7 DoS attacks usually involves some sort of rate-shaping algorithm that watches clients and ensures that they request no more than a configurable number of objects per time period, usually measured in seconds or minutes. If the client requests more than the configurable number, the client's IP address is blacklisted for a specified time period and subsequent requests are denied until the address has been freed from the blacklist. Because this can still affect legitimate users, layer 7 firewall (application firewall) vendors are working on ways to get smarter about stopping layer 7 DoS attacks without affecting legitimate clients. It is a subtle dance and requires a bit more understanding of the application and its flow, but if implemented correctly it can improve the ability of such devices to detect and prevent layer 7 DoS attacks from reaching web and application servers and taking a site down.

The goal of deploying an application firewall or proxy-based application delivery solution is to ensure the fast and secure delivery of an application. By preventing both layer 4 and layer 7 DoS attacks, such solutions allow servers to continue serving up applications without the degradation in performance that comes from dealing with those attacks.

The Disadvantages of DSR (Direct Server Return)
I read a very nice blog post yesterday discussing some of the traditional pros and cons of load-balancing configurations. The author comes to the conclusion that if you can use direct server return, you should. I agree with the author's list of pros and cons; DSR is the least intrusive method of deploying a load balancer in terms of network configuration. But there are quite a few disadvantages missing from the author's list.

Author's List of Disadvantages of DSR

The disadvantages of Direct Routing are:
- The backend server must respond to both its own IP (for health checks) and the virtual IP (for load balanced traffic).
- Port translation or cookie insertion cannot be implemented.
- The backend server must not reply to ARP requests for the VIP (otherwise it will steal all the traffic from the load balancer).
- Prior to Windows Server 2008 some odd routing behavior could occur.
- In some situations either the application or the operating system cannot be modified to utilise Direct Routing.

Some additional disadvantages:

Protocol sanitization can't be performed. This means vulnerabilities introduced due to manipulation or lax enforcement of RFCs and protocol specifications can't be addressed.

Application acceleration can't be applied. Even the simplest of acceleration techniques, e.g. compression, can't be applied because the traffic is bypassing the load balancer (a.k.a. application delivery controller).

Implementing caching solutions becomes more complex. With a DSR configuration, the routing that makes it so easy to implement requires that caching solutions be deployed elsewhere, such as via WCCP on the router. This requires additional configuration and changes to the routing infrastructure, and introduces another point of failure as well as an additional hop, increasing latency.

Error/Exception/SOAP fault handling can't be implemented. In order to address failures in applications such as missing files (404) and SOAP Faults (500) it is necessary for the load balancer to inspect outbound messages. In a DSR configuration this ability is lost, which means errors are passed directly back to the user without the ability to retry a request, write an entry in the log, or notify an administrator.

Data Leak Prevention can't be accomplished. Without the ability to inspect outbound messages, you can't prevent sensitive data (SSNs, credit card numbers) from leaving the building.

Connection Optimization functionality is lost. TCP multiplexing can't be accomplished in a DSR configuration because it relies on separating client connections from server connections. This reduces the efficiency of your servers and minimizes the value added to your network by a load balancer.

There are more disadvantages than you're likely willing to read, so I'll stop there. Suffice it to say that the problem with the suggestion to use DSR whenever possible is that if you're an application-aware network administrator you know that most of the time, DSR isn't the right solution, because it restricts the ability of the load balancer (application delivery controller) to perform additional functions that improve the security, performance, and availability of the applications it is delivering. DSR is well suited, and always has been, to UDP-based streaming applications such as audio and video delivered via RTSP. However, in the increasingly sensitive environment that is application infrastructure, it is necessary to do more than just "load balancing" to improve the performance and reliability of applications.
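To make one of those trade-offs concrete: response-side features such as data leak prevention only work because the proxy sees the response on its way back to the client. A rough sketch of that kind of outbound scrubbing follows (purely illustrative; a real ADC does this in optimized, configurable form):

```python
import re

# Crude pattern for one kind of data that should never leave the building.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_response(body: str) -> str:
    """Mask SSN-like strings in an outbound response body (illustrative only)."""
    return SSN_PATTERN.sub("***-**-****", body)

print(scrub_response("Customer SSN on file: 123-45-6789"))
# prints: Customer SSN on file: ***-**-****
# In a DSR deployment the response never comes back through the proxy,
# so there is nowhere to apply this kind of outbound inspection.
```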
Additional application delivery techniques are an integral component of a well-performing, efficient application infrastructure. DSR may be easier to implement and, in some cases, may be the right solution. But in most cases it's going to leave you simply serving applications instead of delivering them. Just because you can doesn't mean you should.

Persistent and Persistence, What's the Difference?
The English language is one of the most expressive, and confusing, in existence. Words can have different meanings based not only on context, but on placement within a given sentence. Add in the twists that come from technical jargon and suddenly you've got words meaning completely different things. This is evident in the use of persistent and persistence. While the conceptual basis of persistence and persistent are essentially the same, in reality they refer to two different technical concepts.

Both persistent and persistence relate to the handling of connections. The former is often used as a general description of the behavior of HTTP and, necessarily, TCP connections, though it is also used in the context of database connections. The latter is most often related to TCP/HTTP connection handling but almost exclusively in the context of load balancing.

Persistent

Persistent connections are connections that are kept open and reused. The most commonly implemented form of persistent connections is HTTP, with database connections a close second. Persistent HTTP connections were implemented as part of the HTTP 1.1 specification as a method of improving the efficiency of HTTP in general. Before HTTP 1.1 a browser would generally open one connection per object on a page in order to retrieve all the appropriate resources. As the number of objects in a page grew, this became increasingly inefficient and significantly reduced the capacity of web servers while causing browsers to appear slow to retrieve data. HTTP 1.1 and the Keep-Alive header in HTTP 1.0 were aimed at improving the performance of HTTP by reusing TCP connections to retrieve objects. They made the connections persistent such that they could be reused to send multiple HTTP requests using the same TCP connection.

Related links: HTTP 1.1 RFC; Persistent Connection Behavior of Popular Browsers; Persistent Database Connections; Apache Keep-Alive Support; Cookies, Sessions, and Persistence.

Similarly, this notion was implemented by proxy-based load balancers as a way to improve performance of web applications and increase capacity on web servers. Persistent connections between a load balancer and web servers are usually referred to as TCP multiplexing. Just like browsers, the load balancer opens a few TCP connections to the servers and then reuses them to send multiple HTTP requests.

Persistent connections, both in browsers and load balancers, have several advantages:
- Less network traffic due to less TCP setup/teardown. It requires no less than 7 exchanges of data to set up and tear down a TCP connection, so each connection that can be reused reduces the number of exchanges required, resulting in less traffic.
- Improved performance. Because subsequent requests do not need to set up and tear down a TCP connection, requests arrive faster and responses are returned more quickly. TCP has built-in mechanisms, for example window sizing, to address network congestion. Persistent connections give TCP the time to adjust itself appropriately to current network conditions, thus improving overall performance. Non-persistent connections are not able to adjust because they are opened and almost immediately closed.
- Less server overhead. Servers are able to increase the number of concurrent users served because each user requires fewer connections through which to complete requests.
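The effect is easy to see from the client side: one TCP connection, several requests. A minimal sketch using Python's standard library (example.com is just a placeholder host):

```python
import http.client

# One TCP connection, reused for several HTTP/1.1 requests (keep-alive is the default).
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
for path in ("/", "/style.css", "/logo.png"):
    conn.request("GET", path)
    response = conn.getresponse()
    response.read()              # drain the body before reusing the connection
    print(path, response.status)
conn.close()
```

A load balancer doing TCP multiplexing applies the same idea on the server side of the proxy: a small pool of kept-open connections reused for many clients' requests.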
Persistence

Persistence, on the other hand, is related to the ability of a load balancer or other traffic management solution to maintain a virtual connection between a client and a specific server. Persistence is often referred to in the application delivery networking world as "stickiness" while in the web and application server demesne it is called "server affinity". Persistence ensures that once a client has made a connection to a specific server, subsequent requests are sent to the same server. This is very important for maintaining state and session-specific information in some application architectures and for handling of SSL-enabled applications.

Examples of persistence: Hash Load Balancing and Persistence; LTM Source Address Persistence; Enabling Session Persistence; 20 Lines or Less #7: JSessionID Persistence.

When the first request is seen by the load balancer it chooses a server. On subsequent requests the load balancer will automatically choose the same server to ensure continuity of the application or, in the case of SSL, to avoid the compute-intensive process of renegotiation. This persistence is often implemented using cookies but can be based on other identifying attributes such as IP address. Load balancers that have evolved into application delivery controllers are capable of implementing persistence based on any piece of data in the application message (payload), headers, or in the transport protocol (TCP) and network protocol (IP) layers.

Some advantages of persistence are:
- Avoiding renegotiation of SSL. By ensuring that SSL-enabled connections are directed to the same server throughout a session, it is possible to avoid renegotiating the keys associated with the session, which is compute and resource intensive. This improves performance and reduces overhead on servers.
- No need to rewrite applications. Applications developed without load balancing in mind may break when deployed in a load-balanced architecture because they depend on session data that is stored only on the original server on which the session was initiated. Load balancers capable of session persistence ensure that those applications do not break by always directing requests to the same server, preserving the session data without requiring that applications be rewritten.

To Summarize

So persistent connections are connections that are kept open so they can be reused to send multiple requests, while persistence is the process of ensuring that connections and subsequent requests are sent to the same server through a load balancer or other proxy device. Both are important facets of communication between clients, servers, and mediators like load balancers, and both increase the overall performance and efficiency of the infrastructure as well as improving the end-user experience.

HTTP Pipelining: A security risk without real performance benefits
Everyone wants web sites and applications to load faster, and there’s no shortage of folks out there looking for ways to do just that. But all that glitters is not gold, and not all acceleration techniques actually do all that much to accelerate the delivery of web sites and applications. Worse, some actually incur risk in the form of leaving servers open to exploitation.

A BRIEF HISTORY

Back in the day when HTTP was still evolving, someone came up with the concept of persistent connections. See, in ancient times – when administrators still wore togas in the data center – HTTP 1.0 required one TCP connection for every object on a page. That was okay, until pages started comprising ten, twenty, and more objects. So someone added an HTTP header, Keep-Alive, which basically told the server not to close the TCP connection until (a) the browser told it to or (b) it didn’t hear from the browser for X number of seconds (a time out). This eventually became the default behavior when HTTP 1.1 was written and became a standard. I told you it was a brief history.

This capability is known as a persistent connection, because the connection persists across multiple requests. This is not the same as pipelining, though the two are closely related. Pipelining takes the concept of persistent connections and then ignores the traditional request–reply relationship inherent in HTTP and throws it out the window. The general line of thought goes like this: “Whoa. What if we just shoved all the requests from a page at the server and then waited for them all to come back rather than doing it one at a time? We could make things even faster!” Tada! HTTP pipelining.

In technical terms, HTTP pipelining is initiated by the browser by opening a connection to the server and then sending multiple requests to the server without waiting for a response. Once the requests are all sent, the browser starts listening for responses. The reason this is considered an acceleration technique is that by shoving all the requests at the server at once you essentially save the RTT (Round Trip Time) that would otherwise be spent waiting for a response after each request is sent.

WHY IT JUST DOESN’T MATTER ANYMORE (AND MAYBE NEVER DID)

Unfortunately, pipelining was conceived of and implemented before broadband connections were widely utilized as a method of accessing the Internet. Back then, the RTT was significant enough to have a negative impact on application and web site performance, and the overall user experience was improved by the use of pipelining. Today, however, most folks have a comfortable speed at which they access the Internet and the RTT impact on most web applications’ performance, despite the increasing number of objects per page, is relatively low. There is no arguing, however, that some reduction in time to load is better than none. Too, anyone who’s had to access the Internet via high-latency links can tell you anything that makes that experience faster has got to be a Good Thing.

So what’s the problem? The problem is that pipelining isn’t actually treated any differently on the server than regular old persistent connections. In fact, the HTTP 1.1 specification requires that a “server MUST send its responses to those requests in the same order that the requests were received.” In other words, the responses are returned serially, despite the fact that some web servers may actually process those requests in parallel.
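At the wire level, pipelining is nothing more than writing the requests back-to-back and then reading the responses, which - per the specification just quoted - come back in request order. A bare-bones sketch (example.com is a placeholder; most modern servers and browsers ship with pipelining disabled, so treat this purely as an illustration):

```python
import socket

# Three requests written back-to-back without waiting for any response;
# the last one asks the server to close the connection when it is finished.
pipeline = (
    b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
    b"GET /a.css HTTP/1.1\r\nHost: example.com\r\n\r\n"
    b"GET /b.js HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
)

with socket.create_connection(("example.com", 80), timeout=10) as sock:
    sock.sendall(pipeline)
    chunks = []
    while True:
        data = sock.recv(4096)   # responses arrive serially, in request order
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks)[:300])
```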
Because the server MUST return responses to requests in order, the server has to do some extra processing to ensure compliance with this part of the HTTP 1.1 specification. It has to queue up the responses and make certain responses are returned properly, which essentially negates the performance gained by reducing the number of round trips using pipelining. Depending on the order in which requests are sent, if a request requiring particularly lengthy processing – say a database query – were sent relatively early in the pipeline, this could actually cause a degradation in performance because all the other responses have to wait for the lengthy one to finish before the others can be sent back.

Application intermediaries such as proxies, application delivery controllers, and general load balancers can and do support pipelining, but they, too, will adhere to the protocol specification and return responses in the proper order according to how the requests were received. This limitation on the server side actually inhibits a potentially significant boost in performance because we know that processing dynamic requests takes longer than processing a request for static content. If this limitation were removed it is possible that the server would become more efficient and the user would experience non-trivial improvements in performance. Or, if intermediaries were smart enough to rearrange requests such that their execution was optimized (I seem to recall I was required to design and implement a solution to a similar example in graduate school) then we’d maintain the performance benefits gained by pipelining. But that would require an understanding of the application that goes far beyond what even today’s most intelligent application delivery controllers are capable of providing.

THE SILVER LINING

At this point it may be fairly disappointing to learn that HTTP pipelining today does not result in as significant a performance gain as it might at first seem to offer (except over high-latency links like satellite or dial-up, which are rapidly dwindling in usage). But that may very well be a good thing. As miscreants have become smarter and more intelligent about exploiting protocols and not just application code, they’ve learned to take advantage of the protocol to “trick” servers into believing their requests are legitimate, even though the desired result is usually malicious. In the case of pipelining, it would be a simple thing to exploit the capability to enact a layer 7 DoS attack on the server in question. Because pipelining assumes that requests will be sent one after the other and that the client is not waiting for the response until the end, it would have a difficult time distinguishing between someone attempting to consume resources and a legitimate request. Consider that the server has no understanding of a “page”. It understands individual requests. It has no way of knowing that a “page” consists of only 50 objects, and therefore a client pipelining requests for the maximum allowed – by default 100 for Apache – may not be seen as out of the ordinary. Several clients opening connections and pipelining hundreds or thousands of requests every second without caring if they receive any of the responses could quickly consume the server’s resources or available bandwidth and result in a denial of service to legitimate users.
So perhaps the fact that pipelining is not really all that useful to most folks is a good thing, as server administrators can disable the feature without too much concern and thereby mitigate the risk of the feature being leveraged as an attack method against them. Pipelining as it is specified and implemented today is more of a security risk than it is a performance enhancement. There are, however, tweaks to the specification that could be made in the future that might make it more useful. Those tweaks do not address the potential security risk, however, so perhaps, given that there are so many other optimizations and acceleration techniques that improve performance without incurring any measurable security risk, we simply let sleeping dogs lie.

What is server offload and why do I need it?
One of the tasks of an enterprise architect is to design a framework atop which developers can implement and deploy applications consistently and easily. The consistency is important for internal business continuity and reuse; common objects, operations, and processes can be reused across applications to make development and integration with other applications and systems easier. Architects also often decide where functionality resides and design the base application infrastructure framework. Application server, identity management, messaging, and integration are all often a part of such architecture designs. Rarely does the architect concern him/herself with the network infrastructure, as that is the purview of “that group”; the “you know who I’m talking about” group. And for the most part there’s no need for architects to concern themselves with network-oriented architecture. Applications should not need to know on which VLAN they will be deployed or what their default gateway might be. But what architects might need to know – and probably should know – is whether the network infrastructure supports “server offload” of some application functions or not, and how that can benefit their enterprise architecture and the applications which will be deployed atop it. WHAT IT IS Server offload is a generic term used by the networking industry to indicate some functionality designed to improve the performance or security of applications. We use the term “offload” because the functionality is “offloaded” from the server and moved to an application network infrastructure device instead. Server offload works because the application network infrastructure is almost always these days deployed in front of the web/application servers and is in fact acting as a broker (proxy) between the client and the server. Server offload is generally offered by load balancers and application delivery controllers. You can think of server offload like a relay race. The application network infrastructure device runs the first leg and then hands off the baton (the request) to the server. When the server is finished, the application network infrastructure device gets to run another leg, and then the race is done as the response is sent back to the client. There are basically two kinds of server offload functionality: Protocol processing offload Protocol processing offload includes functions like SSL termination and TCP optimizations. Rather than enable SSL communication on the web/application server, it can be “offloaded” to an application network infrastructure device and shared across all applications requiring secured communications. Offloading SSL to an application network infrastructure device improves application performance because the device is generally optimized to handle the complex calculations involved in encryption and decryption of secured data and web/application servers are not. TCP optimization is a little different. We say TCP session management is “offloaded” to the server but that’s really not what happens as obviously TCP connections are still opened, closed, and managed on the server as well. Offloading TCP session management means that the application network infrastructure is managing the connections between itself and the server in such a way as to reduce the total number of connections needed without impacting the capacity of the application. 
This is more commonly referred to as TCP multiplexing and it “offloads” the overhead of TCP connection management from the web/application server to the application network infrastructure device by effectively giving up control over those connections. By allowing an application network infrastructure device to decide how many connections to maintain and which ones to use to communicate with the server, it can manage thousands of client-side connections using merely hundreds of server-side connections. Reducing the overhead associated with opening and closing TCP sockets on the web/application server improves application performance and actually increases the user capacity of servers. TCP offload is beneficial to all TCP-based applications, but is particularly beneficial for Web 2.0 applications making use of AJAX and other near real-time technologies that maintain one or more connections to the server for its functionality. Protocol processing offload does not require any modifications to the applications. Application-oriented offload Application-oriented offload includes the ability to implement shared services on an application network infrastructure device. This is often accomplished via a network-side scripting capability, but some functionality has become so commonplace that it is now built into the core features available on application network infrastructure solutions. Application-oriented offload can include functions like cookie encryption/decryption, compression, caching, URI rewriting, HTTP redirection, DLP (Data Leak Prevention), selective data encryption, application security functionality, and data transformation. When network-side scripting is available, virtually any kind of pre or post-processing can be offloaded to the application network infrastructure and thereafter shared with all applications. Application-oriented offload works because the application network infrastructure solution is mediating between the client and the server and it has the ability to inspect and manipulate the application data. The benefits of application-oriented offload are that the services implemented can be shared across multiple applications and in many cases the functionality removes the need for the web/application server to handle a specific request. For example, HTTP redirection can be fully accomplished on the application network infrastructure device. HTTP redirection is often used as a means to handle application upgrades, commonly mistyped URIs, or as part of the application logic when certain conditions are met. Application security offload usually falls into this category because it is application – or at least application data – specific. Application security offload can include scanning URIs and data for malicious content, validating the existence of specific cookies/data required for the application, etc… This kind of offload improves server efficiency and performance but a bigger benefit is consistent, shared security across all applications for which the service is enabled. Some application-oriented offload can require modification to the application, so it is important to design such features into the application architecture before development and deployment. While it is certainly possible to add such functionality into the architecture after deployment, it is always easier to do so at the beginning. WHY YOU NEED IT Server offload is a way to increase the efficiency of servers and improve application performance and security. 
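The canonical example of protocol-processing offload is TLS (SSL) termination: the device in front accepts the encrypted connection so the servers behind it only ever see plain HTTP. A stripped-down sketch of the idea follows; the backend address, certificate files, and port are placeholders, and a real ADC does this with hardware assistance and high concurrency rather than one connection at a time:

```python
import socket
import ssl

BACKEND = ("10.0.0.10", 8080)                        # placeholder plain-HTTP application server

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("proxy.crt", "proxy.key")    # placeholder certificate and key for the proxy

with socket.create_server(("0.0.0.0", 8443)) as listener:
    raw_client, _ = listener.accept()
    # The expensive TLS handshake and all decryption happen here, on the proxy,
    # not on the application server behind it.
    with context.wrap_socket(raw_client, server_side=True) as client:
        request = client.recv(65536)                 # decrypted HTTP request
        with socket.create_connection(BACKEND) as upstream:
            upstream.sendall(request)                # forwarded as plain HTTP to the backend
            client.sendall(upstream.recv(65536))     # re-encrypted on the way back out
```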
Server offload increases efficiency of servers by alleviating the need for the web/application server to consume resources performing tasks that can be performed more efficiently on an application network infrastructure solution. The two best examples of this are SSL encryption/decryption and compression. Both are CPU-intense operations that can consume 20-40% of a web/application server’s resources. By offloading these functions to an application network infrastructure solution, servers “reclaim” those resources and can use them instead to execute application logic, serve more users, handle more requests, and do so faster.

Server offload improves application performance by allowing the web/application server to concentrate on what it is designed to do - serve applications - while putting the onus for performing ancillary functions on a platform that is more optimized to handle those functions. Server offload provides these benefits whether you have a traditional client-server architecture or have moved (or are moving) toward a virtualized infrastructure. Applications deployed on virtual servers still use TCP connections and SSL and run applications, and therefore will benefit the same as those deployed on traditional servers.

Related articles: 3 Really good reasons you should use TCP multiplexing; SOA & Web 2.0: The Connection Management Challenge; Understanding network-side scripting; I am in your HTTP headers, attacking your application; Infrastructure 2.0: As a matter of fact that isn't what it means.

Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox
But browser support is only half the solution, don’t forget to implement the server-side, too. Clickjacking, unlike more well-known (and understood) web application vulnerabilities, has been given scant amount of attention despite its risks and its usage. Earlier this year, for example, it was used as an attack on Twitter, but never really discussed as being a clickjacking attack. Maybe because aside from rewriting applications to prevent CSRF (adding nonces and validation of the same to every page) or adding framekillers there just haven’t been many other options to prevent the attack technique from being utilized against users. Too, it is one of the more convoluted attack methods out there so it would be silly to expect non-technical media to understand it let alone explain how it works to their readers. There is, however, a solution on the horizon. IE8 has introduced an opt-in measure that allows developers – or whomever might be in charge of network-side scripting implementations – to prevent clickjacking on vulnerable pages using a custom HTTP header to prevent them from being “framed” inappropriately: X-FRAME-OPTIONS. The behavior is described in the aforementioned article as: If the X-FRAME-OPTIONS value contains the token DENY, IE8 will prevent the page from rendering if it will be contained within a frame. If the value contains the token SAMEORIGIN, IE will block rendering only if the origin of the top level-browsing-context is different than the origin of the content containing the X-FRAME-OPTIONS directive. For instance, if http://shop.example.com/confirm.asp contains a DENY directive, that page will not render in a subframe, no matter where the parent frame is located. In contrast, if the X-FRAME-OPTIONS directive contains the SAMEORIGIN token, the page may be framed by any page from the exact http://shop.example.com origin. But that’s only IE8, right? Well, natively, yes. But a development version of NoScript has been released that supports the X-FRAME-OPTIONS header and will provide the same protections as are natively achieved in IE8. The problem is that this is only half the equation: the X-FRAME-OPTIONS header needs to exist before the browser can act on it and the preventive measure for clickjacking completed. As noted in the Register, “some critics have contended the protection will be ineffective because it will require millions of websites to update their pages with proprietary code.” That’s not entirely true as there is another option that will provide support for X-FRAME-OPTIONS without updating pages/applications/sites with proprietary code: network-side scripting. The “proprietary” nature of custom HTTP headers is also debatable, as support for Firefox was provided quickly via NoScript and if the technique is successful will likely be adopted by other browser creators. HOW-TO ADD X-FRAME-OPTIONS TO YOUR APPLICATION – WITH or WITHOUT CODE CHANGES Step 1: Add the custom HTTP header “X-FRAME-OPTIONS” with a value of “DENY” or “SAMEORIGIN” before returning a response to the client Really, that’s it. The browser takes care of the rest for you. OWASP has a great article on how to implement a ClickjackFilter for JavaEE and there are sure to be many more blogs and articles popping up describing how one can implement such functionality in their language-of-choice. Even without such direct “how-to” articles and code samples, it is merely a matter of adding a new custom HTTP header – examples of which ought to be easy enough to find. 
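For instance, if the application tier includes even a simple Python web process, adding the header is one line in the response path. A minimal sketch (handler class, port, and response body are illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ClickjackSafeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The protective header: DENY (never frame) or SAMEORIGIN (frame only from the same origin).
        self.send_header("X-Frame-Options", "SAMEORIGIN")
        self.end_headers()
        self.wfile.write(b"<html><body>only my own origin may frame this page</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ClickjackSafeHandler).serve_forever()
```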
Similarly, a solution can be implemented using network-side scripting that requires no modification to applications. In fact, this can be accomplished via iRules in just one line of code:

when HTTP_RESPONSE { HTTP::header insert "X-FRAME-OPTIONS" "SAMEORIGIN" }

(Use "DENY" instead of "SAMEORIGIN" if the page should never be framed at all.) I believe the mod_rewrite network-side script would be as simple, but as I am not an expert in mod_rewrite I will hope someone who is will leave an appropriate example as a comment or write up a blog/article and leave a pointer to it. A good reason to utilize the agility of network-side scripting solutions in this case is that it is not necessary to modify each application requiring protection, which takes time to implement, test, and deploy. An even better reason is that a single network-side script can protect all applications, regardless of language and deployment platform, without a lengthy development and deployment cycle.

Regardless of how you add the header, it would be a wise idea to add it as a standard part of your secure-code deployment requirements (you do have those, don’t you?) because it doesn’t hurt anything for the custom HTTP header to exist, and visitors using X-FRAME-OPTIONS enabled browsers/solutions will be a lot safer than without it.

Related articles: Stop brute force listing of HTTP OPTIONS with network-side scripting; Jedi Mind Tricks: HTTP Request Smuggling; I am in your HTTP headers, attacking your application; Understanding network-side scripting; 9 ways to use network-side scripting to architect faster, scalable, more secure applications.

Virtual Patching: What is it and why you should be doing it
Yesterday I was privileged to co-host a webinar with WhiteHat Security's Jeremiah Grossman on preventing SQL injection and cross-site scripting using a technique called "virtual patching". While I was familiar with F5's partnership with WhiteHat and our integrated solution, I wasn't familiar with the term. Virtual patching should put an end to the endless religious warring that goes on between the secure coding and web application firewall camps whenever the topic of web application security is raised. The premise of virtual patching is that a web application firewall is not, I repeat is not, a replacement for secure coding. It is, in fact, an augmentation of existing security systems and practices that enables secure development to occur without being rushed or outright ignored in favor of rushing a fix out the door.

"The remediation challenges most organizations face are the time consuming process of allocating the proper personnel, prioritizing the tasks, QA / regression testing the fix, and finally scheduling a production release." -- WhiteHat Security, "WhiteHat Website Security Statistic Reports", December 2008

The WhiteHat report goes on to discuss the average number of days it took for organizations to address the top five urgent - not critical, not high, but urgent - severity vulnerabilities discovered. The fewest number of days to resolve a vulnerability (SQL injection) was 28 in 2008, which is actually an improvement over previous years. 28 days. That's a lifetime on the Internet when your site is vulnerable to exploitation and attackers are massing at the gates faster than ants to a picnic. But you can't rush finding and fixing the vulnerability, and the option to shut down the web application may not be an option at all, especially if you rely on that application as a revenue stream, as an integration point with partners, or as part of a critical business process with a strict SLA governing its uptime.

So do you leave it vulnerable? According to WhiteHat's data, apparently that's the decision made for many organizations given the limited options. The heads of many security professionals just exploded. My apologies if any of the detritus mussed your screen. If you're one of the ones whose head is still intact, there is a solution. Virtual patching provides the means by which you can prevent the exploitation of the vulnerability while it is addressed through whatever organizational processes are required to resolve it.

Virtual patching is essentially the process of putting in place a rule on a web application firewall to prevent the exploitation of a vulnerability. This process is often a manual one, but in the case of WhiteHat and F5 the process has been made as easy as clicking a button. When WhiteHat's Sentinel, which provides vulnerability scanning as a service, uncovers a vulnerability, the operator (that's you) can decide to virtually patch the hole by adding a rule to the appropriate policy on F5's BIG-IP Application Security Manager (ASM) with the click of a button. Once the vulnerability has been addressed, you can remove the rule from the policy or leave it in place, as is your wont. It's up to you.

Virtual patching provides the opportunity to close a vulnerability quickly but doesn't require that you abandon secure coding practices. Virtual patching actually enables and encourages secure coding by giving developers some breathing room in which to implement a thorough, secure solution to the vulnerability.
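Conceptually, a virtual patch is a narrowly scoped rule in front of the vulnerable application: block the one input the scanner flagged until the real fix ships. A rough sketch of the idea as WSGI-style middleware - the path, parameter name, and pattern are hypothetical, and in the WhiteHat/ASM integration described above the rule is generated for you rather than hand-coded:

```python
import re
from urllib.parse import parse_qs

# Hypothetical scanner finding: the "id" parameter of /report is injectable.
SQLI_HINT = re.compile(r"('|--|;|\bunion\b|\bselect\b)", re.IGNORECASE)

def virtual_patch(app):
    """Block exploitation of one known-vulnerable parameter until the code fix ships."""
    def patched(environ, start_response):
        if environ.get("PATH_INFO", "") == "/report":
            params = parse_qs(environ.get("QUERY_STRING", ""))
            if any(SQLI_HINT.search(value) for value in params.get("id", [])):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked by virtual patch"]
        return app(environ, start_response)
    return patched
```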
It isn't an either-or solution; it's both, and it leverages both solutions to provide the most comprehensive security coverage possible. And given statistics regarding the number of sites infected of late, that's something everyone should be able to get behind. Virtual patching as a technique does not require WhiteHat or F5, but other solutions will require a manual process to put rules in place to address vulnerabilities. The advantage of a WhiteHat-F5 solution is its tight integration via iControl, its ability to immediately close discovered security holes, and of course the lengthy list of security options and features available with ASM to further secure web applications. You can read more about the integration between WhiteHat and F5 here or here, or view a short overview of the way virtual patching works between Sentinel and ASM.

4 reasons not to use mod-security
Apache is a great web server, if for no other reason than it offers more flexibility through modules than just about any other web server. You can plug in all sorts of modules to enhance the functionality of Apache. But as I often say, just because you can doesn't mean you should. One of the modules you can install is mod_security. If you aren't familiar with mod_security, essentially it's a "roll your own" web application firewall plug-in for the Apache web server. Some of the security functions you can implement via mod_security are:
- Simple filtering
- Regular expression based filtering
- URL encoding validation
- Unicode encoding validation
- Auditing
- Null byte attack prevention
- Upload memory limits
- Server identity masking
- Built-in chroot support

Using mod_security you can also implement protocol security, which is an excellent idea for ensuring that holes in protocols aren't exploited. If you aren't sold on protocol security you should read up on the recent DNS vulnerability discovered by Dan Kaminsky - it's all about the protocol and has nothing to do with vulnerabilities introduced by implementation. mod_security provides many options for validating URLs, URIs, and application data. You are, essentially, implementing a custom web application firewall using configuration directives. If you're on this path then you probably agree that a web application firewall is a good thing, so why would I caution against using mod_security? Well, there are four reasons, actually.

1. It runs on every web server. This is an additional load on the servers that can be easily offloaded for a more efficient architecture. The need for partial duplication of configuration files across multiple machines can also result in the introduction of errors or extraneous configuration that is unnecessary. Running mod_security on every web server decreases capacity to serve users and applications accordingly, which may require additional servers to scale to meet demand.

2. You have to become a security expert. You have to understand the attacks you are trying to stop in order to write a rule to prevent them. So either you become an expert or you trust a third party to be the expert. The former takes time and the latter takes guts, as you're introducing unnecessary risk by trusting a third party.

3. You have to become a protocol expert. In addition to understanding all the attacks you're trying to prevent, you must become an expert in the HTTP protocol. Part of providing web application security is to sanitize and enforce the HTTP protocol to ensure it isn't abused to create a hole where none previously appeared. You also have to become an expert in Apache configuration directives, and the specific directives used to configure mod_security.

4. The configuration must be done manually. Unless you're going to purchase a commercially supported version of mod_security, you're writing complex rules manually (the sketch below shows what that boils down to). You'll need to brush up on your regular expression skills if you're going to attempt this. Maintaining those rules is just as painful, as any update necessarily requires manual intervention.

Of course you could introduce an additional instance of Apache with mod_security installed that essentially proxies all requests through mod_security, thus providing a centralized security architecture, but at that point you've just introduced a huge bottleneck into your infrastructure.
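As promised above, here is what those hand-written rules boil down to in practice - a deliberately naive sketch, with patterns that are illustrative and nowhere near complete, which is exactly the point:

```python
import re

# Every one of these is something you wrote, must test for false positives,
# and must keep current as new attack and evasion techniques appear.
HAND_WRITTEN_RULES = [
    re.compile(r"<script\b", re.IGNORECASE),               # naive XSS check
    re.compile(r"\bunion\b.+\bselect\b", re.IGNORECASE),   # naive SQL injection check
    re.compile(r"\.\./"),                                  # naive path traversal check
]

def request_allowed(raw_input: str) -> bool:
    return not any(rule.search(raw_input) for rule in HAND_WRITTEN_RULES)

print(request_allowed("id=1 UNION SELECT password FROM users"))  # False: caught
print(request_allowed("id=1 %55NION %53ELECT password"))         # True: slips past (URL-encoded)
```

Authoring, tuning, decoding-aware hardening, and ongoing maintenance of rules like these are reasons two through four in a nutshell.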
If you're already load balancing multiple instances of a web site or application, then it's not likely that a single instance of Apache with mod_security is going to be able to handle the volume of requests without increasing downtime or degrading performance to the point that applications might as well be down because they're too painful to use. Centralizing security can improve performance, reduce the potential avenues of risk through configuration error, and keep your security up to date by providing easy access to updated signatures, patterns, and defenses against existing and emerging web application attacks. Some web application firewalls offer pre-configured templates for specific applications like Microsoft OWA, providing a simple configuration experience that belies the depth of security knowledge applied to protecting the application. Web application firewalls can enable compliance with requirement 6.6 of PCI DSS. And they're built to scale, which means the scenario in which mod_security is used as a reverse proxy to protect all web servers from harm but quickly becomes a bottleneck and an impediment to performance doesn't happen with purpose-built web application firewalls.

If you're considering using mod_security then you already recognize the value of and need for a web application firewall. That's great. But consider carefully where you will deploy that web application firewall, because the decision will have an impact on the performance and availability of your site and applications.

8 things you can do with a proxy
After having recently discussed all the different kinds of proxies that exist, it occurred to me that it might be nice to provide some examples of what you can do with proxies besides the obvious web filtering scenario. This is by no means an exhaustive list, but is provided to show some of the more common (and cool, I think) uses of proxies. What's really awesome is that while some of these uses are available with only one type of proxy (reverse or forward), a full proxy can provide all these uses, and more, in a single, unified application delivery platform. 1. DATA SCRUBBING Data scrubbing is the process of removing sensitive information like credit card and social security numbers from web application responses. This is particularly useful in preventing data leaks, especially if you're subject to regulations like SOX, HIPPA, and PCI DSS where the penalties for divulging personally identifiable information can be harsh fines - or worse. Data scrubbing is is an implementation of a reverse proxy. 2. URL REWRITING Rewriting URLs is something everyone has likely had to do at one time or another if they've developed a web application. URL rewriting is used to refer web requests to new resources instead of sending out a redirect response in cases where resources have moved, renamed, or migrated to a new version. URL rewriting is an implementation of a reverse proxy. 3. LAYER 7 SWITCHING Layer 7 switching provides an organization with the ability to maximize their IP address space as well as architect a more efficient, better performing application architecture. Layer 7 switching routes specific web requests to different servers based on information in the application layer, like HTTP headers or application data. Layer 7 switching is an implementation of a reverse proxy. 4. CONTENT FILTERING The most common use of proxies is content filtering. Generally, content filtering allows or rejects requests for content based on organizational policies regarding content type, the existence of specific keywords, or based on the site itself. Content filtering is an implementation of a forward proxy. 5. REDIRECTION Redirection is the process of, well, redirecting a browser to a new resource. This could be a new instance of a requested resource or as part of application logic such as redirecting a failed login to the proper page. Redirection is generally implemented by a reverse proxy, but can also be implemented by a forward proxy as a means of redirecting rejected requests to an explanation page. 6. LOAD BALANCING Load balancing is one of the most common uses of a reverse proxy. Load balancing distributes requests for resources across a number of servers in order to provide scalability and availability services. Load balancing is an implementation of a reverse proxy. 7. APPLICATION FIREWALL An application firewall provides a number of functions including some in this list (data scrubbing and redirection). An application firewall sits in front of web applications and inspects requests for malicious content and attempts to circumvent security. An application firewall is an implementation of a reverse proxy. 8. PROTOCOL SECURITY Protocol security is the ability of a proxy to enforce protocol specifications on requests and responses in order to provide additional security at all layers of the OSI stack. Protocol security provides an additional layer of security atop traditional security mechanisms that focus on data. 
Protocol security is an implementation of a reverse proxy.

Five questions you need to ask about load balancing and the cloud
Whether you are aware of it or not, if you’re deploying applications in the cloud or building out your own “enterprise class” cloud, you’re going to be using load balancing. Horizontal scaling of applications is a fairly well understood process that involves (old skool) server virtualization of the network kind: making many servers (instances) look like one to the outside world. When you start adding instances to increase capacity for your application, load balancing necessarily gets involved as it’s the way in which horizontal scalability is implemented today. The fact that you may have already deployed an application in the cloud and scaled it up without recognizing this basic fact may lead you to believe you don’t need to care about load balancing options. But nothing could be further from the truth. If you haven’t asked already, you should. But more than that you need to understand the importance of load balancing and its implications to the application. That’s even more true if you’re considering an enterprise cloud, because it will most assuredly be your problem in the long run. Do not be fooled; the options available for load balancing and assuring availability of your application in the cloud will affect your application – if not right now, then later. So let’s start with the five most important things you need to ask about load balancing and cloud environments regardless of where they may reside. #5 DIRECT SERVER RETURN If you’re going to be serving up video or audio – real-time streaming media – you should definitely be interested in whether or not the load balancing solution is capable of supporting direct server return (DSR). While there are pros and cons to using DSR, for video and audio content it’s nearly an untouchable axiom of application delivery that you should enable this capability. DSR basically allows the server to return content directly to the client without being processed by any intermediary (other than routers/switches/etc… which of course need to process individual packets). In most load balancing situations the responses from the server are returned via the same path they took to get to the server, notably through the load balancer. DSR allows responses to return outside the path of the load balancer or, if still returning through it, to do so unmolested. In the latter scenario the load balancer basically acts as a simple packet forwarder and does no additional processing on the packets. The advantage to DSR is that it removes any additional latency imposed by additional processing by intermediaries. Because real-time streaming media is very sensitive to the effects of latency (jitter), DSR is often suggested as a best practice when load balancing servers responsible for serving such content. Question: Is it supported? #4 HEALTH CHECKING One of the ways in which load balancers/application delivery controllers make decisions regarding which server should handle which request is to understand the current status of the application. It’s part of being context-aware, and it provides information about the application that is invaluable not just to the load balancing decision but to the overall availability of the application. Health checking allows a load balancing solution to determine whether a server/instance is “available” based on a variety of factors. At the simplest level an ICMP ping can be used to determine whether the server is available. But that tells it nothing of the state of the application. 
A three-way TCP handshake is the next “step” up the ladder, and this will tell the load balancing solution whether an application is capable of accepting connections, but still tells it nothing of the state of the application. A simple HTTP GET takes it one step further, but what’s really necessary is the ability of the load balancing solution to retrieve actual data and ensure it is valid in order to consider an application “available”. As the availability of an application may be (and should be if it is not) one way to determine whether new instances are necessary or not, the ability to determine whether the actual application is available and responding appropriately are important in keeping costs down in a cloud environment lest instances be launched for no reason or – more dangerously – instances are not launched when necessary due to an outage or failure. In an external cloud environment it is important to understand how the infrastructure determines when an application is “available” or “not”, based on such monitoring, as the subtle differences in what is actually being monitored/tested can impact application availability. Question: What determines when an application (instance) is available and responding as expected? #3 PERSISTENCE Persistence is one of the most important facets of load balancing that every application developer, architect, and network professional needs to understand. Nearly every application today makes heavy use of application sessions to maintain state, but not every application utilizes a shared database model for its session management. If you’re using standard application or web server session features to manage state in your application, you will need to understand whether the load balancing solutions available supports persistence and how that persistence is implemented. Persistence basically ensures that once a user has been “assigned” a server/instance that all subsequent requests go to that same server/instance in order to preserve access to the application session. Persistence can be based on just about anything depending on the load balancing solution available, but most commonly takes the form of either source ip address or cookie-based. In the case of the former there’s very little for you to do, though you should be somewhat concerned over the use of such a rudimentary method of enabling persistence as it is quite possible – probable, in fact – that many users will be sharing the same source IP address based on NAT and masquerading at the edge of corporate and shared networks. If the persistence is cookie-based then you’ll need to understand whether you have the ability to determine what data is used to enable that persistence. For example, many applications used PHPSESSIONID or ASPSESSIONID as it is routine for those environments to ensure that these values are inserted into the HTTP header and are available for such use. But if you can’t configure the option yourself, you’ll need to understand what values are used for persistence and to ensure your application can support that value in order to match up users with their application state. Question: How is persistence implemented? #2 QUIESCING (BLEEDING) CONNECTIONS Part of the allure of a cloud architecture is the ability to provision resources on-demand. With that comes the assumption that you can also de-provision resources when they are no longer needed. 
One would further hope this process is automated and based on a policy configurable by the user, but we are still in the early days of cloud so that may be just a goal at this point. Load balancers and clustering solutions can usually be told to begin quiescing (bleeding off) connections. This means that they stop distributing requests to the specified servers/instances but allow existing users to continue using the application until they are finished. It basically takes a server/instance out of the “rotation” but keeps it online until all users have finished and the server/instance is no longer needed. At that point either through a manual or automated process the server/instance can be de-provisioned or taken offline. This is often used in traditional data centers to enable maintenance such as patching/upgrades to occur without interrupting application availability. By taking one server/instance at a time offline the other servers/instances remain in service, serving up requests. In an on-demand environment this is of course used to keep costs controlled by only keeping the instances necessary for current capacity online. What you need to understand is whether that process is manual, i.e. you need to push a button to begin the process of bleeding off connections, or automated. If the latter, then you’ll need to ask about what variables you can use to create a policy to trigger the process. Variables might be number of total connections, requests, users, or bandwidth. It could also, if the load balancing solution is “smart enough” include application performance (response time) or even time of day variables. Question: How do connections quiesce (bleed) off – manually or automatically based on thresholds? #1 FAILOVER We talk a lot about the cloud as a means to scale applications, but we don’t very often mention availability. Availability usually means there needs to be in place some sort of “failover” mechanism, in case an application or server fails. Applications crash, hardware fails, these things happen. What should not happen, however, is that the application becomes unavailable because of these types of inevitable problems. If one instance suddenly becomes unavailable, what happens? That’s the question you need to ask. If there is more than one instance running at that time, then any load balancing solution worth its salt will direct subsequent requests to the remaining available instances. But if there are no other instances running, what happens? If the provisioning process is manual, you may need to push a button and wait for the new instance to come online. If the provisioning process is not manual, then you need to understand how long it will take for the automated system to bring a new instance online, and perhaps ask about the ability to serve up customized “apology” pages that reassure visitors that the site will return shortly. Question: What kind of failover options are available (if any)? THERE ARE NO STUPID QUESTIONS Folks seem to talk and write as if cloud computing relieves IT staff (customers) of the need to understand the infrastructure and architecture of the environments in which applications will be deployed. Because there is an increasingly symbiotic relationship between applications and its infrastructure – both network and application network – this fallacy needs to be exposed for the falsehood it is. 
It is more important today, with cloud computing, than it ever has been for all of IT – application, network, and security – to understand the infrastructure and how it works together to deliver applications. That means there are no stupid questions when it comes to cloud computing infrastructure. There are certainly other questions you can – and should – ask a potential provider or vendor in order to make the right decision about where to deploy your applications. Because when it comes down to it, it’s your application, and your customers, partners, and users are not going to be calling/e-mailing/tweeting the cloud provider; they’re going to be gunning for you if things don’t work as expected.

Getting the answers to these five questions will provide a better understanding of how your application will handle unexpected failures, allow you to plan appropriately for maintenance/upgrades/patches, and help you formulate the proper policies for dealing with the nuances of a load balanced application environment. Don’t just ask about product/vendor and hope that will answer your questions. Sure, your cloud provider may be using F5 or another advanced application delivery platform, but that doesn’t mean they’re utilizing the product in a way that would offer the features you need to ensure your application is always available. So dig deeper and ask questions. It’s your application, and it’s your responsibility, no matter where it ends up running.

Related articles: And the Killer App for Private Cloud Computing Is…; Not All Virtual Servers are Created Equal; Infrastructure 2.0: The Feedback Loop Must Include Applications; Cloud Computing: Is your cloud sticky? It should be; The Disadvantages of DSR (Direct Server Return); Cloud Computing: Vertical Scalability is Still Your Problem; Server Virtualization versus Server Virtualization.