mptcp-mobile-optimized and Hardware SYN Cookie Protection
Does anyone know why the TCP protocol profile mptcp-mobile-optimized ships with Hardware SYN Cookie Protection disabled? It is still enabled on tcp-mobile-optimized. Here is a copy of my two profiles, which should still be at their defaults:

ltm profile tcp mptcp-mobile-optimized {
    abc disabled
    app-service none
    congestion-control illinois
    defaults-from tcp
    delay-window-control disabled
    delayed-acks disabled
    dsack disabled
    ecn enabled
    hardware-syn-cookie disabled
    init-cwnd 16
    limited-transmit enabled
    mptcp enabled
    nagle enabled
    pkt-loss-ignore-burst 0
    pkt-loss-ignore-rate 0
    proxy-buffer-high 131072
    proxy-buffer-low 131072
    rate-pace enabled
    receive-window-size 131072
    reset-on-timeout disabled
    selective-acks enabled
    send-buffer-size 262144
    slow-start enabled
    timestamps enabled
}

ltm profile tcp tcp-mobile-optimized {
    abc disabled
    app-service none
    congestion-control high-speed
    defaults-from tcp
    delay-window-control disabled
    delayed-acks disabled
    dsack disabled
    ecn enabled
    init-cwnd 16
    limited-transmit enabled
    nagle enabled
    pkt-loss-ignore-burst 0
    pkt-loss-ignore-rate 0
    proxy-buffer-high 131072
    proxy-buffer-low 131072
    receive-window-size 131072
    reset-on-timeout disabled
    selective-acks enabled
    send-buffer-size 131072
    slow-start enabled
    timestamps enabled
}
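One way to get the protection back without touching the built-in profile (a hedged sketch, not an official recommendation) is a custom child profile that inherits from mptcp-mobile-optimized and overrides only that one setting. For example, with the f5-common-python SDK; the management address, credentials, and profile name below are placeholders, not from the original post:

from f5.bigip import ManagementRoot

mgmt = ManagementRoot("192.0.2.10", "admin", "password")   # placeholder device and credentials

# 'defaultsFrom' inherits every other setting from the built-in profile, so
# only hardware SYN cookie protection is changed. REST attribute names are
# camelCase versions of the tmsh option names.
child = mgmt.tm.ltm.profile.tcps.tcp.create(
    name="mptcp-mobile-optimized-hwsyn",   # hypothetical profile name
    partition="Common",
    defaultsFrom="/Common/mptcp-mobile-optimized",
    hardwareSynCookie="enabled",
)
print(child.hardwareSynCookie)

The child profile can then be attached to virtual servers in place of the built-in one, leaving the default untouched for comparison.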
Orchestrated Infrastructure Security - Protocol Inspection with AFM

The F5 Beacon capabilities referenced in this article, hosted on F5 Cloud Services, are planning a migration to a new SaaS platform - check out the latest here.

Introduction

This article is part of a series on implementing Orchestrated Infrastructure Security. It includes High Availability, Central Management with BIG-IQ, Application Visibility with Beacon, and the protection of critical assets using F5 Advanced WAF and Protocol Inspection (IPS) with AFM. It is assumed that SSL Orchestrator is already deployed and basic network connectivity is working. If you need help setting up SSL Orchestrator for the first time, refer to the DevCentral article series Implementing SSL Orchestrator here.

This article focuses on configuring Protocol Inspection (IPS) with AFM deployed as a Layer 2 solution. It covers the configuration of Protocol Inspection on an F5 BIG-IP running version 16.0.0. The configuration of the BIG-IP deployed as AFM can be downloaded here from GitLab. Please forgive me for using SSL and TLS interchangeably in this article.

This article is divided into the following high-level sections:

Protocol Inspection (IPS) with AFM: Network Configuration
Create an AFM Protocol Inspection Policy
Attach Virtual Servers to an AFM Protocol Inspection Policy

Protocol Inspection (IPS) with AFM: Network Configuration

The BIG-IP will be deployed with VLAN Groups. This combines two interfaces to act as an L2 bridge, where data flows into one interface and is passed out the other.

From the F5 Configuration Utility go to Network > VLANs. Click Create on the right.

Give it a name, ingress1 in this example. Set the Interface to 5.0. Set Tagging to Untagged, then click Add. Interface 5.0 (untagged) should be visible like in the image below. Click Repeat at the bottom to create another VLAN.

Note: In this example interface 5.0 will receive decrypted traffic from sslo1.

Give it a name, egress1 in this example. Set the Interface to 6.0. Set Tagging to Untagged, then click Add. Interface 6.0 (untagged) should be visible like in the image below. Click Finished when done.

Note: In this example interface 6.0 will receive decrypted traffic from sslo1.

Note: This guide assumes you are setting up a redundant pair of SSL Orchestrators. Therefore, you should repeat these steps to configure VLANs for the two interfaces connected to sslo2. These VLANs should be named in a way that you can differentiate them from the others, for example ingress2 and egress2. It should look something like this when done:

Note: In this example interfaces 3.0 and 4.0 are physically connected to sslo2.

Click VLAN Groups, then Create on the right.

Give it a name, vlg1 in this example. Move ingress1 and egress1 from Available to Members. Set the Transparency Mode to Transparent. Check the box to Bridge All Traffic, then click Finished.

Note: This guide assumes you are setting up a redundant pair of SSL Orchestrators.
Therefore, you should repeat these steps to configure a VLAN Group for the two interfaces connected to sslo2. This VLAN Group should be named in a way that you can differentiate it from the other, for example vlg1 and vlg2. It should look like the image below:

For full Layer 2 transparency the following CLI option needs to be enabled (a scripted equivalent is sketched at the end of this article):

(tmos)# modify sys db connection.vgl2transparent value enable

Create an AFM Protocol Inspection Policy

You can skip this step if you already have an AFM Protocol Inspection policy created and attached to one or more virtual servers. If not, we'll cover it briefly. In this example we configured Protocol Inspection with Signatures and Compliance enabled.

From Security select Protocol Security > Inspection Profiles > Add > New.

Give it a name, IPS in this example. For Services, select the Protocol(s) you want to inspect, HTTP in this example. Optionally check the box to enable automatic updates, and click Commit Changes to System.

Attach Virtual Servers to an AFM Protocol Inspection Policy

Attach the Protocol Inspection Profile to the Virtual Server(s) you wish to protect. From Local Traffic select Virtual Servers. Click the name of the Virtual Server you want to apply the profile to, 10.4.11.52 in this example.

Click Security > Policies.

Set the Protocol Inspection Profile to Enabled, then select the Profile created previously, IPS in this example. Click Update when done.

Repeat this process to attach the IPS Profile to the remaining Virtual Servers.

Summary

In this article you learned how to configure BIG-IP in Layer 2 transparency mode using VLAN groups. We also covered how to create an AFM Protocol Inspection policy and attach it to your Virtual Servers.

Next Steps

Click Next to proceed to the next article in the series.
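As referenced above, for anyone scripting this step, here is a minimal sketch (an assumption-laden example, not part of the original walkthrough) of setting the same db variable through iControl REST with Python's requests library. The management address and credentials are placeholders.

import requests

BIGIP = "https://192.0.2.10"       # placeholder management address
AUTH = ("admin", "password")       # placeholder credentials

# sys db variables are exposed at /mgmt/tm/sys/db/<name>; patching the
# 'value' attribute is the REST equivalent of the tmsh modify command above.
resp = requests.patch(
    BIGIP + "/mgmt/tm/sys/db/connection.vgl2transparent",
    json={"value": "enable"},
    auth=AUTH,
    verify=False,                  # lab only: self-signed management certificate
)
resp.raise_for_status()
print(resp.json()["value"])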
Set Protocol Profile (Server) And HTTP Profile (Server) via the SDK

I am trying to set the Protocol Profile (Server) and the HTTP Profile (Server) via the Python SDK. I am putting the profiles I want in a dictionary object.

profiles = {
    "profiles": [
        "f5-tcp-lan",
        "f5-tcp-progressive",
        "http_x_forward_for",
        "custom",
        "clientssl",
        "serverssl"
    ]
}
# THE ABOVE VALUES ARE IN THIS ORDER:
# PROTOCOL PROFILE (Client)
# PROTOCOL PROFILE (Server)
# HTTP PROFILE (Client)
# HTTP PROFILE (Server)
# SSL Profile (Client)
# SSL Profile (Server)

BIGIP_CONN = ManagementRoot("1.1.1.1", "admin", "123")
vip = BIGIP_CONN.tm.ltm.virtuals.get_collection()[1]
vip.update(**profiles)

When I do this I get the following error:

iControlUnexpectedHTTPError: 400 Unexpected Error: Bad Request for uri: https://labf52:443/mgmt/tm/ltm/virtual/~Common~myvipname/
Text: '{"code":400,"message":"01070097:3: Virtual server /Common/myvipname lists duplicate profiles.","errorStack":[],"apiError":3}'

Even if I put the VIP at all defaults with none of these profiles, it still throws the error.
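A likely culprit (offered as a sketch under assumptions, since it can't be verified against the box in the question): when profile names are passed as bare strings, each one is applied to both sides of the connection, so the two TCP profiles and the two HTTP profiles collide with each other. The REST representation of a virtual server lets each profile entry carry an explicit context; whether a given profile type accepts a per-side assignment also depends on the TMOS version. Something like the following, reusing the host, credentials, and names from the question:

from f5.bigip import ManagementRoot

BIGIP_CONN = ManagementRoot("1.1.1.1", "admin", "123")
vip = BIGIP_CONN.tm.ltm.virtuals.virtual.load(name="myvipname", partition="Common")

# Each entry names the profile and the side of the proxy it applies to,
# instead of letting every profile default to both sides at once.
profiles = {
    "profiles": [
        {"name": "f5-tcp-lan",         "context": "clientside"},
        {"name": "f5-tcp-progressive", "context": "serverside"},
        {"name": "http_x_forward_for", "context": "clientside"},
        {"name": "custom",             "context": "serverside"},
        {"name": "clientssl",          "context": "clientside"},
        {"name": "serverssl",          "context": "serverside"},
    ]
}
vip.update(**profiles)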
HTTP Pipelining: A security risk without real performance benefits

Everyone wants web sites and applications to load faster, and there's no shortage of folks out there looking for ways to do just that. But all that glitters is not gold, and not all acceleration techniques actually do all that much to accelerate the delivery of web sites and applications. Worse, some actually incur risk in the form of leaving servers open to exploitation.

A BRIEF HISTORY

Back in the day when HTTP was still evolving, someone came up with the concept of persistent connections. See, in ancient times - when administrators still wore togas in the data center - HTTP 1.0 required one TCP connection for every object on a page. That was okay, until pages started comprising ten, twenty, and more objects. So someone added an HTTP header, Keep-Alive, which basically told the server not to close the TCP connection until (a) the browser told it to or (b) it didn't hear from the browser for X number of seconds (a time out). This eventually became the default behavior when HTTP 1.1 was written and became a standard. I told you it was a brief history.

This capability is known as a persistent connection, because the connection persists across multiple requests. This is not the same as pipelining, though the two are closely related. Pipelining takes the concept of persistent connections and then ignores the traditional request-reply relationship inherent in HTTP and throws it out the window.

The general line of thought goes like this: "Whoa. What if we just shoved all the requests from a page at the server and then waited for them all to come back rather than doing it one at a time? We could make things even faster!" Tada! HTTP pipelining.

In technical terms, HTTP pipelining is initiated by the browser by opening a connection to the server and then sending multiple requests to the server without waiting for a response. Once the requests are all sent, the browser starts listening for responses. The reason this is considered an acceleration technique is that by shoving all the requests at the server at once you essentially save the RTT (Round Trip Time) on the connection waiting for a response after each request is sent.

WHY IT JUST DOESN'T MATTER ANYMORE (AND MAYBE NEVER DID)

Unfortunately, pipelining was conceived of and implemented before broadband connections were widely utilized as a method of accessing the Internet. Back then, the RTT was significant enough to have a negative impact on application and web site performance, and the overall user experience was improved by the use of pipelining. Today, however, most folks have a comfortable speed at which they access the Internet, and the RTT impact on most web applications' performance, despite the increasing number of objects per page, is relatively low.

There is no arguing, however, that some reduction in time to load is better than none. Too, anyone who's had to access the Internet via high latency links can tell you anything that makes that experience faster has got to be a Good Thing. So what's the problem?

The problem is that pipelining isn't actually treated any differently on the server than regular old persistent connections. In fact, the HTTP 1.1 specification requires that a "server MUST send its responses to those requests in the same order that the requests were received." In other words, the responses are returned serially, despite the fact that some web servers may actually process those requests in parallel.
Because the server MUST return responses in the order the requests were received, it has to do some extra processing to ensure compliance with this part of the HTTP 1.1 specification. It has to queue up the responses and make certain they are returned properly, which essentially negates the performance gained by reducing the number of round trips using pipelining. Depending on the order in which requests are sent, if a request requiring particularly lengthy processing - say a database query - were sent relatively early in the pipeline, this could actually cause a degradation in performance because all the other responses have to wait for the lengthy one to finish before they can be sent back.

Application intermediaries such as proxies, application delivery controllers, and general load balancers can and do support pipelining, but they, too, will adhere to the protocol specification and return responses in the proper order according to how the requests were received. This limitation on the server side actually inhibits a potentially significant boost in performance, because we know that processing dynamic requests takes longer than processing a request for static content. If this limitation were removed, it is possible that the server would become more efficient and the user would experience non-trivial improvements in performance. Or, if intermediaries were smart enough to rearrange requests such that their execution was optimized (I seem to recall I was required to design and implement a solution to a similar example in graduate school), then we'd maintain the performance benefits gained by pipelining. But that would require an understanding of the application that goes far beyond what even today's most intelligent application delivery controllers are capable of providing.

THE SILVER LINING

At this point it may be fairly disappointing to learn that HTTP pipelining today does not result in as significant a performance gain as it might at first seem to offer (except over high latency links like satellite or dial-up, which are rapidly dwindling in usage). But that may very well be a good thing.

As miscreants have become smarter and more intelligent about exploiting protocols and not just application code, they've learned to take advantage of the protocol to "trick" servers into believing their requests are legitimate, even though the desired result is usually malicious. In the case of pipelining, it would be a simple thing to exploit the capability to enact a layer 7 DoS attack on the server in question. Because pipelining assumes that requests will be sent one after the other and that the client is not waiting for the response until the end, the server would have a difficult time distinguishing between someone attempting to consume resources and a legitimate request. Consider that the server has no understanding of a "page". It understands individual requests. It has no way of knowing that a "page" consists of only 50 objects, and therefore a client pipelining requests for the maximum allowed - by default 100 for Apache - may not be seen as out of the ordinary. Several clients opening connections and pipelining hundreds or thousands of requests every second without caring if they receive any of the responses could quickly consume the server's resources or available bandwidth and result in a denial of service to legitimate users.
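To make the mechanics concrete, here is a minimal sketch of what pipelining looks like from the client side: several requests are written to one TCP connection before any response is read. The host name is a placeholder and the point is illustration only; run it solely against a server you control.

import socket

HOST, COUNT = "example.test", 5   # placeholder host, arbitrary request count

request = "GET / HTTP/1.1\r\nHost: {h}\r\nAccept: */*\r\n\r\n".format(h=HOST)

with socket.create_connection((HOST, 80), timeout=5) as sock:
    # Pipelining: every request is sent before a single response is read.
    sock.sendall((request * COUNT).encode("ascii"))

    # The server must return the responses in the order the requests arrived,
    # so a client that never reads (or reads slowly) ties up server resources.
    received = b""
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            received += chunk
    except socket.timeout:
        pass

print(received.count(b"HTTP/1.1 "), "responses received")

Nothing in this exchange violates the specification, which is exactly why the resource-consumption abuse described above is hard to tell apart from a legitimate, if aggressive, client.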
So perhaps the fact that pipelining is not really all that useful to most folks is a good thing, as server administrators can disable the feature without too much concern and thereby mitigate the risk of the feature being leveraged as an attack method against them.

Pipelining as it is specified and implemented today is more of a security risk than it is a performance enhancement. There are, however, tweaks to the specification that could be made in the future that might make it more useful. Those tweaks do not address the potential security risk, however, so perhaps, given that there are so many other optimization and acceleration techniques that improve performance while incurring no measurable security risk, we should simply let sleeping dogs lie.

IMAGES COURTESY WIKIPEDIA COMMONS
The Evolution of TCP

#sdas #webperf Like everything about the Internet, TCP keeps on changing.

In 1974 a specification was developed that would, eventually, launch what we know of as "The Internet." That specification was TCP, and though it is often overshadowed by HTTP as the spark that lit the fire under the Internet, without TCP, HTTP wouldn't have a (transport) leg to stand on. Still, it would take 10 years before the Internet (all 1000 hosts of it) converted en masse to using TCP/IP for its messaging. Once that adoption had occurred, it was a much shorter leap for the development of HTTP to occur and then, well, your refrigerator or toaster can probably tell you the rest at this point.

That makes TCP 40 years old this year. And despite the old adage that you can't teach an old dog new tricks, TCP has continued to evolve right along with the needs of the applications that rely so heavily on its reliable transport mechanism. Consider the additions, modifications, enhancements and changes that have been made just in the past 20 years or so. In total, there are well over 100 different RFCs associated with TCP that offer ways to improve performance or reliability, or support new technology like mobile networks and devices.

Like SSL and even HTTP, TCP must continue to evolve. Unlike the Internet of 1984, which boasted a mere 1000 hosts, the ISC domain survey puts the number of hosts in January 2013 at well over 1 billion. That means to move to some other transport protocol we'd have to coordinate, well, more than seems humanly possible. Especially given that this doesn't count the switch required on the client side - which numbers in the 2.4 billion range according to Internet World Stats, which tracks growth of Internet users worldwide.

What this means, practically, is that TCP must evolve because we can't just move to something else. Not without a long-term transition plan - and we see how well that's working for the move to IPv6, despite the depletion of IPv4 addresses.

So it shouldn't be a surprise when new, TCP-related technologies like MPTCP (Multipath TCP) are introduced to address challenges that continue to crop up as new devices, networks and applications pop up like dandelions in your front yard. Nor should we be surprised when support for such protocols, like SPDY before them, is offered to the market with a transitional approach, i.e. gateways and dual-stack supporting proxies. The only real way to support the use of what would otherwise be a disruptive technology without disrupting the billions of hosts and users that communicate each and every day is to provide strategic transitional protocol points of control that enable transparent and (one hopes) seamless support as hosts and devices begin to support it.

TCP isn't going anywhere; it's too critical to the entire Internet at this point. But that doesn't mean it's static or inflexible. It is constantly evolving to meet the needs of the next generation of devices, networks and applications that rely upon it.
Achieving Scalability Through Fewer Resources

Sometimes it's not about how many resources you have but how you use them.

The premise upon which scalability through cloud computing and highly virtualized architectures is built is the rapid provisioning of additional resources as a means to scale out to meet demand. That premise is a sound one, and it is a successful tactic in implementing a scalability strategy. But it's not the only tactic that can be employed to achieve scalability, and it's certainly not the most efficient means by which demand can be met.

WHAT HAPPENED to EFFICIENCY?

One of the primary reasons cited in surveys regarding cloud computing drivers is that of efficiency. Organizations want to be more efficient as a means to better leverage the resources they do have and to streamline the processes by which additional resources are acquired and provisioned when necessary. But somewhere along the line it seems we've lost sight of enabling higher levels of efficiency for existing resources and have, in fact, often ignored that particular goal in favor of simplifying the provisioning process. After all, if scalability is as easy as clicking a button to provision more capacity in the cloud, why wouldn't you? The answer is, of course, that it's not as efficient and in some cases it may be an unnecessary expense.

The danger with cloud computing and automated, virtualized infrastructures is in the tendency to react to demand for increases in capacity as we've always reacted: throw more hardware at the problem. While in the case of cloud computing and virtualization this has morphed from hardware to "virtual hardware", the result is the same - we're throwing more resources at the problem of increasing demand. That's not necessarily the best option, and it's certainly not the most efficient use of the resources we have on hand.

There are certainly efficiency gains in this approach; there's no arguing that. The process for increasing capacity can go from a multi-week, many man-hour manual process to an automated process of an hour or less that decreases the operational and capital expenses associated with increasing capacity. But if we want to truly take advantage of cloud computing and virtualization we should also be looking at optimizing the use of the resources we have on hand, for often it is the case that we have more than enough capacity, it simply isn't being used to its full potential.

CONNECTION MANAGEMENT

Discussions of resource management generally include compute, storage, and network resources. But they often fail to include connection management. That's a travesty, as TCP connection usage increases dramatically with modern application architectures, and TCP connections are resource heavy; they consume a lot of RAM and CPU on web and application servers to manage. In many cases the TCP connection management duties of a web or application server are by far the largest consumers of resources; the application itself actually consumes very little on a per-user basis. Optimizing those connections - or the use of those connections - should therefore be a priority for any efficiency-minded organization, particularly those interested in reducing the operational costs associated with scalability and availability.

As is often the case, the tool that can make the use of TCP connections more efficient is likely already in the data center and has merely been overlooked: the application delivery controller.
The reason for this is simple: most organizations acquire an application delivery controller (ADC) for its load balancing capabilities and tend to ignore all the bells and whistles and additional features (value) it can provide. Load balancing is but one feature of application delivery; there are many more that can dramatically impact the capacity and performance of web applications if they are employed as part of a comprehensive application delivery strategy.

An ADC provides the means to perform TCP multiplexing (a.k.a. server offload, a.k.a. connection management). TCP multiplexing allows the ADC to maintain millions of connections with clients (users) while requiring only a fraction of that number to the servers. By reusing existing TCP connections to web and application servers, an ADC eliminates the overhead in processing time associated with opening, managing, and closing TCP connections every time a user accesses the web application. If you consider that most applications today are Web 2.0 and employ a variety of automatically updating components, you can easily see that eliminating the TCP management for the connections required to perform those updates will decrease not only the number of TCP connections required on the server side but also the time associated with such a process, meaning better end-user performance.

INCREASE CAPACITY by DECREASING RESOURCE UTILIZATION

Essentially we're talking about increasing capacity by decreasing resource utilization without compromising availability or performance. This is an application delivery strategy that requires a broader perspective than is generally available to operations and development staff. The ability to recognize a connection-heavy application and subsequently employ the optimization capabilities of an application delivery controller to improve the efficiency of resource utilization for that application requires a more holistic view of the entire architecture. Yes, this is the realm of devops, and it is in this realm that the full potential of application delivery will be realized. It will take someone well-versed in both network and application infrastructure to view the two as part of a larger, holistic delivery architecture in order to assess the situation and determine that optimization of connection management will benefit the application, not only as a means to improve performance but to increase capacity without increasing associated server-side resources.

Efficiency through optimization of resource utilization is an excellent strategy for improving the overall delivery of applications whilst simultaneously decreasing costs. It doesn't require cloud or virtualization; it simply requires a better understanding of applications and their underlying infrastructure, and optimizing the application delivery infrastructure such that its innate behavior is made more efficient without negatively impacting performance or availability. Leveraging TCP multiplexing is a simple method of optimizing connection utilization between clients and servers that can dramatically improve resource utilization and immediately increase the capacity of existing "servers". Organizations looking to improve their bottom line and do more with less ought to closely evaluate their application delivery strategy and find those places where resource utilization can be optimized as a way to improve the efficiency of existing resources before embarking on a "throw more hardware at the problem" initiative.
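As a small client-side illustration of why connection reuse matters (this is not the ADC-side multiplexing feature itself, and the URL is a placeholder for a server you control), compare opening a new TCP connection per request with reusing one keep-alive connection:

import time
import requests

URL, COUNT = "http://example.test/", 50   # placeholder target and request count

start = time.perf_counter()
for _ in range(COUNT):
    requests.get(URL)                     # new TCP connection for every request
without_reuse = time.perf_counter() - start

start = time.perf_counter()
with requests.Session() as session:       # keep-alive: connections are pooled and reused
    for _ in range(COUNT):
        session.get(URL)
with_reuse = time.perf_counter() - start

print("no reuse: %.2fs   reuse: %.2fs" % (without_reuse, with_reuse))

The gap between the two numbers is, in miniature, the overhead an ADC removes from the servers behind it by multiplexing many client-side connections onto a few long-lived server-side connections.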
Long Live(d) AJAX
Cloud Lets You Throw More Hardware at the Problem Faster
WILS: Application Acceleration versus Optimization
Two Different Sock(et)s
What is server offload and why do I need it?
3 Really good reasons you should use TCP multiplexing
SOA and Web 2.0: The Connection Management Challenge
The Impact of the Network on AJAX
The Impact of AJAX on the Network
Apple iPad Pushing Us Closer to Internet Armageddon

Apple's latest "i" hit over a million sales in the first 28 days it was available. Combine that with sales of other Internet-enabled devices like the iPhone, Android, Blackberry, and other "smart" phones, as well as the continued growth of Internet users in general (via cable and other broadband access technologies), and we are heading toward the impending cataclysm that is IPv4 address depletion.

Sound like hyperbole? It shouldn't. The depletion of IPv4 addresses is imminent, growing closer every day, and it is that depletion that will cause a breakdown in the ability of consumers to access the myriad services offered via the Internet, many of which they have come to rely upon. Every additional consumer, device, and endpoint exacerbates the slide toward what will be, if we aren't careful, a falling out between IPv6-only consumers and IPv4-only producers (and vice versa) that will cause a breakdown in communication that can essentially only be called "Internet Armageddon."
I am in your HTTP headers, attacking your application

Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser known applications and attack vectors. These exploits are potentially more dangerous because once proven through a successful attack on these lesser known applications, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, the SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

POST /roundcube/bin/html2text.php HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Host: xx.xx.xx.xx
Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
Content-Length: 54

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header in order to retrieve some data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system that could be used in a cross-site scripting attack, deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult - developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with new ways to exploit some aspect of an application or of transport layer protocols.

Think HTTP headers aren't generally used by applications? Consider the use of the custom SOAPAction header for SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code.

So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, its use to target specific applications means it is a possible attack vector for the future against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might, and apparently they have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
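As a concrete (and deliberately simplified) illustration of that rule, the sketch below allow-lists the Accept header before an application ever evaluates it; the pattern only approximates the RFC 2616 section 14.1 grammar, and the length limit is an arbitrary choice:

import re

# A media range such as "text/html", "application/*" or "*/*", optionally
# followed by simple parameters like ";q=0.8". Quoted-string parameters and
# other corner cases are intentionally left out of this sketch.
_MEDIA_RANGE = re.compile(
    r"^(\*/\*|[\w!#$%&'*+.^`|~-]+/(\*|[\w!#$%&'*+.^`|~-]+))"
    r"(\s*;\s*[\w!#$%&'*+.^`|~-]+=[\w.-]+)*$"
)

def accept_header_is_sane(value, max_len=256):
    """Return True only when every comma-separated element looks like a media range."""
    if not value or len(value) > max_len:
        return False
    return all(_MEDIA_RANGE.match(part.strip()) for part in value.split(","))

print(accept_header_is_sane("text/html,application/xhtml+xml;q=0.9"))   # True
print(accept_header_is_sane(
    "ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw=="))  # False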
Treating all input as suspect is a good rule of thumb for protecting against malicious payloads anyway, but it is an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused to the point it's considered typical, of course).

Possible ways to prevent the potential exploitation of HTTP headers:

Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers.
Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using and ensure they are legitimate values before using them in any way.

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.

Related articles by Zemanta
Gmail Is Vulnerable to Hackers
The Concise Guide to Proxies
3 reasons you need a WAF even though your code is (you think) secure
Stop brute forcing listing of HTTP OPTIONS with network-side scripting
What's the difference between a web application and a blog?
4 Reasons We Must Redefine Web Application Security

Mike Fratto loves to tweak my nose about web application security. He's been doing it for years, so it's (d)evolved to a pretty standard set of arguments. But after he tweaked the debate again in a tweet, I got to thinking that part of the problem is the definition of web application security itself.

Web application security is almost always about the application (I know, duh! but bear with me) and therefore about the developer and secure coding. Most of the programmatic errors that lead to vulnerabilities and subsequent exploitation can be traced to a lack of secure coding practices, particularly around the validation of user input (which should never, ever be trusted). Whether it's XSS (Cross-Site Scripting) or SQL injection, the root of the problem is that malicious data or code is submitted to an application and not properly ferreted out by sanitization routines written by developers, for whatever reason.

But there are a number of "web application" attacks that have nothing to do with developers and code, and are, in fact, more focused on the exploitation of protocols. TCP and HTTP can be easily manipulated in such a way as to result in a successful attack on an application without breaking any RFC or W3C standard that specifies the proper behavior of these protocols. Worse, the application developer really can't do anything about these types of attacks because they aren't aware these attacks are occurring.

It is, in fact, the four layers of the TCP/IP stack - the foundation for web applications - that are the reason we need to redefine web application security and expand it to include the network where it makes sense. Being a "web" application necessarily implies network interaction, which implies reliance on the network stack, which implies potential exploitation of the protocols upon which applications and their developers rely but have little or no control over. Web applications ride atop the OSI stack, above and beyond the network and application model on which all web applications are deployed. But those web applications have very little control over or visibility into that stack, and network-focused security (firewalls, IDS/IPS) has very little awareness of the application.

Let's take a Layer 7 DoS attack as an example. L7 DoS works by exploiting the nature of HTTP 1.1 and its ability to essentially pipeline multiple HTTP requests over the same TCP connection. Ostensibly this behavior reduces the overhead on the server associated with opening and closing TCP connections and thus improves the overall capacity and performance of web and application servers. So far, all good.

But this behavior can be used against an application. By requesting a legitimate web page using HTTP 1.1, TCP connections on the server are held open waiting for additional requests to be submitted. And they'll stay open until the client sends the appropriate HTTP Connection: close header with a request or the TCP session itself times out based on the configuration of the web or application server.

Now, imagine a single source opens a lot - say thousands - of pages. They are all legitimate requests; there's no malicious data involved or incorrect usage of the underlying protocols. From the application's perspective this is not an attack. And yet it is an attack. It's a basic resource consumption attack designed to chew up all available TCP connections on the server such that legitimate users cannot be served.
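Here is a toy sketch of the kind of per-client bookkeeping needed to spot that pattern; the data structure and thresholds are invented for illustration, not taken from any product:

import time
from collections import defaultdict

MAX_IDLE_CONNECTIONS = 50    # hypothetical threshold
IDLE_AFTER_SECONDS = 30      # hypothetical idle window

# client_ip -> {connection_id: timestamp of the last request seen}
session_table = defaultdict(dict)

def connection_opened(client_ip, conn_id):
    session_table[client_ip][conn_id] = time.time()

def request_seen(client_ip, conn_id):
    session_table[client_ip][conn_id] = time.time()

def connection_closed(client_ip, conn_id):
    session_table[client_ip].pop(conn_id, None)

def looks_like_l7_dos(client_ip):
    """Flag a client holding many connections open without sending requests."""
    now = time.time()
    idle = [c for c, last in session_table[client_ip].items()
            if now - last > IDLE_AFTER_SECONDS]
    return len(idle) > MAX_IDLE_CONNECTIONS

Nothing in that table is visible from inside a single request handler, which is exactly the gap described next.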
The distinction between legitimate users and legitimate requests is paramount to understanding why web application security isn't always just about the application; sometimes it's about protocols and behavior external to the application that cannot be controlled, let alone detected, by the application or its developers. The developer, in order to detect such a misuse of the HTTP protocol, would need to keep what we in the network world call a "session table". Web and application servers keep this information - they have to - but they don't make it accessible to the developer. Basically, the developer's viewpoint is from inside a single session, dealing with a single request, dealing with a single user. The developer, and the application, have a very limited view of the environment in which the application operates.

A web application firewall, however, necessarily has access to its session table. It operates with a viewpoint external to the application, with the ability to understand the "big picture". It can detect when a single user is opening three or four or thousands of connections and not doing anything with them. It understands that the user is not legitimate because the user's behavior is out of line with how a normal user interacts with an application. A web application firewall has the context in which to evaluate the behavior and requests and realize that a single user opening multiple connections is not legitimate after all.

Not all web application security is about the application. Sometimes it's about the protocols and languages and platforms supporting the delivery of that application. And when the application lacks visibility into those supporting infrastructure environments, it's pretty damn difficult for the application developer to use secure coding to protect against exploitation of those facets of the application. We really have to stop debating where web application security belongs and go back to the beginning and redefine what it means in a web-driven, distributed world if we're going to effectively tackle the problem any time this century.