web application security
F5 Friday: It is now safe to enable File Upload
Web 2.0 is about sharing content – user generated content. How do you enable that kind of collaboration without opening yourself up to the risk of infection? Turns out developers and administrators have a couple of options…

The goal of many a miscreant is to get files onto your boxen. The next step is often remote execution, or merely the hope that someone else will look at/execute the file and spread chaos (and viruses) across your internal network. It’s a malicious intent, to be sure, and it makes developing/deploying Web 2.0 applications a risky proposition. After all, Web 2.0 is about collaboration and sharing of content, and if you aren’t allowing the latter it’s hard to enable the former. Most developers know about and have used the ability to upload files of just about any type through a web form. Photos, documents, presentations – these types of content are almost always shared through an application that takes advantage of the ability to upload data via a simple web form. But if you allow users to share legitimate content, it’s a sure bet (more sure even than answering “yes” to the question “Will it rain in Seattle today?”) that miscreants will find and exploit the ability to share content. Needless to say, information security professionals are therefore not particularly fond of this particular “feature” and in some organizations it is strictly verboten (that’s forbidden for you non-German speakers). So wouldn’t it be nice if developers could continue to leverage this nifty capability to enable collaboration? Well, all you really need to do is integrate with an anti-virus scanning solution and only accept that content which is deemed safe, right? After all, that’s good enough for e-mail systems, and developers should be able to argue that the same should be good enough for web content, too. The bigger problem is in the integration. Luckily, ICAP (Internet Content Adaptation Protocol) is a fairly ready answer to that problem.

SOLUTION: INTEGRATE ANTI-VIRUS SCANNING via ICAP

The Internet Content Adaptation Protocol (ICAP) is a lightweight HTTP based protocol specified in RFC 3507 designed to off-load specific content to dedicated servers, thereby freeing up resources and standardizing the way in which features are implemented. ICAP is generally used in proxy servers to integrate with third party products like antivirus software, malicious content scanners and URL filters. ICAP in its most basic form is a "lightweight" HTTP based remote procedure call protocol. In other words, ICAP allows its clients to pass HTTP based (HTML) messages (Content) to ICAP servers for adaptation. Adaptation refers to performing the particular value added service (content manipulation) for the associated client request/response.
-- Wikipedia, ICAP

Now obviously developers can take advantage of ICAP and integrate with an anti-virus scanning solution directly. All that’s required is to extract every file in a multi-part request, send each of them to an AV-scanning service, and determine based on the result whether to continue processing or toss those bits into /dev/null. This is assuming, of course, that it can be integrated: packaged applications may not offer the ability, and even open-source software that ostensibly does may be written in a language, or use frameworks, that require skills the organization simply does not have. Or perhaps the cost over time of constantly modifying the application after every upgrade/patch is just not worth the effort.
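Where direct integration is feasible, the application-side flow is conceptually simple: pull each uploaded file out of the multipart payload, hand it to the scanner, and keep only what comes back clean. The sketch below (Python) illustrates that shape; the ICAP server host, port, and service path are assumptions, and the scan call itself is left as a stub for a real RFC 3507 REQMOD/RESPMOD client or a third-party ICAP library.

from email.parser import BytesParser
from email.policy import default

ICAP_SERVER = ("av-scan.example.internal", 1344)   # hypothetical ICAP/AV host and port
ICAP_SERVICE = "/avscan"                           # hypothetical ICAP service path

def scan_with_icap(filename, data):
    """Return True if the AV service deems the content clean.
    A real implementation would open a connection to ICAP_SERVER, send an
    RFC 3507 REQMOD (or RESPMOD) request with the file chunk-encoded, and
    inspect the ICAP status and headers in the reply."""
    raise NotImplementedError("wire up a real ICAP client or library here")

def extract_clean_files(content_type, body):
    """Yield (filename, data) for every uploaded file that passes scanning."""
    # Re-attach the Content-Type header so the multipart boundary is known.
    raw = b"Content-Type: " + content_type.encode() + b"\r\n\r\n" + body
    message = BytesParser(policy=default).parsebytes(raw)
    for part in message.iter_parts():
        filename = part.get_filename()
        if not filename:
            continue                      # ordinary form fields, not files
        data = part.get_payload(decode=True)
        if scan_with_icap(filename, data):
            yield filename, data          # safe to hand to the application
        # else: drop the part -- the proverbial /dev/null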
For applications for which you can add this integration, it should be fairly simple, as developers are generally familiar with HTTP and RPC and understand how to use “services” in their applications. Of course, this being an F5 Friday post, you can probably guess that I have an alternative (and of course more efficient) solution than integration into the code. An external solution that works for custom as well as packaged applications and requires a lot less long-term maintenance – a WAF (Web Application Firewall).

BETTER SOLUTION: web application firewall INTEGRATION

The latest, greatest version (v10.2) of F5 BIG-IP Application Security Manager (ASM) included a little-touted feature that makes integration with an ICAP-enabled anti-virus scanning solution take approximately 15.7 seconds to configure (YMMV). Most of that time is likely logging in and navigating to the right place. The rest is typing the information required (server host name, IP address, and port number) and hitting “save”.

F5 Application Security Manager (ASM) v10 includes easy integration with A/V solutions

It really is that simple. The configuration is actually an HTTP “class”, which can be thought of as a classification of sorts. In most BIG-IP products a “class” defines a type of traffic closely based on a specific application protocol, like HTTP. It’s quite polymorphic in that defining a custom HTTP class inherits the behavior and attributes of the “parent” HTTP class; your configuration extends that behavior and attributes, and in some cases allows you to override default (parent) behavior. The ICAP integration is derived from an HTTP class, so it can be “assigned” to a virtual server, a URI, a cookie, etc… In most ASM configurations an HTTP class is assigned to a virtual server and therefore it sees all requests sent to that server. In such a configuration ASM sees all traffic and thus every file uploaded in a multipart payload, and will automatically extract it and send it via ICAP to the designated anti-virus server where it is scanned. The action taken upon a positive result, i.e. the file contains bad juju, is configurable. ASM can block the request and present an informational page to the user while logging the discovery internally, externally or both. It can also forward the request, virus and all, to the web/application server and log it, allowing the developer to determine how best to proceed. ASM can be configured to never allow requests that have not been scanned for viruses to reach the web/application server using the “Guarantee Enforcement” option. When configured, if the anti-virus server is unavailable or doesn’t respond, requests will be blocked. This allows administrators to configure a “fail closed” option that absolutely requires AV scanning before a request can be processed.

A STRATEGIC POINT of CONTROL

Leveraging a strategic point of control to provide AV scanning integration and apply security policies regarding the quality of content has several benefits over its application-modifying, code-based integration cousin:
Allows integration of AV scanning in applications for which it is not feasible to modify the application, for whatever reason (third-party, lack of skills, lack of time, long-term maintenance after upgrades/patches)
Reduces the resource requirements of web/application servers by offloading the integration process and only forwarding valid uploads to the application.
In a cloud-based or other pay-per-use model this reduces costs by eliminating the processing of invalid requests by the application.
Aggregates logging/auditing and provides consistency of logs for compliance and reporting, especially to prove “due diligence” in preventing infection.

Related Posts
All F5 Friday Entries on DevCentral
All About ASM
Clickjacking Protection Using X-FRAME-OPTIONS Available for Firefox

But browser support is only half the solution; don’t forget to implement the server side, too.

Clickjacking, unlike more well-known (and understood) web application vulnerabilities, has received scant attention despite its risks and its usage. Earlier this year, for example, it was used as an attack on Twitter, but never really discussed as being a clickjacking attack. Maybe that’s because, aside from rewriting applications to prevent CSRF (adding nonces and validation of the same to every page) or adding framekillers, there just haven’t been many other options to prevent the attack technique from being utilized against users. Too, it is one of the more convoluted attack methods out there, so it would be silly to expect non-technical media to understand it, let alone explain how it works to their readers. There is, however, a solution on the horizon. IE8 has introduced an opt-in measure that allows developers – or whomever might be in charge of network-side scripting implementations – to prevent clickjacking on vulnerable pages using a custom HTTP header that prevents them from being “framed” inappropriately: X-FRAME-OPTIONS. The behavior is described in the aforementioned article as:

If the X-FRAME-OPTIONS value contains the token DENY, IE8 will prevent the page from rendering if it will be contained within a frame. If the value contains the token SAMEORIGIN, IE will block rendering only if the origin of the top level-browsing-context is different than the origin of the content containing the X-FRAME-OPTIONS directive. For instance, if http://shop.example.com/confirm.asp contains a DENY directive, that page will not render in a subframe, no matter where the parent frame is located. In contrast, if the X-FRAME-OPTIONS directive contains the SAMEORIGIN token, the page may be framed by any page from the exact http://shop.example.com origin.

But that’s only IE8, right? Well, natively, yes. But a development version of NoScript has been released that supports the X-FRAME-OPTIONS header and will provide the same protections as are natively achieved in IE8. The problem is that this is only half the equation: the X-FRAME-OPTIONS header needs to exist before the browser can act on it and the preventive measure against clickjacking can be completed. As noted in the Register, “some critics have contended the protection will be ineffective because it will require millions of websites to update their pages with proprietary code.” That’s not entirely true, as there is another option that will provide support for X-FRAME-OPTIONS without updating pages/applications/sites with proprietary code: network-side scripting. The “proprietary” nature of custom HTTP headers is also debatable, as support for Firefox was provided quickly via NoScript, and if the technique is successful it will likely be adopted by other browser makers.

HOW-TO ADD X-FRAME-OPTIONS TO YOUR APPLICATION – WITH or WITHOUT CODE CHANGES

Step 1: Add the custom HTTP header “X-FRAME-OPTIONS” with a value of “DENY” or “SAMEORIGIN” before returning a response to the client.

Really, that’s it. The browser takes care of the rest for you. OWASP has a great article on how to implement a ClickjackFilter for JavaEE, and there are sure to be many more blogs and articles popping up describing how one can implement such functionality in their language-of-choice. Even without such direct “how-to” articles and code samples, it is merely a matter of adding a new custom HTTP header – examples of which ought to be easy enough to find.
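As one hedged illustration of the code-change route (the article itself is language-agnostic, and the class and parameter names here are invented for the example), a small Python WSGI middleware can stamp the header onto every response without touching individual pages:

class XFrameOptionsMiddleware:
    """Append X-FRAME-OPTIONS to every response produced by a WSGI app."""
    def __init__(self, app, policy="SAMEORIGIN"):   # or "DENY"
        self.app = app
        self.policy = policy

    def __call__(self, environ, start_response):
        def add_header(status, headers, exc_info=None):
            # Drop any existing copy, then add the clickjacking directive.
            headers = [(k, v) for k, v in headers if k.lower() != "x-frame-options"]
            headers.append(("X-FRAME-OPTIONS", self.policy))
            return start_response(status, headers, exc_info)
        return self.app(environ, add_header)

# usage: application = XFrameOptionsMiddleware(application, policy="DENY")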
Similarly, a solution can be implemented using network-side scripting that requires no modification to applications. In fact, this can be accomplished via iRules in just one line of code (using “DENY” here; substitute “SAMEORIGIN” if framing by pages from your own origin should be allowed):

when HTTP_RESPONSE { HTTP::header insert "X-FRAME-OPTIONS" "DENY" }

I believe the mod_rewrite network-side script would be as simple, but as I am not an expert in mod_rewrite I will hope someone who is will leave an appropriate example as a comment or write up a blog/article and leave a pointer to it. A good reason to utilize the agility of network-side scripting solutions in this case is that it is not necessary to modify each application requiring protection, which takes time to implement, test, and deploy. An even better reason is that a single network-side script can protect all applications, regardless of language and deployment platform, without a lengthy development and deployment cycle. Regardless of how you add the header, it would be a wise idea to add it as a standard part of your secure-code deployment requirements (you do have those, don’t you?) because it doesn’t hurt anything for the custom HTTP header to exist, and visitors using X-FRAME-OPTIONS-enabled browsers/solutions will be a lot safer than without it.

Stop brute force listing of HTTP OPTIONS with network-side scripting
Jedi Mind Tricks: HTTP Request Smuggling
I am in your HTTP headers, attacking your application
Understanding network-side scripting
9 ways to use network-side scripting to architect faster, scalable, more secure applications
Persistent Threat Management

#dast #infosec #devops A new operational model for security operations can dramatically reduce risk

Examples of devops focus a lot on provisioning and deployment configuration. Rarely mentioned is security, even though there is likely no better example of why devops is something you should be doing. That’s because, aside from challenges rising from the virtual machine explosion inside the data center, there’s no other issue that better exemplifies the inability of operations to scale manually to meet demand than web application security. Attacks today are persistent and scalable thanks to the rise of botnets, push-button mass attacks, and automation. Security operations, however, continues to be hampered by manual response processes that simply do not scale fast enough to deal with these persistent threats. Tools that promise to close the operational gap between discovery and mitigation for the most part continue to rely upon manual configuration and deployment. Because of the time investment required, organizations focus on securing only the most critical of web applications, leaving others vulnerable and open to exploitation. Two separate solutions – DAST and virtual patching – come together to offer a path to meeting this challenge head on, where it lives, in security operations. Through integration and codification of vetted mitigations, persistent threat management enables the operationalization of security operations.

A New Operational Model

DAST, according to Gartner, “locates vulnerabilities at the application layer to quickly and accurately give security team’s insight into what vulnerabilities need to be fixed and where to find them.” Well-known DAST providers like WhiteHat Security and Cenzic have long expounded upon scanning early and often and on the need to address the tendency of organizations to leave applications vulnerable despite the existence of well-known mitigating solutions – both from developers and infrastructure. Virtual patching is the process of employing a WAF-based mitigation to virtually “patch” a security vulnerability in a web application. Virtual patching takes far less time and effort than application modification, and is thus often used as a temporary mitigation that gives developers or vendors time to address the vulnerability while reducing the risk of exploitation sooner rather than later. Virtual patching has generally been accomplished through the integration of DAST and WAF solutions. Push a button here, another one there, and voila! Application is patched. But this process is still highly manual and has required human intervention to validate the mitigation as well as deploy it. This process does not scale well when an organization with hundreds of applications may be facing 7-12 vulnerabilities per application. Adoption of agile development methodologies has made this process more cumbersome, as releases are pushed to production more frequently, requiring scanning and patching again and again and again. The answer is to automate the discovery and mitigation process for the 80% of vulnerabilities for which there are known, vetted mitigating policies. This relieves the pressure on security ops and allows them to effectively scale to cover all web applications rather than just those deemed critical by the business.

AGILE meets OPS

This operational model exemplifies the notion of applying agile methodologies to operations, a.k.a. devops.
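To make the 80/20 idea concrete, here is a deliberately simplified sketch of the kind of codified mapping such automation relies on. The finding fields and policy names are invented for illustration; an actual DAST-to-WAF integration would use the vendors' own export formats and policy APIs.

VETTED_MITIGATIONS = {
    # vulnerability class -> pre-approved WAF mitigation policy (names are hypothetical)
    "sql_injection":      "block-sqli-signatures",
    "xss_reflected":      "block-xss-signatures",
    "slow_http_post":     "enforce-minimum-request-rate",
    "http_fragmentation": "enforce-header-completion-timeout",
}

def triage(findings):
    """Split DAST findings into auto-patchable and needs-a-human buckets."""
    auto, manual = [], []
    for finding in findings:                  # e.g. {"class": "sql_injection", "url": "/login"}
        policy = VETTED_MITIGATIONS.get(finding["class"])
        if policy:
            auto.append({"url": finding["url"], "policy": policy})
        else:
            manual.append(finding)            # the ~20% security ops must review by hand
    return auto, manual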
Continuous iterations of a well-defined process ensure better, more secure applications and free security ops to focus on the 20% of threats that cannot be addressed automatically. This enables operations to scale and provide the same (or better) level of service, something that’s increasingly difficult as the number of applications and clients that must be supported explodes. A growing reliance on virtualization to support cloud computing as well as the proliferation of devices may make for more interesting headlines, but security processes will also benefit from operations adopting devops. An increasingly robust ecosystem of integrated solutions that enable a more agile approach to security by taking advantage of automation and best practices will be a boon to organizations struggling to keep up with the frenetic pace set by attackers.

Drama in the Cloud: Coming to a Security Theatre Near You
Cloud Security: It’s All About (Extreme Elastic) Control
Devops is a Verb
Ecosystems are Always in Flux
This is Why We Can’t Have Nice Things
Application Security is a Stack
Devops is Not All About Automation
Application Security is a Stack

#infosec #web #devops There’s the stuff you develop, and the stuff you don’t. Both have to be secured.

On December 22, 1944, the German General von Lüttwitz sent an ultimatum to Gen. McAuliffe, whose forces (the Screaming Eagles, in case you were curious) were encircled in the city of Bastogne. McAuliffe’s now famous reply was, “Nuts!” which so confounded the German general that it gave the 101st time to hold off the Germans until reinforcements arrived four days later. This little historical tidbit illustrates perfectly the issue with language, and how it can confuse two different groups of people who interpret even a simple word like “nuts” in different ways. In the case of information security, such a difference can have as profound an impact as that of McAuliffe’s famous reply.

Application Security

It may have been noted by some that I am somewhat persnickety with respect to terminology. While of late this “word rage” has been focused (necessarily) on cloud and related topics, it is by no means constrained to that technology. In fact, when we look at application security we can see that the way in which groups in IT interpret the term “application” has an impact on how they view application security. When developers hear the term “application security” they of course focus in on those pieces of an application over which they have control: the logic, the data, the “stuff” they developed. When operations hears the term “application security” they necessarily do (or should) view the term in a much broader sense. To operations the term “application” encompasses not only what developers tossed over the wall, but its environment and the protocols being used. Operations must view application security in this light, because an increasing number of “application” layer attacks focus not on vulnerabilities that exist in code, but on manipulation of the protocol itself as well as the metadata that is inherently a part of the underlying protocol. The result is that the “application layer” is really more of a stack than a singular entity, much in the same way the transport layer implies not just TCP, but UDP as well, and all that goes along with both. Layer 7 comprises both the code tossed over the wall by developers and the operational components, and that makes “application security” a much broader – and more difficult – term to interpret. Differences in interpretation are part of what causes a reluctance for dev and ops to cooperate. When operations starts talking about “application security” issues, developers hear what amounts to an attack on their coding skills and promptly ignore whatever ops has to say. Acknowledging that application security is a stack and not a single entity enables both dev and ops to cooperate on application layer security without egregiously (and unintentionally) offending one another.

Cooperation and Collaboration

But more than that, recognizing that “application” is really a stack ensures that a growing vector of attack does not go ignored. Protocol and metadata manipulation attacks are a dangerous source of DDoS and other disruptive attacks that can interrupt business and have a real impact on the bottom line. Developers do not necessarily have the means to address protocol or behavioral (and in many cases, metadata) based vulnerabilities.
This is because application code is generally written within a framework that encapsulates (and abstracts away) the protocol stack, and because an individual application instance does not have the visibility into client-side behavior necessary to recognize many protocol-based attacks. Metadata-based attacks (such as those that manipulate HTTP headers) are also difficult if not impossible for developers to recognize, and even if they could, it is not efficient from either a cost or a time perspective for them to address such attacks. But some protocol behavior-based attacks may be addressed by either group. Limiting file-upload sizes, for example, can help to mitigate attacks such as slow HTTP POSTs, merely by limiting the amount of data that can be accepted through configuration-based constraints and reducing the overall impact of exploitation (a rough sketch of such a constraint follows this post). But operations can’t do that unless it understands what the technical limitations – and needs – of the application are, which is something that generally only the developers are going to know. Similarly, if developers are aware of attack and mitigating solution constraints during the design and development phase of the application lifecycle, they may be able to work around them, or innovate other solutions that will ultimately make the application more secure. The language we use to communicate can help or hinder collaboration – and not just between IT and the business. Terminology differences within IT – such as those that involve development and operations – can have an impact on how successfully security initiatives can be implemented.
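As a minimal sketch of the kind of configuration-based constraint mentioned above (the 10 MB ceiling and the Python WSGI framing are assumptions for illustration, and a Content-Length check alone is not a complete slow-POST defense, which is exactly why server timeouts and the delivery tier still matter):

MAX_BODY_BYTES = 10 * 1024 * 1024     # example limit; the right value is app-specific

class UploadSizeLimit:
    """Reject oversized request bodies before they reach application code."""
    def __init__(self, app, limit=MAX_BODY_BYTES):
        self.app, self.limit = app, limit

    def __call__(self, environ, start_response):
        try:
            length = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            length = 0
        if length > self.limit:
            start_response("413 Request Entity Too Large",
                           [("Content-Type", "text/plain")])
            return [b"Upload too large"]
        return self.app(environ, start_response)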
1024 Words: Building Secure Web Applications

A nice infographic from Veracode on building secure web applications.

Infographic by Veracode

Application Security
All 1024 Words posts on DevCentral
Recognizing a Threat is the First Step Towards Preventing It
Should You Be Concerned About the Other Anonymous?
Total Eclipse of the Internet
Cloud Security: It’s All About (Extreme Elastic) Control
Quantifying Reputation Loss From a Breach
Quantifying Reputation Loss From a Breach

#infosec #security Putting a value on reputation is not as hard as you might think…

It’s really easy to quantify some of the costs associated with a security breach. Number of customers impacted times the cost of a first class stamp plus the cost of a sheet of paper plus the cost of ink divided by … you get the picture. Some of the costs are easier than others to calculate. Some of them are not, and others appear downright impossible. One of the “costs” often cited but rarely quantified is the cost to an organization’s reputation. How does one calculate that? Well, if folks sat down with the business people more often (the ones that live on the other side of the Myers-Briggs Mountain) we’d find it’s not really as difficult to calculate as one might think. While IT folks analyze flows and packet traces, business folks analyze market trends and impacts – such as those arising from poor customer service. And if a breach of security isn’t interpreted by the general populace as “poor customer service” then I’m not sure what is. While traditionally customer service is how one treats the customer, increasingly that’s expanding to include how one treats the customer’s data. And that means security. This question, “how much does it really cost?”, is one Jeremiah Grossman asks fairly directly in a recent blog, “Indirect Hard Losses”:

As stated by InformationWeek regarding a Ponemon Institute study on the Cost of a Data Breach, “Customers, it seems, lose faith in organizations that can't keep data safe and take their business elsewhere.” The next logical question is how much?

Jeremiah goes on to focus on revenue lost from web transactions after a breach, and that’s certainly part of the calculation, but what about those losses that might have been but now will never be? How can we measure not only the loss of revenue (meaning a decrease in first-order customers) but the potential loss of revenue? That’s harder, but just as important, as it more accurately represents the “reputation loss” often mentioned in passing but never assigned a concrete value (at least not publicly; some industries discreetly share such data with trusted members of the same industry, but seeing these numbers in the wild? Good luck!)

HERE COMES the ALMOST SCIENCE

20% of the businesses that lost data lost customers as a direct result. The impacts were most severe for companies with more than 100 employees. Almost half of them lost sales.
-- Rubicon Survey

One of the first things we have to calculate is influence, as that directly impacts reputation. It is the ability of even a single customer to influence a given number of others (negatively or positively) that makes up reputation. It’s word of mouth, what people say about you, after all. If we turn to studies that focus more on marketing and sales and businessy things, we can find a lot of this data. It’s a well-studied area. One study [1] indicates that a single dissatisfied customer will tell approximately 8-16 people. Each of those people has a circle of influence of about 250, with 25 of those being within an organization's primary target audience. Of all those told, 2% (1 in 50) will defect or avoid an organization upon hearing of the victim's dissatisfaction. So for every angry customer, the reputation impact is a loss of anywhere from 40-80 customers, existing and future. So much for thinking 100 records stolen in a breach is small potatoes, eh? The loss of thousands of existing and potential customers is nothing to sneeze at.
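Turning those study figures into the 40-80 range is just multiplication; a minimal sketch (the inputs are the article's cited estimates, not measured data):

def influenced_loss(told_low=8, told_high=16, circle=250, defection_rate=0.02):
    """Customers lost (existing and future) per dissatisfied customer."""
    low = told_low * circle * defection_rate      # 8 * 250 * 0.02  = 40
    high = told_high * circle * defection_rate    # 16 * 250 * 0.02 = 80
    return low, high

print(influenced_loss())   # (40.0, 80.0)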
Now, here’s where it gets a little harder, because you’re going to have to talk to the businessy folks to get some values to attach to those losses. See, there are two numbers you still need: customer lifetime value (CLV) and the cost to replace a customer (which is higher than the cost to acquire a customer, but don’t ask me why, I’m not a businessy folk). Customer values are highly dependent upon industry. For example, based on 2010 FDIC data, the industry average annual customer value for a banking customer is $209 [2]. Facebook’s annual revenue per user (ARPU) is estimated at $2.00 [3]. Estimates claim Google makes $9.85 annually off each Android user [4]. And Zynga’s ARPU is estimated at $3.96 (based on a reported $0.33 monthly per user revenue) [5]. This is why you actually have to talk to the businessy guys; they know what these values are, and you’ll need them to plug into the influence calculation to come up with an at-least-it’s-closer-than-guessing value. You also need to ask what the average customer lifetime is, so you can calculate the loss from dissatisfied and defecting customers. Then you just need to start plugging in the numbers. Remember, too, that it’s a model; an estimate. It’s not a perfect valuation system, but it should give you some kind of idea of what the reputational impact from a breach would be, which is more than most folks have today. Even if you can’t obtain the cost-to-replace value, try the model without it. Try a small breach, just for fun, say of 100 records. Let’s use $4.00 as an annual customer value and a lifetime of ten years as an example.

Affected Customer Loss: 100 * ($4 * 10) = $4,000
Influenced Customer Loss: (100 * 40 influenced customers) * ($4 * 10) = 4,000 * $40 = $160,000
Total Reputation Cost: $164,000

Adding in the cost to replace can only make this larger, and serves very little purpose except to show that even what many consider a relatively small breach (in terms of records lost) can be costly.

WHY is THIS VALUABLE?

The reason this is valuable is two-fold. First, it serves as the basis for a very logical and highly motivating business case for security solutions designed to prevent breaches. The problem with much of security is it’s intangible and incalculable. It is harder to put a monetary value on risk than it is to put a monetary value on solutions. Thus, the ability to perform a cost-benefit analysis that is based in part on “reputation loss” is difficult for security professionals and IT in general. The business needs to be able to justify investments, and to do that they need hard numbers that they can balance against. It is the security professionals who so often are called upon to explain the “risk” of a breach and loss of data to the business. Providing them tangible data based on accepted business metrics and behavior offers them a more concrete view of the costs – in money – of a breach. That gives IT the leverage, the justification, for investing in solutions such as web application firewalls and vulnerability scanning services that are designed to detect and ultimately prevent such breaches from occurring. It gives infosec some firm ground upon which to stand and talk in terms the business understands: dollar signs.

[1] PUTTING A PRICE TAG ON A LOST CUSTOMER
[2] Free Checking and Debit Incentives Post-Durbin
[3] Facebook’s Annual Revenue Per User
[4] Each Android User Will Make Google $9.85 per Year in 2012
[5] Zynga Doubled ARPU From Last Year Even as Facebook Platform Changes Slowed Growth
The Cost of Ignoring ‘Non-Human’ Visitors

#infosec There is a real cost associated with how far you allow non-human traffic to penetrate the data center – and it’s not just in soft security risks.

A factor often ignored by those performing contact cost analysis is technology. Call centers, marketing, and other business functions generally rely upon key performance indicators that include cost per contact – a cost associated with every e-mail, call, or web transaction that occurs. This is measured against the cost to acquire customers as a measure of effectiveness. These generally involve labor and associated business costs, but forget to include the undeniably important technology costs associated with each contact. Given the increasing desire of consumers to leverage web and mobile contact options, it seems prudent to consider technology costs as a part of the bigger picture. Luckily, estimates put the technology-related costs of contact at a mere 2.6-5.9% of the total cost of contact (Strategic Contact, “COST STRUCTURE AND DISTRIBUTION IN TODAY’S CONTACT CENTERS”). Recent statistics regarding the type of traffic which may end up being factored into such analysis show that perhaps that number could be reduced, which in turn would positively impact the cost of contact by bringing it down even lower. That’s because a significant portion of “visitors” are not even human, and thus consume resources that cost real dollars and impact the total cost of contact for an organization. Not surprisingly, different industries and sites show different rates for non-human traffic:

In January 2010, the total page views climbed to 876 million, and the number of "bots" or spiders climbed to 586 million. From 57 percent in 2009, the percentage of "bots" and spiders climbed to 67 percent this year
-- Grist for the mill: More news, fewer reporters

In total 66,806,000 page requests (mime type text/html only!) per day are considered crawler requests, out of 477,009,000 external requests, which is 14.0%
-- Wikimedia traffic report for the period Feb 1 – Feb 29 2012

Random reading in forums around the web anecdotally supports a percentage somewhere in between, with 33% and 50% being the most common numbers cited by admins when asked what percentage of traffic was attributable to bots and/or spiders. Similarly, the cost per transaction may be higher or lower depending on the industry and the level of infrastructure maturity and complexity in place. What’s important to take away is not necessarily the specific numbers in this example, but to recognize that the deeper traffic flows into the data center, the more processing and handling must occur. Each process, each component, each flow through the data center incurs a real cost to the organization. After all, the network and its bandwidth are not free, and this is increasingly evident when considering cloud computing resources that are more in-the-business’s-face about breaking out the infrastructure costs associated with that processing. Which means the sooner you detect and filter out “non-human” traffic that has little to no value to the organization, and thus only serves to increase costs, the better.

WHEN MATTERS

What that boils down to is that “when” illegitimate traffic is detected and rejected is important not only to improving operational security posture but to containing business costs. The earlier traffic is marked as “bad” and rejected, the better it is for both IT and the business.
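The next paragraph works a concrete example using published estimates; the arithmetic behind it is simply transactions per day × non-human share × cost per transaction, as in this rough calculator (the default figures below are the article's cited estimates, not measurements from any particular site):

def bot_cost(transactions_per_day=2873, bot_share=0.51, cost_per_txn=0.17):
    """Estimated daily and annual cost of processing non-human transactions."""
    daily = transactions_per_day * bot_share * cost_per_txn
    return daily, daily * 365

for share in (0.14, 0.51, 0.67):          # the non-human traffic shares cited above
    per_day, per_year = bot_cost(bot_share=share)
    print(f"{share:.0%} non-human: ~${per_day:,.0f}/day, ~${per_year:,.0f}/year")
# 14% -> ~$68/day (~$25,000/year); 67% -> ~$327/day (~$119,000/year)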
Consider that a relatively average-sized business sees, according to Google Analytics benchmarks from 2011, about 2,873 web transactions per day. Using Incapsula’s estimate that 51% of those transactions are actually illegitimate, non-human visitors, each with a Tower Group-estimated processing cost of about $0.17, an average business wastes about $68 - $327 a day (the low and high ends of that range correspond to the 14 percent and 67 percent non-human traffic shares cited above). While that figure may be easily shrugged off, the resulting nearly $25,000 a year at the low end and $119,000 on the high end it adds up to is not. It’s a significant chunk of change, especially for a small or medium-sized organization. Larger organizations with higher transactional rates would necessarily see that cost increase, to numbers likely not to bring a smile to the face of the CFO or CTO. Luckily, the application delivery tier is strategically positioned to not only detect but reject such traffic as it enters the data center. That means a shallow penetration profile that dramatically reduces the cost associated with processing illegitimate transactions. The application delivery tier is the first tier in the data center, comprising application delivery and access services at the perimeter of the network. The application delivery controller is often designated as the first point of contact for end-user requests – both human and not – because of its role in virtualizing the applications it delivers and secures. This limits the number of infrastructure components through which illegitimate traffic must pass. When traditional network security services, such as those associated with a firewall as well as access management, are consolidated in the application delivery tier, this further reduces the costs to process these requests, as they are immediately recognized and rejected as invalid without requiring flows through additional components. While there are still costs associated with this processing – it can’t be entirely eliminated, after all – it is greatly reduced because it occurs as close to the point of ingress as possible and eliminates traversal through the entire application infrastructure chain – from virtualization platforms to web, application, and database systems. Much has been made of the need to align IT with the business in the past few years, but rarely are we able to provide such a clear case of where IT can directly impact the bottom line of the business as in the realm of controlling transactional costs. Whether through earlier detection and prevention, consolidation, or both, IT can certainly add value by reducing costs through the implementation of a comprehensive application delivery tier.

Identify the Most Probable Threats to an Organization
Mobile versus Mobile: 867-5309
Distributed Apache Killer
The Ascendancy of the Application Layer Threat
Mature Security Organizations Align Security with Service Delivery
BFF: Complexity and Operational Risk
The Ascendancy of the Application Layer Threat

#adcfw #RSAC Attackers have outflanked your security infrastructure

Many are familiar with the name of the legendary Alexander the Great, if not the specific battles in which he fought. And even those familiar with his many victorious conquests are not so familiar with his contributions to his father’s battles, in which he certainly honed the tactical and strategic expertise that led to his conquest of the “known” world. In 338 BC, for example, then Macedonian King Philip II – the father of Alexander the Great – became engaged in a battle at Chaeronea against the combined forces of ancient Greece. While the details are interesting, they are not really all that germane to technology except for commentary on what may be* Philip’s tactics during the battle, as suggested by the Macedonian author Polyaenus:

In another 'stratagem', Polyaenus suggests that Philip deliberately prolonged the battle, to take advantage of the rawness of the Athenian troops (his own veterans being more used to fatigue), and delayed his main attack until the Athenians were exhausted.
-- Battle of Chaeronea (338 BC) (Wikipedia)

This tactic should sound familiar, as it is akin in strategy to that of application DDoS attacks today.

THE RISE of APPLICATION LAYER ATTACKS

Attacks at the application layer are here to stay – and we should expect more of them. When the first of these attacks was successful, it became a sure bet that we would see more of them, along with more variations on the same theme. And we are. More and more organizations are reporting attacks bombarding them not just at the network layer but above it, at the transport and application layers. Surely best practices for secure coding would resolve this, you may think. But the attacks that are growing to rule the roost are not the SQLi and XSS attacks that are still very prevalent today. The attacks that are growing and feeding upon the resources of data centers and clouds the globe over are more subtle than that; they’re not about injecting malicious code into data to be spread around like a nasty contagion; they’re DDoS attacks. Just like their network-focused DDoS counterparts, the goal is not infection – it’s disruption. These attacks exploit protocol behavior, as well as potentially missed vulnerabilities in application layer protocols, as a means to consume as many server resources as possible using the least amount of client resources. The goal is to look legitimate so the security infrastructure doesn’t notice you, and then slowly leech compute resources from servers until they can’t stand – and they topple. They’re Philip’s Macedonians, wearing out the web server until it’s too tired to stand. These attacks aren’t something listed in the OWASP Top Ten (or even on the OWASP list, for that matter). These are not attacks that can be detected by IPS, IDS, or even traditional stateful firewalls. These technologies focus on data and anomalies in data, not behavior, and generally not at the application protocol layer. For example, consider HTTP Fragmentation attacks. In this attack, a non-spoofed attacker establishes a valid HTTP connection with a web server. The attacker then proceeds to fragment legitimate HTTP packets into tiny fragments, sending each fragment as slowly as the server timeout allows, holding up the HTTP connection for a long time without raising any alarms. For Apache and many other web servers designed with improper time-out mechanisms, this HTTP session time can be extended to a very long time period.
By opening multiple extended sessions per attacker, the attacker can silently stop a web service with just a handful of resources. Multiple Methods in a Single Request is another fine example of exhausting a web server’s resources. The attacker creates multiple HTTP requests, not by issuing them one after another during a single session, but by forming a single packet embedded with multiple requests. This allows the attacker to maintain high loads on the victim server with a low attack packet rate. This low rate makes the attacker nearly invisible to NetFlow anomaly detection techniques. Also, if the attacker selects the HTTP method carefully, these attacks will bypass deep packet inspection techniques. There are a number of other similar attacks, all variations on the same theme: manipulation of valid behavior to exhaust web server resources with the goal of disrupting services. Eventually, servers crash or become so slow they are unable to adequately service legitimate clients – the definition of a successful DDoS attack. These attacks are not detectable by firewalls and other security infrastructure that only examine packets or even flows for anomalies, because no anomaly exists. This is about behavior, about that one person in the bank line who is acting oddly – not enough to alarm most people but just enough to trigger attention from someone trained to detect it. The same is true of security infrastructure. The only component that will detect such subtle improper behavior is one that’s been designed to protect it. (A rough sketch of this kind of behavior-based detection follows the related links below.)

* It was quite a while ago, after all, and sources are somewhat muddied. Whether this account is accurate or not is still debated.

The Pythagorean Theorem of Operational Risk
F5 Friday: When Firewalls Fail…
F5 Friday: Mitigating the THC SSL DoS Threat
F5 Friday: If Only the Odds of a Security Breach were the Same as Being Hit by Lightning
F5 Friday: Multi-Layer Security for Multi-Layer Attacks
When the Data Center is Under Siege Don’t Forget to Watch Under the Floor
Challenging the Firewall Data Center Dogma
What We Learned from Anonymous: DDoS is now 3DoS
The Many Faces of DDoS: Variations on a Theme or Two
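A deliberately toy illustration of that idea, assuming thresholds invented for the example: track how quickly each connection delivers its request, and flag the ones that trickle data or never finish their headers. Production mitigations live in the web server or the application delivery tier (timeouts, minimum transfer rates), not in ad hoc code like this.

import time

MIN_BYTES_PER_SECOND = 100        # assumed threshold; tune for real traffic
HEADER_DEADLINE_SECONDS = 30      # assumed deadline for completing request headers

class ConnectionBehavior:
    """Per-connection bookkeeping for spotting slow, drip-fed requests."""
    def __init__(self):
        self.started = time.monotonic()
        self.bytes_received = 0
        self.buffer = b""
        self.headers_complete = False

    def on_data(self, chunk):
        self.bytes_received += len(chunk)
        self.buffer += chunk
        if b"\r\n\r\n" in self.buffer:    # end of HTTP headers seen
            self.headers_complete = True

    def is_suspicious(self):
        elapsed = time.monotonic() - self.started
        if not self.headers_complete and elapsed > HEADER_DEADLINE_SECONDS:
            return True                   # fragmented/slow header delivery
        rate = self.bytes_received / max(elapsed, 1e-6)
        return elapsed > 5 and rate < MIN_BYTES_PER_SECOND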
Mature Security Organizations Align Security with Service Delivery

#adcfw #RSAC Traditional strategy segregates delivery from security. Traditional strategy is doing it wrong…

Everyone, I’m sure, has had the experience of calling customer service. First you get the automated system, which often asks for your account number. You know, to direct you to the right place and “serve you better.” Everyone has also likely been exasperated when the first question asked by a customer service representative upon being connected to a real live person is … “May I have your account number, please?” It’s frustrating and, for everyone involved, it’s cumbersome. That’s exactly the process that occurs in most data centers today as application requests are received by the firewall and then passed on to the service delivery layer. Traditional data center design segregates security from service delivery. There’s an entire complement of security-related components that reside at the perimeter of the network, designed to evaluate incoming traffic for a wide variety of potential security risks – DDoS, unauthorized access, malicious packets, etc… But that evaluation is limited to the network layers of the stack. It’s focused on packets and connections and protocols, and fails to take into consideration the broader contextual information that is carried along by every request. It’s asking for an account number but failing to leverage it and share it in a way that effectively applies and enforces corporate security policies. It’s cumbersome. The reality is that many of the functions executed by firewalls are duplicated in the application delivery tier by service delivery systems. What’s more frustrating is that many of those functions are executed more thoroughly and to better effect (i.e. they mitigate risk more effectively) at the application delivery layer. What should be frustrating to those concerned with IT budgets and operational efficiency is that this disconnected security strategy is more expensive to acquire, deploy, and maintain. Using shared infrastructure is the hallmark of a mature security organization; it’s a sign of moving toward a more strategic approach to security that’s not only more technically adept but financially sound.

SHARED INFRASTRUCTURE

We most often hear the term “shared infrastructure” with respect to cloud computing and its benefits. The sharing of infrastructure across organizations in a public cloud computing environment nets operational savings not only from alleviating the need to manage the infrastructure but also from the fact that the capital costs are shared across hundreds if not thousands of customers. Inside the data center, private cloud computing models are rising to the top of the “must have” list for IT for similar reasons. In the data center, however, there are additional technical and security benefits that should not be overlooked. Aligning corporate security strategy with the organization’s service delivery strategy by leveraging shared infrastructure provides a more comprehensive, strategic deployment that is not only more secure, but more cost effective. Service delivery solutions already provide a wide variety of threat mitigation services that can be leveraged to mitigate the performance degradation associated with a disjointed security infrastructure, the kind that leads 9 of 10 organizations to sacrifice that security in favor of performance.
By leveraging shared infrastructure to perform both service delivery acceleration and security, neither performance nor security need be sacrificed, because it essentially aligns with the mantra of the past decade with regard to performance and security: crack the packet only once. In other words, don’t ask the customer for their account number twice. It’s cumbersome, frustrating, and an inefficient means of delivering any kind of service.

F5 Friday: When Firewalls Fail…
At the Intersection of Cloud and Control…
F5 Friday: Multi-Layer Security for Multi-Layer Attacks
1024 Words: If Neo Were Your CSO
F5 Friday: No DNS? No … Anything.
F5 Friday: Performance, Throughput and DPS
When the Data Center is Under Siege Don’t Forget to Watch Under the Floor
Challenging the Firewall Data Center Dogma
What We Learned from Anonymous: DDoS is now 3DoS
The Many Faces of DDoS: Variations on a Theme or Two
3 reasons you need a WAF even if your code is (you think) secure

Everyone is buzzing and tweeting about the SANS Institute CWE/SANS Top 25 Most Dangerous Programming Errors, many heralding its release as the dawning of a new age in secure software. Indeed, it's already changing purchasing requirements. Byron Acohido reports that the Department of Defense is leading the way by "accepting only software tested and certified against the Top 25 flaws." Some have begun speculating that this list obviates the need for web application firewalls (WAF). After all, if applications are secured against these vulnerabilities, there's no need for an additional layer of security. Or is there? Web application firewalls, while certainly providing a layer of security against the exploitation of the application vulnerabilities resulting from errors such as those detailed by SANS, also provide other benefits in terms of security that should be carefully considered before dismissing their usefulness out of hand.

1. FUTURE PROOF AGAINST NEW VULNERABILITIES

The axiom says the only things certain in life are death and taxes, but in the world of application security a third is just as certain: the ingenuity of miscreants. Make no mistake, there will be new vulnerabilities discovered, and they will, eventually, affect the security of your application. When they are discovered, you'll be faced with an interesting conundrum: take your application offline to protect it while you find, fix, and test the modifications, or leave the application online and take your chances. The third option, of course, is to employ a web application firewall to detect and stop attacks targeting the vulnerability. This allows you to keep your application online while mitigating the risk of exploitation, giving developers the time necessary to find, fix, and test the changes necessary to address the vulnerability.

2. ENVIRONMENTAL SECURITY

No application is an island. Applications are deployed in an environment because they require additional moving parts in order to actually be useful. Without an operating system, application or web server, and a network stack, applications really aren't all that useful to the end user. While the SANS list takes a subtle stab at mentioning this with its inclusion of the OS command injection vulnerability, it assumes that all software and systems required to deploy and deliver an application are certified against the list and therefore secure. This is a utopian ideal that is unlikely to become reality, and even if it were to come true, see reason #1 above. Web application firewalls protect more than just your application; they can also provide needed protection against vulnerabilities specific to operating systems, application and web servers, and the rest of the environment required to deploy and deliver your application. The increase in deployment of applications in virtualized environments, too, has security risks. The potential exploitation of the hypervisor is a serious issue that few have addressed thus far in their rush to adopt virtualization technology.

3. THE NINJA ATTACK

There are some attacks that cannot be detected by an application. These usually involve the exploitation of a protocol such as HTTP or TCP and appear to the application to be legitimate requests. These "ninja" style attacks take advantage of the fact that applications are generally only aware of one user session at a time, and not able to make decisions based on the activity of all users occurring at the same time. The application cannot prevent these attacks.
Attacks involving the manipulation of cookies, replay, and other application layer logical attacks, too, often go undetected by applications because they appear to be legitimate. SANS offers a passing nod to some of these types of vulnerabilities in its "Risky Resource Management" category, particularly CWE-642 (External Control of Critical State Data). Addressing this category will likely require heavy modification to existing applications and new design for new applications (a rough sketch of one developer-side mitigation appears after the related articles below). In the meantime, applications remain vulnerable to this category of vulnerability as well as the ninja attacks that are not, and cannot be, addressed by the SANS list. A web application firewall detects and prevents such stealth attacks attempting to exploit the legitimate behavior of protocols and applications. The excitement with which the SANS list has been greeted is great news for security professionals, because it shows an increased awareness of the importance of secure coding and application security in general. That organizations will demand proof that applications - third-party or in-house - are secure against such a list is sorely needed as a means to ensure better application security across the whole infrastructure. But "the list" does not obviate the usefulness or need for additional security measures, and web application firewalls have long provided benefits in addition to simply protecting against exploitation of SANS Top 25 errors. Those benefits remain valid and tangible even if your code is (you think) secure. Just because you installed a digital home security system doesn't mean you should get rid of the deadbolts. Or vice versa.

Related articles by Zemanta
Security is not a luxury item
DoS attack reveals (yet another) crack in net's core
CWE/SANS TOP 25 Most Dangerous Programming Errors
Mike Fratto on CWE/SANS TOP 25 Most Dangerous Programming Errors
Zero Day Threat: One big step toward a safer Internet
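For the CWE-642 style weaknesses mentioned above, one common developer-side mitigation is to sign any critical state the application must hand to the client, so tampering is detectable on the way back in. This is only a minimal sketch (key handling, rotation, and replay protection are deliberately out of scope), and it complements rather than replaces the protocol-level protections a WAF provides:

import base64
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-keep-me-out-of-source-control"   # placeholder secret

def sign_state(value):
    """Return value plus an HMAC so client-side tampering can be detected."""
    mac = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    return value + "." + base64.urlsafe_b64encode(mac).decode()

def verify_state(cookie):
    """Return the original value if the signature checks out, else None."""
    value, _, signature = cookie.rpartition(".")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    try:
        provided = base64.urlsafe_b64decode(signature.encode())
    except Exception:
        return None
    return value if hmac.compare_digest(expected, provided) else None

# usage: set_cookie("cart_state", sign_state("item=42;qty=1"))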