Snippet #7: OWASP Useful HTTP Headers
If you develop and deploy web applications, then security is on your mind. When I want to understand a web security topic I go to OWASP.org, a community dedicated to enabling the world to create trustworthy web applications. One of my favorite OWASP wiki pages is the list of useful HTTP headers. This page lists a few HTTP headers which, when added to the HTTP responses of an app, enhance its security practically for free. Let's examine the list...

These headers can be added without concern that they affect application behavior:

X-XSS-Protection: forces the enabling of cross-site scripting protection in the browser (useful when the protection may have been disabled)
X-Content-Type-Options: prevents browsers from treating a response differently than the Content-Type header indicates

These headers may need some consideration before implementing:

Public-Key-Pins: helps avoid man-in-the-middle attacks using forged certificates
Strict-Transport-Security: enforces the use of HTTPS in your application, covered in some depth by Andrew Jenkins
X-Frame-Options / Frame-Options: used to avoid "clickjacking", but can break an application; usually you want this
Content-Security-Policy / X-Content-Security-Policy / X-Webkit-CSP: provides a policy for how the browser renders an app, aimed at avoiding XSS
Content-Security-Policy-Report-Only: similar to CSP above, but only reports violations; no enforcement

Here is a script that incorporates three of the above headers, which are generally safe to add to any application; see the sketch below. And that's it: about 20 lines of code to add 100 more bytes to the total HTTP response, and enhanced application security! Go get your own FREE license and try it today!
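As a sketch of what such a script can look like, here is one way to inject three of those headers at the delivery tier with an F5 BIG-IP iRule; the choice of headers (X-XSS-Protection, X-Content-Type-Options, X-Frame-Options) and the values shown are common defaults and assumptions on my part, not a reproduction of the original snippet.

```tcl
# Sketch: add three low-risk security headers to every HTTP response.
when HTTP_RESPONSE {
    # Re-enable the browser's reflected-XSS filter and block the page if an attack is detected
    HTTP::header insert "X-XSS-Protection" "1; mode=block"

    # Tell the browser not to second-guess the declared Content-Type (no MIME sniffing)
    HTTP::header insert "X-Content-Type-Options" "nosniff"

    # Allow the site to frame itself only, mitigating clickjacking
    HTTP::header insert "X-Frame-Options" "SAMEORIGIN"
}
```

Attach a rule like this to the virtual server that fronts the application and every response picks up the headers, with no change to the application code.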
APAC market research points to WAF being integrated with application delivery

We entered 2014 on a fillip. Frost & Sullivan had just named us the leading WAF vendor in Asia Pacific and Japan. The Frost Industry Quotient put F5 and nine other companies under its analytical magnifying glass, examining our market performance for CY 2012 as well as key business strategies. They left no strategy unturned, it would seem. Product and service strategy, people and skills strategy, business strategy, and even the ecosystem strategy were all held up to scrutiny.

But the real scoop wasn't that we were No. 1; it was that Frost IQ had discerned developments in the market that point towards WAF being integrated with application delivery. The researchers noted that the convergence would lead to a more intelligent and holistic way for organizations to protect their web applications.

The market is validating what we said a year ago when we launched BIG-IP Advanced Firewall Manager, the first in the industry to unify a network firewall with traffic management, application security, user access management and DNS security capabilities within an intelligent services framework. Every day, publicly known or otherwise, organizations grapple with attacks that target their applications in addition to those that threaten the network. Because F5 solutions occupy strategic points of control within the infrastructure, they are ideally suited to combine traditional application delivery with firewall capabilities and other advanced security services.

The bell tolls for the traditional firewall. Eventually it will be replaced by intelligent security. F5's integrated approach to security is key in mitigating DDoS attacks, helping to identify malicious actions, prioritize how requests from specific locations are handled, and focus on addressing properly qualified requests. Enabling security services on our ADCs makes it possible to consolidate multiple security appliances into a single device. This consolidation includes a WAF that analyses traffic and can propose rules to automatically protect the enterprise.

I caught up quickly with Christian Hentschel, SVP Asia Pacific and Japan, for his views on the new accolade. Aside from being very proud to be recognized as the leading WAF vendor in APJ, a testament to our strategy and the team's focus, he noted that customers view the traditional firewall as less relevant given the sophistication of today's layer 4-7 cyber-attacks.
Even the best written code has a weakness.

Developers are a great lot of folks, people who spend their day trying to do the impossible with bits for a customer base that is, by and large, impossible to satisfy. When the bits all line up correctly, the last line of code has been checked in, and the nightly compile accepted for deployment, then they get to sit back, relax for five minutes, and start over again. If this makes you think it's not a great life, then you should live it. Developing gives instant feedback. No matter how unhappy users can be, fixing that nagging bug you've been chasing for hours is a rush, and starting with a blank source code file is like looking across a wide-open plain. You can see what might be, and you get to go figure out how to do it.

But yeah, it's high-stress. Deadlines are constant, and it's not like writing, where you just have to get your content finished; once the code is done, ten million people want to have input into what you should have done. Various techniques have been developed to mitigate the depressing fact that people tell you what they want after they see what you've built, but the fact is, most ordinary users, be they business users or end users, don't know what they want until they see something working on their monitor and can play with it, because they need a point of comparison. Some few can tell you sight-unseen, early in the process, what they'd like, but most will have increasing demands as the application's capabilities increase.

And these days, there's one more major gotcha. You have to care about the network. I've been saying that for years, but we've passed the point where you could ignore me. Some will say "cloud changes all that!" but the truth is, cloud changes the problem domain, not the fact that you have to care.

Let's say you have a web application (as there are precious few other types being developed these days), and you have tweaked it to uber-performance so that it is scalable. You've put it behind a load balancer or application delivery controller so that even if your tweaks aren't enough, you can share the load amongst several copies. You've done it all right. And your primary Internet connection goes down. So your network staff switches to the backup connection, which is invariably smaller than the primary. The problem in this scenario is that your application can be load balanced and highly optimized, but now it is fighting for bandwidth on a reduced connection.

This is hardly the only scenario in which your application can suffer from outside interference. Ever been on the receiving end of a router configuration error? Your application appears down to everyone in the multi-verse, but in reality it is responding just peachy; the network is routing your users to Timbuktu.

I could tell you about all the great solutions that F5 offers for this problem or that problem (there are many of them, and they're pretty darned good), but from your perspective, the issue is (or should be) much bigger than that. You need to be able to understand when the problem at hand is a network problem, and you need to be able to diagnose that fact quickly, so the right people are on the job. And that means you need to know networking. Just as importantly, you need to at least viscerally understand your specific network environment. They're all a bit different, and the likely pain points are different, even though some problems are universal.
A DDoS attack, for example, is aimed at clogging your Internet connection, no matter your architecture… But some networking gear reduces the ability of DDoS to actually take the site down, so your network might only see degraded performance.

So ask the network team to teach you. Ask them what devices are between your applications and your customers. Ask them how these devices (or their malfunction) impact your applications. Know the environment you're in, because for most applications today, a problem on the network makes for a poorly performing application. And that is indeed your responsibility.

In the cloud you can't know all of these things for real, but you can understand the concepts. Is there a virtual ADC? What is being used for firewall services? What performance tools are available to determine the bottlenecks of applications deployed in the cloud? All things you'll want to know, so you can know how best to start troubleshooting when the inevitable problems occur. Learning things like this after your application is the source of user pain still seems to be the norm, but it's certainly not the best solution. Either it increases the amount of time your application is getting bad PR, or you are fixing things hastily, and haste does indeed make waste in most critical application situations.

This knowledge will also give you a new set of tools to solve problems with. If you know that a Web Application Acceleration tool like F5's WebAccelerator is in place between your application and the user, then you might be able to say "rather than rewrite this chunk of code, let's tweak the Web Application Acceleration engine to handle it" and save both time and potential coding-defect issues.

It's still a great time to be a developer; the fun is still all there. It's just a more complex world. Master your network architecture, and be a better developer for it.

Why Flash can't win the Web application war
Virtualization Changes Application Deployment But Not Development
Amazon Makes the Cloud Sticky
Return of the Web Application Platform Wars
Wanted: Application Delivery Network Experts
The Stealthy Ascendancy of JSON
Now it's Time for Efficiency Gains in the Network.
"Application Delivery" Role Missing "Delivery" Focus
Finding Your Balance
The Fix Must Occur by Rewriting the Code. Wait, What?

This article is just full of interesting ideas. First we're told that the only way to secure Web 2.0/SOA/Web applications is to rewrite the code. This "rewrite the application code" answer to any number of delivery issues - security, performance, availability - is old and busted. There are other, more efficient mechanisms that can certainly be used to address application delivery issues, such as an application delivery network comprising appropriate intelligent, application-aware devices capable of ensuring that all applications are fast, secure, and available. These solutions do not require that the application be rewritten, and in fact in many instances rewriting the application will not solve the problem, because some of the issues related to availability, security, and performance are the direct result of protocol inefficiencies and vulnerabilities (HTTP, TCP, IP) that cannot be addressed by rewriting application code. That's because the network and application stacks are not under the control of the application developers.

Michael Sutton, security evangelist with SPI Dynamics, now part of Hewlett-Packard Co. (HP), speaks out on Web application security. He said companies have always operated under the assumption that IT is responsible for security and not the Web developers. The problem is that once faulty applications are launched, IT can't provide the fix. The fix must occur by rewriting the code. But there are ways IT can help the developers get it right.

Are you really suggesting that application developers rewrite the Java TCP/IP stack to address inefficiencies and vulnerabilities? Are you really saying that the only way to deal with language-specific vulnerabilities (ASP, PHP, JSP, etc...) is for the application developers to rewrite the interpreters executing the application?? Are you really implying that there is no fix IT can provide other than flogging of developers? Come on, this is one of the reasons Web Application Firewalls exist - to address those vulnerabilities that simply can't be addressed by the application developer, whether due to location in the application/network stack or the time and expense of rewriting the application.

The article gets stranger (which I wasn't sure was possible) when Josef Brunner, security solutions manager at Enterasys Networks, starts discussing SOAP-based security issues. Brunner expressed particular concern for how the Simple Object Access Protocol (SOAP) is used in Web services. SOAP is a way for a program running in one kind of operating system such as Windows 2000 to communicate with a program in the same or another kind of operating system such as Linux by using the Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange. A rather simplistic definition, but not completely off-target. We call it "loose coupling", and it's the cornerstone of what makes SOA work.

SOAP is platform-independent and allows users to bypass whatever security devices are on the network, Brunner said, adding that encryption tends to be the only security mechanism for SOAP. "SOAP is very flexible and dynamic, which is always bad from a security standpoint," he said.

Wait, what? SOAP does not "allow users to bypass" security devices on the network. If users/clients are bypassing security devices on the network in a SOA (Service Oriented Architecture), then the enterprise architects have failed at designing and implementing a secure, robust SOA.
SOAP doesn't "allow" such a thing any more than any other application "allows" such a bypass to occur. And if "encryption tends to be the only security mechanism for SOAP", then implementors aren't paying attention to the myriad web services standards available from OASIS that provide for authentication and authorization specifically for Web Services (WS-Security 1.1), as well as message- and field-level encryption (XML Encryption) and non-repudiation (XML Digital Signatures). If only there were XML/SOA security solutions that could more efficiently screen traffic by acting as a reverse proxy (endpoint) and enforcing organizational security policies regarding authorization, authentication, and message contents. If only!

SOAP tends to be encrypted by an inconsistent set of methods and so there's no way for security professionals to break and inspect the traffic for trouble. Making matters worse, he noted that SOAP servers are connected to critical back-end systems attackers can compromise with the right exploits.

SOAP messages tend to be encrypted using industry-standard encryption a la SSL.

Brunner's suggestions for improving the situation include securing SOAP servers with host-based IDS to prevent buffer overflow attacks, and, above all, demanding better application security, which means training developers to do better.

Wait, what? This is a complete non sequitur. Let's burden SOAP servers with even more resource-intensive processing (IDS) that requires additional maintenance and cost to deploy and manage. It's not like processing XML is CPU and memory intensive or anything. It's not like AJAX-based applications aren't sucking up a ton of overhead and entries in the session state table because of long-lived sessions and additional connections. An IDS is not going to solve authentication/authorization issues, it's not going to solve the encryption problem, and it is not necessarily going to be able to deal with application-specific vulnerabilities. Besides, it's really difficult to convince a developer that you should be deploying agents on servers that will likely significantly degrade the performance of their application.

If only there were some sort of network device that could be deployed in front of SOAP servers that not only optimized and accelerated protocols like HTTP and TCP but could also prevent buffer overflow attacks and application-specific vulnerabilities by acting as the first line of defense at the network perimeter. If only there were solutions to these problems that didn't involve rewriting applications (time, resources, money) or deploying solutions that don't address all the issues (IDS).

It's true that, in general, developers need to be more security conscious. It's also true that there are specific types of vulnerabilities that cannot be addressed outside of the application at this time (application flow/logic errors are peculiar to the application). But it's patently untrue that IT "can't fix the problem", because there exist both Web Application Firewalls and holistic application delivery networks that provide excellent solutions addressing both the security and performance issues associated with SOA/XML-based applications. Ensuring SOAP services within a SOA are deployed within a robust, dynamic application and network architecture is the task of an integrated and cross-functional team from IT, not the individual developer of services.
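As a concrete illustration of that "first line of defense" idea, here is a minimal sketch, assuming an F5 BIG-IP-style full proxy sits in front of the SOAP endpoint; the 1 MB payload cap and content-type checks are illustrative choices of mine, not a complete WAF policy.

```tcl
# Illustrative perimeter screening for a SOAP virtual server (not a full WAF policy).
when HTTP_REQUEST {
    # SOAP endpoints only expect XML POSTs; refuse anything else up front
    if { not ([HTTP::method] equals "POST") } {
        HTTP::respond 405 content "Method Not Allowed"
        return
    }
    if { not ([HTTP::header "Content-Type"] contains "xml") } {
        HTTP::respond 415 content "Unsupported Media Type"
        return
    }
    # Cap payload size (1 MB here, purely illustrative) so oversized requests
    # never reach the back-end XML parser
    if { [HTTP::header exists "Content-Length"] and [HTTP::header "Content-Length"] > 1048576 } {
        HTTP::respond 413 content "Request Entity Too Large"
        return
    }
}
```

A sketch like this only screens the obvious; message-level controls such as WS-Security and XML Encryption still belong in the architecture. The point is simply that the perimeter device, not the application code, is the right place for this class of enforcement.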
There is often more than one answer to the problem of application security, and though "rewrite the app" is always one of those options, it's rarely the most efficient or cost-effective option out there.

Imbibing: Pink Lemonade

Technorati tags: F5, MacVittie, security, SOA, application delivery, web applications