SANS 20 Critical Security Controls
A couple of days ago, the SANS Institute announced the release of a major update (Version 3.0) to the 20 Critical Controls, a prioritized baseline of information security measures designed to provide continuous monitoring to better protect government and commercial computers and networks from cyber attacks. The information security threat landscape is always changing, especially this year with the well-publicized breaches. These controls have been tested and provide an effective means of defending against cyber attacks. The focus is on critical technical areas that can help an organization prioritize efforts to protect against the most common and dangerous attacks. Automating security controls is another key area, helping to gauge and improve the security posture of an organization.

The update takes into account information gleaned from law enforcement agencies, forensics experts, and penetration testers who have analyzed the various methods of attack. SANS outlines the controls that would have prevented those attacks from being successful. Version 3.0 was developed to take the control framework to the next level. The 20 controls and their associated sub-controls have been realigned to the current technology and threat environment, including new threat vectors. Sub-controls have been added to assist with rapid detection and prevention of attacks. The 20 Controls have been aligned to the milestones of the NSA's Manageable Network Plan Revision 2.0. Definitions, guidelines, and proposed scoring criteria have been added to evaluate tools for their ability to satisfy the requirements of each of the 20 Controls. Lastly, the findings of the Australian Government Department of Defence, which produced the Top 35 Key Mitigation Strategies, have been mapped to the 20 Controls, providing measures to help reduce the impact of attacks.

The 20 Critical Security Controls are:

1. Inventory of Authorized and Unauthorized Devices
2. Inventory of Authorized and Unauthorized Software
3. Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers
4. Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
5. Boundary Defense
6. Maintenance, Monitoring, and Analysis of Security Audit Logs
7. Application Software Security
8. Controlled Use of Administrative Privileges
9. Controlled Access Based on the Need to Know
10. Continuous Vulnerability Assessment and Remediation
11. Account Monitoring and Control
12. Malware Defenses
13. Limitation and Control of Network Ports, Protocols, and Services
14. Wireless Device Control
15. Data Loss Prevention
16. Secure Network Engineering
17. Penetration Tests and Red Team Exercises
18. Incident Response Capability
19. Data Recovery Capability
20. Security Skills Assessment and Appropriate Training to Fill Gaps

And of course, F5 has solutions that can help with most, if not all, of the 20 Critical Controls.

ps

Resources:

- SANS 20 Critical Controls
- Top 35 Mitigation Strategies: DSD Defence Signals Directorate
- NSA Manageable Network Plan (pdf)
- Internet Storm Center
- Google Report: How Web Attackers Evade Malware Detection
- F5 Security Solutions
I am in your HTTP headers, attacking your application

Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser-known applications and attack vectors. These exploits are potentially more dangerous because, once proven through a successful attack on these lesser-known applications, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, the SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

```http
POST /roundcube/bin/html2text.php HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Host: xx.xx.xx.xx
Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
Content-Length: 54
```

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header in order to retrieve some data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system for use in a cross-site scripting attack, deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult - developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with a new way to exploit some aspect of an application or transport layer protocol.

Think HTTP headers aren't generally used by applications? Consider the use of the SOAPAction HTTP header for SOAP web services, and cookies, and ETags, and ... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code. So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, its use to target specific applications means it is a possible attack vector for the future, against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might - and apparently they have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
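To see what that Accept header is actually carrying, just decode it. A minimal illustration (this snippet only decodes the payload; nothing is executed):

```python
import base64

payload = "ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw=="

# Decoding reveals PHP source: it echoes an arithmetic marker (so the attacker
# can spot a successful injection in the response) and then shells out via passthru().
print(base64.b64decode(payload).decode("ascii"))
# echo (333212+43245666)." ";;passthru("uname -a;id");
```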
Treating all input as suspect is a good rule of thumb for protecting against malicious payloads generally, but it is an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused, to the point it's considered typical, of course).

Possible ways to prevent the potential exploitation of HTTP headers:

- Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers.
- Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols and use it to do so.
- Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
- Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using, and ensure they are legitimate values before using them in any way (see the sketch after this list).

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.
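As an illustration of that last point, here is a minimal sketch of validating an Accept header against the token/subtype grammar RFC 2616 section 14.1 expects, before the application ever consumes it. The whitelist pattern is an assumption you would tune to your own application, not a complete or definitive filter:

```python
import re

# RFC 2616 Accept headers are comma-separated media-ranges such as
# "text/html;q=0.9" - letters, digits, and a small set of separators.
# This pattern is deliberately strict and purely illustrative.
MEDIA_RANGE = re.compile(
    r"^[A-Za-z0-9*][A-Za-z0-9.+*_-]*/[A-Za-z0-9*][A-Za-z0-9.+*_-]*"
    r"(?:\s*;\s*[A-Za-z0-9_-]+=[A-Za-z0-9._-]+)*$"
)

def accept_header_is_legitimate(value: str) -> bool:
    """Return True only if every media-range in the Accept header parses."""
    ranges = [r.strip() for r in value.split(",") if r.strip()]
    return bool(ranges) and all(MEDIA_RANGE.match(r) for r in ranges)

# A legitimate header passes...
assert accept_header_is_legitimate("text/html, application/xml;q=0.9, */*;q=0.8")
# ...while the Base64-encoded PHP from the Roundcube attempt does not.
assert not accept_header_is_legitimate(
    "ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw=="
)
```

The same pattern - parse strictly, reject on failure - applies to any header your application actually reads.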
3 reasons you need a WAF even if your code is (you think) secure

Everyone is buzzing and tweeting about the SANS Institute CWE/SANS Top 25 Most Dangerous Programming Errors, many heralding its release as the dawning of a new age in secure software. Indeed, it's already changing purchasing requirements. Byron Acohido reports that the Department of Defense is leading the way by "accepting only software tested and certified against the Top 25 flaws."

Some have begun speculating that this list obviates the need for web application firewalls (WAF). After all, if applications are secured against these vulnerabilities, there's no need for an additional layer of security. Or is there? Web application firewalls, while certainly providing a layer of security against the exploitation of application vulnerabilities resulting from errors such as those detailed by SANS, also provide other security benefits that should be carefully considered before dismissing their usefulness out of hand.

1. FUTURE-PROOFING AGAINST NEW VULNERABILITIES

The axiom says the only things certain in life are death and taxes, but in the world of application security a third is just as certain: the ingenuity of miscreants. Make no mistake, new vulnerabilities will be discovered, and they will, eventually, affect the security of your application. When they are discovered, you'll be faced with an interesting conundrum: take your application offline to protect it while you find, fix, and test the modifications, or leave the application online and take your chances. The third option, of course, is to employ a web application firewall to detect and stop attacks targeting the vulnerability. This allows you to keep your application online while mitigating the risk of exploitation, giving developers the time necessary to find, fix, and test the changes needed to address the vulnerability.

2. ENVIRONMENTAL SECURITY

No application is an island. Applications are deployed in an environment because they require additional moving parts in order to actually be useful. Without an operating system, an application or web server, and a network stack, applications really aren't all that useful to the end user. While the SANS list takes a subtle stab at mentioning this with its inclusion of the OS command injection vulnerability, it assumes that all software and systems required to deploy and deliver an application are certified against the list and therefore secure. This is a utopian ideal that is unlikely to become reality, and even if it were to come true, see reason #1 above. Web application firewalls protect more than just your application; they can also provide needed protection against vulnerabilities specific to operating systems, application and web servers, and the rest of the environment required to deploy and deliver your application. The increasing deployment of applications in virtualized environments carries security risks of its own. The potential exploitation of the hypervisor is a serious issue that few have addressed thus far in their rush to adopt virtualization technology.

3. THE NINJA ATTACK

There are some attacks that cannot be detected by an application. These usually involve the exploitation of a protocol such as HTTP or TCP and appear to the application to be legitimate requests. These "ninja" style attacks take advantage of the fact that applications are generally only aware of one user session at a time, and are not able to make decisions based on the activity of all users occurring at the same time. The application cannot prevent these attacks.
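To make that point concrete, here is a toy sketch (an illustration only, not a depiction of any particular WAF's implementation) of the kind of cross-session decision a proxy or WAF can make and a single application session typically cannot: tracking request rates across all clients at once and flagging a distributed spike against one URL.

```python
import time
from collections import defaultdict, deque

class CrossSessionWatch:
    """Counts requests per URL across *all* clients in a sliding window -
    a view an intermediary has but an app handling one session does not."""

    def __init__(self, window_seconds=10.0, threshold=100):
        self.window = window_seconds
        self.threshold = threshold
        self.hits = defaultdict(deque)  # url -> timestamps of recent requests

    def observe(self, url, now=None):
        """Record one request; return True if the URL is under suspicious load."""
        now = time.monotonic() if now is None else now
        q = self.hits[url]
        q.append(now)
        while q and now - q[0] > self.window:  # drop hits outside the window
            q.popleft()
        return len(q) > self.threshold

watch = CrossSessionWatch(window_seconds=1.0, threshold=50)
# Fifty-one "different clients" hitting /login within a second trips the flag,
# even though each individual session looks perfectly legitimate.
flagged = [watch.observe("/login", now=0.5) for _ in range(51)]
print(flagged[-1])  # True
```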
Attacks involving the manipulation of cookies, replay, and other application layer logical attacks, too, often go undetected by applications because they appear to be legitimate requests. SANS offers a passing nod to some of these types of vulnerabilities in its "Risky Resource Management" category, particularly CWE-642 (External Control of Critical State Data). Addressing this category for existing applications will likely require heavy modification, and new designs for new applications; one common interim mitigation is sketched at the end of this post. In the meantime, applications remain vulnerable to this category of vulnerability, as well as to the ninja attacks that are not, and cannot be, addressed by the SANS list. A web application firewall detects and prevents such stealth attacks attempting to exploit the legitimate behavior of protocols and applications.

The excitement with which the SANS list has been greeted is great news for security professionals, because it shows an increased awareness of the importance of secure coding and application security in general. That organizations will demand proof that applications - third-party or in-house - are secure against such a list is sorely needed as a means to ensure better application security across the whole infrastructure. But "the list" does not obviate the usefulness of, or the need for, additional security measures, and web application firewalls have long provided benefits beyond simply protecting against exploitation of the SANS Top 25 errors. Those benefits remain valid and tangible even if your code is (you think) secure.

Just because you installed a digital home security system doesn't mean you should get rid of the deadbolts. Or vice versa.
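The interim mitigation mentioned above for CWE-642-style cookie manipulation: keep letting the client carry the state, but make tampering detectable. A minimal sketch, assuming only Python's standard library; the hardcoded SECRET is hypothetical, and how you store and rotate that key is its own problem:

```python
import hashlib
import hmac
from typing import Optional

SECRET = b"server-side-secret-rotate-me"  # hypothetical; never ship a hardcoded key

def sign_cookie(value: str) -> str:
    """Append an HMAC so client-held state becomes tamper-evident."""
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{mac}"

def verify_cookie(cookie: str) -> Optional[str]:
    """Return the value if the HMAC checks out, None if it was manipulated."""
    value, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(mac, expected) else None

cookie = sign_cookie("role=user")
print(verify_cookie(cookie))                           # "role=user"
print(verify_cookie(cookie.replace("user", "admin")))  # None - tampering detected
```

Note that this detects manipulation but not replay; binding the value to a session identifier or an expiry timestamp before signing addresses that.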
SANS Top 25 Epic Fail: CWE-319

If you've taken the time to read over the "Top 25 Most Dangerous Programming Errors" published by SANS recently, you may (or may not) have noticed that CWE-319 is an anomaly, one that should be easily picked out by developers and security professionals in a game of "which one of these is not like the other".

CWE-319

If your software sends sensitive information across a network, such as private data or authentication credentials, that information crosses many different nodes in transit to its final destination. Attackers can sniff this data right off the wire, and it doesn't require a lot of effort. All they need to do is control one node along the path to the final destination, control any node within the same networks of those transit nodes, or plug into an available interface. Trying to obfuscate traffic using schemes like Base64 and URL encoding doesn't offer any protection, either; those encodings are for normalizing communications, not scrambling data to make it unreadable.

Prevention and Mitigations

- Architecture and Design: Secret information should not be transmitted in cleartext. Encrypt the data with a reliable encryption scheme before transmitting.
- Implementation: When using web applications with SSL, use SSL for the entire session from login to logout, not just for the initial login page.
- Operation: Configure servers to use encrypted channels for communication, which may include SSL or other secure protocols.

1. This is not a "programming error"

The first problem with the inclusion of this "error" on the list is that it is not a programming error. It may be a poor design, architecture, or deployment decision, but it is not an "error". While this is not necessarily a problem with the actual weakness described, the misnomer is frustrating, and it undermines the rest of the list, most of which consists of actual errors in coding practices that need to be addressed. SSL can be easily enabled by any customer, regardless of how the web application is written. Using SSL has always been suggested as part of a secure architecture, and it is the organizations not using SSL that bear the burden of failure to implement this simple security scheme, not necessarily developers. Trying to force software vendors to force SSL on their customers is an end run around the sad fact that most organizations fail to implement proper encryption when necessary.

2. Mitigation through encryption can disrupt security systems internally

SSL-enabled servers require that the organization obtain and manage the appropriate server-side certificates. SSL usage is the responsibility of the organization deploying the software, not the software vendor. Ensuring the web application works correctly when deployed using SSL may be the vendor's responsibility, but configuring it that way is clearly a matter of architectural choice on the part of the organization deploying the software. It is likely that this remediation solution was intended to direct developers to always use HTTPS instead of HTTP when constructing URLs, rather than using relative paths. This likely requires rework on the part of web application developers to obtain the host name dynamically and construct the proper URL, rather than relying on relative paths. It would also require organizations to ensure an environment that supports SSL, which puts the onus of a secure implementation squarely back on the organization, not the vendor.
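For the developer-side rework just described, the usual approach is to derive the scheme and host from the request environment rather than hardcoding them. A minimal sketch in WSGI-style Python; the X-Forwarded-Proto header name is a common convention, not a standard, so confirm what your own proxy tier actually sends:

```python
def absolute_url(environ: dict, path: str) -> str:
    """Build an absolute URL from the request environment (WSGI-style).

    Honors X-Forwarded-Proto because when SSL terminates at a proxy or
    load balancer, the application itself sees plain HTTP. Only trust
    this header if your proxy strips any client-supplied copies.
    """
    scheme = environ.get("HTTP_X_FORWARDED_PROTO") or environ.get("wsgi.url_scheme", "http")
    host = environ.get("HTTP_HOST") or environ["SERVER_NAME"]
    return f"{scheme}://{host}{path}"

env = {"wsgi.url_scheme": "http", "HTTP_HOST": "mail.example.com",
       "HTTP_X_FORWARDED_PROTO": "https"}
print(absolute_url(env, "/roundcube/"))  # https://mail.example.com/roundcube/
```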
The ramifications of implementing SSL from client all the way to server can include the inadvertent elimination of the ability of other security systems - IDS, IPS, WAF - to perform their tasks, unless they are specifically configured to decrypt, examine the requests and responses, and then re-encrypt the session before sending it on to the appropriate server. This requires re-architecture on the part of the organization, and careful consideration of the security of the systems on which such keys and/or certificates will be stored. This is important, as the compromise of any system storing the keys and/or certificates may lead to the "bad guys" obtaining these important pieces of the security architecture, thus rendering any application or system relying upon them insecure.

3. Encrypted malicious data is still malicious

A very wise man once told me that malicious data encrypted is still malicious. Using SSL encryption certainly keeps the "bad guys" from looking at and capturing sensitive data, but as noted in issue 2 above, it also keeps security devices from inspecting the exchange in order to detect and prevent malicious data from getting near the web application or web server, where it is likely to do harm. The "bad guys" have the same access to SSL as normal users do; encryption does nothing to prevent the insertion of malicious data, but it does make that data more difficult to detect and prevent - unless the application requires client-side certificates, which opens yet another can of worms and can seriously degrade the flexibility of the application in supporting a wide variety of end-user devices. The result, no matter how it is implemented, is security theater at its finest.

CWE-319 should not have been included on a list of top "programming errors", and the remediation solutions offered fail to recognize that the majority of the implementation burden falls on the organization, not the software vendor. They also fail to recognize the impact of the suggested implementations on the application and the supporting infrastructure, and they are likely to cause more problems than they solve. The blind adoption of this list as a procurement requirement by the state of New York, and likely others soon to follow, is little more than a grand gesture designed to send a message not to vendors, but to its customers and, likely, the courts. Certainly, requiring software to be certified against this list could be considered due diligence in any lawsuit resulting from the inadvertent leak of sensitive information, thereby proving no negligence on the part of the organization and therefore no liability.

While enabling SSL communications is certainly a good idea, it is important to remember that it - like other encryption schemes - is merely obfuscation. It will blindly transport malicious data as easily as it does legitimate data, and failure to adjust internal architectures to deal with SSL across all required security and application delivery devices does little to enhance security in any real, meaningful way.
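Adjusting the architecture means putting an inspection point where the traffic is in cleartext. The decrypt-inspect-re-encrypt pattern described above looks roughly like this - a bare-bones sketch assuming you hold the server certificate and key at the inspection point (cert.pem/key.pem are hypothetical paths, and real HTTP handling needs full parsing, not a single read); a production deployment would be a hardened proxy, not twenty lines of Python:

```python
import socket
import ssl

BACKEND = ("10.0.0.10", 443)  # hypothetical upstream application server

def handle(client_tls):
    request = client_tls.recv(65536)  # plaintext here: TLS already terminated
    if b"passthru(" in request or b"<script" in request.lower():
        client_tls.sendall(b"HTTP/1.1 403 Forbidden\r\nContent-Length: 0\r\n\r\n")
        return
    # Re-encrypt toward the backend so the payload never travels in cleartext.
    backend_ctx = ssl.create_default_context()
    with socket.create_connection(BACKEND) as raw:
        with backend_ctx.wrap_socket(raw, server_hostname="app.example.com") as upstream:
            upstream.sendall(request)
            client_tls.sendall(upstream.recv(65536))

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("cert.pem", "key.pem")  # the inspection point must hold the keys
with socket.create_server(("0.0.0.0", 8443)) as listener:
    while True:
        conn, _ = listener.accept()
        with ctx.wrap_socket(conn, server_side=True) as client_tls:
            handle(client_tls)
```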
DevCentral Top5 01/16/2009

I can't believe it's only the second week of this year's Top5 series. There are so many things going on that it feels like it's been weeks since I wrote last. I know the output to the site has only been bumped up marginally, but trust me, things behind the scenes are beyond busy trying to get things ramped up, polished, and ready to push hard all through the year. This week saw some continued series, a couple of really interesting new blog/docs posts, and plenty of awesome action in the forums and wikis. Here's this week's Top5:

Accuracy is important. Vulnerabilities not so much.
http://devcentral.f5.com/s/weblogs/dmacvittie/archive/2009/01/14/accuracy-is-important.-vulnerabilities-not-so-much.aspx

One of the more spirited topics flying around the 'net this week was the article put out by SANS titled "Top 25 Dangerous Programming Errors". While there were some interesting security issues called out in this list, Don took a bit of issue with the title. I have to agree with him when he points out that this should have been called the "Top 25 Dangerous SECURITY Programming Errors". That "Security" in the title would have made all the difference. As it is, though, Don found cause to discuss the difference between security programming errors and other errors more inherent in the functionality and delivery of an application. I like his take on it, and it's a great lead-in to the SANS article which, while over-hyped, still has some decent content.

24: A Day in the Life of Geolocating New DevCentral Members
http://devcentral.f5.com/s/weblogs/JeffB/archive/2009/01/15/3910.aspx

While Jeff is usually found in the background, behind the scenes of DevCentral, keeping things moving and ensuring we're always up to something interesting, he does put out some great blog posts from time to time. In the most recent such post, he details a smattering of user registrations that all occurred within a 24-hour timeframe. He even gives us a big-animal-picture-style chart to show their geographic locations. It's a really interesting snapshot of what's going on with the community: community uptake, new members, and the true geographic disbursement of our little slice of the web. Way cool.

Adobe AIR (FLEX3) Sample BIG-IP Monitoring Application
http://devcentral.f5.com/s/Default.aspx?tabid=63&articleType=ArticleView&articleId=305

Having written powerful blog posts one after the other, Lori goes back to her coding roots a bit in this example that shows how you can have your very own PHP proxy for iControl/FLEX. There isn't much editorial-wise, but there doesn't have to be; there's plenty of tasty coding goodness to be had here. I love seeing the API and the product in general pushed and stretched in different directions, and this is yet another cool example of doing just that. It's good to see all that writing hasn't turned Lori soft in her coding skills.

Investigating the LTM TCP Profile: ECN & LTR
http://devcentral.f5.com/s/Default.aspx?tabid=63&articleType=ArticleView&articleId=304

I know I include Jason's "Investigating the LTM TCP Profile" series in the Top5 every week, but that's just because it's so darn cool. He dives deep, yet again, into some more options in the TCP profile that you can fiddle with to achieve different behaviors. Detailing both Explicit Congestion Notification (ECN) and Limited Transmit Recovery (LTR), he uncovers more of the mysteries hiding in the dark depths of the granular profile options.
I've said it before, but if you want to get the absolute most out of your systems, whether it be flexibility or performance, these articles are definitely worth a look.

Ruby Meets iControl: Switching Policies
http://devcentral.f5.com/s/Default.aspx?tabid=63&articleType=ArticleView&articleId=303

For the second article in the Ruby Meets iControl series, I picked the Ruby WA Policy Switcher. This somewhat simpler application serves a function every bit as cool and useful as the VIP creator I talked about last week. Seeing a variety of languages used for iControl applications is outstanding, and Ruby is another really cool one to add to the list, not to mention a language that's been burning up the search rankings lately. Even if this particular application doesn't fit your exact need, seeing these things done in many different ways, in many different languages, not only gives a wider base of starting points for people wanting to get into iControl coding, but also shows off just how versatile and flexible the API and platform are. I dig it.

There you have it, five more from the top for this week's DevCentral Top5. Thanks for reading and I hope you'll be back next week. Tell a friend. ;)

#Colin

Listening to: Daft Punk - Alive 2007 - Around The World / Harder Better Faster Stronger