vulnerabilities
OWASP Top 10 - 2017 Edition: BIG-IP ASM Coverage
This post is adapted for Japanese readers from "The OWASP Top 10 - 2017 vs. BIG-IP ASM", a blog post by Peter Silva, Senior Solution Developer at F5 Networks.

With the official release of the 2017 edition of the OWASP Top 10, here is a quick overview of how well the WAF features of BIG-IP ASM cover it. First, comparing the 2013 and 2017 editions: a few new items were added and some existing items were consolidated. Now let's look at what BIG-IP ASM provides for each vulnerability.

A1 Injection Flaws: Attack signatures; Meta character restrictions; Parameter value length restrictions
A2 Broken Authentication and Session Management: Brute Force protection; Credentials Stuffing protection; Login Enforcement; Session tracking; HTTP cookie tampering protection; Session hijacking protection
A3 Sensitive Data Exposure: Data Guard; Attack signatures ("Predictable Resource Location" and "Information Leakage")
A4 XML External Entities (XXE): Attack signatures ("Other Application Attacks" - XXE); XML content profile (Disallow DTD); (Subset of API protection)
A5 Broken Access Control: File types; Allowed/disallowed URLs; Login Enforcement; Session tracking; Attack signatures ("Directory traversal")
A6 Security Misconfiguration: Attack signatures; DAST integration; Allowed Methods; HTML5 Cross-Domain Request Enforcement
A7 Cross-site Scripting (XSS): Attack signatures ("Cross Site Scripting (XSS)"); Parameter meta characters; HttpOnly cookie attribute enforcement; Parameter type definitions (such as integer)
A8 Insecure Deserialization: Attack signatures ("Server Side Code Injection")
A9 Using Components with Known Vulnerabilities: Attack signatures; DAST integration
A10 Insufficient Logging and Monitoring: Request/response logging; Attack alarm/block logging; On-device logging and external logging to a SIEM system; Event Correlation

The newly added "A4: XML External Entities (XXE)" item is already covered by signatures:

200018018 External entity injection attempt
200018030 XML External Entity (XXE) injection attempt (Content)

XXE attacks can also be blocked generically with an XML profile, by disabling DTDs and enabling the "Malformed XML data" violation (a minimal parser-hardening sketch follows at the end of this post).

For "A8: Insecure Deserialization", many signatures are likewise already available. Most of them include "serialization" or "serialized object" in their names, for example:

200004188 PHP object serialization injection attempt (Parameter)
200003425 Java Base64 serialized object - java/lang/Runtime (Parameter)
200004282 Node.js Serialized Object Remote Code Execution (Parameter)

That concludes this overview of BIG-IP ASM WAF coverage following the release of the OWASP Top 10 2017.

Related links:
What's New In The OWASP Top 10 And How To Use It
BIG-IP ASM Operations Guide
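As an illustration of the "Disallow DTD" guidance above, here is a minimal sketch of the same idea applied at the application layer in Python. This is not ASM's implementation, just one way to refuse DTDs in untrusted XML before parsing; in production, a vetted library such as defusedxml is the better choice.

import xml.etree.ElementTree as ET

def parse_untrusted_xml(payload: str):
    # Mirror the "Disallow DTD" posture: reject any document carrying
    # a DOCTYPE before it ever reaches the parser, which removes the
    # external-entity vector entirely.
    if "<!DOCTYPE" in payload:
        raise ValueError("DTDs are not allowed in untrusted XML")
    return ET.fromstring(payload)

xxe = """<?xml version="1.0"?>
<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<foo>&xxe;</foo>"""

try:
    parse_untrusted_xml(xxe)
except ValueError as err:
    print("blocked:", err)

This is the same posture the XML content profile enforces in front of the application - the advantage of doing it at the WAF layer is that no application code has to change.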
The OWASP Top 10 - 2017 vs. BIG-IP ASM

With the release of the new 2017 edition of the OWASP Top 10, we wanted to give a quick rundown of how BIG-IP ASM can mitigate these vulnerabilities. First, here's how the 2013 edition compares to 2017. And here is how BIG-IP ASM mitigates the vulnerabilities:

A1 Injection Flaws: Attack signatures; Meta character restrictions; Parameter value length restrictions
A2 Broken Authentication and Session Management: Brute Force protection; Credentials Stuffing protection; Login Enforcement; Session tracking; HTTP cookie tampering protection; Session hijacking protection
A3 Sensitive Data Exposure: Data Guard; Attack signatures ("Predictable Resource Location" and "Information Leakage")
A4 XML External Entities (XXE): Attack signatures ("Other Application Attacks" - XXE); XML content profile (Disallow DTD); (Subset of API protection)
A5 Broken Access Control: File types; Allowed/disallowed URLs; Login Enforcement; Session tracking; Attack signatures ("Directory traversal")
A6 Security Misconfiguration: Attack signatures; DAST integration; Allowed Methods; HTML5 Cross-Domain Request Enforcement
A7 Cross-site Scripting (XSS): Attack signatures ("Cross Site Scripting (XSS)"); Parameter meta characters; HttpOnly cookie attribute enforcement; Parameter type definitions (such as integer)
A8 Insecure Deserialization: Attack signatures ("Server Side Code Injection")
A9 Using Components with Known Vulnerabilities: Attack signatures; DAST integration
A10 Insufficient Logging and Monitoring: Request/response logging; Attack alarm/block logging; On-device logging and external logging to a SIEM system; Event Correlation

Specifically, we have attack signatures for "A4:2017-XML External Entities (XXE)":

200018018 External entity injection attempt
200018030 XML External Entity (XXE) injection attempt (Content)

XXE attacks can also be mitigated with an XML profile, by disabling DTDs (and, of course, enabling the "Malformed XML data" violation).

For "A8:2017-Insecure Deserialization" we have many signatures, which usually include the name "serialization" or "serialized object", like:

200004188 PHP object serialization injection attempt (Parameter)
200003425 Java Base64 serialized object - java/lang/Runtime (Parameter)
200004282 Node.js Serialized Object Remote Code Execution (Parameter)

(A signature-style detection sketch follows at the end of this post.)

A quick run-down thanks to some of our security folks. ps

Related:
What's New In The OWASP Top 10 And How To Use It
BIG-IP ASM Operations Guide
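As a footnote to the deserialization signatures above, here is a rough sketch of how a signature-style check for serialized-object markers might look. The patterns are illustrative only - they are not ASM's actual signature logic - but they correspond to well-known markers: the Java serialization stream magic (0xACED0005, which reads "rO0AB" once Base64-encoded), the PHP serialized-object prefix, and the function marker seen in node-serialize remote-code-execution payloads.

import base64
import re

# Illustrative serialized-object markers (not ASM's real signatures):
SERIALIZATION_MARKERS = [
    re.compile(rb"\xac\xed\x00\x05"),   # Java serialization stream magic
    re.compile(rb'O:\d+:"'),            # PHP object, e.g. O:8:"stdClass"
    re.compile(rb"_\$\$ND_FUNC\$\$_"),  # node-serialize RCE payload marker
]

def looks_like_serialized_object(value: bytes) -> bool:
    candidates = [value]
    try:
        # Payloads often arrive Base64-encoded, as in signature 200003425.
        candidates.append(base64.b64decode(value, validate=True))
    except Exception:
        pass  # not valid Base64 -- check the raw bytes only
    return any(p.search(c) for p in SERIALIZATION_MARKERS for c in candidates)

# A Base64-encoded Java stream header followed by "java.lang.Runtime":
assert looks_like_serialized_object(b"rO0ABXNyABFqYXZhLmxhbmcuUnVudGltZQ==")
assert not looks_like_serialized_object(b"plain old form data")

Checking both the raw and the Base64-decoded form mirrors how these payloads actually arrive in request parameters.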
Hackable Homes

Is your house vulnerable? Imagine coming home, disarming the alarm system, unlocking your doors and walking into a ransacked dwelling. There are no broken windows, no forced entry, no compromised doggie doors and really no indication that an intruder has entered. Welcome to your connected home. I stop short of calling it a "smart" home, since it's not yet intelligent enough to keep the bad guys out.

From smartphone-controlled front door locks to electrical outlets to security cameras to ovens, refrigerators and coffee machines, internet-connected household objects are making their way into our homes. Our TVs, DVD players and DVRs already are. And anything connected to the internet, as we all know, is a potential target to be compromised. Researchers have shown how easy it is to infect automobiles, and it is only a matter of time before crooks with a little bit of code will be able to watch you leave your driveway, disable your alarms, unlock your door, steal your valuables and get out with minimal trace. Those CSI/NCIS/Criminal Minds/L&O crime dramas will need to come up with some new ideas on how to solve the mystery during the trace-evidence musical montages. The hard-nosed old timer is baffled by the fact that there is nothing to indicate a break-in except for missing items. Is the victim lying for insurance fraud? Could it have been a family member? Or simply a raccoon? A real whodunit! Until, of course, the geeky lab technician emerges from their lair with a laptop showing how the hacker remotely controlled the entire event. "Look Boss, zeros and ones!"

Many of these remotely controlled home devices use a wireless communications protocol called Z-Wave. It's a low-power radio protocol that allows home devices to communicate with each other and be controlled remotely over the internet. Last year, 1.5 million home automation products were sold in the US, and that is expected to grow to 8 million in less than 5 years. An estimated 5 million Z-Wave devices will be shipped this year. Like any communications protocol, riff-raff will attempt to break it, intercept it and maliciously control it. And as the rush to get these connected devices into consumers' hands and homes grows, security protections may lag.

I often convey that the hacks of the future just might involve your refrigerator. Someone takes out all the internet-enabled fridges on the West Coast and there is a food spoilage surge, since no one owns legacy fridges any more... let alone Styrofoam coolers. ps

Related:
'Smart homes' are vulnerable, say hackers
The five scariest hacks we saw last week
From Car Jacking to Car Hacking
The Prosecution Calls Your Smartphone to the Stand
Mobile Threats Rise 261% in Perspective
Q. The Safest Mobile Device? A. Depends
Holiday Shopping SmartPhone Style
SmartTV, Smartphones and Fill-in-the-Blank Employees
Inside Look: BIG-IP ASM Botnet and Web Scraping Protection

I hang with WW Security Architect Corey Marshall to get an inside look at the botnet detection and web scraping protection in BIG-IP ASM. ps

Related:
F5's YouTube Channel
In 5 Minutes or Less Series (23 videos - over 2 hours of In 5 Fun)
Inside Look Series
3 reasons you need a WAF even if your code is (you think) secure

Everyone is buzzing and tweeting about the SANS Institute CWE/SANS Top 25 Most Dangerous Programming Errors, many heralding its release as the dawning of a new age in secure software. Indeed, it's already changing purchasing requirements. Byron Acohido reports that the Department of Defense is leading the way by "accepting only software tested and certified against the Top 25 flaws."

Some have begun speculating that this list obviates the need for web application firewalls (WAF). After all, if applications are secured against these vulnerabilities, there's no need for an additional layer of security. Or is there? Web application firewalls, while certainly providing a layer of security against the exploitation of application vulnerabilities resulting from errors such as those detailed by SANS, also provide other security benefits that should be carefully considered before dismissing their usefulness out of hand.

1. FUTURE-PROOFING AGAINST NEW VULNERABILITIES

The axiom says the only things certain in life are death and taxes, but in the world of application security a third is just as certain: the ingenuity of miscreants. Make no mistake, new vulnerabilities will be discovered, and they will, eventually, affect the security of your application. When they are discovered, you'll be faced with an interesting conundrum: take your application offline to protect it while you find, fix, and test the modifications, or leave the application online and take your chances. The third option, of course, is to employ a web application firewall to detect and stop attacks targeting the vulnerability. This allows you to keep your application online while mitigating the risk of exploitation, giving developers the time necessary to find, fix, and test the changes needed to address the vulnerability.

2. ENVIRONMENTAL SECURITY

No application is an island. Applications are deployed in an environment because they require additional moving parts in order to actually be useful. Without an operating system, an application or web server, and a network stack, applications really aren't all that useful to the end user. While the SANS list takes a subtle stab at mentioning this with its inclusion of the OS command injection vulnerability, it assumes that all software and systems required to deploy and deliver an application are certified against the list and therefore secure. This is a utopian ideal that is unlikely to become reality, and even if it were to come true, see reason #1 above. Web application firewalls protect more than just your application; they can also provide needed protection against vulnerabilities specific to operating systems, application and web servers, and the rest of the environment required to deploy and deliver your application. The increasing deployment of applications in virtualized environments carries security risks, too. The potential exploitation of the hypervisor is a serious issue that few have addressed thus far in their rush to adopt virtualization technology.

3. THE NINJA ATTACK

There are some attacks that cannot be detected by an application. These usually involve the exploitation of a protocol such as HTTP or TCP and appear to the application to be legitimate requests. These "ninja"-style attacks take advantage of the fact that applications are generally aware of only one user session at a time, and are not able to make decisions based on the activity of all users occurring at the same time. The application cannot prevent these attacks.
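To make that limitation concrete, here is a minimal sketch - an illustration of the idea, not any product's actual logic - of the kind of cross-session bookkeeping a device in front of the application can do: noticing that a single session cookie is suddenly arriving from many source addresses within a short window, something no individual application session can observe about the others.

import time
from collections import defaultdict

WINDOW = 60.0  # seconds of history to keep per session cookie
seen = defaultdict(list)  # cookie -> [(timestamp, client_ip), ...]

def observe(cookie, client_ip, now=None):
    # Record the request, drop stale entries, and flag the cookie if it
    # has been presented from too many distinct addresses recently.
    now = time.time() if now is None else now
    events = [e for e in seen[cookie] if now - e[0] < WINDOW]
    events.append((now, client_ip))
    seen[cookie] = events
    return len({ip for _, ip in events}) > 3

# The same cookie replayed from five addresses within a minute:
for i in range(5):
    suspicious = observe("session=abc123", "203.0.113.%d" % i)
print("suspicious" if suspicious else "ok")  # -> suspicious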
Attacks involving the manipulation of cookies, replay, and other application-layer logical attacks also often go undetected by applications because they appear to be legitimate. SANS offers a passing nod to some of these types of vulnerabilities in its "Risky Resource Management" category, particularly CWE-642 (External Control of Critical State Data). Addressing this category for existing applications will likely require heavy modification of existing applications and new designs for new ones. In the meantime, applications remain vulnerable to this category of vulnerability, as well as to the ninja attacks that are not, and cannot be, addressed by the SANS list. A web application firewall detects and prevents such stealth attacks attempting to exploit the legitimate behavior of protocols and applications.

The excitement with which the SANS list has been greeted is great news for security professionals, because it shows an increased awareness of the importance of secure coding and application security in general. That organizations will demand proof that applications - third-party or in-house - are secure against such a list is sorely needed as a means to ensure better application security across the whole infrastructure. But "the list" does not obviate the usefulness of, or need for, additional security measures, and web application firewalls have long provided benefits beyond simply protecting against exploitation of the SANS Top 25 errors. Those benefits remain valid and tangible even if your code is (you think) secure. Just because you installed a digital home security system doesn't mean you should get rid of the deadbolts. Or vice versa.

Related articles:
Security is not a luxury item
DoS attack reveals (yet another) crack in net's core
CWE/SANS TOP 25 Most Dangerous Programming Errors
Mike Fratto on CWE/SANS TOP 25 Most Dangerous Programming Errors
Zero Day Threat: One big step toward a safer Internet
Out, Damn'd Bot! Out, I Say!

Exorcising your digital demons. Most people are familiar with Shakespeare's The Tragedy of Macbeth, and particularly with the famous line uttered repeatedly by Lady Macbeth, "Out, damn'd spot! Out, I say!", as she tries to wash imaginary bloodstains from her hands, wracked with guilt over the many murders of innocent men, women, and children she and her husband have committed.

It might be no surprise to find a similar situation in the datacenter, late at night. Against a background of humming servers and cozily blinking lights shedding a soft glow upon the floor, you might hear some of your infosecurity staff roaming the racks and crying out "Out, damn'd bot! Out, I say!" as they try to exorcise digital demons from their applications and infrastructure. Because once those bots get in, they tend to take up permanent residence. Getting rid of them is harder than you'd think because, like Lady Macbeth's imaginary bloodstains, they just keep coming back - until you address the source.
New TCP vulnerability about trust, not technology

I read about a "new" TCP flaw that, according to C|Net News, puts Web sites at risk. There is very little technical information available; the researchers who discovered this tasty TCP tidbit canceled a conference talk on the subject and have been sketchy about the details of the flaw when talking publicly. So I did some digging and ran into a wall of secrecy almost as high as the one Kaminsky placed around the DNS vulnerability. So I hit Twitter and leveraged the simple but effective power of asking for help, which resulted in several replies leading me to Fyodor and an April 2000 Bugtraq entry.

The consensus at this time seems to be that the wall Kaminsky built was for good reason, but this one? No one's even trying to ram it down, because it doesn't appear to be anything new. Which makes the "oooh, scary!" coverage by the mainstream and trade press almost amusing, and definitely annoying.

The latest "exploit" appears to be, in a nutshell, a second (or more) discovery regarding the nature of TCP. It appears to exploit the way in which TCP legitimizes a client. In that sense the rediscovery (I really hesitate to call it that, by the way) is on par with Kaminsky's DNS vulnerability, simply because the exploit appears to be about the way the protocol works, not any technical vulnerability like a buffer overflow. TCP, and the applications riding atop it, inherently trust any client that knocks on the door (SYN) and responds correctly (ACK) when TCP answers the door (SYN-ACK). It is simply this inherent trust in the TCP handshake as validation of a client's legitimacy that makes these kinds of attacks possible. But that's what makes the web work, kids, and it's not something we should be getting all worked up about. Really, the headlines should read more like "Bad people could misuse the way the web works. Again."

This likely isn't about technology, it's about trust, and the fact that the folks who wrote TCP never thought about how evil some people can be and that they'd take advantage of that trust and exploit it. Silly them, forgetting to take into account human nature when writing a technical standard. If they had, however, we wouldn't have the Internet we have today, because the trust model on the Web would have to be "deny everything, trust no one" rather than "trust everyone unless they prove otherwise."

So is the danger as great as is being portrayed around the web? I doubt it, unless the researchers have stumbled upon something really new. We've known about these kinds of attacks for quite some time now. Changing the inherent nature of TCP isn't something likely to happen anytime soon, but contrary to statements that there are no workarounds or solutions to this problem, there are plenty of solutions that address these kinds of attacks. I checked in with our engineers, just in case, and got the low-down on how BIG-IP handles this kind of situation; as expected, folks with web sites and applications being delivered via a BIG-IP really have no reason to be concerned about the style of attack described by Fyodor. If it turns out there's more to this vulnerability, I'll check in again. But until then, I'm going to join the rest of the security world and not worry much about this "new" attack.
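Before moving on, here is a minimal sketch of the handshake trust at issue. Nothing here is attack-specific; the point is simply that the operating system completes the SYN / SYN-ACK / ACK exchange entirely on its own, so by the time the application sees a connection, TCP has already vouched for the client. (The address and port are arbitrary; connect with something like curl http://127.0.0.1:8080 to see it fire.)

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 8080))
srv.listen(128)  # the kernel queues fully handshaken connections here

# The three-way handshake happens entirely in the kernel; accept()
# returns only after TCP has already "validated" the client.
conn, addr = srv.accept()
print("TCP considers %s:%d legitimate; anything further is up to the app" % addr)
conn.close()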
In the end, it appears that the researchers are not only exploiting the trust model of TCP, they're exploiting the trust between people: the trust the press has in "technology experts" to find real technical vulnerabilities, and the trust folks have in the trade press to tell them about it. That kind of exploitation is something that can't be addressed with technology. It can't be fixed by rewriting a TCP stack, and it certainly can't be patched by any vendor.

Related Posts:
Layer 4 vs Layer 7 DoS Attack
The Unpossible Task of Eliminating Risk
Soylent Security
I am in your HTTP headers, attacking your application

Zero-day IE exploits and general mass SQL injection attacks often overshadow potentially more dangerous exploits targeting lesser-known applications and attack vectors. These exploits are potentially more dangerous because, once proven through a successful attack on these lesser-known applications, they can rapidly be adapted to exploit more common web applications, and no one is specifically concentrating on preventing them because they're, well, not so obvious.

Recently, the SANS Internet Storm Center featured a write-up on attempts to exploit Roundcube Webmail via the HTTP Accept header. Such an attack is generally focused on exploitation of operating system, language, or environmental vulnerabilities, as the data contained in HTTP headers (aside from cookies) is rarely used by the application as user input. An example provided by SANS of an attack targeting Roundcube via the HTTP Accept header:

POST /roundcube/bin/html2text.php HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5
Host: xx.xx.xx.xx
Accept: ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==
Content-Length: 54

What the attackers in this example were attempting to do is trick the application into evaluating system commands encoded in the Accept header, in order to retrieve data they should not have had access to. The purpose of the attack, however, could easily have been some other nefarious deed, such as writing a file to the system for use in a cross-site scripting attack, deleting files, or just generally wreaking havoc with the system.

This is the problem security professionals and developers face every day: what devious thing could some miscreant attempt to do? What must I protect against? This is part of what makes secure coding so difficult - developers aren't always sure what they should be protecting against, and neither are the security pros, because the bad guys are always coming up with new ways to exploit some aspect of an application or the transport layer protocols.

Think HTTP headers aren't generally used by applications? Consider the custom SOAPAction header for SOAP web services, and cookies, and ETags, and... well, the list goes on. HTTP headers carry data used by applications and therefore should be considered a viable transport mechanism for malicious code.

So while the exploitation of HTTP headers is not nearly as common or rampant as mass SQL injection today, its use to target specific applications means it is a possible attack vector for the future, against which applications should be protected now, before it becomes critical to do so. No, it may never happen. Attackers may never find a way to truly exploit HTTP headers. But then again, they might, and apparently have been trying. Better safe than sorry, I say.

Regardless of the technology you use, the process is the same: you need to determine what is allowed in HTTP headers and verify them just as you would any other user-generated input, or you need to invest in a solution that provides this type of security for you. RFC 2616 (HTTP), specifically section 14, provides a great deal of guidance and detail on what is acceptable in an HTTP header field. Never blindly evaluate or execute upon data contained in an HTTP header field. Treat any input, even input that is not traditionally user-generated, as suspect.
That's a good rule of thumb for protecting against malicious payloads anyway, but it's an especially good rule when dealing with what is likely considered a non-traditional attack vector (until it is used, and overused, to the point it's considered typical, of course). Possible ways to prevent the potential exploitation of HTTP headers:

- Use network-side scripting or mod_rewrite to intercept, examine, and either sanitize or outright reject requests containing suspicious data in HTTP headers.
- Invest in a security solution capable of sanitizing transport (TCP) and application layer (HTTP) protocols, and use it to do so.
- Investigate whether an existing solution - either security or application delivery focused - is capable of providing the means through which you can enforce protocol compliance.
- Use secure coding techniques to examine - not evaluate - the data in any HTTP headers you are using, and ensure they contain legitimate values before using them in any way (a minimal validation sketch appears at the end of this post).

A little proactive security can go a long way toward not being the person who inadvertently discovers a new attack methodology.

Related articles:
Gmail Is Vulnerable to Hackers
The Concise Guide to Proxies
3 reasons you need a WAF even though your code is (you think) secure
Stop brute forcing listing of HTTP OPTIONS with network-side scripting
What's the difference between a web application and a blog?
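Here is a minimal sketch of the last bullet's advice - examine, don't evaluate. It loosely approximates the RFC 2616 media-range grammar for the Accept header and rejects values that don't fit, such as the Base64-encoded payload from the Roundcube attempt above. It's an illustration of the idea, not a complete or production-grade validator (quoted parameter values, for instance, aren't handled).

import re

# Loose approximation of an RFC 2616 Accept media range, e.g.
# "text/html", "*/*" or "application/xml;q=0.9".
MEDIA_RANGE = re.compile(
    r'^\s*(\*|[\w.+-]+)/(\*|[\w.+-]+)\s*(;\s*[\w.+-]+=[\w.+-]*\s*)*$'
)

def accept_header_is_sane(value):
    # Every comma-separated media range must match the grammar;
    # anything else (including encoded script) is rejected outright.
    return all(MEDIA_RANGE.match(part) for part in value.split(","))

assert accept_header_is_sane("text/html,application/xhtml+xml;q=0.9,*/*;q=0.8")
assert not accept_header_is_sane(
    "ZWNobyAoMzMzMjEyKzQzMjQ1NjY2KS4iICI7O3Bhc3N0aHJ1KCJ1bmFtZSAtYTtpZCIpOw==")

A WAF or network-side scripting applies this same kind of check in front of every application at once, which is the point of the list above.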
Why Vulnerabilities Go Unpatched

The good folks at Verizon Business, who recently released their 2008 Data Breach Investigations Report, sounded almost surprised by the discovery that "Intrusion attempts targeted the application layer more than the operating system and less than a quarter of attacks exploited vulnerabilities. Ninety percent of known vulnerabilities exploited by these attacks had patches available for at least six months prior to the breach."

This led the researchers to conclude that "For the overwhelming majority of attacks exploiting known vulnerabilities, the patch had been available for months prior to the breach. [...] Also worthy of mention is that no breaches were caused by exploits of vulnerabilities patched within a month or less of the attack. This strongly suggests that a patch deployment strategy focusing on coverage and consistency is far more effective at preventing data breaches than 'fire drills' attempting to patch particular systems as soon as patches are released."

There's actually a very valid reason why vulnerabilities go unpatched for months in an organization, regardless of how frustrating that reality may be to security professionals: reliability and stability. The first rule of IT is "Business-critical applications and systems shall not be disturbed." When applications are the means through which your business runs, i.e. generates revenue, you are very careful about disturbing the status quo, because even the smallest mistake can lead to downtime, which in turn results in lost revenue. For example, it's estimated that Amazon lost $31,000 per minute while it was down - and it was down long enough for the lost revenue to jump into six-digit figures.

Patching a system requires testing and certification. The operating system or application must be patched, and then all applications running on that system must be tested to ensure that the patch has not affected them in any way. IT folks have been burned one too many times in the past by patches to simply "fire and forget" when it comes to changing operating systems, platforms, and applications in production environments. That's why multiple environments exist in the enterprise - an application moves from development to quality assurance and testing to production - and why we've all sat around eating pizza and watching the clock late on a Saturday night, holding a printed copy of our roll-back plans just in case something went wrong.

Patching even a single vulnerability takes time, and the more applications running on a system, the more time it takes, because each one has to be tested and re-certified in a non-production environment before the patch can be applied to a production system. This process evolved to minimize the impact on production systems and reduce system downtime - which usually translates into lost revenue. Thus, it's no surprise that Verizon's researchers discovered such a high percentage of vulnerability exploits could have been prevented by patches issued months prior to the breach. It's possible that many of those breaches occurred while the patch was being tested and simply hadn't been rolled out to production yet. It's certainly not that IT professionals are unaware these patches exist. It's simply that there are so many moving parts on a production system, with higher risk factors than your average system, that they aren't willing - many would say rightfully so - to apply patches that may potentially break a smoothly running system.
This is one of the reasons we advocate the use of web application firewalls and intelligent application delivery networks. The web application firewall will usually be updated to defend against a known web application vulnerability before the IT folks have had a chance to verify that the issued patch will not negatively affect production systems. This gives IT the time necessary to ensure business continuity while patching, without leaving systems vulnerable to compromise. And if the web application firewall isn't immediately updated, an intelligent application delivery platform can often be used as a stop-gap measure, filtering requests so that attempts to exploit new vulnerabilities are stopped while IT gets ready to patch systems. An application delivery platform can also, through its load-balancing capabilities, enable IT to patch individual systems without downtime, by intelligently directing requests to the available systems that are not being patched at the moment (a sketch of this rolling-patch pattern follows at the end of this post).

IT security professionals understandably want patches applied as soon as possible, but the definition of "as soon as possible" is often days, weeks, or even months from the time the patch is issued. IT and business folks don't want to experience a breach either, but the need to protect applications and systems from exploitation must be balanced against revenue generation and productivity. Employing a web application firewall and/or an intelligent application delivery network solution can bridge the gap between these two seemingly diametrically opposed needs, providing security while ensuring IT has the time to properly test patches.

Imbibing: Coffee
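A postscript: a minimal sketch of the rolling-patch pattern described above. Everything here is hypothetical - the load balancer client and its method names (disable_member, active_connections, enable_member) are illustrative placeholders, not a real F5 or BIG-IP API.

import time

def rolling_patch(lb, pool, servers, apply_patch):
    # Patch one pool member at a time so the rest keep serving traffic.
    for server in servers:
        lb.disable_member(pool, server)          # stop sending new requests
        while lb.active_connections(pool, server) > 0:
            time.sleep(5)                        # let existing sessions drain
        apply_patch(server)                      # patch, then verify, the member
        lb.enable_member(pool, server)           # return it to rotation

The draining step is what makes the pattern downtime-free: existing sessions finish on the old code while new requests flow to the members still in rotation.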