JavaScript-injecting systems' effect on web application end users - a scenario review
Hello! ArvinF is back to share a scenario review where JavaScript-injecting systems affected web application end users - both web and mobile application users.

Problem

Users were failing to log in to a web application protected by BIG-IP ASM/Adv WAF and Shape Security Defense. The site owner noted that authentication was failing for an unknown reason. The blocked responses included an ASM Support ID and an error instructing the user to enable JavaScript:

Please enable JavaScript to view the page's content. Your support ID is: xxxxxxxxxxxx

Troubleshooting

To understand the cause of the authentication failure, we gathered HTTP traffic with an HTTP sniffer. We used HttpWatch and collected HAR (HTTP Archive) files. The site was protected with both on-premise BIG-IP ASM/Adv WAF bot defense and, at the time, Shape Security Defense (now F5 Distributed Cloud Bot Defense). Review of the HAR file in HttpWatch showed the following:

ASM blocked a request to a URL related to authentication and returned a Support ID in the response. The response also included JavaScript code referencing https[:]//s[.]go-mpulse[.]net/boomerang/.

The authentication attempt failed with an error in the HTTP response: "...unable to process your request. Please try again later..."

BIG-IP ASM/Adv WAF-related HTTP cookies from its various features were present, such as the TSPD_101* cookie from Bot Defense client-side challenges, along with other TS cookies, which can also come from Bot Defense and DoS profile and security policy configurations.

There were also HTTP cookies coming from BIG-IP AVR - the f5_cspm cookie was present. The Application Visibility and Reporting (AVR) module provides detailed charts and graphs to give you more insight into the performance of web applications, with detailed views on HTTP and TCP stats, as well as system performance (CPU, memory, etc.).
https://clouddocs.f5.com/training/community/analytics/html/index.html
https://clouddocs.f5.com/api/irules/AVR_CSPM_INJECTION.html

Seeing the JavaScript code referencing "/boomerang/" included in the ASM blocking response was interesting. Reviewing the HAR file, there were several instances of this "/boomerang/". We raised this finding with the site owner, who noted that there is another system in the path between the end users and their web application - a CDN. The traffic flow is as follows:

End user web browser / mobile application >>> CDN >>> FW >>> BIG-IP >>> web application

On the BIG-IP virtual server that fronts the web application, the F5 AVR profile, the ASM/Adv WAF Bot Defense profile and security policy, and the Shape Security Defense iRule are configured. From the F5 side, these were the products with features that may insert JavaScript into the client-side response.

As part of troubleshooting, to isolate the feature that might be causing the failing authentication, the Bot Defense profile was removed from the site's virtual server while the Shape Security iRule and AVR profile were left untouched. The site owner confirmed that authentication worked after this change. Shape Security Defense was implemented using an iRule to protect specific URIs. When the iRule was removed from the virtual server and the Bot Defense and AVR profiles were left on, the site owner again confirmed that authentication worked. But when both the ASM/Adv WAF Bot Defense profile and the Shape Security Defense iRule were configured on the virtual server, the site's authentication failed.
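The HAR review above was done by eye in HttpWatch, but when a capture has hundreds of entries it can help to script the triage. The sketch below is a minimal, illustrative example and not part of the original troubleshooting: it assumes the standard HAR 1.2 layout (log.entries[].request.url and response.content.text), uses the serde_json crate as an external dependency, and takes the marker string ("/boomerang/" in this scenario) as an argument.

```rust
// har_scan.rs - quick triage of a HAR capture for injected script references.
// Requires serde_json as a dependency (cargo add serde_json).
use std::env;
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Usage: har_scan <capture.har> [marker], e.g. har_scan login.har /boomerang/
    let mut args = env::args().skip(1);
    let path = args.next().expect("path to .har file");
    let marker = args.next().unwrap_or_else(|| "/boomerang/".to_string());

    let raw = fs::read_to_string(&path)?;
    let har: serde_json::Value = serde_json::from_str(&raw)?;

    // HAR 1.2 stores each request/response pair under log.entries.
    let entries = har["log"]["entries"].as_array().cloned().unwrap_or_default();
    for entry in entries {
        let url = entry["request"]["url"].as_str().unwrap_or("<no url>");
        let status = entry["response"]["status"].as_i64().unwrap_or(0);
        let body = entry["response"]["content"]["text"].as_str().unwrap_or("");
        if body.contains(marker.as_str()) {
            // Flag responses that carry the injected script reference.
            println!("{} {} contains {}", status, url, marker);
        }
    }
    Ok(())
}
```

Running something like this against the capture lists every response that carried the injected loader, which makes it easier to see whether the script was added by the CDN, by BIG-IP, or by the application itself.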
Per the site owner, there were no changes in the Bot Defense or Shape Security Defense iRule configurations prior to the incident, and these configurations had been in place well before the incident. The site owner shared the findings with their respective internal teams for review.

Resolution

Afterwards, the site owner reported that the site worked as expected and authentication succeeded, with no changes made to either the ASM/Adv WAF Bot Defense profile or the Shape Security Defense iRule on the site's virtual server. The cause of the authentication failure was undetermined. One theory is that another system was inserting JavaScript code into the responses, which may have affected the web application's authentication process by preventing that portion of the site from loading.

Additional Troubleshooting Notes

The data gathered during the troubleshooting were the qkview and the HttpWatch capture - HAR files. It would have helped to take a packet capture alongside the HttpWatch capture while the issue was happening, to get a full view of the problem. Decrypt the packet capture to observe the HTTP exchanges and correlate them with the HttpWatch capture events. The corresponding BIG-IP ASM/Adv WAF application event logs and Bot Defense or DoS protection logs are also helpful in the correlation. Having a visual idea of how the security policy, Bot Defense, or DoS protection profile is configured also helps, so it is good to have screenshots of these. Complete data makes analysis easier.

Gathering the asmqkview with report and traffic data, plus the corresponding ASM and AVR database dumps, helps in the analysis:

asmqkview -s0 --add-request-log --include-traffic-data -f /var/tmp/`/bin/hostname`_asmqkview_`date +%Y%m%d%H%M%S`.tgz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` DCC | gzip -9 > /shared/tmp/dcc.dump.gz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` PLC | gzip -9 > /shared/tmp/plc.dump.gz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` PRX | gzip -9 > /shared/tmp/prx.dump.gz
# mysqldump -uroot -p`perl -I/ts/packages -MPassCrypt -nle 'print PassCrypt::decrypt_password($_)' /var/db/mysqlpw` logdb | gzip -9 > /shared/tmp/logdb.dump.gz

It also helps to know which systems are in the path of the web application and whether they have features that may interfere with the features of BIG-IP ASM/Adv WAF or Shape Security Defense. Per the findings, there was a CDN injecting JavaScript code into the HTTP response, and it may have contributed to the authentication failure for the end users. Isolate potentially conflicting features by removing them one at a time and observing the HTTP responses. Per the reference configuration, BIG-IP ASM/Adv WAF, Shape Security Defense, and BIG-IP AVR worked well together prior to the incident.

boomerang

The injected JavaScript code noted in the ASM blocking page response was loaded from https[:]//s[.]go-mpulse[.]net/boomerang/. Checking this reference, it is related to https://github.com/akamai/boomerang:

"boomerang is a JavaScript library that measures the page load time experienced by real users, commonly called RUM (Real User Measurement). It has the ability to send this data back to your server for further analysis. With boomerang, you find out exactly how fast your users think your site is."
The comparable F5 product is BIG-IP Application Visibility and Reporting (AVR), which collects data on the "performance of web applications, with detailed views on HTTP and TCP stats, as well as system performance (CPU, memory, etc.)." Organizations may have specific needs for the data they collect from their site or web application, and a customizable solution such as boomerang can help with that.

That's It For Now

I hope this scenario review of JavaScript-injecting systems' effect on web application end users helps in your next troubleshooting and gives you guidance on what data to gather, what to look for, and what troubleshooting options you have. The F5 SIRT creates security-related content posted here in DevCentral, sharing the team's security mindset and knowledge. Feel free to view the articles tagged with: F5 SIRT, series-F5SIRT-this-week-in-security, TWIS

WAF evasion techniques for Command Injection
Let's talk about Command Injection. I'm going to approach this specifically from the perspective of Web Application Firewalls (like BIG-IP Advanced WAF, BIG-IP Next WAF, F5 Distributed Cloud WAF and so on), but these concepts are generally applicable anywhere user input is used to construct commands run on the system, directly or indirectly.

So, what is Command Injection? To quote OWASP, who put it very nicely:

"Command injection is an attack in which the goal is execution of arbitrary commands on the host operating system via a vulnerable application. Command injection attacks are possible when an application passes unsafe user supplied data (forms, cookies, HTTP headers etc.) to a system shell. In this attack, the attacker-supplied operating system commands are usually executed with the privileges of the vulnerable application. Command injection attacks are possible largely due to insufficient input validation. This attack differs from Code Injection, in that code injection allows the attacker to add their own code that is then executed by the application. In Command Injection, the attacker extends the default functionality of the application, which executes system commands, without the necessity of injecting code."

Like I say, in this case I'm going to talk about command injection against web applications, but it can happen in almost any piece of software that works on untrusted user input. Perhaps the most famous example of a command injection vulnerability is Shellshock, a suite of vulnerabilities in the Unix Bash shell. If you needed any proof that these can be hard things to find as a defender, Shellshock lived undiscovered (or at least undisclosed - we have no way of proving that no malicious entities knew of the bug!) for 25 years, from 1989 to 2014, in one of the most widely used pieces of software in the world.

The original Shellshock vulnerability involved a maliciously crafted environment variable containing (malicious) commands after a function definition, e.g.:

env x='() { :;}; echo vulnerable' bash -c "echo test"

On a vulnerable system, running the above command would display "vulnerable" because Bash continues to execute the (injected) commands following the function definition. Injecting a command here requires the use of two specific characters in addition to the command itself: the semi-colon and the space. If you imagine a web application passing commands to Bash on a vulnerable system, you'll see that it would be possible to block this attack simply by blocking requests containing a semi-colon or a space (or indeed having a signature for the full function definition and trailing semi-colon, "{ :;};").

Bypassing protections

Any time there is a WAF in front of a vulnerable system - and sometimes even when there isn't - an attacker must try to evade the rules preventing them from simply injecting their chosen command. You'll often see this when attackers or scanners are looking for SQL injection vulnerabilities in web applications, replacing characters like ' with %27 or the space with %20 (and many other tricks), or using chunks of existing text with the SUBSTRING() function to construct queries without having to use the actual text. Many of the same tricks work for command injection vulnerabilities, and I'd like to talk about a specific example here because it's one I hadn't considered until it turned up in some real-life traffic.
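Before getting to that example, it may help to see where such an injection typically lands in application code. The sketch below is a hypothetical illustration in Rust, not taken from any real application, and assumes a Unix-like host with sh and ping available: the first function splices user input into a shell command line, while the second passes it as a discrete argument so the shell never interprets it.

```rust
use std::process::Command;

// DANGEROUS: user input is spliced into a shell command line, so metacharacters
// like ';', '|' or '&&' let an attacker append their own commands.
fn ping_vulnerable(user_input: &str) -> std::io::Result<std::process::Output> {
    Command::new("sh")
        .arg("-c")
        .arg(format!("ping -c 1 {}", user_input))
        .output()
}

// SAFER: no shell is involved; the input is passed as a single argv entry,
// so "127.0.0.1; cat /etc/passwd" is just an (invalid) hostname, not two commands.
fn ping_safer(user_input: &str) -> std::io::Result<std::process::Output> {
    Command::new("ping")
        .args(["-c", "1", user_input])
        .output()
}

fn main() -> std::io::Result<()> {
    // Imagine this value arriving from a form field or query parameter.
    let attacker_controlled = "127.0.0.1; echo injected";

    let out = ping_vulnerable(attacker_controlled)?; // runs ping AND the injected echo
    println!("vulnerable: {}", String::from_utf8_lossy(&out.stdout));

    let out = ping_safer(attacker_controlled)?;      // ping fails, nothing is injected
    println!("safer exit status: {}", out.status);
    Ok(())
}
```

Input validation and allow-listing still matter, but keeping untrusted data out of the shell entirely removes the whole class of characters an attacker needs for this attack.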
Bypassing WAF signatures using Environment Variables

Remember I just said you could construct SQL queries using sub-strings of existing text? Well, if your target system is Windows-based and you know there's a command injection vulnerability but you're unable to exploit it due to character blocks or similar restrictions, then good news: Windows environment variables might be what you're looking for.

Environment variables exist in most operating systems, and Microsoft's are no exception - they date back to DOS and were one of the enhancements Microsoft brought to the table over and above CP/M (unlike the 8.3 filenames, which came right from CP/M!). Their behaviour has been pretty much the same throughout, and there have always been a number of 'default' environment variables like PATH, TMP and TEMP. Current versions of Windows add a number of additional default environment variables like PROGRAMDATA, PROGRAMFILES and so on.

Windows also allows the shell to return just a part of an environment variable's value using the following syntax:

%VARIABLE:~offset,length%

How is this useful to us, you ask? Let's say you know you can inject a command, but you need a space in your command line; you want to inject "ping 127.0.0.1" but the space is dropped or the request is blocked by a WAF looking for "ping <IP>". Well then, you just need an environment variable you know will have a space in it! %PROGRAMFILES%, by default, is set to C:\Program Files on most systems, which has a space right there in the middle. All we need to do to get to it is use %PROGRAMFILES:~10,1% (one character starting at offset 10), for example:

ping%PROGRAMFILES:~10,1%127.0.0.1

Go ahead, fire up a command prompt, and try it out! You could even construct the whole command that way:

%PROGRAMFILES:~3,1%%SYSTEMROOT:~4,2%%PROGRAMFILES:~6,1%%PROGRAMFILES:~10,1%127.0.0.1

Again, fire up a command prompt and give that a try!

Protect against bypasses with BIG-IP Advanced WAF

Now here's the good news: ASM includes signatures, by default, for all of those useful Windows environment variables (and the same for many other systems, too), so if you were to try the above on a vulnerable system with the right signatures in the policy, you'd still be blocked.

All these signatures are part of the Predictable Resource Location Signatures signature set, so you'll want to make sure you either have all signatures of Medium or above, or at least this set, assigned to your policy.

Summary

Command Injection is a huge topic, much bigger than I can cover in one blog post, but hopefully this shows you one way an attacker might try to evade protection in front of a vulnerable Windows system, and some ways in which you can protect it - BIG-IP Advanced WAF and F5 Distributed Cloud WAF both have signatures for this kind of evasion.
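As a closing illustration of how that substring syntax behaves, here is a small Rust sketch - purely illustrative, not an F5 tool - that expands %VAR:~offset,length% tokens roughly the way cmd.exe would. Something like this can be handy when decoding an obfuscated payload found in logs to see what the attacker was actually trying to run. The example environment values are assumptions based on a default Windows install.

```rust
use std::collections::HashMap;

// Expand %NAME:~offset,length% tokens using the provided variable values.
// Unknown variables are left untouched; negative offsets/lengths (which
// cmd.exe also supports) are not handled in this sketch.
fn expand(payload: &str, env: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut rest = payload;
    while let Some(start) = rest.find('%') {
        out.push_str(&rest[..start]);
        // Find the closing '%' of this token.
        if let Some(end) = rest[start + 1..].find('%') {
            let token = &rest[start + 1..start + 1 + end];
            rest = &rest[start + 2 + end..];
            out.push_str(&expand_token(token, env));
        } else {
            out.push_str(&rest[start..]);
            rest = "";
        }
    }
    out.push_str(rest);
    out
}

fn expand_token(token: &str, env: &HashMap<&str, &str>) -> String {
    // Split "NAME:~offset,length" into its parts; plain %NAME% has no ":~".
    let (name, spec) = match token.split_once(":~") {
        Some((n, s)) => (n, Some(s)),
        None => (token, None),
    };
    let Some(value) = env.get(name) else {
        return format!("%{}%", token); // unknown variable: leave as-is
    };
    match spec {
        None => value.to_string(),
        Some(spec) => {
            let (off, len) = match spec.split_once(',') {
                Some((o, l)) => (o.parse().unwrap_or(0), l.parse().unwrap_or(0)),
                None => (spec.parse().unwrap_or(0), value.len()),
            };
            value.chars().skip(off).take(len).collect()
        }
    }
}

fn main() {
    // Example values as they would appear on a default Windows install.
    let env = HashMap::from([
        ("PROGRAMFILES", "C:\\Program Files"),
        ("SYSTEMROOT", "C:\\Windows"),
    ]);
    let payload = "%PROGRAMFILES:~3,1%%SYSTEMROOT:~4,2%%PROGRAMFILES:~6,1%%PROGRAMFILES:~10,1%127.0.0.1";
    println!("{}", expand(payload, &env)); // prints: Ping 127.0.0.1
}
```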
Enhancing Software Security with Rust: A Solution to Common Vulnerabilities

Introduction

The digital landscape is continually evolving, with cybersecurity threats growing both in sophistication and number. Among these threats, memory safety vulnerabilities stand out, contributing to a significant portion of software security issues today. Supported by guidance from CISA, NSA, and many other security-minded individuals, there is an urgent need for programming practices that inherently mitigate memory safety risks.

Addressing Memory Safety with Rust

Rust is an open-source programming language renowned for its dedication to safety and performance. It effectively addresses common challenges related to memory safety and concurrency without sacrificing execution speed. In this article, we will explore the top three Common Weakness Enumerations (CWEs) from the 2023 Known Exploited Vulnerabilities list: Use After Free, Heap-based Buffer Overflow, and Out-of-bounds Write. All of these CWEs relate directly to memory safety problems. Throughout this article, we will demonstrate how Rust's unique capabilities serve as effective safeguards against these widespread concerns.

Note: While Rust also mitigates other critical issues such as double-free, dangling pointers, and concurrency issues like race conditions, deadlocks, and improper synchronization, these will not be covered in detail in this article.

CWE-416: Use After Free

This vulnerability occurs when a program continues to use a memory location after it has been freed, potentially leading to application crashes or, in more severe scenarios, arbitrary code execution. Languages that require manual memory management, such as C and C++, are typically vulnerable to this issue, as developers must explicitly manage memory allocation and deallocation.

Rust uniquely addresses CWE-416 through its ownership, borrowing, and lifetimes systems, catching potential vulnerabilities at compile time.

Ownership rules enforce that each piece of data is owned by a single entity. When the owner goes out of scope, Rust automatically deallocates the memory associated with it, thereby eliminating the risk of accessing freed memory.

Borrowing allows functions to access data via references without taking ownership, a process carefully scrutinized by the borrow checker. This component of the compiler ensures that all borrowed references adhere strictly to lifetime rules, preventing them from outliving the data they reference, thereby avoiding use-after-free vulnerabilities.

Lifetimes specify the scope for which a reference is valid, enabling the compiler to track and manage the lifespan of data throughout the program. By requiring explicit lifetime annotations where necessary, Rust enforces a clear contract for how long data can be safely borrowed, further strengthening its memory safety by preventing dangling references that could lead to vulnerabilities.
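Here is a tiny illustration of those rules in action (my own sketch, not taken from any particular codebase); the commented-out lines are exactly the kinds of use-after-free patterns the compiler refuses to build:

```rust
fn main() {
    // Ownership: `data` owns the heap allocation.
    let data = String::from("session token");

    // Borrowing: `longest_word` only borrows `data`; ownership stays put.
    let word = longest_word(&data);
    println!("longest word: {}", word);

    // Moving `data` into `consume` transfers ownership; the String is
    // dropped (freed) when `consume` returns.
    consume(data);

    // println!("{}", data);  // compile error: `data` was moved above
    // println!("{}", word);  // compile error: `word` borrows from `data`,
    //                        // so the move itself would be rejected
}

// The returned reference is tied to the lifetime of the input reference,
// so the compiler guarantees it can never outlive the String it points into.
fn longest_word(text: &str) -> &str {
    text.split_whitespace().max_by_key(|w| w.len()).unwrap_or("")
}

fn consume(s: String) {
    println!("consumed {} bytes", s.len());
}
```

Uncommenting either line turns what would be a runtime memory-safety bug in C or C++ into a compile-time error, which is the core of Rust's answer to CWE-416.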
CWE-122: Heap-based Buffer Overflow

Heap-based buffer overflows occur when data exceeds its allocated memory in the heap, potentially allowing attackers to read or write memory they shouldn't have access to. This can result in crashing the application or enabling arbitrary code execution. Such vulnerabilities are particularly prevalent in languages like C and C++, which do not automatically enforce bounds checking. Rust effectively addresses CWE-122 through multiple security measures, including its type system, the principle of immutability by default, and robust memory safety abstractions.

Type systems are critical for security, and Rust's system exemplifies this by being both statically and strongly typed. Static typing ensures all data types are defined before runtime, allowing the compiler to catch type errors early and mitigate related vulnerabilities. Strong typing in Rust requires explicit type conversions, guarding against unsafe coercions that could lead to issues like buffer overflows. Additionally, Rust enforces runtime bounds checking, which actively prevents heap-based buffer overflows and out-of-bounds writes by causing out-of-bounds operations to panic rather than fail silently or behave unpredictably. Together, these features enhance security by enforcing strict type safety and data integrity, and by ensuring reliable and predictable error handling.

Immutability by default ensures that all data is immutable unless explicitly declared mutable. This design significantly reduces the risk of unintended data modifications that could lead to buffer overflows. By default, this immutability prevents many common programming errors associated with memory corruption.

Memory safety abstractions provide high-level types such as Vec<T> for managing dynamic arrays and Box<T> for smart pointers. These abstractions come with built-in bounds checking, enforced at runtime. When data operations exceed their allocated bounds, Rust ensures that these operations result in controlled runtime panics, preventing unsafe memory access and preserving application integrity. Additionally, Rust promotes using iterators when working with collections. Iterators are both safe and efficient because they abstract away the need for manual bounds checking. This not only simplifies the code but also eliminates a common source of errors associated with direct index access, further enhancing safety and performance.

CWE-787: Out-of-bounds Write

This vulnerability involves writing data past the bounds of allocated memory, which can corrupt data, crash the application, or lead to code execution. It predominantly affects languages like C and C++, where bounds checking is not enforced automatically by the language and requires manual oversight by developers.

Rust addresses CWE-787 through its robust memory safety protocols, including automatic bounds checking for all memory write operations. This ensures that data stays within safe operational limits at both compile time and runtime, preventing potential security breaches. Additionally, features like the Option enum and fearless concurrency further safeguard against out-of-bounds writes by enforcing strict data handling and thread-safe access.

Automatic bounds checking applies to all memory write operations, effectively preventing data from being written outside allocated segments. This safety measure operates during both compilation and runtime, where Rust ensures safe failure modes through structured error handling and panics rather than allowing undefined behavior.

The Option enum is special in Rust, and its use ensures data is managed safely, helping developers avoid The Billion Dollar Mistake. Unlike many other languages, Rust does not have null values for any data type; using the Option enum requires developers to explicitly handle the Some (data present) and None (data absent) cases, promoting deliberate and safe data access patterns. This forces developers to handle cases which might otherwise go undefined in other languages.
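A short, self-contained illustration (mine, not from the original post) of how bounds checking and the Option enum show up in everyday Rust code: writes through an index either succeed, panic loudly, or are handled explicitly - they never silently scribble past the end of the buffer.

```rust
fn main() {
    let mut buffer = vec![0u8; 4]; // heap-allocated, length 4

    // In-bounds write: fine.
    buffer[0] = 42;

    // Out-of-bounds write: this would compile, but at runtime it panics
    // immediately ("index out of bounds") instead of corrupting the heap.
    // buffer[10] = 1;

    // Explicit, non-panicking alternative: get_mut returns an Option,
    // forcing the caller to decide what "index 10 doesn't exist" means.
    match buffer.get_mut(10) {
        Some(slot) => *slot = 1,
        None => eprintln!("index 10 is out of bounds, write skipped"),
    }

    // Iterators sidestep manual indexing entirely - no bounds to get wrong.
    for (i, byte) in buffer.iter_mut().enumerate() {
        *byte = i as u8;
    }

    println!("{:?}", buffer); // [0, 1, 2, 3]
}
```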
Fearless concurrency is another defining feature of Rust that guarantees thread-safe data access, effectively eliminating the risk of data races that could lead to out-of-bounds writes. This is achieved through Rust's ownership and borrowing rules (described earlier), which ensure that data is accessed by only one thread at a time unless it is explicitly shared in a thread-safe manner. By leveraging these strict concurrency controls, Rust allows developers to build highly concurrent applications without the typical safety compromises seen in other languages, enhancing both performance and security and avoiding defects that are difficult to detect and reproduce.

Conclusion

The future of programming, particularly in systems and kernel development, is trending towards languages that provide strong memory safety guarantees. Rust's integration into system programming and even parts of the Linux kernel highlights a significant shift in software development paradigms. While Rust represents the future of secure programming, it's crucial to recognize the enduring legacy of languages like C. The Linux kernel and widely used software such as OpenSSL and NGINX are predominantly written in C, illustrating that an immediate wholesale transition to Rust across all development sectors isn't practical. However, as we move forward, Rust's role in fostering more secure software is poised to expand, with its focus on memory safety becoming a cornerstone of modern system software.

The adoption of memory-safe languages like Rust isn't just about addressing current vulnerabilities; it's about reshaping software development practices to prioritize security from the ground up. This evolution marks a future where software inherently withstands a wide range of cybersecurity threats, greatly enhancing the resilience of our digital infrastructure against new challenges.

Coordinated Vulnerability Disclosure: A Balanced Approach
The world of vulnerability disclosure encompasses, and affects, many different parties - security researchers, vendors, customers, consumers, and even random bystanders who may be caught in the blast radius of a given issue. The security professionals who manage disclosures must weigh many factors when considering when and what to disclose. There are risks to disclosing an issue when there is no fix yet available, possibly making more malicious actors aware of the issue when those affected have limited options. Conversely, there are also risks to not disclosing an issue for an extended period when malicious actors may already know of it, yet those affected remain blissfully unaware of their risk. And this is but one factor to be considered.

Researchers and Vendors

The relationship between security researchers and product vendors is sometimes perceived as contentious. I'd argue that's largely due to the exceptions that make headlines - because they're exceptions. When some vendor tries to silence a researcher through legal action, blocking a talk at a conference, stopping a disclosure, etc., those moves make for sensational stories simply because they are unusual and extreme. And those vendors are clearly not familiar with the Streisand Effect. The reality is that security researchers and vendors work together every day, with mutual respect and professionalism. We're all part of the security ecosystem, and, in the end, we all have the same goal - to make our digital world a safer, more secure place for everyone.

As a security engineer working for a vendor, you never want to have someone point out a flaw in your product, but you'd much rather be approached by a researcher and have the opportunity to fix the vulnerability before it is exploited than become aware of it because it was exploited. Sure, this is where someone will say that vendors should be catching the issues before the product ships. In a perfect world that would be the case, but we don't live in a perfect world. In the real world, resources are finite. Every complex product will have flaws because humans are involved - especially products that have changed and evolved over time. No matter how much testing you do, for any product of sufficient complexity, you can never be certain that every possibility has been covered. Furthermore, many products developed 10 or 20 years ago are now being used in scenarios that could not be conceived of at the time of their design. For example, the disintegration of the corporate perimeter and the explosion of remote work have exposed security shortcomings in a wide range of enterprise technologies.

As they say, hindsight is 20/20. Defects often appear obvious after they've been discovered but may have slipped by any number of tests and reviews previously. That is, until a security researcher brings a new way of thinking to the task and uncovers the issue. For any vendor who takes security seriously, that's still a good thing in the end. It helps improve the product, protects customers, and improves the overall security of the Internet.

Non sequitur. Your facts are uncoordinated.

When researchers discover a new vulnerability, they are faced with a choice of what to do with that discovery. One option is to act unilaterally, disclosing the vulnerability directly. From a purely mercenary point of view, they might make the highest return by taking the discovery to the dark web and selling it to anyone willing to pay, with no regard for their intentions.
Of course, this option brings with it both moral and legal complications. It arguably does more to harm the security of our digital world overall than any other option, and there is no telling when, or indeed if, the vendor will become aware of the issue so it can be fixed.

Another drastic, if less mercenary, option is Full Disclosure - aka the 'Zero-Day' or '0-day' approach. Dumping the details of the vulnerability on a public forum makes them freely available to all, both defenders and attackers, but leaves no time for advance preparation of a fix, or even a mitigation. This creates a race between attackers and defenders which, more often than not, is won by the attackers. It is nearly always easier, and faster, to create an exploit for a vulnerability and begin distributing it than it is to analyze a vulnerability, develop and test a fix, distribute it, and then patch devices in the field.

Both approaches may, in the long term, improve Internet security as the vulnerabilities are eventually fixed. But in the short and medium terms they can do a great deal of harm to many environments and individual users, as attackers have the advantage and defenders are racing to catch up. These disclosure methods tend to be driven primarily by monetary reward, in the first case, or by some personal or political agenda, in the second case - dropping a 0-day to embarrass a vendor, government, etc. Now, Full Disclosure does have an important role to play, which we'll get to shortly.

Mutual Benefit

As an alternative to unilateral action, there is Coordinated Disclosure: working with the affected vendor(s) to coordinate the disclosure, including providing time to develop and distribute fixes. Coordinated Disclosure can take a few different forms, but before I get into that, a slight detour.

Coordinated Disclosure is the current term of art for what was once called 'Responsible Disclosure', a term which has generally fallen out of favor. The word 'responsible' is, by its nature, judgmental. Who decides what is responsible? For whom? To whom? The reality is it was often a way to shame researchers - anyone who didn't work with vendors in a specified way was 'irresponsible'. There were many arguments in the security community over what it meant to be 'responsible', for both researchers and vendors, and in time the industry moved to the more neutrally descriptive term of 'Coordinated Disclosure'.

Coordinated Disclosure, in its simplest form, means working with the vendor to agree upon a disclosure timeline and to, well, coordinate the process of disclosure. The industry standard is for researchers to give vendors a 90-day period in which to prepare and release a fix before the disclosure is made, though this varies between programs - it may be as short as 60 days or as long as 120 days, and often includes modifiers for conditions such as active exploitation, Critical severity (CVSS) issues, etc.

There is also the option of private disclosure, wherein the vendor notifies only customers directly. This may happen as a prelude to Coordinated Disclosure. There are tradeoffs to this approach - on the one hand it gives end users time to update their systems before the issues become public knowledge, but on the other hand it can be hard to notify all users simultaneously without missing anyone, which would put those unaware at increased risk. The more people who know about an issue, the greater the risk of the information finding its way to the wrong people, or of premature disclosure.
Private disclosure without subsequent Coordinated Disclosure has several downsides. As already stated, there is a risk that not all affected users will receive the notification. Future customers will have a harder time becoming aware of the issues, and often scanners and other security tools will also fail to detect the issues, as they're not in the public record. The lack of CVE IDs also means there is no universal way to identify the issues. There's also a misguided belief that private disclosure will keep the knowledge out of the wrong hands, which is just an example of 'security by obscurity' and rarely effective. It's more likely to instill a false sense of security, which is counter-productive.

Some vendors may have bug bounty programs which include detailed reporting procedures, disclosure guidelines, etc. Researchers who choose to work within a bug bounty program are bound by its rules, at least if they wish to receive the bounty payout from the program. Other vendors may not have a bug bounty program but still have ways for researchers to officially report vulnerabilities. If you can't find a way to contact a given vendor, or aren't comfortable doing so for any reason, there are also third-party reporting programs such as the Vulnerability Information and Coordination Environment (VINCE), or reporting directly to the Cybersecurity & Infrastructure Security Agency (CISA). I won't go into detail on these programs here, as that could be an article of its own - perhaps I will tackle that in the future.

As an aside, at the time of writing, F5 does not have a bug bounty program, but the F5 SIRT does regularly work with researchers for coordinated disclosure of vulnerabilities. Guidelines for reporting vulnerabilities to F5 are detailed in K4602: Overview of the F5 security vulnerability response policy. We do provide an acknowledgement for researchers in any resulting Security Advisory.

Carrot and Stick

Coordinated disclosure is not all about the researcher; the vendor has responsibilities as well. The vendor is being given an opportunity to address the issue before it is disclosed. They should not see this as a burden or an imposition - the researcher is under no obligation to give them this opportunity. This is the 'carrot' being offered by the researcher. The vendor needs to act with some urgency to address the issue in a timely fashion and deliver a fix to their customers before disclosure. The researcher is not to blame if the vendor is given a reasonable time to prepare a fix and fails to do so.

The '90-day' guideline should be considered just that, a guideline. The intention is to ensure that vendors take vulnerability reports seriously and make a real effort to address them. Researchers should use their judgment, and if they feel that the vendor is making a good-faith effort to address the issue but needs more time to do so - especially for a complex issue or one that requires fixing multiple products - it is not unreasonable to extend the disclosure deadline. If the end goal is truly improving security and protecting users, and all parties involved are making a good-faith effort, reasonable people can agree to adjust deadlines on a case-by-case basis. But there should still be some reasonable deadline; remember that an undisclosed vulnerability could be independently discovered and exploited at any time - if it hasn't been already - so a little firmness is justified. Even good intentions can use a little encouragement.
That said, the researcher also has a stick for the vendors who don't bite the carrot - Full Disclosure. For vendors who are unresponsive to vulnerability reports, who respond poorly to them (with threats, etc.), or who do not make a good-faith effort to fix issues in a timely manner, this is the alternative of last resort. If the researcher has made a good-faith effort at Coordinated Disclosure but has been unable to do so because of the vendor, then the best way to get the word out about the issue is Full Disclosure. You can't coordinate unless both parties are willing to do so in good faith. Vendors who don't understand that it is in their best interest to work with researchers may eventually learn that it is after dealing with Full Disclosure a few times. Full Disclosure is rarely, if ever, a good first option, but if Coordinated Disclosure fails, and the choice becomes No Disclosure vs. Full Disclosure, then Full Disclosure is the best remaining option.

In All Things, Balance

Coordinated disclosure seeks to balance the needs of the parties mentioned at the start of this article - security researchers, vendors, customers, consumers, and even random bystanders. Customers cannot make informed decisions about their networks unless vendors inform them, and that's why we need vulnerability disclosures. You can't mitigate what you don't know about. And the reality is that no one has the resources to keep all their equipment running the latest software release all the time, so updates get prioritized based on need. Coordinated disclosure gives the vendor time to develop a fix, or at least a mitigation, and make it available to customers before the disclosure, allowing customers to respond rapidly and patch their networks before exploits are widely developed and deployed, keeping more users safe.

The coordination is about more than just the timing; vendors and researchers will work together on the messaging of the disclosure, often withholding details in the initial publication to provide time for patching before disclosing information that makes exploitation easier. Crafting a disclosure is always a balancing act between disclosing enough information for customers to understand the scope and severity of the issue and not disclosing information which is more useful to attackers than to defenders.

The Needs of the Many

Coordinated disclosure gets researchers the credit for their work, allows vendors time to develop fixes and/or mitigations, gives customers those resources to apply when the issue is disclosed to them, protects customers by enabling patching faster than other disclosure methods, and ultimately results in a safer, more secure Internet for all. In the end, that's what we're all working for, isn't it?

I encourage vendors and researchers alike to view each other as allies and not adversaries, and to give each other the benefit of the doubt rather than presume some nefarious intent. Most vendors and researchers are working toward the same goals of improved security. We're all in this together. If you're looking for more information on handling coordinated disclosure, you might check out The CERT Guide to Coordinated Vulnerability Disclosure.