Coordinated Vulnerability Disclosure: A Balanced Approach
The world of vulnerability disclosure encompasses, and affects, many different parties – security researchers, vendors, customers, consumers, and even random bystanders who may be caught in the blast radius of a given issue. The security professionals who manage disclosures must weigh many factors when considering when and what to disclose. There are risks to disclosing an issue when there is no fix yet available, possibly making more malicious actors aware of the issue when those affected have limited options. Conversely, there are also risks to not disclosing an issue for an extended period when malicious actors may already know of it, yet those affected remain blissfully unaware of their risk. This is but one factor to be considered.

Researchers and Vendors

The relationship between security researchers and product vendors is sometimes perceived as contentious. I’d argue that’s largely due to the exceptions that make headlines – because they’re exceptions. When some vendor tries to silence a researcher through legal action, blocking a talk at a conference, stopping a disclosure, etc., those moves make for sensational stories simply because they are unusual and extreme. And those vendors are clearly not familiar with the Streisand Effect. The reality is that security researchers and vendors work together every day, with mutual respect and professionalism. We’re all part of the security ecosystem, and, in the end, we all have the same goal – to make our digital world a safer, more secure place for everyone. As a security engineer working for a vendor, you never want to have someone point out a flaw in your product, but you’d much rather be approached by a researcher and have the opportunity to fix the vulnerability before it is exploited than to become aware of it because it was exploited. Sure, this is where someone will say that vendors should be catching the issues before the product ships, etc.
In a perfect world that would be the case, but we don’t live in a perfect world. In the real world, resources are finite. Every complex product will have flaws because humans are involved, especially products that have changed and evolved over time. No matter how much testing you do, for any product of sufficient complexity, you can never be certain that every possibility has been covered. Furthermore, many products developed 10 or 20 years ago are now being used in scenarios that could not be conceived of at the time of their design. For example, the disintegration of the corporate perimeter and the explosion of remote work have exposed security shortcomings in a wide range of enterprise technologies. As they say, hindsight is 20/20. Defects often appear obvious after they’ve been discovered but may have slipped by any number of tests and reviews previously. That is, until a security researcher brings a new way of thinking to the task and uncovers the issue. For any vendor who takes security seriously, that’s still a good thing in the end. It helps improve the product, protects customers, and improves the overall security of the Internet.

Non sequitur. Your facts are uncoordinated.

When researchers discover a new vulnerability, they are faced with a choice of what to do with that discovery. One option is to act unilaterally, disclosing the vulnerability directly. From a purely mercenary point of view, they might make the highest return by taking the discovery to the dark web and selling it to anyone willing to pay, with no regard to their intentions. Of course, this option brings with it both moral and legal complications. It arguably does more to harm the security of our digital world overall than any other option, and there is no telling when, or indeed if, the vendor will become aware of the issue for it to be fixed. Another drastic, if less mercenary, option is Full Disclosure - aka the ‘Zero-Day’ or ‘0-day’ approach.
Dumping the details of the vulnerability on a public forum makes them freely available to all, both defenders and attackers, but leaves no time for advance preparation of a fix, or even mitigation. This creates a race between attackers and defenders which, more often than not, is won by the attackers. It is nearly always easier, and faster, to create an exploit for a vulnerability and begin distributing it than it is to analyze a vulnerability, develop and test a fix, distribute it, and then patch devices in the field. Both approaches may, in the long term, improve Internet security as the vulnerabilities are eventually fixed. But in the short- and medium-terms they can do a great deal of harm to many environments and individual users as attackers have the advantage and defenders are racing to catch up. These disclosure methods tend to be driven primarily by monetary reward, in the first case, or by some personal or political agenda, in the second case – dropping a 0-day to embarrass a vendor, a government, etc. Now, Full Disclosure does have an important role to play, which we’ll get to shortly.

Mutual Benefit

As an alternative to unilateral action, there is Coordinated Disclosure: working with the affected vendor(s) to coordinate the disclosure, including providing time to develop and distribute fixes, etc. Coordinated Disclosure can take a few different forms, but before I get into that, a slight detour. Coordinated Disclosure is the current term of art for what was once called ‘Responsible Disclosure’, a term which has generally fallen out of favor. The word ‘responsible’ is, by its nature, judgmental. Who decides what is responsible? For whom? To whom? The reality is it was often a way to shame researchers – anyone who didn’t work with vendors in a specified way was ‘irresponsible’.
There were many arguments in the security community over what it meant to be ‘responsible’, for both researchers and vendors, and in time the industry moved to the more neutrally descriptive term of ‘Coordinated Disclosure’. Coordinated Disclosure, in its simplest form, means working with the vendor to agree upon a disclosure timeline and to, well, coordinate the process of disclosure. The industry standard is for researchers to give vendors a 90-day period in which to prepare and release a fix before the disclosure is made, though this may vary with different programs – it may be as short as 60 days or as long as 120 days, and often includes modifiers for different conditions such as active exploitation, Critical Severity (CVSS) issues, etc.

There is also the option of private disclosure, wherein the vendor notifies only customers directly. This may happen as a prelude to Coordinated Disclosure. There are tradeoffs to this approach – on the one hand it gives end users time to update their systems before the issues become public knowledge, but on the other hand it can be hard to notify all users simultaneously without missing anyone, which would put those unaware at increased risk. The more people who know about an issue, the greater the risk of the information finding its way to the wrong people, or of premature disclosure. Private disclosure without subsequent Coordinated Disclosure has several downsides. As already stated, there is a risk that not all affected users will receive the notification. Future customers will have a harder time becoming aware of the issues, and often scanners and other security tools will also fail to detect the issues, as they’re not in the public record. The lack of CVE IDs also means there is no universal way to identify the issues. There’s also a misguided belief that private disclosure will keep the knowledge out of the wrong hands, which is just an example of ‘security by obscurity’, and rarely effective.
It’s more likely to instill a false sense of security, which is counter-productive. Some vendors may have bug bounty programs which include detailed reporting procedures, disclosure guidelines, etc. Researchers who choose to work within the bug bounty program are bound by those rules, at least if they wish to receive the bounty payout from the program. Other vendors may not have a bug bounty program but still have ways for researchers to officially report vulnerabilities. If you can’t find a way to contact a given vendor, or aren’t comfortable doing so for any reason, there are also third-party reporting programs such as the Vulnerability Information and Coordination Environment (VINCE), or reporting directly to the Cybersecurity & Infrastructure Security Agency (CISA). I won’t go into detail on these programs here, as that could be an article of its own – perhaps I will tackle that in the future. As an aside, at the time of writing, F5 does not have a bug bounty program, but the F5 SIRT does regularly work with researchers for coordinated disclosure of vulnerabilities. Guidelines for reporting vulnerabilities to F5 are detailed in K4602: Overview of the F5 security vulnerability response policy. We do provide an acknowledgement for researchers in any resulting Security Advisory.

Carrot and Stick

Coordinated disclosure is not all about the researcher; the vendor has responsibilities as well. The vendor is being given an opportunity to address the issue before it is disclosed. They should not see this as a burden or an imposition; the researcher is under no obligation to give them this opportunity. This is the ‘carrot’ being offered by the researcher. The vendor needs to act with some urgency to address the issue in a timely fashion, to deliver a fix to their customers before disclosure. The researcher is not to blame if the vendor is given a reasonable time to prepare a fix and fails to do so. The ’90-day’ guideline should be considered just that, a guideline.
The intention is to ensure that vendors take vulnerability reports seriously and make a real effort to address them. Researchers should use their judgment, and if they feel that the vendor is making a good faith effort to address the issue but needs more time to do so, especially for a complex issue or one that requires fixing multiple products, etc., it is not unreasonable to extend the disclosure deadline. If the end goal is truly improving security and protecting users, and all parties involved are making a good faith effort, reasonable people can agree to adjust deadlines on a case-by-case basis. But there should still be some reasonable deadline; remember that it is an undisclosed vulnerability which could be independently discovered and exploited at any time – if not already – so a little firmness is justified. Even good intentions can use a little encouragement.

That said, the researcher also has a stick for the vendors who don’t bite the carrot – Full Disclosure. For vendors who are unresponsive to vulnerability reports, who respond poorly to such (threats, etc.), or who do not make a good faith effort to fix issues in a timely manner, this is the alternative of last resort. If the researcher has made a good faith effort at Coordinated Disclosure but has been unable to do so because of the vendor, then the best way to get the word out about the issue is Full Disclosure. You can’t coordinate unless both parties are willing to do so in good faith. Vendors who don’t understand that it is in their best interest to work with researchers may eventually learn that lesson after dealing with Full Disclosure a few times. Full Disclosure is rarely, if ever, a good first option, but if Coordinated Disclosure fails, and the choice becomes No Disclosure vs. Full Disclosure, then Full Disclosure is the best remaining option.
In All Things, Balance

Coordinated disclosure seeks to balance the needs of the parties mentioned at the start of this article – security researchers, vendors, customers, consumers, and even random bystanders. Customers cannot make informed decisions about their networks unless vendors inform them, and that’s why we need vulnerability disclosures. You can’t mitigate what you don’t know about. And the reality is no one has the resources to keep all their equipment running the latest software release all the time, so updates get prioritized based on need. Coordinated disclosure gives the vendor time to develop a fix, or at least a mitigation, and make it available to customers before the disclosure. This allows customers to rapidly respond to the disclosure and patch their networks before exploits are widely developed and deployed, keeping more users safe. The coordination is about more than just the timing; vendors and researchers will work together on the messaging of the disclosure, often withholding details in the initial publication to provide time for patching before disclosing information which makes exploitation easier. Crafting a disclosure is always a balancing act between disclosing enough information for customers to understand the scope and severity of the issue and not disclosing information which is more useful to attackers than to defenders.

The Needs of the Many

Coordinated disclosure gets researchers the credit for their work, allows vendors time to develop fixes and/or mitigations, gives customers those resources to apply when the issue is disclosed to them, protects customers by enabling patching faster than other disclosure methods, and ultimately results in a safer, more secure, Internet for all. In the end, that’s what we’re all working for, isn’t it? I encourage vendors and researchers alike to view each other as allies and not adversaries. And to give each other the benefit of the doubt, rather than presume some nefarious intent.
Most vendors and researchers are working toward the same goals of improved security. We’re all in this together. If you’re looking for more information on handling coordinated disclosure, you might check out The CERT Guide to Coordinated Vulnerability Disclosure.

CVSS is Just the Beginning
The Buck Starts Here

When it comes to discussing vulnerabilities, you can’t avoid CVSS, the Common Vulnerability Scoring System. CVSS provides a convenient way to (for the most part) objectively analyze a vulnerability, and summarize it using a standard vector format, which then translates to a score on a 0-10 scale. The cybersecurity industry has further reduced this scale to four categories – Low, Medium, High, and Critical – as follows:

  CVSS score    Severity of vulnerability
  9.0 - 10.0    Critical
  7.0 - 8.9     High
  4.0 - 6.9     Medium
  0.1 - 3.9     Low

Obviously, a score of 0 would mean not vulnerable. CVSS has evolved over the years, and we’re currently on CVSS v3.1. There are easy-to-use calculators, it is found in almost every CVE published, NIST provides their CVSS score for every vulnerability in the National Vulnerability Database (NVD), and you’ll find CVSS scores in security advisories from nearly every vendor – including F5. CVSS is nigh-ubiquitous. However, this ubiquity isn’t always a good thing. For one, familiarity breeds contempt. There has been a bit of a backlash against CVSS in some circles. Personally, I think this is overblown and mostly unwarranted, but I also understand what drives it. CVSS is not perfect – but that’s why it has continued to evolve, and why CVSS 4.0 is currently in Public Preview. I’m not on that working group myself, for lack of time, but one of my F5 SIRT colleagues is. However, you don’t throw out the baby with the bath water, as they say, and CVSS is a useful tool, warts and all. The real problem is that, to use another metaphor, when all you have is a hammer, everything looks like a nail. CVSS is everywhere, so people are using it for everything – even things it is not good at or wasn’t designed to do. Or they’re using it for things it was designed for, but not using it appropriately. Let’s start there.

All your base are belong to us

I’m not going to do a full CVSS tutorial here, maybe another time.
If you’re reading this, I’m presuming you have some familiarity with it and, if not, the documentation is well written. As the User Guide states early on:

“The CVSS Specification Document has been updated to emphasize and clarify the fact that CVSS is designed to measure the severity of a vulnerability and should not be used alone to assess risk. Concerns have been raised that the CVSS Base Score is being used in situations where a comprehensive assessment of risk is more appropriate. The CVSS v3.1 Specification Document now clearly states that the CVSS Base Score represents only the intrinsic characteristics of a vulnerability which are constant over time and across user environments. The CVSS Base Score should be supplemented with a contextual analysis of the environment, and with attributes that may change over time by leveraging CVSS Temporal and Environmental Metrics. More appropriately, a comprehensive risk assessment system should be employed that considers more factors than simply the CVSS Base Score. Such systems typically also consider factors outside the scope of CVSS such as exposure and threat.”

Take the time to read that and internalize it. That is a vitally important point. One of the biggest mistakes we see, over and over, is the use of a CVSS base score as the deciding factor on whether to address a vulnerability. In the F5 SIRT we see this all the time with customers, and we also see it internally when working to address our own vulnerabilities. How many of you have heard it, or done it, yourself? “Oh, that’s only a Medium, we don’t need to deal with that right now.” Or “That’s a Critical – drop everything and patch that immediately!” CVSS scores drive everything from IT response to press coverage, often massively out of proportion to the actual issue. Especially when the score provided by a vendor is the base score, which is basically a theoretical score of the vulnerability of the issue in a vacuum.
A vendor’s 10.0 Critical may be a 0.0 if you don’t have that functionality enabled at all. Or their 5.9 Medium may be a critical issue if it is in a key bit of mission-critical kit exposed to the public Internet, and there is a known exploit in the wild. The problem isn’t CVSS. The problem is that too much weight is being placed on the CVSS base score as the end-all and be-all factor in evaluating a vulnerability, or, more specifically, risk.

Expanding Your Base

An easy place to start is with the rest of CVSS. That’s right, there’s more! Vendors provide the Base Metric Group, or base score. This mix of exploitability and impact metrics is universal and applies to all environments. As I said above, it is an evaluation of the vulnerability in a vacuum. Vendors don’t have visibility into each customer’s network – but you do. At least your own network. If you have visibility into all networks… Setec Astronomy, right? There are two more groups available – the Temporal Metric Group and the Environmental Metric Group. As the name implies, the temporal metrics cover characteristics that may change over time, such as the publication of exploit code or the release of a patch or mitigation. The environmental metrics, meanwhile, allow you to consider factors unique to your environment which may affect the score. As the base vector is nigh-universally available, making evaluation of the two additional vectors part of your vulnerability analysis process is a reasonable place to start. This would produce a CVSS score adjusted for your environment, which you could feed into any existing prioritization systems you may have. It would be an improvement over the generic base scores, but it really is just a start.

Strike That. Reverse It.

As a step beyond CVSS in your vulnerability analysis, you might consider Stakeholder-Specific Vulnerability Categorization (SSVC). Yes, that’s CVSS backwards; never let it be said geeks don’t have a sense of humor.
SSVC was created by Carnegie Mellon University's Software Engineering Institute, in collaboration with the Cybersecurity & Infrastructure Security Agency (CISA). The concept behind SSVC is a decision-tree model which results in three basic end states for the issue. As a very, very brief summary which does not do it justice:

Track – no action required at this time.
Attend – requires attention from higher levels and further analysis.
Act – needs high-level attention and immediate action.

SSVC is, as made very clear by the name, stakeholder specific. It is something each end consumer can use to evaluate the impact of an issue on their network, to help them prioritize their response. Part of your complete vulnerability response plan, as it were. It is a fairly simple system, and CISA provides a PDF guide, a YouTube training video, and a simple online calculator to make it easy. I recommend checking it out. You may find that SSVC helps you better prioritize allocation of your limited IT resources. How do I know they’re limited? Is water wet?

Just My Type

Another factor you may want to consider in your risk assessment is the type of vulnerability. Is this a DoS vector or unauthorized access? Not all CVSS 7.5 vulnerabilities are created equal. And that’s where the Common Weakness Enumeration (CWE) can help. CWE provides a standardized way to categorize a vulnerability to, well, enumerate the type of weakness it is. Many vendors, F5 being one, include CWEs in their security advisories. It isn’t as widespread as CVE, but it is not uncommon. And, in the cases where the vendor doesn’t provide it, NVD has NIST’s evaluation of the appropriate CWE for each entry. The type of vulnerability can be another input into your risk assessment. A DoS is bad, but not as bad as data exfiltration. It goes beyond the score and helps foster a deeper understanding of the issue.
What You Don’t Know Can Hurt You

Which poses a greater risk to your network: a CVSS 8.9 High vulnerability which was found internally by the vendor and has never been seen in the wild, or a CVSS 5.8 Medium which has several known exploit scripts and active exploitation reported? Now, answer honestly to yourself, which one is likely to get more attention from your own vulnerability response policies? What if the first issue was a 9.9 Critical with the same characteristics? Too often, the policies we see are “Bigger Number First”. We’ve seen policies where anything less than a 7.0 is basically ignored. If it isn’t a High or a Critical, it doesn’t matter. That boggles my mind. We’ve seen so many exploits where a Medium, or even a Low, severity vulnerability was used as part of a chain of exploits to completely own a network. Any vulnerability is still a vulnerability. And, to repeat the important point, the CVSS base score is not a comprehensive risk assessment. It should not be used as such. OK, so how can we include what is being exploited in our evaluations? How can we know? Well, we consult the Known Exploited Vulnerabilities Catalog (KEV), of course. OK, this is just one of many resources available, but it is a free service provided by CISA for all to take advantage of. There are many commercial tools out there as well, of course, and I’m not going to get into those. Knowing that a vulnerability is being actively exploited would certainly factor into my risk assessment, I’m just saying.

Minority Reporting

Wouldn’t it be nice if you knew which vulnerabilities were the most likely to be exploited, and therefore the highest priority to patch? That’s the goal of the Exploit Prediction Scoring System (EPSS). EPSS is an effort to develop a data-driven model to predict which vulnerabilities are most likely to be exploited. Its first public release was in early 2021, so it is still a relatively young effort, but it is certainly interesting. EPSS is available for all to use.
You can browse some of the data online, as well as download it. There is an API, User Guide, etc. All the usual good stuff. I’m not quite comfortable recommending making this part of a production risk assessment system just yet, but it wouldn’t hurt to evaluate it and see how it correlates with what you’re doing. I find the idea promising. I don’t see it ever being a perfect prediction engine, but I can see it highlighting higher risk vulnerabilities that might otherwise be overlooked. Time will tell.

Risky Business

The key point is that you need to evaluate the risk of a given vulnerability to your business. No one else can do that for you, and there is no one-size-fits-all solution. CVSS is not bad, it just isn’t meant for that task. My personal view is that anyone writing articles or giving talks about how ‘CVSS is dead’ is just doing it for the clickbait – and probably looking to sell an alternative. But the arguments generally come down to what I said right at the top – vendors provide the CVSS base vector and score, and that was never intended to be some kind of triage trigger on what gets addressed. As I mentioned in my previous articles on CVE, F5’s role as a vendor is to inform our customers. But we’re limited to providing general information on vulnerabilities; we cannot perform risk assessments for customer networks. We do our best to provide the data, but the rest is up to you. Keep using CVSS, but use it as part of a balanced risk assessment system. It should be one input, not the deciding factor. I presented a few options above, but this is hardly an exhaustive list. If you have other suggestions, please leave a comment – DevCentral is a community. It doesn’t have to be a free resource; if there is a commercial tool you can’t live without, by all means recommend it. I tried to remain neutral on that front as an F5 employee, but you don’t have to. Until next time! P.S.
Check out the other articles from the F5 SIRT.

Why We CVE
Background

First, for those who may not already know, I should probably explain what those three letters, CVE, mean. Sure, they stand for “Common Vulnerabilities and Exposures”, but what does that mean? What is the purpose? Borrowed right from the CVE.org website:

“The mission of the CVE® Program is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities. There is one CVE Record for each vulnerability in the catalog. The vulnerabilities are discovered then assigned and published by organizations from around the world that have partnered with the CVE Program. Partners publish CVE Records to communicate consistent descriptions of vulnerabilities. Information technology and cybersecurity professionals use CVE Records to ensure they are discussing the same issue, and to coordinate their efforts to prioritize and address the vulnerabilities.”

To state it simply, the purpose of a CVE record is to provide a unique identifier for a specific issue. I’m sure many of those reading this have dealt with questions such as “Does that new vulnerability announced today affect us?” or “Do we need to worry about that TCP vulnerability?” To which the reaction is likely exasperation and a question along the lines of “Which one? Can you provide any more detail?” We certainly see a fair number of support tickets along these lines ourselves. It gets worse when trying to discuss something that isn’t brand new, but years old. Sure, you might be able to say “Heartbleed”, and many will know what you’re talking about. (And can you believe that was April 2014? That’s forever ago in infosec years.) But what about the thousands of vulnerabilities announced each year that don’t get cute names and make headlines? Remember that Samba vulnerability? You know, the one from around the same time? No, the other one, improper initialization or something? Fun, right?
It is much easier to say CVE-2014-0178 and everyone knows exactly what is being discussed, or at least can immediately look it up. Heartbleed, BTW, was CVE-2014-0160. If you have the CVE ID you can look it up at CVE.org, NVD (National Vulnerability Database), and many other resources. All parties can immediately be on the same page with the same basic understanding of the fundamentals. That is, simply, the power of CVE. It saves immeasurable time and confusion. I’m not going to go into detail on how the CVE program works, that’s not the intent of this article – though perhaps I could do that in the future if there is interest. Leave a comment below. Like and subscribe. Hit the bell icon… Sorry, too much YouTube. All that’s important is to note that F5 is a CNA, or CVE Numbering Authority:

“An organization responsible for the regular assignment of CVE IDs to vulnerabilities, and for creating and publishing information about the Vulnerability in the associated CVE Record. Each CNA has a specific Scope of responsibility for vulnerability identification and publishing.”

Each CNA has a ‘Scope’ statement, which defines what the CNA is responsible for within the CVE program. This is F5’s statement:

“All F5 products and services, commercial and open source, which have not yet reached End of Technical Support (EoTS). All legacy acquisition products and brands including, but not limited to, NGINX, Shape Security, Volterra, and Threat Stack. F5 does not issue CVEs for products which are no longer supported.”

And F5’s disclosure policy is defined by K4602: Overview of the F5 security vulnerability response policy.

F5, CVEs, and Disclosures

While CVEs have been published sporadically for F5 products since at least 2002 (CVE-1999-1550 – yes, a 1999 ID but it was published in 2002 – that’s another topic), things really changed in 2016 after the creation of the F5 SIRT in late 2015.
One of the first things the F5 SIRT did was to officially join the CVE program, making F5 a CNA, and to formalize F5’s approach to first-party security disclosures, including CVE assignment. This was all in place by late 2016 and the F5 SIRT began coordinating F5’s disclosures. I’ve been involved with that since very early on, and have been F5’s primary point of contact with the CVE program and related working groups (I participate in the AWG, QWG, and CNACWG) for a number of years now. Over time I became F5’s ‘vulnerability person’ and have been involved in pretty much every disclosure F5 has made in that time. It’s my full-time role.

The question has been asked, why? Why disclose at all? Why air ‘dirty laundry’? There is, I think, a natural reluctance to announce to the world when you make a mistake. You’d rather just quietly correct it and hope no one noticed, right? I’m sure we’ve all done that at some point in our lives. No harm, no foul. Except that doesn’t work with security. I’ve made the argument about ‘doing the right thing’ for our customers in various ways over the years, but eventually it distilled down to what has become something of a personal catchphrase: Our customers can’t make informed decisions about their networks if we don’t inform them.

Networks have dozens, hundreds, thousands of devices from many different vendors. It is easy to say “Well, if everyone keeps up with the latest versions, they’ll always have the latest fixes.” But that’s trite, dismissive, and wholly unrealistic – in my not-so-humble opinion. Resources are finite and prioritizations must be made. Do I need to install this new update, or can I wait for the next one? If I need to install it, does it have to happen today, or can it wait for the next scheduled maintenance? We cannot, and should not, be making decisions for our customers and their networks.
Customers and networks are unique, and all have different needs, attack surfaces, risk tolerance, regulatory requirements, etc. And so F5’s role is to provide the information necessary for them to conduct their own analysis and make their own decisions about the actions they need to take, or not. We must support our customers, and that means admitting when we make mistakes and have security issues that impact them. This is something I believe in strongly, even passionately, and it is what guides us. Our guiding philosophy since day one, as the F5 SIRT, has been to ‘do the right thing for our customers’, even if that may not show F5 in the best light or may sometimes make us unpopular with others. We’re there to advocate for improved security in our products, and for our customers, above all else. We never want to downplay anything, and our approach has always been to err on the side of caution. If an issue could theoretically be exploited, then it is considered a vulnerability. We don’t want to cause undue alarm, or Fear, Uncertainty, and Doubt (FUD), for anyone, but in security a false negative is worse than a false positive. It is better to take an action to address an issue that may not truly be a problem than to ignore an issue that is.

All vendors have vulnerabilities; that’s inevitable with any complex product and codebase. Some vendors seem to never disclose any vulnerabilities, and I’m highly skeptical when I see that. I don’t care for the secretive approach, personally. Some vendors may disclose issues but choose not to participate in the CVE program. I think that’s unfortunate. While I’m all for disclosure, I hope those vendors come to see the value in the CVE program not only for their customers, but for themselves. It does bring with it some structure and rigor that may not otherwise be present in the processes. Not to mention all of the tooling designed to work with CVEs.
I’ve been heartened to see the rapid growth in the CVE program the past few years, and especially the past year. There has been a steady influx of new CNAs to the program. The original structure of the program was fairly ‘vendor-centric’, but it has been updated to welcome open-source projects, and there has been increasing participation from the FOSS community as well.

The Future

In 2022 F5 introduced a new way of handling disclosures, our Quarterly Security Notification (QSN) process, after an initial trial in late 2021. While not universal, the response has been overwhelmingly positive – you may not be able to please all the people, all the time, but it seems you can please a lot of them. The QSN was primarily designed to make disclosures more predictable and less disruptive to our customers. Consolidating disclosures and decoupling them from individual software releases has allowed us to radically change our processes, introducing additional levels of review and rigor.

At the same time, independent of the QSN process, the F5 SIRT had also begun work on standardized language templates for our Security Advisories. As you might expect, there are teams of people who work on issues – engineers who work on the technical evaluation, content creators, technical writers, etc. With different people working on different issues, it was only natural that they’d find different ways to say the same thing. We might disclose similar DoS issues at the same time, only to have the language in each Security Advisory (SA) be different. This could create confusion, especially as sometimes people can read a little too much into things. “These are different, there must be some significance in that.” No, they’re different simply because different people wrote them. Still, confusion or uncertainty is not what you want with security documentation. We worked to create standardized templates so that similar issues will have similar language, no matter who works on the issue.
I believe that these efforts have resulted in a higher quality of Security Advisory, and the feedback we’ve received seems to support that. I hope you agree. These efforts are ongoing. The templates are not carved in stone but are living documents. We listen to feedback and update the templates as needed. When we encounter an issue that doesn’t fit an existing template, a new template is created. Over time we’ve introduced new features to the advisories, such as the Common Vulnerability Scoring System (CVSS) and, more recently, Common Weakness Enumeration (CWE). We continue to evaluate feedback, requests, and industry trends for incorporation into future disclosures.

We’re currently working on internal tooling to automate some of our processes, which should improve consistency and repeatability – while allowing us to expand the work we do. Frankly, I only scale so far, and the cloning vats just didn’t work out. Having more tooling will allow us to do more with our resources. Part of the plan is that the tooling will allow us to provide disclosures in multiple formats – but I don’t want to say anything more about that just yet, as much work remains to be done.

So why do we CVE? For you – our customers, security professionals, and the industry in general. We assign CVEs and disclose issues not only for the benefit of our customers, but to lead by example. The more vendors who embrace openness and disclose CVEs, the more the practice is normalized, and the better the security community is for it. There isn’t really any joy in being the bearer of bad news, other than the hope that it creates a better future.

Postscript

If you’re still reading this, thank you for sticking with me. Vulnerability management and disclosure is certainly not the sexy side of information security, but it is a critical component. If there is interest, I’d be happy to explore different aspects further, so let us know.
Perhaps I can peel back the curtain a bit more in another article and provide a look at the vulnerability management processes we use internally. How the sausage, or security advisory, is made, as it were. Especially if it might be useful for others looking to build their own practice. But I like my job so I’ll have to get permission before I start disclosing internal information. We welcome all feedback, positive or negative, as it helps us do a better job for you. Thank you.
CVE: Who, What, Where, and When

Background

A few months ago I wrote “Why We CVE”, wherein I covered the general intention of the CVE program, and more specifically the reasons why F5 publishes CVEs. After publishing that article the rusty, creaky mental wheels started turning, remembering the old “Who, What, Where, When, Why, and How”. I thought I might answer those as well. I’ll be leaving ‘How’ as a possible future article. I have been giving that some thought, and what it might entail – maybe how F5 runs our processes internally, or perhaps a general look at how the CVE system works with CVE Services 2.1, CVE JSON 5.0, CNAs, Root CNAs, etc. But that’s a story for another time. Today I intend to attempt to answer the Who, What, Where, and When questions.

Who

I touched on this briefly last time. As I said then, F5 officially joined the CVE program as a CNA in late 2016. Joining the program doesn’t magically publish CVEs though. Someone, or some group, must ultimately do the work that goes into being a CNA and publishing CVEs. While my primary responsibility as an F5 SIRT Principal Security Engineer is coordinating our vulnerability efforts, and acting as F5’s primary external contact with CVE.org, VINCE, etc., I’m just one member of the team – and this very much is a team effort.

Within F5 the effort to diagnose, document, track, and publish security issues is the responsibility of the F5 SIRT. Ownership of different security issues is distributed amongst the core team, and each Security Engineer shepherds their issues through our processes from initial discovery all the way to readying the Security Advisory for publication. Everyone who has published under the ‘F5 SIRT’ tag here on DevCentral plays a role, and I’m glad to be part of such a great team. Of course, other teams within F5 are also involved – Product Engineering creates the fixes, Digital Services works with us to author the Security Advisories, many groups review those before publication, etc.
– but ownership and responsibility resides with the F5 SIRT.

What

There are a couple of ways to interpret ‘what’ – so first, what do we CVE? As mentioned last time, each CNA has a ‘Scope’ statement, which defines what the CNA is responsible for within the CVE program. This is F5’s statement:

All F5 products and services, commercial and open source, which have not yet reached End of Technical Support (EoTS). All legacy acquisition products and brands including, but not limited to, NGINX, Shape Security, Volterra, and Threat Stack. F5 does not issue CVEs for products which are no longer supported.

So that establishes the overall scope for which products and services we’re responsible for when it comes to issuing CVEs. That’s a broad area, but this is further refined by K4602: Overview of the F5 security vulnerability response policy:

F5 assigns Common Vulnerabilities and Exposures (CVEs) and publishes security advisories for all vulnerability categories, from Low to Critical. Additionally, F5 publishes security advisories for security exposures, which do not affect the confidentiality, integrity, and availability (CIA) of the F5 product, but may impact the CIA of one of the following:

- Services or network traffic processed by the F5 product
- Confidentiality of data cached on the F5 product

Security exposures are generally dependent on configuration and use of the F5 device. Therefore, F5 cannot produce an accurate CVSS score or severity for security exposures. F5 recommends customers treat security exposures as a high priority and evaluate the specific severity of these issues in their own environment.

An important item to call out from that policy is that F5 publishes Security Advisories for both CVEs and what we call Security Exposures. The way we distinguish these is via the CIA Triad. For those not familiar with infosec terminology, the ‘CIA’ refers to Confidentiality, Integrity, and Availability.
The very high-level overview is that a Confidentiality impact means information is seen by those who should not be able to see it, an Integrity impact means someone is able to change information they should not be able to change, and an Availability impact means a degradation, or complete loss, of a service to legitimate users. Yes, that’s extremely simplified, feel free to leave a comment – but it will suffice for our needs here.

How does F5 distinguish between a CVE and a Security Exposure? We look at where the CIA impact resides. If the CIA of the F5 product or service itself is impacted, then we consider that a vulnerability in need of a CVE. Leaking private keys, modifying the configuration, denial of service attacks, etc. We know the full scope of the issue and can provide an accurate CVSS score and vector, available mitigations, etc. These are relatively straightforward.

We use Security Exposures when a defect in the F5 product or service causes a CIA impact that is not against the F5 product or service itself, but rather somewhere else in the network. An example of this would be a defect in the WAF engine that causes traffic that is rightfully expected to be blocked to be allowed through. The backend systems are now being exposed to malicious traffic. The CIA impact is against those backend systems, but the flaw allowing this is in F5’s product. As every network is different, there are many unknowns, and the CVSS score and vector would vary.

We publish these issues because, as I like to say, “Our customers can’t make informed decisions about their networks if we don’t inform them.” We are advocates for the customer, and making good security decisions requires knowing all relevant information. Part of our role, and duty, is providing that information.

The other way to interpret ‘what’ is – what do we say when we CVE?
This is covered in more detail by our Vulnerability Disclosure Policy within K4602:

F5 is committed to responsibly disclosing information that contains sufficient detail about the vulnerability in question, such as the CVSS score and vector. This information is intended to help customers understand the impact, severity, potential mitigations, and software fixes related to the vulnerability when such information is available. F5 does not provide the following information:

- Example exploit code or reproduction information
- The number of customers or sites affected
- Any information F5 regards as confidential

We strive to provide enough information to be useful to our customers, and the broader infosec community, while not providing information that might aid those with malicious intent. That is a fine line to walk, and we tend to err on the side of caution, but we do try to provide the information necessary to evaluate the issue and determine if a given system is impacted and needs to be remediated.

We have refined our approach to Security Advisories over the years and will continue to do so. Over time we introduced CVSS and CWE. We’ve updated the document structure, standardized language, and made many other changes based on the feedback received. We are always open to feedback, and there is a field for that at the bottom of every Security Advisory. Yes, that feedback is read.

Where

This may be the simplest question to answer; you can find all F5 Security Advisories on MyF5. More specifically, you can see all new Security Advisories. Note that includes F5’s first-party issues as well as any third-party CVEs we’ve responded to, and it covers both CVEs and Security Exposures. If you wish to be notified of new security announcements, we have two mailing lists – F5 Security Announcements and NGINX Security Announcements. We do strongly encourage everyone to subscribe to the appropriate list(s) to receive security notifications.
We normally have an RSS feed option as well, documented at K9957: Creating a custom RSS feed to view new and updated documents – but that is currently unavailable (see K9957: RSS feed service interruption), as the RSS functionality was lost in a recent platform change for MyF5. RSS functionality will be returning to MyF5, but we don’t have a date for restoration that I can share. I recommend checking K000092546: What's new and planned for MyF5 for updates.

Of course, as participants in the CVE program, all CVEs published by F5 can also be found through CVE.org, as well as downstream providers such as NVD, etc. You can find our CVEs wherever fine CVEs are distributed. Do note that this will only be CVEs; Security Exposures are only distributed directly by F5.

When

Actually, this is the simplest question to answer – K12201527: Overview of Quarterly Security Notifications. Done.

OK, more seriously, we do four scheduled disclosures a year – the aptly named Quarterly Security Notifications (QSN). Each QSN ‘rolls up’ all fixes released since the last QSN. At the time I’m writing this, our next QSN is May 3, 2023. With each QSN the next QSN date is also published, and we’re currently on a February, May, August, October cycle. (October? Wouldn’t the cadence be November? Yes, but based on customer feedback we do the fourth-quarter QSN a little earlier, primarily to avoid holiday IT change freezes. And now you know.)

While we always try to announce security issues via a QSN, that is not always possible. Typically, this is due to an issue being reported to F5 by an external researcher with a disclosure deadline we must meet, and we’re unable to hold disclosure until the next QSN. When this occurs we perform an Out-Of-Band Security Notification (OOBSN), which is akin to a ‘surprise QSN’. We follow the same processes and procedures, but the date isn’t scheduled or announced in advance.
Other reasons this might happen would include an issue being disclosed publicly in some way, such as a zero day from a 3rd party, F5 discovering active exploitation against our customers, discovery in an open-source repository where it would be visible (mainly this involves NGINX products), etc. The short version is that various external forces may periodically require F5 to disclose issues outside of the scheduled QSN process.

Wrapping Up

Hopefully I’ve sufficiently addressed Who, What, Where, and When, but feel free to leave a comment if you have any lingering questions. After two CVE-related articles in a row, we’ll see what I get into next time; I think I’ll save the How for a later date and tackle something else for a change of pace. If you enjoyed this article or found it useful, or even if you didn’t, I encourage you to check out the articles from my F5 SIRT colleagues, including the This Week In Security newsletter. Thank you for your time and attention. The F5 SIRT will continue advocating for better security. Stay safe out there.

90 Seconds of Security: What is CVE and CVSS?
Security researchers at F5 monitor web traffic 24/7 at locations around the world, and the F5 Security Incident Response Team (SIRT) helps customers tackle incident response in real time. And when they find a new vulnerability, it’ll often get a Common Vulnerabilities & Exposures number like CVE-2019-1105 for the ‘Outlook for Android Spoofing Vulnerability’. Created in 1999, the CVE provides definitions for all publicly known cybersecurity vulnerabilities and exposures. So, gimme 90 seconds to understand a little bit about Common Vulnerabilities & Exposures.

Now that we’ve looked at how vulnerabilities become CVEs, let’s explain how a CVE gets scored. The Common Vulnerability Scoring System, or CVSS, was introduced in 2005 as an open framework for communicating the characteristics and severity of software vulnerabilities. It consists of three metric groups: Base, Temporal, and Environmental. Once again, let’s start the clock to understand a little bit about the Common Vulnerability Scoring System.

Hope that was helpful, and you can catch the entire 90 Seconds Series on F5's YouTube Channel.

ps

Managing Your Vulnerabilities
I recently recovered from ACDF surgery, where they remove a herniated or degenerative disc in the neck and fuse the cervical bones above and below the disc. My body had a huge vulnerability where one good shove or fender bender could have ruptured my spinal cord. I had some items removed and added some hardware, and now my risk of injury is greatly reduced.

Breaches are occurring at a record pace, botnets are consuming IoT devices and bandwidth, and the cloud is becoming a de-facto standard for many companies. Vulnerabilities are often found at the intersection of all three of these trends, so vulnerability and risk management has never been a greater or more critical challenge for organizations.

Vulnerabilities come in all shapes and sizes, but one thing that stays constant – at least in computer security – is that a vulnerability is a weakness which allows an attacker to reduce a system’s information assurance. It is the intersection where a system is susceptible to a flaw; whether an attacker can access that flaw; and whether an attacker can exploit that flaw within the system. For F5, it means an issue that results in a confidentiality, integrity, or availability impact of an F5 device by an unauthorized source. Something that affects critical F5 system functions – like passing traffic.

You may be familiar with CVE, or Common Vulnerabilities and Exposures. This is a dictionary of publicly known information security vulnerabilities and exposures. Each vulnerability or exposure gets a name, or CVE ID, which allows organizations to reference it in a public way. It enables data exchange between security products and provides a baseline index point for evaluating coverage of tools and services. MITRE is the organization that assigns CVEs. There are also CVE Numbering Authorities (CNAs). Instead of sending a vulnerability to MITRE for numbering, a CNA gets a block of numbers and can assign IDs as needed. The total number of CVE IDs is around 79,398.
Most organizations are concerned about CVEs and the potential risk if one is present in their environment. This concern is obviously growing with the daily barrage of hacks, breaches and information leaks. Organizations can uncover vulnerabilities from scanner results; from media coverage like Heartbleed, Shellshock, Poodle and others; or from the various security-related standards, compliance or internal processes. The key is that scanning results need to be verified for false positives, hyped vulnerabilities might not be as critical as the headline claims, and you need to know what the CVE might mean for your compliance or internal management.

For F5, we keep a close eye on any 3rd-party code that might be used in our systems. OpenSSL, BIND or MySQL are examples. For any software, there may be bugs or researchers’ reports or even non-CVE vulnerabilities that could compromise the system. Organizations need to understand the applicability, impact and mitigation available. Simply put: Am I affected? How bad is it? What can I do?

With Applicability, research typically determines if an organization should care about the vulnerability. Things like, is the version of software noted and are you running it? Are you running the vulnerable function within the software? Sometimes older or non-supported versions might be vulnerable, but you’ve upgraded to the latest supported code or you are simply not using the vulnerable function at all. The context is also important. Is it being used in default, standard or recommended mode? For instance, many people don’t change the default password of their Wi-Fi device and certain functionality is vulnerable. It gets compromised and becomes part of a botnet. But if the password was changed, as recommended, and it becomes compromised some other way, then that is a different situation to address.

For Impact, there are a couple of ways to decide how bad it is. First, you can look at the severity of the vulnerability – is it low, medium, high or critical.
You can also see if there is a Common Vulnerability Scoring System (CVSS) score tied to the vulnerability. The CVSS score can give you a gauge of the overall risk. To go a bit deeper, you can look at the CVSS vector.

There are three sections to the CVSS. There are the constant base metrics covering the exploitability of the issue, the impact that it may have and the scope that it is in. There are the temporal metrics, which may change over time, giving the color commentary of the issue. And there are the environmental metrics, which look at the specific, individual environment and how that is impacted. Areas explored here include things like the attack vector and complexity; whether elevated privileges or any user interaction are required; along with the scope and how it affects the confidentiality, integrity and availability of the system.

One can use the CVSS calculator to help determine a vector score. With a few selections you can get a base, temporal and environmental score for an overall view of the severity. With this, you can get an understanding as to how to handle the vulnerability. Every organization has different levels of risk based on their unique situation. The vulnerability base score may have a critical listing, yet based on your environmental score, the severity and risk may be nil.

Lastly, the Mitigation taken is not an exact science and truly depends on the issue and the organization’s situation. Mitigation is not necessarily prevention. For example, compensating controls, such as restricting root-level access, might mean that a vulnerability simply isn’t exploitable without a privileged account. Vulnerability management and information security is about managing risk. Risk analysis, risk management, risk mitigation and what that risk means to the business. Patching a vulnerability can introduce other risks, so the old refrain of “patch your $#!+” is not the panacea we’re often led to believe.
Risk is not limited to the severity of the vulnerability alone, but also to the required vector for exploiting that vulnerability where it exists within a specific organization’s infrastructure. It’s important to understand your risk and focus on the important pieces.

ps