Why We CVE

Background

First, for those who may not already know, I should probably explain what those three letters, CVE, mean.  Sure, they stand for “Common Vulnerabilities and Exposures”, but what does that mean?  What is the purpose?

Borrowed right from the CVE.org website:

The mission of the CVE® Program is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities. There is one CVE Record for each vulnerability in the catalog. The vulnerabilities are discovered then assigned and published by organizations from around the world that have partnered with the CVE Program. Partners publish CVE Records to communicate consistent descriptions of vulnerabilities. Information technology and cybersecurity professionals use CVE Records to ensure they are discussing the same issue, and to coordinate their efforts to prioritize and address the vulnerabilities.

To state it simply, the purpose of a CVE record is to provide a unique identifier for a specific issue.  I’m sure many of those reading this have dealt with questions such as “Does that new vulnerability announced today affect us?” or “Do we need to worry about that TCP vulnerability?”  To which the reaction is likely exasperation and a question along the lines of “Which one?  Can you provide any more detail?”  We certainly see a fair number of support tickets along these lines ourselves.

It gets worse when trying to discuss something that isn’t brand new, but years old.  Sure, you might be able to say “Heartbleed”, and many will know what you’re talking about.  (And can you believe that was April 2014?  That’s forever ago in infosec years.)  But what about the thousands of vulnerabilities announced each year that don’t get cute names and make headlines?  Remember that Samba vulnerability?  You know, the one from around the same time?  No, the other one, improper initialization or something?

Fun, right?  It is much easier to say CVE-2014-0178 and everyone knows exactly what is being discussed, or at least can immediately look it up.  Heartbleed, BTW, was CVE-2014-0160.  If you have the CVE ID you can look it up at CVE.org, NVD (National Vulnerability Database), and many other resources.  All parties can immediately be on the same page with the same basic understanding of the fundamentals.  That is, simply, the power of CVE.  It saves immeasurable time and confusion.
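
If you want to see what that looks like in practice, here is a minimal sketch of pulling a record by its CVE ID.  It assumes the public NVD 2.0 REST API and its cveId parameter (check NVD’s documentation for current field names and rate limits); the point is simply that one identifier gets everyone to the same record:

```python
# Minimal sketch: look up a CVE record by ID via the NVD 2.0 REST API.
# Assumes the public endpoint and JSON field names shown below; consult the
# NVD API documentation before relying on them (rate limits apply without a key).
import json
import urllib.request

CVE_ID = "CVE-2014-0160"  # Heartbleed
URL = f"https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={CVE_ID}"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

for item in data.get("vulnerabilities", []):
    cve = item["cve"]
    # Print the English-language description so everyone is on the same page.
    desc = next(d["value"] for d in cve["descriptions"] if d["lang"] == "en")
    print(cve["id"], "-", desc)
```

Whoever runs it gets the same record and the same description – which is exactly the point.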

I’m not going to go into detail on how the CVE program works; that’s not the intent of this article – though perhaps I could do that in the future if there is interest.  Leave a comment below.  Like and subscribe.  Hit the bell icon… Sorry, too much YouTube.  What’s important to note here is that F5 is a CNA, or CVE Numbering Authority:

An organization responsible for the regular assignment of CVE IDs to vulnerabilities, and for creating and publishing information about the Vulnerability in the associated CVE Record. Each CNA has a specific Scope of responsibility for vulnerability identification and publishing.

Each CNA has a ‘Scope’ statement, which defines what the CNA is responsible for within the CVE program.  This is F5’s statement:

All F5 products and services, commercial and open source, which have not yet reached End of Technical Support (EoTS). All legacy acquisition products and brands including, but not limited to, NGINX, Shape Security, Volterra, and Threat Stack. F5 does not issue CVEs for products which are no longer supported.

And F5’s disclosure policy is defined by K4602: Overview of the F5 security vulnerability response policy.

F5, CVEs, and Disclosures

While CVEs have been published sporadically for F5 products since at least 2002 (CVE-1999-1550 – yes, a 1999 ID but it was published in 2002 – that’s another topic), things really changed in 2016 after the creation of the F5 SIRT in late 2015.  One of the first things the F5 SIRT did was to officially join the CVE program, making F5 a CNA, and to formalize F5’s approach to first-party security disclosures, including CVE assignment. 

This was all in place by late 2016 and the F5 SIRT began coordinating F5’s disclosures.  I’ve been involved with that since very early on, and have been F5’s primary point of contact with the CVE program and related working groups (I participate in the AWG, QWG, and CNACWG) for a number of years now.  Over time I became F5’s ‘vulnerability person’, involved in pretty much every disclosure F5 has made.  It’s my full-time role.

The question has been asked, why?  Why disclose at all?  Why air ‘dirty laundry’?  There is, I think, a natural reluctance to announce to the world when you make a mistake.  You’d rather just quietly correct it and hope no one noticed, right?  I’m sure we’ve all done that at some point in our lives.  No harm, no foul.

Except that doesn’t work with security.  I’ve made the argument about ‘doing the right thing’ for our customers in various ways over the years, but eventually it distilled down to what has become something of a personal catchphrase:

Our customers can’t make informed decisions about their networks if we don’t inform them.

Networks have dozens, hundreds, thousands of devices from many different vendors.  It is easy to say “Well, if everyone keeps up with the latest versions, they’ll always have the latest fixes.”  But that’s trite, dismissive, and wholly unrealistic – in my not-so-humble opinion.  Resources are finite and priorities must be set.  Do I need to install this new update, or can I wait for the next one?  If I need to install it, does it have to happen today, or can it wait for the next scheduled maintenance?

We cannot, and should not, be making decisions for our customers and their networks.  Customers and networks are unique, and all have different needs, attack surfaces, risk tolerance, regulatory requirements, etc.  And so F5’s role is to provide the information necessary for them to conduct their own analysis and make their own decisions about the actions they need to take, or not.  We must support our customers, and that means admitting when we make mistakes and have security issues that impact them.  This is something I believe in strongly, even passionately, and it is what guides us.

Our guiding philosophy since day one, as the F5 SIRT, has been to ‘do the right thing for our customers’, even if that may not show F5 in the best light or may sometimes make us unpopular with others.  We’re there to advocate for improved security in our products, and for our customers, above all else.  We never want to downplay anything, and our approach has always been to err on the side of caution. 

If an issue could theoretically be exploited, then it is considered a vulnerability.  We don’t want to cause undue alarm, or Fear, Uncertainty, and Doubt (FUD), for anyone, but in security a false negative is worse than a false positive.  It is better to take an action to address an issue that may not truly be a problem than to ignore an issue that is.

All vendors have vulnerabilities; that’s inevitable with any complex product and codebase.  Some vendors seem to never disclose any vulnerabilities, and I’m highly skeptical when I see that.  I don’t care for the secretive approach, personally.  Some vendors may disclose issues but choose not to participate in the CVE program.  I think that’s unfortunate.  While I’m all for disclosure, I hope those vendors come to see the value in the CVE program not only for their customers, but for themselves.  It brings with it structure and rigor that may not otherwise be present in their processes.  Not to mention all of the tooling designed to work with CVEs.

I’ve been heartened to see the rapid growth in the CVE program the past few years, and especially the past year.  There has been a steady influx of new CNAs to the program.  The original structure of the program was fairly ‘vendor-centric’, but it has been updated to welcome open-source projects and there has been increasing participation from the FOSS community as well.

The Future

In 2022 F5 introduced a new way of handling disclosures, our Quarterly Security Notification (QSN) process, after an initial trial in late 2021.  While not universal, the response has been overwhelmingly positive – you may not be able to please all the people, all the time, but it seems you can please a lot of them.  The QSN was primarily designed to make disclosures more predictable and less disruptive to our customers.  Consolidating disclosures and decoupling them from individual software releases has allowed us to radically change our processes, introducing additional levels of review and rigor. 

At the same time, independent of the QSN process, the F5 SIRT had also begun work on standardized language templates for our Security Advisories.  As you might expect, there are teams of people who work on issues – engineers who work on the technical evaluation, content creators, technical writers, etc.  With different people working on different issues, it was only natural that they’d find different ways to say the same thing.

We might disclose similar DoS issues at the same time, only to have the language in each Security Advisory (SA) be different.  This could create confusion, especially as people can sometimes read a little too much into things.  “These are different, there must be some significance in that.”  No, they’re different simply because different people wrote them.  Still, confusion or uncertainty is not what you want in security documentation.  We worked to create standardized templates so that similar issues will have similar language, no matter who works on the issue.
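
To illustrate the idea – and to be clear, the wording and fields below are invented for this example, not F5’s actual template text – a standardized template is essentially a fill-in-the-blank statement that every writer completes the same way:

```python
# Hypothetical illustration of a standardized advisory template.
# The sentence structure and field names are invented for this example;
# they are not F5's actual Security Advisory template text.
from string import Template

DOS_TEMPLATE = Template(
    "When $condition, undisclosed requests can cause $process to terminate, "
    "resulting in a denial-of-service (DoS) on the $plane."
)

# Two similar DoS issues, written by different people, come out with the
# same structure and phrasing -- only the specifics differ.
print(DOS_TEMPLATE.substitute(
    condition="a virtual server is configured with an HTTP profile",
    process="the affected traffic-handling process",
    plane="data plane",
))
print(DOS_TEMPLATE.substitute(
    condition="the DNS resolver is enabled",
    process="the affected name-resolution process",
    plane="control plane",
))
```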

I believe that these efforts have resulted in a higher quality of Security Advisory, and the feedback we’ve received seems to support that.  I hope you agree.

These efforts are ongoing.  The templates are not carved in stone but are living documents.  We listen to feedback and update the templates as needed.  When we encounter an issue that doesn’t fit an existing template a new template is created.  Over time we’ve introduced new features to the advisories, such as the Common Vulnerability Scoring System (CVSS) and, more recently, Common Weakness Enumeration (CWE).  We continue to evaluate feedback and requests, and industry trends, for incorporation into future disclosures.
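
As a quick aside for anyone less familiar with CVSS: the score you see in an advisory is derived from a vector string that records each base metric.  Here is a minimal sketch of decoding a v3.1 base vector – the metric and value abbreviations follow the public CVSS v3.1 specification, and it deliberately does not compute the numeric score:

```python
# Minimal sketch: decode a CVSS v3.1 base vector string into readable metrics.
# Abbreviations follow the public CVSS v3.1 specification; this only shows
# what the vector encodes and does not compute the numeric score.
METRICS = {
    "AV": ("Attack Vector",       {"N": "Network", "A": "Adjacent", "L": "Local", "P": "Physical"}),
    "AC": ("Attack Complexity",   {"L": "Low", "H": "High"}),
    "PR": ("Privileges Required", {"N": "None", "L": "Low", "H": "High"}),
    "UI": ("User Interaction",    {"N": "None", "R": "Required"}),
    "S":  ("Scope",               {"U": "Unchanged", "C": "Changed"}),
    "C":  ("Confidentiality",     {"N": "None", "L": "Low", "H": "High"}),
    "I":  ("Integrity",           {"N": "None", "L": "Low", "H": "High"}),
    "A":  ("Availability",        {"N": "None", "L": "Low", "H": "High"}),
}

def decode(vector: str) -> dict:
    # e.g. "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    pairs = (part.split(":") for part in vector.split("/")[1:])
    return {METRICS[k][0]: METRICS[k][1][v] for k, v in pairs}

for name, value in decode("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H").items():
    print(f"{name}: {value}")
```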

We’re currently working on internal tooling to automate some of our processes, which should improve consistency and repeatability – while allowing us to expand the work we do.  Frankly, I only scale so far, and the cloning vats just didn’t work out.  Having more tooling will allow us to do more with our resources.  Part of the plan is that the tooling will allow us to provide disclosures in multiple formats – but I don’t want to say anything more about that just yet as much work remains to be done.

So why do we CVE?  For you – our customers, security professionals, and the industry in general.  We assign CVEs and disclose issues not only for the benefit of our customers, but to lead by example.  The more vendors who embrace openness and disclose CVEs, the more the practice is normalized, and the better the security community is for it.  There isn’t really any joy in being the bearer of bad news, other than the hope that it creates a better future.

Postscript

If you’re still reading this, thank you for sticking with me.  Vulnerability management and disclosure is certainly not the sexy side of information security, but it is a critical component.  If there is interest, I’d be happy to explore different aspects further, so let us know.

Perhaps I can peel back the curtain a bit more in another article and provide a look at the vulnerability management processes we use internally.  How the sausage, or security advisory, is made, as it were.  Especially if it might be useful for others looking to build their own practice.  But I like my job so I’ll have to get permission before I start disclosing internal information.

We welcome all feedback, positive or negative, as it helps us do a better job for you.

Thank you.
