Why We CVE
A look at what CVE is and why F5 is a member of the CVE program and discloses vulnerabilities.

Background

First, for those who may not already know, I should probably explain what those three letters, CVE, mean. Sure, they stand for “Common Vulnerabilities and Exposures”, but what does that mean? What is the purpose? Borrowed right from the CVE.org website:

The mission of the CVE® Program is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities. There is one CVE Record for each vulnerability in the catalog. The vulnerabilities are discovered then assigned and published by organizations from around the world that have partnered with the CVE Program. Partners publish CVE Records to communicate consistent descriptions of vulnerabilities. Information technology and cybersecurity professionals use CVE Records to ensure they are discussing the same issue, and to coordinate their efforts to prioritize and address the vulnerabilities.

To state it simply, the purpose of a CVE record is to provide a unique identifier for a specific issue. I’m sure many of those reading this have dealt with questions such as “Does that new vulnerability announced today affect us?” or “Do we need to worry about that TCP vulnerability?” To which the reaction is likely exasperation and a question along the lines of “Which one? Can you provide any more detail?” We certainly see a fair number of support tickets along these lines ourselves.

It gets worse when trying to discuss something that isn’t brand new, but years old. Sure, you might be able to say “Heartbleed”, and many will know what you’re talking about. (And can you believe that was April 2014? That’s forever ago in infosec years.) But what about the thousands of vulnerabilities announced each year that don’t get cute names and make headlines? Remember that Samba vulnerability? You know, the one from around the same time? No, the other one, improper initialization or something? Fun, right?
It is much easier to say CVE-2014-0178 and everyone knows exactly what is being discussed, or at least can immediately look it up. Heartbleed, BTW, was CVE-2014-0160. If you have the CVE ID you can look it up at CVE.org, NVD (National Vulnerability Database), and many other resources. All parties can immediately be on the same page with the same basic understanding of the fundamentals. That is, simply, the power of CVE. It saves immeasurable time and confusion.

I’m not going to go into detail on how the CVE program works, that’s not the intent of this article – though perhaps I could do that in the future if there is interest. Leave a comment below. Like and subscribe. Hit the bell icon… Sorry, too much YouTube. All that’s important is to note that F5 is a CNA, or CVE Numbering Authority:

An organization responsible for the regular assignment of CVE IDs to vulnerabilities, and for creating and publishing information about the Vulnerability in the associated CVE Record. Each CNA has a specific Scope of responsibility for vulnerability identification and publishing.

Each CNA has a ‘Scope’ statement, which defines what the CNA is responsible for within the CVE program. This is F5’s statement:

All F5 products and services, commercial and open source, which have not yet reached End of Technical Support (EoTS). All legacy acquisition products and brands including, but not limited to, NGINX, Shape Security, Volterra, and Threat Stack. F5 does not issue CVEs for products which are no longer supported.

And F5’s disclosure policy is defined by K4602: Overview of the F5 security vulnerability response policy.

F5, CVEs, and Disclosures

While CVEs have been published sporadically for F5 products since at least 2002 (CVE-1999-1550 – yes, a 1999 ID but it was published in 2002 – that’s another topic), things really changed in 2016 after the creation of the F5 SIRT in late 2015.
One of the first things the F5 SIRT did was to officially join the CVE program, making F5 a CNA, and to formalize F5’s approach to first-party security disclosures, including CVE assignment. This was all in place by late 2016 and the F5 SIRT began coordinating F5’s disclosures. I’ve been involved with that since very early on, and have been F5’s primary point of contact with the CVE program and related working groups (I participate in the AWG, QWG, and CNACWG) for a number of years now. Over time I became F5’s ‘vulnerability person’ and have been involved in pretty much every disclosure F5 has made. It’s my full-time role.

The question has been asked, why? Why disclose at all? Why air ‘dirty laundry’? There is, I think, a natural reluctance to announce to the world when you make a mistake. You’d rather just quietly correct it and hope no one noticed, right? I’m sure we’ve all done that at some point in our lives. No harm, no foul. Except that doesn’t work with security. I’ve made the argument about ‘doing the right thing’ for our customers in various ways over the years, but eventually it distilled down to what has become something of a personal catchphrase: Our customers can’t make informed decisions about their networks if we don’t inform them.

Networks have dozens, hundreds, thousands of devices from many different vendors. It is easy to say “Well, if everyone keeps up with the latest versions, they’ll always have the latest fixes.” But that’s trite, dismissive, and wholly unrealistic – in my not-so-humble opinion. Resources are finite and prioritizations must be made. Do I need to install this new update, or can I wait for the next one? If I need to install it, does it have to happen today, or can it wait for the next scheduled maintenance? We cannot, and should not, be making decisions for our customers and their networks.
Customers and networks are unique, and all have different needs, attack surfaces, risk tolerance, regulatory requirements, etc. And so F5’s role is to provide the information necessary for them to conduct their own analysis and make their own decisions about the actions they need to take, or not. We must support our customers, and that means admitting when we make mistakes and have security issues that impact them.

This is something I believe in strongly, even passionately, and it is what guides us. Our guiding philosophy since day one, as the F5 SIRT, has been to ‘do the right thing for our customers’, even if that may not show F5 in the best light or may sometimes make us unpopular with others. We’re there to advocate for improved security in our products, and for our customers, above all else.

We never want to downplay anything, and our approach has always been to err on the side of caution. If an issue could theoretically be exploited, then it is considered a vulnerability. We don’t want to cause undue alarm, or Fear, Uncertainty, and Doubt (FUD), for anyone, but in security a false negative is worse than a false positive. It is better to take an action to address an issue that may not truly be a problem than to ignore an issue that is.

All vendors have vulnerabilities, that’s inevitable with any complex product and codebase. Some vendors seem to never disclose any vulnerabilities, and I’m highly skeptical when I see that. I don’t care for the secretive approach, personally. Some vendors may disclose issues but choose not to participate in the CVE program. I think that’s unfortunate. While I’m all for disclosure, I hope those vendors come to see the value in the CVE program not only for their customers, but for themselves. It does bring with it some structure and rigor that may not otherwise be present in the processes. Not to mention all of the tooling designed to work with CVEs.
I’ve been heartened to see the rapid growth in the CVE program the past few years, and especially the past year. There has been a steady influx of new CNAs to the program. The original structure of the program was fairly ‘vendor-centric’, but it has been updated to welcome open-source projects and there has been increasing participation from the FOSS community as well.

The Future

In 2022 F5 introduced a new way of handling disclosures, our Quarterly Security Notification (QSN) process, after an initial trial in late 2021. While not universal, the response has been overwhelmingly positive – you may not be able to please all the people, all the time, but it seems you can please a lot of them. The QSN was primarily designed to make disclosures more predictable and less disruptive to our customers. Consolidating disclosures and decoupling them from individual software releases has allowed us to radically change our processes, introducing additional levels of review and rigor.

At the same time, independent of the QSN process, the F5 SIRT had also begun work on standardized language templates for our Security Advisories. As you might expect, there are teams of people who work on issues – engineers who work on the technical evaluation, content creators, technical writers, etc. With different people working on different issues, it was only natural that they’d find different ways to say the same thing. We might disclose similar DoS issues at the same time, only to have the language in each Security Advisory (SA) be different. This could create confusion, especially as sometimes people can read a little too much into things. “These are different, there must be some significance in that.” No, they’re different because different people wrote them is all. Still, confusion or uncertainty is not what you want with security documentation. We worked to create standardized templates so that similar issues will have similar language, no matter who works on the issue.
I believe that these efforts have resulted in a higher quality of Security Advisory, and the feedback we’ve received seems to support that. I hope you agree.

These efforts are ongoing. The templates are not carved in stone but are living documents. We listen to feedback and update the templates as needed. When we encounter an issue that doesn’t fit an existing template a new template is created. Over time we’ve introduced new features to the advisories, such as the Common Vulnerability Scoring System (CVSS) and, more recently, Common Weakness Enumeration (CWE). We continue to evaluate feedback and requests, and industry trends, for incorporation into future disclosures.

We’re currently working on internal tooling to automate some of our processes, which should improve consistency and repeatability – while allowing us to expand the work we do. Frankly, I only scale so far, and the cloning vats just didn’t work out. Having more tooling will allow us to do more with our resources. Part of the plan is that the tooling will allow us to provide disclosures in multiple formats – but I don’t want to say anything more about that just yet as much work remains to be done.

So why do we CVE? For you – our customers, security professionals, and the industry in general. We assign CVEs and disclose issues not only for the benefit of our customers, but to lead by example. The more vendors who embrace openness and disclose CVEs, the more the practice is normalized, and the better the security community is for it. There isn’t really any joy in being the bearer of bad news, other than the hope that it creates a better future.

Postscript

If you’re still reading this, thank you for sticking with me. Vulnerability management and disclosure is certainly not the sexy side of information security, but it is a critical component. If there is interest, I’d be happy to explore different aspects further, so let us know.
Perhaps I can peel back the curtain a bit more in another article and provide a look at the vulnerability management processes we use internally. How the sausage, or security advisory, is made, as it were. Especially if it might be useful for others looking to build their own practice. But I like my job so I’ll have to get permission before I start disclosing internal information. We welcome all feedback, positive or negative, as it helps us do a better job for you. Thank you.
This Week in Security
June 20-26, 2022
"USB(ooze), 24 Billion Credentials Can't Be Wrong, I Spy With My Little Eye"

The theme this week is information and its (mis)management.

Drunk (USB) Driving

To err is human, but adding alcohol definitely helps the process along. Of course, the prerequisite conditions for this colossal mistake would not have existed had proper information security procedures been followed. The information should not have been copied onto the USB drive in the first place. That rule having been broken, once the work was completed the information should have been deleted. Failing to do even that, not getting pass-out-in-the-gutter drunk with the unsecured drive in his bag might have been a better life, and career, choice.

The drive is reportedly encrypted, which might offer some reassurance to the 465,000 people whose personal information was on the missing drive. Though, given the worker's chain of excellent decisions I'm not sure I'd put a lot of faith in his correctly encrypting the drive.

Drunken gentleman gets USB flash drive stolen — too bad it had personal info on every city resident | Boing Boing
Japan: Man loses USB flash drive with data on entire Amagasaki city's residents after night out - CNN

24 BILLION Username/Password Combinations

<Insert Your Own Dr. Evil 'Billion' Joke Here>

For those of us in infosec the popularity of credential stuffing attacks is no surprise. They've been increasingly common over the past few years, aided by multiple massive credential leaks. But the number of username/password credentials available on the dark web is truly staggering now - 24 billion combinations. 6.7 billion unique - a 1.7 billion increase from 2020. For the average consumer, who will use one email address as their username pretty much everywhere (as is the de facto standard these days), this highlights the risk of password reuse.
Leaks are so prevalent, and credential stuffing so common, that the risks are high for users who reuse their credentials. One leak and accounts on multiple sites may get popped.

The best way to protect ourselves from this is by using unique passphrases (not just passwords) for each site, likely aided by password managers since our brains aren't great at remembering all of those, and using multi-factor authentication (MFA) (aka two-factor authentication (2FA)) everywhere it is offered. Using apps like Authy, Duo, Google Authenticator, etc., is probably the best choice for most users. Physical tokens are great, but the tradeoff in usability and convenience is probably not justified for most users. Even SMS-based authentication is better than nothing. Yes, SIM-jacking and other attacks exist, but the risk to any random user is fairly low. It isn't perfect, but it raises the level of effort required.

Of course, unique passphrases would be a huge improvement given 1 out of 200 of the passwords in the collection are '123456'. And 49 of the 50 most common passwords can be cracked in under a second with standard tools. The users are not alright.

24+ Billion Credentials Circulating on the Dark Web in 2022 — So Far
24 billion username, password combinations can be found on cybercriminal forums | SC Media

Ransomware Goes Big

Nearly three million patient records, and counting, have been potentially compromised by a breach at Eye Care Leaders, a provider of electronic health records and patient management software solutions for eye care practices. The breach took place back in early December, but the scope - and tally of affected records - just keeps growing. Eye care providers large and small are affected - in the running list being maintained by HIPAA Journal the smallest provider had 1,337 patients at risk while the largest had 1,290,104.
While there is not yet evidence that patient records were successfully exfiltrated or otherwise accessed, the systems with the information were compromised and patient record access or exfiltration cannot be ruled out. Eye Care Leaders claims they provide software solutions to over 9,000 ophthalmologists and optometrists, so the list of affected practices, and therefore the number of patient records at risk, seems likely to continue to grow from the 33 listed today.

As the investigation is ongoing, and it doesn't look like any findings have been released at this point, there aren't really any new recommendations stemming from this breach. It just goes to show how far-reaching the impact of a breach in a services provider can be. With all of the practices storing their data in 'the cloud', as provided by ECL, the risks for operators are higher, as are the rewards for black hats.

Eye Care Leaders Hack Impacts Millions of Patients
Eye Care Leaders EMR Data Breach Tally Surpasses 2 Million
Breach at Eye Care Software Vendor Hits Millions of Patients | SecurityWeek.Com
5 more organizations added to Eye Care Leaders attack total, now biggest PHI breach of 2022 | SC Media

Let's Get Critical, Critical
MegaZone is back again for a roundup of the security news that caught my eye for the week of November 10th - 16th, 2024. This time, I want to get Critical. Yes, let's get into the Critical - issues, of course. We're going to look at some very recent Critical issues making the rounds, as well as issues which made the charts in 2023 - including an old friend which keeps on giving. And I'll end with a critical issue for all of us in the cybersecurity field, one I feel strongly about. Atomic Batteries to Power! Turbines to Speed!

HTTP Request Smuggling Using Chunk Extensions (CVE-2025-55315)
Executive Summary

HTTP request smuggling remains one of the nastier protocol-level surprises: it happens when different components in the HTTP chain disagree about where one request ends and the next begins. A recent, high-visibility ASP.NET Core disclosure brought one particular flavor of this problem into the spotlight: attackers abusing chunk extensions in chunked transfer encoding to craft ambiguous request boundaries. The vulnerability was assigned a very high severity (CVSS 9.9) by Microsoft, their highest for ASP.NET Core to date.

This article explains what chunk extensions are, why they can be abused for smuggling, how the recent ASP.NET Core issue fits into the bigger picture, and what defenders, implementers, and F5 customers should consider: particularly regarding HTTP normalization, compliance settings, and protection coverage across F5 Advanced WAF, NGINX App Protect, and Distributed Cloud.

Background: What Are Chunk Extensions?

In HTTP/1.1, chunked transfer encoding (via Transfer-Encoding: chunked) allows the body of a message to be sent in a sequence of chunks, each preceded by its size in hex, terminated by a zero-length chunk. The specification also allows chunk extensions to be appended after the chunk length, e.g. a size line of 4;name=value followed by the chunk data.

In theory, chunk extensions were meant for metadata or transfer-layer options: for example, integrity checks or special directives. But in practice, they’re almost never used by legitimate clients or servers: many HTTP libraries ignore or inconsistently handle them, and this inconsistency across intermediaries (proxies and servers) can serve as a source of request smuggling vulnerabilities. But if a lot of servers and proxies ignore it, why would that even be an issue? Let’s see.
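To make the wire format concrete, here is a minimal sketch of a chunked body carrying a chunk extension, along with a toy decoder that, like many real-world parsers, simply discards everything after the `;` on the size line. The extension name `ext=demo` is made up purely for illustration:

```python
# A chunked HTTP/1.1 body. The ";ext=demo" after the first chunk size
# is a chunk extension (name/value invented for this example).
body = (
    b"4;ext=demo\r\n"  # chunk size 0x4, plus a chunk extension
    b"Wiki\r\n"        # 4 bytes of chunk data
    b"5\r\n"           # next chunk, no extension
    b"pedia\r\n"
    b"0\r\n\r\n"       # zero-length chunk terminates the body
)

def decode_chunked(data: bytes) -> bytes:
    """Toy decoder that, like many real parsers, ignores chunk extensions."""
    pos, out = 0, b""
    while True:
        eol = data.index(b"\r\n", pos)
        # Everything after ';' on the size line is the extension - discarded.
        size = int(data[pos:eol].split(b";")[0], 16)
        if size == 0:
            return out
        out += data[eol + 2 : eol + 2 + size]
        pos = eol + 2 + size + 2  # skip the chunk data and its trailing CRLF

print(decode_chunked(body))  # -> b'Wikipedia'
```

Because most parsers throw the extension away like this, implementations rarely agree on how carefully its contents must be validated - and that divergence is exactly where the trouble starts.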
Root Cause Analysis for CVE-2025-55315

The CVE description reads: “Inconsistent interpretation of HTTP requests (‘HTTP request/response smuggling’) in ASP.NET Core allows an authorized attacker to bypass a security feature over a network.”

Examining the GitHub commit reveals a relatively straightforward fix. In essence, the patch adjusts the chunk-extension parser to correctly handle \r\n line endings and to throw an error if either \r or \n appears unpaired. Additionally, a new flag was introduced for backward compatibility.

As expected, the vulnerable logic resides in the ParseExtension function. The new InsecureChunkedParsing flag preserves legacy behavior - but it must be explicitly enabled, since that mode reflects the prior (and now considered insecure) implementation. Previously, the parser looked only for the carriage return (\r) character to determine the end of a line. In the updated implementation, it now checks for either a line feed (\n) or a carriage return (\r).

Next comes the line-ending validation. The syntax may look a bit dense, but the logic is straightforward. In short, the old insecure behavior - checking for the presence of \n only after encountering \r - is retained when the InsecureChunkedParsing flag is enabled. This is problematic because it allows injecting a single \r or \n inside the chunk extension.

In depth, the vulnerable condition, suffixSpan[1] == ByteLF, mirrors the old behavior - it verifies that the second character is \n. We reach this part only if we previously saw \r. The new condition validates that the last two characters of the chunk extension are \r\n. Remember that in the new version, we reach this part when encountering either \r or \n. The fixed condition ensures that if an attacker tries to inject a single \r or \n somewhere within the chunk extension, the check will fail - the condition will evaluate to false.
When that happens, and if the backward-compatibility flag is not enabled, the parser throws an exception: Bad chunk extension.

And what happened before the patch if the character following \r wasn’t \n? The parser simply continued, making the following characters part of the chunk extension. That means that a chunk extension could include line terminator characters.

The attack affecting unpatched ASP.NET Core applications is HTTP request smuggling via chunk extensions, a technique explained clearly and in depth in this article, which we’ll briefly summarize in this post.

Request smuggling using chunk extensions variants

Before diving into the different chunk-extension smuggling variants, it’s worth recalling the classic Content-Length / Transfer-Encoding (CL.TE and TE.CL) request smuggling techniques. These rely on discrepancies between how proxies and back-end servers interpret message boundaries: one trusts the Content-Length, the other trusts Transfer-Encoding, allowing attackers to sneak an extra request inside a single HTTP message. If you’re not familiar with CL.TE and TE.CL and other variants, this article gives an excellent overview of how these desync vulnerabilities work in practice.

TERM.EXT (terminator - extension mismatch): The proxy treats a line terminator (usually \n) inside a chunk extension as the end of the chunk header, while the backend treats the same bytes as part of the extension.

EXT.TERM (extension - terminator mismatch): The proxy treats only the \r\n sequence as the end of the chunk header, while the backend treats a line terminator character inside the chunk extension as the end of the chunk header.

The ASP.NET Core issue

Previously, ASP.NET Core allowed lone \r or \n characters to appear within a chunk extension if the line ended with \r\n, placing it in the EXT category. If a proxy ahead has TERM behavior (treating \n as line end), their parsing mismatch can enable request smuggling.
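The TERM/EXT mismatch can be sketched with two toy decoders fed the same byte stream: one lenient, where a bare \n ends the chunk-size line (TERM-style, like the proxy), and one strict, where only \r\n ends it, so a bare \n stays inside the extension (EXT-style, like pre-patch Kestrel). The payload below only follows the general shape of such an attack; the exact bytes, the /admin path, and the Host value are illustrative:

```python
def read_size_line(data: bytes, pos: int, strict: bool):
    """Return (chunk_size, next_pos) for the chunk-size line starting at pos."""
    if strict:
        # EXT-style: the line ends only at the exact sequence \r\n,
        # so a lone \n stays *inside* the chunk extension.
        end = data.index(b"\r\n", pos)
        line, nxt = data[pos:end], end + 2
    else:
        # TERM-style: a bare \n also terminates the line.
        end = data.index(b"\n", pos)
        line, nxt = data[pos:end].rstrip(b"\r"), end + 1
    return int(line.split(b";")[0], 16), nxt

def decode(data: bytes, strict: bool):
    """Minimal chunked decoder; returns (body, bytes_consumed)."""
    pos, body = 0, b""
    while True:
        size, pos = read_size_line(data, pos, strict)
        if size == 0:
            return body, pos + 2      # consume the final \r\n
        body += data[pos : pos + size]
        pos += size + 2               # skip chunk data and its trailing \r\n

# One stream, shaped like the attack: a lone \n inside a chunk extension.
inner = b"0\r\n\r\nGET /admin HTTP/1.1\r\nHost: internal\r\n\r\n"
stream = (
    b"2;\nxx\r\n"                                 # size line with a bare \n
    + format(len(inner), "x").encode() + b"\r\n"  # size of the next chunk
    + inner + b"\r\n"
    + b"0\r\n\r\n"
)

# Lenient (proxy) view: ONE request whose body carries the GET /admin bytes.
proxy_body, proxy_used = decode(stream, strict=False)
assert b"GET /admin" in proxy_body and proxy_used == len(stream)

# Strict (backend) view: the first request ends early, and the leftover
# bytes begin a SECOND request - the smuggled GET /admin.
_, backend_used = decode(stream, strict=True)
assert stream[backend_used:].startswith(b"GET /admin")
```

Same bytes, two parses: the lenient front end sees a single request and forwards it, while the strict back end carves out a second request from the leftover bytes - the essence of a TERM.EXT desync.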
Consider an example malicious request that exploits this parsing mismatch. The proxy treats a lone \n as the end of the chunk extension. As a result, the bytes xx become the start of the body and 47 is interpreted as the size of the following chunk. If the proxy forwards the request unchanged (i.e., it does not strip the extension), those next chunks can effectively carry a second, smuggled request destined for an internal endpoint that the proxy would normally block.

When Kestrel (the ASP.NET Core backend) receives that same raw stream, it enforces a strict \r\n terminator for extensions. Because the backend searches specifically for the \r\n sequence, it parses the received stream differently - splitting the forwarded data into two requests (the extension content, 2;\nxx, is treated as a chunk header + chunk body). The end result: a GET /admin request can reach the backend, even though the proxy would have blocked such a request if it had been observed as a separate, external request.

F5 WAF Protections

NGINX App Protect and F5 Distributed Cloud

NGINX App Protect and F5 Distributed Cloud (XC) normalize incoming HTTP requests and do not support chunk extensions. This means that any request arriving at NAP or XC with chunk extensions will have those extensions removed before being forwarded to the backend server. As a result, both NAP and XC are inherently protected against this class of chunk-extension smuggling attacks by design.

To illustrate this, let’s revisit the example from the referenced article. NGINX, which treats a lone \n as a valid line terminator, falls under the TERM category. When this request is sent through NAP, it is parsed and normalized accordingly - effectively split into two separate requests.

What does this mean? NAP does not forward the request the same as it arrived.
It normalizes the message by stripping out any chunk extensions, replacing the Transfer-Encoding header with a Content-Length, and ensuring the body is parsed deterministically - leaving no room for ambiguity or smuggling. If a proxy precedes NAP and interprets the traffic as a single request, NAP will safely split and sanitize it. F5 Distributed Cloud (XC) doesn’t treat lone \n as line terminators and also discards chunk extensions entirely.

Advanced WAF

Advanced WAF does not support chunk extensions. Requests containing a chunk header that is too long (more than 10 bytes) are treated as unparsable and trigger an HTTP compliance violation. To improve detection, we’ve released a new attack signature, “ASP.NET Core Request Smuggling - 200020232”, which helps identify and block malicious attempts that rely on chunk extensions.

Conclusions

HTTP request smuggling via chunk extensions remains a very real threat, even in modern stacks. The disclosure of CVE-2025-55315 in the Kestrel web server underlines this: a seemingly small parsing difference (how \r, \n, and \r\n are treated in chunk extensions) can allow an attacker to conceal a second request within a legitimate one, enabling account takeover, code injection, SSRF, and many other severe attacks.

This case offers a great reminder: don’t assume that because “nobody uses chunk extensions” they cannot be weaponized. And of course - use HTTP/2. Its binary framing model eliminates chunked encoding altogether, removing the ambiguity that makes these attacks possible in HTTP/1.1.