Coordinated Vulnerability Disclosure: A Balanced Approach
The world of vulnerability disclosure encompasses, and affects, many different parties – security researchers, vendors, customers, consumers, and even random bystanders who may be caught in the blast radius of a given issue. The security professionals who manage disclosures must weigh many factors when considering when and what to disclose. There are risks to disclosing an issue when there is no fix yet available, possibly making more malicious actors aware of the issue when those affected have limited options. Conversely, there are also risks to not disclosing an issue for an extended period when malicious actors may already know of it, yet those affected remain blissfully unaware of their risk. This is but one factor to be considered.

Researchers and Vendors

The relationship between security researchers and product vendors is sometimes perceived as contentious. I'd argue that's largely due to the exceptions that make headlines – because they're exceptions. When some vendor tries to silence a researcher through legal action, blocking a talk at a conference, stopping a disclosure, etc., those moves make for sensational stories simply because they are unusual and extreme. And those vendors are clearly not familiar with the Streisand Effect. The reality is that security researchers and vendors work together every day, with mutual respect and professionalism. We're all part of the security ecosystem, and, in the end, we all have the same goal – to make our digital world a safer, more secure place for everyone.

As a security engineer working for a vendor, you never want to have someone point out a flaw in your product, but you'd much rather be approached by a researcher and have the opportunity to fix the vulnerability before it is exploited than become aware of it because it was exploited. Sure, this is where someone will say that vendors should be catching the issues before the product ships, etc.
In a perfect world that would be the case, but we don't live in a perfect world. In the real world, resources are finite. Every complex product will have flaws because humans are involved, especially products that have changed and evolved over time. No matter how much testing you do, for any product of sufficient complexity, you can never be certain that every possibility has been covered. Furthermore, many products developed 10 or 20 years ago are now being used in scenarios that could not be conceived of at the time of their design. For example, the disintegration of the corporate perimeter and the explosion of remote work have exposed security shortcomings in a wide range of enterprise technologies. As they say, hindsight is 20/20. Defects often appear obvious after they've been discovered but may have slipped by any number of tests and reviews previously. That is, until a security researcher brings a new way of thinking to the task and uncovers the issue. For any vendor who takes security seriously, that's still a good thing in the end. It helps improve the product, protects customers, and improves the overall security of the Internet.

Non Sequitur. Your Facts Are Uncoordinated.

When researchers discover a new vulnerability, they are faced with a choice of what to do with that discovery. One option is to act unilaterally, disclosing the vulnerability directly. From a purely mercenary point of view, they might make the highest return by taking the discovery to the dark web and selling it to anyone willing to pay, with no regard for their intentions. Of course, this option brings with it both moral and legal complications. It arguably does more to harm the security of our digital world overall than any other option, and there is no telling when, or indeed if, the vendor will become aware of the issue so it can be fixed. Another drastic, if less mercenary, option is Full Disclosure - aka the 'Zero-Day' or '0-day' approach.
Dumping the details of the vulnerability on a public forum makes them freely available to all, both defenders and attackers, but leaves no time for advance preparation of a fix, or even a mitigation. This creates a race between attackers and defenders which, more often than not, is won by the attackers. It is nearly always easier, and faster, to create an exploit for a vulnerability and begin distributing it than it is to analyze a vulnerability, develop and test a fix, distribute it, and then patch devices in the field. Both approaches may, in the long term, improve Internet security as the vulnerabilities are eventually fixed. But in the short and medium terms they can do a great deal of harm to many environments and individual users, as attackers have the advantage and defenders are racing to catch up. These disclosure methods tend to be driven primarily by monetary reward, in the first case, or by some personal or political agenda, in the second case: dropping a 0-day to embarrass a vendor, government, etc. Now, Full Disclosure does have an important role to play, which we'll get to shortly.

Mutual Benefit

As an alternative to unilateral action, there is Coordinated Disclosure: working with the affected vendor(s) to coordinate the disclosure, including providing time to develop and distribute fixes, etc. Coordinated Disclosure can take a few different forms, but before I get into that, a slight detour. Coordinated Disclosure is the current term of art for what was once called 'Responsible Disclosure', a term which has generally fallen out of favor. The word 'responsible' is, by its nature, judgmental. Who decides what is responsible? For whom? To whom? The reality is it was often a way to shame researchers – anyone who didn't work with vendors in a specified way was 'irresponsible'.
There were many arguments in the security community over what it meant to be 'responsible', for both researchers and vendors, and in time the industry moved to the more neutrally descriptive term of 'Coordinated Disclosure'. Coordinated Disclosure, in its simplest form, means working with the vendor to agree upon a disclosure timeline and to, well, coordinate the process of disclosure. The industry standard is for researchers to give vendors a 90-day period in which to prepare and release a fix before the disclosure is made. This varies between programs – the window may be as short as 60 days or as long as 120 days – and often includes modifiers for different conditions such as active exploitation, Critical severity (CVSS) issues, etc.

There is also the option of private disclosure, wherein the vendor notifies only customers directly. This may happen as a prelude to Coordinated Disclosure. There are tradeoffs to this approach – on the one hand it gives end users time to update their systems before the issues become public knowledge, but on the other hand it can be hard to notify all users simultaneously without missing anyone, which would put those unaware at increased risk. The more people who know about an issue, the greater the risk of the information finding its way to the wrong people, or of premature disclosure. Private disclosure without subsequent Coordinated Disclosure has several downsides. As already stated, there is a risk that not all affected users will receive the notification. Future customers will have a harder time becoming aware of the issues, and often scanners and other security tools will also fail to detect the issues, as they're not in the public record. The lack of CVE IDs also means there is no universal way to identify the issues. There's also a misguided belief that private disclosure will keep the knowledge out of the wrong hands, which is just an example of 'security by obscurity', and rarely effective.
It's more likely to instill a false sense of security, which is counter-productive. Some vendors may have bug bounty programs which include detailed reporting procedures, disclosure guidelines, etc. Researchers who choose to work within the bug bounty program are bound by those rules, at least if they wish to receive the bounty payout from the program. Other vendors may not have a bug bounty program but still have ways for researchers to officially report vulnerabilities. If you can't find a way to contact a given vendor, or aren't comfortable doing so for any reason, there are also third-party reporting programs such as the Vulnerability Information and Coordination Environment (VINCE), or reporting directly to the Cybersecurity & Infrastructure Security Agency (CISA). I won't go into detail on these programs here, as that could be an article of its own – perhaps I will tackle that in the future. As an aside, at the time of writing, F5 does not have a bug bounty program, but the F5 SIRT does regularly work with researchers for coordinated disclosure of vulnerabilities. Guidelines for reporting vulnerabilities to F5 are detailed in K4602: Overview of the F5 security vulnerability response policy. We do provide an acknowledgement for researchers in any resulting Security Advisory.

Carrot and Stick

Coordinated disclosure is not all about the researcher; the vendor has responsibilities as well. The vendor is being given an opportunity to address the issue before it is disclosed. They should not see this as a burden or an imposition; the researcher is under no obligation to give them this opportunity. This is the 'carrot' being offered by the researcher. The vendor needs to act with some urgency to address the issue in a timely fashion, to deliver a fix to their customers before disclosure. The researcher is not to blame if the vendor is given a reasonable time to prepare a fix and fails to do so. The '90-day' guideline should be considered just that, a guideline.
The intention is to ensure that vendors take vulnerability reports seriously and make a real effort to address them. Researchers should use their judgment, and if they feel that the vendor is making a good faith effort to address the issue but needs more time to do so – especially for a complex issue or one that requires fixing multiple products, etc. – it is not unreasonable to extend the disclosure deadline. If the end goal is truly improving security and protecting users, and all parties involved are making a good faith effort, reasonable people can agree to adjust deadlines on a case-by-case basis. But there should still be some reasonable deadline; remember that an undisclosed vulnerability could be independently discovered and exploited at any time – if it hasn't been already – so a little firmness is justified. Even good intentions can use a little encouragement.

That said, the researcher also has a stick for the vendors who don't bite the carrot: Full Disclosure. For vendors who are unresponsive to vulnerability reports, who respond poorly to such reports (threats, etc.), or who do not make a good faith effort to fix issues in a timely manner, this is the alternative of last resort. If the researcher has made a good faith effort at Coordinated Disclosure but has been unable to follow through because of the vendor, then the best way to get the word out about the issue is Full Disclosure. You can't coordinate unless both parties are willing to do so in good faith. Vendors who don't understand that it is in their best interest to work with researchers may eventually learn that it is after dealing with Full Disclosure a few times. Full Disclosure is rarely, if ever, a good first option, but if Coordinated Disclosure fails, and the choice becomes No Disclosure vs. Full Disclosure, then Full Disclosure is the best remaining option.
In All Things, Balance

Coordinated disclosure seeks to balance the needs of the parties mentioned at the start of this article – security researchers, vendors, customers, consumers, and even random bystanders. Customers cannot make informed decisions about their networks unless vendors inform them, and that's why we need vulnerability disclosures. You can't mitigate what you don't know about. And the reality is no one has the resources to keep all their equipment running the latest software release all the time, so updates get prioritized based on need. Coordinated disclosure gives the vendor time to develop a fix, or at least a mitigation, and make it available to customers before the disclosure. This allows customers to rapidly respond to the disclosure and patch their networks before exploits are widely developed and deployed, keeping more users safe. The coordination is about more than just the timing; vendors and researchers will work together on the messaging of the disclosure, often withholding details in the initial publication to provide time for patching before disclosing information which makes exploitation easier. Crafting a disclosure is always a balancing act between disclosing enough information for customers to understand the scope and severity of the issue and not disclosing information which is more useful to attackers than to defenders.

The Needs of the Many

Coordinated disclosure gets researchers the credit for their work, allows vendors time to develop fixes and/or mitigations, gives customers those resources to apply when the issue is disclosed to them, protects customers by enabling patching faster than other disclosure methods, and ultimately results in a safer, more secure Internet for all. In the end, that's what we're all working for, isn't it? I encourage vendors and researchers alike to view each other as allies and not adversaries. And to give each other the benefit of the doubt, rather than presume some nefarious intent.
Most vendors and researchers are working toward the same goals of improved security. We're all in this together. If you're looking for more information on handling coordinated disclosure, you might check out The CERT Guide to Coordinated Vulnerability Disclosure.

PowerPoint, ArcaneDoor, the Z80 and Kaiser Permanente - April 21-27, 2024 - This Week in Security
Notable security news from the week of April 21st, with a small side of nostalgia for the Z80 CPU; we'll dive into the exploitation of an old PowerPoint CVE from 2017, ArcaneDoor and the targeting of Cisco perimeter devices, and an enormous breach of Kaiser Permanente user information!

InSpectre, Rust/PAN-OS CVEs, X URL blunder and More - April 8-14, 2024 - F5 SIRT - This Week in Security
Editor's Introduction

Hello, Arvin is your editor for This Week in Security. As usual, I collected some interesting security news. Credit to the original articles.

Intel processors are affected by a Native Branch History Injection (Native BHI) attack, demonstrated with InSpectre Gadget, a tool that can find gadgets (code snippets that can serve as a jumping-off point to bypass software and hardware protections) in an OS kernel on vulnerable hardware. Spectre-style attacks, which abuse speculative execution on processors, have been around for a while now. Intel updated their previously published guidance on "Branch History Injection and Intra-mode Branch Target Injection" and included an "Additional Hardening Options" section. The silver lining is that the CVEs' CVSS scores are Medium severity. See the snippets below from the VU Amsterdam researchers' paper that illustrate the InSpectre Gadget tool.

Rust has a critical CVE – CVE-2024-24576. It affects the Rust standard library, which was found to be improperly escaping arguments when invoking batch files on Windows using the library's Command API – specifically, std::process::Command. It is specific to Windows cmd.exe, which has complex parsing rules; the API is meant to allow untrusted inputs to be safely passed to spawned processes, but the escaping logic fell short.

Next is a PAN-OS Critical CVE, which affects devices with firewall configurations with a GlobalProtect gateway and device telemetry enabled. CVE-2024-3400 affects PAN-OS 10.2, PAN-OS 11.0 and PAN-OS 11.1. Updates to fully fix this CVE were made available from April 14. Refer to https://security.paloaltonetworks.com/CVE-2024-3400

Change Healthcare's worries over the effects of a previous breach by the ALPHV ransomware group appear to be far from over.
Per the report, the victim organization was potentially "exit" scammed by ALPHV and is being pursued by the "contractor/affiliate" behind the ransomware attack, RansomHub, demanding another round of ransom be paid, else they sell the exfiltrated data to the highest bidder.

X/Twitter had a URL blunder where links containing the string twitter in posts were rewritten to the letter X – for example, netflitwitter[.]com was displayed as netflix[.]com. This behavior was reversed and is back to usual, but twitter[.]com URLs now properly convert to x[.]com.

Lastly, a roundup of issues from MS, Fortinet, SAP, Cisco, Adobe, and Google/Android. As in previous TWIS editions, some of these items were a recurrence or follow-up. In general, keep your systems up to date on software versions, secure access to them, and allow only trusted users and applications to run. Implement layers of protection: updated AV/EDR/XDR on server and end-user systems; firewall/network segmentation rules and IPS to prevent further spread and lateral movement in the event of a ransomware attack (BIG-IP AFM has network firewall and IPS features that you can consider); and a WAF to protect your web applications and APIs – BIG-IP ASM/Adv WAF, F5 Distributed Cloud Services, and NGINX App Protect have security policy configurations and attack signatures that can mitigate known command injection techniques and other web exploitation techniques. End-user security training and awareness, incident response, and reporting will help an organization should that first phishing email reach a target end user's mailbox. If it feels "off" and looks suspicious, stop and ponder before clicking. I hope this edition of TWIS is educational. You can also read past TWIS editions and other content from the F5 SIRT, so check those out as well. Till next time!
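The URL rewrite blunder mentioned above comes down to naive substring replacement. A minimal Python sketch of the apparent behavior (the function names are ours; X's actual client code is not public), alongside a hostname-aware rewrite that avoids the trap:

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_naive(text):
    # The apparent behavior: blind substring replacement, so any
    # hostname that merely *contains* "twitter.com" gets mangled.
    return text.replace("twitter.com", "x.com")

def rewrite_hostname_aware(url):
    # Safer: only rewrite when the entire hostname matches.
    parts = urlsplit(url)
    if parts.hostname in ("twitter.com", "www.twitter.com"):
        parts = parts._replace(netloc="x.com")
    return urlunsplit(parts)

print(rewrite_naive("netflitwitter.com"))                    # netflix.com -- the blunder
print(rewrite_hostname_aware("https://netflitwitter.com/"))  # left unchanged
print(rewrite_hostname_aware("https://twitter.com/f5"))      # https://x.com/f5
```

The difference is exactly the phishing risk The Register described: with the naive version, an attacker who registers a lookalike domain ending in twitter.com gets their link displayed as a trusted brand.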
Rust rustles up fix for 10/10 critical command injection bug on Windows in std lib

Programmers are being urged to update their Rust versions after the security experts working on the language addressed a critical vulnerability that could lead to malicious command injections on Windows machines. The vulnerability, which carries a perfect 10-out-of-10 CVSS severity score, is tracked as CVE-2024-24576. It affects the Rust standard library, which was found to be improperly escaping arguments when invoking batch files on Windows using the library's Command API – specifically, std::process::Command.

"An attacker able to control the arguments passed to the spawned process could execute arbitrary shell commands by bypassing the escaping," said Pietro Albini of the Rust Security Response Working Group, who wrote the advisory. The main issue seems to stem from Windows' CMD.exe program, which has more complex parsing rules, and Windows can't execute batch files without it, according to the researcher at Tokyo-based Flatt Security who reported the issue. Albini said Windows' Command Prompt has its own argument-splitting logic that works differently from the usual Command::arg and Command::args APIs provided by the standard library, which typically allow untrusted inputs to be safely passed to spawned processes.

"On Windows, the implementation of this is more complex than other platforms, because the Windows API only provides a single string containing all the arguments to the spawned process, and it's up to the spawned process to split them," said Albini. "Most programs use the standard C run-time argv, which in practice results in a mostly consistent way arguments are split.

"Unfortunately it was reported that our escaping logic was not thorough enough, and it was possible to pass malicious arguments that would result in arbitrary shell execution."
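The class of bug is easier to see on a POSIX shell, where the escaping rules are simpler than cmd.exe's. Here is a minimal Python illustration of the same escaping problem (this is not the Rust API; the function names and payload are invented for this example):

```python
import shlex
import subprocess

PAYLOAD = "hello; echo INJECTED"

def echo_unsafe(arg):
    # Naive interpolation: the argument text is pasted straight into
    # the shell command line, so shell metacharacters are interpreted.
    return subprocess.run(f"echo {arg}", shell=True,
                          capture_output=True, text=True).stdout

def echo_safe(arg):
    # shlex.quote() escapes the argument for POSIX shells, so the
    # payload reaches echo as one literal string.
    return subprocess.run(f"echo {shlex.quote(arg)}", shell=True,
                          capture_output=True, text=True).stdout

print(echo_unsafe(PAYLOAD))  # "hello", then the injected command runs too
print(echo_safe(PAYLOAD))    # the whole payload echoed back as literal text
```

The catch, and the reason the Rust fix was hard, is that shlex.quote() only works for POSIX shells: cmd.exe batch-file parsing follows different and less consistent rules, so no equally simple quoting function is guaranteed safe there.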
https://www.theregister.com/2024/04/10/rust_critical_vulnerability_windows/

It's 2024 and Intel silicon is still haunted by data-spilling Spectre

Intel CPU cores remain vulnerable to Spectre data-leaking attacks, say academics at VU Amsterdam. We're told mitigations put in place at the software and silicon level by the x86 giant to thwart Spectre-style exploitation of its processors' speculative execution can be bypassed, allowing malware or rogue users on a vulnerable machine to steal sensitive information – such as passwords and keys – out of kernel memory and other areas of RAM that should be off limits. The boffins say they have developed a tool called InSpectre Gadget that can find snippets of code, known as gadgets, within an operating system kernel that on vulnerable hardware can be abused to obtain secret data, even on chips that have Spectre protections baked in. InSpectre Gadget was used, as an example, to find a way to side-step FineIBT, a security feature built into Intel microprocessors intended to limit Spectre-style speculative execution exploitation, and successfully pull off a Native Branch History Injection (Native BHI) attack to steal data from protected kernel memory.

"We show that our tool can not only uncover new (unconventionally) exploitable gadgets in the Linux kernel, but that those gadgets are sufficient to bypass all deployed Intel mitigations," the VU Amsterdam team said this week. "As a demonstration, we present the first native Spectre-v2 exploit against the Linux kernel on last-generation Intel CPUs, based on the recent BHI variant and able to leak arbitrary kernel memory at 3.5 kB/sec."

https://www.theregister.com/2024/04/10/intel_cpus_native_spectre_attacks/

From https://download.vusec.net/papers/inspectre_sec24.pdf:

2.2 Spectre v2

In 2018, the disclosure of Spectre [29] famously demonstrated how speculation can be used to leak data across security domains.
One variant presented in the paper, originally known as Spectre v2 or Branch Target Injection (BTI), shows how speculation of indirect branches can be used to transiently divert the control flow of a program and redirect it to an attacker-chosen location. The attack works by poisoning one of the CPU predictors, the Branch Target Buffer (BTB), which is used to decide where to jump on indirect branch speculation. Initially, mitigations were proposed at the software level and, later, in-silicon mitigations such as Intel eIBRS [5] and ARM CSV2 [12] were added to newer generations of CPUs to isolate predictions across privilege levels.

2.3 Branch History Injection

In 2022, Branch History Injection (BHI) [13] showed that, despite mitigations, cross-privilege Spectre v2 is still possible on the latest Intel CPUs by poisoning the Branch History Buffer (BHB). Figure 1 provides a high-level overview of the attack. In summary, by executing a sequence of conditional branches (HA and HV) right before performing a system call, an unprivileged attacker can cause the CPU to transiently jump to a chosen target (TA) when speculating over an indirect call in the kernel (CV). This happens because the CPU picks the speculative target for CV from a shared structure, the BTB, that is indexed using both the address of the instruction and the history of previous conditional branches, which is stored in the Branch History Buffer (BHB). Finding the right combination of histories that will result in a collision can be done with brute-forcing. To ensure the injected target, TA, contains a disclosure gadget, the original BHI attack relied on the presence of the extended Berkeley Packet Filter (eBPF), through which an unprivileged user can craft code that lives in the kernel.

Figure 2: InSpectre Gadget workflow. The analyst provides a kernel image and a list of target addresses to InSpectre Gadget (1), which performs in-depth inspection to find gadgets that can leak secrets and output their characteristics.
The gadgets can be filtered (2) based on the available attacker-controlled registers and the mitigations enabled, and used to craft Spectre v2 exploits against the kernel (3).

Zero-day exploited right now in Palo Alto Networks' GlobalProtect gateways

Palo Alto Networks on Friday issued a critical alert for an under-attack vulnerability in the PAN-OS software used in its firewall-slash-VPN products. The command-injection flaw, with an unwelcome top CVSS severity score of 10 out of 10, may let an unauthenticated attacker execute remote code with root privileges on an affected gateway, which to put it mildly is not ideal. It can, essentially, be exploited to take complete control of equipment and drill into victims' networks. Updates to fully fix this severe hole are due to arrive by Sunday, April 14, we're told. CVE-2024-3400 affects PAN-OS 10.2, PAN-OS 11.0 and PAN-OS 11.1 firewall configurations with a GlobalProtect gateway and device telemetry enabled. Cloud firewalls, Panorama appliances, and Prisma Access are not affected, Palo Alto says.

Zero-day exploitation of this vulnerability was detected on Wednesday by cybersecurity shop Volexity, on a firewall it was monitoring for a client. After an investigation determined that the firewall had been compromised, the firm saw another customer get hit by the same intruder on Thursday. "The threat actor, which Volexity tracks under the alias UTA0218, was able to remotely exploit the firewall device, create a reverse shell, and download further tools onto the device," the network security management firm said in a blog post. "The attacker focused on exporting configuration data from the devices, and then leveraging it as an entry point to move laterally within the victim organizations." The intrusion, which begins as an attempt to install a custom Python backdoor on the firewall, appears to date back at least to March 26, 2024.
Palo Alto Networks refers to the exploitation of this vulnerability as Operation MidnightEclipse, which at least is more evocative than the alphanumeric jumble UTA0218. The firewall maker says while the vulnerability is being actively exploited, only a single individual appears to be doing so at this point. Mitigations include applying a GlobalProtect-specific vulnerability protection, if you're subscribed to Palo Alto's Threat Prevention service, or "temporarily disabling device telemetry until the device is upgraded to a fixed PAN-OS version. Once upgraded, device telemetry should be re-enabled on the device." It urged customers to follow the above security advisory and thanked the Volexity researchers for alerting the company and sharing its findings.

https://www.theregister.com/2024/04/12/palo_alto_pan_flaw/
https://www.volexity.com/blog/2024/04/12/zero-day-exploitation-of-unauthenticated-remote-code-execution-vulnerability-in-globalprotect-cve-2024-3400/
https://unit42.paloaltonetworks.com/cve-2024-3400/

Change Healthcare faces second ransomware dilemma weeks after ALPHV attack

Change Healthcare is allegedly being extorted by a second ransomware gang, mere weeks after recovering from an ALPHV attack. RansomHub claimed responsibility for attacking Change Healthcare in the last few hours, saying it had 4 TB of the company's data containing personally identifiable information (PII) belonging to active US military personnel and other patients, medical records, payment information, and more. The miscreants are demanding a ransom payment from the healthcare IT business within 12 days or its data will be sold to the highest bidder. "Change Healthcare and United Health you have one chance in protecting your clients data," RansomHub said. "The data has not been leaked anywhere and any decent threat intelligence would confirm that the data has not been shared nor posted."
The org is alleged to have paid a $22 million ransom to ALPHV following the incident – a claim made by researchers monitoring a known ALPHV crypto wallet and one backed up by RansomHub. However, Change Healthcare has never officially confirmed this to be the case. If all of the claims are true, it means the embattled healthcare firm is deciding whether to pay a second ransom fee to keep its data safe.

The prevailing theory among infosec watchers is that ALPHV pulled what's known as an exit scam after Change allegedly paid its ransom. While the ratios vary slightly between gangs, generally speaking, ransomware payments are split 80/20 – 80 percent for the affiliate that actually carried out the attack and 20 percent for the gang itself. It's believed that ALPHV took 100 percent of the alleged payment from Change Healthcare, leaving the affiliate responsible for the attack without a commission. Angry and searching for what they believed they were "owed," the affiliate is thought to have retained much of the data it stole and now switched allegiances to RansomHub in one last throw of the dice to earn themselves a payday, or so the theory goes.

UnitedHealth, parent company of Change Healthcare, disclosed a cybersecurity incident on February 22, saying at the time it didn't expect it to materially impact its financial condition or the results of its operations. It originally suspected nation-state attackers to be behind the incident, but the ALPHV ransomware gang later claimed responsibility. Many of its systems were taken down as a result while it assessed and worked to remediate the damage. Hospitals and pharmacies reported severe disruption to services following the attack, with many unable to process prescriptions, payments, and medical claims. Cashflow issues also plagued many institutions, prompting the US government to intervene.
The IT biz's data protection standards are soon to be subject to an investigation by the US healthcare industry's data watchdog, which cited the "unprecedented magnitude of this cyberattack" in its letter to Change.

https://www.theregister.com/2024/04/08/change_healthcare_ransomware/

X fixes URL blunder that could enable convincing social media phishing campaigns

Elon Musk's X has apparently fixed an embarrassing issue implemented earlier in the week that royally bungled URLs on the social media platform formerly known as Twitter. Users started noticing on Monday that X's programmers implemented a rule on its iOS app that auto-changed Twitter.com links that appeared in Xeets to X.com links. Attackers could feasibly copy legitimate web pages to steal credentials, or skip the trouble and simply use it as a malware-dropping tool, or any number of other possibilities. The potential for abuse here would be rife, given the number of legitimate, well-known brands most people would blindly trust. Netflix, Plex, Roblox, Clorox, Xerox – you get the picture. According to tests at Reg towers on Wednesday morning, the issue appears to have been reversed. Netflitwitter[.]com now reads as such, but Twitter.com is auto-changed to X.com.

Maintainers, Slowloris/2, Kobold Letters - April 1st - 7th, 2024 - F5 SIRT - This Week in Security
Introduction

Hello again, Kyle Fox here. This week we have some shorter bits about things, in which I promise two more future articles, which I think means I am up to three non-TWIS articles in the pipeline.

We have to talk about project maintainers again. We have all seen that one XKCD comic about dependency maintainers. The xz situation has resurfaced a common plea from Open Source maintainers: we need funds and help. I don't have any real deep commentary here, just a plea that companies heavily dependent on Open Source projects should consider giving back to the community by retaining internal SMEs who can help projects resolve issues by submitting bug fixes, contributing to those projects financially, and possibly hiring internal people to work on the major features they want out of these projects. Platforms like GitHub may be able to help by moderating discussions to keep project maintainers from being abused by users. And the community should work better at being a positive force for change. The same goes for conferences: some of us spend lots of time working on all the little details so you can go to DEF CON, have parties to go to, things to hack and places to hack them in. It's easy to look at something like DEF CON and think that it's just another industry conference and everyone is being paid to be there, but very few people are paid to be there. I will further discuss this soon in a post about the current DEF CON situation and venues.

Is the HTTP/2 CONTINUATION Attack Just Slowloris/2?

On April 3rd the industry got wind of a new attack on HTTP/2; this time you could consume resources by sending a steady stream of CONTINUATION frames, leaving the connection open and consuming resources. This came on the tail end of the HTTP/2 Rapid Reset attack, which consumed resources in an orthogonal way. If this attack sounds familiar, it's because it is almost the same attack for HTTP/2 as the Slowloris attack was for HTTP/1.1.
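To make the family resemblance concrete, here is the shape of a Slowloris-style request sketched in Python (illustrative only; the generator and header names are invented for this example, and no traffic is actually sent):

```python
def slowloris_chunks(host, filler_headers=5):
    # Yield the byte chunks a Slowloris-style client trickles out.
    # In a real attack there are long pauses between chunks, each one
    # arriving just often enough to keep the TCP connection alive.
    yield f"GET / HTTP/1.1\r\nHost: {host}\r\n".encode()
    for i in range(filler_headers):
        yield f"X-a-{i}: keep-waiting\r\n".encode()
    # The terminating blank line (CRLF CRLF) is never sent, so the
    # request never completes and the server holds the connection,
    # and its buffers, open indefinitely.

request_so_far = b"".join(slowloris_chunks("victim.example"))
print(b"\r\n\r\n" in request_so_far)  # False: the request never ends
```

The HTTP/2 CONTINUATION variant has the same structure: a stream of header-carrying frames with the flag that would end the header block never set, so the server keeps waiting and keeps buffering.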
You could also compare it to the Slow POST attack. How Slowloris worked, for those who may have forgotten since 2009, is that the attacker sends an HTTP/1.1 request to a webserver and then slowly transmits one header at a time, holding the connection open for a very long time with minimal traffic. On susceptible webservers, the attacker only needs to send headers fast enough to keep the TCP connection from timing out, since the webserver has no timeout for the header stage of the request. The Slow POST attack is similar, but slowly sends chunks of POST data rather than headers, relying on the webserver not timing out on those. BIG-IP mitigated Slowloris through its normal behavior of buffering all the headers before forwarding a request to the backend servers; a limit on the number and/or size of headers allows further refinement of this mitigation. When mitigated, these attacks generate at most an open connection on the backend with no request. This same behavior mitigated the HTTP/2 Rapid Reset attack and now mitigates the HTTP/2 CONTINUATION attack. As we can see, old attacks can become new ones when a new or significantly revised protocol comes along. This is why, when working on new features, F5 performs Threat Modelling Assessments to catalog possible new variations of old attacks – or completely new attacks – that may apply to a new feature, protocol, or service, and builds in protections against them.

Display: none Strikes Again, Now in Email
A recent post over at Lutra Security called Kobold Letters has resurfaced an old trick with CSS, this time in email. The TL;DR of the trick is using display: none in an email's CSS to hide text until the email is forwarded or replied to. Email clients often convert a quoted email to plain text, or rewrite its HTML and CSS slightly.
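The trick is easy to demonstrate. Below is a minimal sketch; the message text and the `naive_forward` model of a style-flattening mail client are hypothetical, invented for illustration, not taken from the Lutra Security post:

```python
# Sketch of the Kobold Letters idea: text hidden with display:none that a
# quoting client reveals by flattening styles. All strings are made up.
hidden = "Please wire payment to account 0000-HYPOTHETICAL immediately."

email_html = (
    "<html><body>"
    "<p>Hi, just confirming our meeting on Friday.</p>"
    '<div style="display: none">' + hidden + "</div>"
    "</body></html>"
)


def naive_forward(html: str) -> str:
    """Models a mail client that rewrites HTML when quoting a message:
    dropping the inline style leaves the once-hidden div fully visible."""
    return html.replace(' style="display: none"', "")


assert "display: none" in email_html                      # hidden in the original
assert "display: none" not in naive_forward(email_html)   # revealed once quoted
```

The original recipient sees only the innocuous paragraph; whoever receives the quoted copy sees the planted instruction rendered as ordinary text.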
This results in the ability to put blocks of text in divs or other selectable blocks, styled in CSS so that they are hidden – or otherwise change their display and appearance – when the message is forwarded or replied to. I don't know if this really changes much in the spear-phishing risk area; at this point organizations should have considerable controls in place to ensure that fund transfers are only acted on with clear, verified approval, and that the destinations of transfers are vetted and verified rather than copied from some email and used without checking. Fortunately, in this case the vendors have been informed and are working to provide solutions, so the attack may not be viable for very long.

Are Bluetooth Discovery Attacks Drying Up?
I don't have much to write here since I have not yet dug into the data, but the Bluetooth Discovery attacks I talked about in December appear not to be as popular as they once were. I used Wall-of-Flippers at a few conventions in March to collect Flipper and Bluetooth Discovery spam data, and it appears that not a whole lot of spamming was happening. Apple and Google have been working on mitigating these attacks, with Apple having released several iOS updates to patch them; the resulting lack of impact may be driving this trend. I do intend to bring the Wall-of-Flippers to more events, and will do a bigger writeup on the device, the software, and the data collected here on DevCentral in the coming month or two.

Roundup
Not a channel this time, but a single video by TwinkleTwinkie: Understanding & Making PCB Art.
Google to delete records of users' Incognito Mode browsing in lawsuit settlement.
Microsoft has announced how much it will cost to keep Windows 10 past the date they want you to move to Windows 11. No word on a better Windows 11 UI.
Fake AI law firms are sending DMCA takedowns to generate SEO gains. (Original report)
A recommendation from my recent trip to Las Vegas: Roberto's Taco Shop.
Wi-Fi only works when it's raining – a lesson that sometimes observations, however absurd, are correct.
Roku wants to insert ads in HDMI inputs?
DEF CON now has hotel blocks at the Sahara Las Vegas, the Fontainebleau Las Vegas, and Resorts World.