on 23-Jun-2022 07:46 - edited on 08-Sep-2022 08:32 by AubreyKingF5
As Rebecca mentioned last week, we (the F5 SIRT) are now aiming to publish a round-up of the previous weeks' security news under the This Week in Security (henceforth, TWIS) banner. This week is a bumper issue, covering two weeks (June 6th to June 19th 2022) of security news. As always, the security world moves quickly and there was a plethora of articles to read over the last couple of weeks - I've pulled out a handful of the ones I found most interesting, and you can read my commentary on phishing attacks, QNAP ransomware attacks, an Atlassian Confluence vulnerability and two novel processor side-channel attacks within.
Phishing isn't a new or novel attack - it has been with us since the advent of electronic communications and will probably be around forever - but it popped up in the news again last week, driven by new Malware-as-a-Service (MaaS) offerings and, in all likelihood, geopolitical events. Let's look at a couple of examples:
First, I'd like to talk about Google Drive share spam; this pops up in the news every now and then (at least as far back as 2020, and again in 2021) but isn't something I'd experienced personally - until the last couple of weeks. Since then my personal email account has been absolutely flooded with notifications of files being shared with me via Google Drive - PDFs, Google Slides, Google Docs - all of which contain images along with a link to websites designed to directly or indirectly relieve me of money. It's good to see that the same techniques we saw 20+ years ago (with the ILOVEYOU and Anna Kournikova viruses) are apparently still effective enough for attackers to use today... Still, the Google Drive 'shared document' spam was new to me, and seems to be extremely hard to counter - no amount of reporting stuff as spam in Gmail helps the situation, and it seems like the only thing left is to block individual senders. I'd be interested to hear if anyone else has seen an uptick in this attack recently!
Second, MaaS - specifically, Matanbuchus. I've linked to Palo Alto Unit 42's excellent write-up, as well as SANS ISC's brilliant summary if you'd like a shorter read with more specific calls to action, like the SHA256 hashes of files dropped and the outbound traffic generated. What caught my eye with Matanbuchus was the absurdly low initial rental cost: just $2,500, putting it firmly within the reach of even the most poorly funded organisations. Other than that, this is your typical run-of-the-mill spam attack delivering malware via malicious links or attachments (in this case, attachments). In SANS' example, a ZIP file is delivered via email which, when extracted by the user, presents an HTML file; opening that HTML file displays a fake OneDrive page which delivers a second ZIP file containing an MSI installer, and launching the installer installs the malware directly. Meanwhile, in Unit 42's example, the first-stage dropper is an Excel sheet with code embedded across multiple cells to download and execute the malware.
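As a practical aside, file-hash indicators like the ones SANS publishes are easy to sweep for yourself. Here's a minimal sketch in Python - the KNOWN_BAD set is a placeholder, not a real indicator list; you'd substitute the SHA256 hashes from the write-ups:

```python
import hashlib
from pathlib import Path

# Placeholder indicator set - substitute real SHA256 hashes from an IOC feed.
KNOWN_BAD = {
    "0" * 64,  # dummy value, not a real indicator
}

def sha256_of(path: Path) -> str:
    """Stream the file through SHA256 so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: Path) -> list[Path]:
    """Return every file under `directory` whose hash matches an indicator."""
    return [
        p for p in directory.rglob("*")
        if p.is_file() and sha256_of(p) in KNOWN_BAD
    ]
```

This only catches exact matches, of course - repacking the payload by even one byte changes the hash, which is why the write-ups also list network indicators.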
Attacks like this and Follina highlight, for me, the need for layered security as well as whole-network visibility and alerting: outbound traffic inspection (where possible), logging and blocking of attempts to access known malicious resources backed by robust intelligence feeds, and endpoint inspection with logging and quarantining of potentially malicious files. Products like SSLO can help here with outbound access control, APM can enforce endpoint standards, and ThreatStack can alert on cloud workloads should lateral movement off workstations happen - all of which can be backed up by a robust SIEM solution for visibility.
I feel like I write about supply chain security every time I write an analysis these days, but I spotted something unfold over the last couple of weeks that highlights, yet again, the difficulty and importance of paying attention to all of the individual components (often sourced via third parties) which make up a product. That's something the F5 SIRT and F5's Platform Security organisations spend a lot of time inspecting, and an area that is seeing - both at F5 and in the wider industry - significant investment in tooling around visibility and patching, driven in part by last year's Executive Order mandating a Software Bill of Materials (SBOM) for all products and, I think, in part by the sheer volume of supply chain incidents over the last couple of years.
Getting back to the topic at hand: around June 17th we started to see reports of widespread ransomware attacks against QNAP Network Attached Storage (NAS) devices using the DeadBolt ransomware. QNAP immediately urged customers to upgrade so the built-in anti-malware software could quarantine the DeadBolt instance, and only a day or so later reports emerged of QNAP NAS devices being targeted by ech0raix ransomware. This smelt to me like a new attack vector had been discovered and, although I haven't seen any detailed analysis of how either ransomware was being dropped onto devices, it seems somewhat suspicious that a week later QNAP would announce patching a PHP vulnerability from 2019 which can allow remote code execution when exploited.
The fact that this is a PHP vulnerability from 2019, for which exploits have been available for three years, is what circles me back to supply chain security: vendors must automate at least the visibility into their supply chains (as F5 has) so that issues like this are surfaced quickly and fixed in their products in a timely manner, rather than relying on after-the-fact patching. To be clear, this is an enormous undertaking for any vendor with more than a trivial number of large products, but it is absolutely essential for the security of all of our lives. F5 has made a commitment to be a force for a safer digital world - what do you want to be a force for?
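To make the "automated visibility" idea concrete, here's a minimal sketch of what the simplest form of SBOM checking looks like: take a CycloneDX-style JSON SBOM and flag any component whose exact version appears in an advisory feed. The ADVISORIES mapping is entirely hypothetical - a real pipeline would query a vulnerability database and do proper version-range matching:

```python
import json

# Hypothetical advisory data: component name -> known-vulnerable versions.
# A real implementation would query a CVE/OSV feed and match version ranges.
ADVISORIES = {"php": {"7.2.0", "7.2.1"}}

def vulnerable_components(sbom_json: str) -> list[str]:
    """Flag SBOM components whose exact version appears in an advisory.

    Expects CycloneDX-style JSON with a top-level "components" array of
    objects carrying "name" and "version" fields.
    """
    sbom = json.loads(sbom_json)
    hits = []
    for comp in sbom.get("components", []):
        name, version = comp.get("name"), comp.get("version")
        if version in ADVISORIES.get(name, set()):
            hits.append(f"{name} {version}")
    return hits
```

Even this trivial version illustrates the point: once the SBOM exists, surfacing "we ship a three-year-old vulnerable PHP" becomes a query, not an archaeology project.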
Atlassian were last on my radar in the summer of 2021, but over the last couple of weeks they popped back up again when an RCE reported in very early June (CVE-2022-26134) hit its mass-exploitation phase, with CISA advising US federal agencies to block access to, or remove, vulnerable instances by June 6th. This mirrors something we've seen repeatedly with what I would call 'high value' vulnerabilities - the time between disclosure and mass exploitation by worms is measured in a handful of days, and sometimes just hours! This makes the task of patching vulnerabilities in internet-facing applications in a 'timely' manner almost impossible for most organisations; the takeaway here, then, has to be segmentation - if and when an attacker breaches an internet-facing application, you absolutely do not want them to be able to pivot into more sensitive internal infrastructure (and maybe don't put the Jira instance you use to track product development on the Internet?). Zero Trust has a role to play here too, but don't interpret Zero Trust's second name of "perimeterless security" to mean that you do not need boundaries and segmentation between servers and systems.
Back to my favourite subject again! There have been a couple (either two or three, depending on how you're counting them!) of new processor vulnerabilities disclosed over the last couple of weeks - one made lots of news while the other seems to have flown more under the radar.
First, Hertzbleed - I am pretty sure this got the attention because a) it affects x86 architectures (Intel and AMD), and b) it has a catchy name and logo; and we all know you need a catchy name and a logo for your shiny new vulnerability, right?
Hertzbleed is a remotely exploitable, timing-based side-channel attack which has been shown - in laboratory conditions, at least - to allow complete recovery of secret keys, and conceivably any other information which can be gleaned by sending workloads of differing difficulty to the target system. The novel part of this attack is that the timing discrepancies arise from dynamic frequency scaling (the frequency of a CPU being measured in Megahertz or Gigahertz, hence Hertz-): the same code executes in a different amount of time depending on the data being processed, allowing a side channel to leak (hence -bleed) information about the workload itself.
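If timing side channels are new to you, the general principle is easy to demonstrate - this sketch is purely illustrative (it shows a classic early-exit comparison leak, not the Hertzbleed frequency-scaling channel itself, and the function names are my own):

```python
import time

def naive_equals(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime depends on how many leading bytes match,
    so the time taken leaks information about the secret being compared."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def time_guess(guess: bytes, secret: bytes, trials: int = 1000) -> float:
    """Measure total comparison time over many trials; an attacker ranks
    guesses by which one takes longest, revealing the secret byte by byte."""
    start = time.perf_counter()
    for _ in range(trials):
        naive_equals(guess, secret)
    return time.perf_counter() - start
```

The standard defence is constant-time code (e.g. Python's hmac.compare_digest). Hertzbleed's scary contribution is that frequency scaling makes even constant-time code run at a data-dependent speed, which is exactly why it made headlines.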
Your three-bulletpoint summary, à la Aaron, is:
*I have to add a caveat here - disabling frequency scaling will lock the processor at its base frequency; you won't get turbo boost anymore, nor will it scale back when idle. This means you will potentially use more power and generate more heat, and the processor will handle less peak load as it can no longer boost above the base frequency - bad for the environment, and if you were taking advantage of that turbo boost, you might need more processors for your existing workloads. See my point 3 above for why I think the costs of the workaround outweigh the likely benefits.
Next up - do you have a new M1 Mac and feel left out? Fear not, PACMAN is here! This vulnerability got a shiny name and logo but doesn't seem to have generated as much traffic or commentary as Hertzbleed - perhaps that's because if you Google Pacman you have to weed out all of the results for the classic video game? PACMAN builds on Spectre and implements similar techniques against the M1 architecture, though, much like exploiting Spectre, you need to have found a piece of exploitable code, or loaded your own, onto the target endpoint - for PACMAN you need a piece of software containing an existing memory corruption bug plus a vulnerable piece of kernel code to use as a gadget in order to construct a complete exploit. The report cites this as exploitable via the network and, while I don't dispute that, you have to have established an awful lot of predicates before you can exploit it (as I noted: some piece of code which is already vulnerable and whose vulnerability you can exploit, a kernel gadget, and all of that has to be network accessible). So here's my bullet-point summary again:
*The caveat here is that there is a class of attacker who would use this kind of exploit - nation states. Your regular run-of-the-mill attacker will look for much, much easier ways to compromise a system (and we're back to phishing again) because they aren't as concerned with high-value targets or noiseless attacks. Actually exploiting this in a real-world system would, in my opinion, take so much intelligence gathering and prior research that only the most highly motivated nation-state attacker would use it, and only against the highest value targets. What can you do? Make sure you aren't running vulnerable code... which is easier said than done.
Also check out F5 Labs' 2020 Phishing and Fraud Report: https://www.f5.com/labs/articles/threat-intelligence/2020-phishing-and-fraud-report