A long week of breaches - Jan 7th - 13th, 2023, F5 SIRT - This Week in Security

This Week in Security
Jan 7th - 13th, 2023
A long week of breaches, malware and exploited vulnerabilities


Aaron here as your editor this week, for my first TWIS of 2023. As MZ said last week, I hope everyone had a great holiday season and new year, and fingers crossed that 2023 will be less exciting than 2022 - though it has started out anything but, with the LastPass breach MZ discussed, a slew of ransomware attacks, and no let-up in the stream of news to keep up with! So let me guide you through a few of the interesting, notable, or important articles I came across in the last week.

The CircleCI breach


CircleCI is a Continuous Integration/Continuous Delivery (CI/CD) tool which allows teams to build automated pipelines from testing to deployment; in other words, you can automate the task of testing whatever new code you have checked in to your code repository before it gets pushed out to a target system. On January 4th, 2023, they alerted their customers to a security incident which had allowed malicious actors to steal - and then decrypt - the contents of various datastores containing customer data and, most notably, potentially containing customer secrets (like AWS tokens, GitHub OAuth tokens and so on).

They have since published an excellent, and very transparent, blog detailing everything that happened leading up to the compromise, what data was impacted, and the steps they have taken since to avoid anything like this happening again. I'll just pull out the main points here:

  • December 16th, a third party used malware on an employee's laptop to steal a valid 2FA-backed SSO session
  • December 19th, reconnaissance activity began
  • December 22nd, exfiltration of data occurred. Although the data was encrypted at rest, the attacker was also able to steal encryption keys from a running process to facilitate decryption
  • December 29th, one of their customers alerted them to suspicious activity tracing back to secrets stored in CircleCI
  • December 30th, CircleCI learned that the secrets had been compromised by a third party
  • December 31st, they began rotating all similar secrets
  • January 4th, their internal investigation was complete and public disclosure made

There are many ways the malware could originally have been delivered, and their blog doesn't go into detail here, but the most likely vectors are, as always, phishing attacks, the use of untrusted WiFi, or even an unattended laptop in a public location providing physical access. The best defence here is user education (this is why F5 carries out regular mandatory training, across the entire organisation, on these topics) backed up by robust endpoint detection and logging. Here CircleCI were unfortunate in that their device management and antivirus solutions did not alert on the presence of the malware.

It's also unfortunate that the attacker was able to exfiltrate what was (presumably) a reasonably large dataset without triggering any kind of alerting. I don't think we can be too hard on them for the data being decrypted once it was exfiltrated: they did everything correctly by ensuring the data was encrypted at rest, and preventing users who have legitimate access to the data from also having access to the decryption keys in a running process is a tricky problem to solve. Detecting the exfiltration, though, could have reduced the window in which the data was valuable by letting them begin rotating secrets before the attacker attempted to use them.
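To make the detection point concrete, here's a minimal sketch of volume-based egress alerting. The `EgressMonitor` class and its parameters are illustrative, not anything CircleCI describes; a real deployment would feed something like this from flow logs or proxy logs and use a far more robust baseline than a rolling mean.

```python
from collections import deque
from statistics import mean, stdev

class EgressMonitor:
    """Toy anomaly detector for outbound transfer volumes.

    Keeps a rolling baseline of bytes-out per interval and flags any
    interval that lands far above it. Purely a sketch of the idea:
    an attacker pulling whole datastores out of an environment should
    stand out against normal egress volume.
    """

    def __init__(self, window: int = 24, threshold_sigma: float = 3.0):
        self.history = deque(maxlen=window)   # recent per-interval byte counts
        self.threshold_sigma = threshold_sigma

    def observe(self, bytes_out: int) -> bool:
        """Record one interval's egress volume; return True if anomalous."""
        anomalous = False
        if len(self.history) >= 2:
            mu = mean(self.history)
            sigma = stdev(self.history) or 1.0  # avoid zero-width baseline
            anomalous = bytes_out > mu + self.threshold_sigma * sigma
        self.history.append(bytes_out)
        return anomalous
```

Even something this crude, wired to an alert, turns "we learned from a customer on December 29th" into "we paged someone on December 22nd" - which is exactly the shrinking of the rotation window discussed above.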

CircleCI have taken all the right steps post-compromise, in my opinion: they have tweaked their MDM and antivirus detections, implemented further least-privilege controls by restricting who has access to what data and requiring additional authentication even for those users (protecting against re-use of stolen sessions), and implemented additional monitoring.

Again, for me that last step is the most important takeaway here: visibility is everything. Of course, with visibility comes a flood of logs and alerts, and that's where hard decisions need to be made about which assets you prioritise monitoring and which alerts matter most - both of which will be highly environment-specific.

Be careful where you store your secrets, I guess?
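On that note, the least a pipeline consumer can do is keep secrets out of source control and fail loudly when one is missing, so that rotation only means updating the secret store. A minimal sketch (the `get_secret` helper is my own, not a CircleCI API):

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret injected at runtime (CI environment variable,
    vault agent, etc.) instead of hardcoding it in the repository.

    Raising on a missing secret fails the build loudly rather than
    silently falling back to a stale or hardcoded value.
    """
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError(f"required secret {name!r} is not set") from None
```

Centralising access like this also gives you one choke point to audit - and one place to re-point when, as with this incident, every stored secret suddenly has to be rotated.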


A right Royal Ransom

We talk about ransomware and wipers a lot here! Koichi wrote about Haneda hospital, Dharminder wrote about the Fantasy data wiper, and now I'm writing about ... the UK postal service?

On January 11th Royal Mail, the UK postal carrier, alerted news outlets and the public that they had suffered a "cyber incident" and asked the public to stop sending parcels and letters overseas. Evidently domestic deliveries and imports were unaffected, but they had lost the ability to export outside of the UK. This was obviously a big deal: the National Cyber Security Centre (NCSC) and the National Crime Agency (our FBI) announced they were working with Royal Mail to understand the scope and source of the attack. It quickly became public knowledge that Royal Mail had been hit by LockBit when the ransom notes (which come flying out of any accessible printer on your network) were either given or leaked to the press.

Royal Mail weren't the only UK targets this week - Vice Society also released data from 14 more schools (including the personal data of both students and staff) in the latest of their string of attacks against educational establishments worldwide. This week the impacted institutions include Carmel College, St Helens; Durham Johnston Comprehensive School; Frances King School of English, London/Dublin; Gateway College, Hamilton, Leicester; Holy Family RC + CE College, Heywood; Lampton School, Hounslow, London; Mossbourne Federation, London; Pilton Community College, Barnstaple; Samuel Ryder Academy, St Albans; School of Oriental and African Studies, London; St Paul’s Catholic College, Sunbury-on-Thames; Test Valley School, Stockbridge; The De Montfort School, Evesham; and Pates Grammar School, Gloucestershire. They have previously leaked data from the University of Duisburg-Essen in Germany, Cincinnati State College in the USA and Australia's Fire Rescue Victoria, so they are clearly not overly picky when it comes to their targets. They are slightly unusual in being the highest-profile "if you don't pay we will leak your data" group, which is probably why they were the subject of a 2022 joint Cybersecurity Advisory from the FBI, CISA and MS-ISAC (https://www.cisa.gov/uscert/sites/default/files/documents/aa22-249a-stopransomware-vice-society.pdf).

What I'm really focussing on here is the human aspect of all of these attacks. In the case of the Royal Mail attack, the inability to export goods outside of the UK has caused significant financial harm to small businesses who rely on day-to-day cashflow and the ability to deliver goods internationally (https://www.bbc.co.uk/news/business-64291272); attacks on hospitals have the potential to cause real, physical harm to people experiencing the most difficult times in their lives; and leaking data from educational establishments could easily cause emotional, physical or financial harm to minors.

As technology professionals we have a duty to do better when it comes to security, and I'll come right back to a point I made in the previous section: education. While some of these attacks will undoubtedly have happened due to a failure to patch known vulnerabilities (CISA recently ordered US government agencies to ensure OWASSRF is patched before the end of January - https://www.bleepingcomputer.com/news/security/cisa-orders-agencies-to-patch-exchange-bug-abused-by-ransomware-gang/), just as commonly we see malware delivered via the good old-fashioned phishing attack, and right now education is our best defence against phishing. Undoubtedly we need to look for better ways to prevent phishing via technology, better segment systems to prevent lateral movement, and further implement zero trust, but in my opinion education continues to be key.

And of course, don't leave old vulnerabilities unpatched - or even new vulnerabilities, especially if they lead to remote code execution. Which leads nicely into our next segment...


THX 1138 .. no, xdr33

On January 9th, 360Netlab published a Chinese-language blog which referenced a vulnerability in "an F5" being leveraged to deliver a novel piece of malware, followed quickly by an English-language blog on January 10th which hit news outlets on January 16th. The focus of the blog was actually the delivered malware, which is based on the leaked CIA Hive kit and dubbed 'xdr33' after the CN field of the certificate embedded in the binary. It's a worthwhile read if you'd like to learn more about what this piece of malware can do, which C2 servers it communicates with (and how), and how it could potentially pivot across a network once installed.

Of course, the fact that F5 was mentioned alongside an unnamed vulnerability caused some concern among customers! Fortunately one of the researchers confirmed on Twitter that the vulnerability in question was CVE-2022-1388 from May last year, for which fixes have been available for quite some time (fixes were introduced in versions from 13.1.5 through 17.0.0). I should also note that the vulnerability impacts the control plane (iControl REST), so as long as that is not exposed to untrusted hosts the risk is minimal regardless.
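If you're unsure whether your own control plane is exposed, a quick sanity check is simply: can an untrusted network segment open a TCP connection to the management address at all? A minimal sketch (the helper name is mine, and this tests reachability only - it says nothing about whether the endpoint is actually vulnerable):

```python
import socket

def control_plane_reachable(host: str, port: int = 443,
                            timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the given host/port succeeds.

    Run this *from an untrusted network segment* against your management
    address: a True result means the control plane is reachable from
    somewhere it probably shouldn't be.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False result here is what you want from anywhere outside your management network; anything else is worth a conversation with whoever owns the firewall rules.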

This behaviour - of exploiting known vulnerabilities to self-propagate - tracks with existing botnets like Mirai and we saw it ourselves previously with CVE-2020-5902 and, as Tikka noted, CVE-2022-1388 was listed as one of the PRC's top exploited vulnerabilities of 2022 as assessed by the NSA, CISA and FBI.

Still, I hope this clarification puts everyone's mind at rest - I still recommend reading the blog post for some interesting analysis of the delivered malware, though!

THN coverage - https://thehackernews.com/2023/01/new-backdoor-created-using-leaked-cias.html
Updated Jan 20, 2023
Version 2.0
