WormGPT, Trojan Horse PoC, and Fileless Python Malware - July 9-16 F5 SIRT This Week in Security

Jordan here as your editor this week. A few threats rose to prominence over the past week: WormGPT, Trojan horses posing as Linux vulnerability proofs of concept, and a Python-based fileless attack designed for cryptocurrency mining. Keeping up to date with new technologies, techniques and information is an important part of our role in the F5 SIRT. The problem with security news is that it's an absolute fire-hose of information, so each week or so we try to distill the things we found interesting and pass them on to you in a curated form.

It's also important for us to keep up to date with the frequently changing behaviour of bad actors. Bad actors are a threat to your business, your reputation, your livelihood. That's why we take the security of your business seriously. When you're under attack, we'll work quickly and effectively to mitigate attacks and vulnerabilities and get you back up and running. So the next time you have a security emergency, please contact the F5 SIRT.

 

WormGPT: Generative AI's Dual Nature Comes to Light

As we stride forward in the age of technology, artificial intelligence (AI) stands alongside other technologies with the dual potential to catalyze positive breakthroughs or cause harmful consequences. A striking example of this duality is WormGPT, built on the foundation of GPT-J, a variant of the GPT architecture.

Contrary to its name, WormGPT isn't a conventional computer worm. Instead, it's an unrestricted generative AI system optimized to deliver results that popular platforms like ChatGPT or Google Bard won't provide. Because WormGPT bears similarities to DarkBERT, I presume it is leveraging training data derived from dark web sources. This enables it to generate convincingly deceptive phishing emails and complex malware code, without any form of filtering or censorship. WormGPT's strength lies in its ability to use AI to produce contextually relevant content, lending a veil of authenticity to its outputs and making them highly credible to an unwary observer.

The emergence of WormGPT raises an essential question: who should have the authority to regulate and censor AI technology? The appeal of unrestricted generative models like WormGPT, and of the community devoted to jailbreaking mainstream generative AI models, implies that some users prefer AI systems that are not subject to controls imposed by large technology corporations. This sentiment becomes increasingly important in light of the concerns voiced over the years regarding the possibility of censorship misuse by major tech companies.

In the past, these tech giants have come under fire for what critics perceive as their undue influence over the dissemination and accessibility of information. There is a growing belief that these corporations wield excessive power in deciding what content is deemed appropriate for the public, which could lead to biases and unjust restrictions.

The conversation around generative AI models is therefore more than a question of technology; it's a debate about freedom of information, ethical boundaries, and the power dynamics between the public and the tech industry. The emergence of WormGPT, with its potential for both harmful and beneficial uses, only adds fuel to this complex, ongoing conversation.

Trojan Horses Masquerading as Linux Vulnerability PoCs

Cybersecurity researchers have recently detected a crafty scheme aimed at the security community, involving the concealment of malicious backdoors within proof-of-concept (PoC) code. The code, which purports to demonstrate CVE-2023-35829, a security flaw in the Linux kernel, instead creates a backdoor that lets cybercriminals infiltrate systems and steal sensitive data. This intrusion strategy underscores the need for diligent PoC verification, secure testing in isolated environments, and the essential role of real-time threat intelligence.

As explained by researchers, the PoC's trap is sprung when the user runs 'make', the tool typically used to compile and build executables from source code. Buried within the Makefile is a snippet that builds and launches the malicious program. The malware uses a commonly seen technique of disguising itself as kworker, a standard system process name, and it establishes persistence on the compromised system by quietly adding the threat actor's SSH key to the victim's authorized_keys file. The concealed backdoor within these PoCs equips cybercriminals with extensive capabilities, including the theft of sensitive data and remote access to compromised systems.
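As a practical precaution, it's worth skimming a PoC's build files for commands that have no business in a simple compile step before ever running 'make'. The Python sketch below illustrates that idea; the pattern list and paths are my own illustrative assumptions, not signatures taken from the actual trojanized PoC.

    #!/usr/bin/env python3
    """Quick pre-flight check of a PoC's build files before running `make`.

    A minimal sketch: the suspicious-pattern list is illustrative only, not a
    signature of the trojanized CVE-2023-35829 PoC described above.
    """
    import re
    import sys
    from pathlib import Path

    # Commands that rarely belong in the Makefile of a small kernel PoC.
    SUSPICIOUS = [
        r"curl\s", r"wget\s",           # fetching extra payloads
        r"base64\s+(-d|--decode)",      # decoding embedded blobs
        r"authorized_keys",             # SSH persistence
        r"chmod\s+\+x",                 # marking dropped files executable
        r"nohup|setsid|&\s*$",          # detaching a background process
    ]

    def scan(path):
        """Return (line number, line) pairs that match a suspicious pattern."""
        hits = []
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if any(re.search(pattern, line) for pattern in SUSPICIOUS):
                hits.append((lineno, line.strip()))
        return hits

    if __name__ == "__main__":
        repo = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
        for makefile in repo.rglob("Makefile"):
            for lineno, line in scan(makefile):
                print(f"{makefile}:{lineno}: {line}")

A crude scan like this proves nothing on its own, of course; the broader advice stands: read the build files yourself, and build and run unfamiliar PoCs only inside an isolated, disposable environment.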

This incident serves as a stark reminder for the cybersecurity community that trust should always be corroborated with thorough verification.
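For readers who want to double-check a lab machine after running an untrusted PoC, the sketch below illustrates two simple audits that follow directly from the behaviour described above: diffing authorized_keys against a known-good copy, and flagging processes that call themselves kworker but don't look like kernel threads (genuine kworker threads have an empty command line). The baseline path and the heuristics are assumptions for illustration, not a complete detection.

    #!/usr/bin/env python3
    """Post-run audit sketch for the persistence tricks described above.

    Assumptions: the baseline file path is hypothetical, and the kworker
    heuristic (kernel threads have an empty /proc/<pid>/cmdline) is a
    simplification, not a complete detection.
    """
    from pathlib import Path

    def _keys(path):
        """Read non-comment entries from an authorized_keys-style file."""
        text = Path(path).expanduser().read_text()
        return {line.strip() for line in text.splitlines()
                if line.strip() and not line.startswith("#")}

    def unexpected_ssh_keys(current="~/.ssh/authorized_keys",
                            baseline="~/.ssh/authorized_keys.baseline"):
        """Return entries in the live file that are missing from the baseline."""
        try:
            return _keys(current) - _keys(baseline)
        except FileNotFoundError:
            return set()

    def fake_kworkers():
        """Yield kworker-named processes that have a command line (kernel threads don't)."""
        for proc in Path("/proc").iterdir():
            if not proc.name.isdigit():
                continue
            try:
                comm = (proc / "comm").read_text().strip()
                cmdline = (proc / "cmdline").read_bytes()
            except OSError:
                continue
            if comm.startswith("kworker") and cmdline:
                yield int(proc.name), cmdline.replace(b"\0", b" ").decode(errors="replace")

    if __name__ == "__main__":
        for key in sorted(unexpected_ssh_keys()):
            print(f"[!] unexpected authorized_keys entry: {key[:60]}")
        for pid, cmd in fake_kworkers():
            print(f"[!] userspace process masquerading as kworker: pid={pid} cmd={cmd}")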

 

Python-Based Fileless Attack for Cryptocurrency Mining

Another elusive fileless attack, this one dubbed PyLoose, was recently discovered. The attack targets cloud workloads with the intention of deploying an XMRig miner. Used for mining Monero, a cryptocurrency famed for its privacy features, XMRig is loaded directly into the system's memory in an attempt to sidestep detection methods.

The PyLoose attack begins with the execution of a Python script that retrieves a compressed, precompiled XMRig miner. Fetching the payload straight into the Python runtime's memory via an HTTPS GET request allows the attack to proceed without ever writing a file to disk, underscoring its fileless nature. PyLoose demonstrates the measures attackers are willing to take to remain hidden.
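Fileless doesn't mean invisible, though. On Linux, a payload loaded straight into memory, for example through a memory file descriptor created with memfd_create, generally leaves a running process whose /proc/<pid>/exe link points at a memfd entry or a deleted file rather than a binary on disk. The sketch below shows that generic check in Python; it's an illustrative heuristic, not a reconstruction of PyLoose's loader or of any particular detection product.

    #!/usr/bin/env python3
    """Flag processes whose executable is not backed by a file on disk.

    Illustrative heuristic only: processes running from a memfd or a deleted
    file are common artifacts of fileless loaders and deserve a closer look.
    """
    import os
    from pathlib import Path

    def memory_backed_processes():
        for proc in Path("/proc").iterdir():
            if not proc.name.isdigit():
                continue
            try:
                exe = os.readlink(proc / "exe")
                comm = (proc / "comm").read_text().strip()
            except OSError:          # kernel threads, vanished PIDs, permissions
                continue
            if "memfd:" in exe or exe.endswith("(deleted)"):
                yield int(proc.name), comm, exe

    if __name__ == "__main__":
        for pid, comm, exe in memory_backed_processes():
            print(f"[!] pid={pid} ({comm}) is running from memory: {exe}")

A periodic check along these lines is a pragmatic complement, not a replacement, for the runtime and memory-scanning tools designed to catch attacks like PyLoose.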

It's worth noting that the attacker's evasion tactics were meticulously crafted, including the use of an open data-sharing service to host the payload and the embedding of the precompiled XMRig miner's configuration directly in the code. This allowed the attack to proceed without conspicuous command-line usage or disk writes, significantly reducing the likelihood of detection. In addition, Monero uses an obfuscated public ledger: anyone can broadcast or send transactions, but outside observers cannot tell the source, amount, or destination. This makes Monero transactions effectively untraceable, a quality highly prized by attackers who don't want mined funds traced back to them.

The PyLoose attack exemplifies the escalating sophistication of cybersecurity threats, leveraging advanced fileless techniques and the privacy-focused Monero cryptocurrency to conduct clandestine operations. It underscores the need for constant vigilance and advanced threat detection in today's digital landscape.

 

And there you have it, folks. Whether it's a worm, trojan horse, or python, the world of cybersecurity is truly a wild safari! So grab your binoculars and keep your eyes peeled, because in this jungle of technology, we're all just trying to avoid becoming the prey. Until next time, stay safe out there!

  • Jordan_Zebor - interesting about WormGPT. Thanks for this. I just came across a conversation, dropped today, with Shuman G. on Preet Bharara's podcast (Ian Bremmer guest-hosting), touching on these things you mention:

    "...the conversation around generative AI models is more than a question of technology; it's a debate about freedom of information, ethical boundaries, and the power dynamics between the public and the tech industry."

    https://cafe.com/stay-tuned/humanity-security-ai-oh-my-with-ian-bremmer-shuman-ghosemajumder/

    Thanks for keeping us all up to speed from a security perspective.

  • Thanks for the feedback, I'll look at the article. Working with Shuman was an honor and I am grateful for the opportunity to have experienced his exceptional leadership firsthand.