on 04-May-2023 13:30
Arvin is your editor for F5 SIRT's This Week in Security, covering the 22nd to 28th of April. Here's a summary of the security news I gathered for this edition.
BYOVD - a bring-your-own-vulnerable-driver (BYOVD) attack abuses a legitimate driver to disable endpoint detection and response (EDR) software. AuKill, a detection evasion utility, dupes a target system into trusting an outdated Microsoft Process Explorer driver and then disables EDR processes. AuKill brings the vulnerable driver with it to exploit as it infiltrates victims' networks. Microsoft maintains a list of vulnerable and banned drivers that can be enforced with a Windows Defender Application Control (WDAC) policy to counter this attack.
Apache Superset, a modern data exploration and visualization platform, shipped with an insecure default configuration that could be exploited to log in and take over the data visualization application, steal data, and execute malicious code. The flaw is documented as CVE-2023-27524. A new update, titled "fix: refuse to start with default secret on non debug envs", will prevent Superset from starting with the default key; researchers highlighted that, a year after reporting the issue to the Apache security team, many users had still not addressed it, hence the harsher measure.
Some good news: a memory-safe language, Rust, will be at the core of the Windows OS. This hopefully means fewer memory-safety bugs before the code lands in the hands of users; such bugs have accounted for about 70 percent of the CVE-listed security vulnerabilities Microsoft has patched since 2006.
In crypto crime news, and a win for defenders, the US DoJ and the Treasury Department are pursuing three men accused of wide-ranging and complex conspiracies: providing support to the notorious Lazarus Group and laundering stolen and illicit cryptocurrency that the North Korean regime used to finance its massive weapons programs. The DPRK is tied to the Lazarus Group, a North Korean state-sponsored cyber threat group.
Google obtained a court order to shut down domains used to distribute CryptBot after suing the distributors of the info-stealing malware. Litigation was filed against several of CryptBot's major distributors, who are believed to be based in Pakistan and to operate a worldwide criminal enterprise.
AI-powered attacks - generative AI, the result of decades of research into neural networks and Generative Adversarial Networks (GANs), is widely seen as the next candidate on this list. The idea that AI is a big deal is nothing new, and the generative AI that has made headlines is only one subsector of AI development. Chatbots such as OpenAI's ChatGPT and Google's Bard have fired a jolt of destabilizing energy into computing as a whole, and into cybersecurity as a discipline. We know it's coming, and it has probably already arrived: complex automated attacks can cost defenders dearly, keeping them (and us) busy defending.
At attack surfaces where F5 technologies are present, particularly in protecting web applications, F5 already uses AI and ML in its defensive capabilities. F5 ASM/Advanced WAF has used ML in its learning and policy-building features since the early 14.1 releases; F5 Distributed Cloud API Security applies it to automatic API discovery, threat detection, and schema enforcement; and F5 Distributed Cloud WAAP - the web application and API WAF - uses AI and ML for malicious-user detection and mitigation, creating a per-user threat score based on behavioral analysis to determine intent.
The RSA Conference 2023 in the Bay Area has just concluded, and it featured many great talks and learnings. Speakers from various vendors shared views on the current security landscape, its challenges, and the future, particularly the automated/AI class of attacks. Defenders have tools and secure practices - SOCs, DevSecOps, and well-thought-out incident response - to hopefully prevent cybersecurity incidents and impact to business. On a lighter note, F5 DevCentral folks were also at RSA Conference 2023 and have a nice wrap-up video.
I hope this summary is informative. Thanks for reading and see you on the next one.
Ransomware spreaders have built a handy tool that abuses an out-of-date Microsoft Windows driver to disable security defenses before dropping malware into the targeted systems. This detection evasion utility, which Sophos X-Ops researchers are calling AuKill, is the latest example in a growing trend where miscreants either abuse a legitimate driver to disable, silence or otherwise get past endpoint detection and response (EDR) software on the systems – the so-called bring-your-own-vulnerable-driver (BYOVD) attack – or work to get a malicious driver that does the same digitally signed by a trusted entity and injected onto a victim's computer.

Either way, the victim's PC is duped into trusting a privileged driver, granting an intruder low-level rights and access, which gives them the ability to sidestep any protections and deploy their malware. And to be clear, AuKill takes the BYOVD approach: it brings onto the PC a vulnerable Microsoft driver to exploit.

"Last year, the security community reported about multiple incidents where drivers have been weaponized for malicious purposes," Andreas Klopsch, a threat researcher at Sophos, wrote in a technical report this month. "The discovery of such a tool confirms our assumption that adversaries continue to weaponize drivers, and we expect even more development in this area the upcoming months."

"The AuKill tool requires administrative privileges to work, but it cannot give the attacker those privileges," writes Klopsch at Sophos. "The threat actors using AuKill took advantage of existing privileges during the attacks, when they gained them through other means."

To defend against this, ensure your environment can detect and block bad and banned drivers from being installed and/or run. Microsoft has some notes about that here.
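The defensive idea is the one Sophos and Microsoft point to: keep a blocklist of known-vulnerable drivers and detect or block anything that matches before it loads. As a minimal sketch of the matching logic, assuming a hypothetical flat set of SHA-256 digests (Microsoft's actual recommended driver block rules ship as a WDAC policy, not a plain hash list):

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist entries. The single digest below is the SHA-256 of
# an empty file, used purely as an illustrative placeholder.
BLOCKED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def find_blocked_drivers(driver_dir):
    """Hash every .sys file in driver_dir and report any file whose
    SHA-256 digest appears on the blocklist."""
    hits = []
    for path in sorted(Path(driver_dir).glob("*.sys")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in BLOCKED_SHA256:
            hits.append(path.name)
    return hits
```

In practice you would enforce this at load time with a WDAC policy rather than an after-the-fact scan; the scan only illustrates how hash-based blocklisting works.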
Apache Superset until earlier this year shipped with an insecure default configuration that miscreants could exploit to log in and take over the data visualization application, steal data, and execute malicious code. The open source application, based on Python's Flask framework, defaulted to a publicly known secret key: SECRET_KEY = '\2\1thisismyscretkey\1\2\e\y\y\h'
According to Naveen Sunkavally, the Horizon3.ai researcher who reported the flaw, about two-thirds of those using the software failed to generate a new key when setting up Superset: as of October 11, 2021, the application had almost 3,000 instances exposed to the internet, about 2,000 of which relied on the default secret key. The Apache security team responded the following day and by January 11, 2022, made some changes, which established a new default secret key: "CHANGE_ME_TO_A_COMPLEX_RANDOM_SECRET"
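Why a known SECRET_KEY is a full takeover: Flask-style applications sign the session cookie with the key, and that signature is the only thing stopping a client from writing its own session. The sketch below is a simplified stand-in, not Flask's exact scheme (Flask uses itsdangerous with key derivation and optional compression), and the session claim names are illustrative:

```python
import base64
import hashlib
import hmac
import json

# The publicly known default key shipped by vulnerable Superset releases,
# written with doubled backslashes for the literal \e\y\y\h suffix.
DEFAULT_SECRET_KEY = "\2\1thisismyscretkey\1\2\\e\\y\\y\\h"

def sign(payload, key):
    """Serialize the payload and append an HMAC keyed by the secret —
    a simplified model of a signed session cookie."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    tag = hmac.new(key.encode(), body.encode(), hashlib.sha256).hexdigest()
    return body + "." + tag

def verify(cookie, key):
    """Return the payload if the signature checks out, else None."""
    body, _, tag = cookie.rpartition(".")
    expected = hmac.new(key.encode(), body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body.encode()))

# Anyone who knows the default key can mint a cookie the server will accept.
forged = sign({"user_id": 1, "_roles": ["Admin"]}, DEFAULT_SECRET_KEY)
```

A server configured with a fresh random key rejects the same cookie, which is exactly why generating a new SECRET_KEY at install time matters.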
Microsoft is rewriting core Windows libraries in the Rust programming language, and the more memory-safe code is already reaching developers. David "dwizzle" Weston, director of OS security for Windows, announced the arrival of Rust in the operating system's kernel at BlueHat IL 2023 in Tel Aviv, Israel, last month. "You will actually have Windows booting with Rust in the kernel in probably the next several weeks or months, which is really cool," he said. "The basic goal here was to convert some of these internal C++ data types into their Rust equivalents." Microsoft showed interest in Rust several years ago as a way to catch and squash memory safety bugs before the code lands in the hands of users; these kinds of bugs were at the heart of about 70 percent of the CVE-listed security vulnerabilities patched by the Windows maker in its own products since 2006. The Rust toolchain strives to prevent code from being built and shipped that is exploitable, which in an ideal world reduces opportunities for miscreants to attack weaknesses in software. Simply put, Rust is focused on memory safety and similar protections, which cuts down on the number of bad bugs in the resulting code. Rivals like Google have already publicly declared their affinity for Rust.
Rust "Hello World" - https://doc.rust-lang.org/rust-by-example/hello.html
If the DPRK is named, you know it somehow involves Lazarus Group
The US government is aggressively pursuing three men accused of wide-ranging and complex conspiracies of laundering stolen and illicit cryptocurrency that the North Korean regime used to finance its massive weapons programs. The Department of Justice (DoJ) this month indicted North Korean national Sim Hyon Sop, Wu HuiHui of China, and Cheng Hung Man, a Hong Kong British national, for their roles in two money laundering conspiracies, both aimed at channeling funds into North Korea's coffers. The Democratic People's Republic of Korea (DPRK) is known for running complex operations designed to steal or generate crypto – often through state-sponsored groups – that is then laundered and sent to the regime to fund its programs around weapons of mass destruction (WMD) and ballistic missiles, which the US and other countries deem national security threats. North Korea has been operating such increasingly creative cyber schemes since at least 2017. "The charges… highlight the ways in which North Korean operatives have innovated their approach to evading sanctions by exploiting the technological features of virtual assets to facilitate payments and profits, and targeting virtual currency companies for theft," Assistant Attorney General Kenneth A Polite Jr of the DoJ's Criminal Division said in a statement.
In one of the conspiracies, Wu and Cheng are accused of providing support to the notorious Lazarus Group, a group linked to numerous attacks around the world for more than a decade, targeting a variety of industries from financing and manufacturing to media, entertainment, and shipping.
Google said it obtained a court order to shut down domains used to distribute CryptBot after suing the distributors of the info-stealing malware. According to the Chocolate Factory's estimates, the software nasty infected about 670,000 Windows computers in the past year, and specifically targeted Chrome users to pilfer login details, browser cookies, cryptocurrencies, and other sensitive materials from their PCs. A New York federal judge this week unsealed a lawsuit [PDF] that Google filed against the malware's slingers; the US giant accused the distributors of committing computer fraud and abuse, and trademark infringement by using Google's marks in their scam. The court granted Google a temporary restraining order, which allowed it to shut down the bot operators' internet infrastructure. Usually in this sort of case, Google gets to take its restraining order to registrars and registries that are under the court's jurisdiction, and get specific domains used to spread the malware disabled. Judging from the court order [PDF] Google can not only have domains taken down in that fashion, it can show its restraining order to network providers and hosters to get connections to the servers used by CryptBot blocked; get any of the hardware or virtual machines involved switched off and services suspended; materials that would lead to the identification of CryptBot's operators preserved and handed over; ensure steps are taken to keep this infrastructure offline; and much more. All in all, the order allows Google to wipe from the internet the systems and websites used by CryptBot's operators to spread their software nasty. "Our litigation was filed against several of CryptBot's major distributors who we believe are based in Pakistan and operate a worldwide criminal enterprise," said Google's Head of Litigation Advance Mike Trinh and its Threat Analysis Group's Pierre-Marc Bureau. 
The restraining order will "bolster our ongoing technical disruption efforts against the distributors and their infrastructure," they added. "This will slow new infections from occurring and decelerate the growth of CryptBot." The distributors targeted in the lawsuit – said to be Zubair Saeed, Raheel Arshad, and Mohammad Rasheed Siddiqui of Pakistan – operated websites that lured unwitting users into downloading malicious versions of Google Earth Pro and Google Chrome, we're told. Those marks thought they were getting the real deal, but instead they were fetching versions stuffed with the info-stealer malware. Once they installed the software on their computers, they infected their machines with CryptBot. "Recent CryptBot versions have been designed to specifically target users of Google Chrome, which is where Google's CyberCrimes Investigations Group (CCIG) and Threat Analysis Group (TAG) teams worked to identify the distributors, investigate and take action," Trinh and Bureau said.
ChatGPT is just the beginning: CISOs need to prepare for the next wave of AI-powered attacks
Generative AI, the result of decades of research into neural networks and Generative Adversarial Networks (GANs), is widely seen as the next candidate on this list. The idea that AI is a big deal is nothing new, and the generative AI that has made headlines is only one subsector of AI development. But there's no doubt that its very public arrival through chatbots such as OpenAI's ChatGPT and Google's Bard has fired a jolt of destabilizing energy into computing as a whole, and cybersecurity as a discipline. With microprocessors, you can build small computers. With the PC you can put an affordable one on everyone's desk. With the web you can connect the PC to a global information network. With the smartphone, that network can go anywhere and everywhere. What, then, will be the role for AI? The high-level answer is that it will allow automation and advanced decision making without the need to consult human beings. Humans make mistakes that machines don't. They also do things slowly and expensively. At a stroke, with generative AI many of these issues appear to vanish. Data can be processed in seconds as new insights multiply and automated decision-making accelerates. There is, of course, also a darker side to generative AI, which researchers have been busily investigating since ChatGPT's public launch on the GPT-3 large language model (LLM) last November. This has generated a surprising amount of doom-saying publicity for chatbots, starting with their effect on the building block of cyber-criminality, phishing emails. This author demonstrated this by feeding ChatGPT real phishing 'security alert' emails to see how it might improve them. Not only did it correct grammatical mistakes, it added additional sections that made them sound even more authoritative.
In language at least, these were impossible to distinguish from a well-composed, genuine support email written by a native speaker. Beyond simply improving the language of phishing, the obvious next step would be to make each attack more targeted. The threat here is that AI will be used to scrape data on specific people as a way of impersonating them. AI will also make it much easier for attackers to analyze the large volumes of stolen data, sifting it for sensitive topics at a speed that would be impossible today. "Learn from the environment on a continuous basis," one expert advises. "Have machine learning that knows about the entities it is protecting and not simply the outside world."
F5 Safeguards Digital Services with New AI-Powered App and API Security Capabilities
How F5 Engineers are using AI to Optimize Software
The future of DevSecOps
The person who coined the term "DevSecOps," Shannon Lietz, former VP of Adobe Security, delivered the day's keynote. In her session: "DevSecOps… The Train has Left the Station!" she laid out her vision for how we can get to a better, more secure future in DevOps by staying focused on three overarching topics:
A simple, clear response plan for non-security folks
In her session "Incident Response for Developers," the one and only Tanya Janca, author and founder of We Hack Purple, shared with us a training course we can use with our own teams. Along the way, she told a lot of amusing anecdotes gained from her years of security leadership.
She said one of our most important jobs is helping the rest of the team understand their role during any security incident. What we tell them can boil down to a fairly short list:
1. "Tell the security team if you see something." It is important to let them know you will never be mad at a false alarm. It is always better to tell security than to act on your own.
2. "Don't leave the premises without telling security." Developers are used to going home when the day is done, and they arrive at a logical stopping point. You must explain it is critical for them to stay around until the security team clears everyone to depart.
3. "This incident is top priority. Treat it like an emergency." This is not just a high priority; this is a fire. Do not hide things in order to just keep working on that Jira ticket.
4. "Follow 'need to know' rules about security information." Do not spread what you 'think' is correct. When in doubt, just remember the first item on this list.
5. "Don't try to manage it yourself and try to be a hero." Unfortunately, acting independently and without the right training in some security situations can mean contaminating evidence or breaking the chain of custody, which helps bad actors go free even if caught.
Bryan Palma, CEO of Trellix, foresees a future where we respond to the growing threats more aggressively and with a different approach than we have been taking, which has looked a lot like throwing more security personnel at every security issue. In his talk "SIEM There, Done That: Rising Up in the SecOps Revolution," Bryan said he visited six different security operations centers (SOCs) and was shocked by the state of things. The rapid expansion of threats and the variety of attacks have meant longer hours and teams struggling to stay motivated.
He then laid out a simple 3-point plan to address the state of things.
He said tomorrow's SOC:
Fights back – You cannot win the game by only playing defense. We must be able to respond so rapidly that the attacker is knocked off balance. Each round they have to rethink their approach is a round they are not attacking, making it a round you win.
Games the system – There are currently more than 3.4 million more openings for security professionals than there are qualified people to fill them. Meanwhile, estimates put the number of gamers worldwide at over 3 billion. If we could harness even 1% of that, we could easily fill this skills gap. It is up to us to rethink what training and day-to-day operations look like.
Runs on robots – Nearly 1/3 of CISOs surveyed want more automation in their security operations. Bryan believes we need to find ways to move humans away from the front lines of response and into supervisory roles overseeing the robots that are engaging in ever more common machine-on-machine warfare.
The state of CVEs
In their highly informative talk "The Evolution of CVEs, Vulnerability Management, and Hybrid Architectures," Dr. Benjamin Edwards of the Cyentia Institute, and Sander Vinberg, Threat Research Evangelist at F5 Networks, laid out the history of CVEs and the overall trends they are seeing from their research.
Back in 1999, there were just 321 vulnerabilities identified on the first-ever list of CVEs, Common Vulnerabilities and Exposures. Currently, there are between 500 and 1200 new CVEs each week, with over 1000 per week trending to be the new norm by the end of 2023. The high number of CVEs alone does not necessarily mean we are becoming less secure. Instead, the data points to more efficient reporting with better-defined and more tightly scoped vulnerabilities.
The rate of new CVEs has skyrocketed: roughly 300 days passed before a new CVE followed the launch of this classification system, whereas today there are effectively zero days between new entries. They said their research revealed this is in part due to the explosion of vendors in the marketplace. 59% of all CVEs ever reported are tied to a single vendor each. By comparison, Microsoft has over 10,000 associated CVEs, Google accounts for over 9,100, and Fedora is tied to just over 4,200 CVEs. Roughly 74% of CVEs affect only one product, and 49% of them affect only one version of that product.
While the number of CVEs continues to grow overall, the severity of reported vulnerabilities remains fairly constant. They warned that getting too fixated on the volume of reports can be counterproductive. Tracking CVEs will continue to be an important part of everyone's overall security posture, even if it tends to be a bit messy. They stressed it is far better than the alternative of no common framework where it is every product and security team for themselves. Again, they hit on the underlying theme that we are stronger together.
New and evolving threats
RSAC brings together thought leaders to share their opinions on trends they are seeing in their research. The panel discussion "The Five Most Dangerous New Attack Techniques" brought together 5 such influential minds to share what they see on the horizon. The panel was led by Ed Skoudis, President of SANS, and featured Heather Mahalik, Senior Director of Digital Intelligence at Cellebrite, Katie Nickels, Director of Intelligence at Red Canary, and Fellows from the SANS Institute Stephen Sims and Johannes Ullrich.
Malvertizing and copycat sites
Starting things off, Katie's research showed that defenders are getting better at building fences, but adversaries are getting better at going over and around our barriers. She pointed to the disturbing rise in SEO attacks, where attackers leverage Google ads to trick victims into directly downloading malware like Gootloader. Katie noted this type of attack, referred to as "malvertizing," was added to the MITRE ATT&CK framework during RSAC.
Devs at risk
Johannes is most concerned with the threats developers face, specifically malware loaded via typosquatting attacks. To make things worse, many tools warning of dangers often get ignored or muted, thanks to the high false-positive rates so many devs have experienced.
Blocking developers' tools like GitHub's Copilot or 7zip might seem like a secure approach, but these kinds of efforts normally backfire. If a developer wants a tool, they will find a way to get it. What we should be doing is educating teams about the potential risks, while at the same time giving them safe paths to get what they want.
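One cheap, automatable guardrail follows directly from the typosquatting threat Johannes described: screen new dependency names for being one edit away from a well-known package. A minimal sketch (the popular-package list is a tiny illustrative sample, and real registries and scanners use many more signals than edit distance):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative sample only; a real check would pull registry download stats.
POPULAR_PACKAGES = {"requests", "numpy", "pandas", "django", "cryptography"}

def possible_typosquat(name, popular=POPULAR_PACKAGES, max_dist=1):
    """Flag names within max_dist edits of a popular package,
    excluding exact matches (which are the legitimate packages)."""
    return any(0 < edit_distance(name, p) <= max_dist for p in popular)
```

Wired into a CI dependency-review step, a check like this gives developers one of those "safe paths" rather than another mutable warning.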
AI written malware
Stephen Sims said his research had taken him down some interesting paths with ChatGPT. While the AI program will refuse to write malware if directly asked, if you ask enough times and in indirect ways, he found you can manipulate it into writing some pretty sophisticated malware. Combine this with a determined attacker who is always on the alert for new Zero Days, and he worries we are about to see a whole new class of AI-assisted ransomware and malware attacks. Beyond awareness of zero days and keeping patched as soon as possible, he is still trying to figure out what else can be done about this threat.
Heather rounded out the panel by sharing a story about how she leveraged ChatGPT to try to get her young son to reveal his address over chat. He was savvy enough to know something was wrong and refused to fall for any lure to disclose his location. While she is proud of her son, the exercise also showed her how sophisticated ChatGPT has become in writing convincing, compelling language. Her fear is not for those who are growing up with this tech but for the vast majority of adults who do not fully realize what ChatGPT, and AI in general, is capable of.
A CTO’s Reflection of the 2023 RSA Conference
We must change the game
Instead of attempting to scan every part of an exponentially expanding surface, the only tenable approach is to make design choices that completely eliminate large portions of our vulnerability surface. We have to make entire classes of attacks impossible.
We must build software in ways that drastically reduce the size of potential targets and limit the blast radius. Our software must become private and secure by design.
In the past, this was challenging and costly. The following tools are changing the game:
Strongly typed languages like Rust and Typescript turn invariants into compile-time errors. This reduces the set of possible mistakes that can be shipped to production by making them easier to catch at build time.
Memory-safe languages eliminate the possibility of buffer overflows, use-after-free, and other memory safety errors, a class of flaws known to cause 60-70% of high-severity vulnerabilities in large C or C++ codebases. Rust provides this safety without the runtime performance costs of garbage collection.
Supply chain security practices described in emerging standards like SLSA help us build controls that guarantee artifact integrity within our dependency trees. This diminishes the possibility of malicious libraries, packages, and container images exploiting developer workstations, build pipelines, and runtime environments.
Cryptographic keys, stored in secure hardware, combined with passwordless and tokenless approaches eliminate the possibility of attacks using stolen passwords and access tokens.
Mutual authentication and granular authorization, at the application level, using tools like Ockam, enables zero trust in operating networks, VPNs, and VPCs. This removes other applications within the same network from an application’s vulnerability surface.
Application-layer, end-to-end encryption of all data, using Ockam Secure Channels, eliminates third-party services from our vulnerability surface. End-to-end guarantees of data authenticity, integrity, and confidentiality mean that any mistake or misconfiguration within a broker, load balancer, or gateway cannot compromise our application's data.
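The end-to-end guarantee can be illustrated with nothing more than a message authentication code shared by the two endpoints. This is a deliberately tiny stand-in for what a secure channel provides (it shows authenticity and integrity only, not the encryption or key agreement of a real channel like Ockam's): an intermediary can drop or delay a sealed message, but any modification is detected by the receiver.

```python
import hashlib
import hmac

# Illustrative shared key; real systems negotiate keys via a handshake.
SHARED_KEY = b"endpoint-to-endpoint key"

def seal(message: bytes, key: bytes) -> bytes:
    """Append an HMAC tag so the receiving endpoint can verify
    that the message was not altered in transit."""
    tag = hmac.new(key, message, hashlib.sha256).hexdigest().encode()
    return message + b"." + tag

def open_sealed(wire: bytes, key: bytes):
    """Return the message if the tag verifies, else None."""
    message, _, tag = wire.rpartition(b".")
    expected = hmac.new(key, message, hashlib.sha256).hexdigest().encode()
    return message if hmac.compare_digest(tag, expected) else None

wire = seal(b"transfer 10 credits", SHARED_KEY)
# A misbehaving broker edits the payload in transit:
tampered = wire.replace(b"10 credits", b"99 credits")
```

Because only the endpoints hold the key, nothing in the middle needs to be trusted with the integrity of the data, which is the design point the paragraph above makes.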
All these approaches shift security left and allow an application’s development team to be in control of the security and privacy properties of their application. This team no longer has to cross their fingers and hope a third-party service won’t be compromised; they can simply end-to-end encrypt data as it passes through that service.
Such design decisions turn security and privacy into problems that can be methodically solved instead of endlessly rolling a big boulder up a steep hill.
Folks from F5 DevCentral were at RSA Conference 2023 and have a short wrap-up video.