Three Ways AI Can Hack the U.S. Election
The growing capability of AI-generated content poses three very real threats to modern elections. We explain each and take a glimpse at a possible solution to the growing AIpocalypse.
In 2020, we covered Three Ways to Hack the U.S. Election. That article is every bit as relevant today as it was four years ago. At the time, we focused on the ways in which disinformation could be used to misinform and divide the nation. Since then, the digital landscape has shifted, with generative AI and deepfakes posing even greater threats. In our recent article, Three Ways AI Can Hack the U.S. Election, we explore disinformation and deepfakes, voter suppression tactics, and the role bots play in spreading disinformation.
(This is just a summary - click here to read the full article on f5.com/labs.)
Disinformation and Deepfakes
Election security is all about trust. Disinformation has become a geopolitical weapon, and it is easier than ever to create convincing fake content through generative AI and machine learning tools.
AI tools can easily manipulate aspects of a video, from backgrounds and facial expressions to deepfaked audio. The boundaries between real and fake have blurred, with gen AI tools allowing anyone to create realistic content at virtually no cost.
Voter Suppression
Beyond creating fake content for disinformation, deepfakes can have other nefarious purposes.
For example, in 2024, Steve Kramer, a political consultant, admitted to orchestrating a widespread robocall operation using deepfake technology to mimic President Joe Biden’s voice, which discouraged thousands of New Hampshire voters from participating in the state’s presidential primary.
The call used caller ID spoofing to disguise its origins. Kramer reportedly spent just $500 on the operation, which he claimed generated $5 million worth of media coverage, and he was fined $6 million by the FCC for orchestrating illegal robocalls.
Dissemination and Widening the Divide
Bots and automation play a significant role in spreading disinformation on social media platforms like X/Twitter. They amplify false narratives, manipulate public opinion, and create the illusion of widespread consensus on controversial topics. By sharing misleading content, interacting with genuine users, and boosting the visibility of posts, bots make it difficult for users to distinguish organic engagement from orchestrated campaigns. AI significantly enhances these capabilities: AI-driven bots can maintain fully realized, convincing personas, interact with both real and fake accounts to create an illusion of authenticity, and craft highly realistic posts on a wide range of topics, making them powerful tools for influencing conversations and shaping public opinion.
Future AI
TV news, once considered trustworthy because of its live, real-time broadcasts, is becoming increasingly susceptible to fake news and AI-generated content. Advances in AI could lead to AI-generated anchors delivering and reacting to real-world events in real time, blurring the line between authentic and synthetic information. Emotionally intelligent AI could go further, analyzing emotional cues in real time so that disinformation campaigns can target individuals directly and deepen polarization around already divisive issues.
Combating Fake and AI-generated Content
The Coalition for Content Provenance and Authenticity (C2PA) protocol is a standards-based specification developed by Adobe, Microsoft, Intel, and the BBC to combat disinformation and fabricated media, particularly in the era of AI-generated content.
It attaches verifiable metadata to digital media files, allowing creators to disclose key information about the origin and editing history of an image, video, or document. C2PA uses cryptographic signatures to detect tampering with either the media or its metadata, so viewers can verify provenance information wherever the file travels. This approach is crucial in combating AI-generated fake content such as deepfakes, and it gives publishers and consumers reliable tools to judge the trustworthiness of digital media.
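To make the mechanism concrete, here is a minimal Python sketch of the signed-provenance idea: a metadata record is bound to the exact media bytes via a hash and a digital signature, so changing either the file or the record breaks verification. The manifest fields, key handling, and JSON encoding below are illustrative assumptions, not the real C2PA format, which uses a binary JUMBF container and X.509 certificate chains.

```python
# Illustrative sketch of signed provenance metadata (NOT the actual
# C2PA manifest format). Requires the 'cryptography' package.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_manifest(media_bytes: bytes, creator: str, tool: str) -> dict:
    # Record provenance claims plus a hash that binds them to the media.
    return {
        "claim.creator": creator,
        "claim.tool": tool,
        "asset.sha256": hashlib.sha256(media_bytes).hexdigest(),
    }


def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    # Serialize deterministically so signer and verifier see the same bytes.
    payload = json.dumps(manifest, sort_keys=True).encode()
    return key.sign(payload)


def verify(media_bytes: bytes, manifest: dict, signature: bytes, public_key) -> bool:
    # Editing the media changes its hash; editing the manifest changes
    # the signed payload. Either change causes verification to fail.
    if hashlib.sha256(media_bytes).hexdigest() != manifest["asset.sha256"]:
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# Example: sign an image and detect tampering.
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
manifest = make_manifest(image, creator="Example Newsroom", tool="CameraApp 1.0")
sig = sign_manifest(manifest, key)

print(verify(image, manifest, sig, key.public_key()))         # True
print(verify(image + b"!", manifest, sig, key.public_key()))  # False: media was edited
```

In the real protocol, the signer's certificate chain (rather than a bare public key, as assumed here) lets a viewer check not just that the manifest is intact but also who issued it.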
Conclusion
The threat of disinformation and AI is growing. While C2PA offers a measure of protection, its limitations include a lack of widespread adoption, the need for public education, and potential skepticism and distrust.
Check out the full article, written by David Warburton, Director of F5 Labs, here.