The Rise of AI in Election Misinformation
In June, during a contentious Republican gubernatorial primary in Utah, a misleading video circulated on social media that appeared to show Utah Gov. Spencer Cox confessing to fraudulent ballot signature collection. The governor never made such a statement, and courts upheld his election victory.
This incident highlights a growing trend of AI-generated content in elections, much of which is false or designed to provoke. Experts warn that the advent of deepfakes and other AI technologies has made it easier for anyone to create convincing misinformation.
How can AI be used to spread misinformation?
While some misinformation is deliberate, AI can also produce inaccuracies by accident: chatbots often rely on outdated or incorrect data, which can lead to misleading outputs delivered with unwarranted confidence.
In May, OpenAI announced its commitment to enhancing transparency regarding its AI tools during the election year, endorsing the bipartisan Protect Elections from Deceptive AI Act.
Recent incidents, like Elon Musk’s AI assistant Grok mistakenly claiming Vice President Kamala Harris was ineligible for the ballot in several states, underscore the risks of poorly regulated AI systems.
Verifying Digital Identities
The deepfake targeting Gov. Cox prompted a partnership between a public university and a tech platform aimed at combating deepfakes in Utah elections. From July 2024 to January 2025, students and researchers will work with SureMark Digital to verify politicians' digital identities.
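The details of SureMark Digital's system have not been described here, but the general principle behind verifying a digital identity is familiar from cryptography: a public figure (or their campaign) signs authentic media with a private key, and anyone can check that signature against the figure's published public key. The sketch below illustrates only that general idea, using Python's cryptography library; the key names, file contents, and workflow are illustrative assumptions, not SureMark's actual method.

```python
# Minimal sketch of signature-based content verification (NOT SureMark Digital's
# actual system): a campaign signs the hash of a media file with a private key,
# and viewers verify the signature with the campaign's published public key.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of a media file."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)


def verify_media(public_key: Ed25519PublicKey, media_bytes: bytes,
                 signature: bytes) -> bool:
    """Return True if the signature matches this media and public key."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    # Hypothetical example: the campaign generates a key pair and signs a clip.
    campaign_key = Ed25519PrivateKey.generate()
    original_clip = b"...original video bytes..."
    signature = sign_media(campaign_key, original_clip)

    public_key = campaign_key.public_key()
    print(verify_media(public_key, original_clip, signature))                   # True
    print(verify_media(public_key, b"...tampered video bytes...", signature))   # False
```

A tampered or fabricated clip fails verification because its hash no longer matches what was signed; the hard part in practice is distributing and trusting the public keys, which is where identity-verification services come in.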
Brandon Amacher from Utah Valley University believes AI’s role in elections will mirror the impact of social media in 2008—significant but not overwhelming.
What’s the motivation behind misinformation?
Misinformation campaigns can stem from various motivations, including political agendas and financial gain. Michael Kaiser, CEO of Defending Digital Campaigns, notes that some groups aim to sow chaos and division rather than directly influence voting.
Strategies for Stopping the Spread of Misinformation
Experts suggest several strategies to combat misinformation: verify the source of images or videos, use reverse image searches, and double-check election-related messages against official resources.
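Reverse image search services work, at a high level, by comparing compact fingerprints of images rather than raw pixels. As a rough illustration of that idea (not how any particular search engine is implemented), the sketch below uses the open-source imagehash library to compare perceptual hashes of two images; the file names are hypothetical.

```python
# Minimal sketch of the idea behind reverse image search: perceptual hashes
# stay similar when an image is merely resized or recompressed, but diverge
# when the content itself changes. File names below are hypothetical.
from PIL import Image
import imagehash


def looks_like_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Return True if the two images are perceptually near-duplicates."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # Subtracting two hashes gives the Hamming distance between the 64-bit
    # fingerprints; a small distance suggests the same underlying image.
    return (hash_a - hash_b) <= max_distance


if __name__ == "__main__":
    # Compare a frame from a viral clip against an official release.
    print(looks_like_same_image("viral_frame.png", "official_frame.png"))
```

A mismatch does not prove manipulation on its own, but it is a signal to check the claim against official sources before sharing.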
Technologists are also encouraged to build AI responsibly, with recommendations for best practices to prevent the misuse of AI in political contexts.
As the 2024 elections approach, the potential for AI-generated misinformation will likely increase, making vigilance essential.