
In June, during a heated Republican gubernatorial primary in Utah, a video surfaced on social media falsely depicting Utah Gov. Spencer Cox admitting to fraudulent ballot signature collection. The claim was untrue, and the courts upheld his victory.

This incident highlights a rising trend of election-related content generated by artificial intelligence, with some of it being false or misleading, aimed at provoking viewers.

AI-generated deepfakes have become a significant concern for those combating misinformation during elections. Previously, creating deepfakes required a skilled team and resources, but advancements in AI technology have made it accessible to almost anyone.

Tim Harper, a senior policy analyst at the Center for Democracy and Technology, stated, “Now we can supercharge the speed, frequency, and persuasiveness of existing misinformation narratives.” He noted that AI has evolved significantly since the last presidential election in 2020, particularly after the release of ChatGPT in November 2022.

How can AI spread misinformation?

While some misinformation is intentional, AI can also produce it by accident. Chatbots generate answers from the data they were trained on or can retrieve, and if that information is incorrect or outdated, their outputs will be too.

OpenAI announced in May its commitment to enhancing transparency regarding its AI tools during the election year and endorsed the bipartisan Protect Elections from Deceptive AI Act, currently pending in Congress.

Elon Musk faced criticism from several secretaries of state after his AI assistant Grok mistakenly informed users that Vice President Kamala Harris was ineligible to be on the presidential ballot in nine states due to missed deadlines. This misinformation remained on the platform for over a week before being corrected.

“As millions of voters seek accurate information about voting this election year, X must ensure that all users have access to true guidance regarding their voting rights,” stated a letter signed by secretaries of state from multiple states.

Generative AI impersonations, like the deepfake video of Cox, pose new risks for misinformation. Another deepfake video falsely depicted Florida Gov. Ron DeSantis dropping out of the 2024 presidential race.

While some misinformation campaigns are large-scale, many are localized and targeted. Bad actors may mimic local political organizers or send AI-generated messages to specific communities. Generative AI has also made it easier to target language minority communities, because it can translate misleading messages quickly and fluently.

Although most adults recognize AI’s role in elections, some localized campaigns may go unnoticed, according to Harper.

For instance, someone could use accurate local polling place information to send misleading messages about polling place changes, making them appear credible.

“If that message arrives via WhatsApp or text, it may seem more convincing than a political ad on social media,” Harper explained.

Verifying digital identities

The deepfake incident involving Cox prompted a partnership between a public university and a tech platform to combat deepfakes in Utah elections. From July 2024 to January 2025, students and researchers at the Gary R. Herbert Institute for Public Policy at Utah Valley University (UVU) will collaborate with SureMark Digital to verify politicians’ digital identities and assess AI-generated content’s impact on elections.

Candidates for Utah’s congressional and Senate seats can authenticate their digital identities at no cost through SureMark’s platform, aiming to enhance trust in elections.

Brandon Amacher, director of the Emerging Tech Policy Lab at UVU, likened AI’s role in this election to the impact of social media in 2008—significant but not overwhelming.

In the pilot’s first month, Amacher noted the effectiveness of simulated video messages, particularly on platforms like TikTok and Instagram Reels, where short videos are easier to fake and less scrutinized.

SureMark’s verification platform allows users to obtain credentials, linking their identity to published content through cryptographic techniques. A browser extension helps users identify accredited content across various media platforms.
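SureMark has not published its implementation details, but content credentials of this kind generally rest on standard public-key signatures: the credentialed person signs their content with a private key, and anyone can check that signature against a publicly registered key. The sketch below, written in Python with the `cryptography` package, is a minimal illustration under that assumption; the function names, the Ed25519 key choice, and the hashing step are illustrative, not a description of SureMark’s actual system.

```python
# Minimal illustration of public-key content credentials.
# This is NOT SureMark's actual implementation (which is unpublished);
# it only shows the general signing/verification pattern such systems use.
# Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def issue_credential(content: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign a hash of the content with the credentialed person's private key."""
    digest = hashlib.sha256(content).digest()
    return private_key.sign(digest)


def verify_content(content: bytes, signature: bytes,
                   public_key: Ed25519PublicKey) -> bool:
    """Roughly what a browser extension could do: check the signature against
    the publicly registered key before marking content as accredited."""
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Hypothetical usage: a campaign signs a video file; a viewer's tool verifies it.
campaign_key = Ed25519PrivateKey.generate()          # kept private by the campaign
registered_public_key = campaign_key.public_key()    # published via a credentialing service

video = b"raw bytes of the campaign video"
signature = issue_credential(video, campaign_key)

print(verify_content(video, signature, registered_public_key))                    # True
print(verify_content(b"tampered video bytes", signature, registered_public_key))  # False
```

In practice, a scheme like this also needs a trusted registry that maps public keys to verified real-world identities, which is the role a credentialing service such as SureMark would play.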

Stornetta, SureMark’s chairman, compared the technology to X-ray vision, enabling users to discern real from fake content.

While the pilot program is still credentialing politicians, Jones, the executive director of the Herbert Institute, reported enthusiasm from campaigns eager to explore this technology.

What motivates misinformation?

Misinformation campaigns can stem from various groups with different motivations. Some target specific candidates, while others aim to influence public opinion on geopolitical events.

Russia’s interference in the 2016 and 2020 elections is well-documented, and efforts are expected to continue in 2024, aiming to undermine U.S. support for Ukraine, as noted in a recent Microsoft study.

Monetary incentives can also drive misinformation, as viral content can lead to financial rewards on platforms that compensate users for views.

Kaiser, whose work focuses on cybersecurity for political campaigns, explained that while election interference is the goal for some bad actors, many simply seek to create chaos and apathy toward the electoral process.

“They aim to divide us further,” he stated. “For some, misinformation isn’t about how you vote; it’s about fostering division.”

AI-generated content often evokes strong emotions, making it more likely to be shared. “They aim to incite anger or disbelief, prompting users to share it with friends,” Kaiser added.

Strategies to combat misinformation

Experts suggest several strategies to mitigate misinformation’s spread. First, verify if an image or sound bite has been reported elsewhere. Use reverse image searches to check the credibility of images.

For election-related messages, double-check information through state voting resources.

Implementing two-factor authentication on social media and email accounts can help prevent phishing and hacking, which can spread misinformation.

If you receive a suspicious phone call, confirm the caller’s identity by asking about something only that person would know, such as your last conversation.

Look for visual clues in AI-generated images, such as extra fingers or distorted features, as AI struggles with fine details. Deepfake videos often have plain backgrounds, as complex settings are harder to replicate.

As elections approach, the likelihood of encountering misinformation increases. Bad actors exploit this proximity, knowing misinformation has less time to be debunked.

Technologists can also play a role in mitigating misinformation by refining AI development practices. Harper recently published recommendations for AI developers to promote best practices.

These recommendations include avoiding the release of tools that replicate real people’s voices and prohibiting AI-generated political ads. They also call for AI tools to disclose how frequently their training data is updated with respect to election information.

While some tech companies voluntarily adhere to transparency best practices, many regions face a patchwork of laws that lag behind technological advancements.

A bill prohibiting deceptive AI-generated media was introduced in Congress last year but has yet to be enacted. However, some states have passed laws addressing AI in elections, focusing on banning or requiring disclaimers for AI-generated content.

For now, tech companies aiming to combat misinformation can seek guidance from reports like the CDT’s or pilot programs such as UVU’s.

“We aimed to create a comprehensive election integrity program for these companies,” Harper concluded, recognizing the need for regulatory scrutiny in this rapidly evolving landscape.
