In June, during a fiercely contested Republican gubernatorial primary race, a misleading video circulated on social media that appeared to show Utah Gov. Spencer Cox confessing to fraudulent ballot signature collection. The governor never made such a statement, and courts have since confirmed his election victory.

This incident is part of a rising trend of election-related content generated by artificial intelligence, which experts warn can be false, misleading, or designed to provoke reactions.

AI-generated content, particularly deepfakes, has become a significant concern for those fighting misinformation during election cycles. Previously, creating deepfakes required a skilled team and resources, but advancements in AI technology have made it accessible to almost anyone.

Tim Harper, a senior policy analyst at the Center for Democracy and Technology, noted, “Now we can supercharge the speed and the frequency and the persuasiveness of existing misinformation and disinformation narratives.” AI technology has evolved remarkably since the last presidential election in 2020, with tools like OpenAI’s ChatGPT making AI widely available since November 2022.

How can AI spread misinformation?

Misinformation from AI can be intentional or accidental. Accidental misinformation often stems from flaws in the tools themselves: AI chatbots rely on the data they were trained on and the databases they draw from, so if that information is incorrect or outdated, they can produce misleading answers.

OpenAI announced in May its commitment to enhancing transparency regarding its AI tools during the election year, supporting the bipartisan Protect Elections from Deceptive AI Act, which is currently pending in Congress.

“We want to ensure our AI systems are built, deployed, and used safely,” the company stated. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will continue to evolve our approach as we learn more about their use.”

Unregulated AI systems can also contribute to misinformation. Several secretaries of state recently urged Elon Musk to correct his AI assistant Grok after it falsely claimed that Vice President Kamala Harris was ineligible to appear on the presidential ballot in nine states because of missed deadlines. The false claim remained on the platform for more than a week, reaching millions of users.

“As millions of voters seek accurate information about voting this election year, X has a responsibility to ensure all users have access to true guidance regarding their voting rights,” stated a letter signed by secretaries of state from Washington, Michigan, Pennsylvania, Minnesota, and New Mexico.

Gov. Ron DeSantis. (Screenshot: Florida Channel)

Generative AI impersonations also pose risks for misinformation. Besides the false video of Cox, a deepfake of Florida Gov. Ron DeSantis misleadingly showed him withdrawing from the 2024 presidential race.

While some misinformation campaigns are large-scale, others are localized and targeted. Malicious actors might, for example, mimic a local political organizer’s online presence or send AI-generated messages to specific communities. Generative AI has also made it easier to translate such messages, extending these campaigns to language minority groups.

Although most adults recognize AI’s role in elections, some localized campaigns may go unnoticed, Harper warned.

For instance, someone could use local polling data to send misleading messages about polling place changes, which could appear credible if they have accurate information about the original location.

“If that message arrives via WhatsApp or text, it may seem more convincing than a political ad on social media,” Harper explained. “People are less accustomed to targeted misinformation sent directly to them.”

Verifying digital identities

The deepfake video of Cox prompted a collaboration between a public university and a tech platform to combat deepfakes in Utah elections. From July 2024 to January 2025, students and researchers at the Gary R. Herbert Institute for Public Policy and the Center for National Security Studies at Utah Valley University will partner with SureMark Digital to verify politicians’ digital identities and assess the impact of AI-generated content on elections.

Candidates for Utah’s congressional and Senate seats can authenticate their digital identities at no cost through SureMark’s platform, aiming to enhance trust in Utah’s elections.

Brandon Amacher, director of the Emerging Tech Policy Lab at UVU, believes AI’s influence in this election will be similar to social media’s emergence in 2008 — impactful but not overwhelming. “What we’re witnessing now is the start of a trend that could become significantly more influential in future elections,” Amacher stated.

In the pilot’s first month, the group observed the effectiveness of simulated video messages, particularly on platforms like TikTok and Instagram Reels. Short videos are easier to fabricate, and if users scroll through these platforms for an extended period, misinformation may not receive adequate scrutiny, yet it can still shape opinions.

SureMark Chairman Scott Stornetta described the recently launched verification platform as a tool for users to obtain credentials. Once a person is approved, the platform uses cryptographic methods to link their identity to the content they publish. A browser extension then indicates whether a given piece of content came from that individual or from an unauthorized source.

Stornetta compared the technology to an X-ray, stating, “If someone views a video or image on a standard browser, they won’t distinguish between real and fake. However, with this X-ray vision, they can click a button to determine if it’s authentic or not.”
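
The details of SureMark’s system are not public, but the idea Stornetta describes resembles standard public-key signing: a verified person holds a private key, signs what they publish, and anyone with the matching public key can check that the content really came from them and has not been altered. A minimal sketch of that general approach in Python, assuming the third-party “cryptography” package, with the key handling and file contents purely hypothetical:

```python
# Illustrative sketch of identity-bound content verification, not SureMark's
# actual implementation. Assumes the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Credentialing step: a verified candidate is issued a keypair.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Publishing step: the candidate's campaign signs each piece of content.
video_bytes = b"...raw bytes of a published video or statement..."
signature = private_key.sign(video_bytes)

# Verification step: a browser extension re-checks the signature against the
# candidate's published public key before labeling the content authentic.
def is_authentic(content: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature))            # True: signed by the key holder
print(is_authentic(b"tampered deepfake", signature))   # False: signature does not match
```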

The pilot program is currently working to credential state politicians, and results are expected in a few months. Justin Jones, executive director of the Herbert Institute, noted that every campaign they’ve engaged with has expressed eagerness to explore this technology.

“All of them have indicated concern and a desire to learn more,” Jones said.

What motivates misinformation?

Various groups with differing motivations can drive misinformation campaigns, according to Michael Kaiser, CEO of Defending Digital Campaigns. Misinformation can target specific candidates, as seen with the deepfake videos of Governors Cox and DeSantis, or relate to geopolitical events aimed at swaying public opinion.

Russia’s influence on the 2016 and 2020 elections is well-documented, and efforts are likely to persist in 2024, with a Microsoft study recently reporting attempts to undermine U.S. support for Ukraine.

Monetary incentives can also motivate misinformation, as viral content can generate revenue on platforms that pay users for views. Kaiser emphasized that while election interference may be a goal, many actors aim to create chaos and apathy towards the electoral process.

“They aim to divide us further,” he stated. “For some, misinformation isn’t about how you vote; it’s about creating division.”

Much of the AI-generated content is inflammatory or emotionally charged, Kaiser explained. “They want to provoke emotions, making you eager to share it with others, thus becoming a conduit for misinformation.”

Strategies to combat misinformation

Recognizing the emotional responses that drive engagement with content is crucial for slowing the spread of misinformation. Experts recommend several strategies:

First, verify if an image or sound bite has been reported elsewhere. Use reverse image searches to check if the content appears on reputable sites or is solely shared by suspicious accounts. Websites that fact-check altered images can help trace the origins of information.
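
Reverse image searches generally work by comparing perceptual fingerprints of images rather than exact files, so re-encoded or lightly edited copies still match a known original. A minimal sketch of that comparison, assuming the third-party Pillow and imagehash packages and two hypothetical local files:

```python
# Illustrative sketch of perceptual-hash comparison, the technique behind many
# reverse image searches. File names and the match threshold are assumptions.
from PIL import Image
import imagehash

suspect = imagehash.phash(Image.open("viral_screenshot.png"))
original = imagehash.phash(Image.open("known_original.png"))

# The difference is the Hamming distance between the two fingerprints;
# small distances suggest the images share the same source.
distance = suspect - original
if distance <= 8:
    print("Likely the same underlying image (possibly re-encoded or cropped).")
else:
    print("No close match; treat the image's provenance as unverified.")
```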

If you receive messages about voting or election day, cross-check the information through your state’s voting resources.

Implementing two-factor authentication on social media and email accounts can help prevent phishing attacks and hacking, which may facilitate misinformation dissemination.

If you suspect a phone call may be AI-generated or uses someone’s voice likeness, confirm the person’s identity by asking about your last conversation.

Look for visual clues in AI-generated images, such as extra fingers or distorted features, as AI struggles with finer details. Deepfake videos often have plain backgrounds, as complex settings are harder to replicate.

As elections approach, the likelihood of encountering misinformation increases. Bad actors exploit the proximity to election day, as misinformation has less time to be debunked.

Technologists can also play a role in mitigating misinformation by refining AI development practices. Harper recently published a summary of recommendations for AI developers, advocating for best practices.

These recommendations include avoiding the release of text-to-speech tools that replicate real voices, refraining from generating realistic political images and videos, and prohibiting the use of generative AI in political advertising.

Harper also suggests that AI developers disclose how often the training data behind their tools is updated with respect to election information, develop machine-readable watermarks for generated content, and point users to authoritative sources of election information.
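
The recommendations do not prescribe a particular watermarking scheme; one common approach is to attach a signed, machine-readable provenance record to generated content so platforms and browsers can detect it automatically. A minimal sketch of that idea, with the field names, key handling, and HMAC scheme all illustrative assumptions rather than any real standard:

```python
# Illustrative sketch of a machine-readable provenance manifest for
# AI-generated content. Not any specific standard's format.
import hashlib, hmac, json

SIGNING_KEY = b"provider-held secret key"  # hypothetical provider key

def make_manifest(content: bytes, model_name: str) -> dict:
    manifest = {
        "generator": model_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image_bytes = b"...generated image bytes..."
tag = make_manifest(image_bytes, "example-image-model")
print(verify_manifest(image_bytes, tag))  # True for untampered content
```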

While some tech companies voluntarily adhere to many transparency best practices, the country faces a patchwork of laws that have not kept pace with technological advancements.

A bill prohibiting deceptive AI-generated media of federal candidates was introduced in Congress last year but has yet to be enacted. However, several states have passed laws addressing AI in elections, primarily banning AI-generated messaging or requiring disclaimers about AI use in campaign materials.

DeSantis signed such a measure into law earlier this year.

For now, tech companies aiming to combat misinformation can seek guidance from the CDT report or pilot programs like UVU’s.

“We aimed to create a comprehensive election integrity program for these companies,” Harper said. “Recognizing that unlike legacy social media platforms, they are new and lack the regulatory scrutiny necessary for robust election integrity policies.”
