
Introduction

In June, during a fiercely contested Republican gubernatorial primary in Utah, a misleading video circulated on social media that appeared to show Utah Gov. Spencer Cox admitting to fraudulent ballot signature collection. The governor never made such a statement, and courts upheld his election victory.

The Rise of AI in Misinformation

This false video is part of a larger trend of election-related content generated by artificial intelligence, much of which is misleading or designed to provoke. Experts warn that AI-generated deepfakes have become a significant concern for those fighting misinformation during election seasons.

How can AI be used to spread misinformation?

While some misinformation is deliberate, AI can also produce inaccuracies by accident. Chatbots generate answers from the data they were trained on and the sources available to them, so outdated or erroneous information can lead to incorrect outputs.

In May, OpenAI announced its commitment to enhancing transparency regarding its AI tools during the election year, supporting the bipartisan Protect Elections from Deceptive AI Act.

Local Misinformation Campaigns

Many misinformation campaigns are localized and targeted. For instance, bad actors might impersonate local political organizers or send AI-generated messages to specific communities. Generative AI has made it easier to tailor messages to language minority groups, increasing the risk of misinformation.

Verifying Digital Identities

The deepfake video of Gov. Cox prompted a partnership between a public university and a tech platform aimed at combating deepfakes in Utah elections. From July 2024 to January 2025, students and researchers will work with SureMark Digital to verify the digital identities of politicians, enhancing trust in Utah’s elections.

What motivates misinformation?

Misinformation campaigns can stem from various motivations, including specific political agendas or geopolitical events. For example, Russia’s influence on U.S. elections has been documented, with ongoing efforts to sway public opinion.

Additionally, some misinformation is driven by financial incentives, as viral content can generate revenue on social media platforms.

Strategies for Stopping Misinformation

To combat misinformation, individuals should verify images and sound bites against reputable sources before sharing them. A reverse image search can help trace where an image first appeared, and election-related claims should be double-checked against official state resources.
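For readers comfortable with a little scripting, perceptual hashing offers one rough, do-it-yourself way to check whether a circulating image is a copy or light edit of a known original. The sketch below is illustrative only and is not drawn from this article: it assumes you already have both files saved locally and uses the open-source Pillow and ImageHash Python libraries; the file names are hypothetical.

```python
# Rough check of whether a circulating image closely matches a known original.
# Assumes the libraries are installed: pip install Pillow ImageHash
from PIL import Image
import imagehash

def looks_like_same_image(original_path: str, circulating_path: str, threshold: int = 8) -> bool:
    """Compare perceptual hashes; a small Hamming distance suggests the images
    are the same or only lightly edited (resized, recompressed, slightly cropped)."""
    original_hash = imagehash.phash(Image.open(original_path))
    circulating_hash = imagehash.phash(Image.open(circulating_path))
    distance = original_hash - circulating_hash  # Hamming distance between the two hashes
    return distance <= threshold

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    if looks_like_same_image("official_photo.jpg", "social_media_copy.jpg"):
        print("Images are likely the same or lightly edited.")
    else:
        print("Images differ substantially; investigate further.")
```

A local comparison like this is no substitute for a full reverse image search through a service such as Google Images or TinEye, which checks a picture against billions of indexed photos rather than a single file you already have.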

Technologists can also play a role in mitigating misinformation by implementing best practices in AI development, such as avoiding the release of tools that replicate real people’s voices or images.

Conclusion

As AI continues to evolve, understanding its potential impact on elections is vital. By staying informed and vigilant, individuals can help combat the spread of misinformation in the digital age.
