AI’s Limited Impact: Traditional Tactics Dominate 2024 Election Misinformation

January 6, 2025

The 2024 U.S. election was widely expected to be shaped by artificial intelligence (AI), with experts warning of AI-driven misinformation campaigns. The reality proved quite different: despite these fears, traditional misinformation tactics continued to dominate the political landscape. This article examines the role AI actually played in the election, the measures taken to curb its misuse, and why traditional methods remained prevalent.

The Anticipated AI Threat

Expert Warnings and Public Concerns

Leading up to the 2024 election, there was significant concern about the potential for AI to create and spread misinformation. Experts predicted that AI-generated content could mislead voters on a massive scale. Public anxiety was fueled by rapid advancements in AI technology, which made it easier to produce realistic fake images, videos, and audio. These technological strides created a sense of urgency among advocacy groups and governmental bodies to address the issue before it could undermine electoral integrity.

AI-generated deepfakes and manipulated media seemed poised to disrupt the election process. Yet while the specter of rampant AI manipulation loomed over public discourse, voters, institutions, and technology companies were also preparing to mitigate those risks. Experts warned that AI's rapid evolution had outpaced regulatory measures, and that its growing ability to mimic real voices and appearances was making it increasingly difficult to distinguish legitimate content from fabricated material.

Legislative and Institutional Responses

In response to these concerns, various legislative and institutional measures were implemented. The Federal Communications Commission (FCC) banned the use of AI-generated voices in robocalls after a controversial incident involving an AI-generated call that impersonated President Joe Biden. Additionally, sixteen states enacted legislation to regulate AI’s use in campaigns, often requiring disclaimers for synthetic media close to voting periods. These legislative actions reflected a broader consensus among state and federal bodies aiming to preemptively mitigate AI’s potentially harmful impacts.

Federal agencies also played a crucial role in safeguarding the electoral process from AI misconduct. The Election Assistance Commission released an “AI toolkit” to aid election officials in managing AI-driven misinformation. This toolkit provided practical resources for identifying and neutralizing deceptive AI content. Collaboratively, these initiatives demonstrated a proactive approach, ensuring that potential AI threats were met with timely and effective responses.

AI’s Actual Role in the Election

Minor Impact on Misinformation

Contrary to expectations, AI’s role in creating and spreading misinformation was relatively minor. Traditional misinformation tactics, such as false claims about vote counting and mail-in ballots, remained dominant. Experts like Paul Barrett from the New York University Stern Center for Business and Human Rights noted that generative AI was not necessary to mislead voters. Indeed, the propagation of falsehoods relied more on well-established avenues of misinformation than on sophisticated AI manipulation.

The recurring themes of vote fraud, mail-in ballots, and voting machines continued to dominate the misinformation landscape. AI's anticipated disruption was absorbed into preexisting methods of creating and disseminating false claims. Familiar strategies such as doctored photos, misleading headlines, and fabricated testimonials continued to make a profound impact without requiring AI. If anything, voters and platforms were better prepared to counter AI-driven threats than these traditional tactics.

Reinforcing Existing Narratives

AI-generated content tended to bolster existing narratives rather than introduce new falsehoods. For instance, former President Donald Trump and his vice-presidential candidate, JD Vance, falsely claimed that Haitian immigrants were eating pets in Springfield, Ohio. AI-generated images and memes supporting this narrative surfaced online, demonstrating AI’s role in reinforcing preexisting claims. Despite AI’s limited transformative capacity, it nonetheless enhanced the reach and visual impact of recurring falsehoods.

Moreover, AI-generated media further inflamed partisan sentiment but did not establish new misinformation themes. Such content functioned more as an amplifier than a creator, elevating false information already in circulation. By reinforcing entrenched narratives, AI occasionally shaped audience perception without introducing unprecedented challenges; voters encountered these falsehoods predominantly through traditional channels and familiar misinformation strategies.

Measures to Mitigate AI Misuse

Technology Companies’ Interventions

Technology companies played a crucial role in minimizing AI’s potential harm. Meta mandated that political ads disclose any AI usage, while TikTok implemented automatic labeling for some AI-generated content. OpenAI went a step further by prohibiting the use of its services for political campaigns altogether. These proactive measures indicated a serious commitment from tech giants to ensure responsible AI utilization, thereby preventing the manipulation of political discourse.

Social media platforms enhanced their content moderation capabilities to address AI-driven misinformation. Algorithms were tweaked to detect and downrank misleading content while increasing the visibility of accurate information. This concerted effort helped marginalize AI-manipulated media, preventing its widespread adoption in the election campaign. Meta and OpenAI’s policies underscored the recognition of their platforms’ societal responsibilities and their dedication to ethical standards.

Federal and State Efforts

At the federal level, the Election Assistance Commission's "AI toolkit" gave election officials practical guidance for addressing AI-driven misinformation. At the state level, the sixteen states that regulated AI's use in campaigns and elections typically required disclaimers on synthetic media distributed close to voting periods. This emphasis on preemptive guidelines helped keep election processes transparent and credible.

States developed a comprehensive framework targeting the responsible deployment of AI in political advertising. Specific measures included transparency obligations for AI-generated political content and strict enforcement protocols for compliance. Collaboration across federal, state, and local jurisdictions facilitated a unified front against potential AI exploits. By integrating technological oversight with legislative action, these efforts collectively mitigated AI’s impact on misinformation.

Traditional Misinformation Tactics Prevail

Effectiveness of Traditional Methods

Despite the safeguards against AI misuse, traditional misinformation tactics remained more effective. Digital media forensics expert Siwei Lyu and Dartmouth College professor Herbert Chang pointed out that traditional memes generally produced more engagement than those generated by AI. This observation was supported by a study co-authored by Chang, which found that AI-generated images have less viral potential than traditional memes. These findings indicated that, even amid extensive preventive measures, long-established misinformation methods outperformed emergent AI approaches.

Traditional misinformation benefited from being deeply rooted in cultural and social contexts, resonating more directly with voter sentiments. Memes, authentic-looking but deceptively contextualized images, and subtle distortions of factual events continued to engage audiences effectively. The familiarity and simplicity of these tactics made them more compelling and credible to the public, and voters engaged more readily with these established forms of misinformation than with novel AI-generated content.

Role of Public Figures

Public figures with substantial followings played a significant role in disseminating messages without relying on AI. For instance, Trump repeatedly made false statements about illegal immigrants voting, despite such cases being exceedingly rare. This persistent narrative appeared effective, as more than half of Americans expressed concern about noncitizens voting in the 2024 election. The influence of high-profile individuals underscored the impact of traditional channels in spreading misinformation.

Political leaders and influencers wielded significant power in shaping voter perceptions. Their ability to command large audiences allowed them to steer public discourse effectively. As these figures engaged directly with their followers, their messages acquired amplified credibility and reach. Consequently, the reliance on familiar misinformation tactics ensured that traditional methods maintained dominance, reflecting their enduring relevance even amidst evolving technological threats.

Instances of AI Use in Misinformation

Isolated Cases of AI Misuse

While AI did not significantly contribute to broader misinformation narratives, there were isolated instances of its misuse. One notable case involved a New Orleans magician creating a fake Biden robocall designed to stir political tensions. However, such instances were relatively rare and did not have a widespread impact. The constrained usage of AI-driven misinformation highlighted the efficacy of preventive measures.

The magician's AI stunt exemplified an isolated but inventive attempt at misuse. Although it represented a potentially disruptive application, its one-off nature suggested that the controls in place effectively limited broader adoption. Such rare instances underscored the resilience of regulatory frameworks and institutional readiness against potential AI threats, while public awareness and vigilant detection mechanisms further curbed such exploits and limited their political influence.

AI in Partisan Animus

AI was sometimes used to exacerbate partisan animus. For example, AI-generated images and memes were used to support false claims made by political figures. However, these instances were exceptions rather than the norm, and traditional misinformation tactics continued to dominate the landscape. Such uses of AI remained narrow in scope and context, reflecting its limited transformative reach.

Instances of AI-driven content aimed at heightening partisan divisions demonstrated the technology’s potential for targeted influence. Nevertheless, these attempts fell short of displacing the effectiveness of long-established misinformation tactics. Traditional methods prevailed due to their accessible and relatable nature, consistently engaging voters beyond digitally sophisticated manipulations. AI’s role remained ancillary, enhancing but not defining the broader misinformation dynamics in the political sphere.

Efforts to Control AI-Driven Misinformation

Social Media and AI Platforms’ Mechanisms

Social media and AI platforms employed various mechanisms to control the spread of harmful content. Meta and OpenAI reported rejecting numerous requests for generating AI images of political figures. Platforms also used watermarks, labels, and fact-checks to mitigate the potential harms of AI-generated content. These interventions played a crucial role in ensuring transparency and accuracy, countering the challenge posed by AI.

Comprehensive monitoring systems were deployed by social media companies to identify and mitigate AI-driven misinformation promptly. These platforms collaborated with fact-checkers to verify suspicious content and promptly eliminate deceptive media. The introduction of automatic labels on identified AI-generated content further helped users distinguish between genuine and manipulated information. Such robust approaches fortified the integrity of online political discourse, significantly diminishing AI’s disruptive potential.

Areas for Improvement

Despite these efforts, there were notable areas for improvement. The Washington Post found that ChatGPT could still compose campaign messages aimed at specific voters when prompted. PolitiFact discovered that Meta AI could generate images to support specific misleading narratives, indicating that more robust measures are needed to fully address AI-driven misinformation. These gaps underscored the need for ongoing vigilance and adaptive regulatory strategies.

The vulnerabilities in current AI containment strategies revealed areas requiring stronger oversight and deeper technological integration. Further advances in AI monitoring tools and more stringent compliance frameworks could bolster existing measures. Continued collaboration among stakeholders, including technology firms, legislative bodies, and civil society, will be essential to counter emerging threats effectively, and dynamic, responsive systems will be critical to addressing evolving risks of AI misuse.

Conclusion

The 2024 U.S. election was expected to be heavily influenced by AI, with experts warning that AI-driven misinformation campaigns might sway voter opinion. Events unfolded differently: contrary to those fears, traditional misinformation tactics continued to dominate political discourse and campaigning. While AI posed a genuine potential threat, the well-established, more familiar approaches to spreading misinformation proved more effective and remained the primary concern throughout the election. The persistence of traditional methods underscores a lingering reliance on known strategies, even as technologies like AI expand what is possible.
