The 2024 contest is the first presidential election to take place at a time when AI tools have become sophisticated and accessible enough to have a material impact. Bad actors are now reportedly tapping generative AI tools to create more convincing impersonation scams.
When an AI deepfake of Joe Biden’s voice was used in a robocall scheme earlier this year, it was the first high-profile sign to voters that they may no longer be able to trust that what they see and hear is authentic. This robocall deepfake had the potential to mislead voters for political advantage or suppress the vote.
A recent survey commissioned by TNS asked Americans about the political robocalls and robotexts that they have received, revealing growing fears of election interference from bad actors. The survey data is available in TNS’ latest eBook: American Votes – 2024 US Presidential Election in Robocalls.
AI Deepfakes Undermine Voter Trust
According to the survey, Americans are keenly aware of how AI could be used to influence the election: 60% of US adults believe robocalls and robotexts are being used to undermine confidence in the 2024 Presidential Election.
In this low-trust environment, legitimate calls from election officials or campaigns may go ignored or be dismissed outright. The same skepticism that protects scam targets from engaging with a malicious call can also keep them from getting accurate information from an official source. The survey found that two-thirds of Americans agree that the difference between legitimate 2024 election robocalls and robotexts and those containing false information is sometimes unclear.
As AI deepfake technology becomes more sophisticated, maintaining voter trust in communications will require equally advanced defenses.
Battleground States in the AI Crosshairs
One of the most concerning findings from the survey is the impact AI deepfakes could have on key battleground states and local races: 71% of Americans believe that a voter in a 2024 battleground state is more likely to be targeted by an AI deepfake robocall than a voter in a non-battleground state.
These states, which often decide the outcome of presidential elections, are prime targets for illicit attempts to sway voter opinion.
Americans Call for “Pre-bunking” Disinformation
Rather than focusing solely on debunking disinformation after it has spread, many election officials and consumer advocates have pushed for a proactive strategy known as “pre-bunking.” This approach aims to educate voters about disinformation tactics before they encounter misleading content, helping them identify and dismiss false claims in real time.
Seventy-seven percent of US adults believe that policymakers and regulators should educate Americans on the risks of political AI deepfakes and how to protect against them.
Restoring Trust in Voice Calling
Generative AI will continue to evolve, and so will the threats it poses to election integrity. Without wider mitigation and education efforts, AI deepfakes, particularly in the form of robocalls, have the potential to undermine voter trust by targeting key battleground states with disinformation.
According to TNS survey data, Americans are not only concerned about these risks but are also calling for more proactive measures to combat them.
Tools such as TNS Enterprise Authentication and Spoof Protection help restore trust in the voice channel by ensuring that only legitimate, verified branded calls reach the end recipient.
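The broader industry approach behind such tools is cryptographic caller authentication, standardized in the US as STIR/SHAKEN: the originating carrier signs an assertion of the caller’s identity, and the terminating side verifies that signature before the call is trusted. The sketch below is a generic, illustrative Python example of verifying such a signed assertion (a PASSporT per RFC 8225) with the PyJWT library; it is not TNS’s implementation, and the function name and parameters are hypothetical.

```python
# Illustrative sketch of signature-based caller verification in the spirit of
# STIR/SHAKEN (RFC 8224/8225). Not a depiction of any TNS product; the key
# handling and claim names follow the PASSporT (RFC 8225) format.

import time
import jwt  # PyJWT, with the "cryptography" extra installed for ES256

def verify_passport(token: str, public_key_pem: str, expected_dest: str,
                    max_age_seconds: int = 60) -> bool:
    """Verify a PASSporT: a signed JWT asserting the caller's identity.

    Returns True only if the signature is valid, the token is fresh,
    and the destination number matches the called party.
    """
    try:
        # PASSporT tokens are signed with ES256 (ECDSA over P-256).
        claims = jwt.decode(token, public_key_pem, algorithms=["ES256"])
    except jwt.InvalidTokenError:
        return False  # bad signature or malformed token: treat as unverified

    # Freshness check on 'iat' guards against replaying an old, valid token.
    if time.time() - claims.get("iat", 0) > max_age_seconds:
        return False

    # 'dest' carries the called number(s); reject if it doesn't match.
    dest_numbers = claims.get("dest", {}).get("tn", [])
    return expected_dest in dest_numbers
```

A call whose signed assertion fails any of these checks can be flagged or blocked before it ever rings a voter’s phone, which is the basic mechanism that lets verified, branded calls stand apart from spoofed ones.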
Spreading awareness of the risks, pre-bunking common robocall schemes, and using innovative tools to authenticate the identity of callers are some of the ways policymakers and election officials can work together to combat potential political disinformation.
For access to more of the survey findings, download our latest eBook here.
Greg Bohl is Chief Data Officer at TNS with specific responsibility for TNS’ Communications Market solutions.