As campaign season nears, politicians are turning up the volume on campaign rhetoric. To cut through the noise, we’re launching Campaign Context, a series providing clarity on the messages you’re hearing from candidates on the campaign trail. We’re digging past the politics and into the facts to provide you with the transparent, spin-free information you need to make informed decisions this election season.
AUSTIN (KXAN) — The spread of misinformation and disinformation on social media is an unfortunate part of any modern election cycle, but 2024 has been particularly tricky for voters thanks to rapid advancements in artificial intelligence.
AI-generated images and deepfakes (media manipulated by AI) can be minefields of bad or misleading information.
You might remember former President Donald Trump was recently accused of reposting AI-generated images of Taylor Swift and her fans appearing to support him. Those images were immediately flagged as fake.
That raises a question: when voters see these AI images or videos, and are fully aware they are not real, can the messages still somehow resonate?
For insight, KXAN spoke with Samuel Woolley, the Dietrich Chair of Disinformation Studies at the University of Pittsburgh and a researcher in the field of AI and politics.
Speaking on deepfakes, Woolley said, "Most people have a decent 'Spidey sense' when it comes to deepfakes these days. Some of them are getting more realistic, so it could be challenging to find them, but it still has an impact."
Woolley said deepfake videos can be especially potent as they appeal to multiple senses. He added one of the reasons the messages can stick is simple: repetition.
"Repetition is a core part of politics," he said. "It doesn't matter if you believe something to be true. The more you hear it repeated, as with advertisements, as with everything, the more likely it is to stick in your brain. And so, with deepfakes, one of the key things that they do is repeat an idea that a politician or a group of people or maybe some bad actor somewhere wants you to remember."
Woolley said the more sophisticated deepfakes out there are also meant to be memorable.
"There's a reason why the gossip rags are at the front of the grocery store. We like to look at them," he said. "And while most of the time, we don't believe what we read in, say, Us Weekly or something like that, it doesn't mean that we don't remember it, and that potentially later on, it might come into our brain at a weird moment and make us think, 'Oh, remember, I heard this thing about so-and-so.'"
"The same thing happens with deepfakes," he added. "For instance, the fake Taylor Swift endorsement of Donald Trump, we might not remember that Swift actually didn't endorse Donald Trump. We just remember something to do with Swift endorsing Donald Trump."
As for who is generating most of these AI images and deepfakes, whether regular people, political operatives or foreign agents, Woolley said it's a hard question to answer at the moment.
"Researchers like me don't really have access to data and information to find out from the social media companies who's launching these, and sometimes the social media companies themselves and the government don't even know who's launching these things, because of anonymity and AI automation."
His advice to voters: take a beat before sharing something you see on social media.