How real is the threat of AI deepfakes in the 2024 election?

AYESHA RASCOE, HOST:

A remarkable campaign ad aired in Iowa earlier this month. It's from a group that supports Florida Republican Governor Ron DeSantis in the 2024 presidential race, and it attacks former President Trump. On its surface, it looks and sounds like a typical campaign ad, but something happens that makes this one very different. The ad features a soundbite of what sounds like former President Trump's voice.

(SOUNDBITE OF ARCHIVED RECORDING)

AI-GENERATED VOICE: (As Donald Trump) I opened up the governor position for Kim Reynolds, and when she fell behind, I endorsed her.

RASCOE: But Trump never said those words. The voice in the ad was allegedly created using artificial intelligence to read something Trump wrote on social media. Hany Farid is a professor at the University of California, Berkeley, and a digital forensics expert, and he joins us now. Welcome.

HANY FARID: Good to be with you, Ayesha.

RASCOE: So AI can do a lot in this realm. What are the specific risks of using AI voices - of using the voice of a candidate you're running against?

FARID: I think there are two risks here that we have to think about. One is the ability to create an audio recording of your opponent saying things that they never said. But the other concern we should have is that when a candidate really does get caught saying something, how are we going to determine whether it's real or not? Think back to 2016, when Donald Trump was caught on the Access Hollywood tape. He apologized at the time because there was no out. But now there is an out: claim it's fake. And so I think we have to worry about two things - the fake content, but also how we are going to validate the very real content that is going to emerge in the coming years.

RASCOE: People who have listened to Trump talk a lot might notice it's a fake. But how close is this technology to creating a near-perfect copy that even someone who has listened to a person talk a lot may not be able to detect?

FARID: There are still slight artifacts, and if you listen a few times, you'll probably catch those. But the technology is accelerating exceedingly fast. And you know what's going to happen, because everything in the technology space works exactly the same way: every year or so, the technology gets much, much better, it gets cheaper, and it gets more ubiquitous. You also have to remember that, with the way this content is distributed through social media, people are not spending a lot of time analyzing it the way you or I would. And so even if there are slight quirks in it, if things conform to our preconceived ideas, we are going to absorb them and move on to the next tweet or Facebook post or whatever it is, because we move so fast online.

RASCOE: Where do you see this technology being used most right now when it comes to politics? Is it being used in the 2024 presidential campaigns outside of this Iowa ad, or is it happening more in state and local races where there'll be a lot less media attention?

FARID: We're seeing it across the board. And interestingly, mostly what we are seeing at the national level is the campaigns themselves - their PACs and their supporters - are the ones creating it. They're not necessarily outsiders. We are seeing this absolutely at the state and local level as well; those races tend not to get as much national attention. But the fact is that this technology is very easy to use. You can go over to a commercial website, and for $5 a month you can clone anybody's voice. Then you type, and you have them say anything you want. And so I think it's very likely that we will see this continue.
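
For a sense of how little friction is involved, below is a minimal sketch of the kind of clone-then-type workflow Farid describes. The service URL, endpoints, and field names are hypothetical stand-ins for illustration, not any real vendor's API.

```python
import requests

# Hypothetical voice-cloning service; the URL, endpoints, and JSON fields
# below are invented for illustration and do not belong to any real vendor.
API_URL = "https://api.example-voice-service.com/v1"
API_KEY = "your-api-key"

def clone_voice(sample_path: str) -> str:
    """Upload a short reference recording; get back an ID for the cloned voice."""
    with open(sample_path, "rb") as f:
        resp = requests.post(
            f"{API_URL}/voices",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"sample": f},
        )
    resp.raise_for_status()
    return resp.json()["voice_id"]

def synthesize(voice_id: str, text: str, out_path: str) -> None:
    """Have the cloned voice 'say' whatever text is typed."""
    resp = requests.post(
        f"{API_URL}/speech",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"voice_id": voice_id, "text": text},
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

voice = clone_voice("reference_clip.wav")  # a minute or two of someone's audio
synthesize(voice, "Any words you type here.", "fabricated_statement.wav")
```

The point of the sketch is its brevity: two HTTP calls stand between a public recording of a voice and an audio file of that voice saying arbitrary text.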

RASCOE: There are some state laws that regulate the use of AI or deepfakes in certain instances. Can you talk about what guardrails do exist?

FARID: Yeah. So what's tricky here is that it's not illegal to lie in a political ad. Most of the laws that exist are either toothless - that is, they're extremely hard to enforce - or are not broad enough to really cover the most extreme cases. There was a law, for example, here in California that tried to ban political deepfakes, and it was eventually allowed to sunset because it was so ineffective. But the reason it was ineffective is, first of all, it required intent. You had to show that the person creating it intended to be deceptive, and proving that kind of intent is nearly impossible. It also banned deepfakes only within 90 days of an election. What do you do 91 days before the election? The internet is also borderless, so if the person creating it and the person being affected are not in the state of California, the law has no impact. And so I think the guardrails are not going to come from a regulatory regime. I think the guardrails have to come from two places. One is the campaigns. The campaigns have to decide that it's not OK to use AI to distort the record of our opponents. And I would like to see them do that, but I'm also not naive. The other place this can be done is in the services that are being used to create the fake content. They can start to say, well, we think that cloning the voice of Trump and DeSantis and Biden and Harris may not be the best idea, and we just won't let you do that. But here's the other thing they can do. They can watermark and fingerprint every single piece of content that they produce. They can say, we are going to insert digital watermarks, and we are going to extract digital fingerprints that will allow us to track this content over time, so that if these clips do end up in the wild, we have a reasonable ability to detect them and determine what's real and what's not.
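
To make the watermarking-and-fingerprinting idea concrete, here is a minimal sketch, assuming a service that records a coarse perceptual fingerprint of every clip it generates so the clip can be recognized later if it surfaces in the wild. The band-energy hash below is a toy invented for illustration; production systems embed watermarks in the signal itself and use robust, error-tolerant fingerprint matching rather than an exact hash.

```python
import hashlib
from typing import Optional

import numpy as np

def spectral_fingerprint(samples: np.ndarray, bands: int = 32) -> str:
    """Toy perceptual fingerprint: hash which frequency bands carry
    above-median energy. Unlike a file checksum, it depends on the
    audio content rather than the exact bytes of one encoding."""
    spectrum = np.abs(np.fft.rfft(samples))
    energies = np.array([c.mean() for c in np.array_split(spectrum, bands)])
    bits = (energies > np.median(energies)).astype(np.uint8)
    return hashlib.sha256(bits.tobytes()).hexdigest()

# At generation time, the service logs every clip it produces.
registry: dict[str, str] = {}

def register(audio: np.ndarray, source: str) -> None:
    registry[spectral_fingerprint(audio)] = source

def lookup(audio: np.ndarray) -> Optional[str]:
    """Later, when a clip surfaces online: was it generated by this service?"""
    return registry.get(spectral_fingerprint(audio))
```

A real deployment would also need error-tolerant matching, since re-encoding or trimming a clip perturbs the spectrum; the exact-hash lookup here only shows the registry idea.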

RASCOE: So at this point, do we have that ability to determine what is real and what is not? Can we figure that out?

FARID: The answer is yes and no. The technology to do the watermarking and fingerprinting is available. The companies - which, by the way, agreed to voluntary principles with the White House on making sure that generative AI isn't used in malicious ways - now actually have to start deploying it. Now, on the flip side, there will be bad actors in this space. There will be people who don't use this technology. And that's when the work that I and my students do here at UC Berkeley comes into play, where we ingest an image or an audio clip or a video, analyze it, and try to determine if it's real or not. The problem with that approach, of course, is that the half-life of a social media post is measured in minutes. So by the time we end up analyzing it and fact-checking it, it's great for the journalists, but not so much for the millions of people who have already seen it online. But I think we need all of these technologies. And of course, we still do need the government to step in, in the most delicate way possible, because we have to be careful not to infringe on the right of candidates to say what they want while still putting in some regulatory controls.
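
As a toy illustration of what the forensic side can look like, here is a sketch built on one invented cue: the assumption that some synthesizers produce unnaturally smooth energy above 4 kHz compared with natural speech. The feature and threshold are hypothetical placeholders; real detectors of the kind Farid describes combine many learned cues across audio, images, and video.

```python
import numpy as np

def high_band_flutter(samples: np.ndarray, rate: int, frame: int = 2048) -> float:
    """Normalized variance of per-frame energy above 4 kHz.
    Illustrative cue only: the premise that synthetic speech is
    'too smooth' up there is an assumption, not an established test."""
    energies = []
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
        energies.append(spectrum[freqs > 4000.0].sum())
    e = np.asarray(energies)
    return float(e.var() / (e.mean() ** 2 + 1e-12))

def looks_synthetic(samples: np.ndarray, rate: int, threshold: float = 0.05) -> bool:
    # The threshold is a made-up placeholder; a real system learns its
    # decision boundary from labeled real and synthetic recordings.
    return high_band_flutter(samples, rate) < threshold
```

Even a well-trained real detector runs into the timing problem Farid raises: the analysis happens after distribution, while a post's audience peaks within minutes.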

RASCOE: That's Hany Farid, a professor at the University of California, Berkeley, and a digital forensics expert. Thank you so much for joining us.

FARID: Great to be with you, Ayesha.
