As tech evolves, deepfakes will become even harder to spot

SHANNON BOND, HOST:

A video of Ukrainian President Volodymyr Zelenskyy circulated widely on social media last spring. It showed him calling on his soldiers to surrender to Russia. But that never happened. The video was a deepfake. These are images or recordings that have been manipulated to misrepresent someone's actions or words. And the fake video of Zelenskyy wasn't that well made, but it got traction after hackers managed to get it briefly on Ukrainian television and the broadcaster's website. As technology evolves, experts warn deepfakes will become harder to spot and even further undermine public trust.

Hany Farid is a professor at the University of California, Berkeley, and a digital forensics expert. He joins me now. Welcome, Hany.

HANY FARID: Good to be with you again, Shannon.

BOND: Just how good is the technology to make these kinds of images, videos, other things right now?

FARID: I've been studying manipulated media for over 20 years now. And what I've seen over those 20 years is that every few years, the technology for manipulating media gets better and better. But I've never seen anything like the last five years, where we can now whole-cloth synthesize audio in your voice, or a video of you saying and doing something you never did. And what is really dramatic about this technology is that we have democratized access to what used to be in the hands of Hollywood studios and state-sponsored actors - now anybody can generate this. And that's a very different threat vector in terms of disinformation campaigns meant to sow civil unrest and interfere with democratic elections. Every few months, we see more and more advances that are really dramatic and exciting on one hand and worrisome on the other.

BOND: Can you give an example of one of these?

FARID: Yeah. My favorite recent one is the work from OpenAI called DALL-E. It has taken the internet by storm over the last few weeks. It is what is called paint-by-text. The way it works is you go to the website and just type in: make me an image of a squirrel wearing a life jacket surfing in the Atlantic Ocean - anything in your head. And it will make an image that is eerily faithful to that description. That is really impressive. But you can also see why it can be incredibly dangerous. And, of course, once we are able to do that not just for images but for audio, you're looking at real challenges in discerning what is real and what is fake on the internet.
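The paint-by-text workflow Farid describes amounts to a single call against a text-to-image API. Below is a minimal sketch using OpenAI's Python SDK; the model name, parameters, and API access are illustrative assumptions rather than details from the interview.

# A minimal sketch of the paint-by-text workflow, assuming access to
# OpenAI's Images API via the official Python SDK (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-3",  # assumed model identifier; DALL-E versions have changed over time
    prompt="a squirrel wearing a life jacket surfing in the Atlantic Ocean",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image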

BOND: You have also talked about another risk - as this technology becomes more democratized and accessible, you don't actually have to go down the road of making that fake video...

FARID: Yeah.

BOND: ...Or picture to inject doubt. Can you explain this idea?

FARID: I think this is what keeps me up at night more than the actual fake content. If we enter a world where any story, any audio recording, any image, any video can be fake, well, then nothing has to be real. We can simply dismiss inconvenient facts. A video showing police violence - it's fake. A video of human rights violations - it's fake. A video of a candidate saying something offensive - it's fake. How, then, do we reason about the world? If everything can be manipulated, how do we get news in a trusted way? Long term, that is what really worries me.

BOND: How do you approach kind of peering around the corner and preparing for what's coming next, whether it's DALL-E-style generated videos or live videos that are faked? I guess, how are you approaching these problems?

FARID: We shouldn't think about where the problem is today. We should think about where the problem is tomorrow. And I can tell you that we are going to enter a time when these deepfake videos and images and audio start to become more prevalent in disinformation campaigns. And the hope is that we can get out ahead of that - not to stop it, but to mitigate it. The only hope we have is to develop technology that takes it out of the hands of the amateurs, the average person on the internet, and makes it more difficult, more time-consuming, more risky to produce these. But we should acknowledge that it's always going to be possible to create fake media.

BOND: Well, but as you say, these are technologies that have essentially become democratized, that people do have access to. So, I mean, what does that mean? Is there a regulatory response here? Is it on the companies themselves that are developing these technologies to, you know, decide how to police their use?

FARID: I think the fact is that the regulatory regime moves way too slowly. Members of the U.S. Congress are simply not sophisticated enough, frankly, to understand the complexity in these technologies. I want to see companies have some reasonably strong guardrails to prevent abuse. But here's what we know. No matter how good the top companies are, there's going to be a bad actor in this space. And the bad actor's going to get a hold of that technology. And we've already seen that in the form of non-consensual sexual imagery.

BOND: Revenge porn.

FARID: Exactly, revenge porn. The worst use case of deepfakes is now absolutely everywhere. In the technology sector, we often ask if we can do something, and I think we should start asking: should we do something? Because the fact is, once you develop these technologies, there is no controlling them. And I think a lot of researchers are developing technologies because they can and not necessarily because they should. If the downsides of a technology are so much greater than the upsides, maybe we shouldn't be developing it in the first place.

BOND: So for listeners out there who are just living their lives, you know, browsing the internet, how worried should they be about this problem right now?

FARID: I think they should be worried about disinformation generally. The internet is awash with nonsense and lies and conspiracies beyond the deepfakes. And I think that if you are somebody who is getting the majority of your - if we can call it this - news from social media, you should stop. I think people should grow weary of being lied to and manipulated every minute of every day and return to trusted sources so that we can have honest conversations about what's going on in the world.

BOND: Hany Farid, professor at the University of California, Berkeley, thanks so much for joining us.

FARID: Great talking to you, Shannon.

(SOUNDBITE OF CORBIN ROE, MAYNE AND NICXIX SONG, "DRIP") Transcript provided by NPR, Copyright NPR.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.
