Takeaways from the roundtable with President Biden on artificial intelligence

MARY LOUISE KELLY, HOST:

We're going to get a readout now from the roundtable that President Biden convened this week in San Francisco to discuss what he called the risks and enormous promises of artificial intelligence.

(SOUNDBITE OF ARCHIVED RECORDING)

PRESIDENT JOE BIDEN: My administration is committed. It's committed to safeguarding America's rights and safety, from protecting privacy to addressing bias and disinformation, to making sure AI systems are safe before they are released.

KELLY: Well, Tristan Harris is among the tech leaders who met the president on Tuesday. He co-founded the Center for Humane Technology, and before that he held a senior post at Google. For years, he has worked to raise awareness about the threats that AI poses, and he told me some of his takeaways from the meeting.

TRISTAN HARRIS: We had quite a diverse set of stakeholders at the table - people focusing on everything from AI fairness and discrimination issues, Joy Buolamwini from the Algorithmic Justice League, the issues of AI and predictive policing or, you know, falsely identifying people in face recognition, the cost of that. You know, AI's promises that it'll help us develop cancer drugs and solutions to energy problems and climate change, but we'll need more public interest funding for that.

And we talked a lot about truth, trust and democracy because for all the issues that social media causes and misinformation and disinformation, AI will supercharge that. As people know, the example of a video that went viral of President Biden falsely giving a speech saying that he was instituting the draft - and that went viral - right? - or the photo of the Pentagon that had been bombed, which it had not been. And these kinds of things can move the stock market and really affect our financial system or, you know, what people think in an already fragile world. So we talked a lot about truth, trust and democracy. That's actually where I focused with President Biden.

KELLY: Got it. And was your sense that the message was heard?

HARRIS: Absolutely. I think that his team is making this a major priority. This was just one meeting, but there's lots of ongoing engagement. Senator Schumer, I think just yesterday, published a big speech on the things that they're planning on doing in Congress in creating AI regulations. So this is moving. We do need to move very quickly, and it's not just a national issue.

KELLY: Is there a useful precedent to have in mind for regulating technology that is so new and changing so fast?

HARRIS: Yeah, I mean, AI is certainly new in how fast the technology evolves. There is a level setting of the unique problem and challenge that AI poses because it is digital and moves around the world at the speed of bits and the speed of network connections. And so it is going to be a challenge. But, you know, we could have been one of those nuclear scientists who, after the first atomic bomb was exploded, just said, well, I guess this is inevitable. I guess the whole world is going to have nuclear bombs. And instead, a collection of people worked very, very, very hard to make sure that we now live in a world with only nine countries with nuclear weapons. And we signed the Nuclear Test Ban Treaty, and we had to come to international coordination between the two major powers. I think we need to do that with AI.

KELLY: And so circle back to my central question of what kind of sense you walked away from this roundtable with in terms of where the White House is on this. Were any specific proposals floated in terms of regulation, or are they still, in your view, in the fact-finding phase of all this?

HARRIS: In the meeting, we didn't focus on the specific regulatory proposals. It was more a discussion around the different issues that are at play. In other conversations that I've had with various people in the administration and Office of Science and Technology Policy and elsewhere, there is a deep, ongoing discussion about specific proposals. As an example, we can talk about ethics and responsibility all day long, but that will get bulldozed by the incentives to deploy these technologies as fast as possible. And one of the solutions to that is liability - that if companies are liable for the downstream harms that emerge from releasing a big AI model and what people can do with it, that can slow down the pace of market development. So that's an example of something that could move ahead.

KELLY: Before I let you go, I want to step back and just get your sense of the stakes here. I saw a talk that you gave in March in which you said - and I'm quoting - "50% of AI researchers believe there is a 10% or greater chance that humans go extinct from our inability to control AI." Are you part of that 50%?

HARRIS: I want people to know - I don't want to alarm people, but I do think that we have to have an honest assessment of the risk so that we can take the actions that are necessary to lower that risk. And I know a lot of people who work inside the AI companies who do not know even how we will safely steward what already exists. There's many dangerous capabilities that are already out there. I do think the stakes of this are incredibly high, and that's why I think people should be calling their members of Congress to advocate for the need to get this international regulation in place and these guardrails.

KELLY: Tristan Harris is co-founder and executive director of the Center for Humane Technology. Thank you.

HARRIS: Thank you so much for having me.

Transcript provided by NPR, Copyright NPR.

Mary Louise Kelly is a co-host of All Things Considered, NPR's award-winning afternoon newsmagazine.
