The NPR Politics Podcast - Lawmakers Want To Be Proactive On Artificial Intelligence Regulation
Episode Date: May 17, 2023

OpenAI head Sam Altman appeared before a Senate panel this week to talk about his ChatGPT product and the future of artificial intelligence. Lawmakers acknowledge the broad upsides of the fast-moving technology but hope to craft regulation in order to blunt the social and civic drawbacks that arrived alongside past tech breakthroughs.

This episode: political reporter Deepa Shivaram, disinformation correspondent Shannon Bond, and congressional correspondent Claudia Grisales.

The podcast is produced by Elena Moore and Casey Morell. Our editor is Eric McDaniel. Our executive producer is Muthoni Muturi.
Transcript
Hi, this is Hugh in, well, I'm not anywhere, really. I'm a synthetic voice.
Oh, my God.
I don't belong to a real person. This podcast was recorded at
1:06 p.m. Wednesday, May 17th, 2023.
Things may have changed by the time you hear it.
Oh, Lord.
Okay, here's the show.
Oh, my gosh. That still freaks me out.
That was actually kind of scary. I love it. Hey, there. It's the NPR Politics Podcast.
I'm Deepa Shivaram. I cover politics.
I'm Claudia Grisales. I cover Congress.
And Shannon Bond from NPR's disinformation team is with us today. Hey, Shannon.
Hey, guys.
So Congress is racing to try and catch up to exploding advances in AI. As you can tell,
this podcast is also trying to do some
catching up. But it's really us, I promise. No AI voices from the three of us in this show,
I can guarantee that. So for the past several weeks, Senate Majority Leader Chuck Schumer has
met with at least 100 experts in artificial intelligence to craft legislation around this
technology. And yesterday, the Senate held a hearing with the AI executive behind ChatGPT. So, Claudia, talk me through some of
these meetings, 100 of them. What is Schumer hoping to achieve here? Right. He's trying to
craft a bipartisan consensus behind comprehensive legislation to install safeguards for AI. And I
sat down with him for a few minutes to talk about it. He said it's probably
the most important issue facing our country, families, and humanity in the next hundred years.
So he tried to illustrate that there's a lot at stake here. He said it's a national issue,
a country issue, human issue, but it's easier said than done. And he knows that. He admits that.
They want to try and craft law where we can see the tremendous good that AI could be capable of, but also put guardrails where there are worries, where there could be tremendous bad. And he said this is very
difficult because it is moving so quickly, it's changing so quickly, and he's facing a bitterly
divided Congress. So it's going to be very tough for him
to weigh in on an issue when it's very hard for Congress right now to pass any kind of bipartisan
legislation for the most part. Yeah, and this is, I mean, an issue that I think a lot of us don't
even realize how much it's already impacting us and our industries and our jobs. And clearly,
this is very top of mind. Shannon, what are the biggest concerns around AI technologies that the government might want to have a say in? What kind of guardrails
would they want to put in here? Yeah, I mean, I think it can be easy to sort of go really
catastrophic with sort of, you know, these kind of extreme, far-off warnings about, you know, are the robots all going to take over? But actually, I think with the stuff, you know, that critics are talking about, but also it sounds like lawmakers are talking about as well, a huge question on everyone's minds is, you know,
what is going to be the impact on jobs? An IBM official appeared at this hearing this week in
front of Congress. You know, IBM has said, you know, it's going to pause hiring for certain
positions because it thinks over time it could be replacing, you know, close to 8,000 jobs with AI.
You know, these are back office jobs. I mean,
it affects so many industries, especially any kind of knowledge work or anything that can be easily
automated. So I think there's going to be big questions around that. There are big questions
around privacy. You know, these systems, systems like ChatGPT are trained using huge amounts of
data, you know, scraped from the internet. And there's a lot of questions around, you know, like, you know, what gets involved? You know, can you opt out? This is something that,
you know, a lot of artists and musicians are really concerned about, but also, you know,
average people should be worried about as well. There are questions about bias. You know, with these systems, it can be easy to say, oh, it's just an algorithm. But algorithms are written by people, right? These are systems that are built by people, and they reflect our own biases. And so if we're thinking about using AI, you know, in all kinds
of parts of life, you know, we need to be thinking about what are the impacts on people who are
going to be affected by the decisions we're turning over to AI. And then, of course, there's the
question about disinformation, about misleading information, you know, manipulation. And you can
see how that could have huge effects, you know, if you can really scale up interference in elections on
social media. You know, one of the things that Sam Altman from OpenAI spoke about in the hearing
was this, you know, the idea that you could have essentially like personalized interactive
manipulation and disinformation. And, you know, there's just lots and lots of questions about how
we are going to handle the impacts in these different risk areas. Yeah, that's no small
thing. I mean, I know we're not talking about robots literally taking over, but these are some
pretty existential questions that we're throwing out there and lawmakers are going to have to
grapple with all of this. So, Claudia, what are they saying? Are they using AI? Are they familiar
with this? How are they kind of responding to all of these questions that are being raised?
Yeah, they are using AI. It's pretty interesting how much ChatGPT, for example, has just permeated society.
And we're seeing it here on the Hill as well. We've seen several members of Congress use ChatGPT, for example, to write remarks for a hearing and read from those. Yesterday, I think we had a first during
a Senate Judiciary subpanel hearing where we saw the chairman of that panel, Richard Blumenthal,
use AI-generated audio software to basically play his voice, or mimic his voice. It was drawn
from floor speeches. Too often, we have seen what happens when technology outpaces regulation.
If you were listening from home,
you might have thought that voice was mine
and the words from me.
But in fact, that voice was not mine.
The words were not mine.
And so it was just one of these moments
where he's trying to illustrate the dangers here.
People were smiling and smirking and laughing a little bit when that happened. But at the same time, later in the hearing, he talked about how people's voices can be stolen. And so, yes, this is going to be challenging for Capitol Hill to address this, for lawmakers to address this. They're basically facing the equivalent of trying to put brakes on a runaway train.
They've already missed critical windows to regulate the internet and social media. And I
talked to law professor Ifeoma Ajunwa at the University of North Carolina at Chapel Hill, who co-founded an AI research program there. And she talks about how
there are not enough experts in both computer science and law on Capitol Hill, and that makes AI lawmaking all the more challenging.
AI or automated decision-making technologies are advancing at breakneck speed.
And there is this AI race, yet the regulations are not keeping pace.
Yeah, I'm definitely having flashbacks to the previous hearings where lawmakers were interviewing Google's CEO and Mark Zuckerberg and asking questions that we were all like, do you even know? Do you know what this is? And it was really hard to watch.
Yeah, it's really
interesting in terms of they have a lot to wrap their arms around. They're already behind. And
also this professor, Professor Ajunwa, told me that maybe it's up to the White House to try and
get on this quicker.
They have that ability with executive orders. And we've already seen the Biden White House
roll out some initiatives. So they are trying to get on top of this.
All right. We're going to take a quick break and we'll be back in a second.
And we're back. Yesterday's hearing was with the OpenAI CEO, Sam Altman. OpenAI is the company behind ChatGPT. So Shannon, how did that hearing go? Was he open to the idea of any regulations here?
Yeah, so, you know, we've gotten used to seeing tech executives getting grilled on Capitol Hill and being, you know, yelled at by senators and other
lawmakers. First of all, I would say Sam Altman got a pretty enthusiastic reception from a lot
of these lawmakers. But I also thought, you know, it was interesting, you know, he kind of came in,
you know, this is like the brand new technology. There's lots of hype around it. People are pretty
excited about it. People are also pretty wary about it. And so, you know, it was interesting
to sort of see him come in and really, you know, from the beginning say, you know, we want to be regulated,
you know, giving some pretty specific ideas about how regulation could shape up.
And, you know, I used to cover a lot of these social media companies, and compared to how they used to come in, it took them a lot longer, right? It took Mark Zuckerberg at Facebook a lot longer to kind of come around to the idea of like, yes, there should be regulation, and to agree with any sort of specific regulations.
And that was very different.
I mean, and Sam Altman, I think, was also pretty candid about this idea that there are big risks here.
And he talked about how, you know, if this technology goes wrong, it can go quite wrong. And so I think he was very much trying to strike this sort of balance
as one of the leading, maybe the leading company right now in AI saying, you know,
this is something we're taking seriously, but also clearly wanting to influence the shape of
whatever lawmaking gets done. It was really interesting in terms of how open he was on
these ideas of regulation. And as Shannon mentioned, you know, if the technology goes wrong, it can go quite wrong. And he said he was open to models that would require testing of these various AI programs
or licensing requirements, or allowing the AI industry to be overseen by a new government
regulatory body. The devil's in the details. No legislation has been written yet,
and we'll see what the back and forth looks like once that gets started.
Yeah. And I would say, you know, on that idea around creating a new agency, sort of a record scratch moment for me during the hearing was Senator Kennedy then saying, well, maybe you could lead that agency.
And Sam Altman's like, well, I already have a job. I mean, it's kind of remarkable, because his company is going to be one of the targets of these regulations.
You heard from the lawmakers how much they feel they missed this moment with the internet, with social media, to sort of rein these companies in before they caused tremendous damage to our society. And now, you know, despite the fact that we've already seen the evidence of just how harmful some of this technology can be,
there still is no progress on regulating most of these companies. You know, lawmakers are very
aware of that. But I also think, I get the sense that the AI industry is also aware of that. And
so they want to very much from the beginning show themselves or present themselves as being
kind of responsible and open to this because they don't want that kind of backlash. Yeah, that's interesting, though,
because I feel like there's like two sides of it a little bit, right, where they want to come across
as open and kind of like cooperative and stuff like that. But I'm a little confused because
doesn't more regulation generally hurt tech companies? Is that not the case here?
Yeah, you know, it's interesting. I was outside the doors, closed doors, where Altman had a dinner with a group of bipartisan members.
And so the message was a little different behind closed doors when I heard lawmakers talking about what they heard.
And Altman warned them in that room that aggressive regulation could hurt AI and, in turn, hurt the growth that AI could fuel in the economy. And so he highlighted
a lot of the positives, but he also warned against going too far if Congress were to get there.
And I think with these kind of pushes by companies, there's a way in which if you are
already a dominant company in the space, like, of course, you want to be involved in shaping
the laws that get written about that space, because that can help entrench your own dominance.
Right. And so that is one of the questions here. You know, obviously, lawmakers need to understand this industry, and they do need to talk to people in the industry to help understand it.
But there is this question of, like, just how beneficial whatever gets written will be to companies like OpenAI.
Generally speaking, on the timeline here, I know that Congress is trying to, you know,
catch up, do their homework, not bungle this like they did with social media and whatnot.
But in a lot of ways, they're already kind of too late.
The European Union is far ahead on this kind of regulation.
Shannon, what does that look like and how far ahead are they really?
Yeah.
So, I mean, like with many of these areas,
like things around privacy and social media rules, the EU is definitely moving much faster on this.
And, you know, the consequence of that is that the rules the EU sets end up sort of becoming kind of de facto global regulations. You know, we've seen, you know, many of these tech companies
have to change how they operate because of things like the European privacy law. So the EU does have this framework for regulating AI. It's a risk-based framework where the idea is
it's much harder to sort of say we're just going to blanket regulate a technology.
Their approach seems to be we're going to look at different cases. So how might AI be allowed to be
used in contexts like elections or politics or, you know, medical information, I mean,
different areas that they've identified of risk.
And they're moving ahead with this and, you know, clearly at a much faster pace than anyone
in the U.S.
I mean, Claudia mentioned, you know, the White House has talked about this.
Joe Biden hosted leaders of many American tech companies working on AI recently.
But again, we haven't really seen anything concrete.
You know, there have been a number of
bills proposed, including things about regulating the use of AI in election campaigns or things
around disclosure. If you're going to be using ChatGPT to write fundraising messages, what are your obligations? But again, nothing really moving quickly. And, you know, we heard from lawmakers this week, including Senator Hawley, who was actually kind of skeptical of the idea that you could regulate this.
Having seen how agencies work in this government, they usually get captured by the interests that
they're supposed to regulate. They usually get controlled by the people who they're supposed
to be watching. I mean, that's just been our history for 100 years. Maybe this agency would be different.
I think there is always this question of, like,
just how quickly can the U.S. move on this?
And will it even matter if kind of the global regulatory force
is really coming out of Europe?
You know, and even with all this momentum in the EU
and seeing AI just going, you know, exploding right now
without, you know, Congress keeping pace, there's a lot of members here who are undeterred. Schumer's among them.
Hawley's pretty interesting.
I talked to him a few days before that hearing, and he kept talking about,
I've got to get educated, got to get educated.
It was obvious by this week's hearing that he had really caught up on a lot of what's going on in the AI industry
by the time he was able to question Altman and others
about options in terms of regulation. So we're seeing members really trying to catch up. There's
one member in the House. This is Representative Don Beyer of Virginia. He's actually gone to school,
back to school, to learn about AI. So he's doing that kind of on the side to catch up. We're seeing
other members like Ted Lieu of California. This is a Democrat. He introduced legislation this year that was written by ChatGPT. That's never happened before. Another first that we're seeing. And so we are seeing members trying to catch up as much as they can right now. Whether they will, that's to be determined. But they're trying to stay on track and catch up.
Shannon Bond, thank you so much for joining us today.
Yeah, thanks for having me.
I'm Deepa Shivaram. I cover politics.
I'm Claudia Grisales. I cover Congress.
And thank you for listening to the NPR Politics Podcast.