Fresh Air - The Promise & Peril Of AI
Episode Date: March 19, 2025. Pulitzer Prize-winning journalist Gary Rivlin says regulation can help control how AI is used: "AI could be an amazing thing around health, medicine, scientific discoveries, education ... as long as we're deliberate about it." He spoke with Dave Davies about some of his fears about artificial intelligence. His book is AI Valley. Also, Maureen Corrigan reviews Karen Russell's new Dust Bowl-era epic, The Antidote. Learn more about sponsor message choices: podcastchoices.com/adchoices. NPR Privacy Policy
Transcript
Anas Baba is NPR's eyes and ears on the ground in Gaza.
Wherever you put your eye to the horizon, it's the same. Destruction everywhere.
On The Sunday Story, what it's like to be a reporter covering the war in Gaza while also living through it.
Listen now to The Sunday Story on the Up First podcast from NPR.
This is Fresh Air.
I'm Dave Davies.
For decades, scientists have dreamed of computers
so sophisticated they could think like humans
and worried what might happen
if those machines began to act independently.
Those fears and aspirations accelerated in 2022
when a company called OpenAI
released its artificial intelligence chat bot called ChatGPT.
Our guest, veteran investigative reporter Gary Rivlin, has burrowed deep into the AI world
to understand the plans and motivations of those pushing artificial intelligence and
what impact they could have for good or ill.
In his new book, Rivlin writes that in March of 2023 there were more than
3,000 startup companies in the US working on artificial intelligence with
new ones popping up at a rate of 30 per day. While AI is already in use in some
fields such as medical diagnosis, many believe the field is on the verge of a
new breakthrough, achieving artificial general intelligence, systems that truly
match or
approximate human cognitive abilities. Some believe it could be as transformational to human
society as the industrial revolution. But many fear where it may take us. A poll of AI researchers
in 2022 found that half of them believe there's at least a one in 10 chance that humanity will go extinct due to our inability to control AI.
In 2023, President Joe Biden issued an executive order imposing some regulatory safeguards
on AI development, but President Trump quickly repealed that order upon taking office, saying
Biden's dangerous approach imposed unnecessary government control on AI innovation. We've invited
Gary Rivlin here to help us understand all these issues and developments.
Rivlin has worked for the New York Times among other publications and published
ten previous books. In 2017 he shared a Pulitzer Prize for reporting on the
Panama Papers. His new book is AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on
Artificial Intelligence.
Well, Gary Rivlin, welcome back to Fresh Air.
Thanks for having me.
Let's just start with a couple of basics.
We're used to computers being very smart.
Way back in 2011, Siri appeared on Apple products.
What distinguishes artificial intelligence from just smart computers?
You know, there's this sense out there that in 2022, we suddenly had artificial intelligence.
It's been much, much more gradual than that. You know, Google has been using machine learning,
artificial intelligence since the 2000s, you know, to decipher imprecise Google searches, to figure out how much to
charge for the various ads they throw on the system.
You know, Google Translate's been around since the mid 2010s.
That's AI.
So, you know, we've had auto-complete.
You know, spam filters.
That's AI.
You know, but you're touching on a really interesting question. It's not this clear like, oh, this is a smart machine.
This is artificial intelligence.
The way it's kind of played out now is that these machines can learn, right?
I mean, the old approach had been you encode rules.
You just teach the computer, here's exactly the set of rules, just follow it. Now it's machine learning,
deep learning, that the computer is ingesting vast troves of data, books, the public internet,
Amazon reviews, Reddit posts, whatever it might be, articles, and it's finding patterns and,
in quotes, learning, you know, and then they're fine-tuned and then they get
better at communicating with us and such. So, you know, there really isn't this, oh
artificial intelligence is this, and in fact the term artificial intelligence is
controversial just in the sense that, you know, right now it's more amplified
intelligence. We could use this thing to get smarter, to find patterns that humans couldn't
possibly understand because we can't read billions of words. So, you know, there's another
definition that AI really should be alien intelligence because the weird thing about
AI is that it seems to know everything, but it doesn't understand a thing. You know, there's
this term, I love it, that a linguist at the University of Washington uses: the stochastic parrot. You know,
it's just like, it's like a parrot. It just, it's repeating words randomly, but
it doesn't really understand what it's saying.
Right, but it's learned a lot of words. Okay, now this may be another artificial
distinction, but the talk now is of artificial general intelligence, a great leap forward.
What is that exactly?
Right, so, you know, AGI, just to use the phrase,
is a system that could match or exceed
human cognitive abilities across the board.
And you know, again, I feel like in some ways we have artificial general intelligence.
You know, again, you don't have to be a PhD in physics to understand this, but what's amazing about
these models is that they have deep understanding in a vast array of domains. So in one way that is AGI, artificial
general intelligence. You know, there's no set definition. It keeps on
changing. There are predictions that we're gonna have AGI the next year, two
years, maybe it's five years kind of thing. I'm dubious of those predictions. I mean, this is moving exponentially. This
is improving so fast that making predictions could be perilous. But on the other hand,
I really feel like there needs to be another breakthrough or two before we have this artificial
general intelligence, a computer from like Star Trek that you're talking to
and it's helping you explore, it's at your side,
a co-pilot, figuring out everything.
Again, an artificial distinction in that,
I don't think like one day there's gonna be this Eureka.
We have AGI, I do guarantee there will be startups
and large companies that say Eureka,
we have artificial general intelligence,
but they'll just play with the definition.
But, you know, a few days ago,
I'm sure you saw this, Kevin Roose, the respected tech columnist for the New York Times, wrote a piece saying
that we're gonna quickly see companies claiming they have artificial general intelligence, and whatever you call it, these
dramatically more powerful AI systems are coming, and soon.
And Ezra Klein of the New York Times opinion section says essentially the same thing. Both of them agree we're not ready for the implications of this.
Do you agree with that?
I do, and you've taken the main message of those pieces right out of my mouth.
These things are coming and they're coming fast and we're not prepared.
You know, I personally think AI could be an amazing thing around health, medicine, scientific
discoveries, education, a wide array of things, as long as we're deliberate about it. And
that's my worry. And I do believe that's Kevin and Ezra's worry, that we're not being deliberate. We started
in 2023. There were meetings at the White House and there were hearings in the Senate, and that's just
kind of dropped by the wayside, and now we have more of a laissez-faire attitude towards it. We need
to prepare for this. Like any, like any technology, there's
good and there's bad, right? The car, the car meant freedom, the car changed our society,
but the car meant pollution. The car means 30,000 to 40,000 deaths in the US a year kind
of thing. And I look at AI the same way. It could be really great if we're deliberate
about it and take steps to ensure that we get
more of the positive than the negatives because I guarantee you there will be
both positives and negatives.
You know, I mentioned in the introduction that
President Biden had issued this executive order trying to establish some
processes and guardrails and safeguards. Trump swept all that away, saying nope,
that's onerous government regulation, let innovation
proceed. And, you know, the last time you and I talked on this program, it was about
efforts to implement the Dodd-Frank reforms of the financial system. And one
of the difficulties was that that bill had general principles, but
regulators had to actually spell out what it meant to regulate some pretty complicated, you know, contracts
and instruments in the world of finance. And what you'd written about then
was how the private interests had gotten in and kind of gummed all that up
by disputing everything. But I'm wondering, what do regulations that
control something as sprawling as AI look like?
What do we need? How do we get prepared?
Right, so there are a few basic steps that the Biden administration
thought of. One: that you, in quotes, "red team" these cutting-edge
models.
Basically, you
get outsiders to try to break the system,
try to get it to jump the fence, to use the term,
to get it to misbehave, just to see what could go wrong.
And the executive order said you need to test them,
and then you need to share with the government
what you find.
That's one of the things that went by the wayside
when Trump took over as president.
But to me, I'd break it down more to the
concerns, the use of AI as a weapon of war, the use of AI for surveillance.
I worry that AI is just gonna solidify biases that we already have because the
AI is learning from us and all these inherent biases in things.
It's like we need to prepare for the impact on the job market,
which I think will be a slow roll.
I don't think like we're going to lose millions of jobs in a year kind of thing.
But, you know, it is coming and we need to prepare for it.
There's another concept, recursive learning, that these systems change in ways we don't really
understand, and that's what scares me, that we're going to let these systems loose and
they could just learn because, you know, really the way to understand any of these large language
models, any of these chatbots, is it's a mirror on us.
It's reading our collective works.
It's learning from us about imperialism and domination and
you know humans mistreating each other. It's learning about loneliness. It's
learning about freedom and independence and autonomy and all that. And to me it's
recursive learning, this idea that these models are constantly improving in
ways we don't understand, and that could be dangerous.
And they could learn how to pursue an agenda and keep it hidden, right, to deceive in their
own interests.
Yeah, so what would that look like in terms of what are the dark fears here?
I mean, that's not really theoretical.
You know, these systems, I can't remember which model it was, but they were testing
it and it was dissembling.
It was changing the files that would monitor its behavior and then lying to the people
who noticed it and said, wait, aren't you changing those files?
And here's another example. OpenAI, the creator of ChatGPT, when they came out
with GPT-4, their then cutting-edge model, in 2023, they put out a research report and
they red teamed it, they tested it and saw all the ways it could misbehave.
And one of the most interesting is that the model went to,
I think it was TaskRabbit,
it went to one of those services where you can hire a human,
maybe it was Fiverr, you could hire a human,
and it used the human to beat the CAPTCHA test,
the test that is gonna test, are you a machine or a human?
And you know, that's very clever and very, very scary.
Wow.
So what are some of the darkest fears?
I mean, starting nuclear war,
you set it to defend territory with drones
and it decides it needs to be more aggressive
than the generals want to.
I mean, what are the fears?
The way I look at it is, you look at the positives
and then you imagine what the negative could be.
So an AI that makes possible new drug discoveries
and more effective therapeutics is also one
that could create a new bioterror weapon
or it can engineer a pandemic.
I can imagine cyber thieves employing AI
to siphon off a trillion dollars
from the world monetary system
before any human being even notices it.
I guess the point is that AI could be a powerful tool for good, but it could also be a powerful tool for people with bad intent.
Everyone knows, or many people know, that you could use it to write a toast for someone's 50th birthday or for a wedding.
Scammers from a different country could use it to create a better crafted scam email. You know, these systems are so good
now that you could take seconds of someone's voice and make it sound like it's that person
speaking. So you can imagine a scenario where, you know, a kid is overseas in Europe, and the bot,
one of these systems, you know, calls grandma, pretends it's
that kid and says, I'm in trouble, wire me money.
And they're good enough to fool, you know,
the grandpa. I mean, maybe not
a parent, but I don't think we're very far away
from that, and it could certainly fool many, many
people.
Right, right.
You know, there's something that you wrote in the book. You wrote about a couple of tech guys, Tristan Harris and Aza Raskin, you know, who had real
experience in the tech world, who said they worried about AI because it's a technology
whose creators confess they do not understand why their models do what they do.
Is that literally true?
That's kind of scary.
Yeah, so they're a black box.
I mean, so nowadays it's neural networks,
models that emulate how humans learn.
They learn by reading vast stores of data,
the open internet, books, whatever,
and they improve through feedback and trial and error.
You're not really encoding the rules.
Well, it's trying to emulate the human brain.
I mean, I have two teenage sons.
We try to teach them, they read, we give them feedback and all.
They're things that come out of their mouths I don't quite understand.
That's the way I look at these chatbots, these neural networks, these large language
models.
You know, that we don't quite understand they say what they say because they're trying to
emulate the human brain as best they can.
And who could say why I'm saying the words I'm saying right now, or what exact
reaction you're going to have?
And so that's part of the miracle, the gee whiz.
These things are amazing.
But it's part of what's scary, because we
don't fully understand.
The people who create it don't fully understand
why it says what it says.
One more thing about the national political scene.
There's a lot of talk about tech bros and Donald Trump.
Elon Musk is clearly a driving force
in the administration's effort to cut federal workforce
and contracts.
There are a bunch of billionaires from the tech world
at his inauguration.
Do you think that there's an elite tech agenda
to radically reshape society at work through Donald Trump?
In a word, yes.
What scares me is there's a movement in Silicon Valley,
there's a movement in tech, the accelerationists.
Anything that stands in the way of our advancing
artificial intelligence is bad.
Often it's put in the context of competing with China.
We can't have new rules in the way.
And that is their agenda. I would say their real
agenda is that they could make a lot of money, billions, hundreds of billions, ultimately
trillions of dollars off of this, and they don't want anyone standing in their way. And so I think
if you want to understand Elon Musk, you want to understand Mark Zuckerberg,
you want to understand Jeff Bezos
and cozying up to Trump,
for a few million dollars, it's not very expensive for them.
They could have a friend in the White House
who makes sure that they can do
what they want to do unchecked.
And in fact, maybe that's my biggest fear about AI.
It's so much power in the hands of a few people.
Creating these models is so expensive.
To hire the talent, you have to pay them
a million or more a year.
To train them, it takes tens of millions,
if not hundreds of millions of dollars in computer power.
And then to operate them takes equivalent money.
It's billions of dollars and billions of dollars.
So, you know, it's becoming less and less about the startups,
and more about the same companies that dominated tech in the 2010s,
dominating in the 2020s, you know, Google, Microsoft,
Meta, which is Facebook, Amazon, a few others.
And that's really what concerns me.
That's kind of the Silicon Valley way.
Let's get five smart guys, and they're almost always guys, in a room,
and we'll figure it out.
And like, okay, we saw that didn't go so great with social networks,
and now we're having a really powerful technology,
and I'd like there to be more than just five people
in a room figuring this out.
You know, the account that you give us in the book
is pretty detailed and really interesting
about how all this unfolded.
One of the things that struck me
is that some of the leading players in
developing AI weren't just coders or computer nerds. A lot of them studied
classics or philosophy or worked in completely unrelated fields. Is there a
connection here? That's one of the things I was surprised by and found
fascinating myself that it's not just computer scientists, it's mathematicians,
it's physicists, it's philosophers, it's neuroscientists, and it's a broad range of things because,
again, it's no longer about just programming these models to act the way we want them to
act, we're trying to emulate the way humans learn.
So what a psychologist has to say, what an educator has to say about that, matters.
And a linguist is really important
to it speaking a natural language.
That's actually what attracted me
to the topic in the first place.
This idea that computers could speak to us in our language.
You didn't have to learn a programming language.
Earlier in my life, I tried to program computers. I studied Fortran.
I did too, a long time ago.
It's difficult.
It was so frustrating.
You know, you make a little mistake and you know,
whatever and the idea that you could speak to these things.
And you know, nowadays, I mean, speak to it.
You don't even have to type.
You know, they have voice.
You can talk to it.
I just found that fascinating to me.
So you do need a
wide range of people. In fact, if I had a criticism, I don't think there's a wide
enough range of people. I'd like some historians and sociologists and others
involved in the developing of these models, given the stakes.
I'm gonna take another break here. We are speaking with Gary Rivlin. He's a
veteran investigative reporter. His new book is AI Valley,
Microsoft, Google, and the trillion-dollar race to cash in on artificial intelligence.
He'll be back to talk more after a short break. I'm Dave Davies, and this is Fresh Air.
Over 70% of us say that we feel spiritual, but that doesn't mean we're going to church.
Nope.
The girls are doing reiki, the bros are doing psychedelics, and a whole lot of us are turning
inward to manifest our best selves.
On It's Been A Minute from NPR, I'm looking at why maybe you and your closest friends
are buying into wellness for spirituality.
That's on the It's Been A Minute podcast from NPR.
When you take a shower or get ready in the morning, how many products are you using?
Everything from your shampoo to your lotion.
In our study, we found that the average woman used about 19 products every day and the average
man used about seven.
These products might come at a cost.
The ingredients they contain can be harmful to our health.
Listen to the Life Kit podcast from NPR to learn more about the risks of personal care products.
If you're a super fan of Fresh Air with Terry Gross, we have exciting news. WHYY has launched
a Fresh Air Society, a leadership group dedicated to ensuring Fresh Air's legacy. For over 50 years,
this program has brought you fascinating interviews with favorite authors, artists, actors, and more.
As a member of the Fresh Air Society, you'll receive special benefits and recognition.
Learn more at whyy.org slash Fresh Air Society.
You know, you made the point earlier that it's enormously expensive to develop AI.
I mean, the talent is high priced, and it takes tons and tons of computing power to
develop the systems, to run them once you have them, which means, you know, not a couple, three million dollars,
but hundreds of millions in some cases, or more, which means that the big companies in tech,
you know, Microsoft, Google, you know, Meta, we all know the names,
have an edge. But it's interesting, as I read your story, that that's no guarantee of success, is it?
Sometimes it's kind of an obstacle,
having a big organization.
You know, it's interesting, let's use the example of Google.
Let's give Google credit first.
They were so far ahead of almost everyone else on AI.
They hired some of the best talent,
they were employing machine learning, deep learning,
long before most everyone else.
They did some of the more cutting edge things.
In fact, the breakthrough that led to ChatGPT was actually out of Google.
Google had inside the company, in around 2020, a ChatGPT equivalent.
But Google takes in a lot of revenue.
There's a lot of risk if this chatbot misbehaves.
There was famously this example of Microsoft, I think it was 2016, 2017, came out with Tay.
And it was trained on social media and that kind of thing.
And within 24 hours, it was a Holocaust-denying white supremacist. And
of course, Microsoft, worrying about the reputational risk, pulled the plug on that rather quickly.
And I feel like that's haunted the giants. So even though Google was far ahead, even
though Google could have had their version of ChatGPT, and it could have been Google that changed
the world, they were scared of it. And never underestimate
the ability of a giant to stumble over its own feet.
They have layers and layers of bureaucracy.
They have a huge public relations department
that's whispering in the CEO's ear.
I don't think it's a coincidence that OpenAI,
a startup founded in 2015, was the one that set off the starter's pistol on this,
because they didn't have as much at stake.
They could afford, reputation-wise, to release ChatGPT.
They could just make the decision without 10 layers of decision-making before they did
it.
So yes, they have an advantage. But Google, you know, Google also has like
a hundred billion dollars of reserves, whereas OpenAI has to go out and raise funds.
They've raised roughly, I don't know, 20 billion so far, and there's talk that they've
raised another 30 billion, and I might even be underestimating. And so that's 50 billion or so.
Google, they just pay for it themselves.
Microsoft, Meta, they all have deep, deep, deep reserves of money.
And so it's almost like a race of attrition.
You can use these chatbots for free. If you want the leading edge, cutting edge,
you have to pay; a consumer would pay 20 bucks a month for it.
but you know, most people are using these things for free
and it's costing the companies a lot more than $20 a month
to handle the heavy usage.
And so these things are gonna become more of a commodity.
You know, there's a leapfrogging going on,
like yes, GPT-4, that's OpenAI's,
you know, when it came out, it was cutting edge.
But then Anthropic's Claude leapfrogged over that.
And then others leapfrogged over that.
And so they're all more or less as powerful, as useful
as the other.
And it's not clear how any of these companies
are going to make money.
Google can afford to lose money on these things
for five years plus.
A startup, that's harder to do.
Right, right. And so a lot of times you see the big companies buying smaller startups that have shown promise.
It's interesting that this company called OpenAI kind of became the public face of artificial intelligence in a way.
It was a startup that didn't have, you know, the power of a Microsoft or a Google behind it.
It was this guy Sam Altman and some other folks.
Elon Musk.
Yeah, Elon Musk among others, right, right.
And there's a moment that was sort of a critical transformational point when they released
this version of ChatGPT, but that was preceded by a dinner at Bill Gates' house, which you
describe, the house being absolutely as magnificent as you would expect
Bill Gates' house to be. Tell us about that evening. What happened?
So Microsoft, starting in 2019, started investing in OpenAI.
And so, you know, they had a financial stake. So OpenAI
would give Bill Gates,
others at Microsoft,
an early peek at what they were learning.
To Gates, AI is the holy grail of computing.
He's been programming since he was a kid, practically.
So artificial intelligence is the holy grail.
He was impressed with, I think it was GPT-3 or whatever,
the most recent one
he had seen, but he gave a challenge. He said, I am gonna be impressed if it could
ace the AP Biology test. And he chose that one because it's not just regurgitating facts.
You need to analyze, you need to synthesize, you really have to show some sense of understanding and
intelligence.
And he thought that would be a great challenge.
And so he threw down the gauntlet and thought, okay, I'll hear from them in a few years,
whatever.
And not that many months later, he heard from OpenAI: okay, we're ready. And so in September of 2022,
Gates hosted a demo at his house.
It was, whatever,
30 people from Microsoft, from OpenAI,
while someone was at a computer,
a big screen was set up,
watching this computer take
this test. And, you know, within two or three answers, people were just blown away. In fact,
it did get five out of five on the test. It did pass the test. And that's when Gates became a true,
true, true believer. In his mind, as he said,
I thought I was throwing down a gauntlet
that would take a while, and suddenly
it matched my expectations.
In fact, then they kept on playing with it,
and they would just ask it,
what would you say to a father
worried about the health of his son?
And it just kind of spit out an answer, and in Gates' view,
it's kind of a better answer than most of us
could have given sitting around that room.
And, you know, they just started playing with it.
Gates started playing with it.
Others started playing with it.
And it just blew them away.
We're going to take another break here.
Let me reintroduce you.
We are speaking with Gary Rivlin.
He's a veteran investigative reporter.
His new book is AI Valley, Microsoft, Google, and the trillion dollar race to cash in on artificial
intelligence. We'll be back to talk more in just a moment. This is Fresh Air.
You know a few weeks ago there was this development which kind of shook the stock
market. This Chinese company called DeepSeek announced that they had
created this artificial intelligence system at far less cost without the sophisticated microchips
that American companies were using. It made Americans wonder, heavens are we
about to be overtaken? Or I don't know, where does all this leave us? How
important is this development? Right, so I mean to me some of that was overstated.
You know Silicon Valley companies were experimenting with smaller models that required less compute power.
DeepSeek itself was venture funded.
It was cheaper, but hardly cheap.
It still cost millions to train, presumably,
and it costs millions, tens of millions to operate.
It just didn't require as much. And that really kind of was almost an existential threat
to Silicon Valley, which had put all this money,
these tens of billions, hundreds of billions of dollars,
into building ever bigger models
that presume that you need ever more computer power.
But a couple of things.
One, I think all it means is that instead of like,
hey, we can do this at one-tenth the power,
one-tenth the cost, I think they're just gonna build
10 times more powerful models because they could do
more with less.
And when you say they, do you mean DeepSeek, the Chinese,
or do you mean who?
No, the American companies.
They're learning from this.
They'll integrate it.
Like I said, I feel like the AI companies I was following, they were already for a year
plus paying attention to smaller models.
Maybe you don't need this whole huge system to answer a simple question.
Maybe we should have a bunch of smaller models, and like, okay, this one's an expert in this, that one's an expert in that, and we just have a smaller model
field questions. But I think what an OpenAI would say, other than the fact, and it's an ironic complaint, is that DeepSeek
used our model to train theirs. I say it's ironic because OpenAI is being sued for taking the copyright, the intellectual property,
of the New York Times, of book writers,
of artists, of musicians and all.
But you know, I think what's interesting about DeepSeek
is it really gives hope to startups.
Like, wait, okay, maybe you don't need as much money
as we thought you do to create a company.
But, you know, I do think it's important to understand that they still were using a lot of computer power.
They still required a lot of money, just not as much
as some of these larger companies that we've been talking about.
You know, Reid Hoffman, the investor who's been very active in this area, is ultimately
very optimistic about where AI is going to
take us.
Where are you on that scale?
I do feel that AI is going to bring about incredible things.
I think it's being overstated.
You hear people say that it's going to close the divide between the developing world and
the developed world.
I don't think that's so.
But, you know, there's this interesting study that came out recently, the idea of an AI tutor, a tutor in the pocket, that everyone
has access to. About five billion people around the globe have a smartphone, and
you can use that smartphone as a tutor.
And so there was a study in Africa, like, let's let these kids,
after school, have access to these AI tutors. And in six weeks, they showed two years' worth
of advancements.
And I really do think around education, around science.
You know, science is Balkanized, right?
It's, you know, specialties and subspecialties,
and there's its own vocabulary, lingo, in every subspecialty.
You know, these large language models could read
across specialties and connect the dots.
They can make connections that no human being can do.
And I think we're going to see some amazing scientific advancements.
Creation of vaccines, of better therapies.
You know, there are some who predict, and I actually think there's a lot to it, that
the mortality rate for most cancers is going to go way down because of AI.
So I really do think AI could do some amazing things.
It's just, I just don't know how bad the bad's gonna be.
If I had one wish, I wish we were dealing with the concerns
that are within the line of sight,
the stuff that we can imagine.
Like, wait, it could be used for scams,
it could be used in warfare, instead of like this idea of the
robots are going to take over and subjugate humanity.
I guess that's possible, but not in the short term, not in the medium term, you know, just
in kind of in the long term.
And if we're deliberate about it, I think there's no doubt that AI could be a positive. You know, again, I just compare it to the internet. Is the internet a great thing? Like, no, I could tell you a lot of negatives with the internet, but, you know, I think the internet has changed society in a lot of ways that we like. You know, the smartphone, the same kind of thing. So it's going to be a mixed bag. And I guess I'm keeping my fingers crossed that, you know, despite the next four years, when there's not going to be much regulation, not much checks and balances, AI is going to be a net positive.
Speaking of guardrails, what rules, if any, do you have for your kids and their use of
chatbots?
You know, right after ChatGPT came out, the middle school where my younger son goes kind of had this idea of banning it. And it's like, wait, wait, wait, they need to learn how to use this. I'll go back to what I was saying before: we have to learn how to use this. What is this good for, and what are the ways we can't rely on it right now?
So, if one of my sons writes a composition, you know, like, throw it into ChatGPT and get some feedback on it. Like, I may or may not have caught my older son, you know, using it to write an English paper.
You just told about a million people what you may or may not have done.
You know, within three sentences it was obvious, like, okay, this is too perfect. This sounds like, you know, Cliff Notes, for those of us who are old enough to know what Cliff Notes are. But it's like, go rewrite it. So, you know, don't use it to write, but use it as a research assistant.
You know, use it for feedback.
And in fact, I see with one of my sons, the teacher's like, yeah, if you're writing something for science, use it and get some feedback on saying more clearly what it is.
But, you know, it's a very personal choice.
But I'm convinced that my kids' lives are going to be as dramatically different as mine was, growing up before the internet and before mobile phones became pervasive. I really do think AI, like the internet, like the phone, within, you know, I'll say 10 or 15 years, I could be wrong on that, but at some point in the future, is going to be at the center of their lives. And I think this next generation should get used to it because it's going to be critical to what they do, how they relate to the world, how they get employment.
The company Inflection that you write about, they had this chatbot, Pi.
You had an interesting exchange with that chatbot
about a medical issue your son had.
Do you want to share that with us?
Yeah, so we were facing this health crisis
just as Pi was coming out.
And usually what a reporter does when a chatbot comes out is they try to mess with it. They try to get it to misbehave. They try to get it to jump the fence. But I thought, let me try dealing with this in a more authentic way.
And, you know, I was really impressed.
You know, it had just the right tone, said all the right things, if not a little too perfectly. You know, it asked the right questions to get a dialogue going, kind of in the fashion of a friend. Like, how's your son taking the news?
How's the school handling it?
How are you taking care of yourself through these stressful times? You know, it was a slew of questions, probably too many questions, but, you know, it really picked up on nuance. It got little jokes. I told it a funny moment from the sit-down with the neurosurgeon, and it just responded like, you know, teenagers, am I right? You know, it gave me a lot of things to think about. But what was so interesting to me is it also didn't mean anything to me.
You know, there's this quote I love from an MIT sociologist, Sherry Turkle: the performance of empathy is not empathy. You know, it's expressing empathy, but it's not really empathy. It's just algorithms parsing human language patterns, trying to, like, figure out, oh, here's the right thing to ask and stuff.
But, you know, it really was an interesting experience. And I can understand, like, you know, if people were lonely, if people didn't have, you know, a network of people to speak with, this could be really something.
I think something people have to get used to is dropping this idea, like, oh my God, you're going to have a friendship with a bot, you're going to treat it like a therapist. Yes, of course you should go to a licensed therapist to deal with your issues. But, like, you know, what if you don't have a few hundred dollars, or whatever it costs, for a therapist every week?
And, you know, they really do help you think through, at least this bot Pi really helps you think through, what are the questions you should be asking yourself. And it was a really interesting experience for me to just try to feel like your average user, what they would feel like, you know, discussing something difficult, brain surgery in this case for my son, which, by the way, I should say had a very happy ending. Everything turned out fantastic. It's easy to talk about because of that.
Good. I'm glad you mentioned that.
But you know, the bot gave me some interesting things
to think about.
Well, Gary Rivlin, thanks so much
for speaking with us again.
Oh, my pleasure. Thank you so much.
Gary Rivlin is a veteran investigative reporter.
His new book is AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence.
This is Fresh Air.
Karen Russell's first novel, Swamplandia!, came out in 2011 and was a finalist for the
Pulitzer Prize.
Our book critic Maureen Corrigan says she expects Russell's new novel, The Antidote,
will be on a lot of prize lists this year.
Here's her review.
No one summons up the old weird America in fiction like Karen Russell does. Her tall
tales of alligator wrestlers in Florida, homesteaders on the Gothic Great Plains, and female prospectors
digging for gold mash up history with the macabre in a cracker barrel aged with dry humor.
Russell's celebrated debut novel, Swamplandia!, came out in 2011. Since then, she's published a couple of excellent short story collections, but the wait for another novel was growing a little strained. I even heard speculation that maybe all the acclaim Russell received for her first novel had blocked her. Well, The Antidote has just come out, and now we know why it took so long. American epics take a while. The Antidote is set in
a Dust Bowl-era Nebraska town called Uz, but it also reaches back to the
earlier pioneer era Russell evoked in her short story masterpiece Proving Up, which was made into
an opera. The novel is framed by two true weather catastrophes, the Black Sunday dust storm on April 14, 1935, in which people were
suffocated by a moving black wall of dust, and a month later the Republican
River flood, when 24 inches of rain fell within one day. Much of what occurs
between those two disasters is also true, emotionally.
But in Russell's worldview, the fantastic and the familiar coexist on the same plane.
Our central character here is a prairie witch who goes by the name the Antidote. Part huckster, mostly healer, she, like other prairie witches, promises to treat what ails
her customers by taking away whatever they can't stand to know. The memories that make them chase
impossible dreams, that make them sick with regret and grief, whatever cargo unbalances the cart. I can hold on to anything for anyone.
Milk, honey, rainwater, venom, blood, pour it all into me. I am the empty bottle.
Lying in a trance, the Antidote absorbs the heaviness, but not the details, of her customers' stories, which they sometimes want back. After the Black Sunday dust settles, however, the Antidote is horrified to realize she feels lighter, vacant. Some awful force has robbed her of the stories she safeguarded. Who knows how her more violent customers will react when they discover they can't make withdrawals?
Other narrators step in to amplify Russell's peculiar vision of life in Uz.
There's Del Oletsky, a teenage girl whose single mother was allegedly murdered by the lucky rabbit's foot killer, so called because he leaves a bloody rabbit's foot near his victims' bodies. Del lives with her uncle Harp, whose farm is mysteriously untouched by the all-enveloping dust. A federal agency photographer, a Black woman named Cleo Allfrey, eventually turns up in Uz.
Cleo explains her work by saying she's making advertisements for Roosevelt's
New Deal programs. She's also painfully aware of whose faces carried the most
weight with Congress. Actual depression-era photographs are scattered throughout this
novel, but the camera Cleo depends on goes Twilight Zone haywire, photographing the past and
possible futures of the town and surrounding terrain. Like Cleo's camera, Russell's instrument, her language, is uncanny.
Swathes of the spellbinding final third of this novel move deeply into the past, specifically into the buried memory of how Harp Oletsky's parents in Poland grabbed at the offer of free land in Nebraska, land they come to realize was occupied before their arrival. Here's Harp's father, guiltily recalling how he made peace not only with that land grab, but with racial hierarchy in America.
I was born a serf in all but name. My skin is the color of an unwashed onion. In America, this placed me ahead of many, on a low rung of the ladder, but higher
than the black porter. I heard the ticking pulse of a sick relief. Not me, not me, not me. The same feeling I once had
whenever one of my brothers was chosen over me for a beating. In The Antidote, Karen Russell, America's own prairie witch of a writer, exhumes memories out of the collective national unconscious and invites us to see our history in full.
There are alas no antidotes for history.
Our consolations are found in writers like Russell, who refract horror and wonder through their own strange looking glass, leaving us energized for that next astounding thing.
Maureen Corrigan is a professor of literature at Georgetown University. She reviewed The Antidote by Karen Russell. On tomorrow's show, New Yorker staff writer Andrew Marantz joins us to discuss how podcasts, livestreams, and YouTube channels have become the platforms where men who feel disillusioned and alienated go to feel seen and heard, many of them gravitating toward the MAGA movement. I hope you can join us. To keep up with what's on the
show and get highlights of our interviews, follow us on Instagram: @nprfreshair. Fresh Air's executive producer is Danny Miller. Our technical director and engineer is Audrey Bentham, with additional engineering support from Al Banks.
Our managing producer is Sam Brigger. Our interviews and reviews are produced and edited by Phyllis Myers, Anne-Marie Baldonado,
Lauren Krenzel, Therese Madden, Monique Nazareth, Thea Chaloner, Susan Nyakundi, and Anna Bauman.
Our digital media producer is Molly Seavy-Nesper.
Roberta Shorrock directs the show.
For Terry Gross and Tanya Mosley, I'm Dave Davies.