Consider This from NPR - Bubbling questions about the limits of the AI revolution
Episode Date: August 24, 2025. OpenAI founder Sam Altman floated the idea of an AI bubble, an MIT report found that 95% of generative AI pilots at companies are failing, and tech stocks took a dip. With the AI sector expected to become a trillion-dollar industry within the next decade, what impact might slowing progress have on the economy? NPR's Scott Detrow speaks with Cal Newport, a contributing writer for The New Yorker and a computer science professor at Georgetown, about the limitations of the AI revolution. For sponsor-free episodes of Consider This, sign up for Consider This+ via Apple Podcasts or at plus.npr.org. Email us at considerthis@npr.org. This episode was produced by Elena Burnett. It was edited by John Ketchum and Eric McDaniel. Our executive producer is Sami Yenigun. Learn more about sponsor message choices: podcastchoices.com/adchoices. NPR Privacy Policy.
Transcript
Debates on artificial intelligence usually go one of two ways.
It's either hyped as a savior or derided as a reaper lurking just around the corner.
But up until very recently, nearly everyone agreed that the technology is evolving fast
and that the billions and billions of dollars invested in it are a pretty good bet.
Earlier this month, Meta CEO Mark Zuckerberg announced that he thought superintelligence was within sight.
I think an even more meaningful impact in our lives is going to come from everyone
having a personal superintelligence that helps you achieve your goals,
create what you want to see in the world, be a better friend,
and grow to become the person that you aspire to be.
Anthropic CEO Dario Amodei had a starker prediction
that AI could eliminate up to 50% of new white-collar jobs
and could raise unemployment by 10 to 20%.
Here he was in an interview with CNN.
I think we do need to be raising the alarm.
I think we do need to be concerned about it.
I think policymakers do need to worry about it. Many policymakers are worried about it. Former Transportation Secretary Pete Buttigieg,
seen by many as a potential presidential contender next time around, told NPR he is concerned about
the next few years. It'll be a bit like what I lived through as a kid in the industrial
Midwest when trade and automation sucked away a lot of the auto jobs in the 90s, but 10 times,
maybe 100 times more disruptive because it's happening on a more widespread basis, and it's happening
more quickly. So it really seemed like the whole world was preparing for the dawn of a new era.
But then things shifted.
Earlier this month, OpenAI launched the most recent version of its flagship product,
ChatGPT, which many users found disappointing.
And then weeks later, Sam Altman, the CEO of the same company, warned of a looming AI bubble.
To top it all off, MIT put out a recent study saying that 95% of AI pilots at companies are falling flat.
Only 5% are succeeding at, quote, rapid revenue acceleration.
All of this has made investors question whether AI is still the safe bet it used to be.
Consider this. For years, we have been told the explosive growth of AI
could radically change life for the better or for the worse.
So has that growth stalled? And is this as good as AI is going to get?
From NPR, I'm Scott Detrow.
It's Consider This from NPR.
I asked ChatGPT to write an introduction for this segment. My prompt: a 30-second introduction for a radio news segment, on the topic of how, after years of promise and sky-high expectations, there are suddenly doubts about whether the technology will hit a ceiling.
This is part of what I got.
Quote, for years it was hailed as the future, a game changer destined to reshape industries, redefine daily life, and break boundaries we hadn't even imagined.
But now, the once-limitless promise of this breakthrough technology is facing new scrutiny. Experts are asking, have we hit a ceiling? Okay, now back to the humans.
We're going to put that question to an expert of our own. Cal Newport is a contributing writer for
the New Yorker and a computer science professor at Georgetown University, and he joins me now. Welcome.
Thanks for having me.
Let's just start with ChatGPT and the latest version. Was it really that disappointing?
It's a great piece of technology, but it was not a transformative piece of technology.
And that's what we had been promised ever since GPT-4 came out: that the next major model was going to be the next major leap.
And GPT-5 just wasn't that.
One of the things you pointed out in your recent article is that there have been voices saying it's not a given that it's always going to be exponential leaps.
And they were really drowned out in recent years.
And kind of the prevailing thinking was, of course, it's always going to be leaps and bounds until we have superhuman intelligence.
And the reason why they were drowned out is that we did have those leaps at first.
So there was an actual curve.
It came out in a paper in 2020 that showed how fast these models will get better as we make them larger. And GPT-3 and GPT-4 fell right on those curves. So we had a lot of confidence in the AI industry that, yeah, if we keep getting bigger, we're going to keep moving up this very steep curve. But sometime after GPT-4, the progress fell off that curve and got a lot flatter.
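For context, the interview doesn't name the 2020 paper; assuming it refers to the widely cited scaling-laws result from that year (Kaplan et al.), the parameter-count term of the curve is, roughly,

L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076, \quad N_c \approx 8.8 \times 10^{13},

where L is the model's test loss and N is its number of non-embedding parameters. Loss keeps falling as models grow, but along a fixed power law; that is the "steep curve" GPT-3 and GPT-4 appeared to track.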
ChatGPT is the leader. It is the most high-profile of all of these models out there. So obviously this is a big data point. But what are you looking at to get a sense of whether this is just one blip, or what is the bigger picture here?
This is an issue across all large language models.
Essentially, the idea that simply making the model bigger and training it longer is going to make it much smarter, that has stopped working across the board.
We first started noticing this around late 2023, early 2024.
All of the major large language models right now have shifted to another way of getting better.
They're focusing on what I call post-training improvements,
which are more targeted and more incremental. All major models from all major AI companies are focused on this more incremental approach to improvement right now.
I want to talk about that in a moment.
First, I want to get your thoughts on this other big headline from recent days, this MIT report.
The headline that was all over the place was 95% of generative AI pilots at companies are failing.
95% do you find that number surprising?
I don't find that number surprising at all.
What we were hoping was going to happen with AI in the workplace was the agentic revolution, which was this idea that maybe language models would get good enough that we could give them control of software, and then they could start doing lots of stuff for us in the business context.
But the models aren't good enough for that.
They hallucinate.
They're not super reliable.
They make mistakes or behave oddly.
And so for these tools we've been building on top of language models, as soon as we leave the very narrow applications where language models are very good, these more general business tools are just not very reliable yet.
You're talking about hopes, and a lot of these companies have hopes and a lot of investors
have hopes, but there's been a lot of people who have been really freaked out about all
of this, whether it means job security, whether it means some of the more, you know, high-flung
sci-fi-type views of what happens down the line with AI.
Do you think a slowdown is necessarily good news for people who are worried, or do you think this continues to be the focus in so many industries, and it will continue to take more and more center stage?
I think it's good news for those who are worried about, let's say, the next five years.
Okay.
I think this idea, like Dario Amodei floated, that we could have up to 20% unemployment,
that we could have up to 50% of all new white-collar jobs being automated in the near
future, that technology is not there, and we do not have a route for it to get there
in the near future.
The farther future is a different question. But as for those scenarios of doom we've been hearing over the last six months or so, I think right now they're seeming unrealistic.
You mentioned post-training before.
You had a great metaphor for it involving cars.
Can you walk us through that?
Well, there's two ways of improving a language model.
The first way is making it bigger, training it longer.
This is what's called pre-training.
This is what gives you the basic capabilities of your model.
Then you have this other way of improving them, which we can think of as post-training,
which is a way of souping up or improving the capabilities they already have.
So if pre-training gives you like a car, post-training soups up the car.
And what has happened is we've turned our attention in the industry away from pre-training and towards post-training.
So less trying to build a much better car
and more focused on trying to get more performance
out of the car we already have.
How much is this leading to broad-scale rethinking
of what comes next?
Or is it just kind of tweaking the current approach
to how these models get better and better?
I think it's almost a crisis moment for AI companies
because the capital expenditure required
to build these massive models is astonishingly large.
And in order to make a huge amount of money
from these technologies,
you need hugely lucrative applications.
How are we going to make enough revenue to justify the hundreds of billions of dollars of capital expenditure that's required to train these models?
A lot of tradeoffs have gone into this.
There's an economic effect already.
People have lost jobs already.
We're talking about the enormous energy suck and environmental consequences of just the raw power that goes into all of AI.
And I'm wondering, does that make you rethink whether or not all of the downsides are worth it if the upside isn't as revolutionary, possibly, as has been promised?
I think it's a critical question, because when the thought was pushing ahead as fast as possible
is going to give us artificial general intelligence, people were willing to make whatever
sacrifice or cause whatever damage because you had this goal, like this is so transformative,
it's worth it. If we knew then what we know now, that maybe this massive investment, the environmental damage, the impact on communities, the impact on the economy, is going to lead in the near future to something like a better version of Google, something that is good at producing computer code, and the more narrow types of applications we have, I don't know that we would have had the stomach for tolerating that level of disruption. So there are going to be some interesting questions about what we've already done, but also some questions about what we're willing to accept if we're no longer sure that we're heading somewhere super transformative in the near future.
What does this mean in the immediate term for people who have already started to use AI in their
everyday lives at work, at home? Does that continue? Do you think that we hit kind of a bubble there?
Like, what comes next on the small consumer scale, do you think?
I think we're going to get a lot more effort on product-market fit. So instead of just having this focus on making the models bigger and bigger, where maybe you just access them through a chat interface, now we're going to have to have a lot more attention on building bespoke tools on
top of these foundation models for specific use cases. So I actually think the footprint in
regular users' lives is going to get more useful because you might get a tool that's more
custom fit for your particular job. There's still plenty of things to be worried about. Language
models, as we have them today, can do all sorts of things that are a pain. They're generating slop for the internet. They make it much easier to produce persuasive misinformation. The fraud
possibilities are explosive.
All of these things are negative, but I'll probably just get some better tools in the near
future as just an average user.
That's not necessarily so bad.
That is Cal Newport, author and professor of computer science at Georgetown University.
Thanks for coming on.
Thank you.
This episode was produced by Elena Burnett and edited by John Ketchum, Eric McDaniel, and Sarah
Robbins.
Our executive producer is Sami Yenigun.
It's Consider This from NPR.
I'm Scott Detrow.
Want to hear this podcast without sponsor breaks?
Amazon Prime members can listen to Consider This sponsor-free through Amazon Music.
Or you can also support NPR's vital journalism and get Consider This Plus at plus.npr.org.
That's plus.npr.org.