Deep Questions with Cal Newport - Is the AI Doom Fever Breaking? | AI Reality Check
Episode Date: May 7, 2026

Cal Newport takes a critical look at recent AI news. Video from today's episode: youtube.com/calnewportmedia

(0:00) Is the AI doom fever breaking?
(4:30) Have the AI CEOs changed their tune?
...
(7:23) Why did they change?
(11:21) Why did they ever think this was a good idea?
(24:26) Conclusion

Links:
Buy Cal's latest book, "Slow Productivity" at www.calnewport.com/slow
https://www.youtube.com/watch?v=DFnoQkYUqgU
https://www.youtube.com/watch?v=NWxHOrn8-rs
https://x.com/sama/status/2050229058425045178
https://fortune.com/2026/05/02/jensen-huang-nvdia-ceo-god-complex-ai-apocalypse-warnings-shortages-critical-jobs/
https://www.youtube.com/watch?v=2Kpb8eu1pEY
https://www.nytimes.com/2026/05/03/opinion/ai-jobs-unemployment-silicon-valley.html
https://nickbostrom.com/existential/risks.pdf

Thanks to Jesse Miller for production and mastering and Nate Mechler for research and newsletter.
Transcript
The way that CEOs of major AI companies have been talking about their products in recent years
really has been bonkers.
They seem to be going out of their way to terrify their potential customers.
I mean, last week, I played a clip of Microsoft AI CEO Mustafa Suleyman,
claiming that their AI systems would be capable of fully automating basically every knowledge work task within a year.
That's not a very nice thing to tell your customers.
All your jobs are going to go away, by the way.
We're working on it.
Hold for applause, I guess.
Now, he's far from alone in making these types of disturbing claims.
Here's OpenAI CEO Sam Altman, talking last summer about what the world is going to be like after AI takes over everything.
People really need agency.
Like, they really need to feel like they have a voice in governing the future and deciding where things go.
And I think if you just, like, say, okay, AI is going to do everything.
and then everybody gets like a, you know, dividend from that,
it's not going to feel good.
And I don't think it actually would be good for people.
So I think we need to find a way where we're not just like,
if we're in this world, where we're not just distributing money or wealth.
Like, actually, I don't just want like a check every month.
What I would want is like a ownership share, you know, whatever the AI creates
so that I feel like I'm participating in this thing that's going to compound and get more valuable over time.
All right, to summarize what Altman just said there: sure, AI will quote-unquote do everything, but don't worry, we're going to find a way for you humans to still participate in the world.
And now, not to be outdone, here's Anthropic CEO Dario Amodei,
also wringing his hands about the damage that his own company's products will soon do.
But exactly those same kind of skills, things like summarizing a document,
brainstorming, putting together a financial report, makes me worry a lot that entry-level jobs
in areas like finance, consulting, tech, many other areas like that, entry-level white-collar work,
I worry that those things are going to be first augmented, but before long, replaced by AI
systems, and that we may indeed, it's hard to predict the future, but we may indeed have a
serious employment crisis.
Think about how crazy this is, right?
We have CEOs stating in effect that they're afraid of all the different ways that their products are going to destroy the economy and make everyone's lives much worse.
We've kind of become used to this in the context of AI.
But if we heard it in any other industry, it would really catch our attention.
I mean, can you imagine the CEO of Pfizer going on the air and talking about a new drug and saying, hey, we're excited for the potential of this new pill to reduce plaque psoriasis by 50%.
At the same time, however, I'm worried because once widely used, it will likely transform a large fraction of the population into zombies.
That's basically what we're getting from the AI CEOs, and I think it's just lunacy.
But here's the thing.
Recently, I'm talking about, like, the last few weeks, I have noticed a bit of a shift in this rhetoric.
There have now been multiple statements from major AI leaders that hint that they might be retreating from the strategy of trying to make everyone
as anxious as possible about their own products.
Now, it's true, this immediately inspires some questions.
What specifically are these AI leaders saying in more recent weeks?
Why are they changing their minds about how they talk?
And why did they ever think it was a good idea in the first place to try to terrify people about
the products that they were also trying to sell them?
Well, it's Thursday, which means it's time for another AI reality episode, AI reality check
episode rather, of this show, which means this is a perfect opportunity for us to explore some of
these answers, which is exactly what we're going to do. So stay tuned. As always, I'm Cal Newport,
and this is Deep Questions, the show for people seeking depth in a distracted world.
All right, so let's proceed by looking at those three questions that I just raised.
Question number one, is it true that the AI CEOs are starting to change the way they talk about their products' potential impacts?
Well, I want to give you some recent examples that give me hope.
I want to start with a tweet from Sam Altman from late last week.
So this is sort of breaking news, right?
He said, and I quote,
we want to build tools to augment and elevate people, not entities to replace them.
I think a lot of people are going to be busier and hopefully more fulfilled than ever,
and jobs doomerism is likely long-term wrong.
And just as a quick aside, it's all lowercase letters.
Jesse, we've talked about this before.
Why do these tech people write in lowercase letters? Have a little bit of respect.
Okay, we'll let that slide.
This is a far cry from the Altman that we heard just last summer
talking about how we'd have to find ways for people to participate in the world
after AI took over literally every job.
So this is definitely a change of tune from Altman.
We're getting even stronger pushback on job doomerism from the CEO of the largest of the AI-relevant companies, which is Nvidia.
Jensen Huang has been on a welcome rampage recently, really pushing back on this rhetoric about AI replacing jobs. I want to read you some quotes here from a remarkable article that was published in Fortune just on Saturday, so less than a week ago. All right, here's from the Fortune article.
Nvidia CEO Jensen Huang has been pushing back against the popular narrative that AI will wipe out
huge swaths of the workforce, but he also placed some blame on overly confident CEOs who assume they
know everything. Although it's important to advocate for guard rails on AI, he added that scaring people
into believing that the technology will pose an existential threat to humanity, destroy democracy,
or eliminate 50% of entry-level jobs is, quote, ridiculous, end quote.
In reality, he estimated that AI has created more than half a million jobs in the last few years.
That's because when companies incorporate AI, they grow faster and hire more people.
And data from hiring site Indeed shows that demand for software engineers is actually increasing.
So good for you, Huang, for really taking a stance. I love that sort of barely concealed dig at Anthropic CEO Dario Amodei, who's exactly the figure who's been saying that thing about 50% of entry-level jobs. And Huang is like, nope, not going to happen.
It's still early, but I have been picking up more and more of these signals that I think
a memo has been successfully delivered to the AI CEOs.
Hey, guys, stop trying to terrify everyone about the product you hope they will pay for.
All right, this brings us to our second question.
Why did they change their minds?
Why did the rhetoric shift?
Well, now we're going to get a little bit more speculative.
I have a couple different things I want to suggest here.
One, I think the fact that multiple of these companies are preparing for an IPO, or at least considering one, and I'm looking particularly here at Anthropic and OpenAI, made a difference.
Now, here's why.
If you're Anthropic or you're OpenAI and you're thinking about an IPO,
you have to start hanging out with people from the East Coast who wear suits.
And they're not in the Silicon Valley bubble.
And they're not in this world where everyone tries to one up each other with who can be more apocalyptic about AI.
They come from a different, more sober-minded, more careful world in terms of the ways they think and talk about things.
And I really think this happened.
I think a lot of these companies are basically hearing from Wall Street types who are saying,
what the hell are you doing?
You're trying to convince people to get excited about your company.
And yet you're also telling them to be terrified about your company and all the damage you're going to cause.
No.
Stop it.
So I think there was some East Coast influence, which is starting to pervade some of these otherwise very West Coast companies.
It reminds me of that scene in the film festival episode of The Simpsons
where Mr. Burns discovers that everyone in the crowd,
with the exception of Hans Moleman, is actually booing him.
Smithers, are they booing me?
No, they're saying boo-urns, boo-urns.
Are you saying boo or boo-urns?
I was saying boo-urns.
All right, but it's not just a collision with the world of finance that I think helped make this change.
I think the AI CEOs
are also picking up something that's being
measured in recent public opinion
surveys, which is the public is turning
on these companies.
A Quinnipiac survey from March revealed that a healthy majority of Americans now thinks that AI will do more harm than good. And this is a sharp increase from a year earlier, in which those numbers were reversed.
Now, of course, this is going to happen.
It shouldn't be surprising.
How many times can you tell people, we are going to destroy your lives and everything you love, before they finally say, I don't think I like you?
And that's exactly what I think we're starting to see.
So I think the AI CEOs are reacting to that as well.
And finally, I think the third factor that's leading to a change in rhetoric
is that more and more reporters are beginning to develop some skepticism
around some of the more breathless claims being made by the AI CEOs.
Just last week, Ezra Klein, writing in the New York Times, published a column with a title that warmed my heart. His column was called "Why the AI Job Apocalypse Probably Won't Happen."
If you read the article, Klein goes on to say, economists, I found, are quite skeptical that mass joblessness is on the horizon.
Right?
So we no longer have this sort of phenomenon
in the reporter space where they say,
well, these CEOs, they know more about this technology than anyone else, so we have to believe what they say. That grace period has ended. They've been too bombastic.
Not enough of their claims have come true. They've changed their minds too much. So now they
have skepticism from there as well. So I think these factors are all coming together.
The impending IPOs, forcing them to behave like normal, responsible citizens of the world; turning public opinion against them, because you can't just scare the public constantly and expect that people are still going to like you and your products; and finally, increasing journalistic skepticism. That pressure has led AI CEOs to back off their more apocalyptic discussions of what's going to happen to the job market because of AI.
Well, this brings us to the most complicated question, our third and final question.
Why did they ever think it was a good idea?
Like, why were they actually talking that way, trying to scare people about their own products?
Well, there's a common explanation for this that I've mentioned myself on this show before.
The common explanation is, oh, it helps attract investment.
Yeah, it might be scary that your company is going to automate all jobs,
but that does make your company very valuable.
If you're an investor, if there's only going to be one company left in the world that does everything,
that's where I want to put my money.
So that's the common explanation for why the AI CEOs have been so apocalyptic in the way they've been talking about AI impacts.
And I think that's partly true.
Partly true.
And I think this certainly happened.
I saw a bunch of good coverage in the last week or so about the massive valuation bumps Anthropic got for, for example, presenting Mythos as if it had made a major leap that was going to destabilize all of cybersecurity. That was very scary. They hit a trillion-dollar valuation for the first time. So it made a big difference.
All right.
So I think that's partially what's going on.
But there's a deeper reason that I want to explore here.
And this came out of, I just finished teaching a doctoral seminar on superintelligence at Georgetown.
We read a lot of papers from a lot of different fields.
And it's really given me a deeper appreciation of the cultural context from which these AI CEOs emerged.
So I want to tell you a story here.
This is my alternative explanation for why these AI CEOs were trying to terrify their customers.
All right.
So here's the story.
You got to go all the way back to the first decade of the 21st century.
This was the point at which a loose movement emerged, especially among engineers, especially on the West Coast, that was known in part as rationalism. It came out of some online discussion boards such as LessWrong and Slate Star Codex, which now has a different name, and it became quite popular in particular among engineers in the San Francisco area.
At the core of the rationalist movement was this idea that humans have cognitive biases in the way they think, and if you could be super rational, you could overcome your cognitive biases and, in doing so, actually be more effective in the world.
So it's a very sort of engineering way of thinking.
I'm very used to this as someone who, you know, I'm an MIT-trained computer scientist.
I'm around engineers.
I am an engineer.
I know this way of thinking.
It's foreign to other people.
But in engineering circles, it makes sense.
You're like, I'm going to be super logical. I'm like Data from Star Trek: The Next Generation. And by doing so, I'll get over all these weaknesses we have in our minds, and then I can be more effective at my job, or in helping the world, or politically, or whatever it is, right?
So that's rationalism, and it became a sort of well-defined movement in the early 21st century.
Okay, so how do we connect this to AI today?
Well, rationalism had many sort of subgroups within it.
For example, one of the best-known subgroups coming out of rationalism was the so-called effective altruists, who tried to be hyper-rational about where to invest money, time, or effort charitably to get the biggest return.
Right?
So this was this idea.
If we're super rational, we can be better at charity.
We won't be just like emotionally manipulated or biased in what we're doing.
And right, so that became like a really big movement.
Famously, Sam Bankman-Fried was very interested in effective altruism.
So that's like a well-known sub-community within rationalism.
Well, there was another well-known sub-community that rose out of this rationalism that was called the existential risk community, or the X-risk community for short.
And here is their idea.
We need to be super rational about studying existential risks to humanity.
And the core mathematical rational tool they were applying was expected value.
And here was their core idea, which is a completely sound idea.
There's, you know, mathematically this makes sense.
They said, here's the cognitive bias that we're worried about: if a negative event is really rare, humans discount it. I don't have to worry about that because it's very rare. But they said, no, no, no, you've got to do an expected value calculation, where you weigh costs and benefits against their probabilities. Something that's very rare, but that has a super negative cost if it does happen, can be just as relevant as something that is not so rare and has a much lesser cost.
So let me be more concrete about it.
They would say an asteroid hitting the Earth is very rare.
It's very unlikely to happen.
But the cost of it happening would be incredibly high because it would kill all humanity.
And so the expected cost there is something we should care about.
And if we compare that to like a hurricane, like a hurricane hitting me is not nearly as rare as an asteroid hitting me.
And the cost, though, would also be not nearly as bad as an asteroid.
And if we multiply those together, it might actually be a similar expected cost as the asteroid.
So we shouldn't let rareness by itself determine what we care about. It needs to be rareness multiplied by the potential cost. That's what the X-risk community was focusing on. It's a rationalist way of thinking about things. They ended up with three major categories of existential risks that they began to argue we should care about, even though they're super rare: asteroid hits, deadly pandemics, and, here comes the connection, superintelligent AI.
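To make that expected-value arithmetic concrete, here is a minimal sketch in Python. The probability and cost figures are purely illustrative assumptions, not numbers from the episode or from the X-risk literature; the point is only that probability times cost can come out comparable for a rare catastrophe and a common nuisance.

```python
# A minimal sketch of the expected-value reasoning described above.
# All probabilities and costs are made-up illustrative assumptions.

def expected_cost(probability: float, cost: float) -> float:
    """Expected cost of an event: its probability times its cost if it occurs."""
    return probability * cost

# Hypothetical per-year probabilities and costs (in arbitrary damage units).
asteroid = expected_cost(probability=1e-8, cost=1e10)   # extremely rare, catastrophic
hurricane = expected_cost(probability=1e-2, cost=1e4)   # far more common, far less costly

print(f"asteroid expected cost:  {asteroid:.0f}")    # -> 100
print(f"hurricane expected cost: {hurricane:.0f}")   # -> 100

# Despite a million-fold difference in probability, the expected costs come out
# comparable, which is the X-risk argument for weighing rare catastrophes by
# rareness multiplied by potential cost rather than by rareness alone.
```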
So now we have, by the 2010s, the X-risk sub-community of the rationalists. These are people like Nick Bostrom at Oxford, or Eliezer Yudkowsky, who's kind of doing his own thing. They were writing these papers. We read a bunch of them in my seminar, where they would just, like, do these ontologies of risks, asteroids and pandemics and superintelligent AI, and talk about, like, how these could unfold and why we should care about them now, even though none of these things were about to happen, or we had any reason to fear that they were about to happen.
And so that was the X-risk community.
It was them, for example, who organized that kind of infamous conference in Puerto Rico in 2017
to talk about existential risks.
Coming out of that, you got Elon Musk, Stephen Hawking, Bill Gates, you know, all these quotes from these famous scientists saying, oh, we should worry about AI.
It was coming out of this conference in 2017, and it was an X-risk conference.
This is one of those far-future abstract concerns that should be on our mind, because who knows, it could happen one day, and we have to worry about these things. So the original sort of existential-risk AI-safety concerns came out of this subculture of the rationalists, based on those online forums and largely in San Francisco.
All right.
Then what happened in this story is ChatGPT.
Now this is where I'm kind of throwing my own,
this is like my own original take here,
just trying to understand this world.
You get ChatGPT, which is super impressive, and it's very anthropomorphizable, right? Because you're dealing with language, and we project minds onto the other side of a conversation where we're getting fluent language, because our mind connects the fluent generation of language with another mind, it was impossible to encounter these early large language model demos without being like, wow, AI is now advancing faster than we thought it was.
Something is accelerating.
There's changes afoot.
And for the X-risk community within the rationalists, this presented a completely life-altering, terrifying, exhilarating possibility. What if we were right about this risk? And not only were we right, but it's happening, right? It would be like if you had been warning about aliens and abductions for years and years and years, and then the Independence Day mothership comes down to Earth.
You'd be like this is terrifying
but you would also be like the people dancing
on top of the building in New York in that scene
before they got destroyed by the lasers.
They were excited that they were there.
I'm kind of stretching this a little bit.
But I think this was completely mind-melting, life-changing for the X-risk rationalists, because they had spent years making list upon list and sublist upon sublist.
I mean,
just go read a Yudkowsky paper, like a MIRI paper from 10 years ago. It's 19 levels of lists, with sublists, with sublists, with sublists, about how the AI does this or this or that.
I mean, like they obsess.
They've been obsessing over superintelligence
and all the ways it might unfold.
And I assume that when no one's looking at home, they're wearing Matrix trench coats and pretending like they're Neo. And, you know, I don't want to, I'm just guessing all this type of stuff is going on, right?
They've been obsessing about this.
And suddenly there's this thought, what if it's real?
Think about it.
This would make them the heroes.
It would make them John Connor. It would make them Neo. It would make them the ones who get to say: we are the ones who pointed this out, and we are going to help save you from worldwide destruction.
I think that thought was so intoxicating that it sort of overcame the sort of rationalist guardrails.
Like, yeah, but is this technology going to do that or not?
And it just became all-consuming.
It gave meaning and structure to their lives.
Superintelligence is coming. We warned you. We're the heroes. We're going to be the ones to lead you through it. Where's my Matrix trench coat?
I think that's what happened.
And in Silicon Valley, and San Francisco more broadly, post-ChatGPT, this huge X-risk culture just became, boom, ubiquitous and accepted. You go, in 2023, you're walking around San Francisco talking to people, and everyone is just straight-up apocalyptic: massive disruption, everything is going to change. It became like the central engine of meaning for understanding the world and making life interesting. It was a structure for life in that part of the country, because we had laid the foundation with the rationalist community, and then this technology came, and it was just too intoxicating a possibility that maybe they were right. That had to be the case. And it really took over that city.
All right.
Now here's what you've got to understand.
A lot of these big tech companies, these AI companies, they came out of that.
OpenAI was an X-risk nonprofit. Elon Musk funded, largely funded, OpenAI to be an AI safety firm, because they were sitting here doing these abstract thought experiments about superintelligence. They had money to burn. And it was like, let's put together this organization that just studies AI so that we can figure out how to do it safely, right? That was an X-risk hobby project. That's why you have Sam Altman,
who, as we learned from my colleague Ronan Farrow and Andrew Martin's article in the New Yorker a couple weeks ago, is not really that great of an executive.
That's why the board tried to fire him because this wasn't meant to be a trillion-dollar company.
It was meant to be, you know, this was meant to be a nonprofit. It was like a hobby for X-riskers, right? Well, what about Anthropic? It came out of OpenAI. Anthropic is former OpenAI employees who felt like OpenAI was insufficiently rational. They weren't being X-risky enough. And so they left to start their own company.
What about Grok?
Elon Musk was like deeply in this world.
Right.
So these companies came out of that world.
This monoculture, this eccentric, strange, almost cultish sort of X-risk superintelligence monoculture, was really ruling out there in Silicon Valley. These companies all came out of that.
So what I think was happening
with Altman and Amodei, etc.
I don't think they were playing 4D chess.
I don't think they were thinking about how to move the markets or attract
investment.
I think they just kept talking the way that every single person they knew was
talking.
And finally, as their companies got big enough and their platforms got big
enough and the amount of people involved got big enough, finally someone had
to say, hey, guys,
we're not in the Mission District anymore.
I don't think you can talk this way
when you're a company that's taken on $60 billion in investment,
trying to do a $500 billion IPO.
I just don't think they realized that most people didn't think and talk that way.
For a while, they were just X-riskers who became the kings of the X-riskers.
It was exciting.
They're like, yeah, look, all these guys,
you know, all the people I hung out with and now I'm the king of it.
I'm, you know, at the leading edge of this.
They were talking to their people.
and then they looked around and realized over here was the rest of the world
who were terrified out of their skin by what they were saying.
It's like when you go to a new high school
and you realize like, you know, the group of friends I was hanging out with in middle school,
I used to think they were awesome, but like they're a little strange.
And I kind of like sports and girls.
And like maybe I got a, you know, I'm going to have to kind of chill out a little bit about,
you know, whatever it is I'm doing.
So I don't know if that's completely true, but I'm increasingly convinced this is a cultural thing.
It's just a way that that community talked, and I'm so used to this.
This is what engineers are like.
And it's off-putting to other people.
This rationalism stuff is off-putting.
I mean, like my wife, after a while was like, don't take me to the MIT Christmas parties, because you guys are all so weird.
It's just the way we are.
But I don't think it plays well for the rest of the country.
So this is my theory.
I'm putting it out here for you to take or leave.
But there is a lot of that in what's going on with this baffling strategy of trying to terrify your own customers. I just think part of it was just cultural. That's the way people in San Francisco who came out of these rationalist communities talked. That's just the way everyone they knew talked, and they didn't know any better.
And now they're learning and I think we're all the better for it.
So we'll see.
Who knows?
But I'll just say, all these things: I am welcoming the end of the, like, faux terror.
I'm welcoming the new wave of skepticism among journalists.
I'm welcoming the East Coast people that are coming over and are like, will you guys stop talking like you're Sarah Connor from Terminator 2?
All of this is good for all of our mental health.
Maybe it's bad news for the AI Reality Check, because I'll have less to reality-check.
I don't think that's really going to be a problem, but who knows.
But there we go.
That's what's going on.
I think it's good news.
My explanation might be right.
It might not be.
but at the very least it's entertaining to follow.
All right, that's all the time we have for this week's AI Reality Check.
I'll be back on Monday with an advice episode of the show, so definitely check that out.
And until then, remember: take AI seriously, but not everything that you hear about it.
