Deep Questions with Cal Newport - AI Reality Check: Is the Economy About to Collapse?
Episode Date: March 12, 2026

Cal Newport takes a critical look at recent AI news. Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here's the link: https://bit.ly/...3U3sTvo

Video from today's episode: youtube.com/calnewportmedia

ARTICLE #1: America Isn't Ready for What AI Will Do to Jobs [2:15]
ARTICLE #2: Mass Hysteria. Thousands of Jobs Lost. Just How Bad Is It Going to Get? [9:23]
ARTICLE #3: THE 2028 GLOBAL INTELLIGENCE CRISIS: A Thought Exercise in Financial History, from the Future [14:39]

Links:
Buy Cal's latest book, "Slow Productivity," at www.calnewport.com/slow
https://www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/
https://www.nytimes.com/2026/03/05/opinion/ai-jobs-white-collar-apocalpyse.html
https://www.citriniresearch.com/p/2028gic
https://www.nytimes.com/2026/02/25/business/citrini-ai-stock-market.html
https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/

Thanks to Jesse Miller for production and mastering and Nate Mechler for research and newsletter.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
There have been some pretty dark articles published recently about all the ways in which AI is about to destroy the worldwide economy.
Now, these include tales of mass unemployment and collapsing industries and white-collar workers trying to retrain for skilled craft jobs like woodworking and plumbing.
One of these pieces, a World War Z-style dispatch from the year 2028, which was put out by a small financial services firm named Citrini Research,
spread so widely and scared so many people that it was blamed
for a temporary dip in the S&P 500.
All that's missing from these tales are the garbage can fires.
So how seriously should we take these economics doomsday articles?
Well, if you've been following AI news recently,
this is probably a question that you've been asking.
And today, I want to try to find some measured answers.
I'm Cal Newport, and this is the AI reality check.
All right, here's the thing.
Coverage of AI topics moves in waves.
You'll have a certain sort of take or idea that will become popular and everyone is writing and talking about it.
And then sort of seemingly all at once, all the attention will move on to a new topic, as if the other one didn't exist.
Like back in 2023, for example, I spent a lot of time trying to explain to people that a static feed-forward large language model could not be considered conscious.
I had fierce debates about this.
And then at some point, the whole conversation just moved on with no resolution.
Late last year, to give another example, all the discussion was around superintelligence.
And I found myself having to argue about how you cannot infer intention in an anthropomorphized manner from the auto-regressively produced outputs of a chatbot.
But then we've moved on from that recently as well.
The topic du jour in AI coverage is this idea that we might not be ready for the mass economic displacement that AI is now poised to wreak.
I want to quickly go over a few examples, among many, of the articles that have recently been making this point.
The first article was published online in February, and it's part of the March print issue of the Atlantic. It was titled, America Isn't Ready for What AI Will Do to Jobs.
All right.
So if you read this piece,
it opens on a somewhat long history
of the Bureau of Labor Statistics,
which is actually quite interesting, the history of the BLS.
And so you're thinking,
okay, maybe this is going to be
a sort of thought-provoking
exploration of job cycles
and technological disruption,
but nope,
it gets a little darker.
Let me read from the piece here.
But like all statistical bodies,
the BLS has its limits.
It's excellent at revealing what has happened
and only moderately useful
at telling us what's about to.
The data can't foresee recessions or pandemics or the arrival of a technology that might do to the
workforce what an asteroid did to the dinosaurs.
I'm referring, of course, to artificial intelligence.
Yikes.
Remember, the asteroid that killed the dinosaurs killed off most of life on Earth.
So we've kind of raised the stakes pretty high for what's about to happen with AI.
All right.
So the article goes on.
The author says tasks that once required skill, judgment, and years of training are now being executed relentlessly and indifferently by software that learns as it goes. I don't know what it means for a language model to be relentless or indifferent, but I guess they are.
Quick fact check: the language models driving most of the tools that we're talking about here don't learn as they go. They're static, trained in static batches. I guess you could make a case that if you're looking at a terminal agent like Claude Code, it could be doing updates to a markdown file that it uses as part of its prompting. But I don't think that's a great understanding of how this AI works. That's treating it more like a human brain.
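For what it's worth, here's a minimal sketch, in Python, of the memory-file pattern I'm describing. It's a hypothetical illustration, not Claude Code's actual implementation: the NOTES.md file name and the call_model function are placeholders I made up. The point is that the model's weights stay frozen; the only thing that "learns" between runs is a text file the agent re-reads.

from pathlib import Path

NOTES = Path("NOTES.md")  # hypothetical scratchpad the agent keeps between runs

def call_model(prompt: str) -> str:
    # Placeholder: a real agent would send this prompt to a static, pretrained model.
    return f"(model output for a prompt of {len(prompt)} characters)"

def run_task(task: str) -> str:
    # Whatever "memory" exists is just text appended on earlier runs;
    # the model's weights never change between calls.
    memory = NOTES.read_text() if NOTES.exists() else ""
    prompt = f"Prior notes:\n{memory}\n\nTask:\n{task}"
    answer = call_model(prompt)
    # Persist a note for next time -- this is file I/O, not training.
    with NOTES.open("a") as f:
        f.write(f"- Note from task '{task}': {answer}\n")
    return answer

Delete the notes file and the "learning" is gone, which is the difference between this pattern and a model that actually updates itself as it goes.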
All right, anyways, let's keep going here.
But anyone subcontracting tasks to AI
is clever enough to imagine what might come next.
A day when augmentation crosses into automation
and cognitive obsolescence compels them to seek work
at a food truck, pet spa or massage table,
at least until the humanoid robots arrive.
Man, the word might does a lot of work in this essay.
He said before, AI might be like the asteroid that destroyed 99% of life on Earth.
And here he said AI might make us all have to work at pet spas until the robots come.
But there's evidence for this.
So what's the main argument for why we should be concerned about this?
Let me read from the article again.
In May 2025, Dario Amodei, the CEO of the AI company Anthropic, said that AI
could drive unemployment up to 10 to 20% in the next one to five years and, quote,
wipe out half of all entry-level white-collar jobs, end quote.
Jim Farley, the CEO of Ford, estimated that it would eliminate literally half of all white
collar workers in a decade.
Sam Altman, the CEO of OpenAI, revealed that, quote, my little group chat with my tech
CEO friends, end quote, has a bet about the inevitable date when a billion dollar company is
staffed by just one person.
I'll step out of the quote here.
The Atlantic piece then goes on to mention layoffs that recently happened at many companies,
including Meta, Amazon, UnitedHealth, etc.
All right, back to the quote.
Taken together, these statements are extraordinary.
The owners of capital warning workers that the ice beneath them is about to crack while continuing to stomp on it.
All right, we've got to hold on for a second here.
I want to break apart the evidence for this claim. Well, we've got two claims: either all life on Earth is going to be wiped out like the dinosaurs, or knowledge workers are going to have to become massage therapists.
It's worth taking a little closer look at exactly what this evidence is stating. I want to start with the layoff piece, because we covered this in last week's
episode of the AI Reality Check, and I've covered it on my newsletter at Calnewport.com as well.
For the most part, these layoffs have nothing to do with AI automating jobs
or increasing efficiency to the point that you don't need more workers. Now, I haven't covered
every one of these companies mentioned in this article, but I did cover the first two companies
mentioned, Amazon and Meta. I've talked on background to multiple people within both of those
companies, and they're both very clear.
Recent layoffs have nothing to do with AI making those workers unnecessary.
They have everything to do with overhiring during the pandemic that's now being corrected
for.
The bulk of the layoffs at Meta recently were in Reality Labs, which Zuckerberg had
put a massive amount of money into over the last five years to try to build the Metaverse,
where we're all going to put on virtual reality helmets and float around space stations
and play cards.
Remember that?
Yeah, it was a bad idea. So they're firing a lot of those people. They want to put that money elsewhere.
So right off the bat, okay, this is vibe reporting 101. You have a scenario that's scary, and then you take a fact that directionally seems aligned with that scenario, but in reality is not, and you list it next to it to try to ground the hypothetical in something that's happening now, which vastly increases its power to cause anxiety or fear.
All right, but what about the other piece of this argument? The idea that AI CEOs are making dire predictions.
If the owners of capital are warning us, then for sure we have to listen.
But wait a second.
We could flip this on its head.
Of course the CEOs of AI companies are making dire predictions about how powerful their tools are going to be, because they are like the wizard in The Wizard of Oz saying, don't look behind the curtain, don't look behind the curtain, terrified that people are going to spend more time asking about their financials, asking about the fact that in order to keep up with their debt, and I've talked about the major AI companies here, and not face implosion over the next one to two years, they need to be the fastest-growing companies in the history of companies. We're talking about hundreds and hundreds of billions of dollars of revenue that needs to be generated at some point in the next year or two, and it's unclear how they're going to do this beyond putting ads on ChatGPT and Claude subscriptions, which they're currently losing money on.
So yes, of course, they would rather be talking about dire predictions of some future
because guess what? That makes their technology the most important technology in the world
and justifies investors continuing to put money into their company. So I'm not saying
that's definitely what's happening, but I don't have to stretch to find an alternative
explanation for why Dario Amodei or Sam Altman love to spout these sorts of big predictions.
It completely serves their purposes.
Now, I want to say, look, he's a good writer.
It's a good article after this.
It's well-researched.
He talks to a lot of people.
You learn a lot about labor statistics.
You hear from a lot of experts.
But I just want to point out the core issue.
The beginning of the article has this combination of vibe reporting and appeal to biased authority
that, as we're going to see, is sort of a theme in these economic doomsday articles.
All right, let me talk about another one.
Our second example here, this was from last week, I think, in the New York Times,
it was an op-ed that had a happy, feel-good title:
Mass Hysteria. Thousands of Jobs Lost.
Just How Bad Is It Going to Get?
Oh, geez.
All right. Now, you don't choose the title if you write an op-ed, so let's put that aside.
Let's look at the piece to see what it actually argues.
The piece opens with the story of a college graduate having a hard time finding a job.
Let me read this here.
Just a few years ago, an entry-level role with a bank or an asset management firm might have been Mr. Griefenberger's for the asking.
But the white-collar job market has cooled sharply.
While the unemployment rate remains relatively low, 4.3 percent, office jobs are suddenly a lot harder to come by for recent college graduates and experienced professionals alike.
Now, this is an important real story.
Unemployment's pretty good, but there is a cooling, especially on entry-level hiring
in knowledge work jobs that has been persistent, really, for multiple years now and isn't yet improving.
All right, so why is this happening?
Well, you can ask economists, and there are three reasons they'll give you, in descending order of importance.
By far the number one, most important reason explaining this trend is that white-collar industries hired aggressively in 2020 to 2022, as pandemic-era digital growth was super strong.
And there were these Great Resignation fears, which led companies to overcompensate and offer very attractive packages.
It was like, get people in the door, because we're worried about losing our workforce.
All right.
Now that pandemic period is over, the economy is trying to correct for this.
And we have a lot of employers not firing people, but going into what's called a no-hire, no-fire phase, where they say, okay, we need to slow down here.
We have too many people.
Most of us don't want to do mass layoffs, because those people might be useful in the future, but let's do no hire, no fire. Which is how you get to this unusual situation where unemployment is actually pretty good,
but you also have low new job growth.
All right, the secondary cause mentioned by economists is higher interest rates.
They started going up in 2022 to try to offset the inflation caused by COVID-era stimulus spending.
That slows down business expansion, right?
That's economics 101.
The third cause is global uncertainty, right?
Especially in, you know, the American context: the tariffs, what's happening in the educational world, and now we have global wars.
It's an uncertain time.
So there are a lot of businesses that are sort of like, let's just wait and see.
We're not sounding the alarm bells yet. We don't have to greatly reduce hiring like we would going into a strong recession, but let's be careful about hiring right now as well.
All right, so let's return to that Times op-ed.
I'm sure it says, like, this is what explains this.
So, you know, it is what it is.
Hopefully this will get better.
All right, let's read what they actually say instead.
Many companies went on hiring sprees during the pandemic,
and the slowdown is perhaps just the inevitable adjustment.
All right, so far, so good.
Are we going to leave it there?
Nope.
Here's what comes next.
But it is happening against the backdrop of the generative AI revolution and fears that vast
numbers of knowledge workers will soon be evicted from their cubicles replaced by machines.
This is kind of a remarkable statement, because it's vibe reporting,
but it's vibe reporting that's transparently acknowledging that it's vibe reporting, right?
They're saying, look, there are good explanations for this, but this other thing is happening
now that makes us afraid.
So let's just pretend they're connected.
Even though we have other explanations, it's directionally aligned with this other fear we have,
so why don't we just put them together?
What is the main evidence cited in this op-ed for these fears?
I'll quote here.
That the people selling the artificial intelligence are among those sounding the most ominous warnings about its potential fallout is notable.
Some of them are prone to bombastic claims, but it's hard to see how spooking the public serves their interest.
It might be wise to take their predictions at face value and assume that AI
is indeed going to devour a lot of white-collar jobs.
Again, this is the appeal to biased authority.
It is not hard to see why the CEOs of the companies selling this technology
like stories that make this the most powerful, important technology of the last 200 years.
Of course they want that story out there, because without that story, again, it becomes,
how are you going to generate $300 billion in revenue in the next two years?
They don't want that question.
They've been spouting these things for the last five years.
I don't know where this idea came from that we need to take at face value
what the owners of the technologies say about what their technology is going to do.
I don't think we should take them at face value at all.
We should be highly suspicious of them.
All right.
So anyways, again, this article goes on and it looks at a lot of things.
It's not a bad article.
But again, we have this sort of vibe reporting:
mention stuff that's happening that's directionally aligned with the fear,
then mention the fear, and then justify the fear by saying,
look, the CEOs of these companies are the ones sounding the alarm.
Why would they sound the alarm if it wasn't real?
All right.
Let me get to the third article, which is the one that spooked the stock market, and this will be the sort of final example I point out here before I get to some stronger responses.
This article was called The 2028 Global Intelligence Crisis: A Thought Exercise in Financial History, from the Future.
It was published on Substack by a small financial services firm called Citrini Research.
Now look, right off the bat, if you read this Substack piece, the authors are clear that they say this is a thought experiment and not a prediction.
And you'll hear actually the authors have been interviewed a lot in the aftermath of this article going viral and spooking people.
And they're really leaning into this.
This was just a thought experiment.
I was writing fan fiction.
Like, why are people taking this so seriously?
But if you read that same introduction, they then go on to say, hopefully reading this leaves you more prepared
for potential left-tail risk
as AI makes the economy increasingly weird.
So clearly they're saying this is a possibility.
This is a prediction.
We're not saying it will definitely happen,
but it's on the table and we need to be worried about it.
So I don't think they get off the hook
by saying, hey, we said this is not a prediction.
But you did say pay attention to this
so you're prepared for what might come.
I'm not a linguist,
but that kind of sounds like the definition of a prediction.
All right.
So what does this article actually say?
Well, it is written in the style of World War Z.
That is, it's written like a dispatch.
I think it's like a financial report, like these companies write, but from the year 2028, reflecting on the dire current circumstances and how the economy got there.
So it's told in this sort of fake future retelling style, which is a very powerful style.
Let me read a quote here from early in this sort of fake dispatch from the future.
Two years.
That's all it took to get from contained and sector-specific to an economy that no longer
resembles the one any of us grew up in.
This quarter's macro memo is our attempt to reconstruct the sequence, a post-mortem
on the pre-crisis economy.
And then it goes on to lay out the scenario where it starts like right now.
And it's like, well, there's layoffs happening, but we were happy about productivity
booms.
And the stock market goes up until about the fall of 2026,
and then, as automation continues,
these cyclically reinforcing negative feedback loops emerge.
The economy crashes the next year, in November of 2027.
And, you know, again, we're back to garbage can fires
and knowledge workers having to eat their dogs.
All right?
This was a very effective article.
It spread really far for two reasons.
One, that World War Z style of storytelling,
where you're telling a story like, this is what happened, let me look back on it,
is very emotionally engaging,
and it presses fear buttons much more than sort of straightforward analysis or prognostication.
And two, there's a vibe reporting trick here that we've seen in the other two examples.
They peg their fake scenario to something real that's happening right now.
It began with layoffs in the tech sector in 2026, which are happening right now.
Now, of course, as I've covered in this episode and the last episode, ad nauseam,
the layoffs in the tech industry started a few years ago,
in response to overhiring during the pandemic.
But whatever. When you peg a story that ends somewhere fantastical and terrible to something that's
happening right now, your mind puts it on a reality trajectory, and it makes it much more believable.
So that went viral.
People said it had to do with a collapse in the, well, not a collapse, a minor dip in the S&P 500.
Other commentators have said there are a lot of factors behind that temporary dip in the S&P 500,
but it got a lot of news, especially in the financial world.
All right. So, I talked about some of the bad reporting techniques in these articles, but that doesn't mean a priori that they're also wrong.
So how seriously should we take these scenarios of economic doom?
I got to say, they're very anxiety provoking.
I don't like dystopian fiction, right?
Like I read World War Z.
I really didn't like it.
I don't like watching zombie movies.
Dysopian, especially like collapse of society, tales and movies.
They press a lot of buttons for me.
So I'm someone who knows a lot about AI and am a critic of hype.
And even for me, these were distressing.
So I can only imagine how much distress these type of articles are causing for the millions of people that are reading these in major publications.
So how seriously should we take them?
Let me tell you what made me feel better.
And hopefully it'll make you feel a little better as well.
In the wake of the Citrini article,
because that spread through the financial world
and might have had an actual impact on the stock market,
professional economists and global macro strategy analysts,
people whose goal is not engagement or impacting the conversation,
but to make money based on accurate understandings
of what's likely to happen in the economy,
came out of the woodwork and said,
hey, these are ghost stories, and we have no reason to believe they're true.
And hearing from these economists, I have to say, made me feel a little bit better.
I'm going to give you some quotes, and hopefully it'll make you feel a little better as well.
The New York Times, to their credit, published an article called Bleak Research Report Stokes AI Debate on Wall Street,
written by a financial reporter, and they actually quoted some serious economists who were not that impressed by the
Citrini article. Let me read you two quotes. Here's one. The argument leans heavily on narrative and
emotion rather than hard evidence, Jim Reid, a strategist at Deutsche Bank, said of the report.
That doesn't mean it will ultimately be wrong, but he added that the vibes-to-substance ratio is
undeniably high. Right, here's another quote. On Tuesday, Christopher Waller, a governor on the Fed board,
noted that he had not read the Citrini report, quote, deeply, end quote, but pushed back on the broader
idea that AI will lead to a rapid rise in unemployment as technology displaces white-collar
workers.
I don't think that is going to happen, Mr. Waller said, adding that he is not a doom and gloomer
like that report was.
I think my favorite response, however, came from Citadel Securities.
So a global macro strategy analyst for Citadel Securities named Frank Flight put out a report
in the aftermath of the Citrini article that had a sort of sarcastic
title: The 2026 Global Intelligence Crisis. So the Citrini report was The 2028 Global Intelligence
Crisis, saying, hey, everything has gone wrong in these two years. And so he called his The 2026 Global
Intelligence Crisis. But here he's referring to the intelligence crisis being people believing
these types of stories. And so he does a sort of faux opening describing our current situation,
and that faux opening sticks in the dagger with the following.
Despite the macroeconomic community struggling to forecast two-month forward payroll growth with any reliable accuracy,
the forward path of labor destruction can apparently be inferred with significant certainty
from a hypothetical scenario posted on Substack.
He's sort of making fun of people in the community who were taking that Substack post with any seriousness.
He then proceeds to kind of explain, in a semi-accessible way,
the types of things that global macro financial analysts look at,
especially when it comes to technological disruption
and why they don't see signs of some sort of major calamity coming
and they're not particularly worried about some sort of collapse of the economy.
I'm going to read a few of these quotes just to give you a sense
of the type of things covered in this article.
Number one, we would posit that if AI represented imminent displacement risk,
the real-time population data would show an inflection upwards in the daily use of AI for work.
The data seems unexpectedly stable and presents little evidence of any imminent displacement.
So again, there's lots of discussion about this, but they're looking at data out of the St. Louis Fed.
And they say there's no rapid uptake in AI use in the way the news media would have you believe.
Second quote: the current debate around artificial intelligence conflates the recursive potential of the technology with expectations of recursive economic deployment.
Technological diffusion has historically followed an S-curve.
Early adoption is slow and expensive.
Growth accelerates as costs fall and complementary infrastructure develops.
Eventually, saturation sets in and the marginal adopter is less productive or less profitable,
which causes growth to decelerate.
I'm seeing this argument from a lot of professional analysts of technological disruption.
They say, man, we always make the exact same mistake.
You have slow growth, and then you get a period of speed-up,
and we say that speed-up will go on forever.
Let's keep extrapolating out that curve.
And if we keep extrapolating out that curve, we get collapse or singularity or whatever the thing is you want to say is going to happen.
But this is never what happens.
With S-curves, growth goes up, and then other sorts of factors begin to constrain it. Growth goes slower than you think, and there's time to adjust.
They say they have no reason to believe this would be different.
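To make that shape concrete, here's a minimal sketch of the logistic form an adoption S-curve is usually modeled with. The symbols are my own illustrative labels, K for the saturation level, r for the growth rate, and t_0 for the inflection point, not anything taken from the Citadel Securities report:

\[
A(t) = \frac{K}{1 + e^{-r\,(t - t_0)}},
\qquad
A'(t) = r\,A(t)\left(1 - \frac{A(t)}{K}\right)
\]

Early on, A(t) is small, the (1 - A/K) factor is close to one, and growth looks exponential. As A(t) approaches K, that factor drags the growth rate back toward zero. Extrapolating from the steep middle of the curve is exactly the mistake these analysts say gets made over and over.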
All right, let me read another quote here.
Displacing white-collar work would require orders of magnitude more compute intensity than the current level of utilization.
If automation expands rapidly, demand for compute definitionally rises, pushing up its marginal cost. If the marginal cost of compute rises above the marginal cost of human labor for certain tasks, substitution will not occur, creating a natural economic boundary.
We don't have nearly enough compute for these scenarios. And as they're saying, as you try to build out compute for more and more uses, it's going to drive up the cost, because we're going to have a mismatch between demand and actual supply. As the cost goes up, it drives back down the demand.
We're already actually seeing this in the one sector where, after five years of work, we're finally seeing tools that are really catching the interest of a sector, and it's the best-case scenario for AI: computer programming. All of the evidence I can
find right now seems to imply that these companies are selling the compute for these
agents for computer programming at a significant loss because they're trying to fight for market share.
When these companies actually have to try to make more profit off of this, because again, they have huge debt, and these costs get adjusted to the reality of how much expense the AI companies are incurring,
you're probably going to see a real moderation in how much we use this for programming.
And is it really worth $2,000 a month for an individual? $5,000 a month?
I mean, it's going to be interesting.
And that's just for this one first use case.
So I think that's interesting to see it as well.
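To spell out that compute-versus-wages boundary as a simple inequality, in my own notation rather than the report's: automating a given task only makes economic sense while the compute cost of doing it stays below the human wage for it.

\[
\text{automate task } i \iff k_i \cdot c(Q) < w_i
\]

Here k_i is the compute intensity of task i, w_i is the wage for the human doing it, and c(Q) is the marginal price of compute, which rises as aggregate compute demand Q pushes against a constrained supply of chips, power, and data centers. As automation expands, Q rises, c(Q) rises with it, and the set of tasks that still satisfy the inequality stops growing. That's the natural economic boundary being pointed at, and the subsidized pricing of today's coding agents is a reminder that we haven't really seen the unsubsidized c(Q) yet.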
They also say, quote, moreover, there's little evidence of AI disruption in labor market
data as of today.
In fact, the forward-looking components of labor market tracking have improved recently, end quote.
So, huge mismatch between what the financial analysts are seeing and what the op-ed writers
are hypothesizing. The evidence of the financial analysts is their decades of experience trying
to understand the labor market and technological disruption; the evidence of the op-ed
writers: Amazon laid off people, and Dario Amodei says his technology is the most powerful thing ever.
All right, let me read the conclusion from this Citadel Securities piece.
For AI to produce a sustained negative demand shock, the economy must see
a material acceleration in adoption,
experience near-total labor substitution,
no fiscal response,
negligible investment absorption,
and unconstrained scaling of compute.
It is also worth recalling
that over the past century,
successive waves of technological change
have not produced runaway exponential growth,
nor have they rendered labor obsolete.
Instead, they have been just sufficient
to keep long-term trend growth
in advanced economies near 2%.
Today's secular forces of an aging population,
climate change, and de-globalization exert downward pressure on potential growth and productivity.
Perhaps AI is just enough to offset these headwinds.
So they're saying, and I think this is actually pretty optimistic,
they're saying the reality of major disruptive technological changes historically
has been just enough to offset all sorts of negative trends
and keep at least some growth happening in the economy.
They say, here's what we hope for. Here's what they're predicting from AI.
They're like, we have lots of negative growth forces
that we're going to have to encounter in the next couple of decades
that are going to pull down the economy,
hopefully we'll get enough out of AI to sort of stave those off
and still get at least some economic growth.
That is a very different vision.
AI as the latest technological innovation to stave off de-growth
is a completely different argument than:
no, no, this is the one technology in history where the S-curve doesn't happen,
and it's going to grow exponentially, and it's going to crash the economy.
So they kind of end on a positive note there.
All right, so let's step back.
First of all, I want to say the economists made me feel better.
It doesn't necessarily mean, of course, that they're right,
and maybe all these factors will come together to destroy the economy,
but I do like the fact that the economists are not that worried about it.
I think we see this reflected in the stock market. If serious investors really believed that the economy was going to crash in the fall of 2027,
with a massive decline starting in October of 2026, we'd see a reaction that makes the COVID dip from 2020 look like a minor correction, right?
Like, it would be substantial.
But the reactions are small.
They're actually pessimistic on the frontier AI companies because they think they're spending too much money.
So they don't buy the AI tech CEO stories that their technology is going to automate all work, which would make them the most valuable companies in the history of companies.
The stock market doesn't buy it.
We see more moderate bets against specific sectors where they think there's going to be practical disruption, like the SaaS sector, and even those are modest.
And we're seeing actually much bigger reaction from things like the cost of oil going up to $100 a barrel.
That caused way bigger impacts on the stock market than the scenarios of the last two months about the economy collapsing.
So to me, that makes me feel better.
But it doesn't mean there's not going to be an impact.
And they could be wrong, or maybe the impact is going to be smaller.
But let's put that on the table now, right?
Let's say, okay, maybe the economy is not going to collapse.
I don't have to learn how to light a garbage can fire or become a pet masseuse.
But maybe it's going to be a hard run.
There's going to be economic disruption, more so than with almost any other technology in the past.
It is going to be disruptive in some way.
Let's say that was the case.
If that is the case, and it could be true, I hope not, but it could be true, this AI doomsday reporting isn't helping.
What I'm seeing is that these sorts of AI doomsday articles,
where we try to one-up each other with how prescient we are about how bad things are going to get,
prevent us from responding in effective ways.
If we instead treat AI like a normal technology and respond with our normal tools
when we see it doing things we would normally say are a problem we need to correct,
I think we can make much better progress in containing, shaping, and directing the AI revolution
than by falling back on these massive dystopian World War Z tales.
Falling back on doomsday writing is letting the AI companies off the hook.
Look at what I covered last week.
Jack Dorsey negligently goes off and makes these huge acquisitions sort of in an impulsive fashion throughout the pandemic
of these crypto and blockchain companies.
They don't go well.
So he then impulsively fires half of his workforce,
because he can't do anything in measured increments.
Everything he does is drastic, right?
But because he comes out and says,
this is just the first sign of the AI economic apocalypse,
I, for one, am learning how to make trash can fires
because I'm going to not only be a pet masseuse,
but have to maybe eat the dogs
because there'll be no money left in the world.
Because he leaned into the doomsday reporting, what was the coverage of the Block layoffs?
Reporters would rather treat it as evidence of the economic doomsday narrative.
That's what they focused on.
In fact, in one of the articles I talked about, the Block layoffs are cited as evidence of what's coming.
The right way to treat that was like, yeah, sure.
And I'm sure you have a perpetual motion machine and you can fly.
Back to the point.
What happened to those crypto investments?
Why did you have to lay off that many people?
Who did you lay off?
Wait a second.
Most of these jobs have nothing to do with AI-automatable roles.
We would hold his feet to the fire,
like, you're being negligent and impulsive.
But instead we're like, oh yeah, thank you, Cassandra,
for helping us understand what's coming.
The same thing has happened with these AI CEOs.
They find that the more dramatic and fearful the thing they say,
the more the attention turns away from what's actually happening.
Journalists used to severely distrust billionaire tech CEOs, but not when it comes to this issue.
We look to them as like they are guiding us to understand what's happening with this technology.
These CEOs have been saying crazy stuff for the last four years.
They keep changing what it is en masse.
They were all talking about super intelligence and the machines getting out of control and like an alien mind.
And they're all talking about that.
And they all shifted at some point to something else.
And now they've shifted to, like, the economy is going to crash.
They just follow.
They just say stuff.
And it's entirely in their favor.
Because again, if your technology automates all jobs,
well, where am I going to put my money?
The only place left to put my money is in, like, the three companies that are going to run all the jobs.
So I think doomsday reporting prevents us from actually responding, or prevents us from saying, when Dario Amodei says 50% of white-collar jobs are going to be gone:
uh-huh, uh-huh, you need to make $300 billion somehow in the next four years in order to get anywhere near profitability.
How are you doing that?
Right.
That's the question we could be asking.
So I think that we don't need to ignore AI or its impact on jobs.
But we need to cover it like a normal technology, so we can deploy the type of normal responses we would use when we see disruption or changes, or when we see it used as cover for malfeasance or impulsiveness or whatever's going on.
And so I hope we move past this.
By the time this comes out,
we'll probably have moved on to, you know, something else.
I don't know what.
AI and birds are going to spy on us.
Whatever it is.
And I hope so,
because I think this AI doomsday reporting,
not only is stressing people like me out,
but it's preventing us from actually responding
to the real impacts of this technology
in a way that could really matter.
All right, enough of my sermon.
Hopefully some of this makes you feel a little bit better this week.
We'll be back probably next week.
I'm doing this on Thursdays,
maybe not every Thursday,
but if there's something to talk about,
I'll be back next Thursday.
Remember, take AI seriously,
but not everything that's written about it.
See you next time.
