Deep Questions with Cal Newport - AI Reality Check: Is AI Stealing Entry-Level Jobs?
Episode Date: April 9, 2026

Cal Newport takes a critical look at recent AI news. Video from today's episode: youtube.com/calnewportmedia

OPENING: Is AI stealing entry-level jobs? [1:29]
MAIN STORY: Torsten Slok essay [3:06]
CONCLUSION: AI is not stealing entry-level jobs now [11:32]

Links:
Buy Cal's latest book, "Slow Productivity" at www.calnewport.com/slow
https://www.wsj.com/lifestyle/careers/ai-entry-level-jobs-graduates-b224d624
https://www.apolloacademy.com/busting-the-ai-youth-unemployment-myth/
https://www.theatlantic.com/economy/2026/04/job-market-artificial-intelligence/686659/

Thanks to Jesse Miller for production and mastering and Nate Mechler for research and newsletter.
Transcript
Last summer, the Wall Street Journal published an article with an alarming headline.
AI is wrecking an already fragile job market for college graduates.
It then goes on to say: companies have long leaned on entry-level workers to do grunt work that doubles as on-the-job training.
Now, ChatGPT and other bots can do many of these chores.
Now, this idea that young people are having a particularly hard time finding jobs,
and that this is due in part to AI soon took off and became conventional wisdom.
Variations of this claim have been cited ever since.
Now look, this belief is starting to have a real impact.
Just last week, Axios wrote an article that was titled,
AI is making college students change majors,
and it cited a survey that showed 16% of currently enrolled college students
have changed their studies due to concerns about AI.
That number jumps to 25% when you consider students who are studying technology.
Which is all to say, if you've been reading AI coverage recently,
you've probably encountered these types of claims many times.
But are they true?
Today, we're going to look for some measured answers.
I'm Cal Newport, and this is the AI reality check.
All right, so the question we're looking at today is: is AI reducing
the market for entry-level jobs? Now, I'm not an economist, but fortunately for our purposes,
multiple economists have weighed in recently on this claim about AI stealing entry-level jobs,
and here's the thing. They're not that impressed. All right, I want to start with someone
named Torsten Slok, who is the chief economist at Apollo Global Management. Now, last week, he published a
newsletter that was titled Busting the AI Youth Unemployment Myth.
All right.
Now, in this article, he has two different charts that he put together, both of them
drawing data from the Bureau of Labor Statistics.
I'll put the first chart up here on the screen.
Okay, so this is looking at the unemployment rate from the mid-1990s until today, and it
shows two lines, one for all people 16 years and older, and one for just 20-to-24-year-olds.
So he's looking at are young people having a particularly hard time with unemployment right now?
And what you see is, well, no, the overall unemployment rate and the unemployment rate for young people seems to be moving roughly with the same trends.
All right, here's how Slok summarizes this chart:
the data does not show any sign that unemployment among younger workers is structurally higher because of AI.
Okay, now a common critique that you might hear here is that it depends what type of young people we're talking about, right?
It's really young people with college degrees who should be seeing their jobs stolen by AI, because AI automation is aimed more at white-collar jobs than non-white-collar jobs.
So with this critique in mind, we can bring ourselves to the second chart that Slok looks at here, which I'm putting up on the screen. It's looking at the unemployment rate among U.S. college graduates who are between the ages of 22 and 27.
It's broken out by gender, female and male.
And what you see when you zoom out here is that the unemployment off to the far right, for our current period, on average really is not that much different than other times we've seen.
So there's not necessarily a major difference between what we've seen recently and what we've seen in other economic upturns and downturns in times past.
Now, here's how Slok summarizes this.
The unemployment rate has increased for men, but it has recently converged towards the unemployment rate for women.
For women, since ChatGPT was released, the unemployment rate has been moving lower, but then more recently it has increased slightly again.
So this is kind of weird and messy data, but not at all what you'd expect to see if AI was beginning to rapidly automate entry-level college-graduate jobs, which would affect both men and women.
You would probably see a very rapid rise in unemployment among young college graduates.
It's not what you're seeing.
Slok gave this chart a simple title:
No signs of AI having a particular impact on the unemployment rate among U.S.
college graduates age 22 to 27.
All right.
So that's some compelling data we have here that maybe AI is not stealing these jobs,
but it's not a slam-dunk case by itself.
And why is this?
Well, because Slok is not comparing college graduates to non-college graduates.
Now, if you look closer at more of these claims from the proponents of AI displacement theory,
what they often talk about is not the overall unemployment rate for young people with college degrees,
which, as we just saw, is moving noisily but is not unusual compared to past periods.
The proponents would rather say: what matters, relatively speaking, is what is going on with the unemployment rate for recent college graduates versus workers of the same age who don't have a college degree.
Because the idea is AI, again, the automation is going to hit college graduates harder than it's going to hit non-college graduates right now.
And what proponents of AI displacement argue is that, look, the unemployment rate, it's not that it's unusual for college graduates, but it's higher and rising faster than it is for non-college graduates. And that's new. Traditionally, non-college graduates have higher unemployment rates, if we look back over many decades. And now we've seen an inversion where actually college graduates have their unemployment outpacing the unemployment rate of their peers who don't have a college degree. So for the
proponents of this idea that AI is stealing entry-level jobs, that is one of their big pieces of data.
Now, it turns out this claim is testable, and economists have looked at it.
Now, I learned about some of these studies through the following article that came out last week.
It's by Rogé Karma in The Atlantic, called "Young People Are Falling Behind, but Not Because of AI."
This is a good article because Karma talks to an economist named Nathan Goldschlag, who has been studying this trend recently,
and in a series of papers has found some pretty informative results.
So, for example, in a recent paper, Goldschlag, co-authoring with Adam Ozimek, found an alternative explanation for differential unemployment rates between those with a college degree and those without.
I'm going to actually read from the Atlantic article here, summarizing this data.
The economists Adam Ozimek and Nathan Goldschlag recently took a deeper look at the data and found that a
significant number of younger workers without college degrees had simply given up looking for a job,
artificially improving the unemployment rate for young workers without a degree, and thereby
giving the appearance that college graduates were doing uniquely poorly.
Karma, in his Atlantic article, calls this a quote-unquote statistical mirage.
So this apparent pattern, where things got better for people without college degrees but worse for those with them, turned out to be a statistical mirage.
What was actually happening was people without college degrees leaving the active job market, which takes them out of the standard statistics used here.
All right.
So what happens, then, if you use an overall employment metric, one that does not remove people who have stopped looking for a job but covers the entire population?
So what if we focus on young people only and look at overall employment?
Just what percentage of these people have jobs or not?
And guess what? Ozimek and Goldschlag actually looked at that data:
those without college degrees are actually doing worse.
So it's actually the opposite of what the displacement proponents were saying,
which was like, look, things are getting proportionately worse for people with college degrees because AI can take their jobs.
No, the job market is getting worse for people without college degrees.
There's just a statistical mirage because they were dropping out of the market altogether that made it seem like that wasn't happening.
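To make that mirage concrete, here's a toy sketch of the arithmetic. The headline unemployment rate only counts people who are employed or actively searching, so when discouraged workers stop searching, the rate improves even though nobody found a job; an employment-to-population measure doesn't move. All the numbers below are hypothetical, purely for illustration.

```python
# Toy illustration of the "statistical mirage": discouraged workers who
# stop searching leave the labor force, which lowers the headline
# unemployment rate without anyone actually finding work. The
# employment-to-population ratio is immune to this, since it covers the
# whole population. All figures here are made up.

def unemployment_rate(employed, searching):
    """Unemployed share of the labor force (employed + actively searching)."""
    return searching / (employed + searching)

def employment_to_population(employed, population):
    """Share of the entire population that is employed."""
    return employed / population

POPULATION = 100

# Before: 80 employed, 10 actively searching, 10 out of the labor force.
before_rate = unemployment_rate(employed=80, searching=10)
before_epop = employment_to_population(80, POPULATION)

# After: 5 of the searchers give up and drop out of the labor force.
# Employment is unchanged -- nobody found a job.
after_rate = unemployment_rate(employed=80, searching=5)
after_epop = employment_to_population(80, POPULATION)

print(f"unemployment rate: {before_rate:.1%} -> {after_rate:.1%}")
print(f"employment-to-population: {before_epop:.1%} -> {after_epop:.1%}")
```

The unemployment rate falls from roughly 11% to roughly 6%, while the employment-to-population ratio stays flat at 80%: the "improvement" is entirely an artifact of people leaving the denominator.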
Goldschlag has a succinct summary.
This is him in the Atlantic article.
This makes me doubt that this is an AI story.
I love the sort of low-key delivery there.
But we're not yet done, because Goldschlag actually wrote another recent paper, this time co-authored with an economist named Sarah Eckhart, where they analyzed hiring trends in different economic sectors.
And they used five different measures of exposure to AI automation.
So, five different ways that people have assessed how exposed a particular job is to AI coming in and taking it over.
And they looked at the impact of a job being more AI-automatable on unemployment and employment trends in that particular sector, right?
So again, if AI is stealing entry-level jobs, what we should see is a rise in unemployment and a decrease in hiring in the jobs that are most exposed to AI.
So what did they find?
I'm going to quote them here from the Atlantic article.
No matter how we cut the data, we didn't see any meaningful AI impacts on the labor market.
So there was no signal in there that AI exposure somehow made a particular type of job more likely to see less hiring.
And in fact, another economist quoted in the Atlantic piece, Vernier Tadeshi, showed the opposite.
He said that, actually, in the period since 2023, unemployment has increased for the professionals
that are least exposed to potential AI automation.
So there's a very messy job market out there.
This is the conclusion.
There's a very messy job market out there.
Coming out of the pandemic, of course it was mixed up.
The pandemic was a generational displacement and disruption,
and lots of things got shaken up.
We've talked about this on this show before.
In the white-collar world,
there's a lot of over-hiring during the pandemic,
especially in tech-related firms.
Interest rates were dead low,
so you could borrow money very easily,
and people were hiring up a storm
because there's a lot of interest in technology-based solutions,
like especially cloud-based solutions.
So there's a lot of hiring.
And now there's a lot of corrections because they don't need that many people.
Interest in those services is down, and interest rates are back up.
Interest rates going back up alone is enough to lead to a lot of cuts, right?
Because it's as if someone made your operating costs that much more expensive.
So you have to offset that.
So it's been a messy market.
It's affected white-collar workers.
It's affected non-white-collar workers.
All of this data shows it's messy.
But no matter how we slice it, looking for a specific signal showing that
AI is beginning to slow down entry-level hiring,
we do not see that signal.
In fact, we often see the opposite signs showing up.
All right, so let me step back here.
In this type of analysis, I don't mean to be dismissive or 100% skeptical or reactionary.
I'm not claiming that AI is not one day going to potentially cause big disruptions in the job market.
It very well could.
And it's possible that these disruptions will, in fact, start with entry-level jobs exposed to AI, slowing down entry-level hiring. That may very well be what we see
in the future. But it is not happening right now. And the reason why this matters is because there's
been a lot of articles and discussions and interviews where this idea is being referenced as if it was
a fact. Now, I know I've made this point before, but I'm going to make it again. I think what's going
on in a lot of this discussion and coverage of AI is that commentators will latch on to a theory
or an observation or claim, not because they think it's correct, like that they've checked it
and it's correct, like they might with another story, but because they believe it is
directionally true. So I don't know, maybe AI is not right this moment actually taking entry-level
jobs. But it's directionally true, because people need to be worried about AI's impact. And so I will put out an article, or I'll do an interview, or I'll make a claim online about something that might not be happening right now, because my ultimate goal is not to get to the truth. My ultimate goal is to influence how people are thinking about this. I know better; they need to be more worried about this than they are. The direction is that we should worry about AI and jobs, so I will throw out and promote any claim that moves in that direction and helps solidify that belief. I think there's a lot of that going on.
And I've seen this sort of directionally-true-versus-factually-true thing happen in a lot of different major events over the last decade or so.
There's been a shift toward thinking my job as a commentator is to shape how people understand and act, not necessarily
to try to get to the truth of what's actually happening.
But I think this is a problem, right?
Because what happens when you lean into what's directionally true, what feels true, what
matches the vibe that you're feeling, versus actually trying to figure out what is actually true?
Two things happen.
One, you erode public trust.
The more AI commentators lean into what's directionally true, the more people pick up on it over time.
Hey, a year ago, you said this was going to happen.
It didn't.
Last month, you were so confident that this was the case, and it turned out that, no, actually, Jack Dorsey was just AI washing.
You do that enough times, people stop listening.
And then when there's things that really need to be reported, because these are massive companies with huge implications on all sorts of different sectors of our society and economy, people are no longer listening to you.
So trust matters.
And if you go from accuracy to directional trueness, you begin to erode trust.
The second thing that I think matters is that it lets these frontier AI companies get away with a lot.
The more we lean into having the most bombastic coverage possible, because it matches our vibe that this is a big deal,
the more it allows the frontier AI companies to keep raising money, probably way more than they need.
They're not being held to the same scrutiny because if you believe this is the most disruptive technology in the last two centuries,
I don't care about your EBITDA.
I don't care about your debt.
I don't care about your revenue.
I want to be involved in the company that's going to replace all the jobs.
That's a big problem, because if these companies aren't actually able to become the fastest-growing companies in the history of companies,
it's going to have a huge negative impact on the American stock market if and when that bubble bursts.
It also lets them get away with things: the sloppy products they're jamming down our throats,
the problems they're causing in all sorts of different sectors,
the stresses they're causing.
It lets them get away with that because it gives them an aura,
like they're wearing this cloak of massive disruptive inevitability.
Like, hey, what can we do?
This technology, everything is changing.
We're just trying to hold on.
If it's not us, it'd be someone else.
As opposed to, if we treat them like a normal company,
what is this nonsense product you just released?
Why do I have to use this?
What's your claim?
Convince me this is useful.
Why are you taking up all this energy and water?
Why are you building these things?
We don't hold their feet to the fire
as long as we're focused on the vibe of the disruption.
We're addicted to the idea of this either dystopian or utopian change,
as opposed to these are real companies doing real things that need accountability.
So I'll get off my soapbox now.
But I want this minor correction of one thread among the many woven into our coverage of AI right now
to stand in for a bigger point.
It's not about what's directionally true.
It's about what is actually true.
It is not our job as AI commentators to influence how people think about something;
it's to inform them, and to trust them to think the right way once they know what's really going on.
We have to hold these companies' feet to the fire. We have to get past our own anxieties and get to the on-the-ground truth.
All right, sermon over. That's enough preaching for today. So remember, until next time: care about AI, but don't believe everything you read about it.
