The Peter Zeihan Podcast Series - The AI Race to Regression || Peter Zeihan
Episode Date: November 20, 2025
The AI race has been all the rage, but what if we were racing ourselves straight into regression?
Join the Patreon here: https://www.patreon.com/PeterZeihan
Full Newsletter: https://bit.ly/4hYjNMV...
Transcript
Everybody, Peter Zeihan here, coming to you from McCurdy Peak.
Well, the actual peak is there.
Anyway, Peter Zeihan, coming to you from Colorado.
Today, we're taking another question from the Patreon page,
specifically, can you please explain to me this new space age that we're in
the race for artificial intelligence and what we should look for,
what we should worry about.
Well, let's start by saying that most of the things that people are talking about
with AI are generally not quite on the mark.
For example, a lot of folks think that OpenAI,
that's the premier artificial intelligence company in the United States,
that their new program, ChatGPT-5,
which is supposedly an upgrade,
is actually a significant downgrade.
They find it not as user-friendly,
not as personable, not as complete.
That's for personal users.
AI affects potentially thousands of different applications,
but how most people interact with artificial intelligence
is in some sort of first-person single-seat interface,
like what you get on your phone,
or your laptop. I mean, I've got it that way, too. And the jump from ChatGPT-4 to ChatGPT-5 was not designed
for the single-seat user. It was designed for people who write code, for people who design
drugs. It's designed to bring a huge amount of processing power to things on the back end
to basically recreate something. So the institutional users, the design users, they're actually
finding ChatGPT-5 all kinds of fun. And Sam Altman, who is the CEO of OpenAI, is going back
and kind of taking some characteristics from ChatGPT-4 to put into ChatGPT-5 in order to make
everybody happy. So that's all going to work out. Here's the problem. Software versus
hardware, if I'm going to really sum it up, it's that. ChatGPT-4, the algorithm that we all
found so groundbreaking, really only took up about 10 terabytes. And you could easily carry that
on thumb drives in your hand. ChatGPT-5, more advanced, is at least twice that, probably three
times, but OpenAI is not saying, so we don't know that number for sure. The point is, in terms of
the raw memory required to make the AI function, it's really not that impressive. And so if
through corporate espionage or an act of benevolence, OpenAI were to lose control of the algorithm
and it got out there into the wild, so to speak, it really could be used by almost anyone.
What makes AI function in the way that we think of it today, not this SkyNet future thing,
but how it is now, requires massive amounts of processing power at data centers.
The largest data centers that the world has ever seen are needed in order to deal with the
inflow of requests that come in, run the algorithm, and spit out the results.
Which means that the limiting factor, for the moment in artificial intelligence, isn't the
software, it's the hardware.
And this is where we have a really big problem, and it's not that far away.
The ability to make the high-end processing chips that Taiwan is famous for requires
100,000 steps, 30,000 pieces, and 9,000 companies, and those companies are scattered around the world.
The single biggest concentration is in the United States, which is something Americans
conveniently forget when they're talking about sovereignty.
The number two concentration is in the Taiwan-centric zone.
The single most important company is in the Netherlands,
but it has facilities in Germany, Austria, California, and Japan.
But you're never going to be able to do the chips at all without all of these steps.
And a lot of them are single points of failure. So if you have any degree of de-globalization,
it doesn't really matter which country falls out of that network. We can't make the chips
at all. And for the chips that we already have, the lifespan in a data center is typically
in the three to six year range.
So when we get to the point
where we realize that we can't make the chips,
we're going to have a bit of a scramble
to see who can control what's left.
And then the ability to use AI
will shrink from something that you can all have
on your phone to simply the handful of
entities, whether governments or corporations,
that are capable of having their own data center
so they can run AI themselves.
And that will be it until we reinvent
the entire ecosystem.
And what we have seen with
most government efforts around the world, including the United States, to re-shore this sort of
manufacturing is that it only focuses on the fabrication facilities, which is what is in Taiwan. It ignores the
design. It ignores the material inputs. It ignores the photo mask. It ignores the wiring.
It ignores everything else that goes into a successful chip, much less the downstream stuff
like testing and packaging that ultimately makes the stuff that ends up in a data center.
No one, to my knowledge, is putting any effort into actually bringing the entire
ecosystem under one roof. And I honestly don't even think it would be possible anyway. There are too
many pieces. There are too many players. And if you're looking at the United States, there are not
enough technicians that are capable of doing it because we already have record low unemployment
levels. So we are in a moment right now where AI is possible, with ChatGPT-5 and all the rest,
and that will not last. And in the not-too-distant future, we are going to see a technological regression
as we lose the ability to make the hardware.
And since it took us 60 years to figure out
how to do that in the first place,
it's not something that we're going to do in a season.
It is going to take a mass reindustrialization process
of different parts of the world
to do different things coming together in different ways.
And that is something that I am not looking forward to.
But we're going to see the beginning of that
within this next decade.
