a16z Podcast - Dwarkesh and Noah Smith on AGI and the Economy
Episode Date: August 4, 2025

In this episode, Erik Torenberg is joined in the studio by Dwarkesh Patel and Noah Smith to explore one of the biggest questions in tech: what exactly is artificial general intelligence (AGI), and how close are we to achieving it?

They break down:
- Competing definitions of AGI — economic vs. cognitive vs. “godlike”
- Why reasoning alone isn’t enough — and what capabilities models still lack
- The debate over substitution vs. complementarity between AI and human labor
- What an AI-saturated economy might look like — from growth projections to UBI, sovereign wealth funds, and galaxy-colonizing robots
- How AGI could reshape global power, geopolitics, and the future of work

Along the way, they tackle failed predictions, surprising AI limitations, and the philosophical and economic consequences of building machines that think, and perhaps one day act, like us.

Timecodes:
0:00 Intro
0:33 Defining AGI and General Intelligence
2:38 Human and AI Capabilities Compared
7:00 AI Replacing Jobs and Shifting Employment
15:00 Economic Growth Trajectories After AGI
17:15 Consumer Demand in an AI-Driven Economy
31:00 Redistribution, UBI, and the Future of Income
31:58 Human Roles and the Evolving Meaning of Work
41:21 Technology, Society, and the Human Future
45:43 AGI Timelines and Forecasting Horizons
54:04 The Challenge of Predicting AI's Path
57:37 Nationalization, Geopolitics, and the Global AI Race
1:07:10 Brand and Network Effects in AI Dominance
1:09:31 Final Thoughts

Resources:
Find Dwarkesh on X: https://x.com/dwarkesh_sp
Find Dwarkesh on YT: https://www.youtube.com/c/DwarkeshPatel
Subscribe to Dwarkesh’s Substack: https://www.dwarkesh.com/
Find Noah on X: https://x.com/noahpinion
Subscribe to Noah’s Substack: https://www.noahpinion.blog/

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
AI might be generating hundreds of dollars of value for me a month, but like humans are
generating thousands of dollars or tens of thousands of dollars of value for me a month.
Why is that the case?
And I think it's just like, AIs are lacking these capabilities, humans have these capabilities.
You are a natural general intelligence, but we cannot easily do each other's jobs, even
though our jobs are fairly similar.
The reason humans are so valuable is not just their raw intellect.
It's their ability to build up context, it's to interrogate their own failures,
and pick up small efficiencies and improvements
as you practice a task.
Whereas with an AI model,
its understanding of your problem or business
will be expunged by the end of a session.
Every other technological tool is a complement to humans,
and yet when people talk about AI and think about AI,
they essentially never seem to think in these terms,
they always seem to think in terms
of perfect substitutability.
What happens when AI can do almost every white-collar job
but still can't remember what you told me yesterday?
What does that mean for AGI, the future of work
and the shape of the global economy?
I sat down with Noah Smith, author of Noahpinion,
and Dwarkesh Patel, host of the Dwarkesh Podcast,
to unpack what's real and what's hype in the race against AGI.
We talk about continual learning, economic substitution, galaxy-scale growth, and whether
humanity's biggest challenge is technological or political.
Let's get into it.
As a reminder, the content here is for informational purposes only, should not be taken as legal
business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in
any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies
discussed in this podcast.
For more details, including a link to our investments, please see a16z.com forward slash
disclosures.
Dwarkesh, Noah, welcome.
Our first podcast ever as a trio.
Yes, excited.
I'm very excited.
So Dwarkesh, you came out with The Scaling Era.
It's almost like you're a future historian.
You're sort of telling the history as it's being written.
And so it's only appropriate to ask you,
what is your definition of AGI, and how has that evolved over time?
I feel like I'm like five decades too young to be a historian.
I gotta be, like, in my 80s or something before I could.
But we're living in history right now.
Right.
So the ultimate definition is can do almost any job,
let's say like 98% of jobs, at least as well,
as fast, and as cheaply as a human.
I think the definition that's often useful for near-term
debates is can automate 95% of white-collar work
because there's a clear path to get to that,
whereas robotics, there's a long tail of things
you have to do in the physical world, and robotics is slower.
So, automate white-collar work.
That's interesting because it's an economic definition.
It's not a definition about how it thinks,
how it reasons, et cetera.
It's about what it can do.
Yeah, I mean, we've been surprised what capabilities
have come first in AI.
It's like they can reason already.
And why they seem to lack the economic value we would have assumed
would correspond to that level of capability.
This thing can reason, but it's making OpenAI $10 billion a year.
And McDonald's and Kohl's make more than $10 billion a year, right?
So, clearly there's more things relevant to automating entire jobs than we previously assumed.
So then it's just useful to like,
who knows what all those things are,
but once they can automate it, then it's AGI.
And so when Ilya or Meta is using the word
superintelligence, what do they mean?
Do they mean the same thing or something totally different?
I'm not sure what they mean.
There's a spectrum between God and just something
that thinks like a human but much faster.
Do you have some sense of what you think they mean?
God. I think probably they mean something they would worship as a god.
Yeah.
And so when Tyler says we've achieved AGI and you differ from him,
where is the tangible difference there?
I'm just noticing that if there was a human who was working for me,
they could do things for me that these models cannot do, right?
And I'm not talking about something super advanced. I'm just saying I have transcripts for my podcast.
I want you to rewrite them the way a human would.
And then I'll give you feedback about what you messed up.
And I want you to integrate that feedback as you get better over time, you learn my
preferences, you learn my content.
And they can't learn over the course of six months how to become a better editor for me
or how to become a better transcriptor for me.
And a human would be able to do this, but they can't.
So therefore, it's not AGI.
Now, I have a question.
I am a natural general intelligence.
You are a natural general intelligence.
But we cannot easily do each other's jobs,
even though our jobs are fairly similar.
Put me on the Dwarkesh Podcast, and I could not
interview people nearly so well.
If you had to write substack articles
like several times a week on economics,
you might not do as well.
But we are general intelligences, and we're not exactly substitutable.
So why should we use substitutability as the criterion for AGI?
What else is it that we want them to do?
I think with humans, we have more of a sense of there is
some other human who theoretically could do what you would do.
An individual copy of a model might be,
say, fine-tuned to do a particular job.
It would be fair to say then why expect
this particular fine-tune to be able to do any job in the economy?
But then there's a question of, well, there's many different models in the world,
and each model might have many different fine-tunes or many different instances.
Any one of them should be able to do a particular white-collar job
for it to count as AGI.
It's not that, like, any AGI should be able to do every single job,
that, like, some artificial intelligence should be able to do this job
for this model to count as AGI.
I see, okay, but so let's take another similar example.
Let's take Star Trek.
Yeah.
Okay, you got Spock.
He's very logical.
He can do stuff that Kirk and whoever can't do,
but then those guys can do stuff that Spock can't do,
get in touch with their emotions,
intuition, stuff like that.
They're both general intelligences,
but they're alien to each other.
So AI feels alien to me.
It was built off of our thoughts, obviously,
but then sometimes it talks just like us,
and sometimes it's just like very alien.
And so should we ever expect that to change
such that it's no longer an alien intelligence?
I think it'll continue to be alien,
but I think eventually we will gain capabilities
which are necessary to unlock the trillions of dollars of economic value
that are implied by automating human labor, which these models are clearly not generating
right now.
So you could say like, if we substituted jobs right now, immediately there'd be a huge productivity
dip.
But over time, we would learn to start doing them better.
I mean, maybe a better example is just that like, you hire people to do things for you.
I don't know if you actually hire people, but I assume...
Okay, I'll give you a few.
Okay. Why are you still having to do that rather than hiring an AI?
And I have, like, many roles where it's like,
an AI might be generating hundreds of dollars of value for me a month,
but, like, humans are generating thousands of dollars,
or tens of thousands of dollars of value for me a month.
Why is that the case?
And I think it's just like, AIs are lacking these capabilities,
humans have these capabilities.
And is the main thing missing, in your view, sort of continual learning? What is the bottleneck?
The reason humans are so valuable
is not just their raw intellect.
It's not mainly their raw intellect,
although that's important.
It's their ability to build up context.
It's to interrogate their own failures
and pick up small efficiencies and improvements
as they practice a task.
Whereas with an AI model,
its understanding of your problem,
your business, will be expunged
by the end of a session.
And then you're starting off at the baseline of the model.
And with a human, you have to train them over many months
to make them useful employees.
Yeah.
And what will need to change in order
for AI to develop that capability?
I mean, I probably wouldn't be a podcaster
if I had the answer to that question.
It just seems to me that a lot of the modalities
that we have today to teach LLMs stuff
do not constitute this kind of continual learning.
For example, making the system prompt better
is not the kind of continual learning
or on-the-job training that my human employees experience,
or RL fine-tuning is not this.
But what the solution to this looks like,
it's precisely because I don't have an obvious solution
that I think we're many years away.
Okay, so here's my question about replacing jobs.
It seems to me that it's partly by demand.
So, for example, suppose that AI has already replaced my job
or can replace my job.
So, suppose that anyone who fires up ChatGPT
or whatever model and says,
search the web, find the most interesting topics
that people are talking about in economics and write me
an insightful post telling me some cool new thing I should think about that
and they just do that every day and then they get a better blog than Noahpinion.
I don't know if that's happened yet.
I mean, I've tried that and I don't like it as much, but suppose that most people will
like it as much, and so my job is automated and people just don't realize it, or people
have this sort of idea in their mind of like, well, is it really a human and blah, blah,
and then as generational turnover happens, young people won't care about reading human,
they'll care about reading an AI.
But in terms of functional capabilities,
it's already there.
But in terms of demand, it's not there.
How much of that could there be?
I expect there'll be much less of that than people assume.
If you just look at the example of Waymo versus Uber,
I think previously you could have had this thing about
people would hesitate to take automated rides.
And in fact, in the cities where it's been deployed, people love this product,
despite the fact that you had to wait 20 minutes because the demand is so high.
And it's still got, like, some glitches to iron out.
But just the seamlessness of using machines to do things for you,
the fact that it can be personalized to you, it can happen immediately.
One thing people will be like,
okay, well doctors and lawyers will set up guilds,
and so you won't be able to consult an AI.
I think there might be guilds around who can call themselves a doctor or a lawyer.
But I just think if genuinely, it's actually going to be as good medical advice as a real
doctor, the experience of just talking to a chatbot rather than spending three hours
in a waiting room is so much better that I think a lot of sectors of the economy look
like this where we're like, we're assuming people will care about having a human, but
in fact, they will not if you assume that they will genuinely have the capabilities
that the human brings to bear. Right. So it's interesting AI is better for diagnosis on a lot
of things than humans, right? But then something about having humans to follow up with makes me
also want to check with a human after I've gotten diagnoses from an AI on something.
And so that might vary by job. Like cars may be one thing, but maybe it is about capability. I
can't say.
I'm just saying like everybody seems to think that AI is a perfect substitute for humans
and that's what it should be and that's what it will be.
And everyone seems to think of it in that case.
However, every other tool that's ever been made, every other technological tool is a
complement to humans.
It could do something humans could do.
Maybe even it could do anything humans could do, but at different relative costs, different
relative prices, so that you'd have humans do something and the tool do other things and you'd have
this complementarity between the two.
And yet when people talk about AI and think about AI, they essentially never seem to think
in these terms, they always seem to think in terms of perfect substitutability.
And so I'm trying to get to the bottom of like why people insist on always thinking
in terms of perfect substitutability when every other tool has been complementary in
the end.
Well, human labor is also complementary to other human labor, right?
There's increasing returns to scale.
But that doesn't mean that Microsoft has to hire some particular number of software engineers.
And, like, it will care about what the software engineers cost.
Like, it will go to markets where they can get the highest performance for the relative value
the software engineers are bringing in.
I think it will be a similar story with AI labor and human labor.
And AI labor just has the benefit of having
extremely low subsistence wages.
Like the marginal cost of keeping an H100 running
is much lower than the cost of keeping a human alive for a year.
Noah, would you say you're AGI-pilled in the sense
that Dwarkesh described the term?
We've talked a little bit about AI's effect on labor
when you shared why you're perhaps a little bullish
that there will be plenty for humans to do and that it'll be more complementary.
What is AGI-pilled?
Just the belief that it will automate a huge swath of the economy or labor?
I mean, I am very unwilling to say, like, here's something technology will never be able to do.
I mean that always seems like a bad bet
Here's two things people have been saying since the beginning of the Industrial Revolution
Neither of which has ever remotely come close to being true,
even in specific subdomains.
The first one is, here's a thing technology will
never be able to do.
And the second one is, human labor will be made
obsolete.
Those people have been saying those two things,
and you can just go, you can read it, you can even
ask AI to go search and find,
I have done this, and find you examples
of people saying those two things.
People have been saying those two things
over and over and over and over and over,
and it's never been true.
That doesn't mean it could never be true.
Sometimes something happens that never happened before,
such as the Industrial Revolution itself.
You have this hockey stick where suddenly like,
oh, we'll never get rich, we'll never get rich,
oh, we're rich.
And so sometimes that happens.
The unprecedented can happen.
However, I'm always wary because I've seen it said so many times.
And so within just the last 10 years or whatever, I've seen a couple predictions just spectacularly
fail.
So for example, in 2015, 10 years ago, I was sitting in the Bloomberg office in New
York and my colleague, I won't name, he was physically yelling at me that truck drivers
were in trouble and that truck drivers were all going to be put out of a job by self-driving trucks. And he said this is going to just
devastate a sector of the economy. It's going to devastate the working class, it's going
to devastate blue collar labor, blah, blah, blah. And at the same time, I was reading
like I always read the sci-fi top stories of the year, whatever. And so there were two
stories in the same year about truckers being mass unemployed by self-driving trucks. And
then 10 years later, there's a trucker shortage,
and the number of truckers we hire is higher than ever.
I'm not saying truckers will never be automated.
They may.
However, I'm saying that was a spectacularly wrong prediction.
You also got Geoffrey Hinton's prediction
that radiologists would be unemployed
within a certain time frame.
And by that time, radiologist wages were higher than ever,
and employment was higher than ever.
I'm not saying this can't happen.
I'm not smugly sitting here and saying
there's a law of the universe that says,
you'll never see this kind of mass unemployment, blah, blah, blah.
I mean, there were encyclopedia salespeople who were mass unemployed by the internet; we've seen it happen in real life.
But these predictions keep coming wrong and keep coming wrong.
I'm trying to figure out why is that true?
Why do they keep coming wrong?
Is it simply that people overestimate progress and technical capabilities?
Or are there complementarities that people can't imagine from sort of like the O*NET division of tasks
or the standard mental division of tasks?
I think the problem has been that people underestimate
how many things are truly needed to automate human labor.
And so they think like we've got reasoning
and now that we've got reasoning,
like this is what it takes to take over a job.
When I think, in fact, there's much more to a job
than is assumed.
That's why I wrote this blog post where I'm like,
look, it's not a couple years away.
It might be longer than that.
Then there's another question of, like, by 2100,
will there be jobs that humans are doing?
If you just, like, zoom out long enough,
will we ever be able to make machines that can think
and do physical labor at least as cheaply and as well as humans can?
And fundamentally, the big advantage they have is, like like we can keep building more of them, right?
So we make as many of those machines until the value they generate equals the cost of producing them.
And the cost will continue to go down.
Right, yeah.
And it will be lower than the cost of keeping a human alive.
So even if a human could do the exact same labor,
a human needs, like, a lot of stuff to stay alive, let alone to grow a new human.
An H100 costs $40,000 today.
The yearly cost of running it is like thousands of dollars.
We can just buy more H100s.
Say we currently have the algorithm for AGI; we could run it on an H100.
So however big the demand is, the latent demand that's unlocked,
we just increase the supply, basically, to meet that demand.
So first, when AGI is here, what does the world look like?
Because Sam Altman was reflecting on his podcast
with Jack Altman the other week.
He was saying, if you told me 10 years ago that we would have
PhD-level AI, I would think the world looks a lot different.
But in fact, it doesn't look that different.
And so is there a potential where
we have much more increased capabilities,
but actually the world doesn't?
It's like what Peter Thiel called the 1973 test or something.
We have these phones, but the world just looks the same.
We just have phones in our pockets.
Yeah, I think if we have like chat bots that
can answer hard math questions, I
don't expect the world to look that different,
because the fraction of economic value that is generated by math
is like extremely small.
But there's like other jobs that are much more mundane than
quote unquote PhD intelligence, which a chatbot just cannot do, right? A chatbot
cannot edit videos for me. And once those are automated I actually expect a pretty
crazy world because the big bottleneck to growth has been that human population
can only increase at this slow clip. And in fact one of the reasons that growth
has slowed since the 70s is that in developing countries,
the population has plateaued.
With AI, the capital and the labor
are functionally equivalent, right?
You can just build more data centers
or build more robot factories,
and they can do real work
or they can build more robot factories.
And so you can have this explosive dynamic.
And once we get like that loop closed,
I think it would just be like 20% growth plus.
Do you see that feasible or possible?
Tyler, I believe, said 5%.
0.5% more than the steady state.
0.5%.
What is the argument for that?
For Tyler's argument, bottlenecks.
I think the problem with that argument is that there's always bottlenecks.
So you could have said before the Industrial Revolution,
well, we will never 10x the rate of growth because there will be bottlenecks.
And that doesn't tell you what, like, you empirically have to just look at the fraction of the economy that will be bottlenecked,
and what is the fraction that's not, and then like actually derive the rate of growth.
The fact that there's bottlenecks doesn't tell you, yeah, okay, there will be like...
Is he mostly referring to the regulation or...?
Yeah, and just that like we live in a fallen world and people will have to use the AIs and yeah, things like that.
Who will be buying all the stuff?
So, background, in economics,
GDP is what people are willing to pay for.
Who will be buying the stuff in a world
where we get 20% growth?
First of all, I don't know.
So you could have said in 10,000 BC,
the economy is gonna be a billion times bigger
in 10,000 years.
What does it mean to produce a billion times more stuff
than we're producing right now?
Who is buying all this stuff?
You can't predict that in advance.
In the 1700s, I could tell you exactly who was buying stuff.
It was everybody, peasants.
In fact, people wrote these things around 1900
about what the world would look like in a hundred years.
You know, what we'll have.
They didn't get exactly the right things right
that we'll have, but they correctly identified
that it would be regular consumers
who would be buying all these things, regular people.
And so that came true.
It was obvious, but here's my point.
Suppose that 99% of people do not have a job
and are not getting paid an income,
and all the money is going to sort of Sam Altman,
Elon Musk, and five other guys, okay?
And their captive AIs that they own
because for some reason our property rights system
still exists.
But okay, suppose that that's the future
we're contemplating, right?
And so 99% of people or more don't have any job,
they don't have any income, they're out on the street,
and yet you're saying 20% growth a year,
that growth is defined by people, consumers,
paying for things and saying,
here is the money, take my money.
I wouldn't define it just as people.
Okay, so then-
I would just define it as like,
the raw... I mean, I assume the AIs are trading with each other.
We will have AI purchasing agents.
Yeah, and I mean, it's like-
No, that doesn't count GDP.
Only final good, only final good.
Okay, so we're like launching the Dyson spheres,
we're not allowed to count that because the AIs are doing it.
I mean, like, I want to know what the solar system will look like.
I don't care, like, what, like, the semantics of that are.
And I think the better way to capture what is physically happening
is just to include the AIs in the GDP numbers.
Why will they do that?
One argument is simply that if there's any agent,
AI or human, who cares about colonizing the galaxy,
even if 99% of agents don't care about that,
if one agent cares, they can go do it,
colonizing the galaxy is a lot of growth
because the galaxy is really big, right?
So it's very easy for me to imagine
if Sam Altman decides to launch the probes,
how breaking down Mars and sending out the various probes
generates 20% growth.
I think what you're getting at here
is that AI will have to have property rights.
AI agents will have to be able to have autonomous control of resources.
I guess it depends on what you mean by autonomous. Today we already have computer programs that have autonomous use of resources.
Okay, but the program goes off and colonizes the solar system.
Right.
It's not like a dude telling it colonize the solar system now and doing all this
stuff, it's like the AI has made the decision to do it
and Sam Altman sitting back there saying,
oh well, it may be doing it.
I'm just saying this is not a crux.
Sam Altman could say it or the AI could say it.
If some agent cares about this
and they're not stopped from doing it,
like this is just like,
physically you can easily see
where the 20% growth is coming from.
Let me make this a little more concrete.
Suppose that AI is gonna produce a bounty
of the things that humans desire
and that's gonna be what growth is.
How will it get to the humans if the humans don't have a job?
And if the humans don't have a job, why will AI be producing it?
So in other words, if there's no consumers to buy my cars,
why am I building cars?
You might be assuming there's some UBI or some sort of-
No, no, I don't need to assume that. Although, I mean-
Let's assume there's not that.
Yes, I don't need to assume that.
It seems like you're saying, look,
if 99 percent of consumers are no longer consumers,
where's this economic data coming from?
Yeah.
And I'm just saying, okay,
if one person cares about colonizing the galaxy,
that's generating a lot of demand.
It takes a lot of stuff to colonize the galaxy.
So this world where like,
even if there's not an egalitarian world
where everybody's like roughly contributing
equivalent amounts of demand,
the potential for one person alone to generate this demand
is high enough that, like...
So Sam Altman tells his infinite army of robots to go out and colonize the galaxy,
we count that as consumption, we put a value on it, and that's GDP.
Yeah, it might be investment. Maybe he's going to defer his consumption to once he's like after he colonized the galaxy.
And I'm not saying this is the world I want, I'm just saying think about it physically.
If you're colonizing the galaxy, which you can do potentially after AGI,
I'm not saying it will happen tomorrow after AGI, right? But is the thing that's physically possible, is that growth?
Like something's happening that's like explosive.
Right. Maybe.
The thing is that it's a very weird world.
It doesn't look like the kind of economy you've ever had.
And we created the notion of GDP to represent people exchanging money for goods and services,
people like basically exchanging their labor for goods, exchanging the value of their labor for goods and services.
That's at a fundamental level, that's what GDP is.
We're envisioning a radical shift of what GDP means to a sort of internal pricing that
a few overlords set for the things that their AI agents want to do.
And that's incredibly different than what we've called GDP in the past.
I think the economy will be incredibly different from what it was in the past.
I'm not saying this is like the modal world.
There's a couple of reasons why this might not end up happening.
One is even if your labor is not worth that much,
the property you own is potentially worth a lot, right?
If you own the S&P 500 and there's been explosive growth,
you're like a multi-multi-millionaire,
or the land you have is like worth a lot.
If the AI can make such good use of that land
to build the space probes
assuming that our system of property rights continues into this regime.
And second, in many cases it's hard to ascribe how much economic growth there has been over very long periods of time.
For example, if you're comparing the basket of goods that we can produce as an economy today versus, like, 500 years ago, it's not clear how you compare. We have antibiotics today.
I wouldn't want to go back 500 years for any amount of money
because they don't have antibiotics and I might die
and it'll just suck.
So there's actually like no amount of money
to live in 1500 that we would rather have than live today.
And so if we have those quality of goods for normal people,
just like you can live forever,
you have like euphoria drugs, whatever,
these are things we can imagine now,
hopefully it'll be even more compelling than that.
Then it's easier to imagine like, okay,
it makes sense why this stuff is worth way more
than the stuff that the world economy can produce
even for normal people today.
Right.
And so I guess I'm just thinking about,
this is a thing that economists really struggled with
in the early 20th century.
It's this idea that we had this capacity
to expand production, expand production, expand production.
And then the thing is that companies competed
their profits to zero and the profits crashed
and nobody wanted to expand production anymore because they weren't making any profit.
We're seeing this happen again in China right now with overproduction.
We're seeing BYD having to take loans from its suppliers just to stay financially afloat
even though it's the best car company in the world because the Chinese government has paid
a million other car companies to compete with BYD.
And so you overproduce, so you have this overproduction.
So the solution was to expand consumption.
This is the solution people are recommending for China now,
to expand consumption so that you can refloat
the profit margins of all these companies
and have them continue as companies.
And so the idea is if AI is producing all this stuff,
but it's overproducing services,
it's overproducing whatever AI can produce,
and the profits from this go negative,
that makes the GDP contribution go to zero,
and basically OpenAI and Anthropic and XAI and whatever will just be sitting there saying,
why am I doing this again?
Why am I?
No one's buying this shit.
And so at that point, it seems like there will be corporate pressure on the government
to do something to redistribute purchasing power so that they don't compete their profits
to negative.
And so they have some reason to create more economic activities
so they can take a slice of it, which is essentially what happened in the early 20th century.
Yeah, I disagree with this. I think—
I'm not saying this will happen. I'm saying like that would be the analogous thing.
I disagree. I would prefer it to be the case that even as a libertarian,
I would prefer for significant amounts of redistribution in this world because
the libertarian argument doesn't make sense if there's no way you could physically pick yourself up by the bootstraps.
Like your labor is not worth anything.
Or your labor is worth less than subsistence calories or whatever, which is the more relevant
thing.
But I don't think this is analogous to the situation in China.
I think what's happening in China is more due to the fact that you have the system of
financial repression, which redistributes money and also currency manipulation, which basically
redistributes ordinary people's money to basically producing
one EV maker in every single province.
So it is the market distortion that the government is creating that causes this overproduction.
We can go into what the analogous thing in the AI case looks like, but I think if there
isn't some market distortion, I just think people will use AI where it has the highest
rate of return.
If it's not space colonization, there will be longevity drugs or whatever.
I'm just asking why would I invest all this money into AI producing stuff?
Why would I just invest the massive hundreds of billions or trillions or
whatever of dollars into producing stuff for people who are all going to be out
of a job and won't be able to buy this stuff?
But again, I don't think you'll be producing it for them.
I think you'll be producing it for whoever does that.
Like there's stuff in the world.
Somebody will have stuff.
Maybe it's the AIs, maybe it's Sam Altman.
You're producing it for whoever has the capability to buy your stuff.
And will they want AI?
And I'm just saying AI can do so many things,
least of which is colonizing the galaxy.
People are willing to pay a lot of stuff to colonize the galaxy.
I'm just trying to get this straight in my head of what this economy looks like,
and I'm seeing a picture of the trillions of dollars needed
to build out all these data centers will be done not for profit,
not to make money from a consumer economy for the creators of the AI, but to satisfy the whims of a few robot lords to colonize the galaxy.
I think you're making two different points and they're getting wrapped into one.
Yes, important point. So there's one about what you expect in the case that the robot overlord world happens. And I'm saying, no, actually, even without redistribution... First of all, I expect redistribution
to happen.
I hope it happens.
But even if it doesn't. And I don't think it will happen because corporations
want redistribution to happen.
I think it would be good to happen for independent reasons.
But I don't buy this argument that the corporations would be like, we need somebody to buy our
AI, therefore we need to give the money to the ordinary consumers.
You believe broad-based asset ownership will create a whole lot of broad-based consumer
demand even in the absence of labor income?
Honestly, I don't have a super strong opinion, but I think that's plausible.
But independent of that, I'm like, okay, even if that demand doesn't exist, just the things
you can do with a new frontier of technology, as long as one person wants it, there's so
much room to do things.
Space colonization is an obvious example.
That costs a lot of money.
Right. There's obvious demand for the things that you will be able to produce, right?
Like one of the things that I can produce is colonize the galaxy.
Right, exactly. So, but the question is like, I can see a paperclip-maximizing
autonomous intelligence colonizing the galaxy, but in terms of...
That's a lot of growth.
That is. In terms of... And so, by the way, I would like to say that I am a paperclip maximizer.
I am the real paperclip maximizer.
I want to maximize rabbits in the galaxy.
I want to turn the entire galaxy into floofy rabbit.
That's my goal.
And so my goal with AGI is to enlist the AGI to help me in this goal.
But then to align them towards rabbit.
But anyway...
Get this guy in front of the OpenAI board of directors.
I know.
I mean, like, the social welfare function is floofiness.
But I guess my point here is, as long as AI still doesn't have property rights and it's
humans making all the economic decisions, be it Sam Altman and Elon Musk or, you know,
you and me, then at that point, like, that really matters for what gets done.
Because if we're talking about the money needed to build all these massive data centers, which
currently it's a lot of money, it's a ton of money required to build these data centers.
And that money need will not go away.
We can't just say, oh, cost goes to zero because we can say unit cost goes to zero, but total
cost doesn't go to zero, nor has it.
It has increased.
The total spend on data centers has increased.
And I think everyone expects it to increase for the foreseeable future.
The question is, is that money being spent because AI companies expect to reap benefits
from consumers like you and me?
Or to what extent is it that?
And to what extent is it Sam Altman feels
like doing some crazy stuff
and Sam Altman's just God-like richer than everybody else.
And so Sam Altman is actually consuming
when he builds those data centers.
He is building those data centers so that he can indulge his godlike whims.
I think that more plausible than either a single godlike person is able to direct the whole economy,
or like there's this broad-based consumer.
These are extremes.
Yes, I think more plausible is like, AIs will be integrated through all the firms in the economy.
A firm can have property.
Firms will be largely run by AIs,
even though there's nominally human board of directors.
And it might not even be nominal, right?
Maybe the AIs are aligned and genuinely give
the board of directors an accurate summary
of what's happening, but day to day,
they're being run by AIs.
And firms can have property rights,
firms can demand things.
So say all you have is a board of directors and AI.
Yeah.
I mean, in the ideal world.
Okay, so then what we're basically looking at is
the labor share of income goes to zero
or something approaching that.
Depends on how you define the AI labor.
And capital income is distributed highly unevenly.
It's distributed much more unevenly than labor income,
but it's still distributed reasonably broadly.
Like I have capital income, you have capital income.
So at that point, we have just an extremely unequal society
where owners get everything and then workers get nothing.
And then so we have to figure out what to do about that.
Yeah, 100%.
Piketty is killing himself somewhere.
Piketty's been wrong about everything.
Yeah, I know.
So let's hope he's wrong again.
I mean, he'd be happy.
He'd be like, see, I was right.
Because we're an economist, being right
is the most important thing.
Yeah, exactly.
I mean, the hopeful case here is the way our society currently treats retirees and old
people who are not generating any economic value anymore.
And if you just look at like the percent of your paycheck that's basically being transferred
to old people, it's like, I don't know, 25% or something.
And you're willing to do this because they have a lot of political power.
They've used that political power in order to lock in these advantages.
They're not, like, so overwhelming
that you're like, I'm going to move to, like, Costa Rica instead.
You're like, okay, I had to pay this money.
I had to pay this concession.
I'll do it.
And hopefully humans can be in a similar position to this massive
AI economy that old people today have in today's economy.
All right.
What do humans do?
Let's say they get some money.
They have enough to live.
How do they spend their time? Is it art, religion, poetry, drugs?
Podcasting, it's the final job.
Yeah, we're ahead of the curve here.
Or is it we're the last man of history?
Exactly.
Wait, so here's an idea.
How about sovereign wealth fund?
Okay, so sovereign wealth fund, we tax Sam Altman and Elon Musk.
We're using Sam as a metaphor here.
He's a friend of the firm.
Yeah, yeah, yeah.
We tax him, we tax Mark.
And so then we use their money.
Only the friends of the show will be taxed.
Right.
We use that money to buy shares in the things that those people have.
So they get their money back because we're buying the shares back from them.
Okay.
So it's okay.
And then we hire them.
Because then what we do is we hire a number of firms,
including A16Z and pay them two and 20 or whatever,
to manage the investment of AI stuff on behalf of the humans.
But then the humans become broad-based sort of index fund
shareholders or shareholders in whatever you guys choose
to invest in, then you take a cut.
And this could be the future economy.
This is what my PhD advisor, Miles Kimball has suggested.
This is what the socialist Matt Bruenig has suggested.
And this is what Alaska actually does with oil.
Capitalists like it, socialists like it, Alaska likes it.
I think sovereign oil funds generally have a bad track record.
There's some exceptions that have managed to use their oil wealth well,
like Norway or Alaska, but there's just, like, these
political economy problems that come up
when there's this tight connection between the investment,
which should
theoretically just be about the highest rate of return, and politicians.
So I don't, like, have a strong alternative.
Ideally, you just let the market decide how the investment should happen.
And then you can just take a tax.
But then exactly where does that tax happen?
I haven't thought it through, but.
Are you dubious of this?
Yeah, I wouldn't want the government influencing where that investment happens,
but I want the government taking a significant share of the returns of that investment.
Yeah. Are you dubious of the trope that labor provides meaning,
and if people don't have a clear sense for labor,
then it will be very difficult for them to obtain alternative sources of meaning?
Or is that kind of a capitalist sort of trope that isn't necessarily true?
My suspicion is that humans have just adapted to so much.
Like, agricultural revolution, industrial revolution,
the growth of states.
Like, once in a while, like, a communist or fascist regime
will come around or something.
Like, the idea that being free and having millions of dollars
is the thing that finally gets us,
I'm just suspicious of.
By the way, do we not disagree about the thing I'm saying?
Once we get AGI,
humans will not have high paying jobs. Do we disagree about this?
I think humans may have high paying jobs.
Okay.
Because of comparative advantage. The key here is if there's some AI specific resource
constraint that doesn't apply to humans, then comparative advantage law takes over and then
humans get high paying jobs even though AI would be better at any specific thing than human.
Because there's some sort of aggregate constraint.
The example I always use, of course, is Mark Andreessen, who is the fastest typist I have
ever seen in my life and yet does not do his own typing.
And so because there's a Mark Andreessen specific aggregate constraint on Mark Andreessen's,
there is only one of him.
Whew.
So he hasn't taken all the secretaries' typing jobs,
because he has better things to do.
And so if there's some sort of AI-specific resource constraint
that hits, then humans could have high-paying jobs.
Now, I'm not saying there will be.
Yeah.
And I'm not saying there won't be.
Yeah.
I'm saying I don't know if there is.
Yeah.
The reason I find that implausible is that I think that will be true in the short term,
because right now there's 10 million H100 equivalents in the world.
In a couple of years, it might be 100 million.
Like, H100 has the same amount of flops as a human brain.
So theoretically they're like as good as a brain if you have the right algorithm.
So there's like a lower population of AIs even if you had AGI right now than humans.
But the key difference is that in the long run you can just keep increasing the supply
of compute or of robots.
And so if it is the case, so if an H100 costs a couple thousand dollars a year to run,
but the value of an extra year of intellectual work is still like a hundred thousand
dollars, so you're like, look, we've saturated all the H100s and we're going to pay a human a hundred thousand
dollars because there's still so much intellectual work to do. In that world, the return
on buying another H100, like an H100 costs $40,000,
just think in a year that H100 will pay you over 200% return, right?
So you'll just keep expanding that supply of compute
until basically the H100 plus depreciation plus running cost
is the same as an extra year of labor.
And in that world, that's like much lower than human subsistence.
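A quick back-of-envelope sketch of the return-on-compute argument above, using only the rough, illustrative figures mentioned in the conversation (the $40,000 H100 price, a couple thousand dollars a year in running costs, and $100,000 as the assumed value of an extra year of intellectual work); these numbers are the speakers' assumptions, not measured data:

```python
# Back-of-envelope sketch of the return-on-compute argument above.
# All figures are the speakers' illustrative assumptions, not measured data.

h100_price = 40_000              # upfront cost of one H100, in USD
yearly_running_cost = 2_000      # "a couple thousand dollars a year" to run it
value_of_year_of_work = 100_000  # assumed value of an extra year of intellectual work

# Simple one-year return on buying one more H100 (ignoring depreciation).
profit = value_of_year_of_work - yearly_running_cost
simple_return = profit / h100_price
print(f"One-year return: {simple_return:.0%}")  # prints 245%, i.e. "over 200% return"

# The argument: compute supply keeps expanding until the value of an extra year
# of machine labor falls to roughly the amortized price plus running cost plus
# depreciation, a level the speaker argues is far below human subsistence.
```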
So comparative advantage is totally consistent with human wages being below subsistence.
It is, but that comes from the common resource consumption.
So if basically all of the land and energy that could be used to feed and clothe and shelter humans gets appropriated by H100s, then that is the case. However, if you pass a law that says,
this land is reserved for growing human food,
if we actually were to just pass a simple law
saying that you have to use these resources,
these resources are reserved for human.
But at that point, the comparative narrowing.
Then comparative advantage comes out.
At that point, human labor has nothing to do with this.
The only reason the system works is that
you are basically transferring resources.
You've come up with a sort of like intricate way
to transfer resources to humans.
It's just like, this resource is for you,
you have this land, and therefore you can survive.
And this is just like an inefficient way
to allocate resources to humans.
It's true that it is an inefficient way.
I think people will like hear this argument
of comparative advantage and be like,
oh, there's some intrinsic reason that humans will-
We take UBI instead.
Yeah.
Okay. Yeah.
Yeah, I mean, sure.
But then again, we typically do not
see the first best most efficient political solution
implemented for things like redistribution.
In the real world, redistribution
happens via things like the minimum wage
or letting the AMA decide how many doctors there's going to be.
So redistribution in the real world
is not always the most efficient thing.
So I'm just saying that like comparative advantage,
if you're talking about will humans actually continue
to get high paid work, yes or no,
it depends on political decisions that may,
and it depends on physical constraints that will happen.
But the high paid jobs are literally because like,
you have said that there must be high paying jobs politically.
I understand, in this case you've said it in an indirect way,
but you still said it.
Right, yeah.
You're absolutely right.
Yeah.
You're not wrong.
Yeah.
I guess it's incredibly different from what somebody
might assume.
Right.
Like, it has almost nothing to do with the comparative
advantage argument.
OK, sure, but that's true of a lot of jobs that exist now.
Like, a lot of jobs that exist now, I'm not sure what,
like, university professors, there's a lot of those jobs.
Or like, credit rating agencies. Or, you know,
there's a lot of things where probably we could wring out
some significant TFP growth more or less
by eliminating those things, but we don't,
because our politics is a kludgeocracy.
I think this is one of Tyler's points.
Yeah.
I mean, I do think it's important to like point out in advance,
like basically, it would be better if we just bit the bullet about AGI,
so that instead of doing redistribution by expanding Medicaid
and then Medicaid can't procure all the amazing services that AI will create,
it would be better if we just said, look, this is coming, and I'm not saying we should do a UBI today,
but like, if all human wages go below subsistence, then the only way to deal with that
is through some kind of UBI,
rather than if you happen to sue OpenAI,
you get a trillion dollar settlement,
otherwise you're screwed.
Some people said the bear case for UBI
was something around, like, COVID as an example.
You gave people a bunch of money,
and what do they go do?
Go riot in the streets, I'm teasing.
But are people gonna use that money in an effective way?
I mean, that was literally what happened.
Yeah, so is UBI the form that you would think the,
like, what is the most effective method?
The reason I favor UBI is, like, this thing
where in a future world with explosive growth,
we're going to see so many new kinds of goods and services
that will be possible that are not available today.
And so distributing just like a basket of goods
is just inferior to saying,
oh, if like we solve aging, here's some fraction of GDP,
go spend your tens of millions partly on buying this aging cure,
whatever this new thing that AI enables,
rather than here's a food stamps equivalent of the AGI world
that you can have access to.
Of course, I mean, this discussion may be academic
because I believe that you said that we got phones and the world looked the same.
I mean, no, it doesn't. Phones have destroyed the human race. Like the fertility crash that's
happening all around the world, nobody has replacement level. Fertility is going far below
replacement everywhere because of technology. And-
Is that a phone or the pill or-
Well, no, it's the phone. I mean, the pill and other things like women's education, whatever, lowered
fertility, like, quite a bit, but some countries were still around replacement level,
but the crash we've seen since everybody got phones is epic and is just unbounded. The human race
does not have a desire, a collective desire to perpetuate itself.
Yes, we're gonna get lonely, but we'll have company through AI and through the internet, social media,
until there's just a few of us and we dwindle and dwindle.
But yeah, I mean, like, technology has already destroyed
the human race, and basically, UBI is just, like,
keeping us around on life support for a little while
while that plays out.
I do think so far there's been a lot of negative effects
from widespread TikTok use or whatever
that we're still, like, learning about.
I am somewhat optimistic that in the long run,
there's some optimistic vision here that could work.
Just because right now the ratio of,
it's impossible for Steven Spielberg
to make every single TikTok
and direct it in a sort of really compelling way
that's like genuine content
and not just video games at the bottom
and some music video at the top. In the future, it might genuinely be
possible to give every single person their own dedicated Steven Spielberg and
create incredibly compelling but long narrative arcs that include other people
they know etc. So in the long run I'm like maybe this...
What's next that'll happen?
I don't think TikTok is like the best possible medium.
No, I also don't think TikTok is unique
in destroying the human race.
I think that interacting online
instead of interacting in person,
that's a great filter.
How do you make your money go ahead?
I agree.
We're all making money destroying our species.
You don't think we got isolated due to dating apps and...
No, I'm saying like as long as you can get your,
why did humans perpetuate the human species?
It was not because they wanted to see
the human species perpetuated.
It was because it's like, oop, I had sex and there came a baby.
And that's done.
We've severed that.
That is the end.
We did not evolve to want our species to continue.
Right.
But you're saying the reasons why we're not having babies is because we can
make friends on the internet, but is it that dating apps have created
just a much more efficient market and thus there is a pair of bun?
I don't know.
I mean, like people having less sex.
If Elon gets his way, everybody will just sit there gooning
to some sort of Grok companion thing.
The goonpocalypse seems upon us.
But this is available right now. What's the website?
Oh, no.
This podcast got silly, but anyway, I guess the point is that
the idea of a humanity that just keeps increasing in numbers and spreading out to the galaxy,
I don't see a lot of evidence that is in our future and that we have to go to great lengths to make sure that future is compatible with AGI.
Because I don't think it's happening in any case, AGI or none.
By the way, not to cope too hard, but in a world where AGI happens, how important is increasing population? Population has so far been the decisive factor
in terms of which countries are powerful.
Like the reason China, if US was not involved,
the reason China could take over Taiwan
is just that there's 1.4 billion Chinese people
and there's 20 million Taiwanese people.
Now, if in future your population is,
your effective labor supply is like largely AIs, then this dynamic
just means that your inference capacity is literally
your geopolitical power, right?
I want to shift to short term a bit.
You've had some people on the podcast, you have the AI 2027
folks who believe that AGI is perhaps two years away.
I think they updated to three years away.
And then you've also had some folks on who said,
it's not for 30-something years.
Maybe you could steel man both arguments and then share where you net it out.
Yeah. So two years, if I'm steel manning them, is that look, if you just look at the progress
over the last few years, it's reasoning.
Aristotle's like, the thing that makes humans human is reasoning.
It was not that hard, right?
Like train on math and code problems and have it like think for a second and you get
reasoning. Like, that's crazy.
So what is a secret thing that we won't get?
Can I ask a stupid question? Why was
stuff like o3-type models, why are those called reasoning models, but, like, GPT-4o
is not called reasoning? What are they doing different that's reasoning? One, I
think it's GPT-3 can technically do a lot of things GPT-4 can but GPT-4 just
does it
way more reliably. And I think this is even more true of reasoning models relative to
GPT-4o, where 4o can solve math problems. And in fact, like, modern-day 4o has been
probably trained a lot on math and code. But the original GPT-4 just wasn't trained that
much on math and code problems. So like, it didn't have whatever meta-circuits there exist
for like, how do you backtrack?
How do you be like, wait, but I'm on the wrong track, I gotta go back, I gotta pursue the solution this way.
Algorithmically, I have a okay idea of what a reasoning model does that the non-reasoning models don't.
But in terms of how does that map to a thing that we call reasoning, what is the definition of what it means to reason that these people are using, the operational definition here?
Because I don't understand that myself.
I mean, 4o can't get a gold in the IMO.
Okay, but I can reason and I can't get a gold in IMO.
But I can reason.
I can't get a gold either,
but I don't think I can reason as well as a Math Olympiad medalist,
at least in the relevant domain.
I agree that reasoning is not just about mathematics,
but this is true of any word you come up with,
like the zebra.
What about the thing that like is a mixture of a zebra and a giraffe and they
have a baby, is that a zebra still?
I agree there's edge cases to everything, but there's a general conceptual
category of zebra and I think there's like a general conceptual category of reasoning.
Okay.
I was just wondering what it is.
Like when you have a checkout clerk, right?
That checkout clerk would look at an IMO problem and be like, what?
But then like you have a checkout clerk and the checkout clerk, you're like, okay,
so you put the thing on this shelf,
and therefore someone has looked for it and didn't find it,
so something else must have happened.
That's reasoning.
I think a reasoning model will be more reliable
and be better at solving that kind of problem than 4o.
So you're steel-manning the AI 2027 view.
Yes.
So a lot of things we previously thought were hard
have just been incredibly easy.
So whatever additional bottlenecks you are anticipating, whether it's this continual
learning on the job training thing, whether it's computer use, this is just going to be
the kind of thing where in advance it's like, how would we solve this?
And then deep learning just works so well that we like, I don't know, try to train it
to do that and then it'll work.
The long-timelines people will say, I don't know, there's a sort of longer argument.
I don't know how much to bore you with this, but basically the things
we think of as very difficult and requiring intelligence have been some of
the things that machines have gotten first. So just adding numbers together,
we got in the 40s and 50s. Reasoning might be another one of those things
where we think of it as the apogee of like human abilities, but in fact it's
only been recently optimized by evolution over the last few million years.
Whereas things like just moving about in the world and having common
sense and so forth and having this long term memory, evolution spent hundreds of millions
if not billions of years optimizing those kinds of things. So those might be much harder
to build into these AI models.
I mean, the reasoning models still go off on these crazy hallucinations that they'll
never admit were wrong, and will just gaslight you infinitely on some crap they made up.
Like just knowing truth from falsehood.
I've met a couple of humans who don't seem to be able
to know truth from falsehood.
They're weird.
And so, but o3 sometimes does this.
I think it's a good question.
Do they hallucinate more than the average person?
I think no less.
They hallucinate, meaning like getting something wrong,
and when you push them on it, they're like, no, whatever.
And eventually they'll, like, accede if they're clearly wrong.
I think like they're actually more reliable than the average human.
So the thing about the average human is you can get the average human to not do
that with the right consequences.
And maybe AI, we haven't found the right like reinforcement learning function
or whatever to get them to not do that.
Right.
Okay.
Now let's get to the view that it's 30 years away.
What's that view?
Just this thing of reasoning is relatively easy in comparison to, forget about robotics,
which is just going to be, evolution spent billions of years trying to get like robotics
to work.
But there's like other things involved, like tracking long-run state. You know,
a lion can follow its prey for a month or something, but these models can't do a job for a month.
And these kinds of things are actually much more complicated than even reasoning.
And where you've netted out is it's either going to happen in a few years or not for quite some time?
Yeah, basically, the progress in AI that we've seen over the last decade has been largely driven
by stupendous increases in compute.
So the compute used on training a frontier system
has grown about 4x a year for, I think, the last decade.
And just over four years, that's 160x, right?
So over the course of a decade,
that's hundreds of thousands of times more compute.
That physically cannot continue. If you just ask,
okay, what would it mean? Right now
we're spending 1.2% of GDP or something on data centers.
Not all of that is for training of course,
but what would it mean to continue this for another decade?
For maybe five more years, you could
keep increasing the share of energy
that we're spending on training data centers,
or the fraction of TSMC's leading-edge wafers
that we dedicate to making AI chips,
or even the fraction of GDP that we can dedicate to AI training.
But at some point, you can't keep this 4x-a-year trend going.
And after that point, then it has to just come from new ideas.
Here's a new way we could train a model.
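As a rough sketch of the compounding arithmetic behind those figures: the per-year multiplier (somewhere around 3.5x to 4x) is the speakers' rough estimate, not a precise constant, so both are shown below.

```python
# Compounding arithmetic for frontier training compute growth.
# The per-year multiplier (~3.5x-4x) is the speakers' rough estimate, not a measured constant.
for growth_per_year in (3.5, 4.0):
    over_4_years = growth_per_year ** 4
    over_10_years = growth_per_year ** 10
    print(f"{growth_per_year}x/year -> ~{over_4_years:,.0f}x over 4 years, "
          f"~{over_10_years:,.0f}x over a decade")
# 3.5x/year -> ~150x over 4 years,    ~276,000x over a decade
# 4.0x/year -> ~256x over 4 years, ~1,049,000x over a decade
```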
And by the way, when I was writing that comparative advantage
post, and I was thinking about AI specific aggregate
constraints, resource constraints,
that's what I was thinking of, actually.
That expansion of compute has to slow down.
But I don't know how much that matters.
That's for training, and yeah, the labor, the inference, will also use the same
bucket of compute. It is the case that for the amount of compute it costs to train a system,
if you, like, set up a cluster to train a system, you can usually run a hundred thousand copies
of that model at typical token speeds on that same cluster. That's still obviously not, like, billions.
But if we've got all this compute
to be training these huge systems in the future,
it would still allow us to sustain
a population of hundreds of millions,
if not billions, of AIs.
At that point, obviously, maybe
we'll still want more AIs.
What does a single AI mean in this instance?
Oh, like when you're talking to Claude,
it's like a single instance that's talking to you.
I see.
So instances.
Yeah, yeah.
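For a sense of where a number like "a hundred thousand copies" can come from, here's a back-of-the-envelope sketch using the standard approximations that training a dense model costs roughly 6·N·D FLOPs and inference costs roughly 2·N FLOPs per token. The parameter count, token count, training duration, and serving speed below are illustrative assumptions, not figures from the conversation.

```python
# Back-of-the-envelope: how many copies could the training cluster serve at once?
# Standard approximations: training ~ 6*N*D FLOPs, inference ~ 2*N FLOPs per token.
# Every specific number below is an illustrative assumption.
N = 1e12                          # model parameters (assumed)
D = 15e12                         # training tokens (assumed)
train_seconds = 90 * 24 * 3600    # assume a ~90-day training run
tokens_per_sec_per_copy = 30      # assumed serving speed per instance

cluster_flops_per_sec = 6 * N * D / train_seconds         # throughput implied by that run
flops_per_sec_per_copy = 2 * N * tokens_per_sec_per_copy
concurrent_copies = cluster_flops_per_sec / flops_per_sec_per_copy
print(f"~{concurrent_copies:,.0f} concurrent copies")     # ~193,000 with these assumptions
```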
So what's going to determine whether it's in a few years or?
Right now, we're basically riding the wave of this extra compute.
That's why AI is getting better every year, mostly.
In terms of the contribution of new algorithms, it's a smaller fraction of the progress that's
explained by that.
So if we've just got this rocket, how high will it take us?
And does it get us space or not?
If it doesn't, then we just have to rely on the algorithmic progress, which has been a sliver.
But you think it might get us to space?
Yeah.
I think there's a chance that, like, oh,
continual learning is also like, you know,
I had this whole theory about, oh, it's so hard,
and how do you slide it in?
And they're like, I fucking trained it to do this.
Like, whatever you're talking about here.
That leads into another thing that I've thought about,
which is how poor our track record for making predictions
about the future of
AI has been.
The first time you and I hung out, I don't know if you remember this, was with Leopold.
Yeah.
Oh, really?
Yeah.
I remember this.
It was at your old house, and Leopold was just pronouncing a whole bunch of pronouncements
from the couch.
Yeah.
And he released this big situational awareness thing.
How long ago was that?
A year and a half?
Yeah.
Yeah.
I would say that already most of the things he predicted have been invalidated or made irrelevant.
Really?
In the last year and a half.
And especially in terms of all the stuff about competition with China.
It turns out distillation was able to get them a whole lot of things that he never predicted.
It turns out that so many of the things, other than just the idea that AI would keep getting better,
which he predicts and a lot of people predict,
but then I feel like a lot of the specific predictions
about US capabilities and Chinese capabilities
and what would be the bottlenecks
and what would be the things that, you know,
here's how we can deal with China,
that's all been proven wrong since.
I think this is actually an interesting trend
in the history of science where like some of the scientists
who are the smartest in thinking about the progression
of the atom bomb or progression of physics
just had these like ideas about the only way we can sustain this
is if we have one world government,
I'm talking about after World War II.
There's no other way we can deal with this new technology.
I do think, relative to the technological predictions,
Leopold, I think the main way he's been wrong
is that it didn't take some breaking into the servers
in order to learn how o3 or something works.
It was just public, just being able to use the model.
You can talk to it and learn what it knows.
Just knowing a reasoning model works.
And you can use it and you see, oh, what is the latency?
How fast is it outputting tokens?
That will teach you how big the model is.
You learn a lot just from publicly using a model
and knowing a thing is possible.
He has been right in one big way, which is he identified
three key things that would be required to get us from GPT-4
to the AGI kind of thing,
which was being able to think, so test-time compute; onboarding.
Did you talk about test time compute in that document?
Yeah, yeah. It was like one of his three big unhobblings.
Then like onboarding in terms of the workplace, and then I think the final one was computer use.
Look, one out of three. And it was a big deal.
So I think he got some things right, some things wrong, but yeah.
And then what's your take on the model of automating
AI research as the path to AGI?
The METR uplift paper, contrary to expectations,
they found that whenever senior developers working
in repositories that they understood well used AI,
they were actually slowed down by 20%.
Yeah, I did see that.
Yeah.
Whereas they themselves thought that they were sped up 20%.
And so there's a bunch of theories.
I'm getting things done.
This goes back to your theory about the phones
are destroying us.
That is an update towards the idea
that AI is not on this trend to be this super useful assistant
that's already helping us make the process of training
AI much faster.
And this will just be this feedback loop and exponential.
I have other independent reasons.
I'm like, I don't know, I'm like 20% that we'll have some sort of intelligence explosion.
One of the other Leopold predictions was nationalization.
Is that something you could potentially foresee in the next few years?
I don't think it's politically plausible, especially given this administration.
I don't think it's desirable.
First, I think it would just drastically slow down AI progress
because, look, this is not 1945 America,
and also building an atom bomb is a way easier project
than building AGI.
But China's quasi-nationalizing most of its...
I mean, China doesn't control BYD's day-to-day decisions
about what to build, but then if China says,
do this, BYD does it, as does every Chinese company.
I mean, that's kind of the relationship
of American companies and the US government as well.
You think so?
I mean, somewhat.
Also, the big difference is, what
do we mean by nationalization?
There's one thing, which is like, there's
a party cadre who is in your company.
Exactly.
There's another, which is that each province is just
pouring a bunch of money into building their own competitor
to BYD in this potentially wasteful way.
That distributed competitive process seems like the opposite of nationalization to me.
When people imagine AGI nationalization, I don't think they're saying Montana will have
their AGI and Wyoming will have their AGI and they'll all compete against each other.
I think they imagine that all the labs will merge, which is actually the opposite of how
China does industrial policy.
But then you do think that the American government, basically, if it says do this, then, like, xAI and OpenAI will do it?
No, actually, I think in that way
obviously the Chinese system and the US system are different. Although it has been
interesting to see that whenever, I don't know, we've noticed the way that different
lab leaders have changed their tweets in the aftermath of the election. I mean
also, yeah, more bullish on open source. Right. And, I don't know, a lot of the things where, I think,
previously he said that AI will take jobs, how do we deal with this?
And then, didn't he recently say something at a panel where I think President Trump is correct that AI will like create jobs or something?
Where I don't think, in the long run, he believes this.
But the reason why humans should be excited about even their jobs being taken is just they'll be so rich that why do they even need it?
Yeah.
Much richer than they are now.
Right. Modulo this redistribution, slash, not fucking it over with some guild-like thing.
Yeah. You mentioned the atomic bomb and we also mentioned off camera that you don't think the nuke is a good comparison for what happens.
How does it play out when a lab figures out AGI? What then happens? Is there a huge advantage if one country has it first, or if one lab has it first, do they dominate?
I think it's less like the nuclear bomb, where there's a self-contained technology
that is so obviously relevant to specifically
this offensive capability.
And you can say, well, there's nuclear power as well.
But like, neither of those, like, nuclear power
is just like this very self-contained thing.
Whereas I think intelligence is much more
like the Industrial Revolution, where there's not
like this one machine that is the Industrial Revolution.
It is just this broader process of growth and automation and so forth.
So Brad DeLong's right and Robert Gordon is wrong.
Robert Gordon said there's only four things, it's just four big things.
Oh really?
And Brad DeLong is like, no, it's a process of discovering things.
Interesting.
What were Rob's four things again?
Oh, I mean electricity.
Test time compute.
Not kidding.
Test time compute, the internal combustion engine, steam power, and then,
what was the fourth one?
Maybe plumbing, I think, was the fourth one.
Yeah.
Or even in that case, maybe that actually is,
maybe that's closer to how I think about it.
But then you needed so many complementary innovations.
So the internal combustion engine, I think, was invented in the 1870s.
Drake finds the oil well in Pennsylvania in the 1850s.
Obviously, it takes a bunch of complementary innovations
before these two things can merge.
Before, they're just using the oil for the kerosene
to light lamps.
But regardless, so if it's this kind of process,
it was the case that many countries
achieved industrialization before other countries.
And China was dismembered and went
through a terrible century because the Qing dynasty
wasn't up to date on the
industrialization stuff, and much smaller countries were able to dominate it.
But that is not like, we developed the atom bomb first and now you have a decisive advantage.
I think it's because it was us. If that had been Nazi Germany or the Soviet Union, it would have gone differently.
Yeah, how do you see the US-China competition playing out in terms of AI?
I genuinely don't know.
Yeah, I think it's possible that there could be something positive-sum.
Like, it's not like a nuclear weapon, where
both countries can just adopt AI.
And there is this dynamic where if you have higher inference capacity,
not only can you deploy AIs faster and have more economic value generated,
but you can have a single model
learn from the experience of all of its copies, and you can
have this basically broadly deployed intelligence
explosion. So, I think it really matters to get to
that discontinuity first. I don't have a sense of
at what point, if ever, is it treated like the main
geopolitical issue that countries are prioritizing.
I also, from the misalignment stuff, the main thing
I worry about is the AI playing us off
each other rather than us playing the AIs off each other.
You mean AI just telling us all to hate each other
the way Russian trolls currently tell us all to hate each other?
More so the way that the East India Company
was able to play different provinces in India off
of each other.
And ultimately, at some point, you realize, OK,
they control India.
And so you could have a scenario like, OK,
think about the conquistadors, right?
A couple hundred people show up to your border,
and they take over an empire of 10 million people.
And this happened not once.
It happened two to three times.
OK, so why was this possible?
Well, it's that the Aztecs, the Incas,
weren't communicating with each other.
They didn't even know the other empire existed. Whereas Cortes learns from the subjugation of Cuba,
and then he takes over the Aztecs.
Pizarro learns from the subjugation of the Aztecs
and takes over the Incas.
And so they're able to, like, just learn about,
okay, you take the Emperor hostage,
and then this is the strategy you employ, et cetera.
It's interesting, the Aztecs and Incas never met each other,
and that worked both times, sort of.
Yeah.
That's interesting that these totally disconnected civilizations
both had the similar vulnerabilities.
Yeah. It was literally the exact same playbook.
The crucial thing that went wrong is that
at this point in the 1500s,
we actually don't have modern guns, we have arquebuses,
but the main advantage that the Spanish had was
they had horses and then secondly they had armor
and it was just incredibly, you'd have thousands of warriors.
If you're fighting on an open plain,
the horses with armor will just trounce all of them.
Eventually, the Incas had this rebellion,
and they learned they can roll rocks down hills,
and the rebellion was moderately successful,
even though it was eventually, we know what happened.
You could say that the Spanish on their side had guns, germs,
and steel.
So how could this have turned out differently?
If the Aztecs had learned this and then had like told the Incas, I mean they weren't in
contact, but if there's some way for them to communicate like, here's how you take down
a horse.
I think what I would like to see happen between the US and China basically is like the
equivalent of the red telephone during the Cold War, where you can communicate, look, we noticed
this, especially when AI becomes more integrated with the economy and government, etc.
Like we noticed this crazy attempt to do some sabotage,
like be aware that this is a thing they can do, like train against it, etc.
Right. AI is trying to trick you into doing this. Watch out.
Yeah, exactly. Though the required level of trust, I'm not sure is plausible,
but that's the optimal thing that would happen.
At the lab level, do you think it's multipolar or is there consolidation,
and who's your bet to win?
I've been surprised. So you would expect over time, as the cost of competing at the frontier has
increased, you would expect there to be fewer players at the frontier. This is what we've seen
in semiconductor companies, right, that it gets more expensive over time. There's now maybe one
company that's at the frontier in terms of like global semiconductor manufacturing. We've seen
the opposite trend in AI where there's like more competitors today than there were a year ago,
even though it's gotten more expensive. I don't know where the equilibrium here is because the cost of training these models
is still much less than the value they generate.
So I think it would still make sense for somebody new to come into this field and 10x the amount of investment.
Do you have a take on where the equilibrium is?
Oh, well, I mean, it has to do with entry barriers.
Basically, it's all about entry barriers.
It's the question of, if I just decide to plunk down
this amount of money.
So if the only entry barrier is fixed costs,
I'd say we have such a good system for just loaning people
money that that's not going to be that big a deal.
But if there's entry barriers that
have to do with if you make the best AI, it gets even better.
So why enter?
That's the big question.
I don't actually know the answer to that question.
There's a broad question we ask in general, which is like, what are the network effects here?
is like, what are the network effects here?
And what is the usability?
And it seems often to be brand.
Yeah, I mean, I'm not sure it's a network effect,
but brand. Like, OpenAI, ChatGPT, is the Kleenex of AI,
in that Kleenex is actually called a tissue,
but we call it a Kleenex because there was a company called Kleenex.
Where are you going with this? Are you banking on Grok doing anything?
Oh, no.
Well, what's another example? Xerox.
You Xerox this thing. Xerox is just one company that makes a copier, right?
Not even the biggest, but everybody knows that it's Xeroxing.
And so ChatGPT gets massive rents from the fact that everyone just says,
I'll use AI. What's an AI? ChatGPT, I'll use it.
And so like brand is the most important thing.
But I think that's mostly due to the fact that this key capability of learning on the job has not been unlocked.
And so, I don't think-
And I was saying that could be a technological network effect that could supersede the brand effect possibly.
Yeah. And I think that that will have to be unlocked before most
of the economic value of these models can be unlocked.
And so by the point they're generating hundreds
of billions of dollars a year, or maybe
trillions of dollars a year, they
will have had to come up with this thing, which
will be a bigger advantage, in my opinion, than brand network
effects.
Is Zuck throwing away money, wasting it on hiring all these guys?
No, people have been saying like, look, the messaging could have been better or whatever.
I mean, I think it's just much better to have worse messaging or something,
but then not sleepwalk towards losing.
Also, if you just think about it: if you pay an employee $100 million,
and they're a great AI researcher, and they make your training or your inference 1% more efficient,
Zuck is spending on the order of $80 billion a year on compute.
That's made 1% more efficient.
That's easily worth $100 million.
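The arithmetic there is simple; as a minimal sketch, using the speaker's rough $80 billion and 1% figures:

```python
# Rough ROI check on a $100M researcher who makes compute use 1% more efficient.
annual_compute_spend = 80e9    # speaker's rough figure for Meta's yearly compute spend
efficiency_gain = 0.01         # a 1% improvement to training/inference efficiency
researcher_pay = 100e6

annual_savings = annual_compute_spend * efficiency_gain
print(f"${annual_savings/1e6:,.0f}M saved per year vs ${researcher_pay/1e6:,.0f}M pay")
# -> $800M saved per year vs $100M pay
```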
And if we as podcasters encourage one researcher to join Meta.
This has been a phenomenal conversation. We've got more great conversations coming your way.
See you next time.