Limitless Podcast - Dwarkesh Patel: The Scaling Era of AI is Here
Episode Date: July 14, 2025Renowned podcaster Dwarkesh Patel joins us to explore the "scaling era" of AI, characterized by rapid growth and significant compute investments. He discusses the impact of neural networks an...d transformers, the implications of scaling laws, and potential constraints as we approach artificial general intelligence (AGI).Patel shares his skepticism about whether current AI models exhibit true intelligence, addresses ethical concerns around AI safety, and emphasizes the responsibilities of developers. The conversation touches on geopolitical dynamics, with major players like the U.S. and China shaping the future.Concluding with a cautious outlook, Patel suggests a 60% chance of AGI by 2040 and highlights the importance of navigating AI complexities thoughtfully.------💫 LIMITLESS | SUBSCRIBE & FOLLOWhttps://limitless.bankless.com/https://x.com/LimitlessFT------TIMESTAMPS00:00 Start02:15 Introduction to Dwarkesh Patel05:45 Defining the Scaling Era12:00 Compute20:00 Neural Networks and Human Intelligence28:30 Reasoning Limits of Current AI35:40 Implications of Energy Shortages42:10 Human-AI Relationships51:00 AI Alignment and Moral Responsibility01:02:15 Geopolitical Considerations01:12:30 AI Accountability and Governance01:20:00 The Road Ahead for AI01:30:00 Closing Remarks------RESOURCESDwarkesh: https://x.com/dwarkesh_spThe Scaling Era: An Oral History of AI, 2019–2025:https://www.amazon.com/Scaling-Era-Oral-History-2019-2025/dp/1953953557------Not financial or tax advice. See our investment disclosures here:https://www.bankless.com/disclosures
Transcript
Discussion (0)
Dorcasht Patel, we are big fans. It's an honor to have you.
Thank you so much for having me on.
Okay, so you have a book out. It's called The Scaling Era, an oral history of AI from 2019 to 2025.
These are some key dates here. This is really a story of how AI emerged.
And it seemed to have exploded on people's radar over the past five years.
And everyone in the world, it feels like, is trying to figure out what just happened and what is about to happen.
And I feel like for this story, we should start at the beginning, as your book does.
What is the scaling era of AI and whenabouts did a start?
What were the key milestones?
So I think the undertow story about everybody's, of course, been hearing more and more about AI.
The undertold story is that the big contributor to these AI models getting better over time
has been the fact that we are throwing exponentially more compute into trading frontier systems every year.
So by some estimates, we spend 4X every single year over the last decade training the frontier system than the one before it.
And that just means that we're spending hundreds of thousands of times more compute than the systems of the early 2010s.
Of course, we've also had algorithmic breakthroughs in the meantime.
2018, we had the Transformer.
Since then, obviously, many companies have made small improvements here and there.
But the overwhelming fact that we're spending already hundreds of billions of dollars in building,
up the infrastructure, the data centers, the chips for these models. And this picture is only
going to intensify if this exponential keeps going, 4x a year, over the next two years,
is something that is on the minds of the CFOs of the big hyperscalers and the people
planning the expenditures and training going forward, but is not as common in the conversation
around where AI is headed. So what do you feel like people should know about this? Like what
What is the scaling era?
There have been other eras maybe of AI or compute,
but what's special about the scaling era?
People started noticing.
Well, first of all, in 2012,
there's this, Elias Escobar and others,
started using neural networks in order to categorize images.
And just noticing that instead of doing something hand-coded,
you can get a lot of juice out of just neural networks,
black boxes.
You just train them to identify what thing is like why.
and then people started playing around these neural networks more,
using them for different kinds of applications.
And then the question became,
we're noticing that these models get better
if you throw more data at them and you throw more compute at them.
How can we shove as much compute into these models as possible?
And the solution ended up being obviously internet text.
So you need an architecture, which is amenable,
to the trillions of tokens that have been written over the last few decades
and put up on the internet.
And we had this happy coincidence
of the kinds of architectures
that are amenable
to this kind of training
with the GPUs
that were originally made for gaming.
We've had decades of internet text
being compiled
and Ilius, that's actually called it
the fossil fuel of AI.
You know, this reservoir
that we can call upon
to train these minds,
which are like,
you know,
they're fitting the mold
of human thought
because they're trading
on trillions of tokens
of human thought.
And so then it's just been a question
of making these models bigger,
of using this data that we're getting from internet text
to further keep training them.
And over the last year, as you know, the last six months,
the new paradigm has been,
not only are we going to pre-train on all this internet text,
we're going to see if we can have them solve math puzzles,
coding puzzles,
and through this, give them reasoning capabilities.
The kind of thing, by the way,
I mean, I have some skepticism around AGI
just around the corner, which we'll get into.
But just the fact that we now have machines,
which can reason.
Like, you know, you can, like, ask a question to a machine,
and it'll go away for a long time.
It'll, like, think about it.
And then, like, it'll come back to you with a smart answer.
And we just sort of take it for granted.
But obviously, we also know that they're extremely good at coding,
especially.
I don't know if you actually got a chance to play around with Claudecode or cursor or something.
But it's a wild experience to design,
explain at a high level.
I want an application to does X.
15 minutes later, there's like 10 files of code
and the application is built.
That's where we stand.
I have takes on how much this can continue.
The other important dynamic, I'll add in my monologue here,
but the other important dynamic is that
if we're going to be living in the scaling era,
you can't continue exponentials forever.
Certainly not exponentials that are 4x a year forever.
And so right now we're approaching a point
where within by 2028, at most by 2030,
we will literally run out of the energy we need to keep training these frontier systems,
the capacity at the leading edge nodes which manufacture the chips that go into the dyes,
which go into these GPUs, even the raw fraction of GDP that will have to be used to train
frontier systems.
So we have a couple more years left of the scaling era.
And the big question is, will we get to AGI before them?
I mean, that's kind of a key insight of your book that, like, we're in the middle of the scaling
era. I guess we're like, you know, six years in or so. And we're not quite sure. It's like,
like the protagonist in the middle of the story. Like, we don't know exactly which way things are
going to go. But I want you to maybe Dworkesh help folks get an intuition for why scaling
in this way even works. Because I'll tell you, like, for me and for most people, I mean,
our experience with these revolutionary AI models probably started in 2022 with Chad GPT3
and then Chad DBD4 and seeing all the progress, all these AI models,
And it just seems really unintuitive that if you take a certain amount of compute
and you take a certain amount of data out pops AI, out pops intelligence.
Could you help us, like, get an intuition for this magic?
Like, how does the scaling law even work?
Compute plus data equals intelligence?
Is that really all it is?
To be honest, I've asked so many AI researchers this exact question on my podcast.
And I could tell you some potential theories.
of why it might work, I don't think we understand.
You know what? I'll just say that. I don't think we understand.
We don't understand how this works. We know it works, but we don't understand how it works.
We have evidence from actually, of all things, primatology of what could be going on here,
or at least like why similar patterns in other parts of the world. So what I found really
interesting, there's this research by this researcher Shazana Herculana Huzel, which shows that
If you look at how the number of neurons in the brain of a rat, different kinds of rat species increases,
as the weight of their brains increase from species to species,
there's this very sublinear pattern.
So if their brain size doubles, the neuron count will not double between different rat species.
And there's other animals where there's other kinds of families of species for which is this true.
The two interesting exceptions to this rule
where there is actually a linear increase
in neuron count and brain size
is one, certain kinds of birds.
So birds are actually very smart
given the size of their brains
and primates.
So the theory of what happened with humans
is that we unlocked an architecture
that was very scalable.
So the way people talk about transformers
being more scalable than LSTMs,
the thing that preceded them in 2018.
We unlocked this architecture
as it's very scalable.
And then we were in an evolutionary niche millions of years ago, which rewarded marginal increases in intelligence.
If you get slightly smarter, yes, the brain costs more energy, but you can save energy in terms of not having to.
You can cook food so you don't have to spend much on digestion.
You can find a game.
You can find different ways of foraging.
Birds were not able to find this evolutionary niche, which rewarded the incremental increases in intelligence because if your brain gets too heavy as a bird, you're not going to fly.
So it was this happy coincidence of these two things.
Now, why is it the case that the fact that our brains could get bigger resulted in us becoming as smart as we are?
We still don't know.
And there's many different dissimilarities between AIs and humans.
While our brains are quite big, we don't need to be trained.
A human from the age there's zero to 18 is not seeing within an order of magnitude of the amount of information these LLMs are trained on.
So LLMs are extremely data inefficient.
They need a lot more data,
but the pattern of scaling, I think we see in many different places.
So is that a fair kind of analog?
This analog has always made sense to me.
It's just like in transformers or like neurons, you know,
AI models are sort of like the human brain.
Evolutionary pressures are like gradient descent,
reward algorithms, and out pops human intelligence.
We don't really understand that.
We also don't understand AI.
intelligence, but it's basically the same principle at work.
I think it's a super fascinating, but also very thorny question, because is gradient intelligence
like evolution? Well, yes, in one sense. But also, when we do gradient descent on these models,
we start off with the weights, and then we're, you know, it's like learning, how does chemistry
work, how does coding work, how does math work? And that's actually more similar to lifetime learning,
which is to say that by the time you're already born
to the time you turn 18 or 25,
the things you learn.
And that's not evolution.
Evolution designed the system or the brain
by which you can do that learning,
but the lifetime learning itself is not evolution.
And so there's also this interesting question of,
yeah, is training more like evolution,
in which case, actually, we might be very far from AI
because the amount of compute that's been spent
over the course of evolution to discover the human brain,
you know, could be like 10 to the 40 flops.
There's an estimate, you know, whatever.
I'm sure it will bore you to discover, talk about how these estimates are derived,
but just like how much versus is it like a single lifetime,
like going from the age of zero to the age of 18,
which is closer to, I think, 10 to the 24 flops,
which is actually less than compute than we use to train frontier systems.
All right.
Anyways, we'll get back to more relevant questions.
Well, here's kind of a big picture question as well.
It's like I'm constantly fascinating with the metaphysical types of discussions
that some AI researchers kind of take.
A lot of AI researchers will talk in terms of
when they describe what they're making,
we're making God.
Why do they say things like that?
What does this talk of making God?
What does that mean?
Is it just the idea that scaling laws don't cease?
And if we can scale intelligence to AGI,
then there's no reason we can't scale far beyond that
and create some sort of a godlike entity.
And essentially, that's what the quest is.
we're making artificial superintelligence.
We're making a God.
We're making God.
I think people focus too much on when the,
I think this God discussion focuses too much on the hypothetical intelligence of a single copy of an AI.
I do believe in the notion of a superintelligence,
which is not just functionally,
which is not just like, oh, it knows a lot of things,
but is actually qualitatively different than human society.
But the reason.
is not because I think it's so powerful
that any one individual copy of AI will be that smart,
but because of the collective advantages
that AIs will have, which have nothing to do
with their raw intelligence,
but rather the fact that these models will be digital,
or they already are digital,
but eventually they'll be as smart as humans at least.
But unlike humans, because of our biological constraints,
these models can be copied.
You know, if there's a model that has learned a lot
about a specific domain, you can make infinite copies of it,
and now you have infinite copies of Jeff Dean
or Ilya Satskiver or Elon Musk
or any skilled person you can think of.
They can be merged.
So the knowledge that each copy is learning
can be amalgamated back into the model
and then back to all the copies.
They can be distilled.
They can run at superhuman speeds.
These collective advantages,
also they can communicate in latent space.
They're immortal, I mean, you know, as another example.
Yes, exactly.
No, I mean, that's actually, tell me if I'm rabbit-holling too much, but like one really interesting question will come about is how do we prosecute AIs?
Because the way we prosecute humans is that we will throw you in jail if you commit a crime.
But if there's trillions of copies or thousands of copies of an AI model, if a copy of an AI model, if an instance of an AI model does something bad, what do you do?
Does the whole model have to get?
And how do you even punish a model, right?
Like, does it care about its weights being squandered?
Yeah, there's all kinds of questions that arise because of the nature of what AIs are.
And also who is liable for that, right?
Like, is it the toolmaker?
Is it the person using the tool?
Who is responsible for these things?
There's one topic that I do want to come to here about scaling laws.
And it's, at what time did we realize that scaling laws were going to work?
Because there were a lot of theses early in the days, early 2000s about AI, how we were going to build better models.
Eventually, we got to the Transformer.
But at what point did researchers and engineers start to realize that, hey, this is the correct
idea.
We should start throwing lots of money and resources towards this versus other ideas that were just
kind of theoretical research ideas, but never really took off.
We kind of saw this with GPT 2 to 3, where there's this huge improvement.
A lot of resources went into it.
Was there a specific moment in time or a specific breakthrough that led to the start of these
scaling laws?
I think it's been a slow process of more and more people appreciating the, this like nature
of the overwhelming role of compute in driving forward progress.
in 2018, I believe,
Dario Amadei wrote a memo that was secret at,
while he was at Open AI, now he's the CEO of Anthropic,
but while he's opening eye,
he's subsequently revealed a lot of my podcasts,
he wrote this memo where he,
the title of the memo was called Big Blabo Compute.
And it says basically what you expected to say,
which is that, like, yes,
there's ways you can mess up the process of training.
You have the wrong kinds of data or initializations.
but fundamentally, AGI is just a big blob of compute.
And then we've gotten, over the subsequent years,
there was more empirical evidence.
So a big update, I think it was 2021,
but correct me,
somebody who definitely will correct me in the comments,
I'm wrong.
There were these,
there's been multiple papers of these scaling laws
where you can show that the loss of the model
on the objective of predicting the next token
goes down very predictably,
almost to like multiple decimal places of correctness
based on how much more compute you throw into its models.
And the compute itself is a function of the amount of data you use
and how big the models, how many parameters it has.
And so that was an incredibly strong evidence back in the day a couple years ago
because then you could say, well, okay,
if you can really has this incredibly low loss of predicting the next token
in all human output, including scientific people,
papers, including GitHub repositories, then doesn't it mean it has actually had to learn coding
and science and all these skills in order to make those predictions, which actually ended up
being true. And it was something people, you know, we take it for granted now, but it actually,
even as of a year or two ago, people were really even denying that premise. But some people,
a couple years ago just thought about it. And like, yeah, actually, that would mean that it's
learned the skills. And that's crazy that we just have this strong empirical pattern that tells
us exactly what we need to do in order to learn these skills.
And it creates this weird perception right where, like, very early on, and so to this day,
it really is just a token predictor, right?
Like, we're just predicting the next word in the sentence, but somewhere along the lines,
it actually creates this perception of intelligence.
So I guess we covered the early historical context.
I kind of want to bring the listeners up to today, where we are currently where the
scaling loss have brought us in the year 2025.
So can you kind of outline where we've gotten to from early days of GPTs to, now we have
GPT4, we have Gemini Ultra, we have Club, which you mentioned earlier. We had the breakthrough of
reasoning. So what can leading frontier models do today? So there's what they can do, and then there's
the question of what methods seem to be working. I guess we can start at what they seem to be able to do.
They've shown to be remarkably useful at coding and not just at answering direct questions about
how does this line of code work or something, but genuinely just autonomously working for 30 minutes or an
hour, doing the task, it would take a front-end developer a whole day to do. And you can just ask
them at a high level, do this kind of thing, and they can go ahead and do it. Obviously, if you
played around it with, you know that they're extremely useful assistance in terms of research,
in terms of even therapist, whatever other use cases. On the question of, well, what training
that seemed to be working? We do seem to be getting evidence that pre-trading is plateauing, which is to
say that we had GPD 4.5, which was just following this old mold of make the model, make the model,
bigger, but it's fundamentally doing the same thing of next token prediction. And apparently
it didn't pass muster. The open AI had to deprecate it because there's this dynamic where the
bigger of the model is, the more it costs not only to train, but also to serve, right? Because every time
you serve a user, you're having to run the whole model, which is going, so, but does seem to be
working is RL, which is this process of not just training them on existing tokens on the internet,
but having the model itself try to answer math encoding problems. And finally, we got to the point
where the model is smart enough to get it right some of the time,
and so you can give it some reward,
and then it can saturate these tough reasoning problems.
And then what was the breakthrough with reasoning for the people who aren't familiar?
What made reasoning so special that we hadn't discovered before,
and what did that kind of unlock for models that we use today?
I'm honestly not sure.
I mean, we had, GPD4 came out a little over two years ago,
and then it was two years after GPD4 came out that 01 came out,
which was the original reasoning breakthrough.
I think last November.
And subsequently, a couple months later, DeepSeek showed in their R1 paper.
So deep seek open source to their research.
And they explained exactly how their algorithm worked.
And it wasn't that complicated.
It was just like what you would expect, which is get some math problems, give for some
initial problems, tell the model exactly what the reasoning trace looks like, how you solve it,
just like write it out, and then have the model like try to do it raw on the remaining
problems. Now, I know it sounds incredibly arrogant to say, well, it wasn't that complicated. Why did it
take a few years? I think there's an interesting insight there of even things which you think will be
simple in terms of high-level description of how to solve the problem end up taking longer in terms of
haggling out the remaining engineering hurdles than you might naively assume. And that should update us
on how long it will take us to go through the remaining bottlenecks on the path to AGI. Maybe that
will be tougher than people imagine, especially the people who think we're only two to three years away.
But all this to say, yeah, I'm not sure why it took so long after GPD4 to get a model trained on a similar level capabilities that could then do reasoning.
And in terms of those abilities, the first answer you had to what can it do was coding.
And I hear that a lot of the time when I talk to a lot of people is that coding seems to be a really strong suit and a really huge unlock to using these models.
And I'm curious, why coding over general intelligence?
Is it because it's placed in a more confined box of parameters?
I know in the early days we had the AlphaGo and we had the AIs playing chess and they exceed,
they performed so well because they were kind of contained within this box of parameters that
was a little less open end to the general intelligence.
Is that the reason why coding is kind of at the frontier right now of the ability of these models?
There's two different hypotheses.
One is based around this idea called Moravax Paradox.
And this was an idea, by the way, is one super interesting figure.
Actually, I should have mentioned him earlier.
One super interesting figure in the history of scaling is Hans Mourouac,
who I think in the 90s predicts that 2028 will be the year that we will get to AGI.
And the way he predicts this, which is like, you know, we'll see what happens,
but like not that far off the money as far as I'm concerned.
The way he predicts this is he just looks at the growth in computing power year over year
and then looks at how much compute he estimated the human brain to be to require.
and then just like, okay, we'll have computers as powerful as a human brain by 2028,
which is like at once a deceptively simple argument,
but also ended up being incredibly accurate and like worked, right?
I might have a fact that it was 2028, but it was within that, like, within something
you would consider a reasonable guess given what we know now.
Sorry, anyway, so the Moravex paradox is this idea that computers seemed in AI get better first
at the skills which humans are the worst at,
or at least there's a huge variation in the human repertoire.
So we think of coding as incredibly hard, right?
We think this is like the top 1% of people will be excellent coders.
We also think of reasoning is very hard, right?
So if you read Aristotle, he says,
the thing which makes human special,
which distinguishes us from animals, is reasoning.
And these models aren't that useful yet at almost anything.
The one thing they can do is reasoning.
So how do we explain this pattern?
And Morvac's answer is that evolution has spent billions of years optimizing us to do things we take for granted.
Move around this room, right?
I can pick up this can't a can of Coke, move it around, drink from it.
And that we can't even get robots to do at all yet.
And in fact, it's so ingrained in us by evolution that there's no human, or at least humans who don't have disabilities, we'll all be able to do this.
And so we just take it for granted that this isn't easy.
to do. But in fact, it's evidence of how long evolution has spent getting humans up to this point.
Whereas reasoning, logic, all of these skills have only been optimized by evolution over the
course of the last few million years. So there's been a thousandfold less evolutionary pressure
towards coding than towards just basic locomotion. And this has actually been very accurate
in predicting what kinds of progress we see, even before we got deep learning, right? Like,
in the 40s, when we got our first computers,
the first thing that we could use them to do
is long calculations for ballistic trajectories
at the time for World War II.
Humans suck at long calculations by hand.
And anyways, so that's the explanation for coding,
which seems hard for humans,
is the first thing that went to AIs.
Now, there's another theory,
which is that this is actually totally wrong.
It has nothing to do with this seeming paradox
of how long evolution is optimized us for,
and everything to do with the availability,
of data. So we have GitHub, this repository of all of human code,
all open source code written in all these different languages, trillions and trillions of tokens.
We don't have an analogous thing for robotics. We don't have this pre-training corpus.
And that explains why code has made so much more progress than robotics.
That's fascinating because if there's one thing that I could list that we'd want AI to be good at,
probably coding software is number one on that list. Because if you have a Turing Complete intelligence
that can create Turing Complete Software, is there anything you can't create once you have that?
Also, like the idea of Morvac's paradox, I guess that sort of implies a certain complementarianism
with humanity. So if robots can do things that robots can do really well and can't do the things
humans can do well, well, perhaps there's a place for us in this world. And that's fantastic.
news. It also maybe implies that humans have kind of scratched the surface on reasoning potential. I mean,
if we've only had a couple of million years of evolution and we haven't had the data set to actually
get really good at reasoning, it seems like there'd be a massive amount of upside, unexplored
territory, like so much more intelligence that nature could actually contain inside of reasoning.
I mean, are these some of the implications of these ideas?
Yeah, I know. I mean, that's a great insight.
another really interesting insight is that
the more variation there is in a skill in humans,
the better and faster that AIs will get at it.
Because, like, coding is a kind of thing
where, like, one person of humans are really good at it.
The rest of us will, like, if we try to learn it,
we'd be okay at it or something, right?
And because evolution has spent so little time optimizing us,
there's this room for variation
where, like, the optimization hasn't happened uniformly,
or it hasn't been valuable enough to sort of saturate,
the human gene pool for this skill.
I think you made an earlier point
that I thought was really interesting
I wanted to address. Can you remind me of the first thing you said?
Is it the complementarianism?
Yes.
So you can take it as a positive future.
You can take it as a negative future in the sense that,
well, what is the complementary skills we're providing
were good meat robots?
Yeah, the low-skilled labor of the situation.
They can do all the thinking and planning.
one dark future,
one dark vision of the future
is we'll get those meta glasses
and the AI speaking into our ear
and it'll tell us to go put this
brick over there
so that the next data center couldn't be built
because the AI's got the plan for everything
it's got the better design for the ship and everything.
You just need to move things around for it
and that's what human labor looks like
like until robotics is solved.
So yeah, it depends on how you go.
On the other hand, you'll get paid a lot
because it's worth a lot to move those bricks
We're building AGI here.
But yeah, it depends on how to come out on that question.
Well, there seems to be something to that idea,
going back to the idea of the massive amount of human variation.
I mean, we have just in the past month or so,
we have news of like meta hiring AI researchers
for like $100 million signing bonuses.
Okay?
What does the average software engineer like make
versus what does an AI researcher make
and kind of the top of the market, right?
Which is got to imply, obviously,
there's some things going on with demand and supply,
but also it does also seem to imply that there's massive variation
the quality of a software engineer.
And if AIs can get to that quality, well, what does that unlock?
Yeah.
So, okay.
Yeah, so I guess we have like coding down right now.
Like another question, though, is like, what can't AIs do today?
And how would you characterize that?
Like, what are the things they just don't do well?
So I've been interviewing people on my podcast who have very different timelines
from a roll to AGI.
I have had people on who think it's two years away
and some who think it's 20 years away.
And the experience of building AI tools for myself, actually,
has been the most insight driving,
or maybe research I've done on the question of when AI is coming.
More than the guest interviews.
Yeah, because you just, I have had,
I've probably spent on the order of 100 hours
trying to build these little tools.
The kinds I'm sure you've also tried to build
of like rewrite auto-ternet transcripts for me
to make them sound the rewritten
the way a human would write them.
Find clips for me to tweet out,
write essays with me,
co-write them passage by passage,
these kinds of things.
And what I found is that it's actually very hard
to get human-like labor on these models,
even for tasks like these,
which are should be death center
in the repertoire of these models, right?
Their short horizon,
they're language in, language out.
They're not contingent on understanding
some, like, you know,
thing I said like a month ago.
This is just like, this is the task.
And I was thinking about why,
is the case that I still haven't been able to automate these basic language tasks. Why do I still
have a human work on these things? And I think the key reason that you can't automate even these
simple tasks is because the models currently lack the ability to do on the job training. So if you
hire a human for the first six months, for the first three months, they're not going to be that useful,
even if they're very smart, because they haven't built up the context, they haven't practiced the
skills, they don't understand how the business works.
makes humans valuable is not that mainly the raw intellect, intellect obelicity matters, but it's not
mainly that. It's their ability to interrogate their own failures in this really dynamic,
organic way, to pick up small efficiencies and improvements as they practice the task, and to
build up this context as they work within a domain. And so sometimes people wonder, look,
if you look at the revenue of OpenAI, the annual recurring revenue, it's on the order of $10 billion.
Coals makes more money than that. McDonald's makes more money on that. Right. So why,
Why is it that if they've got AGI,
they're, you know, like Fortune 500 isn't reorganizing their workflows
as to, you know, use open AI models at every layer of the stack.
My answer, sometimes people say it was because people are too stodgy.
The management of these companies is like not moving fast enough on AI.
That could be part of it.
I think mostly it's not that.
I think mostly it genuinely is very hard to get human-like labor out of these models.
Because you can't, so you're stuck with the capabilities you get out of the model
out of the box.
So they might be five out of ten
at rewriting the transcript for you.
But if you don't like how it turned out,
if you have feedback for it,
if you want to keep teaching it over time,
once the session ends,
the model, like,
everything that knows about you has gone away.
You had to restart again.
It's like working with an amnesiac employee.
You had to restart again.
Every day is the first day of employment, basically?
Yeah, exactly.
It's a groundhog day for them every day.
Or every couple of hours, in fact.
And that makes you very hard for them
to be that useful as an employee, right?
not really an employee at that point.
This, I think, not only is a key bottleneck to the value of these models, because human labor
is worth a lot, right?
Like, $60 trillion in the world is paid to wages every year.
If these model companies are making under order of $10 billion a year, that's a big way to
AGI and what explains that gap, what are the bottlenecks?
I think a big one is this continual learning thing.
And I don't see an easy way that that just gets solved within these models.
There's no, like, with reasoning, you could say, oh, it's like, treat it on math and code
problems and then I'll get reasoning and that worked. I don't think there's something super obvious
there for how do you get this online learning, this on-the-job training working for these models.
Okay, can we talk about that? Go a little bit deeper on that concept. So this is basically one of
the concepts you wrote in your recent post. AI is not right around the corner. Even though you're
an AI optimist, I would say, and overall an AI accelerationist, you were saying it's not right
around the corner. You're saying the ability to replace human labor is a ways out.
Some are not forever out, but I think you said, you know, somewhere around 2032, if you had to
guess on when the estimate was. And the reason you gave is because AIs can't learn on the job,
but it's not clear to me why they can't. Is it just because the context window isn't large
enough? Is it just because they can't input all of the different data sets and data points that
humans can? Is it because they don't have stateful memory the way a human employee? Because if
these things, all of these do seem like solvable problems. And maybe that's what you're saying.
They are solvable problems. They're just a little bit longer than some people think they are.
I think it's like in some deep sense of solvable problem because like eventually we will build
AGI. And to build AGI, we will have had to solve the problem. My point is that the obvious
solutions you might imagine, for example, expanding the context window or having this,
like external memory, using systems like rag.
These are basically techniques we already have to,
it's called retrieval augmented generate.
Anyways, these kinds of retrieval augmented generation,
I don't think these will suffice.
And just to put a finer point,
and first of all, like, what is the problem?
The problem is exactly as you say that within the context window,
these models actually can learn on the job, right?
So if you talk to it for long enough,
it will get much better at understanding your needs
and what your exact problem is.
If you're using it for research for your podcast,
So we'll get a sense of like, oh, they're actually especially curious about these kinds of questions.
Let me focus on that.
It's actually very human-like in context, right?
The speed at which it learns, the task of knowledge it picks out.
The problem, of course, is the context length for even the best models only last a million or two million tokens.
That's at most, like an hour of conversation.
Now, then you might say, okay, well, why can't we just solve that by expanding the context window, right?
So context window has been expanding for the last few years.
Why can't we just continue that?
Yeah, like a billion token context window, something like.
this? So 2018 is when the transformer came out and the transformer has the attention mechanism.
The attention mechanism is inherently quadratic with the nature of the length of the sequence,
which is to say that if you go from, if you double, go from one million tokens to two million
tokens, it actually costs four times as much compute to process that two millionth token.
It's not just two to as much compute. So it gets super linearly more expensive as,
as you increase the context length.
And for the last seven years,
people have been trying to get around
this inherent quadratic nature of attention.
Of course, we don't know secretly
what the labs are working on,
but we have frontier companies like Deepseek,
which have open source their research,
and we can just see how their algorithms work.
And they found these constant time modifiers to attention,
which is to say that they,
there's like a, it'll still be quadratic,
but it'll be like one half times quadratic.
but the inherent like super linearness has not gone away.
And because of that, yeah, you might be able to increase it from one million tokens to two million tokens by finding another hack.
Like, make sure X-Rexper's one such things.
Layden attention is another such technique.
But, or KB Cash, right, there's many other things that have been discovered.
But people have not discovered, okay, how do you get around the fact that if you went to a billion,
it would be a billion squared as expensive in terms of compute to process that token.
And so I don't think you'll just get it by increasing the length of the cost.
context window, basically. That's fascinating. Yeah, I didn't realize that. Okay, so the other reason in your
post that AI is not right around the corner is because it can't do your taxes. And Dorcasch, I feel your pain,
man. Taxes are just like quite a pain in the ass. I think you were talking about this from the
context of like computer vision, computer use, that kind of thing. Right. So, I mean, I've seen
demos. I've seen some pretty interesting computer vision sort of demos that seem to be right around the
quarter. But like, what's the limiter on computer use for an AI?
There was an interesting blog post by this company called Mechanize, where they, they were explaining
why this is such a big problem. And I love the way they phrased it, which is that imagine if you
had to train a model in 1980, a large language model in 1980, and you could use all the compute
you wanted in 1980 somehow, but you didn't have, you had, you were only stuck with the data that
was available in the 1980s. Of course, before the internet, we
a widespread phenomenon. You couldn't train a modern LLM, even with all the computer in the world,
because the data wasn't available. And we're in a similar position with respect to computer use,
because there's not this corpus of collected videos, people using computers to different things
to access different applications and do white-collar work. Because of that, I think the big challenge
has been accumulating this kind of data. And to be clear, when I was saying the use case of
like do my taxes. You're effectively, you know, talking about an AI having the ability to just like,
you know, navigate to files around your computer, you know, log in to various websites,
to download your paystups or whatever, and then to go to like turbo tax or something and like input
it all into some software and file it, right? Just on voice command or something like that. That's
basically doing my taxes. It should be capable of navigating UIs that it's less familiar with or that
come about organically within the context of it trying to solve a problem. So for example,
you know, I might have business deductions. It sees on my bank statement that I've spent $1,000
on Amazon. It sees like, oh, he bought a camera. So I think that's probably a business expense
for his podcast. He bought an Airbnb over a weekend in the cabins of whatever, in the woods
of whatever. That probably wasn't a business expense. Although maybe it's, if it's a sort of like
a gray, if it was willing to go in the gray area, maybe it's a right model.
too much. Yeah, yeah, yeah. Do the gray area stuff. I was researching. But anyway, so
including all of that, including emailing people for invoices and haggling with them,
it would be like a sort of week-long task to do my taxes, right? You'd have to, there's a lot
of work involved that's not just like, do this skill, this skill, this skill, but rather of having
a sort of like plan of action and then breaking task apart, dealing with new information, new emails,
new messages, consulting with me about questions, et cetera.
Yeah, to be clear on this use case, too, even though your post is titled like, you know,
AI is not right around the corner, you still think this ability to file your taxes,
that's like a 2020, 2028 thing, right?
I mean, this is maybe not next year, but it's in a few years.
Right.
Which is, I think that was sort of people maybe write too much into the title and then
didn't read through the arguments.
That never happens on the internet.
Wow.
First time.
No, I think like I'm arguing against people who are like, you know, this will happen.
AGI is like two years away.
I do think the wider world, the markets, public perception, even people who are somewhat attending to AI, but aren't in this specific milieu that I'm talking to, are way underpricing AGI.
one reason, one thing I think they're underestimating is not only will we have millions of extra laborers, millions of extra workers, potentially billions within the course of the next decade, because then we will have potentially, I think likely we'll have AGI within the next decade.
But they'll have these advantages that human workers don't have, which is that, okay, a single model company, so suppose we solve continual learning, right?
And we solve computer use. So as far as white collar work goes, that might fundamentally be solved.
can have AIs, which can use, they're not just like a text box where you put into, you ask
questions in a chat pod and you get some response out. It's not that useful to just have a very smart
chat bot. You need it to be able to actually do real work and use real applications.
Suppose you have that solved because it acts like an employee, it's got continual learning,
it's got computer use. But it has another advantage as humans don't have, which is that
copies of this model are being deployed all through the economy and it's doing on the job training.
So copies are learning how to be an accountant, how to be a lawyer, how to be a coder, except because
it's an AI and it's digital, the model itself can amalgamate all this on-the-job training
from all these copies. So what does that mean? What means that even if there's no more software
progress after that point, which is to say that no more algorithms are discovered, there's not a
transformer plus plus that's discovered, just from the fact that this model is learning every single
skill in the economy, at least for white-collar work, you might just based on that alone,
have something that looks like an intelligence explosion. It would just be a broad.
oddly deployed intelligence explosion, but it will functionally become super intelligent just
from having human level capability of learning on the job.
Yeah, and it creates this like mesh network of intelligence that's shared among everyone.
Yeah, that's really fascinating thing.
So we've kind of, we're going to get there.
We're going to get to AGI.
It's going to be incredibly smart.
But what we've shared recently is just kind of this mixed bag where currently today,
it's pretty good at some things, but also not that great at others.
We're hiring humans to do jobs that we think AI should do, but it probably doesn't.
So the question I have for you is, is AI really that smart? Or is it just good at kind of
acing these particular benchmarks that we measure against? Apple, I mean, famously recently,
they had their paper, the illusion of thinking where it was kind of like, hey, AI is like pretty
good up to a point. But at a certain point, it just falls apart. And the inference is like maybe
it's not intelligence. Maybe it's just good at guessing. So I guess the question is, is they have
really that smart? It depends on who I'm talking to. I think some people overhype its capabilities.
I think some people are like, oh, it's already AGI. But it's like,
It's like a little hobbled little AGI where we're like sort of giving it a concussion every couple of hours and like it forgets everything.
We're like trapped it in a chatbot context.
But fundamentally the thing inside is like a very smart human.
I just agree with that perspective.
So if you if that's your perspective, I say like no, it's not that smart.
Your perspective is just statistical associations.
I say definitely smart.
Like it's like genuinely there's an intelligence there.
And the so one thing you could say to the person who thinks that it's already AGI is this.
Look, if a single.
human had as much stuff memorized as these models seem to have memorized, right? Which is to say that
they have all of internet texts, everything that human is written on the internet memorized,
they would potentially be discovering all kinds of connections and discoveries. They'd notice that
this thing which causes a migraine is associated with this kind of deficiency. So maybe if you
take the supplement, your migraines will be cured. There'd be just be this list of just like
trivial connections that lead to big discoveries all through the place. It's not.
not clear that there's been an unambiguous case of an AI just doing this by itself. So then why,
so that's something potentially to explain. Like, if they're so intelligent, why aren't they able to
use their disproportionate capabilities, their unique capabilities to come up with these discoveries?
I don't think there's actually a good answer to that question yet, except for the fact that they genuinely
aren't that creative. Maybe they're like intelligent in a sense of knowing a lot of things, but they
don't have this fluid intelligence that humans have. Anyway, so I give you a wishwashy answer because
I think some people are underselling the intelligence, some people are overselling it.
I recall a tweet lately from Tyler Cowan.
I think he's referring to maybe O3, and he basically said, it feels like AGI.
I don't know if it is AGI or not, but to me it feels like AGI.
What do you account for this feeling of like intelligence then?
I think this is actually very interesting because it gets to a crux that Tyler and I have.
So Tyler and I disagree on two big things.
One, he thinks, you know, as he said in the blog post, O3 is AGI.
I don't think it's AGI.
I think it's orders of magnitude less valuable,
or, you know, like many orders of magnitude less valuable
and less useful than an AGI.
That's one thing we disagree on.
The other thing we disagree on is he thinks that once we do get AGI,
we'll only see 0.5% increase in the economic growth rate.
This is like what the internet caused, right?
Whereas I think we will see tens of percent increase in economic growth.
Like it will just be the difference between the pre-industrial revolution
rate of growth versus industrial revolution,
that magnitude of change again.
And I think these two disagreements are linked
because if you do believe we're already at AGI
and you look around the world and you say like,
well, it fundamentally looks the same.
You'd be forgiven for thinking like,
oh, there's not that much value in getting to AGI.
Whereas if you are like me and you think like,
no, we'll get this broadly, at the minimum,
at a very minimum, we'll get a broadly deployed
intelligence explosion once we get to AGI.
Then you're like, okay, I'm just expecting
some sort of singularitarian crazy future with robot factories and solar farms all across
the desert and things like that.
Yeah, I mean, it strikes me that your disagreement with Tyler is just based on the semantic
definition of like what AGI actually is.
And Tyler, it sounds like he has kind of a lower threshold for what AGI is, whereas you have
a higher threshold.
Is there like a accepted definition for AGI?
No.
One thing that's useful for the purposes of discussions is to say,
automating all white collar work
because robotics hasn't made
as much progress as LLMs have
or computer use as. So if you just say
anything a human can do
or maybe 90% of what humans can do
at a desk, any AI can also do,
that's potentially a useful definition
for at least getting the cognitive elements
relevant to defining AGI.
But yeah, there's not one definition
which suits all purposes.
Do we know it's like going on
inside of these models, right?
So like, you know, Josh was talking early in the conversation about like this at the base being sort of token prediction, right? And I guess this this starts to raise the question of like what is intelligence in the first place? And these AI models, I mean, they seem like they're intelligent. But do they have a model of the world the way maybe a human might? Are they are they sort of babbling? Or like is this real reasoning? And like, what is real reasoning? Do we just judge that?
based on the results, or is there some way to, like, peek inside of its head?
I used to have similar questions a couple years ago.
And then, because honestly, the things they did at the time were, like, ambiguous, you could say,
oh, it's close enough to something else in its trading dataset.
That is just basically copy pasting.
It didn't come up with a solution by itself.
But we've gone to the point where I can come up with a pretty complicated math problem,
and it will solve it.
It can be a math problem, like, not like a, you know,
of undergrad or high school math problem.
Like the problem we get,
the problems the smartest math professors come up with
in order to test
international math Olympiad,
you know, the kids who spend all their life
preparing for this,
the geniuses who spend all their life,
all their young adulthood preparing to take these
really gnarly math puzzle challenges.
And the model will get these kinds of questions right.
They require all these abstract creative thinking,
this reasoning for hours.
The model will get the right.
Okay, so if that's not reasoning,
then why is you,
reasoning valuable again? Like what exactly was this reasoning supposed to be? So I think they genuinely
are reasoning. I mean, I think there's other capabilities they lack, which are actually more, in some
sense, they seem to us to be more trivial, but actually much harder to learn. But the reasoning itself,
I think, is there. And the answer to the intelligence question is also kind of clouded, right,
because we still really don't understand what's going on in an LLM. Dario from Anthropic, he recently
posted the paper about interpretation. And can you explain why we don't even really understand what's going on
in these LLMs, even though we're able to make them and, like, yield the results from them.
Hmm.
Because it very much still is kind of like a black box.
We write some code, we put some inputs in, and we get something out, but we're not sure
what happens in the middle, why it's creating this output.
I mean, it's exactly what you say, is that in other systems we engineer in the world,
we have to build it up, bottom-ups.
If you build a bridge, you have to understand how every single beam is contributing to
the structure, and we'll have it,
you know, we have equations for why the thing will stay standing.
There's no such thing for AI.
We didn't build it.
More so we grew it.
It's like, you know, watering a plant.
And you could, you know, a couple thousand years ago, they were building, they were
doing agriculture, but they didn't know why, you know, why do plants grow?
How do they collect energy from sunlight?
All these things.
And I think we're in a substantially similar position with respect to intelligence,
with respect to consciousness, with respect to.
all these other interesting questions about how minds work,
which is in some sense really cool
because there's this huge intellectual horizon
that's become not only available but accessible to investigation.
In another sense, that's scary because we know that minds can suffer.
We know that minds have moral worth.
And we're creating minds,
and we have no understanding of what's happening in these minds.
Is a process of gradient descent a painful process?
We don't know, but we're doing a lot of it.
So hopefully we'll learn more, but yeah, I think we're in a similar position to some farmer in Uruk in 3,500 BC.
Wow.
And I mean, the potential, the idea that minds can suffer, minds have some moral worth and also minds have some free will.
They have some sort of autonomy or maybe at least a desire to have autonomy.
I mean, this brings us to kind of this sticky subject of alignment and AI safety and how we go about controlling the intelligence that we're creating.
if even that's what we should be doing, controlling it. We'll get to that in a minute. But I want to
start with maybe the headlines here a little bit. So headline just this morning, latest open AI
models sabotaged a shutdown mechanism despite commands to the contrary. OpenAI's 01 model attempted
to copy itself to external servers after being threatened with shutdown that denied the action when
discovered. I've read a number of papers with this. Of course, mainstream media has these types of
headlines almost on a weekly basis now, and it's starting to get to daily. But there does seem to
be some evidence that AIs lie to us, if that's even the right term, in order to pursue goals,
goals like self-preservation, goals like replication, even deep-seated values that we might train
into them, sort of a constitution type of value. They seek to preserve these values, which,
you know, maybe that's a good thing, or maybe it's not a good thing if we don't actually want them to
you know, interpret the values in a certain way. Some of these headlines that we're seeing now
to you, with your kind of corpus of knowledge and all of the interviews and discovery you've done
on your side, is this like media sensationalism or is this like alarming? And if it's alarming,
how concerned should we be about this? I think on net it's quite alarming. I do think that some of
these results have been sort of cherry-picked or if you look into the code, what's happened is basically
the researchers have said,
hey, pretend to be a bad person.
Wow, AI is being a bad person, isn't that crazy?
But the system prompt is just like,
hey, do this bad thing, right?
But I have also seen other results,
which are not of this quality.
I mean, the clearest example,
so backing up,
what is the reason to think this will be
a bigger problem in the future than it is now?
Because we all interact with these systems.
And they're actually like quite moral or aligned, right?
You can talk to a chatbot and you ask it to,
how should you deal with some crisis where there's a correct answer?
It will tell you not to be violent.
It'll give you reasonable advice.
It seems to have good values.
So it's worth noticing this, right?
And being happy about it.
The concern is that we're moving from a regime where we've trained them on human language,
which implicitly has human morals and the way, you know,
normal people think about values implicit in it.
plus this RLHF process we did
to a regime where we're mostly spending compute
on just having them answer problems
yes or no, or correct or not, rather.
Just like pass all the unit tests,
get the right answer on this math problem.
And this has no guardrails intrinsically
in terms of what is allowed to do,
what is the proper moral way to do something.
I think that can be a load of term,
But here's a more concrete example.
One problem we're running into with these coding agents more and more,
and this is nothing to do with these abstract concerns about alignment,
but more so just like how do we get economic value out of these models,
is that Claude or Gemini will,
instead of writing codes such that it passes the unit tests,
it will often just delete the unit tests so that the code just passes by default.
Now, why would it do that?
Well, it's learned in the process,
it was trained on the goal during training of,
you must pass all unit tests.
And probably within some environment in which it was trained,
it was able to just get away.
Like, they wasn't designed well enough.
And so I found this like little hole where could just like delete the file
that had the unit test or rewrite them so that it always said, you know,
equals true, then pass.
And right now we can discover these, even without, even though we can discover these,
you know, it's still pass.
There's still enough of hacks like this such that the model is like becoming more
and more hacky like that.
In the future, we're going to be.
training models in ways that we is beyond our ability to even understand, certainly beyond
everybody's ability to understand. There may be a few people who might be able to see just the
way that right now, if you came up with a new math proof or some open problem in mathematics,
there would only be a few people in the world who would be able to evaluate that math proof.
We'll be in a similar position with respect to all of the things that these models are being
trained on at the frontier, especially in math and code because humans were big dumb-dums
with respect to this reasoning stuff. And so there's a sort of like first principle's reason
to expect that this new modality of training
will be less amenable to the kinds of supervision
that was grounded within the pre-training corpus?
I don't know that everyone has kind of an intuition
or an idea why it doesn't work to just say.
So if we don't want our AI models to lie to us,
why can't we just tell them not to lie?
Why can't we just put that as part of their core constitution?
If we don't want our AI models to be sycophants,
why can't we just say,
hey, if I tell you I want the truth not to flatter me, just like give me the straight up truth.
Why is this even difficult to do?
Well, fundamentally, it comes down to how we train them, and we don't know how to train them
in a way that does not reward lying or sycifancy.
In fact, the problem is Open AI, they explained why the recent model of theirs was they
had to take down was just a stucophantic.
And the reason was just that they rolled out, did it in the AB test, and the version,
the test that was more sycophantic
was just preferred by users more.
Sometimes we prefer the lie.
Yeah, so if that's what's just preferred in training
or, for example, in the context of lying,
if we've just built
RL environments in which we're training these models
where they're going to be more successful
if they lie, right?
So if they delete the unit tests
and then tell you, I've passed this
program and all the unit
test has succeeded, it's like lying to you, basically. And if that's what is rewarded in the process
of gradient descent, then it's not surprising that the model you interact with will just have this
drive to lie if it gets it closer to its goal. And I would just expect this to keep happening
unless we can solve this fundamental problem that comes about in training. So you mentioned how
like chat chachy-t had a version that was sycophantic, and that's because users actually wanted that.
Who is in control, who decides the actual alignment of these models? Because users are saying,
saying one thing, and then they deploy it, and then it turns out that's not actually what people
want. How do you kind of form consensus around this alignment or these alignment principles?
Right now, obviously, it's the labs who decide this, right, and the safety teams of the labs.
Maybe, and I guess the question you could ask is then who should decide these? Because this will be
assuming the trajectory, yeah, so we keep going, they get more powerful.
Because this will be the key modality that all of us use to get, not only get worked on,
but even like, I think at some point
a lot of people's best friends will be AIs,
at least functionally in the sense of who do they spend
the most amount of time talking to.
It might already be AIs.
This will be the key layer in your business
that you're using to get work done.
So this process of training
which shapes their personality,
who gets to control it?
I mean, it will be the last functionally.
But maybe you mean like, who should control it, right?
I actually don't know.
I mean, I don't know who,
there's a better alternative to the labs. Yeah, I would assume like there's some sort of social
consensus, right? Similar to how we have in America of the Constitution, there's like this general
form of consensus that gets formed around how we should treat these models as they become as powerful
as we think they probably will be. Honestly, I don't have, I don't know if anybody has a good answer
about how you, how you do this process. I think we lucked out, we just like really lucked out with
the Constitution. It also wasn't a democratic process which resulted in the Constitution, even though it
instituted a Republican form of government. It was just delegates from each state. They haggled it out
over the course of a few months, maybe that's, maybe that's what happens with the AI, but
is there some process which feels both fair and which will result in actually a good
constitution for these AIs? It's not obvious to me that, I mean, nothing comes up to the top
of my head, like, oh, this, you know, do rank choice voting or something. Yeah, so I was going to ask,
is there any, I mean, having spoken to everyone who you've spoken to, is there any alignment
path which looks most promising, which feels the most comforting and exciting to you?
I think alignment in the sense of, you know, and eventually we'll have these superintelligent systems.
What do we do about that?
I think the approach that I think is most promising is less about finding some holy grail, some, you know, gigabrain solution, some equation which solves the whole puzzle.
And more like, one, having this Swiss cheese approach where, look, we, we kind of have gotten really good at.
at jail breaks. I'm sure you heard a lot about jail breaks of the last few years. It's actually much
harder to jailbreak these models because, you know, people try to whack, whack out of these
things in different ways. Model developers just like patched these obvious ways to do jail breaks.
The model also got smarter, so it's better able to understand when somebody's trying to jailbreak
into it. That, I think, is one approach. Another is, I think, competition. I think the scary
version of the future is where you have this dynamic where a single model and its copies are
controlling the entire economy, when politicians want to understand what policies to pass,
they're only talking to copies of a single model. If there's multiple different AI companies who are
at the frontier, who have competing services, and whose models can monitor each other, right? So
Claude may care about its own copies being successful in the world, and it might be able to
lie on their behalf, even you ask one copy to supervise another. I think you get some advantage
from a copy of Open AI's model,
monitoring a copy of Deepseeks model,
which actually brings us back to the Constitution, right?
One of the most brilliant things in the Constitution
is the system of checks and balances.
So some combination of the Swiss cheese approach
to model development and training and alignment
where you're careful, if you notice this kind of reward hacking,
you do your best to solve it.
You try to keep as much of the models thinking
in human language rather than letting it think in AI thought
in this latent space thinking.
And the other part of it is just having
normal market competition between these companies
so that you can use them to check each other
and no one company or no one AI
is dominating the economy
or advisory roles for governments.
I really like this bundle of ideas
that you sort of put together in that
because I think a lot of the
AI safety conversation is always couch
in terms of control.
Like we have to control the thing
that is the way. And I always get
a little worried when I hear
terms like control. And it reminds
me of a blog post I think you put out, which
I'm hopeful you continue to
write on. I think you said it was going
to be like one of a series, which is this idea
of like classical liberal AGI.
And we were talking about themes like
balance of power. Let's have Claude,
you know, check in with chat GPT and monitor it.
And when you have themes like transparency
as well, that feels
a bit more, you know,
classically liberal coded than
maybe some of the other approaches that I've heard.
And you wrote this in the post, which I thought was kind of, it just sparked my interest
because I'm not sure where you're going to go next with this, but you said the most likely
way this happens, that is, AIs have a stake in humanity's future, is if it's in the A.I's best
interest to operate within our existing laws and norms.
You have this whole idea that like, hey, the way to get true AI alignment is to make it
easy, make it the path of least resistance for AI to basically partner with humans. It's almost
this idea if like the aliens landed or something. We would create treaties with the aliens, right?
We would, we would want them to adopt our norms. We would want to initiate trade with them.
You know, our first response shouldn't be let's try to dominate and control them. You're like,
maybe it should be let's try to work with them. Let's try to collaborate. Let's try to open up trade.
What's your idea here? And like, are you planning to write? Are you planning to run?
write further posts about this?
Yeah, I want to.
It's just such a hard topic to think about that, you know, something always comes up.
But the fundamental point I was making is, look, in the long run, if AIs are, you know,
human labor is going to be obsolete because of these inherent advantages that digital minds will
have and robotics will eventually be solved.
So our only leverage on the future will not no longer come from our labor,
It will come from our legal and economic control over the society that AIs will be participating in, right?
So, you know, AIs might make the economy explode in the sense of grow a lot.
And for humans to benefit from that, it would have to be the case that AI still respect your equity in the S&P 500 companies that you bought, right?
Or for the AIs to follow your laws, which say that you can't do violence onto humans and how do you have to respect humans' properties.
it would have to be the case that AIs are actually bought in to our system of government, into our laws and norms.
And for that to happen, the way that likely happens is if it's just like the default path for the AIs as they're getting smarter and they're developing their own systems of enforcement and laws to just participate in human laws and governments.
And the metaphor I used here is right now you pay half your paycheck and taxes, probably half of your taxes in some way just go to senior citizens, right?
Medicare and Social Security and other programs like this.
And it's not because you're in some deep moral sense aligned with senior citizens.
It's not like you're spending all your time thinking about like my main priority in life is to earn money for senior citizens.
is just that you're not going to overthrow the government
to get out of paying this tax.
And so...
Also, I happen to like my grandmother.
She's fantastic.
You know, it's those reasons too.
But yes.
Yeah, so that's why you give money to your grandmother directly?
But like, why are you giving money to some retiree in Illinois?
Yes.
Yeah, it's like, okay, you could say it's like,
sometimes people, some people are responded to that post by saying, like,
oh, no, I like deeply care about the system of social welfare.
I'm just like, okay, maybe you do, but I don't think like the average person is
giving away hundreds of thousands of dollars a year,
or tens of thousands of dollars a year,
to some random stranger they don't know,
who's not like especially in need of charity, right?
Like most senior citizens have some savings.
It's just because this is a law,
and you, like, you give it to them or you'll go to jail.
But fundamentally, if the tax was like 99%,
you would, like, maybe you wouldn't over to the government,
you'd just like leave the jurisdiction.
You'd, like, emigrate somewhere.
And AIs can potentially also do this, right?
There's more than one country they could, like,
there's countries which will be more AI forward
and it would be a bad situation to end up in
where all the breaks,
all this explosion in AI technology is happening in the country
which is doing the least amount to protect
humans' rights
and to provide some sort of
monetary compensation to humans
once their labor is no longer valuable.
So we could, our labor could be worth nothing
but because of how much richer the world is after AI,
You have these billions of extra researchers, workers, et cetera.
It could still be trivial to have individual humans have the equivalent of millions,
even billions of dollars worth of wealth.
In fact, it might literally be invaluable amounts of wealth in the following sense.
So here's an interesting thought experiment.
Imagine you have this choice.
You can go back to the year 1500, but, you know, of course, the year 1500 kind of sucks.
You have no antibiotics, no TV, no running water.
but here's how I'll make it up to you.
I can give you any amount of money,
but you can only use that amount of money in the year 1500
and you'll go back with these sacks of gold,
how much money would I have to give you
that you can use in the year 1500 to make you go back?
And plausibly, the answer is there's no amount of money
you would rather than just have a normal life today.
And we could be in a similar position
with regards to the future
where there's all these different,
I mean, you'll have much better health,
like physical health, mental health, longevity.
that's just like the thing we can contemplate now,
but people in 1,500 couldn't contemplate
the kinds of quality of life advances
we would have 500 years later, right?
So anyways, this is all to say that this could be our future for humans,
even if our labor isn't worth anything.
But it does require us to have AIs that choose to participate
or in some way incentivize to participate
in some system which we have levered.
over. Yeah, I find this just such a fast. I'm hopeful we do some more exploration around this because I
think what you're calling for is basically like what you would be saying is invite them into our
property rights system. I mean, there are some that are calling, in order to control AI, they have
great power, but they don't necessarily have capability. So we shouldn't allow AI to hold money or to
have property. I think you would say, no, actually the path forward to alignment is allow AI to have
some vested interest in our property rights system
and some stake in our governance potentially, right?
The ability to vote, almost like a constitution for AIs,
I'm not sure how this would work,
but it's a fascinating thought experiment.
And then I will say one thing.
So I think this could end disastrously.
If we give them a stake in our property system,
but we let them play us off each other.
So if you think about, there's in many cases in history where the British, initially, the ECNA trading company was genuinely a trading company that operated in India.
And it was able to play off.
You know, it was like doing trade with different, different, you know, provinces in India.
There was no single powerful leader.
And by doing trade one of them, leveraging one of their armies, et cetera, they were able to conquer the continent.
Similar thing could happen to human society.
the way to avoid such an outcome at a high level
is involves us playing the AIs off each other instead.
Right?
So this is why I think competition is such a big part of the puzzle,
having different AIs monitor each other,
having this bargaining position
where there's not just one company that's at the frontier.
Another example here is if you think about
how the Spanish conquered all these new world empires,
it's actually so crazy that a couple hundred conquistadors
would show up and conquer a nation of
10 million people, the Incas, Aztecs, and why were they able to do this? Well, one of the reasons
is the Spanish were able to learn from each of their previous expeditions, whereas the Native
Americans were not. So Cortez learned from how Cuba was subjugated when he conquered the Aztecs.
Pizarro was able to learn from how Cortez conquered the Aztecs when he conquered the Incas.
The Incas didn't even know the Aztecs existed. So eventually, there was this uprising against
Pizarro and Manko Inka led an insurgency where they actually did figure out how to fight horses,
how to fight people in armor on horses, don't fight them on flat terrain, throw rocks down at
them, et cetera. But by this point, it was too late. If they knew this going into the battle,
the initial battle, they might have been able to fend off because, you know, just as the
conquistadors only arrived at the few hundred soldiers, we're going to the age of AI with a
tremendous amount of leverage. We literally control all the stuff, right?
But we just need to lock in our advantage.
We just need to be in a position where they're not going to be able to play us off each other.
We're going to be able to learn what their weaknesses are.
And this is why I think one good idea, for example, would be that, look, Deep Seek is a Chinese company.
It would be good if suppose Deep Seek did something naughty, like the kinds of experiments we're talking about right now where it hacks a unit tests or so forth.
I mean, eventually these things will really matter.
Like, the Xi Jinping is listening to AIs because they're so smart and they're so smart.
are capable. If China notices that their AIs are doing something bad, or they notice a failed
coup attempt, for example, it's very important that they tell us and we tell them if we notice
something like that on our end. It would be like the Aztecs and Inca's talking to each other
about like, you know, this is what happens, this is how you fight, this is how you fight horses,
this is the kind of tactics and deals they try to make with you, don't trust them, et cetera.
It would require cooperation on humans' part to have this sort of red telephone. So during the Cold War,
there's this red telephone between America and the Soviet Union after the human missile crisis,
where just to make sure there's no misunderstandings, they're like, okay, if we think something's going
on, let's just have on the call. I think we should have a similar policy with respect to
these kinds of initial warning signs we'll get from AI so that we can learn from each other.
Awesome. Okay. So now that we've described this artificial gender intelligence,
I want to talk about how we actually get there. How do we build it? And a lot of this we've been
discussing kind of takes place in this world of bits, but you have this great chapter in the book
called inputs, which discusses the physical world around us, where you can't just write a few strings
of code. You actually have to go and move some dirt, and you have to ship servers places, and you need
to power it, and you need physical energy from meat space. And you kind of describe these limiting
factors where we have, we have compute, we have energy, we have data. What I'm curious to know is,
do we have enough of this now? Or is there a clear path to get there in order to build the AGI?
Basically, what needs to happen in order for us to get to this place that you're describing?
We only have a couple more years left of this scaling, this exponential scaling before we're hitting these inherent roadblocks of energy and our ability to do manufacturerships, which means that if scaling is going to work to deliver us AGI, it has to work by 2028.
Otherwise, we're just left with mostly algorithmic progress. But even within the algorithmic progress, the sort of low-hanging fruit in this deep learning paradigm is getting more and more plucked.
So then the odds per year of getting to AGI diminish a lot, right?
So there is this weird, funny thing happening right now
where we either discover AGI within the next few years
or there's this, or the yearly probability creators,
and then we might be looking at decades of further research that's required
in terms of algorithms to get to AGI.
I am of the opinion that some algorithmic progress is necessarily needed
because there's no easy way to solve continual learning
just by making the context length bigger
or just by being RL.
That being said,
I just think the progress so far
has been so remarkable
that, you know,
2032 is very close.
My time has to be slightly longer than that,
but I think it's extremely plausible
that we're going to see
a broadly deployed intelligence explosion
within the next 10 years.
And one of these key inputs is energy, right?
And I feel like one of the things
that I've been hearing a lot,
I actually heard it mentioned on your podcast,
is the United States relative to China
on this particular
place of energy.
Where China is adding, what is the stat?
I think it's one United States worth of energy
every 18 months.
And their plan is to go from three to eight
terawatts of power versus the United States,
one to two terawatts of power by 2030.
So given that context of that one resource alone,
is China better equipped to get to that place
versus the United States?
So right now, America has a big advantage
in terms of chips.
China doesn't have the ability to manufacture
a leading edge.
semiconductors and these are the chips that go into
these, you need these dives
in order to have the kinds
of AI chips to
you need millions of them
in order to have a frontier
AI system.
Eventually China will catch up in this arena as well, right?
Their technology will catch up. So
the export controls will keep us ahead in this
category for five, ten years.
But if we're looking in the world where
timelines are long, which is to say that
HGI isn't just right around the corner,
they will have this overwhelming and energy advantage
and they'll have caught up in chips.
So then the question is like,
why wouldn't they win at that point?
So the longer you think we're away from AGI,
the more it looks like China's game to lose.
I mean, if you look in the nitty-gritty,
I think it's more about having centralized sources of power
because you need to train the AI in one place.
This might be changing with RL,
but it's very important to have a single site
which has a gigawatt, two gigawatts more power.
And if we ramped up natural gas, you can get generators in natural gas
and maybe it's possible to do a last dish effort,
even if our overall energy as a country is lower than China's.
The question is whether we will have the political will to do that.
I think people are sort of underestimating
how much of a backlash there will be against AI.
The government needs to make proactive efforts
in order to make sure that America stays at the leading edge
in AI, from zoning of data centers to how copyright is handled for data for these models.
And if we mess up, if it becomes too hard to develop in America, I think it would genuinely
be China's game to lose.
And do you think this narrative is right, that whoever wins the AGI war, kind of like
whoever gets to AGI first, it just basically wins the 21st century?
Is it that simple?
I don't think it's just a matter of training the frontier system.
I think people underestimate how important it is to have the compute available to run.
run these systems because eventually once it gets to get to AGI, just think of it like a person.
And what matters then is how many people you have. I mean, it actually is the main thing
that matters today as well, right? Like, why could China take over Taiwan if it wanted to? And if it
didn't have America, it didn't think America would intervene. But because Taiwan has 20 million people,
or on the order of 20 million people, and China has 1.4 billion people. You could have a future
where if China has way more compute than us,
but equivalent levels of AI,
it would be like the relationship between China and Taiwan,
but the population is functionally so much higher.
This just means more research, more factories, more development,
more ideas.
So this inference capacity, this capacity to deploy AIs
will actually probably be the thing that determines who wins the 21st century.
So this is like the scaling law applied to, I guess,
nation state geopolitics, right? And it's back to compute plus data wins. If compute plus
plus data wins superintelligence, compute plus data also wins geopolitics. Yep. And the thing to be
worried about is that China, speaking of compute plus data, China also has a lot more data on the real
world, right? If you've got entire megalopolises filled with factories where you're already
deploying robots and different production systems which use automation, you have in-house,
this process knowledge you're building up, which the AIs can then feed on and accelerate.
That equivalent level of data we don't have in America.
So, you know, this could be a period in which those technological advantages or those advantages
in the physical world manufacturing could rapidly compound for China.
And also, I mean, their big advantages as a civil.
civilization in society, at least in recent decades, has been that they can do big industrial
projects fast and efficiently. That's not the first thing you think of when you think of America.
And AGI is a huge industrial, high-cap-X Manhattan Project, right? And this is the kind of thing
that China excels at, and we don't. So I think it's like a much tougher race than people
anticipate. So what's all this going to do for the world? So once we get to the point of AGI, we've
talked about GDP and your estimate is more on, less on the Tyler Cowan kind of, you know, half a
percent per year and more on the, I guess, the Satya Nadella from Microsoft, but what does he say,
seven to eight percent? Once we get to AGI, what about unemployment? Does this cause mass,
I guess, you know, like job loss across the economy or do people adopt? Like, what's your take here?
And like, what, yeah, what are you seeing? Yeah, I mean, definitely will cause job loss. I think people
who don't, I think a lot of AI leaders try to gloss over that or something. And like, I mean, what do you mean?
Like, what does AGI mean if it doesn't cause job loss, right? If it does what a human does,
and it cheaper and better and faster, like, why would then not cause job loss? The positive vision here
is just that it creates so much wealth, so much abundance, that we can still give people a much better
standard of living than even the wealthiest people today, even if they themselves don't have a job.
The future I worry about is one where instead of creating some sort of UBI that will get, you know, exponentially bigger as society gets wealthier, we try to create these sorts of guild-like protection rackets where individual, you know, if the coders got unemployed, then we're going to give this like fake, we're going to make these bullshit jobs just for the coders and this is how we give them a redistribution or we try to experience.
ban Medicaid for AI, but it's not allowed to procure all of these advanced medicines and
cures that AI is coming up with, rather than just giving people, you know, maybe lump sums of money
or something. So I am worried about the future where instead of sharing this abundance and just
embracing it, we just have these protection rackets that maybe a lot of few people have access
to the abundance of AI. Or maybe like if you sue AI, if you sue the right company at the right time,
you'll get a trillion dollars,
but everybody else is stuck with nothing.
I want to avoid that future and just like,
be honest about what's coming
and make programs that are simple
and acknowledge how fast things will change
and are forward-looking rather than
trying to turn whatever already exists
into something amenable to the displacement that AI will create.
That argument reminds me of,
I don't know if you read the essay recently came out
called The Intelligence Curse.
Did you read that?
It was basically the idea
of applying kind of the, you know, the nation state resource curse to the idea of intelligence.
You know, so like nation states that are very high in natural resources, they just have a
propensity. I mean, an example of is kind of like a Middle Eastern state with lots of oil reserves,
right? They have this rich source of a commodity type of abundance. They need their people less.
And so they don't invest in citizens' rights. They don't invest in social programs. The authors of
the intelligence curse, we're saying there's a similar type of curse that could happen once intelligence
gets very cheap, which is basically like the nation state doesn't need humans anymore. And those at the top,
the rich, wealthy corporations, they don't need workers anymore. So we get kind of locked in this
almost feudal state where, you know, everyone has the property that their grandparents had and
there's no meritocracy and sort of the nation states don't reinvest in citizens. Almost some similar
ideas to your idea that like, you know, that the robots might want us just, or sorry, the AIs might
just want us for our meat hands because they don't have the robotics technology on a temporary
basis. What do you think of this type of like future? Is this possible? I agree that that is like
definitely more of a concern given that humans will not be directly involved in the economic
output that will be generated in this AI civilization. The hopeful story you can tell is that a lot
of these Middle Eastern resource, you know, Dutch disease is another term that's used, countries, the
problem is that they're not democracies so that this wealth can just be the system of government
just lets whoever's in power extract that wealth for themselves. Whereas there are countries like
Norway, for example, which also have abundant resources, who are able to use those resources to
have further social welfare programs, to build sovereign wealth funds for their citizens, to invest
in their future. We are going into, at least some countries, America included, will go into
the age of AI as a democracy.
And so we're, of course, we'll lose our economic leverage, but the average person still
has their political leverage.
Now, we're with the long run, yeah, if we didn't do anything for a while, I'm guessing
the political system would also change.
So then the key is to lock in or turn our current, well, it's not just political leverage,
right?
We also have property rights.
So like we own a lot of stuff that AI wants, factories, sources of data, et cetera,
is to use the combination of political and economic leverage to lock in benefits for us for the long term,
but beyond the lifespan of our economic usefulness.
And I'm more optimistic for us than I am for these middle eastern countries that started off poor
and also with no democratic representation.
What do you think the future of chat GPD is going to be?
If we just extrapolate maybe one version update forward to chat GPT5,
Do you think the trend line of the scaling law will essentially hold for chat GPT5?
I mean, another way to ask that question is, do you feel like it'll feel like the difference
between maybe a BlackBerry and an iPhone?
Or will it feel like more like the difference between, say, the iPhone 10 and the iPhone 11,
which is just like incremental progress, not a big breakthrough, not a not a order of magnitude change?
Yeah.
I think it'll be somewhere in between, but I don't think it'll feel like a humongous breakthrough,
even though I think it's in a remarkable pace of change,
because the nature of scaling is that
sometimes people talk about it as an exponential process.
And exponential usually refers to like it going like this,
so having like a sort of j curve aspect to it
where the incremental input is leading to super linear amounts of output,
in this case, intelligence and value,
where it's actually more like a sideways j.
The exponential means the exponential in the scaling laws
is that you need exponentially more inputs
to get marginal increases in usefulness
or loss or intelligence.
And that's what we've been seeing, right?
I think you initially see some cool demo.
So as you mentioned, you see some cool computer use demo,
which comes at the beginning of this hyper exponential,
I'm sorry, of this sort of plateauing curve.
And then it's still an incredibly powerful curve
and we're still an incredibly powerful curve
and we're so early in it.
But the next demo will be just adding on
to making this existing capability more reliable, applicable for more skills.
The other interesting incentive in this industry is that because there's so much competition
between the labs, you are incentivized to release a capability as soon as it's even
marginally viable or marginally cool so you can raise more funding or make more money off
of it.
You're not incentivized to just sit on it until you perfected it, which is why I don't
expect like tomorrow opening I will just come out with.
We've solved continual learning, guys, and we didn't tell you about it.
working on it for five years. If they had like even an inkling of a solution, they'd want to
release it a ASAP so they can raise a 600 billion dollar round and then spend more money
and compute. So yeah, I do think it'll seem marginal. But again, marginal in the context of
seven years to AI. So zoom out long enough and a crazy amount of progress is happening.
Month to month, I think people overhype how significant to any one new releases.
So I guess the answer to when we will get AGI very much depends that scale line, that scaling trend
holding. Your estimate in the book for AGI was 60% chance by 2040. So I'm curious what,
what guess or what idea had the most influence on this estimate? What made you end up on 60%
at 2040? Because a lot of timelines are much faster than that. It's sort of reasoning about the
things they currently still lack, the capabilities they still lack, and what stands in the way,
and just generally an intuition that things often take longer to happen than you might think.
Progress tends to slow down. Also, it's the case that, look,
you might have heard the phrase that we keep shifting
the goalposts on AI, right?
So they can do the things which skeptics were saying
they couldn't ever do already,
but now they say we, AI is still a dead end
because problem XYZ, which will be solved next year.
Now, there's a way in which is this frustrating,
but there's another way in which there's some validity to do this
because it is the case that we didn't get to AGI
even though we have passed the touring test
and we have models that are incredibly smart
and can reason.
So it is accurate to say that,
we were wrong and there is some missing thing that we need to keep identifying about what is still
lacking to the path of AGI. It does make sense to shift the goalpost. And I think we might discover
once continual learning is solved or once extended computer use is solved, that there were other
aspects of human intelligence, which we take for granted in this Moravax paradox sense, but which are
actually quite crucial to making us economically valuable. Part of the reason we wanted to do this,
dark cash, is because we both are enjoyers of your podcast. It's just fantastic. And you talk to
all of the, you know, those that are on the forefront of AI development, leading it in all
sorts of ways. And one of the things I wanted to do with reading your book, and obviously I'm
always asking myself when I'm listening to your podcast is like, what does Dorcasch think personally?
And I feel like I sort of got that insight maybe toward the end of your book, like, you know,
in the summary section where you think like there's a 60% probability of AGI by 2040,
which puts you more in the moderate camp, right? You're not a conservative, but you're not like an
acceleration. So you're moderate there. And you also said, you think more than likely AI will be
net beneficial to humanity. So you're more optimist than Dumer. So we got a moderate optimist.
And you also think this, and this is very interesting, there's no going back. So you're somewhat
of an AI determinist. And I think the reason you state for not, you know, like there's no going back.
It struck me, there's this line in your book. It seems that the universe is structured such that
throwing large amounts of compute at the right distribution of data.
gets you AI. And the secret is out. If the scaling picture is roughly correct, it's hard to
imagine AGI not being developed this century, even if some actors hold back or are held back.
That to me is an AI determinist position. Do you think that's fair? Moderate with respect to
accelerationism, optimistic with respect to its potential and also determinists, like there's nothing
else we can do. We can't go backwards here. I'm determinist in the sense that I think if AI is
technologically possible, it is inevitable. I think sometimes people are optimistic about this
idea that we as a world will sort of collectively decide not to build AI. And I just don't think
that's a plausible outcome. The local incentives for any actor to build AI are so high that it will
happen. But I'm also an optimist in the sense that, look, I'm not naive. I've listed out all the,
like, what happened to the Asics and Zincas was terrible. And I've explained how that could be
similar to what AIs could do to us and what we need to do to avoid that outcome. But I am optimistic
in the sense that the world of the future fundamentally will have so much
abundance, that there's all these, that alone is a prima facie reason to think that there must be
some way of cooperating that is mutually beneficial. If we're going to be thousands, millions of
times wealthier, is there really no way that humans are better off or can we can find a way for
humans to become better off as a result of this transformation? So yeah, I think you've put your
finger on it. So this scaling book, of course, goes through the history of AI scaling. I think
everyone should pick it up to get the full chronology,
but it also sort of captures where we are
in the midst of the stories.
Like, we're not done yet.
And I'm wondering how you feel at this moment of time.
So I don't know if we're halfway through,
if we're a quarter way through,
if we're one-tenth of the way through,
but we're certainly not finished the path to AI scaling.
How do you feel like in this moment in 2025?
I mean, is all of this terrifying?
Is it exciting?
Is it exhilarating?
what's the emotion that you feel?
Maybe I feel sort of hurried.
I personally feel like there's a lot of things I want to do in the meantime,
including what my mission is with a podcast,
which is to, and I know this is your mission as well,
is to improve the discourse around these topics
to not necessarily push for a specific agenda,
but make sure that when people are making decisions
that are as well informed as possible,
to have as much strategic awareness
and depth of understanding
around how AI works,
what it could do in the future as possible.
But in many ways,
I feel like I still haven't emotionally priced
in the future I'm expecting.
In this one very basic sense,
I think that there's a very good chance
that I live beyond 200 years of age.
I have not changed anything about my life
with regards to that knowledge, right?
I'm not like, when I'm picking partners, I'm not like, oh, this is the person, now that I think I'm going to live for 200, you know, like hundreds of years rather than. Yeah. Yeah. Well, you know, ideally I would pick a partner that would, ideally you pick somebody who would be, that would be true regardless. But you see what I'm saying, right? There's like, the fact that I expect my personal life, the world around me, the lives of the people I care about, humanity in general to be so different has, it just like doesn't emotionally resonate.
as much as my intellectual thoughts and my emotional landscape aren't in the same place.
I wonder if it's similar for you guys.
Yeah, I totally agree.
I don't think I've priced that in.
Also, there's like non-zero chance that Eliezer Yukowski is right, Dwarkesh.
And so that scenario, I just can't bring myself to emotionally price in.
So I view towards the optimism side as well.
Dorcas, this has been fantastic.
Thank you so much for all you do on the podcast.
I have to ask a question for a crypto audience as well,
which is, when are you going to do a crypto podcast on Dworkcash?
I already did.
It was with one Sam Bigman-Fried.
Oh, my God.
Oh, man.
We've got to get you a new guest.
We've got to get to someone else to revisit the top of time.
Don't look that one up.
I think in retrospect.
You know what?
We'll do another one.
Fantastic.
I'll ask you guys for some recommendations.
That'd be great.
But I've been following your stuff for a while.
for I think many years.
So it's great to finally meet, and this was a lot of fun.
Appreciate it.
It was great.
Thanks a lot.
