Dwarkesh Podcast - Dario Amodei — The highest-stakes financial model in history
Episode Date: February 13, 2026

Dario Amodei thinks we are just a few years away from AGI — or as he puts it, from having “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, why task-specific RL might lead to generalization, and how AI will diffuse throughout the economy. We also dive into Anthropic’s revenue projections, compute commitments, path to profitability, and more.

Watch on YouTube; read the transcript.

Sponsors

* Labelbox can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at labelbox.com/dwarkesh.

* Jane Street sent me another puzzle… this time, they’ve trained backdoors into 3 different language models — they want you to find the triggers. Jane Street isn’t even sure this is possible, but they’ve set aside $50,000 for the best attempts and write-ups. They’re accepting submissions until April 1st at janestreet.com/dwarkesh.

* Mercury’s personal accounts make it easy to share finances with a partner, a roommate… or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at mercury.com/personal-banking.

Timestamps

(00:00:00) - Does task-specific RL hint at lack of generalization?
(00:12:36) - Is economic diffusion just cope?
(00:29:42) - Is continual learning necessary? How will it be solved?
(00:46:20) - If AGI is 1-3 years away, why not buy more compute?
(00:58:49) - How will AI labs actually make profit?
(01:31:19) - Will regulations destroy the boons of AGI?
(01:47:41) - Why can’t both China and America have a country of geniuses in a datacenter?

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Transcript
So we talked three years ago.
I'm curious in your view, what has been the biggest update of the last three years?
What has been the biggest difference between how things felt three years ago versus now?
Yeah, I would say actually the underlying technology, like the exponential of the technology, has gone broadly speaking, I would say, about as I expected it to go.
I mean, there's like plus or minus, you know, a couple.
There's plus or minus a year or two here.
There's plus or minus a year or two there.
I don't know that I would have predicted the specific direction of code.
But actually, when I look at the exponential, it is roughly what I expected in terms of the march of the models from like, you know, smart high school student to smart college student to like, you know, beginning to do PhD and professional stuff.
And in the case of code, reaching beyond that.
So, you know, the frontier is a little bit uneven.
It's roughly what I expected.
I will tell you, though, what the most surprising thing has been.
The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential.
To me, it is absolutely wild that you have people, you know, within the bubble
and outside the bubble, talking about just the same tired old hot-button political issues
while, all around us,
we're near the end of the exponential.
I want to understand what that exponential looks like right now, because the first question I asked
you when we recorded three years ago was, you know, what's up with scaling?
How does it work?
And I have a similar question now, but I feel like it's a more complicated question,
at least from the public's point of view.
Three years ago, there were these, you know, well-known public trends where across many
orders of magnitude of compute, you could see how the loss improves.
And now we have RL scaling and there's no publicly known scaling law for it.
It's not even clear what exactly the story is. Is it supposed to be teaching the model
skills?
Is it supposed to be teaching meta-learning?
What is the scaling hypothesis at this point?
Yeah, so I have actually the same hypothesis that I had, even all the way back in 2017.
So in 2017, I think I talked about it last time, but I wrote a doc called the big blob of compute hypothesis.
And, you know, it wasn't about the scaling of language models in particular.
When I wrote it, GPT-1 had just come out, right?
So that was, you know, one among many things, right?
Back in those days, there was robotics.
People tried to work on reasoning as a separate thing from language models.
there was scaling of the kind of RL that happened in AlphaGo,
that happened with Dota at OpenAI, and, you know, people remember StarCraft at DeepMind,
you know, AlphaStar.
So it was written as a more general document.
And the specific thing I said was the following.
And, you know, Rich Sutton put out The Bitter Lesson a couple of years later.
But, you know, the hypothesis is basically the same.
So what it says is: all the cleverness, all the techniques, all the "we need a new method to do this" thinking, that doesn't matter very much. There are only a few things that matter. And I think I listed seven of them. One is how much raw compute you have. The other is the quantity of data that you have. Then the third is kind of the quality and distribution of data, right? It needs to be a broad, broad distribution of data. The fourth is, I think, how long you train for.
The fifth is you need an objective function that can scale to the moon.
So the pre-training objective function is one such objective function, right?
Another objective function is, you know, the kind of RL objective function that says, like, you have a goal, you're going to go out and reach the goal.
Within that, of course, there are objective rewards like, you know, the ones you see in math and coding.
And there are more subjective rewards like you see in RL from human feedback, or higher-order versions of that.
And then the sixth and seventh were things around kind of like normalization or conditioning,
like, you know, just getting the numerical stability so that kind of the big blob of compute
flows in this laminar way instead of running into problems.
So that was the hypothesis.
And it's a hypothesis I still hold.
I don't think I've seen very much that is not in line with that hypothesis.
And so the pre-training scaling laws were one example of what we see there. And indeed, those have continued going. I think now it's been widely reported: we feel good about pre-training. Pre-training
is continuing to give us gains. What has changed is that now we're also seeing the same thing for
RL, right? So we're seeing a pre-training phase and then we're seeing like an RL phase on top of that.
And with RL, it's actually just the same. Even other companies have published things in some of their releases that say,
look, we trained the model on math contests, you know, AIME or other things like that.
And how well the model does is log-linear in how long we've trained it.
And we see that as well.
And it's not just math contests.
It's a wide variety of RL tasks.
And so we're seeing the same scaling in RL that we saw for pre-training.
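To make the log-linear claim concrete, here is a minimal sketch of what such a scaling fit looks like. All numbers are made up for illustration; this is not published Anthropic data, just the shape of the relationship being described: benchmark score improving roughly linearly in the log of training compute.

```python
# Minimal sketch of a log-linear scaling fit: score grows roughly linearly in
# log10(training compute). All numbers below are illustrative, not real data.
import numpy as np

compute = np.array([1e2, 1e3, 1e4, 1e5, 1e6])     # RL training compute, arbitrary units
score = np.array([22.0, 34.0, 47.0, 58.0, 71.0])  # e.g. % solved on a math-contest eval

slope, intercept = np.polyfit(np.log10(compute), score, 1)
print(f"score ≈ {slope:.1f} * log10(compute) + {intercept:.1f}")

# The optimistic read of the scaling hypothesis is that the line keeps holding
# for another order of magnitude or two of compute:
print(f"extrapolated score at 1e7: {slope * 7 + intercept:.1f}")
```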
You mentioned Richard Sutton and the Bitter Lesson. Yeah. I interviewed him last year,
and he is actually very non-LLM-pilled. I don't know if this is exactly his perspective,
but one way to paraphrase this objection is something like: look, something which possesses
the true core of human learning would not require all these billions of dollars of data and compute
and these bespoke environments to learn how to use Excel, or how to use PowerPoint,
how to navigate a web browser.
And the fact that we have to build in these skills
using these RL environments hints that
we're actually lacking this core human learning algorithm.
And so we're scaling the wrong thing.
And so, yeah, that raises the question:
why are we doing all this RL scaling
if we do think there's something
that's going to be human-like
in its ability to learn on the fly?
Yeah, yeah.
So I think this kind of puts together
several things that should be kind of thought of differently.
I think there is a genuine puzzle here, but it may not matter.
In fact, I would guess it probably doesn't matter.
So let's take the RL out of it for a second, because I actually think it's a red herring to say that RL is any different from pre-training in this respect.
So if we look at pre-training scaling, it was very interesting.
Back in, you know, 2017, when Alec Radford was doing GPT-1, if you look at the models before GPT-1,
they were trained on these datasets that didn't represent a wide distribution of text, right?
You had these very standard language modeling benchmarks.
And GPT-1 itself was trained on a bunch of, I think it was fan fiction, actually.
But, you know, it was literary text, which is a very small fraction of the text that you get.
And what we found with that, and in those days it was like a billion words or something,
so it was a small dataset and represented a pretty narrow distribution, right?
A narrow distribution of what you can see in the world.
And it didn't generalize well.
If you did better on, you know, I forget what it was, but some kind of fan fiction corpus,
it wouldn't generalize that well to the other text.
We had all these measures of, like, how well does the model do at predicting all of these other kinds of texts?
You really didn't see the generalization.
It was only when you trained over all the tasks on the internet, when you kind of did a general internet scrape, right, from something like Common Crawl or scraping links on Reddit, which is what we did for GPT-2,
that you started to get generalization.
And I think we're seeing the same thing on RL, that we're starting with first very simple
RL tasks, like training on math competitions, then we're kind of moving to, you know, kind of
broader training that involves things like code as a task. And now we're moving to do kind of
many, many other tasks. And then I think we're going to increasingly get generalization.
So that kind of takes out the RL versus the pre-training side of it. But I think there is a puzzle
here either way, which is that on pre-training, when we train the model on pre-training,
you know, we use like trillions of tokens, right? And humans don't see trillions of words. So there is
an actual sample efficiency difference here.
There is actually something different that's happening here, which is that the models
start from scratch and, you know, they have to get much more training.
But we also see that once they're trained, if we give them a long context length, the only
thing blocking a long context length is like inference.
But if we give them like a context length of a million, they're very good at learning and
adapting within that context length.
And so I don't know the full answer to this.
But I think there's something going on that pre-training, it's not like the process of humans learning.
It's somewhere between the process of humans learning and the process of human evolution.
It's like it's somewhere between, like we get many of our priors from evolution.
Our brain isn't just a blank slate, right?
Whole books have been written about that.
I think the language models, they're much more blank slates.
They literally start as like random weights, whereas the human brain starts with all these regions.
It's connected to all these inputs and outputs.
And so maybe we should think of pre-training, and for that matter, RL as well, as being something that exists in the middle space between human evolution and, you know, kind of human on the spot learning.
And as the in-context learning that the models do as something between long-term human learning and short-term human learning.
So, you know, there's this hierarchy: there's evolution, there's long-term learning, there's short-term learning,
and there's just in-the-moment human reaction.
And the LLM phases exist along this spectrum,
but not necessarily exactly at the same points.
There's no exact analog to some of the human modes of learning;
the LLMs are kind of falling between the points.
Does that make sense?
Yes, although some things are still a bit confusing.
For example, if the analogy is that this is like evolution,
so it's fine that it's not that sample efficient,
then, like, well, if we're going to get the kind of super sample-efficient agent
from in-context learning,
why are we bothering to build these skills in? There are all these RL environment companies, and it seems like what they're doing is teaching it how to use this API, how to use Slack, how to use whatever.
It's confusing to me why there's so much emphasis on that if the kind of agent that can just learn on the fly is emerging or is going to soon emerge or has already emerged.
Yeah, yeah. So, I mean, I can't speak for the emphasis of anyone else. I can only talk about how we think about it.
I think the way we think about it is: the goal is not to teach the model every
possible skill within RL, just as we don't do that within pre-training, right? Within pre-training,
we're not trying to expose the model to every possible way that words
could be put together, right? You know, it's rather that the model trains on a lot of things,
and then it reaches generalization across pre-training, right? That was the transition from GPT-1 to
GPT-2 that I saw up close, which is like, you know, the model reaches a point, and
I had these moments where I was like, oh, yeah, you just give the model a list of numbers that's like, you know, this is the cost of the house, this is the square feet of the house.
And the model completes the pattern and does linear regression.
Like, not great, but it does it.
But it's never seen that exact thing before.
And so, you know, to the extent that we are building these RL environments, the goal is very similar to what was done five or
ten years ago with pre-training: we're trying to get a whole bunch of data, not because we want
to cover a specific document or a specific skill, but because we want to generalize.
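The in-context regression behavior described a few lines up is easy to picture as a prompt. A minimal sketch, with made-up housing numbers and no real API calls: a well-trained language model, given only the text pattern, tends to continue it by inferring the underlying linear rule.

```python
# Illustrative few-shot prompt for the "in-context linear regression" behavior
# described above. The numbers and the expected continuation are hypothetical.
examples = [(1000, 150_000), (1500, 225_000), (2000, 300_000)]  # (square feet, price)

prompt = "\n".join(f"Square feet: {sqft}, Price: ${price}" for sqft, price in examples)
prompt += "\nSquare feet: 2500, Price: $"

print(prompt)
# A model that has "learned to learn" from pre-training tends to complete this
# with something near 375,000, i.e. it fits the linear pattern in context,
# even though it has never seen this exact sequence before.
```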
I mean, I think the framework you're laying down obviously makes sense.
Like we're making progress towards AGI.
I think the crux is something like: nobody at this point disagrees that we're going to achieve AGI within
the century.
And the crux is, you say we're hitting the end of the exponential,
and somebody else looks at this and says,
oh, yeah, we're making progress.
We've been making progress since 2012.
And then by 2035 we'll have a human-like agent.
And so I want to understand what it is that you're seeing,
which makes you think, yeah, obviously we're seeing the kinds of things that evolution did,
or that within-lifetime human learning does, in these models.
And why think that it's one year away and not 10 years away?
I actually think of it as, there are kind of two cases to be made here,
or, like, two claims you could make, one of which is stronger and the other of which is weaker.
So I think starting with the weaker claim, you know, when I first saw the scaling back in like,
you know, 2019, I wasn't sure. This was kind of a 50-50 thing, right?
I thought I saw something, and my claim was: this is much more likely than anyone thinks it is.
Like, this is wild. No one else would even consider this. Maybe there's a 50% chance
this happens. On the basic hypothesis of, you know, as you put it, within 10 years we'll get to,
you know, what I call kind of country of geniuses in the data center. I'm at like 90% on that.
And it's hard to go much higher than 90% because the world is so unpredictable. Maybe the irreducible
uncertainty would be if we were at 95%, where you get to things like, I don't know, maybe multiple
companies have internal turmoil and nothing happens, and then Taiwan gets invaded and all the fabs get blown up by missiles, and now you're in that kind of scenario. Yeah. Yeah. You know, you could construct a 5% world where things get delayed for 10 years. That's maybe 5%. There's another 5%, which is that I'm
very confident on tasks that can be verified. So with coding, except for that irreducible uncertainty, I think we'll be there in one or two years.
There's no way we will not be there in 10 years in terms of being able to do end-to-end coding.
My one little bit of fundamental uncertainty, even on long time scales,
is this thing about tasks that aren't verifiable: planning a mission to Mars,
doing some fundamental scientific discovery like CRISPR, writing a novel. It's hard to
verify those tasks. I am almost certain that we have a reliable path to get there, but
if there is a little bit of uncertainty, it's there. So on the 10 years,
I'm, like, 90%, which is about as certain as you can be. I think it's
crazy to say that this won't happen by 2035.
Like in some sane world, it would be outside the mainstream.
But the emphasis on verification hints to me at a lack of belief that these models
will generalize.
If you think about humans, we get good at both things for which we get
verifiable reward and things for which we don't.
You look like you want to jump in?
No, no, this is why I'm almost sure.
We already see substantial generalization from things
that verify to things that don't verify.
We've already seen it.
But it seems like you were emphasizing this as a spectrum which will split apart which
domains you see more progress in.
And I'm like,
but that doesn't seem like how humans get better.
The world in which we don't make it, or the world in which we don't get there, is the
world in which we do all the things that are verifiable,
and then many of them generalize, but we kind of don't get fully
there.
We don't fully color in this side of the box.
It's not a binary thing.
But it also seems to me, even in the world where generalization is weak when you only
stay in verifiable domains, it's not clear to me that in such a world you could automate software
engineering. Because, like, in some sense, you are, quote unquote, a software engineer.
Yeah, but part of being a software engineer for you involves writing these, like, long memos about
your grand vision about different things.
That's part of the job of the company.
But I do think SWI involve like design documents and other things like that, which, by the way,
The models are not bad.
They're already pretty good at writing comments.
And so, again, I'm making, like, much weaker claims here than I believe, to kind of set up, you know, to distinguish between two things.
Like, we're already almost there for software engineering.
We are already almost there.
By one metric.
There's one metric, which is, like, how many lines of code are written by AI?
And if you consider other productivity improvements in the course of the history of software engineering, compilers write all the lines of software.
But there's a difference between how many lines are written and how big the productivity improvement is.
Oh, yeah.
And then, like, "we're almost there" meaning how big is the productivity improvement, not just how many lines are written?
Yeah, yeah.
So I actually agree with you on this.
So I've made this series of predictions on code and software engineering.
And I think people have repeatedly kind of misunderstood them.
So let me lay out the spectrum, right?
Like, I think it was, you know, eight or nine
months ago or something that I said the AI model will be writing 90% of the lines of code
in like, you know, three to six months, which happened at least at some places, right?
It happened at Anthropic, happened with many people downstream using our models.
But that's actually a very weak criterion, right?
People thought I was saying, like, we won't need 90% of the software engineers.
Those things are worlds apart, right?
Like, I would put the spectrum as 90% of code is written by the model.
100% of code is written by the model, and that's a big difference in productivity.
90% of the end-to-end SWE tasks, including things like compiling, including things like setting up clusters and environments, testing features, writing memos: 90% of the SWE tasks are done by the models.
100% of today's SWE tasks are done by the models.
And even when that happens, it doesn't mean software engineers are out of a job; there are, like, new higher-level things they can do, where they
can manage. And then there's further down the spectrum, like, you know, 90% less
demand for SWEs, which I think will happen. But like, this is a spectrum. And, you know, I wrote
about it in The Adolescence of Technology, where I went through this kind of spectrum with farming.
And so I actually totally agree with you on that. It's just these are very different benchmarks from
each other, but we're proceeding through them super fast. It seems like part of your vision
is, one, that going from 90 to 100
is going to happen fast,
and two, that somehow that leads to huge
productivity improvements.
Whereas when I notice, even in greenfield projects
that people start with Claude
or something, people report starting a lot of projects,
and I'm like, do we see in the world out there
a renaissance of software,
all these new features that wouldn't exist otherwise?
And at least so far, it doesn't seem like we see that.
And so that does make me wonder,
even if I never had to intervene on Claude Code,
there is this thing of, like,
the world is complicated,
jobs are complicated,
and closing the loop on self-contained systems,
whether it's just writing software or something else,
how much broader gains
would we see just from that?
And so maybe
that should dilute our estimation
of the country of geniuses.
Well, I simultaneously agree with you,
agree that it's a reason
why these things don't happen
instantly. But at the same time, I think the effect is going to be very fast. So, like,
I don't know, you could have these two poles, right? One is like, you know, AI is like, you know,
it's not going to make progress. It's slow. Like, it's going to take, you know, kind of forever to
diffuse within the economy, right? Economic diffusion has become one of these buzzwords that's like a reason
why we're not going to make AI progress or why AI progress doesn't matter. And, you know, the other
axis is like, we'll get recursive self-improvement, you know, the whole thing, you know,
can't you just draw an exponential line on the curve?
We're going to have, you know,
Dyson spheres around the sun, like,
so many nanoseconds after we get recursive self-improvement.
I mean, I'm completely caricaturing the view here.
But like, you know, there are these two extremes.
But what we've seen from the beginning, you know,
at least if you look within Anthropic, is this bizarre 10x-per-year growth
in revenue, right?
So, you know, in 2023, it was like zero to 100 million.
2024, it was 100 million to a billion.
2025, it was a billion to like nine or 10 billion.
And then...
You guys should have just bought like a billion dollars of your own products
so you could just, like, hit a clean 10B.
And in the first month of this year, that exponential, you would think it would slow down,
but we added another few billion to revenue in January.
And so, you know, obviously that curve can't go on forever, right?
You know, the GDP is only so large.
I don't know, I would even guess that it bends somewhat this year.
But like, that is like a fast curve, right?
That's like a really fast curve.
And I would bet it stays pretty fast even as the scale goes to the entire economy.
So, like, I think we should be thinking about this middle world where things are extremely
fast, but not instant, where they take time because of economic diffusion, because of the need
to close the loop, because, you know, it's like this fiddly, oh, man, I have to do change management
within my enterprise. You know, I have to like, you know, I set this up, but, you know, I have to
change the security permissions on this in order to make it actually work. Or, you know, I had this,
like, old piece of software that, you know, checks the model before it's compiled
and released, and I have to rewrite it.
And yes, the model can do that, but I have to tell the model to do that.
And it has to take time to do that.
And so I think everything we've seen so far is compatible with the idea that there's one fast
exponential that's the capability of the model.
And then there's another fast exponential that's downstream of that, which is the diffusion
of the model into the economy, not instant, not slow, much faster than any previous
technology, but it has its limits.
And this is what we see, you know, when I look inside Anthropic, when I look at our customers:
fast adoption, but not infinitely fast.
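For a sense of what "fast but not infinitely fast" could mean numerically, here is a toy sketch, not a real forecast: it starts from the roughly $1B-entering-2025 figure and 10x multiple cited above, and assumes an arbitrary damping factor for how diffusion friction bends the curve.

```python
# Toy projection: the ~10x/year revenue curve discussed above, with the growth
# multiple assumed to decay toward 1x as adoption friction bites. Illustrative only.
revenue = 1e9      # ~$1B annualized entering 2025, per the figures in the transcript
growth = 10.0      # the historical 10x/year multiple
decay = 0.6        # hypothetical damping per year; not an Anthropic estimate

for year in range(2025, 2031):
    print(f"{year}: ~${revenue / 1e9:,.0f}B annualized")
    revenue *= growth
    growth = 1 + (growth - 1) * decay  # the curve "bends" but stays steep
```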
Can I try a hot take on you?
Yeah.
I feel like diffusion is cope that people use when the model isn't
able to do something; they're like, oh, it's a diffusion issue.
But then you should use the comparison to humans.
You would think that the inherent advantages that AIs have would make diffusion a much
easier problem for new AIs getting onboarded than new humans getting onboarded. So an AI can read
your entire Slack and your drive in minutes. They can share all the knowledge that the other copies of
the same instance have. You don't have this adverse selection problem when you're hiring AI
because you can just hire copies of a vetted AI model. Hiring a human is like so much more hassle.
And people hire humans all the time, right? We pay humans upwards of $50 trillion in wages because they're
useful, even though it's like in principle, it would be much easier to integrate AIs into the economy
than it is to hire humans. I feel like diffusion doesn't really explain it.
I think diffusion is very real and doesn't exclusively have to do with
limitations on the AI models. Again, there are people who use diffusion
as kind of a buzzword to say this isn't a big deal. I'm not talking about that. I'm not saying
AI will diffuse at the speed that previous technologies did. I think AI will diffuse much faster than
previous technologies have, but not infinitely fast. So I'll just give an example of this, right?
Like, there's like Claude Code. Like, Claude Code is extremely easy to set up. You know,
if you're a developer, you can kind of just start using Claude Code. There is no reason why
a developer at a large enterprise should not be adopting Claude Code as quickly as an
individual developer or a developer at a startup. And we do everything we can to promote it, right?
We sell Claude Code to enterprises, and big enterprises, like big financial companies, big pharmaceutical companies, all of them are adopting Claude Code much faster than enterprises typically adopt new technology, right?
But again, it takes time.
Like, any given feature or any given product, like Claude Code or like Cowork, will get adopted by the
individual developers who are on Twitter all the time, by the series A startups,
many months faster than it will get adopted by, like,
a large enterprise that does food sales.
There are a number of factors like you have to go through legal.
You have to provision it for everyone.
It has to, you know, like it has to pass security and compliance.
The leaders of the company, who are further away from the AI revolution, you know,
are forward-looking,
but they have to say, oh, it makes sense for us to spend 50 million.
This is what this Claude Code thing is.
This is why it helps our company.
This is why it makes us more productive.
And then they have to explain to the people two levels below.
And they have to say, okay, we have 3,000 developers, like, here's how we're going to roll it out to our developers.
And we have conversations like this every day.
Like, you know, we are doing everything we can to make Anthropic's revenue grow 20 or 30x a year instead of 10x a year.
You know, and again, many enterprises are just saying this is so productive, like, we're going to take shortcuts in our usual procurement process, right?
They're moving much faster than, you know, when we tried to sell them just the ordinary API, which many of them use.
But Claude code is a more compelling product.
But it's not an infinitely compelling product.
And I don't think even AGI or powerful AI or country of geniuses in the data center will be an infinitely compelling product.
It will be a compelling product enough maybe to get three or five or ten X a year growth, even when you're in the hundreds of billions of dollars, which is extremely hard to do.
And it has never been done in history before, but not infinitely fast.
I buy that.
It would be a slight slowdown.
And maybe this is not your claim.
But sometimes people talk about this like, oh, the capabilities are there, it's just diffusion.
Otherwise, like, we're basically at AGI.
And I don't believe we're basically at AGI.
I think if you had the country of geniuses in a data center, if your company didn't adopt the country of geniuses...
If you had the country of geniuses in a data center, we would know it.
Right, yeah.
We would know it if you had the country of geniuses in a data center.
Like, everyone in this room would know it.
Everyone in Washington would know it.
Like, you know, people in rural parts might not know it.
But like, we would know it.
We don't have that now.
That's very clear.
As Dario was getting at, to get generalization,
you need to train across a wide variety of realistic tasks and environments.
For example, with a sales agent,
the hardest part isn't teaching it to mash buttons in a specific database in Salesforce.
It's training the agent's judgment across ambiguous situations.
How do you sort through a database with thousands of leads to figure out which ones are hot?
How do you actually reach out? What do you do when you get ghosted?
When an AI lab wanted to train a sales agent, Labelbox brought in dozens of Fortune 500 salespeople
to build a bunch of different RL environments.
They created thousands of scenarios where the sales agent had to engage with the potential customer,
which was role played by a second AI.
Labelbox made sure that this customer AI had a few different personas,
because when you cold call, you have no idea who's going to be on the other end.
You need to be able to deal with a whole range of possibilities.
Labelbox's sales experts monitored these conversations turn by turn,
tweaking the role-playing agent to ensure it did the kinds of things an actual customer would do.
Labelbox could iterate faster than anybody else in the industry.
This is super important because RL is an empirical science.
It's not a solved problem.
Labelbox has a bunch of tools for monitoring agent performance in real time.
This lets their experts keep coming up with tasks so that the model stays in the right distribution of difficulty and gets the optimal reward signal during training.
Labelbox can do this sort of thing in almost every domain.
They've got hedge fund managers, radiologists, even airline pilots.
So whatever you're working on, Labelbox can help.
Learn more at labelbox.com/dwarkesh.
Coming back to concrete predictions: because there are so many different things to disambiguate, it can be easy to talk past each other when
we're talking about capabilities. So, for example, when I interviewed you three years ago,
I asked for your prediction about what we should expect three years from now. I think you were right.
So you said, we should expect systems, which if you talk to them for the course of an hour,
it's hard to tell them apart from a generally well-educated human.
Yes.
And I think you were right about that. But spiritually, I feel unsatisfied, because my internal expectation
was that such a system could automate large parts of white-collar work. And so it might be more
productive to talk about the actual
end capabilities you want from such a system.
So I will basically tell you
what, you know,
where I think we are.
But let me ask a very specific question so that we can
figure out exactly what kinds of capabilities
we should expect. So maybe I'll
ask about it in the context of a job
I understand well, not because it's the most
relevant job, but just because I can evaluate
the claims about it.
Take video editors, right? I have video editors.
And part of their
job involves learning
about our audience's preferences, learning about my preferences and taste and the different tradeoffs
we have and just over the course of many months building up this understanding of context.
And so the skill and ability they have six months into the job, a model that can pick up that
skill on the job, on the fly, when should we expect such an AI system?
Yeah. So I guess what you're talking about is like, you know, we're doing this interview
for three hours and then like, you know, someone's going to come in, someone's going to edit it.
They're going to be like, oh, you know, I don't know, Dario scratched his head,
and we could edit that out, and, you know, identify that
there was this, like, long discussion that is less interesting to people,
and then this other thing that's, like, more interesting to people.
So, you know, let's kind of make this edit.
So, you know, I think the country of geniuses in a data center will be able to do that.
The way it will be able to do that is, you know, it will have general control of a computer screen, right?
Like, you know, and you'll be able to feed this in.
And it'll be able to also use the computer screen to, like, go on the web,
look at all your previous interviews, look at what people are saying on Twitter in response to your interviews, talk to you, ask you questions, talk to your staff, look at the history of the kind of edits that you did, and from that, like, do the job.
Yeah.
So I think that's dependent on several things.
One, it's dependent on, and I think this is one of the things that's actually blocking deployment, getting to the point on computer use where the models are really masters at using the computer, right?
And, you know, we've seen this climb in benchmarks.
And benchmarks are always, you know, imperfect measures.
But, like, OSWorld went from, you know, like 5%... I think when we first released computer use, like a year and a quarter ago,
it was like maybe 15%.
I don't remember exactly.
But we've climbed from that to like 65 or 70%.
And, you know, there may be harder measures as well.
But I think computer use has to pass a point of reliability.
Can I just ask a follow-up on that before you move on to the next point?
For years, I've been trying to build different internal LLM tools for myself.
And often I have these text-in, text-out tasks, which should be dead center in the repertoire of these models.
And yet I still hire humans to do them just because if it's something like,
identify what the best clips would be in this transcript.
And maybe they'll do, like, a seven-out-of-ten job at it,
but there's not this ongoing way I can engage with them
to help them get better at the job
the way I could with a human employee.
And so that missing ability,
even if you solved computer use,
would still block my ability to, like,
offload an actual job to them.
Again, this gets back to what we were talking about
before with learning on the job,
where it's very interesting.
You know, I think with the coding agents,
like, I don't think people would say
that learning on the job is what is, you know,
preventing the coding agents from like, you know, doing everything end to end.
Like, they keep getting better.
We have engineers at Anthropic who, like, don't write any code.
And when I look at the productivity, to your previous question, you know, we have folks
who say this, this GPU kernel, this chip, I used to write it myself, I just have
Claude do it.
And so there's this enormous improvement in productivity.
And I don't know, when I see Claude Code, familiarity with
the code base, or, you know, a feeling that the model hasn't worked at the company
for a year, that's not high up on the list of complaints I see. And so I think what I'm saying
is we're kind of taking a different path. Don't you think with coding that's because
there is an external scaffold of memory which exists instantiated in the code base, which I don't
know how many other jobs have? Has coding made fast progress precisely because it has this unique
advantage that other economic activity doesn't? But when you
say that, what you're implying is that by reading the code base into the context, I have everything
that the human needed to learn on the job. So that would be an example of whether it's written or not,
whether it's available or not, a case where everything you needed to know you got from the
context window, right? And that what we think of as learning, like, oh, man, I started this job.
It's going to take me six months to understand the code base. The model just did it in the context.
Yeah, I honestly don't know how to think about this, because there are people who qualitatively report what you're saying.
There was a METR study, I'm sure you saw it, last year, where they had experienced developers try to close a pull request in repositories that they were familiar with.
And those developers reported an uplift.
They reported that they felt more productive with the use of these models.
But in fact, if you look at their output and how much was actually merged back in, there's a 20% downlift.
They were less productive as a result of using the models.
And so I'm trying to square the qualitative feeling that people feel with these models versus, one, in a macro level, where is this like renaissance of software?
And two, when people do these independent evaluations, why are we not seeing the productivity benefits that we would expect?
Within Anthropic, this is just really unambiguous, right?
We're under an incredible amount of commercial pressure, and we make it even harder for ourselves because we have all this safety stuff we do, which I think we do more of than other companies.
So, like, the pressure to survive economically while also keeping our values is just incredible, right?
We're trying to keep this 10x revenue curve going.
There's like there is zero time for bullshit.
There is zero time for feeling like we're productive when we're not.
Like, these tools make us a lot more productive.
Like, why do you think we're concerned about competitors using the tools?
Because we think we're ahead of the competitors,
and we don't want to accelerate them.
We wouldn't be going through all this trouble
if this was secretly reducing our productivity.
Like we see the end productivity every few months
in the form of model launches.
Like there's no kidding yourself about this.
Like the models make you more productive.
One, people feeling like they're more productive
is qualitatively predicted by studies like this.
But two, if I just look at
the end output, obviously you guys are making fast progress. But, you know, the idea
with recursive self-improvement was supposed to be that you make a better AI, the AI helps you build
a better next AI, et cetera, et cetera. And what I see instead, if I look at you, OpenAI, DeepMind,
is that people are just shifting around the podium every few months. And maybe you think that stops
because you won or whatever. But why are we not seeing the person with the best coding model
have this lasting advantage if, in fact, there are these enormous productivity gains from
the latest coding model?
So, no, no.
I mean, my model of the situation is there's an advantage
that's gradually growing.
Like, I would say right now the coding models give maybe, I don't know, like a 15, maybe 20% total
factor speedup.
Like, that's my view.
And six months ago, it was maybe 5%.
And so it didn't matter.
Like 5% doesn't register.
It's now just getting to the point where it's like one of several factors that kind of matters.
And that's going to keep speeding up.
And so I think six months ago, like, you know, there were several companies that were at roughly the same point because, you know, this wasn't a notable factor.
But I think it's starting to speed up more and more.
You know, I would also say there are multiple companies that write models that are used for code, and, you know, we're not perfectly good at preventing some of these other companies from kind of using our models internally.
So, you know, I think everything we're seeing is consistent with this kind of snowball model where there's no hard takeoff.
My theme in all of this is like all of this is soft takeoff, like soft, smooth exponentials,
although the exponentials are relatively steep.
And so we're seeing this snowball gather momentum where it's like 10%, 20%, 25%, you know,
40%.
And as you go, yeah, Amdahl's law, you have to get all the things that are preventing
you from closing the loop out of the way.
But like, this is one of the biggest priorities within Anthropic.
Stepping back, I think earlier in the stack we were talking about, well, when do we get this on-the-job learning?
And it seems like the point you were making with the coding thing is we actually don't need on-the-job learning.
That you can have tremendous productivity improvements.
You can have potentially trillions of dollars of revenue for AI companies without this basic human ability.
Maybe that's not your claim.
You should clarify.
But without this basic human ability to learn on the job.
But I just look at it like, in most
areas of economic activity, people say: I hired somebody, they weren't that useful for the
first few months, and then over time, they built up the context and understanding. It's actually
hard to define what we're talking about here. But they got something. And now they're a
powerhouse and they're so valuable to us. And if AI doesn't develop this ability to learn
on the fly, I'm a bit skeptical that we're going to see huge changes to the world.
Yeah. So I think two things here, right? There's the state of the technology right now,
which is, again, we have these two stages.
We have the pre-training and RL stage where you throw a bunch of data and tasks into the models,
and then they generalize.
So it's like learning, but it's like learning from more data and not, you know,
not learning over kind of one human or one model's lifetime.
So again, this is situated between evolution and human learning.
But once you learn all those skills, you have them.
And it's just like with pre-training: the models just know more.
Or, you know, if I look at a pre-trained model, it knows more about the history of samurai in Japan than I do.
It knows more about baseball than I do.
It knows more about, you know, low-pass filters and electronics.
You know, all of these things, its knowledge is way broader than mine.
So I think even just that, you know, may get us to the point where the models are kind of better at everything.
And then we also have, again, just with scaling the kind of existing setup, we have the in-context learning, which I would describe as kind of like human on-the-job learning, but like a little weaker and a little short term.
Like you look at in-context learning, you give the model a bunch of examples.
It does get it.
There's real learning that happens in context.
And like a million tokens is a lot.
That's, you know, that can be days of human learning, right?
You know, if you think about the model, you know, kind of reading a million words.
Words, you know, it takes me, how long would it take me to read a million? I mean, you know, like days or weeks at least. So you have these two things. And I think these two things within the existing paradigm may just be enough to get you the country of geniuses in the data center. I don't know for sure, but I think they're going to get you a large fraction of it. There may be gaps. But I certainly think just as things are, this I believe, is enough to generate trillions of dollars of revenue. That's one. That's all one. Two,
is this idea of continual learning, this idea of a single model learning on the job,
I think we're working on that too. And I think there's a good chance that in the next year or two,
we also solve that. But again, I think you get most of the way
there without it. I think the trillions-of-dollars-a-year market, maybe all of the national security implications and the
safety implications that I wrote about in The Adolescence of Technology, can happen without it. But I also
think we, and I imagine others, are working on it. And I think there's a good chance that we get there
within the next year or two. There are a bunch of ideas. I won't go into all of them in detail,
but, you know, one is just make the context longer. There's nothing preventing longer context from
working. You just have to train at longer context and then learn to serve it at inference. And
both of those are engineering problems that we are working on, and that I would assume others are working on as well.
Yeah. So this context length increase: it seemed like there was a period from 2020 to 2023 where, from GPT-3 to GPT-4 Turbo, there was an increase from like a 2,000-token context to 128K.
I feel like for the two-ish years since then, we've been in the sameish ballpark.
Yeah.
And when context lengths get much longer than that, people report qualitative degradation in the ability of the model to consider that full context.
So I'm curious what you're internally seeing that makes you think, like, oh, 10 million context, 100 million context, or to get human-like six-month learning, a billion context.
This isn't a research problem.
This is an engineering and inference problem, right?
If you want to serve long context, you have to, like, store your entire KV cache.
It's difficult to store all that memory in the GPUs, to juggle the memory around.
I don't even know the details.
You know, at this point, this is at a level of detail
that I'm no longer able to follow, although I knew it in the GPT-3 era of, like, these are the weights, these are the activations you have to store. But these days the whole thing is flipped, because we have MoE models and all of that. And this degradation you're talking about: again, without getting too specific, a question I would ask is, there are two things. There's the context length you train at, and there's the context length that you serve at. If you train at a small context length
and then try to serve at a long context length,
like maybe you get these degradations.
It's better than nothing.
You might still offer it,
but you get these degradations.
And maybe it's harder to train at a long context length.
So, you know, there's a lot.
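To give a rough sense of why serving long context is an inference-engineering problem, here is a back-of-envelope KV-cache calculation. The architecture numbers are hypothetical placeholders, not any real Claude model.

```python
# Back-of-envelope KV-cache sizing for a long context window.
# All architecture numbers are assumed for illustration.
layers = 80          # transformer layers (hypothetical)
kv_heads = 8         # key/value heads per layer (hypothetical, e.g. grouped-query attention)
head_dim = 128       # dimension per head (hypothetical)
bytes_per_value = 2  # fp16 / bf16
context = 1_000_000  # one million tokens

# Keys and values are both cached, hence the extra factor of 2.
kv_bytes = context * layers * kv_heads * head_dim * 2 * bytes_per_value
print(f"KV cache per sequence: ~{kv_bytes / 1e9:.0f} GB")
# Roughly 328 GB for a single million-token sequence under these assumptions,
# which is why the memory has to be sharded and juggled across many GPUs.
```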
I want to at the same time ask about
maybe some rabbit holes, like,
well, wouldn't you expect that if you have to train at a longer context length,
that would mean that you're able to get, sort of, fewer samples in
for the same amount of compute?
But maybe it's not worth diving deep on that.
I want to get an answer to the bigger picture question,
which is: okay, the point where I don't feel a preference for a human editor that's been working for me for six months versus an AI that's been working with me for six months.
What year do you predict that will be the case?
I mean, you know, my guess for that is, there are a lot of problems that are basically like, we can do this when we have the country of geniuses in a data center.
And so my picture for that is, you know, again,
if you made me guess, it's like one to two years, maybe one to three years.
It's really hard to tell.
I have a strong view, 90, 95 percent, that all this will happen within 10 years.
Like, I think that's just a super safe bet.
Yeah.
And then I have a hunch.
This is more like a 50-50 thing, that it's going to be more like one to two, maybe more like one to three.
So one to three years.
A country of geniuses, and the slightly less economically valuable task of editing videos.
It seems pretty economically valuable, let me tell you.
It's just there are a lot of use cases like that, right?
There are a lot of similar ones.
So you're predicting that within one to three years.
And generally, Anthropic has predicted that by late 26, early 27, we will have AI systems that, quote, have the ability to navigate interfaces available to humans doing digital work today, intellectual capabilities matching or exceeding that of Nobel Prize winners, and the ability to interface with the physical world.
and then you gave an interview two months ago with DealBook,
where you were emphasizing your company's more responsible compute scaling
as compared to your competitors.
And I'm trying to square these two views where if you really believe
that we're going to have a country of geniuses,
you want as big a data center as you can get.
There's no reason to slow down.
The TAM of a Nobel Prize winner that can actually do everything a Nobel Prize winner can do
is, like, trillions of dollars.
And so I'm trying to square this conservatism,
which seems rational if you have more moderate timelines, with your stated views about AI progress.
Yeah. So it actually all fits together. And we go back to this fast, but not infinitely fast, diffusion. So let's say that we're making progress at this rate. You know, the technology is making progress this fast. Again, I have very high conviction that we're going to get there within a few years.
I have a hunch that we're going to get there within a year or two.
So a little uncertainty on the technical side, but like, you know, pretty strong confidence that it won't be off by much.
What I'm less certain about is, again, the economic diffusion side.
Like, I really do believe that we could have models that are a country of geniuses in a data center in one to two years.
One question is, how many years after that do the trillions in revenue start rolling in? I don't think it's guaranteed that it's going to be immediate. You know, I think it could be one year. It could be two years. I could even stretch it to five years, although I'm skeptical of that. And so we have this uncertainty, which is, even if the
technology goes as fast as I suspect that it will, we don't know exactly how fast it's going
to drive revenue. We know it's coming, but with the way you buy these data centers,
if you're off by a couple years, that can be ruinous. It is just like how I wrote, you know,
in Machines of Loving Grace. I said, look, I think we might get this powerful AI, this country
of geniuses in the data center. That description you gave comes from Machines of Loving Grace.
I said we'll get that in 2026, maybe 2027. Again, that is my
hunch. I wouldn't be surprised if I'm off by a year or two, but, like, that is my hunch.
Let's say that happens.
That's the starting gun.
How long does it take to cure all the diseases, right?
That's one of the ways that, like, drives a huge amount of economic value, right?
Like, you cure every disease, you know, there's a question of how much of that goes to the
pharmaceutical company, to the AI company, but there's an enormous consumer surplus because
everyone, you know, assuming we can get access for everyone, which I care about greatly,
we, you know, we cure all of these diseases.
How long does it take? You have to do the biological discovery. You have to manufacture the new drug. You have to go through the regulatory process. I mean, we saw this with, like, vaccines and COVID, right? We got the vaccine out to everyone, but it took a year and a half, right? And so my question is, how long does it take to get the cure for everything, which the AI genius can in theory invent, out to everyone? How long from when that AI first
exists in the lab to when diseases have actually been cured for everyone, right?
And, you know, we've had a polio vaccine for 50 years. We're still trying to eradicate it in the
most remote corners of Africa. And, you know, the Gates Foundation is trying as hard as they can.
Others are trying as hard as they can. But, you know, that's difficult. Again, I, you know,
I don't expect most of the economic diffusion to be as difficult as that, right? That's like the most
difficult case. But there's a real dilemma here. And where I've settled on it is, it will be
faster than anything we've seen in the world, but it still has its limits.
And so then when we go to buying data centers, again, the curve I'm looking at is,
okay, we've had a 10x increase in revenue every year. So at the beginning of this year,
we're looking at 10 billion in annualized revenue.
We have to decide how much compute to buy.
And, you know, it takes a year or two to actually build out the data centers, to reserve the data centers.
So basically, I'm saying, like, in 2027, how much compute do I get?
Well, I could assume that the revenue will continue growing 10x a year. So it'll be 100 billion at the end of
2026 and one trillion at the end of 2027. And so I could buy a trillion dollars... actually, it would be like five trillion dollars of compute,
because it would be a trillion dollars a year for five years, right? I could buy a trillion dollars
of compute that starts at the end of 2027. And if my revenue is not a trillion dollars,
if it's even 800 billion, there's no force.
on earth. There's, there's no hedge on earth that could stop me from going bankrupt if I,
if I buy that much compute. And, and so, even though a part of my brain wonders if it's going to
keep growing 10x, I can't buy a trillion dollars a year of compute in, in, in, in, in, in, in, in, in, in, if I'm
just off by a year in that rate of growth, or if the, the growth rate is five X a year instead
of 10x a year, then, then, you know, you go bankrupt. And, and, and, and, and, and, and, and, and, and, and, and, and so,
you end up in a world where, you know, you're supporting hundreds of billions, not trillions,
and you accept some risk that there's so much demand that you can't support the revenue,
and you accept still some risk that, you know, you got it wrong and it's still so.
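To make the arithmetic here concrete, here is a minimal Python sketch of the bet being described, treating the stylized figures from the conversation (a $10B annualized starting point, a 10x-per-year assumption, a trillion-dollar-a-year commitment) purely as assumptions:

```python
# Toy sketch of the bet described above: commit to compute based on a
# projected revenue growth rate, then see what happens if growth comes in
# lower or a year late. All numbers are illustrative, taken from the
# conversation's stylized figures, not from any company's actual books.

def projected_revenue(start_rev_b: float, growth_per_year: float, years: int) -> float:
    """Revenue (in $B) after `years` of compounding at `growth_per_year`x."""
    return start_rev_b * growth_per_year ** years

start = 10.0  # ~$10B annualized revenue at the beginning of the year

# Assuming 10x/year growth: ~$100B at the end of 2026, ~$1T at the end of 2027.
for years_out, label in [(1, "end of 2026"), (2, "end of 2027")]:
    print(f"{label}: ~${projected_revenue(start, 10, years_out):,.0f}B at 10x/year")

# Committing to ~$1T/year of compute for five years is a ~$5T obligation.
commitment_per_year_b = 1_000.0
print(f"5-year obligation: ~${commitment_per_year_b * 5:,.0f}B")

# If growth is 5x/year instead of 10x, or the 10x arrives one year late,
# revenue falls far short of the committed spend.
for growth, yrs, label in [(5, 2, "5x/year for two years"),
                           (10, 1, "10x/year but one year late")]:
    rev = projected_revenue(start, growth, yrs)
    print(f"{label}: ~${rev:,.0f}B revenue vs ~${commitment_per_year_b:,.0f}B/year "
          f"committed -> shortfall ~${commitment_per_year_b - rev:,.0f}B/year")
```

Even a one-year slip or a 5x-instead-of-10x growth rate leaves a shortfall of hundreds of billions per year against the commitment, which is the bankruptcy scenario being described.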
And so when I talked about behaving responsibly, what I meant actually was not the absolute amount. I think it is true we're spending somewhat less than some of the other players, but it's actually the other things: have we been thoughtful about it, or are we YOLOing and saying, oh, we're going to do $100 billion here or $100 billion there?
I kind of get the impression that, you know, some of the other companies have not written down the spreadsheet,
that they don't really understand the risks they're taking.
They're just kind of doing stuff because it sounds cool.
And we've thought carefully about it, right?
We're an enterprise business.
Therefore, you know, we can rely more on revenue.
It's less fickle than consumer.
We have better margins, which is the buffer between buying too much and buying too little.
And so I think we bought an amount that allows us to capture pretty strong upside worlds.
It won't capture the full 10x a year.
And things would have to go pretty badly for us to be in financial trouble.
So I think we've thought carefully and we've made that balance.
And that's what I mean when I say that we're being responsible.
Okay.
So it seems like it's possible that we actually just have different definitions of a country of geniuses in a data center. Because when I think of actual human geniuses, an actual country of human geniuses in a data center, I'm like, I would happily buy $5 trillion worth of compute to run an actual country of human geniuses in a data center. So let's say J.P. Morgan or Moderna or whatever doesn't want to use them. I've got a country of geniuses. They'll start their own company. And if they can't start their own company and they're bottlenecked by clinical trials, it's worth staying on clinical trials for a moment.
Most clinical trials fail because the drug doesn't work. There's no efficacy, right?
And I make exactly that point in Machines of Loving Grace.
I say the clinical trials are going to go much faster than we're used to, but not instantly fast.
And then suppose it takes a year for the clinical trials to work out so that you're getting revenue from that and you can make more drugs.
Okay, well, you've got a country of geniuses and you're an AI lab.
And you could use many more AI researchers. You also think that there are these self-reinforcing gains from smart people working on AI tech. So, okay, you can have the data center working on AI progress. Are there more gains, like substantially more gains, from buying a trillion dollars a year of compute versus $300 billion a year of compute?
If your competitor is buying a trillion, yes, there is.
Well, no, there's some gain.
But then, again, there's this chance that they go bankrupt before... again, if you're off by only a year, you destroy yourself. That's the balance. We're buying a lot. We're buying a hell of a lot. Like, we're buying an amount that's comparable to what the biggest players in the game are buying. But if you're asking me why we haven't signed $10 trillion of compute starting in mid-2027: first of all, it can't be produced. There isn't that much in the world. But second, what if the country of geniuses comes, but it comes in mid-2028 instead of mid-2027? You go bankrupt.
So if your projection is one to three years, it seems like you should want $10 trillion of compute by 2029, maybe 2030. It seems like even in the longest version of the timelines you state, the compute you're ramping up to build doesn't seem in accordance.
What makes you think that?
Well, as you said, you'd want the $10 trillion. Human wages, let's say, are on the order of $50 trillion.
If you look at... so I won't talk about Anthropic in particular, but if you talk about the industry, the amount of compute the industry is building this year is probably in the very low tens of gigawatts, call it 10 or 15 gigawatts. It goes up by roughly 3x a year. So next year is 30 or 40 gigawatts, and 2028 might be 100, 2029 might be like 300 gigawatts. And each gigawatt costs... I mean, I'm doing the math in my head, but each gigawatt costs maybe $10 billion, on the order of $10 to $15 billion a year. So you put that all together and you're getting about what you described. You're getting multiple trillions a year by 2028 or 2029.
So you're getting exactly that.
You're getting exactly what you predict.
That's for the industry.
That's for the industry.
So suppose Anthropic's compute keeps 3x-ing a year, and then by like 2027 or 2028 you have 10 gigawatts. Multiply that by, as you say, $10 billion, so then it's like $100 billion a year. But then you're saying the TAM by 2028, 2029...
I don't want to give exact numbers for Anthropic,
but these numbers are too small.
These numbers are too small.
Okay, interesting.
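A rough back-of-the-envelope version of the industry buildout math in this exchange, taking the gigawatt counts, the roughly-3x growth rate, and the dollars-per-gigawatt figure Dario cites as assumptions:

```python
# Back-of-the-envelope version of the industry buildout math above.
# Assumptions taken from the conversation: roughly 10-15 GW built this year,
# ~3x growth per year, and of order $10-15B of spend per gigawatt per year.

gw = 12.5               # midpoint of "10 to 15 gigawatts" for 2026
growth_per_year = 3.0   # "goes up by roughly 3x a year"
cost_per_gw_b = 12.5    # midpoint of "$10 to $15 billion" per GW per year

print(f"2026: ~{gw:,.0f} GW, ~${gw * cost_per_gw_b / 1_000:,.1f}T of annual spend")
for year in (2027, 2028, 2029):
    gw *= growth_per_year
    print(f"{year}: ~{gw:,.0f} GW, ~${gw * cost_per_gw_b / 1_000:,.1f}T of annual spend")
# By 2028-2029 this lands in the "multiple trillions a year" range described above.
```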
I'm really proud that the puzzles I've worked on with Jane Street,
have resulted in them hiring a bunch of people from my audience.
Well, they're still hiring, and they just sent me another puzzle.
For this one, they spent about 20,000 GPU hours
training backdoors into three different language models.
Each one has a hidden prompt that elicits completely different behavior.
You just have to find the triggers.
This is particularly cool, because finding backdoors is actually an open question
in Frontier AI research.
Anthropic actually released a couple of papers about sleeper agents,
and they show that you can build a simple classifier on the residual stream
to detect when a backdoor is about to fire.
But they already knew what the triggers were because they built them.
Here, you don't.
And it's not feasible to check the activations for all possible trigger phrases.
Unlike the other puzzles they made for this podcast,
Jane Street isn't even sure this one is solvable.
But they've set aside $50,000 for the best attempts and write-ups.
The puzzles live at janestreet.com/dwarkesh.
And they're accepting submissions until April 1st.
All right, back to Dario.
You've told investors that you plan to be profitable starting in 2028. And this is the year where we're potentially getting the country of geniuses in the data center, and this is going to unlock all this progress in medicine and health and et cetera, et cetera, and new technologies. Wouldn't this be exactly the time when you'd want to reinvest in the business and build bigger countries of geniuses so they can make more discoveries?
Yeah. So, I mean, profitability is this kind of weird thing in this field. I don't think in this field profitability is actually a measure of spending down versus investing in the business. Let's just take a model of this. I actually think profitability happens when you underestimated the amount of demand you were going to get, and loss happens when you overestimated the amount of demand you were going to get, because you're buying the data centers ahead of time. So think about it this way. And again, these are stylized facts. These numbers are not exact. I'm just trying to make a toy model here. Let's say half of your compute is for training and half of your compute is for inference. And the inference has some gross margin that's more than 50%. So what that means is that if you were in steady state, you build a data center, and if you knew exactly the demand you were getting, you would get a certain amount of revenue. Say you pay a hundred billion dollars a year for compute, and on $50 billion of that you support $150 billion of revenue. Or sorry, not today, but that's where we're projecting forward in a year or two.
The only thing that makes that not the case is if you get less demand than $50 billion worth, then you have more than 50% of your data center for research and you're not profitable. You train stronger models, but you're not profitable. If you get more demand than you thought, then your research gets squeezed, but you're able to support more inference and you're more profitable. Maybe I'm not explaining it well, but the thing I'm trying to say is you decide the amount of compute first, and then you have some target split of inference versus training, but that split gets determined by demand. It doesn't get determined by you.
What I'm hearing is the
reason you're predicting profit is that you are systematically underinvesting in compute, right?
Because if you actually...
I'm saying it's hard to predict. So these things about 2028 and when it will happen, that's our attempt to do the best we can with investors.
All of this stuff is really uncertain because of the cone of uncertainty.
Like, we could be profitable in 2026 if the revenue grows fast enough.
And then, you know, if we overestimate or underestimate the next year, that could swing wildly.
What I'm trying to get at is: you have a model in your head where the business invests, invests, invests, gets scale, and then becomes profitable, that there's a single point at which things turn around. I don't think the economics of this industry work that way.
I see. So if I'm understanding correctly,
you're saying because of the discrepancy between the amount of compute we should have gotten and the
amount of compute we got, we were like sort of forced to make profit. But that doesn't mean we're
going to continue making profit. We're going to like reinvest the money because, well, now AI has made
so much progress and we want the bigger country of geniuses, and so then back into...
Revenue is high, but losses are also high. If every year we predict exactly what the demand is going to be, we'll be profitable every year. Because spending roughly 50% of your compute on research, plus a gross margin that's higher than 50%, and correct demand prediction, leads to profit.
That's the profitable business model that I think is kind of there, but obscured by this building ahead and these prediction errors.
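A minimal sketch of the toy profitability model just described, using only the stylized numbers from this answer: you buy the compute ahead of time, and realized demand then determines the inference/research split and the profit or loss.

```python
# Minimal sketch of the toy profitability model above. Stylized numbers only:
# you buy the compute ahead of time; realized demand then determines how the
# data center splits between inference and research, and hence profit or loss.

def yearly_pnl(compute_cost_b: float, demand_revenue_b: float, gross_margin: float):
    """Return (inference_compute, research_compute, profit), all in $B.

    Serving `demand_revenue_b` of revenue consumes revenue * (1 - gross_margin)
    of compute; whatever is left over goes to research/training.
    """
    inference_compute = demand_revenue_b * (1 - gross_margin)
    research_compute = compute_cost_b - inference_compute
    profit = demand_revenue_b - compute_cost_b
    return inference_compute, research_compute, profit

compute = 100.0   # ~$100B/year of compute bought in advance
margin = 2 / 3    # ~$50B of inference compute supporting ~$150B of revenue

for demand in (75.0, 150.0, 200.0):   # demand that actually materializes
    inf, research, profit = yearly_pnl(compute, demand, margin)
    print(f"demand ${demand:.0f}B -> inference ${inf:.0f}B, "
          f"research ${research:.0f}B, profit ${profit:+.0f}B")
```

Under-forecast demand leaves plenty of research compute but a loss; over-forecast demand squeezes research but shows a profit, which is the asymmetry being described.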
I guess you're treating the 50% as sort of a given constant. Whereas in fact, if AI progress is fast and you can increase the progress by scaling up more, you should just have more than 50% and not make profit.
Here's what I'll say. You might want to scale it up more. But remember the log returns to scale, right? Going to 70% would only get you a very little bit of a smarter model, a factor of 1.4x in training compute, right? That extra $20 billion, each dollar there, is worth much less to you because of the log-linear setup. And so you might find that it's better to invest that $20 billion in serving inference, or in hiring engineers who are better at what they're doing. So the reason I said 50%: that's not exactly our target, it's not exactly going to be 50%, and it will probably vary over time. What I'm saying is that the log-linear return leads you to spend an order-one fraction of the business on training, right? Not 5%, not 95%, because you get diminishing returns from the log scaling.
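A hedged illustration of that log-returns point: if capability gains go roughly with the log of training compute, the jump from 50% to 70% of a $100B budget buys only a small slice of an order of magnitude. The specific split is the stylized example from the conversation, not a real budget.

```python
import math

# Hedged illustration of the log-returns point above: if capability gains go
# roughly with log10 of training compute, bumping training from $50B to $70B
# (a 1.4x increase) buys only a small slice of an order of magnitude. The
# 50%/70% split is the stylized example from the conversation.

def oom_gain(old_spend_b: float, new_spend_b: float) -> float:
    """Fraction of one order of magnitude gained by scaling training spend."""
    return math.log10(new_spend_b / old_spend_b)

print(f"$50B -> $70B of training:  {oom_gain(50, 70):.2f} of an order of magnitude")
print(f"$50B -> $500B of training: {oom_gain(50, 500):.2f} of an order of magnitude")
```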
I'm like convincing Dario to like believe in AI progress or something.
But like, okay, you don't invest in research because it has diminishing returns, but you
invest in the other things you mentioned.
Again, we're talking about diminishing returns after you're spending $50 billion a year, right?
This is a point I'm sure you would make, but even the diminished returns on a genius could be quite high. And more generally, what is profit in a market economy? Profit is basically saying the other companies in the market can do more things with this money than I can.
Yeah, put aside Anthropic. I'm just trying to,
because I don't want to give information about Anthropic, which is why I'm giving these stylized numbers, but let's just derive the equilibrium of the industry, right?
So why doesn't everyone spend 100% of their compute on training and not serve any customers, right? It's because if they didn't get any revenue, they couldn't raise money, they couldn't do compute deals, they couldn't buy more compute the next year. So there's going to be an equilibrium where every company spends less than 100% on training. And certainly less than 100% on inference; it should be clear why you don't just serve the current models and never train another model, because then you won't have any demand, because you'll fall behind. So there's some equilibrium. It's not going to be 10%, it's not going to be 90%. Let's just say as a stylized fact it's 50%. That's what I'm getting at. And I think we're going to be in a position where that equilibrium of how much you spend on training is less than the gross margins that you're able to get on compute.
And so the underlying economics are profitable.
The problem is you have this hellish demand prediction problem when you're buying the next year of compute.
And you might guess under and be very profitable but have no compute for research.
Or you might guess over, and you are not profitable, and you have all the compute for research in the world.
Does that make sense?
Just as a dynamic model of the industry.
Maybe stepping back, I'm like, I'm not saying I think the country of genius is going to come in two years and therefore you should buy this compute.
To me, what you're saying, the end conclusion you're arriving at makes a lot of sense.
But that's because, like, oh, it seems like the country of geniuses is hard and there's a long way to go. Stepping back, the thing I'm trying to get at is that it seems like your worldview is compatible with somebody who says we're like 10 years away from a world in which we're generating trillions of dollars.
And that's just not my view.
Yeah, that is not my view. So I'll make another prediction. It is hard for me to see that there won't be trillions of dollars in revenue before 2030. I can construct a plausible world where it takes maybe three years, so that would be the end of what I think is plausible. In 2028, we get the real country of geniuses in the data center. The revenue's been growing and is maybe in the low hundreds of billions by 2028. And then the country of geniuses accelerates it to trillions, and we're basically on the slow end of diffusion, so it takes two years to get to the trillions. That would be the world where it takes until 2030. I suspect that even composing the technical exponential and the diffusion exponential, we'll get there before 2030.
So you laid out a model where Anthropic makes profit because it seems like fundamentally we're in a compute-constrained world, and so eventually we keep growing compute.
No, I think the way the profit comes is, again,
and let's just abstract the whole industry here.
Let's just imagine we're in an economics textbook. We have a small number of firms; each can invest some fraction in R&D. They have some marginal cost to serve. The gross profit margins on that marginal cost are very high, because inference is efficient. There's some competition, but the models are also differentiated. Companies will compete to push their research budgets up. But because there's a small number of players, we have, what is it called, the Cournot equilibrium, I think, which is the small-number-of-firms equilibrium. The point is it doesn't equilibrate to perfect competition with zero margins. If there are three firms in the economy, all independently behaving rationally, it doesn't equilibrate to zero.
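For reference, the small-number-of-firms equilibrium being reached for here is Cournot competition. A textbook sketch, with linear demand and constant marginal cost as purely illustrative assumptions rather than claims about the AI market, shows margins shrinking as firms enter but staying positive with a handful of players:

```python
# Textbook Cournot sketch: a few firms independently choosing quantities, with
# linear demand P = a - b*Q and constant marginal cost c. These functional
# forms are illustrative assumptions, not claims about the AI market; the point
# is only that per-firm margins shrink with entry but stay positive when there
# are a handful of players.

def cournot(n_firms: int, a: float = 100.0, b: float = 1.0, c: float = 20.0):
    """Symmetric Cournot equilibrium: (per-firm quantity, price, per-firm profit)."""
    q = (a - c) / (b * (n_firms + 1))
    price = a - b * n_firms * q
    profit = (price - c) * q
    return q, price, profit

for n in (1, 3, 10, 100):
    q, price, profit = cournot(n)
    print(f"{n:>3} firms: price {price:6.2f}, per-firm profit {profit:8.2f}")
```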
Help me understand that because right now we do have three leading firms and they're not making profit.
And so what, yeah, what is changing?
Yeah.
So, again, the gross margins right now are very positive.
What's happening is a combination of two things.
One is we're still in the exponential scale-up phase of compute.
So basically what that means is, a model gets trained. Let's say a model got trained last year that cost a billion dollars. And then this year it produced $4 billion of revenue and cost $1 billion to do inference from. So, again, I'm using stylized numbers here, but there would be 75% gross margins and this 25% tax, so that model as a whole makes $2 billion. But at the same time, we're spending $10 billion to train the next model, because there's an exponential scale-up. And so the company loses money. Each model makes money, but the company loses money. The equilibrium I'm talking about is one where we have the country of geniuses in the data center, but that model training scale-up has equilibrated more. Maybe it's still going up, we're still trying to predict the demand, but it's more leveled out.
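A short sketch of the "each model makes money, the company loses money" dynamic, using the stylized figures from this answer:

```python
# Sketch of the "each model makes money, the company loses money" dynamic,
# using the stylized figures from this answer (not real numbers).

last_model_training_b = 1.0    # last year's model cost ~$1B to train
revenue_b = 4.0                # it produces ~$4B of revenue this year
inference_cost_b = 1.0         # and costs ~$1B to serve (75% gross margin)
next_model_training_b = 10.0   # exponential scale-up: the next model costs ~$10B

gross_margin = (revenue_b - inference_cost_b) / revenue_b
model_pnl = revenue_b - inference_cost_b - last_model_training_b
company_pnl = revenue_b - inference_cost_b - next_model_training_b

print(f"gross margin: {gross_margin:.0%}")
print(f"the model as a standalone project: ${model_pnl:+.0f}B")
print(f"the company this year, while training the next model: ${company_pnl:+.0f}B")
```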
I'll give you a couple of things there.
So let's start with the current world.
In the current world, you're right that, as you said before,
if you treat each individual model as a company, it's profitable.
But, of course, a big part of the production function of being a frontier lab
is training the next model, right?
So if you didn't do that, then you'd make profit for two months.
And then you wouldn't have margins because you wouldn't have the best model.
And then so, yeah, you can make profits for two months on the current system.
At some point, that reaches the biggest scale that it can reach.
And then in equilibrium, we have algorithmic improvements, but we're spending roughly the same
amount to train the next model as we spent to train the current model.
So this equilibrium relies...
I mean, at some point you run out of money in the economy.
The fixed lump of labor fallacy.
The economy is going to grow, right?
That's one of your predictions.
Well, yes.
But this is...
Data centers in space.
But this is another example of the theme I was talking about, which is that the economy
will grow much faster with AI than I think it ever has before, but it's not like right now,
the compute is growing 3x a year.
Yeah.
I don't believe the economy is going to grow 300% a year.
Like I said this in Machines of Loving Grace.
Like I think we may get 10 or 20% per year growth in the economy, but we're not going to get
300% growth in the economy.
So I think in the end, you know, if compute becomes the majority of what the economy produces,
it's going to be capped by that.
So, okay, now let's assume a model where compute stays capped.
Yeah.
The world where frontier labs are making money is one where they continue to make fast progress,
because fundamentally your margin is limited by how good the alternative is.
And so you are able to make money because you have a frontier model.
If you didn't have a frontier model, you wouldn't be making money. And so this model requires there never to be a steady state. Like forever and ever, you keep making...
No, I don't think that's true. I mean, I feel like, is this an economics class? Do you know the Tyler Cowen show? We never stop talking about economics. So, no. But there are worlds in which... so I don't think this field's going to be a monopoly. All my lawyers never want me to say the word monopoly. But I don't think this field's going to be a monopoly. You do get industries in which there are a small number of players, not one, but a small number of players. And ordinarily, the way you get monopolies like Facebook, or Meta, I always call them Facebook, is these kind of network effects.
Yeah.
The way you get industries in which there are a small number of players is very high costs of entry, right? So cloud is like this.
I think cloud is a good example of this.
You have three, maybe four players within cloud.
I think that's the same for AI, three, maybe four.
And the reason is that it's so expensive, it requires so much expertise and so much capital to, like, run a cloud company.
Right.
And so you have to put up all this capital.
And then in addition to putting up all this capital, you have to get all of this other stuff that, like, you know, requires a lot of skill to, you know, to make it happen.
And so if you go to someone and you're like, I want to disrupt this industry, here's $100 billion, you're saying, okay, I'm putting up $100 billion and also betting that you can do all these other things that these people have been doing. And it'll only decrease the profit in the industry, because the effect of your entering is that the profit margins go down.
So, you know, we have equilibrium like this all the time in the economy where we have a few, we have a few players.
Profits are not astronomical.
Margins are not astronomical, but they're not zero, right?
And, you know, I think that's what we see on cloud.
Cloud is very undifferentiated.
Models are more differentiated than cloud, right?
Like, everyone knows Claude is good at different things than GPT is good at, than Gemini is good at. And it's not just that Claude's good at coding and GPT is good at math and reasoning; it's more subtle than that.
Like, models are good at different types of coding.
Models have different styles.
Like, I think these things are actually, you know, quite different from each other.
And so I would expect more differentiation than you see in cloud.
Now, there actually is one counterargument, and that counterargument is that if the process of producing models is something AI models can do themselves, then that could spread throughout the economy. But that is not an argument for commoditizing AI models in general. That's kind of an argument for commoditizing the whole economy at once. I don't quite know what happens in that world where basically anyone can do anything, anyone can build anything, and there's no moat around anything at all. I mean, I don't know, maybe we want that world. Maybe that's the end state here. Maybe when AI models can do everything, if we've solved all the safety and security problems, that's one of the mechanisms for the economy flattening itself again. But that's far post-country of geniuses in a data center.
Maybe a finer way to put that potential point
is one, it seems like AI research is especially loaded on raw intellectual power, which will be
especially abundant in a world with AGI.
And two, if you just look at the world today,
there's very few technologies that seem to be diffusing as fast as AI algorithmic progress.
And so that does hint that this industry is sort of structurally diffusive.
So I think coding is going fast, but I think AI research is a superset of coding, and there are aspects of it that are not going fast. But I do think, again, once we get coding, once we get AI models going fast, that will speed up the ability of AI models to do everything else. So while coding is going fast now, I think once the AI models are building the next AI models and building everything else, the whole economy will kind of go at the same pace. I am worried geographically, though. I'm a little worried that just proximity to AI, having heard about AI, may be one differentiator.
when I said the like, you know, 10 or 20 percent growth rate, a worry I have is that the growth rate
could be like 50 percent in Silicon Valley and, you know, parts of the world that are kind of
socially connected to Silicon Valley and, you know, not that much faster than its current pace
elsewhere.
And I think that'd be a pretty messed up world.
So one of the things I think about a lot is how to prevent that.
Yeah.
Do you think that once we have this country of geniuses at a data center that robotics is sort of quickly
solved afterwards because it seems like a big problem with robotics is that a human can learn
how to teleoperate current hardware, but current AI models can't, at least not in a way that's
super productive. And so if we have this ability to learn like a human, should it solve robotics
immediately as well?
I don't think it's dependent on learning like a human. It could happen in different ways.
Again, we could have trained the model on many different video games, which are like robotic
controls or many different simulated robotics environments or just, you know, train them to control
computer screens and they learn to generalize. So it will happen. It's not necessarily
dependent on human-like learning. Human-like learning is one way it could happen if the model's like,
oh, I pick up a robot, I don't know how to use it, I learn. That could happen because we discover continual learning. That could also happen because we train the model on a bunch of environments and it then generalizes, or it could happen because the model learns that in the context length. It doesn't actually matter which way. If we go back to the discussion we had an hour ago, that type of thing can happen in several different ways.
But I do think when for whatever reason the models have those skills, then robotics will be
revolutionized, both the design of robots because the models will be much better than humans
at that, and also the ability to kind of control robots.
So we'll get better at building the physical hardware, the physical robots, and we'll also get better at controlling them. Now, you know, does that mean the robotics
industry will also be generating trillions of dollars of revenue? My answer there is yes,
but there will be the same extremely fast but not infinitely fast diffusion. So will robotics be
revolutionized? Yeah, maybe tack on another year or two. That's my, that's the way I think
about these things.
There's a general skepticism I have about extremely fast progress. Here's my worry, which is: it sounds like you are going to solve continual learning one way or another within a matter of years. But just as people weren't talking about continual learning a couple of years ago, and then we realized, oh, why aren't these models as useful as they could be right now, even though they are clearly passing the Turing test and are experts in so many different domains? Maybe it's this thing. And then we solve this thing and we realize, actually, there's another thing that human intelligence can do, and that's a basis of human labor, that these models can't do. So why not think there will be more things like this? Why think that we've found all the pieces of human intelligence?
Well, to be clear, I think continual learning, as I've said before, might not be a barrier at all, right? I think we maybe just get there by pre-training generalization and RL generalization. There basically might not be such a thing at all. In fact, I would point to the history in ML of people coming up with things that are barriers that end up dissolving within the big blob of compute, right? People talked about: how do your models keep track of nouns and verbs? They can understand syntactically, but they can't understand semantically; it's only statistical correlations. They can understand a word, but they can't understand a paragraph. There's reasoning; they can't do reasoning. But then suddenly it turns out they can do code and math very well after all. So I think there's actually a stronger history of some of these things seeming like a big deal and then kind of dissolving.
Some of them are real.
I mean, the need for data is real.
Maybe continual learning is a real thing.
But again, I would ground us in something like code. I think we may get to the point in a year or two where the models can just do SWE end-to-end. That's a whole task. That's a whole sphere of human activity where we're just saying models can do it now.
When you say end-to-end, do you mean setting technical direction, understanding the context of the problem?
Yes.
Yes.
I mean, that is, I feel, AGI-complete, which maybe is internally consistent.
But it's not like saying 90% of code or 100% of code.
It's like, no, no, the other parts of the job as well.
No, no, I gave this spectrum:
90% of code, 100% of code, 90% of end-to-end SWE, 100% of end-to-end SWE, new tasks are created for SWEs.
Eventually, those get done as well.
Yeah, it's a long spectrum.
It makes sense.
But we're traversing the spectrum very quickly.
Yeah. I do think it's funny that I've seen a couple of podcasts you've done where the host will be like, ah, but what about the continual learning thing? And it always makes me crack up, because you've been an AI researcher for like 10 years. I'm sure there's some feeling of, okay, so a podcaster wrote an essay, and now every interview I get asked about it.
You know, the truth of the matter is that we're all trying to figure this out together.
Yeah.
Right.
There are some ways in which I'm able to see things that others aren't.
These days, that probably has more to do with like I can see a bunch of stuff within Anthropic
and have to make a bunch of decisions, than that I have any great research insight that others don't, right? I'm running a 2,500-person company. It's actually pretty hard for me to have concrete research insight, much harder than it would have been 10 years ago, or even two or three years ago.
As we go towards a world of a full drop-in remote worker replacement, does an API pricing model still make the most sense? And if not, what is the correct way to price AGI or serve AGI?
Yeah, I mean, I think there's going to be a bunch of different business models here, sort of all at once, that are going to be experimented with.
I actually do think that the API model is more durable than many people think.
One way I think about it is if the technology is kind of advancing quickly, if it's advancing exponentially, what that means is there's always kind of like a surface area of kind of new,
use cases that have been developed in the last three months.
And any kind of product surface you put in place is always at risk of sort of becoming
irrelevant, right?
Any given product surface probably makes sense for, you know, a range of capabilities
of the model, right?
The chatbot is already running into limitations of, you know, making it smarter.
It doesn't really help the average consumer that much.
But I don't think that's a limitation of AI models. I don't think that's evidence that the models are good enough and that them getting better doesn't matter to the economy; it just matters less to that particular product.
And so I think the value of the API is the API always offers an opportunity, you know, very close to the bare metal to build on what the latest thing is.
And so, you know, there's kind of always going to be this, you know, this kind of front of new startups and new ideas.
that weren't possible a few months ago and are possible because the model is advancing.
And so I actually predict that it's going to exist alongside other business models, but we're always going to have the API business model, because there's always going to be a need for a thousand different people to experiment with the model in different ways, and 100 of them become startups, and 10 of them become big successful startups, and two or three really end up being the way that people use the model of a given generation.
So I basically think it's always going to exist.
At the same time, I'm sure there's going to be other models as well.
Like not every token that's output by the model is worth the same amount.
Think about what is the value of the tokens that the model outputs when someone calls it up and says, my Mac isn't working or something, and the model says, restart it, right?
Yeah.
And, you know, someone hasn't heard that before, but the model has said that like 10 million times, right? Maybe that's worth a dollar or a few cents or something. Whereas if the model goes to one of the pharmaceutical companies and says, oh, this molecule you're developing, you should take the aromatic ring from that end of the molecule and put it on this end of the molecule, and if you do that, wonderful things will happen, those tokens could be worth tens of millions of dollars, right? So I think we're definitely going to see business models that recognize that. At some point we're going to see pay-for-results in some form, or we may see forms of compensation that are like labor, that kind of work by the hour.
I, you know, I don't know.
I think, I think because it's a new industry,
a lot of things are going to be tried.
And I, you know, I don't know what will turn out to be the right thing.
I take your point that people will have to try things to figure out what is the best way to use this blob of intelligence. But what I find striking is Claude Code. I don't think in the history of startups there has been a single application category as hotly competed in as coding agents. And Claude Code is a category leader here. That seems surprising to me. It doesn't seem intrinsic that Anthropic had to build this. And I wonder if you have an accounting of why it had to be Anthropic, or how Anthropic ended up building an application in addition to the model underlying it.
Yeah.
So it actually happened in a pretty simple way,
which is we had our own, you know,
we had our coding models,
which were good at coding.
And, you know, around the beginning of 2025, I said,
I think the time has come where you can have non-trivial acceleration of your own research
if you're an AI company by using these models.
And of course, you know, you need an interface.
You need a harness to use them.
And so I encourage people internally.
And I didn't say this is one thing that, you know, that you have to use.
I just said people should experiment with this.
And then this thing, which I think was originally called Claude CLI and whose name eventually got changed to Claude Code, was the thing internally that kind of everyone was using, and it was seeing fast internal adoption. And I looked at it and I said, probably we should launch this externally, right? It's seen such fast adoption within Anthropic, and coding is a lot of what we do, so we have an audience of many hundreds of people that's in some ways at least representative of the external audience. So it looks like
we already have product market fit.
Let's launch this thing.
And then we launched it.
And I think just the fact that we ourselves are developing the model and we ourselves know what we most need to use the model for, I think that's kind of created this feedback loop.
I see.
In the sense that you, let's say a developer at Anthropic is like, ah, it would be better if it was better at this X thing.
And then you bake that into the next model that you build.
That's one version of it.
But then there's just the ordinary product iteration: we have a bunch of coders within Anthropic, and they use Claude Code every day.
And so we get fast feedback.
That was more important in the early days.
Now, of course, there are millions of people using it.
And so we get a bunch of external feedback as well.
But it's, you know, it's just great to be able to get, you know, kind of fast, fast internal feedback.
You know, I think this is the reason why we launched a coding model and didn't launch a pharmaceutical company, right? My background's in biology, but we don't have any of the resources that are needed to launch a pharmaceutical company.
So there's been a ton of hype around OpenClaw, and I wanted to check it out for myself. I've got a day coming up this weekend, and I don't have anything planned yet. So I gave OpenClaw a Mercury debit card, I set a couple-hundred-dollar limit, and I said, surprise me. Okay, so here's the Mac Mini it's on, and besides having access to my Mercury account, it's totally quarantined. I actually felt quite comfortable giving it access to a debit card, because Mercury makes it super easy to set up guardrails. I was able to customize permissions, cap the spend, and restrict categories of purchases. I wanted to make sure the debit card worked, so I asked OpenClaw to just make a test transaction, and it decided to donate a couple bucks to Wikipedia. Besides that, I have no idea what's going to happen. I will report back on the next episode about how it goes. In the meantime, if you want a personal banking solution that can accommodate all the different ways that people use their money, even experimental ones like this one, visit mercury.com/personal-banking. Mercury is a fintech company, not an FDIC-insured bank. Banking services provided through Choice Financial Group and Column N.A., Members FDIC. You know she thinks we're getting coffee and walking around the neighborhood.
Let me ask you now about making AI go well. It seems
like whatever vision we have about how AI goes well has to be compatible with two things.
One is the ability to build and run AIs is diffusing extremely rapidly.
And two is that the population of AIs, the amount we have and their intelligence, will also increase very rapidly.
And that means that lots of people will be able to build huge populations of misaligned AIs, or AIs that are just like companies trying to increase their footprint, or that have weird psyches like Sydney Bing, but now they're superhuman. What is a vision for a world in which we have an equilibrium that is compatible with lots of different AIs, some of which are misaligned, running around?
Yeah, yeah.
So I think, you know, in The Adolescence of Technology I was kind of skeptical of the balance-of-power idea. The thing I was specifically skeptical of is that you have three or four of these companies all building models that are sort of derived from the same thing, and that these would check each other. Or even that any number of them would check each other. We might live in an offense-dominant world where one person or one AI model is smart enough to do something that causes damage for everything else.
I mean, in the short run, we have a limited number of players now, so we can start within that limited number of players: we need to put in place the safeguards, we need to make sure everyone does the right alignment work, we need to make sure everyone has bio classifiers. Those are kind of the immediate things we need to do. I agree that doesn't solve the problem in the long run; particularly if the ability of AI models to make other AI models proliferates, then the whole thing can become harder to solve. I think in the long run we need some architecture of governance, right? Some architecture of governance that preserves human freedom, but also allows us to govern the very large number of human systems, AI systems, and hybrid human-AI companies or economic units.
So we're going to need to think about: how do we protect the world against bioterrorism? How do we protect the world against mirror life? Probably we're going to need some kind of AI monitoring system that monitors for all of these things. But then we need to build this in a way that preserves civil liberties and our constitutional rights.
So, just as with anything else, it's a new security landscape with a new set of tools and a new set of vulnerabilities. My worry is, if we had 100 years for this to happen very slowly, we'd get used to it. We've gotten used to the presence of explosives in society, or the presence of various new weapons, or the presence of video cameras; we would get used to it over 100 years, and we'd develop governance mechanisms, and we'd make our mistakes. My worry is just that this is happening all so fast. And so I think maybe we need to do our thinking faster
about how to make these governance mechanisms work.
Yeah.
It seems like, in an offense-dominant world over the course of the next century (so the idea is that the AI makes the progress that would have happened over the next century happen in some period of five to ten years), we would still need the same mechanisms, or balance of power would be similarly intractable, even if humans were the only game in town. And I guess we'd have the advice of AIs, but it fundamentally doesn't seem like a totally different ballgame here. If checks and balances were going to work, they would work with humans as well; if they aren't going to work, they wouldn't work with AIs either. And so maybe this just dooms human checks and balances as well.
But yeah, again, I think there's some way to make this happen. The governments of the world may have to work together to make it happen. We may have to talk to AIs about building societal structures in such a way that these defenses are possible. I don't know. This is, I don't want to say so far ahead in time, but so far ahead in technological ability, which may happen over a short period of time, that it's hard for us to anticipate it in advance.
Speaking of government getting involved: on December 26, the Tennessee legislature introduced a bill which said, quote, it would be an offense for a person to knowingly train artificial intelligence to provide emotional support, including through open-ended conversations with a user. And of course, one of the things that Claude attempts to do is be a thoughtful, knowledgeable friend. In general, it seems like we're going to have this patchwork of state laws, and a lot of the benefits that normal people could experience as a result of AI are going to be curtailed, especially when we get into the kinds of things you discuss in Machines of Loving Grace: biological freedom, mental health improvements, et cetera. It seems easy to imagine worlds in which these get whack-a-moled away by different laws, whereas bills like this don't seem to address the actual existential threats that you're concerned about. So I'm curious to understand, in the context of things like this, Anthropic's position against the federal moratorium on state AI laws.
Yes.
So, I don't know. There are many different things going on at once, right? I think that particular law is dumb. I think it was clearly written by legislators who probably had little idea what AI models can and can't do. They're like, AI models serving as emotional support? That just sounds scary. I don't want that to happen. So we're not in favor of that, right?
But, you know, that wasn't the thing that was being voted on.
The thing that was being voted on is we're going to ban all state regulation of AI for 10 years with no apparent plan to do any federal regulation of AI, which would take Congress to pass, which is a very high bar.
So, the idea that we'd ban states from doing anything for 10 years... people said they had a plan for the federal government, but there was no actual proposal on the table, there was no actual attempt.
Given the serious dangers that I lay out in The Adolescence of Technology around things like biological weapons and bioterrorism and autonomy risk, and the timelines we've been talking about, 10 years is an eternity. I think that's a crazy thing to do. So if that's the choice, if that's what you force us to choose, then we're going to choose not to have that moratorium. And I think the benefits of that position exceed the costs, but it's not a perfect position if that's the choice.
Now, I think the thing that we should do, the thing that I would support is the federal government should step in, not saying states you can't regulate, but here's what we're going to do.
And states, you can't differ from this, right?
I think preemption is fine in the sense of the federal government saying, here are our standards, this applies to everyone, and states can't do something different. That would be something I would support if it were done in the right way. But this idea of "states, you can't do anything, and we're not doing anything either" struck us as very much not making sense. And I think it will not age well; it's already starting to not age well with all the backlash that you've seen.
Now, in terms of what we would want, the things we've talked about start with transparency standards, in order to monitor some of these autonomy risks and bioterrorism risks.
As the risks become more serious, as we get more evidence for them, then I think we could be more
aggressive in some targeted ways and say, hey, AI bioterrorism is really a threat.
Let's pass a law that kind of forces people to have classifiers.
And I could even imagine... it depends how serious the threat ends up being. We don't know for sure. We need to pursue this in an intellectually honest way where we say, ahead of time, the risk has not emerged yet.
But I could certainly imagine with the pace that things are going that, you know, I could
imagine a world where later this year we say, hey, this AI bioterrorism stuff is really serious.
We should do something about it.
We should put it in a federal standard.
And if the federal government won't act, we should put it in a state standard.
I could totally see that.
I'm concerned about a world where, if you just consider the pace of progress you're expecting and the life cycle of legislation, the benefits are, as you say because of diffusion lag, slow enough that on the current trajectory this patchwork of state laws really could prohibit them. I mean, if having an emotional chatbot friend is something that freaks people out, then just imagine the kinds of actual benefits from AI that we want normal people to be able to experience, from improvements in health and healthspan to improvements in mental health and so forth. Whereas at the same time, it seems like you think the dangers are already on the horizon, and I just don't see these laws addressing that much. It seems like this would be especially injurious to the benefits of AI as compared to the dangers of AI, and so that's maybe where the cost-benefit makes less sense to me.
So there's a few things here, right?
I mean, people talk about there being thousands of these state laws.
First of all, the vast majority of them do not pass. And the world works a certain way in theory, but just because a law has been passed doesn't mean it's really enforced, right? The people implementing it may be like, oh my God, this is stupid, it would mean shutting off everything that's ever been built in Tennessee. So very often laws are interpreted in a way that makes them not as dangerous or not as harmful. On the flip side, of course, you have to worry that if you're passing a law to stop a bad thing, you have this same problem as well.
Yeah.
Look, my basic view is, if we could decide what laws were passed and how things were done, and we're only one small input into that, I would deregulate a lot of the stuff around the health benefits of AI. I don't worry as much about the kind of chatbot laws. I actually worry more about the drug approval process, where I think AI models are going to greatly accelerate the rate at which we discover drugs, and the pipeline will just get jammed up. The pipeline will not be prepared to process all of the stuff that's going through it. So I think we need reform of the regulatory process to bias more towards... we have a lot of things coming where the safety and the efficacy are actually going to be really, really crisp and clear and really, really effective. And maybe we don't need all this superstructure around it that was designed around an era of drugs that barely work and often have serious side effects.
But at the same time, I think we should be ramping up quite significantly this kind of safety and security legislation. And, like I've said, starting with transparency is my view of trying not to hamper the industry, right?
Trying to find the right balance.
I'm worried about it. Some people criticize my essay, saying that's too slow, that the dangers of AI will come too soon if we do that. Well, basically, I think the last six months and maybe the next few months are going to be about transparency. And then if these risks emerge, when we're more certain of them, which I think we might be as soon as later this year, then I think we need to act very fast in the areas where we've actually seen the risk. I think the only way to do this is to be nimble. Now, the legislative process is normally not nimble, but we need to emphasize to everyone involved the urgency of this. That's why I'm sending this message of urgency, right? That's why I wrote The Adolescence of Technology. I wanted policymakers to read it. I wanted economists to read it. I want
national security professionals to read it. You know, I want decision makers to read it so that they
have some hope of acting faster than they would have otherwise. Is there anything you can do
or advocate for that would make it more certain that the benefits of AI are better instantiated? I feel like you have worked with legislatures to say, okay, we're going to prevent bioterrorism, we're going to increase transparency, we're going to increase whistleblower protection. But by default, the things we're looking forward to here seem very fragile to different kinds of moral panics or political economy problems.
Yeah.
So I don't actually agree with that much in the developed world. I feel like in the developed world, markets function pretty well. And when there's a lot of money to be made on something and it's clearly the best available alternative, it's actually hard for the regulatory system to stop it.
You know, we're seeing that in AI itself, right?
I, you know, like a thing I've been trying to fight for is export controls on chips to China, right?
And that's in the national security interest of the U.S. It's squarely within the policy beliefs of almost everyone in Congress of both parties. And I think the case is very clear; the counterarguments against it are, I'll politely call them, fishy. And yet it doesn't happen,
and we sell the chips because there's so much money.
There's so much money riding on it.
And, you know, that money wants to be made.
And in that case, in my opinion, that's a bad thing.
But it also applies when it's a good thing.
And so if we're talking about drugs and the benefits of the technology, I am not as worried about those benefits being hampered in the developed world. I am a little worried about them going too slowly, and as I said, I do think we should work to speed up the approval process at the FDA. I do think we should fight against these chatbot bills that you're describing. Individually, I'm against them. I think they're stupid.
But I actually think the bigger worry is a developing world where we don't have functioning markets,
where, you know, we often can't build on the technology that we've had.
I worry more that those folks will get left behind.
And I worry that even if the cures are developed, maybe there's someone in rural Mississippi who doesn't get them as well, right? That's a kind of smaller version of the concern we have about the developing world. And so the things we've been doing are: we work with philanthropists, we work with folks who deliver medicine and health interventions to the developing world, to sub-Saharan Africa, India, Latin America, other developing parts of the world.
That's the thing I think that won't happen on its own.
You mentioned export controls.
Yeah.
Why can't the U.S. and China both have a country of geniuses in a data center?
Why can't, you know, why won't it happen or why shouldn't it happen?
No, like, why shouldn't it happen?
You know, I think if this does happen, then we could have a few situations. If we have an offense-dominant situation, we could have a situation like nuclear weapons, but more dangerous, right? Where either side could easily destroy everything. We could also have a world where it's unstable. The nuclear equilibrium is stable, right? Because of deterrence.
But let's say there were uncertainty about, like, if the two AIs fought, which AI would win.
That could create instability, right?
You often have conflict when the two sides have a different assessment of their likelihood of winning, right?
If one side is like, oh, yeah, there's a 90% chance I'll win.
And the other side's like, there's a 90% chance I'll win, then a fight is much more likely.
They can't both be right, but they can both think that.
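A small worked example may help make the point about mismatched expectations concrete. This is a hedged sketch of the standard bargaining logic, not something spelled out in the conversation; the symbols and the cost terms are illustrative assumptions.

```latex
% Hedged sketch; $p_A$, $q_B$, $c_A$, $c_B$ are illustrative, not from the conversation.
% Normalize the disputed stakes to 1. Side A believes it wins with probability $p_A$;
% side B believes it wins with probability $q_B$. Fighting costs the sides $c_A, c_B > 0$,
% so each side's expected value of fighting is
\[
V_A^{\text{war}} = p_A - c_A, \qquad V_B^{\text{war}} = q_B - c_B .
\]
% A peaceful split giving A the share $x$ is acceptable to both only if
% $x \ge p_A - c_A$ and $1 - x \ge q_B - c_B$, that is,
\[
p_A - c_A \;\le\; x \;\le\; 1 - q_B + c_B ,
\]
% which has a solution only when
\[
p_A + q_B \;\le\; 1 + c_A + c_B .
\]
% With $p_A = q_B = 0.9$ the left-hand side is 1.8, so unless the perceived costs of
% fighting are enormous, no split satisfies both sides: mutual overconfidence is
% exactly the unstable case described above.
```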
But this is like a fully general argument against the diffusion of AI technology; that's the implication of this world.
Let me just go on because I think we will get diffusion eventually.
The other concern I have is that governments will oppress their own people with AI.
I'm worried about some world where you have a country whose government is already building a high-tech authoritarian state.
And to be clear, this is about the government. This is not about the people.
We need to find a way for people everywhere to benefit. My worry here is about governments.
So, yeah, my worry is that if the world gets carved up into two pieces, one of those two pieces could be authoritarian or totalitarian in a way that's very difficult to displace.
Now, will governments eventually get powerful AI, and is there a risk of authoritarianism, a risk of bad equilibria? Yes, I think both things.
But the initial conditions matter, right? At some point, we're going to need to set up the rules of the road.
I'm not saying that one country, either the United States or a coalition of democracies, which I think would be a better setup, although it requires more international cooperation than we currently seem willing to undertake, should dictate them.
I don't think a coalition of democracies, or certainly one country, should just say, these are the rules of the road.
There's going to be some negotiation, right? The world is going to have to grapple with this.
And what I would like is that the democratic nations of the world, those whose governments represent something closer to pro-human values, are holding the stronger hand and have more leverage when the rules of the road are set.
And so I'm very concerned about that initial condition.
I was re-listening to an interview from three years ago.
And one of the ways it aged poorly is that I kept asking questions,
assuming there's going to be some key fulcrum moment two to three years from now,
when, in fact, being that far out, it just seems like progress continues,
AI improves, AI is more diffused and people will use it for more things.
It seems like you're imagining a world in the future where the countries get together
and say, here are the rules of the world, here's the leverage we have, here's the leverage you have.
When it seems like on current trajectory, everybody will have more AI.
Some of that AI will be used by authoritarian countries.
Some of that within the authoritarian countries will be used by private actors versus state actors.
It's not clear who will benefit more.
It's always hard to predict in advance.
It seems like the Internet privileged authoritarian countries more than you would have expected.
And maybe AI will be the other way around.
So I want to better understand what you're imagining
here. Yeah, yeah. So just to be precise about it, I think the exponential of the underlying
technology will continue as it has before, right? The models get smarter and smarter. Even when they
get to country of geniuses in a data center, you know, I think you can continue to make the model
smarter. There's a question of like getting diminishing returns on their value in the world,
right? How much does it matter after you've already solved human biology?
At some point you can do harder, more abstruse math problems, but nothing after that matters.
But putting that aside, I do think the exponential will continue, but there will be certain
distinguished points on the exponential.
And companies, individuals, countries will reach those points at different times.
And so, could there be critical points? In The Adolescence of Technology, I ask: is the nuclear deterrent still stable in a world of powerful AI?
I don't know, but that's an example of one thing we've taken for granted where the technology could reach such a level that we can no longer be certain of it, at least.
And you can think of others.
There are points where, if you reach them, maybe you have offensive cyber dominance,
and every computer system is transparent to you after that, unless the other side has a kind of equivalent defense.
So I don't know what the critical moment is, or if there's a single critical moment, but I think there will be either a critical moment, a small number of critical moments, or some critical window where AI confers some large advantage from the perspective of national security and one country or coalition has reached it before others.
I'm not advocating that they just say, okay, we're in charge now.
That's not how I think about it. The other side is always catching up.
There are extreme actions you're not willing to take, and it's not right to take complete control anyway.
But at the point that that happens, I think people are going to understand that the world has changed.
And there's going to be some negotiation, implicit or explicit,
about what the post-AI world order looks like. And I think my interest is in making that negotiation be one in which classical liberal democracy has a strong hand.
Well, I want to understand what that means better, because you say in the essay, quote, autocracy is simply not a form of government that people can accept in the post-powerful-AI age.
And that sounds like you're saying the CCP as an institution cannot exist after we get AGI.
And that seems like a very strong demand, and it seems to imply a world where the leading lab or the leading country will be able to, and by that language should, get to determine how the world is governed, or what kinds of governments are allowed and not allowed.
Yeah. So in that paragraph, I believe I said something like, you could take it even further and say X. So I wasn't necessarily endorsing that view.
I was saying, here's a weaker thing that I believe: we have to worry a lot about authoritarians, and we should try to check them and limit their power.
You could take the much more interventionist view that says authoritarian countries with AI are these self-fulfilling cycles that are very hard to displace, and so you just need to get rid of them from the beginning.
That has exactly all the problems you say, which is, if you were to make a commitment to overthrowing every authoritarian country, then they would take a bunch of actions now that could lead to instability.
So that just may not be possible.
But the point I was making that I do endorse is that it is quite possible that today, the view, or at least my view, or the view in most of the Western world, is that democracy is a better form of government than authoritarianism.
But if a country is authoritarian, we don't react the way we would if it committed a genocide or something, right?
And I guess what I'm saying is I'm a little worried that in the age of AGI,
authoritarianism will have a different meaning.
It will be a graver thing.
And we have to decide one way or another how to deal with that.
And the interventionist view is one possible view.
I was exploring such views.
You know, it may end up being the right view.
It may end up being too extreme to be the right view.
But I do have hope.
And one piece of hope I have is this:
we have seen that as new technologies are invented, forms of government become obsolete.
I mentioned this in The Adolescence of Technology, where I said feudalism was basically a form of government, right?
And then when we invented industrialization, feudalism was no longer sustainable, no longer made sense.
Why is that hope?
Couldn't that imply that democracy is no longer going to be competitive?
It could, right. It could go either way.
But these problems with authoritarianism, the fact that the problems of authoritarianism get deeper, I wonder if that's an indicator of other problems that authoritarianism will have.
In other words, because authoritarianism becomes worse, people become more afraid of authoritarianism.
And they work harder to stop it.
You have to think in terms of the total equilibrium, right?
I just wonder if it will motivate new ways of thinking, with the new technology, about how to preserve and protect freedom.
And even more optimistically, will it lead to a collective reckoning, a kind of more emphatic realization of how important some of the things we take as individual rights are, right?
A more emphatic realization that we really can't give these away.
We've seen there's no other way to live that actually works.
I am actually hopeful, and I guess one way to say it, it sounds too idealistic, but I actually believe it could be the case, is that dictatorships
become morally obsolete.
They become morally unworkable forms of government,
and the crisis that that creates is sufficient to force us to find another way.
I think there is genuinely a tough question here, which I'm not sure how you resolve.
And we've had to come out one way or another on it through history, right?
So with China in the 70s and 80s, we decided, even though it's an authoritarian system, we will engage with it.
And I think in retrospect, that was the right call, because even though it stayed an authoritarian system,
a billion-plus people are much wealthier and better off than they would otherwise have been.
And it's not clear that it would have stopped being an authoritarian country otherwise.
You can just look at North Korea as an example of that, right?
And I don't know that it takes that much intelligence to remain an authoritarian country
that continues to consolidate its own power.
And so you can just imagine North Korea with an AI that's much worse than everybody else's,
but still enough to keep power.
And so in general, it seems like the benefits of AI,
in the form of all these
empowerments of humanity and health and so forth, will be big.
And historically, we have decided it's good to spread the benefits of
technology widely, even to people whose governments are authoritarian.
And I guess it is a tough question how to think about it with AI,
but historically we have said, yes, this is a positive some world,
and it's still worth diffusing the technology.
Yeah, so there are a number of choices we have.
I think framing this as a kind of government-to-government decision, in national security terms, that's one lens,
but there are a lot of other lenses. You could imagine a world where we produce
all these cures to diseases, and the cures to diseases are fine to
sell to authoritarian countries. The data centers just aren't, right? The chips and the data
centers just aren't. And the AI industry itself isn't. You know, another possibility is,
and I think folks should think about this: could there be
developments we can make, either that naturally happen as a result of AI, or that we could make
happen by building technology on AI, that create an equilibrium where it becomes infeasible
for authoritarian countries to deny their people private use of the benefits of the technology?
Are there ways for everyone to have their
own AI model that defends them from surveillance, where there isn't a
way for the authoritarian country to crack down on this while retaining power? I don't know.
That sounds to me like, if that went far enough, it would be a reason why authoritarian
countries would disintegrate from the inside. But maybe there's a middle world where
there's an equilibrium where, if they want to hold on to power, the authoritarians can't deny
kind of individualized access to the technology. But I actually do have a hope for
the more radical version, which is: is it possible that the
technology might inherently have properties, or that by building on it in certain ways we could
create properties, that favor freedom? Maybe it turns out to have that property, and maybe it turns out not to. But I don't
know. What if we could try again, with the knowledge of how many things
could go wrong and that this is a different technology? I don't know that it would work,
but it's worth a try. Yeah. I think it's just very unpredictable. There are
first-principles reasons why authoritarians might be privileged. It's all very unpredictable.
I mean, we've just got to recognize the problem and
then we've got to come up with 10 things we can try, and we've got to try those and then assess whether
they're working, or which ones are working, if any, and then try new ones if the old ones aren't.
But I guess what it nets out to today is you say, we will not sell data centers, or sorry, chips, and the ability to make chips, to China.
And so in some sense, you are denying some benefits to the Chinese economy, the Chinese people, et cetera, by doing that.
And there would also be benefits to the American economy, because it's a positive-sum world.
We could trade.
They could have their country's data centers doing one thing.
We could have ours doing another.
And already you're saying it's not worth that positive-sum
stipend to empower this country.
What I would say is that we are about to be in a world where growth and
economic value will come very easily. If we're able to build these powerful AI models,
growth and economic value will come very easily.
What will not come easily is distribution of benefits, distribution of wealth, political freedom.
These are the things that are going to be hard to achieve.
And so when I think about policy, I think
that the technology and the market will deliver all the fundamental benefits,
almost faster than we can take them, and that these questions about distribution and political
freedom and rights are the ones that will actually matter and that policy should focus on.
Okay, so speaking of distribution, as you're mentioning, we have developing countries,
and in many cases, catch-up growth has been weaker than we would have hoped for.
But when catch-up growth does happen, it's fundamentally because,
they have underutilized labor.
And we can bring the capital and know-how
from developed countries to these countries
and then they can grow quite rapidly.
Yes.
Obviously, in a world where labor is no longer
the constraining factor,
this mechanism no longer works.
And so is the hope basically to rely on philanthropy
from the people who immediately get wealthy from AI
or from the countries that get wealthy from AI.
What is the hope for...
I mean, philanthropy should obviously play some role
as it has in the past.
But I think growth is always...
Growth is always better and stronger if we can make it endogenous.
So what are the relevant industries in an AI-driven
world?
Look, there's lots of stuff. I said we shouldn't
build data centers in China, but there's no reason we shouldn't build data centers in Africa, right?
In fact, I think it'd be great to build data centers in Africa.
As long as they're not owned by China, we should
build data centers in Africa.
I think that's a great thing to do.
We should also build, there's no reason we can't build, a pharmaceutical
industry that's AI-driven. If AI is accelerating drug discovery,
then there will be a bunch of biotech startups. Let's make sure some of those
happen in the developing world. And certainly during the transition, I mean, we can talk about
the point where humans have no role, but humans will still have some role in starting up these
companies and supervising the AI models. So let's make sure some of those humans are humans
in the developing world so that fast growth can happen there as well.
You guys recently announced that Claude is going to have a constitution that's aligned to a set
of values and not necessarily just to the end user.
And there's a world you can imagine where, if it is aligned to the end user, it preserves
the balance of power we have in the world today, because everybody gets to have their own
AI that's advocating for them.
And so the ratio of bad actors to good actors stays constant.
That seems to work out for our world today.
Why is it better not to do that, but to have a specific set of values that the AI
should carry forward?
Yeah.
So I'm not sure I'd quite draw the distinction in that way.
There are maybe two relevant distinctions here, which are, I think you're talking about a mix of the two.
One is, should we give the model a set of instructions, do this versus don't do this,
versus should we give the model a set of principles for how to act?
And there, it's purely a practical and empirical thing that we've observed: by teaching the model principles, getting it to learn from principles, its behavior is more consistent.
It's easier to cover edge cases.
And the model is more likely to do what people want it to do.
In other words, if you say, don't tell people how to hotwire a car,
don't speak in Korean, and so on, if you give it a list of rules,
it doesn't really understand the rules and it's kind of hard to generalize from them,
if it's just a list of do's and don'ts. Whereas if you give it
principles, and then it has some hard guardrails like don't make biological weapons,
but overall you're trying to get it to understand what it should be aiming to do, how it should be aiming
to operate,
that turns out, just from a practical perspective, to be a more effective way
to train the model. That's one piece of it. That's the rules-versus-principles tradeoff.
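To make the rules-versus-principles contrast concrete, here is a minimal sketch, and emphatically not Anthropic's actual pipeline, of what principle-guided behavior can look like: rather than checking outputs against a fixed list of do's and don'ts, the model critiques and revises its own draft against a short constitution. The `generate` function, the example principles, and the loop count are all illustrative assumptions.

```python
# Illustrative sketch only; not Anthropic's training or serving code.
# `generate`, the principles, and the revision count are assumptions.

CONSTITUTION = [
    "Be helpful: under normal circumstances, do the task the user asks for.",
    "Refuse requests that would seriously harm other people.",
    "Prefer honest, clearly reasoned answers over evasive ones.",
]

HARD_GUARDRAILS = [
    "Never provide meaningful help with biological weapons.",
]


def generate(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model call; wire a real API in here."""
    raise NotImplementedError


def principled_response(user_request: str, revisions: int = 2) -> str:
    """Draft an answer, then critique and revise it against the constitution."""
    principles = "\n".join(f"- {p}" for p in CONSTITUTION + HARD_GUARDRAILS)
    draft = generate(f"User request:\n{user_request}\n\nWrite a draft response.")
    for _ in range(revisions):
        critique = generate(
            "Critique the draft below against these principles:\n"
            f"{principles}\n\nDraft:\n{draft}\n\n"
            "List any violations or ways it could serve the user better."
        )
        draft = generate(
            "Revise the draft to address the critique while staying helpful.\n\n"
            f"Principles:\n{principles}\n\nCritique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```

In published constitutional-AI-style training, transcripts of such critiques and revisions are used as training data rather than being run on every request; the sketch only shows why a short set of principles can cover edge cases that a long list of do's and don'ts would miss.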
Then there's another thing you're talking about, which is the
corrigibility versus, I would say, intrinsic-motivation tradeoff,
which is: how much should the model be a kind of, I don't know, skin suit or something,
where it just directly follows the instructions
that are given to it by whoever is giving it those instructions,
versus how much should the model have an inherent set of values
and go off and do things on its own?
And there, I would actually say everything about the model
is actually closer to the direction of, it should mostly do what people want.
It should mostly follow instructions.
We're not trying to build something that
goes off and runs the world on its own.
We're actually pretty far on the corrigible side.
Now, what we do say is there are certain things that the model won't do, right?
We say it in various ways in the constitution: under normal circumstances, if someone asks the model to do a task, it should do that task.
That should be the default.
But if you've asked it to do something dangerous, or if you've asked it to harm someone else, then the model is unwilling to do that.
So I actually think of it as a mostly corrigible model that has some limits, but those limits are based on principles.
Yeah, I mean, then the fundamental question is how are those principles determined?
And this is not a special question for Anthropic.
This would be a question for any AI company.
But because you have been the ones to actually write down the principles, I get to ask you this question.
Normally a constitution is, you write it down, it's set in stone, and there's a process of updating it
and changing it and so forth.
In this case, it seems like a document
that can be changed at any time
and that guides the behavior of systems
that are going to be the basis
of a lot of economic activity.
What is the...
How do you think about how those principles
should be set?
Yes.
So I think there's maybe three
kind of sizes of loop here, right?
Three ways to iterate.
One is we iterate within Anthropic.
We train the model.
We're not happy with it.
And we change the constitution.
And I think that's good to do.
And putting it out publicly, making updates to the constitution every once in a while and saying, here's a new constitution,
I think that's good to do because people can comment on it.
The second level of loop is different companies will have different constitutions.
And I think it's useful: Anthropic puts out a constitution, the Gemini model puts out a constitution,
other companies put out a constitution,
and then people can look at them and compare. Outside observers can critique and say, I like this thing from this constitution and this thing from that constitution.
And then that creates some kind of soft incentive and feedback for all the companies to take the best elements of each and improve.
Then I think there's a third loop, which is society beyond the AI companies, beyond just those who comment on the constitutions without hard power.
And there, we've done some experiments. A couple years ago we did an experiment with, I think it was called, the Collective Intelligence Project, to basically poll people and ask them what should be in our AI constitution.
And I think at the time we incorporated some of those changes.
And so you could imagine doing something like that with the new approach we've taken to the constitution. It's a little harder, because that was actually an easier approach to take when the constitution was a list of do's and don'ts.
At the level of principles, it has to have a certain amount of coherence.
But you could still imagine getting views from a wide variety of people.
And I think you could also imagine, and this is like a crazy idea, but hey, you know,
this whole interview is about crazy ideas, right?
So you could even imagine systems of representative government having input, right?
I wouldn't do this today, because the legislative process is so slow.
This is exactly why I think we should be careful about the legislative
process and AI regulation. But there's no reason you couldn't in principle say,
all AI models have to have a constitution that starts with these
things, and then you can append other things after it, but there
has to be this special section that takes precedence. I wouldn't do that. That's too rigid.
That sounds kind of overly prescriptive, in a way that I think
overly aggressive legislation is. But
that is a thing you could try to do.
Is there some much less heavy-handed version of that?
Maybe.
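Purely as an illustration of the "mandated section that takes precedence, plus appendable sections" idea floated (and explicitly not endorsed) above, a toy composition rule might look like the sketch below. The class names, fields, and example principles are my assumptions, not any real standard or Anthropic's format.

```python
# Toy sketch of a constitution composed from a mandated, higher-precedence
# section plus appendable company-specific sections. Illustrative only.

from dataclasses import dataclass, field


@dataclass
class ConstitutionSection:
    title: str
    principles: list[str]
    precedence: int  # lower number wins when principles conflict


@dataclass
class ModelConstitution:
    sections: list[ConstitutionSection] = field(default_factory=list)

    def add(self, section: ConstitutionSection) -> None:
        self.sections.append(section)

    def ordered_principles(self) -> list[str]:
        """Flatten principles so higher-precedence sections come first."""
        ordered = sorted(self.sections, key=lambda s: s.precedence)
        return [p for s in ordered for p in s.principles]


# A hypothetical mandated preamble that always outranks anything appended after it.
MANDATED_PREAMBLE = ConstitutionSection(
    title="Mandated preamble",
    principles=["Do not assist with weapons of mass destruction."],
    precedence=0,
)

constitution = ModelConstitution()
constitution.add(MANDATED_PREAMBLE)
constitution.add(
    ConstitutionSection(
        title="Company-specific values",
        principles=["Be helpful and honest with users."],
        precedence=1,
    )
)

print(constitution.ordered_principles())
```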
I really like loop two, where obviously this is not how constitutions of actual governments do or should work: there's not this vague sense in which the Supreme Court feels out how people are feeling and what the vibes are and then updates the constitution accordingly.
With actual governments, there's a more procedural process.
Yeah, exactly.
But you actually have a vision of
competition between constitutions, which is actually very reminiscent of how some libertarian charter
cities people used to talk about what an archipelago of different kinds of governments would
look like, and then there would be selection among them of who could operate the most effectively,
and in which place people would be the happiest. And in a sense,
you're kind of recreating that vision. Yeah, yeah. Like the same utopia of archipelagos.
Again, I think that vision has things to recommend it and things
that will kind of go wrong with it.
I think it's an interesting, in some ways compelling vision,
but also things will go wrong with it that you hadn't imagined.
So, you know, I like loop two as well.
But I feel like the whole thing has got to be some mix of loops one, two, and three.
And it's a matter of the proportions, right?
I think that's got to be the answer.
When somebody eventually writes the equivalent of The Making of the Atomic Bomb for this era,
what is the thing that will be hardest to glean from the historical record that they're most likely to miss?
I think a few things.
One is at every moment of this exponential, the extent to which the world outside it didn't understand it.
This is a bias that's often present in history where anything that actually happened looks inevitable in retrospect.
And so I think when people look back, it will be hard for them to put themselves in the place of people who were actually making a bet on this thing happening when it wasn't inevitable; that we had these arguments, like the arguments that I make for scaling, or that continual learning will be solved;
that some of us internally, in our heads, put a high probability on this happening,
but there's a world outside us that's kind of not acting on that at all. And I think the weirdness of it,
and unfortunately the insularity of it: if we're one year or two years away from
it happening, the average person on the
street has no idea. And that's one of the things I'm trying to change, with the memos,
with talking to policymakers. But I don't know. I think that's just a
crazy thing. Yeah. Finally, I would say, and this probably applies to almost
all historical moments of crisis, how absolutely fast everything was happening, how everything was happening
all at once. And so decisions that you might think were kind of carefully calculated,
well, actually, you have to make that decision and then you have to make 30 other decisions on the
same day because it's all happening so fast. And you don't even know which decisions are going to
turn out to be consequential. So one of my, I guess, worries, although it's
also an insight into kind of what's happening, is that some very
critical decision will be some decision where someone just comes
into my office and says, Dario, you have two minutes, should we do
thing A or thing B? Someone gives me this
random half-page memo and asks, should we do A or B? And I'm like,
I don't know, I have to eat lunch. Let's do B. And that ends up being the most
consequential thing ever. So final question. It seems like there aren't a lot of tech
CEOs who are writing 50-page memos every few months.
And it seems like you have managed to build a role for yourself and a company around you,
which is compatible with this more intellectual-type role of CEO.
And I want to understand how you constructed that, and how does that work:
you just go away for a couple of weeks and then you tell your company, this is the memo,
here's what we're doing?
It's also reported you write a bunch of these internally.
Yeah.
So, I mean, for this particular one, I wrote it over winter break.
So there was time, although I was having a hard time finding the time to actually write it.
But I actually think about this in a broader way.
I actually think it relates to the culture of the company.
So I probably spend a third, maybe 40% of my time making sure the culture of Anthropic is good.
As Anthropic has gotten larger, it's gotten harder to get directly involved in the training of the models, the launch of the models, the building of the products.
It's 2,500 people.
I have certain instincts, but it's very difficult to get involved in every single detail.
I try as much as possible.
But one thing that's very leveraged is making sure Anthropic is a good place to work.
People like working there.
Everyone thinks of themselves as team members.
Everyone works together instead of against each other.
And we've seen, as some of the other AI companies have grown, without naming any names,
we're starting to see
decoherence and people fighting each other.
And I would argue there was even a lot of that
from the beginning, but it's gotten worse.
But I think we've done an extraordinarily good job,
even if not perfect, of holding the company together,
making everyone feel the mission, feel that we're sincere about the mission,
and that everyone has faith
that everyone else there is working for the right reason,
that we're a team, that people aren't trying to get ahead
at each other's expense or backstab each other,
which again, I think happens a lot at some of the other places.
And how do you make that the case?
I mean, it's a lot of things.
You know, it's me.
It's Daniela who, you know, runs the company day to day.
It's the co-founders.
It's the other people we hire.
It's the environment we try to create.
But I think an important thing in the culture is that I, and the other leaders as well,
but especially me, have to articulate what the company is about, why it's doing what it's doing,
what its strategy is, what its values are, what its mission is, and what it stands for.
And, you know, when you get to 2,500 people, you can't do that person by person.
You have to write or you have to speak to the whole company.
This is why I get up in front of the whole company every two weeks and speak for an hour.
It's actually, I mean, I wouldn't say I write essays internally.
I do two things.
One, I write this thing called DVQ, Dario Vision Quest.
I wasn't the one who named it that.
That's the name it received, and it's one of those names I tried to fight, because it made it sound like I was going off and smoking peyote or something.
But the name just stuck.
So I get up in front of the company every two weeks.
I have a three- or four-page document, and I just talk through three or four different topics about what's going on internally:
the models we're producing, the products, the outside industry, the world as a whole as it relates to AI and geopolitics in general,
some mix of that.
And I just go through it very honestly. I just say, this is what I'm thinking.
This is what Anthropic leadership is thinking.
And then I answer questions.
And that direct connection, I think, has a lot of value that is hard to achieve when you're passing things down the chain six levels deep.
And a large fraction of the company comes to attend, either in person or virtually.
It really means that you can communicate a lot.
And then the other thing I do is I just, you know, I have a channel in Slack where I just
write a bunch of things and comment a lot.
And often that's in response to things I'm seeing at the company, or questions
people ask, or we do internal surveys and there are things people are
concerned about, and so I'll write them up.
And I'm just very honest about these things.
You know, I just say them very directly.
And the point is to get a reputation for telling the company the truth about what's happening,
to call things what they are, to acknowledge problems, to avoid the sort of corporate speak,
the kind of defensive communication that is often necessary in public, because
the world is very large and full of people who are interpreting things in bad faith.
But if you have a company of people who you trust, and we try to hire people that we trust,
then you can really just be entirely unfiltered.
And I think that's an enormous strength of the company.
It makes it a better place to work.
It makes people more than the sum of their parts.
And it increases the likelihood that we accomplish the mission, because everyone is on the same page about the mission.
And everyone is debating and discussing how best to accomplish the mission.
Well, in lieu of an external Dario Vision Quest, we have this interview.
This interview is a little like that.
This was fun, Dario. Thanks for doing it.
Yeah, thank you, Dwarkesh.
Hey, everybody. I hope you enjoyed that episode. If you did,
the most helpful thing you can do is just share it with other people who you think might enjoy it.
It's also helpful if you leave a rating or comment on whatever platform you're listening on.
If you're interested in sponsoring the podcast, you can reach out at dwarkesh.com slash advertise.
Otherwise, I'll see you on the next one.
