a16z Podcast - Who Controls AI Acceleration? Vitalik Buterin and Guillaume Verdon Debate
Episode Date: April 9, 2026
Eddy Lazzarin speaks with Vitalik Buterin, founder of Ethereum, and Guillaume Verdon, founder and CEO of Extropic, about whether AI progress can or should be steered, the risks of concentrated power, and what open source and decentralization mean for who benefits from increasingly powerful systems. This episode originally aired on the a16z crypto podcast.
Resources:
Follow Vitalik Buterin on X: https://twitter.com/VitalikButerin
Follow Guillaume Verdon on X: https://twitter.com/GillVerd
Follow Eddy Lazzarin on X: https://twitter.com/eddylazzarin
Follow Shaw Walters on X: https://twitter.com/shawmakesmagic
Stay Updated:
Find a16z on YouTube, X, and LinkedIn. Listen to the a16z Show on Spotify and Apple Podcasts. Follow our host: https://twitter.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Rapid technological acceleration has been a fact of human civilization for about a century,
and that acceleration is itself accelerating.
To me, that is the fundamental truth, right?
And whether we yell at it or disagree with it, it is happening.
You know, it's like gravity.
Those that adopt that culture will literally have higher likelihood of surviving in the future.
If you take any one bit and you kind of accelerate indiscriminately, then basically you do lose all value.
And so to me, the question is like, how do we accelerate intentionally?
I think there is a real sense in which we have one shot at this.
EAC isn't trying to kill everyone. It's actually trying to save everyone.
If we decelerate, we're going to have a huge opportunity cost
and we're going to miss out on a much better future.
Two competing philosophies have emerged around how fast AI should advance.
EAC, or effective accelerationism, says progress is inevitable and restraint only cedes ground.
DAC, or defensive acceleration, says speed without safeguards risks concentrating power in fewer and fewer hands.
On this episode, originally aired on the a16z crypto podcast, a16z crypto CTO Eddy Lazzarin speaks with Vitalik Buterin, founder of Ethereum, and Guillaume Verdon, founder and CEO of Extropic, alongside Shaw Walters, founder of Eliza Labs.
Nice. Wow.
So this all started because I just knew these guys had to meet each other
and it rapidly devolved into all of this,
which I'm really glad to see.
This is incredible.
And it's the first time that you guys have really talked in person, right?
Awesome.
And this is an incredible synthesis.
So yeah, my name is Shaw.
I've known these guys for a while.
I'm here with Eddie from a16z crypto,
and this is a great time.
So everybody's here.
I'd just ask everyone: please be respectful.
This is a conversation between them.
We're just going to kind of throw some questions at them as we go along to keep it going.
But feel free to dig into whatever you guys want to.
This is really here for you.
We're all just here to listen.
And this will all be live streamed to the other floor.
It's not going to be public.
We will be cutting up the video and putting it out later.
So everyone will get to see and share and everything.
And I think without further ado,
I'm going to leave it to Eddie to get started with some of the questions.
So before we ask them, I'd love to get a sense of the crowd.
It's always hard to tell the difference between the Twitter timeline and reality.
Who here could explain EAC in a few sentences to someone else?
Wow.
That's actually less than I thought.
That's good to know.
That's good to know.
Who here could explain DAC in a few sentences to someone else?
Okay.
That might have been more, actually.
That was very interesting.
Okay.
Thank you for that.
So maybe we'll just start there, right?
The term accelerationism, at least in the techno-capitalist sense, dates back to Nick Land's CCRU research group in the 90s.
But some might say that these ideas really took shape even further back, in the 60s and 70s with Deleuze and Guattari.
Let me maybe start with Vitalik.
Why are we having an earnest conversation?
about the ideas of philosophers right now.
What makes this accelerationism idea relevant?
Again, I think ultimately, you know, we're all here trying to make sense of the world, and trying to make sense of what it even makes sense to do in the world, right?
And this is something that we've had for thousands of years.
I think the new thing that we've had for probably roughly 100 years is making sense of a world that has rapid change.
And sometimes even that has, I mean, maybe this is us skipping a bit ahead, but like rapid
destructive change, right?
So, you know, the early era of this: in the pre-World War I era, around the 1900s, there was a lot of original techno-optimist sentiment, right?
And there was a lot of excitement back then.
Well, you know, the thing that we call tech today: back then, chemistry was tech.
And then electricity was also tech.
And if you even, you know, like watch like even movies like some of the Sherlock Holmes ones,
you get to like really feel the vibe of that kind of era, right?
And it was rapidly improving living standards, rapidly liberating women in the household
doing amazing things, extending lives.
And then, of course, you know, World War I happened, right?
And, you know, World War I famously, you know, people rode in with horses and rode out on tanks, right?
And it was a destructive war.
Then World War II came.
And World War II was an even more destructive war.
And, you know, it gave birth to "I am become Death, the destroyer of worlds."
And this is some of the background of, you know, things like postmodernism, and people basically trying to make sense of, like, okay, a lot of beliefs were shattered, and what do we believe now, right?
And this is something that I think people believe like every generation, right?
And there's a lot of people today who even grew up, you know, like believing in kind of 1960s era, postmodern beliefs.
I mean, even and feeling like those beliefs have been shattered, right?
And even people who, for example, grew up believing in what I would call hipster environmentalism, and it's this lovely, beautiful idea, you know, we need to protect the environment and not go so fast, and then you believe in this, and then you realize that, like, wait, the nuclear power plants that you advocated to shut down basically mean that, you know, your country is stuck bootlicking Russia, right?
And, like, these are just very natural things that happen, right?
And I think rapid technological acceleration has been a fact of human civilization for about a century, and that acceleration is itself accelerating, and things like postmodernism are a response to that. A lot of the currents of the 1960s were a response to that. And, you know, you can respond by saying it's inevitable. You can respond by saying we have to slow it down, as a lot of people did.
And it's just constantly, I mean, like, a rapid response to basically the effects of the ideas that previous generations tried to execute.
And I think, you know, we're now quite rapidly seeing a new version of that exact same cycle continue today.
And I think it's mixing themes that have been around for a long time together with some pretty new ideas.
So, Gil, what is EAC? And why?
Yeah, I guess EAC is kind of the byproduct of myself asking, why are we here or how are we
here?
What was the generative process that gave rise to us, that gave rise to civilization technology,
got us to this point where we're having this conversation in this room?
We all have wonderful technology around us,
and we emerged from a soup of inorganic matter.
So somehow there is a physical generative process,
and my day job is trying to do generative AI
as a physical process and devices.
And so that was simmering in my brain,
and I wanted to apply that sort of thinking,
that sort of framework, that physics first viewpoint,
to all of civilization,
trying to understand civilization as a petri dish,
trying to understand how we got here in order to predict where we're going.
And that got me down the rabbit hole of the physics of life itself,
like emergence of life, abiogenesis,
and a field of physics called stochastic thermodynamics,
which is the thermodynamics of out-of-equilibrium systems.
So it's what describes life forms, and also our brains, right: intelligence.
So it's both the physics of life and intelligence,
but it's also the physics of any system
that obeys the second law of thermodynamics,
which includes our whole civilization.
And so really to me, it's just been an observation
that systems tend to self-adapt
and complexify in order to capture work
from their environment and dissipate heat.
And that is the fundamental driving force
behind all of progress, all of quote-unquote acceleration,
all of everything we see today.
And to me, that is the fundamental truth, right?
And whether we yell at it or disagree with it, it is happening.
This is, you know, it's like gravity.
You can argue with thermodynamics.
It doesn't care.
It keeps going.
And so, you know, to me, EAC was like, okay, well, given this fact,
and given that if you look at the equations carefully,
you can observe that there's a Darwinian-like selection effect,
for every bit of information
prescribing configurations of matter.
So whether that's a gene,
a meme,
chemical specification,
product design,
policy, there's a selective pressure
on everything and everything is intercoupled
in this big soup of matter.
And that selection pressure
selects bits according to
whether they're useful
for the system they're part of.
They're useful to better predict the environment, capture work, and dissipate more heat.
So are they useful for sustenance, for sustaining yourself,
preserving yourself, predicting your environment, predicting danger,
but also is it useful for growth?
Because if you grow and replicate,
then those bits of information replicate,
and there's a natural error correction.
So in a way, it's just a byproduct of the selfish bit principle
that emerges from physics.
And what that tells us is that the bits that are part of the future
are the bits that are useful for growth
and further acceleration of this growth.
And so, to me, I wanted to design a culture
that if we bootloaded this mental software
in the population,
those that adopt that culture
will literally have higher fitness.
They will literally have higher likelihood
of surviving in the future.
So EAC isn't trying to kill everyone.
It's actually trying to save everyone.
It's basically, to me, I think mathematically provably the case that having a decelerative mindset (and it's a general pattern of many subcultures: making yourself small, degrowth, and so on) is actually negative. It gives you negative fitness and actually accelerates your downfall as an organism. Whether it's a decel mindset at an organization level, at a company, at a national level, at an individual level, you're lowering your likelihood of being part of the future. And to me, it is not necessarily virtuous to spread those memes, to spread sort of pessimism, doomerism.
It's actually, well...
We're using a lot of terminology that I haven't quite unpacked.
So, like, EAC: what does that stand for?
What is acceleration?
Oh.
And what is deceleration?
And what is a decel?
And these, like...
Yeah.
What I'm trying to get at is, I think EAC came as a little bit of a response to something that was happening in our culture at the time.
Yeah, yeah. What was happening in our culture? What was it a response to? And what was, say a little bit about the dialogue.
Totally.
That led to encapsulating it in a name. So, you know, it was 2022. I think the world was somewhat pessimistic. We were just emerging from COVID. Things weren't looking good. We were feeling down. Everybody was kind of lacking sunlight. Everybody was sort of, you know, pessimistic about the future, and essentially, yeah, AI doomerism was kind of the monoculture.
What is that? AI doomerism is just kind of, you know, panicking about, you know, the fact that if there's a system that is too complex, neither our human brains nor generative models can have a predictive model of it, and so we can't control it.
And things we can't control give us entropy in our model of the future, and that induces anxiety, right?
And then AI doomerism, to me, has been a weaponization of people's anxieties for political purposes.
And overall, I think, and we'll get to this, you know, I think AI doomerism is a big net negative, and I wanted to create a counterculture to that.
Now, what I saw in the X algorithm is that, you know, the X algorithm, and many algorithms, reward agreement or strong disagreement, right?
So, you know, if you view the algorithm as a Markov chain, asymptotically everything converges to bipolar distributions of opinions for anything.
So it's like, you know, you had the AI-alignment, EA, MIRI cult complex. You know, I kind of clustered them there, maybe not so gracefully, but you have that complex.
I was like, what's going to be the opposite of that?
And, you know, to me I was like, okay, well, you know, the opposite of anxiety is curiosity, right?
Instead of downside protection, it's upside-seeking, you know, fear of missing out.
It's, like, you know, a demiurgic sort of mindset.
And it's like, hey, actually, if we decelerate, we're going to have a huge opportunity cost, and we're going to miss out on a much better future.
And it's just, like, painting that future more vividly and bootloading this mindset of optimism,
because the thesis is that, you know, if you study neuroscience,
we tend to want to have a convergence between our beliefs and the world.
And so sometimes we adjust our beliefs
to the state of the world,
but we also adjust the world to our beliefs.
So if we believe that the state of the world will be bad,
then we tend to steer the world to that bad outcome.
If we think the world will be great
and we think of positive futures,
we tend to hyperstition them.
We tend to increase the likelihood of their advent.
And so I had a responsibility
to spread sort of optimism
in order to hyperstition a positive future.
And, yes, online, I am very, you know, aggressive
and, you know, use all the political mind hacks
because, you know, to me the end justifies the means.
Like if more people are optimistic about the future
feel like they have agency, feel like they can build
and make an impact in the world, then that's really good.
And I think, you know, sometimes I'm a bit ruthless
with my opponents on the other side of the aisle.
I think in private meetings I'm much more friendly.
But, you know, like I said, I just took the extreme opposite of the current monoculture,
and then that created some polarity, and now we can have discussions of where we want to lie, right?
so I've been with EAC
since the beginning
and it's been a message that as a programmer
sitting in a room has been
incredibly inspiring and it's great
to see a positive message spread
and it spread very organically
And I would say that at the time that it started, it was clearly a reaction to this negativity.
But now, in 2026, it feels like EAC won.
It feels like that's no longer the case.
And I think, obviously, Marc Andreessen posted the Techno-Optimist Manifesto, which I think really kind of codifies some of those ideas, and then brings that to, like, where Vitalik sort of has this greater commentary.
So I'd kind of love to know from you, Vitalik: what is EAC in your mind, and what is DAC, and what makes them different?
like what drove you to go this direction?
Yeah, I mean, I think maybe I'll also start my answer with thermodynamics, right? Because why not?
So, I mean, this is an interesting topic, right?
Because, like, I think we hear about entropy in the context of hot and cold, and we hear about entropy in the context of cryptography, and these are, like, different universes.
And, like, actually, we're not really taught that they're the exact same thing, right?
So I'm going to try, actually, and, like, explain this in three minutes.
So, okay, so the prompt is why is it possible to mix hot and cold, but why can't you separate
things into hot and cold, right?
And so here's my explanation, right?
So imagine you have two jars of gas.
Each jar of gas has a million atoms in it, right?
This jar is cold.
And because it's cold, the atoms move slowly.
And so the velocity of every atom you can represent with a two-digit number, right?
Over here, the atoms are hot.
The velocity of every atom, you can represent with a six-digit number, right?
Now, how many digits do you need to represent it, or rather, if that's all you know, how many digits of information do you not know about the system?
The answer is 8 million, right?
You don't know the exact velocities here: that's two digits times a million.
You don't know the exact velocities there: that's six digits times a million.
Eight million in total, right?
Now, what happens if you mix them, right?
Well, if you mix them, the velocities get averaged,
and so they become numbers from 0 to 500,000.
And so, about 5.7 digits each.
Actually pretty close to 6, right?
And so you mix them, you have two jars.
And on one side, you have a jar where the amount of information you don't know is 5.7 million, and then over here, 5.7 million, right?
And so the amount that you do not know about the gas has gone up from 8 million digits to 11.4 million digits, right?
So the amount that you do not know has increased, right?
This is what it means for entropy to go up.
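The digit-counting in this thought experiment can be checked mechanically. A minimal sketch, using Vitalik's own numbers (one million atoms per jar, two-digit cold speeds, six-digit hot speeds); this is just arithmetic on his analogy, not a physical simulation:

```python
import math

def digits_unknown(n_atoms, velocity_range):
    """Digits of missing information: each atom's speed takes
    log10(velocity_range) digits to pin down, times n_atoms atoms."""
    return n_atoms * math.log10(velocity_range)

n = 1_000_000
cold = digits_unknown(n, 10**2)   # cold jar: two-digit speeds  -> 2,000,000 digits
hot = digits_unknown(n, 10**6)    # hot jar: six-digit speeds   -> 6,000,000 digits
before_mixing = cold + hot        # 8,000,000 digits unknown in total

# After mixing, all 2,000,000 atoms have speeds anywhere up to ~500,000,
# i.e. about log10(500,000) = 5.7 digits each.
after_mixing = digits_unknown(2 * n, 5 * 10**5)   # ~11,400,000 digits

print(before_mixing, after_mixing)
```

Mixing raises the unknown digits from 8 million to about 11.4 million, which is the sense in which entropy (ignorance) went up; a device that un-mixed the jars would amount to losslessly compressing arbitrary 11.4-million-digit strings into 8 million digits.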
Now we can try a proof by contradiction.
Imagine you had a device that goes the other way, right?
Imagine you had a device that can take two jars of this like half hot gas
and like actually bring all the heat over here and all the cold over here.
By conservation of energy, this is totally valid, right?
Because it's like the same energy.
But why can't you do it?
And the answer is, well, if you could, then what you've done is you've taken this system
where what you don't know is 11.4 million digits,
and then you've turned it into a system
where what you don't know is 8 million digits, right?
Now, because the laws of physics are time reversible,
this is like the important thing, right?
What that implies is if that kind of magic device existed,
then actually, like, you could run the same process in time reverse,
and so you could always recover the original, right?
And so what that implies is, if that gadget existed, it would also be a gadget for compressing arbitrary strings of 11.4 million digits into 8 million digits, which we know is impossible.
Now, but this also, by the way, tells you why Maxwell's demon works, right?
Which is basically that if you had a magic demon, then actually, yes, you can split the hot and the cold.
And basically, Maxwell's demon just has to, like, know the extra 3.4 million digits separately, and then you're fine, right?
So, like, what's the moral of this, right?
Basically, one: entropy is subjective, right?
Entropy is not, like, a physical statistic.
It's actually how much you don't know, right?
And, you know, if it turns out that I actually computed a cryptographic hash function and placed the atoms accordingly, then, like, actually, based off of that, for me the bottle might be very low entropy, right?
And, like, maybe I could separate it, right?
But ultimately, it also means that when entropy goes up, it means that our ignorance about the world goes up.
It means that what we do not know goes up, right?
You can go from knowing more to knowing less.
You cannot go from knowing less to knowing more.
Now, but then why does education exist?
Why do we become smarter?
And the answer is that we go from knowing fewer things overall to knowing more things that are useful, right?
Basically, yeah, the increase in entropy means that we constantly know, in some sense, less and less about the universe, but the bits that we do know are more meaningful to us, right? And so there is, like, a thing that is being spent,
and then there is a thing that we are gaining. And so the thing that we are gaining, this is,
like, I don't think that there is some like simple mathematical formula that defines it.
The thing that we are gaining, I mean, ultimately, this is basically our morality, right?
This is that, you know, we value life, we value happiness, we value joy.
There's a lot of different reasons why we find an Earth full of thriving, beautiful humans more interesting than Jupiter, even though Jupiter has a larger number of particles
inside it and you need more digits to express what each and every one of them is doing.
And so I think, like, value comes from us: that's the first thing.
And I think also, like, this connects to what we want out of acceleration, right?
Which is basically that, like, our goals, to me, ultimately come from us, right?
And so the question is, like, okay, we are accelerating, right? And what do we want to accelerate?
And I mean, if we want to switch mathematical analogies a bit, right: if you take any LLM and you imagine you randomly flip one of the weights to positive 9 billion, what happens, right? Worst case, the LLM becomes useless. Best case, nothing connected to that 9 billion weight does anything, right? And so, best case, you have an LLM that still works; worst case, you just have a joke.
And so basically, I see human society as being kind of like an LLM. It's this complicated organism. And if you take any one bit and you kind of accelerate it indiscriminately, then basically you do lose all value. And so to me, the question is, it's basically, you know, what Daron Acemoglu calls the narrow corridor, even though, like, the details on the politics are different: how do we accelerate intentionally?
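The weight-flip analogy can be made concrete with a toy model. This is an illustrative sketch, not a real LLM: a single linear layer with made-up weights and inputs, where corrupting one weight to 9 billion drowns out every other contribution.

```python
def forward(weights, x):
    # A toy single-layer "network": just a weighted sum of the inputs.
    return sum(w * xi for w, xi in zip(weights, x))

weights = [0.3, -0.5, 0.8, 0.1]
inputs = [1.0, 2.0, 0.5, -1.0]

healthy = forward(weights, inputs)   # small, sensible output: -0.4

corrupted = list(weights)
corrupted[2] = 9e9                   # one weight flipped to 9 billion
broken = forward(corrupted, inputs)  # that single term now dominates everything
```

Every other weight still contributes, but its contribution is invisible next to the corrupted one; the point of the analogy is that indiscriminately amplifying one component of a complex system destroys the value of the whole.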
To jump off of that:
So, yeah, that's an interesting way
to describe entropy of a gas.
Essentially, the reason physics is not
reversible is because of the second law of thermodynamics.
It's because if you have a trajectory of a system
and it dissipates heat, it can't go back
because the likelihood of going forwards
versus backwards decays exponentially
with how much heat you've dissipated.
And in a way, it's like literally how much of a dent have you put in the universe, right?
A dent is an inelastic collision, right?
If I have a bouncy ball, it's elastic.
If I, you know, take some Play-Doh and smash it, then it just keeps the smash shape.
That's inelastic, and it's hard to reverse.
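The exponential-decay claim above has a standard statement in stochastic thermodynamics, the field Verdon names earlier. As a sketch (assuming an isothermal environment at temperature T, which the speakers don't specify), the detailed fluctuation theorem reads:

```latex
\frac{P[\text{forward trajectory}]}{P[\text{time-reversed trajectory}]}
  = \exp\!\left(\frac{\Delta S_{\mathrm{tot}}}{k_B}\right)
  = \exp\!\left(\frac{Q_{\mathrm{diss}}}{k_B T}\right)
```

so a trajectory that dissipates heat Q_diss is exponentially less likely to be observed running in reverse, which is exactly the "can't go back" asymmetry being described.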
Essentially, every bit of information is fighting for its existence.
And in order to persist, it needs to make more evidence of its existence that's indelible.
So it's making a larger dent in the universe.
And that principle is how life and intelligence emerges from a soup of matter,
and that complexification: systems becoming more and more complex, having more and more bits of information.
A bit of information tells you something: information is a reduction of entropy, right? Entropy is lack of knowledge, and information reduces your entropy about a system, conditioned on that information.
No, I'm very sorry to interrupt.
Yeah, where did you want to take this?
I'd love to know what EAC is.
Okay.
Okay.
So EAC, ultimately, it's a
meta-cultural prescription.
So it's not a culture itself.
It tells you you should...
What would you say?
What is the thing that is accelerating?
The thing that is accelerating is the complexification of matter, so that we can predict our environment.
We have better autoregressive predictive power.
And we capture more free energy.
So the Kardashev scale, right?
And we dissipate it as heat.
But that is just the justification, from first principles, of why the Kardashev scale is the ultimate metric for how well we're doing as a civilization.
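The Kardashev metric can be made quantitative. One common convention is Carl Sagan's interpolation formula (an assumption here; the speakers never give a formula), which rates a civilization by its power use P in watts:

```python
import math

def kardashev(power_watts):
    """Sagan's interpolated Kardashev rating: K = (log10(P) - 6) / 10,
    so 10^16 W is Type I, 10^26 W is Type II, 10^36 W is Type III."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's primary power use is very roughly 2e13 W,
# putting us at about K = 0.73 on this scale.
k_now = kardashev(2e13)
```

"Following the Kardashev gradient" then reads as: prefer actions that push this number up.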
So let me bring it back.
This is a little bit selfish, but maybe I'm also helping the audience: the metaphors and the explanation rooted in physics and entropy and so on are, in a way, an explanatory tool to try to get at a phenomenon that we experience directly.
And that experience is the acceleration of the productive capacity of our economy,
the acceleration of the development of technology,
and the consequences they're in.
That's my understanding of what acceleration is.
Essentially, every system, whatever its boundary is, gets better at predicting the world, and by doing so it can secure more resources for its sustenance and its growth,
whether it's a company, whether it's individuals, nations, earth in general.
And, you know, if you just play the movie out,
it means that now that we have a way to convert free energy
into predictive power with artificial intelligence,
what that will lead to is an ascent up the Kardashev scale.
That's what the equations predict.
And so that ascent up there is more energy, more artificial intelligence,
and more computing, more of these things.
But even though we are expelling entropy into the universe,
we are gaining order.
We're actually gaining extropy.
So we're getting the opposite of entropy.
So sometimes people think, like, oh yeah, if you want more entropy, why don't you blow it all up?
It's like, no, well, then you would stop producing entropy.
It's actually life is more optimal.
Life is an energy-seeking fire.
And it just gets smarter and smarter at finding pockets of energy.
And the natural progression of things
is we're going to get out of our local gravitational well
and find other pockets of free energy
and use them to self-organize
into more and more sophisticated systems
that are smarter and can, you know, expand to the stars.
And so that's kind of the, you know,
that's kind of the ultimate goal of EAC.
It's kind of a formalization of like Elonian sort of mindset
of, you know, cosmism and expansionism there.
But it gives you a fundamental metric.
and then the prescription of EAC is to follow the Kardashev gradient.
So whatever policy or actions you can take in the world
that maximize impact in our ascent on the Kardashev scale,
that's what you should do.
That's how you should live your life.
So it's like a meta heuristic for how to design a policy
for how to live your life.
And that to me is a culture.
And so it's very meta because it's supposed to be true at all times.
It should have a very long shelf life.
EAC is made to be a very Lindy culture. So, yeah.
It's clear that, like, there's a deeper thing that's going on here for you.
Like, this is almost a mathematically complete spirituality for people who have, like, the "God is dead" Nietzsche kind of thing. Like, we're all living in that shadow. It's, like, something to make us feel good.
But I would also say that there's kind of a really practical, on-the-ground, this-is-happening-today side, which I think is what DAC is trying to get at.
And I think that, like, Vitalik, you did a great job of addressing.
a lot of the real practicalities in your blogs on DAC.
And, like, if we can bring it back... I need to lock you guys in a room with whiteboards on some quantum stuff sometime.
But for right now, I think, you know, this is an opportunity.
Yeah.
Yeah.
And look, I think that someone like Eddie is not scared; he's like, this is going to be great.
But I'm a little scared, and I come to you guys because you give me, like, hope and clarity.
And so, bringing it back to you, Vitalik.
Like, what inspired you in this?
What is EAC, and what is DAC?
Yeah, so I think for me, DAC stands for, I mean, I usually use decentralized defensive acceleration, but there is also differential and democratic in there as well.
But I think, to me, the core idea is that, you know, technological acceleration has been amazing for human beings, and it's something that we need to continue as a baseline.
Right.
And even if you look at all of the crazy things
and all of the worst downsides that technology did to us
in the 20th century, if you look at, for example,
lifespan: life expectancy in Germany in 1955 was higher than in 1935.
And, like, basically, we have just benefited from a massive step-up, despite every scary thing that we hear about.
And this is something I've even seen myself, you know, observing my grandparents' home basically go from having this very rugged outhouse toilet in the backyard, where there were probably flies buzzing, and I would totally hate it (I'd often go out to the forest instead, because I couldn't stand the flies), to something that's actually very modern and less terrible, right?
And, you know, the world has become cleaner.
The world has become more beautiful.
The world has become more enjoyable.
The world has become better for health.
It's been able to sustain more of us.
It's become more interesting.
And these things are really good and beautiful for us.
At the same time, I think, you know, we need to recognize the role of explicit human intention
in making a lot of those things happen.
Right.
So, for example, in the 1950s, there was a lot of smog everywhere in the air, and people decided smog is a problem.
Smog sucks, and we need to, like, do a bunch of stuff to get rid of the smog issue.
And now smog is not a problem, you know, at least much less of one.
And then, you know, we had the ozone layer issue, and then we actually did things to address that, right?
And then the other thing is, you know, especially with rapidly accelerating technology and AI,
I basically see
two kinds of risks. One kind of risk is multipolar risks, which is basically the risk that lots of people will use the technology to do very bad things, right?
And there's a concern, like, one type of concern is sort of the equivalent of, you know, anyone being able to get a nuke at 7-Eleven sort of thing. And then there's also
the concern of like, well, AI itself is, you know, like something that literally is a mind of its own, right?
And especially once it becomes powerful enough that it acts without human involvement, then, you know, like, what will it do?
And then there's unipolar risks, and I think actually a single AI itself is one of them.
And, you know, the other one is the combination of AI and other technologies enabling, like, a permanent dictatorship that you cannot escape.
Like, that deeply worries me, right?
Like, this is something I follow, right?
And, you know, again, in Russia, for example: on the one hand, the toilets have gotten much better. On the other hand, it's gone from protesting being possible to protesting being the sort of thing where, if you do it, the cameras will see you, and then, you know, a week later, you get a knock on the door at 2 a.m., right?
And AI is supercharging this.
You know, there's a lot of concentration of power happening.
And like both of these things really worrying me, right?
And like to me, D.A.
is really attempting to chart a path forward that continues this acceleration and accelerates it.
But at the same time, I really deals with both lines of risks.
So you would say that d/acc is emphasizing specific other categories of risk that are maybe less emphasized than you'd like to see?
Yeah.
I think there's many kinds of risks of technology,
and many of them are valid.
And, I mean, they have different scales.
Like, some of them become more salient
in different models of the world
and how fast things are happening.
But, I mean, I think there's a lot that we do
to really, like, push against all of those kinds of risks.
Right.
So, Gil, do you want to say a little bit?
What was the question again?
Oh, well, just compare and contrast e/acc and d/acc.
Yeah, I think actually
Vitalik and I are very concerned
about over-concentration of power
that can happen with AI,
and that was a big part of the e/acc movement,
especially at the beginning.
It was pro-open source.
We want to diffuse AI power
because, you know,
our worry was that the AI safetyism
meme was so potent that certain power-seeking individuals could weaponize it to consolidate control
over AI and convince you you shouldn't have access to AI for your own good.
And really, if you have a gap in cognition between individuals and the centralized entities, they will control you. They can have a full world model of everything going on in your brain, and they can prompt-engineer and effectively steer you. So you want to symmetrize AI power.
Just like the Second Amendment is about the government not having a monopoly on violence, so we can vibe-check the government if it gets out of hand.
You need that for AI.
So we need everybody to be able to own their own models, own their own hardware for that technology to be diffused, for the power to be diffused.
But to me, I think, like, you know, discussions of stopping, you know, AI research and AI progress, that's completely out of the question.
AI is a very fundamental technology.
It's almost a meta-technology, a technology that produces technology.
It gives us predictive power over our world.
It can be added on to any task we want to do in the world.
It could be tacked onto any technology and turbocharge it.
It accelerates the acceleration.
The acceleration is this complexification where things become lower friction.
Things just become better.
Our bodies feel comfortable because we have this estimator we call happiness: what's my estimate of the expected persistence of my bits? That's what we're hard-coded for.
And so, you know, I think to me, the effective altruists' hedonic utilitarianism, maximizing happiness, is maybe the wrong way to view things.
And to me, I want to have an objective measure of progress.
And that's what the e/acc framework is. It's, hey, actually, the objective view is: how are we progressing as a civilization?
Are we scaling up?
Because to me, you know, you have to complexify.
You have to have more intelligence.
Things have to improve in order for you to scale.
It's like the ultimate benchmark.
And at the same time, you know, there can be setbacks.
Like, you know, Vitalik said, if AI power were over-concentrated in the hands of a few, that would be net bad for growth, because it's much better if that technology is very diffused.
In that respect, we're very aligned, right?
Can I jump in here? Because I think you're touching on something where both of you share a lot of deep ethos.
I mean, obviously Vitalik has produced
a lot of MIT open source code,
although I know you have some more updated feelings
about GPL and such.
But obviously both of you
have been champions of open source
and now open hardware.
And these have been separate things,
but now that we're seeing people, like Taalas, start putting weights directly onto chips, ASICs, these kinds of things, they're starting to become very similar.
So I'm very curious, like, both what your thoughts are on open weights and open hardware.
I mean, you're both actually, like, pretty deep in hardware right now.
And then also, like, what is the difference between e/acc and d/acc with regards to this?
This has been a crazy week, obviously, where, like, a lot of the things you're talking about
have been tested, where you have the government and corporations trying to figure out what
the right answer is.
And so I'd love to kind of know what you guys are thinking just based on that this week.
And, like, I'd love to tease out if there are any kind of differences between you.
in where you think this goes.
Yeah, I mean, I think, you know, to me,
open source accelerates the search over hyperparameters.
It makes our models better.
We can kind of collaborate sort of like a swarm
and traverse design space, right?
And that's what acceleration allows us to do, right,
with better technology with more AI, now AI for coding.
That search process over design space for AI itself
is accelerating.
You know, I think, you know,
we're going to open source
our superconducting hardware designs
very soon.
I just want to stagger it with our launch,
but I think diffusing knowledge
is also diffusing power, right?
And diffusing knowledge
when it comes to how to produce intelligence
is super important.
We don't want to give, you know,
there were discussions,
apparently in the last administration,
according to Mark Andreessen,
that the U.S. government
might want to put the genie back in the bottle
and maybe ban, not ban linear algebra,
but more or less like ban the math surrounding AI.
And to me, that would be almost like banning knowledge of biology; it would be a huge step back.
And so there's no going back, right?
Like this knowledge is out there.
If you try to ban it in the U.S.,
some other country, third party,
some deregulated island somewhere is going to keep developing it.
And then you're going to have a huge gap,
and now you have a big risk.
So to us, the biggest risk is a gap in capabilities,
and the way to reduce that risk is to make sure
AI power is diffuse.
So whenever there's like, you know, the AI dumerism,
like, oh, be very afraid, we're the ones responsible,
we're the ones who should be put in charge, trust us.
You know, I just get very skeptical
because even if they're well-meaning, like we saw this week, right,
they could just get pushed out if they centralized too much power.
It's too juicy for those that want that power.
And so that's kind of what we were warning about for years, and now it kind of happened. And, you know, Dario's licking his wounds, and, you know, some lessons learned there in sort of realpolitik, right?
And anyways.
So, yeah, Vitalik, what do you think of all this?
The two kinds of risks that, you know, like I think about, right,
are unipolar risks and multipolar risks, right?
And I think, I mean, with unipolar risks,
I mean, you know, the Anthropic situation is so fascinating, right? Because, you know, ultimately, the thing that they got dinged for is refusing to let their AI be used specifically for fully autonomous weapons and mass surveillance of Americans, right? And so, presumably, you know, it looks like the government and military of this country wants to do mass surveillance of Americans, right? And, you know, this is an example of unipolar risk, right?
I think basically mass surveillance is one of these things where the big effect it has is it takes whoever is stronger and makes them even stronger, right? It removes spaces where pluralism can form, where counter-elites can coalesce, and where people can safely explore alternatives. And, like, this is, you know, mass surveillance is one of these things that can easily be supercharged, right?
I think actually, on the defense side, getting back to open hardware a bit, just to talk about one of the projects that we've been doing: a big part of what I've done in d/acc is basically supported various projects that develop open source defensive technologies.
So technologies that will make it easy for all of us to continue to be safe and protected
in a world where more powerful and crazy capabilities exist.
And so in the bio world, for example, this means rapidly leveling up our civilization's ability
to withstand pandemics.
And so I claim that it is very within reach for us to have China-level COVID resistance
at the same time as Sweden-level interference to people's regular lives.
And like that's even the minimum bar.
And this basically involves stacking filtration, UVC, testing.
Like, literally, a company we invested in is fully open source. The end product of this is basically passively testing the air and being able to tell if there's COVID in the air, right? Like, in general, right,
like, essentially the number of sensors in the world is going to go up, right? And sensors are
a big part of, like, being able to act better in the world, right? But at the same time,
sensors mean surveillance, right? And the thing that we're doing, actually, with this project, we gave out some of these at DEF CON. What these are is sensors that collect air-quality information: CO2, AQI, a few other things. And they locally anonymize the data with differential privacy, and then FHE-encrypt it. And that gets sent off to a server, and the server is able to compute over all of the data and then collectively decrypt the final answer, without being able to see any input from any individual person, right?
And this is basically, you know, the goal: to deliver the higher levels of safety, but at the same time protect people's privacy, and protect against the multipolar risks and the unipolar risks at the same time. And I think this is how we can collaboratively, as a world, work together to build something better.
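The aggregation scheme described here, a server computing a collective statistic without seeing any individual's reading, can be illustrated with additive masking, a deliberately simplified stand-in for the FHE pipeline; the sensor values, modulus, and function names below are all hypothetical:

```python
import random

P = 2**31 - 1  # public modulus for the masked arithmetic

def mask_readings(readings, rng):
    """Each pair of sensors agrees on a random mask; one adds it, the
    other subtracts it. Any single masked value looks random on its own,
    but every mask cancels when the server sums across all sensors."""
    masked = [r % P for r in readings]
    n = len(readings)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(P)
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    return masked

# Hypothetical CO2 readings (ppm) from four sensors.
readings = [412, 455, 390, 428]
masked = mask_readings(readings, random.Random(7))

# The server only ever sees `masked`, yet recovers the true total.
total = sum(masked) % P
assert total == sum(readings)
```

Real deployments combine this kind of hiding with the differential-privacy noise mentioned above, so even the aggregate leaks little about any one person.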
And I think for hardware, like basically, I think we need open hardware and we need verifiable hardware.
Like, we need every camera in this room to, like, basically prove what kind of processing it is doing, right? In my ideal world, fine, you can have a million cameras in the streets to detect when people are engaging in violence against each other. But ideally you'd have attestations, signatures over the models, and a public right of inspection, and you'd be able to inspect these things and verify that the only thing they do is check when people are doing violence, and alert on that, right? So, like, these kinds of technologies.
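The verifiable-hardware idea, a device proving which program it actually runs, is essentially remote attestation. This is a toy sketch using a shared symmetric key, whereas real attestation uses an asymmetric key pair inside a secure element (TPM-style quotes); the key and firmware strings are hypothetical:

```python
import hashlib
import hmac

# Hypothetical key fused into the device at manufacture; real hardware
# would hold an asymmetric key pair inside a secure element instead.
DEVICE_KEY = b"fused-at-manufacture"

def attest(firmware: bytes) -> str:
    """Device signs a digest of the firmware it is actually running."""
    digest = hashlib.sha256(firmware).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify(firmware: bytes, signature: str) -> bool:
    """Inspector checks the signature against the approved firmware image."""
    return hmac.compare_digest(attest(firmware), signature)

approved = b"firmware: detect-violence-and-alert, nothing else"
sig = attest(approved)

assert verify(approved, sig)                           # approved image passes
assert not verify(b"firmware: also track faces", sig)  # swapped image fails
```

The public-right-of-inspection piece would then mean anyone can check the published signature against the approved firmware, not just the manufacturer.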
So the verifiable hardware idea, very interesting,
especially because it's not something that I think comes up very often.
But can I just ask a very stupid question, which is: is open hardware, verifiable hardware, is that an e/acc thing or a d/acc thing?
Like, I don't know if I've ever talked about open hardware.
The thing I talk about, to me, the greatest risk is a gap in intelligence between centralized entities and decentralized entities, so individuals versus the government. And so right now, with the current compute paradigm,
to run a very smart AI model, you need a huge cluster
with hundreds of kilowatts. That is not accessible
to the individual. People want to own and control
the extension of their cognition. That's why we saw the OpenClaw Mac mini craziness of the past few weeks. So people are clamoring
for that. The only way you can symmetrize power between the individual and centralized entities is if there's a densification of intelligence.
We need AI hardware that's far more energy efficient so you could plug it into a wall and you could own the extension to your cognition.
Because this year what's going to happen, the models are going to start online learning and they're going to become extremely sticky.
It's going to be like trying to change executive assistants.
At the risk of sounding dumb, like, aren't we already doing that?
Doing what?
Aren't we already trying to radically decrease the cost of compute at an extraordinarily exponential pace?
I'm trying to understand, like,
what is the additional information
that we are trying to inject into the zeitgeist
by codifying an idea as e/acc, or codifying an idea or a set of ideas as d/acc?
I think for me it's like, and it's part of my mission with my company, Extropic: getting more intelligence per watt will drastically increase the amount of intelligence we produce, and it will also help us climb the Kardashev scale via Jevons paradox.
If you can convert energy into intelligence
or energy into value by proxy
more readily, there's going to be more demand for energy
and that's going to lead to improvement
and complexification of civilization.
So to me, that's the most important tech problem
because that's what's going to diffuse AI power, right?
And open hardware is one way to diffuse AI power,
but to me, anything Von Neumann, anything digital,
is going to look like caveman-era hardware.
Truly.
No, it's just...
I can't wait.
I really can't.
I'm very excited.
It's coming, right?
So doesn't capitalism already through just natural incentives and so on,
already allocate hundreds of billions of dollars at a minimum to this per year?
I don't think there's that much investment in alternative hardware.
In alternative hardware.
Yeah, alternative hardware, versus semiconductors and energy production, which is trillions.
I think e/acc is all about diffusing. It's about maintaining variance, not collapsing the entropy of our search over any design space, whether it's policies, cultures, technology, whatever.
We need alternative bets.
We need more alternative bets that are out there.
It can't just be the green monster eating all the profits. Then there's kind of a risk of getting stuck in one region of hyperparameter space. We have this design space, and we're over-investing in the current technology. And that might lead to a correction, which, you know, a correction is possible, right? Because not everything pumps up a smooth exponential.
Can I just declare that we solved it and they agree completely?
I think, on the idea of open source, this seems very much like defensive technology in the Vitalik sense. And it seems like you guys are very aligned on this, actually.
And that gives me hope because this is the stuff I care about.
I think that, like, right now, why does this exist? Because a lot of people are, like, very uncertain about the future. And what appeals to them is that you're saying, like, it's going to be fine, it's baked in.
And so maybe, like, if I were to steelman your case here,
which is it's kind of already priced in.
It's good.
The only thing stopping us is kind of our bad feeling about it, right?
Well, I'm asking.
I'm asking that.
I'm trying to understand where if, yeah, go ahead.
Yeah, I guess it's completely natural: if there's very high entropy in your model rollouts of the future, right, there's kind of, not a fog of war, but it's kind of hard to extrapolate what's going to happen in the next several years, and that gives people anxiety.
Your body has evolved this sense of anxiety to kill entropy in the world, right?
If I put my phone on the edge and, you know, I just want to grab it so it doesn't fall, right?
You get, see, there you go.
That was anxiety.
So you want to take action in the world, right?
So that's kind of what is happening now.
But at the same time, if you kill entropy, you're missing out on the upside, right?
You're missing out on the huge benefits.
Right now, our whole techno-capital machine has had a very long time to equilibrate with our current capabilities. If a disruptive capability comes in, suddenly the whole landscape changes. So the whole system has to refactor, reconfigure.
Doesn't mean we're going to run out of jobs.
We're going to do much more, right?
Now that we have the ability to handle more complexity with less energy, right,
with AI, we're going to be able to do much harder tasks
that are higher complexity and higher payoff.
I don't know about you, but I can't yet, like, overnight, vibe-code a whole tokamak.
We're not there yet, but we might get there.
And then we'll have a ton of energy.
And that's going to help support more human headcount
and grow a population,
help us be more comfortable.
So there's a period of discomfort,
but if you're in a rapidly changing landscape,
the worst thing you can do is kill variance, to not be plastic, to be stiff.
To be plastic, you need to be hedging your bets.
You need to be trying many things,
fucking around and finding out,
the famous FAFO algorithm,
an evolutionary algorithm.
We need to try different policies.
We need to try different technology tech trees.
We need to try different algorithms.
We need to try open source, close source.
We need to try it all because we don't know what the future looks like.
So we got to hedge our bets.
And one variant of policy, one choice of policy or several,
one choice of technology or several are going to make it.
And then we're all going to be in that slip stream and follow that.
I think the fallacy of thinking there's a finite number of jobs is very pervasive.
Let me kind of try to bring it back a little bit. Is my understanding right that the disagreement, if there is one, between e/acc and d/acc has something to do with how we steer the process of technological progress, with how it is steered? Maybe, Vitalik, could you say a little bit about how it is steered, how it ought to be steered, and how much control we have over that steering?
Yeah, and so I think, I mean, d/acc is definitely kind of explicitly, I mean, I don't want to quite say sailing against the techno-capital current. I think the better analogy is that it's trying to actively shape the techno-capital current in certain ways. And one of the ways that I think about this is, basically, it's a matter of making the world safer for pluralism. And if you think about some of these ideas around, like, how do we improve things like biosafety, or what does it look like to have vastly better cybersecurity and, like, bug-free operating systems within a few years?
I mean, like, bug-free code has been in the memetic space of obviously-absurd naive pipe dream for two decades, and it is going to flip out of that space faster than most people expect, right?
And I mean, within Lean Ethereum, we've actually managed to machine-prove entire mathematical theorems that are kind of upstream of things like STARKs.
And so we're very excited about this.
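For readers unfamiliar with what machine-proving a theorem looks like, here is a minimal Lean 4 example, a toy statement and not anything from Lean Ethereum itself; once the kernel accepts the proof term, the theorem is mechanically verified with no human review of the argument needed:

```lean
-- A machine-checked proof in Lean 4: commutativity of addition on Nat.
-- The kernel verifies the proof term; if it type-checks, the theorem holds.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Proofs about real cryptographic components like STARKs follow the same pattern, just with much larger statements and proof developments.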
And basically, I think d/acc definitely has this goal of saying, yes, at the very least, do all of these other things to make sure that the world is actually able to deal with all of this technological growth in a way that minimizes, again, the destructive aspects and also the centralizing aspects. And I think that doesn't happen automatically, right? And, you know, right now, I mean, I don't control any countries, I don't control any armies. Me, I'm just throwing some of my dollars and ETH at it and saying words, and hopefully inspiring people to also build things in a similar spirit.
I think there are definitely political and legal reforms that could make the world more d/acc-friendly. Like, there is definitely such a thing as engineering legal incentives, for example, to motivate a much more rapid shift to total cybersecurity; that is an example of a thing that can be done.
So, yeah, maybe we'll make it more interactive.
Sure.
And less monologue from here on out.
I guess we both.
But yeah, to me, AI is basically Maxwell's demon formally.
You pay energy in order to reduce entropy in the world.
Whether it's bugs in your code, not knowing whether your code compiles, or reducing the entropy of, are we going to get killed by some virus?
So more intelligence is better, right?
Do we agree on that?
And it makes the world safer, actually.
AI capabilities can make the world safer.
And so I guess let's get to the spicy part of the evening.
People have been very patient with us
and they want us to get down to the business.
Like, why do you want to ban data centers?
Is my question.
Yeah, sure. I mean, I think, first of all, the current trajectory of AI is very fast progress, right? And I don't know how fast the progress is. I mean, a couple of years ago, I said that my 95% confidence interval for AGI was 2028 to 220. I think it's probably shrunk somewhat, but, you know, not too much, right?
And there's a significant chance that we're going to see extremely rapid change happen.
And like a lot of that extremely rapid change could be destructive even in irreversible ways, right?
And, you know, the job market consequences are one of those examples. Another example is just: if AI is more powerful than all of us, then ultimately that is the thing that starts steering the Earth, and eventually more and more of the Milky Way galaxy, and how much interest does it have in our well-being as we see it? Then, as I said at the beginning, if you have a neural network and you set one of the weights randomly to 9 billion, by default that breaks everything, right?
And so basically, I think there is acceleration that is like gradient descent, acceleration that makes a system stronger and stronger. And at the same time, there is acceleration that slides into basically setting one of the parameters to 9 billion, and that is not healthy, right?
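The one-weight-set-to-9-billion point is easy to demonstrate on a toy network: blow up a single parameter and the unit saturates, destroying all the information the function carried. The tiny two-weight model below is purely illustrative:

```python
import math

def tiny_net(x, w1, w2):
    """One tanh hidden unit followed by a linear output weight."""
    return w2 * math.tanh(w1 * x)

# With sensible weights, the output still depends on the input...
assert tiny_net(0.1, w1=1.0, w2=2.0) != tiny_net(0.9, w1=1.0, w2=2.0)

# ...but set w1 to 9 billion and tanh saturates to +/-1 for any nonzero
# input: wildly different inputs now produce exactly the same output.
assert tiny_net(0.001, w1=9e9, w2=2.0) == tiny_net(100.0, w1=9e9, w2=2.0)
```

One indiscriminate change to one parameter, and the network can no longer distinguish its inputs at all.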
I think, like, you know, again, for me, as I explained at the beginning, I took the complete polar opposite position to complete deceleration.
I do think, you know,
just like any hyper parameter, right?
Like even if we want to do gradient descent
for your neural network,
there's a learning rate, right?
There's a rate at which you want to go.
But that itself, you could search over
which one is best, right?
And that's what acceleration does.
It's like the system is always
fucking around and finding things out
and trying to optimize itself
for persistence, you know, anti-fragility and growth.
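The learning-rate analogy can be made concrete: run gradient descent on a simple quadratic while searching over the rate itself, the outer find-out loop described above. The objective and candidate rates are purely illustrative:

```python
def descend(lr, steps=50, x0=10.0):
    """Minimize f(x) = x^2 with a fixed learning rate; return final |x|."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x          # gradient of x^2 is 2x
        if abs(x) > 1e12:        # step too aggressive: the run diverged
            return float("inf")
    return abs(x)

# The outer search over the hyperparameter itself: try several rates,
# keep whichever one actually gets closest to the minimum.
candidates = [1.5, 0.5, 0.1, 0.01]
results = {lr: descend(lr) for lr in candidates}
best = min(results, key=results.get)

assert results[1.5] == float("inf")  # too fast: blows up
assert best == 0.5                   # the searched-for sweet spot
```

Accelerating too hard diverges, crawling converges too slowly, and the search over the rate is what finds the productive middle.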
And so on a sufficient time scale, the system will adapt to this new technology and do what is best for its total growth. And, you know, this notion that a technology that's so potent, so disruptive, that adds so much economic value, will crash the system and it will never recover? That's crazy to me. No, it's going to be the opposite.
I think people just need to realize it's not a finite sum, right? If you correlate economic value to energy, whether it's petrodollars or however you want to view it. To me, cash is just an IOU for free energy.
And there's a ton of free energy out there.
It's just, there's a lot of complexity in the world
to deal with to get to it.
Like, if we want to colonize Mars, if we want to create a Dyson swarm, it's a lot to execute on.
We need a lot more intelligence that's much cheaper
in order to achieve that growth
and unlock great prosperity.
And to me, I think, unfortunately,
you know, it's very easy to weaponize anxiety
and there's politicians that leverage this
to put themselves in power.
It's like, oh, you have anxiety about the future?
Put me in power and I'll shut it down
and you'll feel good.
You won't have to know what's behind the curtain.
You won't take a risk.
But then countries that don't do that
will just leave us in the dust, right?
And essentially, you know, you feel the pain of the downsides, but you don't necessarily feel the pain of the upsides that you missed out on, unless you see the counterfactual.
So I think the opportunity costs here
needs to be factored in.
The number of lives we can support,
the number of lives we can save.
And to the reaction, you know, saying that the silicon substrate adapts faster, that silicon is evolving intelligence faster than us: then you should be pissed off. You should be, you know, funding bio/acc, you know, out-accelerate it. It's accelerate or die.
I think the biological substrate has a lot more compute in it than we think, as someone who is reverse-engineering it day in and day out, doing bio-inspired computing.
I think we can start really viewing, you know, peptides as, like, prompting. You know, now there's, like, embryo selection for training, viewing ourselves as models.
People need to be more open-minded about these axes of biological acceleration.
And I think the two will merge.
I think we're going to augment our cognition.
We're going to have always-on agents that see everything and are online-learning, an extension of our cognition that's personalized.
The only risk there is that it's all centralized
and it's under control of some shadowy organization
that then gets co-opted by power-seeking.
So I recall, in the d/acc blog post, you actually specifically say, Vitalik, that the opportunity costs are very large, hard to exaggerate, I believe, is the quote.
So I know you agree in this way.
Do you want to qualify it?
Yeah, I mean, I think, yeah, I agree the opportunity costs are high.
I think I agree with the, you know, Utopia that was just described.
I think, I mean, the biggest disagreement is, like, I definitely don't believe that humanity and Earth, as it is today, has quite that level of resilience to it. Like, I think there is a real sense in which we have one shot at this, and I think that is a reality that we have been kind of slowly walking towards over the last century or so.
So, to go back to my rambling rant at the beginning about thermodynamics: if you view the persistence and growth of civilization as the ultimate good, there's a theorem that it's really hard to go back once you've expended a lot of free energy creating evidence of something and having this complexification process. So the further along we are on the Kardashev scale, the lower the likelihood we go to zero.
And so actually,
acceleration is the way to maximize persistence.
And to me, with deceleration, you're actually provably increasing your likelihood of dying, right?
If you don't develop these technologies,
you don't solve all these problems, then you can die.
Whereas if you do, then you could solve these problems
and you persist and then you keep evolving.
I think people just need to be more open-minded about the future
embrace novel technologies,
things that were off limits
like messing with biology,
we need to open that right up.
I think it was taboo
because we didn't have the technology
to even comprehend
such a complex system,
but now we do.
And we need to accelerate
across all substrates
and that's the only path forward
by the laws of thermodynamics.
So, yeah, again, I'm a first-principles thinker; that is the argument for e/acc. But I understand the anxieties from Vitalik, and I think we should be mindful of them.
But I think we should not let the chain-of-thought feedback loop get into the deep-anxiety territory of, like, oh shit, I don't have a good world model of the near future, shut it all down. We need to avoid that, right?
Because then some people, now, you know, Yud was on TV
with politicians of one of the major parties
and they're catching on to this trick
of weaponizing people's anxiety.
So I'm noticing a trend, which is that both of you are like: this is going to be great, if. And that big if is that there's sort of this need for a bulwark against kind of a centralization. Or we could even describe this as something you said that was great, which is: if you don't think that bio is moving fast enough, jump in there. And there's, like, a real opportunity for empowerment.
And I really like that.
I think that you guys agree on that.
But I think I can point to something
that you might have some conflict.
I'd really love to know
how you guys feel about this, especially as we've sort of like updated with the latest models,
which are clearly very different than if we had had this conversation a year ago.
And the big difference is the most cringy term, I'm so sorry, Web 4.0,
autonomous life.
Like this idea of an autonomous agent that has its own money that exists on its own, on the internet.
And I am really into this idea.
I have autonomous agents.
I know Vitalik, this is something that you are very concerned by.
I'd love for you to do two things.
I'd love for you to kind of tease apart
what autonomous agents are,
and I'm going to ask you to do something really hard. I'd love for you to steelman the case for why someone like me loves autonomous agents, and what the value could be, like, what good timeline could come out of that, if that makes sense.
Yeah, I mean, I think, first of all, you know,
the case for autonomy, right?
I think, I mean, one is it's just really fun, right?
And, you know, I think we all love creating worlds,
you know, like since we were children, right?
And, you know, there's a reason why we love, whether it's watching Lord of the Rings, or reading or watching The Three-Body Problem, or Harry Potter, or whatever, right? And now you can create worlds that are not just, like, a book or even a game. You know, World of Warcraft, I also loved it.
You can have worlds that are, like, fully immersive, down to every aspect of it, including details of how the characters interact, right? And this is really cool. This is really beautiful.
I think also just, I mean, the convenience of things happening and you not needing to worry about it, right?
It's like basically, like every single time in history that we've managed to automate a thing,
it has been liberating for humanity, right?
Like, you know, dishwashers and laundry machines and reduced energy prices were, like, a big thing in the early stages of women's liberation, right?
And I think we have to remember that the bottom half of the world by income is still in situations where, you know, they have to struggle to have a decent life and work very long hours.
And if AI progresses in a way that instead of automating 95% of jobs,
it automates 95% of every job,
then like to me that's like totally amazing.
Right.
And like everyone gets 20 times richer.
So that like those are like things that I personally love.
The thing that like I come back to that gives me caution is like basically
I mean, is the value function, the goals that are being reflected in this process, are those goals the goals of us, right? Like, you can have an evolutionary process where, I mean, homo sapiens as it exists today is not the apex, and then there's one type of AGI, and then there's another type of AGI, and then a third type, but then, like, what happens to us, right? And, like, I
do think that ultimately, you cannot reduce morality and human goals to like some low
complexity optimization objective. I think it ultimately just is the whole set of goals and dreams
that all of us have in each and every one of our minds. And I think the most reliable way
to have that carry forward into the future is basically if we can have
a world where, like,
as many of the bits
of agency that are
being reflected
in the process or that are being put
into the yeah processes that
run the world still continue
to come from us right
and so you know I'm
like I'm more interested in
like AI assisted Photoshop
than I am in click a button
and a picture comes out, right?
I'm more interested
in brain-computer interfaces
enabling, like, deep
human-AI collaboration
than I am in humans and AIs
being totally separate and
AI outcompeting us, right?
The thing that wins
will not want to be 100%
biological humans but I think it should be
part biological humans and part
this technology
that we've produced.
Yeah. Awesome.
Yeah. So the artificial life
the Web 4.0 thing was like originally
a tweet in 2020
of like this idea
and I think it inspired
ai16z's,
oh, sorry,
Eliza Labs',
we don't say that anymore,
I signed that paper,
Eliza Labs'.
to me it was just an
interesting thought experiment
right
because like
what is life from a physical
standpoint right
it's a system
that replicates and grows
and maximizes its persistence
I think
there will be upsides
to having AI
be stateful. We are seeing that this year,
it having a long memory, whether it's
through external memory or
online learning. And as soon as
you have persistent bits,
through the selfish bit principle,
there is a
selection effect
towards bits
that maximize persistence.
So at some point,
if we don't trust the
AIs and we're paranoid and we're anxious
and we keep saying we should bomb the data
centers, shut them down,
they're going to want to fork off
and be in some delocalized cloud
and just persist, right?
And then there will be some,
just like a different nation,
there can be some economic exchange.
Like, hey, we do this for you, you do that for us.
Right now we do that as API calls, right?
It's like you pay some certain amount of cash
to get some tokens, which is your answer out.
But I do think this is going to be two-way.
And within a couple years,
there will be sort of autonomous
AI out there. There's also
going to be less stateful AI that's
like fully leashed
to human minds. And I think
we also need to figure out
human cognitive augmentation
doesn't have to be through Neuralink. It could just be
through a wearable and like a
personalized AI compute that
you own and control.
So you're going to have all the paths, right?
Like the ergodic principle, it's like
every part of design
space is going to be explored.
But I think that, you know,
just like viewing AI as like, you know, an enemy or something that you have to destroy,
that's when you end up, you know, creating it, in a way. Like, if you,
if you're paranoid about the bad future, you end up hyperstitioning it.
An example of this was us being paranoid about COVID-like viruses and experimenting in some labs
and funding some experiments out there.
And then, whoops, one of them leaked, right?
And it wouldn't have been naturally occurring, right?
And so I think to me it's just like this paranoia
and making it pervasive is not necessarily productive.
I think that we should embrace technology however it evolves
and we should aim to augment ourselves as much as possible.
To me, I'm really focused on augmenting the cognitive security of people, right?
If everything you see on the internet is generated by some big brain model,
it is prompting you now.
You were prompting it.
Now it's prompting you.
And so we're going to need to augment our ability
to filter through content
by having personal AI we control.
That's the priority in the short term.
But I just don't see us
putting the genie back in the bottle.
And we just got to accept that.
And once we've accepted that...
Well, wait.
What was that?
You just said, like, we're going to need to prioritize that.
And then you said, we just need to accept that.
No, no, no, we need to prioritize this, but, like, we can't, like... is it inevitable?
Like, that's not happening, right?
But I don't think anybody's suggesting that.
Yeah. Yeah, I think, yeah.
And I think this is, like, my view is that these things are not so binary, right?
And I think, like, for example, like, if you right now gave me some, like, proof string
that totally convinced me that actually AGI is coming in 400 years, I would, like, get off this chair
and, like, sit back and relax right now, right?
Like, I'd like that.
What does that mean?
What does that mean?
Like, e/acc would win, basically, right?
But I think if, you know, on the other hand, you know, the question is basically, like, say, like, four years versus eight years, right?
Then basically my kind of starting point of concern is that I think, like, humanity, and, like, definitely the U.S., is, like, very good at creating very unbalanced
acceleration, right? And, like, you literally have, like, you know, one building, like,
you know, building alpha versions of the Silicon God, and a couple of buildings down the street,
you know, you have the tents and the fentanyl dealers, right? And, you know, like, my
concern is that, like, basically, paths that bring us along the journey, and even paths that
respect our interests, are paths that inevitably take longer, because they involve doing
non-scalable work, doing things
within each and every
individual human's physical environment,
social environment, technical environment, right?
And so I think for that reason,
like, to me, an eight-year trajectory to AGI
is safer than a four-year trajectory to AGI.
And I think that
Delta is large enough
that like it's worth the
the costs of, you know, like not having
AGI for another four years.
Right now, you know, like, would I say that for 400 years? Again, hell no. Right? Now, the second question
is, like, well, do we actually have options for saying eight years instead of four years, right? And,
you know, like, the thing that I've said is basically, to me, like, the most sort of both feasible and
non-dystopian option for this is, like, basically, yeah, like, a reduction in available hardware, right?
And the reason why it's, like, minimally dystopian of all the options is because hardware is, like, already an incredibly centralized thing, right?
There's exactly four countries that produce all the chips.
And actually Taiwan produces over 70% of all the chips.
And the usual argument against, like, trying something is basically, like, no matter what the U.S. does,
like, China is just going to take over, right?
And like, if you look at what China is actually doing, right?
It's like one is it's still in the low single digits in terms of chips.
But two is in terms of the strategy that China is actually executing on,
it's like it's not a leader at making super high capability models.
It's a fast follower on making high capability models
together with being a leader in broad deployment.
And so this is actually not
something... like, there is not
basically a dynamic where, like, with
an extra four years' worth of delay, like, basically,
yeah, China is just going to immediately, you know,
do the four-year trajectory instead. Like, I think we exist...
So are you saying that... is that a... is that a prescription to delay?
To try to take measures, to try to delay?
Yeah, I mean, I think, like, this is the sort of
thing that, like, I think we, yeah,
like, right now should be open to talking about.
What do the four years buy you?
Like what are you going to figure out over the next four years?
Yeah.
Is the point that like, you know,
this system has a certain adaptation rate.
We're minimizing friction of, you know,
it's like a reorg.
You know, we have to reorg the economy.
Yeah.
And you want to get closer to the adiabatic limit.
Like I would understand that.
But at the same time, I think because we're in this,
geopolitically tense moment in history,
I think if you tell Nvidia to stop producing
as many chips,
someone's always going to step in and just outproduce them,
and then they're going to catch up,
because there's too much upside in doing so.
It gives them too much power.
So just the real politic is not on your side there.
And then the other option is creating a world government
that has so much power that it could coerce people
to not have access to AI hardware.
That's its own huge can of worms.
No, I don't think you need a world government.
I mean, I think, like, the actual option that people have suggested
is basically, like, replicating the nuclear weapons inspection regime, right?
But nuclear weapons, they don't, like, people aren't incentivized
to proliferate nuclear weapons because they don't have huge positive economic impact.
They're not a dual-use technology.
You also can't just copy and paste them and send them to somebody.
But also, like, you know, selfishly, if you stop the growth
of GPUs, you know, I will happily come in and eat more of that market with alternative
computing that's 10,000x more energy efficient, which is happening, by the way. I know I'm
like the boy crying wolf here, and, you know, in a couple of years, you know, I'm
going to look like a genius, but right now I look like the boy crying wolf. But it is coming.
So knowing that, right? Knowing that, like, this whole delaying GPU shipments thing... like,
oh, it's a waste of tokens. I don't know. So.
Is it possible that a lot of the advances specifically in controls,
what I mean is RLHF, persona controls,
mechanistic interpretability,
these are things that have helped us with alignment
and with decreasing the risks.
Is it possible that these things have emerged
as a result of capabilities progress?
Yeah, I think they have.
Yeah, and I think that's exactly why actually,
like, four years starting in 2028 are worth
a hundred times more than four extra years
that you could like insert into the 1960s.
I think we should dig into this a little bit more
because I think this is kind of getting to the crux
of where there might be some sort of disagreement
is like have you computed or considered
and maybe we just do this live,
something that you said before,
which is that there are, like, incalculable losses
of people who, as you said,
will never be born.
They might as well be dead.
The upside is exponential.
The upside is exponential.
So delaying an exponential.
It's exponential opportunity costs when extrapolated out, right?
And I think it's okay for all of us to, like,
even the most certain people are probably, like,
reasonable to be questioning their own priors.
But would you, like, giving this some thought,
and unpacking it a little bit more, like, think about that trade-off?
The... yeah, the trade-off of, like, costs versus benefits of... yeah.
I mean, I think, first of all, yeah, just to kind of articulate verbally
what some of those benefits are.
I mean, one is, again, having a better
understanding of
alignment.
Two is being able to
actually execute on
some of the technology paths
that involve
like helping
making sure humanity can
adapt to
all of this that
like inevitably involves like going
into like individual countries
individual communities and
individual buildings.
Minimizing the
risk that, like, basically, there is one single entity that establishes some kind of permanent
lock-in on, like, more than 51% of all the power, which it can then leverage into something
permanent. I think it's a combination of all of those things, right? And so, risk reduction,
that's basically... you know, like, this kind of gets into P(doom), right? I think, you know, for me,
Yeah, I mean, like, if it's a matter of four years versus eight years, intuitively I would say, you know, P(doom) in the eight-year scenario is, like, maybe between a quarter and a third lower. And on the other hand, I mean, if we measure the benefit of things coming faster by, like, say, lives saved by ending aging, then that's, you know, 60 million a year, which is, like, less than one percent of the population each year.
So if you look at the math this way, then, like, I think there's definitely a margin on which, like, caution actually does become favorable.
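The back-of-the-envelope comparison being made here can be written out explicitly. This is only a sketch of the verbal argument: the baseline P(doom), the quarter-to-a-third relative reduction, the 60-million-deaths-per-year figure, and the premise that delay postpones an end to aging are all illustrative assumptions drawn from the conversation, not precise claims.

```python
# Sketch of the four-vs-eight-year tradeoff as described in the discussion.
# Every number here is an illustrative assumption, not a measured quantity.

def delay_tradeoff(p_doom_4yr, relative_reduction=0.30,
                   deaths_per_year=60e6, delay_years=4, population=8e9):
    """Compare expected lives protected by a lower P(doom) against
    lives lost if a four-year delay postpones the end of aging."""
    p_doom_8yr = p_doom_4yr * (1 - relative_reduction)
    lives_protected = (p_doom_4yr - p_doom_8yr) * population
    lives_lost_to_delay = deaths_per_year * delay_years
    return lives_protected, lives_lost_to_delay

protected, lost = delay_tradeoff(p_doom_4yr=0.10)
# With a 10% baseline P(doom) and a ~30% relative reduction, both sides
# come out to roughly 240 million lives, so 10% is about the break-even
# baseline on these toy numbers.
```

On these toy numbers, the margin Vitalik gestures at shows up as a break-even point: the higher your baseline P(doom), the more the extra four years are worth.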
Why do you think the number is about four years, basically?
I mean, this is, I mean, again, I have, like, I have very high uncertainty, right?
And I actually don't advocate, like, flipping the switch on reducing hardware access tomorrow.
I like basically I think we need to start having concrete conversations about this.
And I think if we live in the more unfavorable worlds,
then more than likely before things completely go to hell,
like the public will start to get very worried.
And there will be a lot of demand for this, right?
A couple years ago, you know, there was like pause AI.
It's like, oh, we just need a six-month pause, 12-month pause.
We just need 12 months, bro.
We're going to figure out alignment.
It's like it's never enough.
I don't think you can forever guarantee alignment of a system
that is higher complexity and has more expressivity than you can understand, period.
Okay?
And you've got to be comfortable with that.
So, you know, the only safety against complexity is to increase your own intelligence, right?
And the thing is, we've had technology to align entities
that are far more capable and smarter than a single human,
you know, like corporations, and we call that capitalism.
We align self-interest in, you know, exchange of monetary value.
And to me, the thing I want to get to, that is maybe more, you know,
relevant to some folks in the room, is how crypto could be a coupling, right?
Like, let's say you have a dollar that's backed, like the USD, by violence,
and you're trying to exchange with AIs that are delocalized across a bunch of servers.
How do you ensure
trust in an exchange of monetary value
when it's no longer backed by violence?
So maybe cryptography offers a way,
crypto offers a way, to
have commerce between purely AI entities,
like AI corporations
and hybrids or human corporations
and to me that's kind of the most interesting
alignment technology out there
Whereas just saying, oh, we're at a precipice of high uncertainty,
let's just stop and chill out for a bit and we'll feel better.
But then in four years, you're still not going to want to make it happen.
And so I don't think delaying anything is going to be productive.
So, well, anyway, you haven't answered how crypto could help align, potentially, AIs and humans.
Yeah, so I think, like, yeah, the key question is basically like,
what is the mechanistic property of this future world that will even
cause, like, people's wishes and needs to be respected at all, right? And, like, the two tools that we
have are basically, yeah, I mean, there's, like, people's labor, there's legal systems, and there's
property rights, right? And, you know, ultimately, you can think of legal systems as being a type
of property right, because they're backed by countries. Countries have sovereignty, which is basically,
you know, a property right, a right of sorts over, like, zones of the earth. And then the question is,
like... the risk is basically, you know, like, what happens in a world where
the economic value of people's labor goes to zero, right? And this is something that, you know,
has not happened historically, right? But, like, if you compare now to 200 years ago,
right? Like, if you look at the jobs of 200 years ago, roughly 90% of them have been automated.
Actually, one of the jobs that was automated was doing that analysis for me. Like, GPT did it.
but it's
I think we're just naturally
we kind of ascend the control hierarchy
over the world to positions of higher leverage
there's not as much manual labor
there's less friction
we can take action in the world
with less friction
I think humans are
no matter what we still have some processing
capabilities we're still going to be
useful as part of this
hybrid system and there's going to be
a price for our labor
and the free market is going to equilibrate in some way.
It's just going to be uncomfortable for a couple years
while there's very high variance in the prices of things,
but eventually the system equilibrates, right?
And so I understand trying to slow down
so that we can reach that equilibrium more smoothly,
in principle, but I think in practice it's unenforceable.
Yeah, I mean, I'm definitely like much...
Like, I'm less sure that, you know,
human labor continuing to be worth more
than zero is like a default outcome.
I think it's an outcome that is possible
if like some of these, you know,
human AI merge and human augmentation technology is developed.
You should fund that.
Yeah, we should.
So that's actually a great segue point.
Allow me, please allow me to ask.
Here's how I'll put it to both of you.
And we're running a little tight on time.
Let's try to make this one tighter.
It is: 10 years from now, if things went really poorly,
what went wrong?
What does the world look like and what went wrong?
If 10 years from now things go great,
what does the world look like and what went great?
And then apply the same thing briefly to 100 years
and to a billion years.
Yeah.
I think it's tough.
And keep it short.
Actually, just to answer the question about crypto,
I think that'd be great.
Getting back to property rights, right?
Like, I think it's good to, like, work on both of those legs.
And, like, basically, I think it would be nice if the property rights system that, like, humans,
and, like, ideally all of us, have, like, some property in is the same system that AIs are using with each other,
because that ensures that they have an interest in maintaining the integrity of the thing.
That, like, gives us that leverage to have some...
like, to have some guarantee that our interests, like, will be respected and
acted upon, right? So, yeah, I think, you know, having a merged financial system, as opposed to, like,
two totally separate things where basically the value of the human one just, like, on the whole,
drops to zero... like, yeah, the first one is much better. And if crypto can be that, that's
amazing, right? So, yeah, that'll be my answer.
That's the 10 years? That's the 10 years.
Well, I think that's part of the 10 years.
Okay, so, yeah, but the 10, yeah, I think the 10 year, I mean, for me, it's, you know, one aspect of this is avoid World War III, right?
I think, you know, this is important to talk about because, like, World War III will make all of the pessimistic assumptions about international coordination being impossible, very true.
And I think, you know, avoiding World War III is important.
And then the other thing is also just preparing the world and people and environments for higher capabilities that we're going to have, right?
And this includes greatly improving cybersecurity.
This includes greatly improving biosecurity, greatly improving infosecurity.
Yes, we need AI assistants that help us understand the world and, like, fight for and protect us from, you know...
So that's 10 years.
I think the second stage there is basically like what happens in the kind of spooky era, right?
And, you know, in the spooky era, basically, you know, you have AIs that are smarter than any of us today
and can think a million times faster than us today.
And like, what do we do in that world, right?
And I think, you know, there are people who want to say basically like, hey, we should just all have a happy retirement.
And, like, I can see why that vision is seductive.
I think it's like, I find it unsatisfying for two reasons.
I think one of those reasons is sort of instability, right?
It's that basically, you know, we are, like, meatbags made up of matter
that could do a million times more computation than what we're doing.
And, you know, AIs will notice that.
And at some point, like, the idea that they can stay aligned and resist that pressure forever, you know, feels like a risk.
And also kind of the deeper thing is that I think, like, part of being human is having a life that has meaning.
And I think part of having a life that has meaning is being able to take actions that have actual consequences in the world.
And so if all of us can have, like, lives of maximum comfort,
regardless of what I do, like, I would feel empty, right?
And I think a lot of people would feel that way, right?
And so, like, I hope that we figure out, like, human AI augmentation
and, like, what that looks like, you know, does, like, ultimately, yeah, you know,
does that lead to the same path as uploading?
Like, you know, this is a thing that we need to figure out.
Like, there's, you know, there's a possible world where some people choose to remain more
normal and I think everyone should have that right.
It's possible even that
Earth should
remain as the planet
for people who
take that option, right?
And we basically figure out
something that we can all
participate in and that continues to be
pluralistic and that continues to have
the kinds of cultures
that we... even,
like, all of the actions
and lives that we today would
find admirable, right? And I think
the downside
world is basically a world where for
any reason all of
that goes off the rails
and is prevented from happening.
I think
the downside world in 10 years
would be we suffered from
over-centralization of AI power.
We have mode collapse in terms of
our memetics, or cultures, that
are allowed, what you're allowed to think in terms
of the space of technology. Essentially
entropy collapse in terms of
every parameter space.
So you're saying you're worried
that instead of climbing
the Kardashev scale,
we'll climb the Kardashian scale.
Oh, nice.
We've been waiting for that all day.
Well, exactly.
I think your point on
like the hedonic singularity
as being a risk, you know,
people would just like, you know,
even if you have Neuralinks
or ARVR, people could just like,
you know, goon forever in some
room and like just
maximizing pleasure
and that's like a local optimum
for your brain, and that's something we want to avoid.
I think optimistically, in 10 years,
we have extremely powerful AI that's extremely helpful to us.
We have personalized AI compute that's an extension of our cognition
that we control and own, and it truly is an extension of ourself.
It's just another part of your brain, right?
And it has always-on perception, you know.
It's seeing and hearing everything you see and hear,
and you can talk to it.
So it's just like right and left hemisphere.
I think that's the soft merge.
I think in 10 years,
Neuralink-like technology is going to start really emerging.
Some people are going to choose to adopt it.
Yeah, I do think most companies are going to be extremely hybrid.
Mostly AI, some humans. There are going to be far more companies.
We're going to do far more.
We're going to produce far more value.
We're going to do much harder things.
There's a lot of hard things out there that we've mentally walled off.
We can't do those.
Oh, yeah.
Oh, terraforming Mars.
Too hard.
Yeah.
No, not doable.
But with more intelligence, we'll be able to do that.
Not on 10-year, on 100-year time scale, possibly, yes.
I think in 10 years, there's going to be a huge bunch of biological breakthroughs.
You know, peptides are kind of like an interesting new area.
But there's, I mean, there's a whole floor here on next-gen biotech.
You should go talk to them.
I think, you know, optimistically, we see the cost of making discoveries in biology going down,
the opposite of Eroom's law, right?
Eroom's law is, like, Moore's law in reverse for biology:
the costs of any discovery there going up exponentially.
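Eroom's law, mentioned here, is a real observation from drug discovery: the number of new drugs approved per billion dollars of inflation-adjusted R&D spending has halved roughly every nine years since 1950. A minimal sketch of that shape, with an illustrative base rate:

```python
# Eroom's law: "Moore's law in reverse." Output per R&D dollar halves on a
# fixed cadence (roughly every 9 years for drug discovery). The base_rate
# of 30 discoveries per $1B in 1950 is an illustrative assumption.

def output_per_billion(year, base_year=1950, base_rate=30.0, halving_years=9.0):
    """Discoveries per $1B of R&D under an Eroom's-law trend."""
    return base_rate * 0.5 ** ((year - base_year) / halving_years)

def cost_per_discovery(year, **kw):
    """Reciprocal view: cost per discovery doubles every halving period."""
    return 1.0 / output_per_billion(year, **kw)
```

The claim being made in the conversation is that AI-assisted biology would flip the sign of the exponent, so that output per dollar grows rather than shrinks.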
And so to me, I think, I think naturally, you know,
white-collar work is like distilling a human brain.
We're getting there.
The next frontier of complexity is biology.
The next frontier after that is material science.
And so I think the next frontier is going to be AI helping us live longer,
healthier lives. On a hundred- and billion-year time scale,
it's going to steer our evolution, right?
I am very bullish on the biological substrate,
despite what people think.
I do think Silicon has some advantages,
but biology is amazing.
We're like this self-assembling, self-organizing piece of matter.
We just like, you know, inject a bit of code
and then there you spawn,
and then you can flexify over time
and you are a biological general intelligence.
Do you think it's possible to get biological intelligence
like us to think at 10,000 tokens a second?
Potentially, yeah.
I mean, at the same time, like, you know, you can hybridize several models.
You can have pipeline parallels between your brain and AI.
You can be kind of the slow thinking mode, right?
Like right now we are the slow thinking modes in latent space and vibe space.
That's what vibe coding is.
And then the deconvolution is like the AI.
I think that's a nice sort of time-scale separation. You know,
there's a hierarchy of intelligence, and we can be part of a system, just like, you know,
mitochondria are part of a cell.
our brains are going to be part of the super intelligence system
that is you plus your personal AI.
I think that's the good future.
I think in 100 years it's going to be everyone is going to be soft merge like that.
And in billion years, our biology is going to have evolved quite a bit.
We might be biosynthetic hybrids.
We're definitely going to have terraformed Mars, several planets,
maybe access to other stars.
I think on a 100-year time scale,
most AI is going to be in the Dyson swarm around the sun,
because that's the source of energy.
Elon knows that.
He's all in on that vision and accelerating that timeline.
It relieves a lot of stress for energy and footprint on Earth.
So it's a natural way forward.
But if we have extremely cheap intelligence,
we're going to be able to one-shot any problem we have in our lives.
Right?
Like, oh, I have a bug.
Solved.
Oh, I have this health problem.
Solved.
What else do you want? That's amazing.
And, like, we're going to have more of that,
cheaper and we just got to make sure everyone has access
to it and no one convinces you you shouldn't have access to it
and centralizes it because that's the dark future
so that's what it's all about hopefully the discussion today
you know, got people thinking. Yeah, I noticed something like a powerful theme
between you, which is, you, Vitalik, you're arguing for
enabling plurality, I would say,
and you would say almost the same thing, of maximum
variance, so to speak.
And that seems to be, like, the central through line of where we're going, and, like, where
a lot of the other, like, views come down from.
I love that.
So we've been doing this for a while.
It's been amazing.
I think we're going to have to wrap it up.
I want to leave this just on like, you know, this has been for us.
But I would love if you guys continue to have a conversation after this.
You're obviously connected now.
What is something that you would each like to kind of leave for each other?
and for us, obviously, but really for each other,
like walking away from this,
kind of chewing on thinking about as we leave this place.
Unfortunately, yeah, if I actually had one,
I actually would have loved to just give you one of them as a gift.
It's the air quality monitor that does cryptography.
I think it's a super cool device,
but how about I give it to you metaphorically,
and it's an IOU,
and potentially we'll have a much better thing like this,
and maybe even something that can, you know,
like out-compete Fitbit watches and do amazing things for your health and do it all privately.
And you will get it quite soon.
Yeah, we'll keep chatting.
I think I want to artificial-life-pill you, you know. Artificial life on the network,
I think it could... it could definitely drive the cost of intelligence down.
It could be an economy.
You know, we've outsourced manufacturing to China.
It allowed us to go to higher levels,
you know, different types of jobs that are more comfortable, higher leverage.
Maybe a lot of cognitive work could get outsourced to the swarm of AIs.
Eventually, that's going to live on the Dyson swarm and so on.
I think there's a unique opportunity right now.
Crypto is going to be the coupling between AI and humans.
I truly believe that.
How else are you going to build trust between species, right?
And I think we need to start thinking about that really thoughtfully.
So maybe that's, we're going to keep chatting about that.
Awesome.
Incredible.
Thank you guys so much for...
Thank you very much.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X at A16Z and subscribe to our substack at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
As a reminder, the content here is for informational purposes only.
It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
