Bankless - 200 - Vitalik Buterin's Philosophy: d/acc
Episode Date: December 11, 2023✨ DEBRIEF | Ryan & David unpacking the episode: https://www.bankless.com/debrief-vitalik-philosophy ------ This past year there’s been a great society-wide debate: tech-acceleration (e/acc) vs. deceleration. Should we continue the path toward AI or are the robots going to kill us? We’ve been hoping Vitalik would weigh in on this debate ever since our episode with Eliezer Yudkowsky. Now he has, so what’s Vitalik’s probability of AI doom? ----- 🏹 Airdrop Hunter is HERE, join your first HUNT today https://bankless.cc/JoinYourFirstHUNT ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2 🦊METAMASK PORTFOLIO | MANAGE YOUR WEB3 EVERYTHING https://bankless.cc/MetaMask ⚖️ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum 👾GMX | V2 IS NOW LIVE https://bankless.cc/GMX 🔗CELO | CEL2 COMING SOON https://bankless.cc/Celo 🦄UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap ------ 0:00 Intro 3:50 The Technology Conversation 15:30 The Accelerationist View 30:30 The AI Problem 38:30 8 Year Old Child 44:00 AI Doom 51:00 d/acc 1:08:30 Improving the World 1:16:45 e/acc 1:20:30 Crypto Tribes 1:27:00 Direction and Priorities 1:30:50 Closing Optimism ------ Resources: Vitalik's Techno-Optimism: https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html Tweet Thread: https://x.com/VitalikButerin/status/1729251808936362327?s=20 Techno-Optimist Manifesto: https://a16z.com/the-techno-optimist-manifesto/ ------ More content is waiting for you on Bankless.com 🚀 https://bankless.cc/YouTubeInfo
Transcript
Discussion (0)
We are creating financial systems that work without relying on any specific country.
We are creating forms of privacy that work without relying on a central actor to hold everyone's information in custody for them.
We are creating forms of account recovery that don't depend on, you know, Google or Twitter having everyone's master keys.
And that's happening with social recovery wallets and account abstraction with ERC-4337.
we are creating zero knowledge proof technologies that let people prove that they are trustworthy
without revealing any more information about themselves beyond that.
So we're creating all of these really powerful tools that in a lot of cases are substitutes
for more centralized forms of trust.
Welcome to Bankless, where we explore the frontier of internet money and internet finance.
This is Ryan Sean Adams.
I'm here with David Hoffman, and we're here to help you become more bankless.
Vitalik Buterin on the episode today.
He's sharing his philosophy with us.
He calls it d/acc, and he explains what he means by it in this episode.
I think there's really three reasons we wanted to have this conversation on Bankless.
The first is this.
There has been a society-wide debate on what to do with technology, specifically AI technology.
So we've got the tech accelerationists, that's the e/acc community.
We've got the tech de-accelerationists.
That's the EA community.
And the debate is whether we should continue on the...
path forward towards AI the way we've been doing it, or if we should stop because maybe the robots
are going to come kill us. And I know David and I have been hoping to pick Vitalik's brain on this
for quite a while, ever since we had our episode with Eliezer Yudkowsky, who informed us very politely
that we're all going to die. So we asked Vitalik, what's his probability of AI doom?
The second reason for this episode is I think the philosophy that Vitalik lays out is something
everybody in crypto can align on. It's a way to really unite the tribes. It can help us
explain what we're doing here to the world and why it matters. And more important, I think coming
out of 2022, when we seem to have lost our way in crypto, it reaffirms why we're here. It's a core
part of reestablishing our soul and getting to the bottom of things. And the third reason we're
having this conversation, it's Vitalik Buterin. Okay? That's enough. He's always got interesting things to
say. David, why was this episode significant to you? I think even more broadly, outside of crypto,
society at large is having a conversation with itself as to how fast it wants to go into the future.
I think there are some parts of society which are concerned about the vanguard of Silicon Valley and
tech innovation moving faster than what society as a whole can really keep up with.
And then there are other parts of society who are like, guys, you solve problems via technology.
The acceleration and the speed of technology will help others catch up as well.
And this conversation of how fast we want to go collectively as humanity is causing tension in society at large.
And, well, as podcasters, what do we do?
We try and define the landscape, define the contours of the conversation, help people learn the perspectives of other sides, discover what is signal versus noise, discover what is truth.
And so I think when we have clarity on these conversations, we can all be in agreement in the direction that we want to go.
And I think this is really what Vitalik did with his blog and what we were hoping to
do with this podcast is help define the conversation a little bit more so humanity can get on the same
page. And so that's why this episode is significant to me. All right, guys, we're getting right
to the conversation with Vitalik on d/acc and his philosophy. But before we do, we want to thank the sponsors
that made this episode possible, including our number one recommended crypto exchange. That's Kraken.
Go create an account. Kraken knows crypto. Kraken's been in the crypto game for over a decade.
And as one of the largest and most trusted exchanges in the industry, Kraken is on the journey with all of us to see what crypto can be.
Human history is a story of progress. It's part of us, hardwired.
We're designed to seek change everywhere, to improve, to strive.
And if anything can be improved, why not finance?
Crypto is a financial system designed with the modern world in mind.
Instant, permissionless, and 24-7.
It's not perfect, and nothing ever will be perfect.
But crypto is a world-changing technology at a time when the world needs it the most.
That's the Kraken mission to accelerate the global adoption of cryptocurrency
so that you and the rest of the world can achieve financial freedom and inclusion.
Head on over to kraken.com slash bankless to see what crypto can be.
Not investment advice, crypto trading involves risk of loss.
Cryptocurrency services are provided to U.S. and U.S. territory customers by Payward Ventures
Inc., PVI, doing business as Kraken.
MetaMask Portfolio is your one-stop shop to navigate the world of DeFi.
And now bridging seamlessly across networks doesn't have to be so daunting anymore.
With competitive rates and convenient routes, MetaMask Portfolio's bridge feature
lets you easily move your tokens from chain to chain,
using popular layer one and layer two networks.
And all you have to do is select a network you want to bridge from
and where you want your tokens to go.
From there, MetaMask vets and curates the different bridging platforms
to find the most decentralized, accessible, and reliable bridges for you.
To tap into the hottest opportunities in crypto,
you need to be able to plug into a variety of networks,
and nobody makes that easier than MetaMask Portfolio.
Instead of searching endlessly through the world of bridge options,
click the bridge button on your MetaMask extension
or head over to metamask.io slash portfolio to get started.
Arbitrum is the leading Ethereum scaling solution
that is home to hundreds of decentralized applications.
Arbitrum's technology allows you to interact with Ethereum at scale
with low fees and faster transactions.
Arbitrum has the leading DeFi ecosystem,
strong infrastructure options,
flourishing NFTs, and is quickly becoming the Web3 gaming hub.
Explore the ecosystem at portal.arbitrum.io.
Are you looking to permissionlessly launch your own Arbitrum Orbit chain?
Arbitrum Orbit allows anyone to utilize
Arbitrum's secure scaling technology to build your own Orbit chain, giving you access to
interoperable, customizable permissions with dedicated throughput. Whether you are a developer,
an enterprise, or a user, Arbitrum Orbit lets you take your project to new heights. All of these
technologies leverage the security and decentralization of Ethereum. Experience Web3 development
the way it was always meant to be. Secure, fast, cheap, and friction-free. Visit arbitrum.com.
And get your journey started in one of the largest Ethereum communities.
Bankless Nation, I'm extremely excited to introduce you to Vitalik Buterin. You know him as the creator
of Ethereum and Ethereum researcher, but today he comes to us wearing the hat of philosopher,
which is, I think, what we increasingly need, at least in the sector of tech, as we're having
conversations around the world of technology acceleration as we are approaching new frontiers,
both inside of crypto and outside of crypto, that are defining society at large. Vitalik recently
wrote this post, My Techno-Optimism, which has made waves
in the tech space about what to do about this increasing pace of technology. And that is the subject
here on today's episode of Bankless. Vitalik, welcome back. Thank you guys. It's good to be here.
This post that you wrote, Vitalik, which is subtitled, my own current perspective on the recent
debates around techno-optimism, AI risks, and ways to avoid extreme centralization in the 21st
century made the rounds inside of crypto and just immediately outside of crypto as well.
And this has been, I think, a continuation of a larger conversation that much of the world is having at large with its relationship to the globe's technology sector.
So can we set up the conversation and kind of set the table, if we will, because it's happening society-wide.
Two camps are forming.
There's what has recently become known as the accelerationists, the pro-tech, and then the decelerationists, the anti-tech.
I'm not sure if anti is fair.
How would you set the table of this global conversation
as it's being had? I mean, I'm not even sure if two camps is the right way to describe it,
because I think what I honestly see in the world and some of the discussions that have been
happening in the world is, like, people being confronted with these completely new issues
made by completely new people that they are not used to paying attention to with completely new
memes and, you know, weird vocabulary like Shoggoth and
p(doom) and timelines and all of these things, it's almost like an awakening in that for the first
time people are actually thinking because, like, their existing camps don't really tell them
how to think, right? Like, if you think about even the most recent e/acc versus effective altruist
debate, to kind of give very approximated crude labels to the two camps, for example, like, this is
not red versus blue. This is not U.S. versus China. This is not Europe versus Russia. This is not
woke versus anti-woke, right? This is basically a debate that's happening between two groups of people
who even two or three years ago considered themselves to totally be part of the same tribe,
basically the kind of San Francisco-centered, I mean, like tech-forward, AI-leaning gray tribe,
people who went and in many cases continue to go to the same dinner parties or in the same
social circles. And like suddenly there's this issue that has basically pulled them in very
different directions. And like if you're coming at it from the outside, then like,
existing tribal markers don't really tell you much about it, right? Because, like, if you're the type of person who, for example, thinks that tech people are bad and, you know, like, these are rich white male dominated fields that are totally out of touch with the rest of reality, then like, well, guess what? Like, both of these camps have a very large number of people that fit both of those descriptions, right? And both of those camps have, you know,
I think a large number that do not, that also gets underappreciated, right?
Like there's a huge international audience for a lot of these AI-focused things,
because similarly to how within the crypto space, things like, you know, ZK-EVMs and ERC-4337
are kind of resetting a playing field and giving opportunities to people from, you know,
regions that haven't really been historically well represented in Ethereum, I think, for
a lot of people around the world, I do also see AI as that kind of opportunity, right? And then
if you're a left-leaning person who is very skeptical of corporations and who is, like, pro-governments,
to put it crudely, but most skeptical of government when that government is being influenced by
corporations, then like, well, guess what? You know, the e/acc and EA camps are both, you know, doing a lot of
influencing of governments, and there's, like, lots of money on both sides, right? And so,
I think the way that, like, this conflict really doesn't map to these existing tribal markers is,
like, it does create this interesting property, right, and that people actually have to, like,
figure out for the first time, like, well, you know, like, what actually is their own perspective
on the particular issue? And, like, how do they actually think about this totally new thing
that was not even on most people's radars, like, even two years ago, right?
And I think, like, within crypto, it's the same thing, right, in the sense that I feel like
crypto has operated in a bit of this bubble where, like, there hasn't really been too much
dialogue between, like, what's happening in the space and some of the discussions, like,
both technological and political that are happening outside of it, right?
And I feel like there is, to some degree, this kind of extent to which big parts of the space were born in this 2008-era context where, I mean, like, we're talking about, like, the Chancellor being on the brink of the second bailout for banks, as the Bitcoin genesis block says, you know, like literally, as part of the block body, the discussions around like ending the Fed, creating an alternative to central banks and all of these things.
And a lot of the big discussions in 2023 are just totally not related to that at all, right?
It's like, you know, I'm sorry, but like the, like, you know, like the Israel-Gaza situation is not going to be better if those lands ran on sound money, right?
That's, you know, and, you know, like, same with AI, you know, sorry, like sound money is not going to make, you know, like, p(doom) go down, right?
It is possible to kind of overstate this, right?
I mean, I think, like, for example, with the recent election in Argentina, where, like,
I feel so far totally, you know, like, unqualified to give a kind of, like, a grand, you know,
this is good, this is bad perspective on it.
But what I have noticed from the sidelines that fascinates me is that, like,
Milei is actually talking about economics.
And, like, Argentine people do actually care about economics, right?
And there definitely is an extent
to which kind of the U.S. and the rich countries generally have, I think, as has been pointed out,
moved from caring about the economic axis to caring about the cultural axis.
Like, the issue that emotionally arouses people in the U.S. these days, like, it's not, you know,
like pensions and health care and, like, savings, right? Or at least like that's not what the newspapers report about.
But, you know, in places like Argentina, it still is. And there's like a refreshingly grounded-in-reality
aspect to it. So it's, like, important to recognize the ongoing importance of that, but also at the same
time recognize this growing rapid emergence of these conversations that have just like nothing to do
with any of those questions. And there's this big question of like, how does crypto actually
relate to these topics, right? And I think a lot of people who come into the space come into the
space because they have ideals and values and goals and dreams that extend, you know,
like beyond like fairly narrow details of like, this is, you know, like what the structure of the
money is going to look like. Yeah. So I think it's important for the space to try to kind of engage in
some of those other topics as well, right? I think for a lot of those reasons, like I've been
thinking about some of these other technological topics as well. And one of the things that I noticed,
especially this past year in
2023,
is that, like,
I have a lot of beliefs
about like blockchains and cryptocurrency
and ZK-SNARKs.
I also have a lot of beliefs about,
like, the importance of longevity research.
I also have my beliefs about geopolitics.
I also have my beliefs about AI.
And I also have my beliefs about effective altruism.
But like, these sets of beliefs
were not really talking to each other
enough in a lot of cases, right? And like, asking the question of, like, what is your actual take on,
like, how crypto fits into this larger picture of the world? And, like, do the different parts of
that perspective really, like, actually make sense in the context of the other parts of the
perspective, right? One of the questions that we totally should ask, for example, is, like,
if AI is so important, then, like, why not drop everything and start working on it, right? And, like,
I think there are good answers to that question.
But it is, I mean, a question that actually needs to be asked, right?
So, like, I was thinking about a lot of these topics.
And then, of course, about a month and a half ago, Mark Andreessen's Techno Optimist manifesto came out.
And then, of course, an entire spectrum of replies to the Techno Optimist manifesto came out.
And then I started at least thinking about how I would write this kind of piece.
And then, you know, ZuConnect and Devconnect delayed things a bunch.
And then finally, you know, the OpenAI
situation just kind of, like, blasted basically the exact same topic into focus in a lot of
people's minds again. And so I decided, like, gosh, I actually need to, like, get this document out there
and here we are. Yeah, that's what I very much saw in your post, Vitalik, is kind of like it accomplished
this goal of maybe creating a unified philosophy for crypto in the broader context of the societal
conversation. And I want to set that up because, you know, this year is, I think the year that
this societal conversation between this acceleration view, which is called
e/acc, which stands for effective accelerationism for folks that are not familiar, and this more
kind of anti-tech type view, effective altruism, maybe, like, to your point, Vitalik.
Well, I mean, just to kind of insert a 10-second parenthetical, I think.
Yeah, go for it.
Just remember that, like, as little as two years ago, the main criticism of effective altruism
was that these are tech people who believe that, like, quantitative and technological
solutions to stuff are the answer to everything and, you know, ignore the non-technical and
immeasurable side of life, right? And so fast forward two years later and now, like, it's just
interesting to note that, you know, they're basically being criticized from the exact
opposite direction now, right? It is fascinating. And there does seem to be, like, to your point,
this is all the same tribe that has maybe, like in crypto, we call this a fork, maybe a social
fork, right? Where we've got now these accelerationists coming out and saying, whoa, whoa,
we don't subscribe to kind of some of the anti-tech philosophy of
some in the AI safety movement. I would say that bankless listeners, maybe, and David and
myself, were first exposed to this through the AI conversation, the AI safety
debate. So we had an episode back in February of 2023 with Eliezer Yudkowsky that is, like,
what I would say, imprinted on my soul, Vitalik. Okay? So like, it was basically the first time I
was exposed in depth to someone who's very intelligent and had been thinking about a concept
for decades. Now here it was with ChatGPT that AI posed an actual existential
threat to humanity. And so we went on a quick side quest, you know, from our regularly
scheduled crypto channel to explore that a bit. And we uncovered that this is not just a question
that bankless and people in crypto are facing, but it's a societal level question. And it feels like
the rest of society is, you know, the reason David put this into two camps, I realized that
there's a lot of, you know, subtlety and granularity between the two camps. But it's almost like
society's being asked to choose, right? What do you think about technology? Do you want to
put the pedal to the floor and fully accelerate, or do you want to just stop, slow it down, and just
like be cautious? And so I'm wondering, Vitalik, if you could give a quick definition for folks that
are unfamiliar, what is the accelerationist view? I think you have a meme of this that opens your
article, and it says, like, dangers behind, utopia ahead. That's the accelerationist view, you know,
with respect to technology. And the anti-tech view is there's safety behind and dystopia ahead. So we're
journeying into this dangerous frontier, I guess, I would say, is the anti-tech view.
Could you just illustrate the core viewpoints of both camps to ground us in this episode?
Sure.
So the way that I would think about the effective accelerationist perspective is basically, like,
it's all about recognizing the invisible graveyard, right?
The invisible graveyard is a phrase that I think either Alex Tabarrok or someone else came up with
in the context of talking about the harm that the FDA causes in the U.S.
just by delaying the extent to which it approves certain drugs, right?
Basically, that if a life-saving medicine gets delayed even by a month because of regulatory hurdles,
then, like, that's something that can easily kill tens of thousands of people.
And if you do the math, then the amount of people killed by these things potentially goes way higher.
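The "do the math" step is simple back-of-envelope arithmetic; the figures below (a drug preventing 120,000 deaths a year, held up one month) are hypothetical stand-ins, not numbers from the episode:

```python
# Hypothetical "invisible graveyard" arithmetic: a drug that would
# prevent 120,000 deaths per year, held up by one extra month of
# regulatory review.
deaths_prevented_per_year = 120_000
delay_months = 1

deaths_from_delay = deaths_prevented_per_year * delay_months / 12
print(int(deaths_from_delay))  # 10000 deaths attributable to the delay
```

Scale the annual figure or the delay up even modestly and the toll reaches the tens of thousands described above.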
And if you kind of zoom out even a bit more broadly, like in the meme there, right, you have a utopia ahead and then behind you, you have a bear.
And the biggest bear of them all is probably aging, right, which is, you know, a condition that kills, you know, about 60 million, like literally a World War II scale number of people every year. And if technology doesn't massively accelerate, the base case is that literally all of us, including everyone listening to this podcast right now, are going to die, right? And the gains that come from technology are just massive, right?
Like, if you just think about the difference between the kind of life that we have now and the
life that we have a thousand years ago or even 50 years ago, there's just a whole number
of both measurable and immeasurable things that have massively improved.
And these improvements are incredibly large, and these improvements can even overshadow
some of the worst things that happen, and even if you can blame some of those things on
technology, right? So if you look at the chart, for example, right, like, we can understand
what some of those dips are, right? So, like, for example, if you look, there's like a whole
bunch of correlated dips in the 1910s, and obviously that's World War I. In the case where there's
a double dip, the second dip is the Spanish flu. We're looking, by the way, bankless listeners
at a life expectancy chart over the past 120 years or so. Yeah, and it has, I think, about
eight lines for various countries, you know, U.S., Europe, Asia. So there's some dips in the 1910s;
where there's double dips, it's World War I and Spanish flu.
In the 1940s, there's dips, which is obviously World War II.
Then China's got a dip in 1960, which is the Great Leap Forward.
So there's like very visible disasters along here, but if you just zoom out even a bit,
all of these dips ultimately are overshadowed by just like the incredibly large gains from
medical technology, right?
And so growth in technology does a lot.
And even growth in wealth does a lot, for example, right?
Because if you imagine a world where everyone is wealthy, then like that's a world where
if you suddenly have to like leave your home and pack up and go to another country, then like
you're not going to starve.
You're going to like basically walk into a much more stable situation than you would otherwise, right?
And so there's just all of these big positives
that come from technological gains,
and there's like an unrelenting history of thousands of years
of technology repeatedly doing good,
despite lots and lots of people,
you know, screeching and complaining that things could end up going
in the opposite direction, right?
So that's the accelerationist case.
The anti-tech case, I think it's important to separate
the kind of old-school anti-tech case versus the AI Dumer case,
specifically, right? So the old school anti-tech case, it's one that I admittedly am, you know,
on balance, very not sympathetic to. And I mean, like, I've written about this in the article.
But I think if I had to illustrate kind of the really biggest and most important, like the strongest
parts of that case, obviously, yeah, the environment and climate change are a really big
aspect of this, right? So there's this chart of how temperatures are suddenly rising
in a way that's like totally unprecedented in any historical natural situation,
except possibly for like asteroids that like fell many tens of millions of years ago or something
similar, right?
Yep, there's a, there's the graph, right?
It's just over the last century and a half, it's just gone vertical.
There's graphs of a species extinction that are pretty bad.
There's graphs of even, like, populations of particular animals
that are pretty bad.
And then another kind of aspect of this that starts, you know, like, bleeding into
the AI discussion is, like, the possibility of technology getting misused by authoritarian governments.
But I feel like, even, that's not only a risk of super intelligent AI. I mean,
super intelligent AI would make the risk worse, but that even is a risk of, like, present-day
AI and, like, present-day surveillance technology.
Then there is just the fact that, like, easier communication creates greater economies of scale
and that creates greater centralization, which creates opportunities for political conflict
of a scale that totally did not exist before.
So you could try to make that argument.
Though I think at the same time, like Genghis Khan would be a pretty big counter argument
to that, right?
This is, you know, the guy who genocided about as many people as Hitler back in the 13th
century, but then for some reason, it's, like, totally socially acceptable to sing songs about
him today.
But, you know, "Dsching, Dsching, Dschinghis Khan."
Hey, look it up.
Oh, my God.
Yeah.
So, I think on the climate side, like, my counterarguments to that case is basically that, like,
there is a history of lots of specific environmental issues that once they became bad,
we actually did get together and solve them.
And, you know, like, improvements in air quality in cities are a big
one, right? Like, I remember seeing the tail end of this myself. The first time I visited Beijing back in 2014, I just remember how incredibly smoggy it was. And like, that aspect of China, like, improved massively and very visibly over the, you know, like, six years that I went to visit over and over again since then. So, and then, you know, there's like ozone and like some reforestation in some areas and so forth, right? So, like, that would be the counter to the
counter. But then the anti-tech argument that is more compelling to me personally is
this, like, very specific one about super intelligent AI, right? And super intelligent AI to me is
something that one way to think about it is to think about it as being something that's in the
category of technology. So like think about it as being the same kind of thing as smartphones,
the internet, contraception, the printing press, the
wheel, guns, the steam engine.
And these are technologies that in many cases really were socially disruptive, and in many
cases definitely did, you know, harm people who depended on the incumbents.
But at the same time, like if you just look at it from the eye of long-term history and you
realize that, I mean, there's massive good that came out of most of them.
I mean, guns are, I think, more controversial, right?
Because, like, military technology is the one branch of technology where, like, it's
I think much less clear that like improvements are good.
Though, I mean, actually, even there, right, there's an interesting argument that some historians
make that guns are sort of more democratizing than the previous wave of military technology,
which was bows, which, you know, required like 10 years of training to be able to use well.
And so it kind of enabled more centralized forms of government.
But, like, even still, right, like military technology is like the other big
exception in general, right? But aside from military technology, like, on average technology
has been crazy good. And so if you think of AI as being technology, then like your first instinct is
going to be AI is going to be crazy good. And like maybe you would worry about AI military
applications, right? But if you instead think about AI as not creating a tool, but as creating a new
type of mind and creating a new type of mind that is far more intelligent and powerful than the human mind,
then like, this puts us in a totally different category, right? Like, if you think about humans,
right? Humans have been able to take over and utterly dominate the world and even accidentally
genocide all kinds of species of animals, in most cases, without even intending or even realizing
that that's what we're doing. And humans got into this position of power
entirely because of our minds, right?
Like, our minds enabled us to create, you know,
tools, technology, work better together collectively,
and cooperate and do all of these things.
And then now imagine an AI that beats humans
on that exact same metric, right?
By a factor of, you know, like 10,000, right?
Then the question is like, well, what's going to happen to humans?
And the big risk here comes from two arguments, right?
One argument is the difficulty of alignment and the second argument is instrumental convergence.
So the difficulty of alignment is basically the difficulty of like just making a thing that has the same kinds of goals that we have.
And this is like a surprisingly hard problem that we just have no idea how to do.
And like there have been, you know, like plenty of these myths and
legends in history that kind of talk about the alignment problem.
So, like, there's, you know, King Midas who, you know, like, famously got the touch that turned
everything into gold. And then, of course, you know, he ended up dying of starvation.
Then there was, I forget who, but the Greek, you know, like, mythical figure who wished for
immortality, but forgot to wish for eternal youth, right? And so there's, like the problem that if you
don't perfectly specify what you want, then, like, there's lots
of ways for that to be satisfied in a slightly different way.
Do you guys remember that? I think it was a W. W. Jacobs short story called The
Monkey's Paw or something like this, where the character receives a monkey's paw that gives him
three wishes. And so he would wish for things. One of his wishes was he wished for money.
And what ended up happening is the monkey's paw granted the wish, but his son died later that
day in a tragic factory accident. And so he received some proceeds from the workers' compensation
policy. Like, that's how the wishes were fulfilled. It reminds me of that as well.
Yeah. Now, I mean, I think one kind of rebuttal to this that, like, lots of people make, and that I've made, is that if you just interact with, you know, existing AI, like even ChatGPT, just a little bit, like, these are not, you know, hyper-autistic robots that have no idea how to understand context and unexpressed intentions and subtext, right? Like, ChatGPT will totally be able to understand that, you know, if a human says, "I want anything I touch to turn into gold," he, um, you know, has an exception in mind for things like food and water and medicine, right? But, like, this by itself doesn't save us, and it gets a little bit more tricky to sort of explain why, right? But I can try. The best analogy, unfortunately, is kind of
looking at some of the previous generation of AI that we've had.
Like, remember back in the mid-2010s when people were starting to work with deep learning
AIs for the first time and they were starting to get like kind of good at making pretty
pictures?
And there is like a thing that you can do where you can run the model forwards. And if you run the model forwards, you pass in an image and it tells you, like, is this thing a cat or is this thing a dog?
But then you can also run the model backwards, right?
And you can pass in the request: I want the ultimate essence of a dog. I want the thing that
really is going to be a hundred percent
dog. It's going to maximize
the extent to which this classifier is going to
tell me that this thing is definitely a dog and it's not
something else. And then you run it through
and it generates the image that maximizes
that. And it turns
out that what you get is totally not a dog.
What you get is
some insanely crazy
contraption that definitely has
doggy aesthetics.
But it's also got maybe 12 eyes, maybe 48 eyes. Maybe there's like a whole bunch of dogs that have merged
bodies. It just like looks like some totally crazy thing. I'm getting visuals of a kaleidoscope of a
dog. Right. Exactly. Right. And like this is the thing that like maximizes the dog parameter, right?
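The "run the model backwards" procedure Vitalik is describing is usually called activation maximization (or feature visualization): hold the trained network fixed and do gradient ascent on the input until the "dog" score is maximized. Here is a minimal sketch of the idea, using a toy linear classifier in NumPy as a stand-in for a real deep net; the weights, input size, and step count are all invented for illustration:

```python
import numpy as np

# A stand-in "dog classifier": a fixed random weight vector over a 16-"pixel" image.
rng = np.random.default_rng(0)
w = rng.normal(size=16)

def dog_score(x):
    """Sigmoid confidence that input x is a dog (toy linear model)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Activation maximization: gradient ascent on the INPUT, model held fixed.
x = np.zeros(16)                  # start from a blank image
for _ in range(200):
    s = dog_score(x)
    x += 0.5 * s * (1.0 - s) * w  # gradient of the sigmoid score w.r.t. x

# The score is pushed toward 1.0, but x has just drifted ever further along w;
# it is an extreme point of the classifier, not anything like a natural dog image.
print(dog_score(x))
```

Against a real convolutional classifier, the same loop produces exactly the many-eyed "kaleidoscope dog" pictures described above: the input that maximizes the score is a weird extreme of the model's learned proxy, not a typical member of the class.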
And another example is like us humans have sort of hacked evolution in the same way, right? So like if you
think about what evolution is, evolution is like this agent that has a goal, right? And its goal is to, like, maximize reproductive fitness, so, you know, survival times how many children you have, of whatever agents it's operating on in their environments,
right? And in order to fulfill this objective, evolution gave humans a lot of desires, right? Like,
there is the desire to have food, the desire to have delicious food, right? And, like, what is
delicious and what isn't delicious? Well, those are things that were fine-tuned based on, like,
what is nutritious in the natural environment, right?
And then there's a lot of desires associated with reproduction.
There's desires associated with survival and all kinds of things.
But then look at how modern humans have dealt with these desires, right?
And one is we've created a lot of food that's like hyper optimized for deliciousness
that in a lot of cases doesn't do well at all on nutritional value, right?
We have invented at least, like, five different types of technology
that let people have sex without
getting pregnant, right?
We've invented all kinds of things
that satisfy the proxies
that evolution has created in our minds
for evolution's goals,
but that do not actually satisfy
evolution's goals at all, right?
And, you know, the results of this is that, like,
lots of people are eating unhealthy food,
and there's increasingly
a depopulation crisis with lots of countries
having fertility rates that are below one,
right? And so now one thing you might ask is like, well, surely, you know, like, we as humans know that we are not following the goals of Mother Nature. And the answer is, yes, we know very well that we're not following the goals of Mother Nature. But guess what? We are humans. Our goal, like, maximizing reproductive fitness is the goal that Mother Nature had. It is not the goal that we have. And we know that Mother Nature was not able to perfectly sort of copy its goal into us, but we don't care. We have our
goals and we follow what our goals are, right? And so in the case of AI, what might happen is like,
we tell the AI, you know, like, as Lex Fridman would say, to, like, bring more love and peace
into the world. And then the AI would discover that like, okay, here's a bunch of things that
look like love and peace. And these are things that we would all recognize as being love and peace.
But then at some point, it would discover, like, wait, if I create this 47 dimensional squiggly that
looks in this particular way, then like, it's going to be even better at, like, satisfying
its own internal conception of, you know, like love and peace, which is going to be slightly
off because anything that is created by any finite process is going to be slightly off.
And this 47-dimensional squiggly is going to look incredibly lovely and peaceful to the
AI, but it's like nothing that any of us would recognize as being love and peace in any sense,
right?
And then we go and tell the AI, bring more love and peace into the world.
And then the AI just kills all of us and replaces us with 47-dimensional squigglies.
So this is kind of the AI safety case, right?
I mean, this is basically the same as, you know, what Eliezer Yudkowsky told you guys, and the same thing as what a lot of people have said.
So this is like one of my points of concern regarding AI.
But in my post, I also talked about two other points of concern regarding AI, where
one of them is just this question of like, well, even if everything goes well, like, is this actually a world that we would want to live in, right? And it turns out that, like, if you examine the sci-fi worlds that people have tried to come up with that show, you know, humans and bots living in harmony, then, like, either the world is, like, insanely unrealistic and it's just unstable and it's just obviously going to collapse into AIs dominating everything in another one to ten years. Or it's like a world that actually really feels quite deeply unsatisfying from most people's
perspectives today, right?
Like we're basically talking about a world where, you know, we all become pets of the
superbot and a kind of human agency doesn't really play any part in, you know, determining
which way the universe goes from there, right?
And by the way, Vitalik, for those that maybe doubt that a bunch of machines or a bunch of computers could actually wrest control over humanity: when I was reading your article, I was in kind of a serious mood and I was drinking some coffee. I literally spat out my coffee laughing at this turn of phrase you used. You said this: to see why the machines could wrest control over humanity, imagine that you are legally a literal slave of an eight-year-old child.
If you could talk with that child for long enough time, do you think you could convince
the child to sign a piece of paper setting you free? I have not run this experiment, but my instinctive
answer is a strong yes. And so all in all, humans becoming pets seems like an attractor that is
very hard to escape. I was just visualizing in my mind, trying to convince an eight-year-old child
to set me free and what that process would look like. And that's kind of what we're dealing with
when we're dealing with a superintelligence. It's, to them, we would be that eight-year-old child.
It would have probably no trouble convincing us to do whatever it wants, to fulfill whatever
outcome and set of goals that it had. Yeah. And then the other thing to keep in mind is that,
Like to the extent that there is any notion of competition in this world of the future, like, whoever really gives up control of the reins to the AI is going to outperform the people that don't.
Because, you know, like that's what happened in chess.
That's what happens in Go.
That's, you know, just what eventually happens anywhere, right?
And then the third risk that I outlined is basically, you know, centralization and surveillance, right? And this isn't even just a risk of superintelligent AI. It's a risk of, like, basically AI of the type that exists already. And actually, in some cases,
situations that happen already, right? So one of the things that's been happening in Russia for
the past while, actually, yeah, even quite a bit before the recent war started, is that
you'd have protests, right? And unfortunately, you know, the authoritarians discovered this one weird trick, which is: you let the protest happen, and you send the police out, and, you know, like, you do the usual protest-versus-police thing a little bit, but, like, you don't aim to crush it right then and there. But you have the cameras out.
And then you identify who all of the key people in the protests are.
And then at some point later, at 2 a.m., they get a knock on the door.
And, you know, like, rinse and repeat about 100 times.
And five years later, suddenly you have, like, basically, yeah, almost no one left to lead the protests, right?
And meanwhile, every other country is like, oh, this is a normal functioning, you know, society that's expressing its desires, and they are free to express their desires.
Exactly. Yeah, it's, like, much less visible, which makes it kind of much more difficult to coordinate against it, and, you know, it makes it much more difficult to even create outrage against it internationally. Yeah, I mean, it's a big problem. I mean, I think it's
like a big part of the reason why, yeah, like it feels like protests against authoritarian regimes
have been getting less and less effective for the past while.
Like, if you just extrapolate this trend even further, then the risk basically is that there isn't a place to hide anymore, right?
Like, there isn't a place where any kind of credible opposition movement to a government could even start, because as soon as it starts, the surveillance can detect it.
And, I mean, you don't even need, like, physical police at some point. The AI soldiers could just, you know, go and shut it down. And this gets even worse when we think about wars, because, like, the need to get the population on your side has historically, like, actually been a pretty significant brake, at least slowing down people's desires to go to war. But then if your entire army is a bunch of
robots, then, like, you know, the dictator gets drunk at 10 p.m., they see someone being mean to them on Twitter at 11 p.m., and the drones, I mean, like, start flying and raining hellfire on other countries before midnight, right? And so this kind of natural check and balance comes from the fact that, like, ultimately, there are decentralized humans that have to be doing the executing.
And if you try to do something really terrible, then, like, those humans are going to be demoralized.
And they're going to be much less willing to go along with your plans. And even if they do, lots of them
we're going to leak every single detail of your plans to the CIA or to whoever your opposition is,
which is actually yet another thing that, you know, fortunately, yeah, did happen in Russia.
Like, if you have AI armies, all of these checks and balances go away, right?
And so this is my other big concern about AI.
Like, basically is this sort of, you know, the ultimate centralization from which at some point
there might not actually be an escape.
So those three cases are my kind of big, you know, note of caution on artificial superintelligence in particular, and how it's, like, pretty unique and pretty different from all the other technologies that we've dealt with over the past, like, ten millennia.
Okay, Vitalik. So David's going to come and summarize this in a moment. So we're tracking our journey so far through this. And then we want to kind of introduce your philosophy here and what you think the counter is and how that applies to crypto. But I have one follow-up question for you specifically that's been kind of burning in me since the Eliezer Yudkowsky podcast back in February. And that is,
what do you think? Like, so we talked about three different AI risk scenarios. The first is Doom.
Basically, we all die. The second is we become pets. And I'm like, at this stage, that's better than the first one. That's not so bad. And then the third is totalitarianism. But I want to go back to the first,
because that's been giving me like an existential crisis all year. What's your personal take on this?
I know you've got a spectrum in your article on what you called earlier, the probability of doom,
the P-Doom ratio. What's your P-Doom ratio and why? I'm just curious where you weigh in.
Yeah. So the number I gave in my tweet thread is 0.1. So I'm at a 10% chance that superintelligent AI is going to kill us all. And I think the reason for this is, like, I see both sides of the
arguments, right? Like I see the sort of quote, doomer arguments, which basically is essentially
what I've already outlined to you guys. And then I think if I had to give the counter-doomer arguments, I would basically say something like, you know, look at the kinds of AIs that we have now.
Like, those kinds of AIs are not even goal seekers.
They basically are things that, you know, like, put on human costumes and sort of play out
roles of whatever type they sort of pattern match themselves into thinking that they're in
at that particular moment, right?
Like, ChatGPT does not act like something that maximizes any particular objective. And, like, if you tell it to make the world more doggy, it's not going to do anything that looks like, you know, maximizing the essence of dog, right? It's going to, you know, just give you a five-paragraph essay that's, like, a pretty normal and human thing that expresses what it might mean to make the world a more doggy place. And it's the sort of thing that's, like, pretty inoffensive, and it looks really, like, pretty fine, right?
So basically, if you just compare, you know, the specific scenarios that people worrying about AI feared would happen with AI at current levels of capabilities, and then you compare that to, like, the actual thing that AI at current levels of capabilities does, it's always quite different, right?
And one thing I'm happy about is that I feel like the level of harm from deepfakes in particular so far has been much lower than what I think most people expected, and even what I expected at current levels of capabilities, right?
Like, if you explain to someone from back in 2015 what the current level of capabilities of
AI making deepfakes is, like, they're probably going to tell you, like, whoa, like, we can't
trust anything that people say anymore, like, this is going to totally break elections, and it's
going to lead to all kinds of horrible consequences.
And like some bad stuff has happened, but like the reality is much less than that fear, right?
Which is like interesting and surprising.
And like it's, I mean, to some extent it shows the adaptability of mankind.
To some extent it shows that mankind is less evil than some people fear.
And then, of course, the doomer would sort of counter by saying, well, guess what?
Like with super intelligence, neither of those two things even matter because the AI is going to be doing all the work.
But, like, it does still feel true that sort of the way that things keep progressing does sort of go in different directions than what,
you know, like people's existing worst fears have been.
And in ways that sometimes feel like we're going further and further away from the kinds
of, you know, like hyper optimizers that people are afraid of.
So that's the case against.
And, you know, if I had to, like, again, counter the counter, I would say, well, that's LLMs. And it's looking very possible that, like, LLMs basically tap out at some level of capability, and, you know, GPT-4, and, like, a little bit better than GPT-4, is basically what we're going to get.
But then there's going to be some next technology, which, like, could be combining LLMs and Q-learning, or, you know, it could be something else. And look, we don't even know what properties that next level of technology is going to have.
And so it's like a big washout, right?
And so I think, like, there's a big chance that the AI doom problem, it just turns out, was never that big a problem to begin with. There is, like, a big chance that it is a really big problem. And within that chance, there's a really big chance that, with awareness and hard work, we will be able to deal with it and, you know, make sure that we don't actually, yeah, get doomified. Right. So that's where, sort of, I think the balance comes in, right? But, like, it's important to keep in mind that, like, a 10% probability of doom is still a big deal, right? So, like, for example, one analogy for this: 10% is, I think, greater by somewhere between a factor of one and three, I forget, but, like, only a little bit greater than the probability that any of us is going to die from a non-biological cause. Like an accident, a car accident, or something like this.
Right, car accidents, homicide, suicide, like, any nasty thing that's not disease. Like, if you think
about the amount of just like care and effort and thought that you personally put into your
physical safety and the amount of like care and concern that you expect, I mean, like,
governments to put into your physical safety with things like police, then like roughly that
level of care is a reasonable level of care to have about, you know, the possibility that
something really bad is going to happen out of AI, right? And that doesn't mean, like, overturn the entire world to suddenly care about this problem, but it does definitely mean caring about the problem more than we do today.
Celo is the mobile-first, EVM-compatible, carbon-negative blockchain built for the real world. And now something big is happening. Introducing the Celo layer 2. It's a game-changing proposal that's going to bring Celo's rapidly growing ecosystem home to Ethereum. Vitalik has shared his excitement for the Celo layer 2 on the Celo forum. So has Ben Jones from Optimism. But why? The Celo layer 2 will bring huge advantages like a decentralized sequencer, off-chain data availability, and one-block finality. What does all that mean? Rock-solid security, a trustless bridge to Ethereum, and more real-world use cases for Ethereum without compromise. And real-world adoption is happening. Active addresses on Celo have grown over 500% in the last six months. With the Celo layer 2, gas fees will stay low, and you can even pay for gas using ERC-20 tokens. But Celo is a community-governed protocol. This means that Celo needs you to weigh in and make your voice heard. Join the conversation in the Celo forum. Follow @CeloOrg on Twitter and visit celo.org to shape the future of Ethereum.
Introducing GMX, the deepest on-chain futures market to trade Bitcoin, Ethereum, and leading altcoins.
GMX is a permissionless decentralized exchange that offers perpetual futures and spot trading.
Lightning fast trade execution and competitive pricing with the security and self-custody of a decentralized exchange.
GMX is live now with V2, bringing new optimizations to on-chain leverage trading.
And even more than an improved trading experience, GMX will reward you for just participating.
All GMX users can easily set up a referral link, and with $12 million in ARB grants being distributed as incentives, and over $150 billion in trading volume to date, all settled on-chain, GMX is leading the charge in terms of opportunities for DeFi liquidity providers. The future is on-chain, with your wallet, with your trades, and with your money in your own hands. Try it out now at app.gmx.io.
You know Uniswap as one of the largest decentralized protocols, with over $1.7 trillion of trading volume, but Uniswap is becoming so much more. UniswapX is the newest product from Uniswap Labs, which aggregates liquidity across the ecosystem to give you the best DeFi trading experience. The best part? It's gas-free and MEV-protected. The best prices, zero gas, and MEV protection, all rolled into one app. So head over to app.uniswap.org, click the gear icon on the swap page, and make sure that UniswapX is toggled on. And if zero-gas trading on Uniswap wasn't enough for you, the Uniswap app is now available on both iOS and Android. Start swapping seamlessly with products from the most trusted team in DeFi. Visit app.uniswap.org to get started today. So, Vitalik, I want to kick the stool and
throw us all the way back to the start of this conversation. We just went down like this AI rabbit
hole to talk about the potential risks of acceleration, which is one of these camps that has kind of emerged as our society has this conversation. The risks of going forward in time and accelerating our progress with technology. And you've labeled, like,
well, AI is this exceptional case that we need to consider the circumstances of.
Some other examples are like climate change.
There seems to be a correlation with the increased risk of climate change and the quickening of technology.
And this is the decelerationist camp, if you will.
And then the accelerationist camp, which I think you have said that you are more resonant with,
says that, well, as technology advances, we find these occurrences in history, these wars, these increased capabilities to cause harm. But by and large, they are completely drowned out by all the other developments of technology. We've kind of, like, set the stage for these different constructs of thought.
And like all extremes, they're all relatively blunt, if you will, if you just only focus yourself
in one school of thought. It's a blunt tool for the job. And as we progress forward in your article
and in your thought, we start to get a little bit more nuanced. We don't have to be so blunt
in our thought about the direction of society. We can pick and choose components from different
schools of thought and put them all together. We understand that AI has risks. We understand that we have
to solve climate change. We understand that technology presents new risks, yet nonetheless,
technology also helps us navigate those risks all the same. So rather than having to pick an extreme,
pick a tribe, if you will, a school of thought and be shoehorned in there, how would you propose
we think about all of these different things that we've put together here? If we're talking about a
unified idea of thought. How would we have a framework for understanding if we are going to go forward
with technology, what is the correct path, the more optimistic, more precise path that we can
maximize the good? Because really all of these conversations are all about how do we maximize
human welfare and well-being at the end of the day. How would you proceed in this conversation from here?
Yeah. So I think this is where the idea of d/acc that I came up with in the post really comes in, right? So "acc" is obviously acceleration, and the "d" is intentionally standing primarily for defense, but also for decentralization and democratization. And so the idea of this philosophy is to basically, you know, look at the offense-versus-defense balance of the world, right? So basically look at, you know, how easy is it to do things that harm or go against the goals of other people, versus how easy it is to protect yourself, or your community, against that.
And think of that as something that is and has always been shaped by technology,
but also something that can be shaped by future choices in technology.
And to really focus on building defensive technology,
and especially, and I think this is the angle that really naturally appeals to people in Ethereum
and people listening to this, defensive technologies that work by improving defense in the abstract, without kind of coming with a built-in enforcer that decides what is good and what is bad across the entire world or across an entire ecosystem, right?
Would you say, to phrase it differently, defensiveness as a platform, or just a technology platform that allows defensive technologies to emerge?
The word platform is interesting, right?
because I feel like this is, I mean, like, one of those sort of sometimes VC buzzwords that means a lot of different things.
The part of platform that appeals to me is, like, this idea that a lot of this is going to involve creating common infrastructure: both defensive infrastructure, and even common infrastructure that enables building many kinds of defensive infrastructure.
The part of platforms that cautions me against embracing that word is that a lot of things in the modern world that call themselves platforms are things that do contain this centralized actor that controls the thing and that plays this role of, you know, deciding who's good and who's bad. Right. Like, we talk about Facebook being a platform, you know, OpenAI being a platform, Twitter being a platform.
And, like, all of these things have centralized actors that run and control them. They have centralized forms of moderation. They have centralization on
all kinds of levels. And this is one of those things that creates a lot of problems, right? And so
there's this book that was written recently that talks about this concept of weaponized interdependence,
basically this idea that the type of technology that we've been building for the past 20 years (and when I say we, like, I do mean, you know, the centralized world; this is the place where the decentralized world gets to pat itself on the back and say, like, yeah, no, we're actually better than that) has been network technology, and it's network technology that creates centralized choke points where the creator of the technology has ongoing power over the users, right? Like, if you think about just going back to the year
1970, and let's say, pick, you know, like a random country that has powerful technology that we don't
trust. Actually, let's just like not say anything bad about anyone in the physical world. Let's say you have like Mordor, right? Like let's say yeah, you know, like you have literal Mordor and it just like pops up in the middle of the Atlantic and it turns out that that's what Atlantis was the whole time and it's the technological superpower, right? And imagine you're in the 1970s and you're buying cars and forks and knives and all kinds of things that are made in Mordor. And we ask a thought experiment of like, what is the worst that the Mordorians can do?
to you. And I think probably the worst that they could do is, one, they could do false advertising.
They could create things that look like they satisfy certain properties, but actually, like,
way underperform on durability or on safety, for example, right? And, like, that is bad. Probably
the most evil thing that they could do is, like, what we call the razor-and-blades model, right? Like, basically sell you devices, but then those devices end up depending on these sort of attachment parts that have to constantly get renewed. And it turns out that Mordor is the only vendor of those. So, like, that's the closest that Mordor could get to, like, really getting power over you just by being a vendor, right? Otherwise, it's like, even if you're, like, very, you know,
like, anti-Mordor, and you have a, you know, "Sauron must die" poster on your wall, and, you know, you go around wearing "Free Gondor" t-shirts, like, Mordor can't really do much to hurt you, right?
But then fast forward to 2020, right?
In 2020, we have networks technology.
And if Mordor builds smartphones and you'll use a smartphone from Mordor,
well, the smartphone can spy on you.
If you use Internet platforms from Mordor, those platforms can censor some political viewpoints
and promote other political viewpoints, right?
And it's going to tell you that, like, hey, yeah, you know, this "Free Gondor" thing is, like, actually a bunch of terrorists, and nobody's allowed to support them anymore.
It can affect, you know, the domestic
politics of other countries.
It can, at any particular moment, just, like, flip a switch and take away the technology from any subset of its users, right?
And so the amount of power that you
have over users by being the producer,
at least if you're building this kind of
centralized network technology, has just gone
way up now compared to what it was in
1970, right? That's the aspect of platforms that actually, yeah, it's one of the things that this is even reacting to, right? It's even one of the things that the, I think, crypto space is really reacting to, right? And so the goal here is to build defensive technologies that are not like that, right? Defensive technologies that do not assume that, like, they are being built in America and they are going to be good because everyone in the world agrees that America is good, right? Because, you know, unfortunately, this kind of consensus does not exist.
in the world, right? And, like, what we want to do is we want to build technologies where people
can trust them, even if they have different opinions on what's good and what's bad. And these are
technologies of which there already exist a lot of really interesting examples, right? So I, yeah, split defensive technology into four different parts, right? Where, like, the first split is
the split between the world of bits and the world of atoms, right? And in the world of atoms, you have defense against big things and defense against small things. And defense against
small things is, of course, biodefense. And then in the world of bits, we have what I call
cyber defense, which is like defense against things where if you look at them hard enough,
it's obvious that they're attackers. And then what I call info defense, and this is like a very
specific distinction that I think other people haven't quite made in the same way, that basically
is about defending against things where there is much less consensus about who the attackers are, right?
And the big example here is what we call misinformation, right?
Like, people don't want to be misinformed.
People want to know the truth and not know false things.
But a lot of the sort of, quote, anti-misinformation ideas that have been proposed by, you know, the mainstream world, or what we might call the centralized world, they all involve there being a particular actor who understands what's right and what's wrong and basically forces that perspective
across an entire ecosystem, right? And so the question is like, well, can we build tools that
actually avoid having that central point of like deciding what's good and what's bad for everyone?
And so in the case of like the world of atoms, this is kind of somewhat easier. So like for macro,
for example, I talk about building resilient physical infrastructure.
So even, like, the fact that we have solar panels and the fact that we have such amazing batteries now is, like, amazingly good, right? And if every household had those kinds of things, then, like, the amount of disruption that would happen to people's lives, even as a result of, you know, cyber war or even regular war, would already be significantly lower, right?
if we had much more distributed agriculture, then, like, that would improve things even more, right?
So there's things that are just, like, obviously defensive without having to, like, come with an opinion attached of, like, who is the one that you're trusting to do the defending for you.
And then in the biospace, like, there's vaccines, there's other kinds of prophylactics, there's things that boost your immune system.
There's, like, I basically talk about how there's, like, this entire set of things that we can do, that we totally are not putting enough resources into right now, that could totally create a much more airborne-pandemic-resistant world, where we have much less COVID, much less long COVID, and even much less, you know, colds and flus.
And where lots of diseases would basically stop before they even start, because their R0 would end up being less than one in this kind of world.
But we just need more funding and more effort to actually make this happen.
And then in the world of bits, this is where crypto stuff really starts coming in somewhat, right?
Like, one of the things that I began this whole episode with is, like, asking the question, well, you know, crypto needs to also think about some of these issues that people are really thinking a lot about in 2023, and what is the way that crypto plays into some of those concerns.
And like here it is, right?
Like, basically, yeah, we want to create a world that has much more digital hardness baked into it by default, and where digital attacks become much harder and digital defense becomes much easier, right? And what's interesting about, like, the cryptocurrency and blockchain space is that it's great at doing that without relying on a single centralized party, right? So, you know, we are creating financial systems that
work without relying on any specific
country. We are creating
forms of privacy
that work without relying
on a central actor
to hold everyone's information
in custody for them. We are
creating forms
of account recovery that
don't depend on, you know,
Google or Twitter having everyone's master keys.
And that's happening with
social recovery wallets and
account abstraction in ERC-4337.
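The social recovery wallets mentioned here come down to an M-of-N guardian scheme: one key signs day to day, and if it is lost, a quorum of guardians can vouch for a replacement key. Below is a toy Python sketch of that logic only (it is not ERC-4337 or any real wallet contract, and all names and the `SocialRecoveryWallet` class are hypothetical):

```python
# Toy model of M-of-N social recovery (illustrative sketch, not a real wallet).

class SocialRecoveryWallet:
    def __init__(self, owner, guardians, threshold):
        if threshold > len(guardians):
            raise ValueError("threshold cannot exceed number of guardians")
        self.owner = owner               # the key used for day-to-day signing
        self.guardians = set(guardians)  # e.g. keys held by friends or devices
        self.threshold = threshold       # approvals required to rotate the key
        self._approvals = {}             # proposed new owner -> approving guardians

    def sign(self, signer, tx):
        """Normal operation: only the current owner key can authorize a transaction."""
        return signer == self.owner

    def approve_recovery(self, guardian, new_owner):
        """A guardian vouches for a replacement key after the owner key is lost."""
        if guardian not in self.guardians:
            raise PermissionError("not a guardian")
        self._approvals.setdefault(new_owner, set()).add(guardian)
        # Once a quorum of guardians agrees, the wallet rotates to the new key;
        # no single guardian and no central service holds a master key.
        if len(self._approvals[new_owner]) >= self.threshold:
            self.owner = new_owner
            self._approvals.clear()

wallet = SocialRecoveryWallet("alice-key", ["bob", "carol", "dave"], threshold=2)
wallet.approve_recovery("bob", "alice-new-key")    # one approval: not enough yet
wallet.approve_recovery("carol", "alice-new-key")  # quorum reached: key rotates
assert wallet.sign("alice-new-key", tx=None)
```

The design point matches the passage: account recovery works without Google or Twitter holding everyone's master keys, because only a quorum of guardians, never any single party, can rotate the signing key.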
We are
creating zero-knowledge proof technologies that let people prove that they are trustworthy
without revealing any more information about themselves beyond that. So we're creating all of
these really powerful tools that in a lot of cases are substitutes for more centralized
forms of trust. And one of the arguments that I make in that section is basically that, like,
one of the reasons why the internet has become more centralized and a less free place over the
last 15 years is basically because, like, there are threats, and the easiest and laziest responses to threats that you could implement are responses that involve centralization, right? Like, require everyone to have a Google account to sign in, and, like, that's your anti-sybil mechanism, right?
And the question is, like, well, how can we actually bring privacy back?
How can we actually bring, you know, the ability for anons to participate in the internet
back, how can we actually let people do all of the things that they need to do without creating
these mechanisms where, like, if you're in one of the, you know, quote, good countries, then you're
trusted. But if you're in, you know, like, one of the untrusted countries, then, like, you're
screwed, right? And, like, these are things that actually happen, right? Like, I mean, I love community
notes, for example, and I talked about community notes very positively. But I remember there was this
one thing about it that
at least when I checked
a couple of months ago, in order
to join community notes, you needed
to have a phone number from a quote
trusted carrier. And I remember
seeing a tweet from someone in India, basically
saying, like, hey guys, you just
made one of the major carriers in India that serves
hundreds of millions of people untrusted. And like,
there's this big population that's like 5%
of the world that's like locked out
of being a community notes participant.
And, like,
you can see how those kinds of problems just naturally come out of this centralized perspective on trust.
And so if we can create like better and decentralized alternatives, then like this ends up really solving that kind of problem as well.
Right.
So that's cyber defense, right?
Basically, all of the stuff that we've been working on in terms of, you know, creating more decentralized and more robust financial systems, in terms of creating these zero-knowledge proof systems that let us, you know, prove that we're good guys without revealing any other information, that let us prove computations, like, all of these things. They make a much more defense-favoring world. And so it's amazing that
our space has been accelerating these technologies so much. Right. And so that's kind of the core
of the, you know, the way in which crypto fits into the d/acc vision. I'm getting a notion, Vitalik, of Daniel Schmachtenberger and his metacrisis concepts, where we have the increasing
capabilities, increasing capacities of technology to do stuff good or bad, doesn't really matter,
just stuff generally, neutrally. And some of that stuff sometimes ends up as like bad outcomes
or just problems that need to be solved. And the concern here is that when technology introduces
new problems to society, that society just comes up with centralized solutions to that problem, or corporations and entrepreneurs can just move quickly and solve problems before humanity can come
together to coordinate on a mutually assured platform, mutually assured defensive technology to answer
this. And I think what I'm hearing from you is that, well, certain elements of cryptography,
coordination via Ethereum allows for a solution space to emerge that isn't merely just, you know,
some large company slapping that patch onto society at large by saying, hey, here's our solution.
It seems to be like what you are illustrating here is like there is a middle ground between the chaotic production of high capacity technology and just like centralized companies solving that problem space.
Is that a fair illustration?
Yeah.
I mean, I think, like, the really important piece of this is basically creating these technologies that kind of improve the baseline defensiveness of the world, while at the same time allowing the world to remain, and, I mean, it could be even more of, a pluralistic place, right?
So avoiding the usual trap where, you know, you basically have a danger all the way up until one group just takes over everything and imposes its will on everyone else.
So Vitalik, you know, what you're proposing here is maybe a philosophical framework for where crypto fits, right?
And the reason I really like this is because it seems to be like kind of a big tent.
And it's something that I personally resonate with.
So you're basically saying you don't have to pick the effective accelerationists', you know, the techno-utopians' version of the world. You don't have to pick the technophobes' vision of the world, and kind of the Luddite doomerism picture either. This is a third way. And you're calling this defensive, or decentralized; the D can stand for all sorts of different things, or democracy, or differential accelerationism. So this is d/acc, basically. And crypto fits under the defensive technology, in kind of the cyber type of use case. And it feels very much like what you're advocating for is technologies that
increase and enhance human freedom. And so this can also be a bulwark against maybe your
AI risk scenario of probability three that, you know, the AI brings about totalitarian technologies
and now we have this defense, you know, against it. The other thing I would say is it seems like
it's a very broad tent. It's like who can't get behind some good old-fashioned defense, right? We're not
talking about something that could destroy the world. We're just talking about regular individuals and
societies being able to defend against something that can destroy the world. And I want you maybe
to talk about this as a philosophical framework. Like obviously people in crypto are hearing this. I'm sure
they resonate. Are you telling me that there's a, you know, a way to express my beliefs about
crypto and this defensive and freedom-enhancing and decentralized technology? You call it d/acc? Like,
sign me up. There's also, I think, some other camps that could listen to this and be interested.
So you've got this section in your essay asking, is d/acc compatible with your existing philosophy?
If you're an effective altruist, this is a rebranding of the idea.
If you're a libertarian, there's something here for you.
If you're a pluralist, if you're a public health advocate, I'm wondering if you could talk to the specific camps here and the value that they might find in subscribing to the d/acc belief.
Of course, I know you're pushing it.
You're not trying to necessarily convert people.
But in order to explain it, maybe, can you talk about the wins for these various camps?
Sure. I mean, any specific camp you want me to start with? I would love you to start with the libertarians, actually, because I think we have more than a few listening, maybe.
Sure. I mean, I think the best way to think about this is it's a pathway to, like, basically preserve liberty going into a much more technologically advanced 21st century, right? And I think the challenge that d/acc is looking at is basically that, like, there are lots of technologies, including technologies that are being developed by governments, or by corporations that are, you know, increasingly working with governments,
right? Like, there was that recent, you know, a16z post that was talking pretty enthusiastically about, you know, like, American dynamism in defense, which, of course, you know, can be in military tech. And basically looking at, like, how do we create a world that through all of these changes
and through all of these pressures
doesn't just kind of maneuver itself into being this incredibly centralized place
where you've basically got these probably somewhere between one to four
of these sort of big super states worldwide
that are in full control of their tech ecosystems
and like regular people basically have no option except for being stuck inside of one of these
with no other real options for getting out of that equilibrium, right?
And so we can talk about like the possibility of that, you know,
we'll have much more offensive AI in the hands of governments.
We can also just look at existing trends in how, like, the internet is not going the way
that a lot of us hoped, right?
Like the concept of internet anonymity, for example, which was, like, a big hope of people, I think, like, 10 or 20 years ago. But then the internet is obviously becoming an increasingly difficult place to actually be anonymous. And a lot of the reason why is definitely just all of these security issues, and people just kind of naturally grasping for the centralized solutions to those problems, because they're the easy ones.
And d/acc basically tries to ask the question of, like, well, there are threats, and you can go after a threat either by hunting down all the wolves or by putting armor on the sheep, right?
And putting armor on the sheep is like philosophically much better if you can do that, right?
Because the problem with hunting down all the wolves is like, well, the wolves are some of us.
And like we have to agree on who the wolves are.
And like there is a risk that, you know, the government's going to decide one day that you're one of the wolves, right?
And also the wolves don't want to be hunted.
Exactly.
And if we instead say, like, let's make the world a more defense favoring place by default, then, like, that is something that is much harder to, you know, like, twist into a narrative for, like, why, yeah, like, governments should just, like, go after all kinds of people that they don't like, right?
Okay.
So, d/acc has some wins for libertarians here. How about the solarpunks? How about the Kevin Owocki regen-type community, who are very oriented
on collective action and human coordination.
Are there wins in d/acc for that community?
Absolutely.
I mean, I think Solar Punk, again, is, I think, a school that values human flourishing.
It values cooperation.
It also values decentralization as well.
I think, you know, ultimately there is, you know, a punk in Solar Punk.
It's not, you know, like solar monolith, right?
And I think a lot of people in that camp are concerned about the resilience of the world going forward,
our ability to survive different kinds of risks and are probably very cognizant of the facts that, like,
all kinds of centralized actors, including both corporations and governments, can be a big problem in making a lot of those risks larger.
And what d/acc basically says is, it says, like, here is a set of tools that we can use to just cut down on a lot of those risks, just across the board, and make the world one that is much more friendly to human flourishing, without having to construct any of those kinds of monoliths.
So like if you think about the idea of like let's say the bio defense side of this, right, we can basically make a
world that is much more protected against diseases, natural and artificial pandemics, all kinds of
things natively, just by having cleaner air. It's a much more natural solution, and it's a solution to
that problem that really avoids some of the downsides of things like lockdowns that we've seen,
which can be justified if there's enough of a health risk in a particular situation, but which also are
just kind of massive, I mean, like, forced changes in the way that people just, like, live their
regular lives with their families and, I mean, look, their regular relationships and their work.
Like, it's an approach that allows us to be in harmony with each other. I would also even say,
in harmony with nature, because I think defense includes protecting the environment, absolutely.
it's an approach that leverages local communities instead of trying to put power into these kind of big super states that decide what is good and what is bad on behalf of like the entire world or on behalf of much larger groups of people.
It's a world that really empowers the local coordination much more.
And so I think there are a lot of technologies within the d/acc umbrella, especially if you look at the kind of info defense category that we didn't go too much into, that really talk about improving social technology that can make society both more defended against attacks and much more of the kind of society that has the kinds of relationships that solarpunks would want us to have with each other.
The D can also stand for Democratic.
It can, yeah, we didn't have a chance to delve into that. But, okay, so we got the libertarians. We got some wins in d/acc for libertarians. We got some wins for the solarpunks.
Let's talk about the original group that we started this entire episode with, which is,
on the one side is someone like Marc Andreessen, who is an e/acc. He's effective accelerationism, full throttle, pedal to the metal. Let's just do technology. And the other side is, you know, somebody who's maybe in the EA community, the effective altruist community. So can you get Marc Andreessen on board with d/acc? And can you get Eliezer Yudkowsky on board with d/acc?
Could they both agree about this one, you know, like, narrow subset of technology accelerationism, do you think?
I mean, the post got retweeted by, you know, Marc Andreessen and by AI Notkilleveryoneism Memes.
And so, you know, it's some success.
Yeah, yeah, you know, it's pretty successful already.
I mean, I think the really big piece of it here is that I think for the EAC side, the thing that it brings that,
a lot of the previous philosophies don't bring is, I mean, one is there's just
optimism about technology in general, but then there also is an alternate path forward, right?
So the message is not just pause.
The message is, like, we proceed and here are some alternative routes for how to proceed
differently.
And so if you are a builder, then, like, the perspective does not kind of frame you as being
an incorrigible enemy, right?
Like, you can continue being a builder, and there's plenty of amazing roles for builders to do really great things within the d/acc context. And the final stage of d/acc being successful, like, if we imagine going out to the year 3000, really does look like a, you know, post-singularity, like, Kardashev Type II, super advanced technological society of exactly the type that e/acc and transhumanist people have been dreaming about.
And for people who are in the AI safety camp, I mean, the concept of differential technology
development is something that a lot of effective altruists have actually already been
talking about, right?
So I included a link to one of those posts.
But I think the thing that it adds is this kind of emphasis on a more democratic political
approach. And like, this probably is one of the big areas that, you know, like, effective altruists
do get criticized for, right? And, like, sometimes the criticism is, you know, like, unfounded because,
like, if you put, you know, the governments in charge of, like, distributing public health funding, then, like, realistically, it's going to be rich countries' governments, and, like, national governments have a huge track record of not even caring about what's going on in Africa, whereas effective altruists actually already have put a lot of money in, right? But at the same time, once effective altruism starts going away from putting money into obviously good (even if we might disagree on how good) things, and into kind of manipulating, like, big political objects, then, like, you start really needing to care much more about legitimacy. And to me, I feel like both of the big effective altruism
related, you know, like, fails, if you can call them that, of the last two years, right, where one is the OpenAI situation and the other is FTX. I mean, to me, they both have to do with underrating legitimacy, right? Like, SBF was clearly, yeah, I mean, he had all kinds of, you know, massive problems, but one of them is definitely that he just underrated the extent to which he could become a massively negative-value actor just by, like, delegitimizing the ideas that he, yeah, deeply cared about, right?
And then on the OpenAI side, like, basically, you know, what we saw was a seemingly earnest and well-intentioned effort to kind of create a kind of clamp on the OpenAI effort, that could try to reduce its potential to become super harmful by creating this board that could push things in the other direction. But the problem is, like, it tried to do all of this through a completely undemocratic and unaccountable board of five people that saw no need to even try to explain its actions to the wider public, right?
And then what happened was, like, well, it fired Sam Altman, and then basically within three days,
well, it was, like, in some ways, like, a pretty unprecedented political fail,
because what happened was the employees of the company who are probably, you know, like,
capitalist, libertarian-leaning, just like tech software types in a lot of cases,
formed an impromptu union
to side with a billionaire CEO against the board.
Right?
Like, that's like a pretty big fail
if you think about it that way, right?
Like, congrats, you got software engineers to unionize
except they're standing behind the billionaire.
Right.
And I think the thing that, like,
d/acc ideas can really bring here
is basically bringing back some of these concerns
about legitimacy and, like, understanding
that you're not just spending money points, but you're also spending social capital points,
and you really need to take that seriously. And like bringing that in a way that's not just
sort of an adjunct, but that is a really core part of the philosophy, right? Like, it's a core
part of the d/acc philosophy, that we are trying to create a world that is more defense-favoring from anyone's perspective, regardless of whether or not you agree with any specific actor that would enforce its own idea of who the wolves are and who are not, right?
So, Vitalik, the last group to ask you about here, on kind of compatible philosophies, is one that's near and dear to our hearts: how about the crypto tribes? So we have Bitcoiners, we have Ethereum people, we have people who are into Solana and Cosmos, and there's a lot of tribalism. Do you think, with something like d/acc, we can all stack hands and say, hey, these are a common set of core values, defensive decentralization technology, that we all agree on? Yeah, we have our differences with respect to implementation, but can we all unite behind something like d/acc? Is it that wide of a tent, to bring the crypto tribes together?
I think it absolutely could be. And I think this is one of those places where I think it's good to
give a positive shout out to some of the positive aspects of the Bitcoin community, which is that
like there definitely is a strong sub-community in there that cares about non-blockchain decentralization
tech, right? Like, there's Bitcoiners who really support things like Nostr. There's Bitcoiners who support Tor and, you know, like, things like internet freedom
tech. There's Bitcoiners who have supported more secure operating systems. And then there's
Ethereum people who have also supported, you know, like all kinds of things in each of those
categories as well, right? And so I think the idea of viewing the blockchain world as being
one part of this somewhat larger thing, which is a decentralization favoring vision of cybersecurity,
and then seeing that itself as being one picture in a broader vision of decentralization-friendly,
like pushing the offense defense balance strongly toward defense is something that Ethereum people,
and Bitcoin people as well can absolutely get behind.
But Vitalik, I'd like to throw in a different candidate for what the D means in d/acc. Mine might be directional acceleration.
I thought you were going to say David.
I was going to say that too.
That can be a different topic with the Dave DAO.
So to me, there's like the tribal
debates between decelerationism
and accelerationism or effective altruism
and accelerationism.
And when there is these tribal debates,
usually the weaknesses inside of
one tribe never really get addressed by
that tribe because they only ever really
argue in relation to a different tribe. The way I hear this blog post and hear you speaking about
is like, hey, we're going to move forward in scientific progress. We're going to have technology
that has higher capacities. And now it's really just a matter of picking our direction,
our priorities, and where we want to go and what technologies we want to prioritize first.
Because I think generally most people accept that technology has helped the world over the grand
arc of time. And now it's really more about choosing which direction
we go in, rather than blindly saying, yes, it's forward. It's more about saying, yes, forward,
and over here in this particular direction. How would you feel about directional accelerationism?
Yeah. I mean, I think with the caveat that, like, it's totally possible to create a directional acceleration story that I totally disagree with. But, you know, as long as we understand...
We still have to debate which direction.
Exactly. As long as we understand which direction, I mean, then yes, absolutely.
Vitalik, as we maybe conclude this episode, you've given us a fantastic tour of this whole debate,
the societal debate in this context in crypto, and a great definition of d/acc, which is, I think,
just a fantastic philosophy that I think bankless listeners will probably take some time to mull over.
But your article concludes with this.
It concludes with some optimism for human potential here.
You say, human beings are deeply good.
And I got to confess, like, in my darkest moments, of course, I sometimes doubt whether that is actually true. And if you look at, you know, even crypto in 2022, I feel like we all came
out of that collectively as an industry pretty beat up, pretty doubtful that human beings are
deeply good. But you say this, I love technology because technology expands human potential.
We are the brightest star. 10,000 years ago, we could build some hand tools, change which
plants grow on a small patch of land, and build basic houses. Today, we can build 800-meter tall
towers, store the entirety of recorded human knowledge in a device that we can hold in our hands,
communicate instantly across the globe, double our lifespan, and live happy and fulfilling
lives without fear of our best friends regularly dropping dead of disease. Zooming out, we have come
quite a ways, haven't we? What grounds your belief that human beings are deeply good, Vitalik?
I mean, I think just, like, what other thing even remotely compares to us, right? That is the question to ask, right? You know, the universe is a very lifeless and unforgiving place,
right, where we've had, you know, like first seven to nine billion years of just stars and planets
crashing into each other and randomly creating supernovas that would just completely wipe away
everything within light years without thinking about it. And then we had four billion years of
life, but life that was
very nasty, very brutish,
very short, and
that basically involved
militant predators constantly
running around and
eating prey and everyone
being on the brink of
dropping dead of disease and
starvation. And, you know,
there is not a single
example of a cat
that modifies its
eating behavior because of a principled
stand that killing
mice is wrong, right? Like, that is just not a thing that happens. Whereas with humans, there are
plenty of humans that have written entire screeds on why this is the case and that have made,
like, huge personal sacrifices to, you know, protect the people or animals or plants that they
care about. And I think to the extent that this happens, it's incredibly amazing and incredibly
beautiful. And I think if humanity continues on a positive trajectory, then the amount of good that we can do
just multiplies even further exponentially from there, right? In the 21st century, I think there is a
big chance that we're finally going to turn the corner on factory farmed animals. And probably the
biggest moral catastrophe that you still can blame humans for is something that we will actually
end up moving beyond.
And then, you know, one billion years from now, the sun is scheduled to get so bright
that life on Earth is not going to be possible anymore, right?
And, you know, does the sun think about the moral consequences of this act that it's going
to make?
Well, no, it does not, right?
But humans, well, what can we do?
Well, you know, we can sprinkle, you know, like, calcium carbonate or sulfur into the atmosphere and compensate and reduce the amount of light that reaches the surface.
We can build giant mirrors in space to reflect the light.
We can go and terraform Mars.
We can do all kinds of things.
And so if the beauty of earthly life is still going to shine two billion years from now,
it will be because of us, right?
And so I think we carry the torch of this just enormous potential
that is unparalleled in any other thing.
in the universe that we currently have evidence of the existence of. And it's our job to do a good
job of carrying that torch and carrying the torch forward.
Beautiful. What a fantastic way to end it. I think that's what the d/acc movement, it sounds like, is all about. And I can say Bankless is certainly
part of that movement. And I'm hopeful with crypto, we can help carry that torch further. So I'll
leave maybe bankless listeners with this line from your article. We are the brightest star. There's a lot of
good that can come from ongoing human progress into the stars and beyond, but there are big forks
in the road, and we need to choose carefully. Accelerate, but accelerate carefully and well.
Vitalik Buterin, thank you so much for joining us today. Thank you so much for having me.
Some action items for you, Bankless Nation, a link to the tweet thread that we discussed today.
My Techno-Optimism, that's Vitalik's article; we'll include a link to that and also the article itself. Risks and disclaimers: of course, crypto is risky. So is philosophy. So is
technology. So many of the things we talked about.
Is accelerationism? I think so. You could definitely
lose what you put in, but we are headed west.
This is the frontier. It's not for everyone.
But we're glad you're with us on the bankless journey.
Thanks a lot.
