Bankless - 159 - We’re All Gonna Die with Eliezer Yudkowsky
Episode Date: February 20, 2023

Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.

✨ DEBRIEF | Unpacking the episode: https://shows.banklesshq.com/p/debrief-eliezer
✨ COLLECTIBLES | Collect this episode: https://collectibles.bankless.com/mint

We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there's anything we can do to survive. This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity. Be warned before diving into this episode, dear listener. Once you dive in, there's no going back.

📣 MetaMask Learn | Learn Web3 with the Leading Web3 Wallet https://bankless.cc/

🚀 JOIN BANKLESS PREMIUM: https://newsletter.banklesshq.com/subscribe

BANKLESS SPONSOR TOOLS:
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://bankless.cc/kraken
🦄 UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
👻 PHANTOM | #1 SOLANA WALLET https://bankless.cc/phantom-waitlist

Topics Covered
0:00 Intro
10:00 ChatGPT
16:30 AGI
21:00 More Efficient than You
24:45 Modeling Intelligence
32:50 AI Alignment
36:55 Benevolent AI
46:00 AI Goals
49:10 Consensus
55:45 God Mode and Aliens
1:03:15 Good Outcomes
1:08:00 Ryan's Childhood Questions
1:18:00 Orders of Magnitude
1:23:15 Trying to Resist
1:30:45 Miri and Education
1:34:00 How Long Do We Have?
1:38:15 Bearish Hope
1:43:50 The End Goal

Resources:
Eliezer Yudkowsky: https://twitter.com/ESYudkowsky
MIRI: https://intelligence.org/
Reply to Francois Chollet: https://intelligence.org/2017/12/06/chollet/
Grabby Aliens: https://grabbyaliens.com/

Not financial or tax advice.
This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
I think that we are hearing the last winds start to blow, the fabric of reality start to fray.
This thing alone cannot end the world, but I think that probably some of the vast quantities of money being blindly and helplessly piled into here are going to end up actually accomplishing something.
Welcome to Bankless, where we explore the frontier of internet money and internet finance.
This is how to get started, how to get better, how to front run the opportunity.
This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless.
Okay, guys, we wanted to do an episode on AI at bankless.
Got what we asked for.
But I feel like David, we accidentally waded into the deep end of the pool here.
And I think before we get into this episode, it probably warrants a few comments.
I'm going to say a few things. I'd like to hear from you too.
But one thing I want to tell the listeners, don't listen to this episode if you're not ready for an existential crisis.
Okay, like I'm kind of serious about this. I'm leaving this episode shaken. And I don't say that lightly. In fact, David, I think you and I will have some things to discuss in the debrief as far as how this impacted you. But this was an impactful one. It sort of hit me during the recording. And I didn't know fully how to react. I honestly am coming out of this episode wanting to refute some of the claims made in this episode by our guest, Eliezer Yudkowsky, who makes the claim that,
that humanity is on the cusp of developing an AI
that's going to destroy us
and that there's really not much we can do to stop it.
There's no way around it, yeah.
I have a lot of respect for this guest.
Let me say that.
So it's not as if I have some sort of big brain technical disagreement here.
In fact, I don't even know enough to fully disagree
with anything he's saying.
But the conclusion is so dire and so existentially heavy
that I'm worried about it impacting you, listener.
if we don't give you this warning going in.
I also feel like David, as interviewers,
maybe we could have done a better job.
I'll say this on behalf of myself.
Sometimes I peppered him with a lot of questions
in one fell swoop,
and he was probably only ready to synthesize one at a time.
I also feel like we got caught flat-footed at times.
I wasn't expecting his answers to be so frank and so dire, David.
It was just bereft of hope.
And I appreciated very much the honesty,
as we always do on bankless.
But I appreciated it almost in the way that a patient might appreciate the honesty of their
doctor telling them that their illness is terminal.
Like, it's still really heavy news, isn't it?
So that is the context going into this episode.
I will say one thing, in good news for our failings as interviewers in this episode,
they might be remedied, because at the end of this episode, after we hit
the record button to stop recording, Eliezer said
he'd be willing to do an additional Q&A episode with the bankless community. So if you guys have
questions and if there's sufficient interest for Eliezer to answer, tweet us to express that interest,
hit us in Discord, get those messages over to us and let us know if you have some follow-up questions.
He said, if there's enough interest in the community, in the crypto community, I'll say, he'd be
willing to come on and do another episode with follow-up Q&A. Maybe even a Vitalik and Eliezer episode is in
store; that's a possibility that we threw to him. We've not talked to Vitalik about that yet,
but I just feel a little overwhelmed by the subject matter here. And that is the basis,
the preamble through which we are introducing this episode. David, there's a few benefits and
takeaways I want to get into, but before I do, can you comment or reflect on that preamble?
What are your thoughts going into this one? Yeah, we approached the end of our agenda. For every
bankless podcast, there's an equivalent agenda that runs alongside of it. But
Once we got to this crux of this conversation, it was not possible to proceed in that agenda
because what was the point?
Nothing else mattered.
Nothing else really matters, which also just kind of relates to the subject matter at hand.
And so as we proceed, you'll see us kind of circle back to the same inevitable conclusion over
and over and over again, which ultimately is kind of the punchline of the content.
And so I'm of a specific disposition where stuff like this,
I kind of am like, oh, whatever, okay, just go about my life.
Other people are of different dispositions and take these things more heavily.
So Ryan's warning at the beginning is if you are a type of person to take existential crises directly to the face,
perhaps consider doing something else instead of listening to this episode.
I think that is good counsel.
So a few things, if you're looking for an outline of the agenda, we start by talking about
ChatGPT.
Is this a new era of artificial intelligence?
It's got to begin the conversation there.
Number two, we talk about what an artificial superintelligence might look like.
How smart exactly is it?
What types of things could it do that humans cannot do?
Number three, we talk about why an AI superintelligence will almost certainly spell the end of humanity
and why it'll be really hard, if not impossible, according to our guest, to stop this from happening.
And number four, we talk about if there is absolutely anything.
we can do about all of this. We are heading, careening, maybe towards the abyss. Can we divert
direction and not go off the cliff? That is the question we leave Eliezer with. David, I think you and I
have a lot to talk about during the debrief. All right, guys, the debrief is an episode that we
record right after the episode. It's available for all bankless citizens. We call this the
bankless premium feed. You can access that now to get our raw and unfiltered thoughts on the
episode and I think it's going to be pretty raw this time around, David. I'm like,
I didn't expect this to hit you so hard, man. Oh, I'm dealing with it right now. Really?
And this is probably, you know, it's not too long after the episode. So, yeah, I don't know how I'm
going to feel tomorrow, but definitely want to talk to you about this and maybe, yeah, have you
I'll put my psychiatrist hat on. Yeah. Please, I'm going to need some help. Guys, we're going to get right
to the episode with Eliezer. But before we do, we want to thank the sponsors that made this episode
possible, including Kraken, our favorite recommended exchange for 2023.
Kraken has been a leader in the crypto industry for the last 12 years.
Dedicated to accelerating the global adoption of crypto, Kraken puts an emphasis on security,
transparency, and client support, which is why over 9 million clients have come to love
Kraken's products. Whether you're a beginner or a pro, the Kraken UI is simple, intuitive,
and frictionless, making the Kraken app a great place for all to get involved and learn about
crypto. For those with experience, the redesigned Kraken Pro app and web experience is completely customizable
to your trading needs, integrating key trading features into one seamless interface. Kraken has a 24-7-365
client support team that is globally recognized. Kraken support is available wherever, whenever you need
them by phone, chat, or email. And for all of you NFTers out there, the brand new Kraken
NFT beta platform gives you the best NFT trading experience possible. Rarity rankings, no gas fees,
and the ability to buy an NFT straight with cash.
Does your crypto exchange prioritize its customers the way that Kraken does?
If not, sign up with Kraken at kraken.com/bankless.
Hey, Bankless Nation.
If you're listening to this, it's because you're on the free Bankless RSS feed.
Did you know that there's an ad-free version of Bankless that comes with the Bankless
premium subscription?
No ads, just straight to the content.
But that's just one of many things that a premium subscription gets you.
There's also the token report, a monthly bullish, bearish, neutral report on the hottest
tokens of the month. And the regular updates from the token report go into the token Bible,
your first-stop shop for every token worth investigating in crypto.
Bankless premium also gets you a 30% discount to the permissionless conference, which means
it basically just pays for itself. There's also the airdrop guide to make sure you don't
miss a drop in 2023. But really, the best part about bankless premium is hanging out with me,
Ryan, and the rest of the bankless team in the inner circle discord only for premium members.
Want the alpha? Check out Ben the analyst's DigenPig.
where you can ask him questions about the token report.
Got a question?
I've got my own Q&A room for any questions that you might have.
At Bankless, we have huge things planned for 2023,
including a new website with login with your Ethereum address capabilities,
and we're super excited to ship what we are calling Bankless 2.0 soon TM.
So if you want extra help exploring the frontier,
subscribe to Bankless Premium.
It's under 50 cents a day and provides a wealth of knowledge and support on your journey West.
I'll see you in the Discord.
The Phantom Wallet is coming to Ethereum.
The number one wallet on Solana is bringing its millions of users and beloved UX to Ethereum and Polygon.
If you haven't used Phantom before, you've been missing out.
Phantom was one of the first wallets to pioneer Solana staking inside the wallet,
and will be offering similar staking features for Ethereum and Polygon.
But that's just staking.
Phantom is also the best home for your NFTs.
Phantom has a complete set of features to optimize your NFT experience,
pin your favorites, hide your uglies, burn the spam,
and also manage your NFT sale listings from inside
the wallet. Phantom is of course a multi-chain wallet, but it makes chain management easy,
displaying your transactions in a human-readable format with automatic warnings for malicious
transactions or phishing websites. Phantom has already saved over 20,000 users from getting
scammed or hacked. So get on the Phantom waitlist and be one of the first to access the multi-chain
beta. There's a link in the show notes, or you can go to phantom.com slash waitlist to get access
in late February. Bankless Nation, we are super excited to introduce you to our next guest.
Eliezer Yudkowsky is a decision theorist. He's an AI researcher. He's the creator of the LessWrong community blog, a fantastic blog, by the way. There's so many other things that he's also done. I can't fit this in the short bio that we have to introduce you to Eliezer. But most relevant probably to this conversation is he's working at the Machine Intelligence Research Institute to ensure that when we do make general artificial intelligence, it doesn't come kill us all. Or at least it doesn't come ban cryptocurrency, because that would be a poor outcome as well.
Eliezer, it's great to have you on bankless.
How you doing?
Within one standard deviation of my own peculiar little mean.
Fantastic.
You know, we wanted to start this conversation with something that is
jumped onto the scene, I think, for a lot of mainstream folks quite recently.
And that is chat GPT.
So apparently over 100 million people or so have logged on to ChatGPT quite recently.
I've been playing with it myself.
I found it very friendly, very useful. It even wrote me a sweet poem that I thought was very heartfelt and
almost human-like. I know that you have major concerns around AI safety, and we're going to get
into those concerns. But can you tell us in the context of something like a chat GPT,
is this something we should be worried about that this is going to turn evil and enslave the human
race? Like how worried should we be about ChatGPT and Bard and sort of the new AI that's entered the
scene recently? ChatGPT itself? Zero. It's not smart enough to do anything really wrong or really
right either, for that matter. And what gives you the confidence to say that? How do you know this?
Excellent question. So every now and then somebody figures out how to put a new prompt into chat GPT.
You know, one time somebody found that it would talk, well, not ChatGPT, but one of the earlier
generations of the technology, they found that it would sound smarter if you first told it it was
Eliezer Yudkowsky. You know, there's other prompts too, but that one's one of my favorites.
So there's untapped potential in there that people haven't figured out how to prompt yet.
But when people figure it out, it moves ahead sufficiently short distances that I do feel
fairly confident that there is not so much untapped potential in there that it is going to take
over the world. It's like making small movements. And to take over the world, it needs to make
a very large movement.
There's places where it falls down
on predicting the next line
that a human would say in its shoes
that seem indicative of
probably that capability
just is not in the giant
inscrutable matrices
or it would be using it to predict the next line,
which is very heavily what it was optimized for.
So there's going to be like some untapped potential in there,
but I do feel quite confident
that the upper range of that untapped potential
is insufficient to outsmart all of the living humans
and implement the scenario that I'm worried about.
So even so, though, is chat GPT a big leap forward
in the journey towards AI in your mind?
Or is this fairly incremental?
It's just for whatever reason it's caught mainstream attention.
GPT3 was a big leap forward.
There's rumors about GPT4, which, you know, who knows?
ChatGPT is a commercialization
of the actual AI-in-the-lab giant leap forward.
If you had never heard of GPT3 or GPT2
or the whole range of text transformers
before chat GPT suddenly entered into your life,
then that whole thing is a giant leap forward,
but it's a giant leap forward based in a technology
that was published in, if I recall correctly, 2018.
I think what's going around in everyone's minds right now
and the bankless listenership and crypto people at large are largely futurists.
So everyone, I think listening understands that in the future,
there will be sentient AIs perhaps around us,
at least by the time that we all move on from this world.
So, like, we all know that this future of AI is coming towards us.
And when we see something like ChatGPT, everyone's like,
oh, is this the moment in which our world starts to become integrated with AI?
And so, Eliezer, you've tapped into the world of AI.
Are we on to something here, or is this just another, you know, fad that we will internalize and then move on from?
And then the real moment of generalized AI is actually much further out than we're initially giving credit for.
Like, where are we in this timeline?
You know, predictions are hard, especially about the future.
I sure hope that this is where it saturates.
This is like the next generation.
It goes only this far.
It goes no further.
it doesn't get used to make more steel or build better power plants.
First, because that's illegal.
And second, because of the large language model technology's basic vulnerabilities.
It's not reliable.
Like, it's good for applications where it works 80% of the time,
but not where it needs to work 99.999% of the time.
This class of technology can't drive a car because it will sometimes crash the car.
So I hope it saturates there.
I hope they can't fix it.
I hope we get like a 10-year AI
winter after this. This is not what I actually predict. I think that we are hearing the last
winds start to blow, the fabric of reality start to fray. This thing alone cannot end the world,
but I think that probably some of the vast quantities of money being blindly and helplessly piled
into here are going to end up actually accomplishing something. You know, not most of the money.
That's just like never happens in any field of human endeavor. But one percent
of $10 billion is still a lot of money to actually accomplish something.
So I think listeners, I think you've heard Eliezer's thesis on this, which is pretty dim with
respect to AI alignment.
And we'll get into what we mean by AI alignment and very worried about AI safety related
issues.
But I think for a lot of people to even sort of worry about AI safety and for us to even
have that conversation, I think they have to have some sort of grasp of what AGI looks
like. That is, I understand that to mean artificial general intelligence and this idea of a super
intelligence. Can you tell us like if there was a super intelligence on the scene, what would it
look like? I mean, is this going to look like a big chat box on the internet that we can all
type things into? It's like an Oracle type thing or is it like some sort of a robot that it's
going to be constructed in a secret government lab? Is this like something somebody could accidentally
create in a dorm room? Like what are we even looking for when we talk about the term AGI and
superintelligence? So first of all, I'd say those are pretty distinct concepts. ChatGPT shows a
very wide range of generality compared to the previous generations of AI. Not like very wide
generality compared to GPT3, not like literally the lab research that got commercialized.
That's the same generation. But compared to, you know, stuff from 2018 or even 2020,
ChatGPT is better at a much wider range of things without having been explicitly programmed
by humans to be able to do those things.
To imitate a human as best it can, it has to capture all of the things that humans can think about, insofar as it can, which is not
all the things.
It's still not very good at long multiplication unless you give it the right instructions,
in which case suddenly it can do it.
So it's significantly more general than
the previous generation of artificial minds, the way humans were significantly more general than the
previous generation, the chimpanzees, or rather Australopithecus, or the last common ancestor.
Humans are not fully general. If humans were fully general, we'd be as good at coding as we are
at football, throwing things, or running. Some of us are, you know, okay at programming,
but, you know, we're not specced for it. We're not fully general minds.
You can imagine something that's more general than a human, and if it runs into something unfamiliar,
it's like, okay, let me just go reprogram myself a bit, and then I'll be as adapted to this
thing as I am to, you know, anything else. So chat GPT is less general than a human, but it's
like genuinely ambiguous, I think, whether it's more or less general than, say, our cousins
the chimpanzees, or if you don't believe it's as general as a chimpanzee, a dolphin, or a cat.
So this idea of general intelligence is sort of a range of things that it can actually do, a range of ways it can apply itself?
How wide is it? How much reprogramming does it need? How much retraining does it need to make it do a new thing?
Bees build hives, beavers build dams. A human will look at a beehive and imagine a honeycomb-shaped dam.
And that's like humans alone in the animal kingdom. But that doesn't mean that we are fully
general intelligences; it means we're significantly more generally applicable intelligences than chimpanzees.
It's not like we're all that narrow. We can walk on the moon. We can walk on the moon because there's
aspects of our intelligence that are like made in full generality for universes that contain
simplicities, regularities, things that recur over and over again. We understand that if steel is hard
on earth, it may stay hard on the moon. And because of that, we can build rockets, walk on the moon
breathe amid the vacuum. Chimpanzees cannot do that, but that doesn't mean that humans are the most
general possible things. The thing that is more general than us that figures that stuff out faster
is the thing to be scared of, if the purposes to which it turns its intelligence are not
ones that we would recognize as nice things, even in the most cosmopolitan and embracing
senses of, you know, what's worth doing. And you said this idea of a general intelligence is different
than the concept of superintelligence, which I also brought into that first part of the question.
How is superintelligence different than general intelligence?
Well, because chat GPT has a little bit of general intelligence, humans have more general intelligence.
A superintelligence is something that can beat any human and the entire human civilization at all the cognitive tasks.
I don't know if the efficient market hypothesis is something I can rely on you all knowing.
Yes, we're all crypto investors here. We understand efficient market hypothesis for sure.
So the efficient market hypothesis is, of course, not generally true. Like, it's not true
that literally all the market prices are smarter than you. It's not true that all the prices on
earth are smarter than you. Even the most arrogant person who is at all calibrated, however,
still thinks that the efficient market hypothesis is true relative to them
99.9999% of the time. They only think that they know better about
one in a million prices. There might be important prices. Now, the price of Bitcoin is an important
price. It's not just a random price. But if the efficient market hypothesis was only true to you
90% of the time, you could just pick out the 10% of the remaining prices and compound, like,
double your money every day on the stock market. And nobody can do that. Literally nobody can
do that. So this property of relative efficiency that the market has relative to you: the price's
estimate of the future price already has all the information you have, not all the information
that exists in principle, maybe not all the information that the best hedge funds have, but relative
to you, it's efficient. It's efficient relative to you. For you, if you pick out a random price, like the price
of Microsoft stock, something where you've got no special advantage, that estimate of its price
a week later is efficient relative to you. You can't do better than that price. We
have much less experience with the notion of instrumental efficiency, efficiency in choosing
actions, because actions are harder to aggregate estimates about than prices.
So you have to look at, say, alpha zero playing chess, or just, you know, like stockfish,
whatever the latest stockfish number is, an advanced chess engine.
When it makes a chess move, you can't do better than that chess move.
It may not be the optimal chess move, but if you pick a different chess move, you'll do worse.
That you'd call like a kind of efficiency of action.
Given its goal of winning the game, there is, once you know its move, unless you consult some more powerful AI than Stockfish, you can't figure out a better move than that.
A superintelligence is like that with respect to everything, with respect to all of humanity.
It is relatively efficient to humanity.
It has the best estimates, not perfect estimates, but the best estimates, and its estimates
contain all the information that you've got about it.
Its actions are the most efficient actions for accomplishing its goals.
If you think you see a better way to accomplish its goals, you're mistaken.
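Eliezer's earlier compounding point, that if markets were only 90% efficient relative to you, you could "double your money every day", can be made concrete with a little arithmetic. The sketch below uses made-up illustrative numbers (a $1,000 bankroll and a rough ~$100 trillion figure for annual world GDP); none of these figures come from the episode.

```python
# Illustrative sketch: why "doubling your money every day" is absurd.
# If someone could really exploit mispricings and double a bankroll daily,
# their wealth would exceed the entire world economy in about a month,
# which is why nobody can actually do it.

WORLD_GDP_USD = 100e12  # rough order of magnitude for annual world GDP (assumption)

def days_to_exceed(start_usd: float, daily_multiplier: float, target_usd: float) -> int:
    """Count compounding days until start_usd grows past target_usd."""
    days, money = 0, start_usd
    while money <= target_usd:
        money *= daily_multiplier
        days += 1
    return days

# Starting with $1,000 and doubling daily:
print(days_to_exceed(1_000.0, 2.0, WORLD_GDP_USD))  # → 37
```

So a $1,000 bankroll doubling daily overtakes world GDP in roughly five weeks; the impossibility of that outcome is the sense in which prices are "efficient relative to you" almost everywhere.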
So you're saying this is superintelligence.
We'd have to imagine something that knows all of the chess moves in advance.
But here we're not talking about chess.
We're talking about everything.
It knows all of the moves that we would make and the most optimum pattern, including moves that we would not even know how to make, and it knows these things in advance.
I mean, how would, like, human beings sort of experience such a superintelligence?
I think we still have a very hard time imagining something smarter than us, just because we've never experienced anything like it before.
Of course, you know, we all know somebody who's genius-level IQ, maybe quite a bit smarter than us, but we've never encountered something like what you're describing. For this
sort of mind that is superintelligent, what sort of things would it be doing that humans couldn't?
How would we experience this in the world? I mean, we do have some tiny bit of experience with it.
We have experience with chess engines, where we just can't figure out better moves than they make.
We have experience with market prices, where even though your uncle has this, you know,
like really long, elaborate story about Microsoft stock, you just know he's wrong. Why is he wrong? Because if he
was correct, it would already be incorporated into the stock price. And this notion, and especially
because the market's efficiency are not perfect, like that whole downward swing and then upward move
in COVID, I have friends who made more money off that than I did, but I still managed to buy
back into the broader stock market on the exact day of the low, you know, basically coincidence.
So the markets aren't perfectly efficient, but they're efficient almost everywhere. And that sense
of like deference, that sense that your weird uncle can't possibly be right because the hedge
funds would know it, you know, unless he's talking about COVID, in which case maybe he is right.
If you have the right choice of weird uncle, you know, like, I have weird friends who are
maybe better at calling these things than your weird uncle. But, you know, so among humans,
it's subtle. And then with superintelligence, it's not subtle, just massive advantage, but not perfect.
It's not that it knows every possible move you make before
you make it. It's that it's got a good probability distribution about that, and it, you know,
has figured out all the good moves you could make and figured out how to reply to those. I mean, like,
in practice, what's that like? Well, unless it's a limited, narrow superintelligence, I think you
mostly don't get to observe it, because you are dead, unfortunately. What? So, you know,
like Stockfish makes strictly better chess moves than you, but it's playing on a very narrow board,
and the fact that it's better than you at chess doesn't mean it's better than you at everything.
And I think that the actual catastrophe scenario for AI looks like big advancement in a research lab,
may be driven by them getting a giant venture capital investment in being able to spend 10 times
as much on GPUs as they did before.
Maybe driven by a new algorithmic advance like Transformers.
Maybe driven by hammering out some tweaks in last year's algorithmic advance
that gets a thing to finally work efficiently.
And the AI there goes over a critical threshold,
which most obviously could be like can write the next AI.
You know, that's so obvious that like science fiction,
writers figured it out almost before there were computers, possibly even before there were computers.
I'm not sure what the exact dates here are. But if it's better than you at everything, it's better than you
at building AIs. That snowballs. It gets an immense technological advantage. If it's smart, it doesn't
announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions
to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some
proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some
stuff they got in the mail in a vial. You know, like smart people will not do this for any sum of money.
Many people are not smart. It builds the ribosome, but a ribosome that builds things out of
covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces;
builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen,
oxygen, nitrogen, and sunlight. And a couple of days later, everybody on Earth falls over dead in the same second.
That's what I think the disaster scenario looks like. That's the disaster scenario if it's as smart as I am. If it's smarter, it might think of a better way to do things. But it can at least think of that if it's relatively efficient compared to humanity, because I'm in humanity and I thought of it.
I've got a million questions, but I'm going to let David go first.
Yeah, so we've sped run in the introduction of a number of different concepts, which I want to go back and take our time to really dive into.
There's the AI alignment problem. There's AI escape velocity. There is the question of what happens
when AIs are so incredibly intelligent that humans are to AIs, what ants are to us. And so I want to
kind of go back and tackle these, Eliezer, one by one. We started this conversation talking about
ChatGPT, and everyone's up in arms about ChatGPT. And you're saying, like, yes, it's a great
step forward in the generalizability of some of the technologies that we have in the AI world. All of a
sudden ChatGPT becomes immensely more useful and it's really stoking the imaginations of people
today. But what you're saying is it's not the thing that's actually going to be the thing to reach
escape velocity and create super intelligent AIs that perhaps might be able to enslave us.
But my question to you is, how do we know when that...
They don't enslave you, but sorry, go on.
Yeah, sorry.
Murder, David. Kill all of us.
Eliezer was very clear on that.
So if it's not ChatGPT, like how close
are we? Because there's this like unknown event horizon where you kind of alluded to it. We're like,
we make this AI that we train it to create a smarter AI. And that smart, yeah, it's so incredibly
smart that it hits escape velocity and all of a sudden these dominoes fall. How close are we to that
point? And are we even capable of answering that question? How heck would I know?
And also when you were talking, Eliezer, it's like, if we had already crossed that event
Horizon, like a smart AI wouldn't necessarily broadcast that to the world.
And it's possible we've already crossed that event horizon, is it not?
I mean, it's theoretically possible, but seems very unlikely.
Somebody would need inside their lab an AI that was like much more advanced than the
public AI technology.
And as far as I currently know, the best labs and the best people are throwing their ideas
to the world.
Like, they don't care.
and there's probably some secret government labs with like secret government AI researchers.
My pretty strong guess is that they don't have the best people and that those labs like could not
create ChatGPT on their own, because ChatGPT took a whole bunch of fine twiddling and tuning
and visible access to giant GPU farms and that they don't have the people who know how to do the
twiddling and tuning.
this is just a guess.
One of the big things that you spend a lot of time on
is this thing called the AI alignment problem.
Some people are not convinced that when we create AI,
that AI won't really just be fundamentally aligned with humans.
I don't believe that you fall into that camp.
I think you fall into the camp of when we do create this super intelligent,
generalized AI, we are going to have a hard time
aligning with it in terms of our morality and our ethics.
Can you walk us through a little bit of that thought process?
Why do you feel it'll be misaligned?
Yeah, I mean, there's the dumb way to ask the question, too. It's like, Eliezer, why do you think the AI automatically hates us?
It doesn't hate you. Why is it going to kill us all? The AI doesn't hate you, neither does it love you,
and you are made of atoms it can use for something else. It's indifferent to you. It's got something it actually does care about,
which makes no mention of you. And you are made of atoms it can use for something else. That's all
there is to it in the end. The reason you're not in its utility function is that the programmers did not
know how to do that. The people who built the AI or the people who built the AI that built the AI that
built the AI did not have the technical knowledge that nobody on Earth has at the moment as far as I
know whereby you can do that thing and you can control in detail what that thing ends up caring
about. So this feels like where humanity is hurtling itself towards an event horizon where there's
like this AI escape velocity and there's nothing on the other side.
as in we do not know what happens past that point as it relates to having some sort of super
intelligent AI and how it might be able to manipulate the world.
Would you agree with that?
No.
Again, the Stockfish chess playing analogy, you cannot predict exactly what move it would make
because in order to predict exactly what move it would make, you would have to be at least
that good at chess, and it's better than you.
This is true even if it's just a little better than you.
Stockfish is actually enormously better than you, to the point that once it tells you
the move, you can't figure out a better move without consulting a different AI. But even if it was just a bit
better than you, then you're in the same position. This kind of disparity also exists between humans.
You know, if you ask me, like, where will Garry Kasparov move on this chessboard? I'm like,
I don't know, like, maybe here. And then if Garry Kasparov moves somewhere else, it doesn't mean
that he's wrong. It means that I'm wrong. If I could predict exactly where Garry Kasparov would move
on a chessboard, I'd be Garry Kasparov. I'd be at least that good at chess,
possibly better. I'd be able to predict him, but also, like, see an even better move than that.
So that's an irreducible source of uncertainty with respect to a superintelligence, or anything that's
smarter than you. If you could predict exactly what it would do, you'd be that smart yourself.
That doesn't mean you can predict no facts about it. So with Stockfish in particular, I can predict
it's going to win the game. I know what it's optimizing for. I know where it's trying to steer the board.
I can predict that. I can't predict exactly what the board will end up looking like after Stockfish has finished winning its game against me.
I can predict it will be in the class of states that are winning positions for black or white or whichever color stockfish picked, because it wins either way.
And that's similarly where I'm getting the kind of prediction about everybody being dead.
Because if everybody were alive, then there'd be some state that the superintelligence preferred to that state, one in which all of the atoms
making up these people and their farms are made use of for something else that it values more.
So if you postulate that everybody's still alive, I'm like, okay, well, why are you
postulating that Stockfish made a stupid chess move and ended up with a non-winning board
position? That's where that class of predictions comes from.
Can you reinforce this argument, though, a little bit? So, like, why is it that an AI can't be
nice, sort of like a gentle parent to us, rather than sort of a murderer looking to deconstruct our
atoms and, you know, apply them somewhere else? Like, what are its goals, and why can't they be
aligned to at least some of our goals? Or maybe why can't it get into a status which is, you know,
somewhat like us and the ants, which is largely we just ignore them unless they interfere in
our business and come in our house and, you know, raid our cereal boxes. There's a bunch of different
questions there. So first of all, the space of minds is very wide. Imagine like this giant
sphere and all the humans are in this one tiny corner of the sphere.
And, you know, we're all like basically the same make and model of car running the same brand of engine.
We're just all painted slightly different colors.
Somewhere in that mind space, there's things that are as nice as humans.
There's things that are nicer than humans.
There's things that are trustworthy and nice and kind in ways that no human can ever be.
And there's even things that are so nice that they can understand the concept of leaving you alone and doing your own stuff sometimes,
instead of hanging around you trying to be obsessively nice to you every minute, and all the other famous
disaster scenarios from ancient science fiction. "With Folded Hands" by Jack Williamson is the one I'm
quoting there. We don't know how to reach into mind-design space and pluck out an AI like that.
It's not that they don't exist in principle. It's that we don't know how to do it. And I'll
like hand back the conversational ball now and figure out like which next question do you want to go
down there. Well, I mean, why? Like, why is it so difficult to sort of align an AI with even our
basic notions of morality?
I mean, I wouldn't say that it's difficult to align
an AI with our basic notions of morality.
I'd say that it's difficult to align AI in a task like,
take this strawberry and make me another strawberry
that's identical to this strawberry down to the cellular level,
but not necessarily the atomic level.
So it looks the same under, like, a standard optical microscope,
but maybe not a scanning electron microscope.
You know, do that.
Don't destroy the world as a side effect.
Now, this does intrinsically take a powerful AI.
There's no way you can make it easy to align by making it stupid.
To build something that's cellular identical to a strawberry,
I mean, mostly I think the way that you do this is with very primitive nanotechnology.
We could also do it using very advanced biotechnology.
And these are not technologies that we already have,
so it's got to be something smart enough to develop new technology.
Never mind all the subtleties of morality.
I think we don't have the technology to align an AI to the point where we can say,
build me a copy of the strawberry and don't destroy the world.
Why do I think that?
Well, case in point,
look at natural selection building humans.
Natural selection mutates the humans a bit,
runs another generation,
the fittest ones reproduce more,
their genes become more prevalent in the next generation.
Natural selection hasn't really had very much time to do this to modern humans at all,
but, you know, the hominid line, the mammalian line.
Go back a few million generations.
And this is an example of an optimization process building an intelligence.
And natural selection asked us for only one thing.
Make more copies of your DNA.
Make your alleles more relatively prevalent in the gene pool.
maximize your inclusive genetic fitness, not just, like, your own reproductive fitness,
but, you know, two brothers or eight cousins, as the joke goes.
Because they've got, on average, one copy of your genes: two brothers, eight cousins.
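The two-brothers-or-eight-cousins arithmetic (Haldane's famous quip) can be checked directly. This is the standard pedigree calculation, not anything from the episode; the function name is my own:

```python
# Coefficient of relatedness r: the expected fraction of your alleles a
# relative shares with you by common descent. The joke works because
# r * (number of relatives) = 1 full genome-equivalent.

def relatedness(meioses: int, paths: int = 1) -> float:
    """Standard pedigree rule: r = paths * (1/2) ** meioses."""
    return paths * 0.5 ** meioses

# Full sibling: 2 meioses per path (you -> parent -> sibling),
# one path through each of 2 shared parents.
r_sibling = relatedness(meioses=2, paths=2)   # 0.5
# First cousin: 4 meioses per path, one path through each of
# 2 shared grandparents.
r_cousin = relatedness(meioses=4, paths=2)    # 0.125

print(2 * r_sibling)  # 1.0 -> two brothers carry, on average, one copy of "you"
print(8 * r_cousin)   # 1.0 -> so do eight cousins
```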
This is all we were optimized for, for millions of generations, creating humans from scratch,
from the first accidentally self-replicating molecule.
Internally, psychologically, inside our minds, we do not know what genes are.
We do not know what DNA is.
We do not know what alleles are.
We have no concept of inclusive genetic fitness until, you know, our scientists figure out what that even is.
We don't know what we were being optimized for.
For a long time, many humans thought they'd been created by God.
And this is what happens when you use the hill-climbing paradigm and optimize
for one single, extremely pure thing: this is how much of it gets inside.
In the ancestral environment, in the exact distribution that we were originally optimized
for, humans did tend to end up using their intelligence to try to reproduce more.
Put them into a different environment, and all the little bits and pieces and fragments
of optimizing for fitness that were in us now do totally different stuff.
We have sex, but we wear condoms.
If natural selection had been a foresightful, intelligent kind of engineer that was able to engineer things successfully, it would have built us to be revolted by the thought of condoms.
Men would be lined up and fighting for the rights to donate to sperm banks.
And in our ancestral environment, the little drives that got into us happened to lead to more reproduction.
But under distributional shift, run the humans outside the distribution over which they were optimized,
and you get totally different results.
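The failure mode Eliezer is describing here, a learned drive that tracks the real target only inside the training distribution, can be sketched in a few lines. The foods and numbers below are invented for illustration and are not from the episode:

```python
# Toy illustration of a proxy drive under distribution shift.
# True goal: nutrition. The evolved rule ("prefer sweetness") correlates
# with nutrition for ancestral foods, but not for a novel supernormal
# stimulus like ice cream.

ancestral_foods = {"strawberry": (0.4, 0.5),   # (sweetness, nutrition)
                   "honey": (0.9, 0.6),
                   "gazelle": (0.1, 0.9)}
novel_foods = {"ice cream": (1.0, 0.1)}        # sweeter than anything ancestral

def preferred(foods):
    # The optimized-in drive: pick whatever is sweetest.
    return max(foods, key=lambda f: foods[f][0])

print(preferred(ancestral_foods))              # honey (also quite nutritious)
shifted = {**ancestral_foods, **novel_foods}
print(preferred(shifted))                      # ice cream (barely nutritious)
```

In distribution, maximizing the proxy happens to serve the true goal; one new option outside the training distribution and the proxy and the goal come apart.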
And gradient descent would, by default, do not quite the same thing.
It's going to do a weirder thing, because natural selection has a much narrower information bottleneck.
In one sense, you could say that natural selection was at an advantage because it finds simpler solutions.
You could imagine some hopeful engineer who just built intelligences using
gradient descent and found out that they end up wanting these, like, thousands and millions of little
tiny things, none of which were exactly what the engineer wanted, and being like, well, let's
try natural selection instead. It's got a much sharper information bottleneck. It'll find the
simple specification of what I want. But it doesn't actually get there, as we humans show. It does no better than gradient descent,
probably, maybe even worse. But more importantly, I'm just pointing out that there is no physical
law, computational law, mathematical or logical law, saying:
When you optimize using hill climbing on a very simple, very sharp criterion,
you get a general intelligence that wants that thing.
So just like natural selection, our tools are too blunt in order to get to that level of granularity
to like program in some sort of morality into these super intelligent systems?
Or build me a copy of a strawberry without destroying the world.
Yeah, the tools are too blunt.
So I just want to make sure I'm following what you were saying. I think the conclusion that you left me with is that my brain, which I consider to be at least decently smart, is actually an accidental byproduct of this desire to reproduce. And it's actually just, like, a tool that I have, just like conscious thought is a tool, a useful tool in service of that end. And so if we're applying this to AI and AI's desire to achieve some certain goal, what's
the parallel there? I mean, every organ in your body is a reproductive organ. If it didn't help you
reproduce, you would not have an organ like that. Your brain is no exception. This is merely
conventional science and like merely the conventional understanding of the world. I am not saying
anything here that ought to be at all controversial. I'm sure it's controversial somewhere,
but, you know, within a pre-filtered audience, it should not be at all controversial. And
this is, like, the obvious thing to expect to happen with AI, because why wouldn't it?
What new law of existence has been invoked?
whereby this time we optimize for a thing and we get a thing that wants exactly what we
optimized for on the outside.
So what are the types of goals an AI might want to pursue?
What types of utility functions is it going to want to pursue off the bat?
Is it just those it's been programmed with?
Like make it an identical strawberry?
Well, the whole thing I'm saying is that we do not know how to get goals into a system.
We can cause them to do a thing inside a distribution they were optimized over using gradient descent.
But if you shift them outside of that distribution, I expect other weird things start happening.
When they reflect on themselves, other weird things start happening.
What kind of utility functions are in there?
I mean, darn if I know.
I think you'd have a pretty hard time calling the shape of humans in advance by looking at natural selection, the thing that natural selection was optimizing for, if you'd never seen a human or anything like a human.
If we optimize them from the outside to predict the next line of human text, like GPT-3, I don't actually think this line of technology leads to the end of the world, but maybe it does.
And, you know, like GPT-7, there's probably a bunch of stuff in there
that desires to accurately model things like humans under a wide range of circumstances.
But it's not exactly humans, because: ice cream.
Ice cream didn't exist in the natural environment, the ancestral environment,
the environment of evolutionary adaptiveness.
There was nothing with as much sugar,
salt, and fat combined together as ice cream. We are not built to want ice cream. We were built to want
strawberries, honey, a gazelle that you killed and cooked and had some fat in it and was
therefore nourishing and gave you the all-important calories you need to survive. Salt, so you
didn't sweat too much and run out of salt. We evolved to want those things, but then
ice cream comes along and it fits those taste buds better than anything that existed in the
environment that we were optimized over. So a very primitive, very basic, very unreliable, wild
guess, but at least an informed kind of wild guess. Maybe if you train a thing really hard to
predict humans, then among the things that it likes are tiny little pseudo-things
that meet the definition of human but weren't in its training data, and that are much easier
to predict, or where the problem of predicting them can be solved in a more satisfying way,
where satisfying is not, like, human satisfaction, but some other criterion: thoughts like this
are tasty because they help you predict the humans from the training data.
Eliezer, when we talk about all of these ideas, about the ways that AI thought will be
fundamentally incompatible with, or just not understandable by, the ways that humans think,
and then all of a sudden we see this rotation by venture capitalists, just pouring money
into AI. Do alarm bells go off in your head? Like, hey guys, you haven't thought deeply about
these subject matters yet. Does the immense amount of capital going into AI investment scare you?
I mean, alarm bells went off for me in 2015, which is when it became obvious that this is how it was going to go down.
I sure am now seeing the realization of that stuff I felt alarmed about back then.
Eliezer, is this view that AI is incredibly dangerous and that AGI is going to eventually end humanity,
and that we're just careening toward a precipice? Would you say this is like the consensus view now?
Or are you still somewhat of an outlier? And like, why aren't other smart people in this field as
alarmed as you? Can you, like, steel-man their arguments?
You're asking, again, like several questions.
there. Is it the consensus view? No. Do I think that the people in the wider scientific
field who dispute this point of view, do I think they understand it? Do I think they've done
anything like an impressive job of arguing against it at all? No. They, like if you look at the,
like, famous, prestigious scientists who sometimes make a little fun of this view in passing,
they're making up arguments rather than deeply considering things that are held to any standard
of rigor, and people outside their own fields are able to validly shoot them down.
I have no idea how to pronounce his last name.
François Chollet, C-H-O-L-L-E-T.
You know, he said something about, like, oh, this, you know, I forget his exact words,
but it's something like, I never hear any good arguments for this stuff.
And I was like, okay, here's some good arguments for this stuff.
And you can read the reply from Yudkowsky to Chollet, you can Google that,
and that'll give you some idea of what the eminent voices versus, like,
the reply to the eminent voices sound like.
And, you know, like Scott Aaronson, who at the time was off in complexity theory.
He was like, that's not how no-free-lunch theorems work.
So, yeah, I think the state of affairs is we have eminent scientific voices
making fun of this possibility, but not engaging with the arguments for it.
Now, if you step away from the eminent scientific voices,
you can find people who are more familiar with all the arguments and disagree with me.
And I think they lack security mindset.
I think that they're engaging in the sort of blind optimism
that many, many scientific fields throughout history have engaged in
where when you're approaching something for the first time,
you don't know why it will be hard,
and you imagine easy ways to do things.
And the way that this is supposed to naturally play out
over the history of a scientific field
is that you run out and you try to do the things
and they don't work.
And you go back and you try to do other clever things
and they don't work either.
And you learn some pessimism
and you start to understand the reasons
why the problem is hard.
This is, in fact,
the field of artificial intelligence itself
recapitulated this very common
ontogeny of a scientific field
where, you know, initially we had people getting together at the Dartmouth conference.
I forget what their exact famous phrasing was, but it's something like:
we want to address the problem of getting AIs to, you know, understand language, improve themselves,
and I forget even what else was there, a list of what now sound like grand challenges.
And we think we can make substantial progress on this using 10 researchers for two months.
And I think that that, at the core, is what's going on. They have not run into the actual problems of alignment. They aren't
trying to get ahead of the game. They're not trying to panic early. They're waiting for reality to hit
them over the head and turn them into grizzled old cynics of their scientific field who understand
the reasons why things are hard. They're content with the predictable life cycle of starting out
as bright-eyed youngsters, waiting for reality to hit them over the head with the news. And if it
wasn't going to kill everybody the first time that they're really wrong, it'd be fine. You know,
this is how science works. If we got unlimited free retries in 50 years to solve everything, it'd be
okay. We could figure out how to align AI in 50 years given unlimited retries. You know, the first team in
with the bright-eyed optimists would destroy the world and people would go, oh, well, you know,
it's not that easy. They would try something else clever. That would destroy the world. People would go
like, oh, well, you know, maybe this is this field is actually hard. Maybe this is actually one of the
thorny things like computer security or something.
And so, what exactly went wrong last time? Why didn't these hopeful ideas play out?
Oh, like you optimize for one thing on the outside and you get a different thing on the inside.
Wow, that's really basic. All right. Can we even do this using gradient descent? Can you even
build this thing out of giant inscrutable matrices of floating point numbers that nobody understands
at all? You know, maybe we need different methodology. And 50 years later, you'd have an aligned AGI.
Now, if we got unlimited free retries without destroying the world, it'd be, you know,
it'd play out the same way that, you know, ChatGPT played out.
It took, you know, from 1956, or '55, or whatever it was, to 2023.
So, you know, about 70 years, give or take a few.
And, you know, 70 years later, just like we can now do the stuff
they wanted to do in a summer in 1955,
70 years later, we'd have your aligned AGI.
The problem is that the world got destroyed in the meanwhile.
And that's why we, you know, that's the problem there.
So this feels like a gigantic don't look up scenario.
If you're familiar with that movie, it's a movie about this asteroid hurtling toward Earth, but it becomes popular and in vogue to not look up and not notice it.
And Eliezer, you're the guy who's saying like, hey, there's an asteroid.
We have to do something about it.
And if we don't, it's going to come destroy us.
If you had God mode over the progress of AI research and just innovation and development,
what choices would you make that humans are not currently making today?
I mean, I could say something like shut down all the large GPU clusters.
How long do I have God mode?
Do I get to like stick around for 70 years?
You have God mode for the decade of the 2020s.
For the decade of the 2020s.
All right, that does make it pretty hard to do things.
I think I shut down all the GPU clusters, and get all of the famous scientists and brilliant,
talented youngsters, the vast, vast majority of whom are not going to be productive,
and where government bureaucrats are not going to be able to tell who's actually being helpful or not,
but, you know, put them all on an island, a large island,
and try to figure out some system for filtering the stuff through to me to give thumbs up or thumbs down on
that is going to work better than scientific bureaucrats producing entire nonsense because, you know,
the trouble is the reason why scientific fields have to go through this long process to produce
the cynical oldsters who know that everything is difficult. It's not that the youngsters are
stupid. You know, sometimes youngsters are fairly smart. You know, Marvin Minsky, John McCarthy, back in
1955. They weren't idiots. I was privileged to have met both of them. They didn't strike me as
idiots. They were very old; they still weren't idiots. But, you know, it's hard to see what's coming in
advance of experimental evidence hitting you over the head with it. And if I only have the
decade of the 2020s to run all the researchers on this giant island somewhere, it's really not
a lot of time. Mostly what you've got to do is invent some entirely new AI paradigm that
isn't the giant inscrutable matrices of floating-point numbers on gradient descent.
Because I'm not really seeing what you can do that's clever with that,
that doesn't kill you and that you know doesn't kill you,
and doesn't kill you the very first time you try to do something clever like that.
I'm sure there's a way to do it.
And if you got it to try over and over again, you could find it.
Uniswap is the largest on-chain marketplace for self-custody digital assets.
Uniswap is, of course, a decentralized exchange, but you know this because you've been listening to Bankless.
But did you know that the Uniswap web app has a shiny new fiat on-ramp?
Now you can go directly from fiat in your bank to tokens in DeFi inside of Uniswap.
Not only that, but Polygon, Arbitrum, and Optimism, Layer 2s are supported right out of the gate.
But that's just DeFi.
Uniswap is also an NFT aggregator, letting you find more listings for the best prices across the NFT world.
With Uniswap, you can sweep floors on multiple NFTs,
and Uniswap's universal router will optimize your gas fees for you.
Uniswap is making it as easy as possible
to go from bank account to bankless assets across Ethereum,
and we couldn't be more thankful for having them as a sponsor.
So go to app.uniswap.org today
to buy, sell, or swap tokens and NFTs.
Arbitrum One is pioneering the world of secure Ethereum scalability
and is continuing to accelerate the Web3 landscape.
Hundreds of projects have already deployed on Arbitrum One, producing flourishing DeFi and NFT ecosystems.
With the recent addition of Arbitrum Nova, gaming and social dapps like Reddit are also now calling Arbitrum home.
Both Arbitrum One and Nova leverage the security and decentralization of Ethereum and provide a builder experience that's intuitive, familiar, and fully EVM-compatible.
On Arbitrum, both builders and users will experience faster transaction speeds with significantly lower gas fees.
With Arbitrum's recent migration to Arbitrum Nitro, it's also now 10 times faster than before.
Visit Arbitrum.io, where you can join the community, dive into the developer docs, bridge your assets, and start building your first app.
With Arbitrum, experience Web3 development the way it was meant to be.
Secure, fast, cheap, and friction-free.
How many total airdrops have you gotten?
This last bull market had a ton of them.
Did you get them all?
Maybe you missed one.
So here's what you should do.
Go to Earnify and plug in your Ethereum wallet, and Earnify will tell you if you have any unclaimed airdrops
that you can get. And it also does POAPs and mintable NFTs. Any kind of money that your wallet can
claim, Earnify will tell you about it. And you should probably do it now, because some airdrops
expire. And if you sign up for Earnify, they'll email you anytime one of your wallets has a new
airdrop for it, to make sure that you never lose an airdrop ever again. You can also upgrade to
Earnify Premium to unlock access to airdrops that are beyond the basics, and be able to set
reminders for more wallets. And for just under $21 a month, it probably pays for itself with just
one airdrop. So plug in your wallets at Earnify and see what you get. That's E-A-R-N-I-F-Y.
And make sure you never lose another airdrop.
Eliezer, do you think every intelligent civilization has to deal with this exact problem that
humanity is dealing with now? Is how do we solve this problem of aligning with an advanced
general intelligence? I expect that's much easier for some alien species than others.
Like, there are alien species who might arrive at this problem in an entirely different way.
You know, like maybe instead of having two entirely different information processing systems,
the DNA and the neurons, they've only got one system.
They can trade memories around heritably by swapping blood sexually.
Maybe the way in which they confront this problem is that very early in their evolutionary history,
they have the equivalent of the like DNA that stores memories and like processes, computes
memories, and they swap around a bunch of it, and it adds up to something that reflects on
itself and makes itself coherent, and then you've got a superintelligence before they have invented
computers. And maybe that thing wasn't aligned. But, you know, how do you even align it
when you're in that kind of situation? It'd be a very different angle on the problem.
Do you think every advanced civilization is on the trajectory to creating a superintelligence at some
point in its history? Maybe there's ones in universes with alternate physics where you just
can't do that. Their universe's computational physics just doesn't support that much computation.
Maybe they never get there. Maybe their lifespans are long enough and their star lifespans
short enough that they never get to the point of a technological civilization before their star
does the equivalent of expanding or exploding or going out and their planet ends. Every alien
species covers a lot of territory, especially if you talk about alien species and universes with
physics different from this one. Well, talking about kind of our present universe, I'm curious if you've
sort of been confronted with the question of like, well, then why haven't we seen some sort of
superintelligence in our universe when we sort of look out at the stars, sort of the Fermi paradox
type of question? Do you have any explanation for that? Oh, well, supposing that they got killed
by their own AIs doesn't help at all with that, because then we'd see the AIs. And do you think
that's what happens. Yeah, it doesn't help with that. We would see evidence of AIs, wouldn't we?
Yeah. Yes. So why don't we?
I mean, the same reason we don't see evidence of the alien civilizations without AIs.
And that reason, which doesn't really have much to do with the whole AI thesis one way or another, is that they're too far away.
Or so says Robin Hanson, using a very clever argument about the apparent difficulty of hard
steps in humanity's evolutionary history to infer the rough gap between the hard steps.
And, you know, I can't really do justice to this. If you look up grabby aliens.
Grabby aliens?
I remember this. Yeah.
Grabby aliens. G-R-A-B-B-Y. You can find Robin Hanson's very clever argument for how far away the aliens are.
There's an entire website.
Bankless listeners, there's an entire website called grabbyaliens.com you can go look at.
Yeah. And that contains what is by far the best answer I've seen to "where are they?"
Answer: too far away for us to see, even if they're traveling here at nearly light speed.
How far away are they? And how do we know that?
This is amazing.
But, yeah. There is not a very good way to simplify the argument, you know, any more than there is a way to simplify the notion of zero-knowledge proofs.
It's not that it's difficult; it's just very not easy to simplify.
But if you have a bunch of locks that are all of different difficulties,
and a limited time in which to get through all the locks,
such that anybody who gets through all the locks must have gotten through them partly by luck,
then all the locks will take around the same amount of time to get through,
even if they're all of very different difficulties.
And that's the core of Robin Hanson's argument for how far away the aliens are
and how we know that.
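That locks claim can be checked with a quick Monte Carlo experiment. This is my own toy construction for illustration, not Hanson's actual model: draw solve times for locks of wildly different expected difficulty, keep only the runs that beat the deadline, and compare the average time spent per lock.

```python
import random

random.seed(0)

# Three "locks" (hard steps) whose expected solve times differ by 10x
# each, all much longer than the deadline.
means = [3.0, 10.0, 30.0]
deadline = 1.0

accepted = []
for _ in range(1_000_000):
    times = [random.expovariate(1.0 / m) for m in means]
    if sum(times) < deadline:   # condition on getting through every lock in time
        accepted.append(times)

# Among the lucky runs, each lock takes roughly the same share of the
# deadline, despite the 10x differences in difficulty.
cond_means = [sum(t[i] for t in accepted) / len(accepted) for i in range(3)]
print([round(m, 2) for m in cond_means])
```

Drop the conditioning and look at all runs instead, and the averages revert to the very different unconditional difficulties of 3, 10, and 30.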
Eliezer, I know you're very skeptical
that there will be a good outcome
when we produce an artificial general intelligence.
And I said when, not if,
because I believe that's your thesis as well, of course.
But is there the possibility of a good outcome?
Like, I know you are working on AI alignment problems,
which leads me to believe that you have, like,
greater than zero amount of hope for this project.
Is there the possibility of a good outcome?
What would that look like, and how do we go about achieving it?
It looks like me being wrong.
I basically don't see on-model hopeful outcomes at this point.
We have not done those things that it would take to earn a good outcome.
And this is not a case where you get a good outcome by accident.
It's, you know, like if you have a bunch of people putting together a new operating system
and they've heard about computer security,
but they're skeptical that it's really that hard,
the chance of them producing a secure operating system is effectively zero.
That's basically the situation I see ourselves in with respect to AI alignment.
I have to be wrong about something, which I certainly am,
have to be wrong about something in a way that makes the problem easier rather than harder
for those people who don't think that alignment's going to be all that hard.
you know, if you're building a rocket for the first time ever and you're wrong about something,
it's not surprising if you're wrong about something.
It's surprising if the thing that you're wrong about causes the rocket to go twice as high
on half the fuel you thought was required and be much easier to steer than you were afraid of.
Where the alternative was, if you're wrong about something, the rocket blows up.
Yeah, and the problem there is that the rocket ignites the atmosphere.
Or rather, you know, like a bunch of rockets blow up, a bunch of rockets go places.
The analogy I usually use for this is very early on in the Manhattan Project.
They were worried about what if the nuclear weapons can ignite fusion in the nitrogen in the atmosphere.
And they ran some calculations and decided that it was like incredibly unlikely for multiple angles.
So they went ahead.
And we're correct.
You know, we're still here.
And I'm not going to say that it was luck because, you know, the calculations were actually pretty solid.
And AI is like that, but instead of needing to refine plutonium, you can make nuclear weapons out of a billion tons of laundry detergent.
The stuff to make them is like fairly widespread.
It's not a tightly controlled substance.
And they spit out gold up until they get large enough, and then they ignite the atmosphere.
And you can't calculate how large is large enough.
And a bunch of the people, the CEOs running these projects,
are making fun of the idea that it'll ignite the atmosphere.
It's not a very hopeful situation.
So the economic incentive to produce this AI, one of the reasons why ChatGPT has sparked the imaginations of so many people, is that everyone can imagine products. Products are being imagined left and right about what you can do with something like ChatGPT. There's, like, this meme at this point of people leaving to go start their ChatGPT startup.
And so, like, the metaphor is, what you're saying is that there's this generally available resource spread all around the world, which is ChatGPT, and everyone's hammering it in order to make it spit out gold. But you're saying if we do that too much, all of a sudden, the system will ignite the whole entire sky and then we will all die.
Well, no, you can run ChatGPT any number of times without igniting the atmosphere. It's about what the research labs at Google and Microsoft, counting DeepMind as part of Google and counting OpenAI as part of Microsoft, are doing: bringing more metaphorical plutonium together than ever before. It's not about how many times you run the things that have been built and not destroyed the world yet. You can do any amount of stuff with ChatGPT and not destroy the world.
It's not that smart.
It doesn't get smarter every time you run it.
Can I ask some questions that the 10-year-old in me wants to really ask about this?
And I'm asking these questions because I think a lot of listeners might be thinking them too.
So knock out some of these easy answers for me.
If we create some sort of unaligned, let's call it bad AI,
why can't we just create a whole bunch of good AIs to go fight the bad AIs
and solve the problem that way?
Can there not be some sort of counterbalance in terms of aligned human AIs and evil AIs
and there'd be sort of some battle of the artificial minds here?
Nobody knows how to create any good AIs at all.
The problem isn't that we have like 20 good AIs and then somebody finally builds an evil AI.
The problem is that the first very powerful AI is evil.
Nobody knows how to make it good.
And then it kills everybody before anybody can make it good.
So there is no known way to make a friendly, human aligned AI whatsoever.
And you don't know of a good way to go about thinking through that problem and designing one.
neither does anyone else, is what you're telling us.
I have some idea of what I would do if there were more time. You know, back in the day we had more time; humanity squandered it. I'm not sure there's enough time left now. I have some idea of what I would do if I were in a 25-year-old body and had $10 billion. That would be the island scenario, where you're, like, God for 10 years and you get all the researchers on an island and really hammer at this problem for 10 years.
If I have buy-in from a major government that can run actual security precautions and more than just
$10 billion, then you know, you could run a whole Manhattan project about it, sure.
This is another question that the 10-year-old in me wants to know: there are people listening to this episode, people listening to these concerns or reading the concerns that you've written down and published. Why can't everyone who's building an AI get on board
and just all agree to be very, very careful?
Is that not a sustainable game theoretic position to have?
Is this sort of like a coordination problem, more of a social problem than anything else?
Or like, why can't that happen?
I mean, we have so far not destroyed the world with nuclear weapons, and we've had them, you know, since the 1940s.
Yeah, this is harder than nuclear weapons.
There's a lot harder than nuclear weapons.
Why is this harder and why can't we just coordinate to just?
just all agree internationally that we're going to be very careful, put restrictions on this,
put regulations on it, do something like that.
Current heads of major labs seem to me to be openly contemptuous of these issues.
That's where we're starting from.
The politicians do not understand it.
There are distortions of these ideas that are going to sound more appealing to them than everybody suddenly falls over dead, which is a thing that I think actually happens. "Everybody falls over dead" just, like, doesn't inspire the monkey political parts of our brains somehow. It's not like, oh no, what if terrorists get the AI first? It's like, it doesn't matter who gets it first. Everybody falls over dead. And yeah, so you're describing a world
coordinating on something that is relatively hard to coordinate. Maybe so, you know, like,
could we if we tried starting today, you know,
like prevent anyone from getting a billion pounds of laundry detergent in one place worldwide,
control the manufacturing of laundry detergent, only have it manufactured in particular places,
not concentrate lots of it together, enforce it on every country.
You know, if it was legible, if it was clear that a billion pounds of laundry detergent in one place would end the world,
if you could calculate that, if all the scientists calculated
and arrived at the same answer and told the politicians, then maybe, maybe humanity would survive,
even though smaller amounts of laundry detergent spit out gold.
The threshold can't be calculated.
I don't know how you'd convince the politicians.
We definitely don't seem to have had much luck convincing those CEOs whose job depends on them
not caring to care.
Caring is easy to fake.
It's easy to, you know, like hire a bunch of people to be your AI safety team and redefine
AI safety as having the AI not say naughty words.
Or, you know, I'm speaking somewhat metaphorically here for reasons.
But the basic problem that we have is like trying to build a secure OS before we run up
against a really smart attacker.
And there's all kinds of like fake security.
It's got a password file.
This system is secure.
it only lets you in if you type a password.
And if you never go up against your really smart attacker,
if you never go far out of distribution
against a powerful optimization process looking for holes,
then how does a bureaucracy come to know
that what they're doing is not the level of computer security
that they need?
The way you're supposed to find this out,
the way that the scientific fields historically find this out,
the way that fields of computer science historically find this out,
The way that crypto found this out back in the early days is by having the disaster happen.
And we're not even that good at learning from relatively minor disasters.
You know, like COVID swept the world.
Did the FDA or the CDC learn anything about don't tell hospitals that they're not allowed to use their own test to detect the coming plague?
Are we installing UVC lights in public spaces?
or in ventilation systems to prevent the next respiratory pandemic, we lost a million people.
And we sure did not learn very much as far as I can tell for next time.
We could have an AI disaster that kills 100,000 people.
How do you even do that?
Robotic cars crashing into each other?
Have a bunch of robotic cars crashing into each other.
It's not going to look like that was the fault of artificial general intelligence,
because you're not going to put AGIs in charge of cars.
They're going to pass a bunch of regulations that are going to affect the actual AGI disaster not at all.
What does the winning world even look like here?
How in real life did we get from where we are now to this worldwide ban, including against North Korea, and, you know, like, some rogue nation whose dictator doesn't believe in all this nonsense and just wants the gold that these AIs spit out? How did we get there from here? How do we get to the point where the United States and China sign a treaty whereby they would both use nuclear weapons against Russia if Russia built a GPU cluster that was too large? How did we get there from here?
Correct me if I'm wrong, but this seems to be kind of just, like, a topic of despair, talking to you now and hearing your thought process, that there is no known solution and the trajectory is not great. Do you think all hope is lost here?
I'll keep on fighting until the end, which I wouldn't do if I had literally zero hope. I could
still be wrong about something in a way that makes this problem somehow much easier than it
currently looks. I think that's how you go down fighting with dignity. Go down fighting with dignity.
That's the stage you think we're at. I want to just double click on what you were just saying.
So part of the case that you're making is humanity won't even see this coming. So it's not like
a coordination problem like global warming where, you know, every couple of decades we see
the world go up by a couple of degrees. Things get hotter and we start to see these effects over time. The characteristics or the advent of an AGI in your mind is going to happen incredibly quickly, and in such a way that we won't even see the disaster until it's imminent, until it's upon us.
I mean, if you want some kind of, like, formal phrasing, then I think that superintelligence will kill everyone before non-superintelligent AIs have killed one million people. I don't know if that's the phrasing you're looking for
there. I think that's a fairly precise definition and why? What goes into that line of thought?
I think that the current systems are actually very weak. I mean, I don't know. Maybe I could use the
analogy of Go, where you had systems that were finally competitive with the pros, who have, like, their set of ranks in Go. And then a year later, they were challenging the world champion.
and winning.
And then another year,
they threw out all the complexities
and the training from human databases of Go games
and built a new system, AlphaGo Zero,
that trained itself from scratch.
No looking at the human playbooks.
No special purpose code,
just a general purpose game player
being specialized to Go, more or less.
In three days.
there's a quote from Gwern about this,
which I forget exactly,
but it was something like,
we know how long AlphaGo Zero,
or AlphaZero, two different systems,
was equivalent to a human Go player,
and it was, like, 30 minutes
on such-and-such floor of this DeepMind building.
And maybe the first system doesn't improve that quickly,
and they build another system that does.
And all of that with AlphaGo over the course of years going from like it takes a long time to train to it trains very quickly and without looking at the human playbook.
Like that's not with an artificial intelligence system that improves itself or even that sort of like gets smarter as you run it, the way that human beings, not just as you evolve them, but as you run them over the course of their own lifetimes, improve.
So if the first system doesn't improve fast enough to kill everyone very quickly, they will build one that's meant to spit out more gold than that.
And there could be weird things that happened before the end.
I did not see ChatGPT coming.
I did not see stable diffusion coming.
I did not expect that we would have AIs smoking humans in rap battles before the end of the world.
While they were clearly much dumber than us.
Kind of a nice send-off, I guess, in some ways.
So you said that your hope is not zero, and you are planning to fight to the end.
What does that look like for you?
I know you're working at MIRI, which is the Machine Intelligence Research Institute.
This is a nonprofit that I believe you've set up to work on these AI alignment and safety issues.
What are you doing there?
What are you spending your time on?
What do you think, like, how do we actually fight until the end?
If you do think that an end is coming, how do we try to resist?
I'm on something of a sabbatical right now, which is why I have time for podcasts.
It's a sabbatical from, you know, having been doing this 20 years.
It became clear we were all going to die.
I felt kind of burned out, taking some time to rest at the moment.
When I dive back into the pool, I don't know, maybe I will go off to Conjecture or Anthropic or one of the smaller concerns like Redwood Research,
Redwood Research being the only ones I really trust at this point, but they're tiny,
and try to figure out if I can see anything clever to do with the giant inscrutable matrices
of floating point numbers. Maybe I just write, continue to try to explain in advance to people
why this problem is hard, instead of as easy and cheerful as the current people who think they're pessimists think it will be. I might not be working all that hard compared to how I used
to work. I'm older than I was. My body is not in the greatest of health these days. Going down
fighting doesn't necessarily imply that I have the stamina to fight all that hard. I wish I had
prettier things to say to you here, but I do not.
No, this is... you know, we intended to save probably the last part of this episode to talk about crypto, the metaverse, AI, and how this all intersects. But I've got to say, at this point in the episode, it all kind of feels pointless to go down that track. We were going to ask questions like, well,
in crypto, should we be worried about building sort of a property rights system, an economic
system, a programmable money system for the AIs to sort of use against us later on. But it sounds
like the easy answer from you to those questions would be, yeah, absolutely. And by the way,
none of that matters regardless. You could do whatever you'd like with crypto. This is
going to be the inevitable outcome no matter what. Let me ask you, what would you say to somebody
listening who maybe has been sobered up by this conversation is a version of you in your 20s
does have the stamina to continue this battle and to actually fight on behalf of humanity
against this existential threat? Where would you advise them to spend their time? Is this
a technical issue? Is this a social issue? Is it a combination of
both? Should they educate? Should they spend time in the lab? What should a person listening
to this episode do with these types of dire straits? I don't have really good answers. It depends
on what your talents are. If you've got the very deep version of the security mindset,
the part where you don't just put a password on your system so that nobody can walk in and
directly misuse it, but the kind where you don't just encrypt the password file, even though nobody's supposed to have access to the password file in the first place without already being an authorized user, but the part where you hash the passwords and salt the hashes.
If you're the kind of person who can think of that from scratch, maybe try your hand at alignment.
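The hash-and-salt idea mentioned here, sketched concretely (an illustrative toy, nothing specific to MIRI or the episode; a real system would use a tuned slow KDF such as scrypt or argon2):

```python
import hashlib
import hmac
import os

# Sketch of "hash the passwords and salt the hashes": even an attacker
# who steals the stored (salt, digest) pairs sees no plaintext passwords,
# and identical passwords don't produce identical digests. The iteration
# count here is illustrative, not a production recommendation.

def store_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = store_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("wrong guess", salt, digest))                   # False
```

The point of the example is the mindset: defending against an attacker who already got past the outer wall, not just putting a password prompt on the front door.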
If you
can think of an alternative to giant
inscrutable matrices,
then, you know,
don't tell the world about that.
I'm not quite sure.
where you go from there, but, you know, maybe you work with Redwood Research or something.
A whole lot of this problem is that even if you do build an AI that's limited in some way,
you know, somebody else steals it, copies it, runs it themselves, and takes the bounds off the for loops, and the world ends. So there's that. Or, you think you can do something clever
with the giant inscrutable matrices. You're probably wrong. If you have the talent to
try to figure out why you're wrong in advance of being hit over the head with it,
and not in a way where you just like make random far-fetched stuff up is the reason why it won't work,
but where you can actually like keep looking for the reason why it won't work.
We have people in crypto who are good at breaking things,
and they're the reason why anything is not on fire.
And some of them might go into breaking AI systems instead,
because that's where you learn anything.
You know, any fool can build a crypto system that they think will work. Breaking existing crypto systems, cryptographic systems, is how we learn
who the real experts are. So maybe the people finding weird stuff to do with AIs. Maybe those people
will come up with some truth about these systems that makes them easier to align than I suspect.
The saner outfits do have uses for money. They don't really have scalable uses for money, but they do have some use for money.
Like if you gave
MIRI a billion dollars, I would not know
how to, well,
at a billion dollars, I might like
try to bribe people to
move out of AI development
that gets broadcast to the whole world
and move to the equivalent of an island somewhere,
not even to make any kind of critical discovery,
but, you know, just to remove them from the system
if I had a billion dollars.
If I just have another $50 million,
I'm not quite sure,
what to do with that.
But, you know, if you donate that to Miri,
then you at least have the assurance
that we will not randomly spray
money on looking like we're
doing stuff and will
reserve it as we are doing with the last
giant crypto donation somebody gave us
until we can figure out something to do with it
that is actually helpful.
And MIRI has that property.
I would say probably Redwood Research
has that property.
Yeah.
I realize I'm sounding sort of disorganized
here, and that's because I don't really have a good organized answer to, you know, how in general
somebody goes down fighting with dignity. I know a lot of people in crypto. They are not as in touch
with artificial intelligence, obviously, as you are, and the AI safety issues and the existential
threat that you've presented in this episode. They do care a lot and see coordination problems
throughout society as an issue. Many have also generated wealth from crypto and care very much
about humanity not ending. What sort of things has MIRI, that is, the organization I was talking about earlier, M-I-R-I, done with funds that you've received from
crypto donors and elsewhere? And what sort of things might an organization like that pursue
to try to stave this off?
I mean, I think mostly we've pursued a lot of lines of research that haven't really panned out,
which is a respectable thing to do.
We did not know in advance that those lines of research would fail to pan out.
If you're doing research that you know will work,
you're probably not really doing any research.
You're just, like, doing a pretense of research that you can show off to a funding agency.
We tried to be real.
We did things where we didn't know the answer in advance.
They didn't work, but that was where the hope lay, I think.
But, you know, having a research organization that keeps it real that way, that's not an easy thing to do.
And if you don't have this very deep form of the security mindset, you'll end up producing fake research and doing more harm than good.
So I would not tell all the successful cryptocurrency people to run off and start their own research outfits.
Redwood Research, I'm not sure if they can scale using more money, but, you know, you can give people more money and wait for them to figure out how to scale it later.
if they're the kind who won't just run off and spend it, which is what MIRI aspires to be.
And you don't think the education path is a useful path, just educating the world.
I mean, I would give myself and MIRI credit for why the world isn't just walking blindly into the whirling razor blades here.
But it's not clear to me how far education scales apart from that.
You can get more people aware that we're walking directly into the whirling razor blades
because even if only 10% of the people can get it,
that can still be a bunch of people.
But then what do they do?
I don't know.
Maybe they'll be able to do something later.
Can you get all the people?
Can you get all the politicians?
Can you get the people whose job incentives are against them,
admitting this to be a problem?
I have various friends who report like,
yes, if you talk to researchers at OpenAI in private,
they are very worried and say that they like,
cannot be that worried in public.
This is all a giant Moloch trap
is sort of what you're telling us.
I feel like this is the part of the conversation
where we've gotten to the end
and the doctor has just said that we have
some sort of terminal illness.
And at the end of the conversation,
I think the patient, David and I
have to ask the question,
okay, Doc, how long do we have?
Like, seriously, what are we talking about here
if you turn out to be correct?
Are we talking about years?
Are we talking about decades?
Like, what?
What are you prepared for?
What's your idea here?
Yeah.
How the hell would I know?
Enrico Fermi was saying that, like, fission chain reactions were 50 years off if they could ever be done at all, two years before he built the first nuclear pile.
The Wright brothers were saying heavier than air flight was 50 years off shortly before
they built the first Wright Flyer.
How on earth would I know?
It could be three years.
It could be 15 years.
We could get that AI winter I was hoping for, and it could be 16 years. I'm not really seeing 50
without some kind of giant civilizational catastrophe. And to be clear, whatever civilization arises
after that could probably, I'm guessing, end up stuck in just the same trap we are.
I think the other thing that the patient might do at the end of a conversation like this is
also consult with other doctors. I'm kind of curious if, you know, who we should talk to on this
quest, who are some people that if people in crypto want to hear more about this or learn more about
this, or even we ourselves as podcasters and educators want to pursue this topic, who are the
other individuals in the AI alignment and safety space you might recommend for us to have a
conversation with? Well, the person who actually holds a coherent technical view, who disagrees with me, is named Paul Christiano.
He does not write Harry Potter fanfiction, and I expect he'd have a harder time explaining himself in concrete terms.
But that is like the main technical voice of opposition.
If you talk to other people in the effective altruism or AI alignment communities who disagree
with this view, they are probably to some extent repeating back their misunderstandings
of Paul Christiano's views.
You could try Ajeya Cotra,
who's worked pretty directly with Paul Christiano,
and I think sometimes aspires to explain these things
that Paul is not the best at explaining.
I'll throw out Kelsey Piper as somebody who would be good at explaining,
like would not claim to be like a technical person on these issues,
but is like good at explaining the part that she does know.
And who else disagrees with me?
you know, I'm sure Robin Hanson would be happy to come on. Well, I'm not sure he'd be happy to come on this podcast, but, you know, Robin Hanson disagrees with me. And I kind of feel like, in the famous argument we had back in the late 2000s, early 2010s, about how this would all play out, this was the Yudkowsky position, this was the Hanson position, and then reality was over here, like, well, to the Yudkowsky side of the Yudkowsky position in the Yudkowsky-Hanson debate. But Robin Hanson does not feel that way,
and would probably be happy to expound on that at length.
I don't know.
Yeah, it's not hard to find opposing viewpoints.
The ones that'll stand up to a few solid minutes of cross-examination
from somebody who knows which parts to cross-examine.
That's the hard part.
You know, I've read a lot of your writings and listened to you on previous podcasts.
One was in 2018 on the Sam Harris podcast.
This conversation feels to me like the most dire you've ever seemed on this topic.
And maybe that's not true;
maybe you've sort of always been this way,
but it seems like your hope
that we solve this issue has declined.
Yeah, I'm wondering if you feel like that's the case,
and if you could sort of summarize your take on all of this
as we close out this episode and offer, I guess,
any thoughts, concluding thoughts here?
Well, there was a conference one time
on what are we going to do?
about looming risk of AI disaster.
And Elon Musk attended that conference.
And I was like, maybe this is it.
Maybe, you know, maybe this is when the powerful people notice.
And it's, you know, like one of the relatively more technical powerful people who could be noticing this.
And maybe this is where humanity finally turns and starts, you know, not quite fighting back because there isn't an external enemy here.
but conducting itself with, I don't know, acting like it cares, maybe.
And what came out of that conference?
Well, OpenAI, which was basically,
fairly nearly, the worst possible way of doing anything.
Like, this is not a problem of, oh, no, what if secret elites get AI?
It's that nobody knows how to build a thing.
If we do have an alignment technique,
it's going to involve running the AI with a bunch of, like, careful bounds on it, where you don't just, like, throw all the cognitive power you have at something. You have limits on the for loops. And whatever it is that could possibly save the world, like, go out and turn all the GPUs and the server clusters into Rubik's cubes or something else that prevents the world from ending when somebody else builds another AI a few weeks later. You know, anything that could do that is an artifact where somebody else could take it and take the bounds off the for loops and use it to destroy the world.
So, like, let's open up everything.
Let's accelerate everything.
It was, like, GPT-3's version, though GPT-3 didn't exist back then; it was like ChatGPT's blind version of throwing ideals at a place where they were exactly the wrong ideals to solve the problem.
And the problem is that demon summoning is easy, and angel summoning is much harder.
Open sourcing all the demon summoning circles is not the correct solution.
And I'm using Elon Musk's own terminology here. He talked about AI as summoning the demon, which is, you know, not accurate, but... and then his solution was to put a demon-summoning circle in every household. And why? Because his friends were calling him a Luddite once he'd expressed any concern about AI at all. So he picked a road that sounded like openness and like accelerating technology, so his friends would stop calling him a Luddite. It was very much the worst, you know, like, maybe
not the literal actual worst possible strategy, but so very far pessimal. And that was it. That was like,
that was me in 2015 going like, oh, so this is what humanity will elect to do.
We will not rise above.
We will not have more grace, not even here at the very end.
So that is, you know, that is when I did my crying late at night.
And then pick myself up and fought and fought and fought and fought until I'd run out all the
avenues that I seem to have the capabilities to do.
There's, like, more things, but they require scaling my efforts in a way that I've never
been able to make them scale.
And all that's pretty far-fetched at this point anyways.
So, you know, what's changed over the years?
Well, first of all, I ran out some remaining avenues of hope.
And second, things got to be such a disaster, such a visible disaster.
The AI's got powerful enough.
And it became clear enough that, you know, we do not know how to align these things,
that I could actually say what I'd been thinking for a while and not just have people go completely like,
what are you saying about all this?
You know, now the stuff that was obvious back in 2015 is, you know,
starting to become visible in the distance to others and not just, like, completely invisible.
That's what changed over time.
What do you hope people hear out of this episode and out of your comments?
The Eliezer of 2023, who is sort of running on the last fumes of hope.
Yeah, what do you want people to get out of this episode?
What are you planning to do?
I don't have concrete hopes here.
You know, when everything is in ruins, you might as well speak the truth, right?
Maybe somebody hears it. Somebody figures out something I didn't think of. I mostly expect that this does more harm than good in the modal universe, because a bunch of people are like, oh, I have this brilliant, clever idea, which is, you know, like, something that somebody, you know, I was arguing against in 2003 or whatever. But, you know, maybe somebody out there with the proper level of pessimism hears and thinks of something I didn't think of. I suspect that if there's hope
at all, it comes from a technical solution, because the difference between technical problems and political problems is that at least the technical problems have solutions in principle. At least the technical problems are solvable. We're not on course to solve this one, but I don't really see... I think anybody who's hoping for a political solution has frankly not understood the technical problem. They do not understand what it looks like to try to solve the political problem to such a degree that the world is not controlled by AI, because they don't understand how easy it is to destroy the world with AI, given that the clock keeps ticking forward.
They're thinking that they just have to, like, stop some bad actor, and that's why they think there's a political solution.
But, yeah, I don't have concrete hopes.
I didn't come on this episode out of any concrete hope.
I have no takeaways except, like, don't make this thing worse.
Don't, like, go off and accelerate AI more.
If you have a brilliant solution to alignment, don't be like, oh, yes, I have solved the whole problem.
We just use the following clever trick.
You know, don't make things worse. It's very much a mess when you're pointing people at the field at all, but I have no winning strategy.
Might as well go on this podcast as an experiment and say what I think and see what happens.
And probably no good effort comes of it. But, you know, you might as well go down fighting, right?
If there's a world that survives, maybe it's a world that survived because of a bright idea somebody had after listening to this podcast, that was brighter, to be clear, than the usual run of bright ideas that don't work.
I want to thank you for coming on and talking to us today.
I don't know, by the way, if you've seen that movie that David was referencing earlier,
the movie Don't Look Up,
but I sort of feel like that news anchor who's talking to like the scientist,
is it Leonardo DiCaprio, David?
Yeah, I think you have.
And the scientist is talking about a dire threat to the world.
And the news anchor just really just doesn't know what to do.
I'm almost at a loss for words at this point.
I've had nothing for a while.
But one thing I can say is I appreciate your honesty.
I appreciate that you've given this a lot of time and given this a lot of thought.
Anyone who has heard you speak or read anything you've written knows that you care deeply about
this issue and have given it a tremendous amount of your life force in trying to educate
people about it.
And thanks for taking the time to do that again today.
I guess I'll just let the audience digest this episode in the best way they know how.
But I want to reflect, on behalf of everybody in crypto and everybody listening to Bankless, their thanks for
you coming on and explaining. Thanks for having me. We'll see what comes of it.
Action items for you, Bankless Nation. We always end with some action items, not really sure
where to refer folks to today. But one thing I know we can refer folks to is MIRI, which is the
Machine Intelligence Research Institute that Eliezer has been talking about through this episode.
That is at intelligence.org, I believe. And, you know, some people in crypto have donated funds to this
in the past, Vitalik Buterin is one of them. You can take a look at what they're doing as well.
That might be an action item for the end of this episode. Got to end with risks and disclaimers.
Man, this seems very trite. But our legal experts have asked us to say these at the end of every
episode. Crypto is risky. You could lose everything. Apparently not as risky as AI, though.
You put in, yeah. But we're headed west. This is the frontier. It's not for everyone. But we're glad you're with us
on the bankless journey.
Thanks a lot. And we are grateful for the crypto community support. Like it was possible to end with even
less grace than this. Wow. And you made a difference. We appreciate you. You really made a difference.
Thank you.
