Lex Fridman Podcast - #83 – Nick Bostrom: Simulation and Superintelligence
Episode Date: March 26, 2020

Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

Support this podcast by signing up with these sponsors:
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Nick's website: https://nickbostrom.com/
Future of Humanity Institute:
- https://twitter.com/fhioxford
- https://www.fhi.ox.ac.uk/
Books:
- Superintelligence: https://amzn.to/2JckX83
Wikipedia:
- https://en.wikipedia.org/wiki/Simulation_hypothesis
- https://en.wikipedia.org/wiki/Principle_of_indifference
- https://en.wikipedia.org/wiki/Doomsday_argument
- https://en.wikipedia.org/wiki/Global_catastrophic_risk

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
02:48 - Simulation hypothesis and simulation argument
12:17 - Technologically mature civilizations
15:30 - Case 1: if something kills all possible civilizations
19:08 - Case 2: if we lose interest in creating simulations
22:03 - Consciousness
26:27 - Immersive worlds
28:50 - Experience machine
41:10 - Intelligence and consciousness
48:58 - Weighing probabilities of the simulation argument
1:01:43 - Elaborating on Joe Rogan conversation
1:05:53 - Doomsday argument and anthropic reasoning
1:23:02 - Elon Musk
1:25:26 - What's outside the simulation?
1:29:52 - Superintelligence
1:47:27 - AGI utopia
1:52:41 - Meaning of life
Transcript
The following is a conversation with Nick Bostrom, a philosopher at the University of Oxford
and the director of the Future of Humanity Institute.
He has worked on fascinating and important ideas in existential risk, simulation hypothesis,
human enhancement ethics, and the risks of superintelligent AI systems, including in his book,
Superintelligence.
I can see talking to Nick multiple times on this podcast,
many hours each time,
because he has done some incredible work
in artificial intelligence,
in technology, space, science,
and really philosophy in general.
But we have to start somewhere.
This conversation was recorded before the outbreak
of the coronavirus pandemic,
which both Nick and I, I'm sure, will
have a lot to say about next time we speak.
And perhaps that is for the best, because the deepest lessons can be learned only in retrospect
when the storm has passed.
I do recommend you read many of his papers on the topic of existential risk, including
the Technical Report titled Global Catastrophic Risks Survey that he co-authored
with Anders Sandberg. For everyone feeling the medical, psychological, and financial burden of
this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing.
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it
on Patreon, or simply connect with me on Twitter at lexfridman, spelled F-R-I-D-M-A-N.
As usual, I'll do one or two minutes of ads now and never any ads in the middle that
can break the flow of the conversation.
I hope that works for you and doesn't hurt the listening experience.
This show is presented by Cash App, the number one finance app in the App Store. When you
get it, use code LexPodcast. Cash App lets you send money to friends, buy Bitcoin, and invest
in the stock market with as little as $1. Since Cash App does fractional share trading,
let me mention that the order execution algorithm
that works behind the scenes to create the abstraction of fractional orders is an algorithmic
marvel. So big props to the Cash App engineers for solving a hard problem that in the end
provides an easy interface that takes a step up to the next layer of abstraction over the stock
market, making trading more accessible for new investors and diversification much
easier. So again, if you get Cash App from the App Store or Google Play and use the code LexPodcast,
you get $10, and Cash App will also donate $10 to FIRST, an organization that
is helping to advance robotics and STEM education for young people around the world. And
now here's my conversation with Nick
Bostrom.
At the risk of asking the Beatles to play Yesterday or the Rolling Stones to play Satisfaction,
let me ask you the basics.
What is the simulation hypothesis?
That we are living in a computer simulation.
What is a computer simulation?
How are we supposed to even think about that?
Well, so the hypothesis is meant to be understood in a literal sense,
not that we can kind of metaphorically view the universe as an information-processing physical system,
but that there is some advanced civilization who built a lot of computers, and that what we experience is an effect of what's going on inside one of those computers, so that the world around us, our own brains,
everything we see and perceive and think and feel, would exist because this computer is running certain programs.
Do you think of this computer as something similar to the computers of today, these deterministic,
sort of Turing machine type things?
Is that what we're supposed to imagine, or are we supposed to think of something more like
a quantum mechanical system, something much bigger, something much more complicated,
something much more mysterious
from our current perspective?
So the ones we have today would do,
only bigger, certainly.
You'd need more memory and more processing power.
I don't think anything else would be required.
Now, it might well be that they do have, in addition,
maybe they have quantum computers and other things
that would give them
even more oomph. That seems kind of plausible, but I don't think it's a necessary assumption in order
to get to the conclusion that a technologically mature civilization would be able to create
these kinds of computer simulations with conscious beings inside them. So do you think the simulation hypothesis is an idea that's most useful in philosophy,
computer science, physics? Sort of, where do you see it having a valuable kind of starting
point, in terms of the thought experiment of it?
Is it useful? I guess it's more informative and interesting and maybe important, but it's not designed
to be useful for something else.
Okay, interesting.
Sure, but is it philosophically interesting, or are there some kind of implications for computer
science and physics?
I think not so much for computer science or physics per se.
Certainly it would be of interest in philosophy,
I think also to say cosmology or physics
in as much as you are interested in the fundamental building
blocks of the world and the rules that govern it.
If we are in a simulation, there
is then the possibility that, say, the physics at the level
of the computer running the simulation
could be different from the physics governing phenomena
in the simulation.
So I think it might be interesting from the point of view
of religion, or just for kind of trying
to figure out what the heck is going on.
So we've mentioned the
simulation hypothesis so far. There is also the simulation argument, between which I
tend to make a distinction. So, simulation hypothesis: we are living in a computer simulation.
Simulation argument, this argument that
tries to show that one of three propositions is true,
one of which is the simulation hypothesis, but there are two alternatives in the original
simulation argument, which we can get to. Yeah, let's go there. By the way, these are confusing terms,
because people will, I think, probably naturally think simulation argument equals simulation hypothesis.
It's just terminology-wise. But let's go there.
So simulation hypothesis means that we
are living in a simulation,
the hypothesis that we're living in a simulation.
The simulation argument has these three
possibilities that cover all possibilities.
So what are they?
Yeah.
So it's like a disjunction.
It says at least one of these three is true.
Although it doesn't on its own tell us which one.
So the first one is that almost all civilizations that are at our current stage of technological
development go extinct before they reach technological maturity.
So there is some great filter that makes it so that basically none of the civilizations
throughout, you know, maybe vast cosmos will ever get to realize the full potential of
technological development.
And this could be, theoretically speaking, this could be because most civilizations kill themselves too eagerly
or destroy themselves too eagerly, or it might be super difficult to build a simulation.
So the span of time...
Theoretically, it could be both.
Now, I think it looks like we would technologically be able to get there in a time span that is short compared to, say, the lifetime of planets and other sort of
astronomical processes.
So your intuition is that to build the simulation is not that difficult?
Well, so this is
an interesting concept of technological maturity. It's kind of an interesting concept to have for other
purposes as well. We can see, even based on our current limited understanding,
what some lower bound would be on the capabilities
that you could realize by just developing
technologies that you already see are possible.
So for example, one of my research fellows here, Eric Drexler,
back in the 80s, studied molecular manufacturing.
That is, you could analyze using theoretical tools
and computer modeling, the performance of various
molecularly precise structures that we didn't then,
and still don't today, have the ability to actually fabricate.
But you could say that, well, if we could put these atoms
together in this way, then the system would be stable, and it would
rotate at this speed and have these computational characteristics.
And he also outlined some pathways that would enable us to get to this kind of molecular
manufacturing in the fullness of time. And there are other studies we've done.
You could look at the speed at which say,
it would be possible to colonize the galaxy
if you had mature technology.
We have an upper limit, which is the speed of light.
We have sort of a lower current limit,
which is how fast current rockets go.
We know we can go faster than that
by just making them bigger and have more fuel and stuff.
And you can then start to describe the technological
affordances that would exist once a civilization has had
enough time to develop, at least those technologies
we already know are possible.
Then maybe they would discover other new physical phenomena
as well that we haven't realized that would enable them
to do even more, but at least there is this kind of basic set of capabilities.
Can you just linger on that?
How do we jump from molecular manufacturing to deep space exploration to mature technology?
What's the connection?
Well, so these would be two examples of technological capability sets
that we can have a high degree of confidence are physically possible in our universe and that
a civilization that was allowed to continue to develop its science and technology would eventually
attain. You can intuit, like, we can kind of see the set of breakthroughs
that are likely to happen. So you can see, like, what did you call it,
the technological set?
With computers, maybe it's easier.
So one is we could just imagine bigger computers using exactly
the same parts that we have.
So you can kind of scale things that way, right?
But you could also make processors a bit faster
if you had this molecular nanotechnology
that Drexler described.
He characterized a kind of crude computer built
with these parts that would perform, you know,
at a million times the human brain, while being significantly
smaller, the size of a sugar cube.
And he made no claim that that's the optimal computing structure.
Like, for all we know, we could build a faster computer
that would be more efficient, but at least you could do that
If you had the ability to do things that were atomically precise,
I mean, so you can then combine these two.
You could have this kind of nanomolecular ability
to build things atom by atom, and then, say,
at the spatial scale that would be attainable
through space-colonizing technology,
You could then start, for example,
to characterize a lower bound on the amount of computing
power that a technologically mature civilization would have
if it could grab resources, you know,
planets and so forth and then use this molecular nanotechnology
to optimize them for computing.
You get a very, very high lower bound on the amount of compute.
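For a rough sense of the magnitudes behind that claim, here is a back-of-the-envelope sketch. The figures are the order-of-magnitude estimates from Bostrom's 2003 paper "Are You Living in a Computer Simulation?" (roughly 10^42 operations per second for a single planetary-mass computer, and roughly 10^33 to 10^36 operations to simulate all of human mental history); they are illustrative assumptions, not anything asserted in this conversation.

```python
# Back-of-the-envelope arithmetic, using the rough order-of-magnitude figures
# from Bostrom's 2003 paper (illustrative assumptions, not exact values).

OPS_PER_SEC_PLANETARY_COMPUTER = 1e42    # one planetary-mass computer (lower bound)
OPS_FOR_ALL_HUMAN_MENTAL_HISTORY = 1e36  # pessimistic (high) end of the estimate

seconds_per_ancestor_simulation = (
    OPS_FOR_ALL_HUMAN_MENTAL_HISTORY / OPS_PER_SEC_PLANETARY_COMPUTER
)
print(f"{seconds_per_ancestor_simulation:.0e} seconds")  # 1e-06: about a microsecond

# Even on the pessimistic estimate, one such computer could run an entire
# ancestor simulation in about a microsecond of its time, which is why the
# lower bound on available compute is described as "very, very high."
```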
So sorry, just to define some terms.
So a technologically mature civilization is one that took that piece of technology to its
lower bound.
What is a technologically mature civilization?
Well, okay, so that means it's a stronger concept
than we really need for the simulation hypothesis.
I just think it's interesting in its own right.
So it would be the idea that there is
some stage of technological development
where you've basically maxed out,
where you've developed all those general-purpose, widely
useful technologies that could be developed,
or at least kind of come very close to that,
you know, 99.9% there or something.
So that's an independent question.
You can think either that there is such a ceiling, or you might think the technology tree just goes on forever.
What is your sense?
I would guess that there is a maximum that you would start to asymptote towards.
So new things won't keep springing up.
New ceilings.
In terms of basic technological capabilities,
I think that there is like a finite set of those
that can exist in this universe.
Moreover, I mean, I wouldn't be that surprised
if we actually reached close to that level fairly shortly
after we have, say, machine superintelligence.
So I don't think it would take millions of years for a human-originating civilization
to begin to do this.
It's like more likely to happen on historical timescales.
But that's an independent speculation
from the simulation argument.
I mean, for the purpose of the simulation argument,
it doesn't really matter whether it goes indefinitely
far up or whether there is a ceiling,
as long as we know we could at least get to a certain level.
And it also doesn't matter whether that's gonna happen
in a hundred years or 5,000 years or 50 million years.
Like, the timescales really don't make any difference
for this.
Can you linger on that a little bit?
Like there's a big difference between 100 years
and 10 million years.
Yeah.
So it doesn't really matter,
like you just said, if we jump scales
to beyond historical scales?
Let's describe that.
So for the simulation argument, sort of, doesn't it matter that, if it takes 10 million
years, it gives us a lot more opportunity to destroy civilization in the meantime?
Yeah, well, so it would shift around the probabilities between these three alternatives.
That is, if we are very, very far away from being able to create these simulations,
if it's, like, say, billions of years into the future, then it's more likely that we will fail
ever to get there. There's more time for us to, you know, go extinct along the way.
And so it's similarly for other civilizations.
So it is important to think about how hard it is to build a simulation,
in terms of figuring out which of the disjuncts is true. But the simulation
argument itself is agnostic as to which of these three alternatives is true.
Yeah, okay. You don't have to, like the simulation argument would be true,
whether or not we thought this could be done in 500 years or it would take 500 million years.
No, for sure. The simulation argument stands, I mean, I'm sure there might be some people who oppose it, but it doesn't matter.
I mean, it's very nice that those three cases cover it. But the fun part is, at least not saying what the probabilities are,
but kind of thinking about, kind of intuitively reasoning about, what's more likely, what are
the kind of things that would make some of the alternatives
less or more likely. But let's actually, I don't think
we went through them. So number one is we destroy ourselves
before we ever create the simulation, right? So that's kind of sad,
but we have to think not just what might destroy us.
I mean, so that could be whatever disasters
or meteorites slamming the Earth a few years from now
that could destroy us, right?
But you'd have to postulate, in order for this first
disjunct to be true, that almost all civilizations
throughout the cosmos also failed to reach technological
maturity.
And the underlying assumption there is that there is likely a very large number of other
intelligent civilizations.
Well, if there are, yeah, then they would virtually all have to succumb in the same way.
I mean, then that leads off to another...
I guess there are a lot of little digressions
that are interesting.
Let's go there, let's go there.
Yeah, keep dragging me back.
Well, there are these, there is a set of basic questions
that always come up in conversations
with interesting people, like the Fermi paradox.
Like, you can almost define
whether a person is interesting by whether at some point
the question of the Fermi paradox comes up. Well, for what it's worth, it looks to me that the
universe is very big, meaning in fact, according to the most popular current cosmological theories,
infinitely big. And so then it would follow pretty trivially
that it would contain a lot of other civilizations.
In fact, an infinite domain.
If you have some local stochasticity,
and an infinite domain, like infinitely
many lumps of matter, one next to another,
with kind of random stuff in each one,
then you're going to get all possible outcomes
with probability one, infinitely repeated.
So then certainly there would be a lot of extraterrestrials out there.
Short of that, if the universe is very big, there might be a finite but large number.
If we were literally the only one, then of course, if we went extinct, then all civilizations
at our current stage would have gone extinct
before becoming technologically mature.
So then it kind of becomes a triviality
that a very high fraction of those went extinct.
But if we think there are many, I mean,
it's interesting because there are certain things
that possibly could kill us, like if you look at existential risks.
And it might be different, like, the best answer to what would be most likely to kill us
might be a different answer than the best answer to the question: if there is something
that kills almost everyone, what would that be?
Because that would have to be some risk factor that was kind of uniform over all possible
civilizations.
So for the sake of this argument, you have to think about not just us,
but, like, every civilization dying out before they create the simulation.
Yeah, or something very close to everybody.
Okay, so what's number two?
Well, number two is the convergence hypothesis,
that maybe some of these civilizations
do make it through to technological maturity,
but out of those who do get there,
they all lose interest in creating these simulations
so they just have the capability of doing it,
but they choose not to.
Not just a few of them decide not to,
but, you know, out of a million,
you know, maybe not even a single one of them would do it.
And I think when you say lose interest,
that sounds like unlikely because it's like they get bored
or whatever, but it could be so many possibilities within that.
I mean, losing interest could be anything from it being exceptionally difficult to do,
to it fundamentally changing the sort of fabric of reality if you do it, to ethical concerns.
all those kinds of things could be exceptionally strong pressures.
Well, certainly, I mean, ethical concerns, yes. But it being too difficult to do, not really. I mean,
in a sense, that's the first assumption: that you get to technological maturity, where you would have the ability,
using only a tiny fraction of your resources,
to create many, many simulations.
So it wouldn't be the case that they would need to spend half of their GDP forever in order to create one simulation, and then have this, like, difficult debate about whether they should, you know, invest half of their GDP for this.
It would more be like, well, if any little fraction
of the civilization feels like doing this at any point
during maybe their millions of years of existence,
then there would be millions of simulations.
But certainly, there could be many conceivable reasons
for why there would be this convergence,
many, many
possibilities for not running ancestor simulations or other computer simulations,
even if you could do so cheaply. By the way, what's an ancestor simulation?
Well, that would be the type of computer simulation that would contain people
like those we think have lived on our planet in the past and like ourselves in terms of the types of experiences they have.
And where those simulated people are conscious. So like not just simulated in the same sense that
a non-player character would be simulated in a current computer game, where it kind of has,
you know, a target and then a very simple mechanism that moves it forwards
or backwards, but something where the simulated being has a brain, let's say, that is simulated
at a sufficient level of granularity that it would have the same subjective experiences as we have.
So where does consciousness fit into this? Do you think simulation, like, are there
different ways to think about how this could be simulated, just like you're talking about
now? Do we have to simulate each brain within the larger simulation? Is it enough to simulate
just the brains, just the minds, and not the simulation, not the big universe itself?
Is there a different way to think about this?
Yeah, I guess there is a kind of premise in the simulation argument rolled in from philosophy
of mind.
That is, that it would be possible to create a conscious mind in a computer, and that what determines whether some system is conscious or not is not
whether it's built from organic
biological neurons, but maybe something like what the structure of the computation is that it implements.
So we can discuss that if we want, but I think,
as a first approximation, my view would be that it would be sufficient, say, if you had a computation
that was identical to the computation in the human brain
down to the level of neurons.
So if you had a simulation with 100 billion neurons
connected in the same way as the human brain,
and you then roll that forward with the same kind
of synaptic weights and so forth, you actually had the same behavior coming out of this as a human with that brain would
have done. Then I think that would be conscious. Now, it's possible you could also generate
consciousness without that detailed a simulation. There, I'm getting more uncertain
exactly how much you could simplify or abstract away.
Can you linger on that? What do you mean? I missed where you're placing consciousness in this.
Well, so if you are a computationalist, you think that what creates consciousness is the
implementation of a computation.
Some property, an emergent property, of the computation itself?
Yeah, it's the idea.
Yeah, you could say that.
But then the question is, what's the class of computations
such that when they are run consciousness emerges?
So if you just have something that adds one plus one,
plus one, plus one, plus one,
like a simple computation, you're thinking,
maybe that's not going to have any consciousness.
If, on the other hand, the computation is one like our human brains are performing, where as part of the computation there is a global workspace, a sophisticated attention mechanism,
there are self-representations of other cognitive processes, and a whole lot of other things, then that possibly would be conscious. And in fact, if it's exactly like ours, I think definitely it would be.
But exactly how much less than the full computation that the human brain is performing would be
required is a little bit, I think, of an open question.
You could ask another interesting question as well, which is, would it be
sufficient to just have, say, the brain, or would you need the environment?
Right, that's a nice way to put that.
In order to generate the same kind of experiences that we have.
And there is a bunch of stuff we don't know.
I mean, if you look at, say, current virtual reality environments, one thing that's clear is that
we don't have to simulate all details of them all the time in order for, say, the human player
to have the perception that there is a full reality there. You can have, say,
procedurally
generated content, where you might only render a scene when it's actually within the view of the
player character.
So similarly, if this environment that we perceive is simulated, it might be that only the
parts that come into our view are rendered at
any given time. And a lot of aspects that never come into view, say the details of this
microphone I'm talking into exactly what each atom is doing at any given point in time
might not be part of the simulation, only a more coarse-grained representation.
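As an aside for the engineering-minded reader, the on-demand rendering Bostrom describes is a familiar pattern from game engines and procedural generation. Below is a minimal Python sketch of the idea: the world keeps only a compact seed for each region, and fine-grained detail is computed lazily when an observer looks, then discarded again. All names here (Region, render_detail, observe) are hypothetical, for illustration only.

```python
import random

class Region:
    """One chunk of the simulated world, kept coarse-grained until observed."""
    def __init__(self, seed: int):
        self.seed = seed    # compact state: enough to regenerate detail on demand
        self.detail = None  # fine-grained state, absent unless someone is looking

    def render_detail(self) -> list:
        # Expensive fine-grained computation, done only while observed.
        rng = random.Random(self.seed)
        self.detail = [rng.random() for _ in range(1000)]  # stand-in for "atoms"
        return self.detail

    def evict_detail(self) -> None:
        # Once out of view, collapse back to the coarse representation;
        # the seed alone reproduces identical detail if observed again.
        self.detail = None

def observe(world: dict, position: tuple) -> list:
    if position not in world:
        world[position] = Region(seed=hash(position))
    region = world[position]
    return region.detail if region.detail is not None else region.render_detail()

world = {}
observe(world, (0, 0))  # only this region ever gets fine-grained state
# Region (5, 7) costs nothing until some observer actually looks at it.
```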
So that, to me, is actually, from an engineering perspective, why the simulation hypothesis is
really interesting to think about: how difficult is it to fake, sort of,
in a virtual reality context?
I don't know if fake is the right word, but to construct a reality that is sufficiently real to us, to be immersive in the way that the physical world is.
I think that's actually probably an answerable question
of psychology, of computer science, of how,
where's the line where it becomes so immersive
that you don't want to leave that world?
Yeah, or that you don't realize, while you're in it, that it is a virtual world.
Yeah, those are actually two different questions. Yours is more sort of the good question about the realism.
But from my perspective, what's interesting is it doesn't have to be real, but...
how can you construct a world that we wouldn't want to leave?
I mean, I think that might be too low a bar. I mean, if you think, say, when people first
had Pong or something like that, I'm sure there were people who wanted to keep playing it
for a long time, because it was fun, and they wanted to be in this little world.
I'm not sure we would say it's immersive. I mean, I guess in some sense it is, but an absorbing activity doesn't even have to be.
But they left that world, though. So, like, I think that bar is deceivingly high. So
you can play Pong or StarCraft or whatever more sophisticated games for hours,
for months. You know, World of Warcraft can be a big addiction, but eventually they escape that.
So you mean when it's absorbing enough
that you would spend your entire,
that you would choose to spend your entire life in there?
And then thereby changing the concept of what reality is,
because your reality,
your reality becomes the game, not because you're
fooled, but because you've made that choice.
Yeah, and maybe people might have different preferences regarding that.
Even if you had a perfect virtual reality, some might still prefer not to spend the rest of their lives there. I mean, in philosophy,
there's this experience machine thought experiment. Have you come across this? So Robert
Nozick had this thought experiment where you imagine some crazy super duper neuroscientists
of the future have created a machine that could give you any experience you want if you step in there. And for the rest of your life, you can kind of
pre-program it in different ways. So your fondest dreams could come true. You could,
whatever you dream, you want to be a great artist, a great lover, like have a wonderful
life, all of these things.
If you step into the experience machine, that will be your experience, constantly happy.
But you would kind of disconnect from the rest of reality, and you would float there in a tank.
And so Nozick thought that most people would choose not to
enter the experience machine.
Many might want to go there for a holiday, but they wouldn't want to check out of existence
permanently.
He thought that was an argument against certain views of value, according to which what we value
is a function of what we experience.
Because in the experience machine, you could have any experience you want, and yet many people would think
that there would not be much value.
So therefore, what we value depends on other things
than what we experience.
So, okay, can you take that argument further?
What about the fact that maybe what we value is
the up and down of life?
So, you could have ups and downs in the experience machine, right?
But what can't you have in the experience machine?
Well, I mean, that then becomes an interesting question to explore.
But, for example, real connection with other people. If the experience machine is a solo
machine, where it's only you, like, that's something you wouldn't have there.
You would have this subjective experience that would be like fake people.
Yeah.
But if you gave somebody flowers, there wouldn't be anybody there who actually got happy. It would just be a little
simulation of somebody smiling, but the simulation would not be the kind of simulation
I'm talking about in the simulation argument, where the simulated creature is conscious.
It would just be a kind of smiley face that would look perfectly real to you.
So we're now drawing a distinction between
appearing to be perfectly real and actually being real.
Yeah.
So that could be one thing.
I mean, like a big impact on history, maybe,
is also something you won't have
if you check into this experience machine.
So some people might actually feel, the life I want to have
for me is one where I have a big positive impact on how history unfolds. So you could kind of
explore these different possible explanations for why you wouldn't want to go into the
experience machine, if that's what you feel.
One interesting observation regarding this
Nozick thought experiment, and the conclusions you want to draw from it, is
how much of it is a status quo effect.
A lot of people might not want to give up their
current reality to plug into this dream machine.
But if they instead were told,
well, what you've experienced up to this point
was a dream; now, do you want to disconnect from this
and enter the real world, when you have no idea maybe what the real world is?
Or maybe you could say, well, you're actually a farmer
in Peru, growing peanuts, and you could
live for the rest of your life in this. Or would you want to continue your dream life as
Lex Fridman, going around the world, making podcasts and doing research?
Lex Friedman going around the world, making podcasts and doing research? So if the status quo was that they were actually in the experience mission, I think a lot of
people might prefer to live the life that they are familiar with rather than sort of bail
out into.
So essentially, the change itself, the leap, whatever.
So it might not be so much the reality itself that we are after, but it's more that we
are maybe involved in certain projects and
relationships and we have a self identity and these things that are also kind of connected with
carrying that forward. And then whether it's inside a tank or outside a tank in Peru, or whether it's
inside a computer or outside a computer, that's kind of less important to what we ultimately care
about.
Yeah, but still, just to linger on it, it is interesting.
I find, maybe people are different, but I find myself quite willing to take the leap to
the farmer in Peru, especially as the virtual reality systems become more realistic.
I find that possibility appealing, and I think
more people would take that leap. But in this thought experiment,
just to make sure we are on the same page: in this case, the farmer in Peru would not
be a virtual reality. That would be the real, your real life,
like before this whole experience machine started. Well, I kind of assumed from that description, you're being very specific, but that kind
of idea just like washes away the concept of what's real.
I mean, I'm still a little hesitant about your kind of distinction between real and illusion,
because you can have an illusion that feels, I mean, that looks real.
I mean, I don't know how you can definitively say something is real or not.
Like, what's a good way to prove that something is real in that context?
Well, so I guess in this case, it's more a stipulation.
In the one case, you're floating in a tank with these wires by the super-duper neuroscientists plugging into your head,
giving you, like, Lex Fridman experiences. And in the other, you're actually tilling the soil in
Peru, growing peanuts, and then those peanuts are being eaten by other people all around the world
who buy the exports. Those are two different
possible situations in the one and the same real world that you could choose to occupy.
But just to be clear, when you're in a vat with wires and the neuroscientists, you can still go farming
in Peru, right?
No, well, if you wanted, you could have the experience of farming in Peru,
but there wouldn't actually be any peanuts grown.
Well, but what makes a peanut? So a peanut could be grown, and you could feed things with that peanut, and why can't all that be done in a simulation? I hope, first of all, that they actually have
peanut farms in Peru. I guess we'll get a lot of comments otherwise from angry listeners:
I was with you up to the point when you started talking about peanuts. You
should know you can't grow peanuts in that climate.
No, I mean, in the simulation, I think there is a sense, the important sense, in which
it would all be real.
Nevertheless, there is a distinction between inside a simulation and outside a simulation,
or in the case of Nozick's thought experiment, whether you're in the vat or outside the vat.
And some of those differences may or may not be important. I mean, that depends on your values and preferences.
So if the experience machine only gives you
the experience of growing peanuts, but you're the only one in the experience machine...
So, within the experience machine, others can plug in?
Well, there are versions of the experience machine. In fact, you might want to have
different thought experiments, different versions of it. So in the
original thought experiment, maybe it's only you, right?
Just you.
And you
think, I wouldn't want to go in there. Well, that tells you something interesting
about what you value and what you care about. Then you could say, well, what if you
add the fact that there would be other people in there and you would interact
with them? Well, it starts to make it more attractive, right? Then you could
add in, well, what if you could also have important long-term effects on human history in the world, and you
could actually do something useful even though you were in there, that makes it maybe even more
attractive, like you could actually have a life that had a purpose and consequences. And so, as you
sort of add more into it, it becomes more similar to the baseline reality that
you were comparing it to.
Yeah, but I just think inside the experience machine, even without taking those steps you
just mentioned, you still have an impact on the long-term history of the creatures that live inside it,
of the quote-unquote fake creatures that live inside
that experience machine. And, like, at a certain point, you know, if there's a person waiting
for you inside that experience machine, maybe your newly found wife, and she dies, she has fears, she has hopes, and she exists in that machine. When you
unplug yourself and plug back in, she's still there, going on about
her life.
Oh, well, in that case, yeah, she starts to have more of an independent existence.
An independent existence.
But it depends, I think, on how she's implemented in the experience machine.
Take one limit case, where all she is
is a static picture on the wall, a photograph.
So you think, well, I can look at her, right?
But that's it, there's nothing there. Then you think,
well, it doesn't really matter much what happens to that,
any more than to a normal photograph.
If you tear it up, it means you can't see it anymore,
but you haven't harmed the person whose picture you tore up.
Good.
But if she's actually implemented, say at a neural level of detail so that she's a fully
realized digital mind with the same behavioral repertoire as you have, then very plausibly
she would be a conscious person
like you are.
And then what you do in this experience machine
would have real consequences
for how this other mind felt.
So you have to specify which of these experience
machines you're talking about.
I think it's not entirely obvious
that it would be possible to have an experience machine that gave you
a normal set of human experiences, which include experiences of interacting with other people,
without that also generating consciousnesses corresponding to those other people.
That is, if you create another entity that you perceive and interact with, that to you
looks entirely
realistic. Not just when you say hello, they say hello back, but you have a
rich interaction, many days of deep conversations. Like, it might be that the only
possible way of implementing that would be one that also, as a side effect,
instantiated this other person in enough detail that you would have a second
consciousness there.
I think that's to some extent an open question.
So you don't think it's possible to fake consciousness and fake intelligence?
Well, it might be.
I mean, I think you can certainly fake, if you have a very limited interaction with somebody,
you can certainly fake that.
That is, if all you have to go on is somebody said hello to you. That's not enough for you to tell whether there was a real person there or a pre-recorded
message or a very superficial simulation that has no consciousness.
Because that's something easy to fake.
We could already fake it now.
You can record a voice recording.
But if you have a richer set of interactions, where you're allowed to ask open-ended questions
and probe from different angles, where you couldn't give canned answers
to all of the possible ways that you could probe it, then it starts to become more plausible
that the only way to realize this thing, in such a way that you would get the right answers
from whichever angle you probed, would be a way of instantiating it where you also instantiated a conscious mind.
Yeah, I'm with you on the intelligence part, but there's something about me that says consciousness
is easier to fake.
Like, I've recently gotten my hands on a lot of Roombas.
Don't ask me why or how, but, and I've made them, this is just a nice robotic mobile platform
for experiments.
And I made them scream and/or moan in pain and so on, just to see how I respond
to them.
And it's just a sort of psychological experiment on myself.
And I think they appear conscious to me pretty quickly.
To me, at least, my brain can be tricked quite easily.
Right.
Whereas if I introspect, it's harder for me
to be tricked that something is intelligent.
So I just have this feeling that inside this experience
machine, just saying that you're conscious
and having certain qualities of the interaction,
like being able to suffer, like being able to hurt,
like being able to wonder about the essence of your own existence. Not actually wondering, I mean, you know, creating the
illusion that you're wondering about it is enough to create the feeling of consciousness,
the illusion of consciousness. And because of that, create a really immersive experience
to where you feel like that is the real world.
So you think there's a big gap between appearing conscious and being conscious?
Or is it that you think it's very easy to be conscious?
I'm not actually sure what it means to be conscious.
All I'm saying is the illusion of consciousness is enough to create a social interaction
that's as good as if the thing was conscious,
meaning I'm making it about myself.
Right, yeah.
I mean, I guess there are a few different things.
One is how good the interaction is, which might,
I mean, if you don't really care about probing hard for whether
the thing is conscious,
maybe it would be a satisfactory interaction,
whether or not you really thought it was conscious.
Now, if you really care about it being conscious, like, inside this experience machine,
how easy would it be to fake it?
And you say it sounds easy,
it's fairly easy. But then the question is,
would that also mean it's very easy to
instantiate consciousness?
Like, maybe it's much more widely spread in the world than we have thought; it doesn't require
a big human brain with a hundred billion neurons.
All you need is some system that exhibits basic intentionality and can respond, and you
already have consciousness.
Like, in that case, I guess you still have a close coupling. I guess the case to consider would be where they can come apart, where you could create the appearance
of there being a conscious mind without there actually being another conscious mind.
I'm somewhat agnostic exactly where these lines go.
I think one observation that makes it plausible that you could have very
realistic appearances relatively simply, which also is relevant for the simulation argument,
in terms of thinking about how realistic the virtual reality model would have to be in order
for the simulated creature not to notice that anything was awry. Well, just think of our own humble brains during the wee hours of the night, when we are
dreaming. Many times, well, dreams are very immersive, but often you also don't realize
that you're in a dream. And that's produced by a simple, primitive, three-pound lump of neural matter effortlessly.
So if a simple brain like this can create the virtual reality that seems pretty real to
us, then how much easier would it be for a super-intelligent civilization with planetary-sized
computers, optimized over the eons, to create a realistic environment for you to interact with?
Yeah, by the way, behind that intuition
is that our brain is not that impressive
relative to the possibilities of what technology could bring.
It's also possible that the brain is the epitome,
is the ceiling.
The ceiling? How is that possible?
Meaning like this is the smartest possible thing
that the universe could create.
So that seems unlikely to me.
Yeah, I mean, for some of these reasons,
we alluded to earlier in terms of designs we already have
for computers that would be faster by many orders of magnitude than the human brain.
Yeah, but it could be that the cognitive constraints in themselves are what enables
the intelligence.
So the more powerful you make the computer, the less likely it is to become superintelligent.
This is where I say dumb things for you to push back on.
Yeah, I'm not sure I follow.
No, I mean, so there are different dimensions
of intelligence.
A simple one is just speed.
Like if you can solve the same challenge faster
in some sense, you're smarter.
So there, I think we have very strong evidence for thinking that you could have a
computer in this universe
that would be much faster than the human brain, and
therefore have speed superintelligence, like be completely superior, maybe a million times faster.
Then maybe there are other ways in which you could be smarter as well, maybe in more qualitative
ways, right?
And there the concepts are a little bit less clear-cut, so it's harder to make a very crisp,
neat, firmly logical argument for why there could be qualitative superintelligence,
as opposed to just things that were faster. Although I still think it's very plausible,
for various reasons that are less than watertight arguments.
But you could, sort of, for example, look at animals,
and even within humans, there seems to be Einstein versus the random person.
It's not just that Einstein was a little bit faster. Like, how long would it take a normal
person to invent general relativity?
It's not 20% longer than it took Einstein or something like that.
It's like, I don't know whether they would do it at all, or it would take millions of years,
or some totally bizarre amount of time.
So, but your intuition is that the compute size will get you there.
Increasing the size of the computer and the speed of the computer
might create some much more powerful levels of intelligence
that would enable some of the things we've been talking about, like the simulation, being
able to simulate an ultra-realistic environment, an ultra-realistic perception of reality.
Yeah, I mean, strictly speaking, it would not be necessary to have superintelligence
in order to have, say, the technology to make these simulations, ancestor simulations or other kinds of simulations.
As a matter of fact, I think if we are in a simulation, it would most likely
be one built by a civilization that had superintelligence.
It certainly would help a lot. I mean, you could build more efficient,
larger-scale structures if you had superintelligence.
I also think that if you had a technology
to build these simulations,
that's like a very advanced technology.
It seems kind of easier to get the technology
for superintelligence.
Yeah.
So I'd expect, by the time they
could make these fully realistic simulations
of human history with human brains in there,
like, before they got to that stage, they would have figured out how to create
machine superintelligence, or maybe biological enhancements of their own brains, if
they're biological creatures to start with.
So we talked about the three parts of the simulation
argument. One, we destroy ourselves before we ever create the simulation.
Two, everybody somehow loses interest in creating simulations. Three, we're living in a simulation. So you've kind of, I don't know if your thinking has evolved on this point, but you
kind of said that we know so little that these three cases might as well be equally probable. So probabilistically speaking,
where do you stand on this? Yeah, I don't think equal necessarily would be the most
supported probability assignment. So how would you, without assigning actual numbers,
what's more or less likely in your view?
Well, I mean, I've historically tended to punt on the question of how to divide it up between these
three. So maybe ask me another way: which kind of things would make each of
these more or less likely?
What kind of things?
I mean, certainly in general terms, if anything,
say, increases or reduces the probability of one of these,
we tend to slosh probability around onto the others.
So if one becomes less probable,
the others would have to become more probable, because it's got to add up to one.
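That "sloshing" is just renormalization of credences over a mutually exclusive, exhaustive trilemma. Here is a minimal Python sketch, with made-up numbers purely for illustration (the conversation deliberately assigns no actual values):

```python
def renormalize(credences: dict) -> dict:
    """Rescale credences over mutually exclusive, exhaustive options to sum to 1."""
    total = sum(credences.values())
    return {option: value / total for option, value in credences.items()}

# Made-up starting credences over the three disjuncts (illustration only).
beliefs = {"extinction": 1/3, "lost_interest": 1/3, "simulation": 1/3}

# Suppose some evidence halves the plausibility of the first disjunct...
beliefs["extinction"] *= 0.5
beliefs = renormalize(beliefs)

print(beliefs)  # {'extinction': 0.2, 'lost_interest': 0.4, 'simulation': 0.4}
# ...the probability mass "sloshes" onto the other two, keeping the total at one.
```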
Yes. So if we consider the first hypothesis,
the first alternative, that there is this filter
that makes it so that virtually no civilization
reaches technological maturity,
in particular, our own civilization.
If that's true, then it's very unlikely
that we would reach technological maturity,
because if almost no civilization at our stage does it,
then it's unlikely that we do it.
Sorry, can you linger on that for a second?
Well, if it's the case that almost all civilizations at our current stage of
technological development fail to reach maturity,
that would give us very strong reason for thinking we will fail to reach technological maturity.
Also, the flip side of that is that if we reach it, it means that many other
civilizations will have reached it.
Yeah, so that means if we get closer and closer to actually reaching technological maturity,
there's less and less distance left where we could go extinct before we are there.
Therefore, the probability that we will reach it increases
as we get closer, and that would make it less likely
to be true that almost all civilizations
at our current stage failed to get there.
We would have this one case, ourselves,
that got very close to getting there.
That would be strong evidence that
it is not so hard to get to technological maturity.
So to the extent that we feel we are moving nearer to technological maturity,
that would tend to reduce the probability of the first alternative
and increase the probability of the other two.
It doesn't need to be a monotonic change.
Like, if every once in a while some new threat comes into view,
some bad new thing you could do with some novel technology, for example, you know, that could change our probabilities in the other direction.
But that technology, again, you have to think about as a technology that has to affect every civilization out there in an equal, even way.
Yeah, pretty much.
I mean, strictly speaking, it's not true.
I mean, there could be two different existential risks,
and every civilization, you know,
succumbs to one or the other,
but none of them kills more than 50 percent.
Yeah.
But, incidentally, in some of my work,
I mean, on machine superintelligence,
I pointed to some existential risks related to sort of superintelligent AI, and how we must
make sure, you know, to handle that wisely and carefully.
That's not the right kind of existential catastrophe to make the first alternative true, though.
It might be bad for us if the future lost a lot of value as a result of it being shaped
by some process that optimized for some completely non-human value.
But even if we got killed by machine superintelligences, those machine superintelligences might still
attain technological maturity.
So.
Oh, I see.
So you're not, you're not human-exclusive.
This could be any intelligent species that achieves it.
Like, it's all about the technological maturity.
It's not that the humans have to attain it.
Right.
So, like, superintelligence, if it
replaces us, that works just as well for the simulation argument.
Well, yeah, yeah. I mean, it could interact with the second alternative. Like, if the thing that replaced us
was either more likely or less likely than we would be to have an interest in
creating ancestor simulations, you know, that could affect the probabilities. But yeah, to a first
order, like, if we all just die, then yeah, we won't produce any simulations,
because we are dead. But if we all die and get replaced by some other intelligent thing
that then gets to technological maturity, the question remains, of course: might not
that thing then use some of its resources to do this stuff? So can you reason about this stuff, given how little we know about
the universe? Is it reasonable to reason about these probabilities? So, like,
well, maybe you can disagree, but to me, it's not trivial to figure out how difficult it is to build a simulation. We kind of talked about it a little bit.
We also don't know, like as we tried to start building it,
like start creating virtual worlds and so on,
how that changes the fabric of society.
There's all these things along the way that can fundamentally change just so many aspects of our society,
of our existence, that we don't know anything
about. Like the kind of things we might discover when we understand, to a greater degree,
the fundamental physics, like if we have a breakthrough and have a theory
of everything, how that changes space exploration and so on. So is it still possible
to reason about these probabilities, given how little we know? Yes, I think there will be a large
residual of uncertainty that we'll just have to acknowledge. And I think that's true for most of these big picture questions that we might wonder about. It's just we are small,
short-lived, small-brained, cognitively very limited humans, with little evidence, and it's amazing
we can figure out as much as we can really about the cosmos.
But okay, so there's this cognitive trick that seems to happen when I look at the simulation
argument, which for me, it seems like case one and two feel unlikely.
I want to say feel unlikely, as opposed to, sort of, it's not like I have too much scientific
evidence to say that either one or two are not true. It just seems unlikely that every
single civilization destroys itself, and it feels unlikely that every civilization
loses interest. So naturally, without necessarily explicitly doing it, the simulation argument
basically says, it's very likely we're living in a simulation.
Like to me, my mind naturally goes there,
I think the mind goes there for a lot of people,
is that the incorrect place for it to go?
Well, not necessarily.
I think the second alternative,
which has to do with the motivations and interests
of technologically mature civilizations,
I think there is much we don't understand about that.
Yeah, can you talk about that a little bit? What do you think? I mean, this question pops up when you build an AGI system, or build a general intelligence.
How does that change our motivations?
Do you think it will fundamentally transform our motivations?
Well, it doesn't seem that implausible that once you take this leap
to technological maturity, I mean, I think it involves creating
machine superintelligence, possibly, that would be sort of on the path for
basically all civilizations, maybe before they are able to create
large numbers of ancestor simulations.
That possibly could be one of these things that quite
radically changes the orientation of what a civilization is,
in fact, optimizing for.
There are other things as well. So at the moment, we have not perfect
control over our own being, our own mental states. If you want to experience pleasure and happiness, you
might have to do a whole host of things in the external world to try to get into the mental
state where you experience pleasure. Like, people get some pleasure from eating great
food. Well, they can't just turn that on. They have to kind of actually go to a nice restaurant,
and then they have to make money for it.
So there's like all this kind of activity
that maybe arises from the fact that we are trying
to ultimately produce mental states,
but the only way to do that is by a whole host
of complicated activities in the external world.
Now, at some level of technological development,
I think we'll become autopotent, in the sense
of gaining the direct ability to choose our own internal configuration,
and enough knowledge and insight to be able to actually
do that in a meaningful way.
So then it could turn out that there are a lot of instrumental goals that would drop out of the picture and be replaced by other instrumental goals, because we could now serve some of these final goals in more direct ways. And who knows how all of that shakes out after civilizations have reflected on that and converged on different attractors, and so on and so forth.
And there could be new instrumental considerations that come into view as well, that we are just oblivious to, that would maybe have a strong shaping effect on actions: like, very strong reasons to do something or not to do something, and we just don't realize they're there because we are so dumb, bumbling through the universe. But if, almost inevitably, en route to attaining the ability to create many ancestor simulations, you do have this cognitive enhancement, or advice from superintelligences, or being one yourself, then maybe there's this additional set of considerations coming into view, and then it's obvious that the thing that makes sense is to do X. Whereas right now it seems you could do X, Y, or Z, and different people will do different things, and we are kind of random in that sense.
Yeah, because at this time, with our limited technology, the impact of our decisions is minor. I mean, that's starting to change in some ways.
Well, I'm not sure it follows that the impact of our decisions is minor.
Well, it's starting to change. I mean, I suppose 100 years ago it was minor. It's starting to...
Well, it depends on how you view it. What people did 100 years ago still has effects on the world today.
Oh, I see. As a civilization, taken together.
Yeah.
So it might be that the greatest impact of individuals is not at technological maturity, or very far down the road. It might be earlier on, when there are different tracks a civilization could go down. Maybe the population is smaller; things still haven't settled out. If you count indirect effects, those could be bigger than the direct effects that people have later on.
So part three of the argument says that this leads us to a place where eventually somebody creates a simulation. I think you had a conversation with Joe Rogan, and I think there's some aspect here where you got stuck a little bit: how does that lead to us likely living in a simulation? So, this kind of probability argument: if somebody eventually creates a simulation, why does that mean that we're now in a simulation?
What you get to, if you accept alternative three first, is that there would be more simulated people with our kinds of experiences than non-simulated ones. If you look at the world as a whole, by the end of time, as it were, and you just count it up, there would be more simulated ones than non-simulated ones. Then there is an extra step to get from that. If you assume, suppose for the sake of the argument, that that's true, how do you get from that to the statement, we are probably in a simulation? Here you are introducing an indexical statement: that this person, right now, is in a simulation. There are all these other people that are in simulations, and some that are not in simulations. But what probability should you have that you yourself are one of the simulated ones in that setup?
So, yeah. I call it the bland principle of indifference, which is that in cases like this, when you have two, I guess, sets of observers, one of which is much larger than the other, and you can't, from any internal evidence you have, tell which set you belong to, you should assign a probability that's proportional to the size of the sets. So that if there are 10 times more simulated people with your kinds of experiences, you would be 10 times more likely to be one of those.
Is that as intuitive as it sounds? I mean, that seems kind of... if you don't have enough information, you should rationally just assign probability in proportion to the size of the sets.
It seems pretty plausible to me.
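To make the arithmetic concrete, here is a minimal sketch in Python, with purely made-up head counts (nothing in the argument fixes these numbers), of the credence the bland principle of indifference assigns:

```python
# Hypothetical head counts, purely illustrative, not estimates.
simulated = 1_000_000    # observers with your kind of experiences who are simulated
non_simulated = 100_000  # observers with your kind of experiences in original histories

# Bland principle of indifference: with no internal evidence distinguishing
# the two sets, credence is proportional to set size.
p_simulated = simulated / (simulated + non_simulated)
print(f"P(simulated) = {p_simulated:.4f}")  # 0.9091 for a 10:1 ratio
```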
Where are the holes in this? Is it at the very beginning, the assumption that everything stretches over sort of infinite time, essentially?
You don't need infinite time.
You need what? How long does the time need to be?
Well, however long it takes, I guess, for a universe to produce an intelligent civilization that then attains the technology to run some ancestor simulations.
Gotcha. So once the first simulation is created, just a little while longer and they'll all start creating simulations, kind of in order?
Well, I mean, they might be different. If you think of there being a lot of different planets, and some subset of them have life, and then some subset of those get to intelligent life, and some of those maybe eventually start creating simulations, they might get started at quite different times. Maybe on some planet it takes a billion years longer before you get monkeys, or before you get even bacteria, than on another planet. So this might happen kind of at different cosmological epochs.
Is there a connection here to the Doomsday argument and the sampling there?
Yeah, there is a connection, in that they both involve an application of anthropic reasoning, that is, reasoning about these kinds of indexical propositions. But the assumption you need in the case of the simulation argument is much weaker than the assumption you need to make the Doomsday argument go through.
What is the Doomsday argument? And maybe you can speak to anthropic reasoning more generally.
Yeah, that's a big and interesting topic in its own right, anthropics. But the Doomsday argument was first discovered by Brandon Carter, who was a theoretical physicist, and then developed by the philosopher John Leslie. I think it might have been discovered initially in the 70s or 80s, and Leslie wrote this book, I think, in '96. And there are some other versions as well, by Richard Gott, the physicist, but let's focus on the Carter-Leslie version, where it's an argument that we have systematically underestimated the probability that humanity will go extinct soon.
Now, I should say, most people probably think at the end of the day that there is something wrong with this Doomsday argument, that it doesn't really hold. It's like there's something wrong with it, but it has proved hard to say exactly what is wrong with it.
And different people have different accounts.
My own view is it seems inconclusive.
But I can say what the argument is.
Yeah, that would be good.
Yeah. So maybe it's easiest to explain via an analogy to sampling from urns. Imagine you have two urns in front of you, and they have balls in them that have numbers. The two urns look the same, but inside one there are 10 balls: ball number one, two, three, up to ball number 10. And then in the other urn, you have a million balls, numbered one to a million. Now somebody puts one of these urns in front of you and asks you to guess, what's the chance it's the 10-ball urn? And you say, well, 50-50, I can't tell which urn it is. But then you're allowed to reach in and pick a ball at random from the urn. And let's suppose you find that it's ball number seven. That's strong evidence for the 10-ball hypothesis. It's a lot more likely that you would get such a low-numbered ball if there are only 10 balls in the urn; it's, in fact, a 10% chance, right? Whereas if there are a million balls, it would be very unlikely you would get number seven. So you perform a Bayesian update. And if your prior was 50-50 that it was the 10-ball urn, you become virtually certain, after finding the random sample was seven, that it only has 10 balls in it. So in the case of the urns, this is uncontroversial, just elementary probability theory.
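Since the urn case really is just elementary probability, the update can be checked directly; here is a short Python sketch using exactly the numbers from the example:

```python
# Bayes' rule for the two-urn example described above.
prior_10 = prior_million = 0.5    # 50-50 before drawing a ball
like_10 = 1 / 10                  # P(drawing ball 7 | 10-ball urn)
like_million = 1 / 1_000_000      # P(drawing ball 7 | million-ball urn)

evidence = prior_10 * like_10 + prior_million * like_million
posterior_10 = prior_10 * like_10 / evidence
print(f"P(10-ball urn | drew ball 7) = {posterior_10:.6f}")  # ~0.999990
```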
The Doomsday argument says that you should reason in a similar way with respect to different hypotheses about how many balls there will be in the urn of humanity; that is, how many humans there will ever have been by the time we go extinct. To simplify, let's suppose we only consider two hypotheses: either 200 billion humans in total, or 200 trillion humans in total. You could fill in more hypotheses, but it doesn't change the principle here. So it's easiest to see if we just consider these two.
So you start with some prior, based on ordinary empirical ideas about threats to civilization and so forth. And maybe you say there's a 5% chance that we will go extinct by the time there will have been 200 billion humans only. You're kind of optimistic, let's say; you think probably we'll make it through and colonize the universe. But then, according to this Doomsday argument, you should think of your own birth rank as a random sample. Your birth rank is your position in the sequence of all humans that have ever existed. It turns out you're about human number 100 billion, you know, give or take. That's roughly how many people have been born before you.
That's fascinating, because we probably each have a number.
We would each have a number in this. I mean, obviously, the exact number would depend on where you started counting, like which ancestors count as human enough to count as human. But those are not really important; there are relatively few of those at the margin.
So, yeah, you're roughly number 100 billion. Now, if there are only going to be 200 billion in total, that's a perfectly unremarkable number. You're somewhere in the middle, right? A run-of-the-mill human, completely unsurprising. Now, if there are going to be 200 trillion, you would be remarkably early. Like, what are the chances, out of these 200 trillion humans, that you should be human number 100 billion? That would have a much lower conditional probability.
And so, analogous to how in the urn case you updated in favor of the urn having few balls after finding this low-numbered random sample, similarly, in this case, you should update in favor of the human species having a lower total number of members; that is, doom soon.
You said doom soon. That would be the hypothesis in this case, that it will end after just another 100 billion? I just like that term for the hypothesis.
So what the Doomsday argument crucially relies on is the idea that you should reason as if you were a random sample from the set of all humans that will ever have existed. If you have that assumption, then I think the rest kind of follows. The question then is, why should you make that assumption? In fact, you know you are number 100 billion, so where do you get this prior? And then there is a literature on that, with different ways of supporting that assumption.
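Granting that assumption purely for the sake of the argument, the update mirrors the urn case exactly. Here is a sketch with the 5% prior from above; the striking output is the whole point of the argument, not an endorsement of the numbers:

```python
# Doomsday-style Bayesian update, assuming (for the sake of argument)
# that you are a random sample from all humans who will ever live.
prior_doom = 0.05      # 200 billion humans in total ("doom soon")
prior_big = 0.95       # 200 trillion humans in total
birth_rank = 100e9     # roughly your position among all humans so far
assert birth_rank <= 200e9  # the observed rank is possible under both hypotheses

# Under self-sampling, P(rank r | N humans total) = 1/N for any r <= N.
like_doom = 1 / 200e9
like_big = 1 / 200e12

evidence = prior_doom * like_doom + prior_big * like_big
posterior_doom = prior_doom * like_doom / evidence
print(f"P(doom soon | birth rank) = {posterior_doom:.3f}")  # ~0.981
```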
That's just one example of anthropic reasoning, right? It seems to be kind of convenient when you think about humanity, when you think about even existential threats and so on: it seems quite natural that you should assume that you're just an average case.
Yeah, that you're kind of a typical, randomly sampled observer.
Now, in the case of the Doomsday argument, it seems to lead to what intuitively we think is the wrong conclusion, or at least many people have this reaction, that there's got to be something fishy about this argument, because from very, very weak premises, it gets this very striking implication: that we have almost no chance of reaching a size of 200 trillion humans in the future. And how could we possibly get there just by reflecting on when we were born? It seems you would need sophisticated arguments about the impossibility of space colonization, blah blah. So one might be tempted to reject this key assumption. I call it the self-sampling assumption: the idea that you should reason as if you were a random sample from all observers, or from some reference class.
However, it turns out that in other domains, it looks like we need something like this self-sampling assumption to make sense of bona fide scientific inferences. In contemporary cosmology, for example, you have these multiverse theories. And according to a lot of those, all possible human observations are made. If you have a sufficiently large universe, you will have a lot of people observing all kinds of different things. So if you have two competing theories, say, about the value of some constant, it could be true according to both of these theories that there would be some observers observing the value that corresponds to the other theory, because there would be some observers that have hallucinations, or there's a local fluctuation, or a statistically anomalous measurement. These things will happen. And if there are enough observers, there will be some that, sort of by chance, make these different observations. So what we would want to say is, well, many more observers, a larger proportion of the observers, will observe, as it were, the true value, and a few will observe the wrong value. If we think of ourselves as a random sample, we should expect, with high probability, to observe the true value, and that will then allow us to conclude that the evidence we actually have is evidence for the theories we think are supported.
It gives us, then, a way of making sense of these inferences that clearly seem correct: that we can make various observations and infer what the temperature of the cosmic background is, and the fine-structure constant, and all of this. But it seems that without rolling in some assumption similar to the self-sampling assumption, this inference does not go through.
And there are other examples. So there are these scientific contexts where it looks like this kind of anthropic reasoning is needed and makes perfect sense. And yet, in the case of the Doomsday argument, it has this weird consequence, and people might think there's something wrong with it there. So there is then this project that would consist in trying to figure out what are the legitimate ways of reasoning about these indexical facts when observer selection effects are in play; in other words, developing a theory of anthropics.
And there are different ways of looking at that, and it's a difficult methodological area. But to tie it back to the simulation argument, the key assumption there, this bland principle of indifference, is much weaker than the self-sampling assumption. In the case of the Doomsday argument, it says you should reason as if you were a random sample from all humans that will ever have lived, even though, in fact, you know that you are about the 100 billionth human and you're alive in the year 2020. Whereas in the case of the simulation argument, it says that, well, if you actually have no way of telling which one you are, then you should assign this kind of uniform probability.
Yeah, yeah. Your role as the observer in the simulation argument is different. It seems like... who's the observer? I keep assigning it to the individual consciousness.
Yeah. I mean, when you say observers, in the context of the simulation argument, the relevant observers would be, A, the people in original histories, and, B, the people in simulations. So this would be the class of observers that we need. I mean, there are also maybe the simulators, but we can set those aside for this.
So the question is,
given that class of observers,
a small set of original history observers
and the large class of simulated observers,
which one should you think is you? Where are you amongst this?
Where are you?
I'm maybe having a little bit of trouble wrapping my head around the intricacies of what it
means to be an observer in this, in the different instantiations of the anthropic reasoning cases that we mentioned.
I mean, it doesn't have to be...
No, I mean, maybe an easier way of putting it is just: are you simulated or are you not simulated, given this assumption that these two groups of people exist?
Yeah, in the simulation case, it seems pretty straightforward.
Yeah. So the key point is that the methodological assumption you need to make to get the simulation argument to where it wants to go is much weaker and less problematic than the methodological assumption you need to make to get the Doomsday argument to its conclusion. Maybe the Doomsday argument is sound or unsound, but you need to make a much stronger and more controversial assumption to make it go through.
In the case of the Doomsday argument, sorry, the simulation argument, I guess one way to pump intuition in support of this bland principle of indifference is to consider a sequence of different cases where the fraction of people who are simulated, versus non-simulated, approaches one. In the limiting case where everybody is simulated, obviously you can deduce with certainty that you are simulated. If everybody with your experiences is simulated, then you know you've got to be one of those. You don't need probability at all; you just logically conclude it, right? So then, as we move from a case where, say, 90% of everybody is simulated, to 99%, to 99.9%, it should seem plausible that the probability you assign should approach one, certainty, as the fraction approaches the case where everybody is in a simulation. You wouldn't expect that to be discrete: well, if there's one non-simulated person, then it's 50-50, but if we remove that person, then it's 100%. It should kind of...
There are other arguments as well that one can use to support this bland principle of indifference, but that's maybe a nice one.
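One way to see that there is no discontinuity in that sequence of cases is to hold the non-simulated population fixed and grow the simulated one; here is a small sketch, with arbitrary illustrative counts:

```python
# The indifference-based credence climbs smoothly toward 1 as the
# simulated fraction grows; there is no jump from 50-50 to certainty.
non_simulated = 1_000
for simulated in (1_000, 9_000, 99_000, 999_000):
    p = simulated / (simulated + non_simulated)
    print(f"{simulated:>7} simulated vs {non_simulated} not -> P = {p:.4f}")
# Prints P = 0.5000, 0.9000, 0.9900, 0.9990
```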
But in general, when you start from time equals zero and go into the future, if it's possible to create simulated worlds, the fraction of simulated worlds will go to one.
Well, it doesn't have to go all the way to one. In reality, there would be some ratio.
Although maybe a technologically mature civilization could run a lot of simulations using only a small portion of its resources. It probably wouldn't be able to run infinitely many, though. If we take, say, the physics in the observed universe, and if we assume that that's also the physics at the level of the simulators, there would be limits to the amount of information processing that any one civilization could perform in its future trajectory.
Right.
First of all, there's a limited amount of matter you can get your hands on, because with a positive cosmological constant, the universe is accelerating. There's a finite sphere of stuff: even if you traveled at the speed of light, you could only ever reach a finite amount of stuff.
And then if you think there's a lower limit to the amount of loss you incur when you perform an erasure of a computation, or if you think, for example, that matter gradually decays over cosmological timescales, maybe protons decay, other things, and you radiate out gravitational waves, there are all kinds of seemingly unavoidable losses that occur. So eventually we'll have something like a heat death of the universe, or a cold death, or whatever.
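One of those losses has a well-known quantitative floor. Assuming the "lower limit to the loss from erasure" refers to Landauer's principle, here is a sketch of the bound; the temperature is an arbitrary choice:

```python
import math

# Landauer's principle: erasing one bit of information dissipates
# at least k_B * T * ln(2) of energy as heat.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300              # assumed operating temperature, kelvin

min_energy_per_bit = k_B * T * math.log(2)
print(f"Minimum energy to erase one bit at {T} K: {min_energy_per_bit:.3e} J")
# ~2.9e-21 J; colder computing lowers the bound, but never to zero
```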
So it's finite there. But of course, if there are many ancestor simulations, we don't know which level we are at. So could there be an arbitrary number of simulations that spawned ours, and those had more resources, in terms of physical universe, to work with?
Sorry, what do you mean?
If simulations spawn other simulations, it seems like each new spawn has fewer resources to work with. But we don't know at which step along the way we are. Any one observer doesn't know whether we're at level 42, or 100, or one. Or does that not matter for the resources?
I mean, it's true that there would be uncertainty there.
You could have stacked simulations, yes, and we could then be uncertain as to which level we are at. But as you remarked, all the computations performed in a simulation within the simulation also have to be expended at the level of the underlying simulation.
Right.
So the computer in basement reality, where all these simulations within simulations within simulations are taking place: that computer, ultimately, its CPU or whatever it is, has to power this whole tower, right? So if there is finite compute power in basement reality, that would impose a limit on how tall this tower can be. And if each level imposes a large extra overhead, you might think maybe the tower would not be very tall, and that most people would be low down in the tower.
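As a toy illustration of that overhead argument, with entirely made-up numbers: if each nested level can only pass a fixed fraction of its compute down to the next, usable compute decays geometrically, and the tower stays short.

```python
# Toy model of a stacked-simulation tower. All numbers are hypothetical.
base_compute = 1e30   # ops/sec available in basement reality
efficiency = 0.01     # fraction of a level's compute usable one level down
min_useful = 1e15     # assumed minimum ops/sec to host one rich simulation

level, compute = 0, base_compute
while compute >= min_useful:
    print(f"level {level}: {compute:.1e} ops/sec available")
    level, compute = level + 1, compute * efficiency
# With these numbers the loop ends after 8 levels (0 through 7):
# most simulated observers would live near the bottom of the tower.
```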
I love the term basement reality. Let me ask about one of the popularizers of this idea. You said there have been many, when you look at the last few years of the simulation hypothesis; just like you said, it comes up every once in a while, some new community discovers it, and so on. But I would say one of the biggest popularizers of this idea is Elon Musk. Do you have any kind of intuition about what Elon thinks about when he thinks about simulation? Why is this of such interest? Is it all the things we've talked about, or is there some special kind of intuition about simulation that he has?
I mean, you might have a better insight into that than I do. But why it's of interest, I think, seems fairly obvious: to the extent that one thinks the argument is credible, it would, if it's correct, tell us something very important about the world, one way or the other, whichever of the three alternatives it is. That seems like arguably one of the most fundamental discoveries, right? Now, interestingly, in the case of somebody like Elon, there are the standard arguments for why you might want to take the simulation hypothesis seriously, the simulation argument, right? But if you are actually Elon Musk, let's say, there's kind of an additional reason, in that, what are the chances you would be Elon Musk? It seems like maybe there would be more interest in simulating the lives of very unusual and remarkable people.
So if you consider not just simulations where all of human history, or the whole of human civilization, is simulated, but also other kinds of simulations which only include some subset of people, in those simulations that only include a subset, it might be more likely that they would include subsets of people with unusually interesting or consequential lives. You've got to wonder, right?
Yeah. Or if you are Donald Trump, or if you are Bill Gates, or you're some particularly distinctive character, you might think that... I mean, if you just put yourself into those shoes, right, it's got to be an extra reason to think.
That's kind of... so interesting.
So on a scale of, like, a farmer in Peru to Elon Musk, the more you get towards the Elon Musk end, the higher the probability?
You'd get some extra boost from that, yes.
There's an extra boost. So he also asked the question of what he would ask an AGI, the question being, what's outside the simulation? Do you think about the answer to this question? If we are living in a simulation, what is outside the simulation? The programmer of the simulation?
Yeah, I mean, I think it connects to the question of what's inside the simulation, in that if you had views about the creators of the simulation, it might help you make predictions about what kind of simulation it is, what might happen, what happens after the simulation, if there is some after, but also the kind of setup. So these two questions would be quite closely intertwined.
But do you think it would be very surprising... like, is it possible for the stuff inside the simulation to be fundamentally different from the stuff outside? Another way to put it: could the creatures inside the simulation lack the cognitive capabilities, or any kind of information-processing capabilities, to even understand the mechanism that created them?
They might understand some aspects of it. I mean, there are levels of explanation, degrees to which you can understand. Does your dog understand what it is to be human? Well, it's got some idea: humans are these physical objects that move around and do things. And a normal human would have a deeper understanding of what it is to be a human. And maybe some very experienced psychologist, or a great novelist, might understand a little bit more about what it is to be human. And maybe a superintelligence could see right through your soul. So, similarly, I do think that we are quite limited in our ability to understand all of the relevant aspects of the larger context that we exist in. But there might be hope for some understanding.
I think we understand some aspects of it, but, you know, how much good is that if there's one key aspect that changes the significance of all the other aspects? We might understand maybe seven out of ten key insights that you need, but the answer actually varies completely depending on what insights eight, nine, and ten are.
It's like, suppose the big task were to guess whether a certain number was odd or even, a ten-digit number. And if it's even, the best thing for you to do in life is to go north; and if it's odd, the best thing is to go south. Now, we are in a situation where maybe, through our science and philosophy, we have figured out what the first seven digits are. So we have a lot of information, right? Most of it we've figured out. But we are clueless about what the last three digits are. So we are still completely clueless about whether the number is odd or even, and therefore whether we should go north or go south. It's an analogy, but I feel we're somewhat in that predicament. We know a lot about the universe.
We've come maybe more than half of the way there
to fully understanding it,
but the parts we're missing are plausibly ones
that could completely change the overall upshot of the thing.
And that includes changing our overall view about what the scheme of priorities should be, or which strategic direction would make sense to pursue.
Yeah, I think your analogy of us being the dog trying to understand human beings is an entertaining one, and probably correct. As the understanding moves from the dog's viewpoint toward the human psychologist's viewpoint, the steps along the way will involve completely transformative ideas of what it means to be human. The dog has a very shallow understanding. It's interesting to think that a dog's understanding of a human being might be analogous to our current understanding of the fundamental laws of physics in the universe. Oh man, okay. We spent an hour and 40 minutes talking about the simulation. I like it. Let's talk about superintelligence, at least for a little bit. And let's start at the basics. What, to you, is intelligence?
Yeah, I tend not to get too stuck on the definitional question. I mean, the common-sense understanding: the ability to solve complex problems, to learn from experience, to plan, to reason, some combination of things like that.
Is consciousness mixed up into that, or no?
Well, I don't think... I think something could be fairly intelligent, at least, without being conscious, probably.
So then what is superintelligence?
Yeah, that would be something that had much more general cognitive capacity than we humans have. So if we talk about general superintelligence, it would be a much faster learner, and able to reason much better, and to make plans that are more effective at achieving its goals, say, in a wide range of complex, challenging environments.
In terms of... as we turn our eye to the idea of existential threats from superintelligence, do you think superintelligence has to exist in the physical world, or can it be digital only? Sort of... we think of our general intelligence, as humans, as an intelligence that's associated with a body, that's able to interact with the world, that's able to affect the world directly and physically.
I mean, digital-only is perfectly fine, I think. It's physical in the sense that, obviously, the computers and the memories are physical. But its capability to affect the world could be very strong, even if it has a limited set of actuators. If it can type text on a screen, or something like that, that would be, I think, ample.
So in terms of the concerns of existential threat of AI,
how can an AI system that's in the digital world
have existential risk?
So what are the attack vectors for a digital system?
Well, I mean, I guess maybe to take one step back,
so I should emphasize that I also think there's this huge
positive potential from machine intelligence,
including super intelligence.
And I want to stress that because like some of my writing
has focused on what can go wrong.
And when I wrote the book Superintelligence, at that point I felt that there was a kind of neglect of what would happen if AI succeeds, and, in particular, a need to get a more granular understanding of where the pitfalls are, so we can avoid them.
I think that since the book came out in 2014, there has been a much wider recognition of that, and a number of research groups are now actually working on developing, say, AI alignment techniques, and so on and so forth. So I think now it's important to make sure we bring back onto the table the upside as well. And there's a little bit of a neglect now on the upside.
Which is, I mean, if you look at the amount of information that is available, or people talking, and people being excited about the positive possibilities of general intelligence, that's far outnumbered by the negative possibilities, in terms of our public discourse.
Possibly.
Yeah.
It's hard to measure.
So, can you linger on that for a little bit? What are some, to you, possible big positive impacts of general intelligence? Superintelligence?
Well, I mean, superintelligence, I guess, because I tend to also want to distinguish these two different contexts of thinking about AI and AI impacts: the kind of near term and long term, if you want. Both of which I think are legitimate things to think about, and people should discuss both of them, but they are different, and they often get mixed up, and then you get confusion. I think you get simultaneously maybe an over-hyping of the near term and an under-hyping of the long term. I think as long as we keep them apart, we can have two good conversations. Or we can mix them together and have one bad conversation.
Can you clarify just the two things we're talking about, the near term and the long term? What are the distinctions?
Well, it's a blurry distinction. But, say, the things I wrote about in this book, Superintelligence: long term. Things people are worrying about today, with, I don't know, algorithmic discrimination, or even things like self-driving cars and drones and stuff: more near term. And then, of course, you could imagine some medium term where they kind of overlap, and one morphs into the other. But at any rate, the issues look somewhat different depending on which of these contexts you're in.
So I think it would be nice if we could talk about the long term, and think about a positive impact, or a better world, because of the existence of long-term superintelligence. Do you have views of such a world?
Yeah, I guess it's a little harder, particularly because it seems obvious that the world has a lot of problems as it currently stands, and it's hard to think of any one of those which it wouldn't be useful to have a friendly, aligned superintelligence working on.
So from health, to the economic system, to being able to improve investment and trade and foreign policy decisions, all that kind of stuff?
All that kind of stuff, and a lot more.
I mean, what's the killer app?
Well, I don't think there is one. I think AI, especially artificial general intelligence, is really the ultimate general-purpose technology. So it's not that there is one problem, one area, where it will have a big impact. If and when it succeeds, it will really apply across the board, in all fields where human creativity and intelligence and problem-solving are useful, which is pretty much all fields.
The thing that it would do is give us a lot more control over nature. It wouldn't automatically solve the problems that arise from conflict between humans, fundamentally political problems. Some subset of those might go away if we just had more resources and cooler tech, but some subset would require coordination that is not automatically achieved just by having more technological capability. But anything that's not of that sort, I think you just get an enormous boost with this kind of cognitive technology, once it goes all the way. Now, again, that doesn't mean I'm thinking, oh, people don't recognize what's possible with current technology, and sometimes things get overhyped. Those are perfectly consistent views to hold: the ultimate potential being enormous, and then it being a very different question how far we are from that, or what we can do with near-term technology.
Yeah, so what's your intuition about the idea of an intelligence explosion? You know, when you start to think about that leap from the near term to the long term, the natural inclination, like for me, sort of building machine learning systems today, is that it seems like it's a lot of work to get to general intelligence. But there's some intuition of exponential growth, of exponential improvement, of an intelligence explosion. Can you maybe try to elucidate, try to talk about, what's your intuition about the possibility of an intelligence explosion? That it won't be this gradual, slow process, that there might be a phase shift?
Yeah, I think it's... we don't know how explosive it will be. I think, for what it's worth, it seems fairly likely to me that at some point there will be some intelligence explosion, like some period of time where progress in AI becomes extremely rapid. Roughly in the area where you might say it's kind of human-ish equivalent in core cognitive faculties, though the concept of human equivalence starts to break down when you look too closely at it. And just how explosive does something have to be for it to be called an intelligence explosion? Does it have to be, like, overnight, literally, or a few years? But overall, I guess, if you plotted the opinions of different people in the world, I would put somewhat more probability towards the intelligence explosion scenario than probably the average AI researcher,
I guess.
So then, the other part of the intelligence explosion, or just forget explosion, just progress: once you achieve that gray area of human-level intelligence, is it obvious to you that we should be able to proceed beyond it, to get to superintelligence?
Yeah, that seems... I mean, as much as any of these things can be obvious, given that we've never had one, and people have different views, and there's some degree of uncertainty that always remains for any big, futuristic, philosophical, grand question, just because we realize humans are fallible, especially about these things. But it does seem, as far as I'm judging things based on my own impressions, very unlikely that there would be a ceiling at or near human cognitive capacity.
And that's, I don't know, just a special moment. It's both terrifying and exciting to create a system that's beyond our intelligence. So maybe you can step back and say, how does that possibility make you feel? That we can create something... it feels like there's a line beyond which, once it steps over it, it will be able to outsmart you, and therefore it feels like a step where we lose control.
Well, I don't think the latter follows. That is, you could imagine, and in fact this is what a number of people are working towards, making sure that we could ultimately project higher levels of problem-solving ability while still making sure that they are aligned, that they are in the service of human values. So losing control, I think, is not a given; it's not a given that that would happen.
Now, you asked how it makes me feel. I mean, to some extent, I've lived with this for so long; since as long as I can remember being an adult, or even a teenager, it seemed to me obvious that at some point AI will succeed.
And I actually misspoke; I didn't mean control. Because the control problem is an interesting thing, and I think the hope is, at least, that we should be able to maintain control over systems that are smarter than us. But we do lose our specialness. We sort of lose our place as the smartest, coolest thing on Earth.
And there's an ego involved there that humans are not very good at dealing with. I mean, I value my intelligence as a human being. It seems like a big, transformative step to realize there's something out there that's more intelligent. I mean, you don't see that as fundamentally transformative?
Well, yeah, I think... yes and no. I think it might be smaller than you'd think. I mean, there are already a lot of things out there that are... I mean, certainly, if you think the universe is big, there are going to be other civilizations that already have superintelligences, or that just naturally have brains the size of beach balls, and are completely leaving us in the dust.
And we haven't come face to face with them.
We haven't come face to face. But, I mean, it's an open question what would happen in a kind of post-human world. Like, how much day to day would these superintelligences be involved in the lives of ordinary people? I mean, you could imagine some scenario where it would be more like a background thing that would help protect against some things, but it wouldn't be this intrusive presence, like making you feel bad by making clever jokes at your expense. There are all sorts of things that, in the human context, would feel awkward. You don't want to be the dumbest kid in your class that everybody picks on. A lot of those things, maybe we need to abstract away from.
If you're thinking about this context where we have infrastructure that is, in some sense, beyond any or all humans... I mean, it's a little bit like, say, the scientific community as a whole, if you think of that as a mind. It's a bit of a metaphor, but obviously it's going to be way more capacious than any individual. So, in some sense, there is this mind-like thing already out there that's just vastly more intelligent than any individual is, and we just, okay, accept that as a fact.
That's the basic fabric of our existence.
Yeah.
There's a superintelligence...
Yeah. You get used to a lot of things.
I mean, there's already Google and Twitter and Facebook, these recommender systems that are the basic fabric of our... I could see them becoming... Do you think of the collective intelligence of these systems as already perhaps reaching superintelligence level?
Well, I mean, here it comes down to the concept of intelligence, and the scale, and what human level means. The kind of vagueness and indeterminacy of those concepts starts to dominate how we would answer that question. So, say, the Google search engine has a very high capacity of a certain kind: remembering and retrieving information, particularly text or images, when you have a kind of word-string key. It's obviously superhuman at that, but there's a vast set of other things it can't even do at all, not just not do well. So you have these current AI systems that are superhuman in some limited domain, and then radically subhuman in all other domains. The same with a simple calculator that can multiply really large numbers, right? It's going to have this one spike of superintelligence, and then a kind of zero level of capability across all other cognitive fields.
Yeah, I don't necessarily think the generalness... I mean, I'm not so attached to it. But it's a gray area, and it's a feeling. To me, AlphaZero is somehow much, much more intelligent than Deep Blue. And to say in which domain... well, you could say these are both just board games; they're both just able to play board games; who cares if one does it better? But there's something about the learning, the self-play, that makes it cross over into that land of intelligence that doesn't necessarily need to be general. In the same way, Google is much closer to Deep Blue, currently, in terms of its search engine, than it is to AlphaZero. And the moment these recommender systems really become more like AlphaZero, being able to learn a lot without being heavily constrained by human interaction, that seems like a special moment in time.
Certainly, learning ability seems to be an important facet of general intelligence: that you can take some new domain that you haven't seen before, and that you weren't specifically pre-programmed for, and then figure out what's going on there, and eventually become really good at it. That's something AlphaZero has much more of than Deep Blue had. And, in fact, systems like AlphaZero can learn not just Go, but other games; in fact, it probably beats Deep Blue in chess, and so forth. So you do see this generality, and it matches the intuition. We feel it's more intelligent, and it also has more of this general-purpose learning ability. And if we get systems that have even more general-purpose learning ability, it might also trigger an even stronger intuition that they are actually starting to get smart.
So if you were to pick a future, what do you think a utopia looks like with AGI systems? Is it the Neuralink, brain-computer-interface world, where we're really closely interlinked with AI systems? Is it possibly one where AGI systems replace us completely, while maintaining the values and the consciousness? Is it something like a completely invisible fabric, like you mentioned, in society, where it just aids in a lot of the stuff that we do, like curing diseases and so on? What is utopia, if you get to pick?
Yeah, I mean, it's a good question, and a deep and difficult one. I'm quite interested in it. I don't have all the answers yet, and might never have, but I think there are some different observations one can make. One is, if this scenario actually did come to pass, it would open up this vast space of possible modes of being. On one hand, material and resource constraints would just be expanded dramatically, so there would be a big pie, let's say, right? Also, it would enable us to do things, including to ourselves, that would just open up this much larger design space, an option space, than we have ever had access to in human history.
So I think two things follow from that. One is that we probably would need to make a fairly fundamental rethink of what, ultimately, we value; to think things through more from first principles. The context would be so different from the familiar that we couldn't just take what we've always been doing and then say, oh well, now we have this cleaning robot that cleans the dishes in the sink, and a few other small things. I think we would have to go back to first principles.
So even from the individual level, go back to the first principles of what is the meaning of life, what is happiness, what is fulfillment?
And then, also connected to this large space of resources, is that it would be possible, and I think something we should aim for, to do well by the lights of more than one value system. That is, we wouldn't have to choose only one value criterion and say we're going to do something that scores really high on the metric of, say, hedonism, and then is like a zero by other criteria, like kind of wireheaded brains in a vat: there's a lot of pleasure, that's good, but then no beauty, no achievement, and so on. I think, to some significant degree, not unlimited, but to a significant degree, it would be possible to do very well by many criteria. Maybe you could get, like, 98% of the best, according to several criteria, at the same time, given this great expansion of the option space.
So, having competing value systems, competing criteria, sort of forever, just like our Democrats and Republicans: there seem to always be multiple parties that are useful for our progress in society, even though they might seem dysfunctional in the moment. But having the multiple value systems seems to be beneficial for, I guess, a balance of power?
That's not exactly what I have in mind. Well, although, maybe in an indirect way, it is. But if you had the chance to do something that scored well on several different metrics, your first instinct should be to do that, rather than immediately leaping to the question of which of these value systems we are going to screw over. Let's first try to do very well by all of them. Then it might be that you can't get 100% of all, and you would have to then have the hard conversation about which one will only get 97%.
There you go. There's my cynicism, that all of existence is always a trade-off.
But you say, maybe it's not such a bad trade-off; let's first see what we can get before we trade off?
Well, this would be a distinctive context in which at least some of the constraints would be removed. There will probably still be trade-offs in the end; it's just that we should first make sure we at least take advantage of this abundance. So, in terms of thinking about this, one should think, I think, in this kind of frame of mind of generosity and inclusiveness to different value systems, and see how far one can get there first. And I think one could do something that would be very good, according to many different criteria.
We talked about AGI fundamentally transforming the value system of our existence, the meaning of life. But today, what do you think is the meaning of life? The silliest, or perhaps the biggest, question: what's the meaning of life? What's the meaning of existence? What gives your life fulfillment, purpose, happiness, meaning?
Yeah, I think there are, I guess, a bunch of different related questions in there that one can ask.
Happiness, meaning...
Yeah.
All the...
I mean, you could imagine somebody getting a lot of happiness from something that they didn't think was meaningful. Like, mindlessly watching reruns of some television series, eating junk food. Maybe for some people that gives pleasure, but they wouldn't think it had a lot of meaning. Whereas, conversely, something that might be quite loaded with meaning might not be very fun, always. Like, some difficult achievement that really helps a lot of people, that maybe requires self-sacrifice and hard work. So these things can, I think, come apart, which is something to bear in mind also when we're thinking about these utopia questions. If you want to actually start to do some constructive thinking about that, you might have to isolate and distinguish these different kinds of things that might be valuable in different ways. Make sure you can clearly perceive each one of them, and then you can think about how you can combine them.
And, just as you said, hopefully come up with a way to maximize all of them together.
Yeah, or at least get a very high score on a wide range of them, even if not literally all. You can always come up with values that are exactly opposed to one another, right? But I think, for many values, they're only opposed if you place them within a certain dimensionality of your space. Like, there are shapes that you can't disentangle in a given dimensionality, but if you start adding dimensions, then it might, in many cases, just be that they're easy to pull apart. So we'll see how much space there is for that, but I think there could be a lot, in this context of radical abundance, if we ever get to that.
I don't think there's a better way to end it, Nick. You've influenced a huge number of people to work on what could very well be the most important problems of our time. So it's a huge honor. Thank you so much for talking today.
Well, thank you for coming by, Lex. That was fun.
Thank you.
Thanks for listening to this conversation with Nick Bostrom, and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter @lexfridman.
And now let me leave you with some words from Nick Bostrom.
Our approach to existential risks cannot be one of trial and error.
There's no opportunity to learn from errors.
The reactive approach, to see what happens, limit damages, and learn from experience, is unworkable.
Rather, we must take a proactive approach.
This requires foresight to anticipate new types of threats, and a willingness to take decisive, preventative action and to bear the costs, moral and economic, of such actions.
Thank you for listening and hope to see you next time.