The Joe Rogan Experience - #1350 - Nick Bostrom
Episode Date: September 11, 2019
Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test.
Transcript
And here we go.
All right, Nick, this is one of the things that scares people more than anything,
is the idea that we're creating something or someone's going to create something
that's going to be smarter than us, that's going to replace us.
Is that something we should really be concerned about?
I presume you're referring to babies.
I'm referring to artificial intelligence.
Ah, yes.
Well, it's the big fear and the big hope, I think.
Both?
At the same time, yeah.
How is it the big hope?
Well, there are a lot of things wrong with the world as it is now.
Pull this up to your face, if you would.
All the problems we have, most of them could be solved if we were smarter, or if we had somebody on our side who was a lot smarter, with better technology and so forth. Also, I think if we want to imagine some really grand future where humanity or our descendants one day go out and colonize the universe,
I think that's likely to happen, if it's going to happen at all, after we have super intelligence that then develops the technology to make that possible.
The real question is whether or not we would be able to harness this intelligence or whether it would dominate.
Yeah, that certainly is one question.
Not the only.
You could imagine that we harness it, but then use it for bad purposes as we have a
lot of other technologies through history.
So I think there are really two challenges we need to meet.
One is to make sure we can align it with human values and then make sure that we together do something better with it than fighting wars or oppressing one another.
I think, well, what I'm worried about more than anything is that human beings are going to become obsolete, that we're going to invent something that's the next stage of evolution.
I'm really concerned with that.
I'm really concerned with if we look back on ancient hominids, Australopithecus, just think of some primitive ancestor of man.
We don't want to go back to that.
That's a terrible way to live.
I'm worried that what we're creating is the next thing.
And we don't necessarily want, or at least I wouldn't be totally thrilled with, a future where humanity as it is now was the last and final word, the ultimate version, with nothing beyond.
I think there's a lot of room for improvement.
Sure.
But not everything that is different is an improvement.
Right. So the key would be, I think, to find some path forward where the best in us can continue to exist and develop to even greater levels.
And maybe at the end of that path, it looks nothing like we do now.
Maybe it's not two-legged, two-armed creatures running around with three pounds of thinking matter, right? It might be something quite different. But as long as what we value is present there, and ideally in a much higher degree than in the current world, then
that could count as a success.
Yeah, the idea that we're in a state of evolution, that,
just like we look at ancient hominids, we are eventually going to become something more
advanced, or at least more complicated than we are now. But what I'm worried about is that biological life itself has so many limitations.
When we look at the evolution of technology, if you look at Moore's Law or if you just
look at new cell phones, like they just released a new iPhone yesterday and they're talking
about all these incremental increases in the ability to take photographs and wide-angle
lenses and night mode and a new chip that works even faster.
These things... the word evolution is incorrect, but the innovation of technology is so much more rapid than anything we could ever even imagine biologically.
Like if we had a thing that we'd created, if instead of artificial intelligence in terms of, like, something in a chip or a computer,
we had created a life form, a biological life form, but this biological life form was improving radically every year. Like, it didn't even exist before; like the iPhone, it came into existence in 2007, that's when it was invented.
If we had something that was 12 years old but all of a sudden was infinitely faster and better and
smarter and wiser than it was 12 years
ago, the newest version of it, version X1, we would start going, whoa, whoa, whoa, hit
the brakes on this thing, man.
How many more generations before this thing's way smarter than us?
How many more generations before this thing thinks that human beings are obsolete?
Yeah, it's coming at us fast, it feels like.
But some people think, oh, it's slowing down now.
Who thinks it's slowing down?
Well, you have people like Tyler Cowen, and even Peter Thiel sometimes goes on about the pace of innovation not really being what it needs to be. I mean, maybe it was faster in, like, the 1890s,
but still compared to almost all of human history,
it seems like a period of unprecedented rapid progress right now.
Unprecedented.
I'd say so, yeah.
I mean, except for maybe a couple of decades,
a hundred years ago when there was a lot of electricity, the whole thing.
Yeah.
No, I agree.
I just – I don't think it's a concern because it's more of a curiosity to me.
I am concerned, but the more I look at it and go, well, this is – it seems inevitable that we're going to run into artificial intelligence.
But the questions are so open-ended.
We really don't know when,
we really don't know what form it's going to take,
and we really don't know what it's going to do to us.
Yeah, so I see it as not something that should be avoided,
neither something that we should just be completely gung-ho about,
but more like a kind of gate
through which we will have to pass at some point.
All paths that are both plausible and lead to really great futures,
I think, at some point involve the development
of greater-than-human intelligence, machine intelligence.
And so that our focus should be on getting our act together
as much as we can in whatever period of time we have
before that occurs.
Prepare ourselves.
Well, I mean, that might involve doing some research
into various technical questions, such as how you build these systems
so that we actually understand what they are doing
and they have some intended impact on the world.
It might also, if we are able to get our act together a little bit
on the kind of global political scene,
a little bit more peace and love in the world
would be good, I think.
Sure, that'd be nice.
And then refraining from destroying ourselves
through some other means
before we even get a chance
to try to needle our way through this gate.
Well, that's certainly possible.
We're certainly capable of screwing it all up.
Where is the current state of technology now in regards to artificial intelligence? And how far
away do you think we are from AGI? Well, different people have different views on that. I think the
truth of the matter is that it's very hard to have accurate views about the timelines for these things that
still involve
big new breakthroughs that have
to happen.
Certainly, over the last
eight or ten years, there has been a lot of
excitement with the deep learning revolution.
It used to be that
people thought of AI as this kind of
autistic savant, really
good at logic and counting and memorizing facts, but with no intuition.
And with this deep learning revolution, when people began to do these deep neural networks, you
kind of solved perception in some sense.
You can have computers that can see, that can hear,
and that have visual intuition.
So that has enabled a whole wide suite of applications,
which makes it commercially valuable,
which then drives a lot of investment in it.
So there's now quite a lot of momentum in machine learning and trying to kind of stay ahead of that.
It's interesting that when we think about artificial intelligence
and whatever potential form that it's going to take,
if you look at films like 2001, like HAL,
like, open the door, HAL, you know,
like we think of something that's communicating to us
like a person would and maybe is a little bit colder and doesn't
share our values and has a more pragmatic view of life and death and things.
When we think of intelligence, though, I think intelligence in our mind is almost inextricably
connected to all the things that make us human, like emotions and ambition and all these things, like the reason why
we innovate.
It's not really clear.
We innovate because we enjoy innovation and because we want to make the world a better
place and because we want to fix some problems that we've created and we want to solve some
limitations of the human body and the environment that we live in.
But we sort of assume that intelligence that we create will
also have some motivations. Well, there is a fairly large class of possible
structures you could build if you want to do anything that has any kind of cognitive or
intellectual capacity at all. A large class of those would be what we might call agents.
So these would be systems that interact with the world in pursuit of some goal.
And there is a sophisticated class of agents
that can plan ahead a sequence of actions. More primitive
agents might just have reflexes.
But a sophisticated agent
might have a model of the world where it can
kind of think ahead before it starts doing stuff.
It can kind of think, what would I need to do in order to reach this desired state?
And then reason backwards from that.
So I think it's a fairly natural thing to aim for. It's not the only possible cognitive system you could build,
but it's also not this weird, bizarre, special case, you know.
If you're able to specify the goal,
something you want to achieve,
but you don't know how to achieve it,
a natural way of trying to go about that
is by building the system that has this goal
and is an agent and then moves around
and tries different things
and eventually perhaps learns to solve that task.
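A minimal sketch of the kind of agent described above, in Python. The world model, states, and actions here are invented purely for illustration and come from nothing in the conversation; the point is an agent that holds a model of the world and reasons backwards from a desired state to a sequence of actions, rather than reacting by reflex.

```python
from collections import deque

# A toy world model: (state, action) -> next state.
# The states and actions are made up for illustration only.
model = {
    ("at_home", "walk_to_car"): "at_car",
    ("at_car", "drive_to_store"): "at_store",
    ("at_store", "buy_food"): "has_food",
    ("at_home", "order_delivery"): "has_food",
}

def plan(start, goal):
    """Reason backwards from the goal: search the reversed transition
    graph until we reach the start state, then return the forward plan."""
    # Build reversed edges: next_state -> list of (previous_state, action).
    reverse = {}
    for (state, action), nxt in model.items():
        reverse.setdefault(nxt, []).append((state, action))

    # Breadth-first search from the goal back toward the start.
    frontier = deque([(goal, [])])
    visited = {goal}
    while frontier:
        state, actions_after = frontier.popleft()
        if state == start:
            return actions_after            # already in forward order
        for prev_state, action in reverse.get(state, []):
            if prev_state not in visited:
                visited.add(prev_state)
                frontier.append((prev_state, [action] + actions_after))
    return None  # no plan reaches the goal from this start state

print(plan("at_home", "has_food"))  # ['order_delivery']
```

A reflex agent, by contrast, would map the current state straight to an action, with no world model and no lookahead at all.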
Do you anticipate different types
of artificial intelligence,
like artificial intelligence that mimics the human emotions? Like, do you think that people will construct something that's
very similar to us, in a way that we can interact with it in common terms, or do you think
it will be almost like communicating with an alien?
So there are different scenarios here.
I mean, I guess, my guess is that the first thing
that actually achieves super intelligence
would not be very human-like.
There are different possible ways
you could try to get to this level of technology.
One would be by trying to reverse engineer the human brain.
We have an existence proof there, in the limiting case.
You might imagine if you just made an exact duplicate
in silicon of the human brain,
like every neuron had some counterpart.
So that seems technologically very difficult to do,
but it wouldn't require a big theoretical breakthrough to do it.
You could just, if you had sufficiently good microscopy
and large enough computers and enough elbow grease,
you could kind of do it. But it seems to me plausible
that what will work, before we are able to do it that way,
will be some more synthetic approach
that bears only a very rough resemblance,
maybe, to the neocortex.
Yeah, that's one of the big questions, right?
Whether or not we can replicate
all the functions of the human brain in the way it functions and mimic it exactly, or whether we
could have some sort of superior method that achieves the same results that the human brain
does in terms of its ability to calculate and reason and do multiple tasks at the same time.
Yeah. And I also think that maybe once you have a sufficiently high level of this general form of intelligence,
then you could use that maybe to emulate or mimic
things that we do differently.
So maybe our cortex is quite limited,
so we rely a lot on earlier neurological structures that we have.
We have to be guided by emotion
because we can't just calculate everything out
and instinct.
And if we lost all of that,
we would be helpless.
But maybe some system
that had a sufficiently high level
of this more abstract reasoning capability
could maybe use that to substitute
for things that weren't built in,
in the same way that they are in us.
Have you ever talked to Sam Harris about this?
Yeah, a little bit.
Have you ever had a podcast with him?
Yeah, actually, he had me on his podcast half a year ago.
I'll have to listen to it, because he has the worst view of the
future in terms of artificial intelligence. He's terrified of it. And when I talk to him, he terrifies me. And Elon Musk is right up there.
He also has a terrifying view
of what artificial intelligence
could potentially be.
What do you say to those guys?
Well, I mean, I do think
that there are these
significant risks that will be associated
with this transition to the machine
intelligence era.
Including existential risks, threats to the very survival of humanity or what we care about.
So why are we doing this?
Well, there are a lot of things we're doing that maybe globally it would be better if we didn't do.
Why do we build thousands of nuclear weapons?
Why do we overfish the oceans?
Yeah. If one actually asks why do different individuals work on AI research or why do different companies and governments fund it, I mean, there are a lot of explanations.
It's like a great scientific endeavor.
If you can make the Google search engine 1% better, that's got to be worth like a billion dollars right off the bat.
It's become a kind of prestige thing now where nations want to have some sort of strategy
because it's seen as this new frontier.
Just like when you had steam engines
and industrialization a few hundred years ago
and electricity,
like it's going to just open up
a lot of economic opportunities.
You want to be in there.
You don't want to be the one saying,
we are going to do subsistence agriculture
while the rest of the world is moving on.
So there's a lot of,
it's kind of overdetermined.
You could remove some of these reasons
and there would still be enough reasons
for why people would be pushing forward with this.
One of the things that scares me the most
is the idea that if we do create
artificial intelligence,
then it will improve upon our design and create far more sophisticated versions of itself.
And that it will continue to do that until it's unrecognizable, until it reaches literally a godlike potential.
Maybe you could tell us.
But someone had calculated, some reputable source had calculated the amount of improvement that sentient artificial intelligence would be able to create inside of a small window of time.
Like if it was allowed to innovate and then make better versions of itself, and those better versions of itself were allowed to innovate and make better versions of itself.
You're talking about not an exponential increase of intelligence, but an explosion.
Well, we don't know.
It's hard to forecast the pace at which we will make advances in AI, because we just don't know
how hard the problems are that we haven't yet solved.
Once you get to human level or a little bit above, who knows?
It could be that there is some level where to get further,
you would need to put in a lot of thinking time to kind of get there.
Now, what is easier to estimate is if you just look at the speed,
because that's just a function of the hardware that you're running it on, right?
So there we know that there is a lot of room in principle.
If you look at the physics of computation
and you look at what would an optimally arranged physical system be
that was optimized for computation,
that would be many, many orders of magnitude above what we can do now.
And then you could have arbitrarily large systems like that.
So from that point of view,
we know that there could be things that would be like a million times faster
than the human brain and with a lot more memory and stuff like that.
And then something, if it did have a million times more power than the human brain, it could create something with a million times more computational power than itself.
It could make better versions.
It could continue to innovate. Like if we create something and we say,
you are, I mean, it is sentient.
It is artificial intelligence.
Now, please go innovate.
Please go follow the same directive and improve upon your design.
Yeah, well, we don't know how long that would take then to get to something. We already have sort of millions of times more thinking capacity than a human has.
I mean, we have millions of humans.
Right.
So if you kind of break it down, you think there's like one milestone when you have maybe an AI that could do what one human can do.
But then that might still be quite a lot of orders of magnitude until it would be equivalent of the whole human species.
And maybe during that time, other things happen, maybe we upgrade our own abilities in some way.
So there are some scenarios where it's so hard to get even to one human baseline that we use this
massive amount of resources just to barely create, kind of, you know, a village idiot.
Yes.
Using billions of dollars of compute, right?
So if that's the way we get there, then, I mean, it might take quite a while because
you can't easily scale something that you've already spent billions of dollars building.
Yeah.
Some people think the whole thing is blown out of proportion, that we're so far away
from creating artificial general intelligence that resembles human beings, that it's all just vaporware.
What do you say to those people?
Well, I mean, one response would be that I would want to be more precise about just how far away
it has to be in order for it to be rational for us to ignore it.
It might be that if something is sufficiently important and high stakes, then
even if it's not going to happen in the next
5, 10, 20, 30 years, it might still
be wise for
our pool of 7 billion plus
people to have some people
actually thinking about this ahead of time.
Yeah, for sure.
So some of these disagreements, I guess this is my point,
are more apparent than real.
Some people say it's going to happen soon, and some other people say, no, it's not going to happen for a long time.
And then one person means by soon, five years, and another person means by a long time, five years.
And it's more of different attitudes rather than different specific beliefs.
So I would first want to make sure that there actually is a disagreement.
Now, if there is, if somebody is very confident that it's not going to happen in hundreds and hundreds of years,
then I guess I would want to know their reasons for that level of confidence.
What's the evidence they're looking at?
Do they have some ground for being very sure about this?
Certainly the history of technology prediction is not that great.
You can find a lot of other examples
where even very eminent
technologists and scientists were
cocksure it's not going to happen
in our lifetime.
In some cases it had actually already just happened
in some other part of the world,
or it happened a year or two later.
So I think some
epistemic humility with these things would be wise.
I was watching a talk that you were giving,
and you were talking about the growth of innovation and technology and GDP
over the last 100 years.
And you were talking about the entire history of life on Earth
and what a short period of time humans have been here.
And during that short period, what a stunning amount of innovation and how much change we've
enacted on the Earth in just a blink of an eye. And you had the scale of GDP over, you know, the
course of the last hundred years. It's crazy, too, because it's so difficult for us with our current perspective, just being a person, living, going about the day-to-day life that seems so normal, to put it in perspective time-wise and see what an enormous amount of change has taken place in, relatively, an incredibly short amount of time.
Yeah.
I mean, we think of this as sort of the normal way for things to be.
The idea that the alarm wakes you up in the morning,
and then you commute in and sit in front of a computer all day,
and you try not to eat too much.
And that if you sort of imagine that, you know, maybe in 50 years or 100 years
or at some point in the future, it's going to be very different.
That's like some radical hypothesis.
But, of course, this quote-unquote normal condition is a huge anomaly
any which way you look at it.
I mean, if you look at it on a geological timescale,
the human species is very young.
If you look at it historically, for more than 90% of it
we were just hunter-gatherers running around,
and then agriculturalists, and it's only in the last couple of hundred years that some parts of the world have escaped the Malthusian condition, where you basically only have as much income as you need to be able to produce two children.
And we have the population explosion.
Like all of this is very, very, very recent.
And in space as well, of course, almost everything is ultra-high vacuum, and we live on the surface of this little special crumb. And yet we think this is normal and everything else is weird, but
I think that's a complete inversion. And so when you do plot, for example, world GDP,
which is a kind of rough measure for the total amount of productive capability that we have, right?
If you plot it over 10,000 years, what you see is just a flat line and then a vertical line.
And you can't really see any other structure.
It's so extreme, the degree to which humanity is productive.
So if one looks at this picture and now we imagine this is now the normal,
this is the way it's going to be now indefinitely,
it just seems prima facie implausible.
It doesn't look like we are in a static period right now.
It looks like we're in the middle of some kind of explosion.
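A rough sketch of the plot being described, using stylized numbers rather than real data: assume output grows at something like 0.05% a year for most of the last ten millennia and around 3% a year after roughly 1800. Both growth rates are illustrative assumptions, not measurements; they are chosen only to reproduce the shape Bostrom describes, a flat line followed by a near-vertical line on a linear axis.

```python
# Stylized (not real) world-output series over 10,000 years.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(-8000, 2021)                    # 8000 BCE to 2020 CE
growth = np.where(years < 1800, 0.0005, 0.03)     # assumed annual growth rates
gdp = np.cumprod(1 + growth)                      # arbitrary units, starts at ~1

plt.plot(years, gdp)
plt.xlabel("Year")
plt.ylabel("World output (arbitrary units)")
plt.title("Stylized world GDP: flat for millennia, then a near-vertical spike")
plt.show()
```

On this linear scale the pre-1800 portion is visually indistinguishable from zero, which is the "flat line and then a vertical line" point.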
Explosion.
And oddly enough, everyone involved in the explosion,
everyone that's innovating,
everyone that's creating all this new technology,
they're all a part of this momentum
that was created before they were even born.
So it does feel normal.
They're just a part of this whole spinning machine.
And they jump in, they're born, they go to college,
next thing you know, they have a job,
and they're contributing, they're making new technology,
and then more people jump in and add on to it.
And there's very little perspective
in terms of the historical significance
of this incredible explosion technologically.
When you look at what you're talking about,
that gigantic spike, no one feels it,
which is one of the weirdest things about it.
I mean, you kind of expect every year
there will be a better iPhone or whatever.
Yes, if not, we'd be upset.
For almost all of human history,
people lived and died,
and saw absolutely no technological change.
And in fact, you could have many, many generations.
The very idea that there was some
trajectory in the material conditions is a relatively new idea. I mean, people thought of
history either as, you know, some kind of descent from a golden age, or some people had a cyclical
view, but it was all in terms of political organization.
There would be a great kingdom,
and then a wise ruler would rule for a while,
and then a few hundred years later,
their great-great-grandchildren would be too greedy,
and it would come into anarchy,
and then a few hundred years later,
it would come back together again.
So it would be all these pieces moving around,
but no new pieces really entering.
Or if they did, it was at such a slow rate that you didn't notice.
But over the eons, the wheel slowly turns,
and somebody makes a slightly better wheel,
somebody figures out how to irrigate a lot better,
they breed better crops,
and eventually there is enough that you could have enough of a population
enough brains
that then create more ideas at a quick enough rate
that you get this Industrial Revolution,
and that's where we are now.
Elon Musk
had the most terrifying description of humanity.
He said that we are the biological
bootloader for artificial intelligence.
That's what we're here for.
Well, bootloaders are important.
They are important, but I think there's like objectively and there's personally.
Like objectively, if you were outside of the human race
and you were looking at all these various life forms competing on this planet for resources and for survival, you would look at humanity and
go, well, you know, clearly that's not finished, so there's going to be another version
of it. It's like, when is this version going to take place? Is it going to take place over millions and
millions of years, like it has historically when it comes to biological organisms, or is it going to invent something that takes over from there, and then that's the new thing,
something that's not based on tissue, something that's not based on cells? It doesn't
have the biological limitations that we have, nor does it have all the emotional
attachments to things like breeding, social dominance, hierarchies. All those things are of no consequence to it.
It doesn't mean anything because it's not biological.
Yeah, yeah.
I mean, I don't think millions of years.
I mean, a number of decades or whatever.
But it's interesting that even if we set that aside,
we say machine intelligence is possible for some reason.
Let's just play with that. I still think
that would be very rapid change, including
biological change.
We are
making great advances
in biotech as well,
and we'll increasingly
be able to control what
our own organisms are doing
through different means
and enhance human capacities through biotechnology.
And so even there, it's not going to happen overnight,
but over a historically very short period of time,
I think you would still see quite profound change
just from applying bioscience to change human capacities.
Yeah, one of the technologies or one of the things that's been discussed to sort of mitigate
the dangers of artificial intelligence is a potential merge, some sort of symbiotic
relationship with technology that you hear discussed.
Like, I don't know exactly how Elon's Neuralink works, but it seems like a step in that direction.
There's some sort of a brain implant that interacts with an external device, and all of this increases the bandwidth for available intelligence and knowledge.
Yeah, I'm sort of skeptical that that will work.
I mean, good that somebody tries it, you know, but
I think it's quite
technically hard to
improve
a normal healthy human being's
say,
cognitive capacity or other capacities by
implanting things in them
and get benefits that you couldn't
equally well get by having the gadget
outside of the body. So, I don't need to have an implant to be able to use Google, right?
Right.
And there are a lot of advantages to having it external. You can kind of upgrade it very easily,
you can shut it off, because, well, hopefully you could do that even with an implant. And once you
start to look into the details,
there's these kind of demos,
but then if you actually look at the papers often,
you find, well, and then there were these side effects
and the person had headaches or they had some deficit
and their speech didn't work, or, like, an infection.
Like, it's just, biology is messy.
Yes.
So, you know, maybe it will work better than I expect.
That could be good.
But otherwise, I think that the place where it will first become possible
to enhance human biological capacities would be through genetic selection,
which is technologically something very near.
You mean like CRISPR type?
So that would be editing, right?
When you actually go in and change things.
That also is moving.
What do you mean by selection?
So this would just be in the context of, say,
in vitro fertilization.
You have usually some half dozen
or a dozen embryos created
during this fertility procedure,
which is standardly used.
So rather than just the doctor kind of looking at these embryos and saying, well, that one looks
healthy, I'm going to implant that, you could run some genetic test and then use that as a
predictor and select the one you think has the most desirable attributes. And so this could be
a trend in terms of how human beings reproduce, that
we, instead of just randomly having sex, woman gets pregnant, gives birth to a child, we don't know what
it's going to be, what's going to happen, we just hope that it's a good kid, instead of that,
you start looking at all the various components that we can measure. Yeah.
And so, I mean, to some extent we already do this.
There is a lot of testing done for various chromosomal abnormalities
that you can already check for.
But our ability to look beyond clear, stark diseases,
where this one gene is wrong,
to look at more complex traits
is increasing rapidly. So obviously there are a lot of ethical issues there, yeah, but if we're just
talking about what is technologically feasible, I think that, I mean, already you could do a very
limited amount of that today, and maybe you would get two or three IQ points in expectation more
if you selected using current technology based on 10 embryos, let us say.
So very small.
But as genomics gets better at deciphering the genetic architecture
of complex traits, whether it's intelligence or personality attributes,
then you would have more selection power and you could do
more. And then there are a number of other technologies we don't yet have, but which, if
you did, would then kind of stack with that and enable much more powerful forms of enhancement.
So there, yeah, I don't think there are any major technological hurdles really in the way,
just some small amount of incremental further improvement.
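A rough Monte Carlo sketch of the selection logic described above: simulate a batch of sibling embryos, score each with a noisy polygenic predictor, pick the top scorer, and measure the expected gain in the true trait. The within-family spread and the 5% variance-explained figure are assumptions chosen for illustration, not numbers from the conversation; under those assumptions the expected gain comes out at a few points, in the same ballpark as the figure mentioned above.

```python
# Monte Carlo sketch of "select the best-scoring of 10 embryos".
# Assumptions (illustrative only): trait SD of 15 points in the population,
# sibling embryos vary around a shared family mean with roughly 1/sqrt(2)
# of that SD, and a polygenic predictor captures only a small share (r2)
# of the within-family variation.
import numpy as np

rng = np.random.default_rng(0)
n_embryos = 10
n_trials = 20_000
sibling_sd = 15 / np.sqrt(2)   # simplified within-family standard deviation
r2 = 0.05                      # assumed variance explained by the predictor

gains = []
for _ in range(n_trials):
    true_trait = rng.normal(0.0, sibling_sd, n_embryos)      # deviations from family mean
    noise_sd = sibling_sd * np.sqrt(1 / r2 - 1)               # noise so corr^2(pred, true) = r2
    predicted = true_trait + rng.normal(0.0, noise_sd, n_embryos)
    chosen = np.argmax(predicted)                             # pick the best-scoring embryo
    gains.append(true_trait[chosen] - true_trait.mean())      # gain vs. picking at random

print(f"Expected gain from selecting 1 of {n_embryos}: {np.mean(gains):.2f} points")
```

The gain scales with how accurate the predictor is and with how many embryos there are to choose from, which is why Bostrom says better genomics and other technologies would "stack" to give more selection power.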
When you talk about doing something with genetics and human beings and selecting,
selecting for the superior versions, and then if everybody starts doing that,
the ethical concerns, when you start discussing that, people get very nervous
because they start to look at their own genetic defects and they go, oh my God, what if I didn't
make the cut? Like, I wouldn't be here. And you start thinking about all the imperfect people
that have actually contributed in some pretty spectacular ways to what our culture is, and, like,
well, if everybody has perfect genes, would all these things even take place? Like, what are we
doing really if we're bypassing nature
and we're choosing to select for the traits and the attributes
that we find to be the most positive and attractive?
Like, what are, like, that gets slippery.
And you think what would have happened if, say, some earlier age
had had this ability to kind of lock in their, you know,
their prejudices or if the Victorians had had this,
maybe we would all be whatever, pious and patriotic now or something.
Yeah, we know, like the Nazis.
Another, yeah.
So in general, with all of these powerful technologies we are developing, there is, I think the ideal course would be that we would first gain a bit more wisdom and then we would get all of these powerful tools.
But it looks like we're getting the powerful tools before we have really achieved a very high level of wisdom.
Yeah.
But we haven't earned them.
The people that are using them sort of haven't earned them. Like, think about the technology that all of us use.
How many pieces of technology do you use in a day
and how much do you actually understand any of those?
Most people have very little understanding of how any of the things they use work.
They put no effort at all into
creating those things, but yet they've inherited the responsibility of the power that those things
possess.
Yeah, I mean, that's the only way we can do it. It's just way too complex for any person.
If you had to sort of learn how to build everything, every tool you use, like, you wouldn't get very far.
isn't that fascinating though when you think about human beings and all the different things we do?
We have very little understanding of the mechanisms behind most of what we need for day-to-day life, yet we just use them because there's so many of us and so many people are understanding various parts of all these different things that together collectively we can utilize the intelligence of all these millions of people that have innovated. And we with no work whatsoever just go into the Verizon store and pick up the new phone.
Yeah.
I mean and not just technology but worldviews and political ideas as well.
It's not as if most people sit down with an empty table trying to think from the basic principles up what would be the ideal configuration of the state
or something like that.
You just kind of absorb it
and go with it.
You float in the stream of culture.
And it's amazing just how little of that actually
at any point channels through your sort of conscious attention
where you make some rational, or otherwise
deliberate, decision.
Most of it you just get carried along with. So, but that again, I mean, if this is what we have to work with, then there's no other way.
There's no other way, possibly. There's no other way. And there's no way, even, like, you and
I discussing this, like discussing the history of this incredible spike of evolution, or innovation rather, in technology,
it just doesn't feel like anything.
It feels normal.
So even though we can intellectualize it, even though we can have this conversation,
talk about what an incredible time we're in and how terrifying it is,
that things are moving at such an incredibly rapid rate,
and no one's putting the brakes on it.
No one's thinking about the potential pros and cons.
We're just pushing ahead.
Yeah.
Well, not nobody.
Not no one, but a very small—
I mean, there are a few people.
I've got my research group.
Yes.
There's actually been an increase. I mean, when I got interested in these things in the 90s,
it was very much a fringe activity.
There was some internet mailing list and people
exchanging ideas. But
since then, I mean, there's now a small
set of
academic research institutes and some other
that are kind of actually trying to do more
systematic thinking about
some of these big picture topics.
When did it seem like it was possible?
Like if you got involved in it in the 90s,
it must have seemed like some very fringe
sort of pie in the sky idea
of artificial general intelligence.
Ah, so we're talking specifically about AI?
Well, actually,
the field of artificial intelligence
sometimes is kind of dated to 1956.
That was a conference, but I mean, it's somewhat arbitrary,
but roughly that's when it got started.
But the pioneers, even right back at the beginning,
thought that they were going to be able to do all the things
that the human brain does.
And in fact, they were quite optimistic,
like they thought maybe 10 years or something like that.
Back then?
Yeah, many of them.
Really?
Even before computers?
No, they had computers in 1956.
How did they... yeah, what kind of computers?
Well, slow. Yeah, slow computers.
When was the computer invented?
Well, it's one of those things. I think during the Second World War they had computers that were useful
for doing stuff. Then before that
they had kind of tabulating machines
and before that
they had designs for things that
if they had been put together would have been able to
calculate a lot of numbers.
And then before that they had the abacus.
It kind of... there's a number of steps.
The line from having some external tool, like a notepad with which you can calculate bigger numbers, right,
if you can scribble on a piece of paper, to a modern-day supercomputer, you can break it down into small steps, and they happened gradually.
But yeah, roughly since the 40s or so.
That's when they first invented code?
Like electrical, yeah.
Yeah.
I think.
So even back then,
they thought we're only about 10 years away.
Well, no.
So in the mid-50s,
when people started using the word artificial intelligence,
some of these AI researchers at the time
were quite optimistic about the timelines.
In fact, there was some summer project where they were going to have a few students or whatever
work over the summer.
And they thought, oh, maybe we can solve vision over the summer.
And now we've kind of solved vision,
but that's like 60 years later.
So it can be hard to know how hard a problem is
until you've actually solved it.
But the really interesting thing to me is that even though
I can understand why they were wrong about how difficult it is
because how would you know, right?
If it's 10 years of work or 100 years of work,
kind of hard to estimate at the outset.
But what is striking is that even the ones who thought it was 10 years away,
they didn't think of what the obvious next step would be after that.
Like if you actually succeeded at mechanizing all the functions of the human mind,
they couldn't think, well, it's obviously not going to stop there
once you get human equivalence.
Like you're going to get super intelligence.
But it was as if the imagination muscle had so exhausted itself
thinking of this radical possibility.
You could have a machine that does everything that the human mind does,
that it couldn't kind of take the next step after that,
or for that matter, the immense ethical and social implications.
Even if all you could do is to replicate a human mind,
like in a machine,
if you actually thought you were building that
and you were 10 years away,
it'd be crazy not to spend a lot of time thinking about how this is going to impact the world.
But that didn't really seem to have occurred much to them at all.
Well, sometimes it seems that people just want to do it.
Like even with the creation of the atomic bomb,
I mean, they felt like they had to do it because we had to develop it before the Germans did.
Right, but that was a specific reason.
It wasn't just, oh, it could be fun to do.
Sure.
And so with the Manhattan Project, obviously it was during wartime,
and maybe Hitler had a program they thought,
so you could easily see why that would motivate a lot of people.
But even before they actually started the Manhattan Project,
so the guy who kind of first conceived
of the idea that you could make
a nuclear explosion, Leo Szilard,
he was a kind of eccentric physicist
who conceived of the idea of chain reaction.
So it's been known before that
that you could split the atom
and a little bit of energy came out.
But if you're going to split one atom at a time,
you're never going to get anything because it's too little.
So the idea of a chain reaction was that if you split an atom
and it releases two neutrons,
then each of those can split another two atoms
that then release four neutrons
and you get an exponential blow-up.
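A toy illustration of the doubling Szilard had in mind, sketched in Python. It is grossly simplified (real neutron multiplication factors, losses, and timescales are ignored); the point is only why a handful of doubling generations turns "too little" energy into an enormous amount.

```python
# Toy chain-reaction arithmetic: each fission releases two neutrons,
# each of which triggers another fission. (Grossly simplified.)
fissions = 1
for generation in range(1, 81):
    fissions *= 2
    if generation % 20 == 0:
        print(f"generation {generation}: {fissions:.2e} fissions")

# After ~80 doublings you exceed 10^24 fissions, roughly a mole of atoms,
# which is why the released energy stops being negligible.
```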
So he thought of this.
I forget exactly when
it must have been in the
early 30s probably
and
he was a remarkable person because
he didn't just think, oh this is a fun idea
I should publish it and get a lot of
citations. But he thought, what would this mean for the
world?
This could be bad
for civilization.
So he then went to try to persuade some other of his colleagues
who were also working in nuclear physics not to pursue this,
not to publish on related areas and had some partial success.
So there was some partial success where his colleagues agreed.
Some things were not published immediately.
Not all of his colleagues listened to him. Of course. Isn't that the problem?
That is the problem. Some people are always going to want to be the ones that sort of innovate.
That is the problem in those cases where you would actually prefer the innovation
not to happen. Historically, of course, we now look back and think
there are a lot of dissenters that
we are now glad could have their way
because a lot of cultures were quite resistant to innovation.
And they wanted to do the way things had always been,
whether it's like social innovation or technological innovation.
The Chinese were at one point ahead in seafaring, exploring,
and then they shut all of that down
because the emperor at the time, I guess, didn't like it.
And so there are many examples of kind of stasis,
but as long as there were a lot of different places,
a lot of different countries, a lot of different mavericks,
then somebody would always do it,
and then once the others could see that it worked,
they could kind of copy and things move forward.
But of course, if there is a technology
you actually want not to be developed,
then this multipolar situation makes it very, very hard
to coordinate, to refrain from doing that.
And yeah, this I think is a kind of structural problem in the current human condition that is ultimately responsible for a lot of the existential risks that we will face in this century.
There's this kind of failure of ability to solve global coordination problems.
And when you think about the people that did it, Oppenheimer and the people behind the Manhattan Project, they were inventing this to deal with this existential threat, this horrific threat from Nazi Germany, the Japanese, and World War II, this idea that this evil empire is going to try to take over the world. And this created the momentum, and this created the motivation, to develop this incredible technology that wound up making a great amount of our electricity and wound up creating
enough nuclear weapons to destroy the entire world many times over. And we're in this strange state
now, where it was motivated by this horrific moment in history, this evil empire that tries to take over the world, and we come up with this incredible technological solution, the ultimate weapon, that we detonate a couple of times on some cities, and then now we're in this weird state where we're, how many years later, 80 years later, and we're not doing it anymore.
We don't drop any bombs on people anymore,
but we all have them, and we all have them pointed at each other.
Well, not all.
Well, yes.
Which is a good thing, I think.
Quite a few.
But it's incredible that the motivation for this incredible technology,
this amazing technology,
was actually to deal with something that was awful.
Yeah, yeah.
I mean, war has been,
like, has had a way of focusing minds and stuff.
Now, I think that nuclear energy we would have had anyway.
Maybe it would have been developed,
like, five years or ten years later.
Reactors are not that difficult to do.
So I think we could have gotten to all the good uses
of nuclear technology that we have today
without having to have had kind of the nuclear bomb developed.
Now, you pay attention to like Boston Dynamics and all these different robotic creations that they've made.
They seem to have a penchant for doing really sinister looking bots.
I think all robots that are, you know, anything that looks autonomous, is kind of sinister looking.
Well, I mean, you see the Japanese ones, yeah, I mean,
like the Japanese have these, like, big eyes, sort of rounded.
So it's a different look.
They're trying to trick us.
Boston Dynamics is, I guess,
they want the Pentagon to give them funding or something.
Right, DARPA.
Yeah, they look like they're developing Terminators.
Yeah.
Yeah.
But what I was thinking is if we do eventually come to a time where those things are going to war for us instead of us, like if we get involved in robot wars, our robots versus their robots, and this becomes the next motivation for increased technological innovation
to try to deal with superior robots
by the Soviet Union or by China.
These are more things that could be threats
that could push people to some crazy level
of technological innovation.
Yeah, it could.
I think there are other drivers
for technological innovation as well
that seem plenty strong.
Sure.
Commercial drivers, let us say, that we wouldn't have to rely on war or the threat of war to kind of stay innovative.
And I mean, there has been this effort to try to see if it would be possible to have some kind of ban on lethal autonomous weapons.
Just as, I mean, there are a few technologies that we have, like there has been a relatively successful ban on chemical and biological weapons, which have by and large been, you know, honored and upheld.
There are kind of treaties on nuclear weapons,
which has limited proliferation.
Yes, there are now maybe, I don't know,
a dozen, I don't know the exact number,
but it's certainly a lot better than 50 or 100 countries.
Yes.
And some other weapons as well,
blinding lasers, landmines, cluster munitions.
So some people think maybe we could do something like this
with lethal autonomous weapons, killer bots.
Is that really what humanity needs most now,
like another arms race to develop killer bots?
It seems arguably the answer to that is no.
I've kind of, while a lot of my friends are supportive,
I kind of stood a little bit on the sidelines on that particular campaign,
being a little unsure exactly what it is that...
I mean, certainly I think it'd be better
if we refrained from having some arms race
to develop these than not.
But if you start to look in more detail,
what precisely is the thing that you're hoping to ban?
So if the idea is the autonomous bit,
like the robot should not be able
to make its own firing decision.
Well, if the alternative to that is
there is some 19-year-old guy
sitting in some office building
and his job is whenever the screen flashes fire now,
he has to press a red button.
And then exactly the same thing happens.
I mean, I'm not sure how much is gained by having that extra step.
But it is something, it feels better for us.
For some reason, someone is pushing the button.
Right, but exactly what does that mean?
Like in every particular firing decision?
Or is it like some...
Well, you've got to attack this group of surface ships here.
And here are the general parameters.
And you're not allowed to fire outside these coordinates.
I don't know.
I mean, another is the question of...
It would be better if we had no wars.
But if there is going to be a war,
maybe it is better
if it's robots v. robots.
Or if there's going to be bombing, like, maybe you want the bombs to have high precision rather than low precision, like, get fewer civilian casualties.
And operating under artificial intelligence, so it makes better decisions.
Well, it depends, like, exactly on how. So I don't know. I mean, on the other hand,
you could imagine it kind of reduces the threshold
for going to war
if you think that you wouldn't fear any casualties.
Maybe you would be more eager to do it.
Right.
Or if it proliferates
and you have these kind of mosquito-sized killer bots
that terrorists have
and it doesn't seem like a good thing to have a society
where you have like a facial recognition thing and then the bot flies out and you just have a kind of dystopia.
So, yeah.
I think we're thinking rationally.
We're thinking rationally given the overall view of the human race that we want peace and everything to be well.
But realistically, if you were someone who is trying to attack someone militarily, you would want the best possible weapons that give you the best possible advantage.
And that's why we had to develop the atomic bomb first.
It's probably why we'll develop the – or we'll try to develop the killer autonomous robot first.
Yeah, yeah.
Because someone else would have it.
Right, the fear that the other side will. So this is why it's basically a coordination problem.
It's hard for any one country unilaterally
to make sure that the world is peaceful and kind.
It requires everybody to synchronize their actions.
And then you can have successes like we've had with some of these treaties.
Like we've not had a big arms race in biological weapons or in chemical weapons.
I mean, there have been.
There were cheaters even on the biological weapons ban; like, the Soviet Union had massive efforts there.
But still probably less use of that and less development than if there had been no such treaty.
And just look at the amount of money being wasted every year to maintain these large arsenals so that we can kill one another if one day we decide to do it.
There's got to be a better way.
But getting there is hard.
We would hope that we would get to some point where all this would be irrelevant. Yeah. Because there's no more war.
Yeah, and so if you look at the biggest efforts so far
to make that happen,
so after the First World War,
people were really aware of this.
They said, this sucks, like war.
I mean, look at this.
Like, a whole generation just ground up by machine guns.
Like this is, got to make sure this never happens again.
So they tried to do the League of Nations,
but then didn't really invest it with very much power.
And then the Second World War happened.
And so then again, just after that, it's fresh in people's memory saying,
well, never again, this is it, the United Nations.
And in Europe, the European Union,
it's kind of both designed as ways to try to prevent this.
But again, with kind of maybe in the case of the United Nations,
quite limited powers to actually enforce the agreement.
And there's a veto, which makes it hard if it's two of the major powers that are at loggerheads.
So it might be that if there were a third big conflagration, that then people would say, well, this time, you know, we've got to really put something kind of institutional solution in place that has enough enforcement power that we don't try this yet again.
So we don't have a second robot war.
So once we get to the first robot war.
I mean, but the kind of memories fade, right?
Yes, that's the problem.
So even the Cold War, I mean, I grew up, I'm Swedish.
I remember we were kind of in between, right?
And we were taught in schools about nuclear fallouts and stuff.
It was like a very palpable sense that at any given point in time,
there could be some miscalculation or crisis or something.
And all the way up to senior statesmen at the time,
this was like a very real and very serious thing.
And I feel that memory of just how bad it is to live in that kind of hair-trigger nuclear arms race
Cold War situation has kind of faded.
And now we think, wow, maybe the world didn't blow up, so maybe
it wasn't so bad after all. Well, I think that would be
the wrong lesson to learn.
It's a bit like you're
playing Russian roulette and you survive
one and you say, well, it isn't so dangerous
at all to play Russian roulette. I think I'm going to
have another go. You've got to realize
like, well, maybe that was a 10% chance
or a 30% chance that the world would blow up
during the Cold War, and we were lucky. But it doesn't mean we want to have another one.
When I was in high school it was a real threat. When I was in high school, everyone was terrified that we were going
to go to war with Russia. It was a big thing. And you talk to people from my generation
about that, and everybody remembers it, remembers that feeling that you had in high school, that,
like, at any day something
could go wrong and we could be at war with another country that's a nuclear superpower. Yeah, but that's
all gone now. Like, that feeling, that fear, people are so confident that that's not going
to happen that that's not even in people's consciousness.
And then a number of maneuvers are made,
and then you find yourself in a kind of situation
where there's like honored stake and reputation
and you feel you can't back down
and then another thing happens
and you get into this place
where if you even say something kind about the other side,
you're seen to be like, you know,
you're soft, you're a pinko, or, like...
And on both
sides, on the other side as well, obviously, they're going to have the same internal dynamic.
And each side says bad things about the other, it makes the other side hate them even more.
And these things are then hard to reverse. Like once you find this dynamic happening,
it's kind of almost, oh, it's not too late, you can try it, but it can be very hard to back out
of that. And so if you can prevent yourself from going down that path to begin with
that's much preferable.
When you see Boston Dynamics and you see those robots, is there
something comparable that's being developed, either in the Soviet Union or in China or somewhere else
in the world, where there's similar type robots?
Well, I think a lot of the Boston Dynamics thing
seems more showy than actually useful.
Really?
These kind of animal-like things that hop around at 150 decibels or something.
If I were a special ops trying to sneak in,
I wouldn't want this kind of big alarm.
Right.
But I think a lot of action would be more in terms of flying drones, maybe submarine stuff, missiles, that kind of stuff.
But when you see these robots and you see the ones that look like dogs or like insects, couldn't you imagine those things being armed with guns?
Oh, yeah, I mean, I could.
When they are, then it doesn't really look showy anymore.
It seems pretty effective.
Like, you can't even kick those things over.
Yeah, well, I mean, I think if it has a gun,
I mean, it doesn't really matter whether it looks like a dog
or if it's just a small flying platform.
Sure.
I mean, in general, I think that with AI and robotics,
like, the cooler something looks,
usually the less technically impressive it is.
As you see this cute robot.
The cooler it looks.
Yeah, the extreme case of this is these robots
that look exactly like a human,
like maybe shaped like a beautiful woman or something like that.
They are complete hype.
Like Ex Machina.
Well, so the movies, obviously they do it
because they don't want to film in movies.
But every once in a while you have some press release.
I forget what the name is of this female-looking robot
that got citizenship in Saudi Arabia a few years ago.
It's like a pure publicity stunt,
but the media just laps it up.
Wow, they've created this,
like, it's exactly like a human.
What a big breakthrough.
And it's like nothing.
When you see Ex Machina,
do you think that that's something
that could be realistically
that could be implemented in a hundred years
or so? Like we really could
have some form of artificial human
that's indistinguishable?
Well, I think the action is not going to lie
in the robotic part so much as in the brain part.
I think it's the AI part.
And robotics only insofar as it becomes enabled
by having, say, much better learning algorithms.
So right now, if you have a robot,
for the most part, in any one of these big factories,
it's like a blind, dumb thing
that executes a pre-programmed set of motions
over and over again.
And if you want to change up the production,
you need to get in some engineers to reprogram it.
But with a human, you could kind of show them
how to do something once or twice, and then they can do it.
So it will be interesting to see over the next few years
whether we can see some kind of progress in robotics
that enable this kind of imitation learning
to work well enough that you could actually start doing it.
There are demonstrations already, but robustly enough that it would be useful
and you could replace a lot of these kind of industrial robotics experts by having this.
So, yeah.
So I think, I mean, in terms of making things look like human,
I think that's more for Hollywood and for press releases than the actual
driver of progress.
More so the actual driver of progress
but someone is going to probably
try to replicate a human being once the technology
becomes viable.
Did you see the movie Ex Machina?
I've probably, I mean I don't see
yeah, I just, it's a little bit of a blur.
I've seen some of these
and not others.
Ex Machina was the one where the guy lives in a very remote location.
Yeah, like a beautiful place in Norway.
Yeah, I saw that.
Yeah, and he created this beautiful girl robot that seduces this man.
And at the end of it, she leaves him locked up in this thing and just takes off, gets on the helicopter, and flies away. And the thing that's disturbing is that she knew how to manipulate his emotions to achieve a desired result, which was him helping her escape. But then once she did, she had no real emotions. He was screaming, and she had no compassion and no empathy. She just hopped on the helicopter and left him there to starve to death inside that locked box. And that is what scares people, this idea that we're going to create something that has intelligence like us, but it doesn't have all the things that we have, like caring, love, friendship, compassion, the need for other human beings. If you develop an autonomous robot that's really autonomous, it has no need for other people. That's where we get weirded out, like, it doesn't need us.
Right. Yeah, I mean, I think the same would hold even if it were not a robot but just a program inside a computer. But yeah, the idea that you could have something that is strategic and deceptive and so forth.
But then there are other elements of the movie that are, of course, and in general, a reason why it's bad to get your kind of map of the future from Hollywood.
So if you think there's this one guy,
presumably some genius living out in the middle of nowhere and kind of inventing this whole system. In reality, it's like anything else.
There are a lot, like hundreds of people
programming away on their computers,
writing on whiteboards,
and sharing ideas with other people across the world.
It doesn't look like a human.
And there would often be some economic reason
for doing it in the first place.
Like not just, oh, we have this Promethean attitude
that we want to kind of bring.
So all of those things don't make for such good plot lines,
so they just get removed.
But then I wonder if people actually think of the future in terms of some kind of supervillain and some hero, and it's going to come down to these two people and they're going to wrestle.
Yeah.
And it's going to be very personalized and concrete and localized, whereas a lot of things that determine what happens in the world are very spread out, and bureaucracies churning away.
Sure. Yeah, that was a big problem that a lot of people had with the movie, the idea that this one man could innovate at such a high level and be so far beyond everyone else. It's ridiculous that he's just doing it by himself on this weird compound somewhere. Come on. But that makes a great movie, right? Fly in in the helicopter, drop you off at a remote location, this guy shows you something he's created that is going to change the whole world. And it looked beautiful.
I could imagine doing some writer's retreat there or something.
Well, the iconic image of aliens from another world is these little gray things with no sexual organs and large heads and black eyes.
This is the iconic thing that we imagine when we think about things from
another planet. I've often wondered if what we think of in terms of like artificial life from
another planet or life from another planet is that it's like an artificial creation.
Like, in our minds, we understand the biological limitations of the body when it comes to traveling through space: dealing with radiation, death, the need for food, things along those lines. So what we would do is create some artificial thing to travel for us, like we've already done on Mars. We have the rover that roams around Mars. The next step would be an artificial, autonomous, intelligent creature that has no biological limitations like we do
in terms of its ability to absorb radiation from space.
And we create one of those little guys just like that, with an enormous head, no sex organs, doesn't need sex organs, and we have this thing pilot these ships that can defy our own physical limitations, in terms of what would happen to us if we had to deal with, you know, one million Gs of force because it's moving at some preposterous rate through space. When we think of these things coming from another planet, if we think of life on another planet, if they can innovate in a similar fashion to the way we do, we would imagine they would create an artificial creature to do all their dirty work. Like, why would they want to risk their body?
Right. Yeah, I mean, except I think 'creature' might conjure up the wrong picture. Okay, if you have this spaceship, I mean, you don't have to build a little thing that sits and turns the steering wheel, right? This could all be automated.
Sure.
And you'd imagine a technology that is space-faring in a serious way
would have nanotechnology.
So they'd have basically the ability to arbitrarily configure matter
in whatever structure they wanted.
They would have, like, nanoscale probes and things that could shapeshift.
It would not be that there would be this person
sitting in a seat behind a steering wheel.
If they wanted to,
there could be invisible probes, I think, like nanoscale things hiding in a rock somewhere, then just connecting with an information link up to some planetary-sized computer somewhere far away, which would be doing that.
So, yeah, I think that's the way that space is most likely to get colonized.
It's not going to be like with meat sacks kind of driving spaceships around and having
Star Trek adventures.
It's going to be some spherical frontier emanating from whatever the home planet was, moving
at some significant fraction of the speed of light and converting everything in its path into infrastructure
of whatever type is maximally valuable for that civilization,
maybe computers and launchers to launch more of these space probes
so that the whole wavefront can continue to propagate.
But we are, I mean, one of the things you brought
up earlier is that
if human beings are going to
continue and
we're going to propagate through the universe, we're going
to try to go to other places,
we're going to try to populate
other planets, and
are we going to do that with just robots?
Are we going to try to do that biologically?
We're probably going to try to do it biologically.
One of the things you were saying earlier is one of the things that artificial intelligence could possibly do
is accelerate our ability to travel to other lands or other planets.
I mean, we're going to try.
I mean, in fact, some people are trying to do it biologically. I just think that's not going to lead to anything important until those efforts become obsoleted by some radical new technology wave, probably triggered by machine superintelligence, that then rapidly leads to something approximating technological maturity. Once innovation happens at digital timescales rather than human timescales, then all these things that you could imagine us doing if we had 40,000 years to work on it, we would have space colonies and cures for aging and all of these things, right?
But if that thinking time happens at digital speeds,
then that long future gets telescoped,
and I think you fairly quickly reach a condition
where you have close to optimal technology.
And then you can colonize space cost-effectively.
You just need to send out one little probe
that then can land on some resource
and set up a production facility to make more probes,
and then it spreads exponentially everywhere.
And then if you want to, you could then,
after that initial
infrastructuring has happened, you could
transport biological human
beings to other planets if you wanted to.
But it's not really where the action is going to be.
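(A toy numerical sketch of the self-replicating probe idea described above, in Python; the ten-year doubling time and the star count are made-up illustrative figures, not anything stated in the conversation.)

```python
# Toy sketch: one self-replicating probe lands on a resource, builds a copy,
# and each copy repeats the process, so the population doubles every cycle.
# The doubling time below is a made-up illustrative assumption.

def probes_after(years: float, doubling_time_years: float = 10.0) -> float:
    """Probe count after `years`, starting from a single probe."""
    return 2 ** (years / doubling_time_years)

# Even with a leisurely 10-year doubling time, one probe passes the roughly
# 10^11 star systems of the Milky Way within about 400 years of doublings.
for years in (100, 200, 400):
    print(years, "years:", f"{probes_after(years):.3g}", "probes")
```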
But what if we were concerned
there's some sort of a threat
to the Earth?
Some sort of
asteroid impact, something.
I mean, at that stage of technology, averting some asteroid would be, I think, trivial.
Really?
It would be like a gift of free energy.
Like, oh, here comes an energy package. Great.
That's a funny way to look at it.
Do you think we're going to eventually colonize Mars?
Well, I think the answer is this.
I think if and only if we manage to get through these key technological transitions, and then I think we will colonize not just Mars, but everything else that is accessible in the universe.
When you talk about these things, people always want to know when.
When do you think it's going to happen?
What's the timeline?
Yeah, so my guess would be after technological maturity, like after superintelligence. Now, with Mars, it's possible that there would be a little kind of prototype colonization thing, because there are people really excited about that, so you could imagine some little demo project happening sooner. But if we're talking about something, say, that would survive long-term even if the Earth
disappeared, like some kind of
self-sustaining civilization,
I think that's going to be
very difficult to do
until you have superintelligence, and then it's
going to be trivial.
So you think one of the applications of superintelligence could potentially be to terraform Mars, to change the atmosphere, to make it sustainable for biological life.
Yeah, for biological life.
So we have like a second spot.
Yeah, for example.
Like a vacation house.
Now, I also think that technological maturity is a very radical concept, because maybe there are additional technologies we can't even think of yet, but even just from what we already know
about physics, et cetera, we can sort of see possible technologies
that we're not yet able to build, but we can see that they would be
consistent with physics, that it would be stable structures.
And already that creates a vast space of things you could do.
And so, for example, I think it would be possible
at technological maturity to upload
human minds into
computers, for example.
You think that's going to happen, like Ray Kurzweil
stuff? Well, I think, again, it would be
technologically possible
at technological maturity to do it.
Now, whether it's
actually going to happen then depends, A, do we
reach technological maturity?
And B, are we interested in using our technology for that purpose at that time?
But both of those seem kind of reasonably possible.
Yeah, reasonably possible, especially in comparison to what we've already achieved. If I had a time machine and it could jump you 1,000 years from now into the future,
would you do it?
Would you jump in?
I mean, I think just going on a long jet flight is kind of already stretching my… What if it was instantaneous?
What if it was an instantaneous trip to 1,000 years?
Could I come back?
No.
Well, I probably wouldn't.
I don't know.
I mean, I'm kind of a bit cautious with these things.
At the very least, I've got to think about it
for a long time before.
I mean, also, I have attachments.
There are people I care about here and projects
and maybe even opportunities to try to make some difference.
If we actually are in this weird time right now,
different from all of earlier human history
where nothing really much was happening,
and we're not yet where it's all out of our hands
and the superintelligence is running the show.
If that's true, if that's true,
then that means we right now live
in this very weird period where
our actions might have
cosmological consequences. If we
affect the precise
time and way in which
the transition to machine super intelligence happens,
we would be hugely influential.
And
if you have some
ambition to try to
do some good in the world
then that kind of
can be a very exciting
prospect as well
like there might be
no other
better time to exist
if your goal is
to do good
we might be
in the golden years
for
in terms of ability
to have
to take actions
that have large consequences
also this very unique transitionary period between the times of old in terms of ability to take actions that have large consequences.
Also, there's a very unique transitionary period between the times of old and the times of new.
We're really in the heat of the change
in terms of the Internet is only 20-plus years old.
Phones are only, cell phones at least,
people carrying them all the time,
it's only 15-plus years old.
This is very, very new.
So it's an exciting, crazy time where all these changes are taking place really rapidly.
Like if you were from the future, this might be the place where you would travel to
to experience what it was like to see this immense change take place almost instantaneously.
Like if you could go back in time to a specific time in history and experience what life was like,
to me, I think I'd probably pick ancient Egypt.
Like during the days of the pharaohs, I would love to see.
Did you get to choose who you are there?
No, no, you just need to watch.
Just to see what it looks like, what it's like to experience life back then.
But if I was from the future where things were…
Just out of curiosity, what do you think it would look like?
Like what do you imagine yourself seeing in this?
I would imagine, I mean, I've really thought long and hard about the construction methods of ancient Egypt.
I would love to see what it looked like when they were building the pyramids.
Like how long did it take?
What were they doing?
Like,
how did they do it?
We still don't know.
It's all really theoretical.
There's all these ideas of how they constructed it with incredible precision
and,
you know,
precision in terms of the way it's astronomically aligned to certain areas of our solar system and different constellations. It's amazing. I would love to have seen how they did that, what the planning was like, how they implemented it, how many people it took, and how long it took, because we really don't know. It's all speculation. During the burning of the Library of Alexandria we lost so much information, and we've got, you know, hieroglyphs and the physical structures that are still daunting. We have no idea. We look at the Great Pyramid of Giza, the huge one with two million plus stones in it, like, who made that? How'd you guys do it? What did you do, did you draw it out first? How did you get all the rocks there? I think that would probably be the spot that I would want to go to. I would want to be there in the middle of the construction of the pyramids, just to watch.
So that certainly would be, like, a big, I guess, tourist destination for time travelers.
And I guess, in terms of thinking about what was going on back then, we think of the pyramids and the slaves and all of that.
But of course, for most Egyptians, most of the time, they would be picking weeds from their field or putting their baby to sleep or stuff like that.
So kind of the typical moment of human existence.
They don't even think it's slaves anymore, I don't think.
I think they think it's skilled labor based on their diet.
Based on the diet, the utensils that they found in these camps,
these workers' camps,
they think that these were highly skilled craftspeople,
that it wasn't necessarily slaves.
They used to think it was slaves, but now, because of the bones, they know they were eating really well. And they think that, well, also the level of sophistication involved, this is not something you just get slaves to do. It seems that there was a population of structural engineers, that there was a population of skilled construction people, and that they tried to, you know, utilize all of these great minds that they had back then and put this thing together. But it's still a mystery. I think that's the spot that I would go to, because I think it would be amazing to see. So many different innovative times, I mean, it would be amazing to be alive during the time of Genghis Khan, or to be alive during some of the wars of 1,000, 2,000 years ago, just to see what it was like. But the pyramids would be the big one.
Now imagine someone from a future where artificial intelligence runs everything, and human beings are linked to some sort of neurological implant that connects us all together.
And we long for the days of biological independence, and we would like to see what was it like when they first started inventing phones?
What was it like when the internet was first opened up for people?
What was it like when people saw,
when someone had someone like you on a podcast and was talking about potential artificial intelligence
and where it could lead us and what it could do?
It's the most interesting time.
It is the most interesting time.
Yeah.
That's what's cool about it to me
is that we seem to be in this really Goldilocks period
of great change where we're still human, but we're worried about privacy.
We're concerned our phones are listening to us.
We're concerned about surveillance states.
People put little stickers over their laptop camera.
We see it coming, but it hasn't quite hit us yet.
We're just seeing the problems that are associated
with this increased level of technology in our lives.
Which is, yeah, that is a strange thing.
If we add up all these pieces, it does put us in this very weirdly special position.
Yeah.
And you wonder, hmm, it's a little bit too much of a coincidence.
I mean, it might be the case,
but yeah, it does put some strain on it.
When you say a little too much of a coincidence, how so?
Well, so, I mean,
I guess the intuitive way of thinking about it,
like what are the chances that just by chance
you would happen to be living
in the most interesting time in history,
being like a celebrity, like whatever,
like that's a pretty low prior probability, like most people...
Well, for you.
I mean, for all of us really.
and so
that could just be
I mean if there's a lottery
somebody's gotta have the ticket
right but
or
or we are wrong about this whole picture
and there is some very different structure in place.
That's where I was getting to.
Which would make our experiences more typical.
That's where I was getting to.
Yeah, I gathered.
Yeah.
So how much have you considered the possibility of a simulation?
Well, a lot.
I mean, I developed the simulation argument back in the early 2000s.
And so, yeah.
But I mean, I know that you developed this argument,
and I know that you've spent a great deal of time working on this.
But personally, the way you view the world, how much does it play into your vision of what reality is?
Well, it's hard to say. I mean, for the majority of my
time, I'm not actively thinking
about that. I'm just like,
you know, living.
Now, I have
this weird, my work is actually
to think about big picture
questions. So it kind of comes in
through my
work as well. When you're trying to make sense
of our position,
our possible future prospects,
the levers which we might have available
to affect the world,
what would be a good and bad way of pulling those levers,
then you have to try to put all of these constraints
and considerations together.
And in that context, I think it's important.
I think if you are just going about
your daily existence then it might not really be very useful or relevant to uh constantly
like try to bring in hypotheses about the nature of our reality and stuff like that
because for most of the things you're doing on a day-to-day basis,
they work the same, whether it's inside a simulation
or in basement-level physical reality.
You still need to get your car keys out.
So in some sense, it kind of factors out
and is irrelevant for many practical intents and purposes.
Do you remember when you started to contemplate
the possibility of a simulation?
No.
I mean, I remember when the simulation argument
occurred to me, which is less...
It's not just... I mean, for as long
as I can remember, like, yeah, I mean, maybe
it's a possibility, like, oh, it could all be
a dream, it could be a simulation.
But
there is this specific argument
that kind of narrows down the range of possibilities, where the simulation hypothesis is then one of only three kind of options.
What are the three options?
Well, one is that almost all civilizations at our current stage of technological development go extinct before reaching technological maturity.
That's like option one.
Kind of would be bad news.
Could you define technological maturity?
Well, say having developed at least all those technologies
that we already have good reason to think
are physically possible.
Okay.
So that would include the technology
to build extremely large and powerful computers
on which you could run detailed computer simulations
of conscious individuals.
So that kind of would be
a pessimistic,
like if almost all civilizations
at our stage
failed to get there,
that's bad news, right?
Because then we'll fail as well,
almost certainly.
That's one possibility.
Yeah, so that's option one.
Option two is that there is a very strong convergence
among all technologically mature civilizations
in that they all lose interest in creating ancestor simulations
or these kinds of detailed computer simulations of conscious people
like their historical predecessors or variations.
So maybe they have all of these computers that could do it,
but for whatever reason, they all decide not to do it.
Maybe there's an ethical imperative not to do it or some other.
I mean, we don't really know much about these post-human creatures
and what they want to do and don't want to do.
Post-human creatures.
Well, I'd imagine that by the time they have the technology to do this,
they would also have enhanced themselves in many different ways.
Right.
Perhaps enhancing their ability to recognize the consequences
of creating some sort of a simulation.
Yeah, they would almost certainly have cognitively enhanced themselves,
for example.
Well, the concept of downloading consciousness into a computer,
it almost ensures that there's going to be some type of simulation.
If you have the ability to download consciousness into a computer,
once it's contained into this computer,
what's to stop it from existing there?
As long as there's power,
and as long as these chips are firing,
and electricity is being transferred and data
is being moved back and forth, you would essentially be in some sort of a simulation.
Well, I mean, if you have the capability to do that and also the motive.
It would have to simulate something that resembles some sort of a biological interface.
Otherwise, it's not going to know what to do, right?
Yeah.
So we have these kind of virtual reality environments now that are imperfect but improving.
And you could kind of imagine that they get better and better
and then you have a perfect virtual reality environment.
But imagine also that your brain,
instead of sitting in a box with big headphones and some glasses on,
the brain itself also could be part of the simulation.
The matrix.
Well, I think in the matrix there are biological humans outside that plug in, right?
Right.
But you could include in the simulation, just as you have maybe simulated coffee mugs and cars, etc.,
you could have simulated brains.
Here is one assumption coming in from outside the simulation argument, and one can talk about it separately, but it's the idea that I call the substrate independence thesis: that you could in principle have conscious experiences implemented on a different substrate. It doesn't
have to be carbon atoms, as is the case with the human brain. It could be silicon atoms.
That what creates conscious experiences is some kind of structural feature of the computation
that is being performed rather than the material that is used to underpin it. So in that case,
you could have a simulation
with detailed simulations of brains in it
where maybe every neuron and synapse is simulated
and then those brains would be conscious.
And that's possibility number two?
Well, no, so possibility number two is that
these post-humans just are not at all interested in doing it.
And not just that some of them don't,
but like of all these civilizations that reach technological maturity,
that's kind of pretty uniformly,
just don't do that.
And what's number three?
That we are in a simulation,
the simulation hypothesis.
And where do you lean?
Well, I generally tend to punt
on the question of precise probabilities there.
I mean, I think it would be a probability thing, right?
Yes.
It assigns some to each.
But, yeah, I've
refrained from giving a very
precise number.
Partly because, I mean, if I said
some particular number, it would get quoted,
and it would create this, maybe, sense of
false precision.
The argument doesn't allow you to derive the probabilities
x, y, z. It's just that at least one of these three has to obtain.
So, yeah, so that narrows it down.
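(For reference, the structure he is describing can be written compactly; this is only a restatement of the three options, with the two fraction symbols invented here for readability, not a derivation of any particular probabilities.)

```latex
% f_p : fraction of civilizations at our current stage that reach technological maturity
% f_s : fraction of technologically mature civilizations that run ancestor simulations
% The argument's conclusion is only the disjunction: at least one of these holds.
(1)\ f_p \approx 0
\quad\lor\quad
(2)\ f_s \approx 0
\quad\lor\quad
(3)\ \Pr(\text{we are living in a simulation}) \approx 1
```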
Because you might think, you know,
how do we know the future is big?
You could just make up any story and we have no evidence for it.
But it seems that there are actually,
if you start to think everything through,
quite tight constraints
on what
probabilistically coherent views you could have.
And it's kind of hard even to find one
overall hypothesis that
fits this and various
other considerations that
we think we know.
The idea would be that if there
is one day the ability
to create a simulation, that
it would be indiscernible from reality itself?
Say if we are not in a simulation yet.
If this is just biological life, we're just extremely fortunate to be in this Goldilocks period.
But we're working on virtual reality in terms of like Oculus and all these companies are creating these consumer-based virtual reality things that are getting better and better and really kind of interesting.
You've got to imagine that 20 years ago there was nothing like that.
20 years from now, it might be indiscernible.
You might be able to create a virtual reality that's impossible to discern from the reality that we're currently experiencing.
Or maybe 20,000
years or 20 million years. The argument
makes no assumption at all about how long
it will take. But one
day, if things continue to
improve, computational power,
the ability to replicate
experiences and even
feedback in terms of biological
feedback, touch and feel and smell,
if they figure out a way to do that,
one day they will have an artificial reality
that's indiscernible from reality itself.
And if that is the case, how do we know if we're in it?
Right.
That is roughly the gist of it.
Now, as I said, I think if you simulate the brain also,
you have a cheaper overall system than if you have a biological component in the center
surrounded by virtual reality gear.
So you could, for a given cost,
I think create many more ancestor simulations
with simulated brains in them
rather than biological brains with VR gear.
So in these scenarios where there would be a lot of simulations,
most of those scenarios would be the kind of where everything is digital
because it's just cheaper with mature technology to do it that way.
This is one of the biggest, for lack of a better term, mind fucks. When you really stop and think about reality itself, if we are living in a simulation, what is it, and why, and where does it go, and how do I respond, how do I move forward? If I really do believe this is a simulation, what am I doing here?
Yeah, those are big questions.
Huge questions.
And some of them arise even if we're not in a simulation.
Yeah.
And aren't there people that have done some strange, impossible to understand calculations
that are designed to determine whether or not there's a likelihood of us being involved in a simulation currently?
Yeah, I think it slightly misses the point.
So there are these attempts to try to figure out the computational resources that would be required if you wanted to simulate some physical system with perfect precision.
So if we have some human, a brain, a room, let's say, and we wanted to simulate every little part, every atom,
every subatomic particle, the whole quantum wave function,
what would be the computational load of that?
And would it be possible to build a computer powerful enough
that you could actually do this?
Now, I think the way that this misses the point is that
it's not necessary to simulate all the details of this environment
that you want to create in an ancestor simulation.
You would only have to simulate it insofar as it is perceptible
to the observer inside the simulation.
So if some post-human civilization wanted to create Joe Rogan
doing a podcast simulation, they'd need to simulate Joe Rogan's brain
because that's where the experiences happen,
and then whatever parts of the environment
that you are able to perceive.
So surface appearances,
maybe of the table and walls.
Maybe they would need to simulate me as well
or at least a good enough simulacrum
that I could sort of spit out words
that would sound like they came from a real human, right?
I don't know.
Now we're getting quite good with this, GPT-2, this kind of AI that just spews out words. I don't know whether... anyway. But what is happening inside this table right now is completely irrelevant. You have no way of knowing whether there even are atoms there. Now, you could take a big electron microscope and look at the finer structure, and then you could take an atomic force microscope and you could see individual atoms even,
and you could perform all kinds of measurements.
And it might be important that if you did that,
you wouldn't see anything weird,
because physicists do these experiments
and they don't see anything weird.
But then you could kind of fill in those details,
like if and when somebody were performing those experiments.
That would be vastly cheaper than continuously running all of this.
And so this is the way a lot of computer games are designed today,
that they have a certain rendering distance.
Like you only actually simulate the virtual world
when the character goes close enough that you could see it.
And so you imagine these kind of superintelligent post-humans doing this,
obviously they would have figured that out and a lot of other optimizations.
So in other words, these calculations or experiments, I think,
don't really bear on the hypothesis.
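(The rendering-distance idea he mentions is essentially lazy, observer-driven level of detail. Here is a minimal sketch of that pattern in Python; the class, thresholds, and cache are invented purely for illustration.)

```python
# Minimal sketch of observer-driven level of detail: the world keeps only a
# coarse description of each region and fills in fine detail lazily, the moment
# an observer gets close enough or points an instrument at it.

class LazyWorld:
    def __init__(self, render_distance: float = 50.0):
        self.render_distance = render_distance
        self.detail_cache = {}  # region -> fine-grained state, built on demand

    def _generate_detail(self, region: str) -> str:
        # Stand-in for the expensive part: atoms, textures, physics, and so on.
        return f"fine-grained state for {region}"

    def observe(self, region: str, observer_distance: float) -> str:
        if observer_distance > self.render_distance:
            return "coarse placeholder"  # nobody can tell the difference from afar
        if region not in self.detail_cache:  # compute detail only when it is needed
            self.detail_cache[region] = self._generate_detail(region)
        return self.detail_cache[region]

world = LazyWorld()
print(world.observe("inside of the table", observer_distance=500.0))  # coarse placeholder
print(world.observe("inside of the table", observer_distance=0.1))    # detail filled in on demand
```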
Without assigning a probability to either one of those three scenarios,
what makes you think,
if you do stop and think,
I think we're in a simulation,
what are the things that are convincing to you?
Well, it would mainly go through the simulation argument.
To the extent that I think the two alternative hypotheses are improbable, then that would kind of shift the probability mass onto the third remaining one.
Is it really only three? So the
ones are
that human beings go extinct?
And also other civilizations
at our stage in the cosmos
or whatever.
It's a strong filter.
That they either go extinct
or they decide not to pursue it.
They all lose interest, yeah.
Or it becomes a simulation.
Is that really the only three options?
Well, I think the only three live options.
Now, so you can kind of unfold the argument a little bit more and look more granular.
So suppose that the first two options are false.
So some non-trivial fraction of civilizations at our stage do get through, and some non-trivial fraction of those are still interested,
then I think you can convincingly show that
by using just a small portion of their resources,
they could create very, very many simulations.
And you can show that or argue for that
by comparing the computational power of systems that we know are physically possible to build.
We can't currently build them, but we could see that you could build them with nanotech and if you have planetary-sized resources, on the one hand.
And on the other hand, estimates of how much compute power it would take to simulate a human brain.
And you find that a mature civilization
would have many, many orders of magnitude more
so that even if they just used 1% of their compute power
of one planet for one minute,
they could still run thousands and thousands
and thousands of these simulations.
And they might have billions of planets
and they might last for billions of years.
So the numbers are quite extreme, it seems.
So then what you get is this implication that if the first two options are false,
it would follow that there would be many, many more simulated experiences of our kind
than there would be original experiences of our kind.
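(A back-of-envelope version of that counting, with placeholder orders of magnitude chosen only to show how lopsided the ratio comes out; the specific figures are assumptions for illustration, not numbers given in the conversation.)

```python
# Back-of-envelope counting: compare the compute a mature civilization plausibly
# has against the cost of one full "ancestor simulation". All numbers below are
# placeholder orders of magnitude for illustration only.

ops_per_second_planet_computer = 1e42   # assumed capability of one planet-scale computer
ops_per_ancestor_simulation    = 1e36   # assumed cost of simulating one full human history

seconds = 60                            # "one minute"
fraction_used = 0.01                    # "1% of its capacity"

ops_available = ops_per_second_planet_computer * seconds * fraction_used
simulations = ops_available / ops_per_ancestor_simulation

print(f"Simulations from 1% of one planet-computer for one minute: {simulations:.1e}")
# With these assumptions, that is roughly 6e5 full histories from a trivial slice of
# resources, before counting billions of planets and billions of years, so simulated
# experiences would vastly outnumber the single original history.
```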
So the idea is that if we continue to innovate, if human beings or intelligent life in the
cosmos continues to innovate, that creating a simulation is almost inevitable?
No, no.
I mean, the second might be…
That we decide not to.
Yeah, and others with the same capability.
But what if they don't decide not to?
If they don't decide not to. The first option: if human beings do figure out a way to not die out and stay innovative, and we don't have any sort of natural disasters or man-made disasters, and then, step two, we don't decide to not pursue this. If we continue to pursue all the various forms of technological innovation, including simulations, then it becomes inevitable, if we get past those first two options, that we pursue it.
Well, so if they have the capacity,
then they will do it. And the
motive, or like the desire to
do it. So then they would
create hugely many
of these. So not just one
simulation, right? Because it's
so cheap at technological maturity. If you have
a cosmic empire of resources,
they don't have to have a very big desire to do this.
They might just think, well, you know.
Well, that was the big question that Elon said he would ask artificial intelligence.
He said, what's beyond the simulation?
Okay.
That's the real question.
If this is a simulation, if there's many, many simulations running currently, what's beyond the simulation?
Well, yeah, you might be curious about that. I mean, I think the more important question would be, what do we, all things considered, have the most reason to do in our situation? Like, what would it be wise for us to do? Is there some way that we can be helpful, or have the best life, or whatever your goal is?
Or is that ridiculous to even consider?
Maybe it's beyond us.
The question of what is outside?
Yes.
Well, I don't think it's ridiculous to consider.
I think it might be beyond us,
but maybe we would be able to form some abstract conception of what it is.
I mean, in fact, if the path to believing the simulation hypothesis is the simulation argument, then we have a bunch of structure there that gives us some idea: there would be some advanced civilization that would have developed a lot of technology over time, including compute technology, the ability to do virtual reality very well.
We imagine probably they would have used that technology
for a whole host of other purposes as well.
You wouldn't just get that technology
and not be able to create a train or something like that.
They'd probably be super intelligent
and have the ability to colonize the universe
and do a whole host of other things.
And then for one reason or another
they would have decided to use some of the resources
to create simulations
and inside one of those simulations
perhaps
our experiences would be taking place.
So
you could more
speculatively fill in more details
there, but I still think that fundamentally our ability to grok this whole thing
would be very limited.
And there might be other considerations that we are oblivious to.
I mean, if you think about the simulation argument,
it's quite recent, right?
So it's only less than 20 years old.
So if you think that,
suppose it's correct, for the sake of argument,
then up to this point, everybody was
missing something like hugely important
and fundamental, right?
Really smart people,
hundreds of years, like
this massive piece right in the center.
But what's the chances that we now
have figured out the last big missing piece?
Like, presumably, there must be some further
big, giant realization that is, like, beyond us currently.
So I think having some...
Yeah, I mean, that looks kind of plausible,
but maybe there are further big discoveries or revelations
that would kind of maybe not falsify the simulation, but maybe change the interpretation, like do something that is hard to know in advance what that would be.
Now, is the concept that if there is a simulation
that all the historical record is simulated as well?
Or when did it kick in?
Well, there are different options there, right?
And there might be many different simulations
that are configured differently.
There could be ones that run for a very long time,
ones that run for a short period of time,
ones that simulate everything and everybody,
others that just focus on some particular scene or person.
It's just a vast space of possibilities there.
And which ones of those would be most likely
is really hard to say much about
because it would depend on the reasons for creating these simulations,
like what would the interests of these hypothetical post-humans be.
Have you ever had a conversation with a pragmatic, capable person
who really understands what you're saying,
but they disagree about even the possibility of a simulation?
It must have occurred,
but it doesn't tend to be the place where the conversation usually goes.
Where does the conversation usually go?
Well, I mean, I move in kind of unrepresentative circles.
So I think amongst the folk I interact with a lot,
I think a common reaction is that it's
plausible
and
still there is some uncertainty
because these things are always hard to figure out.
But we should assign it some probability.
But I'm not saying
that would be the typical reaction if you
did a Gallup survey or something like that.
I mean, another common thing is, I guess,
to misinterpret it in some way or another.
And there are different versions of that.
So one would be this idea that
in order for the simulation hypothesis to be true,
it has to be possible to simulate everything around us to perfect microscopic detail,
which we discussed earlier.
Right.
Then some people might not immediately get this idea that the brain itself could be part of the simulation.
So they imagine it would be plugged in with like a big cable.
And that if you just somehow could reach behind you, you could feel it,
so that would be another possible common misconception, I guess.
Then I think a common thing is to conflate the simulation hypothesis
with the simulation argument.
The simulation hypothesis is we are in a simulation.
The argument is that one of these three options is true,
only one of which is the simulation hypothesis.
So some conflation there happens.
How do you factor dreams into the simulation hypothesis?
Well, I think they are irrelevant to it.
That is that whether or not we are in a simulation,
people presumably still have dreams,
and there are other reasons and explanations for why that would happen.
So you have dreams even if you're in the simulation?
Well, why not?
Okay.
Why not?
Okay, okay.
Why not? So some people,
so I sometimes get this kind of random email
that's like, oh, well, you know,
yes, thank you, Professor Bostrom.
Your theory is very interesting,
and I found proof.
And like, oh, when I looked in my bathroom mirror,
I saw pixels, like random things like that.
Crazy people.
Varying degrees.
I mean, maybe we're all crazy.
Yes, for sure.
But yeah, but I think that those things are not evidence.
Generally speaking, you would expect, even if we're not in a simulation, there would still be various people
who claim to perceive various things.
Sometimes people have hallucinations,
sometimes they misremember, sometimes they make stuff up.
You just imagine that it would be...
So the most likely explanation for those things is not...
Even if we are in a simulation,
the most likely explanation for those things
is not that there was a glitch in the simulation,
it's that one of these normal psychological phenomena took place.
So, yeah, I would not be inclined to think
that this would be an explanation.
If somebody has those kind of experiences,
it's probably not because we are in the same...
Even if the simulation hypothesis is true,
it's probably not the explanation.
The concept of creativity, how does that play into a simulation?
If during the simulation,
you're coming up with these unique creative thoughts,
are these unique creative thoughts your own
or are these unique creative thoughts stimulated by the simulation?
They would be your own in the sense that it would be your brain that was producing them.
Something else would have produced your brain.
But obviously there's some incredible influences on your brain
if you're involved in some sort of an external stimulation
or simulation rather. That's true in
physical reality as well.
Sure. So it's like
it doesn't come from nowhere. But it's still your brain.
I think it would be
as much, potentially as much
your own in the simulation as it would be
outside the simulation.
Unless the simulators had, for whatever reason, set it up with the view that, for some reason, they just wanted to have, oh, this is Rogan coming up with this particular idea, and they configured the initial conditions in just the right way to achieve that. Maybe then, when you come up with it, maybe it's less your achievement than that of the people who set up the initial conditions. But other than that, I think it would kind of be similar.
Because the reason I ask that is
all ideas,
everything that gets created,
all innovation initially comes from some
sort of a point of someone figuring something
out or coming up with a creative idea.
All of it. Like everything
you see in the external world, like
everything from televisions
to automobiles, was an idea, and then somebody implemented that idea, or groups of people implemented the technology involved in that idea, and then eventually it came to fruition. If you're in a simulation, how much of that is being externally introduced into your consciousness
by the simulation?
And is it pushing the simulation in a certain direction?
Yeah, I don't know.
I mean, you could imagine both kinds of simulations.
Like simulations where you just set up the initial conditions and let it run to see what happens.
Right.
And others where maybe you want to just simulate this particular
historical counterfactual.
Like what would have happened if Napoleon had been defeated?
Maybe that's our simulation.
They put in some specific thing there.
You could imagine either or both of those types
of ways of doing it.
But your simulation hypothesis, if we're in it, it's running now. Is it running and we independently interact with the simulation, or is the simulation introducing ideas into our minds that then come to fruition inside the simulation? Is that how things get done?
Like if we are in a simulation, right?
And if during the simulation someone has created a new iPhone,
like why are they doing that?
Are there other people in the simulation?
Or is this simulation entirely unique to the individual?
Is each individual involved in a different coexisting simulation?
Right.
So I think the kind of simulation that it would be clearest,
the clearest case for why that would be possible
would be one where all people would be simulated,
that you perceive in each brain.
Yeah, that is it.
Because then you could get the realistic behavior out of the brain
if you simulated the whole brain at a sufficient level of detail.
So everyone you interact with is also a simulation.
Well, that type of simulation should certainly be possible.
Then it's more of an open question whether it would also be possible
to create simulations where there was, say, only one person conscious and the others were just like simulacra.
Yeah, like they acted like humans, but there's nothing inside.
So these would be, in philosophers' parlance, zombies.
That is like a technical term. It means, when philosophers discuss it, somebody who acts exactly like a human but with no conscious experience. Now, whether those things are possible or not is an open question.
Do you consider that ever when you're communicating with people? Do you ever stop and think...
I mean, it has occurred to me, but not regularly. I don't know.
Does it ever get to your head where you're like,
this might not be real.
Like this person might not be a real person.
This might be a simulation.
Right.
I mean, I guess there are two things.
One is that you'd probably have some probability distribution
over all these different kinds of situations that you could be in.
Maybe all of those situations are simulated
in different frequencies and stuff,
different numbers of times, that is.
So there would be some probability distribution there.
That would be the first thought,
that in reality you're always kind of uncertain.
The second would be that even if you were in that kind of simulation,
it might still be that behaviorally what you should do
is exactly the same as if you were in the other simulation.
So it might not have that much day-to-day implications.
Do you think there's psychological benefits
for interacting with life as if it's a simulation? No, I don't think that would be an advantage. I mean, maybe a disadvantage in some
cases. What about an alleviation of existential angst? Yeah, maybe, but who knows, it could also, I
guess, if it's sort of interpreted in the wrong way, maybe lead you to feel more alienated
or something like that
I don't know
but I think, to a first approximation, the same things that work well and make a lot of sense to do in physical reality would also be our best bets in a simulated reality.
That's where it gets really weird. Like, if it's a simulation, but you must behave in each and every instance as if it's not. If you had a test you could take, like a pregnancy test, when you went to the CVS, and you pee on a strip and it tells you, guess what, Nick, this shit isn't real. You're in a simulation, 100 percent proven, absolutely positive. You know from now on, from this moment on, that everything you interact with is some sort of a creation. It's not real.
But it is real because you're having the same exact experience as if it was real.
How do you proceed?
Yeah.
I think there might be very subtle
reprioritizations that would happen.
What would you do, like personally?
Well,
I don't know the full answer to that.
I think there are certain possibilities that look kind of far-fetched
if we're not in a simulation that become more realistic if we are.
So one obvious one is if a simulation could be shut off,
like if the computer where the simulation is running,
if the plug is pulled, right?
Right.
So we think the physical universe, as we normally understand it, can't just suddenly pop out of existence.
There's like conservation of energy and momentum and so forth.
But a simulated universe,
that seems like something that could happen.
It doesn't mean it is likely to happen
or it doesn't say anything about what timeframe,
but at least it enters as a possibility where it was not there before. Other things as well become, maybe, more similar to various theological possibilities, like afterlife and stuff like that. And in fact, it kind of, maybe through a very different path, leads to some similar destinations as people, through thinking about theology and stuff, have arrived at.
In that, I mean, it's kind of different.
I think there is no logically necessary connection either way,
but there are some kind of structural parallels, analogs, between
the situation of a simulated
creature to their simulators
and a
created entity to
their creator
that are interesting,
although kind of different.
So there might be
comparisons there that you could make that would
give you some possible ways of proceeding.
It seems like paralysis by analysis.
You just sit there and think about it, at least I would.
I would almost wind up not being able to do anything
or not being able to act or move or think.
That seems kind of likely to be suboptimal, right?
Suboptimal for sure.
Yeah.
But the concept is so prevalent and it's so common and it's so often discussed.
Well, it's interesting how much it has changed just over the last 10, 15 years, how far the idea has come.
It was this really radical thing when it started.
And now you have all these kind of figures that, almost like en passant, just kind of throw it off.
And yeah, it's interesting how ideas can migrate from some kind of extreme radical fringe
and some decade or two later they're just kind of almost common sense.
Why do you think that is?
Well, we have a great ability to get used to things. I mean, this comes back to our discussion about the pace of technological progress. It seems like the normal way for things to be. We are very adaptable creatures, right? You can adjust to almost everything, and we have no kind of external reference point really, and mostly these judgments are based on what we think other people think. So if it looks like some high-status individual, Elon Musk or whatever, seems to take the simulation argument seriously, then people think, oh, it's a sensible idea. And it only takes like one or two or three of those people that are highly regarded, and suddenly it becomes normalized.
Is there anyone highly regarded that openly dismisses this possibility?
There must be, but I'm not sure they would have bothered to go on the record specifically.
I guess the people who are dismissive of it wouldn't maybe even bother to address it or something.
I'm trying to think, yeah,
and I'm drawing a blank on whether there's a particular
person I could... I would love to hear the argument
against it. I would love to hear
someone like you or Elon
interact with them and try to
volley
back and forth these ideas.
Yeah.
That could be interesting.
Yeah. So you've never had, like, some sort of a debate with someone who openly dismisses it?
Well, like a big public debate? I don't know.
Even private?
Yeah, I don't know. I mean, it was kind of a long time ago. When I first put this article out, I guess I had more conversations about the argument itself.
What was the reaction
when you first put it out?
There was a lot of attention, right?
I mean, pretty much right off the bat,
including public.
I mean, it was published
in some academic journal,
Philosophical Quarterly.
But yeah, it quickly
drew a lot of attention.
And then it's kind of come in waves, like every year or so.
There should be like some new group of,
either a new generation or some new community
that hears about it for the first time,
and it kind of gets a new wave of attention.
But in parallel to these waves,
there's also this chronic trend towards it becoming more part of the mainstream conversation and seeming kind of less far out there.
Yeah.
And I think, yeah, that's maybe partly just if the idea, like maybe if there were some big flaw in the idea, it would have been discovered by now.
So if it's been around for a while, it makes it a little bit more credible.
It might also be slightly assisted by just technological progress.
If you see virtual reality getting better and stuff,
it becomes maybe easier to imagine how it could become so good one day
that you could create something perfectly flawless.
I was going to introduce that as option four.
Is option four the possibility that one day we could conceivably create some sort of an amazing simulation, but it hasn't been done yet?
And this is why it's become this topic of conversation is that there's some need for concern because as you extrapolate technology and you think about where it's going now and where it's headed, there could conceivably be one day where this exists.
Should we consider this and deal with it now?
Well, so I'd say that that would be highly unlikely, in that if the first two are wrong, right, then there will be many, many more simulated ones than non-simulated ones over the course of all of history.
Over the course of all of history, but what if it hasn't yet happened?
Right. But so then the question is, given that, you know that by the end of time, there
will have been, let's say, just a million simulations and one original history.
Sure.
And that all of these simulated people and the original history people all have subjectively indistinguishable experiences.
You can't from the inside tell the difference.
Right.
Then what, given that assumption,
would it be rational for you to believe?
Should you think you're one of the exceptional ones
or should you think you're one, you know,
amongst the larger set, the simulated ones?
Or should you think that it just has not happened yet?
But that would be equivalent to saying
that you would be one of the non-simulated ones.
You're talking about in the universe?
Yeah, but you could make it even just,
you could look at the narrow case of just the Earth.
Let's just look in the narrow case of just the Earth.
In the narrow case of just the Earth,
if the historical record is accurate,
if it's not a simulation, then it seems very reasonable that we're just dealing with incremental increases in technology that are pretty stunning and pretty profound currently, but that we haven't experienced a simulation yet. Isn't that how it looks, right?
Sure, yeah. But that's also how it would look if you were in a simulation.
Yes, but it's also how it would look if you're not in a simulation yet. Right.
That's also a possibility too, no?
Right, yeah.
But for most people for whom it looks like that, it would be the case that they would be simulated.
Why?
Well, by assumption, if there are all these simulations created.
Well, not yet.
Well, right.
But you don't know what time it is in external reality.
But why would we assume something so unbelievably fantastic when just life itself is preposterous? Because life itself, just being a human being on a planet, you know, spinning 1,000 miles an hour, hurtling through infinity, that in itself would seem fairly preposterous if it didn't exist. But it does exist. And we know that we, at least, we're all agreeing upon a certain historical record. We're agreeing upon Oppenheimer, the Manhattan Project, World War I, World War II. We're agreeing on Korea and Vietnam. We're agreeing on Reagan and Kennedy. We're agreeing on all these things historically. If we are all agreeing that there's a sort of historical process, we are all agreeing, I remember when the first iPhone was invented, I remember the first computer, I remember the internet, why would we assume that there's a simulation? We could assume that there's a possibility of a simulation, but why would you assume the simulation has occurred? Why wouldn't we assume the simulation hasn't occurred yet?
Right, I mean, so it is a possibility that we would be in the first time segment of all of these, like it's just...
Wouldn't that be more likely?
Well, I'd say no.
I mean, so it comes down then to this field,
which is tricky and problematic called anthropics.
So this is about how to assign probabilities in situations
where you have uncertainty about who you are,
what time it is, where you are.
So if you imagine, for example,
all of these people who would exist in this scenario
having to place bets on whether they're simulated or not.
And you think about two possible different ways of reasoning about this.
So one is you assume you're a randomly selected individual
from all these individuals, and you bet accordingly.
Randomly selected individual?
Yeah, so then you would bet you're one of the simulated ones, because for a randomly selected one, if most are simulated... most lottery tickets are...
But why are we assuming that
most are simulated? This is where I'm getting confused.
Well, we'll have been simulated
by the end of time. By the end of time.
This is like a timeless claim.
But why already when it hasn't existed yet?
Let's say for the sake of argument, because I don't really have an opinion on this, pro or con.
I'm open to the idea.
But if I was going to argue about pragmatic reality, the practicality of biological existence as a person that has a finite lifespan.
You're born, you die, you're here right now,
and we're a part of this just long line of humanity
that's created all these incredible things that's led up to civilization,
that's led up to this moment right now where you and I are talking into these microphones
and it's being broadcast everywhere.
Why isn't it likely that a simulation hasn't occurred yet, that we are in the process of innovating and one day could potentially experience a simulation? But why are you not factoring in the possibility or the probability that that hasn't taken place yet?
Yeah, I mean, so it's in there. But if you imagine that people follow this general principle of assuming that they would be the ones in the original history before the simulations had happened.
Right.
Then almost all of them would turn out to be wrong, and they would lose their bets.
Once a simulation has actually…
Right.
I mean, if you kind of integrate over the universe.
But there's no evidence that a simulation has taken place.
But there is evidence that you're alive.
You have a mother.
You have a father.
Well, I mean, those things could be true in the simulation as well.
I mean, it could be.
But isn't that a pipe dream?
Well, it depends on what simulation, right?
I mean, a lot of simulations might run for a long time and have, et cetera.
But we know that if someone shoots you, you'll die.
We know if you eat food, you get full. We know these things. These things could be objective facts.
These could be... no, I think they are true, yeah.
Yes. Right. Now, why would we assume... why would a simulation be the most likely scenario when we've experienced, at least we believe we've experienced, all this innovation in our lifetime, and we see it moving towards a certain direction? Why wouldn't we assume that that hasn't taken place yet?
Yeah, I think to try to argue for the premise that conditional on there being first an initial segment
of non-simulated Joe Rogan experiences
and then a lot of other segments of simulated ones.
Yes.
That conditional on that being the way the world in totality looks,
you should think you're one of the simulated ones.
Why?
Well, to argue for that, I think then you need to roll in this piece of probability theory
called anthropics, which I alluded to.
And just to pull one little element out of there
to kind of create some initial possibility for this,
if you think in terms of rational betting strategies
for this population of Joe Rogan experiences,
the ones that would lead to the overall
maximal amount of winning would be
if you all thought you're probably one of the simulated segments.
If you had the general reasoning rule
that in this kind of situation,
you should think that you're the initial segment
of the non-simulated Rogan,
then the great preponderance
of these simulated experiences
would lose their bets.
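[Not part of the conversation: a minimal Python sketch of the betting logic described above, assuming the figures used in the discussion, one original history and a million subjectively identical simulated segments; the numbers and function name are illustrative only.]

```python
# Hypothetical illustration of the betting argument: one "original" observer
# plus a million simulated observers, all with indistinguishable experiences.
# Compare two betting rules by the fraction of observers who guess their own
# status correctly.
N_SIMULATED = 1_000_000  # assumed figure from the discussion
N_ORIGINAL = 1

def fraction_correct(everyone_bets: str) -> float:
    """Fraction of all observers whose bet about their own status is right."""
    total = N_ORIGINAL + N_SIMULATED
    correct = N_ORIGINAL if everyone_bets == "original" else N_SIMULATED
    return correct / total

print(fraction_correct("original"))   # ~0.000001: nearly every observer loses this bet
print(fraction_correct("simulated"))  # ~0.999999: nearly every observer wins this bet
```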
But there's no evidence of a simulation.
Well, I'd say that there is indirect evidence insofar as there is evidence against these
two alternatives.
Well, the two alternatives being that people go, intelligent life goes extinct before they
create any sort of simulation or that they agree to not create a simulation.
But what about if they're going to create a simulation?
There has to be a time before the simulation is created.
Why wouldn't you assume that that time is now currently happening
when you've got a historical record of all the innovation that's leading up to today?
If we understand…
Yeah, I mean, so I think the historical record would be there in the simulation, but...
But why would it have to be there in a simulation
and not be there in reality?
Well, I mean, it could be there in the simulation
if it's a kind of simulation that tracks the original, yeah.
If it's a fantasy simulation, then, you know, maybe it wouldn't be there.
Right, but it could just be reality.
It doesn't have to be a simulation.
Right, and in some sense it would be both, right?
I mean, there would be one Joe Rogan experience
in the real original history,
and then maybe a million, let's say,
in simulated realities later.
But if you think about your actions
that kind of can't distinguish
between these different possible locations
in space-time where you could be,
most of the impact of your decisions will come from impacting all of these million Joe Rogan
instances that exist in the simulation.
Yeah, but this is once a simulation has been proven to exist, which it hasn't been.
At least in terms of what we all agree on, we're proven to have biological lives.
We breed, we sleep, we eat, we travel on planes.
All these things are very tangible.
So those things are true, yeah.
I mean, I'd say those are true,
probably even if we're in a simulation.
But why would you assume we're in a simulation?
This is where I'm stuck.
Because why wouldn't you assume
that a simulation is one day possible?
There's no proof or no evidence that makes any sense to me
that there is currently any simulation.
Right.
I mean, so it's a matter of probabilities and the number of schemes, right?
Is it?
That's what I would assert, yes.
But this current reality.
But what would point to the possibility that it's more probable that we were in a simulation?
This is what escapes me.
Okay.
So I could mention some possibilities that would.
Okay.
So the most obvious, like a big window pops up in front of you saying you're in a simulation.
Click here for more information.
That would be wonderful information.
That would be pretty conclusive.
Right.
Yes.
Right.
So short of that, you would have weaker probabilistic evidence
insofar as you had evidence against the two alternatives.
So, for example, if you got some evidence that suggested it was less likely
that all civilizations at our stage go extinct before maturity.
Let's say we get our act together.
We eliminate nuclear weapons.
We become prudent,
and we check all the asteroids,
nothing is on collision course with Earth.
That kind of tends to lower the probability of the first, right?
Okay.
So that would tend to shift probability
over on the remaining alternatives.
Let's suppose that we moved closer ourselves
to becoming post-human.
We develop more advanced computers
and VR and we're getting close
to this point ourselves and we still
remain really interested in
running ancestor simulations. We think
this is what we really want to spend our resources
on as soon as we can make it work.
That would move probability
over from the second
alternative.
It's less likely that there is this strong convergence among all post-human technologically mature civilizations
if we ourselves are almost post-human
and we still have this interest in creating ancestor simulations.
So that would shove probability over to the remaining alternative.
Take the extreme case of this.
Imagine if we, a thousand years from now
have built our own
planetary sized computer
that can run these simulations
and we are just about
to switch it on
and it will create
the simulation of precisely
people like ourselves.
And as we move towards
the big button
to sort of initiate this,
like then the probability
of the first two hypotheses
basically goes to zero.
And then we would have to conclude with near certainty
that we are ourselves in a simulation
as we push this button to create a million simulations.
Once we achieve that state, but we have not achieved that state,
why would we not assume that we are in the actual state
that we currently experience, without a simulation?
Well, I said, yes, we shouldn't assume. We should assume that we are ignorant as to which of these different time slices we are in,
which of these different Rogan experiences is the present one.
We just can't tell from the inside which one it is.
Yeah, I'm still...
I mean, if you could see some objective clock
and say that, well, as yet, the clock is so early
that no simulations have happened,
then obviously you could conclude that you're in the original history.
But if we can't see that clock outside the window,
if there is no window in the simulation to look out,
then it would
look the same.
And then I'd say, oh, we have no way of telling which of these different instances we are.
And one of them might be that there is no simulation and that we're moving towards that
simulation, that one day it could be technologically possible.
One in a million.
Really?
So one in a million is that life is what you experience right now.
One in a million?
No, no, no.
I mean conditional on the other alternatives being wrong.
Not even conditional on those other alternatives being wrong.
Let's say that human beings haven't blown themselves up yet.
Let's say that human beings haven't come up with – there's no need to make the decision to not activate the simulation because the simulation hasn't been invented yet.
Isn't that also a possibility?
Isn't also a possibility that the actual timeline
of technological innovation that we all agree on is real
and that we're experiencing this as real live human beings,
not in a simulation,
that one day the simulation could potentially take place,
but has not yet.
Isn't that also a possibility?
Yeah, I mean, sure.
It's just a question of how probable that is, given the...
But why isn't it super probable?
Because we're experiencing it.
Well, I mean, it would be a very unusual situation
for somebody with your experiences to be in.
What about your experiences?
For my experiences, the same there, yeah.
It would be extremely unusual.
But there's 7 billion unusual experiences taking place simultaneously.
Why would you assume that's a...
Well, if there were, like, say, a million simulations,
then that would be a million times more.
But why would there be any simulations?
Why would there not just be 7 billion people experiencing life?
Right, yeah.
That would have to be something that prevents these simulations from
being created. This is where you lose me.
Yeah. So I think maybe the difference is
I tend to think in terms of
the world
as a four-dimensional structure
with time being
one dimension, right? Okay.
So you think in the
totality of existence
that will have happened by the end of time,
you look at all the different experiences that match your current experience.
Okay.
Given these various assumptions, the vast majority of those would be simulated.
Why?
Why?
Well, the various assumptions being that option one and two are false, basically.
What about option, my option?
Yeah, so in your option, the vast majority of all these experiences that will ever have existed will also be simulated, if I understand your option correctly.
No, no, no. My option is that nothing's happened yet.
Yeah, but there will have been.
Maybe, but not yet.
Right, but as I understand your option is that
if we look at the universe at the end of time
and we look back,
there will be a lot of simulated versions of you
and then one original one.
But I'm not even considering that.
And you think you might be the original one.
No, I'm not even considering that.
What I'm saying is we may just
be here.
There is no simulation.
Maybe it will take place someday,
but maybe it will not.
But you have to pick which of those
scenarios
you're considering.
That is the scenario I'm considering. The scenario I'm considering is we are just here. We are actually alive.
But what happens after?
So I want the scenario to say what's happened in the past, what happens now, and what will happen in the future.
Well, we don't know what's going to happen in the future.
That's right.
So we can consider both options, right?
Yes.
One option where there are no simulations created later.
Right.
Then I would say that means
one of the first two alternatives.
But another option is there could be
a simulation created later,
but it has not taken place yet.
That there will be simulations later.
That it's a possibility,
but it has not happened yet.
Right, but that there will be later.
That's one possibility.
And so then I say,
if that's the world that we
are looking at, then
most
experiences of your
kind
exist inside the simulation.
I still don't understand that.
Why can it not
have happened yet?
Well, it could.
It depends on which of these experiences is your present moment
in that scenario, right? So there's going to be a million of them, plus an initial one.
You can't tell from the inside.
Maybe there will be a million of them, but there's right now no evidence that there's going to be. No evidence that there is, no evidence that there's going to be, no evidence that it's ever even going to be possible technologically. We think there could be, but it hasn't happened yet. So why would you assume that we are in a simulation currently when there's no evidence whatsoever that it's even possible to create a simulation?
Maybe there is some alternative way of trying
to explain
how I'm thinking.
I'm thinking like suppose
I understand what you're saying.
I'm sorry to interrupt you.
I'm just thinking maybe we could think of some simpler
thought experiment which has
nothing to do with simulations and stuff.
Imagine if... so I'm making this up as I go along, so we'll see if it actually works. But you are taken into a room, and then you're awake there for one hour, and then a coin is tossed. And if it lands heads, then the experiment ends,
and you exit the room, and everything is normal again.
But if it lands tails, then you're given an amnesia drug,
and then you're woken up in the room again.
You think you're there for the first time
because you don't remember having been there before.
Right.
And then this is repeated 10 times.
So we have a world
where either there is
one one-hour experience of you in the room
or else it's a world
with 10 Joe Rogan experiences in the room
with an episode of amnesia in between.
But when you're in the room now,
you find yourself in this room,
you're wondering,
hmm, is this the first time I'm in this room?
It could be.
But it could also be that it's later on, and I was just given an amnesia drug.
Okay.
So the question now is, when you wake up in this room,
you have to assign probabilities to these different places
you could be in time.
And then maybe you have to bet or make some decision that depends on where you are. So what I guess I could ask you is, if you wake up in this room, what do you think the probability should be that you're at time one versus at some later time?
Well, what is the probability that I'm actually here
versus what is the probability of this highly unlikely scenario
that I keep getting drugged over and over again every hour?
We assume that you're certain that the setup is such
that there was this mad scientist who had the means to do this
and he was going to flip this coin.
So we're assuming that you're sure about that either way.
The only thing you're unsure about is how the coin landed.
Okay, well, if that was a scenario
where I knew that there was a possibility of a mad scientist
and I could wake up over and over again,
that seems like a recipe for insanity.
Well, it's a philosophical thought experiment.
It is a philosophical experiment.
So we can abstract away from the possibility of it.
My point initially, and I'll get back to it,
is there's no evidence at all that we're in a simulation.
So why wouldn't we assume that the most likely scenario
is taking place, which is we are just existing,
and life is as it seems, but strange.
Okay, so if you don't want to do this thought experiment.
No, I do want to do the thought experiment, but it seems incredibly limited.
Right.
Well, I'm trying to distill the probability theory part from the wider simulation argument.
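[Not part of the conversation: a rough Monte Carlo sketch of the amnesia-room experiment described above, under one contested way of counting, betting at every awakening rather than once per run; anthropics, as noted later in the discussion, is not a settled field, and the setup numbers are taken from the thought experiment.]

```python
import random

# Heads: the experiment ends after one awakening. Tails: ten awakenings,
# separated by amnesia. Across many runs, count what fraction of awakenings
# are the very first one, i.e. how a bet placed at every awakening would fare.
def fraction_first_awakening(runs: int = 100_000) -> float:
    first, total = 0, 0
    for _ in range(runs):
        awakenings = 1 if random.random() < 0.5 else 10  # fair coin toss
        first += 1            # every run contains exactly one time-one awakening
        total += awakenings
    return first / total

print(fraction_first_awakening())  # roughly 1 / 5.5, about 0.18, under this way of counting
```

Counting once per run instead of once per awakening would give one half, which is part of why this field is considered tricky.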
But I guess I could also ask you, if we were to move closer to this point where we ourselves can create simulations, if we survive, we become multi-planetary, we build planetary-sized computers.
Yeah.
How would your probability in the simulation hypothesis change as we kind of develop along this?
Well, it would change based on the evidence of some profound technological innovation that actually would allow someone to create a simulation
that's indistinguishable from reality.
But I would rather assume that reality itself currently is just that
because it seems to be, isn't that Occam's razor?
Isn't that the simplest answer?
This is reality.
This is wood.
You are here.
You actually are here.
One day there may be a simulation.
It has not happened yet.
Yeah, I think it's not favored by Occam's razor, in that it would require you to postulate that you are this very unusual and special observer amongst all the observers that will exist.
But everyone is unusual in their own way.
That's true.
Because there's no clones.
There's no one person that's a version that's living the same exact life in a million different scenarios.
But in this respect, if there are all these simulations, then most of these people are not special in this way.
Most of them are simulated.
And only a tiny minority.
If there's a simulation.
There are many simulations. But if there's no simulations,
you have seven billion unique minds.
If there are no simulations and there will never be any
simulations, then... Well, who's saying there never
will be?
Well, so this...
Since we don't know what time it is now in external reality...
Right.
And we therefore can't tell from looking at our evidence where we are in a world where either there is just an original history and then it ends,
or there is a world with an original history and then a lot of simulations.
We need to think about how to assign probabilities given each of these two scenarios.
And so then we have a situation that is somewhat analogous to this one with the amnesia room
where you have some number of episodes.
And so the question is in those types of situations
how do you allocate probability
over the different hypotheses about how the world is structured?
And this kind of betting argument is one type of argument
that you can try to use to kind of get some grip on that.
And another is by looking at various applications in cosmology and stuff
where you have multiverse theories,
which say the universe is very big,
maybe there are many other universes,
maybe there are a lot of observers,
maybe all possible observers exist out there
in different configurations.
How do you derive probabilistic predictions from that?
It seems like whatever you observe
would be observed by somebody,
so how could you test that kind of theory?
And this same kind of anthropic reasoning
that I want to use in the context of the simulation argument
also plays a role, I think,
in deriving observational predictions
from these kinds of cosmological theories
where you need to assume something like
you are most likely a typical observer
from amongst the observers that will ever have existed,
or so I would suggest.
Now, I should admit as an asterisk
that this field of anthropic reasoning
is tricky and not fully settled yet,
and there are things there that we don't yet fully understand.
But still, the particular application of anthropic reasoning
that is relevant for the simulation argument,
I think, is one of the relatively less problematic ones.
So that conditional on there being, by the end of time,
a large number of simulated Joe Rogans and only one original one, I think, conditional on that hypothesis,
it would seem that most of your probability should be on being
one of the simulated ones.
But I'm not sure I have any other ways of making it more vivid or possible.
No, I completely understand what you're saying.
I completely understand what you're saying.
But I don't know why you're not willing to take into account
the possibility
that it hasn't occurred yet. Yeah, so I mean, the way I see it is that I have taken that into
account, and it receives the same probability that I'm that initial segment as I would give
to any of the other Nick Bostrom segments that all have the same evidence. See, that's where
we differ, because I would give much more probability to the fact that we are existing right now in the current state as we experience it in real life, carbon life,
no simulation, but that potentially one day there could be a simulation which leads us
to look at the possibilities and look at the probabilities that it's already occurred.
All right, so what about this?
Suppose it is the case that, all right, so what we think happened is there was a big bang,
planets formed, and then some billions of years later we evolved, and here we are now, right? Right.
Suppose some physicist told you that, well, the universe is very big,
and early on in the universe, in very, very rare occasions, there was a big gas cloud.
In an infinite universe,
this will happen somewhere, right?
Where just by chance,
there was a kind of Joe Rogan-like brain
coming together for a minute
and then dissolved in the gas.
Right.
And yeah, if you have an infinite universe,
it's going to happen somewhere.
But there's going to be many, many fewer
Joe Rogan brains in such situations
than will exist later on on planets
because evolution helps funnel probability
into these kinds of organized structures, right?
So if some physicist told you that,
well, this is the structure of our part space-time,
like there are a few very, very rare spontaneously materialized brains
from gas clouds early in the universe,
and then there are the normal Rogans much later.
And there are, of course, many, many more normal ones.
The normal ones happen in one out of every
10 to the power of 50 planets,
whereas the weird ones happen
in one out of 10 to the power of 100.
Normal versus weird, how so?
How are you defining it?
Well, the normal ones are ones that have evolved on planets
and had a mother and eat.
Different planets.
Is that what you're talking about?
Yeah, different planets.
Okay, but we only have one planet, right?
Right, but this again is like a,
well, I mean, actually,
there are a lot of planets in the universe,
and if it's infinite,
there's got to be a lot of copies of it, right?
But one planet that we're aware of that has life.
Right.
This is pure speculation, right?
Well, this is a thought experiment,
which in fact
actually probably
matches reality in this
respect. Most likely there's some other
planets out there.
I think the fact that it matches reality is
irrelevant to the point I want to make.
So, if
this turned out to be the way the world works,
a few weird ones happening from gas
clouds and then the vast majority are just normal people living on a planet.
Would you similarly say, given that model,
that you should think,
oh, I might just as well be one of these gas cloud ones
because, after all, the other ones might not have happened yet?
Or have I lost you?
You lost me.
Sorry.
Yeah.
Anyway, I think that
this would be a structurally
similar situation
where there would be
a few exceptional
early living versions
that would be
very small in numbers
compared to the later ones.
And if they
allow themselves
the same kind of reasoning where they would say,
well,
the other ones may or may not come to exist later on planets.
Um,
I have no reason to believe I'm one of the planet living ones.
Then it seems that in this model of the universe,
you should think you're one of these early gas cloud ones.
And as I said, I mean, this looks like it probably actually is the world we're living in, in that it looks like it's infinitely big, and there would have been a few Joe Rogans spontaneously generated very early from random processes. They are going to be very few in number compared to ones that have arisen on planets.
So that by taking the path you want to take with relation to the simulation argument,
I wonder if you would not then be committed to thinking
that you would be, in effect, a Boltzmann brain
in a gas cloud super early in the universe.
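[Not part of the conversation: a hedged back-of-the-envelope version of the gas-cloud comparison, using the rates quoted above, one planet-evolved observer per 10^50 planets versus one freak gas-cloud observer per 10^100, and assuming the two rates are directly comparable.]

```python
from fractions import Fraction

# Rates quoted in the discussion (assumed to be per comparable region):
normal_rate = Fraction(1, 10**50)   # planet-evolved, "normal" observers
weird_rate = Fraction(1, 10**100)   # spontaneous gas-cloud (Boltzmann-brain) observers

# Typical-observer reasoning: your probability of being a gas-cloud observer
# is its share of all subjectively similar observers that ever exist.
p_weird = weird_rate / (normal_rate + weird_rate)
print(float(p_weird))  # about 1e-50: effectively zero, even though the gas-cloud ones come first
```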
I still don't understand what you're saying.
What I'm saying is that we scientists agree.
If you believe in science and if you believe in the discoveries that so far people have all currently agreed to,
we've agreed that clouds are formed and that planets are created and that all the matter comes from inside of the explosions of a star
and that it takes multiple times for this
to coalesce before we can develop carbon-based life forms.
All that stuff, science currently agrees on, right?
And then we believe in single-celled organisms become multi-celled organisms through random
mutation and natural selection.
We get evolution, and then we agree that we have come to a point now where technology has hit this gigantic spike that you described earlier.
So human beings have created all this new innovation.
Why wouldn't we assume that all this is actually taking place right now with no simulation?
Yeah, I mean, the simulation argument is the answer to that,
but with the qualification that, A, the simulation argument doesn't even
purport to prove the simulation
hypothesis, because there are these two alternatives.
B, that even if
the simulation hypothesis is true,
in many versions of it,
it would actually be the case that
in the simulation,
all of these things have taken place.
And the simulation might go back a long time
and it might be a reality tracking simulation.
Maybe these same things also happened
before outside the simulation.
I understand that.
But or all these things have actually happened
and there is no simulation yet.
That's possible too.
Doesn't that seem really probable?
Well, to me it seems probable only if at least one of the other alternatives is true.
Or, I mean, I admit that there is also this general possibility,
which is always there, that I'm confused about some big thing.
Like maybe the simulation argument is wrong in some way.
I'm just looking at the track record of science and philosophy: we find we're sometimes wrong.
So I attach some probability to that.
But if we're working within the parameters
of what currently seems to me to be the case,
that we would be the first civilization in a universe
where there will later be many, many simulations
seems unlikely for those exact reasons.
And that if we are the first,
it's probably because one of the alternatives is true.
It's a mind-blower, Nick.
The more you sit and think about it,
the more you ponder these concepts.
And I'm not on one side or the other.
It's scary, but it's also amazing.
And what else is there that we haven't figured out yet?
If we come back in 50 years, even just with human beings thinking about stuff.
And I think I have this concept of a crucial consideration.
I alluded to it a little bit earlier,
but it's the idea of some argument or data or insight
that if only we got it,
would radically change our mind
about our overall scheme of priorities,
not just change the precise way
in which we go about something,
but kind of totally reorient ourselves.
Like an example would be
if you are an atheist
and you have some big conversion experience
and suddenly your life feels very different, right?
You had, what were you doing before?
You were basically wasting your time
and now you found what it's all about.
But there could be sort of slightly smaller versions of this.
And I wonder what the chances are
that we have discovered
all crucial considerations now.
Because it looks like
at least up until very recently,
we hadn't,
in that there are these important considerations that seem to...
whether it's AI, like if this stuff about AI is true,
like maybe that's the one most important thing that we should be focusing on
and the rest is kind of frittering away our time as a civilization.
We should be focused on AI alignment.
So we can see that it looks like all earlier ages up until very recently
were oblivious to at least one crucial consideration
insofar as they wanted to have maximum positive impact on the world.
They just didn't know what the thing was to focus on.
And it also seems kind of unlikely that we just now have found the last one.
That just seems kind of...
Given that we keep discovering these up until quite recently, we're probably missing out on one, or more likely several, more crucial considerations. And if that's the case, then it means that we are fundamentally in the dark, in that we are basically clueless. We might try to improve the world, but we are overlooking maybe several factors, each one of which would make us totally change our mind about how to go about this. And so it's less of a problem, I think, if your goal is just to lead your normal life and be happy and have a happy family, because there we have a lot more evidence and it doesn't seem to keep changing every few years. Like, we still know: yeah, have good relationships, you know, don't ruin your body, don't jump in front of trains. These are tried and tested.
Yes.
Right. But if your goal is to somehow steer humanity's future in such a way that you maximize expected utility.
There, it seems our best guess is keep jumping around every few years and we haven't kind of settled down into some stable conception of that.
Nick, I'm going to have to process the conversation for a long time,
but I appreciate it.
And thank you for being here, man.
It was really cool, very fascinating discussion.
Good to meet you, yeah. Thank you.
Thank you very much.
If people would like to read any of your stuff, where can they get it?
nickbostrom.com, probably the best starting point.
Okay. Thank you.
My brain's broken.
Bye, everybody.