Lex Fridman Podcast - #77 – Alex Garland: Ex Machina, Devs, Annihilation, and the Poetry of Science
Episode Date: March 3, 2020
Alex Garland is a writer and director of many imaginative and philosophical films, from the dreamlike exploration of human self-destruction in the movie Annihilation to the deep questions of consciousness and intelligence raised in the movie Ex Machina, which to me is one of the greatest movies on artificial intelligence ever made. I'm releasing this podcast to coincide with the release of his new series called Devs that will premiere this Thursday, March 5, on Hulu.
EPISODE LINKS:
Devs: https://hulu.tv/2x35HaH
Annihilation: https://hulu.tv/3ai9Eqk
Ex Machina: https://www.netflix.com/title/80023689
Alex IMDb: https://www.imdb.com/name/nm0307497/
Alex Wiki: https://en.wikipedia.org/wiki/Alex_Garland
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 - Introduction
03:42 - Are we living in a dream?
07:15 - Aliens
12:34 - Science fiction: imagination becoming reality
17:29 - Artificial intelligence
22:40 - The new "Devs" series and the veneer of virtue in Silicon Valley
31:50 - Ex Machina and 2001: A Space Odyssey
44:58 - Lone genius
49:34 - Drawing inspiration from Elon Musk
51:24 - Space travel
54:03 - Free will
57:35 - Devs and the poetry of science
1:06:38 - What will you be remembered for?
Transcript
The following is a conversation with Alex Garland, writer and director of many
imaginative and philosophical films from the dreamlike exploration of human
self-destruction in the movie Annihilation to the deep questions of consciousness
and intelligence raised in the movie Ex Machina, which to me is one of the greatest
movies on artificial intelligence ever made. I'm releasing this podcast to
coincide with the release of his new series called Devs, that will premiere this Thursday
March 5th on Hulu, as part of FX on Hulu. It explores many of the themes that
this very podcast is about, from quantum mechanics, to artificial life, to simulation, to the modern nature of power in the tech world.
I got a chance to watch a preview and loved it.
The acting is great. Nick Offerman especially is incredible in it.
The cinematography is beautiful and the philosophical and scientific ideas explored are profound.
And for me as an engineer and scientist, they were just fun to see brought to life. For example, if you watch the trailer for the series
carefully, you'll see there's a programmer with a Russian accent looking at a
screen with Python-like code on it that appears to be using a library that
interfaces with a quantum computer. This attention to technical detail on several
levels is impressive,
and it's one of the reasons I'm a big fan of how Alex weaves science and philosophy together
in his work.
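As an aside from me, and purely as an illustration, not anything actually shown in Devs, here is a minimal sketch of the kind of thing Python code interfacing with a quantum computer is ultimately doing. To keep it self-contained it uses plain NumPy to simulate a tiny two-qubit circuit rather than any particular vendor's quantum library, so every name here is my own and hypothetical.

```python
import numpy as np

# A toy illustration (not code from the show): simulate a two-qubit circuit
# that prepares a Bell state, the kind of superposition/entanglement a
# quantum-computer library would prepare on real hardware.

H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                       # control = qubit 0

state = np.zeros(4)
state[0] = 1.0                       # start in |00>
state = np.kron(H, I) @ state        # Hadamard on qubit 0 -> superposition
state = CNOT @ state                 # CNOT entangles the two qubits

probs = np.abs(state) ** 2           # Born rule: measurement probabilities
samples = np.random.choice(["00", "01", "10", "11"], size=10, p=probs)
print(probs)     # ~[0.5, 0, 0, 0.5]: outcomes are always correlated, 00 or 11
print(samples)
```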
Meeting Alex for me was unlikely, but it was life-changing,
in ways I may only be able to articulate in a few years.
Just as meeting Spot Mini of Boston Dynamics for the first time planted a seed of an idea
in my mind, so did meeting Alex Garland.
He's humble, curious, intelligent, and to me an inspiration.
Plus, he's just really a fun person to talk with about the biggest possible questions
in our universe.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on
Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.
As usual, I'll do one or two minutes of ads now and never any ads in the middle that
can break the flow of the conversation.
I hope that works for you and doesn't hurt the listening experience.
This show is presented by CashApp, the number one finance app in the App Store.
When you get it, use code Lex Podcast.
CashApp lets you send money to friends by Bitcoin and invest in the stock market with
as little as one dollar.
Since Cash App allows you to buy Bitcoin,
let me mention that cryptocurrency in the context of the history of money is fascinating.
I recommend The Ascent of Money as a great book on this history.
Debits and credits on ledgers started around 30,000 years ago.
The US dollar was created about 200 years ago.
Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago.
So given that history, cryptocurrency is still very much in its early days of development,
but it still is aiming to and just might redefine the nature of money.
So again, if you get Cash App from the App Store or Google Play and use code LEXPODCAST,
you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations
that is helping advance robotics and STEM education for young people around the world.
And now here's my conversation with Alex Garland.
You described the world inside the shimmer in the movie Annihilation as dreamlike in that it's internally consistent
but detached from reality.
That leads me to ask,
do you think a philosophical question,
I apologize, do you think we might be living
in a dream or in a simulation
like the kind that the shimmer creates?
We human beings here today.
Yeah. I wanna sort of separate that out into two things.
Yes, I think we're living in a dream of sorts.
No, I don't think we're living in a simulation.
I think we're living on a planet with a very thin layer of atmosphere
and the planet is in a very large space and the space is full of other
planets and stars and quasars and stuff like that. I don't think those physical objects,
I don't think the matter in that universe is simulated, I think it's there. We are definitely
well, it's a hard problem saying "definitely", but in my opinion, I'll just go with that.
I think it seems very like we're living in a dream state. I'm pretty sure we are.
And I think that's just to do with the nature of how we experience the world, we experience
it in a subjective way.
And the thing I've learnt most, as I've got older in some respects, is the degree to which reality is counterintuitive
and that the things that are presented to us as objective turn out not to be objective and quantum mechanics is full of that kind of thing,
but actually just day-to-day life is full of that kind of thing as well. So my understanding of the way the brain works is you get some information, it hits your optic
nerve and then your brain makes its best guess about what it's seeing or what it's saying
it's seeing.
It may or may not be an accurate best guess.
It might be an inaccurate best guess and that gap, the best guess gap, means that we are
essentially living in a subjective state, which
means that we're in a dream state.
So I think you could enlarge on the dream state in all sorts of ways, but so yes, dream
state, no simulation would be where I'd come down.
So going further, deeper in that direction, you've also described that world as psychedelia. So on that topic, I'm curious. On the topic of
psychedelic drugs, do you see those kinds of chemicals that modify our
perception as a distortion of our perception of reality, or a window into another
reality? No, I think what I'd be saying is that we live in a distorted reality and then those kinds of drugs give us a different kind of
distortion.
Distorted. Yes, exactly.
They just give an alternate distortion.
And I think that what they really do is they give, they give a distorted perception, which is a little bit more
allied to daydreams or unconscious interests. So if for some reason you're feeling unconsciously
anxious at that moment and you take a psychedelic drug, you'll have a more pronounced unpleasant
experience and if you're feeling very calm or happy you might have a good time. But yeah,
so if I'm saying we're starting from a premise, our starting point is, we're already in a slightly psychedelic state, what those drugs do is help you go further down
a new, or maybe a slightly different, avenue. But that's what they do.
So in that movie, Annihilation, the shimmer, this alternate dreamlike state is created by,
I believe, perhaps, an alien entity.
Of course, everything is up to interpretation.
But do you think there's in our world, in our universe, do you think there's intelligent
life out there?
And if so, how different is it from us humans?
Well, one of the things I was trying to do in Annihilation was to offer up a form of alien life that was actually alien,
because it would often seem to me that in the way
we would represent aliens in books or cinema
or television or any one of the sort of storytelling mediums is
we would always give them very human-like qualities. So they wanted to teach us about galactic federations, or they wanted to eat us, or they wanted our resources like our water,
or they wanted to enslave us, or whatever it happens to be. But all of these are incredibly human-like motivations, and
I was interested in the idea of an alien that was not in any way
like us. It didn't share. It maybe had a completely different clock speed, maybe it's way...
So we're talking about, we're looking at each other, we're getting information, light hits
our optic nerve, our brain makes the
best guess of what it's seeing. Sometimes it's right, sometimes it isn't, you know, the thing we
were talking about before, what if this alien doesn't have an optic nerve, maybe it's way
of encountering the space it's in is wholly different. Maybe it has a different relationship
with gravity. The basic laws of physics it operates under might be fundamentally different,
it could be a different time scale and so on.
Yeah, or it could be the same laws,
it could be the same underlying laws of physics.
You know, it's a machine created or it's a creature
created in a quantum mechanical way.
It just ends up in a very, very different place to the one we end up in.
So part of the preoccupation with Annihilation was to come up with an alien that was really
alien, and it didn't give us, and we didn't give it, any kind of easy connection
between human and the alien. Because I think it was to do with the idea that you could have an alien
that landed on this planet that wouldn't even know we were here, and we might only glancingly know it was here. There'd just be this strange point where the Venn diagrams
connected, where we could sense each other or something like that. So in the movie, first of all,
incredibly original view of what an alien life would be. And in that sense, it's a huge success.
Let's go inside your imagination.
Did the alien that alien entity know anything
about humans when it landed?
No.
So the idea is, was it basically
an alien life trying to reach out
to anything that might be able to hear
its mechanism of communication?
Or was it simply, was it just basically
their biologist exploring different kinds of stuff
that you can find?
But you see, but this is the interesting thing
is as soon as you say their biologist,
you've done the thing of attributing
human type motivations to it,
I was trying to free myself from anything like that.
So all sorts of questions you might ask about this notion of alien, I wouldn't be able to answer, because I don't know what it
was or how it worked. I gave it some rough ideas, like it had a very, very, very slow clock
speed. And I thought maybe the way it is interacting with this environment is a little bit
like the way an octopus will change its colour and form around the space that it's in. So it's sort
of reacting to what it's in, to an extent, but the reason it's reacting in that way is indeterminate.
But its clock speed was slower than our human life clock speed, but it's faster than evolution?
Faster than our evolution?
Yeah, given the four billion years it took us to get here, then yes, maybe.
If you look at the human civilization as a single organism, yeah,
in that sense, you know, this evolution could be us.
You know, the evolution of the living
organisms on Earth could be just a single organism, and that's its life, is the
evolution process that eventually will lead to probably the heat death of the universe,
or something before that. I mean, that's just an incredible idea. So you've almost created something
that you don't even know how it works.
Like, yeah.
Because any time I tried to look into how it might work,
I would then inevitably be attaching my kind of thought processes into it.
And I wanted to try and put a bubble around it.
I was saying, no, this is alien in its most alien form.
I have no real point of contact.
So unfortunately, I can't talk to Stanley Kubrick.
So I'm really fortunate to get a chance to talk to you.
Do you, on this particular notion, I'd like to ask it,
in a bunch of different ways and we'll explore in different ways.
But you have to consider human imagination, your imagination as a window into a possible future
and that what you're doing, you're putting that imagination on paper as a writer and then on screen as a director.
And that plants the seeds in the minds of millions of future and current
scientists. And so your imagination, you putting it down, actually makes it a reality. So it's
almost like a first step of the scientific method. Like you imagine what's possible, and
your new series, or Ex Machina, is actually inspiring, you know, thousands of 12-year-olds, millions of scientists, and actually creating the future you have imagined?
Well, all I could say is that from my point of view, it's almost exactly the reverse, because I see that pretty much everything that I do is a reaction to what scientists are doing.
I'm an interested lay person and I feel, you know, this individual, I feel that the
most interesting area that humans are involved in is science.
I think art is very, very interesting, but the most
interesting is science. And science is in a weird place because maybe around the time Newton was
alive, if a very, very interested layperson said to themselves, I want to really understand what
Newton is saying about the way the world works, with a few years of dedicated thinking
they would be able to understand
the sort of principles he was laying out.
And I don't think that's true anymore.
I think that's stopped being true now.
So I'm a pretty smart guy.
And if I said to myself,
I want to really, really understand
what is currently the state of quantum mechanics or string theory or any of the sort of branching areas of it, I wouldn't be
able to.
I'd be intellectually incapable of doing it because to work in those fields at the moment
is a bit like being an athlete.
I suspect you need to start when you're 12, you know. And if
you start in your mid-20s, start trying to understand it in your mid-20s, then you're
just never going to catch up, is the way it feels to me. So what I do is I try to make
myself open. So the people that you're implying, maybe I would influence, to me, it's exactly
the other way around. These people are strongly influencing me.
I'm thinking they're doing something fascinating.
I'm concentrating and working as hard as I can
to try and understand the implications of what they say.
And in some ways, often what I'm trying to do
is disseminate their ideas into a means by which it can
enter a public conversation.
So Ex Machina contains lots of name-checks of
all sorts of existing thought experiments,
shadows on Plato's cave and Mary in the black and white room,
and all sorts of different long-standing thought processes about sentience or consciousness
or subjectivity or gender or whatever it happens to be. And then I'm trying to marshal that
into a narrative to say, look, this stuff is interesting and it's also relevant and this is my
best shot at it. So I'm the one being influenced in my construction. That's fascinating. Of course, you would say that because you're not even aware of your own.
That's probably what Kubrick would say too, right?
In describing why HAL 9000 is created the way HAL 9000 is created,
is that you're just studying what's there. But the reality, when the specifics of the knowledge passes through your imagination,
I would argue that you're incorrect in thinking that you're just disseminating knowledge,
that the very act of your imagination consuming that science creates the next step, potentially creates the next step.
I certainly think that's true with 2001: A Space Odyssey.
I think at its best, you know, it's true of that, yeah, it's true of that, definitely.
At its best, it plants something, it's hard to describe, but it inspires the next generation.
And it could be field dependent.
So your new series has more a connection to physics, quantum mechanics, quantum
computing, and yet Ex Machina is more artificial intelligence.
I know more about AI.
My sense is that AI is much, much earlier in the depth of its understanding.
I would argue nobody understands anything to the depth that physicists do about physics.
In AI, nobody understands AI, so there is a lot of importance and role for imagination,
which, I think, you know, like when Freud imagined the subconscious, we're in that stage of AI, where
there's a lot of imagination needed, thinking outside the box.
Yeah, it's interesting. The spread of discussions and the
spread of anxieties that exist about AI fascinate me. The way in
which some people seem terrified about it, whilst also pursuing
it. And I've never shared that fear about AI personally, but the way in which it agitates
people, and also the people who it agitates, I find kind of fascinating. Are you afraid? Are you excited?
Are you sad by the possibility? Let's take the existential risk of artificial intelligence by
the possibility that an artificial intelligence system becomes our offspring and makes us obsolete?
I mean, it's a huge subject to talk about, I suppose. But one of the things
I think is that humans are actually very experienced at creating new life forms, because that's
why you and I are both here, and it's why everyone on the planet is here. So something in the process of having a living
thing that exists that didn't exist previously is very much encoded into the structures of our
life and the structures of our societies. It doesn't mean we always get it right, but it does
mean we've learnt quite a lot about that.
We've learnt quite a lot about what the dangers are of allowing things to be unchecked, and it's why we then create systems of checks and balances in our government,
and so on and so forth. I mean, the other thing is it seems like there's all sorts of things
that you could put into a machine that you would not be able to with us. So with us, we sort of roughly try to
give some rules to live by,
and some of us then live by those rules, and some don't. With a machine, it feels like you could
enforce those things. So partly because of our previous experience and partly because of the
different nature of a machine, I just don't feel anxious about it. I, more I just see all the good,
you know, broadly speaking, the good that can come from it. But that's
just my, that's just where I am on that anxiety spectrum. You know, it's kind of, there's a sadness.
We as humans give birth to other humans, right, but between the generations there's
often, in the older generation, a sadness about what the world has become now. I mean, that's kind of...
yeah, there is, but there's a counterpoint as well, which is that most parents would wish
for a better life for their children. So there may be a regret about some things about
the past, but broadly speaking, what people really want is that things will be better for
the future generations, not worse. And so... And then it's a question about what constitutes
a future generation, a future generation
could involve people, it also could involve machines and it could involve a sort of cross-pollinated
version of the two or any, but none of those things make me feel anxious.
It doesn't give you anxiety, it doesn't excite you, like anything that's new.
No, it does.
Not anything that's new.
I don't think, for example... My anxieties relate to things like social media.
So I've got plenty of anxieties about that, which is also driven by artificial intelligence, in the sense that
there's too much information, so an algorithm has to filter that information and present it to you.
So ultimately the algorithm,
a simple, oftentimes simple algorithm is controlling the flow of information on social media. So that's another form of that. It is, but at least my sense of it, I might be wrong, but my
sense of it is that the algorithms have an either conscious or unconscious bias, which is created by the people who are making
the algorithms and sort of delineating the areas to which those algorithms are going to
lean. And so, for example, the kind of thing I'd be worried about is that it hasn't been
thought about enough how dangerous it is to allow algorithms to create echo chambers, say. But that doesn't seem to me to be about
the AI or the algorithm. It's the naivety of the people who are constructing the algorithms
to do that thing. If you see what I mean.
Yes. So in your new series, Devs, and we could speak more broadly, there's a, let's talk
about the people constructing those algorithms, which in our modern society,
Silicon Valley, those algorithms happen to be a source of a lot of income because of
advertisements.
So, let me ask sort of a question about those people.
Are their current concerns and failures on social media
a naivety?
I can't pronounce that word well.
Are they naive?
Are they,
I use that word carefully, but evil in intent or misaligned in intent?
I think that's a, do they mean well and just, uh, have an unintended consequence?
Or is there something dark in them that results
in them creating a company, results in that super competitive drive to be successful and
those are the people that will end up controlling the algorithms.
At a guess, I'd say there are instances of all those things.
So sometimes I think it's naivety. Sometimes I think it's extremely dark. And sometimes I
think people are not being naive or dark, and then in those
instances are sometimes generating things that are very
benign, and other times generating things that, despite their best
intentions, are not very benign.
It's something, I think the reason why I don't get anxious about AI in terms of, or at least,
AI's that have, I don't know, a relationship with, some sort of relationship with humans is that I think that's the stuff we're quite well equipped to understand how to mitigate.
The problem is issues that relate actually to the power of humans or the wealth of humans
and that's where it's dangerous here and now. So, so what I see, I tell you what I sometimes feel about Silicon Valley is that it's like Wall Street in the 80s.
It's rabidly capitalistic, absolutely rabidly capitalistic, and it's rabidly greedy.
But whereas in the 80s, the sense one had of Wall Street was that these people kind of knew
they were sharks, and in a way relished being sharks, and dressed in sharp suits
and kind of lorded it over other people and felt good about doing it. Silicon Valley has managed to hide its
voracious Wall Street like capitalism behind hipster t-shirts and, you know, cool cafes
in the place where they set up there. And so that obfuscates what's really going on and
what's really going on is the absolute voracious pursuit of money and power. So that's where
it gets shaky for me.
So that veneer, and you explore that brilliantly,
that veneer of virtue that Silicon Valley has,
which they believe themselves, I'm sure.
Oh, wait, so let me, okay.
I hope to be one of those people.
And I believe that.
So as maybe a devil's advocate,
term poorly used in this case,
what if some of them really are trying
to build a better world?
I'm sure, I think, some of them are.
I think I've spoken to ones
who I believe in their heart
feel they're building a better world. Are they not able to? No, no, they may or may not be, but it's just
a zone with a lot of bullshit flying about. And there's also another thing, which, this actually
goes back to, I always thought about some sports that later turned out to be corrupt, in the way that, like, who won the boxing match, or
how a football match got thrown, or a cricket match, or whatever it happened to be. And I used to think, well,
look, if there's a lot of money, and there really is a lot of money, people stand to make
millions or even billions you will find a corruption that's going to happen. So it's in the nature of
its voracious appetite that some people will be corrupt and some people will exploit
and some people will exploit whilst thinking they're doing something good. But there are
also people who I think are very, very smart and very benign and actually very self-aware. And so I'm not trying to, I'm not trying to wipe out the motivations of this entire area.
But I do, there are people in that world who scare the hell out of me.
Yeah, sure.
Yeah, I'm a little bit naive in that, like I don't care at all about money.
And so, I'm...
You might be one of the good guys.
Yeah, but so the thought is, but I don't have money.
So my thought is if you give me a billion dollars,
it would change nothing and I would spend it right away,
investing it right back in and creating a good world.
But your intuition is that, with that billion,
there's something about that money
that maybe slowly corrupts
the people around you, there's something that gets in,
that corrupts your soul,
the way you are.
Money does corrupt, we know that.
But there's a different sort of problem
aside from just the money corrupts.
You know, the thing that we're familiar with throughout
history. And it's more about the sense of reinforcement an individual gets, which effectively
works like: the reason I earned all this money, and so much more money than anyone else,
is because I'm very gifted. I'm actually a bit smarter than they are,
or I'm a lot smarter than they are and I can see the future in the way they can't. And maybe some
of those people are not particularly smart, they're very lucky or they're very talented entrepreneurs
and there's a difference between... So in other words, the acquisition of the money and power can suddenly
start to feel like evidence of
virtue.
And it's not evidence of virtue.
It might be evidence of completely different things.
That's brilliantly put, yeah, it's brilliantly put.
So I think one of the fundamental drivers of my current morality, let me just represent
nerds in general of all kinds, is constant self-doubt and the signals, you know, I'm
very sensitive to signals from people that tell me I'm doing the wrong thing. But when
there's a huge inflow of money, you just put it brilliantly, that could become
an overpowering signal that
everything you do is right.
And so your moral compass can just get thrown off.
Yeah.
And that is not contained to Silicon Valley.
That's across the board in general.
Yeah.
Like I said, I'm from the Soviet Union.
The current president is convinced, I believe,
actually, that he wants to do really good by the country and by the world,
but his moral clock, maybe, or compass, may be off because...
Yeah, I mean, it's the interesting thing about evil, which is that I think most people who do
spectacularly evil things think themselves, they're doing really good things.
They're not there thinking,
I am a sort of incarnation of Satan, they're thinking, I've seen a way to fix the world,
and everyone else is wrong, here I go. In fact, I'm having a fascinating conversation with a
historian of Stalin, and he took power, he actually got more power than almost any person in history.
And he wanted, he didn't want power.
He just wanted, he truly, and this is what people don't realize, he truly believed that communism
would make for a better world.
Absolutely.
And he wanted power.
He wanted to destroy the competition to make sure that we actually make communism work
in the Soviet Union and then spread it across the world. He was trying to do good.
I think it's typically the case that that's what people think they're doing. And I think
that, but you don't need to go to Stalin. I mean, Stalin, I think Stalin probably got
pretty crazy. But actually, that's another part of it, which is that the other thing that comes
from being convinced of your own virtue
is that then you stop listening
to the modifiers around you.
And that tends to drive people crazy.
It's other people that keep us sane.
And if you stop listening to them,
I think you go a bit mad.
That also.
That's funny.
Disagreement keeps us sane.
To jump back: for an entire
generation of AI researchers, 2001: A Space Odyssey put an image, the idea of human-level,
superhuman-level intelligence, into their mind. Do you ever, sort of jumping back to Ex Machina
and talking a little bit about that, do you ever consider the audience of people
who build the systems, the roboticists, the scientists that build the systems based on the stories you create?
Which, I would argue, I mean, literally most of the top researchers,
about 40, 50 years old and plus, you know, that's their favorite movie, 2001: A Space Odyssey.
It really is in their work, their idea of what ethics is, of what is the target, the hope,
the dangers of AI, is that movie. Do you ever consider the impact on those researchers when you
create the work you do? Certainly not with Ex Machina in relation to 2001, because, I'm not sure, I mean I'd
be pleased if there was, but I'm not sure in a way there isn't a fundamental discussion
of issues to do with AI that isn't already, and better, dealt with by 2001. 2001 does a very, very good account of the way in which an
AI might think, and also potential issues with the way the AI might think. And also then a separate
question about whether the AI is malevolent or benevolent. And 2001 doesn't really...
It's a slightly odd thing to be making a film
when you know there's a pre-existing film
which is not really surpassable.
But there's questions of consciousness embodiment
and also the same kinds of questions.
Those are my two favorite AI movies.
So can you compare HAL 9000 and Ava,
HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina,
in your view, from a philosophical perspective?
They've got different goals. The two AIs have completely different goals.
I think that's really the difference. So in some respects, Ex Machina took as a premise:
How do you assess whether something else has consciousness?
So it was a version of the Turing Test,
except instead of having the machine hidden,
you put the machine in plain sight
in the way that we are in plain sight of each other
and say, now assess the consciousness
and in a way it was illustrating the way in which you
assess the state of consciousness of a machine
is exactly the same way we assess
the state of consciousness of each other, and in exactly the same way that in a funny way
your sense of my consciousness is actually based primarily on your own consciousness.
That is also then true with the machine.
And so it was actually about how much of the sense of consciousness is a projection rather than something that consciousness is actually containing.
And Plato's cave. I mean, you really explored... You could argue that 2001: A Space Odyssey explores
the idea of the Turing test for intelligence. In that movie, there's no explicit test, but it's more focused on intelligence, and Ex Machina kind of goes around intelligence and says the consciousness
of the human-to-human, human-to-robot interaction is more interesting, more important, or at least the
focus of that particular movie.
Yeah, it's about the interior state, and what constitutes the interior state, and how
do we know it's there. And actually, in that respect, Ex Machina is as much about consciousness in general
as it is to do specifically with machine consciousness.
And it's also interesting, you know, the thing you started asking about, the dream state,
and I was saying, well, I think we're all in a dream state because we're all in a subjective state. One of the things that I became aware of with Ex Machina is that
the way in which people reacted to the film was very much based on what they took into the film.
So many people thought Ex Machina was the tale of an evil robot who murders two men and
escapes and she has no empathy, for example, because she's
a machine.
Whereas I felt, no, she was a conscious being, with a consciousness different from mine, but
so what? She was imprisoned and made a bunch of value judgments about how to get out of that
box.
And there's a moment which sort of slightly bugs me, but nobody
has ever noticed it in the years after, so I might as well say it now, which is that after
Ava has escaped, she crosses a room and as she's crossing a room, this is just before she leaves
the building, she looks over her shoulder and she smiles. And I thought after all the conversation about tests,
in a way the best indication you could have
of the interior state of someone
is if they are not being observed
and they smile about something
when they're smiling for themselves.
And that, to me, was evidence of Ava's true sentience,
whatever that sentience was.
But that's really interesting.
We don't get to observe Ava much,
or something like a smile, in any context
except through interaction,
trying to convince others that she's conscious.
That's beautiful.
Yeah, exactly, yeah.
But it was a small, in a funny way,
I think maybe people saw it as an evil smile, like,
ha, you know, I fooled them.
But actually, it was just a smile.
And I thought, well, in the end, after all the conversations about the test, that was
the answer to the test, and then she goes,
So if we just linger a little bit longer on HAL and Ava: do you think, in terms of motivation, what was HAL's
motivation? Is HAL good or evil? Is Ava good or evil?
Ava's good, in my opinion, and HAL is neutral, because I don't think HAL is presented as having a sophisticated emotional life.
He has a set of paradigms, which is that the mission needs to be completed.
I mean, it's a version of the paper clip.
The idea that it's just a super-intelligent machine that's just performing a particular
task.
And doing that task may destroy everybody.
Or, ultimately, achieve undesirable effects for us humans.
Precisely.
But what if, okay.
At the very end, he says something like, I'm afraid, Dave. But that, that maybe he is on some
level experiencing fear, or maybe this is the terms in which it would be wise to
stop someone from doing the thing they're doing, if you see what I mean.
Yes, absolutely. So actually, it's funny, that's such a small, short
exploration of consciousness, that "I'm afraid." And then you, with Ex Machina,
say, okay, we're going to magnify that part and then minimize the other part.
So that's a good way to sort of compare the two.
But if you could just use your imagination, and if
Ava, sort of,
I don't know,
ran the world, was president of the United States,
so had some power, what kind of world would she want to create?
Because you kind of say good.
And there is a sense that she has a really,
like, there's a desire for a better human
to human interaction, human to robot interaction in her.
But what kind of world do you think
she would create with that desire?
So that's a really, that's a very interesting question that I'm going to approach it slightly
obliquely, which is that if a friend of yours got stabbed in a mugging and you then felt
very angry at the person who'd done the stabbing. But then you learned that it was a 15-year-old
and the 15-year-old, both their parents were addicted to crystal meth and the kid had been addicted
since he was 10 and he really never had any hope in the world and he'd been driven crazy by his
upbringing, and did the stabbing. That would hugely modify how you feel, and it would also make you wary about that kid then becoming
president of America. And Ava has had a very, very distorted introduction into the world.
So although there's nothing, as it were, organically within Ava that would lean
her towards badness, it's not that robots or sentient robots are bad. Her arrival
into the world was being imprisoned by humans. So I'm not sure she'd be a great president.
Yeah, the trajectory through which she arrived at her moral views has some dark elements. But I like Ava. Personally, I like Ava. And
I think, would you vote for her? I'm having difficulty finding anyone to vote for in my country,
or if I lived here, in yours.
So that's a yes, I guess, because of the competition.
Yeah, she could easily do a better job than any of the people we've got around at the moment. I'd vote for her over Boris Johnson. So what is a good test of consciousness?
Let's talk about consciousness a little bit more. If something appears conscious, is it conscious?
You mentioned the smile, which seems to be something done when unobserved. I mean, that's a really good indication, because it's a tree falling in the forest with nobody there to hear it.
But does the appearance, from a robotics perspective, of consciousness mean consciousness to you?
No, I don't think you could say that fully because I think you could then easily have a thought experiment which said
We will create something which we know is not conscious, but is going to give a very very good account of seeming conscious
And also it would be a particularly bad test where humans are involved, because humans are so quick
to project sentience into things that don't have sentience.
So someone could have their computer playing up
and feel as if their computer is being malevolent
to them when it clearly isn't.
And so of all the things to judge consciousness,
us humans are bad at it.
We're empathy machines.
So the flip side of that, the argument there is
because we just attribute consciousness to
everything almost, and anthropomorphize everything, including Roombas, that maybe consciousness
is not real, that we just attribute consciousness to each other.
So you have a sense that there is something really special going on in our mind that makes
us unique and gives us subjective experience.
There's something very interesting going on in our minds. I'm slightly worried about the
word special because it gets a bit nudged towards metaphysics and maybe even magic. I mean,
in some ways something magic like which I don't think is there at all.
I mean, if you think about, so there's an idea, a sort of panpsychism, that says consciousness
is in everything.
Yeah, I don't buy that.
I don't buy that.
Yeah, so the idea that there is a thing that it would be like to be the sun.
Yes.
Yeah, no, I don't buy that.
I think that consciousness is a thing. My sort of broad observation is that usually,
the more I find out about things,
the more illusory our instinct is,
and it's leading us in a different direction
about what that thing actually is.
That happens, it seems to me in modern science,
that happens a hell of a lot, whether it's to do with how even how big or small things
are. So my sense is that consciousness is a thing, but it isn't quite the thing, or maybe
very different from the thing, that we instinctively think it is. So it's there, it's very interesting,
but we may be in sort of quite fundamentally misunderstanding
it for reasons that are based on intuition.
So I have to ask, this is kind of an interesting question.
Ex Machina, for many people, including myself, is one of the greatest AI films ever made.
It's number two for me.
Thanks.
Yeah, it's definitely not a problem.
What's number one, I'd really have to ask?
Well, it's 2001, yeah.
Whenever you grow up with something, right?
Whenever you grow up with something, it's in the blood.
But one of the things that people bring up,
and you can't please everyone, including myself,
this is how I first reacted to the film,
is the idea of the
lone genius. This is the criticism that people say, sort of, me as an AI researcher, I'm trying
to create what Nathan is trying to do. So there's a brilliant series called Chernobyl.
Yes, it's fantastic. Absolutely superb. I mean, they got so many things brilliantly
right. But one of the things, again, the criticism there, is it conflated lots of people
into one character that represents all nuclear scientists, Ulana Khomyuk. It's a composite
character that represents all scientists. Is this what you were, is this the way you were thinking about that?
Or does it just simplify the storytelling?
How do you think about the lone genius?
Well, I'd say this.
The series I'm doing at the moment is a critique in part
of the lone genius concept.
So yes, I'm sort of oppositional, and either agnostic or atheistic about that as a concept.
I mean, not entirely, you know, whether lone is the right word, broadly isolated,
but Newton clearly exists in a sort of bubble of himself in some respects, so does Shakespeare.
So do you think we would have an iPhone without Steve Jobs?
I mean, how much innovation comes from a lone genius?
Well, no, but it's a bit different. Jobs clearly isn't a lone genius, because there's
too many other people in the sort of superstructure around him
who are absolutely fundamental to that journey.
But you're saying Newton, but that's a scientific,
so there's an engineering element to building Ava.
But just to say, what Ex Machina is, really, is a thought
experiment. I mean, it's a construction of putting four people in a house.
Nothing about Ex Machina adds up in all sorts of ways, inasmuch as,
who built the machine parts? Did the people building the machine parts know what they were creating, and how did they get there? It's a thought experiment. So it doesn't stand up to scrutiny
of that sort. I don't think it's actually that interesting of a question, but it's brought up
so often that I had to ask it, you know,
because that's exactly how I felt after a while. There's something about, there was almost a defense,
like I've watched your movie the first time,
and at least for the first little while,
in a defensive way, like how dare this person try
to step into the AI space and try to beat Kubrick.
That's the way I was thinking,
because it comes off as a movie that really is going after
the deep fundamental questions about AI.
So there's a kind of thing nerds do, I guess, automatically searching for the flaws.
And I do exactly the same.
I think in Annihilation and the other movie, I was able to free myself from that much
quicker.
It is a thought experiment.
There's, you know, who cares if there's batteries that don't run out, right? Those kinds of questions.
That's the whole point. But it's nevertheless something I wanted to bring up.
Yeah, it's a fair thing to bring up. For me, you hit on the lone genius thing. For me,
it was actually, people always said, Ex Machina makes this big leap in terms of where AI has got to.
And also what AI would look like if it got to that point.
There's another one, which is just robotics.
I mean, look at the way Ava walks around the room. So, forget about building that.
That's also got to be a very, very long way off. And if you did get
that, would it look anything like that? It's a thought experiment.
Actually, I think the way, as a ballerina, Alicia Vikander, brilliant actress, moves
around, we're very far away from creating that. But the way she moves around is exactly
the definition of perfection for a roboticist.
It's like smooth and efficient.
So it is where we wanna get, I believe.
Like, I think, so I hang out with a lot of humanoid robotics
people, they love elegant, smooth motion like that.
That's their dream.
So the way she moves is actually what I believe
they would dream for a robot to move like.
It might not be that useful to move that sort of that way,
but that is the definition of perfection in terms of movement.
Drawing inspiration from real life.
So for Devs, for Ex Machina, did you look at characters like Elon Musk?
What do you think about the various big technological efforts of Elon Musk
and others like him
and that he's involved with such as Tesla, SpaceX, Neuralink?
Do you see any of that technology potentially defining the future worlds you create in your work?
So Tesla's automation, SpaceX's space exploration, Neuralink's brain-machine interface,
somehow a merger of biological and electrical systems.
In a way, I'm influenced by that, almost by definition, because that's the world I live in,
and this is the thing that's happening in that world. And I also feel supportive of it.
So I think amongst various things, Elon Musk has done, I'm almost sure he's done a very, very
good thing with Tesla for all of us.
It's really kicked
all the other car manufacturers in the face, it's kicked the fossil fuel industry in the
face, and they needed kicking in the face, and he's done it.
That's the world he's part of creating and I live in that world, just bought a Tesla,
in fact. And so does that play into whatever I then make in some ways? It does, partly because, I try
to be a writer who... quite often filmmakers are in some ways fixated on the films they grew up with,
and they sort of remake those films in some ways. I've always tried to avoid
that, and so I look at the real world to get inspiration, and as much as possible
sort of by living, I think. And so, yeah, I'm sure. Which of the directions do you find most exciting?
Space travel.
Space travel. So you haven't really explored space travel in your work.
You've said something like, if you had an unlimited amount of money,
I think in a Reddit AMA, that you would make like a multi-year series of space wars or something
like that. So what is it that
excites you about space exploration? Well, because if we have any sort of long-term future,
it's that. It just simply is that if energy and matter are linked up in the way we think they're linked up, we'll run out if we don't move.
So we got to move. But also, how can we not, it's built into us to do it or die trying.
I was on Easter Island a few months ago, which is, as I'm sure you know, the middle of the Pacific
and difficult for people to have got to, but they got there.
And I did think a lot about the way those boats must have set out into something like
space.
It was the ocean and how sort of fundamental that was to the way we are. It's the one that most excites me
because it's the one I want most to happen. It's the thing, it's the place where we could
get to as humans. In a way, I could live with us never really unlocking, fully unlocking
the nature of consciousness. I'd like to know, I'm really curious.
But if we never leave the solar system, and if we never get further out into this galaxy, or maybe even galaxies beyond our galaxy,
that would, that feels sad to me, because
it's so limiting. Yeah, there's something hopeful and beautiful about reaching out, any kind of exploration,
reaching out across Earth
centuries ago, and then reaching out into space. So what do you think about colonization of Mars?
Going to Mars, does that excite you, the idea of a human being stepping foot on Mars?
It does, it absolutely does. But in terms of what would really excite me, it would be leaving the solar system, inasmuch as that, I just think, I think we already know quite a lot about Mars.
But yes, listen if it happened, that would be...
I hope I see it in my lifetime. I really hope I see it in my lifetime.
It would be a wonderful thing.
Without giving anything away, but...
The series begins with the use of quantum computers.
The new series, Devs, begins with the use of quantum computers to simulate basic living organisms.
Or actually, I don't know if the quantum computers are used, but basic living organisms are simulated on a screen.
It's a really cool kind of demo.
Yeah, that's right. They are using a quantum computer to simulate a
nematode. Yep. So returning to our discussion of simulation, or
thinking of the universe as a computer. Do you think the universe is deterministic? Is there a free will?
So with the qualification of what do I know because I'm a layman, right, lay person.
But with big imagination.
Thanks.
With that qualification, yep, I think the universe is deterministic and I see absolutely,
I cannot see how free will fits into that.
So, yes, deterministic, no free will.
That would be my position.
And how does that make you feel?
It partly makes me feel that it's exactly in keeping with the way these things tend to
work out, which is that we have an incredibly strong sense that we do have free will.
And just as we have an incredibly strong sense that time is a constant, and it
turns out probably not to be the case, well, definitely in the case of time. But the problem I always have with free will is that I
can never seem to find the place where it is supposed to reside.
Yet you explore it just a bit. We have something we can call free will,
but it's not the thing that we think it is.
Well, what we call free will is just what we call the illusion of it.
That's a subjective experience of the illusion.
Yeah, which is a useful thing to have and it partly comes down to, although we live in a deterministic universe, our brains are not very well equipped to fully determine the deterministic universe.
So we're constantly surprised and feel like we're making snap decisions based on imperfect information, so that feels a lot like free will.
It just isn't. That's my guess. So in that sense, your sense is that you can unroll the universe forward or backward and
you will see the same thing.
And you would, I mean, that notion, sort of, but yeah, sorry, go ahead.
I mean, that notion is a bit uncomfortable to think about, that you can roll it back and forward.
Well, if you were able to do it, it would certainly have to be a quantum
computer, something that worked in a quantum mechanical way, in order to
understand a quantum mechanical system, I guess. And in that unrolling, there might be a
multiverse thing, where there's a bunch of branching.
Well, exactly.
Because it wouldn't follow that every time you roll it back or
forward, you'd get exactly the same result.
Which is another thing that's hard to wrap my head around.
Yeah, but essentially what you just described,
that, yes, forwards and, yes, backwards,
but you might get a slightly different result.
Or a very different result.
Or very different.
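As a side note from me, and not anything from the show, here is a tiny toy sketch in Python of the idea being discussed: a purely deterministic update replayed from the same starting state gives the identical trajectory every time, while adding a quantum-measurement-like random branch at each step gives a different trajectory on every replay. The update rule and the branching probability are made up purely for illustration.

```python
import random

def run(steps, seed_state=1, branching=False, rng=None):
    """Toy 'universe': a single integer state evolved step by step.
    The update rule is arbitrary, chosen only for illustration."""
    state, trajectory = seed_state, []
    for _ in range(steps):
        state = (state * 31 + 7) % 1000          # deterministic update rule
        if branching and rng.random() < 0.5:     # 'measurement' picks a branch
            state = (state + 500) % 1000
        trajectory.append(state)
    return trajectory

# Deterministic: replaying from the same state gives the same trajectory.
print(run(5) == run(5))                               # True, every time

# With branching, each replay can give a different trajectory.
print(run(5, branching=True, rng=random.Random()))
print(run(5, branching=True, rng=random.Random()))    # likely different
```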
Along the same lines, you've explored some really deep scientific ideas
in this new series. I mean, just in general, you're unafraid to ground yourself
in some of the most amazing scientific ideas of our time.
What are the things you've learned, ideas you find beautiful, mysterious about quantum
mechanics, multiverse, string theory, quantum computing?
Well, I would have to say every single thing I've learned is beautiful. And one of the motivators for me is that I think that people tend not to see scientific
thinking as being essentially poetic and lyrical, but I think that is literally exactly what
it is. And I think the idea of entanglement or the idea of superpositions, or the fact
that you could even demonstrate a superposition or have a machine that relies on the existence of superpositions in order
to function, to me is almost indescribably beautiful. It fills me with awe. It fills me
with awe. And also, it's not just a sort of grand, massive awe, it's also delicate.
It's very, very delicate and subtle.
And it has these beautiful sort of nuances in it.
And also these completely paradigm-changing thoughts and truths.
So it's as good as it gets as far as I can tell. So broadly everything,
that doesn't mean I believe everything I read, because obviously a lot of the interpretations
are completely in conflict with each other and who knows whether string theory will turn out
to be a good description or not, but the beauty in it seems undeniable. And I
do wish people more readily understood how beautiful and poetic science is, I would say. In terms of quantum computing being used to simulate things, or just in general, the idea
of simulating small parts of our world, which actually current physicists are really excited
about simulating small quantum mechanical systems on quantum computers, but scaling that
up to something bigger like simulating life forms.
How do you think, what are the possible trajectories of that going wrong or going right if you
unroll that into the future?
Well, if, a bit like Ava and her robotics, you park the sheer complexity of what you're
trying to do,
The issues are, I think it will have a profound...
If you were able to have a machine that was able to project forwards and backwards accurately,
it would in an empirical way show.
It would demonstrate that you don't have free will.
So the first thing that would happen is
people would have to
really take on a very, very different idea of what they were: the thing that they truly, truly believe they are, they are not.
And so that I suspect would be very very disturbing to a lot of people.
Do you think that has a positive or negative effect on society, the realization that you cannot control
your actions essentially, I guess, is the way that could be interpreted.
Yeah, although in some ways we instinctively understand that already because in the
example I gave you of the kid in the stabbing, we would all understand that that kid was
not really fully in control of their actions.
So it's not an idea that's entirely alien to us. But I don't know we understand that. I think there's
a bunch of people who see the world that way, but not everybody. Yes, true. But what this
machine would do is prove it beyond any doubt, because someone would say, well, I don't
believe that's true. And then you'd predict, well, in 10 seconds, you're going to do this.
And they'd say, no, no, I'm not.
And then they'd do it.
And then determinism would have played its part.
Or something like that. But actually, the exact terms of that thought
experiment probably wouldn't play out. But still, broadly speaking, you could predict
something happening in another room, sort of unseen, I suppose, that foreknowledge would not allow you to affect. So what effect would that have?
I think people would find it very disturbing, but then after they'd got over their sense
of being disturbed, which by the way, I don't even think you need a machine to take this
idea on board, but after they've got over that, they'd still understand that even though I have no free will and my actions are in effect already determined, I still
feel things. I still care about stuff. I remember my daughter saying to me, she got hold of
the idea that my view of the universe made it meaningless.
And she said, well, then it's meaningless.
And I said, well, I can prove it's not meaningless because you mean something to me and I mean
something to you.
So it's not completely meaningless because there is a bit of meaning contained within
this space.
And so with the lack of free will, say, you could think, well, this robs me of everything I am, and then you'd say, well,
no, it doesn't, because you still like eating cheeseburgers,
and you still like going to see the movies, and so how big a difference does it really make?
But I think initially people would find it very disturbing. I think
that what would come, if you could really unlock everything with a determinism machine,
there'd be this wonderful wisdom
that would come from it, and I'd rather have that than not.
So that's a really good example of a technology
revealing to us humans something fundamental
about our world, about our society.
So it's almost, this creation is helping us understand ourselves,
and the same thing could be said about artificial intelligence.
So what do you think us creating something like Ava
will help us understand about ourselves?
How will that change society?
Well, I would hope it would teach us some humility.
Humans are very big on exceptionalism, you know? America is constantly proclaiming itself to be the greatest nation on earth, which it may feel like if you're
an American, but it may not feel like that if you're from Finland, because there's all sorts of
things you'd dearly love about Finland. And exceptionalism is usually bullshit,
probably not always. If we both sat here, we could find a good example of something that isn't, but as a rule of thumb. And what it would
do is it would teach us some humility. And, you know, actually, often that's what science
does in a funny way. It makes us more and more interesting, but it makes us a smaller and
smaller part of the thing that's interesting.
And I don't mind that humility at all.
I don't think it's a bad thing.
Our excesses don't tend to come from humility.
Our excesses come from the opposite, megalomania.
We tend to think of consciousness as having some form of exceptionalism attached to it.
I suspect if we ever unravel it,
it will turn out to be less than we thought in a way.
And perhaps your very own exceptionalist assertion
earlier on in our conversation,
that consciousness is something that belongs to us humans,
or not just humans, but living organisms,
Maybe you will one day find out
that consciousness is in everything.
And that will humble you.
If that was true, it would certainly humble me.
Although maybe, almost maybe, I don't know,
I don't know what effect that would have.
I said, I mean, my understanding of that principle
is along the lines of say that an electron
has a preferred state, or it may or may not pass through a bit of glass, it may reflect
off or it may go through or something like that.
And so that feels as if a choice has been made.
And but if I'm going down the fully deterministic route, I would say there's
just an underlying determinism that has defined the preferred state or the reflection or non-reflection.
But look, yeah, you're right. If it turned out that there was a thing that it was like
to be the sun, then I would be amazed and humbled.
And I'd be happy to be both.
Sounds pretty cool.
And then you'll say the same thing you said to your daughter,
but it nevertheless feels like something to be me,
and that's pretty damn good.
Yeah.
So Kubrick created many masterpieces,
including The Shining, Dr. Strangelove,
A Clockwork Orange.
But to me, he will be remembered, I think,
by many a hundred years from now,
for 2001: A Space Odyssey.
I would say that's his greatest film.
I agree.
You are incredibly humble.
I listened to a bunch of your interviews,
and I really appreciate that you're humble
in your creative efforts and your work.
But if I were to force you, at gunpoint,
to imagine a hundred years out into the future, what will Alex Garland be remembered for, from something you've created already, or feel, you may feel somewhere deep inside, you
may still create? Well, okay, well, I'll take the question in the spirit it was asked, but
forgive the gunpoint. Yeah. What I try to do, so therefore what I hope,
Yeah, if I'm remembered, what I might be remembered for is as someone who participates in a conversation. And I think that
Often what happens is people don't participate in conversations. They make proclamations
They make statements and people can either react against the statement or can fall in line behind it and I don't like that
So I want to be part of a conversation I take as a sort of basic principle
I think I take lots of my cues from science
But one of the best ones it seems to me is that when a scientist has something proved wrong that they previously believed in
They then have to abandon that position
So I'd like to be someone
who is allied to that sort of thinking. So part of an exchange of ideas, and the exchange
of ideas for me is something like people in your world show me things about how the world
works, and then I say, this is how I feel about what you've told me. And then other people can react to that.
And it's not to say this is how the world is.
It's just to say it is interesting to think about the world in this way.
And the conversation is one of the things I'm really hopeful about in your works.
The conversation you're having is with the viewer, in the sense that
you're bringing back,
you and several others, but you very much so, a sort of intellectual depth to
cinema,
to now series,
sort of
allowing film to be something that,
yeah, sparks a conversation, is a conversation,
lets people think, allows them to think.
But also, crucially, it's very important for me that if that conversation is going to be a good conversation,
what that must involve is that someone like you, who understands AI,
and I imagine understands a lot about quantum mechanics,
if they then watch the narrative,
feels, yes, this is a fair account. So it is a worthy addition to the conversation. That for me
is hugely important. I'm not interested in getting that stuff wrong. I'm only interested in trying
to get it right. Alex, it was truly an honor to talk to you. I really appreciate it. I really enjoyed it.
Thank you so much.
Thank you.
Thanks, man.
Thanks for listening to this conversation with Alex Garland.
And thank you to our presenting sponsor, CashApp.
Download it, use code Lex Podcast.
You'll get $10 and $10 will go to first, an organization that inspires and educates
young minds to become science and technology innovators of tomorrow. If you enjoy this podcast subscribe on YouTube
give it five stars on Apple Podcasts, support it on Patreon, or simply connect with
me on Twitter at Lex Fridman. And now, let me leave you with a question from
Ava, the central artificial intelligence character in the movie Ex Machina, that she asked during her
Turing test. What will happen to me if I fail your test? Thank you for listening, and hope to see you
next time. Thank you.