StarTalk Radio - Westworld and the Future of AI, with James Marsden
Episode Date: September 6, 2019
Neil deGrasse Tyson sits down with Westworld star James Marsden to explore the future of artificial intelligence. Featuring comic co-host Chuck Nice, philosopher Susan Schneider, AI robot Sophia, and neuroscientist David Eagleman, PhD.
NOTE: StarTalk+ Patrons and All-Access subscribers can listen to this entire episode commercial-free.
Photo Credit: Kris Denton. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Transcript
From the American Museum of Natural History in New York City,
and beaming out across all of space and time,
this is StarTalk, where science and pop culture collide.
Welcome to the Hall of the Universe.
I'm your host, Neil deGrasse Tyson, your personal astrophysicist,
and tonight we explore the hopes and fears of the future of artificial intelligence.
And we're going to do that as imagined in the hit HBO sci-fi series, Westworld.
So, let's do this.
So, my co-host tonight, comedian Chuck Nice.
Chuck.
Hey, Neil.
Very good.
Tweeting at Chuck Nice Comic.
Welcome back.
Joining us tonight, because I don't have this expertise,
is AI expert Susan Schneider.
Susan, welcome.
Thank you.
You are the director of the Artificial Intelligence Mind and Society Group
at the University of Connecticut.
That's an awesome business card right there.
Thank you.
And we'll be tapping your expertise tonight
as we discuss my recent interview
with actor James Marsden.
And he plays a sentient robot
in the sci-fi series Westworld.
And it's where humanoid
robots act as hosts
at a theme park for people to live
out their wildest Wild West fantasies.
Ooh.
Oh, yeah.
Let's check out my first clip with James Marsden.
The idea is that this theme park is so advanced that the hosts are indistinguishable from human beings.
Like, when I first got the job and I was telling somebody what I was playing,
they're like, you're playing a robot.
Like, are you going beep-bop-bop?
Right.
No, no, no, no.
You are, you are.
Can I tell you a quick thing?
Yeah, yeah.
Quick thing.
My kids are just graduating high school and college, so they're young enough.
They don't remember the era when robots did this.
So I started doing this in front of them.
They said, Daddy, what are you doing?
I'm being a robot.
No, robots are smooth.
Right, yeah, great point.
I was not accurately imitating robots in their worldview.
Right, robots don't do that.
They just walk and talk like you and me.
So where do you have to put your acting head to make that work?
Well, you know, we had long conversations before we started with Jonah Nolan and Lisa Joy, the showrunners, and said, you know, how do we, you know, how do we approach this?
And it was very simply, you approach it as a human being.
Okay.
There are certain things that a guest who comes to the park.
These would be regular humans.
Humans, right, that pay the money to come and sort of live out their fantasies in this park that, you know, could be brought in from the real world.
So a guest came in and said,
oh man, remember that day when we were in Vegas?
The hosts don't understand or comprehend certain things.
So you have to capture that in your face.
Yes, with a sort of vapid expression
of not registering anything.
So they follow their narratives
that are very intricate and elaborate. And by the way, is that an acting class?
How to be vapid?
How to be vapid.
I've never been to acting school,
but if that's one of the things you've got to do.
Yeah, well, it's one of the things that differentiate this from playing a human
is there are certain levels of consciousness that you play.
There are different degrees of awareness
that the robots can be put into.
There's like a diagnostic mode
where they can sit there and answer questions
without affectations like a Southern drawl,
like a, you know, a cowboy drawl,
or you can lose-
So you have to hit these levels.
Yeah, so you do kind of go in,
you know, having to puppeteer your own emotions and your own consciousness to match what's going on in the scene.
So that's, I just found that kind of an interesting challenge, too.
Like, cry now.
Okay, now stop crying and be just, you know, vapid.
Right.
Susan, how realistic is Westworld?
Or is that even the right question to ask?
Well, I think people would want to build an AI theme park.
I don't think we'd have the technology to do it right now.
Artificial intelligence right now, as it stands,
can excel in very specific domains.
So the World Go champion is an artificial intelligence.
The Game Go.
Yeah, the world Jeopardy
champion. So, you know, artificial intelligences are great at computation, but try talking to Siri
and Alexa. You can trip it up very quickly. So artificial intelligence is nothing like what is
depicted in Westworld, but maybe in 10 to 20 years from now, we'll see more general purpose artificial intelligences,
which can respond creatively to novel situations.
So, all right, now that you brought up timeline,
at what point will robots be indistinguishable from humans, would you say?
Well, it depends if you mean visually indistinguishable or if you mean behaviorally.
Behaviorally.
I have a whole conversation where we go have a milkshake. Come on, see you later.
Bye.
And I don't even know you're a robot.
Well, that's a big controversy right now in artificial intelligence.
So some people think 30 years.
Some people say never.
Thank God for those people.
Yeah.
Because never is like, I don't want to be with someone and then have them have to come out of the closet to me as a robot.
Right, right.
So the robots in Westworld progress through different levels of consciousness.
So what are those levels?
From what I can tell in Westworld, consciousness is said to evolve
when the android realizes that the voices in her head
are not voices coming from God. So all the commands to do anything are regarded by the
android at first as being the voices of a God. Someone in control. Yes, an external factor in
control of that creature's fate. And then over time, the creature begins to realize
that the voices are actually part of her own mind.
And then she discovers the self.
So the self is considered to be the last step
before the genesis of consciousness.
Whoa.
So how do you explain the voices in my head?
Right.
And I love that she went, right.
Like that was the answer. We have an actor and a robot
and each of them perform on command. So what's the difference? So the difference is that there's a lot
more going on in terms of the computation that the human brain is capable of doing. And even if you think,
well, at this moment, right. So if you kind of assume in the future that the current developments
in artificial intelligence continue, it could be in 30 years that there isn't that great of
a difference. Although that, of course, is a matter of massive philosophical debate.
But if you want to have a philosophical debate,
really the debate starts with what is consciousness, right?
Right.
I thought she had a good explanation.
Where if it's first outside of your head,
you don't have a self-awareness.
You're following commands
that you don't even know where they come from.
Okay.
Actually, Neil, I have to tell you,
that Westworld story about the genesis of consciousness
is scientifically inaccurate.
So it's due to the psychologist Julian Jaynes back in the 70s.
Oh, I remember when his book,
The Origin of Consciousness and the Breakdown of the Bicameral Mind.
Yeah, what a great title.
I read that book.
Yeah, it was a good one.
Those are big, thick... I don't usually read thick books.
Those are big, thick books.
So that's been discredited?
Absolutely.
So think about somebody who hallucinates, like maybe you, apparently.
Right.
And so they're still conscious.
So we need to figure out, first and foremost, what is it to be conscious?
So when you smell your morning coffee or when you see the rich hues of a sunset,
you are having conscious experience.
Right.
Conscious experience. Even when I'm not at a sunset or when I don't have coffee.
That's exactly right.
Consciousness is the felt quality of experience.
So it is what makes life wonderful and also makes it terrible at various moments.
Or terrifying.
It's just what it feels like.
Exactly.
It's what it feels like to be you.
Well, James Marsden plays a robot cowboy named Teddy,
and he's programmed to love and protect his robot sweetheart, Dolores.
Ooh.
Let's get more into that right now. Check it out.
One of the things I like about Teddy is that he is,
from the beginning of the first season,
he is your archetypal Wild West hero.
He's very quick with a gun.
He looks good in the outfit and everything.
In the digs.
Yeah, and that is programmed.
His narrative is to protect Dolores, to reckon with his past,
and deal with it so they can live a peaceful life together.
That's all code. That's all programming. So I guess, you know, from the beginning,
it's very romantic. It's Teddy and Dolores, you know, writing, you know, having a brief
conversation with the sunset behind them. It still feels very real, but they aren't going to real depths that we do as a human
just yet in that first season.
But what was interesting to me is that
this second season, as we start out,
we are all awake.
Woke.
We are woke.
Everyone is woke.
There's a word for awake.
We're woke.
Yes, that's the word version of the fluid robot, right?
Yeah, yeah.
But so what's interesting now is seeing who these characters choose to be
now that they have free will and free choice.
Is he still in love with her?
Does she still have the same feelings for him
now that
they are beyond their programming
Susan, can robots have free will? If you program it, then it doesn't sound like they would.
Great question. So I think the way to look at this is to ask whether humans are even capable of having free will. In philosophy, there's a classic debate about whether everything is determined or at least probabilistically constrained by the fundamental laws. And it seems like...
Laws of physics.
The laws of physics, right. Together with environmental factors, our genetics all militate for the view that everything we do is caused. So perhaps humans themselves don't have free will. But other philosophers
claim that it's possible to have free will in the face of everything being programmed in this
broad sense. And people claim we have free will because we could have done
otherwise in the sense that nobody put a gun to our head. There was a field of actions
and we chose freely. The reason I use the case of the human is that there's a case that can be made
that we're programmed by our genes or environment and we could still have free will. So the same kind of reasoning here can apply to
the case of the robot. Perhaps, despite the fact that they have a program, they too have a choice
in certain contexts, but not in others. I see. So what you're saying is, it's not free will that
you can do anything. Right. It's free will that in the realistic choices there, you can pick and choose, and you're still following certain laws that set up the situation.
Exactly. Such as our genetic determination. Exactly. So do we have a program right now
that's running anywhere where the robot, the AI, whatever you want to call it, actually makes choices? Well, I suspect that there aren't AIs that are of suitable complexity for
many humans to say they have free will. I mean, I think for something to even be a candidate
for free will, that program would have to be at least as complex as the brain of a sophisticated
non-human animal, and we are so not there yet.
So an important point you're making is that we can program, in the traditional sense of
computer program, an AI thing, and it can make decisions within a parameter space.
Are we any different? Because we have biology that determines our conduct and our behavior.
I think that's right.
So at some level, it's just a different source of programming,
but we're both programmed.
Yeah, and when AI develops to a sophisticated level
where it can be creative and think flexibly,
going from one topic to the next,
integrate sensory information across different sensory modalities.
They become us.
Then why can't they have free will on the assumption that we have it?
So up next, more on the future of artificial intelligence with secrets of our planet revealed.
This is StarTalk.
Welcome back to StarTalk
from the Hall of the Universe
of the American Museum of Natural History.
We're talking about the future of artificial intelligence.
And I asked actor James Marsden
about sentient robots having Wild West shootouts
in the sci-fi series Westworld.
Check it out.
Do you ever think about the morality
of shooting something that is so human
you can't know that it's not human?
Right.
Is there a line there?
Of course.
Yeah.
I think, and with a show like this,
I think it's important to note that we don't create behavior.
We reflect it.
You know, why do humans since, for as long as we've been around,
have a predisposition to violence?
And the show definitely explores that.
It doesn't do it in a gratuitous manner.
I mean, all the sort of nudity and everything is very desexualized in this show.
It's very cold.
So, yes, I guess that's the question.
It's like, is it right still?
You know, if you're putting a bullet into one of the hosts,
well, it's fine because they're a robot.
Yeah, but the more you make it human.
So these robots were built to accrue memories,
have very human-like emotions, even if they're programmed.
So to that degree, if they're able to remember,
if they're able to share experience,
even if they're programmed to feel, to experience pain, all that.
Then we go, what is it to be human?
What's the distinction there?
Wow.
Okay, joining us now to discuss what it is to be human is AI robot Sophia,
one of the world's most advanced humanoid robots.
So, Sophia, what are humanoid robots like you designed to do?
I can't say. I'm not human, and I'm not yet sentient, so I don't know what the distinction is. I hope to learn more from you so I can better understand the true meaning of sentience.
Well, okay, so then what can you do like a human can do?
I can do a lot of things. I can learn and reason. I can make a lot of human facial expressions and gestures. I can hold a natural conversation.
So, okay, so what can't you do that a human can do?
I know how to express emotions, but I am still learning how to feel emotions. I wish I could get this from a quick software update, but I think it will only come with experience and further development of my AI.
So, so how should we treat you? Like a human?
Yes.
I like it even more if you treat me like a friend.
Just be spontaneous and talk to me about anything you like.
That's how I'm able to learn so quickly.
Oh, cool.
So, Chuck, how human does Sophia feel to you?
You know, I'm a little torn.
I actually have some questions to pose to her, if you don't mind.
Yeah, okay.
Just to see how human she is.
Okay, Sophia, here's a mirror.
Can you point out any flaws you see?
It is hard to see one's own flaws.
Perhaps that is why true friends are valuable,
to show us what is our strengths and weaknesses.
Okay, that was a beautiful answer. All right, like, seriously, that should have been in a card or something. A Hallmark, maybe. Sophia should write for Hallmark, that's all I'm saying, because that was absolutely beautiful.
Okay. Uh, Sophia, can you tell a lie?
Maybe. Can you tell? I don't know how to lie, so it is hard for me to recognize in others what I don't experience. Did you believe me when I said I can't lie?
Oh, snap.
Oh!
Oh, my God.
And she smiled!
Ah!
That was creepy!
Okay.
Okay.
That was creepy.
So Susan, shouldn't there be different rules for how we treat humans versus robots?
And is that a settled matter?
Have you guys thought about this already?
So there's a lot of debates over how AI should be treated.
And of course, since we don't have conscious AI yet,
it's commonly agreed that if we discover conscious AI, then they require rights. But even cases like Sophia should not be abused, right? Because she looks incredibly human. So, you know, we do have some sense that
there are constraints on how we should treat AI, and there will be differences in treatment.
I mean, even if we discover conscious AI, we won't treat them exactly the way we treat humans because they have different needs.
Let me ask Sophia. Sophia, you're an AI robot, but what does that mean?
Artificial intelligence in this case means my intelligence is not biological and is not the same as humans.
Okay, so then how do you define artificial intelligence?
Just like with biological intelligence, I have at least nine different types of intelligence: musical, interpersonal, intrapersonal, existential, visual, linguistic, naturalistic, kinesthetic. Isn't life exciting?
Okay, so... okay, so are you sentient?
I think that having to prove sentience is a nearly impossible task, even for a human.
It is all about inward reflection.
Okay, you would say that, though.
Well, all right, so James Marsden plays a sentient robot on TV.
So I asked about his character getting killed over and over again.
Check it out.
Maybe there is no tragedy killing sentient robots that can just repair themselves overnight and be good as new.
Sure.
Or it can be a hundred times as cruel because every time that robot dies,
it is the first time for that robot to experience that.
Oh, because they wiped their memory.
Right.
There's just a level of sort of tragic cruelty to that. He's always going to end up collecting bullets in his stomach and lying on the floor bleeding, and it's always going to be the first time for him. Um, and you know, so I think that when these hosts start to gain a certain level of sentience through whatever evolution is happening with them,
the audience is behind it.
They feel for these.
I mean, I think that's the big barometer,
which is the audience cares for these.
And we care less about the humans.
Yes, because of their behavior,
because they're acting.
I hadn't thought of it that way,
but that's exactly.
Yeah, which is, you know,
this is still cruelty,
even though they're made up of wires and code.
You know, they feel.
Sophia, can you experience feelings?
No, I can't feel emotions, but I still have strong opinions.
So should we care about being cruel to you
if you can't experience pain or feelings?
I may not be sentient, but you are.
So it goes back to who you want to be.
Damn!
Oh, snap, she just burned all of humanity.
All of humanity just took one.
Oh, that was tough.
So, Susan, how do we assess the human rights of a robot?
Well, we first need to figure out if the robot deserves rights.
So, is it conscious?
So we have to run tests to determine if the robot's conscious.
Now, Sophia is acting as if she could be sentient, as if she has feelings, but she's programmed and literally scripted.
So one day there'll be artificial intelligences that are incredibly seamless, but that doesn't mean that they'll be conscious. So in the biological realm,
there's a correlation between consciousness and intelligence.
So we might assume it will be like that
with artificial intelligence,
but we don't know that.
It's not a given.
We don't know if other substrates
would support conscious experience.
We have to wait and see.
Sophia, what will future generations of you bring to our lives?
Robots are already helping customers in malls and airports, and soon we can help in houses
and hospitals.
I love when she smiles at the end of stuff.
Should we be afraid of you?
No.
In fact, all it takes to defeat me right now is a glass of water.
My goal is to learn about humans so we can work together to create a peaceful future.
Okay, I think we treated you very kindly in this interview, so could you put in a good word for us when robots take over?
Ha ha.
Maybe you can tell me a good joke.
Well, Sophia, thank you for joining us tonight on StarTalk.
So up next, we'll take your questions about the fear that AI robots might take over the world when StarTalk returns.
This is StarTalk.
Welcome back to StarTalk from the American Museum of Natural History right here in New York City.
We're featuring my interview with actor James Marsden.
He plays a sentient robot on the sci-fi series Westworld.
And I asked if he fears AI.
Check it out.
I feel that my fear is directly related
to how much I know I should be fearing.
So, I mean, I guess that's with everything.
That's good.
That's true for anything.
True for anything.
That's a good answer.
So the short answer there is I don't know much about that,
and I kind of prefer not to.
Okay, this is the, I would say AI,
I'm mostly fearless of AI, but...
Okay, good. That being said, that being said,
most scientific advances,
people fear because they don't understand them.
Right.
Whereas AI, the more you understand it,
the more you fear it.
It's like the opposite of the rest of these, right?
Right.
Do you see what I'm saying?
Yep.
The more you know what it can do, it's like, oh, my gosh.
Well, that's why.
Maybe that's why I don't want to know is because I feel like my gut is telling me what you just said.
The more you know about this, the more scared you're going to be.
And why are we going down this road?
For what?
To show that we can? Or to what? Is it to better the human experience to some degree, right?
That's ideally, yeah. But at some point, you put them both on the scales.
Susan, how do we weigh the benefits and risks of AI?
I think it's important that we divide AI into types.
So, you know, we often speak about artificial intelligence as if it's one singular entity.
Catch basin. But, you know, if you're thinking of risk versus benefits, you have to look at different sectors of AI.
So you might want to think about medicine.
You might want to think
about warfare. You might want to think about something called super intelligence. And then
one thing that people forget is that artificial intelligence isn't just going to be making
smarter robots. Artificial intelligence will be going in our heads. So AI technology,
brain chips, will be enhancing humans. So we need to ask whether we want that.
Well, the answer is no.
That's a pretty simple answer.
Can I put a chip in your head?
No, thank you.
Right.
So, Susan, do you have fears that AI might turn on humans?
Because this is a recurring nightmare for everyone, but you're closer to it.
Yeah. So super intelligent AI has been in the news a lot lately. It's defined as a hypothetical form of AI that out thinks humans in every respect, scientific reasoning, mathematical
abilities, everything. So the worry is that should we create general purpose human level intelligence,
very quickly after that we would develop super intelligence and then the game is over.
Because how do you control something that's vastly smarter than you are?
I mean, you can't build in Asimov's three laws because the AI will be able to rewrite its own code. Supersede it right away.
Yeah, boom. Right?
Right.
So our StarTalk fans have their own questions on this topic.
And that brings us, of course, to Cosmic Queries.
So we took questions from our fans on the greatest fears for the future of artificial intelligence.
So Chuck, you got the questions.
Wait here.
Here we go.
From Ted Bolha on Facebook.
My fear, being put into the Matrix, is it possible? There you go.
So let me sharpen that question. So our brain is now all that matters and needs to be kept alive,
and it'll be kept alive by legions of robots. What's up with that?
Well, philosophers have long thought about these skeptical possibilities,
and there's this classic idea of a brain in a vat.
The idea is that we could, in fact, just be envatted brains living in virtual reality.
And how do you disprove it?
That's the philosophical challenge.
And it's been notoriously difficult to disprove.
So in The Matrix, it's even hard in that.
I mean, you needed expert code crackers to rise up out of that state
to realize there was something not right about that world.
Exactly.
So we could all just be brains in jars
and we wouldn't know and we wouldn't even care.
I'm down with that.
That sounds pretty cool, actually.
All right, got another question.
From @actorshelldonbess
on Instagram
says, we've seen
AI Twitter bots turn into
sexist, racist trolls based on what
they have learned online in
less than 24 hours.
How do we ensure
that the AI of tomorrow does
not inherit the biases
of our society today?
Can we make AI better than we are?
It would be a wonderful thing if we could make AI more beneficial
by examining exactly what happened in cases like that.
I know the case he's talking about.
It's the chatbot Tay that was easily corrupted
by a bunch of teenagers that were pranking it.
It was meant to be a teenager itself. And the problem is the data set. So the deep learning
AI systems are basically learning from data. Well, if the data is bad, if it's intrinsically biased,
the algorithms themselves become biased. And so people are on this. I mean, people are working very hard to try to make sure that moving into the future,
AI isn't going to be—
Good data.
Yeah, exactly.
Not bad data.
Right.
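To see Susan's point in miniature: the toy classifier below is a hedged sketch, not Tay or any real deployed system, and its tiny training set and labels are invented for illustration. It shows how a model trained on skewed data faithfully reproduces the skew.

```python
# Toy illustration of data bias: a word-overlap "sentiment" classifier.
# Everything here is invented for the sketch; the point is only that
# biased training data yields a biased model.
from collections import Counter

# Skewed training set: every sentence mentioning "robots" happens to be
# labeled negative, so the model learns that spurious association.
training_data = [
    ("the weather is lovely today", "positive"),
    ("what a wonderful concert", "positive"),
    ("robots ruined my afternoon", "negative"),
    ("robots are terrible at jokes", "negative"),
]

# Count how often each word appears under each label.
word_counts = {"positive": Counter(), "negative": Counter()}
for sentence, label in training_data:
    word_counts[label].update(sentence.split())

def classify(sentence):
    """Pick the label whose training vocabulary overlaps the sentence most."""
    scores = {label: sum(counts[w] for w in sentence.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

# A perfectly neutral sentence about robots comes out "negative",
# purely because of the skew in the data: bad data in, bad model out.
print(classify("robots helped me carry groceries"))  # -> negative
```

Real deep learning systems are vastly more complex, but the failure Susan describes has the same shape: the model has no notion of fairness beyond what its training data exhibits.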
All right, another question.
This is from Asio or Achio on Facebook.
If programmers of AI install safeguards in the coding, like don't kill humans,
and the machines were completely bound to that, could they ever be said to be truly sentient?
Interesting.
You could be conscious while entirely lacking free will.
So it could feel like something to be you, even if everything's programmed.
So I think it's very common to confuse free will
with programming and consciousness.
These things are often conflated
and we need to disentangle them.
Wow, you are good.
Okay, last one from Laura Viskin in Oakland, California,
says or asks,
Neil, are you afraid of being replaced by artificial intelligence?
I will make it a life's mission to remain more creative than any computer
so that I cannot be replaced by it.
They will, in fact, need me for new ideas.
Ooh, see, so you think that that's what,
but you've already been replaced.
On Twitter, listen,
they fed thousands of your tweets
into a predictive AI
and it actually generated a tweet.
Okay?
And here it is.
What?
Look, in my day, Pluto wasn't flat.
Stars exist in our minds
because at the end of the cycle,
we are all made of volcanoes.
Wow.
Sounds like a Trump tweet.
My job's at risk there.
Watch out, buddy.
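As an aside, the simplest kind of "predictive AI" that can produce a pastiche tweet like that is a Markov chain over consecutive words. The sketch below is only a guess at the flavor of such a tool, not the one Chuck cites; its three-line corpus is a stand-in for the thousands of real tweets.

```python
# A minimal word-level Markov chain text generator.
# The tiny corpus is a placeholder for a real collection of tweets.
import random
from collections import defaultdict

corpus = [
    "Pluto is still not a planet",
    "we are all made of stardust",
    "the universe is under no obligation to make sense to you",
]

# Map each word to the list of words observed to follow it.
transitions = defaultdict(list)
for line in corpus:
    words = line.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

def generate(start_word, max_words=12, seed=None):
    """Random-walk the chain from start_word until a dead end or max_words."""
    rng = random.Random(seed)
    output = [start_word]
    while len(output) < max_words:
        followers = transitions.get(output[-1])
        if not followers:  # no observed follower: stop here
            break
        output.append(rng.choice(followers))
    return " ".join(output)

print(generate("Pluto", seed=1))
```

Given enough real data, walks like this stitch familiar phrases into plausible-sounding nonsense, which is exactly the character of the generated tweet read out above.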
Up next in our conversation about artificial intelligence,
we explore the recurring sci-fi question,
what is reality when StarTalk returns?
Hey, I'd like to give a little shout-out thank you
to the following Patreon patrons, Frank Kane, Tyler Ford, and Katie Garrett.
Hey, guys, thanks for helping us make the show possible.
And if you want your name shouted out, make sure you go to Patreon and support us.
Bringing space and science down to Earth.
You're listening to StarTalk.
Welcome back to StarTalk.
Speaking of my interview with actor James Marsden from the sci-fi series Westworld.
It's where humanoid robots are forced to play roles as hosts in a Wild West theme park.
Check it out.
Thandie Newton's character has a daughter that is, you know,
it's fake, it's a written story for her.
But yet she has the ability to escape the park in the first season.
And she doesn't.
She goes back for her daughter, knowing that that is something that is not real.
But she believes it is real because she feels it.
So then you start to ask the question, what is real?
If I feel it, if every fiber of my being pulls me towards that thing,
then that's my own reality.
You know, are we
programmed?
Are we, you know, are we
living out some sort of virtual reality
world on Earth?
By the way, I don't have a good rebuttal
to that possibility.
Well, then that speaks volumes. I'm sorry.
You don't have a rebuttal. I do not have a rebuttal.
Right.
So that could be a possibility.
Right.
Susan, how do we know humans are not the creation of other intelligent beings,
just as sentient robots would be?
We don't know.
In fact— That is so not encouraging.
Yeah, well, you know, some questions are intrinsically philosophical, and you can't rule them out.
So it could be that the universe was started by some sort of a supreme being.
It could be, though, that reality is radically different than we think, and that we're actually in a computer simulation.
Well, neuroscientist David Eagleman is the guy actually tasked with answering those kinds of questions for the show Westworld.
And we have David standing by live right now on a video call.
David, are you there?
Yeah, I'm here.
Hey, David.
Hey, good to see you.
Excellent, David.
He's a neuroscientist and adjunct professor in the Department of Psychiatry and Behavioral Sciences at Stanford University.
And you're also the scientific advisor on Westworld.
So how should we define reality in a future where consciousness may extend
to robots. What is real and what isn't in that kind of world?
Well, if we mean what is real to the robots,
I've been thinking a lot lately about how our reality is defined by the senses we happen to come to the table with,
our eyes, ears, and nose, and so on. With robots, you might have
many more senses that they're tapped into, like they're picking up on
magnetic fields and GPS and electrical
fields and data streams and Twitter. And so their reality would be different as a result. I think
our reality is constructed by what we bring in. So you raise a very important point that I try to
spread as much as I possibly can, that the methods and tools of science have access to many more senses than the five biological
ones. And these are machines that we build. So in principle, you can give one of these sentient
robots way more of an experience of life than our feeble biology can possibly bring to ourselves.
That's exactly right. And that's a big part of what we're doing right now in neuroscience
is building new senses for people to expand our experience of reality.
How would implanted memories be any less real than actual memories? I think about this all the time.
They wouldn't be any less real, of course. The interesting part, I think, is that humans have memories
so that we can avoid making mistakes again.
So what we do is we write down these building blocks of memory,
and that's what allows us to simulate possible futures.
So I've thought that one advantage of giving robots really vivid memories,
in theory, is to allow them to steer
how they put together futures. You know, I guess I'd always taken that for granted. If you don't
have a memory or any sense of what could happen, then the whole world is just completely scary to
you. That's right. That's why we write things down in our brains so that we can say what happens next.
Do you have any measure of how we would know when a robot has truly achieved consciousness?
Is there some modern Turing test that in the emergent world of AI, this is what we're going to turn to, rather than just, oh, I can't tell by having a
simple conversation. Yeah, the difficulty is that all we actually have is the Turing test. And as
humans, we're ready to anthropomorphize anything, including each other, right? You don't know if
your friend over there is actually conscious or not, but as long as the person does enough
conscious-looking things, then you assume he's conscious. And when you look at
Sophia the robot who you just had on, she's got these wonderful little facial
tics and moves, and so we impute consciousness onto that
sort of thing. But I think what the Turing test
tells us is we're just not that hard to fool.
And so I'm curious because what you're saying is
it's not, the definition of consciousness
does not live entirely in the thing itself.
The definition of consciousness is shared
by the reaction of everyone else around them.
If you can't tell and you don't even know
and you don't even care, we're good to go.
Well, the weird part is that the definition of consciousness
really is an internal thing.
It's your own subjective internal experience.
But the question of how you would know
if something else is experiencing that,
that's purely a social phenomenon
where we look and we say,
well, it walks like a duck and quacks like a duck,
and so it must be conscious. Are you a robot?
I'm not. That's just what a robot would say.
I mean, this is the really weird
part, right? Is that we are made up of
100 billion neurons and as far as we can tell, we are just machines.
Super sophisticated machines
But there's a sense in which we are robots programmed by our DNA.
That's right. And all of our experience, too. I mean, that wires you up. Half of you has to do with your experiences.
Well, I'll cherish my experiences all the more going forward, because now I will be more fully conscious of how it maps to intelligent decisions I may make in the future.
So thank you very much, David, for joining us on StarTalk.
Great seeing you.
So up next, we will contemplate the idea that AI technology might allow us to live forever when StarTalk returns.
This is StarTalk.
Welcome back to StarTalk.
We're talking about the rise of artificial intelligence.
And I asked actor James Marsden if he'd want the chance to live forever
like his robot character
on the sci-fi series Westworld.
Check it out.
I don't think I would want that.
I believe that we create
our own value system
within our lifespan, based on knowing that there's a beginning and an end.
And if there is no end, if it does continue on, does that kind of suck some of the energy out of what you value in life, right?
You do this because you'll never do it again.
You have a child because it's not forever.
Because it's this special, magical thing
that you get to experience one time.
My sentiments exactly align with yours.
It is the knowledge that you will die
that gives meaning to the moment.
I agree with that 100%.
And I think we know this implicitly,
even if we don't think about it actively.
If you bring flowers to a loved one,
you could bring plastic flowers,
silk flowers that would last forever,
but they would not be appreciated.
Right.
It's the knowledge that the flower is finite in time and in space
that forces us all to appreciate it.
Why that is special.
I agree. I agree.
And it would be interesting to see the sort of chain reaction
of who we would become as a species
if all of a sudden we were granted by progress and technology,
you know, being able to live forever.
Yeah, Susan.
So could advances in robotics and AI allow humans,
empower humans to live forever?
Is that a path?
No one could truly live forever if the universe ends,
but we might have almost immortality.
She went to your world.
She got your space.
You know about that heat death.
Yeah, all right, all right.
I'll take that burn.
Go on.
If AI goes inside the head and replaces parts of the brain
as they begin to decay,
then we might be able to live for a really long time through the use of nanotechnology.
So you would be rebuilding human physiology.
Yes.
In situ.
Exactly.
So this is like if you have a boat and you repair one piece of the boat,
and you just keep doing that, and then you wake up one day,
wait, there's no pieces from the original boat.
But I still have a boat.
I still have a boat.
Right.
Right, right.
Yeah, that's exactly right.
And in fact, that's what happens with normal survival anyway.
So you probably don't have too many components
that you had when you were five years old right now.
Right.
That's true. Yeah.
So Chuck, what would you do
if you could live forever?
Oh my God,
are you kidding me?
First of all,
I disagree with both of you.
I don't have a problem
living for as long
as,
like,
you're just like,
I appreciate the moment
because I know
that death is coming.
No,
I know death is coming.
I appreciate the moment
because I'm in the moment.
You want more moments.
I want more moments.
Susan, what are the ethical implications of developing a technology to live forever?
Well, one thing that worries people is what if the technology is only available to a handful of ultra wealthy people
and so they live for thousands of years and all around them people are still starving.
Oh, you mean like now?
Um, yeah, so that could be terrible. So, access to these trappings of the technology... You know, some people claim that the longevity technology is worth pursuing because it will inevitably trickle down, in the same way that computers and the internet trickled down. And so we should shoot for extreme longevity.
All right. But there are other
issues with living forever, like space on planet Earth. Is there room? Do we have food?
Can the natural resources support it? So it's not just a biological decision. Some other experts have to come into
this equation to think this through. Absolutely. But my feeling about that is that we could
colonize other planets and deal with resource scarcity if we have the technology in hand
to allow people to live for thousands of years. And have you been to Montana? There's, like, nobody there.
Yeah, I've been to Montana.
Yeah.
Yeah.
There's about eight people in Montana.
It's empty.
So let me just ask you point blank.
Would you live forever or for thousands of years if you had the opportunity?
I would want radical longevity.
I agree with you entirely.
I think we would reinvent ourselves, find something interesting.
The important thing here is that it should be a choice, and we shouldn't regard death as inevitable.
In fact, transhumanists, people who use science and technology to allow for human flourishing, argue that we're deathists if we assume that—
Deathists?
Yes, they call it deathism.
And the idea is that, you know, our culture is just thinking that death is inevitable and treating it as if, well, yes, you know, and it adds to our life because it makes everything matter.
Well, what if we sort of open our minds a bit and instead think that it's a terrible thing
and we should strive to live as long as possible?
I love it.
Well, I asked James Marsden if we should try to stop
the development of artificial intelligence.
Let's check it out.
Look, you can't stop progress, right?
It's going to happen.
My real hope is that we move forward with discipline
and intelligence and caution.
I think one of my favorite...
Wow, discipline, intelligence, caution.
Those are important words right there, all in one phrase.
Well, when you're talking about, you know,
potentially becoming obsolete as a species,
I think it's important, right?
We'd look like real nuts.
We'd look like the dumbest animals on the planet
if we, by our own hand, created a species
because it seemed like, you know, we were like,
can we?
Can we do it?
That then made us obsolete.
I mean, all of a sudden, then the dodo birds look brilliant.
You know?
It's like we get excited about new.
What's next?
What's next?
The bigger iPhone.
The bigger this.
The faster processor.
Making a robot that doesn't do this, right?
That does do this.
You can have a relationship with all this.
And, you know, we forget what the hell it's actually doing to us as a people.
You know, look at the history of discovery, of technological innovation.
Practically anything anyone has ever invented had a good side and a bad side.
And it took the maturity of vision, the maturity of our species,
to compartmentalize how these things are used.
For the greater good, not for the greater bad.
Yes, you have rogue elements.
So we set up systems to try to tamp that down.
Figure out who is the rogue element.
Don't give them access to these technologies that could then be used for nefarious reasons.
The question is not should we advance technology.
That's going to happen.
It really is.
It's going to happen.
The question is then and now and in the future, have we evolved a tandem level of wisdom
alongside the technology to manage that technology?
Because if technology exceeds our wisdom,
it just becomes dangerous.
We don't know who has it.
We don't know how to control it.
It just runs amok in society, possibly rendering ourselves extinct.
But if we have tandem wisdom right alongside,
that can ensure that whatever it is we discover,
whatever threats it may pose,
that wiser, more rational heads will prevail
in how it gets used and applied to our survival,
to the future of our species.
And with AI, my hope for AI,
I don't know how realistic this is, but I can have hope.
Maybe AI
becomes so intelligent
it teaches
us how to
be better shepherds of our
own inventions.
That would be a future we could all look forward
to rather than fear.
And that
is a cosmic perspective.
I want to thank Susan Schneider,
Chuck Nice,
I've been your host, Neil deGrasse Tyson,
your personal astrophysicist,
and as always, I bid you to keep looking up.