StarTalk Radio - C-3PO and the Rise of Robots, with Anthony Daniels
Episode Date: March 23, 2020
What's the difference between a robot and an android? Should laws protect robots? Neil deGrasse Tyson explores the rise of robots with "I Am C-3PO" author and Star Wars actor Anthony Daniels, comic co-host Chuck Nice, and robot ethicist Kate Darling, PhD. NOTE: StarTalk+ Patrons and All-Access subscribers can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/show/c-3po-and-the-rise-of-robots-with-anthony-daniels/ Thanks to our Patrons Leon Galante, Tyler Miller, Chadd Brown, Oliver Gigacz, and Mike Schallmo for supporting us this week. Photo Credit: StarTalk. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Transcript
Welcome to StarTalk, your place in the universe where science and pop culture collide.
StarTalk begins right now.
This is StarTalk. I'm Neil deGrasse Tyson, your personal astrophysicist.
And today I got with me Chuck Nice.
Hey, Neil.
Just bumped that.
Boom, what's up?
And there's like someone between us here.
I know, I know.
We're always fist bumping in somebody's face.
In somebody's face.
Today, we're talking about robots.
In fact, that's not only what we're going to do,
that's the title of the show.
Talking about robots.
Talking about robots.
Talking about robots.
A little on the nose.
And we have as our studio guest, Kate Darling.
Kate, welcome.
I don't get a fist bump?
Double fist bump.
Give it a...
Bam!
Oh, look at that.
There you go.
There you go.
And while you're here with us, you are a robot ethicist.
Didn't even know that was a thing.
We'll get into that in a minute.
From the MIT Media Lab, of course, in Cambridge, Massachusetts.
And what we're featuring today is my interview with Anthony Daniels.
Anthony Daniels.
Yeah.
Oh, she's getting all nerdy, nerding out on that one.
Anthony Daniels, the actor who portrayed C-3PO.
Oh, my God.
Oh, dear.
Oh, my God.
Oh, oh, oh, oh.
Oh, you want the gig.
How lovely.
Oh, no, I love it. R2, R2.
You want the gig.
In particular, we're not just talking about robots.
We're talking about relationships with robots.
Okay.
Between humans and robots.
And we don't even know what that means entirely.
Not at the moment.
The movie AI kind of covered it.
Yeah, okay.
All right, all right.
But, of course, C-3PO is from the Star Wars franchise.
One of the most successful movie franchises there ever was.
And just let me get a little bit of background on you, Kate.
So, did you come to this from robotics?
No.
No.
Well, I've always loved robots, but I'm a social scientist.
Nice.
I have a legal background.
I did social sciences, and now I study human-robot interaction from a social, legal, and ethical perspective.
Wow.
So it's good to learn that someone such as you exists in that world.
Yes.
We should have somebody like you in all of the potentially troubled places where technology is going.
You mean everywhere?
Yeah, like everywhere.
So you have a book that may be coming out in 2021.
Yeah.
I have a title here. Is this the right one?
The New Breed.
What Our History with Animals Reveals About Our Future with Machines.
That is an awesome title. That really is.
So let me go get to my first clip with Anthony Daniels. He's the only actor that was in all nine Star Wars movies.
Wow.
All nine official. Not the fan off-ramps.
Right, exactly.
And he's also the author of I Am C-3PO: The Inside Story.
Cool.
You see what he did there?
Yes.
Very clever.
You see what he did there.
So what is it about C-3PO?
Is it his performance, the way he speaks, that people could relate to him so deeply?
Oh, gosh.
C-3PO is amazing.
I think what it is, actually, is that C-3PO looks like a robot but acts kind of like a human.
Like, he's very flawed and has all these human emotions.
And I think people just relate to him, ironically, because he's so human-like.
Oh, so it's the opposite.
It's not like he's the perfect robot and we're finding a way to relate to that.
Right.
It's that he has enough human in him
so that he's an imperfect robot.
Right, that's what we're relating to. Is that what you just told me?
Yeah.
Yeah, well, you know, that makes sense, because that is what makes us human, like, you know, the fact that we are flawed and imperfect and kind of annoying.
Well, definitely. Not kind of. Definitely annoying.
You know?
So I kind of liked it when he got a little excited.
Oh, oh, oh.
What shall we do?
That was just kind of fun.
It was like, hey, that's kind of cool.
Yeah, exactly.
So it's the human side of the robot that we're relating to.
Sometimes, yeah.
In that case?
In that case, for sure.
Okay. Okay.
And so what is it about him, other than his costume,
that told you he's a robot in how he's interacting with people?
What is the evidence that he's a robot,
other than he's a shiny metal thing?
Right.
I mean, a lot of it is the design.
I'm trying to remember anything, like, specifically robotic that he said.
See, I don't think so.
Other than I know 80 trillion languages.
Right, exactly.
Yeah, something like that maybe.
No known human could, so that's a robot talent.
Right.
But I think a lot of the visual and the way he moves.
Yes, and he does.
It looks like he's actually doing the dance, the robot.
You know what you want to do if you want to be a contemporary robot, though,
is fall over a lot.
Oh, this will be YouTube videos.
Yeah.
Well, let's go to a clip.
So I sat down with Anthony Daniels.
And did you know he wasn't always a fan of sci-fi?
Really?
Well, remember, he's an actor.
Until he got that first check.
Love me.
Oh, dear.
I do believe I love me some sci-fi.
Oh, R2, R2, where's the nearest bank?
Let's find out.
What was he up to before he landed where he did?
Check it out.
Maybe I'd been traumatized.
You know, I never thought of this.
I had been bashed around the head by 2001: A Space Odyssey
to the point where I never wanted to see another spaceship.
Because that was a long movie with very little dialogue.
There's no character development except for HAL, the computer.
Well, you're right there.
So it's a very different genre.
It's not even science fiction.
It's a science portrait, in a way.
It was almost a philosophical treatise on man
and man versus space.
Man, machine, and space.
And so there we are.
So I didn't want to go because back
then the only robot that
I remembered really were the Daleks
on television. Oh, the Daleks.
Yeah, so of Doctor Who.
Yeah, Doctor Who.
Doctor Who with sink plungers on their faces.
I mean, cute, and as a kid I adored them,
but as an acting role, not so much.
Then there was Robbie, before them,
Robbie the robot in, what's it called?
The Forbidden Planet.
The Forbidden Planet.
And he was this kind of lumbering thing
made of Michelin tires, really, it seemed to me.
Yeah, because he had these horizontal segments.
That's right, and he lumbered
in a kind of
unprepossessing way,
I felt.
But anyway...
Are you judging
the acting talents
of a robot?
No, but it was...
You just gave a critique.
No, no, no.
He lumbered.
He didn't pull off
that movement convincingly.
He lumbered,
and when you read
on page 95
that I met Mr. Kinoshita,
who designed him,
and I said joyfully,
oh, what was it like to see
your design come off the paper? And he went
not so good.
I didn't mean him to kind of
lumber. And I said, oh so you get
why 3PO kind of teeters around
because it's more characterful, more forgiving
more
human in a way.
Yeah, because
Robbie the robot was,
I'm going to say robotic.
No, I don't know.
He just didn't, there was no,
he had the arms, that was it.
Whereas C-3PO, there were sort of body gestures
that could help communicate a mood.
And it's all I had, really.
Right, because there's no,
this is not a moving mouth here.
And you'd be surprised how many people think
it's makeup that I'm wearing.
No, it's a solid... We all saw
Goldfinger, so just a couple years ago.
Ah, yeah. She was prettier than me.
So he was referencing his book, I Am
C-3PO, and he said, on page 95.
Right. So Kate, how has,
we talked about a few generations of robots
there, how has our concept of robot
evolved from the beginning until now?
It used to be that anything that was even remotely automata
and could move on its own was viewed as a robot,
and now we have a little bit more that we expect a robot to be able to do in behavior.
Right, because in fact now, so when does it become an android versus a robot?
An android is a robot that looks deceptively human-like.
Lieutenant Commander Data.
Yes, yes, Data, my favorite android.
And then there's C-3PO, which is more of a humanoid robot.
So like head, torso.
I forgot the word humanoid goes in there.
Yeah, so androids look realistic.
Humanoids just have a kind of human-shaped torso, head, arms, legs.
So not R2-D2?
Not R2-D2.
So what's R2-D2?
He's just a robot.
Just a robot.
Damn, with no oid on it.
Right.
Poor R2.
He's just a robot.
Okay, so, but presumably all three kinds of robots are still legit in storytelling today.
Oh, yeah, for sure.
Okay.
But we don't have the big
Robbie the Robot kinds anymore.
Those, the lumbering.
Right.
With the circuits turning in his...
Warning, Will Robinson.
Warning.
Right, that was a Robbie Robot style.
Exactly.
In Lost in Space.
Lost in Space.
Warning, Will Robinson.
Danger.
Not warning.
Danger, Will Robinson.
And with the arms would be flailing.
Right, right.
And then Dr. Smith
would be like,
oh, dear.
Oh, dear.
You know.
At what point
do you inform
a person who might be
trying to design a robot
in terms of
its personality
or its character
or how they would
best be an actor
doing so?
I mean, I do, I work with roboticists a lot in social robotics.
And, you know, we have more and more robots coming into shared spaces and they have to,
you know, interact with people.
And not all roboticists...
What's a shared space?
Oh, you know, a workplace, household, public areas.
Oh, okay, okay.
Like Stop and Shop has robots roaming the aisles now.
Do they?
Yeah.
You could call, they are indeed a robot, but it's more like an obelisk on wheels with googly
eyes attached.
It looks like a penis.
Wow.
What?
All right.
Good.
It does, though.
I mean.
I'm just saying maybe to you.
Okay.
So it goes up and down the aisles?
Yeah.
And there's no one controlling it?
No one's controlling it.
There's no joystick?
It actually moves about like a Roomba.
Like, you know, except it doesn't have to touch things.
And I assume it doesn't bump into the orange aisle.
It doesn't bump into the orange stack, right.
But I think, I've never engaged with them personally,
but I think you can ask them questions
and they will direct you to places in the store.
All right, I got it.
The next time I see one,
I'm doing all kinds of experiments on it.
Oh, my God.
I was going to say.
I might have gone
to Stop and Shop
last weekend
with Daniela,
who's one of the students
in the personal robotics lab
who is obsessed
with this robot.
And we might have, like,
put stuff in front of the robot
to see what it did.
You might.
Just might have.
And what happened?
Yeah.
Well, they might have done it.
They didn't.
Oh, you might have.
If you had.
We might have.
What do you think
would have happened
if you had actually done it?
If you had.
If you had done it.
If we had.
What do you think might have happened?
What were you testing it for?
We didn't get kicked out yet.
We're going to go back.
Well, we just wanted to see what it would do because the purpose of the robot is to find hazards on the floor and alert someone to come pick them up.
And so we wanted to know what's the hazard.
Clean up, aisle four.
Clean up, aisle four.
All right.
Yep.
Now, I wonder what it would do
if you just laid down
in the floor,
like on the floor
in front of it.
Like, what it would do
if it's actually...
It'll go around you.
See, that's a really
bad robot.
No, why?
What are you talking about?
So you'll recognize a spill,
but I just had
a damn heart attack.
And you'll just go around me.
And you'll go around me?
Really?
So somebody drops a jar of pickles
and it's a huge monumental problem.
We need, you know,
somebody to get here right away.
But you fall and you can't get up
and the robot,
you're just in the way.
Exactly.
Instead of hitting my
medic alert bracelet,
you're just like,
okay, excuse me.
Like, really?
So that's the thing though.
Like when designing robots,
you have to think about
like what's going to be frustrating to people
when they're interacting with it.
And they're going to be like,
why isn't it helping me do X
when they don't understand that building a robot
is really, really hard.
And they only have very limited capabilities.
And so roboticists really need to think
not just about how they're working,
but how people are going to perceive them.
They only have limited capabilities now.
Right.
Yeah.
Stop covering for them.
So Anthony Daniels, as an actor,
he's best known for C-3PO,
but he almost didn't take the gig.
Oh.
Yeah.
Let's find out why.
Okay.
Check it out.
So there we were thinking about playing a robot.
And the thing that really changed my mind was reading the words that I had not written.
George and his team had written them, pretty much George.
And clearly he had invented a machine with more human characteristics
than he could apply to a human being.
You couldn't get away with Han Solo being
the character of Threepio, if you see what I mean. So Threepio is allowed to have intense humanity
because he isn't a human. He isn't human. That's deep. Not really. Yes, it is. Because what you're
saying is, with a machine who is sort of human, but it's still a machine,
you can take it to human places that would be unconvincing
if written for a human character.
And slightly uncomfortable.
I never thought about that.
You will now.
Thank you.
In your next lecture, you can talk this out.
There is a film called Bicentennial Man.
I never got to see that.
It's interesting.
Robin Williams.
Beautiful guy.
I had luck to meet him a couple of times.
We didn't talk about it,
but that was a slightly uncomfortable film
because the storyteller was...
He was transitioning from a robot
that arrived in a packing case, you know,
from Amazon or somewhere.
And then...
Pack that.
And then pack that thing.
Whatever the Amazon equivalent was
back when that movie was made.
And then gradually he metamorphoses into a human,
and it's slightly uncomfortable
because it veers towards pushing our humanity buttons.
Like, what does it take to be human?
And why are we slightly uncomfortable
through the uncanny valley and beyond?
So tell us about the uncanny valley, because it's a great name,
but it still has to be defined for people to know what it is.
It is often used in games or in visuals or in film or in computer terms.
The Turing test almost gets there, but it's when something is almost real,
looks great, and it speaks nicely
and has great skin, for instance, in a real world.
But there's something that's not quite right.
There's something that we sniff as a human being
that's not quite there.
So it's even unconscious within us, perhaps.
It's innate within us.
Innate, that's a better word, right.
You don't even know how to verbalize it.
Verbalize it, yeah.
And so people have coined this phrase, the uncanny valley,
because you know there's something not quite right.
Kate, do all humans respond to the uncanny valley the same way?
So people have tested this theory empirically with very mixed results,
but most people who work in robotics
seem to think that there's something there.
And they're wrong.
They're wrong?
Yeah, they are.
Let me save them a lot of money.
You're wrong.
Save all the academics who have researched this.
Save all the academics who are researching this forever.
You're wrong.
What you're talking about
is the perception of normal humanity.
That's why you can't put your finger on it
because it doesn't exist. We feel
the same way about human beings that may have some type of brain disorder. And we talk to them
and we go, oh, something not quite right here. But you don't say they're not a human being,
but that's really what your perceptions are telling you. So what you're talking about is the normal perception of humanity as opposed to what makes someone human.
And they're two different things.
I think you're right.
I think I personally think it's about.
I know I'm right, Kate.
Oh, oh.
Let me tell you something.
No, I'm joking.
I'm joking.
I'm joking.
So that comedy thing doesn't work out.
I'll hire you in the lab.
All right.
But go ahead.
Well, for me, Uncanny Valley has always been about expectation management because you're expecting something to behave a certain way.
If it looks human, you're expecting it to blink like a human
and not twitch its face.
And if it doesn't, that kind of unsettles you.
If it does something that you're not expecting.
So what do programmers, yourself, what do you all do in the media lab to either exploit the uncanny valley or to dodge it?
I don't think anyone wants to exploit it.
But I also don't understand why we would try to create something that looks like a human or talks like a human because we can create anything we want.
Why create, like, why try to, like, risk this uncanny valley creepiness factor
when we can create
an R2D2
that communicates
in beeps and boops?
Do you tell this
to your peeps
back at MIT?
Oh, yeah.
Like,
everyone,
I think,
in the social robotics field
agrees that making,
you know,
human-like robots
is not as interesting
as making something
that has
expression.
Something better?
Yeah,
you can make something better.
Something better than humans.
Animators have honed this technique
for hundreds of years,
how you can make something like Bambi
that looks like a deer,
but actually looks better than a deer to us.
Okay, all right.
So I agree with you,
but from a different direction.
So I think the future of AI and robots
is not to try to mimic a person.
Okay.
A person is not even
an ideal form.
No.
Right, right.
For tasks that you want to conduct,
the human body is like,
why would you design that?
Right.
That's not the case.
Even with the people
who don't have legs
but they run track.
On the blades.
On blades.
Yeah.
We're not trying to duplicate the bones of a foot and then put flesh on it and say, now you're...
No.
It's like, we got something better.
Something better.
Something better.
Here's something that'll spring and propel you forward faster.
So in your lab, are people thinking of the task they need, not trying to duplicate a human?
Because we can just...
People make babies all the time.
Why do you need to make a robot human?
We have real humans.
Well, I think people have like this fascination with recreating ourselves, but like, I really
don't see the point.
I think we're all in agreement here.
Wow.
Yeah.
I mean, I've never thought of it that way, but you're right.
I think, you know, when you look at sci-fi movies, like Alien comes to mind and the so-called android robot is so human that it's indistinguishable.
But the problem is it doesn't have a soul.
So it can't make any more.
It's a sociopath.
Let's get to that next.
Okay.
We got to take a break.
When StarTalk returns, more about the evolving relationship between robots and humans.
We're back with StarTalk.
Neil deGrasse Tyson, your host.
Chuck Nice.
That's right.
Kate Darling.
Kate Darling.
Kate Darling.
Kate Darling.
In from Boston.
Thanks for coming.
From Cambridge, specifically.
The MIT Media Lab.
Good stuff happens.
Every time something amazing is happening, it's traceable back to the MIT Lab.
It's funny how that works.
Just not only like art science and robotic science and computing and culture.
Yeah.
So congratulations to all y'all.
Oh, yeah.
It's just me.
I mean.
You the one.
All right.
No credit.
Excellent.
I just want to pick up on where we left off.
This idea that you're talking about in the Alien series, there was a human who was not human.
Right.
So not even humanoid, android.
Android.
Android.
Yeah.
And you're cool until you realize they would make a different ethical choice than you would.
Yeah.
Yeah.
So do you have to program this in?
Is this something they can learn?
There's a whole field called machine ethics that looks at can you program ethics into machines?
And it turns out that's really, really hard because we don't even fully understand or agree on humans.
We haven't programmed ethics into us.
Yes.
So... You can't program something that you ain't got yourself.
Yeah.
So I would prefer maybe not to create robots
that have to make those kinds of ethical decisions.
But there are people
who are trying to solve that problem.
Okay.
And so, but it would also be a way if some,
so let's get back to the concept of soul.
The religious person would say the soul gives you a sense of right and wrong and purpose and these sorts of things.
And that was the idea with Bishop.
Bishop didn't have a soul, so if it meant that bringing back this life form to Earth that could potentially wipe out all humans, it doesn't make a difference, because it's in the interest...
It's an interesting experiment.
Right. It's in the interest of experimentation
and exploration.
So who cares?
I think this is people's greatest fear
about scientists gone astray.
Yes.
Yeah.
Without a doubt.
So will that be the hardest thing
to program into robots?
A soul?
That's three lines of code.
I think that's all.
Well, Japanese actually believe
that certain things have souls.
Tell me about the Japanese.
Yeah.
So, like, there's this Japanese roboticist who creates these very, very lifelike androids.
Like, he's made one of himself.
Hiroshi Ishiguro is his name.
Ishiguro.
Ishiguro.
And, like, it seems that in, you know, eastern cultures that have a history of Shintoism
and believe that,
you know,
even objects can have a soul,
like they have funerals
for sewing needles,
for example.
It seems that they're more...
Yeah.
Yeah.
That must have been
a badass sewing needle.
If you were to give it a funeral,
that must have sewn
some good stuff.
See?
With a darn,
a lot of socks.
Poor Needy,
we knew him well.
Needy? Is that the...?
Needy, that's what we called him.
That's your nickname.
Yes, exactly.
So tell me, I was unfamiliar with this, so keep going.
Yeah, and we don't have that concept in more Judeo-Christian society.
We have, oh, things are alive and have a soul.
Things are not alive, don't have a soul.
And so there's this idea.
Not only humans have soul.
Or that only humans, yeah, depending on, you know, yeah.
But that's why some people say
that the Japanese are much more accepting of robots
and this idea of having humanoid and android robots around
because they're like, hey, that's cool.
Are they also more accepting of robots
in the Uncanny Valley?
They might be.
Again, like I said, the empirical testing on the uncanny valley has kind of been mixed, so there's not a good scientific basis for it. But anecdotally, yes.
Is it part of the fact that in their culture they have a greater need for robots? I mean, it is clear that they have, like in Japanese health care, they don't have enough people, and you have a great advancement of robotics in that particular arena.
Are you confusing robots with automation?
No, I'm talking about actual robot care. I mean, like, for instance, in a hospital, for the delivery of certain... A robot will do that.
Rather than an orderly or something.
Rather than an orderly, right.
You know, so for instance,
or just even go outside of healthcare.
Hotels that you go to where they have robot check-in
and it'll be like a Tyrannosaurus Rex
will check you into the hotel.
Yeah.
You know, a robotic, a robotic Tyrannosaurus.
Just for fun.
Because it's a novelty.
That's how much into robots they are that we are not, you know?
So, and what about Shintoism enables that or empowers it or drives it?
Well, some people would say that that makes them more willing to accept robots as this thing that's alive but not really alive.
Oh, so the simple element of inanimate objects having souls,
that alone would be sufficient.
So that is one reason people think the Japanese are more accepting of robots.
Another reason is, like you said, the need.
As robots come more into these shared spaces
and people interact with them more, people just get used to them.
And then there's also the fact that their science fiction and pop culture
tends to be less dystopian
when it comes to robots.
Like they have Astro Boy.
They have these positive stories.
I grew up with Astro Boy.
Astro Boy bounds away
on his mission today.
Rocking high to the sky.
How come I don't remember that?
Because I just made it up.
No, I'm joking.
That was the actual song.
That was the actual song.
I remember Astro Boy.
Yeah.
And there was,
well,
they also had Speed Racer.
There was a lot of
sort of early anime.
That was the early Japanese anime.
Yeah.
That made it to American television.
That's right.
And yeah,
it was all very,
very happy stories.
Yeah.
And.
But we have a lot of Terminator
and stories of the robots taking over.
They have less of that.
That is true.
We are so messed up here.
So, is the Japanese culture a good bellwether for the global acceptance and trajectory of robots?
I would say not necessarily.
I think that maybe the ways that they will want to use robots are different.
Okay.
Like the fact that they like androids, and I don't really think that we do in Western society.
Right on, yeah.
But it'd be interesting to see if all countries have equal access to this technology,
what they'll come up with relative to their own cultural needs.
Right, for sure.
So, Anthony Daniels, we're featuring my clips with him.
He has an interesting perspective on what makes C-3PO more human than a robot.
Ooh.
That's his perspective, because he was, he is C-3PO.
Cool.
So let's check it out.
George came up with this idea of this kind of figure, this Art Deco figure.
Then he employed Ralph McQuarrie, who made this life-changing painting that I saw
of the character. And then Liz Moore, the sculptor, turned that into 3D and made this
beautiful face that people recognize. And interesting, I only just realized the other
day because I was trying to cheat in Photoshop. Because some robot faces are just scary.
And this is actually very... it's got curiosity in it, and you want to know what he's thinking, because he clearly is thinking. And partly it's that sort of wide-eyed, almost babyish stare with big eyes.
And what was interesting, Liz had actually, and I never realized it until recently, created
something that wasn't machine perfect, it wasn't symmetrical about a center point. It is actually,
as in a human face, I tried to flip it in Photoshop to double it up to make it perfect.
And it doesn't work because he is asymmetric. And that is one of the clues, I think, to his humanity.
That's an interesting philosophical point
because there's been research on symmetry
and there's a whole off-ramp from that research that says,
maybe it's not an off-ramp, maybe it's an on-ramp,
that a little bit of asymmetry brings interest to a character,
to an image, to a painting, to art.
Perfection, there's nothing more to say.
It's like somebody did it already.
Right, right, right.
Now, just between you and me, you do have a very symmetric face.
Let me just stare into the camera here.
I personally don't.
If you cut me in half, if you cut me...
Here we go.
It is not symmetric.
Yeah, you are so symmetric.
Would I like...
You are perfect.
I'd probably like to be, but it's too late now.
I think we have to go with what we've got.
Were you
hitting on
C-3PO?
No, I was
just saying.
His book
had a picture
of him and
the robot,
and so I
put the other
half of him
next to his
head, and
it was him.
I'm saying.
But what a
genius design
tactic to
actually purposely put
in asymmetry. Tell me about perfection.
I mean, I hadn't heard about
this asymmetry thing before. That's really
interesting. But
one of the tricks that a lot of robot
designers use in social robotics is
to, you know, if you're going to
give it a face, don't make it as human
like as possible and don't give it too many features.
Don't necessarily give it eyebrows or a nose.
Just eyes is enough.
Things that we automatically respond to,
like he was saying,
like the big eyes, the babyish face,
things that we kind of evolutionarily respond to
are the best design tricks.
Oh, okay.
That's right, because babies,
their head grows only by a factor of three
and the body grows by a factor of five or six.
So babies have a disproportionately large head
to their body.
Yeah, I pushed one out of me.
Tell me about it.
Oh, is it?
Okay.
You should have just built it in the lab.
Yeah, you had the power. You have the power. You don't have to do it. You don't have to biologically recreate.
Okay, the rest of us, we'd do that if we could.
Right, yeah.
So I think that the argument from an evolutionary biology standpoint, at least what I learned from my colleagues here at the American Museum of Natural History, here's a commercial, is that in order to prevent mammals from killing their children, the children have to look cute.
Yeah.
And so the, not that everyone would kill their children, but I'm just saying.
I would.
Most would.
No, it's not that you would want to kill them all the time.
There are occasions in the arc of raising children
where if they weren't cute,
we go extinct a long time ago.
Have you done research into what our relationship
with robots says about us?
Ooh.
A little bit. Psychologically? us? Ooh. A little bit.
Psychologically? Emotionally?
A little bit.
I could talk about this all day,
but it's kind of like, you know how
when you go on a date with someone
and they're really mean to the waiter
and you're like, that's a red flag?
Some of our research indicates that
if you're mean or violent to a lifelike robot,
that might say something about you as a person.
Wow.
You know, that makes sense.
So Boston Dynamics has these videos online of robots being abused.
And I know clearly that that's a thing.
That's not a person.
And I got to tell you, it is so hard to watch
because they're hitting it with bats and they're kicking it and they're knocking it over.
It's a robot that's trying to walk.
Yes, it's trying to walk.
And basically—
I've seen those.
You've seen those?
Yeah.
And it's really disturbing.
Yeah, people get really upset.
Like the first time that they put one out that looked kind of like a dog and they named it Spot.
And then they're like kicking it and it's like struggling to stay on its feet.
People got so upset that PETA, the animal rights organization, was getting a bunch of phone calls and had to issue a press statement.
And they didn't even take it seriously.
They were like, yeah, we're not going to lose any sleep over this.
It's not a real dog.
But there actually might be something there.
Okay. Okay, so would you preemptively, I mean, is this like, what's that movie?
Tom Cruise?
Yeah, yeah, The Minority Report.
The Minority Report.
Is this how you would pre-diagnose someone's propensity? You sort of already said so,
because on a date, someone behaves a certain way toward someone they have power over.
I mean, right now robots are still really
primitive and we're still able to
mentally compartmentalize.
But as the design gets more and more
lifelike, I mean, we do
definitely draw connections between
animal abuse and child abuse in the same household
legally. If you have a case of one, you
look for a case of the other. And it's possible
that... Strong correlations already
established. Okay. All right. And if you have a robot
that can mimic pain
and suffering
and you enjoy
inflicting that on it,
that might be an indicator
that you might also enjoy
torturing an animal.
But we don't know.
We don't have the evidence.
This requires more research.
All right, so it's certainly
evidence that you're a dick.
That's for sure.
I feel like it kind of is.
So it's interesting.
So in the dating scene, these are like secondary cues.
They can be really nice to you, but the waiter not so much.
If they kick the Roomba, it's over.
Given the examples of Boston Dynamics and people kicking those robots,
and you feel the emotion for something that is not alive.
Right.
Where do laws ultimately have to land with regard to rights for robots?
Well, it depends.
So I believe in evidence-based policy.
Really?
Yes, I actually do.
What's wrong with you?
Unlike most legislators.
Have you been checked out?
I know, I know.
But like really,
like it would be nice
to have some evidence.
And if, for example,
we found out that
it was actually desensitizing
to people to behave
really violently
towards life like robots,
then, you know,
there's
some question of whether we should regulate and say you're not allowed to do certain things
to certain types of robots.
Because it's fostering behavior that would be counter to the interest of civilization.
So to me, only if it actually has an impact on that behavior.
Actually, I have to say that makes a lot of sense.
That would be evidence-based legislation.
That's evidence-based.
That makes a lot of sense.
Very hopeful there.
But it's tough to research.
So actually, my last clip of this segment,
I talked to Anthony Daniels about robots today.
Just to get a sense of what is...
Because, you know, that character dates from the 70s.
Right.
So just what was his thinking about the interaction of humans and robots today?
Let's check it out.
And one of the frustrations
we have now
with machines
that pretend to be human
and certainly in Japan
there are companies
working on humanoids.
Oh, they're leading the way on that.
Every time I see a new robot,
it's a Japanese robot.
Yeah, well,
they like that kind of thing.
They've slightly taken it
to their own
in the sense of social interactions with machines,
human to machine, human-cyborg relations.
Indeed, George was there first.
Oh, yes.
Some of them, we're in early stages of real robotics,
and we have to think what we want from that.
But when you have something that pretends to be human
and then sort of suddenly malfunctions,
it's like, well, we're talking about Stepford Wives.
Suddenly I'm alerting to all these...
You've given us a full review of 20th century robots here.
This is great.
And 20th century film writers, script writers,
who now very, I think, cogently have adopted this slightly outer world,
nether world, where we are going, not in my lifetime, I hope,
because I need the work, you know.
So let's not move.
Actually, I'll come back to that, because in Japan it's widely known
that they are looking for really human-relatable,
probably bed-sized machines that people can relate to.
But then you have to look at what kind of figure physically do you supply?
Because if it's too humanoid and it starts clicking,
then it's a little scary, isn't it?
Right.
If it's too mechanical, then you're relating to, I don't know, a can of fizzy drink. Where's the balance between who I want to believe I'm relating to? Because if I get too fond of you and you're a machine,
it's not going to end happily.
There are all kinds of off-ramps there for where that would go.
Let's not go there.
That's for the second series.
So is there any thinking in your lab about human-robot relationships?
Bonding?
Yes.
And where does that land?
Well, for me and a lot of my colleagues,
I feel like we're, as humans,
capable of a lot of different types of relationships.
And to me, the relationship with a robot isn't necessarily the replacement of a human relationship.
It's more like how we would treat a pet
or something completely different and new.
So it's not something that I worry about.
That's enlightened, though.
But maybe not.
But I think that's an enlightened outlook.
It's not clear to me that that's where that's going to go.
I think people, you know, if people can have imaginary friends,
then they can have a robot that becomes a friend.
That becomes a friend.
Okay, but why is that bad?
No, no, I'm asking you.
Yeah.
Is there, should we, I don't mean to imply it's inherently bad.
I'm asking you, have you guys thought about whether or not it's.
Well, what I, what keeps me up at night isn't.
That's what we want to know.
Isn't that someone might bond or have like a friend as a robot.
It's that a company is making that robot and maybe is using the robot to emotionally manipulate that person.
But that already happens in toys.
No, no, it's called advertising.
Yeah.
That's even better.
Manipulate all the time.
Don't even take a robot.
Doesn't even take a robot.
You're absolutely right.
Big psychological brain screw
called advertising.
But no, I remember
it was like a furry or a Furby
or something,
but it's a little robot
and it says things like,
I love you and
you're my friend. And it's like,
you know, I was like, I would never
get that for my kid. Like, that's the loneliest kid
in the world that needs this toy that's like
giving it love and affection
and reinforcement. There was an episode
of The Twilight Zone
where there's a guy isolated on an asteroid somewhere. It's early, before they knew what space was really going to be like. But anyway, he's on an asteroid, and this asteroid apparently has a breathable atmosphere.
But holding that aside.
Holding aside these complexities. Holding that aside, right. Okay. He couldn't be rescued for a long time, and he's slightly going crazy. So they brought him a robot, a female robot. Okay. And they turn her on, and then she comes to life.
Comes to life.
Of course, it's played by an actual actress, but it doesn't matter, she's a robot. And then they're companions, and they're there for like a year. And then the rescue mission finally comes, but there's no room on the ship for her.
No.
Wow.
No, this was a deep story.
Oh, my God, what happens?
They said, either nobody gets back, or we're going back without your companion.
He says, no, but she's...
I love her.
Guy takes out his gun, shoots her in the head.
What?
And then the springs come out and the thing, and it said, let's go.
Who does that?
It's in the show.
The guy in love with her?
No, another guy.
No, the other guy who's trying to save his fellow astronaut.
That's so cool.
To remind him that she's just a robot.
She's just a robot.
So talk to me.
I mean, robots can fill a void like that.
They're already being used as an animal therapy replacement in nursing homes
because we can't use real animals.
And so you bring in this baby seal robot that gives you the sense of nurturing something, and people become very attached to them.
Wow.
No, so I'm asking...
Yeah.
The ethics of the story I just shared with you.
Oh, well, I mean, I think it's unethical to shoot the lady robot, but...
Okay. But wasn't that psychologically damaging for him?
Otherwise they all die, because there's only one seat on that rescue.
Well, yeah. Yeah, okay. That's the construct.
Yes. And you're an ethicist. Talk to me.
Why couldn't she sit on the outside? She doesn't breathe.
There you go.
Oh, good boy. They just strapped her to the bottom of the ship.
Chuck, I forgot about this.
Yeah.
You know.
Thank you.
Now we don't have to resolve the ethical issue.
Right.
Oh, my gosh.
Oh, wow.
Chuck solved that problem.
Okay.
No, but tell me, how would you, where?
Can it be unhealthy, though, this bonding that you're talking about?
It can be, yeah.
I mean, if it's being used to manipulate someone
or if they're bonding with something.
So it sounds like she was meant to be a tool
and they didn't anticipate that he would bond with her this much.
Yes.
And this happens in the real world.
This happens with soldiers bonding with their bomb disposal robots
where they treat them like pets and they get really upset if they get broken.
Particularly if they save your life, it doesn't matter.
Yeah.
So Peter Singer has written about soldiers actually risking their lives.
Peter Singer, the Princeton philosopher.
No.
So there are two Peter Singers.
There's a Peter Singer who has written a book called Wired for War about military robots.
Oh.
And apparently soldiers have risked their lives
to save the robots
that they work with.
they're actually missing
the point of that robot.
Yeah,
well,
or the people.
The point of the robot
is to save their lives.
Right.
Yes.
Exactly.
Yes.
But,
kind of,
you bond with something
if it saves your life,
though.
And I don't think
the people who deployed
that really anticipated
that response.
Real interesting.
Okay,
so here's the question then,
if you're going to make it empirical.
There's the risk to his psychological health
having no companion for a year
versus the risk to his psychological health
of having a companion that you put a bullet through her head.
Which of those is worse?
I mean, not having a background in psychology, my guess would be it's...
Ethically, even.
I mean, you know, we get pets and we know they're going to die.
And this is a similar thing.
That's true.
All right.
We got to land this plane.
We have a whole other segment.
Wow.
Okay.
We're going to take a break.
When we come back, more of the relationship between humans and robots on StarTalk.
Hey, we'd like to give a Patreon shout-out to the following Patreon patrons, Leon Galante and Tyler Miller.
Guys, thanks so much for your support, because without you, we couldn't make this show.
And if you would like your very own Patreon shout-out,
make sure you go to patreon.com slash startalkradio and support us.
We're back.
StarTalk.
We're exploring the relationship between robots and humans.
Yes.
Featuring my interview with Anthony Daniels, who recently published the book, I Am C-3PO.
Ooh.
Yeah.
And we have with us,
as sort of an expert commentator,
Kate Darling.
Kate, reintroducing you to those who,
whoever comes in only in the third segment,
I don't know who that is.
Animals, that's who.
So, anyway, thanks for bringing
your MIT Lab perspectives for us.
And I was delighted to learn that Anthony Daniels was affiliated with academia.
Let's check it out.
Maybe I shouldn't call you Anthony Daniels.
I should call you Professor Daniels.
I'm fundamentally an academic, so my radar perks up.
You can call me a professor, but I know where you're going,
because I'm not a professor.
I am a kind of visiting professional at Carnegie Mellon University.
Carnegie Mellon, one of the leading institutions in computer science
and robotics and everything automated.
But years ago, I kind of got connected with it,
curious circumstances, through the Robot Hall of Fame.
Oh.
They invited me.
It's an institution in the Science Museum there in Pittsburgh.
They contacted me.
Would I come and accept an award for C3PO to be part of their exhibit?
Yes, of course.
How could you not?
But on the way there...
So where is the Hall of Fame?
It's in the center of Pittsburgh.
It's in the Science Museum.
Oh, very good.
And for instance,
they've got C-3PO,
they've got R2-D2,
they've got a machine
that can pot a ball every time.
Get that hoop every time.
A basketball.
A basketball.
It can do it mechanically
every time perfectly.
No matter where you put it
in the,
or just from that one spot.
I think you,
yeah,
it's cheating,
isn't it?
Yeah,
yeah.
It's rubbish,
isn't it?
I know.
And they've got
one of the original arms
of that original,
could pick up an egg
and put it there
and just do that
all the time.
They've also got,
Oh,
so they have the history.
They've got the history.
And at the time,
that would have been quite remarkable
to get a machine to do anything.
It was the first industrial robot there was.
There it is.
And it's got to be able to pick up an egg
and not break it.
Not break it,
but also put it exactly to replicate.
And, you know,
the definition of a robot has changed now
from the early Asimovian days
to where we are, a machine that can do something
kind of that's useful,
just doesn't need a human to do it.
They also have, for instance, a room as large as this,
which is a medicine dispensary,
which is apparently far, far more accurate
than having a human dispenser in a hospital.
It's dishing out the drugs, but in a good way.
If you visited that Hall of Fame, let's assume they have the drugs, but in a good way. If you visited
that Hall of Fame, let's assume they have all robots,
what would be your favorite robot, Kate?
My favorite robot?
They only have real ones, right? Like, not
science fiction. Oh, no, C-3PO's in there.
Is WALL-E there? I like WALL-E.
WALL-E.
You're allowed.
Even if they just have a drawing
of WALL-E, we'll give you WALL-E.
You're like, well, that's cute.
I like that.
Okay, how about you?
Alien: Covenant,
which wasn't the best movie in the world,
but Michael Fassbender.
Oh, yeah.
Hot robot.
Hot robot.
But not as hot as,
what's his face in AI?
He was the male sex robot.
I forget his name.
He's gorgeous.
God, did I just go gay for robots?
I think I did.
You totally did.
I totally did.
Totally did.
Anyway, Michael Fassbender plays two robots.
He plays himself.
He plays Walter, who has no emotions, but then he plays Walter's evil twin, who does.
And they're both robots?
And they're both robots, but the one without emotions, believe it or not,
easily manipulated by the one who has emotions
because when you're evil, you can do evil.
But when you don't have any emotions, you're just susceptible to anything.
So that's your favorite robot?
Yeah.
Which one?
The evil one.
I got it.
I can't lie, the evil one is my favorite.
Yeah.
You know why? Because I don't have it in me to be that. And I think maybe if I did, I would feel differently.
Maybe you need it to complete you.
Oh, wouldn't that be cool?
So at what point do you think about the good and evil that a robot might or might not do,
either because they're programmed to or because they learn it on their own?
Right.
Yeah.
I don't think we think about it in terms of inherent good or evil,
but more how is the technology being used?
By those who should know the difference between good and evil.
Yes.
Right.
Humans.
But see,
now apparently at some point,
these machines will be programmed
by algorithms written by people,
even if they're written by other machines,
written at some point by people,
and good and evil
will be kind of inherent in that algorithm.
Yeah.
It's going to be a whole mess of gray.
Oh, okay.
Thank you.
Thank you for that.
Thank you.
Thank you for those hopeful words.
I'm very, very hopeful, yes.
Well, I had to take the conversation there.
Okay.
The future of AI infused in robots.
Let's see what Anthony Daniels says.
I am a little frightened by AI, and you're quite right.
I come in to deal with the talks of the students with an objective eye
that I don't really understand much of the science,
but I have an outer perspective.
And gradually through practice, you know,
I'm enjoying virtual reality, augmented reality,
all these sort of things that are gradually bringing
the theatrical user, if you will,
the entertainment user, into the scene.
So you're not just a sit-back participant.
You are actually involved.
So maybe the gradualism of this prevents anyone
from even noticing the day that AI takes over.
I think kind of that's already happening.
But in the world of entertainment, which is what the Entertainment Technology Center is
about, the growth in involving entertainment is very marked, you know, with all these headsets
coming onto the leap motion and all that kind of thing.
But in a world where robots are going to industrialize jobs
and take jobs away from human beings.
Which they've already begun doing.
Yeah, we'd better look to what humans can do apart from twiddle their thumbs.
Wow.
Interesting.
So let me ask you now then.
Whether or not you were thinking about evil in the moment,
how about evil in the future? How about AI turning evil? Let me ask you a tighter question. Has AI, as portrayed in film, gone to the right places that we all should be thinking about?
No.
They're missing something?
Yes, they're missing a ton.
And I love science fiction.
I think that science fiction opens people's minds
to thinking about what's possible.
But we have so many dystopian stories of robots taking over
that people are fearing robot uprisings
when that's very premature
and we should be worried about other things
that are happening right now.
Wait, you didn't say that it wouldn't happen.
You just said it's premature.
Well.
That's funny.
People worry about robot uprising?
Right.
Not yet.
Not for at least another two years.
Not yet.
That's so 2027.
No, so tell me.
So where should we be focused?
Well, there are a lot of issues right now with privacy, data security,
supplement versus replacement of human ability, and with reinforcing racial and gender stereotypes in the design of these technologies.
All of this is happening right now.
There's autonomous weapon systems being developed.
There's things we should be concerned about that aren't the robots becoming smart and taking over the world. I think that
that one also
tends to be
a lot of
rich white dudes
worry about that one
because they don't have to worry
about the other stuff.
They're just like,
my only danger
is that a robot
is going to kill me.
I don't have to worry about
like facial recognition
holding me up
at the airport.
That's sociologically insightful.
You're right.
That makes a lot of sense.
Plus, you've seen the racist sinks in bathrooms.
Yeah.
The racist sinks.
No.
I don't know about this.
You don't know about those?
No.
They can't see black hands.
Yeah.
And we've done this with all types of technology
because it's all white dudes building it.
Right.
And so, right.
I just thought the sink didn't work. You put your hands under, waiting for the water, because it's automatic. This has happened to me, and nothing. Maybe that sink doesn't work, so I go to another sink. Then I wave my hands more. Eventually, you know, it'll hit.
So if you do the experiment, you put in a darker surface or a lighter surface, it's reflecting the light.
Right.
So what you're saying is white men.
Say it. White men.
White men, because there's also a lot of gender stuff that happens.
Right, right.
We'll design things thinking that they are the model of what it is and should be
and capturing their concerns.
But it's also not their fault.
Like, we all view the world through our own experiences.
And so the problem is that we don't have diverse teams building technology.
That's really where it is. It is their fault
if they're not hiring you.
Or Chuck. Or Chuck. They need to hire Chuck,
really. There you go. Can a black
man have clean hands?
I'm trying to stave off
viral infections.
Dang!
Yes, sir, you're hired.
We'll put you in.
But we don't want any trouble.
No, but you raise a very important point.
I'm stereotyping here,
but if white men are programming all of the code
that will be the future of AI,
it could have remarkably biased consequences.
As an unintended consequence.
Yeah.
Right.
You're speaking about this as though this is in the future, but this is actually happening
right now.
Well, gotcha.
Listen.
Yeah, I know.
That's why my hands are dirty.
Chuck can't clean his hands, and you don't know when.
Oh, God.
Thank God for hand sanitizer.
All right. So how about this?
Let's try to land this plane.
Okay.
What do you think is our largest ethical dilemma going forward?
I think the thing I worry about the most is that a lot of AI,
the way it's built right now, relies on data collection.
They need massive amounts of data, and so I worry about privacy
because there's no incentive to curb that right now.
Interesting.
Because if you want to know all about humans,
you've got to know everything they do.
But then other people will also have access to that data.
Governments will, companies will.
It's already happening in China
where they're collecting information privately,
but then the government forces them
to turn over that information, including facial recognition
that happens on just the streets of
the cities where they're like,
all that camera data.
How to identify foreign nationals.
Exactly.
Exactly.
So how do we lean into, because I do think
there are so many positive use cases
for this tech, so how do we lean into those positive
use cases and curb some of this stuff?
That's a challenge.
No, don't ask us that question.
I'm asking you.
I didn't bring you here to ask that.
No.
If I had an answer, then my job would be, you know.
Over.
Yeah.
All right.
So let's get some parting thoughts.
Chuck, what's your parting thought here?
You know, I'm really disturbed by the fact that there are racist sinks, man.
I'm sorry.
You did not know about that, Chuck.
I did not know about that at all.
And this has happened to me.
It's like I feel violated by sinks now.
That's all I can think about.
I'm sorry.
Okay.
Sorry to take you off the rails there, Chuck.
So, Kate,
give us something hopeful here.
Reflecting on
it all. So,
you know how people are
sometimes nice to robots and then they feel
silly about it? Like, they'll
say excuse me to their robot vacuum
cleaner, or they'll, like, say
please or thank you to Alexa, Amazon's assistant. I don't think people need to feel
silly about that because I think that what that is saying is that their first instinct is to be
kind to another. And so what I really, really love about robots is that they are kind of a reflection
of our own humanity in a way.
You mean our interactions with the robots?
Yeah, our interactions with them.
That's cool.
That's cool.
Yeah.
I like that.
So if you're a good person, a robot will tease that out of you.
So here's what I think.
Not that you asked.
We don't care, but go ahead.
Says the media lab professional.
I was going to say, the media lab professionals want to treat him like a robot.
This is the real side of who we got here.
I think, and I don't even know if I have foundation to think this way, I think the apocalyptic scenarios are overplayed. I think we always dial into our base, lowest fears, because fear tends to always override our joys. That's natural, I think,
for survival, right? You can, you know, if you're not afraid of something and it kills you,
then you're dead. Gone is the gene to be afraid of stuff that will kill you, right?
So you're taken out of the gene pool.
So I think that's been overplayed.
My worry is that our distraction with the evil
prevents us from thinking more creatively about the good.
The good that robotic AI can bring to this world.
And I don't want to lose out on the creative solutions
that they can bring.
So, Kate, I put it entirely on your shoulders
to fix the problem.
Because this office is not called the Media Lab.
This is just Neil's office.
Right.
Or the Cosmic Crib.
We're going to send you
back home,
back to your peeps,
and we want you
to solve this problem.
Challenge accepted.
Excellent.
Chuck.
Always a pleasure.
Good to have you.
Kate,
thanks for coming down
from Boston.
Thank you so much
for having me.
All right.
And you've been watching,
possibly listening,
to this episode
of StarTalk,
Robots and Humans.
And I just want to thank Kate and Chuck
for doing the show.
Absolutely.
As always, I bid you to keep looking up.