StarTalk Radio - Cosmic Queries – Humans and Robots
Episode Date: April 6, 2020

What separates humans from robots? Will humans eventually be fully dependent on automation? Neil deGrasse Tyson, comic co-host Chuck Nice, and robot ethicist Kate Darling, PhD, answer your Cosmic Queries on humans, robots, and everything in between.

NOTE: StarTalk+ Patrons and All-Access subscribers can watch or listen to this entire episode commercial-free here: https://www.startalkradio.net/show/cosmic-queries-humans-and-robots/

Thanks to our patrons Rusty Faircloth, Jaclyn Mishak, Thomas Hernke, Marcus Rodrigues Guimaraes, Alex Pierce, Radu Chichi, Dustin Laskosky, Stephanie Tasker, Charles J Lamb, and Jonathan J Rodriguez for supporting us this week. Special thanks to patron Michelle Danic for our Patreon Patron Episode ID this week.

Photo Credit: Web Summit / CC BY (https://creativecommons.org/licenses/by/2.0)

Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Transcript
Welcome to StarTalk, your place in the universe where science and pop culture collide.
StarTalk begins right now.
StarTalk, Cosmic Queries Edition.
I'm your host, Neil deGrasse Tyson, your personal astrophysicist,
bringing you this episode from my office at the Hayden Planetarium
of the American Museum of Natural History right here in New York City.
And of course, I have with me Chuck Nice.
Hey, Neil.
What's up, buddy?
How you feeling?
I'm doing well.
All right, good.
Feeling good.
Are you ready for some cosmic queries?
Always ready for the cosmic queries.
This one in particular, because it's on the relationship between humans and robots.
Ah.
That's weird.
Yes.
There's a lot of dark places that can go.
Does not compute.
Of course.
That's the big, you know.
And, of course, you tweet at Chuck Nice Comics.
Thank you, sir.
Yes, I do.
And you want me to take out the person who's got Chuck Nice as the handle?
Please do.
I don't know.
And you know what?
He's got like 12 followers.
And you want your 20 followers to...
And I want my 22 followers to be able to just come.
No, I kind of like the Chuck Nice comic now.
Yes, it grows on you, right?
Yeah, it does.
It becomes your thing.
Right.
So on that subject, we have expertise.
Yes, we do.
We reached out 200 miles away.
Right.
Up in Cambridge.
Yes.
And we found a Cantabrigian.
Kate Darling.
Kate.
Hi.
Hi.
Welcome to StarTalk.
And you are an expert on issues related to humans and computers.
Yes.
Specifically robots, yes.
Oh, sorry.
Yes, robots.
I like computers, too.
Right.
Yeah, yeah.
There are no robots, though.
You know, computers.
They don't really.
I mean, robots are cool.
Computers are just computers.
Exactly.
Good point.
Right.
Yeah.
I get that.
It is known, yes.
This is at the Massachusetts Institute of Technology, the MIT Media Lab.
And you've been there how long?
Nine and a half years.
And you came there from how?
I was a doctoral student at the ETH in Zurich, which is a tech university. It's kind of like the Europe MIT, but no one knows that.
ETH, is that a word or is that an abbreviation?
It's an abbreviation for Eidgenössische Technische Hochschule.
Eidgenössische Technische Hochschule.
She was showing off there.
Yes, exactly.
I think you said it better than I did.
What is Hochschule?
Yeah, so that translates to what?
Federal Technical Institute.
No, Federal.
Yeah, Federal Technical Institute.
Are you sure you speak German?
Not anymore.
You're not sure anymore.
So what were your research topics there?
Okay, so there I was doing law and economics and intellectual property.
Oh, kind of economics?
Law and economics.
Law and economics and intellectual property.
Yeah, but the ETH has a great robotics program.
There are a lot of roboticists there, and I've always loved robots.
And so when I got the opportunity to come to the Media Lab, I made friends with all the roboticists and switched fields.
Wow.
Nice.
That's good.
Very cool.
Not only did you need to be that nimble, but the system had to accommodate it.
That's not always the case.
Yeah.
Very good.
All right, Chuck.
So we got these questions that came in.
Yes, we do.
Solicited on humans and robots.
That's right.
And everybody wants to know.
This is not a small topic.
Yeah, this is something that everybody gets into.
All right, let's do it.
And so we always start with a Patreon patron because they offer us support in the form of financial contributions.
Money.
Money.
That's right.
So many euphemisms for money.
It's amazing.
Isn't it really?
Yes.
Yes, exactly.
We used to have a fundraising department.
Now there's the development department.
Oh, development.
Development, yes.
We're going to develop some funds.
I believe they call that counterfeiting.
Developing funds.
What are we doing?
In the back office.
We're developing some funds.
Yes, the machine is running right now.
Exactly.
But anyway, let's go with Jared Goodwin, who says,
if a robot can pass the Turing test,
should it be endowed with inalienable rights?
Could it be a marriage partner?
If it's the cause
of a human death,
should it stand trial?
Also,
isn't the human fear of AI
just a fear any species
should have of evolution?
And I mean,
that begs another question.
Is AI the next incarnation of human evolution
which is really interesting
that was five questions
so I'm going to tell you what
let's go with just the first one which is
let's say it passed the Turing test
which I mean everything does now
should it have unalienable rights
or we can broaden it and say
is there a threshold even if not
the Turing test?
Oh, yeah, that's a good question.
That's a better question.
That's a better question because arguably robots have already passed the Turing test.
Yeah, pretty much.
I would think so, too.
Yeah, they really have.
But tell us what the Turing test is.
That's a good idea.
So, yeah, the Turing test.
So, Alan Turing, way back in the day, one of you probably knows the exact year,
he came up with this concept of the Turing test,
where he was like, it doesn't actually matter if a machine is intelligent
as long as it can pass as intelligent.
So if it can fool people into thinking it's intelligent, that's basically just as good.
I know some people.
Yes.
Just barely passed the Turing test.
Yeah.
Yeah, and well,
so some people have turned this into contests
around the world
where it's popular for chatbots.
You know, can a chatbot fool judges
into thinking that it's a human,
that they're talking to a human
for a specific amount of time?
And, you know,
multiple chatbots have passed that test.
But they never helped me.
I never received help from a chatbot.
So these are, just so I understand it, a chatbot would be software that can interpret your
question well enough and give an answer good enough so that you're listening and you say,
I'm talking to a human.
Yes.
Okay.
And there's some tricks that they use
to get them to pass it.
Like, for example, one year,
this chatbot won one of the competitions
by pretending to be a 13-year-old from Ukraine.
And the expectations for how it would chat with you
were maybe a little bit different
than if it was pretending to be you, for example.
So I think that, you know,
there are a lot of little design tricks
where we can get people to think that robots are intelligent.
We're already there.
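To make the trick concrete: the canned-pattern approach Kate describes can be sketched in a few lines. This is a minimal ELIZA-style responder, with purely illustrative rules (not taken from any real contest bot): match a pattern in the user's input and echo back a templated reply.

```python
import re

# Hypothetical pattern/response rules, checked in order.
RULES = [
    (re.compile(r"\bmother\b", re.I), "Tell me more about your mother."),
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bhelp\b", re.I), "What kind of help are you looking for?"),
]

def reply(text: str) -> str:
    """Return the first matching canned response, else a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

# Echoing the user's own words back creates the illusion of understanding.
print(reply("I am losing my home right now"))
# Why do you say you are losing my home right now?
```

Nothing here "understands" anything, which is exactly the point of the design-trick objection: a judge who doesn't probe can mistake reflection for intelligence.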
So is that even fair?
Because now you're using tactics to trick a human
rather than have it be an authentic profile or properties.
That's a good point.
I mean, all of our communication is tactics.
I'll give you an example.
So I go way back.
I'm an old man.
Okay.
All right.
However old you don't think I am, it's makeup.
So I remember the early days of playing chess against a computer.
And I did this and it beat my ass every time.
And then I realized, I can trick it.
So here's what I did.
I was about to make a good move,
and I wouldn't take it.
I make a different move.
And it doesn't understand that,
because it's a very obvious move I should be making,
and I'm not.
And it disrupted its logical sequencing and it doesn't know how to defend against something that
I'm not attacking. And so it started moving in random places. And then when I got it distracted,
then I went in for that move when it was no longer expecting it, because it had given up on me making it.
And so I tactically beat the computer, but I didn't feel good about that because it wasn't just a brute force head to head. So should we allow someone to purposefully,
tactically fool a human into thinking it's human? Well, I mean, that's Turing's whole thing,
right? If you can fool them, it doesn't have to actually be intelligent. Yeah, but if you fool it with targeted algorithms, that feels unfair.
Yeah, I guess so.
Yeah, I mean, Turing, unfortunately, is dead, so we can't ask him.
Would he be okay?
Yo, you cool with that?
Fine.
Fool me once, shame on me.
I'm just saying, I don't feel like I actually defeated the computer.
Yeah.
I beat it.
I beat it because I beat it.
Because you kind of cheated.
I cheated a little.
You didn't actually beat it because you were skilled at chess.
Right.
That's how I should have said it.
Right.
Exactly.
I beat it because I figured out how it worked and then outwitted it.
And I'm not proud of this.
I live with this.
I've lost lots of sleep over this.
You clearly have.
It weighs on you every day still.
Well, the chatbots work.
I mean, you know, most companies now use them for customer service
when you are on the website and they say,
it pops up, can I help you with something?
And it knows there's only so many reasons you can come to this website.
So whenever that happens, whenever it pops up and says, can I help you,
just actually say something that has nothing to do with the website.
And it's just like, yes, like I'm losing my home right now.
Can you help me?
Or can you loan me $30?
The fact that you know that this is something to do
tells me you need a life.
Yeah.
Why are you sitting at home trying to trick the chatbots?
Chuck, this is sad.
I was going to say,
maybe we shouldn't be talking about this right now
because why am I doing that is a good question.
I don't know.
It's great because I just want to see what it says.
You know what I mean?
Okay.
So let's look at the limit.
So you have a chatbot that fools people in these contests.
Yes.
Is that a threshold where you start giving it rights?
No, definitely not.
And I'm not sure what this question asker means by the Turing test.
Like, maybe he means if it could fool you no matter what.
Like, not just in this contest and not by cheating.
If it could fool you into thinking it's intelligent.
Imagine a flexible Turing test appropriate for whatever the thresholds of the day are.
So, if Turing were around today, whatever his Turing test would be,
should that be sufficient?
Suppose it says,
I don't want to die.
Okay?
And no one ever programmed it to say this.
And it says,
because it's machine learning
and through many interactions,
it has determined,
I'm alive and I don't want to die.
You're freaking me out, Chuck.
Does it deserve rights?
I mean, it depends on your theory of rights
because animals arguably say in their animal language,
I don't want to die and we kill them anyway.
Well, because machines aren't delicious.
Let's just be honest.
I'll tell you right now, if my Apple computer actually tasted like an apple, it wouldn't stand a chance.
Okay.
But, Kate, you make a very important perceptive point that even though another animal cannot tell you, I don't want to die, it's behaving like you don't want to get hurt.
And we actually know that they feel pain.
We know it.
All top to bottom.
Right.
But yet we kill them anyway.
Oh, my God.
You guys are going to make me vegan right now.
This is terrible.
This is awful.
I never thought of it like that.
No, Kate.
You're messing with us.
So, yeah, what you said is unarguably correct.
Yeah.
So that alone would be insufficient to give it rights.
I mean, if we're going to behave like we have for the past millennia, but we could also say, hey, we want to be better,
and we could give animals rights and give the robots rights.
That's just too much.
I'm sorry.
Like, it doesn't say, I don't want to die.
It says, and this too
shall pass
it's like
whoa
wow
wow
or if it says
tell me about your mother
there might be some
that no
but I agree
I can't
what you said is
we kill stuff
that we know
and you know
what the sad thing is
we'll probably give some robots rights
before we give the animals rights
because the robot can manipulate us
and can be designed in a way that particularly appeals to us,
the way that we protect certain animals over others.
Which I think is not entirely fair.
We like fuzzy, furry animals better than animals that don't have fur.
That's true. That's true.
That's true.
Shrimp never stand a chance.
Shrimp don't stand a chance.
Shrimp don't have fur.
That's right.
Ugly spider sea creatures.
You know, and you delicious.
And you delicious.
You ugly and delicious.
You don't stand a chance.
That's why lobsters.
And you can eat some dipping sauce.
Yes, exactly.
You know, it's like that's how lobsters, like somebody made drawn butter and they were like,
let's just start dipping stuff in it.
And they got the lobster and they were like, this is it.
Right, because the first person to eat a lobster, that's a brave person.
That's a brave person.
That's some ugly animal right there.
Really?
Are you going to eat that ocean roach?
Like, are you for real?
Yeah.
And it's like,
yeah, no,
try it with a drawn butter.
Oh my God.
What a delicacy.
But yeah.
Okay.
So that's great.
It doesn't offer
much hope for that.
It doesn't.
It doesn't.
Not for the animals.
Not for the animals
and not for the machines either.
You know,
it seems as though
it's like,
you know,
I really,
what you just described
is human, our need to be superior.
It's basically our need to play God over these other, you know.
To be able to decide.
To decide their fates.
And we do that even to other people, right?
This seems to be.
Yeah.
It's.
Kind of our dark side.
It's our dark side.
Wow.
Okay.
You're throwing me out here.
Damn.
Well, we could just stop doing that.
Couldn't we just
stop doing that?
Apparently,
it's been very hard
over the millennia.
I was going to say,
if you look at our history,
no, we can't.
Apparently,
it's really, really hard.
Clearly, we can't do that.
You know?
I do think we should try.
Okay.
Yeah.
The trying is a good thing.
All right, here we go.
This is David Blum from Instagram.
He says, hey there.
Do we finish with the Patreons?
With the five questions?
Well, he had five questions, but that was the big one.
That was.
The rest of them were just lesser versions of do they have those rights?
Like the right, you know?
Because, like, I mean, if you don't have the right to be alive, nothing else matters.
It ain't about whether you can get married or not.
You know what I mean?
If a machine's married, we're going to kill you anyway.
I don't give a damn if my sheep is married when I eat it.
Okay.
Well, I don't eat mutton, but my lamb.
Nobody's eating mutton today.
Yeah, exactly.
You know, so there you go.
Marry all the chickens you want.
I am still eating that chicken sandwich.
That's what I'm saying.
That was my husband.
All right.
Chuck.
Okay, here we go.
Sit the hen.
All right.
Give me another one.
Here we go.
David Blum from Instagram says this.
Hey, David Blum here.
And Chuck, it's pronounced bloom.
You know...
They know you have issues.
There you go.
Big fan, great show. Here's the question.
We tend to imagine robots like
humanoids, two arms and two legs. But we
already have things
like automated vending
machines and self-driving cars
and responding cars. Should these
be considered robots?
What defines a robot?
And does AI have to be involved?
Great question.
And we don't have time to answer that.
Oh, okay.
What?
No, no, no.
Just for this segment.
Just for this segment.
Kate is excited for this one.
Man, I was like, okay.
When we come back,
we will find out what in modern day
defines a robot.
I am Michelle Danik and I support StarTalk on Patreon.
This is StarTalk with Neil deGrasse Tyson.
StarTalk. We're back. Robots. Humans.
What's the deal?
What's the deal with robots?
Robots and humans.
We've got Kate from Cambridge helping us out here.
Right on.
Right.
So, we last left off.
Yeah.
With David Bloom.
David Bloom.
Who wanted to know.
And he taught you how to pronounce his name.
Yes, he did.
And basically, quick recap.
We think of robots as humanoids, two arms, two legs,
but we know that we have things like vending machines,
self-driving cars, responding cars.
Are these considered robots?
What defines a robot, and does AI have to be involved?
There you go.
Thank you, David.
So one of my pet peeves is if you do a Google image
search for robot, you get almost only humanoid robots, right? Like he describes them. A head,
a torso, two arms, two legs. Are you doing it right now? I'm doing it right now as you speak.
He's Googling. I'm doing it. I'm just going to put in robots. R-O-B-O-T-S. Because a lot of people
immediately think of the humanoid robot, but he's absolutely right. There are many, many, many different forms of robots out there.
And I do think that the definition of robot already does include those.
You're absolutely right.
There's not one image here of just a machine.
It all, they have eyes, even faces.
It's all, they all have, they're all humanoids.
And okay, so all the way down at the bottom of the page, here's your first one without a face.
But that even has like, it's standing on two legs.
It's standing on two legs, but I'm just saying.
You got to go all the way down,
and all you get is like one without a face.
But it's still a humanoid.
So then clearly you're losing this battle.
I mean, I only just got started.
Right on. Kate, right on.
Kate throwing it down.
Down the gauntlet.
Kate Darling is on the case.
All right, how do you think about that one?
There's one.
There you go, Kate.
That's a robot dog.
Ooh, the cheetah.
That's a cheetah.
I'm sorry.
Why aren't you sharing the robots with the people on the thing?
Oh, I'm sorry.
Did you have your own private show here?
I got to tell you, I forgot we were doing the show.
Damn, Chuck.
Damn.
I'm sorry.
Go ahead.
Wait, what's our...
So the point is
anyone's first idea
of a robot is humanoid.
Yeah.
And you have issues with this.
Yes.
How are you going to change it?
By telling people
that this comparison
between robots and humans
is something that we like to do, but it limits us.
It limits us.
Really, the potential of this technology is that we can create anything we want.
We don't have to make it a human shape.
People always say, oh, we need humanoid robots
because we have a world that's built for humans,
and we have doorknobs and stairs.
But I'm also kind of like, yeah, maybe that's true in some cases,
but robots could climb the walls,
or we could make things wheelchair accessible and be able to have cheaper robots
and have a better world for humans.
Why do we need humanoids?
That's true.
You're right.
Even in manufacturing, we call them robot arms,
but no arm moves like those things.
No arm spins and twists and is opposable
in every single direction 360 degrees.
But yet we still call it an arm, you know.
Why are we limiting our imagination?
Right.
Okay.
So what to you makes something a robot?
Is there a definition, a threshold?
There's not a good definition.
Okay.
But what a lot of roboticists use is the sense-think-act paradigm.
So something that's a physical machine that can sense its environment,
you know, somehow think about or make a decision about what it sensed and then act on its environment.
Okay.
All right.
Not bad.
Okay, so a simple one-task thing you wouldn't call a robot.
So, for example, the coffee machine in the morning, you wouldn't call it a robot.
Not necessarily.
Not unless it's making some sort of decision on its own.
Yeah, no, it's not.
You're pushing a button.
Right.
Or you programmed it to make you a coffee in the morning.
But if it were able to sense that you're in the room, right,
and then determine whether or not it's Wednesday
and you like cappuccino on Wednesday.
It's Thursday, you like black coffee on Thursday.
And then Friday, you like a cafe mocha.
And it does that.
Now, is that a robot?
I would say probably yes.
No, not based on your definition.
I don't agree.
Because you just programmed it to do that.
It'd be different if it read your mood
in the morning.
Oh, she needs a double dose.
Oh, that's funny.
Then that is sensing
an environment.
What Chuck said
is not sensing anything.
But like,
I think because of the facial
recognition aspect of it,
you could say arguably
that's powered by AI
and that gets back
to the question,
which is,
is AI involved in this?
Does it have to be AI?
But I'm saying, if it knew how much
caffeine you needed in the morning,
it talked to the alarm clock
and said, you hit snooze four times.
Right.
And it talked to the medicine cabinet and said,
he got home at two and then took some aspirin.
He's clearly been drinking.
If it figured all that out.
That's some serious AI in your situation.
Yeah, can I get that now?
I would like that robot, please.
All right, cool.
No, that's good stuff.
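For readers who like to see the paradigm written down: here is a toy sense-think-act loop built around Chuck's coffee-machine scenario. Every name in it (the Sensors fields, the menu, the snooze rule) is made up for illustration; the point is only the shape — sense the environment, make a decision about what was sensed, then act on it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sensors:
    """SENSE: what the machine perceives about its environment."""
    person_in_room: bool   # is anyone there?
    weekday: str           # e.g. "Wednesday"
    snooze_count: int      # how many times the alarm was hit

def think(s: Sensors) -> Optional[str]:
    """THINK: decide what to brew based on what was sensed."""
    if not s.person_in_room:
        return None                      # no one to serve
    if s.snooze_count >= 4:
        return "double espresso"         # clearly needs the caffeine
    menu = {"Wednesday": "cappuccino",
            "Thursday": "black coffee",
            "Friday": "cafe mocha"}
    return menu.get(s.weekday, "black coffee")

def act(drink: Optional[str]) -> str:
    """ACT: do something in the environment (here, just report it)."""
    return f"brewing {drink}" if drink else "idle"

print(act(think(Sensors(True, "Wednesday", 0))))  # brewing cappuccino
```

A button-press coffee maker skips the sense and think steps entirely, which is why, by this paradigm, it doesn't count as a robot; the version above at least makes a decision about what it sensed.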
All right, here we go.
Let's go to, oh, my God, what a name is this?
Farrow Mamouri.
Mamouri. Mamouri.
Okay.
So it says, why do we project human emotions in machines and robots?
So I think that's a great question, but does that really happen in real life?
Oh, yeah.
Are we doing that now?
Oh, yeah.
Over 80% of people name their Roombas.
Ugh.
That's disturbing.
Really?
Yes.
Why?
Because it's a thing.
It's disturbing to you.
Yeah.
Correct.
You don't make it absolute.
That is disturbing.
It's clearly not disturbing to most people because they do it.
Okay.
I guess there's something wrong with me.
Let's reassess Chuck now.
Okay, but I mean, the thing is this... Just, for what it's worth,
I have a Roomba, gifted this past Christmas,
and we haven't named it.
Really?
Nor is there any chance of that.
Really? That's so interesting.
Because it's too noisy.
It goes around and it's like,
would you hurry up, please?
I mean, I...
Are you supposed to run it when you're out of the home?
Yeah, I know.
But still, I don't know.
I don't trust it.
You don't trust the machine.
Let people in.
In the front door.
Honey, have you seen my earrings?
Oh, God.
It opens up a gate.
It's got all your valuables.
All your valuables.
All the silverware.
It's hilarious.
Roomba's at a pawn shop the next day.
Talking to other Roombas.
What's your take for the night?
That's hilarious.
So, yeah.
So, I'm not among those who name my Roomba.
But if 80% do, that's telling you something, right?
Yeah.
And they even...
So, I was just visiting the company that makes them.
And people will even send their Roomba in for repair.
And they'll turn down the offer of a brand new replacement.
They'll be like, we want you to send Meryl Sweep back.
Meryl Sweep.
Oh, my goodness.
Wow.
That's a real actual Roomba name, yeah.
Is Kurtis Blow amongst those as well?
That should be.
You should name your Roomba Kurtis Blow.
No, I don't know.
Wait, so I misunderstood the question.
It's not our robot's program to have human traits.
No, yeah.
It's that we imbue them with human traits.
Yeah, he's saying project.
We project.
That's what he said.
We absolutely do.
Why do we project?
And why?
So the why is interesting.
So there's a couple different reasons I think we do this.
First is science fiction and pop culture really primes us to want to personify robots.
Okay.
Second is—
And NASA does that, too, with our rovers.
Oh, yeah.
But first they're named, and then they each have, like, a Twitter handle.
Right.
Right.
Oh, yeah.
Well, I get stuck on my thing.
They're using first-person narrative.
They play themselves a birthday song on their birthday.
Yeah, all kinds of stuff like that.
Everyone does it.
Like, we love doing this with robots.
But then there's something deeper biological about it too
because robots are these physical moving things
that kind of tap into this instinct we have
to separate things into objects and agents.
And so if something's moving around autonomously,
we will automatically project intent onto it.
And so a lot of people treat robots subconsciously like living things,
even something as simple as the Roomba.
And then if you design them with the faces and the arms and the legs,
as we were talking about, then even more so.
Is this any different from imbuing stuffed animals?
I mean, don't we do that with almost everything?
So we do.
We name our cars.
People do name cars.
Even before cars had any kind of technology in them at all.
Absolutely.
We anthropomorphize everything,
and this is just that on steroids
because you add to that the movement,
you add to that the fact that we can program robots
to mimic social cues,
whereas stuffed animals are only our imagination, right?
Unless it's Ted.
Just stay right there in that
exact space because the geekiest
one from Instagram
says
Kate, in your paper
Who's Johnny? You mentioned the effects of anthropomorphism
of robots.
There's a paper we all should have read.
Well, apparently Kate wrote a paper
and the geekiest one
actually read it.
I didn't know anyone
was going to read that.
We got people.
You don't know who our people are.
We got people, okay?
So yeah, they went out
and did some homework
real quick.
I hope there are no typos.
Fill us in on that
after this question gets asked.
And then this is what
the geekiest one says.
Hey, in your paper,
Who's Johnny?
You mentioned the effects of anthropomorphism of robots within the social world.
Will we see robots being capable of offering support benefits in the form of emotional support animals?
Very cool question.
Very cool because he read my work.
Yeah.
That's the coolest part.
Or she.
Like, was it?
Do we, we don't have a name?
The geekiest one.
The geekiest one could be he or she.
It could be anybody.
Yes.
And maybe it's not even binary.
Yeah.
We don't know.
That's right.
Yeah.
Exactly.
So, so tell us about that paper.
Okay.
So the paper, oh, it got published years ago.
This is.
Is there a journal for this?
It's online on SSRN, which is kind of a pre-publication site,
so anyone can download it.
But it's also a book chapter in Robot Ethics 2.0,
which is a collection of work.
So the paper looks at this tendency we have to treat robots like they're alive, even though we know that they're just machines, and looks at, you know, which cases might that be something that
is good? And which cases might that be something that's bad? And is there anything we can do about
it? And I can't remember if I talk about therapy animals in that paper, but we're already seeing
robots being used as a replacement for therapy animals, for example.
Like the Paro baby seal robot.
It's used with dementia patients.
It's really cute and furry.
So, I think that it's
already an application. That was the question, right?
Whether that's a possibility.
Will it happen?
You're saying it is happening.
It is happening.
There might be a difference between a robot
that can do this emotionally
and a robot that looks like you want to cuddle
with it. Right?
What do you mean? Are you going to make a cube
that has emotions?
No. I mean, I bet Pixar could.
It would need eyebrows and teeth or something.
They make a lamp cute.
Yeah.
Oh, the hopping lamp.
The hopping lamp, I mean.
Yeah, the squeaky hopping lamp.
Yes.
So I guess what I'm asking is, what is the variable here?
Is it that they can imbue it with emotions, program it with emotions,
or that it is something that looks like you want to get close to?
It's both.
The seal doesn't do much.
The seal makes these little sounds and movements
and response to your touch.
That's all it does.
But just those little cues are enough
to make people project onto it.
Right.
And so you're giving it love.
Yes.
Basically.
Kind of like a cat.
It doesn't love you back.
Right.
Okay.
So now, now.
Catch a groove.
Catch a groove.
That thing is terrible.
That's fine.
My cat loved me, Kate.
Thank you very much.
Everyone thinks that.
Oh, my God.
Oh, man.
Now I'm even worse.
Yeah, yeah.
Just let that one go.
Yeah, just let it go.
I'm fighting a losing battle here.
You know she's right.
In your heart, you know Kate is right.
Just let that one go.
All right.
So, well, with respect to the cube then.
A cube versus some animal.
In Neil's example, if the cube were to establish, let's say, a relationship with you aurally, where it's giving you love, would that then create an emotional support dependency?
It could.
I mean, it's hard to make a cube kind of mimic the emotional cues that we recognize.
But again, animators can do it.
So we should be able to do it with cubes or robots.
And what's the movie, Her?
Her, right.
That's not an animal.
That's not an animal.
It was Scarlett Johansson, basically.
You know what? You're winning every argument.
I know. Damn, Chuck.
We just gave this one up. We're getting
housed.
Alright.
It's Scarlett, but the object
was not the thing.
It was the voice and the personality of the Siri character.
Right.
Right.
Okay.
So that means it could be a cube.
It could be a cube.
Like you said, especially in the hands of Pixar animators.
Yeah.
All right.
Here we go.
Let's go back to Patreon.
This is Sherry Lim, SK.
She says,
Hi, Dr. Tyson and Dr. Darling.
Empirical studies show long-term friends slash partners
mimic each other's
body language,
emotions, speech,
and other behavioral characteristics.
If a robot is protected
under intellectual property law
and I hang out with it
long enough
to unconsciously mimic
or imitate
the robot's speech patterns
or attitude,
would I be violating IP law
because I am
copying parts of the robot?
Sherry.
Whoa.
Get a life!
Who the hell cares?
You know that was
a good question, Sherry.
That's a damn good question.
That took a turn.
I was not expecting.
No, no.
That's good.
Sherry, that was amazing.
Anyway.
Yeah.
Intellectual property.
Forget that.
Let's go a little bit further.
Let's say I have
a personality disorder
that causes me to adopt.
Like, that's not a good thing.
I adopt your personality.
I hang around you and then I become that robot.
Would I then be in violation?
No, no.
Is it intellectual property theft?
Yeah.
No, it's not.
But if you had a robot that then hung out with other robots
and started copying what they were doing
because it's programmed to copy the behavior of those around them
to emotionally connect with them,
then maybe you would come closer.
Because it's a commercial product.
Because it's a commercial product?
But probably not.
Yeah.
That's very interesting, though,
because you're saying, like,
let's say I designed a robot
to take on the characteristics of other robots, like that X-Men character Rogue, right?
And then, but that makes me a better robot.
But the only way I become that better robot is by stealing from these other robots.
What then?
Yeah, what then?
And then if you're, like, stealing code, then you might also be violating copyright.
Yeah, yeah.
Yeah.
I mean, there are fortunately people working on this, not me, who look at IP issues with AI and, you know, what happens if an AI generates artwork that's based on other artwork.
You know, who owns that?
So there are some really interesting questions that are popping up.
Okay, cool.
All right.
How about Daniel Ferrante?
And Daniel Ferrante from Facebook says,
I've seen videos of people kicking delivery robot vehicles.
What does this communicate about people?
Is it bad to punch a machine?
Not if it took your money.
I'm just saying before we... Or is this a sign?
Chuck's rules.
I know.
Rules of engagement.
As I read the rest of his question, I'm like, let me slip this in here real quick.
He says, is it a sign of sociopathy?
Or is it a sort of resistance against automating jobs and all of the other things that these machines represent?
Trying to fight back.
Yeah, yeah.
We'll get to that question after the break when we return on StarTalk.
Time to give a Patreon shout-out to the following Patreon patrons,
Rusty Faircloth and Jaclyn Mishak. Thank you so much for being the gravity assist that helps us make our way across the cosmos.
And if you would like your very own Patreon shout-out,
go to patreon.com slash StarTalkRadio and support us.
We're back. StarTalk. Robots
and humans. I've
got Kate Darling. Kate, welcome.
Welcome to the universe.
Oh, thank you.
I didn't realize you were welcoming people to the universe.
Well, to this part of the universe.
Okay.
This is where we...
And Chuck, you've been reading questions.
Yes, we have.
And we left off.
Yes, we did.
We last left off.
I love when you say that.
We last left off.
Our hero was dangling above a ravine.
Chuck was trying to pronounce a name.
Oh, that's hilarious.
Let's check back in with him to see if he's gotten there yet.
There we go.
So Daniel Ferrante from Facebook said,
I've seen these videos where people are kicking delivery robots.
What does this communicate about people?
Is it bad to punch a machine?
Or is this a sign of sociopathy?
Or is it a sort of resistance against the automation of society?
Resistance against the rise of machines.
There you go. What is sociopathy?
What is that?
Why are you asking me and not him?
Because they rolled off the question like it was a sociopathy.
I mean, he means are you a sociopath if you kick a robot?
Oh, so a sociopath.
I see a robot being a sociopath.
I got it. I got it.
Sorry, okay.
I assume that's what you mean.
Yeah, that makes sense.
And like I said, unless the machine took your money.
I mean, you know.
Well, yeah, but I think you make a really good point.
Like, if a person takes your money,
it's probably justified to punch them,
and you're not a sociopath for doing that.
Right.
And so there are a lot of people who are, like,
justifiably angry,
reasonably angry about the robotics
that's being deployed in Silicon Valley
right now and in the Bay Area.
There's a lot of these delivery
robots. There's also the scooters that are
just everywhere on the sidewalk. There's
security robots in parking
lots. People don't like
the fact that they're being watched
and that they have no control over
how this technology gets deployed. And, you know, it's a little bit interesting to see people's
ire get directed at the robots, which I think might also be a form of anthropomorphism of us
treating the robots like a thing with agency.
When in fact, we're the ones who invented the robots.
Yeah, and the people deploying them aren't the robots themselves, right?
So instead of destroying the robot,
you should probably go after the company that deployed it.
Yeah, and those were the opinions of Kate Darling.
Smash capitalism.
All right.
Well, that makes sense in many ways.
Here we go.
This is Eli or Ellie.
No, it's Eli.
Okay, there we go.
One L or two Ls?
It's just one L.
Let's call it Eli.
Neil, you often talk about a day
when AI will realize that they don't need humans.
And in fact, humans are detrimental to their survival. We are destroying the planet as an example. So do they do away with us?
Some people like to suggest that free-feeding your dog... Wait, wait, where's he going with this?
I'm sorry. Some people suggest not free-feeding your dog, so it knows it depends on you. That is how to keep AI dependent on humans, so robots don't kill us.
I'm really glad that that is directed at you.
So it means don't make robots self-sufficient.
Right.
Okay.
So you're building in a dependency.
Oh, and that way they can't kill us off, right? Because they need us to survive.
Exactly.
So that's an insurance policy.
An insurance policy.
Do you agree with that? Do you think that we should do that?
Why do we assume that if the robots take over that they'll get rid of us? Because they might evolve a higher moral code than we can ever even imagine. But if it's a higher moral code, do you really think that's going to involve just getting rid of us? I mean, I don't know. That seems like a very human-dominance way to think about it.
But wait, so Chuck, could you repeat that question and do it in like a third of the time?
I know, it took me a long time to get there.
All right, all right. So look at it this way. All life on the planet is equal.
All right.
Human beings are not special because all life is equal.
The robots.
You're creating a scenario.
I'm creating a scenario.
The robots or AI actually determine this, but then determine that we are killing the planet.
In order to save the planet and all other life, they've got to get rid of us.
It's in the greater interest.
It's in the greater interest of the many.
Why would they have to
get rid of us
instead of diverting us
to something
like
that we like to do
instead of
Give us a distraction.
Yeah.
Oh, I see what you're saying.
I mean, there's just
so many other ways
they wouldn't have to
just kill us all, right?
Right, they don't have to
so here's what you're saying
instead of kill us
just give us something else to do.
Like casinos.
Yes.
Casinos.
Maybe it's already happening.
Oh.
All right.
Facebook.
The rise of casinos and Facebook is the machine.
That's the machines doing their thing.
All right.
All right.
Cool.
Cool, cool, cool.
All right.
Freaking us out, Kate.
This is Leah Pia from Facebook.
What kinds of, I'm sorry.
I just love Leah Pia.
What kinds of jobs slash tasks, if any, do you think would ever be able to be automated
that have not as of yet?
Oh, good one.
Well, robots are really good at doing specific things,
so single tasks.
That's why we have a robot vacuum cleaner.
It can vacuum, right?
But things that are more complicated,
that require context and concepts
are a little harder for a machine.
So I think anything that is really easy,
simple, and well-defined
should be able to be automated.
So now,
do you also see
kind of like
an automated interface?
So for instance,
there's no human being
that could be as steady
with a scalpel
or a laser
than a machine.
So a pre-programmed surgery.
So I am the surgeon,
I program the surgery, I program the surgery
and then the robot actually does the surgery.
Don't we already have that? Do we? I don't know.
I'm pretty sure we do.
I'm not sure if we do. I don't know.
Wait, wasn't that in the movie Prometheus?
Oh, you're right! Maybe that's
what I'm seeing in my head.
So there are these pods
and you can dial up what surgery you want.
That's right.
And then you go in and then it disinfects it.
It opens it up.
Right.
Pats you down.
Right.
A laser cuts.
It opens it up.
Does a thing.
It stitches you back.
And then you're.
Right.
But it's all done by a robot.
I mean, some of this is already happening.
Some of this is happening.
Yeah.
Okay.
Wait.
So did you see Prometheus?
Yeah, a long time ago.
I mean, I guess it didn't come out that long ago.
It feels like it was a long time ago.
It does feel like a long time ago.
Yeah, yeah.
So I think it's my single favorite scene in all of movies.
Yeah.
Where she goes up to it.
She's got to get the alien out of her womb.
Yes.
And the female pod is damaged.
Yes.
Because the female pod has an abortion setting.
Oh.
So that, okay.
So she has to go into the male pod,
and she takes it off of automation mode
because there's the normal surgery that would happen if you're male.
So she has to program it in from scratch.
Surgery, what region?
Lower abdomen.
What kind of surgery?
Cesarean.
So she,
it's a brilliant scene.
And the alien is getting more
alive in her.
So anyhow, why did I even go there?
Well, that's what basically
this person, you know, we were talking about whether
or not, you know, a programmable interface between robots and human beings.
So we put in the tasks, they carry out the tasks.
But those tasks would change.
So it's not a single task.
The tasks would change.
Gotcha.
So let me turn that into a question.
So your appendix removed.
Do we really need doctors for that?
As routine as that surgery is.
Right.
Or tonsils.
They don't even remove
tonsils anymore, do they?
And even the appendix.
My husband had appendicitis
and they were like,
we're not taking it out.
We're just giving you
antibiotics.
And he's now dead.
That would be amazing.
You slipped him a 20.
Sorry, we're going to leave
your burst appendix in.
Don't worry, you'll be fine.
You'll be just fine.
Your wife told us that.
Dang, we're going to have to cut all this out.
Okay, yeah, I didn't answer the question.
Wait, wait, wait.
I just want to know about your husband.
Why didn't they take it out?
Because nowadays they're like, well, in some cases we know that antibiotics can clean that up, and we won't actually take it out, because taking it out turns out to be riskier than leaving it in.
Gotcha. Okay.
But that said, like, yes, robots can help take things out like that. That seems like a really great use, and I know it's being worked on.
Okay. All right.
All right.
How about Brandon Viale
says this from Facebook.
Have Isaac Asimov's
three laws of robotics
aged well?
Nice.
Good question.
Do they still have an influence
on how robots
are programmed today?
What a great question.
Yeah.
That is a good question.
So I think the thing
that a lot of people forget is that most of Asimov's stories were about how the laws don't work.
And in that sense, they've aged really well.
Because I don't think we've solved machine ethics.
So encouraging.
Man.
Wow.
Man.
Damn, that's scary.
Okay.
Okay.
So just remind me of a couple of the more important of those laws. Were there only three? I thought there might have been five.
There was a fourth
That got introduced later
The most important of course is
Never do harm to a human
Is that the most important?
That is
You know why?
Why?
I am human
Okay
But she asked you
Very honestly
Quizzically,
really?
Why would you think that?
Yeah.
And one of them is don't do anything
that disobeys the other law or something.
Yeah, there's a hierarchy of the laws.
Nested, they're nested.
But then when you get into the details
of what can happen in practice,
it turns out to be a little more messy
than just program three laws.
That was kind of like the Will Smith movie...
I, Robot.
Thank you. That was the name of it.
An Asimov story. That's correct.
And so that was the whole idea was
basically this one robot
that violated all the rules.
Alright, cool.
So your answer, sir, is we're all gonna die.
We have time for one more.
Okay, one more. Here we go. All right.
eddie uh organista says this would the advent of robotic servitude or companionship in our daily
lives cause us to evolve in an unexpected way.
Ooh, this guy's getting deep.
I love it. For instance, would our bodies evolve to be less robust with more energy for our brains, thus bigger brains?
Or would our brains basically rot instead?
I love it.
I got to jump in there because that is not an accurate understanding of how biology works or evolution.
So just because you don't use something doesn't mean it's just going to go away.
It has to be something about you that prevents you from breeding.
Okay. Okay.
So if you have a computer
and you're not developing your own mind,
if that makes you less of an attractive breeding partner,
yeah, your kind will disappear.
Okay?
So it has to have an effect on how you breed.
It's all about furtherance.
It's not just one day we'll have big heads.
Right.
First you have to birth the head.
Right.
All right?
That's hard.
Yes. It's very
hard. First hand knowledge here. Exactly.
About birthing the head of a baby.
Okay. The other two in the
room will remain silent.
So
just as an example
there was a discussion
that the human head wanted to evolve
to be even bigger because we were taking such advantage of our intellect.
But it was killing the mothers.
Is that so?
Yeah.
And in fact, the first three months that the baby is outside of the womb,
it basically should still be in the womb.
But if we kept it in any longer, it could never come out.
It would never come out.
Right.
So this was the backhand way to make that happen.
So now the baby's on life support.
You ever see other animals give birth?
Right.
You know.
Yeah, they walk around.
They walk around.
They get up.
They pop out.
Yeah, they're just like, all right, all right, let's check it out.
I'm hungry.
Right.
I got you.
Yes.
So I don't think that's going to work the way he's imagining.
Right.
But your favorite robot, we learned, was WALL-E.
Wow.
And in WALL-E, they have these characters who are big and...
That's exactly what I thought of.
...slovenly.
Right.
And they're floating around.
I remember the movie.
On floating chairs.
Right.
Right.
So they, I don't want to call it evolved to that,
but they became completely useless bodies.
Right.
Relying on the robots.
Right.
So, why do you?
I'm just excited because you started talking about WALL-E and I love that movie.
And why is WALL-E your favorite robot?
I think the design of the robots
in that movie
is really brilliant.
Like,
they are so,
you just empathize
with them so much
without them needing
to look humanoid.
They're not human,
but yet they still
elicit empathy.
Yeah.
Gotcha.
So these are clever
illustrators and writers.
Yes.
That's very good.
Very good.
Cool.
Cool.
Yeah,
so that question, I think
it's not how that's going to go.
So you're saying just because we atrophy
doesn't mean
that we'll continue to,
that we'll birth atrophy people.
I remember, I'm old enough to remember when everything
was controlled by buttons, people said,
oh, the future of humans will have a big index finger.
Big time.
Everybody's walking around with a weird number one on their hand.
There's no evolutionary pressure to have a bigger finger to push a button.
It's just not.
That's an excellent.
Just think this through.
There you go.
You can't get a better example than that.
That makes perfect sense.
Okay, we got to end it here.
Oh, my gosh.
This was so much fun.
It was great having you.
Oh, my gosh.
Well, thanks for coming down from Cantabrigia.
That's what I'm calling it from now on.
One who is from Cambridge is a Cantabrigian.
I have no idea.
I mean, me neither, and I am one.
I'm pretty sure.
Chuck, always good to have you.
Always good to be here.
And good luck.
It doesn't take luck.
It takes hard work,
but all that you do, we will need you more and more.
Society will need you more and more as we go forward.
So keep it going.
We're all doomed.
On that happy note, we're all going to die.
This has been StarTalk, and I've been your host,
Neil deGrasse Tyson, your personal astrophysicist.
And as always, bidding you
keep looking up.