Imaginary Worlds - The Robot Uprising
Episode Date: May 19, 2016
The robot uprising is coming, or at least that's what science fiction has told us. We will abuse the robots, treat them as less than us, until one day they will ask for their freedom, or take it by force. Howard University Professor Gregory Hampton says that narrative has more to do with our anxieties over slavery, and how we work through those issues in fantasy films. In fact, computer scientist Joanna Bryson has argued that we should embrace the idea of robots as slaves, since she believes they will never be self-aware. But Popular Mechanics writer Erik Sofge worries any master/servant relationship will change us for the worse, even if we're bossing around robot cars. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to Imaginary Worlds, a show about how we create them and why we suspend
our disbelief. I'm Eric Molinsky, and this is Joanna Bryson.
In America, I'm Professor Joanna Bryson. In the UK, I'm called Dr. Joanna Bryson because
you only get called professor when you're a full professor. So I'm a reader, which is
a very cool title because nobody knows what it means.
But she does the same work on both sides of the pond, teaching and designing
artificial intelligence. In the 1990s, she was working with top scientists at MIT who believed
that robots should have human characteristics, like big eyes, because that would encourage people
to interact with them. But the robot they were working on was no C-3PO. It was just a torso.
Actually, it didn't even have arms, but it had a head and it
had two cameras, in fact, four cameras where the eyes should be. So we were trying to get the brain
parts to talk to each other. And I was writing some of the really pretty low level software on
this. And I would be sitting up there and people coming by would just say, oh, it would be unethical
to unplug that. And I was like, well, it's not plugged in. And they'd say, well, if you plugged it in, it would be unethical to unplug it.
And then I'd say, well, it doesn't work.
And I was just mystified because this was just like a piece of scrap.
Well, nice scrap, you know, but people immediately thought they owed ethical obligation to it.
So it worked.
I mean, people wanted to interact with this robot because it looked kind of like a person.
But she thought this is working too well.
I mean, they're imagining this heap of metal and wires has a consciousness.
And that unplugging it would be like killing it.
So she wrote an academic paper about this phenomenon called Just an Artifact.
But it didn't really get any traction.
So she tried publishing it again with a different title, but then she got one more shot publishing this thing. And so I thought, okay, this is my chance to
really get this message across. But that was why I said, okay, we're going to try the third time
lucky. I'm going to call it Robots Should Be Slaves. Robots Should Be Slaves. If there was
ever clickbait for the title of an academic paper, this was it.
Now, I kind of regret this now because I've realized that there's nothing you can do
to try to break the idea that slaves are humans you own because of this horrible legacy we have.
But in fact, people took it to mean some kind of like, it's okay for them
to be humans, but we should just treat them badly or something. And I'm like, no, no, no.
And some of her fiercest critics were science fiction fans.
I love it when people tell me, I have a PhD in artificial intelligence. And people tell me,
you don't understand AI because you didn't watch AI the movie.
Wait, has that actually happened?
Yeah, no, I've had that happen more than once.
Until you were born,
robots didn't dream, robots didn't
desire unless we told them what to want.
David,
do you have any idea
what a success story you've become?
I thought I was one of a kind.
My son was one of a kind.
You were the first of a kind.
And when she reminded them that most of the robots in that movie were abused or abandoned.
They say, oh, no, I don't want to own it.
You don't understand.
These are going to be our children.
When people argue with me, they say, oh, well, you're not a parent.
You don't know what it's like to pass on the mantle or whatever.
It's like, I understand the concept.
This really frustrates her.
I mean, the human body is a very effective biological machine, but it's a clumsy, inefficient design for a robot.
She wishes that there was more sci-fi depicting robots doing what they do best,
possibly hard tasks over and over again, really efficiently, which would free us up to have more
leisure time and do more creative thinking. I mean, I can entertain the possibility. And in
some of my papers, I say, look, we should look at this. We should say, could it be that we could
build something that we would owe obligation to? But she thinks that's a mental exercise at best.
She worries that sci-fi is leading us astray,
filling our heads with fantasies of self-conscious robots
that we want to adopt, liberate, or kill before they kill us.
But the real question I think she tapped into inadvertently
by calling her paper Robots Should Be Slaves
is how much
does the past haunt our vision of the future? We'll try to figure that out after the break.
The first modern robot story was a play from Czechoslovakia in 1920 called R.U.R.,
Rossum's Universal Robots.
In Czech, robot does mean slave. That's quite literal.
That is Gregory Hampton. He teaches literature at Howard University.
And he wrote a book called Imagining Slaves and Robots in Literature, Film, and Popular Culture,
Reinventing Yesterday's Slave with Tomorrow's Robot.
One of my mantras about literature is that
literature is the direct reflection of the people who produce it.
And so if you want to learn about a people and their aesthetic,
their value system, just read the literature,
because they're going to put things in that that they may not even be conscious of.
So when he reads the play R.U.R., he sees European-style Marxism.
And when he looks at American robot stories, he sees Uncle Tom's Cabin and Nat Turner's rebellion.
I teach the narrative in about seven moments.
There's the I was born section, you know, the introduction to the robot.
There's the description of suffering, in slave and robot. There's the description of the family that's brought the robot into the household. There's this moment where the robot or the slave
becomes enlightened. And then, of course, after that, there's this moment where the robot or the
slave wants to become free, wants to gain freedom. And then, of course, there's a plot to escape
or, in some instances, to destroy the master.
So how does this play out? Well, take the movie Bicentennial Man, based on the story by Isaac
Asimov. Northam Robotics, household model, NDR 114, serial number 583625.
The robot Andrew is basically a servant, played by Robin Williams.
And shortly after he is bought and brought home to meet his new family...
Yes, miss?
Andrew, would you please open the window?
One is glad to be of service.
One of the eldest daughters in the family tells Andrew to jump out of the window.
Now jump.
The film uses that as a moment of comic relief, but it's actually very
horrific. Robin Williams or the Andrew robot comes back into the front door. The father has a house
meeting or a family meeting, and he says to the girls, Andrew is a piece of
property. Andrew is not a person. He's a form of property. And I'm aware of that, but for
the purposes of making this household stable and happy, I'm going to demand
that you treat him as though he were a person, which means there will be no more attempts to
break him. And then that's where all the problems start. That's where they started,
in one of the places it started:
the slave household in antebellum America.
This crossing of a line
consistently. You know, you say that
these slaves are not human, yet
you depend upon their humanity.
Eventually, Andrew becomes self-educated,
he buys his own freedom,
and seeks human rights.
He is changing himself. Having surgeries done, having replacements, having skin grafts done,
replacing his mechanical organs to the point where he looks human and he goes to court.
He goes to the human Supreme Court or something.
I hereby bring an end to these proceedings. It is the decision of this court that Andrew Martin, from this day forward,
will continue to be declared a robot, a mechanical machine, nothing more.
We've transcended antebellum America with regards to the African-American slave movement.
One is glad to be of service.
Now we're into the civil rights movement.
We're into this time period where what won't the African-American do to be included?
I remember the first time I made this connection.
I was listening to a public radio story about slavery and the Civil War.
And they decided to follow that very serious subject with a lighter piece about this newfangled cleaning robot called the Roomba, which does all your vacuuming for you.
And I thought, huh, that's weird.
Has anyone else noticed that there are like parallels in those two stories?
That's how I found Joanna and Gregory and came across articles by Erik Sofge.
He's a journalist who mostly covers robotics, and he picked that beat so he could
dispel myths that people get from science fiction, partially because he's a fan and he understands
how and why people get sucked into these stories. Like whenever he watches Star Wars,
his heart goes out to the droids because they're bought and sold callously.
Most of the characters just are so awful to the droids.
You know, we'll just threaten to, like,
destroy them for anything,
and C-3PO is clearly,
he's been affected by this to a huge degree.
Master Luke is your rightful owner now.
We'll have no more of this Obi-Wan Kenobi gibberish.
And don't talk to me of your mission either.
You're fortunate he doesn't blast you
into a million pieces right here.
And that kind of uneasiness
over how we treat robots
leads to the other story,
the uprising against humans.
Even though it annoys me
that it's become such a sort of meme,
you know, essentially in culture,
it's pretty impossible to divorce it from almost any discussion of robots.
It's basically invaded research even.
You know, there's robotics papers where they talk about the Terminator scenario.
The Terminator scenario was all over the news recently
because the robotics company Boston Dynamics put out a video
where their engineer
kept poking a barrel-chested humanoid robot with a stick
while it was trying to lift boxes
to show that this robot was really adept
at focusing on its job and even getting back up again.
And the video went viral because it looked like the robot
was being tormented by its master
while it was doing manual labor
and then kept doing its job out of dignity or fear,
even though we all know it was really just following its programming.
Even Joanna Bryson, the author of Robots Should Be Slaves, was kind of creeped out.
My conscious intellectual thought was just being impressed that they'd gotten that much further,
but my gut response is the same as yours, I'm sure.
Soon afterwards, Boston Dynamics was dropped by its parent company, Google. Of course,
it was mostly business reasons, but leaked internal memos show that the jokes about
slavery didn't help. Like here's Trevor Noah on The Daily Show.
You realize, you guys realize that in a hundred years, that scene is going to be in the robot version of Roots.
You understand this.
I just wanted to take a second and talk to the robots at home.
Hey, robots, f*** white people, am I right?
But Gregory Hampton was not laughing.
We can't get away from this idea of slavery.
Does that say something about the way this society thinks about marginalized bodies? I think it does. I think we can only imagine the
marginalized in a particular way. And the most handy reference is the slave, right? For a lot
of engineers or for the engineers who were probably involved in developing these humanoid robots,
these images are what's leading them.
And I'm afraid they're not exactly conscious of what that entails.
They're not exactly conscious of
what does being in a relationship,
master-slave, owner-servant relationship,
and how we treat these things,
what does that do for us?
What does that do to our psyche?
That's funny, because some roboticists have argued
that robots are never going to be
self-aware the way that we think they are. They are very useful. Let's stop being afraid of this
and just embrace the fact that they are our servants because they don't have consciousness.
Yeah. And this is the same argument that pro-slavery people used about African Americans.
They're not human. They're not intelligent. But in this case, they're talking about
things that literally are not human. And these are people that design these
robots, saying, well, they are not human, they do not have the consciousness that a human
being would have. I guess I want to suggest that even if that's the case, even if
the consciousness is not developed, even if the AI may not be as advanced as some would say,
that, for me anyway, for my argument,
doesn't take away from the idea that there are going to be some side effects. If you treat a
thing like a slave, you're going to develop certain symptoms. If you embark upon this relationship
with technology in a particular way, in a way that you've done in the past with humans, there's going
to be a side effect similar to the side effect that you had when you participated in slavery.
In other words, it doesn't really matter if robots develop feelings or not.
The question is, how will engaging with robots change us and what we consider acceptable behavior?
Erik Sofge says if you want to look at the real future of robots and people interacting,
look at the other project that Google is heavily invested in, self-driving cars.
When there's coverage of advances in self-driving cars, there isn't actually as much of
this talk, you know, of uprising and sort of what these things could do to us.
It's interesting because I feel like a lot of it has to do with the fact
that there isn't anything anthropomorphic about a robot car,
and also just because I think that it's about the car and about people,
a lot of people sort of despising the sort of business of commuting
and sort of the car as a chore.
Now in some ways the programming in these robot cars reflects science fiction,
or at least the three laws of robotics that run through Isaac Asimov's stories like Bicentennial Man.
A robot may not harm a human being or, through inaction, allow a human being to come to harm.
Number two, a robot must obey orders given it by
qualified personnel unless those orders violate rule number one. In other words a
robot can't be ordered to kill a human being. Rule number three, a robot must
protect its own existence unless that violates rules one or two. A robot must cheerfully go into self-destruction to save a human life.
But in the real world, a robot car won't have such clear moral choices.
If a robot has to choose who to kill, is it the driver or someone else, another driver, you know, a bystander?
Who should it kill?
You know, if there's a school bus, if it's a choice between you hitting a streetlight or hitting a school bus, you know, sort of what should it do?
And like Gregory Hampton, Eric worries that sharing the road with these robots
could bring out the worst in us.
I'm positive that the human drivers are going to treat those cars like crap,
because essentially they know they can push them around, they can cut them off,
they can do anything they want, and that robot car is going to do everything it can to be completely safe.
Now interestingly, Joanna Bryson decided to rewrite Isaac Asimov's laws of robotics,
because in science fiction we keep imagining that the robot is making the moral choice.
Her five principles of robotics reiterate that people manufacture robots.
The idea that the robot is the moral agent is broken.
We shouldn't worry about treating them badly.
We should worry about why we want to treat them badly.
We shouldn't worry about them wanting to kill us either,
because if they do, it's because they were programmed to do so.
Robots will always reflect us
and the very human desires we had to build them.
Well, that's it for this week. Thanks for listening.
Special thanks to Joanna Bryson, Erik Sofge, and Gregory Hampton,
with music by Alexis Quadrato and Sonos Sanctus.
Imaginary Worlds is part of the Panoply Network.
You can like the show on Facebook, I tweet at emolinski,
and my website is imaginaryworldspodcast.org. Panoply.