Making Sense with Sam Harris - #66 — Living with Robots
Episode Date: March 1, 2017
Sam Harris speaks with Kate Darling about the ethical concerns surrounding our increasing use of robots and other autonomous systems.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.

For today's podcast, I bring you Kate Darling. What a great name. Kate is a researcher at the
MIT Media Lab and a fellow at the Harvard Berkman Center. And she focuses on the way technology
is influencing society, specifically robot technology. But her background is in law and
in the social sciences. And she's one of the few people paying attention to this. And this is,
along with AI, going to become increasingly interesting to us as we integrate more and
more autonomous systems into our lives. I really enjoyed speaking with Kate. We get into some edgy territory. As I think
I said at some point, the phrase "child-sized sex robots" was not one that I was ever planning to
say on the podcast, much less consider its implications. But we live in a strange world,
and it appears to be getting stranger. So to help us all figure that out,
I now bring you Kate Darling.
I am here with Kate Darling.
Kate, thanks for coming on the podcast.
I'm delighted to be here.
It's great to be able to do this.
I'm continually amazed that we can do this,
given the technology.
But I first learned of you, I think, in a New Yorker article on robot ethics.
And this is your area of focus and expertise.
And this is an area that almost doesn't exist.
You're one of the few people focusing on this.
So perhaps just take a moment to say how you got into this.
Yeah, robot ethics is kind of a new field, and it sounds really science fiction-y and strange. But I have a legal and social
sciences background. And at some point, about five and a half years ago, I started working at
the Media Lab at MIT, where there's a bunch of roboticists. And I made friends with them
because I love robots. And I've always loved robots. So we started talking and we realized
that I was coming at the technology with, you know, some questions that they hadn't quite
encountered before. And we realized that together there were some questions worth exploring: that when you bring people who really understand how the technology works together with people who come at this from a policy or social sciences or societal mindset, that can be interesting.
Tell people what the Media Lab is.
It seems
strangely named, but everything that comes out of it is incredibly cool and super diverse.
What's going on over there at MIT?
Yeah, it's a little hard to explain. The Media Lab is, to me, this building where they just stick a bunch of people from all sorts of different fields,
usually interdisciplinary, or as they call it,
anti-disciplinary, and they give them a ton of money and then cool stuff happens.
That's basically it. So there's everything from like economists to roboticists to people who are
curing blindness in mice to artists and designers. It's really a mishmash of all sorts of very interesting people working in
fields that don't really fit into the traditional categories of academia that we have right now.
And so now your main interest with robots is in how our relating to them could, and may in fact inevitably, change the way we relate to other
human beings. Yeah, absolutely. I'm totally fascinated by the way that we treat robots
like they're alive, even though we know that they're not, and the implications that that
might have for our behavior. I must say, I'm kind of late to acquire this interest. Obviously,
I've seen robots in science fiction for as long as I've
seen science fiction, but it wasn't until watching Westworld, literally a couple of months ago,
that I realized that the coming changes in our society based on whatever robots we develop
are going to be far more interesting and ethically pressing than I realized. And this has actually nothing to do with what I thought was the
central question, which is, will these robots be conscious? That is obviously a hugely important
question and a lot turns ethically on whether we build robot slaves that are conscious and can
suffer. But even short of that, we have some really interesting things that will happen once we build robots that escape what's now called the uncanny valley.
I'll probably have you talk about what the uncanny valley is.
And I think even based on some of your work, you don't even have to get all the way out of the uncanny valley or even into it for there to be some ethical issues around how we treat robots, which we have no reason to believe
are conscious. In fact, you know, we have every reason to believe that they're not conscious. So
perhaps before we get to the edgy considerations of Westworld, maybe you can say a little bit about
the fact that your work shows that people have their ethics pushed around even by relating to
robots that are just these bubbly cartoon
characters that nobody thinks are alive or conscious in any sense.
Yeah, we are so good at anthropomorphizing things. And it's not restricted to robots. I mean,
we've always had kind of a tendency to name our cars and, you know, become emotionally attached
to our stuffed animals and kind of imagine that
they're these social beings rather than just objects. But robots are super interesting
because they combine physicality and movement in a way that we will automatically project intent
onto. So I think it's so interesting to see people treat even the simplest robots
like they're alive and like they have agency, even if it's totally clear to them that it's
just a machine that they're looking at.
So, you know, long before you get to any sort of complex humanoid Westworld type robot,
people are naming their Roombas, people feel bad for the Roomba when it gets stuck somewhere,
just because it's kind of moving around on its own in a way that we project onto. And I think it goes further than just
being primed by science fiction and pop culture to want to personify robots. Obviously,
we've all seen a lot of sci-fi and Star Wars, and we probably have this inclination to name
robots and personify them because of that. But I think
that there's also this biological piece to it that's even deeper and really
fascinating to me. So one of the things that we've noticed is that people will have empathy for
robots, or at least some of our work indicates that people will empathize with robots and be really uncomfortable when they're asked to
destroy a robot or do something, you know, mean to it, which is fascinating.
Does this pose any ethical concern? Because obviously it's kind of an artificial situation to
hand people a robot that is cute and then tell them to mistreat it. But there are robots being used in therapy; isn't there a baby seal robot that's being given to people with Alzheimer's or autism? Does contact with these surrogates for affection pose any ethical concerns? Or, in your view, if it works on any level, is it intrinsically good?
I think it depends.
I think there is something unethical about it, but probably not in the way that most
people intuitively think.
So I think intuitively, it's a little bit creepy when you first hear that we're using these baby seal robots with dementia patients, and we're giving them
the sense of nurturing this thing that isn't alive.
That seems a little bit wrong to people at first blush. But honestly, if you look at what these robots are intended to replace, which is animal therapy, it's interesting to see that they can have a similar effect. And no one complains about animal therapy for dementia patients. It's something that we often can't use because of hygienic or safety or other reasons. But we can
use robots because people will consistently treat them sort of like animals and not like devices.
And I also think that, you know, for the ethics there, it's important to look at some of the
alternatives that we're using. So with the baby seal, if we can use that as an alternative to medication for calming distressed
people, I'm really not so sure that that's really an unethical use of robots. I actually think it's
kind of awesome. Yeah. So one of the things that does concern me, though, is that this is such an engaging, or in other words manipulative, technology, and we're seeing a lot of these robots being developed for vulnerable parts of the population, like the elderly or children. A lot of kids' toys have increasing amounts of this kind of manipulative robotics in them. So I do wonder whether the companies that are making the robots might be able to use that in ways that aren't necessarily in the public interest, like get
people to buy products and services or manipulate people into revealing more personal data than they
would otherwise want to enter into a database. Things like that concern me, but those are more
people doing things to other people rather than, you know, something intrinsically wrong about treating robots like they're alive.
So has there been anything like that? Have any companies with toy robots or elder care robots done anything that seems to push the bounds of propriety there in terms of introducing messaging that you wouldn't want in that kind of situation?
Yeah, I don't know of any examples of people trying to manipulate the elderly as of now, but we do have examples from the porn industry, which has used very manipulative chatbots that try to get you to sign up for services. And this was happening decades ago, right? So we do have a history of companies trying to use technology in advertising. Or take the in-app purchases that we see on iPads, where there have been consumer protection cases because kids were buying a bunch of things, and companies have now had to implement safeties that require a parental override in order to purchase stuff. There's a history here: we know that companies serve their own interests.
And any technology that we develop that is engaging in the way that robots already are in their very primitive forms, and will increasingly be, might pose, I think, a consumer protection risk. Or you could even think of governments using robots that are increasingly entering into our homes and very intimate areas of our lives to collect more data about people and essentially spy on them.
So there's this basic fact where any system that seems to behave
autonomously doesn't have to be humanoid, doesn't even have to have a lifelike shape, it doesn't
have to draw on biology at all. As you said, it could be something like a Roomba. If it's
sufficiently autonomous, it begins to kindle our sense that we are in relationship to another, which we can find cute or menacing or
whatever we feel about it. It pushes our intuitions in the direction of: this thing is a being in its
own right. I believe you have a story about how a landmine-defusing robot that was insectile,
like spider-like, could no longer be used, or at least one person in the
military overseeing this project felt you could no longer use it because it was getting its legs
blown off. And this was thought to be disturbing, even though, again, we're talking about a robot
that isn't even close to being the sort of thing that you would think people would attribute
consciousness to. Yeah. And then, of course, with design, you can really start influencing that, right? So whether
people think it's cute or menacing or whether people treat it as a social actor, because there's
this whole spectrum of, you know, you have a simple robot like the Roomba, and then you have a social
robot that's specifically designed to mimic all of these cues that you subconsciously associate
with states of mind. So increasingly we're seeing robots being developed that specifically try to get you to treat them like a living thing, like the baby seal.
Are there more robots in our society than most of us realize? What is here now and what do you
know about that's immediately on the horizon? Well, I think what's sort of happening right now
is we've had robots for a long time, but robots have been mostly in factories and manufacturing lines and assembly lines and behind the scenes.
Now we're gradually seeing robots creep into all of these new areas.
The military, hospitals with surgical robots, transportation systems with autonomous vehicles. And we have these new household
assistants. A lot of people now have Alexa or Google Home or other systems in their homes.
And so I think we're just seeing an increase of robots coming into areas of our lives where we're
actually going to be interacting with them in all sorts of different fields and areas.
So what's the boundary between, or is there a boundary between, these different classes of robots?
I don't think there's any clear line to distinguish these robots.
Also in terms of the effect that they have on people,
you see, depending on how a factory robot is designed,
people will become emotionally attached to that as well.
That's happened.
And also, by the way, we don't even have a universal definition of what a robot is.
The robots I was picturing on an assembly line are either fixed in place, and we're just talking about arms that are constantly moving and picking things up, or they're moving on tracks, but they're not roving around with 360 degrees of freedom. I trust there are other robots that do that
in industry as well. Yeah. But one question is, you know, is the inside of a dishwasher a robot? Is that movement autonomous enough? It's basically what the factory robots are doing, but we call those robots. We don't call the dishwasher a robot.
There's just this continuum of machines with greater and greater
independence from human control and greater complexity of their routines, and there's no
clear stopping point. Let's come back to this concept of the uncanny valley, which I've spoken
about on the podcast before. What is the uncanny valley, and what are the prospects that we will get out of it anytime soon?
Yeah, the uncanny valley is a somewhat controversial concept: the idea that people will like a thing more the more lifelike it gets, but if you get too close to something that looks like a human (for the uncanny valley, it's specifically humanoid) without quite matching it, it suddenly becomes really creepy and the likability drops. It's like zombies; something that's human but not quite human really creeps us out. And it doesn't go back up again until you can absolutely perfectly mimic a human. I like to think about it less in terms of the uncanny valley and more in terms of expectation management, I guess. So I think that if we see something that looks human,
we expect it to act like a human. And if it's not quite up to that standard, I think it disappoints
what we were expecting from it. And that's why we don't like it. And that's a principle that I see
in robot design a lot. So a lot of the really, I think, compelling social robots that we develop nowadays
are not designed to look like something that you're intimately familiar with. Like I have
this robot cat at home that Hasbro makes, and it's the creepiest thing. Because it's clearly
not a real cat, even though it tries to look like one. And so it's very unlovable in a way. But I also have this baby
dinosaur robot that is much more compelling because I've never actually interacted with
a two-week-old Camarasaurus before. So it's much easier to suspend my disbelief and actually imagine that this is how a dinosaur would behave. So yeah, it's interesting to think about the whole Westworld concept:
before we could even get there, we would really need to have robots that are so similar to humans
that we wouldn't really be able to tell the difference. What is the state of the art in
terms of humanoid robots at this point? I mean, I've never actually been in the presence of any
advanced robot technology that's attempting to be humanoid robots at this point. I mean, we are, I've never actually been in the presence of any advanced robot technology that's attempting to be humanoid.
There are some Japanese androids that are pretty interesting. To me, they're not out of the uncanny valley yet, but there's also some conversation about whether the uncanny valley is cultural or not. There's also, I think, some research on that, which I don't think is very conclusive, but it might be that in some cultures, you know,
like in Japanese culture, people are more accepting of robots that look like humans,
but aren't quite there. People say that there's this religious background to it: that the Shinto belief that objects can have souls makes people more accepting of robotic
technology in general, whereas in Western society, we're more creeped out by this idea that a thing,
a machine could, you know, resemble a living thing in a way. But yeah, I'm not really sure. And you should check out the androids that Ishiguro in Japan is making, because they're pretty cool. He made one that looks like himself, which is interesting when you think about his own motivations and psychology behind that. But it is a pretty cool robot. I think, you know, just from a
photograph, you might not be able to tell the difference. Probably in interacting with it,
you would. So do you think we will get to a Westworld level of lifelikeness long before we get
to the AI necessary to power those kinds of robots? Or do you have any intuitions about how long it will
take to climb out of the uncanny valley?
That's a good question. Honestly, I'm not as interested in how we completely replicate humans, because I see so many interesting design things happening now where that's not necessary. Robotic technology is very primitive at this point; I mean, robots can barely operate a fork. But we can already create characters that people will treat as though they're alive. And while it's not quite Westworld level, if we move away from this idea that we have to create humanoid robots, and we create, you know, a blob or some other form, we have a century of animation expertise to draw on in creating these compelling characters, and we can get to a place where we are
creating robots that people will consistently treat like living things, even if we know that
they're machines. I guess my fixation on Westworld is born of the intuition that something fundamentally
different happens once we can no longer tell the difference between a robot and a person.
And maybe I'm wrong about that. Maybe this change and all of
its ethical implications comes sooner when, as you say, we have a blob that people just find compelling
enough to treat it as though it were alive. It just seems to me that Westworld is predicated on
the expectation that people will want to use robots in ways that would truly be unethical if these robots were sentient.
But because, by assumption or in fact, they will not be sentient, this becomes a domain of creative play analogous to what happens in video games.
If you're playing a first-person shooter video game, you are not being unethical by shooting the bad guys. And the
more realistic the game becomes, the more fun it is to play. And there's this sense that, I mean,
while some people have worried about the implications of playing violent video games,
all the data that I'm aware of suggests they're really not bad for us and crime has only gone
down in the meantime. And it seems to me that
there's no reason to worry that as that becomes more and more realistic, even with virtual reality,
it's going to derange us ethically. But watching Westworld made me feel that robots are different.
Having something in physical space that is human-like to the point where it is indistinguishable from a human is different. Even though you know it's not, it seems to me that will begin to compromise our ethics if we mistreat these
artifacts. We'll not only feel differently about ourselves and about other people who mistreat
them, we will be right to feel differently because we will actually be changing ourselves.
You'd have to be more callous than in fact most people are to rape or torture a robot that is in
fact indistinguishable from a person because all of your intuitions of being in the presence of
personhood, of being in relationship, will be played upon by that robot, even though you know that it's been manufactured and let's say you've
been assured it can't possibly be conscious. So the takeaway message from watching Westworld for
me is that Westworld is essentially impossible. We would just be creating a theme park for
psychopaths and rendering ourselves more and more sociopathic
if we tried to normalize that behavior. And I think what you're suggesting is that long before
we ever get to something like Westworld, we will have, and may even have now, robots that if you were
to mistreat them callously, you would in fact be callous. And you'd have to be callous in order to
do that. And you're not going to feel good about doing it if you're a normal person and people
won't feel good watching you do it if they're normal. Is that what you're saying? Yeah. I mean,
we already have some indication that people's empathy does correlate with how they're willing
to treat a robot, which is super interesting. If you'd like to continue listening to this
conversation, you'll need to subscribe at
SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along
with other subscriber-only content, including bonus episodes and AMAs and the conversations
I've been having on the Waking Up app.
The Making Sense podcast is ad-free and relies entirely on listener support. And you can subscribe now at SamHarris.org.