a16z Podcast: To All the Robots I've Loved Before
Episode Date: February 14, 2019, with Kate Darling (@grok_) and Hanne Tidnam (@omnivorousread). We already know that we have an innate tendency to anthropomorphize robots. But beyond just projecting human qualities onto them, as we begin to share more and more spaces, social and private, what kind of relationships will we develop with them? And how will those relationships in turn change us? In this Valentine’s Day special, Kate Darling, Researcher at the MIT Media Lab, talks with a16z's Hanne Tidnam all about our emotional relationships with robots. From our lighter sides -- affection, love, empathy, and support -- to our darker sides, what will these new kinds of relationships enhance or desensitize in us? Why does it matter that we develop these often intense attachments to these machines that range from tool to companion -- and what do these relationships teach us about ourselves, our tendencies, and our behaviors? What kinds of models from the past can we look towards to help us navigate the ethics and accountability that come along with these increasingly sophisticated relationships with robots?
Transcript
Hi, and welcome to the a16z podcast. I'm Hanne, and this episode is a Valentine's special
where I talk with Kate Darling, researcher at the MIT Media Lab, all about our emotional relationships with
robots. We already know that we have an innate tendency to anthropomorphize robots, but as we
begin to share more and more spaces, both social and private, with these machines, what does that
actually mean for how we'll interact with them? From our lighter sides, like affection and love and
emotional support, to our darker sides: what do these relationships teach us about ourselves,
our tendencies, and our behaviors? How will these relationships in turn change us? And what models
should we be thinking about as we develop these increasingly sophisticated relationships with
our robots? Besides just that natural instinct that we have to anthropomorphize all sorts of
things, how is it different with robots? Robots are just so fascinating because we know rationally
that they're machines, that they're not alive,
but we treat them like living things.
With robots, I think they speak to our primal brain
even more than like a stuffed animal
or some other object that we might anthropomorphize
because they combine movement and physicality
in this way that makes us automatically project intent onto them.
So that's why we project more onto them
than like the Samsung monitor that I'm looking at back here
because it looks like it has agency in the world.
Yeah, I think that tricks our brains.
There are a lot of studies that show that we respond differently
to something in our physical space than just something on a screen.
So even though people will imbue everything with human-like qualities,
robots take it to a new level because of this physical movement.
People will do it even with very, very simple robots,
just like the Roomba vacuum cleaner.
You know, it's not a very compelling anthropomorphic robot,
and yet people will name them and feel bad for them when they get stuck
and insist on getting the same one back
if it gets broken.
Oh, my gosh, really?
Yeah.
And so if you take that and then you create something
that is specifically designed
to push those buttons in us,
then it gets really interesting.
So what is the reason why we should be aware
of this tendency besides like this cute,
you know, attachment to our Roombas
or our Vectors or whatever our pet robots are?
Why does it matter that we develop these relationships with them?
Well, I think it matters because right now
robots are really moving into these new shared spaces.
I mean, we've had robots for decades,
but they've been increasing efficiency
in manufacturing contexts.
They haven't been sharing spaces with people.
And as we integrate these robots
into these shared spaces, it's really important
to understand that people treat them differently
than other devices.
And they don't treat them like toasters.
They treat them subconsciously like living things.
And that can lead to some almost comical challenges
as we try and figure out how to treat these things as tools
in contexts where they're meant to be tools,
but then at the same time kind of want to treat them differently.
When you talk about these giant manufacturing robots
that do exist in plants and factories on the floor,
do we see that there?
So there's this company in Japan that has this standard assembly line
for manufacturing.
And like a lot of companies, they have people,
working alongside the robots on the assembly line.
And so the people will come in and they do these aerobics in the morning
to warm up their bodies for the day.
And they have the robots do the aerobics with their arms
with the people so that they'll be perceived more like colleagues
and less as machines.
People are more accepting of the technology.
People enjoy working with it more.
And I think it's really important to acknowledge
this emotional connection, because you can harness it, too.
So we know we have this tendency.
If we think about being aware of it
and being able to foster it or diminish it,
what are some of the ways in which we negotiate those relationships?
My focus is trying to think about what that means ethically,
what that means in terms of maybe changing human behavior,
what challenges we might want to anticipate.
We're seeing a lot of interesting use cases for social robots
that are specifically designed to get people to treat them
like living things to develop an emotional connection. One of my favorite current use cases for this
technology is a replacement for animal therapy. We have therapeutic robots. They're used as medical
devices with dementia patients or in nursing homes. I saw an article about that recently,
specifically with people with dementia. Yeah, there's a baby seal that's very popular. That's right.
That's the one I read about. A lot of people think it's creepy. We're giving old people robots and
like having them nurture something that's not alive. But then you look at some of the advantages
of having them have this experience when their lives have been reduced to being taken care of by
other people. And it's actually an important psychological experience for them to have. They've been
able to use these robots as alternatives to medication for calming distressed patients. This isn't
a replacement for human care. And that's also not how it's being used. It's really being used as a
replacement for animal therapy where we can't use real animals because people will consistently
treat them more like a living thing than a device. What is the initial interaction like when you
hold something like that? Is there a prelude that's necessary? Do you have to educate a little bit
those patients or do they just put the robot seal in their arms? The most clever robotic design
doesn't require any prelude or anything, because you will automatically respond to the cues.
The baby seal is very simple.
It just makes these little cute sounds and movements and responds to your touch and will purr a little bit.
And so it's very intuitive and it's also not trying to be a cat or anything that you would be more intimately familiar with.
Because no one has actually held a baby seal before.
Right.
And so it's much easier to suspend your disbelief and just like go with it.
So what are sort of some of the very broad umbrella concerns that we want
to be thinking about as we're watching these interactions develop? A lot of my work has been around
empathy and violence towards robotic objects. Are we already being violent towards them?
Sometimes. There was this robot called hitchBOT that hitchhiked all the way across the entire
country of Canada, just relying on the kindness of strangers. It was trying to do a road trip through
the U.S., and it made it to Philadelphia, and then it got vandalized beyond repair. Of course,
by the way, because I'm from New Jersey.
As you're telling me this story,
I'm already imagining this alien life
doing a little journey through the world.
I'm completely projecting this narrative onto it.
And that was the interesting thing about the story.
It wasn't that the robot got beat up,
but it was people's response to that,
that they were like empathizing with this robot
that was just trying to hitchhike around
and that it got...
People were so sad when this robot got...
Poor little stranger in a strange land.
Yeah.
There was news about this all over the world.
It hit international news.
And what do we learn from that?
Why is it interesting that we empathize with them?
Even more interesting to me is the question,
how does interacting with these very lifelike machines influence our behavior?
So could you use them therapeutically to help children or prisoners or help improve people's behavior?
But then the flip side of that question is, could it be
desensitizing for people to be violent towards robotic objects that behave in a really
lifelike way? Is that a healthy outlet for people's violent behavior to go and beat up robots
that respond in a really lifelike way? Or is that kind of training our cruelty muscles?
Isn't that sort of like a new version of almost the old video game argument? I mean, so how is it
shifting? So it's the exact same question, which by the way, I don't think we've ever really
resolved. We mostly kind of decided that people can probably compartmentalize, but children
we're not sure about. And so we restrict very violent games to adults. So we've kind of decided
that, you know, we might want to worry about the kids, but, you know, adults can probably
handle it. Now, robots, I think, make us need to re-ask the question because they have this
visceral physicality that we know from research people respond differently to than things on a
screen. And so there's a question of whether we can compartmentalize as well with robots.
Specifically because they are so present in the world with us. Yes. So do you think that's because
it's almost a somatic relationship to them? Will it matter the same way when we are immersed in,
say, virtual reality? I mean, as virtual reality gets more physical, I think that the two
worlds merge. And so even though the answer could very well be that people can still distinguish
between what's fake and what's real. And just because they like beat up their robot doesn't
mean that they're going to go and beat up a person or that their barrier to doing that is
lower. But we don't know. How do you start looking at that? What are the details that start giving
you an inkling one way or the other? The way that I think we're beginning to start to get at the
question is just trying to figure out what those relationships look like at first. So I've done
some work on how do people's tendencies for empathy relate to their hesitation to hit a robot just to
try and establish that people do empathize with the robots, because we have to show that
first. Yeah, we have to show that first. It's so interesting. We all know what that feeling is,
but to show, to demonstrate, to model it and then see it and recognize it in our kind of research
experimentation. How do you actually categorize the response of empathy? One of the things
we did was have people come into the lab and smash robots with hammers and time how long they
hesitated to smash the robot when we told them to smash it. Did you give them a framework
around this experiment or just have them walk in and just start? Definitely they did not know
that they were going to be asked to hit the robot. Okay. And we did psychological empathy tests
with them to try and establish a baseline for how they scored on empathic concern generally.
But also, like, we had a variety of conditions.
So what we were trying to look at was a difference, for example,
in whether people would hesitate more if the robot had a name and a backstory
versus if it was introduced to them as an object?
Oh, well, presumably the name and the backstory, right?
Yes.
Not a huge surprise that when the robot's name is Frank, people hesitate more.
So sorry, Frank.
Yeah.
We actually tried measuring like slight changes in the sweat on their skin.
Oh, my gosh.
To see if they were more physically aroused.
Unfortunately, those sensors were really unreliable.
So we couldn't get reliable data from that.
We tried coding the facial expressions, which was also difficult.
That's what I was wondering about because as one human, reading another human,
you do have some sense, right?
And I have to say, like, the videos of this experiment
are much more compelling than just the hesitation data
because people really did, like, one woman was like looking at this robot,
which was a very simple, like looked kind of like a cockroach,
like it was just a thing that moved around like an insect.
And so this one woman is like holding the mallet and like steeling herself
and she's muttering to herself, it's just a bug, it's just a bug.
So the videos were compelling, but we just didn't find it easy enough to code them in a way that would be scientifically sound or reliable.
So we relied just on the timing of the hesitation.
Other studies have measured people's brainwaves while they watch videos of robots being tortured.
So there are a bunch of different ways that people have tried to get at this.
So when we start learning about our capacity for violence towards robots, are you thinking about that
in terms of what it teaches us back about humans,
or in terms of why, going forward,
we need to know this?
We are learning actually more about human psychology
as we watch people interact with these machines
that don't communicate back to us in an authentic way.
So that's interesting.
But I think that it's mainly important
because we're already facing some questions
of regulating robots.
For example, there's been a lot of moral panic around sex robots.
We already need to be answering the question, do we want to allow this type of technology to exist and be used and be sold?
Do we want to only allow for it in therapeutic contexts?
Do we want to ban it completely?
And the fact is we have no evidence to guide us in what we should be doing.
So it's all coming down to the same question of like, is this desensitizing or is this enhancing, basically?
Yeah. Unfortunately, a lot of the discussions are just fueled by, you know, superstition or moral panic or in this context, a lot of it is science fiction and pop culture and our constant tendency to compare robots to humans and look at them as human replacements versus thinking a little bit more outside of the box and viewing them as something that's more supplemental to humans. Do we have a model for what that even might be?
I've been trying to argue that animals might be the better analogy to these machines that can sense and think and make autonomous decisions and learn and that we kind of treat like they're alive, but we know that they're not actually alive or feel anything or have emotions or can make moral decisions.
Right.
They are still controlled by humans.
Property.
Property.
They're property.
And throughout history, we've treated some animals as property, as tools, some animals we've turned into our companions.
And I think that that is how we're going to start integrating robotic technology as well.
We're going to be treating a lot of it like products and tools and property.
And some of it we're going to become emotionally attached to when we might integrate in different ways.
But we definitely should stop thinking about robots as human replacements and start thinking
about how to harness them as a partner that has a different skill set.
So while you're talking, I'm thinking about the incredibly fraught space of how we relate to
animals. Some people might argue that since that's such a gray area as it is and we're
always feeling our way, you know, and that model is always changing, it almost sounds like
it just makes it messier in a way, right? And I also think there's a way in which we have this
primal instinct of how to relate to animals. Do you think we have the same kind of seed for a primal
relationship with robots there? I think we do. I think that ironically we're learning more about
our relationship to animals through interacting with robots because we're realizing that we're
complete hypocrites. Oh, well, yeah. We fancy ourselves as caring about, you know, the inner biological
workings of the animals and whether animals can suffer.
And we actually don't care about any of that.
We care about what animals we relate to.
And a lot of that is cultural and emotional.
And a lot of that is based on which animals are cute.
For example, in the United States, we don't eat horses.
That's considered taboo.
Whereas in a lot of parts of Europe, people are like, well, horses and cows are both delicious.
Why would you distinguish between the two?
There's no inherent biological reason to distinguish.
Right, and by the way, we boil them into glue.
And yet, culturally, we feel this, like, bond with horses in the U.S. as this majestic beast,
and it seems so wrong to us to eat them.
The history of animal rights is full of stories like this.
Like, the Save the Whales campaign didn't start until people recorded whales singing.
Before that, people did not care about whales.
But then once we heard that they can sing and make this beautiful music,
we were like, oh, we must save these beautiful creatures that we can now suddenly relate to.
Because it needs to be about us, kind of on some deep level.
The sad but important realization is that we relate to things that are like us,
and we can build robots that are like that,
and we are going to relate to those robots more than to other robots.
So it's a principle almost of like design thinking then.
When you think about, like, well, I want this robot to have a relationship to humans like cattle pulling a plow.
It gives you a sort of vision of a different spectrum of relationships, for starters.
I mean, we've even tried to design animals accordingly.
We've bred dogs to look specific ways so that we relate more to them.
And the interesting thing about robots is that we have even more freedom to design them in compelling ways than we do with animals.
It takes a while to breed animals.
Yeah, generations.
Yeah, so I think we're going to see the same types of manipulations of the robot breeds.
Why would you go down that spectrum to the lesser relationships
when it's something that is performing a service to humans?
If it's not directly harmful to have people develop an emotional attachment,
it's probably not a bad idea to do.
But a lot of the potential for robots right now
is in taking over tasks that are dirty, dull, and dangerous.
And so if we're using robots as tools to go do the thing,
it might make sense to design them in a way that's less compelling to people
so that we don't feel bad for them when they're doing the dirty, dull, dangerous work.
There are contexts where it can be harmful.
So, for example, you have in the military, you have soldiers
who become emotionally attached to the robots that they work with.
And that can be anything from inefficient to dangerous
because you don't want them hesitating for even a second
to use these machines the way that they're intended to be used.
Oh, like police dogs.
That's a great analogy.
If you become too attached to the thing that you're working with,
if it's intended to go into harm's way in your place, for example,
which is a lot of how we're using robots these days,
bomb disposal units, stuff like that,
you don't want soldiers becoming emotionally affected by sending the robot into harm's way,
because they could be risking their lives. So it's really important to understand
that these emotional connections we form with these machines can have real world consequences.
Another interesting area is responsibility for harm because it does get a lot of attention
from policymakers and from the general public. With the robots generally, there's a lot of
throwing up our hands, like, how can we possibly hold someone accountable for this harm if the robot did something no one could anticipate?
And I think we're forgetting that we have a ton of history with animals, where we have things that we've treated as property that can make autonomous, unpredictable decisions that can cause harm.
So there's this whole body of legislation that we can look to, basically. Yes, the smorgasbord
of different solutions we've had is really compelling. I mean, the Romans even had rules around,
you know, if your ox tramples the neighbor's field, the neighbor might actually be able to
appropriate your ox or even kill your ox. We've had animal trials. Oh, I talked about that
in a podcast with Peter Leeson about the trials of the rats for decimating crops. There's different
ways even today that we like to assign responsibility for harm. There's like the very pragmatic,
okay, how do we compensate the victim of harm? How do we hold the person who caused the harm
accountable so that there's an incentive to not do it again? Right. And a lot of that is done
through civil liability. Okay. There's also, however, criminal law that is kind of a primitive
concept when you think about it.
There was just this case in India
where an old man was
stoned to death with bricks by monkeys
who were intentionally flinging bricks
and the family tried to get the police
to do something about the monkeys
and hold the monkeys
criminally accountable for what happened.
Just because of that human assigning of blame?
Yes, because it wasn't enough
to just, you know,
get some sort of monetary compensation; they really wanted these monkeys to suffer a punishment
for what they had done. And I know it seems silly, but we do sometimes have that tendency.
So it's interesting to think about ways that we might actually want to hold machines themselves
accountable and ways that that's problematic as well. So can you illustrate what that would
look like with robots when we think about those different ways of assigning responsibility?
Yeah, so, for example, the way that we regulate pit bulls currently in some countries is really interesting.
Austria has decided there are some breeds of dogs that we are going to place much stricter requirements on than, you know, just the other dog breeds.
So you need to get what's basically the equivalent of a driver's license to walk these dogs.
They have to have special collars and they have to be registered.
And you could imagine for certain types of robots, having a registry, having requirements, having a different legal accountability, like strict liability versus, oh, did I intend to cause this harm or did I cause it through neglect, the way that we distinguish, for example, between wild animals and pets.
If you have a tiger and the tiger kills the postal service worker, that's going to be your fault regardless of how careful you were with the tiger, because we say having a tiger is just inherently
dangerous. It's almost as if the model is sort of developing different ideas around certain
categories and groups, and whether the way we relate to them
is based on our emotional narratives around them or is evidence-based, you know,
becomes really important. The heart of it is that we need to recognize that social robots
could have an impact on people's behavior and that it's something that we might actually need
to regulate. One of the interesting conversations that's happening right now is around autonomous
weapon systems and accountability for harm in settings of war, where we have war crimes, but
they require intentionality. And if a robot is committing a war crime, then there's maybe not
this moral accountability. But wouldn't it obviously be whoever programmed and owns the robot?
No, because you need someone to have intentionally caused this, rather than it being an accident.
The thing about robots is that they can actually now make decisions based on the data that they gather that isn't a glitch in the code, but is something that we didn't foresee happening.
We've used autonomous, unpredictable agents as weapons in war previously. For example, the Soviets, they trained dogs to run under tanks, enemy tanks, and
they had explosives attached to them, and they were meant to blow up the tanks.
And a bunch of things went wrong.
So, first of all, they had trained the dogs on their own tanks, which means that the dogs would sometimes blow up their own tanks instead of the enemy tanks.
They didn't train the dogs to be able to deal with some of the noise on the battlefield,
the shooting. So the dogs got scared and would run back to their handlers with these explosives
attached to them and the handlers had to end up shooting the dogs. And, you know, we're not
perfect at programming robots either. There's a lot of things that can go wrong that aren't necessarily
glitches in the code. Right. It's unanticipated consequences. So when we're thinking
about regulating things, I think that's a pretty good analogy, too, to look at the history
of how we've handled these things in the past and who we've held accountable.
The interesting thing that occurs to me is how do we both acknowledge our human emotional
attachment and yet not let it direct us too much?
What's that balance like?
Step one is probably awareness, right?
But is it something we can manage and navigate or is it kind of beyond our control?
I think we struggle with that culturally as well because we have this Judeo-Christian
distinction, like we have this clear line between things that are alive and things that are not
alive. Whereas in some other countries, they don't necessarily make that distinction. Like in Japan,
they have this whole history of Shintoism and treating objects as things with souls. And so
it's, I think, easier for them to view robots as just another thing with a soul. And they don't
have this contradiction inside themselves of, oh, I'm treating this thing
like a living thing, but it's just a machine.
Oh, that's so fascinating because I would have thought it would be the other way.
If you think everything has a soul, it's sort of harder to disentangle.
But you're saying you sort of are desensitized to it in a way.
Or you're more used to viewing everything as connected but different.
And so, you know, you still face the same design challenges of how do you get people to treat robots like tools
in settings where you don't want them to get emotionally attached to them.
So those design challenges still exist.
But I think as a society, you're not also dealing with this contradiction of,
I want to treat this thing like a machine, but I'm treating it differently.
Right.
So those are the sort of ethical wrappers around this that we need to be aware of
as we're starting to introduce these different types of interactions
and as these relationships become more sophisticated.
Thank you so much for joining us on the A16Z podcast.
Thanks for having me.