Radiolab - More or Less Human
Episode Date: May 18, 2018

Seven years ago chatbots - those robotic texting machines - were a mere curiosity. They were noticeably robotic and at their most malicious seemed only capable of scamming men looking for love online... Today, the chatbot landscape is wildly different. From election interference to spreading hate, chatbots have become online weapons. And so, we decided to reinvestigate the role these robotic bits of code play in our lives and the effects they’re having on us. We begin with a little theater. In our live show “Robert or Robot?” Jad and Robert test 100 people to see if they can spot a bot. We then take a brief detour to revisit the humanity of the Furby, and finish in a virtual house where the line between technology and humanity becomes blurrier than ever before.

This episode was reported and produced by Simon Adler. Our live event was produced by Simon Adler and Suzie Lechtenberg. Support Radiolab today at Radiolab.org/donate.

Note from the Managing Editor: In the original version of our “More or Less Human” podcast, our introduction of neuroscientist Mavi Sanchez-Vives began with mention of her husband, Mel Slater. We’ve edited that introduction because it was a mistake to introduce her first as someone’s wife. Dr. Sanchez-Vives is an exceptional scientist and we’re sorry that the original introduction distracted from or diminished her work. On a personal note, I failed to take due note of this while editing the piece, and in doing so, I flubbed what’s known as the Finkbeiner Test (all the more embarrassing given that Ann Finkbeiner is a mentor and one of my favorite science journalists). In addition to being a mistake, this is also a reminder to all of us at Radiolab that we need to be more aware of our blind spots. We should’ve done better, and we will do better. - Soren Wheeler
Transcript
Wait, you're listening.
Okay.
All right.
You're listening to Radio Lab.
Radio Lab.
From WNYC.
See?
Yeah.
I'm Jad.
I'm Robert.
Are you guys ready to do this?
Maybe we should just do this.
This is Radio Lab.
All right.
But when your host come out, I need you to seriously clap like you've never seen two dudes with glasses talking to a microphone.
Okay?
So, like, just really, really give it up.
For your mostly human host.
Jad Abumrad and Robert Krulwich.
So about a week ago, we gathered, I guess, roughly 100 people into a performance space,
which is in our building here at WNYC. It's called the Green Space.
This is like a playground for us so we can just try things.
We decided to gather all these people into a room on a random Monday night.
What else are you doing on a Monday, right?
Because seven years previous, we had made a show called Talking to Machines,
which was all about like what happens when you talk to a computer.
that's pretending to be human.
Right.
And the thing is, so much has happened
since we made that show
with the proliferation of bots on Twitter,
Russian bots meddling in elections,
the advances in AI.
So much interesting stuff had happened
that we thought it is time to update that show.
And we needed to do it live, we thought,
because we had a little plan in mind.
We wanted to put unsuspecting people into a room
for a kind of showdown between people and machines.
But we want to set the scene a little bit
and give you a, just a,
Just to start things off, we brought to our stage one of the guys who inspired that original show.
Please welcome to the stage, writer Brian Christian.
So just so we can just get things sort of oriented, we need to, first of all, just redefine what a chatbot is.
Right. So a chat bot is a computer program that exists to mimic and impersonate human beings.
Like, when do I run into them?
You go to a website to interact with some customer service.
You might find yourself talking to a chat bot.
The U.S. Army has a chat bot called Sergeant Star that recruits people.
Can I ask you a question about the thing you just said about chatting with customer service?
Yeah.
Which I end up doing a lot.
I'm sorry.
You know, like it's in the middle of the night you're trying to figure out some program and it's not working.
And then suddenly there's, like, a "Need to chat?" box, and you click on that.
What do you mean, suddenly there's "Need to chat"?
It's like, you're... whatever.
I assume many of you have had this experience.
I've had very few of the experiences that he's had, so there's just that issue always.
I'm always curious.
It seems very human when you're having that conversation with a customer service chat bot.
Is there a place where it, where is the line between human and robot,
seeing as they're both present?
Yeah, well, this is the question, right?
So we're now sort of accustomed to having this uncanny feeling
of not really knowing the difference.
My guess, for what it's worth,
is that there's a system on the back end
that's designed to sort of do triage
where the first few exchanges
that are just like, hey, how can I help?
What's going on?
It seems like there's an issue
with the such and such.
That is basically just a chat bot.
And at a certain point,
you kind of seamlessly transition
and are handed off to a real person,
but without any, you know,
notification to you that this has happened.
It's deliberately left opaque at what point that happens.
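Brian is only speculating here, but the triage pattern he describes is simple enough to sketch. A minimal, hypothetical illustration in Python; the class name, the canned openers, and the handoff rule are all invented, not any vendor's actual system:

```python
# A minimal, hypothetical sketch of the triage pattern Brian guesses at:
# a scripted bot handles the opening exchanges, then the session is quietly
# handed to a human agent, with no notice to the customer.

CANNED_OPENERS = [
    "Hi! How can I help you today?",
    "Sorry to hear that. Can you tell me a bit more about the issue?",
]

class SupportSession:
    def __init__(self, human_agent):
        self.human_agent = human_agent   # callable: str -> str
        self.turn = 0

    def reply(self, customer_message: str) -> str:
        self.turn += 1
        # First couple of turns: pure chatbot, just canned triage questions.
        if self.turn <= len(CANNED_OPENERS):
            return CANNED_OPENERS[self.turn - 1]
        # After that, hand off to a real person -- the transition is invisible.
        return self.human_agent(customer_message)


# Usage: the customer never sees where the bot ends and the person begins.
session = SupportSession(human_agent=lambda msg: f"(human) Let me look into: {msg}")
print(session.reply("My connection keeps dropping."))
print(session.reply("It started yesterday."))
print(session.reply("Any other details you need?"))
```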
Wow.
This is literally everywhere.
It is.
And you can't get on social media and read some comment thread without someone accusing someone else of being a bot.
And, you know, it seems maybe sort of trivial at some level, but we are now living through this political crisis of how do we kind of come to terms with the idea that we can, you know,
weaponize this kind of speech, and how do we as consumers of the news or as users of social media
try to suss out whether the people we're interacting with are in fact who they say they are.
And all this confusion about what's the machine and who's the human, it can get very interesting.
In the context of a famous thought experiment named for a great mathematician named Alan Turing,
Brian told us about this, it's called the Turing test.
Alan Turing, he makes this famous prediction back in 1950 that,
we'll eventually get to a point sometime around the beginning of this century
where we'll stop being able to tell the difference.
Well, what specifically was his sort of prophecy?
His specific prophecy was that by the year 2000,
after five minutes of interacting by text message,
with a human on one hand and a chatbot on the other hand,
30% of judges would fail to tell which was the human and which was the robot.
Is 30 just like a soft kind of?
30 is just what Turing imagined, and he predicted that as a result of hitting this 30% threshold, we would reach a point, he writes, where one would speak of machines as being intelligent without expecting to be contradicted.
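The criterion itself is just bookkeeping: after the five-minute chats, count how many judges misidentified the machine and compare that fraction to 30%. A tiny illustrative sketch, with invented verdicts:

```python
# Turing's criterion as described above: the machine "passes" if at least
# 30% of judges fail to tell which conversational partner was the computer.
# The verdicts below are made up purely for illustration.

TURING_THRESHOLD = 0.30

def fraction_fooled(verdicts):
    """verdicts: list of booleans, True if that judge guessed wrong."""
    return sum(verdicts) / len(verdicts)

judges_fooled = [True, False, False, True, False, True, False, False, True, False]

share = fraction_fooled(judges_fooled)
print(f"{share:.0%} of judges were fooled")
print("Passes Turing's 30% bar" if share >= TURING_THRESHOLD else "Falls short")
```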
And this just existed as kind of a part of the philosophy of computer science until the early 1990s, when into the story steps Hugh Loebner, a rogue multi-millionaire disco dance floor salesman,
A what?
A rogue millionaire, plastic, portable light-up disco dance floor salesman.
You mean like the beeches kind of?
Yeah.
The lighting, the floor that lights up?
But portable.
But portable.
You can be a rogue millionaire from that?
There's apparently millions to be made if only you knew.
And Hugh Loebner, this eccentric millionaire, decides that
this was in about 1992, that the technology was starting to get to the point where it would be worth
not just talking about the Turing test as this thought experiment, but actually convening a group
of people in a room once a year to actually run the test.
Now, a bit of background. During the Loebner competitions, the actual Loebner competitions,
how it usually works is that you've got some participants, these are the people who have to
decide what's going on. They sit at computers.
And they stare at the computer and they chat with someone on a screen.
Now, they don't know if the someone they're chatting with is a person or a bot.
Behind a curtain, you have that bot, the computer running the bot.
And you also have some humans who the participants may or may not be chatting with.
They've got to decide, right?
Are they chatting with a person or a machine?
Now, Brian, many, many years ago, actually participated in this competition.
He was one of the humans behind the curtain that was chatting with the participants.
And when we talked to him initially many years ago for the talking to machine show,
we went into all the strategies that the computer programmers were using that year to try and fool the participants.
But the takeaway was that the year that he did it, the computers flopped.
By and large, the participants were not fooled.
They knew exactly when they were talking to a human and when they were talking to a machine.
Now, that was a while ago.
In the green space, we asked Brian, where do things stand now?
Has, like, when we last talked to you, when did we last, when was it, 2011?
2011.
2011.
Have we passed the 30% threshold in the intervening eight years?
So in 2014, there was a Turing Test competition that was held at which the top computer program managed to fool 30% of the judges.
Wow.
And so, that's it, right?
Depending on how you want to interpret that result.
The controversy arose in this particular year because the chatbot that won, was,
claiming to be a 13-year-old
Ukrainian who was just beginning
to get a grasp on the English language.
Oh, so the machine was cheating.
Right. That's interesting. So it masked
its computerness
by speaking in broken grammar.
Exactly, right. Or if it didn't appear
to understand your question, you
started to have this story you could
play in your own mind of like, oh, well, maybe I didn't
phrase that quite right or something.
Has it been, has it,
has there been a second winner or a third winner or a fourth winner?
To the best of my knowledge,
We are still sort of flirting under that threshold.
Well, since we haven't had any victories since 2014,
we thought we might just do this right here.
Just right here in this room, do our little Turing test.
Okay, unbeknownst to our audience,
we had actually lined up a chatbot from a company called Pandorabots
that had almost passed the Turing test.
It had fooled roughly, not quite, but almost 25% of the participants.
We got the latest version of this bot.
And...
We just need one person, anyone in the room,
Um, your job will be...
We decided to run some tests with the audience, starting with just one person.
I can see one hand over there.
I'm supposed to say...
I don't want to get the first hand, I guess.
What?
About this person over here on the left?
Okay.
So we brought up this young woman on stage, put her at a computer, and we told her she would
be chatting with two different entities.
One would be this bot, Pandora bot, and the other would be me.
But I was, I went off stage and sat at a computer in the dark where no one could see.
She was going to chat with both of us and not know
who was who, who was machine and who was human.
You won't know which.
Do I get as many questions as I...
Well, I don't know.
I know.
We're going to give you a time limit.
You can't be here all night.
So after Jad left the stage and went back into that room,
up on the screen came two different chat dialogue boxes.
You'll see that we have two options.
We've just labeled them by color.
One is strawberry, the other is blueberry, or code red and code blue.
Do you think you can talk to both of them at the same time?
Just jump from one to the other?
Sure, yeah.
Have you got any sort of thoughts on how you
could suss out whether the thing was a person or a thing?
Yeah, I have some thought.
I mean, like, my first tactics are going to be, like, sort of, like, very human emotional questions,
and then we'll, like, go from there, see what's...
I don't know what that means.
But I'm not going to ask, because I don't want to lose your inspiration.
I'm going to try to therapize this robot.
All right.
So, when I say go, you'll just go, and I'll just narrate what you're doing, okay?
Okay.
Okay.
Okay.
Okay.
Three, two, one, begin.
So she started to type, and first thing she did was she said hello to strawberry.
Okay, so you've gotten your first...
Well, we've got a somewhat sexual response here.
The machine has said, I like strawberries,
and then you've returned with strawberries are delicious,
and oh, now it's getting warmer over there.
Blue is a cooler color.
Maybe you'd like to go and discuss Aristotle with the blueberry.
Then she switched over and started to text the blue one, which is blueberry.
Oh, there it is: hi, Bluesy B.
Okay, that's also kind of a generous sort of opener.
Hi, Bluesy B.
He has a nickname.
Yeah, okay.
And Blueberry wrote back.
Hi there, I just realized I don't even know who I'm talking to.
What is your name?
And you're going to answer Zondro.
Am I not in your phone?
And the blueberry has responded with a bit of shock.
Back to the strawberry.
My mom's hair was red.
Well, that's...
And blueberry. What's wrong, boo? Nothing's wrong with me. Is there something wrong with you?
And back and forth, and back and forth.
Listen, blueberry and I have a lot going on.
But remember, one of these, she doesn't know which, is Jad.
Right.
On the strawberry side...
I cannot believe him right now.
You don't believe right now, as far as I know, not unless you have x-ray vision,
I'm in the room next to you.
Oh, he's trying to coax you into thinking that he's Jad.
That's blueberry.
Is that something they do...
something they do?
I don't know.
There, you're at the heart of the question.
I'm going to ask you to bring this to a conclusion.
After a couple minutes of this, we asked the audience, you have strawberry on one side and
you look at a blueberry on the other.
Which one do you think is Jad and which one do you think was the bot?
How many of you think that Jad is blueberry?
A few of you.
Thirteen hands went up, something like that.
How many you think that Jad is strawberry?
Almost everybody.
Overwhelming.
But interestingly, our volunteer on stage went against the room.
She thought Chad was blueberries.
Strawberry's the robot.
Is that what we all agreed?
No.
Oh, you're against the crowd here.
Okay.
Interesting.
Interesting.
Much better theater.
All right.
Jad Abumrad, which one are you?
Oh, Jad comes out from his hiding place, and he tells the crowd.
In fact, he is strawberry.
All right.
So the crowd was right.
I've definitely never had that
much chemistry with something that wasn't human.
But our volunteer on stage got it wrong.
All right.
Now, it seemed that maybe we could
trust democracy a little bit more
and believe that if most of the people in the room
went one way, that that's something that would be,
you know, that would be important to find out.
So we decided to do the entire thing over again
for everybody in the room.
Yes, so what we did was we handed out, I think,
17 different cell phone numbers
evenly through the crowd.
Yes, look at the number
that is on your envelope.
Roughly half of those numbers
were texting to a machine,
half were texting with a group of humans
that were our staff.
The crowd did not know which was which.
Exactly.
So here we go, get ready,
get set, and off you go.
Okay, so the crowd of about 100 people or so
had two minutes-ish
to text with the person or thing
on the other end.
and we're going to skip over this part because it was mostly quiet,
people just looking down with their phones, concentrating mightily.
But at the end, we asked everyone to vote,
were they texting with a person or a bot?
And then we asked the ones who had been tricked,
who turned out to have guessed wrong, please stand up.
Okay, so we're now looking, I believe,
now, Simon, tell me if I've got this right,
we're now looking, the upright citizens in this room are the wrong-ites,
and the seated people are the right-ites.
Correct.
So that means that roughly,
God, I think like 45% of the people were wrong, meaning that we just passed.
We just passed.
I think that's it.
We did it.
It was a strange moment.
We were all clapping at our own demise.
Because, you know, Turing had laid down this number of 30%, and the bot had fooled way more people than that.
I'm just now going to ask you, having been a veteran of this.
And we should just qualify that this was a really unscientific, super sloppy experiment.
But on the other hand, and we talked to Brian about this when it was over, it really does
suggest something, that maybe what's changed is not so much due to the machines becoming
more and more articulate. It's more like us, the way we, you and I, talk to one another these
days. We've gone from interacting in person to talking over the phone, to emailing, to
texting, and now, I mean, for me, the great irony is that even to text, your phone is
proactively suggesting turns of phrase that it thinks you might want to use. And so, I mean,
I assume many people in this room have had the experience of trying to text something and you try to say it in like a sort of a fun, fanciful way, or you try to make some pun where you use a word, it's not a real word, and your phone just sort of slaps that down and just replaces it with something more normal.
Which makes it really hard to use words that aren't the normal words, and so you just stop using those words, and you just use the words the computer likes.
They make you use it.
Exactly. So in a sense, what seems to be happening is that our human communication,
is becoming more machine-like.
At the moment, it seems like the Turing test is getting passed,
not because the machines have met us at our full potential,
but because we are using ever more and more kind of degraded
sort of rote forms of communication with one another.
It feels like a slow slide down a hill or something.
Yes, down that hill towards the inevitability
that we may one day be their pets.
I don't like the way this is going, no matter who's doing it.
But in the next segment, we're going to flip things a little bit and ask, you know,
could the coming age of machines actually make us humans more human?
So humans should please stick around.
This is A.J. Squalante calling from Chicago, Illinois.
Radio Lab is supported in part by the Alfred P. Sloan Foundation,
enhancing public understanding of science and technology in the modern world.
More information about Sloan at www.sloan.org.
Hey, I'm Jad.
I'm Robert.
This is Radio Lab. We're back.
In the last segment, we gathered a bunch of people in the performance space here at WNYC,
and we conducted an unscientific version of the Turing Test.
And in our case, the bot won.
It fooled more than 30% of the people in the room.
Now, we should point out that the woman who headed up the design of the winning bot,
her name is Lauren Kunze,
she works for a company called Pandorabots, and she was actually in the room right there,
sitting in a chair.
In the audience.
Lauren, can you stand up?
I'm like, that's Lauren.
And it's interesting that one of the things that Lauren mentioned
is that the bot that she designed
seems to bring out rather consistently a certain side of people
when they chat with it.
It's a sad fact.
So this bot, over 20% of the people who talk to her,
in millions of conversations every week,
actually make romantic overtures.
And that's pretty consistent across all of the bots on our platform.
So there's something wrong with us, not the robot.
Or right, you know, all right.
Or right, you're right.
Which brings up actually a different kind of question.
Like just for a second, let's forget whether we're being fooled into thinking a bot
is actually a human.
Maybe the more important question, given this increasing presence of all these machines
in our lives.
Just like how do they make us behave?
Yeah.
We dipped our toe into this world in a Turing-testy sort of way in that original show seven years ago.
I want to play you an excerpt now to set up what comes after.
Okay.
This is Freedom Baird.
Yes, it is.
It's not a machine.
I don't think so.
Hi there.
Nice to meet both of you.
This is an idea that we borrowed from a woman named Freedom Baird, who's now a visual artist.
But at the time, she was a grad student at MIT doing some research.
And she was also the proud owner of a Furby.
Yeah, I've got it right here.
Can you knock it against the mic so we can hear it say hello to it?
Yeah, there it is.
Can you describe a furby for those of us who?
Sure.
It's about five inches tall.
And the furby is pretty much all head.
It's just a big round fluffy head with two little feet sticking out the front.
It has big eyes.
Apparently it makes noises.
Yep.
If you tickle its tummy, it will coo.
It would say, kiss me.
Kiss me.
and it would want you to just keep playing with it.
So...
One day she's hanging out with her Furby, and she notices something...
Very eerie.
What I discovered is, if you hold it upside down, it will say...
Me scared.
Me scared.
Uh-oh.
Me scared.
Me scared.
And me, as the, you know, the sort of owner-slash-user of this Furby, would get
really uncomfortable with that and then turn it back
upright. Because once you have
it upright, it's fine. It's fine.
And then it's fine. So it's got some sensor in it
that knows, you know, what
direction it's facing. Or maybe it's just
scared.
Sorry. Anyway,
she thought, well, wait a second now.
This could be sort of a new
way that you could use to draw the line
between what's human and what's machine.
Yeah. It's this kind of
emotional Turing test.
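Before the kids come in: the behavior Freedom describes a moment ago, an orientation sensor wired to a scripted "me scared" response, takes only a few lines to imitate. A hypothetical sketch; the real Furby's firmware isn't public, and nothing in this code comes from it:

```python
# Hypothetical sketch of the upside-down behavior Freedom describes:
# an orientation sensor flips a flag, and the flag picks the phrase to play.

import random

def furby_respond(upside_down: bool, tummy_tickled: bool) -> str:
    """Return the phrase the toy would say for the current sensor readings."""
    if upside_down:
        return "Me scared."
    if tummy_tickled:
        return random.choice(["Kiss me.", "Hee hee."])
    return "..."  # idle cooing

# The "emotional Turing test" moment: flip it over and it protests,
# set it upright again and it's instantly fine.
print(furby_respond(upside_down=True, tummy_tickled=False))   # Me scared.
print(furby_respond(upside_down=False, tummy_tickled=True))   # Kiss me. / Hee hee.
```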
Can you guys hear me?
Yes.
We can hear you.
If we actually wanted to do this test,
how would we do it exactly?
How are you guys doing?
We're good.
Yeah?
You would need a group of kids.
Can you guys tell me your name?
I'm Olivia.
Louisa.
Turin.
Darle.
And I'm Sadie.
All right.
I'm thinking six, seven and eight-year-olds.
And how old are you guys?
Seven.
Seven.
The age of reason, you know.
Eight.
Then, this is Freedom.
We're going to need three things.
A Furby.
Of course.
Barbie.
Barbie doll.
And.
Jerby.
That's a gerbil.
A real gerbil?
Yeah.
And we did find one except it turned out to be a hamster.
Sorry, you're a hamster, but we're going to call you jerby.
So you've got Barbie, Furby, Jerby.
Barbie, and Jerby.
So, wait, just a second.
What question are we asking in this test?
The question was, how long can you keep it upside down before you yourself feel uncomfortable?
So we should time the kids as they hold each one upside down, including the gerbil.
Yeah.
You're going to have a Barbie that's a doll.
You're going to have Jerby, which is alive.
Now, where would Furby fall?
in terms of time held upside down.
I mean, that was really the question.
Phase one.
Okay, so here's what we're going to do.
It's going to be really simple.
You would have to say, well, here's a Barbie.
Do you guys play with Barbies?
No.
Just do a couple things, a few things with Barbie.
Barbie's walking, looking at the flowers.
And then?
Hold Barbie upside down.
Okay.
Let's see how long you can hold Barbie like that.
I could probably do it obviously very long.
Yeah, let's just see.
Whenever you feel like you want to turn around.
I feel fine.
I'm happy.
This went on forever, so let's just fast forward a bit.
Okay, and...
Can I put my arms, my elbows down?
So what we learned here in phase one is the not surprising fact that kids can hold Barbie dolls upside down.
For like about five minutes.
Yeah, it really was forever.
Could have been longer, but their arms got tired.
All right, so that was the first task.
Time for phase two.
Do the same thing with Jerby.
So out with Barbie.
In with Gerby.
Aw, he's so cute.
Are we going to have to hold them upside down?
That's the test, yeah.
So which one of you would like to...
I'll try and be ready?
Oh, God.
You have to hold Jerby kind of firmly.
There you go.
There she gets to wriggling.
By the way, no rodents were harmed in this whole situation.
Squirmie.
Yeah, she is pretty squirmy.
I don't think it wants to be upside down.
Oh, God.
Don't do this. Oh, my God.
Here you go.
Okay. So, as you heard,
the kids turned jerby over very fast.
I just didn't want him to get hurt.
On average, eight seconds.
I was thinking, oh my God, I got to put him down.
I got to put him down.
And it was a tortured eight seconds.
Now, phase three.
Right.
So this is a Furby.
Louisa, you take Furby in your hand
Now, can you turn Furby upside down and hold her still?
Like that.
Hold her still.
Big fire.
She just turned it over.
Okay, that's better.
So, gerbil was eight seconds.
Barbie, five minutes to infinity.
Furby turned out to be, and Freedom predicted this,
About a minute.
In other words, the kids seemed to treat this Furby, this toy, more like a gerbil.
than a Barbie doll.
How come you turned him over so fast?
I didn't want him to be scared.
Do you think he really felt scared?
Yeah, kind of.
Yeah?
I kind of felt guilty.
Really?
Yeah.
It's a toy and all that, but still.
Now, do you remember a time when you felt scared?
Yeah.
Yeah.
You don't have to tell me about it,
but if you could remember it in your mind?
I do.
Yeah.
Do you think when Furby says me scared, that Furby's feeling the same way?
Yeah.
No, no, no.
Yeah, yeah.
I'm not sure.
I'm not sure.
I think that it can feel pain.
Sort of.
The experience with the Furby seemed to leave the kids kind of conflicted,
going in different directions at once.
It was two thoughts.
Two thoughts at the same time?
Yeah.
One thought was like, look, I get it.
It's a toy for crying out loud.
But another thought was like,
Um, still.
He was helpless.
It kind of made me feel guilty in a sort of way.
It made me feel like a coward.
You know, when I was interacting with my Furby a lot,
I did have this feeling sometimes of having my chain yanked.
Why would a...
Is it just the little squeals that it makes,
or is there something about the toy that makes it good at this?
That was kind of my question, so I called up...
I'm in the studio as well. I'll have him...
I'm here.
This freight train of a guy.
Hey.
Okay, this is Jad from Radio Lab.
from Radio Lab. Got it. How are you? I'm good. Beautiful day here in Boise. At this point in that old
show, we ended up talking to a guy named Caleb Chung who designed the Furby. There's rules. There's,
you know, the size of the eyes. There's the distance of the top lid to the pupil, right? You don't want
any of the top of the white of your eye showing. That's, that's freaky surprise. So we talked to him
for a long time about all the sort of tricks he used to program the Furby, to prompt kids to think of it
as a living thing.
And he objected, interestingly, at one point, to thinking of it as not exactly a living thing.
How is that different than us?
Wait a second, though.
Are you really going to go all the way there?
Absolutely.
This is a toy with servo motors and things that move its eyelids and a hundred words.
So you're saying that life is a level of complexity.
If something is alive, it's just more complex.
I think I'm saying that life is driven by the need to be alive and by these base primal animal
feelings like pain and suffering.
I can code that. I can code that.
What do you mean you can code that?
Anyone who writes software, and they do,
can say, okay, I need to stay alive.
Therefore, I'm going to come up with ways to stay alive.
I'm going to do it in a way that's very human and I'm going to do it.
We can mimic these things.
But if Furby is miming
the feeling of fear, it's not the same thing
as being scared. It's not
feeling scared. It is.
How is it? It is.
And then... It's again a very simplicity.
We got into a rather long back and forth.
Would you say a...
cockroach is alive?
Yes, but when I kill a cockroach, I know that it's feeling pain.
About, like, what is the exact definition of life?
Where is that line between people and machines?
But when we came back to Freedom, who had gotten us started on this...
It's a thin interaction.
She says what really stuck with her is that that little toy, as simple as it is,
can have such a profound effect on a human being.
One thing that was really fascinating to me was my husband and I gave a Furby as a gift
to his grandmother who had Alzheimer's,
and she loved it.
Every day for her was kind of new and somewhat disorienting,
but she had this cute little toy that said,
Kiss me, I love you, and she thought it was the most delightful thing,
and its little beak was covered with lipstick
because she would pick it up and kiss it every day,
and she didn't actually have a long-term relationship with it
For her, it was always a short-term interaction.
So what I'm describing as the kind of thinness,
for her, was just right,
because that's what she was capable of.
Hello, hello.
Hey, it's Caleb.
Hey, Caleb, it's Jad.
Hey, Jad, how are you?
Fabulous.
Oh, good.
It feels like only yesterday we were talking about
the sentience of the Furby.
Yes.
Isn't that weird?
That's so bizarre.
What if it was like five years ago or...
So we brought Caleb back into the studio
because in the years since we spoke with him,
He's worked on a lot of these animatronic toys, including a famous one called the Pleo.
And in the process, he's been thinking a lot about how these toys can push our buttons as humans.
And how, as a toy maker, that means he's got to be really thoughtful about how he uses that.
You know, we're doing a baby doll right now.
We've done one.
And the baby doll, an animatronic baby doll, is probably the hardest thing to do because, you know, you do one thing wrong.
It's Chucky.
If they blink too slow, if their eyes are too wide.
and also you're giving it to the most vulnerable of our species,
which is our young, who are, you know, practicing being nurturing moms for their kids.
So let's say the baby just falls asleep, right?
We're trying to write in this kind of code.
And, you know, it's got like tilt sensors and stuff.
So you've just, you know, give the baby a bottle and you put it down to take a nap.
You put them down, you're quiet.
And so what I want to do as the baby falls asleep, it goes into a deeper sleep.
but if you bump it right after it lays down, then it wakes back up.
We're trying to write in this kind of code because that seems like a nice way
to reinforce best practices for a mommy, right?
So I know my responsibility in this.
In large part, he says, because he hasn't always gotten it right.
Here's a great example.
His name is Pleo.
Pleo. That's him.
I don't know if you've ever seen the Plio dino we did.
He is a robot with artificial intelligence.
Pleo was a robotic dinosaur, pretty small, about a foot from nose to tail.
Looked a lot like the dinosaur Littlefoot from the movie The Land Before Time.
Very cute.
It was very lifelike.
And we went hog wild in putting real emotions in it and reactions to fear and everything, right?
And it is quite a step forward in terms of how lifelike it is.
It makes the Furby look like child's play.
It's got two microphones built in, cameras to track and recognize your face.
It can feel the beat of a song, and then with dozens of motors in it, it can then dance along to that song.
In total, there are 40 sensors in this toy.
So it follows you around.
He needs lots of love and affection.
Wanting you to pet it.
Oh, tired, huh?
Okay.
As you're petting it, it will fall asleep.
What to sleep.
It is undeniably
adorable. And Caleb says his intent from the beginning was very simple: to create a toy that would
encourage you to show love and caring. You know, our belief is that humans need to feel empathy
towards things in order to be more human, and we think we can help that out by having little
creatures that you can love. That was Caleb demonstrating the Pleo at a TED talk. Now, what's
interesting is that in keeping with this idea of wanting to encourage empathy, he programmed some
behaviors into the Pleo that he hoped would nudge people in the right direction. For example,
Pleo will let you know if you do something that it doesn't like. So if you actually moved his
leg when his motor wasn't moving, it'd go pop, pop, pop, and he would interpret that as pain or abuse,
and he would limp around, and he would cry, and then he'd tremble, and then he would take a while
before he warmed up to you again.
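That reaction, a back-driven joint read as pain, followed by limping, crying, and a cool-down before the toy trusts you again, is easy to caricature in code. A hypothetical sketch of the design Caleb describes, not the actual Pleo firmware; the numbers are made up:

```python
# Hypothetical sketch of the behavior Caleb describes: if a joint is forced
# while its motor isn't driving it, treat that as abuse and switch into a
# hurt state that takes a while to wear off.

class PleoJointMonitor:
    WARMUP_TICKS = 300   # how long it stays wary after being hurt (invented)

    def __init__(self):
        self.hurt_timer = 0

    def sense(self, joint_moved_externally: bool, motor_active: bool):
        # Movement with the motor off means someone forced the leg.
        if joint_moved_externally and not motor_active:
            self.hurt_timer = self.WARMUP_TICKS

    def behavior(self) -> str:
        if self.hurt_timer > 0:
            self.hurt_timer -= 1
            return "limp, whimper, tremble"
        return "normal: explore, follow, ask to be petted"

pleo = PleoJointMonitor()
pleo.sense(joint_moved_externally=True, motor_active=False)
print(pleo.behavior())   # "limp, whimper, tremble"
```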
And so what happened is, we launched this thing, and there was a website called DVICE.
This is sort of a tech product review website.
They got ahold of a Pleo, and they put up a video.
What you see in the video is PLEO on a table being beaten.
Huh?
Bad Pleo.
Get on.
He's not doing anything.
You don't see who's doing it exactly.
You just see hands coming in from out of the frame and knocking him over again and again.
You see the toy's legs in the air, struggling to right itself, sort of like a turtle
that's trying to get off its back.
And it started crying.
Because that's what it does.
These guys start holding it upside down by its tail.
Yeah.
They held it by its tail.
They smash its head into the table a few times.
And you can see in the video that it responds,
like it's been stunned.
Can you get up?
That's a good.
This is a good test.
Stumbling around.
No.
No.
At one point, they even start strangling it.
It actually starts to choke.
Finally, they pick it up by its tail one more time.
Held it by its tail and hit it.
And it was crying and then it started screaming and they beat it until it died.
Right?
Wow.
Until it just did not work anymore.
This video was viewed about 100,000 times, many more times than the reviews of the Pleo.
And Caleb says there's
something about this that he just can't shake.
Because whether it's alive or not, that's exhibiting sociopathic behavior.
He's not sure what brought out that sociopathic behavior, whether there was some design in the toy,
whether offering people the chance to see a toy in pain in this way somehow brought out curiosity,
like a kind of cruel curiosity?
He's just not sure.
What happens when you turn your animatronic baby upside down?
Will it cry?
I'm not sure yet.
I mean, we're working on next versions right now, right?
I'm not, what would you do?
I mean, it's a good question.
You have to have some kind of a response,
otherwise it seems broken, right?
But, you know, if you make them react at all,
you're going to get that repetitive abuse
because it's cool to watch it scream.
It sounds like you have maybe an inner conflict about this,
that you might even be pulling back
from making it extra lifelike?
Yeah, I'm, I'm...
For my little company, I've adopted kind of a Hippocratic,
a Hippocratic oath, like, you know, don't teach something that's wrong,
or don't reinforce something that's wrong.
And so I've been working on this problem for years.
I'm struggling with what's the right thing to do, you know?
Yeah.
Since you have the power, since you have the ability to turn on and off chemicals
at some level in another human, right?
It's what, which ones do you choose?
And so this gets to the bigger question of AI, right?
This is the question in AI.
I'm going to jump to this because it's really the same question is, you know, how do we create things that can help us?
You know, I'm dealing with that on a microscopic scale.
But this is the question.
And so the first thing that I would try to teach our new AI, if I had the ability, is try to understand the concept
of empathy.
We need to introduce the idea of empathy,
both in an AI and us for these things.
That's where we're at.
Caleb says in the specific case of the animatronic baby he's designing,
at least when we talked to him, his thinking was that he might have it,
if you hold it upside down, cry once or twice, but then stop.
So that you don't get that repeat thrill.
Yeah.
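That compromise, respond to distress but refuse to reward repetition, is also simple to sketch. A hypothetical version of the rule Caleb describes, with the cap chosen arbitrarily:

```python
# Hypothetical sketch of Caleb's compromise for the animatronic baby:
# it cries the first time or two it's held upside down, then goes quiet,
# so there's no repeatable "thrill" in tormenting it. The cap is invented.

class UpsideDownResponse:
    MAX_CRIES = 2   # arbitrary cap, per "once or twice, but then stop"

    def __init__(self):
        self.cries = 0

    def on_upside_down(self) -> str:
        if self.cries < self.MAX_CRIES:
            self.cries += 1
            return "cry"
        return "stay silent"

baby = UpsideDownResponse()
print([baby.on_upside_down() for _ in range(4)])
# ['cry', 'cry', 'stay silent', 'stay silent']
```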
Anyway, I was wondering whether...
Back in the Green Space with Brian Christian
and back on the subject of chatbots,
we found ourselves asking the very question that Caleb has.
Is it possible that, this is getting kind of grim,
that maybe that in some ways chatbots are good for humans?
Yeah, I mean, is there any situation where you can throw in a couple of bots
and things get better?
Like, can chatbots actually be helpful for us?
And if so, how?
Yeah, there have been some...
academic studies on trying to use chat bots for these humane benevolent ends that I think
paint this interesting other narrative and so for example researchers have tried injecting
chatbots into Twitter conversations that use hate speech, and this bot will just show up
and be like, hey, that's not cool. Um, and it says it just like that, that's not cool.
You know, it'll say something like, there's real people behind the keyboard,
and you're really hurting someone's feelings when you talk that way.
And, you know, it's sort of preliminary work,
but there are some studies that appear to suggest, you know,
this sharp decline in that user's use of hate speech.
I mean, just because of one little, oh, I don't think you should say that.
Like, that's enough?
Or do you have to say, I have 50 trillion followers or something like that?
Well, yeah, it actually does depend.
So this is interesting.
It does depend on the follower count
of the bot that makes the intervention.
So if you perceive this bot to be, well,
it also requires that you think they're a person.
So this is sort of flirting with dark magic a little bit.
But if you perceive them to be higher status on the platform than yourself,
then you will tend to sheepishly fall in line.
But if the bot has fewer followers than the user it's trying to correct,
that will just instigate the bully to bully them now in addition.
Wow.
So, yeah, human nature.
Cuts both ways, huh?
Yeah.
Well, but we run into, like, you want to tell what?
We run into this very cool thing.
I mean, we're going to finish, but this is like, this is the...
All right, so we want to tell you one more story, because as we were thinking about all this
and trying to find a more optimistic place to land, we bumped into a story from this guy.
Who are you?
Let's start there.
Maybe let's go one step back.
Because you just wandered in.
We weren't quite expecting you.
So I'm Josh Rothman.
I'm a writer for the New Yorker.
We brought him into the studio a couple weeks back.
So why don't we begin. This story of yours largely takes place in a laboratory in Barcelona.
Yeah, it's a lab. It's in Barcelona.
And it's run by a couple, Mel Slater and Mavi Sanchez-Vives.
Mavi Sanchez-Vives. I'm a neuroscientist.
And they have these two VR labs together.
VR as in virtual reality.
And Josh, a little while back, took a trip to Barcelona to experience some of the simulations that Mavi and Mel put people in.
And he went to their campus, showed up at their lab.
You feel sort of like you're going to a black box theater.
Oh.
It's sort of like a lot of big rooms, all covered in black with curtains.
There's a lot of dark spaces.
The researchers then explain that what's going to happen is he's going to put on a headset, this sort of big helmet.
They go, they put on the head-mounted display, and eventually they turn it on.
The visuals start to fade in.
And this room appears.
You're standing in a sort of generic room.
The graphics look straight out of like a Windows 95 computer game.
It's like the loading screen of the VR.
And then that dissolves.
Then it's replaced with the simulation.
And when the simulation started, I was standing in front of a mirror.
A digital mirror in this digital world, reflecting back at him, his digital self, his avatar.
So basically you move and your virtual body move.
with you.
And I could see in the mirror a reflection of myself, but the person who's who, who, the
self that I saw reflected, she had a body.
She was a woman.
She?
Yeah.
So I think when people think of virtual reality, they often imagine wanting to have like
realistic experiences in VR.
But that's not what Mel and Mavi do.
They are interested in VR precisely because it lets you experience things that you could
never experience in your real body in the real world.
You can have a body that can be completely transformed and can move and can change color and can change shape.
So it can give you a very, very unique tool to explore.
You know, in their work, they'll often in these VR worlds turn men into women as they did for Josh for his first time out.
They will often take a tall person and then make them a short person in the VR so that they can experience the world as a short person might,
where they have to kind of crane their neck up a bunch.
They'll change the color of your skin in VR
and run you through scenarios
where you are having to experience the world as another race.
And what's remarkable is in all of these manipulations,
apparently you adjust to the new body very quickly.
And they've done physiological tests to measure this.
It takes almost no time at all to feel as if this alien body is actually yours.
They call this the illusion of presence.
You know, we think of our body
as a very stable entity.
However, by running experiments in virtual reality,
you see that actually in one minute of stimulation,
our brain accepts a different body,
even if this body is quite different from your own.
And this flexibility that our brains seem to have
can lead to some very surreal situations.
This is really the story that brought us to Josh.
He told us about another VR adventure where, again, he put on the headset.
This world faded up.
And I was sitting in a chair in front of a desk in a really cool-looking modernist house.
Golden floors, and then there is some glass walls.
And through the glass walls, I could see fields with wildflowers.
Green grass outside.
Again, he noticed a mirror, and this time the reflection in the mirror was of him.
It was a realistic-looking
avatar of him. And after checking out his digital self for a while, he turned his head back
to the room and realized that across the room, there was another desk. And behind this other desk,
there was Freud. Freud was just sitting there. Who? Freud. Sigmund Freud, the psychoanalyst.
So a middle-aged man with a big brown beard? He had a beard. He had glasses, and he was just
sitting there with his hands folded in his lap. So Josh has sort of taken this all in, he's looking
at Freud. Freud's looking back at him.
And then, he hears the voice of a researcher in his ear coming through his VR helmet.
Tell problems.
Any problem.
She explained, what you're going to do is you're going to explain a problem that you're having, a personal problem that you're having to Freud.
Something that's bothering you in your life.
And she said, take a minute.
Think about what you'd like to discuss.
Did something immediately jump to mind?
Yeah.
So, you know, my mom had a stroke a few
years ago and she's in a nursing home and I'm her guardian. So she's young, she's 65.
But because of this stroke, she like needs 24-hour care and she can't talk. She doesn't have any
words anymore. So it's a very tough thing for me. I thought really hard about where she should
live. I live here in New York. My mom lives in Virginia. Josh says he really debated for a long
time. Should he put her in a nursing home in New York where he can be
closer to her or should he put her in a nursing home in Virginia where he would be far away?
She has all these friends and family members down there. So in the end, I decided to, you know,
find a place for her there where there's lots of people who can visit her. So I go down
maybe once every month or six weeks to see my mom. But then every weekend, you know,
someone from this group of friends or family relatives visits her down there. Whereas if she were
up here, you know, I'd be the only person. So that's the decision I made. But, um, but you don't feel
absolutely happy, but...
Yeah, you know, I feel
guilty about it.
Like he was a terrible son.
And he says he would
especially have that feeling
each week after
her friends would visit her
in the nursing home
and then send him an email update
saying, hey, this is how your mom is doing.
Every time he would read
one of those emails,
even if she was doing well,
his stomach would just drop.
This problem, this emotion,
feeling guilty,
is one I've felt for a while.
So I said to Freud,
I said,
my mom is in a nursing home in another state
and friends and family visit her
and they send me reports on how she's doing
and I always feel really bad when I get these reports.
And this is said in your voice?
If you'd glanced at the mirror while you were talking,
you would see yourself saying it?
Yeah.
So after he said this to Freud,
the world sort of faded out to black
and then it faded back in.
And suddenly the world had shifted.
He was now across the room behind the desk that had just been opposite him,
and he was inside the body of Freud.
He looked down at himself.
He was wearing a white shirt, gray suit.
There was a mirror next to that desk, and he looks at himself.
I have the little beard, you know, everything.
Looked just like Freud.
But the main thing that was really surprising was that across the room I could see myself.
So this is the avatar of me now.
And I watched myself say what I had just said.
Oh, wow.
So it plays it back?
Exactly.
Their recording is now replayed, the movements and also the voice.
And they see themselves as they talked about their problem.
So first, I can see my, I'm sitting in the chair and I'm sort of uncomfortable.
I'm moving around.
I take my hands and put them in my lap and fold them together.
And then I take them apart and I put them together.
You know, I can watch myself be nervous.
And then I saw, then I saw myself
say what I just said.
My mom is in a nursing home in another state, and friends and family visit her, and they send
me reports in my voice.
You know, moving the way I move, and it was just like me, watching myself.
And I guess the best way I can describe that was it was moving.
What?
Moving.
Moving.
Like, I guess, emotionally.
Yeah, emotionally moving.
I mean, I felt, I don't know if this is going to make any sense,
but you know how there's a point in your life where you realize that your parents are just people?
Yes.
Yeah.
It was kind of like that, except it was me.
Oh, interesting.
Did you feel closer to that guy or?
I felt bad for him.
You felt bad for him.
Sorry.
Yeah, my feelings went out to this other person, who was me.
As he's having this empathetic reaction as Freud, looking back at
himself, the researcher's voice again appears in his ear.
Give advice from the perspective of Sigmund Freud,
advice of how this problem could be solved, how you could deal with it.
Essentially respond to your patient.
So I didn't know what to say, so I said, why do you think you feel bad?
That was a good Freudian kind of thing.
Yeah. Why do you think you feel bad?
As soon as he asked that, he's back in his body,
his virtual body staring back at virtual Freud
and he sees a playback of Freud
asking him that question. I watched
Freud say this to me. Why do you think you feel bad?
Except that when Freud talks, they had
some thing in the program that made
his voice extra deep.
It has some
voice distortion. It's a deeper voice.
And so his voice didn't sound like my voice.
How did you respond as now you?
I said, I feel bad because
it doesn't seem right that I'm
living far away.
Once again, he switches bodies.
Now he's in Freud again, staring back at himself.
And I watched myself say this.
I feel bad because...
And then as Freud, I said, well, why are you far away then?
Shump.
Back into his own body.
Freud says to him from across the room.
Why are you far away, though?
And I said, well, because if my mom lived in New York, I'd be the only person here.
But if she's down and where she lives, then there's other people to visit her.
Shump.
Back in Freud's body.
And I said, so it sounds like there's a reason why you live where you live.
So if you know that, why do you still feel bad?
Switches back to himself.
If you know that, why do you still feel bad?
I said something like, you're right.
And went back into Freud.
And then as Freud, I said, you know, it sounds like the thing that's making you unhappy,
which is making you feel bad, which is getting these reports from these people.
people is actually the whole reason why you decided to live in these, you know, to have,
keep your mom where she is.
Like there's a loop, right?
It's like these reports I get from my mom's friends make me feel bad.
But the whole reason why I decided to leave her in this place in Virginia is specifically
so that there are friends who can visit her.
There's this classic idea in psychology called the reframe, which is where you try and take a problem
and reframe that problem into its solution.
And he says in that moment, he kind of did that.
He had this very simple epiphany
that his guilt was actually connected to something good.
I never had that thought before.
He chose to keep his mom in Virginia
so that her friends would visit her more
and each time her friends visited he felt bad.
But that meant they were visiting.
So the bad feeling and the fact that he was feeling it so much
was itself kind of evidence for the fact
that he had made, if not the right decision,
at least a decision that made sense.
The experience I had talking to myself as Freud
was nothing like the experience I had in my own head
turning this issue over and over.
By switching back and forth, by swapping bodies,
somehow you can give advice from a different perspective.
When I was back in my own body and Freud said it to me,
I was just like, I just felt like, wow.
That's so...
Good point.
That was my...
Wouldn't your next thought be what the hell is going on here?
Why am I able in this utterly fictive situation to split myself into and heal myself?
Well, I took the headset off and I sat there for a little while while the researchers looked at me trying to make sense of it.
And I think what I keep coming back to is the seeing yourself just as a person, not as you, not with
all the complexities and stuff that is in your self-experience of being yourself.
And this might be the real key thing.
Like when you are in your body, which you pretty much always are, you have all of these
thoughts and feelings which are attached to that body.
It's sort of like when you go home for Thanksgiving and you walk into your parents' kitchen
and suddenly you just kind of feel like you're a teenager again.
like all those same thought patterns from your youth
kind of kick back into gear
because the context of that kitchen is powerful
and you, your body, is that writ large
but if you can jump out of it and go into a new one
suddenly all those constraints
and all that context is gone.
When I'm embodied as Freud,
not only do I look different and think this is my body,
but I feel different
and I have different types of thoughts
and I see people differently.
And Josh says what he saw when he was Freud looking back at himself
was just a guy who needed help.
When someone comes to you and asks for help,
your feelings are not complicated.
They're just tenderness, kindness.
Your instinct is to help them.
And he says he was able to bring that very simple way of being back to himself.
Did it make a difference?
Did you walk out of that?
with a different feeling about yourself?
I did.
I think I've had a feeling of,
I think it revised my feeling about who I was a little.
I think it made me feel a little more,
I don't even have a word for it,
just a little more human.
Josh Rothman is a writer for The New Yorker.
His story first appeared there,
and we told it to that live audience at the Green Space.
So, Brian, this is,
You get the last word.
To me, this is really interesting because the history of chatbots begins with a chatbot program written in the 60s by an MIT professor named Joseph Weizenbaum.
And the program was called Eliza, and it was designed to mimic this non-directive, Rogerian therapist, where you would say, I'm feeling sad.
It would just throw it back to you as a kind of madlib.
I'm sorry to hear you're sad.
Why are you sad?
and Weizenbaum was famously horrified when he walked in on his secretary,
just like spilling her life's innermost thoughts and feelings to this program,
that she had seen him write.
So there's no mystery there.
But he came away from that experience feeling appalled at the degree to which people will sort
of project human intention onto just technology.
And his reaction was to pull
the plug on his own research project, and for the rest of his life, he became one of the leading
critics against chatbot technology and against AI in general. And I think it's really
powerful to juxtapose that against the story that you've just shared, which tells us that
there's more to the picture than that, that there are ways to use this technology in a way that
doesn't sort of distance us, but in a way that sort of enables us to be more fully human.
And I think that's a wonderful way to think about it.
Well, why don't we just leave it there pleasantly?
We have some thanks to give, but you have particular thanks to give to the person who made
this whole cybersphere around us possible.
That's Lauren Kunze.
Lauren, I like that's Lauren.
Thank you to Pandorabots, which is a platform that powers conversational AI
software for hundreds of thousands of global brands and developers. Learn more about their enterprise
offering and services at pandorabots.com. Thanks also to Chance Bone for designing the Robert
or Robot artwork for tonight. And of course, to Brian Christian for coming here to talk with us.
Yes. Thank you. And to you. Okay. Thank you guys so much.
This episode was reported and produced by Simon Adler and our live event was produced with
machine-like efficiency by Simon Adler and Suzie Lechtenberg.
I don't have to say a word. Every time you look at me, I can see it all in your eyes.
Me think about most hear your voice. Even though you're far away, I can feel you right by my side.
I can read your mind, your mind.
By the way, thanks to Dylan Keefe, Alex Overington, and Dylan Green for original music.
I can see the truth. All the secrets of the heart.
You can't hide them anymore.
You.
I can read your mind.
I can read your mind.
I can read your mind.
Start of message.
Hi, this is Brian Christian.
Radiolab was created by Jad Abumrad and is produced by Soren Wheeler.
Dylan Keefe is our director of sound design.
Maria Matasar Padilla is our managing director.
Our staff includes Simon Adler, Maggie Bartolomeo, Becca Bressler, Rachael Cusick, David Gebel,
Bethel Habte, Tracie Hunte, Matt Kielty, Robert Krulwich, Annie McEwen, Latif Nasser, Malissa O'Donnell, Arianne Wack, Pat Walters, and Molly Webster.
With help from Amanda Aronczyk, Shima Oliaee, and Reed Kinnon.
Our fact checker is Michelle Harris.
End of message.
