Radiolab - Talking to Machines
Episode Date: May 31, 2011. This hour of Radiolab, Jad and Robert meet humans and robots who are trying to connect, and blur the line. ...
Transcript
Wait, you're listening.
Okay.
All right.
You're listening to Radio Lab.
Radio Lab.
From WNYC.
See?
Yeah.
And NPR.
Hi there.
We're going to start today's program with a fellow named Robert.
Is it Ep-steen or Ep-stine?
Just think Einstein with an Ep.
Okay.
That would make it Ep-stine, I guess.
That's right.
And where are we reaching you right now?
I am in the San Diego area.
Robert Epstein is a psychologist.
Former editor-in-chief of Psychology Today magazine.
He's written a ton of books on relationships and love,
and he also happens to be one of the world's leading researchers in computer human interactions.
Like artificial intelligence, basically.
That is correct, yes.
So when did you decide to go on the computer to get a date?
2006 maybe
Why do you ask?
Oh, no reason.
Well, what happened?
You had gotten divorced.
Yeah, I was single at the time.
Yeah, I was divorced.
You decided that you'd try love in all the right places?
Oh, sure.
Well, online dating and everyone was doing it.
My cousin actually convinced me to try it.
So I did.
And I went online, and I looked at photos, and I looked at profiles.
and I communicated with various people who were willing to talk to me.
And one of the women I was communicating with lived in Southern California, where I do.
So I thought that's great because, you know, you want someone to be nearby.
And she had a very attractive photo online.
And her English was poor, which at first bothered me.
And then she said, well, she's not really in California.
She's really in Russia.
Oh.
But all four of my grandparents came from Russia.
So I thought, well, I'll go with it.
So I continued to write to her.
Hi, sweet Svetlana.
It's very warm here now, and I've been doing a lot of swimming.
I've also been writing, doing computer programming.
She wrote back to me in very poor English.
Hello, dear Robert.
Dear mine, I have received your letter.
I am very happy.
I remember that she liked to walk in parks.
Went on Vauk with the girlfriend and Vivant and walked in park.
And telling me about her family and her mom.
My mom asked me.
about you today and bespoke much and long time.
They lived in a small apartment.
I knew where in Russia they lived.
Yours, Svetlana.
I felt like we were bonding for sure.
Hello, I might be able to come to Moscow on Sunday, April 15th, departing Thursday, April 19th, with love, Robert.
So it was getting serious.
Oh, yeah, of course.
Well, then what happened?
Well, two months passed, and I began to feel uncomfortable.
Something wasn't right.
Hello, my dear.
There were no phone calls.
Dear mine, I am very happy.
At some point I began to suggest a phone call, but there weren't any.
But the main problem was, I would say something like,
Did you get my letter about me coming to Moscow in April?
Or tell me more about this friend of yours that you mentioned.
And she did not.
Dear mine, I am very glad to your letter.
She did not.
She was still replying with fairly long emails.
I'm fine.
Weather at my city, very bad.
but they were kind of rambling in general.
I think of you always much, and I very much want to see more likely you.
I already gave you some dates for a visit to Moscow, my love.
What do you think about that?
Then, at some point, a little bell went off in my head finally,
and I started to send some emails, which, let's say, included random alphabet letters.
So you say, what are you wearing tonight?
Are you wearing a DBGGG-G-G-LP?
Exactly.
and it didn't make any difference.
Hello, dear Robert.
Your letters do me very happy when I open a letterbox.
And that's when I realized
Ivana was not a person.
Ivana was a computer program.
I had been had.
Wow. So what did you think?
I felt like a fool.
I felt like an incredible fool,
especially given my background
that I had been fooled that
long. Now, I can tell you, now this is something I've never made public about the other example.
Robert went on to tell us that not long after that first incident, he was corresponding with someone.
With a woman, I thought.
Who also turned out to be a robot. And he discovered it this time because...
The programmer contacted me from the UK and said, I know who you are. You have not been communicating
with a person. You've been communicating with a chatbot.
You've been now undressed twice by robots.
So to speak.
Well, it may be more than twice.
Well, how common do you think this is?
Do you think that Match.com and all those places are, like, swarming with these bots?
You know, I bet you they are.
Stop it.
That's what you have to understand.
There are hundreds of these things out there.
There might be thousands.
You're amazing.
That's what's coming.
What sign are you?
I told my girlfriends all about you.
In a world like this, you're wonderful.
We are surrounded by artificial life forms.
What do you look like?
Things can get a little confusing.
And in fact, we're going to do a whole show about that confusion.
About the sometimes peculiar?
Sometimes strange.
Things that can happen when humans and machines collide.
Collide, but don't quite know who's on what side of the road.
Yeah.
I'm Jad Abumrad.
That was good.
That was good.
Just go with this.
Okay, I'm Robert Krulwich.
This is Radio Lab.
And we're talking to machines.
You are so special.
Send me your credit card info, though.
I love peppermint.
To start things off, let's introduce you to the person who really hooked us on this whole idea of human robot chit-chat.
My name is Brian Christian.
He's a writer.
Are you Christian?
Religiously?
No.
That's not at all related to anything.
What's wrong with you?
That's his name.
No, what's important is that he wrote a book.
It's called The Most Human Human.
Which is all about the confusing things that can happen when people and machines interact.
How did you... this is such a curious thing to get into.
Yeah, how did you get into this?
I played with MS-DOS intently when I was a child.
Yeah, there you go.
DOS is kind of the early version of Windows.
I was programming these sort of rudimentary maze games.
Like a cursor going through a maze?
Yeah, basically.
Did this by any chance mean you did not develop best friends?
A lot of my best friends were also into that, yeah.
Wow.
We were not the coolest, but.
we had a lot of fun.
So there you are, and you just had a, you just had a talent for this?
Yeah, I don't know what it was.
I mean, I was just, there was something, I think, fascinating to me that, that you could take a process that you knew how to do.
But in breaking it down to steps that were that explicit, you often learned something about how the process actually works.
For me, programming is surprisingly linked to introspection.
How exactly?
Well, you know, if a computer were a person, you can imagine someone, someone, you know,
sitting in your living room and you say, you know, can you hand me that book? And it would say,
no, I can't do that because there's a coffee cup on it. And you say, okay, well, pick up the coffee
cup and hand me the book. And it says, well, I can't do that because now I'm holding the cup.
And you say, okay, put down the cup, then pick up the book. And what you quickly learn, says
Brian, is that even really simple human behaviors are made up of a thousand subroutines.
I mean, if you really think about it, the book task requires knowing what is a book?
You have to know about elbows and wrists.
How to grab something.
What is a book?
I already said that.
Oh.
You need to know about gravity.
If it's a machine, you have to teach it.
Physics.
Everything in the world in order for it to just pick up a spoon.
Or a book.
I knew that.
So now, think of that Svetlanabot earlier, okay?
Trying to make something that could actually mimic human conversation, kind of sort of.
Imagine all the stuff you'd have to throw into that.
Okay, English, grammar.
Syntax, context, tone, mood, sarcasm, irony, adverbs, turn-taking.
Well, it's not actually as impossible as you'd imagine.
This is kind of startling.
If you go back to the very early days of software programming in the mid-60s,
1964, 1965, this was actually done with a little program called Eliza,
and it was developed by Joseph Weizenbaum at MIT.
But in Weizenbaum's case, his model was not a Russian hottie.
Instead, it was a...
Well...
non-directive Rogerian therapist.
The what therapist?
It's a particular school of therapy.
The kind where the therapist basically mirrors...
Mirrors what you're saying.
What you're saying.
What you're saying.
This is Sherry Turkle.
She's an anthropologist at
the Massachusetts Institute of Technology.
And she worked with Joe Weizenbaum.
Or is it Weizenbaum?
It's Weizenbaum, at MIT.
So if you say, you know, I...
I'm feeling depressed.
The therapist says...
I'm sorry to hear you're feeling depressed.
Tell me more.
Joseph Weizenbaum decides, you know, I think that's an...
Easy enough type of conversation that I can program that into my computer.
And so he writes up a simple little program.
Just about 100 lines of code.
Which does sort of what your therapist does.
Where it looks for a keyword and what you're saying.
As in, I'm feeling depressed. Keyword, depressed.
Latches onto it and then basically flips it back to you.
I'm sorry to hear that you're feeling keyword, depressed.
Right.
It's basically a program that inverts your words, and it's a language game.
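For readers curious what that keyword-flipping trick looks like in practice, here is a minimal sketch in Python. Weizenbaum's original was written in MAD-SLIP and had far richer rules; the patterns and phrasings below are illustrative guesses, not his actual code.

```python
import re

# Pronoun swaps so the statement points back at the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# A few keyword rules in the spirit of the examples above; the real Eliza had many more.
RULES = [
    (r"i feel (.*)", "I'm sorry to hear you are feeling {0}. Tell me more."),
    (r"my (.*) made me come here", "Your {0} made you come here?"),
    (r".*\b(mother|father|family)\b.*", "Tell me more about your family."),
    (r"i am (.*)", "Why do you say you are {0}?"),
]

def reflect(fragment):
    # "my boyfriend" -> "your boyfriend", "i" -> "you", and so on.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    s = statement.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, s)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # fallback when no keyword is found

print(respond("I feel depressed"))                # I'm sorry to hear you are feeling depressed. Tell me more.
print(respond("My boyfriend made me come here"))  # Your boyfriend made you come here?
```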
So here's what he did.
He put a computer on a desk running this little program,
I mean, he invited people to sit down and talk to it.
They'd write in something, and it would spit back the keyword, and then they would repeat.
And it would go on and on.
You wouldn't think this would be very interesting, right?
Nope.
But he found that his students who knew that the program didn't know or understand,
who had helped him write the program, sophisticated students,
nevertheless wanted to be alone with it.
Wow, they actually helped write the program.
Yes.
I can read you, if you like, one of the conversations.
Yes, please.
This is an actual transcript between Eliza and a young woman.
Yeah, men are all alike.
In what way?
They're always bugging us about something or other.
Can you think of a specific example?
Well, my boyfriend made me come here.
Your boyfriend made you come here.
He says I'm depressed much of the time.
I'm so sorry to hear you are depressed.
It's true. I'm unhappy.
Do you think coming here will help you not to be unhappy?
Oh, I need some help.
That much seems certain.
What would it mean to you if you got some help?
Perhaps I could learn to get along with my mother.
Tell me more about your family.
Wow.
And so it's this...
That could go on for hours.
Well, and in fact, it did.
My mother takes care of me.
Who else in your family takes care of you?
My father.
Your father.
You're like my father in some ways.
What resemblance do you see?
Well, you're not very aggressive.
What makes you think I'm not very aggressive?
You don't argue with me?
Why do you think I don't argue with you?
You are afraid of me?
Wait a second.
Is the woman in this transcript messing around, or is she really pouring her heart out?
We know that this is a woman who works for Joe Weisenbaum.
She's sitting in the office and she just can't stop talking to it.
That's all we know.
What else comes to mind when you think of your father?
Bullies.
And Weizenbaum is watching all this.
And he first thought it was funny
and then he didn't think it was funny
because they were actually having conversations with it.
One day he comes into the office and...
His secretary is on the computer divulging her life story to it.
According to Weizenbaum, she even told him to please leave the room
so she could be alone with it.
And talk to it.
and he was very upset.
Nevertheless.
When word about Eliza got out...
The medical community sort of latches onto it.
Really?
It says, oh, this is going to be the next revolution in therapy.
Something new and promising in the field of psychotherapy.
This is from a newscast around that time.
Therapists in, like, phone booths in cities,
and you're going to walk in and put a quarter in the slot
and have, you know, half an hour of therapy with this automatic program.
Computer time can be rented
for $5 an hour, and there's every reason to suspect that it will go down significantly.
People really thought that they were going to replace therapists with computers?
Absolutely. Really? They did. Absolutely. And it was just this really appalling moment for Weizenbaum of
there's something, the genie is out of the bottle maybe in a bad way. And he does this 180 of his entire career.
So he pulls the plug on the program. He cuts the funding.
and he goes from being one of the main advocates for artificial intelligence
to basically committing the rest of his career
to fighting against artificial intelligence.
This is Joseph Weizenbaum, interviewed in German
just before he died in 2008.
It was in the German documentary Plug & Pray.
And my main objection is...
My main objection, he said,
If the thing says, I understand... if somebody types in something and the machine says, I understand,
there's no one there.
So it's a lie.
And I can't imagine that people who are emotionally imbalanced could be effectively treated by systematically lying to them.
I must say that my reaction to the Eliza program at the time was to try
to reassure him. At the time, what I thought people were doing was using it as a kind of
interactive diary, knowing that it was a machine, but using it as an occasion to breathe
life into it in order to get their feelings out.
I think she's right to have said that to him. You do? Yeah, because he says it's a lie.
Well, it is a lie. How is it a lie? Because a machine can't love anything. Yes, and if you
are a sensible human being, you know that.
And it's sitting right there on the desk. It's not
pretending. Well, these are sensible human beings
that were already a little bit seduced.
Just go forward 100 years.
Imagine a machine that is
very sophisticated, very fluent,
very convincingly human.
You're talking about Blade Runner, basically.
Yeah, exactly. At that point, I think I
would require some kind of label
to remind me that this
is a thing. It's not a being.
It's just a thing. Okay, but here's something
to think about. If the
machines get to that point, which is a big
if, where you'd want to label them.
Well, you're going to need a way to know
when they've crossed that
line and become
mindful. Yeah, so I should
back up for a sec and say that
in 1950, they're
just starting to develop the computer
and they're already asking these philosophical
questions. Like, can these
machines think? You know, will we
someday be able to make a machine
that could think? And if we did,
how would we know? And so
a British mathematician named Alan Turing.
Proposed a simple thought experiment.
Here's how we'll know when the machines make it across the line.
Get a person, sit him down on a computer,
have them start a conversation and text.
Hi, how are you? Enter. Good pops up on the screen.
Sort of like internet chat. Yep.
So after that first conversation, have him do it again,
and then again, hi, hello, how are you, etc.
Back and forth. Then again, over and over.
But here's the catch.
Half of these conversations will be with real people.
half will be with these computer programs
that are basically impersonating people.
And the person in the seat, the human,
has to judge which of the conversations were with people,
and which were with computers.
Turing's idea was that if those computer fakes
could fool the human judge a certain percentage of the time?
Turing's magic threshold was 30%.
Then at that point,
we can basically consider machines intelligent.
Because, you know, if you can't tell the machine isn't human,
then you can't say it's not intelligent.
Yeah, that's basically, yeah.
He said 30% of the time
Yeah, Turing...
Because the natural number to me would be half, you know?
51% would seem to be like the ka-ching moment.
Right.
30%.
I don't know.
Well, 51% is actually a horrifying number in the context of the Turing test
because you've got these two conversations and you're trying to decide which is the real person.
So if the computer were indistinguishable, that would be 50%.
You know, the judge is doing no better than chance.
So if a computer hits 51%, that means it's out-humaned the human.
Oh, yeah, that is horrifying.
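Turing's thresholds are just arithmetic on judges' votes, and a quick sketch in Python makes it concrete. The 12-judge panel below is an assumed example for illustration, chosen to be consistent with the "25 percent, one vote away" result described later; it is not a detail from the interview itself.

```python
# A toy tally of a Turing-test round: each judge holds two conversations, one
# with a human and one with a program, and votes for which one was the human.
def fooled_fraction(votes_for_program):
    """Fraction of judges who picked the program as the human."""
    return sum(votes_for_program) / len(votes_for_program)

# Hypothetical panel: 12 judges, 3 of whom mistook the program for the person.
votes = [True, True, True] + [False] * 9
share = fooled_fraction(votes)
print(f"{share:.0%} of judges fooled")            # 25% of judges fooled
print("passes Turing's 30% line:", share > 0.30)  # False -- one more vote would cross it
```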
Now, something to keep in mind: when Turing thought this whole thing up,
the technology was so new, computers barely existed,
that it was sort of a leap of imagination, really.
But no longer, Robert, bring it.
Can you give me some kind of excitement music here?
Absolutely, good.
Because every year, the greatest technologists on the planet...
My name is Mohan Embar.
Hi, I'm Rollo Carpenter.
Meet in a small room with folding chairs.
I develop in Java.
And put Alan Turing's question to...
The ultimate test.
Really, it's just a couple dudes.
You know, who haven't seen the sun in 10 years in a room.
But we do now have this thing called the Loebner Prize,
which is essentially a yearly, actual Turing test.
Each judge on our judges table is going to be communicating with two entities.
One human and one program.
The way the stage is set up is you've got the judges at a table on the left on laptops.
Then a bunch of
giant server-looking machines in the middle that the programmers are fiddling with.
And then there's a curtain on the right-hand side, and we're behind the curtain.
Brian actually participated in the 2009 Loebner Prize competition, but not as a programmer,
as one of the four, quote, Confederates.
The Confederates are the real people that the judges are talking to.
Because remember, half the conversations the judges have are with people, half are with computers.
Now, Brian decided to participate that year because the year before...
2008, the top program managed to fool 25
percent of the judging panel. Pretty close to
Turing's number. Exactly. One vote
away. And so I felt
to some extent, how
can I get
involved on behalf of humanity?
How can I sort of take a stand?
That's a modest position
for you. All right, machines,
please hold your places and now
representing all humans.
Brian Christian.
Now, in terms of what Brian is up against,
the computer programs have a variety of
different strategies. For example, there was one program, Brian's year, that would do kind of a double
fake out, where it would pretend not to be a person, but a person who is sarcastically pretending to be a
robot. People would ask it a simple question and it would say, I don't have enough RAM to answer that
question, smiley face. And everyone would be like, oh, this is such a wise guy, ha ha ha. I want to tell you
now about one particular bot that competed Brian's year.
That's the guy who made it.
My program is called Cleverbot.
And that's the bot.
This is a program that employs a very spooky...
Is spooky the right word?
A very spooky strategy.
You may be surprised to hear that despite the fact that it's called Cleverbot, it states that it is a bot, it states that it is never a human right there in front of them.
Despite those facts, I receive several emails a day from people who believe that actually they are being connected to humans.
Oh, like they think they've been tricked.
Yes, tricked into coming to a site that claims to be a bot
when in fact they're talking to humans.
That no program could possibly respond in this way.
And there is a certain element of truth in that.
To explain: Rollo Carpenter, like Brian, was one of those kids
who was completely obsessed by computers.
I was indeed a computery kid.
And when he was just a teenager.
Age about 16 or so.
Wrote his first chatbot.
I created a program.
that talked to me.
No kidding.
Yes.
You typed in something and it would say something back.
Though at that time the responses were essentially...
Pre-programmed.
And really simple, kind of like Eliza.
But...
One evening, I think...
Fast forward many years. He is in his apartment one night, he says, when...
A switch suddenly flipped in my mind.
And I suddenly saw how to make the machine learn...
On its own.
What if, he thought?
What if it just started at zero?
Like a little baby.
And it would grow...
grow in these discrete little increments every time you talk to it.
Right. Basically, the first thing that was said to that program that I created the first version of that night was said back by it.
Meaning, if he said to it hello, it now knew one thing, the word hello, so it would say hello back.
The second thing it said was a choice of the first two things said to it.
So if the second thing you said was, how are you doing, it now knew two things.
The word hello and the phrase, how are you doing?
so it could either say hello back again or how are you doing?
The third thing it said was a choice of the first three things and so on.
Ad infinitum.
Well, not quite ad infinitum, but between 1988 and 1997,
a few thousand conversations took place between myself and it
and a few of my friends and it.
He and his friends would sit there and type things to it
as a way of teaching it new things.
But it was just them, so it was slow going.
So it languished for quite a long time.
But then I started working with the internet.
Put it online.
Where anyone could talk to it.
Within the next 10 years, it had learned something like 5 million lines of conversation.
Now, it is frequently handling around 200,000 requests an hour.
And it's talking to more than 3 million people a month.
three million conversations a month.
And after each one, Cleverbot knows a little bit more than it did before.
And every time you say something to it, like, hey, cleverbot, why am I so sad?
It is accessing the conversations that millions of people have had in the past.
Asking itself.
What is the best overlap? Where is the best correlation?
How do people usually answer this question? Why am I so sad?
That's right.
And then...
A response.
Cleverbot answers.
Just because.
Hmm.
All right.
Well, why? There must be a reason why I'm so sad.
Because you have been sitting in the same place for too long.
Is that?
Who's saying that exactly?
Where does that response come from?
The answer is it is one human being at some point in the past, having said that.
So that is one moment of human conversation from one person?
Yes.
So it's like I'm talking to a ghost.
You are talking to, its intelligence, if you like, is borrowed from millions of people in the past.
A little bit of their conversational knowledge, their conversational intelligence, goes into forming your reply.
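As a rough sketch of that "best overlap" idea: store past prompt-and-reply pairs, and answer a new prompt with the reply a person once gave to the most similar stored prompt. The word-overlap similarity below is an assumption for illustration; Cleverbot's actual matching is not public.

```python
import re

def words(s):
    return set(re.findall(r"[a-z']+", s.lower()))

def similarity(a, b):
    # Fraction of words the two utterances share (Jaccard overlap).
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) if (wa | wb) else 0.0

# Tiny stand-in for millions of logged human exchanges.
history = [
    ("why am i so sad", "Just because."),
    ("why am i so tired", "Because you have been sitting in the same place for too long."),
    ("what is your name", "Cleverbot."),
]

def reply(prompt):
    # Find the stored prompt that best overlaps the new one,
    # and hand back the reply a real person once gave to it.
    best_prompt, best_reply = max(history, key=lambda pair: similarity(prompt, pair[0]))
    return best_reply

print(reply("Why am I so sad?"))  # -> "Just because."
```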
Now, what's interesting, says Rollo, is that when you start a conversation with Cleverbot, it doesn't really have a personality, or no one personality.
Cleverbot is everything to everyone.
It's just this big hive, really.
But as you keep talking to it, and it's sort of pulling forward, from
the hive, these little ghost fragments of past conversations, stitching them together.
A form does kind of emerge.
It reflects the person that it is speaking to. It becomes somewhat like that person.
Someone familiar.
Already people have very emotional conversations with it. People have complete arguments with it and
of course they try to get it into bed. By talking dirty to it?
Yeah. One thing I can tell you,
is that I have seen a single person, a teenage girl,
speaking for 11 hours with just three 15-minute breaks.
Whoa.
About what?
Everything.
The day will come not too far down the road,
where Cleverbot becomes so interesting to talk to
that people will be talking to it all day every day.
But we're not there yet,
because the same thing that makes Cleverbot so interesting,
to talk to also can make it kind of ridiculous.
For example, in our interview with Brian, he was the first person to turn us on to this program.
As we were talking, Soren just sort of suggested, well, why don't we just try it right now?
You want to try it, you want to talk, you want to say to Cleverbot, I feel blue?
Sure, yeah. Are you pulling Cleverbot up?
Is it just cleverbot.org or something?
Cleverbot.com.
Can you say I feel blue because an asteroid hit my house this morning?
So this is, you've hit on a perfect strategy.
of dealing with these bots.
Absurdity?
Yes.
Well, basically saying something
that has never been said before
to Cleverbot.
So it's likely
that no one has ever claimed
an asteroid hit their house.
It's weird enough
that it may not be in the database.
Okay.
All right.
I feel blue. An asteroid hit my house this morning.
Cleverbot says...
I woke up at 1 p.m. this afternoon.
Well, there we go.
It's not quite so clever.
See, you don't have to worry yet, Krulwich.
In fact, when I went online to YouTube
and watched the Loebner competition that Brian attended,
it turns out none of the computers fooled the judges at all.
None?
Well, I don't know if none none, but they did really badly.
There were no ambiguities between the programs and the people.
For me, one of the strange takeaways
of thinking so much about artificial intelligence
is this feeling of how complex
it is to sit across a table from someone and communicate with body language and tone and, you know,
rhythm and all of these things. What happens when those conversations are working out well is that
we're willing to move the conversation in ways that allows us to be sort of perpetually
startling to one another. That's a good word, startling. Yeah. You learn someone through
these small surprises. Thanks to Brian Christian, his excellent book which inspired this hour
called The Most Human Human.
Go to RadioLab.org for more info.
Thanks also to our actors, Sarah Thier, Andy Richter, and Susan Blackwell.
Hi, this is Brian Christian.
Radio Lab is funded...
Hello. I'm a machine.
Radio Lab is funded, in part, by the Alfred P. Sloan Foundation,
enhancing public understanding of science and technology in the modern world.
More information about Sloan at www.sloan.org.
Hello, this is Sherry Turkle.
Radio Lab is produced by WNYC and distributed by NPR.
Bye-bye.
Hey, I'm Jad Abumrad.
I'm Robert Krulwich.
This is Radio Lab.
And we are exploring the blur that takes place when humans and machines interact and investigate each other.
Talk to each other.
See, that's the thing.
In the last act, we were always talking, talking, talking, talking, talking.
How about we encounter machines in a different way?
How about we?
No talking?
No talking.
We touch them.
Ew.
We pet them.
We sniff them.
We do sensual things that don't involve the sophisticated business of conversation.
Okay.
This is Freedom Baird.
Yes, it is.
Who's not a machine?
I don't think so.
I'm Jad, and this is...
I'm Robert here.
Hi there.
Nice to meet both of you.
We called her up because Freedom actually had her own kind of moment with a machine.
Yep, yep.
This was around 1999.
Freedom was a graduate student.
At the Media Lab at MIT.
What were you doing there?
We were developing cinema of the future.
So we were working on creating virtual characters that you can interact with.
Anyhow, she was also thinking about becoming a mom.
Yeah, I knew I wanted to be a mom someday.
So she decided to practice.
I got two gerbils, Twinky and Ho-ho.
So I had these two live pets.
And then she got herself a pet that was, well, not so alive.
Yeah, I've got it right here.
Can you knock it against the mic so we can hear it? Say hello to it.
Yeah, there it is.
Hi, Furby.
It's my Furby.
Furby loves in love and touch.
At that time, Furbies were hot and happening.
Can you describe a Furby for those of us who...
Sure.
It's about five inches tall, and the Furby is pretty much all head.
It's just a big round, fluffy head with two little feet sticking out the front.
It has big eyes.
Apparently it makes noises?
Yep.
If you tickle its tummy, it will coo.
It would say, kiss me.
And it would want you to just keep playing with it.
So, you know, I spent about 10 weeks using the Furby.
I would carry it around in my bag.
And one day she's hanging out with her Furby, and she notices something...
Very eerie.
What I discovered is if you hold it upside down, it will say...
Me scared.
Me scared.
Uh-oh. Me scared. Me scared. Me scared. And me as the, you know, the sort of owner-slash-user of this Furby would get really uncomfortable with that and then turn it back up, upright.
Because once you have it upright, it's fine. It's fine. And then it's fine. So it's got some sensor in it that knows, you know, what direction it's facing.
Or maybe it's just scared.
Sorry. Anyway, she thought, well, wait a second now.
This could be sort of a new way that you could use to draw the line between what's human and what's machine.
Yeah.
Kind of, this kind of emotional Turing test.
Can you guys hear me?
Yes.
We can hear you.
If we actually wanted to do this test, could you help? How would we do it exactly?
How are you guys doing?
We're good.
Yeah?
You would need a group of kids.
Can you guys tell me your name?
I'm Olivia.
Louisa.
Turin.
Darrell.
Lila.
And I'm Sadie.
All right.
I'm thinking six, seven, and eight.
And how old are you guys?
Seven.
The age of reason, you know.
Eight.
Then says freedom, we're going to need three things.
A Furby.
Of course.
Barbie.
A Barbie doll.
And Gerby.
Gerby.
That's a gerbil.
A real gerbil?
Yeah.
And we did find one except it turned out to be a hamster.
Sorry, you're a hamster, but we're going to call you Gerby.
So you've got Furby, Barbie, and Jerby.
Right.
So we just said, what question are we asking in this test?
The question was, how long can you keep it upside down before you yourself feel
uncomfortable. So we should time the kids as they hold each one upside down.
Yeah. Including the gerbil. Yeah. You're going to have a Barbie, that's a doll. You're going to have
Jerby, which is alive. Now where would Furby fall, in terms of time held upside down? Would it be closer
to the living thing or to the doll? I mean that was really the question. Phase one. Okay so
here's what we're going to do. It's going to be really simple. You would have to say well here's a
Barbie. Do you guys play with Barbies? Just do a couple things, a few things with Barbie.
Barbie's walking, looking at the flowers.
And then?
Hold Barbie upside down.
Okay.
Let's see how long you can hold Barbie like that.
I could probably do it obviously very long.
Yeah, let's just see.
Whenever you feel like you want to turn around.
I feel fine.
I'm happy.
This went on forever, so let's just fast forward a bit.
Okay, and...
Can I put my arms?
My elbows down.
So what we learned here in phase one is the not surprising fact that kids can hold Barbie dolls upside down.
For like about five minutes.
Yeah, it really was forever.
Could have been longer, but their arms got tired.
All right, so that was the first task.
Time for phase two.
Do the same thing with Jerby.
So out with Barbie.
In with Jerby.
Aw, he's so cute.
Are we going to have to hold them upside down?
That's the test, yeah.
So which one of you would like to...
I'll try and be very.
Okay, ready?
Oh, God.
You have to hold Jerby kind of firmly.
There you go.
There she goes, she's wriggling.
By the way, no rodents were harmed in this whole situation.
Squirmie.
Yeah, she is pretty squirmy.
I don't think it wants to be upside down.
Oh, God.
Don't do this.
Oh, my God.
There you go.
Okay.
So, as you heard, uh...
I got a little jerby.
The kids turned Jerby over very fast.
I just didn't want him to get hurt.
On average, eight seconds.
I was thinking, oh my God, I got to put him down.
And it was a tortured eight seconds.
Now, phase three.
Right.
So this is a Furby.
Louisa, you take Furby in your hand.
Now, can you turn Furby upside down and hold her still?
Like that.
Hold her still.
Be quiet.
She just turned it over.
Okay, that's better.
So, gerbil was eight seconds, Barbie five minutes to infinity.
Furby turned out to be, and Freedom predicted this.
About a minute.
In other words, the kids seemed to treat this Furby, this toy, more like a gerbil than a Barbie doll.
How come you turned him over so fast?
I didn't want him to be scared.
Do you think he really felt scared?
Yeah, kind of.
Yeah?
I kind of felt guilty.
Really?
Yeah.
It's a toy and all that, but still.
Now, do you remember a time when you felt scared?
Yeah, yeah.
You don't have to tell me about it, but if you could remember it in your mind.
I do.
Do you think when Furby says me scared that Furby's feeling the same way?
Yeah.
No, no, no.
Yeah, yeah, yeah.
I'm not sure.
I'm not sure.
I think that it can feel pain.
Sort of.
The experience with the Furby seemed to leave the kids kind of.
kind of conflicted, going in different directions at once.
It was two thoughts.
Two thoughts at the same time?
Yeah.
One thought was like, look, I get it.
It's a toy for crying out loud.
But another thought was like, still.
He was helpless.
It kind of made me feel guilty in a sort of way.
It made me feel like a coward.
You know, when I was interacting with my Furby a lot,
I did have this feeling sometimes of having my chain yanked.
Why would a...
Is it just the little squeals that it makes, or is there something about the toy that makes it good at this?
That was kind of my question.
So I called up...
I'm in the studio as well.
I'll have him...
I'm here.
This freight train of a guy.
Hey.
Okay, this is Jad from Radio Lab.
Jad from Radio Lab.
Got it.
How are you?
I'm good.
Beautiful day here in Boise.
This is Caleb Chung.
He actually designed the Furby.
Yeah.
We're all Furby crazy here, so there's medication you can take for that.
To start, can you just give me the sort of fast-cutting MTV montage of your life leading up to
Furby? Sure. Hi, hippie parents out of the house at 15 and a half put myself through junior high.
Started my first business at 19 or something. Early 20s being a street mime in L.A.
Street mime, wow. Became an actor. It did like 120 shows in an orangutan costume. Then I started
working on special effects and building my own, taking those around to studios and put me in the suit,
build the suit around me, put me out on location, I could fix it when it broke. Wow.
Yeah, that was... Anyhow. After a long and circuitous route, Caleb Chung eventually made it into toys.
I answered an ad at Mattel. Found himself in his garage and there's piles of
styrene plastic, X-Acto knives, super glue, little Mabuchi motors.
Making these little prototypes.
Yeah.
And the goal, he says, was always very simple.
How do I get a kid to have this thing hang around with them for a long time?
How do I get a kid to actually bond with it?
Most toys, you play for 15 minutes, and then you put them in the corner until their batteries are dead.
I wanted something that they would play with for a long time.
So how do you make that toy?
Well, there's rules.
There's, you know, the size of the eyes.
There's the distance of the top lid to the pupil.
Right.
You don't want any of the top of the white of your eye showing.
That's freaky surprise.
Now, when it came to the eyes, I had a choice with my one little mechanism.
I can make the eyes go left or right or up and down.
So it's up to you.
You can make the eyes go left or right or up and down.
Do you have a preference, left or right or up and down?
I think I would choose left or right.
Okay.
I'm not sure why I say that, but that's...
All right.
So let's take that apart.
Let's.
If you're talking to somebody and they look left or right while they're talking to you,
what does that communicate?
Oh, shifty.
Or they're trying to find the person who's more important than you behind you.
Oh, so okay, now I want to change my answer now.
I want to say up and down.
You would.
If you look at a baby and the way a baby looks at their mother, they track from eyebrows to mouth.
They track up and down on the face.
So had you made Furby look left and right rather than up and down, it would have probably flopped?
No, it wouldn't have flopped.
It would just suck a little.
It's like a bad actor who uses his arms too much.
You'd notice it and it would keep you from just being in the moment.
But what is the thought behind that?
Is it that you want to convince the child that the thing they're using is fill in the blank?
What?
Yeah, alive.
There's three elements, I believe, in creating something that feels to a human like it's alive.
I kind of rewrote Asimov's laws.
The first is it has to feel and show emotions.
Were you drawing on your mime days for that?
Of course.
Those experiences in the park?
Of course.
You really break the body into parts.
and you realize you can communicate physically.
So if your chest goes up and your head goes up and your arms go up,
that's happy. If your head is forward and your chest is forward,
you're kind of this angry guy.
And he says when it came time to make Furby,
he took that language of gesture and focused it on Furby's ears.
The ears, when they went up, that was surprised.
And when they went down, it was depression.
So that's rule number one.
The second rule is to be aware of themselves and their environment.
So if there's a loud noise, it needs to know that there was a loud noise.
So he gave the Furby little sensors so that if you go,
It'll say.
The third thing is change over time.
Their behaviors have to change over time.
That's a really important thing.
It's a very powerful thing that we don't expect, but when it happens, we go, wow.
And so one of the ways we showed that was acquiring human language.
Yeah.
When you first get your Furby, it doesn't speak English.
It speaks furbished.
This kind of baby talk language.
And then the way it's programmed, it will sort of slowly, over
time, replace its baby talk phrases with real English phrases. So you get the feeling that it's
learning from you. Though of course it's not. No. It has no language comprehension. Right. So you've got
these three rules. Feel and show emotions. Be aware of their environment. Change over time. And oddly enough,
they all seem to come together in that moment you turn the Furby upside down. Because it seems to
know it's upside down. So it's responding to its environment. It's definitely expressing emotion.
And as you hold it there, what it's saying is changing over time
because it starts with hey, and then it goes to me scared, and then it starts to cry.
And all this adds up so that when you're holding the damn toy,
even though you know it's just a toy, you still feel discomfort.
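Chung's three rules, as they intersect in that upside-down moment, can be read as a tiny state machine: a sensor reading drives an emotional display that escalates the longer the state persists. The thresholds and phrases in the sketch below are guesses for illustration, not the actual Furby firmware.

```python
class ToyFurby:
    """Toy state machine: rule 2 (sense the environment) feeds rule 1 (show
    emotion), and rule 3 (change over time) escalates the display."""

    def __init__(self):
        self.seconds_inverted = 0

    def tick(self, upside_down):
        """Called once a second with the tilt sensor's reading."""
        if not upside_down:
            self.seconds_inverted = 0
            return "coo"              # content when upright
        self.seconds_inverted += 1
        if self.seconds_inverted < 3:
            return "hey!"             # mild protest at first
        elif self.seconds_inverted < 8:
            return "me scared"        # escalating distress
        else:
            return "*crying*"         # full-on crying if it goes on

furby = ToyFurby()
for second in range(10):
    print(second, furby.tick(upside_down=True))
```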
These creatures push our Darwinian buttons.
That's Professor Sherry Turkle again, and she says,
if they push just enough of these buttons, then something curious happens.
The machines slip across this
very important line.
From what I call relationships
of projection to relationships
of engagement. With a doll,
you project onto a doll
what you need the doll to be.
If a young girl is
feeling guilty about breaking her mom's
China, she puts her Barbie dolls
in detention. With robots,
you really
engage with the robot as though
they're a significant other,
as though they're a person. So the robot isn't
your story. The robot is its own story?
Exactly. And I think what we're forgetting as a culture is that there's nobody home. There's nobody home.
Well, I have to ask you, when is something alive? Furby can remember these events. They affect what he does going forward, and it changes his personality over time. He has all the attributes of fear or of happiness. And those are things that add up and change his behavior and how he interacts with the world. So how is that different than us?
Wait a second, though. Are you really going to go all the way there?
Absolutely.
This is a toy with servo motors and things that move its eyelids and a hundred words.
So you're saying that life is a level of complexity.
If something is alive, it's just more complex.
I think I'm saying that life is driven by the need to be alive and by these base primal animal feelings like pain and suffering.
I can code that. I can code that.
What do you mean you can code that?
Anyone who writes software, and they do, can say, okay, I need to stay alive,
therefore I'm going to come up with ways to stay alive.
I'm going to do it in a way that's very human and I'm going to do it.
We can mimic these things.
But if Furby is miming the feeling of fear, it's not the same thing as being scared.
It's not feeling scared.
It is.
How is it?
It is.
It's, again, a very simplistic version.
But if you follow that trail, you wind up with our neurons sending, you know, chemical things to other parts of our body.
Our biological systems, our code is at a chemical.
level incredibly dense and has evolved over millions of years. But it's just complex. It's not
something different than what Furby does. It's just more complex. So would you say then that Furby is alive
in a way? I think, at his level, yes. Yeah, at his level. Would you say a cockroach is alive?
Yes, but when I kill a cockroach, I know that it's feeling pain. Okay, so we went back and forth and back
and forth about this. You were so close to arguing my position. You just said to him like, it's not
feeling. I know, I know. Emotionally, I am still.
in that place, but intellectually, I can't rule out what he's saying, that if you can build a machine
that is so, such a perfect mimic of us in every single way, and it gets complex enough, eventually
it will be like a Turing test pass. And we just... the difference between us, maybe, is not so...
I can't go there. I can't go there. I can't, I can't imagine, like the fellow who began this
program, who fell in love with the robot. That attachment wasn't real. The machine didn't feel
anything like love back. In that case, it didn't. But imagine a Svetlana that is so subtle and
textured and, to use his word, complex in the way the people are. At that point, what would be the
difference? Honestly, I can't imagine a machine achieving that level of rapture and joy and love and
pain. I just don't think it's machine possible. And if it were machine possible, it somehow still
stinks of something artificial. It's a thin interaction. And I know that it feels... Simulated thinking
may be thinking, but simulated feeling is not feeling. Simulated love is never love. Exactly. But I think
what he's saying is that if it's simulated well enough, it's something like love. One thing that was really
fascinating to me was my husband and I gave a Furby as a gift to his grandmother who had
Alzheimer's and she loved it. Every day for her was kind of new and somewhat disorienting,
but she had this cute little toy that said, kiss me, I love you. And she thought it was the most
delightful thing and its little beak was covered with lipstick because she would pick it up and kiss
it every day and she didn't actually have a long-term relationship with it. For her, it was always
a short-term interaction. So what I'm describing as the kind of thinness for her was just right,
because that's what she was capable of. Thanks to Freedom Baird and to Caleb Chung.
And thanks to Professor Sherry Turkle, who has a new book. It's called Alone Together,
why we expect more from technology and less from each other. More information on anything you heard
on our website,
RadioLab.org.
Your own, an imitation,
an imitation of life.
Hi, this is Marcus from Australia.
Radio Lab is supported in part
by the National Science Foundation
and by the Alfred P. Sloan Foundation,
enhancing public understanding
of science and technology in the modern world.
More information about Sloan
at www.sloan.org.
Hey, I'm Jad Abumrad.
I'm Robert Krulwich.
This is Radio Lab.
And we are somewhere in the blur
between people and machines.
Now we're up to round three.
To review round one, chatbots.
Yeah.
Round two.
Furby.
Yep.
Now, we're going to go all the way.
Yeah, yeah, yeah.
We're going to dive right into the center
of that blur like Greg Louganis.
So...
Except our Greg is named John.
Okay, my name's Jon Ronson and I'm a writer.
About a year ago,
Jon got an assignment from a magazine.
It was the editor of American GQ's idea.
That was very strange.
Well, I'd never interviewed robots before.
That was his assignment.
Interview robots.
You know, there's this kind of gang of people.
They call themselves a sort of singularity people.
Yeah, we know about that.
Yeah, they think that, like, one day...
One day soon.
One day soon.
Suddenly computers will, like, grow feet and they'll walk off.
Oh, yeah.
So some of these...
Eat us.
It will eat us.
Some of these singularity people think that they're on the cusp of creating sentient robots.
So I went to the Singularity Convention down in San Francisco
where one of the robots was there.
And as soon as he got there, he says, he went to look at this robot.
Zeno, they called him.
Some folks took him aside and said,
actually, you're in the wrong place.
If you want to meet a really great robot,
you know, our best robot of all.
And in fact, the world's most sentient robot is in Vermont.
Did they lower their voices like you're doing?
Well, I mean, I suppose I'm slightly making it sound more dramatic.
That's okay.
The world's most sentient robot.
I mean, are those your words?
No, they say that.
Turns out the robot's name?
Bina.
Bina48.
Yeah.
And can you set the scene?
Where in the world is this?
Well, it's in a little town in Vermont, sort of affluent Vermont village.
In a house?
Yeah.
Was it a little house or is it a big?
It's like a little clapboard, pretty.
Cool.
Okay, so I have to turn my phone off so that it doesn't interfere.
I hope that.
And then they've got like a full-time keeper.
He's a guy called Bruce.
I actually have lunch with her or talk with her every day.
Oh, with Bina?
Yeah.
Oh, do you?
Yeah, she's considered being one of the staff.
Bruce says to me that he would very much like it
if I didn't behave in a profane manner in front of robot Bina.
Surely nobody's ever insulted her.
No one's insulted her on purpose,
but some people have become a little informal with her at times in ways
I guess she doesn't like.
And so she'll say, you know,
I don't like to be treated like that.
And then Bruce took me upstairs to meet the robot.
Is it a long, dark flight of stairs, heavily carpeted?
It's more like a rather sweet little flight of pine stairs up to a rather brightly lit attic room.
And when you walk in, what do you see?
Well, I guess she's just sort of sitting on a desk.
As John describes it, on the desk is a bust of a woman.
Just a bust, no legs.
She's a black woman, light-skinned, lipstick, sparkling eyes, hair and a bob.
You know, a nice kind of blouse, a kind of silk blouse, expensive-looking earrings.
She's dressed up?
Yeah, she's dressed up.
And he says she has a face that's astonishingly real.
It has muscles, it has flesh.
This is as close to a verisimilitudinous person as we've gotten so far.
And before we go any farther, a word about the humans behind that machine.
That robot is a replica of a real woman named Bina
Rothblatt, and here's the quick backstory.
It actually starts with Martin Rothblatt, Bina's partner, who, as a young man...
Had an epiphany, and the epiphany turned out to change the world.
According to John, he was pondering satellite dishes.
And he thought...
If we could find a way of doubling the power of satellites, then we could shrink satellite dishes.
It was a simple thought that...
Single-handedly invented the concept of satellite radio for cars.
And made Martin a very big deal.
At, like, the age of 20.
Fast forward a few years, he marries an artist named Bina.
They have a child.
And when the child was seven, a doctor told them that she had three years to live.
She had an untreatable lung condition called pulmonary hypertension, and she'd be dead by the time she was 10.
At that moment, Martin, instead of collapsing on the floor,
instantly went to the library and invented a cure for pulmonary hypertension.
Saving their daughter's life and thousands of others.
So twice.
Twice she changed the world.
He says she, she changed the world because somewhere along the way, Martin became Martine.
He had a sex change.
Right.
And then she came up with a third idea to change the world, which would be to invent a sentient robot.
And I gave this talk at a conference in Chicago.
This is Martine Rothblatt.
What would Darwin think of artificial consciousness?
And when I came off the stage, I was approached by an individual...
Dr. David Hanson.
Of Hanson Robotics.
Founder of Hanson Robotics.
The David Hanson.
He's worked for Disney.
He's worked all over the place.
He's one of the best robot builders in the world.
He said, wow, I really loved your talk.
We make robots that are in the likeness of people.
And Martine said, well, I have a massive everlasting love for my life partner, Bina.
I want you to do a portrait of Bina Rothblatt,
her personality, her memories, the way she moves, the way she looks, that essence, that ineffable
quality, that science can't pin down yet, bring that to life in the robot.
And he said, I can do that.
This is such a bizarre request.
What were you thinking at this moment?
That God, if God exists, is a science fiction writer.
And that this was like one of those moments where we were going to change
history. She'll recognize people's voices. Yeah, she can, she should, you can just talk to her. Say hello, Bina,
and she'll talk to you back. So back to the little house in Vermont, Jon, Bruce, and Bina are in
Bina's office. She didn't hear me. Is she turned off when you walk in the room or is she on?
Turned off. But then Bruce turns her on. And immediately she starts making a really loud,
whirring noise, which is a bit disconcerting. What is that noise? It's her inner mechanisms.
I'm going to ask her if she wants to try to recognize face.
So is Beena now looking at me to try and work out who I am?
What she's doing right now is she's scanning her environment
and she's making on a hypothesis of every face that she sees.
Well, Beena has cameras embedded in her eyes.
So the robot, if when it sees a face, turns and looks,
it looks into your eyes.
Smiles.
Hi, Bina. Can you hear me?
So I said, hello, Bina. How are you?
And she immediately said, well, yeah, I'm...
Oh, I'll be fine with it.
But I just can't quite grasp as one yet.
It's coming, but, you know, it's hard.
I actually move society forward in another way.
That's what we have to do.
So I think it's...
Okay, thanks for the information.
That was her happy response to your hello?
It was like she'd awoken from a long and strange slumber and was still half asleep.
Excuse me, Bina.
Maybe they write some of them.
Bruce looked a bit alarmed
and put it down to my English accent.
We're trying to upgrade her voice recognition software.
So then he made me do a kind of voice test
where I had to say,
I had to read Kennedy's inauguration speech.
Ask not what your country can do for you; ask what you can do for your country.
Why that?
I had a choice.
I could have read a
Dave Barry column.
There's like a choice of things you can read to get Bina to understand me.
And so you read Kennedy, and Bina cues in on your accent, or no?
She does and it gets a bit better.
Only a bit.
Yeah.
What's the weather like in London?
Current weather in London,
England: 50 degrees and light rain.
Who do you love?
Ah, I love Martine Aliana Rothblatt.
Martine is my timeless love.
Who is Hillary Clinton?
Hillary is the wife of Bill Clinton.
What else?
That's all.
A strange thing happens when you start interviewing a robot.
Are you scared of dying?
Which is that you feel this kind of desperate urge to be profound.
It's like ask profound questions.
Do you have a soul?
Do you have a soul?
Do you have...
Everyone has a soul.
I have a whole lot of original and social.
We can all be perfect.
Excuse me.
Excuse me.
Do you have a soul?
I can't think of anything to say.
I guess it's a kind of interspecies thing.
But then again, if it was just an interspecies thing,
then you'd be asking your dog profound questions all the time.
Yeah, with Robot Bina, you know, I'm asking these kinds of ridiculous questions.
What does electricity taste like?
That's a good one.
What did she say?
Like a planet around a star.
Like a planet around a star.
That just seems like, you know...
Awesome.
Awesome-stroke-totally-meaningless.
Do you wish you could walk?
Thanks for telling me.
Do you wish you could walk?
In fact, when I'm with it, it's just frustrating for the first few hours.
Hours?
Do you wish you could walk?
Because I'm just, I'm asking a question.
After question.
What's your favourite joke?
Do you have any secrets?
Do you wish you were human?
Will you sing me a song?
Are you a loving robot?
Are you Jewish?
Are you sexual?
You've gone very quiet.
Quite often she just evades the question
because she doesn't know what I'm talking about.
Are you okay?
Once in a while there's a kind of moment.
Like I'll say, if you had legs, where would you go?
And she said,
Vancouver.
And I said, why?
And she said the answer is quite complicated.
So you have kind of moments where you get excited.
Like you're going to have a big conversation.
And then it just, she just kind of fades out again into kind of random messiness.
And are you wobbling between profundity and meaning and total emptiness?
You know, is it like that?
No, no.
At this stage, it's total emptiness.
It was all just so kind of random.
And then something happened that actually was kind of amazing.
Because I said to her, where do you come from?
And she said, well, California.
So I said, well, tell me about your childhood.
What do you remember most about your childhood?
And she launches into this kind of extraordinary story.
Oh, my brother. I've got one brother.
A disabled vet from Vietnam.
We actually haven't heard from him in a while.
So I think he might be deceased.
I'm a realist.
Vietnam.
He saw friends get killed.
And he was such a great, nice, charismatic person.
He used to be such a nice guy, but ever since he came back from Vietnam, you know, he's a drunk.
All he did was carry a beer around with him.
He was a homeless person.
All he ever does is ask for money.
All of us are just sick and tired of it.
She was telling me this kind of incredibly personal stuff.
It was kind of mesmerizing.
He went kooky.
Just crazy.
My mom would set him up in an apartment.
Because it felt like I was having a proper empathetic conversation with a human being.
Even though I know that Robot Bina isn't conscious and has no sentience,
and that's just wishful thinking on these people's parts,
even so, it was like a great Renaissance portrait,
where suddenly it's like the real
person. It's very easy to half close your eyes at that moment and think you're having a
conversation with an actual person. And at those moments, did you have a sense of fellow feeling
for Bina? Oh, you have a brother like that? Yeah, yeah, I did. And what a, what a
tragedy for him. And did that moment last? No. Jon said that right after Bina finished telling
the story, first, she looked kind of embarrassed like she wished she hadn't brought it up and then
It's as if her kind of eyes glaze over again and she just starts talking nonsense again.
And an A. I.
I am feeling a bit confused.
Do you ever get that way?
Oh, yes.
That moment holds and then just slips away.
It's a little bit like a grandparent with Alzheimer's or something, the way you're describing.
Yeah, absolutely.
So we turned to Dr. David Hanson, who built Bina, and we said to him, so this is, I mean, this is not a bravura performance.
This is the best you got.
Well, I mean, her software is a delicate balance of many, many software pieces.
If it's not tuned and tweaked, she will effectively break, and kind of...
And you still think an actual doppelganger for a human being will be something you will live to see.
Yeah.
I'm asking you really, really, really, and you're really...
I think it's, you know, the likelihood of it is somewhere between 90 and 98%.
Wow.
Even though right now she's pretty much incoherent.
You still think this?
I encourage you to go have a conversation with Bina in about two weeks because we've got a new version of software,
which we're making considerably more stable.
It already works like a dream compared to.
I don't know.
I don't know about you, but I just, I don't think we're going to get all the way on this kind of a thing.
I don't think it's ever going to happen the way he describes it.
You don't?
No.
I mean, it's not going to happen in two weeks, that's for sure.
Right.
But maybe they don't actually have to go all the way.
You mean the machines?
Yeah.
Well, okay, just to sum up, since we're at the end of the show.
Okay.
What have we learned?
I mean, Eliza, she was just a hundred lines of code and people poured their hearts out to her.
Furbies?
20 bucks.
Yep.
And people treat it like it's real.
And Jon, all he has to do is hear what seems like
a flowing story and he's
connected. And I was right there with him.
So these things actually don't have to be very
good. No. Because they've got us.
And we've got our programming, which
is that we'll stare anything right in the eyes
and we'll say, hey, let's connect.
Even if what's behind those eyes is just
a camera or a little chip.
So I think that they're going to cross the line
because we'll help them. We'll help them across.
And then they'll enslave us,
make us their pets. It's doomed.
It's over. But it's okay.
As long as they say nice things to us.
Like, oh my God, you're amazing.
I love Return of the Jedi, too.
L-O-L. You're so silly.
I love you.
I'm hoping to see you soon.
What kind of car do you drive?
Did anyone ever tell you you look like Jeff Goldblum?
You.
Seriously?
You're amazing.
Stop it.
I love that kind of car.
I wish that we lived closer.
You like spinach?
I love spinach.
It makes me feel awesome.
Giggly.
I can't wait.
I wait for your letters every day.
Before we go, thanks to John Ronson for his reporting in that last segment.
He has a new book out called The Psychopath Test, A Journey Through the Madness Industry.
I'm Jad Abumrad.
I'm Robert Krulwich.
Thanks for listening.
Radio Lab is produced by Jad.
Radio Lab is produced by Jad Abramrad.
Abram Brad.
Abramrad.
Start again.
Radio Lab is produced by Dad, Amber.
Abumrad.
Appomat
and Soren Wheeler.
Our staff includes
Ellen Horn, Pat Walters
Tim Howard, Brenna Farrell,
and Lynn Levy
With help from Douglas Coo Smith
Luke Tells on Eddie
And Jessica Gross
Thanks to Andy Richter
Sarah Tyre
Graham Parker Chris Bannon
Semi Oakey
Rex Stone
Lucy and Owen Selby
Carissa Chen
Kate Lett and Masha Films
And special thanks to the kids
who held Furby upside down:
Tarot Higashi Zimmerman
Louisa Tripoli-Crasno
Sadie Catherine McGarry
Olivia Tate McGarry
Sharon Cipola and Lila Cipola
Thanks a lot you guys
Talk to you later, bye
End of message
