Radiolab - Killing Babies, Saving the World
Episode Date: November 17, 2009
To get this podcast started, Robert ambushes Jad with a question... a question we've all been dying to ask him since June 10th, 2009, when Amil Abumrad came into the world. ...
Transcript
Wait, you're listening.
Okay.
All right.
All right.
You're listening to Radiolab.
Radiolab.
Shorts.
From WNYC.
C?
Yes.
And NPR.
You ready, Robert?
Mm-hmm.
Hey, I'm Jad Abumrad.
And I'm Robert Krulwich.
This is Radiolab.
The podcast.
The podcast.
Today on Radiolab, we are...
Well, I actually don't even know what we're doing.
I know we're revisiting some old question, but you've kept me in the dark.
So what are we doing today on the podcast?
Well, today we're going to check in with...
Who's here?
Somebody who you might remember, actually.
Oh, hi, Robert.
How you doing?
Can you just tell me who you are, just to say I'm Josh and where you are and what you do and stuff?
I'm Joshua Greene.
I'm an assistant professor of psychology at Harvard University.
And you may remember...
Wait a second.
Krulwich.
Do you remember this, like in the morality show?
Sure.
I mean, Josh was the guy with the moral puzzles.
I study moral judgment and decision-making.
Are you going to get into the whole baby, would you kill your baby question?
Yes, exactly, exactly.
So for those of you who need to follow this, in that earlier Radiolab,
we described the last episode of the TV show MASH.
It's wartime.
There's an enemy patrol coming down the road.
You are hiding in the basement with some of your fellow villagers.
Let's kill those lights.
And the enemy soldiers are outside.
They have orders to kill anyone that they find.
Quiet.
Nobody make a sound until they've passed us.
So there you are.
You're huddled in the basement, enemy troops all around,
and you're holding your baby in your arms.
Your baby with a cold, a bit of a sniffle.
And you know that your baby could cough at any moment.
If they hear your baby, they're going to find you and the baby and everyone else,
and they're going to kill everybody.
And the only way you can stop this from happening is to cover the baby's
mouth. But if you do that, the baby's going to smother and die. If you don't cover the baby's
mouth, the soldiers are going to find everybody and everybody's going to be killed, including you,
including your baby. Then you have the choice. Would you smother your own baby to save the village?
Or would you let your baby cough knowing the consequences?
And make clear for me where we're going with this, Robert? Like I don't know.
You asked me a question at the time.
And how many people chose to kill their baby?
About half.
Wow.
That's not bad.
What do you mean it's not bad?
You're in favor of killing the baby?
Well, what would you do?
Me?
I wouldn't even consider it.
I would kill the baby.
You would?
The village will go on to have 100 babies.
Your baby is just one.
My baby is my world.
My baby is my universe.
So I don't...
You're going to erase all those people based on your one child?
But, wait, first of all, the audience should know that Jad Abumrad does not have a child of his own yet.
Okay, now, now we have the benefit of time passing.
Just out of sheer curiosity, now that you have a child and you've looked into that child's face over and over and over again, I'm just curious, would you kill...
Is this the whole reason we were doing this podcast?
To surprise me with this question?
No, no, no. People shouldn't worry.
But just out of curiosity, what would you do?
Would I kill the baby?
Your baby, not a baby, your baby.
Would you like to see a little picture of him?
No, I don't want to see a photo.
I know what Amil looks like, crying and all of that.
No, see, here's...
I have thought about this, actually,
because people send us emails about this for some reason.
I don't really know.
I mean, the thing is, though,
I mean, now this is not just like an abstract baby,
but it's my baby.
Well, that does change everything, obviously.
So I'm kind of in a place where I don't really
No. I frankly don't know.
Wait, let me just think about this.
I don't know. It's kind of an impossible question.
Because, like, in order to answer it truthfully, which is I would not kill my baby,
I'd have to sacrifice a principle, which is, like, not as important to me as my baby, but almost.
That principle being...
Well, that sometimes you have to sacrifice something very dear for the greater good.
I just think that that's a really...
I mean, not to get all communistic on you, but that's a really important idea.
And in this case, by the way, the calculus of what is about to happen if the baby coughs is
really not known to you.
Well, I mean, if you're the philosopher king and you give me two options, one is to kill my baby
to save the village or to allow my baby to live, in which case everybody dies, if those are the
only two options, then I still feel like you kind of have to kill the baby.
But I don't think I could do that.
I don't think any father could do that.
So my sort of pathetic answer at this point is I can't kill my baby, but then I can't sacrifice the village. So I think I would just, um, like, close my eyes and
wish I was somewhere else. So the idea is that, you know, when you think about this case,
on the one hand, you have an intuitive emotional response that says, no, this is terrible,
killing a baby or killing my own baby even worse. At the same time, a different system within your
brain is saying, look, this is, as horrible as this is, this is a sensible thing to do. It's the
only sensible thing to do because if you do nothing, everyone will die. Whereas if you
kill the baby, then at least you and the other people can live.
And what the evidence suggests is that these two competing moral perspectives are really grounded
in different parts of the brain, and the competition has not been resolved.
So that's where we were the last time.
Now I want to step forward for a second and think about it a little more deeply.
If our sense of right and wrong comes from these competing brain systems, let me revisit
the question.
Are our brains built to favor certain outcomes?
Let's suppose that you are walking alongside a lake and you see a girl drowning right in front of you and she's screaming for help, but you're wearing a very expensive suit.
Should you jump into the lake and save her?
No.
No, of course you should.
Yes.
You mean like the suit is the only thing that would prevent me from doing that?
Yeah, yeah, jump in.
But now suppose you're walking down past your mailbox and there's a letter in the mailbox which says, please give us $1,000 so we can help save girls on the other side of the globe.
Girls you'll never meet, girls whose screams you'll never hear.
But there are girls in trouble on the other side of the world.
Go help them.
So the equivalence is that you jump into the lake, you save the girl who's drowning.
One-on-one.
Or you send the check and you save the girl who is in peril.
A girl, not that girl, a girl, somewhere on the other side of the globe.
I see.
So the question we put to Josh is, if you didn't give the $1,000, would that make you a bad guy?
Right.
Well, there is something funny about these cases, right?
That most of us say that, of course, you have to rescue the drowning child. But, you know, if you don't give your money over to save the children on the other side of the world, you're not a saint,
but you're certainly not a terrible person either, or so it seems to us.
And so, yes, there's this.
Putting aside whether it's good or bad, whether you're a good or bad person.
Sure.
How do you explain the difference?
Well, I think it makes a lot of evolutionary sense.
That is, you know, a lot of our social, emotional responses are geared towards life in the kind of environment in which our ancestors evolved.
And it makes sense that we would have moral buttons, so to speak, that get pushed by the kinds of things that our ancestors might have encountered.
Because tens of thousands of years of evolution have essentially been quietly tugging at your heart in those kinds of situations.
Exactly, exactly. Whereas the idea of spending a minimal amount of money to save the life of some stranger,
on the other side of the world
that you're never going to meet,
that's a totally new modern phenomenon.
It's not something that our emotions are prepared for.
Well, now, doesn't that leave us in a funny place?
I think it does.
What happens if the most important questions
that we face as a species or as a group
involve thinking abstractly?
Those problems, pollution, global warming,
and things like that,
those aren't really local problems.
They're global problems.
Exactly.
This is, I think,
it gets right at the heart of the matter,
and this is why I do this research.
I think that the kind of thinking
that we apply to those problems,
what we call common sense,
is really hunter-gatherer common sense,
or at least a lot of it is.
And if we're going to face these big problems
that our minds were not designed
by evolution to handle,
then we have to learn to turn off parts of our brain
that are getting in the way
and turn on other parts
that may seem like the wrong parts to be using.
So he's saying that we should
tamp down our primitive emotional instincts that are in our reptile brain, those instincts that say
don't kill your baby, like that stuff. And then we should amp up somehow the part of us that
thinks more abstractly about the greater good and about people who are not right in front of us.
Yeah. So if you're sitting there with a soda can in your hand and you think, I guess I can just
throw this on the street and you go clinkety clankety clankety clankety clank. Your primitive part is saying,
well, I can get away with that because no one's seeing it. But of course the
calculating part would say, well, if we all do this, then the world will be full of trash.
And it's problems like that that in order to solve them, you have to think abstractly.
That's interesting. You know why that's interesting?
Why?
Because it might be, I mean, I think he might be wrong. I mean, because we encountered this already.
He's asking us to rely on a part of our brain that, you know, is not exactly Hercules.
Do you remember the thing we talked about in the, what show was that, Soren?
What was it, the Choice show, with Baba Shiv.
Can we get that audio and throw that into the mix?
I'm Baba Shiv. I'm a professor here at the Stanford Graduate School of Business, in marketing.
A lot of my research has to do with the brain.
And tricking people.
Oh, yeah, absolutely.
So, Robert, I want to tell you about one particular experiment that he did.
So the experiment is pretty straightforward.
It goes like this.
He got a bunch of subjects together.
He said, okay, I'm going to give you all a number.
On a little card, you're going to read the number, and I want you to commit that number to memory.
Take as much time as you want to memorize the number.
And then he says, you're now going to walk to the next room and recall the number.
And that's what subjects think, test subjects think that they're going to be doing.
So they know they're going to be in one place, getting a number, going to another place,
reciting that number.
That's right.
That's all they know.
That's all they know.
What they don't know is that not everybody is getting the same kind of number.
So some people get a seven-digit number, some people get a two-digit number.
That I can do, by the way.
I think I can do two digits.
No, I doubt it.
All the subjects have to do is they've got to memorize a number, walk out of room one, down the hall,
into room two, then recite their number.
Now just imagine, you with me?
Mm-hmm.
A person with a two-digit number in their head was walking out of room one.
One, two is my number.
I can definitely remember this.
Down the hall.
Same time, someone with seven digits in their head.
One two, two, eight nine, three, six.
Walks down the hall.
Now, here's where the trickery comes in.
As they're walking down the hall, mid-memorizing all of a sudden,
excuse me?
They pass a lady in the hallway, and she's holding something.
Sorry to interrupt you, but would you like a snack?
Um, uh, uh, I should.
She says, here, have a snack.
Just as our way of saying thanks for participating in this study, you can have one of two snacks you choose.
You can choose between either A, a big fat slice of chocolate cake, or B, a nice bowl of fruit salad.
Meanwhile, they've both got these numbers still in their head.
Now, here's the weird thing.
When they finally make their choice.
What would you like?
Some yummy cake or some healthy fruit.
The people, this is crazy, the people with two digits in their head?
You know, I love cake, but I think I'll take the fruit.
almost always choose the fruit.
It's healthy.
Whereas the people with seven digits in their head almost always choose the cake.
You know, the cake.
I want the cake.
And we're talking by huge margins here.
It was significant.
I mean, this was like in some cases a 20, 25, 30 point difference.
The lesson we took from that, which is the lesson you are not telling me now,
is that your rational system, the hope of humankind part of your brain,
is very, very suggestible, weak,
almost barely struggling to manage the situation.
Give it something too much to do and, oh, man, it just eats sweet cake.
I would take a very different lesson from that study.
Imagine if you told those people, you say, look, here's how your mind works.
When you have to remember a long number, it's going to clog up your memory,
and it's going to make it harder for you to resist the temptation to have chocolate cake instead of fruit salad.
But I'm telling you this now.
You're armed with the truth about how your own mind works.
here's a long number, go, right?
Now, how many of those people are going to be able to resist the chocolate cake?
I think a lot more of them are, right?
Has anyone done that?
Has anyone said, okay, I'm sending you down,
and there's going to be this siren, this seductive, cake-handling temptress,
and let's see if you can resist?
Has that ever been done?
It hasn't.
I don't know if it's been done,
but I'm willing to place bets on how that will turn out.
That is, that we can recognize the quirks and the flaws and the inconsistencies
in our cognitive systems and do something better that makes more sense.
Hmm.
Does that, is this just blind optimism?
I mean, or does he have evidence to support this?
Well, one thing that gives me hope is something called the Flynn effect.
The Flynn effect.
Yes.
So the Flynn effect is something that was noticed by a philosopher and political scientist named Jim Flynn.
I knew it was going to be Flynn.
Yeah.
It would have been really surprising if his name was Zoranski.
That's right.
No, they line these things up so that they make sense.
Okay, the Flynn effect.
The Flynn effect. What Flynn noticed is that over the course of the 20th century, IQ scores kept going up and up and up in the industrialized world.
So much so that by his estimates, a person of average intelligence in 1900 would register somewhere near the line for mental retardation by present standards.
How could this be?
Same test, by the way?
So, everything else being equal?
I mean, that's why it's a bit complicated because the tests have changed and the norms have changed.
but doing your best to control for all of that.
By his estimates, we have gained about 30 IQ points as a society in the last 100 years, which is enormous.
Now, there are a lot of people who would say, well, the IQ test doesn't really tell you that we're getting smarter or really any different.
But if you ask Josh, well, why would we be getting better at the IQ test?
He says that in the last 100 years, people have learned how to think abstractly.
Things that we take for granted, like thinking about abstract things like a market,
where a market is not a particular place with fruit stands,
but a more abstract space, so to speak, in which goods and services are exchanged for money.
Notions like that have become part of our cognitive background.
Meaning, and I think this is how Josh
would argue it, these are
deeply abstract
occupations.
Gasoline, natural gas.
To try to figure out patterns in numbers
and future values.
Crude oil, natural gas.
And I think Josh is arguing that
it can change you. Cultural evolution
essentially has given us
much higher IQs when it comes to thinking
about a lot of things. Wow. So you're saying
that we are learning to
exercise our rational systems.
It's not that we're growing any new
brain cells or making a whole new set of connections.
It's just that what we've got, we're just making more muscular?
Exactly. It's like learning to play an instrument, right?
I mean, when you first start playing guitar, you're totally useless.
It sounds like a dying animal.
And, but, you know, give it a couple of years and it can sound great.
And basically we're...
Well, but that's a very specific sort of motor skill.
Right.
But being better at abstraction and thinking about right and wrong in a new way,
that, what you're saying, seems kind of daft.
You think that you can exercise yourself into being a better man and a better woman and a better species?
I think that's right.
I think that we can learn to play our dorsolateral prefrontal cortices better.
At the end of the day, you think that the pressure of dealing with these big abstract problems will eventually change our minds.
Well, I hope so.
I mean, the problem is that as a species, we tend to learn from trial and error.
The problem with issues like nuclear proliferation and global warming is that we only have one Earth.
And what I hope is that if we have to learn the lesson from some kind of trial and error,
the errors are not so big that we don't get another chance.
But I also think that there's reason for optimism.
Or at least you hope.
I think, yeah, at least I hope, you know.
But I mean, that may just be because I'm an optimistic person.
I mean, I might just sort of throw up my hands and say, forget it.
I'll go do something else and enjoy my time before we kill ourselves.
But I think that, you know, it makes sense, it's worth a shot, to see if we can teach ourselves
to live happily on a...
a small planet.
Aren't you the teacher?
Yeah, I'm pretty pedantic, huh?
Teaching the world.
Well, no, I kind of mean, I certainly think anyone normal would be rooting for you.
Absolutely.
Well, thanks.
I appreciate that.
There are a lot of abnormal people who root for me, but I hope there are some normal ones, too.
Josh Greene is an assistant professor of psychology at Harvard University.
He's written about these ideas in an essay in a volume called What's Next, edited by Max Brockman.
And, hey, when you were talking with him, did you ask him about his babies?
Would he kill his babies?
You know, I should have. I forgot.
Any case, we should wrap.
Yeah.
We should kill this baby.
We have to say our funding credits.
Right.
So Radiolab is supported in part by...
The National Science Foundation.
Corporation for Public Broadcasting.
And one other.
The Sloan Foundation.
Yeah.
Which, by the way, is supporting Kepler, the Philip Glass opera about the great 17th-century astronomer.
It's premiering November 18th, right down the street from me at the Brooklyn Academy of Music.
I'm Jad Abumrad.
I'm Robert Krulwich.
Thanks for listening.
