Freakonomics Radio - Freakonomics Radio Live: “We Thought of a Way to Manipulate Your Perception of Time.”
Episode Date: December 15, 2018
We learn how to be less impatient, how to tell fake news from real, and the simple trick that nurses used to make better predictions than doctors. Journalist Manoush Zomorodi co-hosts; our real-time fact-checker is the author and humorist A.J. Jacobs.
Transcript
Hey there, I'm Stephen Dubner, and this is a bonus episode of Freakonomics Radio Live.
It's the nonfiction game show we call Tell Me Something I Don't Know.
This was recently recorded in New York. If you'd like to attend a future show or be on a future show,
visit freakonomics.com slash live.
We'll be back in New York on March 8th and 9th at City Winery.
And in May, we are coming to California.
In San Francisco on May 16th at the Nourse Theater in partnership with KQED.
And in Los Angeles on May 18th at the Ace Hotel Theater
in partnership with KCRW. Again, for tickets, go to Freakonomics.com slash live. And now,
on with our show. Good evening, I'm Stephen Dubner, and this is Freakonomics Radio Live.
Tonight, we're at Joe's Pub in New York City, and joining me as co-host is Manoush Zomorodi.
Manoush is the host and creator of the podcasts
Zig Zag and Note to Self.
She's the author of the book Bored and Brilliant,
How Spacing Out Can Unlock Your Most Productive and Creative Self.
Manoush, we know you grew up in Princeton, New Jersey,
the child of not one but two psychiatrists.
Indeed.
Everybody just went like this.
Oh.
Well, afterwards they'll come to you with their problems, presumably.
We know that before getting into the cutting-edge world of podcasting
that you reported for legacy media companies,
including Thomson Reuters and the BBC.
So tell us something we don't yet know about you, Manoush.
My big break was I was a breaking news producer for the BBC
and I was sent to, with a correspondent,
a real grown-up reporter person,
to report from Mount Etna, which was erupting.
And so the volcano's going off.
That was pretty cool.
We're on TV.
And then he's like, right, see you later.
I'm going back to Rome for my child's birthday party.
And I was like, all right, bye.
I couldn't get a flight out until the next day.
Was that a Roman accent?
No, that was Brian Barron, actually.
So I went to sleep in my lovely hotel.
So they wake me up at four in the morning,
and they're like, the volcano's erupting again.
And I was like, yeah, but Brian left.
And they're like, you, go to the volcano and report.
So I was on the morning news reporting from an erupting volcano.
Way to go.
And never looked back.
2001, been a reporter ever since.
Congratulations.
I say way to go.
We're so excited.
Like, there was danger happening.
And Manoush got to be there.
Have you ever seen lava flow?
Like, really?
It goes like this.
I'm not joking.
Just so you know, this is radio.
Sorry, it was really slow.
I'd like to describe what Manoush was doing.
She was holding up her hand and moving it very slowly.
Did it change your life in any way other than career-wise?
Yeah, I mean, because I thought of my own capabilities completely differently. So that changed everything. Okay, Manoush, very, very
happy to have you here tonight. Thank you. Thank you for coming to play Tell Me Something I Don't
Know with us. Here's how it works. Guests will come on stage to tell us some interesting fact
or idea or story, maybe a historical wrinkle we don't know. You and I can then ask them anything we want, and at the end
of the show, our live audience will pick a winner. They will vote on three simple criteria. Number
one, did the guest tell us something we truly did not know? Number two, was it worth knowing? And
number three, was it demonstrably true? And to help with that demonstrably true part, would you
please welcome our real-time fact checker,
the author of four New York Times bestsellers and counting, including The Year of Living
Biblically, A.J. Jacobs. Thank you. Thank you, Stephen.
So, A.J., it's been a while since we did one of these shows together. I assume you've just been
sitting at home waiting for us to call. Have you been working on anything at all? Well, that and I was able to squeeze in. I have a new book.
It's called Thanks A Thousand. And the idea is I went around the world and thanked a thousand people
who had even the smallest role in making my morning cup of coffee possible. So I thank the farmer who
grew the coffee beans and the trucker and the logo designer
and the man who made the zarf.
What's a zarf?
Bless you.
Well, thank you.
A zarf is, I learned, the official name
for that little cardboard sleeve that goes around.
No way, come on, that's not the word for it.
Zarf?
Z-A-R-F, yeah.
And just so you know, it has a long and glorious history.
There were zarfs in ancient China made of gold and tortoise shells.
What?
A.J. Wins.
I'm sorry.
That was amazing.
That's so cool.
Do we know where the word comes from?
Is it the sound the first person made when they grabbed a cup of hot coffee without a zarf?
That is a good question.
If you give me 30 seconds, I can give you an answer.
We'll get back to you.
Maybe by the end of the show you can tell us the etymology of zarf.
All right.
Well, AJ, I'm delighted that you are joining us as well.
It's time now to play Tell Me Something I Don't Know.
Would you please welcome our first guest, Julie Arslanoglu.
Julie, welcome. I understand you are a research scientist at the Metropolitan Museum of Art. I'm guessing that's pretty fascinating work and very
promising for our purposes tonight. So I'm ready as are Manoush Zomorodi and A.J. Jacobs. What do
you know that's worth knowing that you think we don't know? I have a simple question.
What do antibodies have to do with art and art conservation?
Antibodies being the protein in blood that attacks the bad guys?
It's the protein that your body produces to recognize an other.
So every living organism has these.
Is it something that you apply?
We apply antibodies to art, yes.
Okay.
Oh, you apply antibodies. You're not looking for antibodies in the art.
No, we're not.
Let's ask about you.
Is your background purely art and or art conservation,
or do you have some kind of biochem background?
I have organic chemistry graduate degree.
Oh.
So are you potentially part of the preservation staff at the Met?
I'm a research scientist within the conservation department.
Does the Met have a lot of you or are you flying solo?
No, there's 12 of us.
There's 12 of you?
Yep.
Okay.
You apply, where do the antibodies come from that you apply?
We purchase them from commercial sources right now.
What do they belong to?
Where do they come from?
Well, the way an antibody is created is you take something that you want to study,
some protein from an organism.
It can even be a small molecule.
You inject it into an animal that's called the host.
The host creates antibodies against the other.
You harvest those, and then you inject those into a second animal and create antibodies to that first antibody,
and you use that as your reporting system.
So the whole idea is that when you have one thing that you want to recognize,
you create an antibody to that.
That is super fascinating.
So this is some version of, like, antibody dating?
I don't mean, like, smooching dating. I mean like carbon
dating dating. It's not so much. It could be time theoretically, but it's really complex because
you have a couple problems. One is that normally antibodies are made against what are called native
proteins. So these are the proteins that come off freshly from an organism. So let's talk about
collagen. If you extract collagen from, let's say,
bovine skin, you can create an antibody for collagen, you can create an antibody specifically
for bovine skin, but these are going to be in their native state, meaning it's the way the
protein is extracted from the tissue or the organism. My issue is that the proteins that
are extracted are extracted to be prepared to use for artwork.
So if it's going to be a glue...
Ah!
Yeah, yeah, yeah.
So if it's going to be a glue
or if it's going to be a binder for a paint,
meaning a paint is usually ground-up minerals
and you have some sort of adhesive
that holds the whole thing together.
So if you prepare this material,
you're going to heat it to extract it. You're going to do
something to it to prepare it for an artistic way of using it. Then you're going to mix it with these
inorganic mineral pigments, which have cations that react with proteins. Cat ions? Cat ions.
Yeah, ions from cats? Cations means positively charged. Okay.
And then you're going to let this stuff dry.
So a protein normally lives in an aqueous environment.
Now you're going to remove the water,
and then you're going to let it age for 500 years, 1,000 years.
So stuff happens to those proteins.
Art is made up of materials.
Materials continue to react.
The way they react is really complex because we don't have a really clear knowledge of the conditions that art was exposed to.
And art continues to be treated over time. So if it enters the museum, for example, it might be consolidated with additional animal products like sturgeon glue or animal glue. It might have
a synthetic polymer added to it to create a more cohesive surface. So when you
have all these things mixed together, they continue to react. So you're doing this kind of
proactive-ish, historical-ish detective work for what purpose? For restoration? For proving
the provenance or history of something? So at the most basic level, what we're trying to identify is what materials are used to create the artwork.
And why this is important is a few things.
One is when you have a sort of a lexicon of how art was created.
So, for example, egg tempera being used in the Italian Renaissance.
Well, was every painting in the Italian Renaissance made with egg tempera,
or did they use other protein-based binders?
So you can inform that lexicon.
You can create a timeline that informs the art historians
about what the materials actually were being used.
But more than that, when you look at a piece of artwork,
it's a combination of all the chemistry
that's going on. So when you mix a binder, like a protein, with a mineral pigment, as the light
passes through it, you're going to get a certain amount of saturation of color, you're going to get
some certain amount of gloss, and it all depends on that combination of which pigment and which binder.
And if you add an oil to it, it changes everything.
So looking at an artwork, understanding what you see, why does it look the way that it does right now? And did it look that way originally or has it changed? So that's one of the most
basic reasons for doing this. Oh my God. If you had been around in high school, I would have
actually liked biochemistry. I was like the artsy kid who was like,
I don't understand it.
I got a C- in chemistry, you guys.
It was my worst grade.
But if you had like-
You got no sympathy for that.
I know, I know.
Everyone was like-
I think what that means is that most of them
got worse than a C-.
All right, maybe.
I found it really difficult,
but also because there was no applicable usage.
Like what you just described just lit up my brain
and explained to
me why I like certain paintings and others. And that's amazing. That's so cool.
There is a really strong connection between art and science. So the way it looks, why it changes,
the mechanics of the film, like as it changes and gets older, you actually can increase the
stiffness and you can get cracking and you can explain all this with chemistry and engineering. And so this is a really strong connection with
STEM and STEAM. And so there are universities that really pull us in to their chemistry classes to
teach folks who aren't quite so keen on chemistry. Are there applications of this method or something
similar beyond artwork? Is it used in archaeology, etc.? Absolutely. One of the earliest uses of
antibodies was actually paleontology. They used it to identify collagen in dinosaur bones.
This was how long ago? This is like the late 70s, early 80s. Is this process used in order to
not only understand the ingredients that have been used to make the art, but potentially to
preserve it in some way or clean it, I guess.
Well, we're trying to use a combination of the antibodies and something called mass spectrometry
to look at the molecular structure of the protein. So what we found out is that when you mix
different protein binders, like collagen or a whole egg or milk products, casein,
with different pigments, you actually will change the conformation of the protein.
These combinations change the conformation of the protein.
So you're getting some sort of structural change in the protein,
and we're trying to look to see how that affects the longevity of the paint.
A.J. Jacobs, Julie Arslanoglu from the Metropolitan Museum of Art
has been telling us about antibodies and the use
thereof in restoration and learning more about art and for many applications. I'm sure you know
an awful lot about this. Of course, of course, even before I came. Yeah, well, it all checks out. Julie gets an A plus
for accuracy, not a C plus, no offense. But it actually, it led me to this list
of the strangest ingredients contained in paint.
There was Indian yellow.
Apologies to those eating right now.
But according to legend, this was made from cow urine
and not just regular cow urine.
This was cows who were fed only mango leaves,
which makes for a gorgeous pee, apparently,
and Scheele's green, which was also lovely, but poisonous.
It was made from arsenic, and according to legend, it's what killed Napoleon Bonaparte.
He had green walls in his room.
So there you go, cow urine and arsenic.
That's art.
Well, AJ, thank you, and Julie, thanks so much for coming to play
Tell Me Something I Don't Know.
Would you please welcome our next guest, David Reitter.
So David Reitter, it says here you're a professor
of information sciences and technology at Penn State
and that your research has been particularly focused
on what makes us intelligent,
those of us who
may or may not be, and why we make mistakes. So that sounds like great turf for us. David,
tell us something we don't know, please. So I live in this college town, and I guess it's a bit
divided, which you see when you drive around there. There's people that live there all their
lives, and these townies have all the time in the world.
And then there's people like me who are a little impatient,
and we'd like to drive our fast cars into where we need to go, right?
How can you make me a little more patient?
We could start by not calling them townies,
because I don't think they like that.
But that's just a hunch.
You're saying that you and your uber-educated class of people,
you got a lot to do.
You're rushing to do research
and to give aid to floundering students like Manoush
who are getting C-minuses.
Exactly.
So are you doing research into ways
to get people to not be like you?
Yep.
I think there's a lot of work that's been done in behavioral economics
that has found out that we all are a little impatient, right?
So I'll give you an example.
Do you like chocolate?
I love chocolate.
Could you please just tell me what you're going to...
I'm sorry, just kidding.
That was demonstrating my impatience.
We'll run a little experiment on you, okay?
Would you like two pieces of chocolate or one?
Well, of course two.
All right.
Now I'll attach a little bit of time to that.
Would you like two pieces of chocolate in a week from now?
Are you doing the marshmallow test on me?
Is that what you're doing on me?
That's exactly right.
I'm a tech reporter, so I know about this stuff.
Let me tell you, all right?
We can't wait for anything anymore
because we have instant gratification.
So in a way, we all know that we're impatient, right?
The question is, can you do something about it?
So I have to say, even though the way you presented your dilemma,
you sounded a little, I don't want to say arrogant,
but like, I have a problem with the slow people, right?
That's the way you said it.
But it's interesting that now when you're searching for a solution, your solution is not to make the slow people faster.
You do want to ameliorate your impatience. That's a good point. You are identifying that. So you
want to know how you could become more patient while driving. Have you tried listening to a fine
podcast? I think that's a really good idea. Or smoking weed.
Have you ever been behind someone who's like...
All of these would work.
Okay. But you're trying to look at more of a cognitive behavioral,
sort of talk yourself into being more patient thing,
because that's what you do?
It's not talk therapy.
Is it a technological intervention of some sort?
Oh, I once learned that the best way to make people on a train
feel that the trip is shorter
is just by putting in really good fast Wi-Fi.
So if you do that in your car, then you can watch
Netflix while driving.
What if it's autonomous vehicles
and therefore no one's driving?
How boring. I mean, the vehicles are all
just driving themselves and they're all at the same speed.
And watch Netflix.
It's not a problem.
Why don't you tell us what you did?
And I'm curious to know, is this an experiment that you did in the field or in a lab?
Because we care about those distinctions.
Neither.
We run experiments on thousands of people that are somewhere in the world.
So our research is not based on American undergraduate psychology students,
but people from all over the planet of different ages that like to do our experiments. Who live in towns where there are half the people
who are really snooty and think that they're really busy and have to get somewhere really
fast, and the rest of the people are like, f*** you, we just are trying to get to the grocery
store. Like that? Exactly. Okay. We thought of a way to manipulate your perception of time
by giving you something that you might already know,
a countdown, like the countdown you see in old-time movies before the movie actually starts,
or like a progress bar when something's really, really slow on a computer. So we show people a
countdown, and then we give people a test of their impatience. Now, we manipulate the countdown.
We were interested in what happens when that countdown is fast versus when that countdown is really slow.
The countdowns always take the same amount of time, 15 seconds.
But I can count down 15, 14, 13, 12, or I can count down like this, 5, 4, 3.
Are you impatient yet?
Yes, very.
So you're saying that if the numbers are going faster,
even if the duration of time is identical,
we experience it faster.
That is correct.
We're happier with the whole game.
But most importantly, we make better decisions during the impatience test that follows.
What is the decision-making test
that you use in this experiment?
Well, so this is kind of fun. My collaborator
Moojan Ghafurian came up with this
beautiful experiment
where we bring in
Cookie Monster.
Now, Cookie Monster is probably the most
impatient guy we know.
Now you have my attention.
And your job is to host your friend, the Cookie Monster, and you've got a jar of cookies sitting
in your living room. Now the question is, how often do you check on
Cookie Monster to make sure he hasn't started eating cookies yet or you catch him the right
moment when he starts eating cookies? So if you do it right, you only check once right after he starts eating cookies.
Now, that's really, really hard.
And we found that the people that saw the slow countdown checked earlier and more often.
The people that were more impatient made worse decisions.
And that was in the time that followed watching the countdown, not during the countdown.
So let me just see if I understand.
If you were to make a prescription,
would you say that, for instance,
traffic lights should have attached to them
a countdown clock from whatever it is, 30 going fast?
Is that the idea?
They should be going fast,
and they should even be speeding up.
But you're talking about literally mounting
a countdown clock where it's visible
at a traffic light or an intersection or whatnot?
Is that the idea?
And for pedestrians, that's already being done.
Well, we have that in New York fairly recently.
Is your research connected in any way to that?
Now our walk and don't walk signs, they used to be the hand.
Now you get a countdown clock, and I think it starts at like 30
and goes really fast.
And I see old people running,
which to me seems potentially counterproductive. I don't
know. But do you know anything about that and whether it's working, safer, etc.?
My guess, it will be safer. I don't think it's meant to manage people's impatience in that sense.
It manages people's timing so they don't get on, you know, block the intersection, anything like that. Is impatience necessarily, however, a trait to be lessened or dampened? Because I would
consider myself a fairly impatient person, which I know has its downsides in some cases,
but I think there are also upsides. You quit things faster when they're not working out, which,
you know, that may not suit everyone, but there are those who argue that that can be a good thing.
And I'm curious whether impatience
is actually the thing that you are fighting,
or was that just a viable mechanism
to try to figure out how to manipulate
people's perception of the events?
When Etna's erupting,
do you really want to be patient?
He played the volcano card.
We're truly interested in how we can change
people's perception of time and how
we can affect people's decision making.
And you can use these countdowns in both
directions. You can make somebody more patient
or you can make them more impatient.
Oh, that could be handy.
When you're doing this research, you
couched it in the fact that this is happening in
your town, but are there better
use case scenarios?
Were you trying to fix that problem in particular or were there other problems that you were actually
trying to sort out? So our experiment that we designed is meant to be very much like many
decisions that we take in real life. And these are decisions such as how often do I inspect a crumbling bridge or when do I decide to renovate
it because every week I don't renovate the bridge I get more use out of it right or similarly a
police precinct deciding how often to patrol an area or simply again you're driving and you have
to make quick decisions on how to gather information about the
things around you. Has that cyclist moved or is he still in my blind spot? So making all of these
decisions, you know, it's really something that's very, very commonplace. Timing decisions are very,
very important to managing risk, managing our safety. So, of course, you know, this is applicable
in the context of driving as well. And if you can
put up countdowns on your traffic lights, perhaps you could listen to some fast music before you
get in the car and you'd listen to some slow music while you're going or a slow podcast.
There are no slow podcasts, only slow podcast listeners. AJ Jacobs, David Reitter has been telling us about
how to essentially manipulate away our impatience,
which is fascinating.
What more can you tell us on that?
Well, I'll just get right to it.
It does check out.
Actually, I was losing the train of thought a little, so I...
Smoked some dope.
No, my kids are in the audience.
But I did research what is the longest traffic light in America,
according to the New York Times.
It's in New Jersey.
An impressive five minutes and 28 seconds.
Where? Where in New Jersey?
West Milford, New Jersey.
Is anyone surprised the longest traffic light is in New Jersey?
Let's be honest.
But wait a minute.
Five minutes. That can't be right.
That's what it says.
It's the paper of record.
You can listen to the Gettysburg Address almost three times in five minutes.
So that's a good use of your time.
What I want to do is I want to apply what you're saying to Twitter.
Instead of people reflexively retweeting or responding with outrage,
what if they were like,
countdown, here we go, I need 15 seconds before I can respond.
Do you think that would work to make people stop tweeting stupid shit, basically?
I love that.
I love that.
Hey, David Reitter, thank you so much for coming to play Tell Me Something I Don't Know.
And would you please welcome our next guest, Jeff Nosanov.
Jeff is a consultant for NASA.
He formerly worked at NASA's Jet Propulsion Lab,
the coolest spot in the NASA universe.
He's also got a law degree.
Yeah, no clapping for that.
But check this out.
He's also got a very rare master's degree in space and telecommunications law.
Yeah, that's right.
Which is odd and nifty.
It is, it is.
It's been a strange journey.
So Jeff, welcome to our stage.
Thank you so much for coming.
What do you have for us tonight?
Well, my question to you is,
what is the most useful mission that NASA has done?
Well, I would say if the moon landing had not been fake,
that would have been it.
You got us.
You got us.
Okay.
Are there humans involved in this mission?
No.
Great way to narrow it down.
So there's the human side and the robotic side.
So I'm asking about the robotic side.
The most useful mission?
Yes.
It's going to be
something about
gathering information
with a big telescope.
Is it a flyby? You fly
by Mars? No, it's an Earth orbit.
So it's not the cute Mars lander?
That guy? No.
Does it have to do with the
location of space
minerals?
In a sense.
Gases?
Sort of.
Minerals and gases.
Antibodies?
No.
No.
That would be, if that had ever happened, that would be the answer.
That would be amazing. But we're not there yet.
Is the mission ongoing?
Sort of.
This is going to be very hard to fact check.
Have you ever said either yes or no in your life?
Yes. Yes, I have.
Why don't you tell us?
Sure. So the mission that I think is the most important and most useful is the Kepler space telescope.
Do you have a penny up here by any chance? A penny in your pocket?
I don't have a penny in my pocket.
Okay. So hold up your finger and look
at the ceiling, kind of
in the shape of a penny. If you
make a telescope that looks through that...
If I had a penny, I couldn't see the ceiling,
so why would you want us to use a penny?
If you imagine a cone that goes from your eye
through that penny and out in space
and you look through that telescope, you will
find thousands and thousands of planets
that are similar in a lot of ways
to the ones we have here in the solar system.
And that's just in a penny-sized slice
or section of the sky.
And that's not even looking that far.
That's just staying within our own galaxy.
So since we were all kids,
the number of planets has gone from nine to eight
to about 3,000.
And to me, that I think is the most
important and useful mission because it truly places into an unimaginable perspective everything
else that NASA does and that really humans do. And every other field of science, including minerals
and... Now, I believe Kepler was recently retired, but it was a massive success, wasn't it?
Like, it was up there something like three times
as long as originally planned?
It's really very much accomplished its mission,
which is showing us that the galaxy, at least,
the universe by extension,
is full of planets, more than there are stars.
And that, to me, the philosophical conclusion there
is that it's almost impossible
that the conditions that make Earth unique are unique here.
So that is fascinating.
It resonates, and I think it's an interesting answer
that Kepler is the most useful mission.
Do you have a larger point about NASA and usefulness, though?
Because it's a big point of contention.
Well, what initially drew me to the podcast
was the idea of hidden economies
and the idea that for every mission
that you read about in the paper
or that you see photos from,
there's hundreds of other missions
that are designed, evaluated, and rejected.
What is the actual rejection ratio, would you say?
Oh, 10 to 1, if not more.
Wow.
And that's because it's that much harder to actually build and fly something to another planet.
Are the rejections primarily for lack of technical or engineering ability?
Well, that's where the hidden economies come in.
There's really four factors that really matter.
There's risk, cost, science return, and technology development.
And that has changed over time.
The weighting of those has changed over time.
So in Apollo, everything was off the charts,
but they did it anyway.
Now, there's a larger focus on minimizing risk
and maximizing science return, which makes sense.
But if you tell them you're going to discover life somewhere
and your technology needs another 10 years,
you're going to get rejected for multiple reasons.
Science, we're not ready to make that conclusion yet.
Too risky and too much technology work.
You said the four dimensions on which a proposal is assessed,
and you talked about risk.
What is meant in that context? Sure.
So, well, there's a number of components.
There's actual technical risk, like can we look for life somewhere?
Does the scientific evidence support a look for life,
a search for life?
And I believe that there is sort of a larger philosophical debate
that goes around in the top floor of NASA headquarters
about do we really want to support a mission
that if it's successful, we have to declare for all time
that we have discovered life somewhere else? Why would that be a burden? Announcing you've discovered life
somewhere else will permanently change human history. It will throw, in my opinion, countless
ideologies into internal conflict. I hope I see it, but I can see that it will be hard for someone
to sign off on. Overall, what share of the collective missions, would you say,
are driven by scientific concerns and not political or economic concerns?
Well, formally, the answer is about 10%,
which is the science mission directorate of NASA
that sends out these robotic spacecraft.
If a human spaceflight expert were here,
they would probably point out the tremendous
advancements that come from sending humans into space, and those are all true.
But the human side of NASA has, from the beginning, been associated with political
gamesmanship, and that doesn't take away anything from it. But the pure science,
what are the rocks on Mars made of? That's only about 10%.
But does that change now
that it may not be up to you to decide
because Elon Musk might do it
for you with SpaceX? Elon Musk is actually doing
a great service for those of us who
try to get missions off the ground because
he's building the delivery truck and
we haven't had a really good delivery
truck at a cost that's
sustainable for a while.
You talked about risk being a barrier.
Is working with a private firm like SpaceX, Elon Musk,
is that essentially a way of kind of offshoring
some of the risk for NASA?
So, and this is worth noting in its own right.
So space launch is no longer really considered a risk.
It's a cost.
But you don't really have to say, well, it might blow up. And I think that's worth noting as a species. We've
reached a point where we can say, yeah, putting stuff into space is no longer the hard part.
So it's not really a risk. It can help with the cost, though. But it's still, the SpaceX rockets are still a little, they have a slightly shorter
history of success than some others. But we certainly propose to use them whenever we can,
because they save us money on other stuff. So I hate to ask you to reduce an extraordinarily
complex and fascinating set of ideas into essentially a headline, but I am curious to know, like, what's your problem?
Like, what is the thing that you want to happen?
Do you want NASA to take more of a different kind of risk?
So I have a naive answer and a realistic one.
The naive one is that I would like people to march on Washington
demanding more funding for space science missions.
More realistically, I would like...
Did you hear those deafening cheers from the audience?
Thank you.
Thank you.
Thank you.
We can send them this recording.
More realistically, I think I would like to see
the re-emergence of a scientifically confident,
literate, encouraging society across the board.
Wait, which one is naive, did you say?
Can I ask you one last thing before we turn it over to AJ? The Kepler Observatory, I believe, was built by Ball Aerospace,
which is a subsidiary of Ball Corporation,
which until 1993, I believe, used to make Ball mason jars.
So I'm curious if that's the root of the NASA problem somehow.
Ball did build part of it,
and that's part of what we do to reduce costs.
They built the jams and jellies for the mission?
Well, I think they built the main spacecraft part,
and the telescope came from somewhere else.
So NASA doesn't build all of the spacecraft anymore.
When possible, we use
contractors and vendors. AJ Jacobs, Jeff Nosanov, a NASA consultant, has been telling us a lot of
interesting things about what he feels are the slightly wrongheaded philosophies behind NASA.
Keeping in mind he has a little bit of a horse in the race as a consultant who wants to get his
projects going, how much of what he said was totally false, AJ?
About 40%. What?
No.
In my extensive research, it did check out.
And I'm a big fan of the Kepler telescope.
It found over a thousand planets.
And I actually looked up what the planets were called.
They've got some wonderful names.
There's Kepler-560b and Kepler-438b.
There's a crowd favorite of Kepler-841b.
Yeah.
So you guys are, you need some creativity, I think.
Once we know a little bit more about them
other than there's one there,
it'll be easier to name them, I think.
All right.
There are a lot of Roman gods.
Yeah, that's true.
And I also looked up the original Kepler that it's named for. That's Johannes Kepler, the 17th-century astronomer. And it turns out, appropriately enough, he had
money problems. So like NASA, he had money problems. And he had to supplement his astronomy
work with astrology. He was the astrologer to the Holy Roman Emperor,
which I think is just super sad
because it's like Stephen Hawking reading tarot cards.
It's like...
I did not know that.
Great scientist.
I'm here.
Jeff, thank you so much for playing Tell Me Something I Don't Know.
It's time now for a quick break.
When we return, more guests.
We will make Manoush Zomorodi tell us some things we don't know,
and our live audience will pick a winner.
If you would like to be a guest on a future show
or attend a future show,
please visit Freakonomics.com.
We will be right back.
Welcome back to Freakonomics Radio Live.
Tonight we are playing Tell Me Something I Don't Know.
My name is Stephen Dubner.
Our fact checker is A.J. Jacobs,
and my co-host is the podcasting veteran Manoush Zomorodi.
Before we get back to the game,
we have got some frequently asked questions
written just for you,
Manoush. You ready for them? Okay. Yep. Manoush, we know that your latest podcast, ZigZag, spends a
lot of time talking about the blockchain. So for those who still don't get it, can you explain
the blockchain in 30 seconds or less? 30 seconds. Okay. Think of it as Google Docs, right? Like if
you have a document in Google Docs, if you
change it, everyone sees the change, right? And when you go back in, if somebody else changed it, you see
it as well. Think of that, but with no Google being in charge. Pretty cool, right? That was phenomenal, and
it was 17 seconds. I could keep, do you want to keep going? Give me 13 seconds more. Okay, great. So also
blockchain, it really truly is like a necklace of computers across the world,
all linked together.
When the change gets made, it goes across the entire necklace.
You can have private ones, like a bank can have its own blockchain,
or you can have more public ones, like Ethereum,
which anyone can join the Ethereum blockchain, as it were.
You can layer, keep going? You're way over your 13 seconds.
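Manoush's "necklace" image — blocks linked so that everyone can see, and verify, every change — is the core of how a blockchain resists tampering. Here is a toy sketch of that linking idea in Python; it is a hypothetical illustration of hash-chaining only, not how Ethereum, Civil, or any production chain actually works:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})
    return chain

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")

# Quietly rewriting an earlier block breaks every later link,
# which is why everyone on the "necklace" can spot the change:
chain[0]["data"] = "Alice pays Bob 500"
print(block_hash(chain[0]) == chain[1]["prev_hash"])  # False
```

Real networks add consensus rules and cryptographic signatures on top of this, but the tamper-evidence comes from exactly this chain of hashes.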
So no, let me ask you one more question on blockchain. Most people who know a little bit
about blockchain are alternately kind of enamored and petrified, especially when it's attached to
a currency, which tends to be very volatile. Tell us your one very favorite totally non-currency potential application of
blockchain. Saving journalism. That's the weirdo experiment that I've been part of. Anybody heard
of Civil here? Great. Okay, so Civil is the blockchain startup that my former executive
producer at WNYC, we quit our jobs to join this weirdo startup. And we are documenting
the entire process of trying to get our heads around what blockchain is, how it could potentially
save journalism. The idea with Civil is that there would be a network of trusted, verified,
little media publications, and people would be able to sort of pay for them as they go or tip
or vote. If somebody is putting nonsense on their fake news could get voted off by staking their tokens.
Spoiler, the token sale failed this week.
All documented on our podcast, ZigZag.
So that sounds like a great project.
Let me ask you this.
Since you have an interesting relationship with technology,
sometimes very pro, sometimes much less pro,
what is your personal strategy
for creating and managing computer passwords? Okay, I've never told anyone this. It's just me here.
I write messages to the tech platforms about how I really feel about them. So you know how you're
supposed to be like, random strings of words is a better way to write a password. So I will write
like, F-you Facebook, and then...
So your whole string of passwords is just a litany of your feelings
towards the tech companies that you engage with?
Yes, correct.
Or if I like the tech company, it's a message to myself
reminding me of why I like it.
Give me a for instance.
In other words, just tell us like your Amazon password for now.
I'll give you another one.
One is like a running app and it was like
words of encouragement
to myself.
Oh, that's so cute.
It's kind of sweet, right?
Anyway,
I also use a password manager
which you all should do
so that,
you know,
you're not keeping
your passwords
in places that are not safe.
No, it does not run
on blockchain.
And finally,
Manoush Zomorodi,
you've worked in many
media formats,
radio, TV, books.
Why, in your view, is podcasting superior to all of them?
Because of the listeners, right?
No, I'm serious.
It's the truth.
I have never, I've been a journalist for a long time now,
and only when I started doing podcasts would people write me the most personal, incredible emails,
hug me when I met them at events.
It's a relationship that I have never had before
with people I don't know
based on sharing of stories and information.
And you're comfortable with this level of forced intimacy?
Have you listened to my shows, Stephen?
I'm pretty comfortable with a lot of things yes
Ladies and gentlemen, Manoush Zomorodi!
All righty then, let's get back to our game. Would you please welcome our next guest, Scott Highhouse.
Hi, Scott. Says here that you are a professor of psychology at Bowling Green State University in Ohio. And I understand you have a riddle for us of some sort.
Yes.
There was a study in 1959 that showed that psychiatric nurses in a mental hospital were just as good at predicting patient readmission as were the expert psychiatrists.
So what were the nurses doing?
Listening to the patients.
Oh, I feel bad.
My mom and dad are going to listen to this.
Were they actually, because there's new technology out there
that is analyzing voice and can predict
when someone is going to have a psychotic break,
were they actually listening to the way that they spoke?
No.
Okay.
That was a real, you just love to stomp on her enthusiasm.
I didn't know.
Why don't you tell us?
Because I have a feeling that the story behind the answer is interesting.
They took each patient's folder and placed it on a kitchen scale.
And the heavier folders predicted readmission more than the lighter.
So is this what people talk about when they talk about Occam's razor?
The simplest theory is more likely to be correct, or is it something different than that?
I don't disagree with that. I think the general principle is that expert intuition is not very good when it comes to making predictions,
particularly about people's behaviors and their performance.
So are you also saying that this could not be replicated
in this day and age due to digital?
You'd have to be more creative, I think,
maybe the length of the file or something.
Gigabytes.
So let me ask you this.
You're a psychology researcher.
Yeah.
Is this story that you just told us related particularly to the work you do?
Or does this expert intuition idea travel across domains?
Yes, my area is industrial organizational psychology.
And I'm interested specifically in hiring and interviewing.
We know that intuition is a derailer. We knew way back that admissions
officers for universities who knew the GPA and the SAT score screwed things up when they added
their holistic judgment about the student. And we find the same thing with job interviews. Expert interviewers, experienced
interviewers in HR are actually worse than a lay person who uses structured questions that are job
related and behavioral in nature. So can you give an example of a good interview question and a poor
one that's more fishing for intuition?
Yes. A traditional interview question would be, why do you want to work here? Tell me about
yourself. And those are boring sounding, but you're saying they work. No, those are more
intuitive. So what is it that drives you and things like that? A more structured question
would be job related and behavioral. So tell me about a time when you encountered conflict at work
and what did you do about it?
Or what would you do in a situation where someone tried to undermine you at work?
So those very behavioral questions are very specific
and they ask about what would you do or what did you do?
Of all the domains in which all of us engage all the time,
so workplace, dating and mating,
an example like you gave in a medical field
where you're trying to assess someone's prospects
or assessing anyone's prospects,
where do you find intuition is most heavily relied on
and therefore most damaging?
Oh, goodness.
I do know that maybe it doesn't answer your question directly,
but intuition is good in some areas,
like wine tasting and art appreciation.
But when you say it's good...
Well, those are studies based on agreement with experts.
Yeah, but the experts you just told us are full of, you know...
My area is prediction, remember.
We're trying to predict future performance on the job.
So in areas where they look at agreement
with experts on aesthetic judgments,
intuition seems to work well.
And the more you think about,
is this de Kooning a good painting,
the farther away you get from expert judgment.
I find really interesting the idea that in a job interview,
but I'm guessing in any context,
if you ask for a specific behavioral response,
whether it's theoretical or real from history,
I mean, that makes a lot of sense.
AJ, Scott's telling us that intuition is to be leery of,
at least in some cases,
and that experts tend to have a lot of it
and make some bad decisions. What more can you tell us about that? Well, yeah, both my intuition and the data
support that intuition is terrible. It's a terrible predictor of future. I actually kind of got
sidetracked because I'm a fan of the old-fashioned ways of predicting the future. And maybe you can
tell me how successful they are. There's brontoscopy, which is predicting the future by the sound of thunder.
Haruspicy, which is predicting the future from the livers of sacrificed sheep. And myomancy,
predicting the future by the movement of rats and mice. Do you use a lot of sheep livers at
Bowling Green? No, but there are areas of employee hiring
where they look at handwriting
or, many years ago, bumps on the head.
None of those were very useful.
But if you got your bump in a very dramatic way,
it could tell you something about the person.
And, just so you know,
zarf, the coffee cup sleeve,
I know you've been waiting,
from the Arabic zarf for vessel.
Thank you.
AJ Jacobs, as always, going way above and beyond the call of duty.
AJ, thank you.
And Scott Highhouse, thank you so much for playing.
It's time for our final guest of the evening.
So would you please welcome her, Rada Mihalcea.
Rada is the director of the Artificial Intelligence Lab at the University of Michigan.
Rada, the floor is yours.
I have a timely topic.
How can you increase your odds
of finding out if a news article is true or fake?
Does it involve AJ?
I am available.
You know, I wonder if what we just heard
from Scott Highhouse should weigh into it in some way,
which is distrusting intuition.
Does that have anything to do with it or no?
To some extent.
Cagey answer.
Let's say more than just discounting intuition,
seeking out firm behavioral or structural elements like punctuation or typography.
Getting closer. Okay, so the second season of ZigZag is trust and information is our theme.
And we did something with the Knight Foundation looking at how misinformation and fake news
essentially spread on Twitter
before the 2016 election and post.
And it actually, the crazy surprising finding
was that all that fake information,
the millions of tweets,
went back to just a few dozen sites.
So it was far more centralized than people previously thought. The other fact that was really
interesting was that 95 to 97 percent of the information coming from those news sites was
true. It was a very small amount, that three to five percent, that was nonsense that really got
pumped out across Twitter. So, counterintuitively, one might say, the genesis is with somewhat reputable sites or sites that are
well-established. And I think that would add to the challenge, in fact, because you cannot really
rely on the source. You work in language and IT, so your answer has something to do with technology,
with computing, yes? That's true. Are you in possession of a pretty good method or algorithm to determine
fake news? Is that what you're saying? You have in your pocket something useful? Right. So your best
bet would be to bring along a computer. It turns out that computers are better than people at
detecting deception. What we found with our algorithms, for instance, in courtrooms, we can spot witnesses who are lying
about 75% of the time, which is quite a bit better than what people would do at the same task. So
people do a little bit better than random at 55%. In fake news, people are better. They get fake
news about 70% of the time. And computers do what? And computers would do 76.
So people are still behind the computers.
Okay, so what are the computers actually doing, though?
Is it a text analysis?
Is it finding inconsistencies in mood or language?
What's happening?
So computers are basically learning from data.
They are learning from collections of lies and truths.
What are the attributes of those? So basically we program the system
to look for certain features or attributes
like sequences of words or relations between words
or the semantic type of the words.
Can you give an example of a phrase or sentence
or even a word that would indicate fakeness?
So one of the aspects of
language that computers would pick up on is the use of personal pronouns. Liars tend to use
first-person pronouns like I, me, myself, we less often, and instead use he, she,
they more often. Psychologists would explain that by saying that liars want to detach themselves from
the lie. Another one,
which I think is somehow counterintuitive, is the use of words that reflect certainty.
Liars would tend to more often use words such as always, absolutely, or using exaggeration.
The best! The best, there we go. Unlike the truth-tellers, truth-tellers would more often use hedging, like maybe, perhaps, probably, things like that.
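The cues Mihalcea describes — first-person pronouns, certainty words, hedges — can be counted up as simple numeric features for a classifier. A minimal sketch in Python; the word lists and example sentences here are illustrative assumptions, not the lab's actual feature set:

```python
import re

# Illustrative cue lists -- hypothetical, not the actual features
# used in Mihalcea's research.
FIRST_PERSON = {"i", "me", "my", "myself", "we", "our", "us"}
CERTAINTY = {"always", "never", "absolutely", "definitely", "best"}
HEDGES = {"maybe", "perhaps", "probably", "possibly"}

def deception_features(text):
    """Count cue words discussed in the interview, normalized by text length."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "first_person": sum(w in FIRST_PERSON for w in words) / n,
        "certainty": sum(w in CERTAINTY for w in words) / n,
        "hedging": sum(w in HEDGES for w in words) / n,
    }

truthful = "Maybe I was there, I probably left around six, I think."
deceptive = "He was absolutely there the whole time, always, the best alibi."

print(deception_features(truthful))
print(deception_features(deceptive))
```

In practice, features like these are fed to a standard machine-learning classifier (as Mihalcea notes below, machine learning rather than deep learning, since labeled lies are scarce) trained on collections of known truths and lies.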
Interesting. So, cynical question, by publicly discussing this research, both in general and
specifically, aren't you just making it easier for the purveyors of fake news to get better?
Not necessarily. I think the clues that computers tend to pick up on are not intuitive for humans.
So if I were to ask you how many times I said I,
you probably have no idea because you don't look for those little words
that actually make a difference in deception detection.
So it is still hard.
Even if you want to prevent others or a computer from detecting deception,
it's actually hard.
Are you using deep learning, I'm assuming, with processing all this information?
We do use deep learning in other projects, but not in this particular one.
Not in this one.
What are we using?
We are using machine learning.
The reason being that deep learning works very well
when it has a lot of data.
Yeah.
So you need a lot of lies, a lot of truth.
We know some places.
Can I just ask, how surprised should we be
that this is the kind of task that computers are better at?
I mean, isn't the list of things that humans are better at than computers getting really short?
I don't mean to degrade the value of this kind of identification,
but I guess it's just not so surprising that a computer, an algorithm,
would be better than an intuitive, emotional, impatient human being?
Because we know that we're bad at those things, right?
Well, yes and no.
I think there are certain tasks where we are still better.
Like, for instance, writing.
If you were to write a novel, people are still much better.
So there is still a fair number of applications
or ways in which we are much better.
So I think the question is,
could you use the technology to parse Brett Kavanaugh's testimony?
We could, and we are planning to.
So we are working on it.
You heard it here first.
And Dr. Ford, of course.
I mean, right?
Of course.
The whole dialogue.
A.J. Jacobs, Rada Mihalcea from the University of Michigan is telling us that computers are getting pretty good at detecting deception.
What more to add?
It is looking good. These are early days for this technology, but I mean, we desperately need it.
And I looked into a little of the history of truth detection devices because the polygraph tests that measure, you know, your pulse and your
skin, they're not that reliable. You know, the American Psychological Association says to be
very skeptical. Though, in their defense, polygraphs do have a very cool backstory,
because one of the inventors of the polygraphs was William Marston, the man who created Wonder Woman, the superhero.
And Wonder Woman's lasso of truth,
that is 100% scientific and reliable.
So that's the secret.
AJ, thank you.
And Rada Mihalcea, thank you so much for playing.
Can we give one more hand to all our guests tonight?
It is time now for our live audience to pick a winner.
So who's it going to be?
Julie Arslanoglu with using antibodies to answer art questions.
David Reiter with how to manipulate away our impatience.
Jeff Nosanov with rethinking risk in NASA and space,
Scott Highhouse with how intuition is often wrong,
or Rada Mihalcea with detecting deception with computers.
While our live audience is voting, let me ask you a favor.
If you enjoy Freakonomics Radio, including this live version of Tell Me Something I Don't Know,
please spread the word, give it a nice rating on Apple Podcasts, Stitcher,
or wherever you get your podcasts.
Thank you so much.
Okay, the audience vote is in.
Once again, thanks so much to all our guest presenters.
And our grand prize winner tonight,
now, you could chalk this up to a little recency bias,
but I don't think so,
for telling us about detecting deception with computers. Rada Mihalcea, congratulations. And Rada,
to commemorate your victory, we'd like to present you with this certificate of impressive
knowledge. It reads, in full, I, Stephen Dubner, in consultation with Manoush Zomorodi and A.J. Jacobs,
do hereby vow that Rada Mihalcea told us something we did not know,
for which we are eternally grateful.
That's our show for tonight.
I hope we told you something you didn't know.
Huge thanks to Manoush and A.J., to our guests,
and thanks especially to you for coming to play Tell Me Something I Don't Know.
Thank you so much.
Tell Me Something I Don't Know and Freakonomics Radio are produced by Stitcher and Dubner Productions.
This episode was produced by Allison Craiglow, Harry Huggins, Zach Lipinski, Morgan Levy, Emma Morgenstern, Dan Zula, and David Herman, who also composed our theme music.
The Freakonomics Radio staff also includes Greg Rippin and Alvin Melathe.
Thanks to our good friends at Qualtrics, whose online survey software is so helpful in putting on this show.
And thanks to Joe's Pub at the Public Theater for hosting us.
You can subscribe to Freakonomics Radio on Apple Podcasts, Stitcher, or on Freakonomics.com.
If you'd like our entire archive ad free, along with lots of bonus episodes and sneak peeks,
please sign up for Stitcher Premium.
Use the promo code FREAKONOMICS for one month free.
Thanks and good night.
Oh, so when you stepped over me, Stephen Dubner.
I'll say it again. Stitcher.