Freakonomics Radio - 199. This Idea Must Die
Episode Date: March 5, 2015
Every year, Edge.org asks its salon of big thinkers to answer one big question. This year's question borders on heresy: what scientific idea is ready for retirement? ...
Transcript
Are you an idea junkie?
I am.
And since you listen to a show like this, you probably are too.
It's exciting to hear about ideas, especially new ones.
There's a progression that happens when you hear a new idea.
You run it through your brain, try to envision where it might lead.
Who will benefit from it?
Who will it hurt? Will it be worth the cost? Is it legal? Is it morally defensible? Is it
in fact a good idea? Today's episode is about ideas, but we run that progression in reverse.
Rather than asking if a new idea is a good one, we ask...
Well, here, you can tell from the answers what we ask.
The idea that I believe is ready to retire...
I think an idea that is really bad, that's detrimental to society, is the idea...
The scientific idea I believe is ready for retirement is the atheism prerequisite.
That's right.
We are asking a bunch of people to name an idea that should be killed off,
an idea that's commonly accepted, but which, in fact, is impeding progress.
Would you like a for instance?
My lab's research focuses on the development of the adolescent brain.
That's Sarah-Jayne Blakemore.
She's a professor of cognitive neuroscience at University College London.
The idea that she'd like to kill off
is the idea that people are either right-brained or left-brained.
When people say left-brained,
apparently what they tend to mean is a mode of thinking
which is more kind of logical and analytical and accurate,
whereas right-brained
people tend to be more creative, intuitive, emotional and subjective.
Like most of the ideas we'll be discussing today, this one is exceedingly popular.
It sells a lot of self-help books, businesses use it, and even scientific studies sometimes employ this idea of left brain, right brain, for example, with regard to gender differences or creativity in the brain. So this idea must make some sense,
right? This is an idea that makes no physiological sense. The brain, Blakemore tells us, is plainly
divided into two hemispheres, with each one doing more heavy lifting for certain
functions. But those hemispheres do not operate in isolation. There's a kind of fibrous tract in
the middle of your brain that connects up the two hemispheres. And that tract, called the corpus
callosum, enables the two hemispheres to talk to each other within a few milliseconds. So it's
simply not possible for one hemisphere to function
without the other hemisphere kind of joining in. So where did the left brain, right brain idea come
from? Blakemore says it most likely began as a misreading of earlier research on a small number
of patients whose two brain hemispheres couldn't communicate. Back in the 60s, 70s and 80s, there was quite a lot of very high impact,
extremely interesting research on split-brain patients who had their corpus callosum removed, surgically removed, mostly for intractable epilepsy. It's not done anymore,
but back then it was done a few times. These rare patients were studied by a professor of psychology
now at the University
of California, Santa Barbara, named Mike Gazzaniga. And what he found was that each hemisphere
played a role in different tasks and different cognitive functions, and that normally one
hemisphere dominated over the other. So what the patients were aware of was what was going on in their left hemisphere, and they didn't have much conscious access to what was going on in their right hemisphere.
This is really interesting and important scientific work.
But what I think happened was that it was kind of slightly misinterpreted in the general public
to suggest that all of us are either left-brained or right-brained.
But actually, most of us have a functioning
corpus callosum, and so we use both our hemispheres all the time.
And yet, Blakemore says, the common perception today is still that most of us are either
left-brained or right-brained. And that, she says, is getting seriously in the way of progress.
What really worries me is that it is having a large impact in education. My research
involves teenagers and we go into schools a lot and what we see is often children being classified
as either left-brained or right-brained and actually it might be an impediment to learning
mostly because that kind of implies that it's fixed or innate and unchangeable to a large degree.
I mean there are huge individual
differences in cognitive strengths. Some people are more creative than others, other people are
more analytical than others but the idea that this is something to do with being left-brained
or right-brained is completely untrue and needs to be retired. From WNYC, this is Freakonomics Radio,
the podcast that explores the hidden side of everything.
Here's your host, Stephen Dubner. I'd like to say that today's episode was our idea,
that we thought up this notion of drawing up a hit list for outdated ideas.
But we are not that clever.
Here is one of the clever people.
For want of a better description, I call myself a cultural impresario.
That's John Brockman.
He makes his living as a literary agent, but for decades,
he's also been a curator of great minds and big ideas.
Years ago, he organized something called the Reality Club.
The idea was that we would seek out
the most interesting, brilliant minds,
have them get up in front of the group,
which was the way they could get in the group,
and ask aloud the questions they were asking themselves.
The group changed over time,
and in the 1990s, it migrated online.
Now it's known as Edge.org.
It's sort of a salon, populated mostly by scientists from the hard sciences and social sciences.
But there are also writers and others as well.
A tradition arose within the salon.
Every year, one question would be put to the entire community and everyone would write an essay in response.
Something like, what should we be worried about?
Or what do you believe is true even though you cannot prove it?
That is the best question ever.
It drove people mad.
Every year, these essays are collected in a book.
The latest book is called This Idea Must Die: Scientific Theories That Are Blocking Progress.
Here's a question everyone was asked to answer.
What scientific idea is ready for
retirement? The question came from an Edge.org contributor named Laurie Santos. I'm a professor
of psychology at Yale University, and I'm also the director of the Comparative Cognition Laboratory.
The question arose from Santos' own academic work. Sometimes once something gets in print or gets in
a textbook or gets on people's
public radar, it just sticks around, even if there's reason to suspect that the idea is just
wrong. And it seems like there's no good procedure to kind of retire bad ideas in science. So, you
know, I'm a psychologist and I sometimes dabble in the work of economics. And if I'm not really in the trenches, I might not know the kinds of ideas that economists are like, "Guys, you know, we stopped paying attention to that 10 years ago."
It'd just be nice to kind of get all ideas crisp and sort of get the ones that are not doing us
service out of there so we can focus on the stuff that we do think is true.
Edge.org received 175 contributions for ideas that must die, whether because they're simply
outdated or have been superseded or have no basis in fact,
or maybe they just don't sit right with the world anymore.
This episode presents a handful of these ideas, and we, in the spirit of overturning our habits, will also try something new.
Rather than hearing me interviewing our guests, badgering them with questions,
you will hear what is essentially a series of soliloquies
from scientists.
I'm Sam Arbesman. I am a complexity scientist and writer.
My name is Paul Bloom. I'm a psychology professor at Yale University.
To doctors.
My name is Azra Raza. I am an oncologist, professor of medicine, and director of the
MDS Center at Columbia University in New York.
To an actor and writer who used to play a doctor on TV.
My name is Alan Alda.
I love science. I love to read about science.
And we even hear from an economist.
When I think about ideas that are getting in the way of progress, I have a strange one.
It's probably one of the most unpopular ideas that you and I have ever talked about.
Let's begin here.
My name is Seth Lloyd.
I'm professor of quantum mechanical engineering at MIT.
One quick warning.
We are not going for trivial ideas here.
We're going big.
Very big.
The idea that I believe is ready to retire
is the universe. Over the last 20 years or so, it's become increasingly clear that the idea of
the universe as just the things that we can see through our telescopes, even though we can see 10 billion light years away,
is an outmoded idea. Now, the conventional picture of how the universe came about
is that it started 13.8 billion years ago in a gigantic explosion called the Big Bang.
It was tremendously hot. It was full of all kinds of particles zipping around here and there.
And then gradually as the universe expanded, it cooled down. Galaxies started to form. Stars started to shine.
And then we're left with the universe that we see around us.
And that's true so far as it goes. If we look around us and see these galaxies flying through the cosmos, their existence, their composition, and their form can be very well explained by this theory of the Big Bang.
But the universe that we see around us is just one part of a much larger, multifaceted multiverse in which many possible universes are contained. So the current theories
suggest that this universe we see with electrons and photons and galaxies and stars and planets
and human beings, this is just one possible way for things to be. And if you were to go far enough
out there, you'd find pieces of the universe where things are entirely different, where there are no electrons, no stars, no planets.
If you go far enough out there, you'll basically find all possible combinations of what's allowed by the laws of physics playing themselves out because our universe is effectively a giant computer.
And everything that can possibly be computed is being computed. And this notion
is a rather new notion. It hasn't really percolated into human consciousness. But once one's given up
this piece of useless baggage that there is only one universe, we really are forced to contemplate the actual physical existence of things beyond what we have direct experimental and observational access to.
And it gives us a nice explanation for why the universe is so darn intricate and complicated.
My name is Emanuel Derman.
I'm a professor at Columbia University,
and I worked on Wall Street for about 20 years as a quantitative analyst.
And the scientific idea that I believe is ready for retirement
is one that's very fashionable now,
and that's the use and the power of statistics.
It's a subject that's become increasingly popular
with the increasing power of computers, computer science,
information technology, and everybody's interest in economics and big data,
which have all come together in some sort of nexus
to make people think that just looking at data
is going to be enough to tell you truths about the world.
And I don't really believe that.
There are ways to understand the world and those ways involve understanding the deep
structure of the world and the way the world behaves and I can give examples from physics.
I worked as a theoretical particle physicist for a long time, and all the really great discoveries
in physics have come from a burst
of intuition, which people tend to look down on these days. But if you look at Kepler,
Johannes Kepler was an astronomer about 50 years or so before Newton. And he actually
spent a lot of time studying Tycho Brahe, who was a Danish observational astronomer who collected tons of very detailed
data on the positions of the planets. And Kepler got access to them and over 30 or 40 years
analyzed them. And actually, it was an astonishing feat. What he did was, if you think about what you
see when you see the trajectories of lights in the sky, which are planets, you see their motion
relative to the Earth. But what Kepler was interested in was their motion relative to the sun.
And the Earth is moving around the sun.
So God knows how he did it, but he had to extract out the motion of the Earth
from the whole picture and describe how the planet moves relative to the sun.
How he did this without computers is quite beyond me.
But in the end, his second law says that the line between the planet and the sun
sweeps out equal areas in equal times.
And it's kind of an astonishing thing to say because he's describing an invisible line between the sun and the planet.
There is no line between the sun and the planet.
And yet he's come up with this burst of intuition which lets him talk about something you can't see and that isn't in the data.
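In modern notation, which the episode itself doesn't use, the second law says the rate at which the sun-planet line sweeps out area is constant:

    \frac{dA}{dt} = \frac{1}{2}\, r^2 \frac{d\theta}{dt} = \frac{L}{2m} = \text{constant},

where $r$ is the sun-planet distance, $\theta$ the planet's angular position, $L$ its orbital angular momentum, and $m$ its mass. The constancy is equivalent to conservation of angular momentum, structure that lives beneath the raw positional data rather than in it.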
And I think that's a good instance of the sort of bursts of insight
that people have when they use their intuition to make great discoveries.
There's no understanding how he came to look at things in that way.
But what's fashionable these days is simply doing statistics and correlations.
And I don't believe you can really find deep truths like Kepler's laws
that are trying to describe something below the surface
simply by looking at data.
And it's sort of what's wrong with a lot of financial modeling too is the idea that somewhere
there's a formula that will tell you how to manage risk, tell you how to price things,
and absolve you of the responsibility or the struggle to actually understand the world in
a deeper way. And in the financial crisis, too much reliance on statistics is what got people
into trouble, thinking that bad things could never happen because they hadn't happened before.
My name is Azra Raza.
I am an oncologist, professor of medicine, and director of the MDS Center at Columbia University in New York. The scientific idea that I believe is ready for retirement is that mouse models must be retired from use in drug development for cancer therapy, because what you see in a mouse
is not necessarily what you're going to see in humans. For example, one very simple
mouse model would be we take a mouse and give it a drug and see what happens to it. Another,
which is much more commonly used, is called the xenograft mouse model, in which what we do is
that we will take a mouse and we will use radiation therapy, etc. to destroy its
immune system completely and now we will transplant a tumor taken from a human into this mouse
model. So its own immune system is gone so it won't reject the tumor and we can then test the efficacy of a drug to kill these human cells in the xenografted mouse model.
Now, currently, cancer affects one in two men and one in three women.
It's obvious that despite concerted efforts of thousands of investigators,
cancer therapy is today like beating a dog with a stick to get rid of its fleas.
It's really, in general, quite primitive. In fact, in acute myeloid leukemia, the disease I've been studying, we are giving the majority of patients the same drugs today that we were giving in 1977, when I started my research in this area.
So when compared to, let's say, things like infectious diseases or cardiac drugs,
cancer drugs fail more often. Recently, things have improved: from the mid-90s to now, about 20% of the drugs entering clinical trials are actually FDA approved, but 90% of the drugs still fail, either because of unacceptable toxicity or because, once we give them to humans, we find that they are not working the way they were supposed to.
So why are these facts so grim?
Because we have used mouse models that are misleading.
They do not mimic human disease well.
And they are essentially worthless for drug development.
It's very clear that if we are to improve cancer therapy,
we have to study human cancer cells.
But in my opinion, too many eminent laboratories and illustrious researchers have devoted entire lives
to studying malignant diseases in mouse models.
And they are the ones reviewing each other's grants and deciding where money gets spent.
So they're not prepared to accept that mouse models are basically valueless for most of cancer therapeutics.
But persisting with mouse models and trying to treat all cancers in this exceedingly artificial system will be a real drawback to proceeding with
personalized care based on a patient's own specific tumor, its genetic characteristics,
its expression profile, its metabolomics. All those things are so individually determined in cancer.
And for a lot of patients, the drugs are already there.
We just have to know how to match the right drug to the right patient at the right time.
And in order to do that, the answer is not going to come from mouse models, but it's
going to come from studying human cancers directly.
Mice just are not men.
The scientific idea that I believe is ready for retirement is the atheism prerequisite. The idea that the only way science can work
is if we assume we live in a godless, meaningless universe.
My name's Douglas Rushkoff. I'm a professor of media studies at Queens College, CUNY.
The assumption that we live in a godless, meaningless universe means, you know, that everything that we think, everything from civilization to consciousness to meaning, are all emergent phenomena, that they're all a result of matter doing various materialist things. And when I started to realize that much of science's insistence on atheism was suspect was when I started hearing these folks talk about the singularity.
They have a narrative for how consciousness develops, that information itself was striving for higher states of complexity. So information made little atoms and then molecules,
because molecules are more complex, and then little cells and little organisms, and finally
human beings and civilization, all more and more complex homes for information. And now computers
are coming, which will be even more complex than people. So information can just migrate from human consciousness into artificial intelligence, at which point the human species can just kind of fade away.
And that's when I realized, oh, they've created their equally mythological story for what's happening with a beginning, a middle and an end, which is just as archaic, just as arbitrary as any
of the religious narratives out there. And the irony for me is that it's the most outspokenly
godless of the scientists who fall most tragically in the spell of this story structure.
The people I'm asking to retire this idea are scientists, evolutionary biologists,
who seem to need to start the universe from zero in order for their models to make sense. What if we don't have to
make science and our view of reality conform to the basic story structure of beginning, middle,
and end? You know, if there was something here before the Big Bang, then the story that science
is trying to tell doesn't really work.
I'm not saying that people can't be atheists.
Honestly, I have no idea what's going on here.
I don't know if there's a God or not.
I don't know if there's meaning or not.
But what I'm saying is that atheism can't be a prerequisite for the scientific model. Because if you are forcing yourself to strip meaning from reality
in order to cope with it, in order to explore it and observe it,
then you're tying your hands behind your back and you're missing a huge potential portion of the picture.
All right. We've already heard five ideas that should maybe be sent to the trash bin.
The atheism prerequisite for scientists; the value of mouse models for human medicine, which I admit stunned me; the idea that statistics are as powerful and useful as we think; the idea of the universe; and the left-brain, right-brain construct. Coming up on Freakonomics Radio, some other ideas we might want to get rid of, including the idea that things are either true or false,
and the idea that science can tell us everything we need to know about how to be happy.
And the idea that markets are good.
Really?
You sure about that?
And the second idea that I think is ready for retirement is the idea that markets are
bad. From WNYC, this is Freakonomics Radio.
Here's your host, Stephen Dubner.
They are out there.
Bad ideas, or if not bad ideas, ideas that have at least outlived their usefulness and are now standing in the way.
They're clogging up our brains, our academic departments, our research labs, our popular culture.
Which is why Edge.org has published a book called This Idea Must Die.
You know, I'm not sure any ideas have to die.
That's Alan Alda.
I'm an actor and a writer.
You probably know Alda from the epic TV series M*A*S*H or, more recently, from The West Wing or 30 Rock.
What you may not know is that Alda also has a long-held passion for science.
Like most kids, I was very interested in science when I was a six-year-old boy. I used to do what I thought were experiments, trying to mix toothpaste in my mother's face powder to see if it would blow up.
So that seems to be the basis of a lot of science, actually, starting with Alfred Nobel.
But I just never lost that curiosity. And when I wrote for MASH, I wrote, I don't know, I guess about 20 or 25 episodes.
Whenever there was a medical procedure, I would research it as carefully as I could.
I'd go to a medical library and get out the books and find out exactly how the operation was done.
At this particular mobile army hospital, we're not concerned with the ultimate reconstruction
of the patient. We only care about getting the kid out of here alive enough for someone else to put
on the fine touches. We work fast and we're not dainty because a lot of these kids that can stand
two hours on the table just can't stand one second more. Walter Dishell was our medical advisor and
he had the wonderful ability to not only tell you what disease or operation might apply to the story,
but he could help you figure out how the story would benefit by the various stages of that disease or the techniques in the operation.
These days, Alan Alda is a visiting professor at the Alan Alda Center for Communicating Science at Stony Brook University on Long Island.
I love science. I love to read about science.
And so I'm very concerned about how science is communicated.
And for the last 25 years, I've spent a lot of my time trying to help scientists communicate about their work
so that ordinary people like me can understand it.
And now at the Center for Communicating Science at Stony Brook University,
we train scientists in kind of unusual ways.
We train them to relate to their audience, first of all,
by introducing them to improvisation exercises.
And that is not to make them performers or make them comics
or get them to invent things on their feet, which is what we usually think of in terms of improvising.
It's to get them to relate, which the improvising exercises all do.
They make you – they put you in a position where you have to observe the other player and you have to read the other player's face and tone of voice.
In a way, you have to read the other person's mind.
And that's, I think, the basis of good communication.
You've got to know what's going on in the mind of the person listening to you
to know if you're getting through to them or not.
Alda wrote an essay for This Idea Must Die,
but he's a little bit squeamish about the premise.
It's eye-catching to say this idea must die.
And I'm not sure that most of the articles in the Edge catalog of things that need to be retired actually need to be retired or just rethought.
So, therefore, I would say that asking for these ideas to be retired is really a way of saying, this is the received wisdom. Do we need
to re-examine it? And I think that's a good approach to take. The idea that I think maybe
is due for rest, and notice I said it needs to take a rest. I didn't say it needed to die,
is the idea that things are either true or false. And I know that's kind of an impertinent thing to say,
and it sounds stupid.
But what I mean by it is the idea that
something is either true or false for all time in all respects.
I think about this because
when I was being taught to think in school,
I was taught that the first rule of logic
is that a thing cannot both be and not be at the same time and in the same respect.
And that last part, in the same respect, really has a lot to do with it.
Because something is determined to be true through research.
And then further research finds out that it's only true under certain conditions or that there are other factors that are involved.
And this is a very interesting example.
A lot of people were interested.
I know I was interested when I read that red wine was good for you.
And at first, we might have even thought, the more red wine, the better.
Look at all that antioxidant stuff going into us. But it was a terrible disappointment sometime later when some other scientist said,
you know, under certain conditions, red wine could be not so good for you.
And again, there's this other thing that it might be really great for mice and less good for us.
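In the notation of elementary logic, which Alda doesn't use on air, the rule he was taught is $\neg(P \wedge \neg P)$: a proposition and its negation cannot both hold at the same time and in the same respect. The red wine case shows why that last qualifier matters. "Red wine is good for you" is really a family of propositions $P(c)$ indexed by conditions $c$, the dose, the drinker, the outcome measured, and $P(c_1) \wedge \neg P(c_2)$ is perfectly consistent when $c_1 \neq c_2$. The science didn't contradict itself; the respect changed.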
But what really disturbs me is when the public decides that that means that science can't make up its mind or that scientists are just making things up.
Some people actually do think that.
Some people think the findings in science are hogwash.
Because if one day they say one thing, the next day they say another thing, then it sounds like they just are taking wild guesses at things.
When, in fact, the progress of science is just that.
You go deeper and deeper, you open up one door and you find another hundred doors that have locks on them,
and you have to figure out the combinations for them.
And I personally find it exciting to see what we thought we understood to be contradicted,
but I don't think the public has enough of a grasp of how science is done,
how it's based on evidence. When you say this is true, in the mind of the person receiving that
information, they're going to accept it as true for all time, under all circumstances, unless you
warn them that things might change in the future, that we might learn more about this.
That shift in the frame of reference
is something that ought to be allowed for.
I want to see science prosper.
I want to see evidential thinking be the norm for the public
as it is for scientists.
So my suggestion that we alter the way we talk about
things being true or false is really to help in the communication of science so that people
don't get confused. My name is Alun Anderson.
I'm a science journalist and writer.
And the idea that I think should be retired
is the notion that we are all still Stone Age thinkers,
that because of this long period we spent as hunter-gatherers,
perhaps 200,000 years before the appearance of agriculture,
that we're still stuck with all those reflexes, all those motivations, that worked so well a long time ago. So living in this modern world, you still, you know, want to bash people over the head with a rock, or you're told by an expert that the best way to look after a baby is the same way that would have appealed to someone living in a cave. It's that notion that a lot of the stresses and strains of modern life have been
caused by a disconnect between what we are biologically and what culture has created for us.
And this doesn't really gel with what we know about just how flexible and adaptable the human
brain is and how it can be rewired to do quite wonderful things
that didn't happen during the Stone Age.
And we have lots of really good scientific evidence
of how the brain can be changed by culture
and how that change in the brain can then be passed on to later generations.
So if we take reading as an example,
reading and writing emerged only around 5,000 years ago,
but there's no doubt reading, with its access to lots more information,
the ability to share your thoughts with others,
is a massive change in how the world works.
And if you look inside the brain of a person who can read
and scan the brain so you see which bits light up
when they're reading
and when they're talking, you'll see their brain has been massively remodeled. All kinds of new
pathways have formed which link areas to do with visual perception and to do with hearing. You'll
see it's profoundly different from a person who can't read. So a person who can read has got a
kind of a new brain, but it's not in any way inherited.
What is happening is that, in each generation,
we've got really good at teaching children
how to make this change for themselves
so that they become a different kind of person.
So a change in the brain changes a person,
changes a cultural process, changes more people,
and that's how culture shapes brains.
Cultural evolution and the force of cultural change
is being greatly underestimated
when people talk about the Stone Age mentality.
And to just go through life thinking that we are trapped by what we are already
holds us back from embracing what we might become in the future.
The idea I believe is ready for retirement is that science can tell us everything we need to know
about how to be happy.
My name is Paul Bloom.
I'm a psychology professor at Yale University.
I wouldn't deny for a second that science could tell us a lot about happiness.
It could tell us how to cure depression.
It could tell us some surprising things about which aspects of our everyday life make us happy and which don't.
But I think the idea that science can give us a
complete theory of how to live a happy life is mistaken and mistaken in important ways.
So there's two main limitations of science in the domain of happiness. One is the notion of what it is to be happy.
Which of all the things that go on in the brain should count as happiness?
And nobody knows the answer.
And it's not the sort of answer that you're going to find out by studying in the lab.
If you tell me happiness is a lot of pleasure, you know, suppose you have a terrific meal
and then some wonderful sex and then you read this great book.
Yeah, just terrific time.
Compare that to a really difficult time where you help a lot of people and you feel a satisfaction.
Both of these events correspond to activity in the brain.
Which one is real happiness?
Which one should we be trying to maximize?
But there's a second, independent problem.
Suppose we decide what it
is to make a happy day. And we agree on it. There's no argument. We've settled that. Still,
how do you decide how to sum up days to make a happy life? Is it better to live 90 so-so years
or 30 really happy years, even though some of those other days may be miserable? You can know
everything in the world about the brain and that won't tell you the answer. And in fact, what's
interesting is these problems are very similar to problems like how do you maximize happiness in a
society? Is the best society one that has a lot of happy people and the total sum of happiness is very high,
even though some people might be miserable, living horrible lives?
Or is a better society one where the average happiness is very high,
even though it may be not as much of a total happiness as the first society?
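To make the contrast concrete, here is a minimal sketch in Python (the societies and their happiness scores are hypothetical, ours rather than Bloom's):

    # Society A: large, high total happiness, but some miserable lives.
    society_a = [8] * 900 + [1] * 100
    # Society B: small, lower total happiness, but a higher average.
    society_b = [9] * 300

    for name, scores in [("A", society_a), ("B", society_b)]:
        print(name, "total:", sum(scores), "average:", sum(scores) / len(scores))
    # A wins on total happiness (7300 vs 2700); B wins on average (9.0 vs 7.3).

The two aggregation rules rank the societies in opposite orders, and no measurement of brains settles which rule is the right one to maximize.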
Those are hard questions and they aren't scientific ones. But I think some scholars tend to
be overenthusiastic as to what science can tell you. And there's a huge literature of people who
will directly argue that the key to figuring out how to live a good life and live a happy life
is revealed by laboratory studies and science. So you can say, who cares? Who cares if many scientists and many psychologists
believe that their research will tell us
everything we need to know about happiness?
Why does it matter? Why does it cause any problems?
One answer is, when scientists overreach
and people see them overreaching,
it causes lack of trust in science.
The second problem is, it's a missed opportunity.
I think that the study of how to live a good life is one of the great questions, and it's the sort of thing that would benefit
from cross-disciplinary work, including philosophers and theologians and artists
and a range of scholars, but not just scientists.
I'm Sam Arbesman.
I am a complexity scientist and writer.
And the scientific idea that I think is ready for retirement is the idea that all of science needs to be big science. And by big science,
I mean the money and the effort that we pour into it, as well as the scale of the technology we use,
as well as the scale of the organizations and the teams. So we've gone from this age when you could
be a hobbyist. There used to be this figure of the gentleman scientist. It was very common several hundred years ago,
individuals who were independently wealthy
tinkering in their country estate or wherever,
and they were able to make a lot of discoveries.
As science has changed over time,
gone from this age when you could be
just an individual making discoveries
to this idea that you now need lots of money, lots of effort,
lots of people in order to make discoveries.
And a lot of people now feel that that's all there is,
that science has gotten bigger,
and we have to constantly move this way towards big science.
And I think even though there are many big and major discoveries
that are done through big science,
there still is a place for little
science. So because a lot of scientists now choose to publish their research and the raw data for
their research online, we now have huge amounts of data available in a way that was not available
before. They're now available to everyone. And that coupled with this massive availability of tools to help analyze these
things makes it no longer the province of the specialist with a vast amount of money.
You can now go on eBay and buy lab apparatus and set up your own biotech lab in your basement.
You can buy these things on the cheap and do research yourself.
And you won't necessarily make cutting-edge results all the time,
but you can still do things and see how it works.
So there is this democratization of the means of actually making new discoveries.
And I think one of the great things about that is it no longer makes science seem so abstract or different from what everyone else is doing.
It's simply just a rigorous way of asking questions, of inquiring into the world. And so even
though we think that things that came before us that might be hundreds of years old or even older
have been completely picked over and there's no new areas to work on or no new potential for discovery, there's still a lot available. And I think
if we recognize that anyone can play with these things, and that they still might fail, the potential for doing this kind of little science will help fill in a lot of these holes that the frontier has passed by, which is really,
really exciting.
The idea that I think is ready for retirement is the idea that markets are good and the idea that markets are bad.
I'm Michael Norton.
I'm a professor at Harvard Business School, and I think different people have different views of how markets work.
Some people think markets are amazing and they solve all of our problems.
Other people think markets are terrible and they're a source of misery for humans.
So the idea that markets are good is this sense that, in the aggregate, across all individuals and across all decisions, things are optimal: when everyone is trying to buy the things they want and everyone's making things for people to buy, those markets become efficient. For example, the stock market can become efficient because people eventually evaluate things correctly and everything works really smoothly. The other view of markets, that they're terrible, is that it doesn't make any sense that markets would be good
because markets are made up of individuals. And we know that individuals are extremely biased,
including me, where we make all sorts of mistakes. So the idea that aggregating up
mistakes would somehow solve the mistakes rings, to many people, as completely wrong.
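One way to see why both camps have a point is a toy simulation in Python, ours rather than Norton's: averaging many independent errors cancels the noise, but it cannot cancel a bias that the individuals share.

    import random

    random.seed(0)
    true_value = 100.0
    n = 10_000

    # Case 1: individual errors are independent noise; the aggregate is accurate.
    independent = [true_value + random.gauss(0, 20) for _ in range(n)]

    # Case 2: everyone shares the same +15 bias; the aggregate inherits it.
    shared_bias = [true_value + 15 + random.gauss(0, 20) for _ in range(n)]

    print(sum(independent) / len(independent))  # close to 100: "markets are good"
    print(sum(shared_bias) / len(shared_bias))  # close to 115: "markets are bad"

The same aggregation mechanism produces both outcomes, which is Norton's point: whether a market is good or bad depends on the structure of the individual mistakes, not on aggregation itself.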
I think people looking at markets in this black and white way means that there's very little dialogue between people who hold these different views because it's almost as though the other side is just so blind to understanding how markets really work.
If you believe in market efficiency, when anyone on the other side says, I don't believe markets are efficient, maybe we need to do some tweaks to make the market efficient. You think they literally don't understand how markets work.
In a sense, this underlies nearly every political and public policy decision we're arguing about
and making today. If you think about health care and the housing market and income inequality,
all of these current debates basically have at their core, at least in part, this idea that markets work just fine and don't need government versus markets don't work just fine and really need government.
So this lack of conversation across these two diametrically opposed views partly drives misunderstanding around these public policy debates. And so the idea is, if you think about how markets work,
what they are is an aggregation of individuals,
or sometimes we call them groups.
Groups of people have come together to cure diseases,
to save the world, amazing things.
Groups of people have come together and caused religious conflict
and caused horrible things to happen.
So we don't necessarily think that groups are either
good or groups are bad. We think, in fact, that they can be good and they can be bad. The market,
in a sense, is just the biggest group. And it seems likely that if groups that we know that
are a little bit smaller than the market can be good or bad, probably the market itself can be
good or bad as well. And that view of markets might help
us understand, again, not that they're good or bad, but really deeply understand when they do
well and when they do poorly. So, Levitt, do you feel generally that people, especially kind of academic elite people, put too much emphasis on looking for new ideas rather than perhaps, you know, killing off old ones?
I never thought of that in my entire life, whether people do too much of that.
That's Steve Levitt, my Freakonomics friend and co-author.
He is an economist at the University of Chicago.
I love the idea of killing off bad ideas
because if there's one thing that I know in my own life,
it's that ideas that I've been told a long time ago stick with me.
And you often forget whether they have good sources or whether they're real.
You just live by them.
They make sense, especially the worst kind of old ideas are the ones that are intuitive, the ones that fit with your worldview.
And so unless you have something really strong to challenge them, you hang on to them forever.
So give me a for instance.
When I think about ideas that are getting in the way of progress, I have a strange one. It's probably one of the most unpopular ideas that you and I have ever talked about.
I think an idea that is really bad, that's detrimental to society, is the idea that
life is sacred.
Okay, I know that's probably like you and everyone else is going, what's wrong with this guy?
Okay, but no, you got to hear me out for one second.
Okay, clearly my own life to me has almost infinite value.
We know people will fight like crazy and do anything to stay alive.
But the problem is that as a society, we really have taken that to heart. And so anything we do, like trying to limit healthcare or access at the end of life to various kinds of medical
stuff, feels awful to us. And even other things that maybe people will do voluntarily, like
selling their organs, which might induce some greater
likelihood of death at some point, but in return for financial gain along the way. People hate
ideas like that. And it's true in the U.S. and Europe, without a doubt, that there's this view
that life is an entitlement and the protection of life is an entitlement. And here's why I think
that's such a bad idea.
When you look at the progress that we've made in society,
so much of the progress over the last 100 years
has been in keeping people alive.
I mean, it's incredible what through medicine
and antibiotics and other things
we've managed to increase life expectancy.
So it's an area, it's a dimension
in which we have a lot of power. We're good at it.
But the problem with this idea that every life is valuable and every life should be saved,
essentially at any cost, is that the kind of innovations that we end up making
and the expense which is exacted in terms of GDP end up being huge. So the problem is
that right now, healthcare costs are spiraling out of control.
So almost 20% of GDP is spent on healthcare, but much of it is not effective. And it's not effective
because we hold the idea that everyone needs to be kept alive kind of no matter what. And so we do
incredibly expensive things and we encourage innovation by pharmaceutical companies and by
medical device makers, which find solutions at any cost, even though in the end, if you think
about health just as being like any other good or living like being any other good, you kind of buy
it and sell it and it has a price. And if you don't have enough money, you just can't stay alive
forever. You would organize the market in a very different way and people make different choices.
And the kind of innovation that we do would be presumably much more effective and efficient
innovation because people would have to develop the kinds of solutions that you and I would
pay for out of our own pocket as opposed to solutions where we just say, well, the government's
going to pay for it anyway.
So it doesn't matter if a chemotherapy only extends life by three weeks and it costs $400,000. Look, we're going to give it to people
anyway. That encourages all the wrong kinds of innovation.
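The arithmetic behind that example is worth making explicit. A rough sketch in Python (the $400,000 and three weeks are Levitt's numbers; the benchmark threshold is a range commonly cited by health economists, not something from the episode):

    cost = 400_000               # dollars, Levitt's example
    weeks_gained = 3             # extra weeks of life, Levitt's example
    years_gained = weeks_gained / 52
    print(cost / years_gained)   # roughly $6.9 million per life-year

    # Commonly cited cost-effectiveness thresholds run on the order of
    # $100,000-$150,000 per (quality-adjusted) life-year, so this treatment
    # overshoots them by a factor of roughly 50.

That gap is exactly the "wrong kind of innovation" he's describing.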
Look, I love my own life. I love it more than anything. And as many resources as I have,
if I'm facing death, I'm probably willing to spend that money to try to keep...
OK, but as much as I like you, I don't like your life infinitely, right?
I wouldn't probably drain every penny of my savings account to prolong your life by six months.
Let's say a year. Let's say I'm dying. And as of today, I know that I'm going to
die one year from now, but I can get two years with the right interventions that'll be very
costly. How much of your net worth would you spend, Levitt?
Are you going to spend a year writing a great book with me or what are you going to do?
Just enjoy life, play golf?
So there's some self-interest here.
Okay. Aside from self-interest, purely out of my deep love for you,
I might spend 5% or 10% of my wealth to keep you alive for a year.
That's not very much. That's not very much at all.
That's like a lot to me. That's like a lot.
Oh my God, that's like less than a sales tax.
So how about a pure
stranger? What share of your
wealth, if someone said you're
the only person who can save this other
person, what share of your wealth would you give
to give a complete and total stranger
an extra year of life?
Like next to nothing, right? I'd have to say
next to nothing, yeah. Yeah, because there's too
many other total strangers.
And the problem is that the way we've organized society is that none of us really care very much about anyone else.
But I guess the idea is that if we don't care about anyone else, then we know no one cares about us either.
So we have to pass laws that say that the government, society, health care, we have to be taken care of.
We have to be saved.
But I think it's actually the wrong way to think about the problem from an economic perspective.
Look, I'm not saying the market's the only thing that works or the greatest thing.
It's just, but it is, we've accepted it as the way that we live our lives.
And I believe that markets should also probably, or will, maybe should is the wrong word, will eventually have to function more as health care gets to be increasingly expensive.
And the approach we've taken now becomes less and less feasible.
And a different organization of healthcare delivery and of decision-making about life,
to me, is really central to making progress.
I hear you. I'm still a little hung up on the fact that you're only going to spend 5%
of your net worth on extending my life. On the other hand, it's only a year.
Wait, what are you going to do for me if I'm dying?
Well, same question, one year, one extra year.
Look, it's free for me to say, so I'll say 90%. Would I actually do that?
That's a joke.
But here's the deal about your 5% offer. So if you lost 5% of your net worth overnight, which is possible, it could be a bad day in the stock market and at the horse track, you could lose 5% of your net worth. You would barely notice it; it would not affect your daily life at all. I would argue that if you lost me overnight, I'd like to think you would at least know that something happened.
So I know by all the free time I would have, I would constantly realize you were gone.
So maybe actually, maybe we have the arrow going in the wrong direction.
Maybe you're willing to pay for me to get off.
Thanks to John Brockman at Edge.org and all our guests today. Steve Levitt, Michael Norton,
Sam Arbesman, Paul Bloom,
Alun Anderson, Alan Alda,
Douglas Rushkoff, Azra Raza,
Emanuel Derman, Seth Lloyd, Laurie Santos, and Sarah-Jayne Blakemore.
Thanks to Christopher Werth for his excellent production work on this episode.
Most of all, thanks to you for listening.
I'm guessing you may have something to say about all the ideas
sentenced to death here today, so tell us what you're thinking.
You can find us on Twitter, Facebook, and at Freakonomics.com.
And here's an idea that isn't worth killing off.
Subscribing to Freakonomics Radio.
Just go to iTunes or wherever else you get your podcasts,
find that subscribe button,
and we will sneak into your podcast listening device
every Wednesday at midnight Eastern time
and deliver a fresh episode for free.
You're welcome.
Hey, podcast listeners.
On the next Freakonomics Radio, you will hear from Katie Milkman.
I'm an assistant professor at the Wharton School where I study behavioral economics and how people make choices.
And we talk about something she calls temptation bundling.
So when I talk about temptation bundling, I mean combining a temptation, something like a TV show, a guilty
pleasure, something that will pull you into engaging in a behavior with something you know
you should do but might struggle to do. You need a for instance? My temptation bundle is to listen
to Freakonomics podcasts while I'm running. Doing it right now. What I like to do is skip an
afternoon of work and go to the movies after my annual pap smear.
I really wish my temptation bundle was acceptable, but it would be drinking at work.
Self-help with a cognitive plot twist. That's next time on Freakonomics Radio.
Freakonomics Radio is produced by WNYC and Dubner Productions.
Our staff includes Greg Rosalsky, Caroline English, Suzie Lechtenberg, and Chris Bannon,
with help from Christopher Werth, Anna Hyatt, Rick Kwan, David Herman, and Merritt Jacob.
If you want more Freakonomics Radio, you can subscribe to our podcast on iTunes or go to Freakonomics.com, where you'll find lots of radio, a blog, the books, and more.