Freakonomics Radio - Your Brain Doesn’t Work the Way You Think
Episode Date: December 23, 2024

David Eagleman upends myths and describes the vast possibilities of a brainscape that even neuroscientists are only beginning to understand. Steve Levitt interviews him in this special episode of People I (Mostly) Admire.

SOURCES:
David Eagleman, professor of cognitive neuroscience at Stanford University and C.E.O. of Neosensory.

RESOURCES:
Livewired: The Inside Story of the Ever-Changing Brain, by David Eagleman (2020).
"Why Do We Dream? A New Theory on How It Protects Our Brains," by David Eagleman and Don Vaughn (TIME, 2020).
"Prevalence of Learned Grapheme-Color Pairings in a Large Online Sample of Synesthetes," by Nathan Witthoft, Jonathan Winawer, and David Eagleman (PLoS One, 2015).
Sum: Forty Tales from the Afterlives, by David Eagleman (2009).
The vOICe app.
Neosensory.

EXTRAS:
"Feeling Sound and Hearing Color," People I (Mostly) Admire (2024).
"What's Impacting American Workers?" People I (Mostly) Admire (2024).
"This Is Your Brain on Podcasts," Freakonomics Radio (2016).
Transcript
Hey there, it's Stephen Dubner.
Today, a holiday treat, a bonus episode from People I Mostly Admire, one of the other shows
we make here at the Freakonomics Radio Network.
It is an interview show hosted by Steve Levitt, my Freakonomics friend and co-author, who
is an economics professor emeritus now at the University of Chicago.
On this episode, Levitt interviews David Eagleman,
a neuroscientist, entrepreneur, and author of several books,
including Livewired:
The Inside Story of the Ever-Changing Brain.
It is a fascinating conversation.
You are going to love it.
To hear more conversations like this,
follow People I (Mostly) Admire in your podcast app.
Okay, that's it from me.
Here is Steve Levitt.
I love podcast guests who
change the way I think about some important aspect of the world.
A great example is my guest today, David Eagleman.
He's a Stanford neuroscientist whose work on brain plasticity
has completely transformed my understanding of the human brain and its possibilities.
The human brain is about three pounds. It's locked in silence and darkness.
It has no idea where the information is coming from because everything is just electrical
spikes and also chemical releases as a result of those spikes.
And so what you have in there is this giant symphony of electrical activity going on and
its job is to create a model of the outside world.
Welcome to People I Mostly Admire with Steve Levitt.
According to Eagleman, the brain is constantly trying to predict the world around it.
But of course, the world is unpredictable and surprising, so the brain is constantly
updating its model.
The capacity of our brains to be ever-changing is usually referred to as plasticity, but
Eagleman offers another term: livewired.
That's where our conversation begins.
Plasticity is the term used in the field. The great neuroscientist, or psychologist actually,
William James, coined the term because he was impressed
with the way that plastic gets manufactured,
where you mold it into a shape and it holds onto that shape.
And he thought that's kind of like what the brain does.
The great trick that mother nature figured out
was to drop us into the world half-baked.
If you look at the way an alligator drops into the world,
it essentially is pre-programmed.
It eats, mates, sleeps, does whatever it's doing.
But we spend our first several years absorbing the world around us based on our neighborhood
and our moment in time and our culture and our friends and our universities.
We absorb all of that such that we can then springboard off of that and create our own
things.
There are many things that are essentially pre-programmed in us,
but we are incredibly flexible, and that is the key about live wiring.
When I ask you to think of the name of your fifth grade teacher,
you might be able to pull that up, even though it's been years since you saw that fifth grade teacher,
but somehow there was a change made in your brain and that stayed in place.
You've got 86 billion neurons.
Each neuron is as complicated as a city.
This entire forest of neurons,
every moment of your life is changing.
It's reconfiguring, it's strengthening connections
here and there.
It's actually unplugging over here and replugging over there.
And so that's why I've started to feel that the term plasticity is
maybe underreporting what's going on.
And so that's why I came up with the term livewiring.
When I went to school, I feel like they taught me the brain was organized
around things like senses and emotions, that there were these different parts
of the brain that were good for those things, but you make the case that
there's a very different organization of the brain.
It is organized around the senses, but the interesting thing is that the cortex,
this wrinkly outer bit, is actually a one-trick pony. It doesn't matter what you plug in. It'll
say, okay, got it. I'll just wrap myself around that data and figure out what to do with that data.
It turns out that in almost everybody, you have functioning eyeballs that plug into the
back of the head.
And so we end up calling the back part of the brain the visual cortex.
We call this part the auditory cortex and this the somatosensory cortex that takes in
information from the body and so on.
So what you learned back in high school or college is correct most of the time, but what
it overlooks is the fact that the brain is so flexible.
If a person goes blind or is born blind, that part of the brain that we're calling
the visual cortex, that gets taken over by hearing, by touch, by other things.
And so it's no longer visual cortex.
The same neurons that are there are now doing a totally different job.
So let me pose a question to listeners.
Imagine you have a newborn baby
and he or she looks absolutely flawless on the outside,
but then upon examination,
the doctors discover that half of his or her brain
is just missing,
a complete hemisphere of the brain.
It's never developed.
It's just empty space.
I would expect that would be a fatal defect,
or at best that the child would grow up profoundly mentally disabled.
Turns out the kid will be just fine. You can be born without half the brain or
you can do what's called a hemispherectomy, which happens to children
who have something called Rasmussen's encephalitis which is a form of epilepsy
that spreads from one hemisphere to the other.
The surgical intervention for that is to remove half the brain.
You can just imagine as a parent the horror you would feel if your child had to go in
for something like that, but you know what?
Kid's just fine.
I can't take my laptop and rip out half the motherboard and expect it to still function,
but with the brain, with a livewired system, it'll work.
So I first came to your work because I was so blown away by the idea of human echolocation,
only to discover that echolocation is only the tip of the iceberg.
But could you talk just a bit about echolocation, how quickly with training it can start to substitute for sight?
So it turns out that blind people can make all kinds of sounds either with their mouth like clicking
or the tip of their cane or snapping their fingers,
anything like this.
And they can get really good at determining
what is coming back as echoes and figure out,
oh, okay, this is an open space in front of me.
Here, there's something in front of me.
It's probably a parked car and oh,
there's a little gap between two parked cars here.
So I can go in here.
The key is the visual part of the brain is no longer being used because for whatever
reason there's no information coming down those pipelines anymore.
So that part of the brain is taken over by audition, by hearing and by touching other
things.
What happens is that the blind person becomes really good at these other things because
they've just devoted more real estate to it.
And as a result, they can pick up on all kinds of cues
that would be very difficult for me and you
because our hearing just isn't that good.
And then in these studies,
you put a blindfold on a person for two or three days
and you try to teach them echolocation.
If I understand correctly, even over that time scale,
the echolocation starts taking over the visual part of the brain. Is that a fair assessment?
That is exactly right. This was my colleagues at Harvard. They did this over the course of five
days. They demonstrated that people could get really good at, there are actually a number of
studies like this, they can get really good at reading Braille. They can do things like echolocation. And the speed of it was sort of the surprise.
But the real surprise for me came along
when they blindfolded people tightly
and put them in the brain scanner
and they were making sounds or touching the hand.
And they were starting to see activity
in the visual cortex after 60 minutes of being blindfolded.
So in your book you talk about REM sleep, and honestly, if I had sat down and tried to come up with an explanation of REM sleep, I could have listed a thousand ideas. Your pet theory would not be one of them. So explain what REM sleep is, and then tell me why you think we do it.
REM sleep is rapid eye movement sleep.
We have this every night, about every 90 minutes, and that's when you dream.
So if you wake someone up when their eyes are moving rapidly and you say,
Hey, what are you thinking about?
They'll say, well, I was just riding a camel across a meadow.
But if you wake them up at other parts of their sleep, they typically won't have anything going on.
So that's how we know we dream during REM sleep.
But here's the key.
My student and I realized that at nighttime,
when the planet rotates,
we spend half our time in darkness.
And obviously we're very used to this
electricity blessed world,
but think about this in historical time,
over the course of hundreds of millions of years,
it's really dark.
I mean, half the time you are in blackness.
Now you can still hear and touch and taste and smell in the dark, but the visual system
is at a disadvantage whenever the planet rotates into darkness.
And so, given the rapidity with which other systems can encroach on that, what we realized
is it needs a way of defending itself against takeover every single night.
And that's what dreams are about.
So what happens is you have these midbrain mechanisms that simply blast random activity
into the visual cortex every 90 minutes during the night.
And when you get activity in the visual cortex, you say, oh, I'm seeing things.
And because the brain is a storyteller, you can't activate all the stuff without feeling like there's a whole
story going on there.
But the fascinating thing is when you look at the circuitry carefully, it's super specific,
much more specific than almost anything else in the brain.
It's only hitting the primary visual cortex and nothing else.
And so that led us to a completely new theory about dreams.
We studied 25 different species of primates and we looked at the amount of REM sleep
they have every night and we also looked at how
plastic they are as a species. It turns out that the amount of dream sleep that a creature has
exactly correlates with how plastic they are. Which is to say if your visual system is in danger of getting taken over because your
brain is very flexible,
then you have to have more dream sleep.
And by the way, when you look at human infants,
they have tons of dream sleep at the beginning,
when their brains are very plastic,
and as they age, the amount of dream sleep goes down.
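The cross-species claim Eagleman makes boils down to a correlation between two measured variables: hours of REM sleep and a plasticity score per species. With purely hypothetical numbers (these are not the study's data; the species values below are made up for illustration), the computation is just a Pearson correlation:

```python
import numpy as np

# Hypothetical illustration only -- NOT the paper's actual measurements.
# REM-sleep hours per night and a plasticity score for made-up primates.
rem_hours  = np.array([0.5, 0.9, 1.1, 1.6, 1.9])
plasticity = np.array([1.0, 2.1, 2.8, 4.0, 4.7])

# Pearson correlation coefficient; the study's claim is that across
# 25 primate species this value comes out close to 1.
r = np.corrcoef(rem_hours, plasticity)[0, 1]
```

A value of `r` near 1 is what "exactly correlates" means in this context: more-plastic species spend more of the night in REM sleep.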
Have you convinced the sleep scientists this is true,
or is this just you believing it right now?
At the moment, there are 19 papers
that have cited this and discussed this and I think
it's right.
I mean, look, everything can be wrong.
Everything is provisional, but it's the single theory that is quantitative.
It's the single theory about dreams that says not only here is an idea for why we dream,
but we can compare across species and the predictions match exactly.
No one would have suspected that you'd see a relationship between how long it takes you
to walk or reach adolescence and how much dream sleep you have, but it turns out that
is spot on.
So we talked about echolocation, which uses sound to accomplish tasks that are usually
done by vision.
And you've started a company called Neosensory, which uses touch to accomplish tasks that
are usually done with hearing.
Can you explain the science behind that?
Given that all the data running around in the brain is just data and the brain doesn't
know where it came from, all it knows is, oh, here are electrical spikes, and it tries to figure out what to do with
it.
I got really interested in this idea of sensory substitution, which is can you push information
into the brain via an unusual channel?
Originally we built a vest that was covered with vibratory motors, and we captured sound
for people who are deaf.
So the vest captures sound,
breaks it up from high to low frequency,
and you're feeling the sound on your torso.
By the way, this is exactly what the inner ear does.
It breaks up sound from high to low frequency
and ships that off to the brain.
So we're just transferring the inner ear
to the skin of the torso, and it worked.
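The decomposition Eagleman describes, splitting incoming sound into frequency bands from low to high and driving one vibratory motor per band, can be sketched roughly as follows. This is a toy illustration of the general idea, not Neosensory's actual algorithm; the motor count and band layout are made up:

```python
import numpy as np

def sound_to_motor_intensities(samples, sample_rate, n_motors=32):
    """Split an audio chunk into frequency bands (low to high), one
    band per vibratory motor, and return a 0..1 drive intensity per
    motor -- a crude imitation of what the inner ear does."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Divide roughly the audible range into n_motors equal bands.
    band_edges = np.linspace(20, sample_rate / 2, n_motors + 1)
    intensities = np.zeros(n_motors)
    for i in range(n_motors):
        mask = (freqs >= band_edges[i]) & (freqs < band_edges[i + 1])
        intensities[i] = spectrum[mask].sum()
    peak = intensities.max()
    return intensities / peak if peak > 0 else intensities

# A pure 440 Hz tone should light up essentially one low-band motor.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
motors = sound_to_motor_intensities(tone, sr)
```

In a real device this would run continuously on short windows of microphone input, with each intensity mapped to a motor's vibration strength.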
People who are deaf could come to hear the world that way. So I spun this out of
my lab as a company, Neosensory, and we shrunk the vest down to a wristband and we're on wrists of
deaf people all over the world. The other alternative for somebody who's deaf is a cochlear
implant, an invasive surgery. This is much cheaper and does as good a job.
Just to make sure I understand it:
Sounds happen and this wristband hears the sounds
and then shoots electrical impulses into your wrist
that correspond to the high and low frequency.
It's actually just vibratory motors.
So it's just like the buzzer in your cell phone,
but we have a string of these buzzers all along your wrist.
And we're actually taking advantage of
an illusion which is if I have two motors next to each other and I stimulate them both,
you will feel one virtual point right in between. And as I change the strength of those two motors
relative to each other, I can move that point around. So we're actually stimulating 128 virtual
points along the wrist.
Do people train?
You give them very direct feedback or is it more organic?
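The two-motor illusion Eagleman describes, stimulating neighboring motors at complementary strengths so the wearer feels one phantom point between them, amounts to simple linear interpolation. A minimal sketch, with a made-up motor count:

```python
import numpy as np

def virtual_point(position, n_motors=8):
    """Drive the two physical motors adjacent to a continuous
    'position', weighted by proximity, so the wearer perceives a
    single virtual point between them. A sketch of the illusion
    described in the interview, not Neosensory's firmware."""
    position = np.clip(position, 0, n_motors - 1)
    lo = int(np.floor(position))          # motor just below the point
    hi = min(lo + 1, n_motors - 1)        # motor just above the point
    frac = position - lo                  # how far toward the upper motor
    amplitudes = np.zeros(n_motors)
    amplitudes[lo] = 1.0 - frac
    amplitudes[hi] += frac
    return amplitudes

# Halfway between motors 2 and 3: both buzz at half strength,
# and the wearer feels one point in between.
amps = virtual_point(2.5)
```

Sweeping `position` in small steps between a handful of physical motors is how a short strip of buzzers can present on the order of a hundred distinguishable virtual points.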
Great question.
It started off where we were doing a lot of training on people and what we realized is
it's all the same if we just let it be organic.
The key is we just encourage people, be in the world.
And that's it.
You see the dog's mouth moving and you feel the barking on your wrist, or you close the door and you feel that on your wrist, or you say something,
you know,
most deaf people can speak and they know what their motor output is and they're
feeling the input.
Okay.
So hearing their own voice for the first time through this.
Oh God.
Yeah, that's interesting.
And by the way, that's how you learned how to use your ears too.
You know, when you're a baby, you're watching your mother's mouth move
and you're hearing data coming in your ears
and you clap your hands together
and you hear something in your ears,
it's the same idea.
You're just training up correlations in the brain
about, oh, this visual thing seems to always go
with that auditory stimulus.
So then it seems like if I'm deaf
and I see the dog's mouth moving and I now associate that with the sound,
do the people say that they hear the sound where the dog is or is the sound coming from the wrist?
For the first few months, you're hearing it on your wrist. You can get pretty good at these correlations.
But then after about six months, if I ask somebody, when the dog barks, you feel something on your wrist and you think,
okay, what was that on?
That must've been a dog bark.
And then you look for the dog.
And they say, no, I just hear the dog out there.
And that sounds so crazy,
but remember that's what your ears are doing.
Your ears are capturing vibrations of the eardrum
that moves from the middle ear to the inner ear,
breaks up to different frequencies,
goes off to your brain, goes to your auditory cortex.
It's this giant pathway of things.
And yet, even though you're hearing my voice right now
inside your head, you think I'm somewhere else.
And that's exactly what happens
irrespective of how you feed the data in.
So you also have a product that helps with tinnitus.
Could you explain both what that is
and how your product helps?
So tinnitus is a ringing in the ears.
It's like a constant beep, and about 15% of the population has this.
And for some people it's really, really bad.
It turns out there is a mechanism for helping with tinnitus
which has to do with playing tones
and then matching that with stimulation on the skin.
People wear the wristband, it's exactly the same wristband, but we have the phone play
tones and you're feeling that all over your wrist and you just do that for 10 minutes
a day and it drives down the tinnitus.
Now why does that work?
There are various theories on this, but I think the simplest version is that your brain is figuring out, okay,
real sounds always cause this correlating vibration on my wrist, but a fake sound, beep,
this thing in my head, that doesn't have any verification on the wrist.
And so that must not be a real sound.
So because of issues of brain plasticity, the brain just reduces the strength of the
tinnitus because it learns that it's not getting any confirmation that that's a real world
sound.
Now, how did you figure out that this bracelet could be used for this?
This was discovered by a woman named Susan Shore, who's a researcher who discovered this
about a decade ago. She was using electrical shocks on the tongue. And there's actually
another company that spun out, called Lenire, that does this with sounds in the ear and shocks on the tongue.
They argued that it had to be touch on the head and the
neck, and I didn't buy that at all.
And that's why I tried that with the wristband.
So this was not an original idea for us, except for trying it on the
wrist, and it works equally well.
So what we're talking about is substituting between senses.
Are there other forms of this product that are currently available to consumers or likely
to become available soon in this space?
For people who are blind, for example, there are a few different approaches to this.
One is called the BrainPort.
And that's where for a blind person, they have a little camera on their glasses and
that gets turned into little electrical stimulation on the tongue.
So you're wearing this little electro-tactile grid on your tongue,
and it tastes sort of like Pop Rocks in your mouth.
Blind people can get pretty good at this.
They can navigate complex obstacle courses or throw a ball into a basket at a distance because
they can come to see the world through their tongue, which if that sounds crazy, it's the
same thing as seeing it through these two spheres that are embedded in your skull.
It's just capturing photons and information about them, figuring out where the edges are,
and then shipping that back to the brain.
The brain can figure that out.
There's also a colleague of mine who makes an app called the vOICe.
It uses the phone's camera
and it turns that into soundscape.
So if you're moving the camera around,
you're hearing a soundscape. It takes some time for you as a sighted person to get used to this and say, oh, okay, I'm turning the visual world into
sound and it's starting to make sense when I pass over an edge or when I zoom into something,
the pitch changes, the volume changes, there's all kinds of changes in the sound quality that
tells you, oh yeah, now I'm getting close to something, now I'm getting far, and here's what
the world looks like in sound.
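The mapping Eagleman sketches, vertical position to pitch, brightness to loudness, scanned left to right, can be illustrated in a few lines. This is a toy version of the general idea, not the vOICe app's actual algorithm; the image size, frequency range, and timing below are invented:

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_min=200.0, f_max=2000.0):
    """Scan a grayscale image (values 0..1) left to right, one column
    at a time. Row position maps to pitch (top = high) and pixel
    brightness maps to loudness -- a sketch of vision-to-sound
    substitution, not a real app's implementation."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    # One sine frequency per row, with high pitches at the top.
    freqs = np.linspace(f_max, f_min, n_rows)
    out = []
    for col in range(n_cols):
        brightness = image[:, col]
        tones = brightness[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        out.append(tones.sum(axis=0))
    return np.concatenate(out)

# A single bright pixel in the top row of the third column should
# produce a high-pitched blip partway through the sweep.
img = np.zeros((8, 4))
img[0, 2] = 1.0
audio = image_to_soundscape(img)
```

Moving the camera changes which pixels are bright, which changes the pitch and volume pattern of each sweep; that changing sound quality is the cue a trained listener learns to interpret.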
Coming up after the break:
There's really no shortage of theoretical ideas in neuroscience, but fundamentally we
don't have enough data.
More of Steve Levitt's conversation with David Eagleman in this special episode of People
I Mostly Admire.
Okay.
Back now to this special episode of People I Mostly Admire.
This is my Freakonomics friend and co-author Steve Levitt in conversation with the neuroscientist David Eagleman.
Elon Musk's company Neuralink has gotten a ton of attention lately.
Could you explain what they're trying to do and whether you think that's a promising
avenue to explore?
What they're doing is they're putting electrodes into the brain to read from and talk to the
neurons there.
So what we've been talking about so far
has been sending signals to the brain,
but what Neuralink is trying to do
is take signals out of the brain, is that right?
That is correct.
Everything we've been talking about so far
with sensory substitution,
that's a way of pushing information in, and it's non-invasive.
And with Neuralink, you have to drill a hole in the head
to get to the brain itself,
but then you can do reading and writing invasively.
That actually has been going on for 60 years.
The language of the brain is electrical stimulation.
And so with a little tiny wire, essentially, you can zap a neuron and make it pop off,
or you can listen to when it's chattering along going, pop, pop, pop, pop, pop, pop,
pop, pop.
There's nothing actually new about what Neuralink is doing, except that they're making a one-ton
robot that sews the electrodes into the brain so it can do it smaller and tighter and faster
than a neurosurgeon can.
And by the way, there are a lot of great companies doing this sort of thing with electrodes.
As people get access to the brain, we're finally getting
to a point, we're not there yet, but we're getting to a point where we'll finally be
able to push theory forward. There's really no shortage of theoretical ideas in neuroscience,
but fundamentally, we don't have enough data because, as I mentioned, you've got these
86 billion neurons all doing their thing, and we have never measured
what all these things are doing at the same time.
So we have technologies like
functional magnetic resonance imaging, FMRI,
which measures big blobby volumes of,
oh, there was some activity there and some activity there,
but that doesn't tell us what's happening
at the level of individual neurons.
We can currently measure some individual neurons, but not many of them. It would be like if
an alien asked one person in New York City, hey, what's going on here? And then tried
to extrapolate to understand the entire economy of New York City and how that's all working.
So I think we're finally getting closer to the point where we'll have real data about, wow, this is what
thousands or eventually hundreds of thousands or millions of neurons are actually doing
in real time at the same moment.
And then we'll be able to really get progress.
I actually think the future is not in things like Neuralink, but the next level past that,
which is nanorobotics.
This is all theoretical right now, but I don't think this is more than 20, 30 years off,
where you do three-dimensional printing,
atomically precise, you make molecular robots,
hundreds of millions of these,
and then you put them in a capsule
and you swallow the capsule,
and these little robots swim around
and they go into your neurons,
these cells in your brain,
and from there, they can send out little signals saying,
hey, this neuron just fired.
And once we have that sort of thing,
then we can say non-invasively,
here's what all these neurons are doing at the same time,
and then we'll really understand the brain.
I've worn a continuous glucose monitor a few times.
So you stick this thing in your arm
and you leave it there for 10 days,
and every five minutes,
it gives you a reading of your blood glucose level.
It gives you direct feedback on how your body responds
to the foods you eat, also to stress or lack of sleep
that you simply don't get otherwise.
I learned more about my metabolism in 10 days
than I had over the entire rest of my life combined.
What you're talking about with these nanorobots is obviously in the future,
but is there anything now that I can buy and I can strap on my head,
and I know it's not going to be individual neurons,
but that would allow me to get feedback about my brainwaves
and be able to learn in that same way I do with the glucose monitor?
What we have now is EEG, electroencephalography, and there are several really good companies like
Muse and Emotiv that have come out with at-home methods. You just strap this thing on your head
and you can measure what's going on with your brain waves. The problem is that brain waves are
still pretty distant from the activity of 86 billion
chattering neurons. An analogy would be if you went to your favorite baseball
stadium and you attached a few microphones to the outside of the
stadium and you listened to a baseball game but all you could hear with these
microphones is occasionally the crack of the bat and the roar of the crowd and
then your job is to reconstruct what baseball is,
just from these few little signals you're getting.
So I'm afraid it's still a pretty crude technology.
I could imagine that I would put one of these EEGs on
and I would just find some feeling I liked,
bliss or peace, or maybe it's a feeling induced by drugs
and alcohol and
I would be able to see what my brain patterns looked like in those states then I could sit
around and try to work towards reproducing those same patterns.
Now it might not actually lead to anything good but in your professional opinion total
waste of time you trying to do that?
The fact is if you felt good at some moment in your life
and you sat around and tried to reproduce that,
I think you'd do just as well thinking about that moment
trying to put yourself in that state
rather than trying to match a squiggly line.
You know, I'm a big believer in data though,
and it seems like somebody should be building AI systems
that are able to look at those squiggles and
give me feedback.
The thing that's so hard about the brain is that we don't get direct feedback about
what's going on inside it. Feedback is how the brain got so good at what it does.
If the brain didn't get feedback from the world about what it was doing, it wouldn't
be any good at predicting things.
So I'm trying to find a way that I can get feedback.
But it sounds like you're saying I gotta live
for 20 more years if I wanna hope to do that.
I think that's right.
I mean, there's also this very deep question
about what kind of feedback is useful for you.
Most of the action in your brain is happening unconsciously.
It's happening well below the surface of your awareness
or your ability to access it.
And the fact is that your brain works much better that way.
Do you play tennis, for example?
Not well.
Or golf?
Golf I play.
Okay, good.
So if I ask you, hey, Stephen,
tell me exactly how you swing that golf club.
The more you start thinking about it,
the worse you're gonna be at it
because consciousness,
when it starts poking around in areas
that it doesn't belong,
it's only gonna make things worse.
And so it is an interesting question
about the kind of things that we want to be more
conscious of.
I'm trying some of these experiments now, actually using my wristband, wearing EEG and
getting a summarized feedback on the wrist.
So I don't have to stare at a screen, but as I'm walking around during the day, I have
a sense of what's going on with this.
Or with the smartwatch, having a sense of what's going on with my physiology.
I'm not sure yet whether it's useful
or whether those things are unconscious
because mother nature figured out a long time ago
that it's just as well if it remains unconscious.
One thing I'm doing, which is just a wacky experiment,
just to try it, the smartwatch is measuring
all these things, we have that data going out,
but the key is
you have someone else wear the wristband,
like your spouse wear the smartwatch
and you're feeling her physiology.
And I'm trying to figure out,
is this useful to be tapped into someone else's physiology?
I don't know if it's good or bad for marriages, but-
What a nightmare.
But I'm just trying to really get at this question
of these unconscious signals that we experience,
is it better if they're exposed or better to not expose them?
What have you found empirically?
Empirically, what I found is that married couples don't want to wear it.
So in my lived experience, I walk around and there's almost nonstop chatter in my head.
It's like there's a narrator who's commenting on what I'm observing in the world.
My particular voice does a lot of rehearsing of what I'm going to say out loud in the future
and a lot of rehashing of past social interactions.
Other people have voices in their head that are constantly criticizing and belittling
them. But either way, there's both a voice that's talking
and there's also some other entity in my head
that's listening to that voice and reacting.
Does neuroscience have an explanation
for this sort of thing?
In my book, Incognito, the way I cast the whole thing
is that the right way to think about the brain
is like a team of rivals.
Lincoln, when he set up his presidential cabinet, he set up several
rivals in it and they were all functioning as a team.
That's really what's going on under the hood in your head is you've got all
these drives that want different things all the time.
So if I put a slice of chocolate cake in front of you, Steven, part of your
brain says, Ooh, that's a good energy source.
Let's eat it.
Part of your brain says, no, don't eat it.
It'll make me overweight.
Part of your brain says, okay, I'll eat it,
but I'll go to the gym tonight.
And the question is, who is talking with whom here?
It's all you, but it's different parts of you.
All these drives are constantly arguing it out.
It's, by the way, generating activity
in the same parts of the brain as listening
and speaking that you would normally do.
It's just internal before anything comes out.
Language is such an effective form of communicating and of summarizing information that at least
my impression inside my head is that a lot of this is being mediated through language.
But I also have this impression that there are parts of my
brain that are not very good with language. Maybe I'm crazy, but I have this
working theory that the language parts of my brain have really co-opted power.
The non-speaking parts of my brain, they actually feel to me like the good parts
of me, the interesting parts of me, but I feel like they're essentially held
hostage by the language parts.
Does that make any sense?
Well, this might be a good reason for you
to keep pursuing possible ways to tap into your brain data.
And by the way, it turns out that the internal voice
is on a big spectrum across the population,
which is to say some people like you
have a very loud internal radio.
I happen to be at the other end of the spectrum
where I have no internal radio at all.
I never hear anything in my head.
That's called anendophasia.
But everyone is somewhere along this spectrum.
One of the points that I've always really concentrated on in neuroscience is the actual differences between people. Traditionally, that's been looked at in terms of disease states. But the question is, among people who are in the normal part of the distribution, what are the differences between us?
It turns out those are manifold.
So take something like how clearly you visualize
when you imagine something.
So if I ask you to imagine a dog running across
a flowery meadow towards a cat,
you might have something like a movie in your head.
Other people have no image at all.
They understand it conceptually,
but they don't have any image in their head.
And it turns out when you carefully study this,
the whole population is smeared across the spectrum.
So our internal lives from person to person
can be quite different.
So when you talk about the spectrum,
it makes me think of synesthesia.
Could you explain what that is and how that works?
So I've spent about 25 years now studying synesthesia, and that has to do with the fact that some percentage of the population has a mixture of the senses. They might look at letters on a page and that triggers a color experience for them, or they hear music and that causes them to see something visual, or they put some taste in their mouth and it causes them to have a feeling on their fingertips. There are dozens and dozens of forms of synesthesia, but what they all have in common is a cross-blending of things that are normally separate in the rest of the population.
And what share of the population has these patterns?
So it's about 3% of the population that has colored letters or colored weekdays or months
or numbers.
Well, it's big.
It's interesting.
I wouldn't have thought it was so big.
The crazy part is that if you have synesthesia, it probably has never struck you that 97%
of the population does not see the world the way that you see it.
Everyone's got their own story going on inside and it's rare that we stop to consider the possibility
that other people do not have the same reality that we do.
And what's going on in the brain?
In the case of synesthesia,
it's just a little bit of cross talk between two areas
that in the rest of the population
tend to be separate but neighboring.
So it's like porous borders between two countries.
They just get a little bit of data leakage and that's what causes them to
have a joint sensation of something.
People make a big deal out of it when they talk about musicians having this.
And they imply that it's helpful, that it makes them better musicians.
Do you think there's truth to that? Or is it just that if 3% of the population has this, then there are going to be some great musicians among them?
I suspect it's the latter, which is to say everyone loves pointing out synesthetic musicians,
but no one has done a study on how many deep sea divers have synesthesia or how many accountants
have synesthesia.
And so we don't really know if it's disproportionate among musicians.
So you've created this database of people who have the condition, and you find a pattern that is completely and totally bizarre, and that's that there's a big group of people who associate the letter A with red, B with orange, C with yellow, and it goes on, and then the colors start repeating at G. In general, though, you don't see any patterns at all; people can connect these colors and letters in any way. Do you remember when you first found this pattern and what you thought it was?
So typically, as you said, it's totally idiosyncratic. Each synesthete has his or her own colors
for letters. So my A might be yellow, your A is purple, and so on. And then what happened
is with two colleagues of mine at Stanford, we found in this database of tens of thousands of synesthetes that I've collected over the years, we found that starting in the late 60s, there was some percentage of synesthetes who happened to share exactly the same colors.
These synesthetes were in different locations, but they all had the same thing. And then that percentage rose to about 15% in the mid 70s.
So when you saw this, you must have been thinking, my God, this is important, right?
Exactly right.
The question is, how could these people be sharing the same pattern?
What we had always suspected is that maybe there was some imprinting that happens, which
is to say there's a quilt in your grandmother's house that has a red A and a yellow B and
a purple C and so on.
But everyone has different things
that they grew up with as little kids.
And so it was strange that this was going on.
The punchline is that we realized
that this is the colors of the Fisher Price Magnet Set
on the refrigerators that were popular
during the 70s and 80s and then essentially died out.
And so it turns out that when I look across
all these tens of thousands of synesthetes,
it's just those people who were kids in the late 60s and 70s and 80s that imprinted on
the Fisher Price Magnet set, and that's their synesthesia.
And then as its popularity died out, there aren't any more who have that particular pattern.
Now, I have to imagine that the way we teach in traditional classrooms with a teacher,
professor at a blackboard lecturing to a huge group of passive students, as a neuroscientist,
that must make you cringe, right?
It does, increasingly, yes.
How should we teach?
I think the next generation is going to be smarter than we are simply because of the
broadness of the diet that they can consume.
Whenever they're curious about something, they jump on the internet, they get the answer
straight away from Alexa or from ChatGPT.
They just get the answers and that is massively useful for a few reasons.
One is that when you are curious about something, you have the right cocktail of neurotransmitters present to make that information stick. So if you get the answer
to something in the context of your curiosity, then it's going to stay with you. Whereas
you and I grew up in an era where we had lots of just in case information.
What do you mean by that?
Oh, you know, like just in case you ever need to know that the battle of Hastings happened
in 1066, here you go.
And you want to contrast that with just in time information.
Exactly.
I need to know how to fix my car.
And so the internet tells me, and then I can really remember it because I need it.
That's exactly it.
And so look, you know, for all of us with kids, I know you've got kids, I've got
kids and we feel like, oh, my kid's on YouTube and wasting time.
There's a lot of amazing resources and things that they learn on YouTube, or even on TikTok, or anywhere.
There's lots of garbage, of course, but it's better than what we grew up with.
When you and I wanted to know something, we would ask our mothers to drive us down to the library, and we would thumb through the card catalog and hope there was something there that wasn't too outdated.
You were more ambitious than me.
I would just ask my mother,
and I have since learned that every single thing
my mother taught me was completely wrong.
But I still believe them.
Because of this part of the brain that locks in the things that you learn, I still have to fight every day against the falsehoods my mother taught me.
I wish I had told her to take me to the library.
My mother was a biology teacher, and my father
was a psychiatrist.
And so they had all kinds of good information.
I'm just super optimistic about the next generation of kids.
Now, as far as how we teach, things
got complicated with the advent of Google.
And now it's twice as complicated with ChatGPT.
Happily, we already learned these lessons 20 years ago.
What we need to do is just change the way that we ask questions of students.
We can no longer just assume that fill in the blank or even just writing a paper on something is the optimal way to have them learn something.
But instead, they need to do interactive projects like run little experiments with each other and you know the kind of thing that you and I both love to do in our careers which is okay go out and
find this data and run this experiment and see what happens here.
That's the kind of opportunities that kids will have now.
You are listening to a special bonus episode of People I Mostly Admire with
Steve Levitt and the neuroscientist David
Eagleman. After the break, what are large language models missing?
It has no theory of mind, and it has no physical model of the world the way that we do.
That's coming up after the break.
David Eagleman is a professor, a CEO, leader of a nonprofit called the Center for Science
and Law, host of TV shows on PBS and Netflix, and the founder of Possibilianism.
Like every curious person trying to figure out what we're doing here, what's going on,
it just feels like there are two stories.
Either there's some religion story or there's the story of strict atheism, which I tend
to agree with, but it tends to come with this thing of, look, we've got it all figured out.
There's nothing more to ask here.
There is a middle position which people call agnosticism, but usually that
means, I don't know, I'm not committing to one thing or the other. I got interested in
defining this new thing that I call Possibilianism, which is to try to go out there and do what
a scientist does, which is an active exploration of the possibility space. What the heck is
going on here? We live in such a big and mysterious cosmos. Everything about our existence is sort of weird. Obviously, the whole Judeo-Christian tradition, that's one little point in that possibility space. Or the possibility that there's absolutely nothing, we're just atoms and we die. But there's lots of other possibilities.
And so I'm not willing to commit to one team or the other without having sufficient evidence.
So that's why I call myself a Possibilian.
And so in support of Possibilianism, maybe a better name could be in order, you wrote a book called Sum, that's S-U-M. So it's Sum: Forty Tales from the Afterlives.
How do you describe the book to people?
I call it literary fiction.
It's 40 short stories that are all mutually exclusive.
They're all pretty funny, I would like to think,
but they're also kind of gut wrenching.
And what I'm doing is shining the flashlight
around the possibility space.
None of them are meant to be taken seriously,
but what the exercise of having 40
completely different stories gives us is a sense of, wow,
actually, there's a lot that we don't know here. In some of the stories, God is a female. In some
stories, God is a married couple. In some stories, God is a species of dim-witted creatures. In one
story, God is actually the size of a bacterium and doesn't know that we exist. And in lots of stories, there's no God at all.
That book is something I wrote over the course of seven years and became an international
bestseller.
It's really had a life to it that I wouldn't have ever guessed.
When I heard about the book, I saw the subtitle and thought, I have zero interest in reading
a book about the afterlife.
I totally misunderstood what the book was about.
And then I certainly didn't understand that Sum was Latin.
Sum I actually chose because, among other things,
that's the title story.
In the afterlife, you relive your life,
but all the moments that share a quality are grouped together.
So you spend three months waiting in line
and you spend 900 hours sitting
on the toilet and you spend 30 years sleeping. All in a row. Exactly. And this amount of time
looking for lost items and this amount of time realizing you've forgotten someone's name and
this amount of time falling, and so on. Part of why I used the title Sum is because of the sum of events in your life like that. Part of it was because of cogito ergo sum.
So it ended up just being the perfect title for me, even if it did lose a couple of readers there.
People are super excited right now about these generative AI models,
the large language models. What's
your take on it?
Essentially, these artificial neural networks took off from a very simplified version of
the brain, which is, hey, look, you've got units and they're connected. And what if we
can change the strength between these connections? And in a very short time, that has now become
this thing that has read everything ever written on the planet and can give extraordinary answers. But it's not yet the brain or
anything like it. It's just taking the very first idea about the brain and
running with it. What a large language model does not have is an internal model
of the world. It's just acting as a statistical parrot. It's saying, okay,
given these words, what is the next word most likely
to be given everything that I've ever read on the planet? And so it's really good at
that, but it has no model of the world, no physical model. And so things that a six-year-old
can answer, it is stuck on. Now, this is not a criticism of it in the sense that it can
do all kinds of amazing stuff and it's going to change the world, but it's not the brain
yet and there's still plenty of work to be done to get something
that actually acts like the brain.
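Eagleman's "statistical parrot" description, that the model just asks "given these words, what is the next word most likely to be?", can be illustrated with a toy sketch. The corpus, function names, and simple bigram counting below are purely illustrative assumptions, not anything from the episode; real large language models are vastly more sophisticated, but the core next-word-prediction idea is the same.

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus (hypothetical, chosen just for this sketch).
corpus = (
    "part of your brain says eat it "
    "part of your brain says do not eat it "
    "part of your brain says eat it and go to the gym"
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("your"))   # "brain" always follows "your" in this corpus
print(predict_next("says"))   # "eat" is the most common continuation
```

The point of the sketch is that the model has no idea what a brain or a gym is; it only knows which word tends to come next, which is exactly the gap between pattern-matching and a physical model of the world that Eagleman describes.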
Do you think that it is a solvable problem to give these models a theory of mind, a model of the world?
I suspect so because there are 8.2 billion of us who have this functioning in our brains
and as far as we can tell, we're just made of physical stuff.
We're just very sophisticated algorithms,
and it's just a matter of cracking what that algorithm is.
If we were to come back in 100 years,
what do you think would be most different?
I know that's a hard prediction to make,
but what do you see as transforming most
in the areas you work in?
The big textbook that we have in our field
is called Principles of Neuroscience,
and it's about 900 pages.
And it's not actually principles, it's just a data dump of all this crazy stuff we know.
And in 100 years, I expect it'll be like 90 pages. We'll have things where we put big
theoretical frameworks together and we say, ah, okay, look, all this other stuff, these
are just expressions of this basic principle that we have now figured out.
Do you pay much attention to behavioral economics?
Yes, I do.
What do you think of it?
Oh, it's great.
And that's probably the direction that a lot of fields will go: how do humans actually behave?
One of the big things that I find most interesting about behavioral economics comes back to this
issue about the team of rivals.
When people measure in the brain how we actually make decisions about whatever,
there are totally separable networks going on.
Some networks care about the valuation of something,
the price point.
You have totally other networks
that care about the anticipated emotional experience
about something.
You have other networks that care about the social context.
Like what do my friends think about this?
You have mechanisms that care about short-term gratification.
You have other mechanisms that are thinking
about the long-term, what kind of person do I wanna be?
All these things are battling it out under the hood.
It's like the Three Stooges sticking each other in the eye
and wrestling each other's arms and stuff.
But what's fascinating is when you're standing
in the grocery store aisle trying to decide
which flavor of ice cream you're going to buy, you don't know about these raging battles happening
under the hood.
You just stand there for a while and then you say, okay, I'll grab this one over here.
There was a point in time among economists that there was a lot of optimism that we could
really nail macroeconomics, inflation and interest rates and whatnot.
And we could really understand how
the system worked. And I think there's been a real step back from that. The view now is,
look, it's an enormous complex system. And we've really, I guess, given up in the short
run. Are you at all worried that's where we're going with the brain?
Oh, gosh, no. And the reason is because we've got all these billions of brains running around.
What that tells us is it has to be pretty simple
in principle.
You got 19,000 genes, that's all you've got.
Something about it has to be as simple as falling off a log
for it to work out very well so often, billions of times.
They say as you get older,
it's important to keep challenging your brain by learning
new things like a foreign language.
I can't say I found learning German to be all that much fun, and I definitely have not
turned out to be very good at it.
So I've been looking for a new brain challenge, and I have to say, I find echolocation very
intriguing.
How cool would it be to be able to see via sound?
I suspect, though, that my aptitude for echolocation will be on par with my aptitude for German.
So if you see me covered in bruises, you'll know why.
If you want to learn more about David Eagleman's ideas, I really enjoyed a couple of his many books, like Livewired, which talks about his brain research, and Sum: Forty Tales from the Afterlives, his book of speculative fiction.
Hey there, it's Stephen Dubner again.
I hope you enjoyed this special episode of People I Mostly Admire.
I loved it and I would suggest you go right now to your podcast app and follow the show,
People I Mostly Admire.
We will be back very soon with more Freakonomics Radio.
Until then, take care of yourself,
and if you can, someone else too.
Freakonomics Radio and People I Mostly Admire
are produced by Stitcher and Renbud Radio.
This episode was produced by Morgan Levey
with help from Lyric Bowditch and Daniel Moritz-Rabson.
It was mixed by Jasmin Klinger.
Our staff also includes Alina Kulman, Augusta Chapman, Dalvin Aboagye, Eleanor Osborne,
Ellen Frankman, Elsa Hernandez, Gabriel Roth, Greg Rippin, Jason Gambrell, Jeremy Johnston,
John Schnaars, Neil Carruth, Rebecca Lee Douglas, Sarah Lilley, Theo Jacobs, and Zack Lapinski.
Our composer is Luis Guerra. As always, thank you for listening.
David, you got your QuickTime going? I do now.
The Freakonomics Radio Network,
the hidden side of everything.
Stitcher.