Making Sense with Sam Harris - #322 — Predicting Reality
Episode Date: June 13, 2023
Sam Harris speaks with Andy Clark about the predictive brain, embodied cognition, and the extended mind. They discuss the structure of perception, novelty, precision, pain, psychedelics, emotion, ways to hack our predictions, hypnosis, meditation, artificial intelligence, consciousness, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed
to add to your favorite podcatcher,
along with other subscriber-only content.
We don't run ads on the podcast,
and therefore it's made possible entirely
through the support of our subscribers.
So if you enjoy what we're doing here,
please consider becoming one.
Well, Trump has finally been indicted for willfully mishandling classified documents.
The details are fairly amazing.
Once again, we see the evidence that the man has never been playing 4D chess.
He just so recklessly and pointlessly violates norms and compromises the integrity of everyone around him. And he's been so immunized from political consequences by having bent the
Republican Party into a personality cult that it's no longer surprising that he expects every
bad situation to turn to his advantage. And perhaps this one will, too. We'll just have
to wait and see. I would definitely be happier if he were being prosecuted for something related
to January 6th. That is, for something where there really is no comparison to make to any
other political figure, alleging a double standard. It seems clear that such comparisons in this case
are specious, because while they mishandled documents, Clinton and Biden and Pence did not behave the way Trump has behaved here.
But the political optics are very easily distorted, and are actively being distorted now.
Anyway, I'm going to keep my powder dry on Trump. I was hoping never to think about the man again, but it seems it will be unavoidable as the 2024 presidential campaign gets rolling. But I will pick my moments carefully
because the man has been an almost miraculous opportunity cost for our entire species.
I mean, more time has been wasted on Trump than on any other human being
in the last century. I mean, this is not Hitler or Stalin or Einstein, right? This is a person
so totally without consequence or substance. This is a person whose ideas and life example and even his bad intentions are so
measly. It really is a perverse miracle that he has taken up this much of everyone's time.
It's like we just spent the better part of a decade obsessing about and watching our society tear itself apart over Vanilla Ice or Carrot Top or Pee-wee Herman. And I don't mean to denigrate those guys especially, but I'm sure each of them
would be astounded if they bent the arc of human history in this way on the basis of their cultural products. How did we get here? How is this
the person who has taken up all of our bandwidth? It really has been an astonishing theft of our
collective attention. Something seems to have gone very wrong with our culture. What we have in place of sober thought is just a ripping sound that started
somewhere around the OJ trial. At least that's when I first heard it. And with the birth of the
internet and social media, it has grown deafening. We seem to have collectively produced an approach
to politics and journalism and activism and citizenship, a whole life philosophy that really could be summed
up in Johnnie Cochran's immortal lie, if the glove don't fit, you must acquit, right? I mean,
that's the level. That's the empty slogan that led millions of people to celebrate the release
of a man who everyone knew was a murderer.
That's the level of cynicism and moral confusion and grievance entrepreneurship that seems to have
spread everywhere now, right, left, and center. We now have a culture that simply cannot produce
a coherent vision of how to survive in this century, much less thrive in it,
because we've lost the ability to impartially talk about facts. And most of the people who
are lucky enough not to have to really worry about this, at least not yet, those who are doing well
enough to avert their eyes and just focus on their own lives, these people are busy watching ASMR
videos and taking ice baths.
It seems pretty clear that the mainstream media can't figure out how to solve this,
but the independent media can't either.
Podcasts and newsletters are becoming like multi-level marketing for conspiracists.
I've called this a new religion of contrarianism,
but calling it a religion is too grand.
It's a cargo cult that is dazzled by each new meme that washes up on Twitter.
Epstein didn't kill himself.
George Soros is ruining everything.
UFOs have finally landed.
Big tech censorship is the most important problem on Earth.
Behind every one of these things, you get a glimpse of how the story
ends, with another wave of lunatics storming the U.S. Capitol, only to take selfies and smear
shit on the walls. I think if Jesus came back to Earth tomorrow to raise the dead,
half of our society would expect him to say something about mRNA vaccines or Jewish control of the media.
Can someone figure out how to reboot this hard drive?
Anyway, today's podcast has nothing to do with any of these issues.
Today I'm speaking with Andy Clark.
Andy is a professor of cognitive philosophy at the University of Sussex,
and he's the author of several books, most recently The Experience Machine: How Our Minds Predict and Shape Reality.
And we talk about the predictive brain, as well as embodied cognition, and what he calls the extended mind. We discuss the structure of perception, novelty, precision, pain, psychedelics, emotion, hacking our predictions, hypnosis,
meditation, artificial intelligence, consciousness, and other topics.
And now I bring you Andy Clark.
I am here with Andy Clark. Andy, thanks for joining me.
It's a great pleasure to be here. Thanks for having me.
So, you have written a fascinating book titled The Experience Machine: How Our Minds Predict and Shape Reality. And long before that, you were, I believe,
the co-author with David Chalmers of the extended mind hypothesis,
which rattled some minds, extended or otherwise, in philosophy back in the day.
So I want to talk about all this.
I guess let's start with your book, which mostly focuses on the predictive brain hypothesis,
which is a topic that has come up in at least one
recent podcast. But let's see if we can explain this fairly counterintuitive thesis. But actually,
before we do, maybe can you just summarize your intellectual background? I just gave two
landmarks on it, but what have you tended to focus on,
and how do you describe your interest in philosophy and science at the moment?
Okay, yeah. So, you know, I've been working in cognitive science and kind of philosophy of mind for a long time now. Originally, I guess I was most interested in questions about the role of
the body in the construction of our mental life.
And I'm still very interested in that.
I soon became interested in connectionism and robotics,
because that all seemed to go together, you know,
connectionism, that old word for artificial neural networks.
And at some point during that sort of journey,
the extended mind story came on the scene,
which I saw really as just a kind of footnote to a lot of work that
was going on in embodied cognition anyway. It was just a kind of observation that embodied agents
can lean on their tools and technologies in such a strong way as to make them worth counting as
single systems at times. And really, I spent a long, long time thinking about all that stuff,
but people kept asking me, so what is it that brains do in all of this? And although I'd followed the neuroscience,
I'd never bothered to sort of really look for a systematic account of what the brain's role
in these complicated brain-body-world nexuses was. And then when predictive processing came along,
something I'd kind of been interested in actually since the mid-90s when I was looking at just a fragment of that work, that just seemed
to be a very, very good place to start to weave it all together because it turns out,
or at least this is what I believe, that predictive brains are the perfect internal platform for
embodied extended minds.
So it was nice to get all of those things coming together.
But that's kind of how it went for me, sort of interested in empirically informed philosophy
of mind, running that through artificial neural networks, embodied cognition, robotics,
extended mind, and here we are today, predictive processing.
Nice. Well, a lot of that has relevance for recent developments, cultural developments with
respect to artificial intelligence. I think since you published your book, AI has just exploded
into relevance for almost everyone. So I think we'll land there and just get your take on the
implications of these increasingly powerful tools. But before we do, let's talk about the brain and the mind
and this notion that much of what the brain is doing, perceptually, as a matter of motor control and emotional regulation, and in cognition generally, is a matter of predicting reality on some level and then reducing prediction error.
Let's just take it from the ground up, however you want to start. What is this predictive brain
hypothesis? Yeah, I mean, I think the best way into it is on the perception side. It's going
to be important very rapidly that it's not just a story about
perception, but somehow that seems to me to be the easiest way to get the general picture.
So if I was to, for example, show you a hollow face mask that was lit from behind, so you're
viewing the concave side of the mask, it will actually look to you as if the nose is facing
outwards. That's called the hollow mask illusion. It's pretty popular. You can see it on the web.
What seems to be going on there is that our brain has a very strong history with faces,
and it's come to predict unconsciously, very strongly, that noses are going to stick out. So in that particular case,
you've got perfectly good sensory information coming in specifying concavity, but your visual experience is as of an ordinary sort of convex outward facing nosed face. And that's what
constructs your experience. And I think that's just a sort of a very small version of what brains are doing all the time.
So, you know, that's a case where the stimulus is a bit weird.
But even in the ordinary case of me looking around the room and seeing a Coke can and
a coffee mug in front of me, that is being constructed by my brain having very good predictions about what those sensory
stimulations are likely to be like, and using those to do an awful lot of work. It's cleaning
up the signal, it's discarding some bits, it's amplifying others. And it's that process of
kind of cleaning up and making sense that downward flow in predictions, predictions moving from deep in the brain
towards the sensory peripheries, seem to be doing all the time. That's the general idea.
These predictions are issued by a generative model, just like in the AI systems that you
were just talking about, ChatGPT and the rest. Obviously, the content of this generative model is rather different from the content of theirs. And that's something we might come back to in the end. But the basic picture is that we've over time built up a
model of how the sensory stimulations ought to be if we're where we think we are doing what we think
we're doing. And the brain uses those predictions to structure the inputs, and then we're driven by the errors in that attempt at
structuring. So sensory information gets swapped for prediction error at rather an early stage of
processing, so that everything that you see, hear, touch, and feel is kind of framed by these
attempts at prediction.
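To make that loop concrete, here is a minimal toy sketch in Python of the scheme being described: a latent guess generates a top-down prediction of the sensory input, only the prediction error is passed on, and the guess is nudged until most of the error has been explained away. It is an illustration under simple linear assumptions, not code from Clark's book or from any actual neuroscience model; all names and numbers are invented for the example.

```python
import numpy as np

# Toy predictive-coding loop (illustrative only; not anyone's actual model).
# A latent estimate `mu` generates a prediction of the sensory input via weights W.
# Only the prediction error (input minus prediction) flows "upward", and it is
# used to revise `mu`; the percept is read off the settled prediction, not the raw input.

rng = np.random.default_rng(0)

n_latent, n_sensory = 4, 16
W = rng.normal(scale=0.3, size=(n_sensory, n_latent))   # generative weights (assumed fixed here)

true_cause = rng.normal(size=n_latent)                                    # hidden state of the world
sensory_input = W @ true_cause + rng.normal(scale=0.05, size=n_sensory)   # noisy sensory signal

mu = np.zeros(n_latent)   # current best guess about the cause
lr = 0.1                  # step size for revising the guess

for step in range(200):
    prediction = W @ mu                  # top-down prediction of the input
    error = sensory_input - prediction   # the only signal passed on: prediction error
    mu += lr * (W.T @ error)             # revise the guess to reduce the error

print("residual error:", float(np.mean(error**2)))
print("recovered cause:", np.round(mu, 2))
print("true cause:     ", np.round(true_cause, 2))
```

On this toy picture, the "percept" would correspond to the settled prediction W @ mu rather than to sensory_input itself, which echoes the point that what we see, hear, touch, and feel is framed by these attempts at prediction.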
So what is happening in the case where we perceive something that is truly novel, right? An object that you have never seen before, and you have never seen anything quite like it, is suddenly placed in front of you next to the Coke can. What is novelty on this theory?
Yeah, I mean, I think the right thing to say there is it's going to be very,
very counterintuitive at first, which is that I don't think that we could even perceive
absolute genuine novelty. But the good thing is that we're never presented with absolute genuine
novelty. Even if an object came from Mars or somewhere like that, and it landed beside the
Coke can, there's enough common patterns there
in the sort of low-level sensory information for me to construct some kind of grip on its sort of
rough shape and its color. At the same time, if you put me in a brand new kind of environment,
the closest I can think of to this is when I first went diving. And I remember in that experience finding it very, very hard to
kind of see anything. And yet over time, you're able to see an awful lot better.
And I think that what's going on there is that we have to train what in that case is a very,
very bad prediction machine, in particular with perception action loops. So I think if you're
going to get to grips with something that is pretty novel, then you're going to have to slowly deal with it over time. And you're going to have
to deal with it in a way that has perception action loops right at the heart. I think it would
be quite difficult to get on top of these things with sort of just passive information, although
of course, some kinds of system can do that if they have the right training. So what am I saying here about genuine novelty?
I think the cases you're thinking of just aren't genuinely novel.
If you blindfold me, take me out somewhere, I don't know.
I don't know what country I'm in.
I open my eyes.
I've still got an awful lot of good predictions that get very, very rapidly updated by a little
bit of prediction error that might say something like, oh, I don't know, this is a very outdoor countryside-y environment you're in, or this is
a very industrial urban landscape that you're in. And so those early prediction errors, whatever I
started predicting, the early ones can then sort of frame more and more refined predictions. So a quick sort of very rapid cycle of predictions and error exchanges settles on the right thing.
It's also provable that you can start a prediction machine with random assignments.
And if you just give it time, as it were, give it enough training, then it will learn
a model that can make the right sorts of predictions.
So you basically got two choices.
You either retrieve a better prediction now because you've got one, or else you do a lot
of slow and tortuous learning.
What is the actual claim here with respect to the error term?
Yeah, I mean, so I don't think that we experience prediction errors.
It's slightly contentious. Some people think that perhaps we do in some way.
I think that what we experience is the result of getting rid of prediction errors.
So your brain has to make a prediction. There will be prediction errors, but they're not
experienced. They're the things that let the brain recruit a better prediction.
So yeah, if I open my eyes and I think I'm in my bedroom and I'm actually somewhere else
entirely, then I don't experience the errors.
Although I might experience a moment or two of confusion, a moment or two of uncertainty.
So I think, in that way, it's not that the error is entirely absent from phenomenology, but it's not really structuring my experience.
All structured experience is the best current prediction.
I think that's the right thing to say.
So then what's the relationship of attention and precision to this picture?
So precision is a huge player in this whole economy. It's kind of implementing attention; the idea is that precision weightings implement attention.
But it's basically just the thought that if you're making predictions and you have sensory
information coming in, then there's a question: how much do I trust the prediction? How much do I trust the sensory information? And that's what that weighting variable is doing.
It's able to adjust the amount of processing that is driven by the sensory input versus the
predictions. So if I'm fairly confident of my predictions, as my brain was in the case of the
hollow mask illusion, for example, it's very confident of those predictions, and then I end
up with actually a false visual experience as a result. So attention in this case, increased
attention to the face here, is a matter of giving a higher precision weight to the sensory input so as to overcome
the illusion? Yeah, attention here can work either way. So, you know, you can be upping the value of the sensory information, or you can be upping the value of the prediction, according to which of
the two your brain is unconsciously estimating to be the most reliable. And, you know,
often it will also be a mixture of the two at different levels of processing, different areas
of the brain. So the other thing to remember about precision here is it's being estimated
for every neural population in every area all the time. So it's not really just one balancing act,
it's these, you know, thousands of little balancing acts all the time. But yeah,
that's the thought, is that attention just is the process by which precision gets assigned.
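One standard way to picture what a precision weighting is doing is to treat the prediction and the sensory evidence as two noisy estimates of the same quantity and combine them in proportion to their reliability, that is, their inverse variance. The short Python sketch below illustrates just that trade-off under Gaussian assumptions; the scenario, function name, and numbers are invented for illustration, not anything from the episode or the book.

```python
# Precision-weighted fusion of a prior prediction and sensory evidence,
# treating precision as inverse variance (a standard Gaussian identity).
# Illustrative numbers only.

def fuse(prediction, prediction_var, sensory, sensory_var):
    """Combine two noisy estimates of the same quantity.

    Each estimate is weighted by its precision (1 / variance); whichever
    source is estimated to be more reliable dominates the result.
    """
    precision_pred = 1.0 / prediction_var
    precision_sens = 1.0 / sensory_var
    posterior_precision = precision_pred + precision_sens
    posterior_mean = (precision_pred * prediction + precision_sens * sensory) / posterior_precision
    return posterior_mean, 1.0 / posterior_precision

# Hollow-mask-style case: the system is very confident in its prediction
# ("noses stick out"), so the prediction swamps the conflicting sensory evidence.
percept, _ = fuse(prediction=+1.0, prediction_var=0.05, sensory=-1.0, sensory_var=1.0)
print(f"confident prediction wins: {percept:+.2f}")   # lands near +1 (the illusory convex face)

# Attending to the input can be modeled as raising its precision (lowering its variance),
# which pulls the percept back toward what the senses are reporting.
percept, _ = fuse(prediction=+1.0, prediction_var=0.05, sensory=-1.0, sensory_var=0.01)
print(f"high-precision input wins:  {percept:+.2f}")  # shifts toward -1 (the actual concave mask)
```

In the full story there is no single scalar being balanced; as Clark says, these weightings are being estimated for many neural populations at many levels at once, but each of those little balancing acts has this shape.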
Okay, so I want to do our best to make this intuitively graspable for people just in their
direct experience. I'm now looking at my computer. It's a very static scene. I've got a Word doc open, and I've got my desktop, and nothing's moving, nothing's changing.
And I've been looking at it for some minutes, so my sensory experience is fairly stable.
Obviously, I've been executing lots of eye movements across this stable scene, so it is changing, but it's not the ordinary circumstance of a rapidly changing world that I'm engaging with.
So I'm looking at the static scene, and I find that I can pay attention.
I can weight, speaking just visually now, the significance of various parts of my visual field
over others. And I can do that whether I'm actually redirecting my eyes and putting, you know,
foveal focus on specific parts of it, or I can do it just purely as a matter of attention,
which is to say that I can be focusing, I can have my foveal focus on just one word in my document, but I can also be attending to the
periphery of my visual field, you know, as a matter of just directing my, kind of the beam of my
conscious perception. And in the midst of all of this, it's still possible for something
new to appear, right? So that was not anticipated. So I can see like, you know, scintillas of light
that are, you know, kind of happening more at the level of, you know, my eye, you know,
it's like a hardware error as opposed to something that's a genuine perception from
the environment, you know, or it can be like a floater, the liquid of my eye will come across my visual field.
What is happening?
Can you just map this on to the notion of error and the notion of prediction?
When I'm moving from everything that's static that I can continually visit and revisit
and it's unchanging, to the changing term of, let's say, something floating across my visual field that wasn't there a second ago, how is prediction and error accounting for this experience?

Yeah, I mean, there are lots of different things going on there, I think. One thing to say is that there are some kinds of stimulus that get assigned very high precision when they're detected at all. So fast-moving things from the peripheries tend to be assigned high precision as soon as they turn up. That's an evolutionarily useful thing. You notice something if it's kind of
moving fast towards you. Can you just define that phrase high precision?
Oh, sorry. This is just highly weighted. So in this case, it will be the sensory information.
So that sensory information would then be highly enough weighted to probably break through from
whatever else it is you're doing so that you see that thing move. You don't always, you know,
if people set up the experiments in certain ways so that
you're very busy trying to solve some other problem somewhere else on the screen, you might miss it. But fast-moving things tend to attract precision, and that will tend to make them
noticed in that way. The other thing that I think is worth saying about what attention does is it kind of reverses
something that happens otherwise fairly automatically in predictive processing,
which is that well-predicted things tend to be dampened. And so, you know, as you get the same
information on and on, it sort of dampens. And that's probably what's going on in Troxler fading
and things like that, where a stimulus begins to kind of fade from view if you don't move your eyes around really enough to give you a little bit of change there.
So what attention seems to do is it reverses that dampening effect so that you can keep something alive by attending hard to it.
And that's some work that Kok, K-O-K, and some others have done.
So I don't know, I feel like there's something else that you're after here about the way that
precision weighting works. I mean, it's basically just sort of applying a sort of estimation of the
inverse variance of the, well, actually the prediction error is the thing that is
typically targeted there.
So it's how much am I going to trust prediction errors of this kind as they're emerging right
now?
And that's just something that the generative model has to learn to estimate in the same
way that it's trying to estimate what's out there.
So one of the things I think is interesting about predictive processing architectures is that they're automatically
metacognitive architectures as well. There's these two things going on, guess the world
and guess how good your guessing is all the time.
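That double duty, guessing the world while guessing how good the guessing is, can be illustrated with a small self-contained sketch: a system tracks a running estimate of a signal and, at the same time, a running estimate of how large its own prediction errors have recently been, and it uses that second estimate to decide how much weight each new error gets. This is a toy illustration of the idea with invented numbers, not code from any actual predictive-processing model.

```python
import random

# Track a drifting signal while also estimating how noisy our own prediction
# errors are. The estimated reliability scales how strongly each new error
# moves the estimate: trustworthy errors move it a lot, noisy errors only a
# little. Purely illustrative numbers throughout.

random.seed(1)

estimate = 0.0     # first-order guess: "guess the world"
error_var = 1.0    # second-order guess: how big do my prediction errors tend to be?
var_lr = 0.05      # how quickly the error-variance estimate adapts

for t in range(400):
    truth = 0.0 if t < 200 else 5.0
    noise_sd = 0.2 if t < 200 else 2.0          # the senses get less reliable later on
    observation = truth + random.gauss(0.0, noise_sd)

    error = observation - estimate
    error_var = (1 - var_lr) * error_var + var_lr * error**2   # update the reliability guess
    gain = 1.0 / (1.0 + error_var)              # precision-weighted gain on the error
    estimate += gain * error                    # update the world guess

    if t in (199, 399):
        print(f"t={t}: estimate={estimate:.2f}, estimated error variance={error_var:.2f}")
```

The same stream of prediction errors is doing two jobs here: driving the first-order estimate of what is out there and, through its recent magnitude, updating the second-order estimate of how much those errors deserve to be trusted.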
Mm-hmm. And how does this account for other aspects of experience, like emotion and motor behavior?
And maybe we want to take each of these in turn. I'm thinking especially of things like pain, placebo and nocebo effects, and functional illness being, in many cases,
driven by one's expectations. You have a fairly arresting example in the book of just how far this can go. We can take those in any order you want, but I'm thinking about pain and emotion and
motor movement. Yeah, I think, where to start there? I think pain. Let's start with
pain and then move along to emotion and movement. Yeah, I mean, you could think of pain in the same
ballpark as emotion, but let's just start with simple pain. So, you know, the idea there is
that we're predicting not just the external world, but the signals from our own
body all the time. In fact, you might think that predicting the signals from your own body is
evolutionarily the whole important thing about this kind of structure, is that you're predicting
how your body ought to be right now. And that helps to kind of, in a way that we'll describe
in a minute, move your body
around and adjust internal parameters and start sweating and things like that, or go and get
something to drink or something to eat in ways that keep those variables within the bounds of
viability. So we use predictions to make sure that we don't have to stray right outside the
bounds of viability
before we know something's going wrong. That seems, you know, basic homeostasis and allostasis.
That's, I think, the fundamental reason why we have predictive brains is to enable those things
to happen. So just as a concrete example, so thirst is not necessarily a reporter of a true departure from homeostasis.
It's more of a prediction of a coming departure, and therefore you deal with the thirst before, in fact, it's physiologically real.
Yes, that's exactly right.
Lisa Feldman Barrett describes this very nicely in, I think, her book How Emotions Are Made,
where she says that if you feel thirsty and you take a drink of water, you immediately feel as
if your thirst is quenched, but actually the water won't do you any good for about 20 minutes,
anything like that. But that's fine because the feeling of having a quenched thirst reflects a
prediction just as much as the thirst did in the
first place. So you've got time to spare, if you see what I mean. So it's fine to think that it's
quenched now, because as long as it's quenched in 20 minutes, you're in good shape. So the thought
there is that, yeah, all of our bodily feelings are constructed around predictions, including
pain. And for that reason, if you get very strong
information suggesting that something very painful is happening to your body,
then even if nothing is actually happening to your body, you feel intense pain.
I think the example you might be thinking of in the book is a construction worker that
fell from a height onto a nail, and it appeared to pierce right through their foot.
They were in intense agony. They were taken to hospital and given fentanyl. And then when they
slowly removed the nail from the foot, well, it turned out it had just passed harmlessly between
the toes. But of course, the worker couldn't see that. They were in a big work boot. What they saw
was strong visual evidence of a really, really nasty injury.
And I have absolutely no doubt that the pain was perfectly real and intense, intense enough
for the fentanyl.
And that's sort of, you know, you might think that's a very dramatic case.
But the moral of the story and the moral of the discussion in the book anyway, is that
actually all of our pains and
all of our feelings are constructed in part from prediction and in part from sensory evidence.
And that's as true for ordinary pain as it is for that particular sort of a rather
dramatic illusion of pain. And then you've got all the complicated functional medical syndrome
conditions in between where in some cases there's no sufficient physical cause.
But in many cases, there's a physical cause, but it's just not a sufficient explanation
of the intensity or persistence of the pain or other disability. And there it just seems like there's a
little bit of overweighted prediction machinery in play. And there's a lot of interest in new
therapies that are trying to target the predictions rather than anything else. So I think pain is,
you know, we all know this in a way. It's sort of, if the dentist says expect a tickle,
they're saying that for a reason, they're trying to frame those sensations that you're going to get in a way that really will
dampen the experience of pain just a little bit.
And there are controlled experiments showing that expectations of intense pain will up
the pain rating and expectations of less intense pain will down the pain rating,
even when what's being delivered is an intermediate stimulus all those times.
So I think pain's a good case, but it's just one that we all happen to know about. But all of our medical symptoms, all of our bodily experiences are built up in this way.
I just want to revisit the basic thesis again, because I know you clarified this
at the outset, but I just want to make sure I have the true shape of it. So is the claim
that we mostly consciously experience our predictions and are continually revising them in concert with attending to sensory inputs?
Or is it that all we experience is our predictions
and that the sensory input is really always unconsciously modifying our predictions?
And that is, as Anil Seth called it, a controlled hallucination,
but the control component is always happening in the dark.
Yes, that's the way I see it.
Of course, you know, it's still early days for this sort of family of theories, and you could construct them in different ways so that you have some sort of partial experience of the flow of the
prediction errors. But that's not true to my visual experience normally, for example. If I just
turn my head around and see the room that I'm in, there must be flurry upon flurry of prediction
error being created and then being resolved because I know about the room, I know about the
kind of objects in it, I have no
trouble at all sort of upping the attention on that diary on my desk and seeing the details of
the sunflower that seems to be on the front cover. I don't experience the errors at all. I just
experience the most successful predictive model that has accommodated as much of the error as can be accommodated right now.
So what's happening under conditions where someone has taken a powerful psychedelic,
say, like LSD or psilocybin? I know that you discuss this a little bit in the book,
and there's Robin Carhart-Harris' thesis around this. How do you think about this within the schema of prediction and
error terms? Yeah, I mean, basically, in the book, I just adopted the Carhart-Harris model. I think
it's the best one that we've currently got. But I think the first thing to say about the actions
of psychedelics is it's very dose-dependent, as you will know if you've
taken any of them. It is very dose-dependent. And the varying effects at different doses
actually fall out quite nicely from the idea that the brain is a multi-level prediction
machine where the lower levels are specializing in stuff a lot closer to the sensory information
itself.
So, you know, obvious things, color, shape, texture, those sorts of things.
And then the higher levels are dealing in much more abstract things like, I don't know,
what kind of thing is this?
What can I do with it?
Those, at least, are among the kinds of predictions that seem to be targeted by the psychedelics. At the low levels,
you get sort of visual disturbances, you might see creeping forms, different textures,
strange colors. But then at the higher doses, you get the really interesting effects like
ego dissolution and oneness with the universe and the beneficial effects on people with chronic depression,
for example. All of those things seem to require higher doses, not repeated doses necessarily.
One dose can often do it. And that falls into place according to Carhart-Harris. I just report
the work here because the actual sort of shape of the psychedelic molecules causes them to bind to receptors
that are higher up in the processing stream, meaning that they're going to have more effect
at high doses on the stuff that is more abstract, if you like.
So think about things like, you know, what's your relationship to the world?
What's your relationship to yourself?
You know, how do you see yourself in the future? So I think it does make a certain kind of sense,
the idea that we've got this sort of cascade and that if you can sort of, I think the phrase that
he uses is shaking the snow globe. So the idea there is that you can sort of disrupt the ordinary
entrenched predictions
at those high levels, and that can be really, really liberating because you get to experience
the world in a new way, or, you know, experience your being in the world in a new
way, which I think can be incredibly powerful for people with sort of, you know, end of
life anxiety or depression and so on.
That's what the research seems to suggest.
But in that case, where it seems like one is experiencing a great onrushing of novelty,
what is one actually experiencing with respect to these different components of the theory,
the raw sensory data versus one's prediction about what is
happening in the world and the accuracy, the prediction about the validity of one's own
prediction?
Yeah, I think that's where the snow globe image is quite useful, I think, because a
good way to think about it is that what's going on when you get that sort of onrush of novelty,
as you nicely put it there, is really the relaxation of entrenched predictions. So it's
kind of getting rid, or temporarily at least, of the predictions that were gathering the sensory
input into the accepted buckets. And since it's not being gathered into the accepted buckets, then new patterns
can be detected, new shapes can form. It's not that they form without the benefit of
predictions. It's just that the predictions that can now be recruited to deal with that
information are not the ones that were being recruited before. And I think that's the best
way to think about that and why the shaking up the snow globe thing is quite a useful little picture.
Now, do you have personal experience with any of these drugs?
Yeah, some of them.
Yeah, I've had a fair bit of experience with MDMA, which is a borderline, not a classic psychedelic.
I took peyote once a long time ago.
That's in the classic psychedelic mode.
And of course, magic mushrooms.
Magic mushrooms grew all around the campus when I was an undergraduate.
So yeah, we have plenty of those.
So yeah, some of them at least.
Well, that's a good goad to philosophy.
Yeah, that's true.
Yeah, well, MDMA, as you point out, is not a classic psychedelic, but it leads nicely to
a discussion of emotion and emotional pain and its antithesis. How do you think about
emotion in this context? Yeah, so I think that emotion has a very strong component of bodily
prediction in it. I mean, it's not just bodily prediction, but there's an old picture of emotion
that goes back to William James. I'm sure that you know it and many of your listeners know it.
It's this idea that what an emotion is, is a sort of perception of the bodily changes
that are associated with something, or the ones that are going on right now, I should say. So, you know, the famous example is: you see a bear and you feel fear and you run from the bear, but the feeling of fear is actually your perception of the bodily states of kind of arousal and preparation for flight, and whatever else, galvanic skin response, that happens.
That's just motivated there by the idea that if you took all that away, you might judge
that it would be a good idea to run away, but you wouldn't really be feeling anything.
And I think that that story has a lot going for it, but it's a little bit blunt.
I mean, so my colleague at Sussex, Hugo Critchley, has done a lot of work on this.
And what they find is that from the James model, you might kind of expect there to be
a one-to-one mapping between every emotion we can feel and the perception of some set
of bodily changes.
But there doesn't seem to be that.
You know, it's as if the bodily changes are a bit blunt.
You know, is there a characteristic signature for, I don't know, the anxiety that I was
feeling before this podcast versus the anxiety that maybe I'm going to feel if I'm about
to jump off a high diving board or, you know, it's just a bit blunt to reconstruct all of
that.
But if what you're doing is chucking that
information into one big pot, along with what you know about the context, in order to try to predict
what's going to be happening in your body and the world over the next, let's say, few minutes,
then you get something that is much more fine-grained. So the feeling of a fast-beating
heart when
you're working out at the gym versus when you're just sitting down and you're having a panic attack
or you're worried that you're having a heart attack or something like that. You know, these are
very different feelings and yet the bodily stuff you're picking up on might be very, very similar.
Yeah, well, people will be familiar with the concept of reframing or even just
comparing two similar states of arousal and noticing that, in the one case, you're scoring it as a highly negative experience, and in another, it can be quite positive.
I mean, the example I always use is the stress one feels in the gym,
the most intense part of one's workout,
just viewed purely as a matter of physiological stimulus,
it would be an extraordinarily negative and even terrifying state of the body
if you didn't know the reasons for it.
If you woke up at three in the morning and you felt that way, you'd call an ambulance. But because
you know what's going on and you know what precipitated it, it's actually a highly positive
experience for most people, even if there's an unpleasantness to it. So how do you think about the freedom this gives us to intervene in our standard predictive weightings that may be making us, frankly, miserable and improve our lives on the basis of just grabbing the levers of this machinery?
Actually, just before I pick up on that, something you said there that I think is interesting to follow up a bit is whether we should think about the feeling as the same, but the judgment
of its importance as being different, or whether the actual feeling when you frame it as I'm
working out at the gym versus when you frame it as I've just woken up in bed and I don't
know what's going on.
I think that the predictive processing story says that the feeling itself is different.
It's not that you've got the same feeling both times,
and context just allows you to behave differently in response.
It's reaching further than that somehow.
It's really changing the feeling.
So I think it's important to bear that in mind.
I think both could be true here,
because I would certainly agree that subsequent feelings get layered onto it based on the interpretation.
So it's obviously a moving target.
I mean, you're going to get a cortisol dump based on the three in the morning experience of pressure and elevated heart rate, which you wouldn't get in the gym because you're not reacting
to this thing.
So it is definitely evolving.
Yeah.
No, you're right.
Naturally, it's so important to always think about everything over time.
And it is so tempting to sometimes go back and just think about snapshots.
But I really think if we're looking at cognition, we should always be thinking over time.
So yeah, thanks for that.
That is really important.
You did ask also there about ways to intervene.
You know, what could we do to leverage this wiggle room that we've got in our favor?
And I think that, you know, once we realize that the wiggle room is built around these
edifices of prediction, then we can begin to see things to do. The thing
that is a sort of break on that is that so much of that prediction machinery is unconscious,
and we can't control it just by having a different thought. So when I look at the hollow mask,
for example, I might very well be able to think to myself, look, I really, really, really know
that that's a hollow side that's facing me. It's just not going to do any good. I can't reach down
and alter those. But maybe I could with enough practice or looking at things in different
lights. It kind of depends. Things vary according to how a given illusion is being generated.
But in the case of things that we might do in our daily life, the obvious
cases are things like reframing an experience that might otherwise be negative and that negativity
would set off bad cycles. So if I'm about to do a talk, I sometimes feel a little tingle in my
fingers. I guess that's adrenaline or something like that. Reframing that tingle, not as anxiety, but as chemical readiness to deliver a good performance
is actually a trick that I think works. It really does seem to do something.
Likewise, reframing pain that we talked about earlier, all of those self-affirmation practices
that we read about, they actually have some pretty good evidence that they can make a
difference in some cases.
So there's some good studies showing that self-affirmation about abilities to do spatial
reasoning tasks and math tasks can abolish gender differences in UK school kids in that
case.
And there's a similar set of results with race differences in US school kids.
So nothing is a panacea and nothing works for everything. You've got to have the basic skill
set, otherwise you can't unleash it. But if you do have the basic skill set, you can either get
in your own way or get out of your own way. And framing and self-affirmation really seems to help with getting out of our own way.

What about hypnosis?

Yeah, that's another wonderful way of getting out of our own way.
Actually, another of my colleagues, the wonderfully named Zoltan Dienes,
works on hypnosis and cognitive science. And yeah, I think hypnosis is a powerful and actually underexploited tool at the moment.
It's also a nice way of, you know, susceptibility to hypnosis is an interesting sort of gauge,
as Zoltan says, of what he calls phenomenological control.
So the amount of control that you can exert over the shape of your own experience by these different techniques
probably varies according to how hypnotizable you are.
Yeah, and I guess differences in hypnotizability are a measure of the plasticity of one's models,
right, or their susceptibility to conceptual influence, right? I mean, how would you,
on the basis of this thesis, how would you describe, because famously there's a very wide
range in susceptibility to hypnosis. There's the Stanford scale, which I think goes from
one to nine or zero to nine. And some people just are not hypnotizable, and some people are highly so.
How would you describe that difference in light of the model?
Yeah, I think it has to be related deeply to the amount of sort of voluntary control
you can exert over your own precision weightings, just to dip into the jargon there.
But that's the amount of control that you can exert over the weighting of top-down
predictions over sensory information. If you can exert a lot of control over that, then as long as
you want to be hypnotized, you should be able to be hypnotized successfully. And of course, if you
have that sort of control and you really don't want to be hypnotized, you won't be able to be hypnotized.
As Zoltan puts it, it's a sort of voluntary giving up of voluntary control, or something like that.
So I think control over precision weighting is actually a really, really important
skill that we humans should try and develop.
I think that meditation is another way of trying to develop that skill. If you ask
me what I think meditation is doing for people, I think it is enabling greater control over the
precision weighting apparatus. And the more control we have over that, the more control we
have over our own experience. Do you have much experience with meditation?
Well, funny enough, I only have a little because I don't seem to get on with it. And I'm really
disappointed about this. You know, I've been to a few sort of week-long courses and I've done my
best to sort of, you know, kind of sit quietly and do the right things for 20 minutes a day for a
while. Are these week-long Vipassana courses, like mindfulness?
Yes, exactly. It's a sort of live-in kind of thing.
Yeah, I mean, obviously, particularly given my theoretical views, I probably should give it a better shot.
But because every time I've tried, I just seem to be maybe just a little bit too manic and hyper, i.e. the very kind of person who would benefit most but finds it hardest to get into it.

Have you ever tried...

Yeah, go ahead.

Have you ever tried meditating while on MDMA or any other compound of interest?

No, no, I've never tried that. That might be... You think that would be worth a go?

Yeah, yeah. If MDMA is still on the menu, I would highly, highly recommend trying some mindfulness.

I have never tried that,
but I have had that experience of, you know, just sort of sitting and finding myself very,
very happy looking at a very small thing in front of me, which is, you know, it's got a little bit
of that sort of almost unwitting mindfulness about it. I think the closest I get in my current daily life is when
I go on very long walks. And there's a certain point in a long walk where you can, I think,
start to enter a state that has some of the right properties.
So again, just in an effort, however quixotic, to make this intuitive for people. When you say that you think meditation is a matter of
altering the precision weighting of one's models, can you...
I think it's more about gaining control over the precision weighting. So, you know,
altering is what you do with it once you gain control over it. But it's learning how to control
the precision weighting better so that, for example,
you can allow the sensory information to kind of try to speak for itself a bit more without being
sort of sucked into starting you off thinking about stuff that is coming from the higher,
more abstract levels. Like, I don't know, what am I going to do later today? What should I be working on now? That sort of stuff.
So I think it's gaining some control over the way that precision is distributed across the machine.
It's a very difficult thing because most of the precision weighting stuff is happening automatically and beneath the hood all the time.
So I think that's why we need these sort of long-term practices to somehow install a bit more control than we would otherwise have.
Well, let me describe my experience of mindfulness,
and you can tell me how it fits in if you can do this.
That would be interesting.
And there are kind of a few stages to this,
but let's take anxiety as a classically negative emotion that people find mindfulness can
be very helpful with. So, you know, something has precipitated anxiety, let's say a thought about,
you know, some future event like a public talk, and you feel this anxiety, and it feels intrinsically unpleasant, and the default
reaction is to not want to feel that way, to be thinking about the thing that's making you
anxious, to be thinking about the reasons why you don't like this, why am I this sort of person
who gets anxious, why can't I just be happy to be giving this talk, and you're thinking,
the thoughts are kindling the anxiety, the anxiety is being felt and kindling
further thoughts in that vein. And the way mindfulness breaks this spell is that you
remember that it's possible just to feel the anxiety, just feel the mere physiology of the
butterflies in your chest, and to feel it non-judgmentally and non-reactively. You can
even feel the intrinsic unpleasantness of it, if that's salient, but feel that without reaction.
And you can notice that consciousness is just this open space in which everything,
thoughts and sensations and changes in physiology, are just appearing all by themselves. So you just rest as that open
and non-judgmental and non-reactive awareness of all of these changes. And the moment you shift
to that openness and just mere awareness, they lose their psychological implications. So anxiety
in some sense is no longer anxiety. It's just
this changing energy state of the body that doesn't have meaning. I mean, in this moment,
it has no more meaning than a feeling of indigestion or itching on your skin. I mean,
it doesn't get read back into a psychological story of the kind of person you are. It's just fluttering and actually
benign changes in the state of energy of the body. So given that transition, how might you explain
what's happening there in terms of precision weighting and predictive models, etc.?
Yeah, that's a lovely description.
I think you must be a really, really good meditation teacher.
I like the sound of that.
So I think the thing to think there is that precision is a zero-sum game.
So, you know, if you really up the precision on one place,
then you have to down the precision elsewhere. And so if you now imagine really upping the precision...

If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense Podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking
Up app. The Making Sense Podcast is ad-free and relies entirely on listener support,
and you can subscribe now at SamHarris.org.