Making Sense with Sam Harris - #320 — Constructing Self and World
Episode Date: May 22, 2023. Sam Harris speaks with Shamil Chandaria about how the brain constructs a vision of the self and the world. They discuss the brain from first principles; Bayesian inference; hierarchical predictive processing; the construction of vision; psychedelics and neuroplasticity; beliefs and prior probabilities; the interaction between psychedelics and meditation; the risks and benefits of psychedelics; Sam's recent experience with MDMA; non-duality; love, gratitude, and bliss; the self model; the Buddhist concept of emptiness; human flourishing; effective altruism; and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming a subscriber.
Today I'm speaking with Shamil Chandaria.
Shamil is a philanthropist, an entrepreneur, a technologist, and an academic
with multidisciplinary research interests spanning computational neuroscience,
machine learning and artificial intelligence, and the philosophy and science of human well-being.
He got his PhD at the London School of Economics in mathematical modeling of economic systems,
and he later completed a master's in philosophy from University College London, where he developed an interest in the philosophy of science and the philosophical issues related to biology and
neuroscience and ethics. In 2018, Shamil helped endow the Global
Priorities Institute at Oxford University, and in 2019 he was a founder of the Center for Psychedelic
Research in the Department of Brain Sciences at Imperial College London. He's also funding
research on the neuroscience of meditation at Harvard University and the University of California
at Berkeley. And Shamil and I spoke about many of our intersecting interests,
with the main focus being on how the brain constructs a vision of the self and the world.
We discussed the brain from first principles, Bayesian inference,
the hierarchy of predictive processing in the brain,
how vision is constructed, psychedelics and neuroplasticity,
beliefs and prior probabilities,
the interaction between psychedelics and meditation, the risks and benefits of psychedelics,
my recent experience with MDMA, non-duality, love, gratitude, and bliss, the self-model,
the Buddhist concept of emptiness, human flourishing, effective altruism, and other topics. And now I bring you Shamil Chandaria.
I am here with Shamil Chandaria. Shamil, thanks for joining me.
Yeah, it's a great honor to be on.
So I forget how I discovered you. I think I saw you in conversation with Will MacAskill,
the young philosopher who I'm a big fan of and who's been on the podcast several times.
And it just seemed to me that just based on your conversation with him,
that you and I have an unusual number of topics we intersect on. And I think you,
just judging from what I've seen of you, you've arrived at these various topics by different routes than I have. So it'll be interesting to hear your story. But
briefly, I think we are both very interested in the brain and the nature of mind, both as it can be understood through
neuroscience and also through first-person methods like meditation and psychedelics.
You also have a lot of experience with artificial intelligence, which is an interest and concern of
mine, and also effective altruism and considering topics like existential risk and other long-term
challenges. There's a lot here. So perhaps you can just summarize your journey into
some or all of these areas. What are you focusing on and how have you come to focus on these things?
Yeah, so you're right. We actually, I think, share a huge amount of overlap. In fact, funnily enough, I think we first met in Puerto Rico, if you
remember that conference in 2015. So I was there and I mean, we may have had a short conversation.
I was a big fan of Waking Up, the book, in those days.
Nice.
Okay, so forgive me because I'm not aware of having met you, but it's very likely we did meet because it was not that large a group and that was an interesting conference. Yeah, I think 70 people, right?
So yeah, that was just before, well, I was already at the Future of Humanity Institute there where Nick Bostrom and others are. But yeah, so there's so
many threads to the story. But I'm surprised, because actually, I thought you must have
discovered me by seeing this talk that I gave called The Bayesian Brain and Meditation.
No, I've since seen that talk, or at least a podcast with you discussing that talk.
I now forget.
But yeah, your discussion with Will at first.
Okay, yeah.
Well, so that's obviously something we'll get into.
That's the big thing which has kind of become very central in my thinking on kind of how
does meditation work.
But let's just rewind.
Yeah, so I have a kind of a mathematical background.
My PhD was in mathematical economics, actually using techniques like stochastic optimal control, which later became the mathematics behind reinforcement learning, which is obviously central in AI.
And so I've done so many different things in my life, including, you know, finance and technology.
But I think that I joined DeepMind as a strategic advisor in 2015 and was there until 2021.
And, you know, like you,
one of my central concerns, of course, is AI safety,
but I'm also interested on a technical side
and kind of, you know, really one of the,
I mean, I have lots of interest in AI,
but one of the real interests
is to understand how the brain works.
Because I think that machine learning and AI is actually a very good way to start thinking about how the brain works.
And at the same time, I was also a research fellow at the Institute of Philosophy at London
University, and looking at this kind of intersection between neuroscience and philosophy.
And at the time, I think, you know, back in 2013, 14, you know, they asked me, since I was the kind
of mathematical guy there, you know, there's this thing called the free energy principle
coming out of Carl Friston's lab. And, you know, can you explain how this really works?
You know, you know about entropy and stuff like that. So I started really getting into it and
it was very interesting because of course it's deeply connected with information theory and
machine learning. And to some extent, I would say I now take the position, and I think many neuroscientists do, that it's the closest thing we have to a kind of general algorithm of what might be going on in the brain from a big picture perspective.
And as I kind of got into it more and more, the more I thought that, wow, this is very similar to, you know, what I'm going through in my meditation journey and kind of what
the central ideas of Buddhism and Eastern spiritual traditions are.
And, you know, because essentially, I guess we'll get into this, but what seems to come out is that really the brain is having to construct, or fabricate, or simulate a world, a phenomenal world and a phenomenal self, and the free energy principle kind of goes through how we do that. So that was very
interesting. And then interestingly, at DeepMind, I started really looking at some of these
architectures, these unsupervised learning architectures using deep neural networks.
And I started to be able to understand the free energy principle a lot better than I did before.
And I think in a much more heuristic and practical way compared to the sort of usual explanations in neuroscience,
which are notoriously difficult,
sometimes using tensor calculus and all sorts of things.
So yeah, so that's some of the background, you know,
bringing in the neuroscience and meditation. So did you ever work with Friston?
Yeah, well, I continue to do. I mean, so, well, in fact, I was with him at a workshop,
I think about a month ago, on computational neurophenomenology. And yeah, he's pretty amazing.
Yeah, yeah. Very smart and quantitative neuroscientist. I think, is he the most
cited neuroscientist at this point? I believe so. I believe so, yeah, yeah.
So a couple more background questions before we jump in. One, just to remind people that DeepMind is the AI company that was
acquired by Google that gave us AlphaZero and AlphaGo and AlphaFold and made some of these
initial breakthroughs with deep learning in recent years that have really been the core,
I would say, of the renaissance in AI. People are talking more about OpenAI at the moment as a result of ChatGPT, but DeepMind really has been the frontrunner for several years in AI.
And it's joined together with Google Brain now, so it's back again as Google DeepMind.
Yeah, yeah. How did you come to meditation and what practices have you
been doing and what teachers have been important for you?
Yeah. So that's actually very central to my life. I started meditating 35 years ago.
Right when I started my PhD. I initially started with TM, which was the way back then, in the 80s.
That's what pretty much a lot of the early meditators started with.
And I found that actually very useful.
And as I've gone through my practice, I've only come to understand that it was actually a really good foundation.
And then I guess maybe around 20 years ago, I started my sort of first Buddhist retreats.
And then maybe seven or eight years ago, I started really spending a lot of time at a
retreat center in the UK called Gaia House,
where Rob Burbea was the resident teacher. And I was very influenced by his kind of framework
on emptiness and his meditation practices.
Yeah, unfortunately, I never met him. I discovered him after he died. He died,
unfortunately, quite young. And he has this wonderful book on emptiness,
Seeing That Frees. And he really seemed like he was quite a gem.
He really was. I mean, he's actually, I think, exactly the same age as me to the month.
I think that he... Yeah, unfortunately, by the time I was there, he was a lot of the time pretty sick, so I never really got to sit with him too much. But I was still, you know, in the orbit, and my meditation practice deepened a lot into the jhanas and other kinds of techniques, and then the emptiness meditations of Rob Burbea.
And then I suppose in the last three, four years, I kind of felt that what my practice really needed
was a move to a non-dual style. And so I did a retreat with Loch Kelly, but then pretty much a little after that, started working with Michael Taft, who is a non-dual teacher, but not under any particular lineage. And that was perfect for me because his experience is very broad and he can kind of integrate many styles.
And so, yeah, I've been working with him.
So, yeah, it's been a long and interesting journey.
And along the way, something that we haven't yet touched on, I also have been very involved in the psychedelic kind of renaissance. I'm also
a research fellow at Imperial College, where Robin Carhart-Harris used to be. And Robin's now, of course, in San Francisco, at UCSF. And actually, I worked quite closely with Robin and Carl Friston on the kind of computational model of what might be going on with psychedelics, the REBUS model, which basically uses a predictive processing framework.
Nice, nice. And you funded some of that research, right?
Yeah, so that's yet another hat, because apart from being on the science side and the research side, I'm also a philanthropist. Just as it happens, because of my career, I have the financial resources to also play a philanthropic role.
And I'm very influenced,
obviously, by effective altruism.
And one of the kind of tenets of effective altruism
is that, you know, we want to be in areas
that are kind of neglected.
And when I helped to set up the first psychedelic research center in the world, it was still a pretty underfunded area.
Right. Well, okay. So we have many things on the menu here. Let's start with the brain. And you know, some of these topics are fairly complex and some of the interesting details are in the math.
And we obviously are working with audio only, so there are no visual aids here.
But I think it would be worth trying to explain what you mean by the free energy principle, what you mean by predictive inference or predictive coding. Part of that picture is also the work you've done on Bayesian inference in the brain.
We might, just to make things difficult, we might also mention integrated information theory.
Come at that tangle however you want, but what do you
think is the best hypothesis at the moment describing what the brain is doing? And we
might want to start by differentiating that from everyone's common sense idea of what the science
probably says about what the brain is doing. Yeah, okay. No, that's great.
So why don't we look at the brain from first principles,
and then maybe we can later apply it to meditation and spirituality.
So the thing is that, you know, maybe 20 years ago,
the consensus of, you know, what the brain was doing was
it was kind of taking bottom-up sensory data, sensory information,
and kind of processing it up a stack. And then eventually, the brain would know what was,
would figure out what was going on. And that view of what the brain is doing is, in fact, precisely upside down, according to the latest theory of how the brain works. And I think the way to start at this question is really from
first principles. It really does help to look at it philosophically, which is, you know, we're an organism with this central processing unit,
the brain, which is enclosed in a kind of dark cell within the skull.
I mean, we are already brains in vats.
You know, we are already the thought experiment.
Exactly, exactly. And all this brain has access to is some noisy time series data, some dots and
dashes coming in, you know, sort of from the nervous system. Now, how on earth is it going
to figure out what is going on in the world? Before you proceed further, I love the angle you're taking here, but let's just reiterate
what is meant by that, because it can be difficult to form an intuition about just how strange our
circumstance is. I mean, you open your eyes and you see the world, or you seem to see the world, and people lose sight of the significance
of light energy being transduced into electrochemical energy, which is not vision, right? After it hits your retina, you're not dealing with light anymore. This has to be a reconstruction. And we're now going to talk about the details
of that reconstruction. But to say that we're brains in vats, right, being piped electrochemical signals, divorced from how experience seems, you know, where the world just seems given to us out there, that's not hyperbole. There is a
fundamental break here, at least in how we conceive of our sectioning of reality based on
what our nervous system is. Yeah. I mean, in fact, I don't know how deep you want to go with this,
but actually you can even start before that, which is from the philosophical problem, which is, you know, what Plato and Immanuel Kant kind of pointed to,
which is that we only know our appearances, our experience. We have no contact with reality.
Most people's common sense view is that, oh, look, we're looking out at the
world through little windows in the front of our skulls, and we're seeing trees as they really are.
Now, of course, that cannot be true for precisely the reasons that you said. We're just receiving some noisy,
random electrical signals coming in
and the brain has never seen reality as it is.
I was going to say, you know, the tree as it is in itself, if that makes any sense.
Now, what the brain has to do
is figure out the causes of its sensory data.
In other words, it's trying to figure out what is causing its sensory data
so it can get some grip on the environment.
And that, of course, is important from an evolutionary perspective
because if we don't know what's going on in the environment,
we won't know where the food is and we won't know where the tiger is.
So we need to find out the causes of our sensory data.
And this is ultimately, formally, exactly the statistical inference problem, the Bayesian inference problem.
And Bayesian inference is trying to figure out the probability that given my sensory data, I'm seeing a tree.
Okay.
Now, as we said, it turns out that the brain can't solve this problem because actually
formally solving, you know, the Bayesian inference problems turns out for technical reasons to
be computationally explosive.
So what evolution has to do and what we have to
do in artificial intelligence is use another algorithm. It's called approximate Bayesian
inference. And the way you solve it, because Bayesian inference is so difficult, the way you
actually solve it is going at it backwards. And what you have to do is you essentially have to have all this data come in
and try to learn what you think you're seeing and from what you think you are seeing you then
simulate the pixels that you would be seeing if your guess is correct. So if I think I'm seeing a
tree what your brain then has to do is go through something called a generative model and actually
simulate the sensory data that it would be seeing if this was indeed a tree. Now, that is incredible
because what it means, the upshot of that, just to cut to the chase, is what's called a neurophenomenological hypothesis: that in fact, what we experience, if we're aware of it, is our internal simulation, is precisely that internal generative model.
Now, you might just then conclude, well, we're just hallucinating, we're just simulating, how do we have any grip on reality?
And this is where the free energy principle comes in.
It says that, you know, what we have to do
is we have to simulate what we think is going on,
but it's not any old simulation.
It's a simulation that minimizes the prediction error
from the output of your simulation and the few bits of sensory data that
we get. In other words, what we actually do with the sensory data is use it to calibrate
our simulation model, our generative model. And there is another part of the free energy principle,
which is it turns out that minimizing prediction error isn't good enough. It turns out we also have to have some prior guesses, some prior probabilities
about what we're experiencing. In other words, as you grow up through childhood and as you're enculturated, you come to learn that there are things like trees, and so there's a kind of a high prior probability of finding trees in your environment.
Now, what you want to do is you want to have a simulation,
which is minimizing the prediction error with the sensory data,
but also minimizing the informational distance between the output of your generative model, the simulation,
and your priors. In other words, you want a simulation that is as close to what you would normally expect before seeing the sensory data. So this is really what the free energy is. The
free energy has two terms. The first is roughly kind of a prediction error. And the second
is an informational distance to the prior of what you'd be expecting. So it turns out that we can
actually do approximate Bayesian inference, which is the mathematically optimal thing to do, if we simulate the world and create that simulation in such a way that it minimizes the prediction error with the sensory data that we get, and also minimizes the deviation from, the divergence from, our prior probability distribution, our prior probabilities.
So that's kind of the free energy in a nutshell.
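For listeners who want the notation, a standard textbook form of the variational free energy makes those two terms explicit (this is my gloss, not a quote from the conversation). With sensory data x, hidden causes z, prior p(z), and the brain's approximate posterior q(z), the simulation:

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(z)\,\|\,p(z)\,\big]}_{\text{divergence from the prior}}
\;-\; \underbrace{\mathbb{E}_{q(z)}\big[\log p(x \mid z)\big]}_{\text{accuracy (negative prediction error)}}
```

Minimizing F over q trades off staying close to the prior against accurately predicting the sensory data, which is why minimizing free energy amounts to approximate Bayesian inference.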
And it's kind of, as I said, it's very interesting because it helps us think about phenomenology,
which is, you know, what I'm interested in. Because if we open our eyes, as you say, and we find the world just appearing in front of us, you know, what is this?
What is this experience that we're having? And the answer is, it's a kind of, we're somehow
aware of our internally generated model of the world. And that model happens to be kind of calibrated correctly with the sensory
data.
Yeah, that was a great overview. Maybe I'll track back through some of that just to give
people a few handholds here and also give them areas they may do some further research if they're
interested. So many people will have heard of Bayesian statistics or Bayes' theorem, and it's actually a pretty simple piece of mathematics that is worth looking up because, unlike many equations, once you track through the terms, it repays one's intuitive sense of how things should be here.
I mean, this is a mathematical description of how we revise our probability estimates
based on evidence.
And so when you look at this equation, I just pulled it up to remind myself of its actual
structure.
If you want, I can just do a little very simple example.
Sure.
Yeah.
I mean, I was imagining something like,
you know, what's the probability that it's raining given that the street is wet, you know?
Yeah. So, I mean, I'll stick to the brain and the tree and the data. So, to think about our tree-in-the-brain example, Bayes' theorem is giving you a formula for calculating the probability of there being a tree given your sensory data. Okay. In fact, Bayesian inference, the way we're doing it in the free energy, is calculating the whole probability distribution. But you can just think of it as: what we're trying to calculate is the probability that what you're seeing is a tree, given the sensory data that's coming through to you.
And what Bayes' theorem says is that you can calculate that probability by going at it in a kind of a backwards way. You can say it's equal to the likelihood of the data, and that's roughly saying, how likely is it that I would be seeing exactly this sensory data if it was indeed a tree, times another term called the prior
probability, which is, what's the prior probability of seeing trees? Okay, so those are the two main
terms of Bayes' theorem, the likelihood of the data,
which is what's the probability of seeing this particular data on the basis that it's from a
tree. And the second term is the prior, which is the probability of seeing trees in general.
And then these two terms are just divided by a normalizing term, which is very simple. It's just
what's the probability in general of seeing
this particular sensory data. So that's just there to make sure the probabilities add up to one.
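In symbols, the formula being walked through here is just Bayes' theorem applied to the tree example:

```latex
P(\text{tree} \mid \text{data}) \;=\;
\frac{\overbrace{P(\text{data} \mid \text{tree})}^{\text{likelihood}} \;\times\; \overbrace{P(\text{tree})}^{\text{prior}}}
{\underbrace{P(\text{data})}_{\text{normalizer}}}
```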
One thing I'll flag here is that this connects with some very common reasoning errors of the
sort that Danny Kahneman and Amos Tversky pointed out, like base rate neglect. The prior probability of seeing a tree,
given that you're walking someplace on Earth, is very high,
but the prior probability of seeing a spaceship or a lion or something else is lower,
and it's only against those background probabilities that we can finally judge
how likely it is that our perceptions are veridical,
right? And neglecting that what is called base rate is a source of some very often comic reasoning
errors. In fact, if I can draw it back to the brain, that's a great example to illustrate it
exactly, because this goes to the heart of the free energy principle and how predictive
processing and active inference works, which is, okay, so you're looking down the street and you
see, you know, it's kind of a little foggy, but you see this four-legged animal coming up the street. And actually, it kind of looks like a lion. The probability that the sensory
data is coming from a lion is actually higher than the probability that this sensory data is
coming from a dog. Okay. So let's just take that as given. However, the prior probability of seeing a lion is way, way lower than seeing a dog.
And so in fact, and this can be actually, you know, this is tested in lots of experiments.
In fact, you will perceive that as a dog.
You will actually perceive it as a dog because that's the way Bayesian inference works out.
Now, actually, there's a slight wrinkle to this,
which gets into the nitty-gritty of the free energy principle.
If it wasn't a foggy day
and you get a really clear read on the sensory data,
then the weight of that likelihood of the data term
will take precedence over the prior. So it will actually overrule the prior. So it doesn't mean
that, you know, you're just constrained by your priors forevermore. It's just a way of weighting
the sensory data with the prior probabilities. And, you know, if it's a foggy day, the sensory data is lowly weighted.
Technically, we say it's got low precision,
which is the inverse of variance.
And yeah, that's a really great example
of how the Bayesian inference actually works in the brain.
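To make the arithmetic concrete, here is a minimal sketch in Python (the hypotheses and all the numbers are my own illustration, not figures from the episode) showing the prior dominating weak, foggy data, while sharp, high-precision data overrules the prior:

```python
def posterior(likelihoods, priors):
    """Bayes' rule over a set of hypotheses: P(h | data) is proportional to P(data | h) * P(h)."""
    unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: round(p / total, 4) for h, p in unnormalized.items()}

# Lions are vanishingly rare on a city street; dogs are common. (Made-up numbers.)
priors = {"dog": 0.999, "lion": 0.001}

# Foggy day: the blurry shape looks somewhat more lion-like, but the
# evidence is weak, so the prior wins and you perceive a dog.
print(posterior({"dog": 0.4, "lion": 0.6}, priors))
# {'dog': 0.9985, 'lion': 0.0015}

# Clear day: a sharp view strongly favors "lion", and the likelihood
# overrules the prior.
print(posterior({"dog": 0.0001, "lion": 0.9999}, priors))
# {'dog': 0.0908, 'lion': 0.9092}
```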
Okay, so just to give some neuroanatomical plausibility to this picture. So again, the common sense view of the science here is that we have a world. Let's stick with vision: light hits the eye, gets transduced into electrochemical energy in the
brain and transits through various brain areas. And along the way, various features of the visual
scene are detected and encoded. So there are neurons that respond to straight lines. There are cortical columns in the visual cortex that build up a more complex and abstract image.
And, you know, eventually you get to some cell in the cortex that responds to faces rather than anything else.
And even, you know, you'll get cells that respond to specific faces, like the fabled grandmother cell.
I think there was one experiment about 25, 30 years ago that showed that there were cells that were responding to the face of Bill Clinton and not any other.
And so you have this kind of one-way, feed-forward picture of a mapping of the world,
and yet you, in your description here, seem to be reversing the causality.
One interesting piece of neuroanatomical trivia is that we have something like 10 times the number of connections going top-down rather than bottom-up, returning to the visual cortex from the frontal lobes. That has always been somewhat inscrutable. We know that you can
modify the activity and even structure of visual cortex by learning, right? So you can learn to
see the world differently, and that learning largely takes place frontally or in areas of
cortex that are not strictly limited to vision, and yet they connect back
to visual cortex. And so you imagine what is required neurologically to learn to recognize,
you know, let's say you become a radiologist and you learn to read CAT scans, say. That learning
has to be physically inscribed somewhere, and we find that the changes propagate all the way down to visual cortex. So all of this suggests a brain that is predictive, that is making guesses, that is engaged in what, I believe, Anil Seth, when he was on this podcast, described as a controlled hallucination.
It's very much like what the dreaming brain is doing, except in waking life,
it is constrained by visual inputs to the system of the sort that you just described.
And we're getting this error term in predictive coding.
So maybe you can kind of fill in the gap I've created here. What are these deeper layers of the network doing, and how is this reversal of, you know, this is now a feedback story more than
it is a feed-forward story. How does that change our sense, or how might it change our sense
of the role that our worldview and self-model plays in determining the character of our experience?
Right. Great. So exactly as you say, it's kind of always been a bit of a mystery why there are 10 times as many feedback neurons as there are feed-forward ones in some of these systems. And the picture that we just talked about explains this: the generative model, the simulation model, actually points down from the higher cortical areas towards the low-level inputs where the sense data is coming in. Now, one way to think about this model is that we've got this kind of generative model, which starts with our priors, what we think is going on, and makes a simulation.
And what flows up the feed-forward part is just the prediction errors.
So the prediction errors say, look, your model's a little wrong here, because it's different from the data. So then the model will be adjusted so as to minimize the prediction errors.
Now, it's not just one huge model going all the way from top to bottom.
As you intimated, the scheme that is now thought to arise is something called hierarchical
predictive processing.
So it's essentially that you have a whole series of low-level models near the data.
You know, the first layers of the visual cortex might have models that are detecting edges and corners.
And then, you know, you build up from there exactly like you do in a neural network where
higher layers in the network are essentially processing higher level features, except that these are all being driven
down by these priors that are generating what we would expect to see. And all that's flowing up,
the funny thing is that the data actually never flows up the brain. All that's flowing up is the
prediction errors up this feedforward network. What's coming down is the output of the generative model.
So the brain is only generating what it thinks it's seeing. The data itself is never what we're seeing. It's just that prediction errors flow up and say, can you please adjust it? There's a large prediction error here.
So what we think is
going on is that we have these kind of models that sit one on top of another. And the higher level
model is where the priors come from. Now, you might ask, well, where do the priors of that
higher level model come from? Well, they come from priors a layer above. And, you know, we don't know how
many layers in this hierarchy there are, but, you know, there might be something like half a dozen
layers in the hierarchy. And right at the top of the hierarchy, you know, we get things like
concepts and, you know, multi-sensory integration concepts and reasoning and language.
Maybe in the middle layers of this hierarchy, we get things like faces and motion.
And at the low levels of the hierarchy, we get these very raw, unfabricated parts of sensory information, low-level sensory percepts.
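To make the top-down and bottom-up flow concrete, here is a toy sketch in Python, in the spirit of classic predictive-coding models (my own illustration, with made-up sizes and learning rate, not anything from the episode): each layer's state generates a top-down prediction of the layer below, only the prediction errors propagate upward, and the states are nudged until the errors shrink.

```python
import numpy as np

rng = np.random.default_rng(0)

# Top-down generative weights: layer 1 predicts the sensory layer,
# layer 2 predicts layer 1. (Sizes are arbitrary for the toy.)
W0 = 0.5 * rng.normal(size=(8, 4))   # layer 1 -> sensory prediction
W1 = 0.5 * rng.normal(size=(4, 2))   # layer 2 -> layer 1 prediction
s1, s2 = np.zeros(4), np.zeros(2)    # hidden states at levels 1 and 2

sensory = rng.normal(size=8)         # the incoming "dots and dashes"

lr = 0.05
for _ in range(500):
    e0 = sensory - W0 @ s1           # error at the bottom: data vs. prediction
    e1 = s1 - W1 @ s2                # error one level up
    # Only errors flow upward; each state moves to reduce the error
    # below it while staying consistent with the prediction from above.
    s1 += lr * (W0.T @ e0 - e1)
    s2 += lr * (W1.T @ e1)

print("remaining low-level prediction error:", round(float(np.linalg.norm(e0)), 3))
```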
Just out of curiosity, how many layers are deep learning networks working with now?
Well, like in the transformer models behind ChatGPT and Google's Bard, they're close to 100, maybe 95 or 125, depending on the particular architecture.
So there are a lot.
That being said, obviously the brain has a way more parallel and complex architecture, I would guess, than some of these neural networks.
But hierarchy is key.
And I think that's precisely why you're able to get such sophisticated behavior out of some of these large language models.
But we've known for over a decade that neural networks use generative models.
Unsupervised neural networks work in the same way as the brain. And they extract these
features like edges and corners, and then noses and eyes and mouths and ears, and then whole faces,
you know, further up the hierarchy. So that's the way that, you know that we think that the brain is kind of constructing our model of the
world. Now, to really think about it: what's at the top of this? What are we actually trying to do? Well, one of the most important conjectures is that, in fact, it's kind of like a self-model, a phenomenal self-model,
which must emerge at some of these kind of higher levels in the hierarchy.
And I don't know, well, I guess we'll get into that when we talk about the meditation.
Yeah. Yeah. So I want to take a turn toward psychedelics and meditation and the nature of the self
and just how flexible our interaction with reality might prove to be and just what is
possible subjectively here to realize and how might that matter and how that might connect
to human flourishing overall.
Just to take one point of contact here, there's some
evidence now that psychedelics in particular promote neuroplasticity and offering some clues
to how a fairly short experience might create durable changes in one's sense of one's being
in the world. Strangely, I think it was a recent paper that suggested this neuroplasticity is mediated through intracellular 5-HT2A receptors. As many people know, psychedelics like LSD and psilocybin are active at serotonin receptors, but they obviously have a different effect than serotonin normally does. And the idea that they may be reaching inside the cell, I mean, maybe that's been in the air for a while, but it was the first I heard of it, and it struck me as interesting.
But before we get there, I just want to see if we can make this picture of predictive coding and
error detection somehow subjectively real for people. So you know,
you and I are having this conversation. My eyes have generally been open. I've been looking at a
fairly static scene. I just have my desk in front of me. Nothing has been moving, right? There's no
changes to the visual scene, really, apart from what is introduced by my moving my eyes around.
And I've surveyed this scene fairly continuously for the last 45 minutes as we've been speaking.
And again, it's a scene of very little change, right?
And yet I'm continuing to see everything, and some things presumably I'm now seeing for the first time as I pay attention in new ways. Now, if something fundamentally changed, if a mouse suddenly leapt onto the surface of my desk and
began scurrying across it, it would get a strong reaction from me and I would perceive the novelty.
But before that happens, I'm perceiving everything quite vividly anyway, and nothing is changing.
So in what sense is my perception merely a story of my continuous prediction errors
with respect to the visual scene?
Yeah, so I think the idea is that you are creating a simulation of what your best guess is on, you know, the contents of your desk. And as you say, if something like a mouse runs across your desk, that would cause a very large prediction error and your attention would go to it.
In fact, we didn't get into this, but there is actually a kind of a real homologue of
what attention is within the predictive processing framework.
of what attention is within the predictive processing framework. Essentially, what happens is that when you attend to something,
you give more weight to parts of the predictive processing hierarchy stack.
And specifically, you give more precision weighting to the sensory data, the likelihood of the data.
And so you would say there's a very large prediction error here. And instead of your priors dominating the posterior, what you actually see,
the sensory data would have a greater weight in determining the contents
of the generative model.
So, you know, this is a kind of a two-way street that's going on constantly between
the likelihood of the data and the priors, your expectations.
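A minimal sketch of precision weighting in Python (Gaussian form and made-up numbers, my own illustration): the posterior estimate is a precision-weighted average of prior and data, so low-precision foggy data leaves you near the prior, while boosted precision, as attention is proposed to supply, pulls the estimate toward the data.

```python
def precision_weighted_estimate(prior_mean, prior_precision, data, data_precision):
    """Posterior mean when combining two Gaussian sources of evidence."""
    total = prior_precision + data_precision
    return (prior_precision * prior_mean + data_precision * data) / total

prior, data = 0.0, 10.0   # arbitrary units: what you expect vs. what the senses report

# Foggy day: low sensory precision, so the estimate stays near the prior.
print(precision_weighted_estimate(prior, 4.0, data, 0.5))   # ~1.1

# Attended / clear view: boosted sensory precision pulls the estimate to the data.
print(precision_weighted_estimate(prior, 4.0, data, 40.0))  # ~9.1
```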
And, you know, it's interesting just to take a step back. You're seeing this relatively constant scene in front of you, presumably in these beautiful colors, in cartoonish definition. But actually, you only get high-resolution color data from a tiny portion of the visual scene at any one time, because that's where your macula is; the only part that sees in color and accurately is like a tiny portion of the visual field. And yet you're seeing everything clearly in color.
So this kind of makes it very clear that what you are seeing is not your sensory data, but
in fact, the output of your generative model.
Just to remind people, your peripheral vision, while it seems to you to be occurring in color,
it really isn't. You can test this. You can have someone hold a colored object, however brightly
colored you want, at the very edge of your peripheral field of view, you know, keeping your
eyes forward, and you will find it impossible right at the edge to determine what the
color of that object is until it comes further into your field of view. And yet we're not
walking around feeling that our visual world is ringed with black and white imagery.
And so it is, as you point out, with the vast region beyond the very narrow spot of foveal focus.
You see something in focus, but the rest isn't in focus until you direct your gaze to it.
And yet we don't tend to notice that. It's a little bit like a video game engine that is rendering parts of the world only when they're needed; otherwise they're just presumed. And we seem to be content to live that way, until we start bumping into hard objects that we didn't know were there.
And I guess there's another piece here: we're constantly moving our eyes in what are called visual saccades, and we're effectively blind when we do that.
For the brief moment of our eyes lurching around,
we're not consciously getting visual data, and we're not
noticing that either, right? So there are various clues, and you can notice that when you, if you go
to a mirror and stare into your own eyes, and then look around, and then look back at your eyes,
you never catch your eyes, you know, moving around, and there's this gap, and if you still
doubt that, you can notice how different it is to move your eye by
taking your finger and touching the side of one of your eyes and jiggling it, and you can see how
the world lurches around. That's because your oculomotor system can't correct for that kind of motion in its forward-looking copy of what it expects to see, because you're accomplishing that with your finger. But when you move your eyes in the normal way,
it's discounting the data that's being acquired during that movement. So in all these ways,
you can see that you're not getting this crystal clear, comprehensive photographic image of the world when you're seeing, this is a piecemeal vision,
again, based in large measure on what you're expecting to see, and yet that's not consciously
obvious.
Yeah, exactly. And of course, it's only through meditation or experiences with psychedelics, or at other times, that people can suddenly come to notice, ah, you know, isn't it odd that when I push my eyeball,
the whole world moves, you know, maybe what I'm seeing is a kind of a mental construction
and not the world as it really is. So I want to talk about the self in particular
and what we might describe as the self-model.
I think Thomas Metzinger, who's also been on the podcast,
might have given us that phrase, I'm not sure.
Yeah, he's done phenomenal work on this over the years,
and I think that that's actually central,
this Metzinger concept of the phenomenal self-model.
But before we do it, many people will be interested in how psychedelics help us make
sense of some of this neuroscience. Because unlike meditation, I mean, there's obviously
a fair amount of neuroscience done on meditation as well,
but the strength of psychedelics is that you can take really anyone.
There are some very rare exceptions to this, but, you know, virtually anyone can be sat down and given the requisite substance,
and an hour later, they're having some very predictable and sweeping changes made to their
perception of the world. For better or worse, almost no one comes away from a large dose of
LSD or psilocybin saying nothing happened or it didn't work. Whereas with meditation,
as many people who have tried the practice know, many, many people simply bounce off the whole
project. They close their eyes, they try to follow their breath, or they use whatever technique has
been given to them, and they feel like nothing has happened, right? It's just me here thinking,
and I do that all the time anyway, and they come away with the sense that it's not for them,
or maybe there's really nothing to it.
It's just people are just deceiving themselves that there's anything especially important going on there.
But psychedelics don't tend to have that effect on people.
What do you think we know about psychedelics at this point that gives us some perspective here?
And perhaps you might describe, if you're willing, your own
experience with psychedelics. Have they been an important part of your
coming to be interested in any of this?
Yeah, absolutely. Okay, well, why don't we take the kind of predictive processing theory that's out there and ask what the mechanism of action is, from a computational perspective?
If you'd like to continue listening to this conversation,
you'll need to subscribe at samharris.org.
Once you do, you'll get access to all full-length episodes
of the Making Sense podcast,
along with other subscriber-only content,
including bonus episodes and AMAs
and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on listener support. Thank you.