Making Sense with Sam Harris - #113 — Consciousness and the Self
Episode Date: January 9, 2018
Sam Harris speaks with Anil Seth about the scientific study of consciousness, where consciousness emerges in nature, levels of consciousness, perception as a "controlled hallucination," emotion, the experience of "pure consciousness," consciousness as "integrated information," measures of "brain complexity," psychedelics, different aspects of the "self," conscious AI, and many other topics.
Transcript
Today I spoke with Anil Seth. He is a professor of cognitive and computational neuroscience
at the University of Sussex and founding co-director of the Sackler Center for Consciousness Science.
And he's focused on the biological basis of consciousness and is studying it in a very
multidisciplinary way, bringing neuroscience and mathematics and artificial intelligence
and computer science, psychology, philosophy, psychiatry,
all these disciplines together in his lab. He is the editor-in-chief of the academic journal
Neuroscience of Consciousness, published by Oxford University Press, and he has published
more than 100 research papers in a variety of fields. His background is in natural sciences and computer science and AI,
and he also did postdoctoral research for five years at the Neurosciences Institute in San Diego
under Gerald Edelman, the Nobel laureate. And we cover a lot of ground here. We really get
into consciousness in all its aspects. We start with
the hard problem, then talk about where consciousness might emerge in nature, talk about levels of
consciousness, anesthesia, sleep, dreams, the waking state. We talk about perception as a controlled
hallucination, different notions of the self, conscious AI, many things here. I found it all fascinating,
and I hope you do as well. And so without further delay, I bring you Anil Seth.
I am here with Anil Seth. Anil, thanks for coming on the podcast.
Thanks for inviting me. It's a pleasure.
So I think I first discovered you, I believe I'd seen your name associated with various
papers, but I think I first discovered you the way many people had after your TED Talk.
You gave a much-loved TED Talk.
Perhaps you can briefly describe your scientific and intellectual background.
It's quite a varied background, actually.
I think my intellectual interest has always been in understanding the physical and biological basis
of consciousness and what practical implications that might have in neurology and psychiatry.
But when I was an undergrad student at Cambridge in the early 1990s, consciousness was, certainly as a student then, and in a place like Cambridge, not a thing you could study scientifically. It was still very much a domain of philosophy. And at that time, I still had this kind of idea that physics
was going to be the way to solve every difficult problem in science and philosophy.
So I started off studying physics.
But then through the undergrad years, I got diverted towards psychology as more of a direct
route to these issues of great interest and ended up graduating with a degree in experimental
psychology. After that, I moved to Sussex University, where I am now actually, again,
to do a master's and a PhD in computer
science and AI. And this was partly because of the need, I felt, at the time to move beyond these
box and arrow models of cognition that were so dominating psychology and cognitive science
in the 90s towards something that had more explanatory power. And the rise of connectionism and all these new methods and tools in AI seemed to provide
that.
So I stayed at Sussex and did a PhD, actually, in an area which is now called artificial
life.
And I became quite diverted, actually, and ended up doing a lot of stuff in ecological modeling
and thinking a lot more here about how brains, bodies and environments
interact and co-construct cognitive processes. But I sort of left consciousness behind a little
bit then. And so when I finished my PhD in 2000, I went to San Diego to the Neurosciences Institute to work with Gerald Edelman, because certainly then San Diego was one of the few places, certainly that I knew of at the time, that you could legitimately study consciousness and work on the neural basis of
consciousness. Edelman was there, Francis Crick was across the road at the Salk Institute. People
were really doing this stuff there. So I stayed there for about six years and finally started
working on consciousness, but bringing together all these different traditions of math, physics, computer science, as well as the tools of cognitive neuroscience.
And then for the last 10 years, I've been back at Sussex, where I've been running a lab.
And it's called the Sackler Center for Consciousness Science.
And it's one of the growing number of labs that are explicitly dedicated to solving or studying
at least the brain and biological basis of consciousness.
Yeah, well, that's a wonderful pedigree.
I've heard stories, and I never met Edelman.
I've read his books, and I'm familiar with his work on consciousness.
But he was famously a titanic ego, if I'm not mistaken.
I don't want you to say anything you're not comfortable with,
but everyone who I've ever heard have an encounter with Edelman was just amazed at how much space he
personally took up in the conversation. I've heard that too. And I think there's some truth to that.
What I can say from the other side is that when I worked for him and with him,
firstly, it was an incredible experience. And I felt very lucky
to have that experience because he had a large ego, but he also knew a lot too. I mean, he really
had been around and had contributed to major revolutions in biology and in neuroscience.
But he treated the people he worked with, I think, often very kindly. And one of the things that was
very clear in San Diego at the time, he didn't go outside of
the Neurosciences Institute that much. It was very much his empire. But when you were within it,
you got a lot of his time. So I remember many occasions just being in the office and most days
I would be called down for a discussion with Edelman about this subject or that subject or
this new paper or that new paper.
And that was a very instructive experience for me. I know he was quite difficult in many interviews and conversations outside the NSI, which is a shame, I think, because his legacy really is
pretty extraordinary. I'm sure we'll get onto this later. But one of the main reasons I went there was because I'd read some of the early work on dynamic core theory, which has later become Giulio
Tononi's very prominent integrated information theory.
And I was under the impression that Giulio Tononi was still going to be there when I
got there in 2001, but he had already left.
And he wasn't really speaking much with Edelman at the time.
And it was a shame that they didn't continue their interaction.
And when a few of us tried to organize a Festschrift for Edelman, some years ago now,
it was quite difficult to get the people together that had really been there and worked with
him at various times of his career. I think of the
people that have gone through the NSI and worked with Edelman, there are an extraordinary range of
people who've contributed huge amounts, not just in consciousness research, but in neuroscience
generally, and of course in molecular biology before that. So it was a great experience for
me. But yeah, I know he could also be pretty difficult at times too. You had to have a pretty
thick skin. So we have a massive interest in common.
No doubt we have many others.
But consciousness is really the center of the bullseye as far as my interests go.
And really, as far as anyone's interests go, if they actually think about it, it really
is the most important thing in the universe because it's the basis of all of our happiness and
suffering and everything we value. It's the space in which anything that matters can matter. So the
fact that you are studying it and thinking about it as much as you are just makes you the perfect
person to talk to. I think we should start with many of the usual starting points here because
I think they're the usual starting points for a reason. Let's start with a definition of consciousness. How do you
define it now? I think it's kind of a challenge to define consciousness. There's a sort of easy
folk definition, which is that consciousness is the presence of any kind of subjective experience
whatsoever. For a conscious organism, there is a phenomenal world of
subjective experience that has the character of being private, that's full of perceptual qualia
or content, colors, shapes, beliefs, emotions, other kinds of feeling states. There is a world
of experience that can go away completely in states like general anesthesia or dreamless
sleep. It's very easy to define it that way. To define it more technically is always going to be
a bit of a challenge. And I think sometimes there's too much emphasis put on having a consensus
technical definition of something like consciousness, because history of science has
shown us many times that definitions evolve along with our scientific understanding of a phenomenon. We don't take the definition and then
transcribe it into scientific knowledge in a unidirectional way. So long as we're not talking
past each other and we agree that consciousness picks out a very significant phenomenon in nature,
which is the presence of subjective experience,
then I think we're on reasonably safe terrain.
Many of these definitions of consciousness are circular.
We're just substituting another word for consciousness
in the definition, like sentience or awareness
or subjectivity; even something like qualia, I think, is parasitic on the undefined concept of consciousness.
Sure, I think that's right.
But then there's also a lot of confusions people make too.
So I'm always surprised by how often people confuse consciousness with self-consciousness.
And I think our conscious experiences of selfhood are part of conscious experiences as a whole, but only a subset of those experiences. And then there are arguments about whether there's such a thing as phenomenal consciousness that's different from access consciousness, where
phenomenal consciousness refers to this impression that we have of a very rich conscious scene,
perhaps envisioned before us now, that might exceed what we have cognitive access to.
Other people will say, well, no, there's no such thing as phenomenal consciousness beyond
access consciousness.
So there's a certain circularity, I agree with you there, but there are also these important
distinctions that can lead to a lot of confusion when we're discussing the relevance of certain
experiments.
I want to just revisit the point you just made about not transcribing
a definition of a concept that we have into our science as a way of capturing reality. And then
there are things about which we have a folk psychological sense which completely break apart
once you start studying them at the level of the brain. So something like memory, for instance,
we have the sense that it's one thing intuitively, pre-scientifically. We have the sense that
to remember something, whatever it is, is more or less the same operation regardless of what it is.
Remembering what you ate for dinner last night, remembering your name, remembering who the first
president of the United States was,
remembering how to swing a tennis racket. These are things that we have this one word for,
but we know neurologically that they're quite distinct operations, and you can disrupt one and have the other intact. The promise has been that consciousness may be something like that,
that we could be similarly confused about it,
although I don't think we can be. I think consciousness is unique as a concept in this
sense. And this is why I'm taken in more by the so-called hard problem of consciousness than
I think you are. I think we should talk about that. But before we do, I think the definition
that I want to put in play, which I know you're quite
familiar with, is the one that the philosopher Thomas Nagel put forward, which is that consciousness
is the fact that it's like something to be a system, whatever that system is. This comes from his famous essay, "What Is It Like to Be a Bat?" If a bat is conscious, whether or not we can understand what it's like to be a bat, if it is like something to be a bat, that is consciousness
in the case of a bat. However inscrutable it might be, however impossible it might be to map that
experience onto our own, if we were to trade places with a bat, that would not be synonymous
with the lights going out. There is something that it's like to be a bat if a bat is conscious. That definition, though, is really
not one that is easy to operationalize, and it's not a technical definition. There's something
sufficiently rudimentary about that that it has always worked for me. And when we begin to move
away from that definition into something more technical, my experience
has been, and we'll get to this as we go into the details, that the danger is always that
we wind up changing the subject to something else that seems more tractable.
We're no longer talking about consciousness in Nagel's sense.
We're talking about attention.
We're talking about reportability,
or mere access, or something. So how do you feel about Nagel's definition as a starting point?
I like it very much as a starting point. I think it's pretty difficult to argue with that
as a very basic fundamental expression of what we mean by consciousness in the round.
So I think that's fine. I partly disagree with you, I think, when
we think about the idea that consciousness might be more than one thing. And here I'm much more
sympathetic to the view that, heuristically at least, the best way to scientifically study
consciousness and philosophically to think about it as well, is to recognize that we might be
misled about the extent to which we experience consciousness as a unified phenomenon. And
there's a lot of mileage in recognizing how, just like the example for memory, recognizing how
conscious experiences of the world and of the self can come apart in various different ways.
Just to be clear, actually, I agree with you there.
We'll get into that, but I completely agree with you there that we could be misled about how unified consciousness is.
The thing that's irreducible to me is this difference between there being something that it's like and not.
You know, the lights are on or they're not.
There are many different ways in which the lights can be on
in ways that would surprise us.
Or, for instance, it's quite possible that the lights are on
in our brains in more than one spot.
We'll talk about split-brain research, perhaps,
but there are very counterintuitive ways the lights could be on.
But just the question is always, is there something that it's like to be that bit of
information processing or that bit of matter? And that is always the cash value of a claim
for consciousness. Yeah, I'd agree with that. I think that it's perfectly reasonable to put
the question in this way, that for a conscious organism, it is something like it is to be
that organism. And the thought is that there's going to be some physical, biological, informational
basis to that distinction. Now, you've written about why we really don't need to waste much time
on the hard problem. Let's remind people what the hard problem is. David Chalmers has been
on the podcast, and I've spoken about it with other people, but perhaps you want to introduce
us to the hard problem briefly. The hard problem has been, rightly so, one of the most influential
philosophical contributions to the consciousness debate for the last 20 years or so. And it goes
right back to Descartes.
And I think it encapsulates this fundamental mystery
that we've started talking about now,
that for some physical systems,
there is also this inner universe.
There is the presence of conscious experience.
There is something it is like to be that system.
But for other systems, tables, chairs,
probably most computers, probably all computers these days, there is nothing it is like to be that system.
And what the hard problem does, it pushes that intuition a bit further and it distinguishes
itself from the easy problem in neuroscience. And the easy problem, according to Chalmers,
is to figure out how the brain works in all its functions, in all its detail. So to figure out
how we do perception, how we utter certain linguistic phrases, how we move around the
world adaptively, how the brain supports perception, cognition, behavior in all its richness in a way
that would be indistinguishable from, and here's the key really, in a way that would be indistinguishable from an equivalent
that had no phenomenal properties at all, that completely lacked conscious experience.
The hard problem is understanding how and why any solution to the easy problem, any explanation of
how the brain does what it does in terms of behavior, perception, and so on, how and why any
of this should have anything to do with conscious experiences at all. And it rests on this idea of the conceivability of zombies. And this is
one reason I don't really like it very much. I mean, the hard problem has its conceptual power
over us because it asks us to imagine systems, philosophical zombies, that are completely equivalent in terms of their function
and behavior to you or to me or to any or to a conscious bat, but that instantiate no phenomenal
properties at all. The lights are completely off for these philosophical zombies. And if we can
imagine such a system, if we can imagine such a thing, a philosophical zombie, you or me,
then it does become this enormous challenge. You think, well, then what is it or what could it be
about real me, real you, real conscious bat? That gives rise, that requires or entails that
there are also these phenomenal properties, that there is something it is like to be you or me
phenomenal properties, that there is something it is like to be you or me, or the bat. And it's because Chalmers would argue that such things are conceivable, that the hard problem seems like a
really huge problem. Now, I think this is a little bit of a, I think we've moved on a little bit from
these conceivability arguments. Firstly, I just think that they're pretty weak.
And the more you know about a system, the more we know about the easy problem,
the less convincing it is to imagine a zombie alternative. Think about, you're a kid,
you look up at the sky and you see a 747 flying overhead, and somebody asks you to imagine a 747 flying
backwards, well, you can imagine a 747 flying backwards. But the more you learn about aerodynamics,
about engineering, the harder it is to conceive of a 747 flying backwards. You simply can't build
one that way. And that's my worry about this kind of conceivability argument, that to me,
I really don't think I can imagine in a serious way the existence of a
philosophical zombie. And if I can't imagine a zombie, then the hard problem loses some of its
force. That's interesting. I don't think it loses all of its force, or at least it doesn't for me.
For me, the hard problem has never really rested on the zombie argument, although I know Chalmers
did a lot with the zombie argument.
I mean, so let's just stipulate that philosophical zombies are impossible. They're at least,
you know, what's called in the jargon, nomologically impossible. It's just a fact
that we live in a universe where if you built something that could do what I can do, that
something would be conscious. So there is no zombie Sam that's possible. And let's just also add what you just said, that really, when you get to the
details, you're not even conceiving of it being possible. It's not even conceptually possible.
You're not thinking it through enough. And if you did, you would notice it break apart. But for me,
the hard problem is really that with consciousness, any explanation doesn't seem to promise the same sort of intuitive closure that
other scientific explanations do. It's analogous to whatever it is, and we'll get to some of the possible explanations, but it's not
like something like life, which is an analogy that you draw and that many scientists have drawn
to how we can make a breakthrough here. It used to be that people thought life could never be
explained in mechanistic terms. There was a philosophical point of view called vitalism here, which suggested that you needed
some animating spirit, some élan vital in the wheelworks, to make sense of the fact that
living systems are different from dead ones, the fact that they can reproduce and repair
themselves from injury and metabolize and all the functions we see a living system engage,
which define what it is to be alive. It was thought very difficult to understand any of that in mechanistic terms, and then, lo and behold, we managed to do that. The difference for me is, and I'm happy to have you prop up this analogy more than I have, that everything you want to say about life, with the exception of conscious life, we have to leave consciousness off the table here, can be defined in terms of extrinsic functional relationships among material parts. So reproduction and growth and healing and metabolism and homeostasis, all
of this is physics and need not be described in any other way.
And even something like perception, the transduction of energy, let's say in vision, light energy into electrical and chemical energy in the brain, and the mapping of a visual space onto a visual cortex, all of
that makes sense in mechanistic physical terms until you add this piece of, oh, but for some of
these processes, there's something that it's like to be that process. For me, it just strikes me as
a false analogy and with or without zombies, the hard problem still stays hard.
I think it's an open question whether the analogy will turn out to be false or not.
It's difficult for us now to put ourselves back in the mindset of somebody 80 years ago,
100 years ago, when vitalism was quite prominent, and whether the sense of mystery surrounding something that was alive seemed to be as inexplicable as
consciousness seems to us today. So it's easy to say with hindsight, I think, that life is
something different. But we've encountered, or rather scientists and philosophers over
centuries have encountered things that have seemed to be inexplicable, that have turned out to be
explicable. So I don't think we should rule out a priori that there's going to be something really different this time about consciousness. I think a more heuristic aspect to this is that
if we run with the analogy of life, what that leads us to do
is to isolate the different phenomenal properties that co-constitute what it is for us to be
conscious. We can think about, and we'll come to this I'm sure, conscious selfhood
as distinct from conscious perception of the outside world. We can think about conscious
experiences of volition and of agency that are
also very sort of central to our, certainly our experience of self. These give us phenomenological
explanatory targets that we can then try to account for with particular kinds of mechanisms.
It may turn out at the end of doing this that there's some residue. There is still something that is fundamentally puzzling, which is this hard problem residue.
Why are there any lights on for any of these kinds of things?
Isn't it all just perception?
But maybe it won't turn out like that.
And I think to give us the best chance of it not turning out like that, there's a positive
and a negative aspect.
The positive aspect is that we need to retain a focus on phenomenology.
And this is another reason why I think the hard versus easy problem distinction can be a little
bit unhelpful, because in addressing the easy problem, we are basically instructed to not worry about
phenomenology. All we should worry about is function and behavior. And then the hard problem
kind of gathers within its remit everything to do with phenomenology in this central mystery of why
is this an experience rather than no experience. The alternative approach, and this is something
I've kind of caricatured as the real problem, but David Chalmers himself has called it the mapping problem, and Francisco Varela talks about a similar set of ideas with his neurophenomenology, is to not try to solve the hard problem tout court, not try to explain how it is possible that consciousness comes to be part of the universe, but rather to individuate different kinds of phenomenological properties and draw some explanatory mapping
between neural, biological, physical mechanisms and these phenomenological properties. Now once
we've done that, and we can begin to explain not why there is experience at all, but why certain experiences are the way they are and not other ways, and we can predict when certain experiences will have particular phenomenal characters and so on, then we'll have done a lot more than we can currently do. And we may have to
make use of novel kinds of conceptual frameworks, maybe frameworks like information processing
will run their course, and we'll require other, more sophisticated kinds of descriptions of dynamics and probability in order
to build these explanatory bridges. So I think we can get a lot closer. And the negative aspect is,
why should we ask more of a theory of consciousness than we should ask of other
kinds of scientific theories? And I know people
have talked about this on your podcast before as well, but we do seem to want more of an explanation
of consciousness than we would do of an explanation in biology or physics, that it somehow should feel
intuitively right to us. And I wonder why this is such a big deal when it comes to consciousness. Just because we're
trying to explain something fundamental about ourselves doesn't necessarily mean that we should
apply different kinds of standards to an explanation that we would apply in other fields of science.
It just may not be that we get this feeling that something is intuitively correct when
it is in fact a very good scientific account of the origin of phenomenal properties.
Certainly scientific explanations are not instantiations.
There's no sense in which a good theory of consciousness should be expected to suddenly
realize the phenomenal properties that it's explaining.
But also, I worry that we ask too much of theories of consciousness this way.
Yeah, well, we'll move forward into the details, and I'll just flag moments where I feel like
the hard problem should be causing problems for us. I do think it's not a matter of asking too
much of a theory of consciousness here. I think there are very few areas in science where the accepted explanation is totally
a brute fact which just has to be accepted because it is the only explanation that works,
but it's not something that actually illuminates the transition from atoms to some higher level
phenomenon, say. Again, for everything we
could say about life, even the very strange details of molecular biology, just how information in the
genome gets out and creates the rest of a human body, it still runs through when you look at the
details. It's surprising. It's in parts difficult to
visualize, but the more we visualize it, the more we describe it, the closer we get to something that
is highly intuitive, even something like, you know, the flow of water. The fact that water molecules
in the liquid state are loosely bound and move past one another, well, that seems exactly like
what should be happening at the micro level, so as to explain the macro level property of
the wetness of water and the fact that it has characteristics, higher level characteristics,
that you can't attribute to atoms, but you can attribute to collections of atoms, like turbulence,
say. Whereas if consciousness just happens to require
some minimum number of information processing units knit together in a certain configuration,
firing at a certain hertz, and you change any of those parameters and the lights go out,
that for me still seems like a mere brute fact that doesn't explain consciousness.
It's just a correlation that we decide is the crucial one.
And I've never heard a description of consciousness of the sort that we will get to, like integrated
information, you know, Tononi's phrase, that unpacks it any more than that.
And you can react to that, but then I think we should just get into
the details and see how it all sounds. Sure. I'll just react very briefly, which is that
I think I'd also be terribly disappointed if you look at the answer in the book of nature,
and it turned out to be, yes, you need 612,000 neurons wired up in a small world network,
and that's it. That does seem, of course, ridiculous
and arbitrary and unsatisfying. The hope is that as we progress beyond, if you like, just brute
correlates of conscious states towards accounts that provide more satisfying bridges between
mechanism and phenomenology that explain, for instance,
why a visual experience has the phenomenal character that it has and not some other
kind of phenomenal character like an emotion, that it won't seem so arbitrary. And that as we
follow this route, which is an empirically productive route, and I think that's important,
that if we can actually do science with this route, we can try to think about how to operationalize phenomenology in
various different ways. Very difficult to think how to do science and just solve the hard problem
head on. At the end of that, I completely agree. There might be still this residue of mystery,
this kernel of something fundamental left unexplained. But I don't think we can take
that as a given because we can't, well, I certainly can't predict what I would feel as intuitively
satisfying when I don't know what the explanations that bridge mechanism and phenomenology are going
to look like in 10 or 20 years time. We've already moved further from just saying it's this area or
that area to synchrony, which is still kind of unsatisfying, to now I think some emerging
frameworks like predictive processing and integrated information, which aren't completely
satisfying either. But they hint at a trajectory where we're beginning to draw closer connections
between mechanism and
phenomenology. Okay, well, let's dive into those hints. But before we do, I'm just wondering,
phylogenetically, in terms of comparing ourselves to so-called lower animals, where do you think
consciousness emerges? Do you think there's something that it's like to be a fly, say?
That's a really hard problem. I mean, I have to be agnostic about this. And again, it's just striking how people's views on these things seem to have changed over recent decades. It seems completely unarguable to me that other mammals, all other mammals, have conscious experiences of one sort
or another. I mean, we share so much in the way of the relevant neuroanatomy and neurophysiology, and exhibit so many of the same behaviors, that it would be remarkable to claim otherwise.
It actually wasn't that long ago that you could still hear people say that consciousness was so dependent on language that they wondered whether human infants were conscious,
to say nothing of dogs and anything else that's not human. Yeah, that's absolutely right. I mean,
that's a terrific point. And this idea that consciousness was intimately and constitutively
bound up with language or with
higher order executive processing of one sort or another, I think just exemplifies this really
pernicious anthropocentrism that we tend to bring to bear sometimes without realizing it.
We think we're super intelligent, we think we're conscious, we're smart,
and we need to judge everything by that benchmark. And what's the most advanced thing about humans?
Well, you know, if you're gifted with language, you're going to say language.
And now already, with a bit of hindsight, it seems, to me anyway, rather remarkable
that people should make these, I can only think of them as just quite naive errors to
associate consciousness with language.
Not to say that consciousness and language don't have any intimate relation.
I think they do.
Language shapes a lot of our conscious experiences.
But certainly it's a very, very poor criterion with which to attribute subjective states
to other creatures.
So mammals for sure.
I mean, mammals for sure, right?
But that's easy because they're pretty similar to humans and primates being mammals.
But then it gets more complicated.
And then you think about birds, which diverged a reasonable amount of time ago but still have brain structures for which one can establish analogies, in some cases homologies, with mammalian brain structures. And in some
species, scrub jays and corvids generally, pretty sophisticated behavior too. It seems
very possible to me that birds have conscious experiences. And I'm aware that, underlying all this, the only basis on which to make these judgments, in light of what we know about the neural mechanisms underlying consciousness and the functional and behavioral properties of consciousness in mammals, has to be this kind of slow extrapolation, because
we lack the mechanistic answer and we can't look for it in another species. But then you get beyond
birds and you get out to, I then like to go way out on a phylogenetic branch
to the octopus, which I think is an extraordinary example of convergent evolution. I mean,
they're very smart. They have a lot of neurons, but they diverged from the human line, I think,
as long ago as sponges, or something like that. I mean, really very little in common. But they have incredible differences too: three hearts, eight legs, or arms, I'm never sure whether it's a leg or an arm, that behave semi-autonomously. And one is left, when you spend time with these creatures, I've been lucky enough to spend a week with them in a lab in Naples, you certainly get the impression of another conscious presence there, but of a very
different one. And this is also instructive because it brings us a little bit out of
this assumption that we can fall into that there is one way of being conscious and that's our way.
There is a huge space of possible minds out there, and the octopus is a very definite example of a very different mind, and very likely a conscious mind too. Now, when we get down to, well, not really down, I don't like this idea of organisms being arranged on a single scale
like this. But certainly creatures like fish, insects are simpler in all sorts of ways than
mammals. And here it's really very difficult to know where to draw the line, if indeed there is
a line to be drawn, if it's not just a gradual shading out of consciousness with gray areas in between and no categorical divide, which I think
is equally possible. Many fish display behaviors which seem suggestive of consciousness. They will
self-administer analgesia when they're given painful stimulation. They will avoid places
that have been associated with painful stimulation, and so on. You hear things like the precautionary principle come into play: given that suffering, if it exists, conscious suffering, is a very aversive state, and it's ethically wrong to impose that state on other creatures, we should tend to assume that creatures are conscious unless we have good evidence that they're not.
So we should put the bar a little bit lower in most cases.
Let's talk about some of the aspects of consciousness that you have identified
as being distinct.
There are at least three.
You've spoken about the level of consciousness, the contents of consciousness, and the experience
of having a conscious self that many people, as you said, conflate with consciousness as a
mental property. There's obviously a relationship between these things, but they're not the same.
Let's start with this notion of the level of consciousness, which really isn't the same
thing as wakefulness. Can you break those
apart for me? How is being conscious non-synonymous with being awake in the human sense?
Sure. Let me just first amplify what you said, that in making these distinctions, I'm certainly
not claiming, pretending, that these dimensions of level, content, and self pick out completely
independent aspects of conscious experiences. There are lots of interdependencies. I just
think they're heuristically useful ways to address the issue. We can do different kinds of experiments
and try to isolate distinct phenomenal properties in their mechanistic basis by making these
distinctions. Now, when it comes to conscious level, I think that the simplest way to think of this is more or less as a scale.
In this case, it's from when the lights are completely out, when you're dead, brain death,
under general anesthesia, or perhaps in very, very deep states of sleep, all the way up through vague levels of awareness, which correlate with wakefulness, so when you're very drowsy, to vivid, awake, alert, full conscious experience, the kind I'm certainly having now. I feel very awake and alert, and my conscious level is kind of up there. Now, in most cases,
the level of consciousness articulated this way
will go along with wakefulness or physiological arousal.
When you fall asleep,
you lose consciousness, at least in early stages.
But there are certain cases that exist
which show that they're not completely the same thing on both
sides. So you can be conscious when you're asleep. Of course, we know this. This is called dreaming.
So you're physiologically asleep, but you're having a vivid inner life there. And on the
other side, and this is where the rubber of consciousness science hits the road of neurology, you have states where, behaviorally, you have what looks like arousal. This used to be called the vegetative state; it's been renamed several times, now the wakeful unawareness state, where the idea is that the
body is still going through physiological cycles of arousal from sleep
to wake, but there is no consciousness happening at all. The lights are not on. So these two things
can be separated. And it's a very productive and very important line of work to try to isolate what's the mechanistic basis of conscious level independently from the mechanistic basis of physiological arousal.
Yeah, and a few other distinctions to make here.
Also, general anesthesia is quite distinct from deep sleep, just as a matter of neurophysiology.
Certainly, general anesthesia is nothing like
sleep, certainly at deep levels of general anesthesia. So whenever you go for an operation
and the anesthesiologist is trying to make you feel more comfortable by just saying something
like, yeah, we'll just put you to sleep for a while and then you'll wake up and it will be done,
they are lying to you for good reason. It's kind of nice just to
feel that you're going to sleep for a bit. But the state of general anesthesia is very different.
And for very good reason. If you were just put into a state of sleep, you would wake up as soon
as the operation started. And that wouldn't be very pleasant. It's surprising how far down you
can take people in general anesthesia, almost to a level of isoelectric brain activity where there is pretty much nothing going
on at all and still bring them back. And many people now have had the non-experience of general
anesthesia. And in some weird way, I now look forward to it the next time I get to have this,
because it's a very sort of, it's almost a reassuring experience
because there is absolutely nothing. It's complete oblivion. It's not, you know, when you go to sleep
as well, you can sleep for a while and you'll wake up and you might be confused about how much time
has passed, especially if you've just flown across some time zones or stayed up too late,
something like that. You know, might not be sure what time it is, but you'll still have this sense of some time having passed. Except we have this
problem, or some people have this problem of anesthesia awareness, which is every person's
worst nightmare, if they care to think about it, where people have the experience of the surgery
because for whatever reason the anesthesia hasn't taken them deep enough
and yet they're immobilized and can't signal
that they're not deep enough.
I know, absolutely.
But I mean, that's a failure of anesthesia.
It's not a characteristic of the anesthetic state.
Do you know who had that experience?
You've mentioned him on the podcast.
I, really?
Francisco Varela.
Oh, really?
I didn't know that.
I did not know that.
Yeah, Francisco was getting a liver transplant and experienced some part of it.
Well, that's pretty horrific.
Could not have been fun.
Yeah.
I mean, of course, because the thing there is under most serious operations, you're also
administered with a muscle paralytic so that you don't jerk around when you're being operated
on.
And that's why it's particularly a nightmare scenario. But you know, if anesthesia is working properly,
certainly the times I've had general anesthesia, you start counting to 10 or start counting
backwards from 10, you get to about eight, and then instantly you're back somewhere else,
very confused, very disoriented.
But there is no sense of time having passed. It's just complete oblivion. And I found that
really reassuring because we can think conceptually about not being bothered about all the times we
were not conscious before we were born, and therefore we shouldn't worry too much about
all the times we're not going to be conscious after we die. But to experience these moments
of complete oblivion during a lifetime, or rather the edges of them, I think is a very
enlightening kind of experience to have. Although there's a place here where the hard problem
does emerge because it's very difficult,
perhaps impossible, to distinguish between a failure of memory and oblivion.
Has consciousness really been interrupted?
Take anesthesia and deep sleep as separate, but similar in the sense that most people think there was a hiatus in consciousness.
I'm prepared to believe that that's not true of deep sleep, but we just don't
remember what it's like to be deeply asleep. I'm someone who often doesn't remember his dreams,
and I'm prepared to believe that I dream every night. And we know, even in the case with general
anesthesia, they give amnesiac drugs so that you won't remember whatever they don't want you to remember.
And I recently had the experience of not going under a full anesthesia, but having a,
you know, what's called a twilight sleep for a procedure. And there was a whole period afterwards
where I was coming to about a half hour that I don't remember. And it was clear to my wife that I wasn't going to
remember it, but she and I were having a conversation. I was talking to her about
something. I was saying how perfectly recovered I was and how miraculous it was to be back.
And she said, yeah, but you're not going to remember any of this. You're not going to
remember this conversation. And I said, okay, well, let's test it. Say something now and we'll
see if I remember it. And she said,
this is the test, dummy. You're not going to remember this part of the conversation.
And I have no memory of that part of the conversation.
It's a good test. You're right, of course, that even in stages of deep sleep,
people underestimate the presence of conscious experiences. And this has been demonstrated
by experiments called serial awakening experiments, where you'll just wake somebody up various times
during sleep cycles and ask them straight away, you know, what was in your mind? And quite often,
people will report often very simple sorts of experiences, static images and so on, in stages of non-REM, non-dreaming sleep.
And I concede that there may be a contribution of amnesia to the post hoc impression of what
general anesthesia was like. But at the same time, there's all the difference in the world
between the twilight zone and full on general anesthesia, where it's not just that I don't remember
anything, it's the real sense of a hiatus of consciousness, of a complete interruption
and a complete instantaneous resumption of that experience.
Yeah, yeah.
No, I've had a general anesthetic as well, and there is something quite uncanny about
disappearing and being brought back without a sense of any intervening time.
Because you're not aware of the time signature of having been in deep sleep, but there clearly is
one. And the fact that many people can go to sleep and kind of set an intention to wake up at a
certain time, and they wake up at that time almost to the minute, it's clear there's some timekeeping function happening in our brains all the while. But there's something about a general anesthetic
which just seems like, okay, the hard drive just got rebooted and who knows how long the computer
was off for. Exactly. Yeah. Okay. So let's talk about these other features. We've just dealt with
the level of consciousness. Talk to me about the contents of consciousness. How do you think about that? When we are conscious, then we're conscious of
something. And I think this is what the large majority of consciousness research empirically
focuses on. You take somebody who is conscious at a particular time and you try to,
you can ask a few different questions. You can ask
what aspects of their perception are unconscious and not reflected in any phenomenal properties
and what aspects of their perception are reflected in their phenomenal properties. What's the difference
between conscious and unconscious processing, if you like. What's the difference between different
modalities of conscious perception? At any one time, certainly outside of the lab, our conscious scene will have a very multimodal character. There'll be sound, sight, experiences of touch, maybe, if you're sitting down or holding something, and then a whole range of more self-related experiences too, of body ownership, of all the signals coming from deep inside the body, which are more relevant to self. But the basic idea of conscious content is to study
what the mechanisms are that give rise to the particular content of a conscious scene at any one time. And here, the reason it's useful
to think of this as separate from conscious level is partly that we can appeal to different kinds
of theories, different kinds of theoretical and empirical frameworks. So the way I like to think about conscious perception is in terms of
prediction, in terms of what's often been called the Bayesian brain or unconscious inference from
Helmholtz and so on. And the idea that perception in general works more from the top down or from the outside in than from the, sorry, I got that wrong. Perception in general works more from the top down, or the inside out, rather than from the bottom up
or the outside in. And this has a long history in philosophy as well, back to Kant and long before
that too. I mean, the straw man, the easily defeated idea about perception is that sensory signals
impinge upon our receptors, and they percolate deeper and deeper into the brain. And at each
stage of processing, more complex operations are brought to bear. And at some point,
ignition happens or something happens, and you're conscious of those sensory signals
at that point. And I think this is kind of the wrong way to think about it, that
if you look at the problem of perception that brains face, and let's simplify it a lot now and
just assume the problem is something like the following, that the brain is locked inside a
bony skull. And let's assume, for the sake of this argument, that perception is the problem of figuring out
what's out there in the world that's giving rise to sensory signals that impinge on our sensory
surfaces, eyes and ears. Now, these sensory signals are going to be noisy and ambiguous.
They're not going to have a one-to-one mapping with things out there in the world, whatever they may be. So perception has to involve this process of inference, of best guessing,
in which the brain combines prior expectations or beliefs about the way the world is
with the sensory data to come up with its best guess about the causes of that sensory data.
And in this view, what we perceive is constituted by these multi-level predictions that try to
explain away or account for the sensory signals. We perceive what the brain infers to have caused
those signals, not the sensory signals themselves. In this view, there's no such thing as raw sensory experience of any kind. All perceptual experience is an inference
of one sort or another. And given that view, one can then start to ask all sorts of interesting
experimental questions like, well, what kinds of predictions? How do predictions or expectations
affect what we consciously perceive, consciously report? What kinds of predictions may still go on under the hood and not instantiate any
phenomenal properties? But it gives us this set of tools that we can use to build bridges between
phenomenology and mechanism again. In this case, the bridges are made up of the computational
mechanisms of Bayesian inference as they might be implemented in neuronal circuitry. And so instead of asking questions like, is V1, is early visual cortex, associated with visual experience, we might ask questions like, are Bayesian priors or posteriors associated with conscious phenomenology, or are prediction errors associated with conscious phenomenology? We can start to ask slightly, I think, more sophisticated bridging questions like that.
Well, yeah, in your TED Talk, you talk about consciousness as a controlled hallucination.
And I think Chris Frith has called it a fantasy that coincides with reality.
Can you say a little more about that and how that relates to the role of top-down prediction in perception?
Yeah, I think they're both very nice phrases. And I think the phrase controlled hallucination
actually has been very difficult to pin down where it came from. I heard it from Chris Frith
as well originally, and I've asked him and others where originally it came from. And we can trace it to a seminar given by Ramesh Jain
at UCSD sometime in the 90s, but it was verbal, and there the trail goes cold. But anyway, the idea
is sort of the following, that we can bring to bear a naive realism about perception where we
assume that what we visually perceive is the way things actually are in the real world.
That there is a table in front of me that has a particular color that has a piece of paper on it
and so on. And that's veridical perception. As distinct from hallucination, where we have a
perceptual experience that has no corresponding reference in the real world. And the idea of controlled hallucination or fantasy that coincides with
reality is simply to say that normal perception is always a balance of sensory signals coming
from the world and the interpretations, predictions that we bring to bear about the causes of those sensations. So we are always seeing
what we expect to see in this Bayesian sense. We never just see the sensory data. Now,
normally, we can see this all the time. It's built into our visual systems that
light is expected to come from above because our visual systems have evolved in a situation where the sun
is never below us. So that causes us to perceive shadows in a particular way. Or rather, we'll perceive
curved surfaces as being curved one way or another under the assumption that light comes from above.
We're not aware of having that constraint built deep into our visual system, but it's there.
And the idea is that every perception that we have is constituted, partly constituted, by these predictions, these interpretive
powers that the brain brings to bear onto perceptual content. And that what we call
hallucinations is just a tipping of the balance slightly more towards the brain's own internal predictions.
Another good everyday example of this is if you go out on a day where there's lots of white fluffy clouds and you can see faces in clouds.
If you look for them, this is pareidolia, you can see patterns in noise.
Now, that's a kind of hallucination there.
You're seeing something that other people
might not see.
It's not accompanied by delusion, you know it's a hallucination.
But it just shows how our perceptual content is always framed by our interpretation.
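One way to make "tipping the balance" concrete, again an invented sketch rather than anything specified in the episode, is the precision-weighted averaging used in simple Gaussian formulations of predictive processing: the percept sits between the internal prediction and the sensory data, weighted by how much the brain trusts each. Weight the prior heavily enough, and the percept drifts toward the internal model, a toy analogue of hallucination. All numbers here are hypothetical.

```python
# Toy Gaussian version of "tipping the balance" (all values invented).
# The percept is a precision-weighted average of prediction and data.

def percept(prior_mean, prior_precision, data, data_precision):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood."""
    total = prior_precision + data_precision
    return (prior_precision * prior_mean + data_precision * data) / total

prediction = 0.0   # what the internal model expects (arbitrary units)
data = 10.0        # what the senses actually report

# Balanced weighting: the percept lands midway between model and world.
print(percept(prediction, 1.0, data, 1.0))  # 5.0

# Over-weighted priors: the percept hugs the internal model,
# the balance tipped toward the brain's own predictions.
print(percept(prediction, 9.0, data, 1.0))  # 1.0
```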
Another good everyday example is dreams, because dreams we know are a situation where
our brain is doing something very similar
to what it's doing in the waking state, except the frontal lobes have come offline enough so
that there's just not the same kind of reality testing going on. And our perception in this case
is not being constrained by outer stimuli. It's just, it's being generated from within. But would this be
an analogous situation where our top-down prediction mechanisms are roving unconstrained by
sensory data? I think, yeah, dreams certainly show that you don't need sensory data to have
vivid conscious perception because you don't have any sensory input apart from a bit
of auditory input when you're dreaming. I think the phenomenology of dreams is interestingly
different. Dream content is very much less constrained. This naive realism
just goes nuts in dreams, doesn't it? I mean, people can change identity, locations can change,
weird things happen all the time. You don't experience them as being weird.
That's the weirdest part of dreams. It's not that they're so weird; it's that the weirdness is not detected. We don't care that they're so weird. Yeah, which is, I think,
a great example of how we often overestimate the insight we have about what our conscious
experiences are like. We tend to assume that
we know exactly what's happening in all our conscious experiences all the time, whether
it's weird or not. Dreams show that that's not always the case. But I think the idea of controlled
hallucination is as present in normal, non-dreaming perception as it is in dreaming.
And it really is this idea that all our perception
is constituted by our brain's predictions
of the causes of sensory input.
And most of the time walking around the world,
we will agree about this perceptual content.
If I see a table and claim it's this color,
you'll probably agree with me.
And we don't have to go into the philosophical
inverted spectra thing here.
It's just a case of we tend to report the same sorts of things when
faced with the same sorts of sensory inputs. So we don't think there's anything particularly
constructed about the way we perceive things because we all agree on it. But then when
something tips the balance, maybe it's under certain pharmacological stimulus,
maybe it's in dreams, maybe it's in certain states of psychosis and mental illness,
then people's predictions about the causes of sensory information will differ from one another.
And if you're an outlier, then people will say, now you're hallucinating because you're reporting something that isn't there.
And my friend, the musician
Baba Brinkman, put it beautifully. He said, you know, what we call reality is just when we all
agree about our hallucinations, which I think is a really nice way to put that.
This leaves open the question, what is happening when we experience something fundamentally new
or have an experience where our expectations are violated?
So we're using terms like predictions or expectations or models of the world, but I
think there's a scope for some confusion here. Just imagine, for instance, that some malicious
zookeeper put a fully grown tiger in your kitchen while you were sleeping tonight.
I presume that when you come down for your morning coffee, you will see this tiger in the kitchen,
even though you have no reasonable expectation to be met by a tiger in the morning. I think it's
safe to assume you'll see it even before you've had your cup of coffee. So given this, what do we mean by expectations at the level of the brain?
That's a very, very important point. This whole language of the Bayesian brain and predictive processing bandies around terms like prediction, expectation, prediction error, surprise, and all these things. It's very, very important to recognize that these terms don't only mean,
or don't really mean at all, psychological surprise or explicit beliefs and expectations
that I might hold. So certainly, if I go down in the morning, I am not expecting to see a tiger.
However, my visual system, when it encounters a particular kind of input, is still expecting things: if there are sensory inputs that pick out things like edges, it will best interpret those as edges. And if they pick out stripes, it will interpret those as stripes.
It's not unexpected to see something with an edge, and it may not be unexpected to see something with a stripe. It may not even be unexpected, from my brain's point of view, to see something that looks a bit like a face.
And those become low level best guesses about the causes of sensory input, which then give
rise to higher level predictions about those causes.
And ultimately, the best guess is that there's some kind of animal there,
and indeed, that it's a tiger. So I don't think there's a conflict here. We can see new things,
because new things are built up from simpler elements for which we will have adequate predictions, built up over evolution and over development and over prior experience.
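To make the distinction between psychological and perceptual expectation concrete, here is a minimal Bayesian sketch in Python, with invented priors and likelihoods purely for illustration. It shows the point just made: a cause with a tiny prior (a tiger in the kitchen) can still end up as the brain's best guess once low-level evidence the system does expect (edges, stripes, a face-like region) comes in.

```python
# Prior over high-level causes of what's in my kitchen (invented numbers).
prior = {"kitchen table": 0.55, "house cat": 0.40, "tiger": 0.05}

# Likelihood of the low-level evidence (edges, stripes, a large
# face-like region) under each candidate cause (also invented).
likelihood = {"kitchen table": 0.001, "house cat": 0.05, "tiger": 0.90}

# Bayes' rule: posterior is proportional to likelihood times prior.
unnormalized = {h: likelihood[h] * prior[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis:>13}: {p:.3f}")
# tiger: ~0.69 despite its tiny prior, because the familiar low-level
# predictions (edges, stripes, face) are so well explained by it.
```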
And one thing you point out, at least in one of your papers, maybe you did this in the TED talk,
that different contents of consciousness have different characters, so that visual perception
is object-based in a way that interior perception is not. The sensing of an experience like nausea, say, or even of an emotion like sadness, does not have all of the features of perceiving an object in visual space.
When you're looking at an object in visual space, there's this sense of location, there's the sense that anything that has a front will also have a back, that if you walked around it, you would be given different views of it,
none of which may ever repeat exactly. I'm looking at my computer now. I've probably never seen
my computer from precisely this angle. And if I walked around it, I would see thousands of
different slices of this thing in the movie of my life. And yet there's this unitary sense of
an object in space that has a front and back
and sides. And yet, of course, none of this applies when we're thinking about our internal
experience. Do you have any more you want to say about that? Because that's a very interesting
distinction, which again is one of these places where the terminology we use for being aware of
things or being conscious of things or perceiving things doesn't really get at the phenomenology very well.
Now, thank you for raising that.
I think this is a great point and something I've thought quite a lot about.
There's a couple of elements here.
So I'll start by talking about this phenomenology of objecthood that you beautifully described for vision there, and then get onto this case of interoception
and perception of the internal state of the body.
So indeed, for most of us, most of the time,
visual experience is characterized by there being a world of objects around us.
I see coffee cups on the table, computers in front of me, and so on.
Actually, that's not always the case.
If I'm, for instance, trying to catch a cricket ball or a softball or something someone's thrown to me, what my perceptual system is doing there is not so much trying to figure out what's out there in the world.
It's all geared towards the goal of catching a cricket ball.
And there's a whole branch of psychology.
It has roots in Gibsonian ecological psychology and William Powers's perceptual control theory, and it sort of inverts things. There's this whole tradition of thinking about perception and its interaction with behavior. I mean, we like to think that we perceive the world and then we behave, so we have perception controlling behavior. But we can also think of it the other way around and think of behavior controlling perception, so that when we catch a cricket ball, what we're really doing is maintaining a perceptual variable at a constant value. In this case, it would be the acceleration of the angle of the ball to the horizon. If we keep that constant, we will catch the cricket ball. And if you reflect on the phenomenology of these things, if I'm engaged
in an act like that, I'm not so much perceiving the world as distinct objects arranged in particular
ways. I'm perceiving how well my catching the cricket ball is happening. Am I likely to catch
it? Is it going well or not? That's a different kind of description
of visual phenomenology. But hold that thought, because it will become important a bit later when we talk about why our experience of the inside of our bodies, of being a body, has the character that it has. I think it's more like catching a cricket ball, but we'll get to that in a second.
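As an aside, the catching strategy described here is often called the gaze heuristic, or optical acceleration cancellation, after Chapman's analysis; the standard formulation keeps the tangent of the ball's elevation angle rising at a constant rate. Here is a small self-contained sketch, with invented launch parameters, showing that a fielder who arrives at the landing point exactly on time holds that optical acceleration at zero, while a stationary fielder does not.

```python
G = 9.81                      # gravity (m/s^2)
U, V = 15.0, 15.0             # ball's initial vertical / horizontal speed
T = 2 * U / G                 # total flight time of the parabola (s)
LANDING_X = V * T             # where the ball lands (m)
START_X = 40.0                # fielder's starting position (m)

def tan_elevation(t, fielder_x):
    """Tangent of the ball's elevation angle as seen by the fielder."""
    ball_x = V * t
    ball_y = U * t - 0.5 * G * t * t
    return ball_y / (fielder_x - ball_x)

def max_optical_acceleration(fielder_path, dt=0.01):
    """Largest |second derivative| of the perceptual variable on a run."""
    ts = [i * dt for i in range(1, int(0.8 * T / dt))]
    tans = [tan_elevation(t, fielder_path(t)) for t in ts]
    return max(abs(tans[i + 1] - 2 * tans[i] + tans[i - 1]) / dt ** 2
               for i in range(1, len(tans) - 1))

def intercepting(t):
    # Constant-speed run that reaches the landing point at time T.
    return START_X + (LANDING_X - START_X) * t / T

def stationary(t):
    # A fielder who never moves.
    return START_X

print(f"intercepting run: max optical acceleration "
      f"{max_optical_acceleration(intercepting):.4f} per s^2")
print(f"standing still:   max optical acceleration "
      f"{max_optical_acceleration(stationary):.4f} per s^2")
# The intercepting run holds the optical acceleration at ~0: keeping
# that perceptual variable constant just is catching the ball.
```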
But if we think now just back to when we're not catching things, we're just looking around and we see this visual scene populated by objects.
And you're absolutely right that one way to think of that is that when I perceive an object to have a volumetric extension, to be a three-dimensional thing in the world occupying a particular location, what that means is that I'm perceiving how that object would behave if I were to interact with it in different ways. This has another
tradition, well, it's back to Gibson again and ecological psychology, but also the sensorimotor theory of Alva Noë and Kevin O'Regan: that what I perceive is how I can interact with an object. I perceive
an object as having a back, not because I can see the back, but because my brain is encoding somehow
how different actions would reveal that surface, the back of that object. And that's a distinctive
kind of phenomenology. In the language of predictive processing of the Bayesian brain,
one thing I've been trying to do is cash out that account of the phenomenology of objecthood
in terms of the kinds of predictions that might underlie it. And these turn out to be
conditional or counterfactual predictions about the sensory consequences of action. So in order to perceive
something as having objecthood, the thought is that my brain is encoding how sensory data would
change if I were to move around it, if I were to pick it up, and so on and so forth. And if we
think about the mechanics that might underlie that, they fall out quite naturally from this Bayesian
brain perspective, because to engage in predictive perception, to bring perceptual interpretations
to bear on sensory data, our brain needs to encode something like a generative model.
It needs to be able to have a model of the mapping from sensory data to,
or rather the mapping from something in the world to sensory data
and be able to invert that mapping.
That's how you do Bayesian inference in the brain.
And if you've got a generative model that can invert that mapping,
then that's capable of predicting what sensory signals would happen
conditional on different kinds of actions.
This brings in an extension of predictive processing that's technically called active
inference, where we start to think about reducing prediction errors, not only by updating one's
predictions, but also by making actions to sort of make our predictions come true.
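A toy numerical sketch may help here, under heavy simplifying assumptions that are mine rather than the speakers': one hidden cause, one sensory channel, a linear generative mapping, and arbitrary rates. It shows the two routes to minimizing prediction error, updating the estimate (perception) or changing the world (action).

```python
def generative_model(cause):
    """The brain's assumed mapping from a hypothesized cause to the
    sensory signal it should produce (linear, for this toy)."""
    return 2.0 * cause

world_cause = 5.0       # true hidden state of the world
estimate = 0.0          # the brain's current best guess about it
LEARN_RATE = 0.1        # how fast perception revises the guess
ACTION_RATE = 0.0       # set > 0 to reduce error by acting instead

for step in range(50):
    sensed = 2.0 * world_cause           # actual sensory input
    predicted = generative_model(estimate)
    error = sensed - predicted           # prediction error

    estimate += LEARN_RATE * error       # perception: update the model
    world_cause -= ACTION_RATE * error   # action: change the world

print(f"estimate = {estimate:.2f}, world = {world_cause:.2f}")
# With ACTION_RATE = 0, the estimate converges on the hidden cause,
# inverting the generative mapping by error minimization. Setting
# LEARN_RATE = 0 and ACTION_RATE > 0 instead drags the world toward
# the prediction, which is the active-inference route.
```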
But in any case, you can make some interesting empirical predictions
about how our experience of something as an object
depends on what the brain learns about ways of interacting with these objects.
And we started to test some of these ideas in the lab
because you can now use clever things like virtual reality
and augmented reality to generate objects that will be initially unfamiliar, but that behave in weird ways when you try to interact with them.
So you can either support or confound these kinds of conditional expectations and then try to understand what the phenomenological consequences of doing so are.
And you can also account for situations where this phenomenology of objecthood seems to be lacking. So for instance,
in synesthesia, which is a very interesting phenomenon in consciousness research, and
yeah, I'm sure you know this, Sam, but a very canonical example of synesthesia is grapheme-color synesthesia. People may look at a black letter or number, a grapheme,
and they will experience a color along with that experience. They will have a color experience,
a concurrent color experience. This is very, very well established. What's often not focused on is that pretty much across the board in grapheme-color synesthesia, synesthetes don't make any confusion that
the letter is actually red or actually green. They still experience the letter as black. They're
just having an additional experience of color along with it. They don't confuse it as a property.
So this is why, whenever you see an illustration of synesthesia with the letters colored in, it's a very, very poor illustration. I'm guilty of using those kinds of poor illustrations in the past. But this color experience does not have the phenomenology of objecthood; it lacks it. It doesn't appear to be part of an object in the outside world. Why not? Well,
it doesn't exhibit the same kinds of sensory motor contingencies that an object that has a
particular color does. So if I'm synesthetic and I'm looking at the letter F and I change
lighting conditions somewhat or move around it, then an actually red F will change its luminance
and reflectance properties in subtle but significant ways. But for my synesthetic experience,
it's still just an F. So my experience of red doesn't change. So I think this is just a
promising example of how concepts and mechanisms from within predictive perception
can start to unravel some pervasive and modality-specific phenomenological properties of consciousness.
I think it's worth emphasizing the connection between perception and action,
because it's one thing to talk about it in the context of
catching a cricket ball, but when you talk about the evolutionary logic of having developed
perceptual capacities in the first place, the link to action becomes quite explicit. We have not
evolved to perceive the world as it is for some abstract epistemological reason. We've evolved to perceive
what's biologically useful. And what's biologically useful is always connected, at least, you know,
when you're talking about the outside world, to actions. If you can't move, if you can't act in
any way, there would have been very little reason to evolve a capacity for sight, for instance.
Absolutely. I mean, there's that beautiful story, I think, of, is it the sea slug or the sea snail or something of that sort? Some very simple marine creature that swims about during its
juvenile phase looking for a place to settle. And once it's settled and it just starts filter feeding, it digests its own brain because it no longer has any need for perceptual competence now that it's not going to move anymore.
And this is often used as a slightly unkind analogy for getting tenure in academia.
But you're absolutely right that perception is not about figuring out really what's there.
We perceive the world as it's useful for us to do so.
And I think this is particularly important
when we think about perception of the internal state of the body,
which you mentioned earlier, this whole domain of interoception.
Because if you think, what are brains for fundamentally, right?
They're not for perceiving the world as it is.
They certainly didn't evolve for doing philosophy or complex language.
They evolved to guide action.
But even more fundamentally than that,
brains evolved to keep themselves and bodies alive.
They evolved to engage in homeostatic regulation of the body so that it
remains within viable physiological bounds. That's fundamentally what brains are for. They're for
helping creatures stay alive. And so the most basic cycle of perception and action doesn't involve the
outside world at all. It doesn't involve the exterior surfaces of the body at all.
It's only about regulating the internal milieu, the internal physiology of the body,
and keeping it within the bounds that are compatible with survival. And I think this gives us a clue here about why experiences of mood and emotion
and of, if you like, the most basic essence of selfhood
have this non-object-like character.
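A minimal control-loop sketch, with an invented setpoint, gain, and drift, can make the contrast vivid before we go on: regulation of an internal variable requires no representation of objects or locations at all, only error correction.

```python
SETPOINT = 37.0     # desired core temperature (degrees C); illustrative
core_temp = 39.0    # current state: too hot
GAIN = 0.3          # strength of the corrective autonomic response
DRIFT = 0.05        # environmental push away from the setpoint, per step

for minute in range(30):
    # Interoceptive signal compared against the "expected" state.
    error = core_temp - SETPOINT
    # Autonomic action (sweating, vasodilation) driven by the error.
    core_temp -= GAIN * error
    # The world keeps nudging the body off its setpoint.
    core_temp += DRIFT

print(f"core temperature after regulation: {core_temp:.2f} C")
# The loop holds the variable near 37 C (with a small offset from the
# constant drift). Nothing here encodes what or where anything is; the
# only job is control, which is the point being made about interoception.
```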
So I think the way to approach this is to first realize that just as we
perceive the outside world on the basis of sensory signals that are met with a top-down
flow of perceptual expectations and predictions, the very same applies to perception of the internal
state of the body. The brain has to know what the internal state of
the body is like. It doesn't have any direct access to it just because it's wrapped within
a single layer of skin. I mean, the brain is the brain. All it gets are noisy and ambiguous
electrical signals. So it still has to interpret and bring to bear predictions and expectations
in order to make sense of the barrage of sensory
signals coming from inside the body. And this is what's collectively called interoception,
perception of the body from within. Just as a side note, it's very important to distinguish
this from introspection, which could hardly be more different. Introspection, you know,
consciously reflecting on the content of our experience. This is not that. This is interoception, perception of the body from within. So the same computational principles apply. We have to bring
to bear, our brain has to bring to bear predictions and expectations. So in this view, we can
immediately think of emotional conscious experiences, emotional feeling states, in this same inferential framework. And I've
written about this for a few years now, that we can think of interoceptive inference. So emotions
become predictions about the causes of interoceptive signals in just the same way
that experiences of the outside world are constituted by predictions of the causes of
sensory signals. And this, I think, gives a nice computational and mechanistic gloss on pretty old theories of emotion that originate
with William James and Carl Lange, that emotion has to do with perception of physiological change
in the body. These ideas have been repeatedly elaborated. So people ask about
the relationship between cognitive interpretation and perception of physiological change.
This predictive processing view just dissolves all those distinctions and says that emotional
experience is the joint content of predictions about the causes of interoceptive signals at all levels of abstraction, low and high.
And the other aspect of this that becomes important is the purpose of perceiving the body
from within is really not at all to do with figuring out what's there. My brain couldn't care less that my internal organs
are objects and they have particular locations within my body. Couldn't care less about that.
It's not important. The only thing that's important about my internal physiology is that it works.
If you imagine the inside of my body as a cricket ball, the brain really doesn't care where the cricket ball is, or that it's a ball. All it cares about is that I'm going to catch the ball.
It only cares about control and regulation
of the internal state of the body.
So predictions, perceptual predictions
for the interior of the body are of a very different kind.
They're instrumental, they're control-oriented,
they're not epistemic, they're not to do with finding out.
And I think that gets at it.
For me, anyway, it's very suggestive of why
our experiences of just being a body
have this very sort of non-object-based,
inchoate phenomenological character
compared to our experience of the outside world.
But it also suggests that everything can be derived from that,
that if we understand the original purpose of predictive perception
was to control and regulate the internal state of the body,
then all the other kinds of perceptual prediction
are built upon that evolutionary imperative so that ultimately the way we perceive
the outside world is predicated on these mechanisms that have their fundamental objective
in the regulation of our internal bodily state. And I think this is really important for me because it gets away from these, I don't know,
pre-theoretical associations of consciousness and perception with cognition, with language,
with all these higher order things, maybe social interaction, and it grounds them much more in the
basic mechanisms of life. So here we have a nice thing that it might not just be that life provides a nice
analogy with consciousness in terms of hard problems and mysteries and so on, but that
there are actually very deep obligate connections between mechanisms of life and the way we perceive
consciously and unconsciously ourselves and the world. Well, so now, if interoception is purposed toward what is sometimes called allostatic control, the regulation of internal states on the basis of, essentially, homeostasis as governed by behavior and action, if that's the purpose, then an emotion is essentially parasitic on these
processes. With an emotion like disgust, say, or fear or anger, much of the same neural machinery is giving rise to these kinds of emotions.
How do you think about emotion by this logic?
What precipitates an emotion is most often,
I mean, it can just be a thought, right, or a memory of something that's happened,
but its referent is usually out in the world, very likely in some social circumstance. What
is the logic of emotion in terms of this picture of prediction and control in our internal system?
It's very interesting. I think it's more of a research program than a question that's easy to answer. As you say, an emotion's referent is usually something in the world, an object or a social situation or a course of action, so that our brain needs to be able to predict the allostatic consequences. And here you're absolutely right: allostasis is sort of the behavioral process of maintaining homeostasis. So our brain needs to be able to predict the allostatic consequences of every action that the body produces, whether it's an internal action of
autonomic regulation, whether it's an external action, a speech act, or just a behavioral act.
What are the consequences of that for our physiological condition and the maintenance
of viability? And I think emotional content is a way in which those consequences become
represented in conscious experience. And they can be quite simple. Probably primordial emotions like disgust have to do with the rejection of something that you try to put inside
your body that shouldn't be there because the consequence is going to be pretty bad.
And that's a very non-social kind of emotion, at least certain forms of disgust that have to do
with eating bad things don't depend so much on social context, though they can be invoked by
social context later on. But then other more sophisticated or more ramified emotions like regret.
Think about regret.
It's not the same as disappointment.
Disappointment is, I was expecting X and I got Y,
like a lot of people might have experienced at Christmas last week.
You can be disappointed, but regret has an essential counterfactual element that, oh, I could have done this
instead, and then I would have got X if I'd done this. And I think certainly my own personal
emotional life involves many experiences of regret, and even anticipatory regret,
where I regret things I haven't even done yet, because I kind of assume they're going to turn out badly.
And the relevance of that is that these sorts of emotional experiences depend on quite high level predictions about counterfactual situations, about social consequences, about what other people might think or believe about me.
So we can have
an ordering of the richness of emotional experience, I think, that is defined by the
kinds of predictions that are brought to bear. But they're all ultimately rooted in their relevance
for physiological viability. So we've been talking about the contents of consciousness and how varied they are and how they're shaped by top-down predictive processes, perhaps even more than bottom-up processes. being of pure consciousness, consciousness without content or without obvious content?
Is this something that you are skeptical exists or do you have a place on the shelf for it?
I think it probably does exist. I don't know. I mean, unlike you, I've not been a very disciplined
meditator. I've tried it a little bit, but it's not something that you probably
gain very much from dabbling in. It seems to me conceivable that there's a phenomenal state which is characterized by the absence of specific contents. I'm happy with the idea that such a state exists. I'm somehow skeptical of people's reports
of these states. And this gets back to what we were talking about earlier, that we tend to somehow
overestimate our ability to have insight into our phenomenal content at any particular time.
But yeah, I mean, the interesting question there, which I haven't thought about a lot, is what would the computational vehicles of such a state be in terms of predictive
perception? Is it the absence of predictions, or is it the prediction that nothing is causing
my sensory input at that particular time? I don't know. I don't know. I have to think about that
some more. Yeah, I mean, it's an experience that I believe I've had. And again, I agree with you that we're
not subjectively incorrigible, which is to say we can be wrong about how things seem to us.
We can certainly be wrong about what's actually going on to explain the character of our
experience. But I would say we can be wrong about the character of our experience in important ways,
which is to say that if we become more sensitive to what an experience is like, we can notice things
that we weren't first noticing about it, and it's not always a matter of actually changing the
experience. Obviously, there's conceptual questions here about whether or not being able to discriminate
more is actually finding qualia that were there all the time
that you weren't noticing, or you're actually just changing the experience. When you learn how to
taste wine, are you having a fundamentally different experience, or are you actually
noticing things that you might have noticed before, or are both processes operating simultaneously?
I think it's probably both. Yeah, I mean, I think this whole predictive perception view would come down pretty firmly on the side that, at least to some extent, your experience is actually changing, because you're developing a different set of predictions. You know, your predictions become better able to distinguish initially similar sets of sensory signals. So I think, yeah, it's not just that you're noticing different things; your experiences are changing as well. I mean, to take the experience
of pure consciousness that many meditators believe they've had, people have had it on
psychedelics as well. Perhaps we'll touch the topic of psychedelics because I know you've done
some research there. But the question is, with what I'm calling pure consciousness, was there something there, some content of consciousness, that I could have noticed but wasn't noticing?
But the importance of the experience doesn't so much hinge for me on whether or not consciousness
is really pure there or really without any contents.
It's more that it's clearly without any of the usual gross contents. It's quite possible to have an experience where you're no longer obviously feeling your body. There's no sensation that you are noticing.
There's no sense of proprioception. There's no sense of being located in space. In fact,
the experience you're having is a consciousness denuded of those usual reference points. And that's what's so
interesting about it. That's what's so expansive about it. That's why it suddenly seems so unusual
to be you in that moment, because all of the normal experiences have dropped away.
So seeing, hearing, smelling, tasting, touching, and even thinking have dropped away. This is where
for me, the hard problem does
kind of come screaming back into the conversation. On many of these accounts of what consciousness is,
we should probably move to Tononi's notion of integrated information. On his account, and this
is a very celebrated thesis in neuroscience and philosophy, consciousness simply is a matter of integrated
information, and the more information and the more integrated, the more consciousness, presumably.
But an experience of the sort that I'm describing of pure consciousness, consciousness, you know,
whether pure or not, consciousness stripped of its usual informational reference points, is not the experience of
diminished consciousness. In fact, the people who have this experience tend to celebrate it as
more the quintessence of being conscious. I mean, it's really some kind of height of consciousness
as opposed to its loss. And yet, the information component is certainly dialed down by any normal
sense in which we use the term information. There are not things being discriminated from one
another. And I guess you could say it's integrated, but there are other experiences that I could
describe to you where the criterion of integration also seems to fall apart and yet consciousness remains.
So again, this is one of those definitional problems. If we're going to call consciousness
a matter of integrated information, if we find an example of there's something that it's like to be
you and yet information and integration are not its hallmarks, well, then it's kind of like defining all ravens
as being black, and then we find a white one. What do we call it, a white raven or some other bird?
Do you have any intuitions on this front? Well, there's an awful lot in what you said just there. I think we can put aside for a second the question of trying to isolate what we might call the minimal experience of selfhood: is there anything left after you've got rid of experiences of body and of volition and of internal narratives and so on? Let's hold that thought. Just for one point of clarification, I would
distinguish this from the loss of self, which I hope we come to, I think you can lose your sense
of self with all of the normal phenomenology preserved. So you can be seeing and hearing and
tasting and even thinking just as vividly, and yet the sense of self, or at least one sense of self,
can drop away completely. This is a different experience I'm talking about. Yes, I mean, that sounds like flow state type experiences in some way.
But maybe we can get onto that. But if we move indeed to IIT and think about
how that might speak to these issues of pure consciousness and whether
these experiences serve as some kind of counter-example,
some phenomenological counter-example to IIT. I think that's very interesting to think about.
And it gets at whether we consider IIT, integrated information theory, to be primarily a theory of
conscious level, of how conscious a system is, or of conscious content, or of their interaction.
Perhaps it's best to start just by summarizing in a couple of sentences the claims of IIT,
because you're absolutely right. It's come to occupy a very interesting position
in the academic landscape of consciousness research. A lot of people talk about it,
although in the last couple of meetings of the Association for the Scientific Study of Consciousness, certainly the last one, there was surprisingly little about it.
And I haven't thought about why that might be, which we can come on to.
It's probably worth trying to explain just very briefly what integrated information theory, IIT, tries to do.
And what it tries to do, it starts with a bunch of phenomenological axioms.
So it doesn't start by asking the question,
what's in the brain and how does that go along with consciousness?
It tries to identify axiomatic features of conscious experience,
things that should be self-evident,
and then, from there, derive what the necessary and sufficient mechanisms are, or really what the sufficient mechanistic basis is, given these axioms. IIT calls these postulates. There are actually, in the current version of IIT, five of these axioms, but let's just consider a couple of them, the fundamental ones: information and integration. You can call them axioms, or just generalizations of what all conscious experiences seem to have in common. So the axiom of information is that every conscious experience is highly informative
for the organism in the specific sense of ruling out a vast repertoire of alternative experiences.
You're having this experience right now instead of all the other experiences you could be having, could have had, or will have; you're having this particular experience. And the
occurrence of that experience is generating an enormous amount of information because it's
ruling out so many alternatives. As you go through this, I think it will be useful for me to
just flag a few points where this phenomenologically breaks down for me. So
again, the reference here is to kind of non-ordinary experiences in meditation
and with psychedelics, but the meditative experiences for me at least have become
quite ordinary. I can really talk about them in real time. So the uniqueness of each conscious
experience as being highly informative because it rules out so many other conscious experiences. In meditation,
in many respects, that begins to break down because what you're noticing is a core of sameness
to every experience. What you're focusing on is the qualitative character of consciousness that is unchanged by experience. And so the
distinctness of an experience isn't what is so salient. What is salient is the unchanging
quality of consciousness in its openness, its centerlessness, its vividness. And one analogy
I've used here, and if you've ever been in a restaurant which has had a
full-length mirror across one of the walls, and you haven't noticed that the mirror was a mirror,
and you just assume that the restaurant was twice as big as it in fact is, the moment you notice
it's a mirror, you notice that everything you thought was the world is just a pane of glass,
it's just a play of light on a wall. And so all those people
aren't really people or they're not extra people. They're in the room just being reflected over
there. And one way to describe that shift is almost a kind of loss of information, right?
It's just like there's no depth to what's happening in the glass. Nothing's really
happening in the glass. And meditation does begin to converge on that kind of experience
with everything. The Tibetan Buddhists talk about "one taste," meaning that there's basically a single taste to everything when you really pay attention. And it is because these
intrinsic properties of consciousness are what have become salient, not the differences between
experiences. So I don't know if that just sounds like an explosion of gibberish to you, but it's a way in which when I begin to hear this
first criterion of Tononi's stated, as you have, it begins to not map on to what I'm describing as
some of the clearest moments of consciousness. Again, not a diminishment of consciousness.
That's very interesting.
And those states of being aware of the unchanging nature of consciousness,
I think that that's really very important.
I'm not sure it's misaligned with Tononi's intuition here,
because I think the idea of informativeness is subtle. One way to think about it is that the specific experience
that you're having in that meditative state
of becoming aware of one taste
or of the unchanging nature
that underlies all experiences,
that itself is a specific experience.
It's a very specific experience.
You have to have trained in meditation for a long time to have that experience, and the having of that experience is equally distinctive: it's ruling out all the other experiences you're not having. And so it's not so much how informative it is for you at the psychological level; it's a much more reductionist interpretation of information, I think. The other way to get at it is to think of it from the bottom up, from simple systems upwards, and Tononi uses an analogy which I think has got some value. You know, why is a photodiode not conscious? Well, for a photodiode, the whole world, the outer world, is either dark or light. The photodiode doesn't have an experience of
darkness and lightness. It's just on or off, one or zero. And generalizing that, that a particular
state has the informational content it has in virtue of all the things it isn't, rather than
the specific thing that it is. So we can think about this in terms of color, you know, red is red,
not because of any intrinsic redness to a combination of wavelengths, but because of
all the other combinations of wavelengths that are excluded by that particular combination of
wavelengths. And I think this is really interesting. This point actually precedes integrated information theory; it goes right back to the dynamic core ideas of Tononi and Edelman, which were the thing that first attracted me to go and work in San Diego nearly 20 years ago. And even then,
the point was made that an experience of pure darkness
or complete sensory deprivation, where there's no sensory input, no perceptual content, call this a
hypothetical conscious state for now, since I don't know to what extent it's approximated by any
meditative states, that has exactly the same informational content as does a very vivid,
busy, conscious scene walking down the high street, because it's ruling out the same number
of alternatives. And it may seem subjectively different, less informative, because there's
nothing going on. But in terms of the number of alternative states that it's ruling out, it's the same. So I think there's a sense in which we can interpret information, this axiom of informativeness, as applying to a whole range of different kinds of conscious contents. Of course, this does get us
onto tricky territory about whether we're talking about a theory
of level or a theory of content.
But I think this idea can account for your situation, though it does raise the question of whether we can really get at content specifically there.
So the number of states over which you can range as a conscious mind defines how much information is encoded when you're in one of those states.
That's right. That would be the claim.
And you can think of it in terms of one of the quantities associated with this technical definition of information, which is entropy.
And entropy simply measures the range of possible options and the likelihood of being in any particular one of those options.
And so entropy is a measure of the kind of uncertainty
associated with a system state.
And so a photodiode can only be in one of two possible states.
A single die can be in six possible states.
A combination of two dice can be in 36 possible states.
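For concreteness, the entropy of a uniform distribution over N states is log2(N) bits, so these examples work out as follows (note that two dice jointly have six times six possible states).

```python
import math

def uniform_entropy_bits(n_states: int) -> float:
    """Shannon entropy of a uniform distribution over n states."""
    return math.log2(n_states)

for name, n in [("photodiode", 2), ("one die", 6), ("two dice", 6 * 6)]:
    print(f"{name:>10}: {n:>2} states -> {uniform_entropy_bits(n):.2f} bits")
# photodiode:  2 states -> 1.00 bits
#    one die:  6 states -> 2.58 bits
#   two dice: 36 states -> 5.17 bits
```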
And actually, I want to linger here slightly longer, because it's in these technical details about information theory that IIT, I think, runs aground in trying to address the hard problem. The trouble comes from the identity relationship that Tononi argues for between consciousness and integrated information. We'll get onto integration in a second, but let's just think about information. It's because of this identity relationship, in which he says consciousness simply is integrated information measured the right way, that the whole theory becomes empirically untestable and lame. Because if we're to make
the claim that the content and level of consciousness that a system has
is identical to the amount of integrated information that it has, that means in order
to assess this, I need to know not only what state the system is in and what state it was in
previous time steps, let's say we measure it over time. But I also need
to know all the states the system could be in, but hasn't been in. I need to know all its possible
combinations. And that's just impossible for anything other than really simple toy systems.
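A quick back-of-envelope sketch shows why, treating elements as binary switches, which is itself a drastic oversimplification of real neurons.

```python
# Number of global states of n two-state ("binary") elements.
for n in [2, 10, 20, 40, 300]:
    print(f"{n:>3} elements -> 2**{n} = {2**n:.3e} possible states")
# Already at 300 binary elements there are ~2e90 states, more than the
# ~1e80 atoms in the observable universe, and a brain has tens of
# billions of neurons. Knowing the full repertoire of states a brain
# *could* occupy, as the identity claim requires, is hopeless.
```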
There's a metaphysical claim which goes along with this too, which is that information has ontological status. This goes back to John Wheeler and "it from bit" and so on,
that the fact that a system could occupy a certain state but hasn't is still causally
contributing to the conscious level and state of the system now. And that's a very strong
claim, and it's a very interesting claim. I mean, who knows what the ontological status of information
in the universe will turn out to be. But you also have an added problem of how you bound
this space of possibilities. So for instance, not only can you not know all the possible states my
conscious mind could be in so as to determine the information density of the current state,
but what counts as possible? If in fact it is possible to augment my brain even now,
I just don't happen to know how to do that, but it's possible to do that, or it'll be possible next week. Do we have to incorporate those possibilities into
the definition of my consciousness? If you'd like to continue listening to this podcast,
you'll need to subscribe at samharris.org. You'll get access to all full-length episodes of the
Making Sense Podcast and to other subscriber-only content, including bonus episodes and AMAs and the
conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free
and relies entirely on listener support. And you can subscribe now at samharris.org. Thank you.