Making Sense with Sam Harris - #264 — Consciousness and Self (Rebroadcast)
Episode Date: October 21, 2021. Sam Harris speaks with Anil Seth about the scientific study of consciousness, where consciousness emerges in nature, levels of consciousness, perception as a "controlled hallucination," emotion, the experience of "pure consciousness," consciousness as "integrated information," measures of "brain complexity," psychedelics, different aspects of the "self," conscious AI, and many other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That's why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life's most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
To access full episodes of the Making Sense Podcast, you'll need to subscribe at SamHarris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely
through the support of our subscribers. So if you enjoy what we're doing here,
please consider becoming one.
Welcome to the Making Sense Podcast.
This is Sam Harris.
No housekeeping today, apart from mentioning that big things are happening over at Waking Up.
We have redesigned the app, and I'm really happy with the result.
Props to the team over there at Waking Up,
and many good things happening on that front that I'm excited about.
Okay, well today I'm releasing a podcast that we originally aired a few years ago.
This is with Anil Seth, a quite celebrated neuroscientist,
and this was a really good conversation on consciousness that runs to three hours.
Anil has a new book out, available today, titled Being You: A New Science of Consciousness,
and I have not yet read the book.
He was beginning to write it when we last spoke.
But I'm told it's fantastic.
And it has received wonderful reviews and been endorsed by many smart people.
David Eagleman, Nicholas Humphrey, Alex Garland,
the director of the film Ex Machina,
Sean Carroll, Nigel Warburton,
and it's been endorsed by none other than my wife, Annaka Harris. So anyway, I look forward to reading it. I really enjoyed this conversation with Anil. He remains a professor of cognitive and
computational neuroscience at the University of Sussex and the co-director of the Sackler
Center for Consciousness Science. And with that, I give you Anil Seth.
I am here with Anil Seth. Anil, thanks for joining me on the podcast.
Thanks for inviting me. It's a pleasure.
I think I first discovered you, I believe I'd seen your name associated with various papers,
but I think I first discovered you the way many people had after your TED Talk. You gave a much
loved TED Talk. Perhaps you can briefly describe your scientific and intellectual background.
It's quite a varied background, actually.
I mean, I think my intellectual interest has always been in understanding the physical
and biological basis of consciousness and what practical implications that might have
in neurology and psychiatry.
But when I was an undergrad student at Cambridge in the early 1990s, consciousness was certainly, as a student then, and in a place like Cambridge, not a thing you could study scientifically.
It was still very much a domain of philosophy.
And at that time, I still had this kind of idea that physics was going to
be the way to solve every difficult problem in science and philosophy.
So I started off studying physics.
But then through the undergrad years, I got diverted towards psychology
as more of a direct route to these issues of great interest
and ended up graduating with a degree in experimental psychology.
After that, I moved to Sussex University, where I am now actually, again,
to do a master's and a PhD in computer science and AI. And this was partly because of the need, I felt, at the time to move beyond these box and arrow models of cognition that were so dominating psychology and cognitive science in the 90s towards something that had more explanatory power. And the rise
of connectionism and all these new methods and tools in AI seemed to provide that.
So I stayed at Sussex and did a PhD, actually in an area which is now called artificial life. And
I became quite diverted, actually ended up doing a lot of stuff in ecological modeling and thinking a lot more here about how brains, bodies and environments interact and co-construct cognitive
processes. But I sort of left consciousness behind a little bit then. And so when I finished my PhD
in 2000, I went to San Diego to the Neurosciences Institute to work with Gerald Edelman,
because certainly then San Diego was
one of the few places, certainly that I knew of at the time, that you could legitimately study
consciousness and work on the neural basis of consciousness. Edelman was there, Francis Crick
was across the road at the Salk Institute. People were really doing this stuff there. So I stayed
there for about six years and finally started working on consciousness,
but bringing together all these different traditions of math, physics, computer science,
as well as the tools of cognitive neuroscience. And then for the last 10 years, I've been back at Sussex where I've been running a lab and it's called the Sackler Center for Consciousness
Science. And it's one of the growing number of labs that are explicitly dedicated to solving or studying at least the brain and biological basis of consciousness.
Yeah, well, that's a wonderful pedigree. I've heard stories, and I never met Edelman. I've
read his books, and I'm familiar with his work on consciousness, but he was famously a titanic ego,
if I'm not mistaken.
I don't want you to say anything you're not comfortable with,
but everyone who I've ever heard have an encounter with Edelman
was just amazed at how much space he personally took up in the conversation.
I've heard that too.
And I think there's some truth to that.
What I can say from the other side is that when I worked for him and with him,
firstly,
it was an incredible experience.
And I felt very lucky to have that experience because he had a large ego, but he also knew
a lot too.
I mean, he really had been around and had contributed to major revolutions in biology
and in neuroscience.
But he treated the people he worked with, I think, often very kindly.
And one of the things that was very clear in San
Diego at the time, he didn't go outside of the Neurosciences Institute that much. It was very
much his empire. But when you were within it, you got a lot of his time. So I remember many
occasions just being in the office and most days I would be called down for a discussion with
Edelman about this subject or that subject or this
new paper or that new paper. And that was a very instructive experience for me. I know he was quite
difficult in many interviews and conversations outside the NSI, which is a shame, I think, because his legacy really is pretty extraordinary. I'm sure we'll get onto this later. But one of the main reasons I went there was because I'd read some of the early work on dynamic core theory, which later became Giulio Tononi's very prominent integrated information theory.
And I was under the impression that Giulio Tononi was still going to be there when I got there in
2001, but he wasn't; he'd left. And he wasn't really speaking much with
Edelman at the time. And it was a shame that they didn't continue their interaction. And when we
tried to organize a festschrift, a few of us, for Edelman some years ago now, it was quite difficult
to get the people together that had really been there and worked with him at various
times of his career. I think of the people that have gone through the NSI and worked with Edelman; there's an extraordinary range of people who've contributed huge amounts, not just in consciousness research but in neuroscience generally, and of course in molecular biology before that. So it was a great, yeah, great experience for me. But yeah, I know he could also be pretty difficult at times too. You had to have a pretty thick skin. So we have a massive interest in common. No doubt we have
many others, but consciousness is really the center of the bullseye as far as my interests go.
And really, as far as anyone's interests go, if they actually think about it, it really is the
most important thing in the universe because it's the basis of all of our happiness and suffering and everything we value. It's the
space in which anything that matters can matter. So the fact that you are studying it and thinking
about it as much as you are just makes you the perfect person to talk to. I think we should start
with many of the usual starting points here,
because I think they're the usual starting points for a reason. Let's start with a definition of
consciousness. How do you define it now? I think it's kind of a challenge to define
consciousness. There's a sort of easy folk definition, which is that consciousness is
the presence of any kind of subjective experience whatsoever. For a conscious organism,
there is a phenomenal world of subjective experience that has the character of being
private, that's full of perceptual qualia or content, colors, shapes, beliefs, emotions,
other kinds of feeling states. There is a world of experience that can go away completely in states
like general anesthesia or dreamless sleep. It's very easy to define it that way. To define it
more technically is always going to be a bit of a challenge. And I think sometimes there's too
much emphasis put on having a consensus technical definition of something like consciousness,
because the history of science has shown us many times that definitions evolve along with our scientific understanding of a phenomenon.
shown us many times that definitions evolve along with our scientific understanding of a phenomenon.
We don't sort of take the definition and then transcribe it into scientific knowledge
in a unidirectional way. So long as we're not talking past each other and we agree that
consciousness picks out a very significant phenomenon in nature, which is the presence of
subjective experience, then I think we're on reasonably safe terrain.
Many of these definitions of consciousness are circular. We're just substituting another word
for consciousness in the definition, like sentience or awareness or subjectivity or
even something like qualia, I think, is parasitic on the
undefined concept of consciousness.
Sure, I think that's right.
But then there's also a lot of confusions people make too.
So I'm always surprised by how often people confuse consciousness with self-consciousness.
And I think our conscious experiences of selfhood are a part of conscious experience as a whole,
but only a subset of those experiences.
And then there are arguments about whether there's such a thing as phenomenal consciousness
that's different from access consciousness, where phenomenal consciousness refers to
this impression that we have of a very rich conscious scene, perhaps in vision before us now,
that might exceed what we have
cognitive access to. Other people will say, well, no, there's no such thing as phenomenal
consciousness beyond access consciousness. So there's a certain circularity, I agree with you
there, but there are also these important distinctions that can lead to a lot of
confusion when we're discussing the relevance of certain experiments.
I want to just revisit the point you just made about not transcribing a definition of a concept
that we have into our science as a way of capturing reality. And then there are things
about which we have a folk psychological sense which completely break apart once you start
studying them at the level of the brain. So something like memory, for instance, we have the sense that it's one thing intuitively, you know, pre-scientifically,
we have the sense that to remember something, whatever it is, is more or less the same operation
regardless of what it is. Remembering what you ate for dinner last night, remembering your name,
remembering who the first
president of the United States was, remembering how to swing a tennis racket. These are things
that we have this one word for, but we know neurologically that they're quite distinct
operations and you can disrupt one and have the other intact. The promise has been that
consciousness may be something like that, that we could be
similarly confused about it, although I don't think we can be. I think consciousness is unique
as a concept in this sense, and this is why I'm taken in more by the so-called hard problem of
consciousness than I think you are. I think we should talk about that, but before we do, I think the definition that I want to put in play, which I know you're quite familiar with, is the one that the philosopher
Thomas Nagel put forward, which is that consciousness is the fact that it's like something
to be a system, whatever that system is. So if a bat is conscious, this comes from his famous essay,
What Is It Like to Be a Bat?
If a bat is conscious, whether or not we can understand what it's like to be a bat,
if it is like something to be a bat, that is consciousness in the case of a bat. However inscrutable it might be, however impossible it might be to map that experience onto our own,
if we were to trade places with a bat, that would not be synonymous
with the lights going out. There is something that it's like to be a bat if a bat is conscious.
That definition, though it's really not one that is easy to operationalize and it's not a technical
definition, there's something sufficiently rudimentary about that that it has always
worked for me. And when we begin to move away from
that definition into something more technical, my experience has been, and we'll get to this as we
go into the details, that the danger is always that we wind up changing the subject to something
else that seems more tractable. We're no longer talking about consciousness in Nagel's sense.
We're talking about attention.
We're talking about reportability or mere access or something.
So how do you feel about Nagel's definition as a starting point?
I like it very much as a starting point.
I think it's pretty difficult to argue with that as a very basic fundamental expression of what we mean by consciousness in the round.
So I think that's fine. I partly disagree with you, I think, when
we think about the idea that consciousness might be more than one thing. And here I'm much more
sympathetic to the view that, heuristically at least, the best way to scientifically study consciousness and philosophically to think about
it as well is to recognize that we might be misled about the extent to which we experience
consciousness as a unified phenomenon. And there's a lot of mileage in recognizing how,
just like the example for memory, recognizing how conscious experiences
of the world and of the self can come apart in various different ways.
Just to be clear, actually, I agree with you there. We'll get into that, but I completely
agree with you there that we could be misled about how unified consciousness is. The thing
that's irreducible to me is this difference between there being something that it's like and not. You know,
the lights are on or they're not. There are many different ways in which the lights can be on
in ways that would surprise us. Or for instance, it's quite possible that the lights are on
in our brains in more than one spot. We'll talk about split brain research perhaps, but
they're very counterintuitive ways
the lights could be on. But just the question is always, is there something that it's like to be
that bit of information processing or that bit of matter? And that is always the cash value of a
claim for consciousness. Yeah, I'd agree with that. I think that it's perfectly reasonable to put the
question in this way, that for a conscious organism, there is something it is like to be that organism. And the thought is that there's going
to be some physical, biological, informational basis to that distinction. Now, you've written
about why we really don't need to waste much time on the hard problem. Let's remind people what the hard
problem is. David Chalmers has been on the podcast, and I've spoken about it with other people, but
perhaps you want to introduce us to the hard problem briefly. The hard problem has been,
rightly so, one of the most influential philosophical contributions to the consciousness
debate for the last 20 years or so. And it goes right back
to Descartes. And I think it encapsulates this fundamental mystery that we've started talking
about now, that for some physical systems, there is also this inner universe, there is the presence
of conscious experience, there is something it is like to be that system. But for other systems, tables, chairs, probably most computers, probably all computers these days,
there is nothing it is like to be that system. And what the hard problem does, it pushes that intuition a bit further and it distinguishes itself from the easy problem in neuroscience.
And the easy problem, according to Chalmers, is to figure out how the brain works in all its
functions, in all its detail.
So to figure out how we do perception, how we utter certain linguistic phrases, how we
move around the world adaptively, how the brain supports perception, cognition, behavior
in all its richness in a way that would be indistinguishable from, and here's the key really, an equivalent that had no phenomenal properties at all, that completely
lacked conscious experience. The hard problem is understanding how and why any solution to the easy
problem, any explanation of how the brain does what it does in terms of behavior, perception,
and so on, how and why any of this should have anything to do with conscious experiences at all? And it rests on this idea of
the conceivability of zombies. And this is one reason I don't really like it very much.
I mean, the hard problem has its conceptual power over us because it asks us to imagine
systems, philosophical zombies, that are completely equivalent in terms of their
function and behavior to you or to me or to any or to a conscious bat, but that instantiate no
phenomenal properties at all. The lights are completely off for these philosophical zombies.
And if we can imagine such a system, if we can imagine such a thing, a philosophical zombie,
you or me, then it does become this enormous challenge. You think, well, then what is it or
what could it be about real me, real you, a real conscious bat, that gives rise to, that requires or entails, that there are also these phenomenal properties, that there is something it is like to be you or me or the bat. And it's because Chalmers would argue that such things are
conceivable that the hard problem seems like a really huge problem. Now, I think this is a little
bit of a, I think we've moved on a little bit from these conceivability arguments. Firstly, I just think that they're pretty weak. And the more you know about a system, the more we know about the
easy problem, the less convincing it is to imagine a zombie alternative. Think about, you're a kid,
you look up at the sky and you see a 747 flying overhead.
And somebody asks you to imagine a 747 flying backwards.
Well, you can imagine a 747 flying backwards.
But the more you learn about aerodynamics, about engineering, the harder it is to conceive of a 747 flying backwards.
You know, you simply can't build one that way.
And that's my worry about this kind of conceivability argument.
That to me, I really don't think I can imagine in a serious way the existence of a philosophical zombie.
And if I can't imagine a zombie, then the hard problem loses some of its force.
That's interesting. I don't think it loses all of its force, or at least it doesn't for me.
For me, the hard problem has never really rested on the zombie argument, although I know Chalmers did a lot with the zombie argument.
I mean, so let's just stipulate that philosophical zombies are impossible.
They're at least, you know, what's called in the jargon, nomologically impossible.
It's just a fact that we live in a universe where if you built something that could do what I can do, that something would be conscious.
So there is no zombie Sam that's possible. And let's just also add what you just said, that
really, when you get to the details, you're not even conceiving of it being possible. It's not
even conceptually possible; you're not thinking it through enough, and if you did, you would notice it break apart. But for me, the hard problem is really that with consciousness, any explanation doesn't seem to promise the same sort of intuitive closure that other scientific explanations do.
It's analogous to whatever it is, and we'll get to some of the possible
explanations, but it's not like something like life, which is an analogy that you draw and that
many scientists have drawn to how we can make a breakthrough here. It used to be that people
thought life could never be explained in mechanistic terms. There was a philosophical point of view called vitalism here, which
suggested that you needed some animating spirit, some élan vital in the wheelworks to make sense
of the fact that living systems are different from dead ones, the fact that they can reproduce
and repair themselves from injury and metabolize, and all the functions we see a living system
engage, which define what it is to be alive, it was thought very difficult to understand any of
that in mechanistic terms. And then lo and behold, we managed to do that. The difference for me is,
and I'm happy to have you prop up this analogy more than I have, but the difference for me is that everything you want to say about life, with the exception of conscious life, we have to leave
consciousness off the table here, everything else you want to say about life can be defined in terms
of extrinsic functional relationships among material parts. So, you know, reproduction and growth and healing
and metabolism and homeostasis,
all of this is physics and need not be described
in any other way.
And even something like perception, you know,
the transduction of energy, you know,
let's say, you know, vision, light energy
into electrical and chemical energy in the brain
and the mapping of a visual space
onto a visual cortex, all of that makes sense in mechanistic physical terms until you add
this piece of, oh, but for some of these processes, there's something that it's like to be that
process.
For me, it just strikes me as a false analogy.
And with or without zombies, the hard problem still
stays hard. I think it's an open question whether the analogy will turn out to be false or not.
It's difficult for us now to put ourselves back in the mindset of somebody 80 years ago,
100 years ago, when vitalism was quite prominent and whether the sense of mystery surrounding something that was alive
seemed to be as inexplicable as consciousness seems to us today. So it's easy to say with
hindsight, I think, that life is something different. But we've encountered, or rather,
scientists and philosophers over centuries have encountered things that have seemed to be
inexplicable, that have turned out to be explicable. So I don't think we should rule out a priori
that there's going to be something really different this time about consciousness.
There's, I think, a more heuristic aspect to this, which is that if we run with the analogy of life, what that leads us to do is to isolate the different phenomenal properties that co-constitute what it is for us to be conscious, the experience of self, for instance, as distinct from conscious perception of the outside world. We can think about conscious experiences of volition and of agency
that are also very central to certainly our experience of self.
These give us phenomenological explanatory targets
that we can then try to account for with particular kinds of mechanisms.
It may turn out at the end of doing this that there's some residue. There is
still something that is fundamentally puzzling, which is this hard problem residue. Why are there
any lights on for any of these kinds of things? Isn't it all just perception? But maybe it won't
turn out like that. And I think to give us the best chance of it not turning out
like that, there's a positive and a negative aspect. The positive aspect is that we need to
retain a focus on phenomenology. And this is another reason why I think the hard easy problem
distinction can be a little bit unhelpful, because in addressing the easy problem, we are basically
instructed to not worry about phenomenology. All we should worry about is function and behavior.
And then the hard problem kind of gathers within its remit everything to do with phenomenology
in this central mystery of why is this an experience rather than no experience.
The alternative approach, and this is something I've kind of caricatured as the real problem, but David Chalmers himself has called it the mapping
problem. And Francisco Varela talks about a similar set of ideas with his neurophenomenology,
is to not try to solve the hard problem tout court, not try to explain how it is possible that
consciousness comes to be part of the universe, but rather to individuate different kinds of phenomenological properties
and draw some explanatory mapping between neural, biological, physical mechanisms and these
phenomenological properties. Now, once we've done that, and we can begin to explain not why there is experience at all, but why certain experiences are the way they are and not other ways, and we can predict when certain experiences will have
particular phenomenal characters and so on. Then we'll have done a lot more than we can currently
do. And we may have to make use of novel kinds of conceptual frameworks, maybe frameworks like
information processing will run their course and will require other
more sophisticated kinds of descriptions of dynamics and probability in order to build
these explanatory bridges.
So I think we can get a lot closer.
And the negative aspect is, why should we ask more of a theory of consciousness than
we should ask of other kinds of scientific
theories? And I know people have talked about this on your podcast before as well, but we do
seem to want more of an explanation of consciousness than we would do of an explanation in biology
or physics, that it somehow should feel intuitively right to us. And I wonder why this is such a big deal when it comes to
consciousness. Because we're trying to explain something fundamental about ourselves doesn't
necessarily mean that we should apply different kinds of standards to an explanation that we
would apply in other fields of science. It just may not be that we get this feeling that something is
intuitively correct when it is in fact a very good scientific account of the origin of phenomenal
properties. Certainly, scientific explanations are not instantiations. There's no sense in which
a good theory of consciousness should be expected to suddenly realize the phenomenal properties that it's explaining. But also, I worry that we ask too much of theories
of consciousness this way. Yeah, well, we'll move forward into the details, and I'll just flag
moments where I feel like the hard problem should be causing problems for us. I do think it's not a
matter of asking too much of a theory of consciousness here.
I think there are very few areas in science where the accepted explanation is totally a brute fact which just has to be accepted because it is the only explanation that works, but it's not something
that actually illuminates the transition from, you know, atoms to some higher level phenomenon,
say. Again, for everything we could say about life, even the very strange details of molecular
biology, just how information in the genome gets out and creates the rest of a human body,
it still runs through when you look at the details. It's surprising. It's
at parts difficult to visualize, but the more we visualize it, the more we describe it,
the closer we get to something that is highly intuitive, even something like, you know, the flow
of water. The fact that water molecules in its liquid state are loosely bound and move past one another, well, that seems exactly like what should be happening at the micro level.
So as to explain the macro level property of the wetness of water and the fact that it has characteristics, higher level characteristics that you can't attribute to atoms, but you can attribute to collections of atoms like turbulence, say.
Whereas if consciousness just happens to require some minimum number of information processing units knit together in a certain configuration, firing at a certain hertz,
and you change any of those parameters and the lights go out,
that, for me, still seems like a mere brute fact that doesn't
explain consciousness. It's just a correlation that we decide is the crucial one. And I've never
heard a description of consciousness, you know, of the sort that we will get to, like, you know,
integrated information, you know, Tononi's phrase, that unpacks it any more than that. And you can react to that, but then
I think we should just get into the details and see how it all sounds.
Sure. I'll just react very briefly, which is that I think I'd also be terribly disappointed if
you look at the answer in a book of nature and it turned out to be, yes, you need 612,000 neurons
wired up in a small world network and that's it.
That does seem, of course, ridiculous and arbitrary and unsatisfying.
The hope is that as we progress beyond, if you like, just brute correlates of conscious
states towards accounts that provide more satisfying bridges between mechanism and phenomenology that explain
for instance why a visual experience has the phenomenal character that it has and not some
other kind of phenomenal character like an emotion that it won't seem so arbitrary and that as we
follow this route, which is an empirically productive route. And I think that's important, that if we can actually do science with this route, we can try to think about how to operationalize
phenomenology in various different ways. Very difficult to think how to do science and just
solve the hard problem head on. At the end of that, I completely agree there might be
still this residue of mystery, this kernel of something fundamental left unexplained.
But I don't think we can take that as a given because we can't, well, I certainly
can't predict what I would feel as intuitively satisfying when I don't know what the explanations
that bridge mechanism and phenomenology are going to look like in 10 or 20 years time. We've already moved further from just saying it's this area or that area to synchrony, which is still kind of
unsatisfying, to now I think some emerging frameworks like predictive processing and
integrated information, which aren't completely satisfying either. But they hinted a trajectory
where we're beginning to draw closer connections
between mechanism and phenomenology. Okay, well, let's dive into those hints. But before we do,
I'm just wondering, phylogenetically, in terms of comparing ourselves to so-called lower animals,
where do you think consciousness emerges? Do you think there's something that's like to be a fly, say?
That's a really hard problem. I mean, I have to be agnostic about this. And again, it's just striking how people's views on these things seem to have changed over recent
decades. It seems completely unarguable to me that all other mammals have conscious experiences
of one sort or another. I mean, we share so much in the way of the relevant neuroanatomy and neurophysiology, and they exhibit so many of the same behaviors, that it would be remarkable to claim
otherwise. It actually wasn't that long ago that you could still hear people say that consciousness
was so dependent on language that they wondered whether human infants were conscious, to say
nothing of dogs and anything else that's not human.
Yeah, that's absolutely right.
I mean, that's a terrific point.
And this idea that consciousness was intimately and constitutively bound up with
language or with higher order executive processing of one sort or another, I think just exemplifies
this really pernicious anthropocentrism that we tend to bring to bear sometimes without realizing
it. We think we're super intelligent, we think we're conscious, we're smart, and we need to
judge everything by that benchmark.
And what's the most advanced thing about humans?
Well, if you're gifted with language, you're going to say language.
And now already, with a bit of hindsight, it seems, to me anyway, rather remarkable
that people should make these, I can only think of them as just quite naive errors to associate consciousness
with language. Not to say that consciousness and language don't have any intimate relation,
I think they do. Language shapes a lot of our conscious experiences. But certainly it's a very,
very poor criterion with which to attribute subjective states to other creatures. So mammals
for sure, I mean mammals for sure, right? But that's easy
because they're pretty similar to humans and primates being mammals. But then it gets more
complicated, and you think about birds, which diverged a reasonable amount of time ago, but still have brain structures for which one can establish analogies, in some cases homologies, with mammalian
brain structures. And in some species, scrub jays and corvids generally, pretty sophisticated
behavior too. It seems very possible to me that birds have conscious experiences. And I'm aware,
underlying all this,
the only basis to make these judgments is in light of what we know about the neural mechanisms
underlying consciousness and the functional
and behavioral properties of consciousness in mammals.
It has to be this kind of slow extrapolation
because we lack the mechanistic answer
and we can't look for it in another species.
But then you get beyond birds and you get out to, you know, I then like to go
way out on a phylogenetic branch to the octopus, which I think is an extraordinary example of
convergent evolution. I mean, they're very smart. They have a lot of neurons, but they diverged from
the human line, I think, as long ago as sponges or something like that. I mean,
really very little in common. But they have incredible differences too. Three hearts, eight legs, arms, I'm never sure whether it's a leg or an arm, that behave semi-autonomously. And one is left, you know, when you spend time with these creatures, I've been lucky enough to spend a week with them in a lab in Naples, you certainly get the impression of another conscious presence there, but of a very different one. And this is also instructive because it brings us a little bit out of this assumption that we can fall into, that there is one way of being conscious and that's our way. There is a huge space of possible minds out there, and the octopus is a very definite example of a very different mind, and very likely a conscious mind too. Now, when we get down to, oh yeah, not really down, I don't like this idea of organisms being arranged on a single scale
like this. But certainly creatures like fish, insects are simpler in all sorts of ways than
mammals. And here it's really very difficult to know where to draw the line, if indeed there is
a line to be drawn, if it's not just a gradual shading out of consciousness with gray areas in between
and no categorical divide, which I think is equally possible. Many fish display
behaviors which seem suggestive of consciousness. They will self-administer analgesia when they're
given painful stimulation. They will avoid places that have been associated with painful stimulation, and so on. You hear things like the precautionary principle come into play: given that suffering, if it exists, conscious suffering, is a very aversive state, and it's ethically wrong to impose that state on other creatures, we should tend to assume that creatures are conscious unless we have good evidence that they're not.
So we should put the bar a little bit lower in most cases.
Let's talk about some of the aspects of consciousness that you have identified as being distinct.
There are at least three. You've spoken about the level of consciousness,
the contents of consciousness, and the experience of having a conscious self that many people,
as you said, conflate with consciousness as a mental property. There's obviously a relationship between these things, but they're not the same. Let's start with this notion of the level of
consciousness, which really isn't the same thing
as wakefulness. Can you break those apart for me? How is being conscious non-synonymous with being
awake in the human sense? Sure. Let me just first amplify what you said, that in making these
distinctions, I'm certainly not claiming or pretending that these dimensions of level, content, and self
pick out completely independent aspects of conscious experiences. There are lots of
interdependencies. I just think they're heuristically useful ways to address the
issue. We can do different kinds of experiments and try to isolate distinct phenomenal properties
in their mechanistic basis by making these
distinctions. Now, when it comes to conscious level, I think that the simplest way to think
of this is more or less as a scale. In this case, it's from when the lights are completely out,
when you're dead, brain death, or under general anesthesia, or perhaps in very, very deep states of sleep, all the way up to vague levels of awareness, which are similar to, which correlate with, wakefulness, so when you're very drowsy, to vivid, awake, alert, full conscious experience that, you know, I'm certainly having now. I feel very awake and alert, and, you know, my conscious level is kind of up there.
Now, in most cases, the level of consciousness articulated this way will go along with wakefulness or physiological arousal.
When you fall asleep, you lose consciousness, at least in early stages.
But there are certain cases that exist which show that they're not completely
the same thing on both sides. So you can be conscious when you're asleep. Of course,
we know this. This is called dreaming. So you're physiologically asleep, but you're having a vivid
inner life there. And on the other side, and this is where the rubber of consciousness science hits the road of neurology, you have states where behaviorally you have what looks like arousal. This used to be called the vegetative state; it's been kind of renamed several times, now the wakeful unawareness state, where the idea is that the body is still going through physiological cycles of arousal from sleep to wake, but there is no consciousness happening at all. The lights are not on. So these two things can be separated, and it's, you know, a very productive and very important line of work to try to isolate what's the mechanistic
basis of conscious level independently from the mechanistic basis of physiological arousal.
Yeah, and a few other distinctions to make here. Also, general anesthesia is quite distinct from
deep sleep, just as a matter of neurophysiology.
Certainly, general anesthesia is nothing like sleep, certainly at deep levels of general anesthesia. So whenever you go for an operation and the anesthesiologist is trying to make you
feel more comfortable by just saying something like, yeah, we'll just put you to sleep for a
while and then you'll wake up and it will be done. They are lying to you for good reason. It's kind of nice just to feel that you're going to sleep for a bit.
But the state of general anesthesia is very different. And for very good reason. If you
were just put into a state of sleep, you would wake up as soon as the operation started and that
wouldn't be very pleasant. It's surprising how far down you can take people in general anesthesia,
almost to a level of isoelectric brain activity
where there is pretty much nothing going on at all and still bring them back. And many people
now have had the non-experience of general anesthesia. And in some weird way, I now look
forward to it the next time I get to have this. Because it's a very sort of, it's almost a reassuring experience because there is absolutely
nothing.
It's complete oblivion.
It's not, you know, when you go to sleep as well, you can sleep for a while and you'll
wake up and you might be confused about how much time has passed, especially if you've
just flown across some time zones or stayed up too late, something like that.
You might not be sure what time it is, but you'll still have this sense of some time having passed.
Except we have this problem, or some people have this problem of anesthesia awareness, which
is every person's worst nightmare, if they care to think about it, where people have the experience
of the surgery because, for whatever whatever reason the anesthesia hasn't
taken them deep enough and yet they're immobilized and can't signal that they're not deep enough.
I know, absolutely. But I mean, that's a failure of anesthesia. It's not a characteristic of the
anesthetic state. Do you know who had that experience? You've mentioned him on the podcast.
I did? Really? Francisco Varela. Oh, really? I didn't know that. I did not know that.
Yeah, Francisco was getting a liver transplant and experienced some part of it.
Well, that's pretty horrific. Could not have been fun. Yeah. I mean, of course,
because the thing there is that, you know, under most serious operations, you're also administered
with a muscle paralytic so that you don't jerk around
when you're being operated on. And that's why it's particularly a nightmare scenario.
But if anesthesia is working properly, certainly the times I've had general anesthesia,
you start counting to 10 or start counting backwards from 10, you get to about 8,
and then instantly you're back somewhere else, very confused, very disoriented.
But there is no sense of time having passed.
It's just complete oblivion.
And I found that really reassuring because we can think conceptually about not being
bothered about all the times we were not conscious before we were born.
And therefore, we shouldn't worry too much about all the times we're not going to be conscious after
we die.
But to experience these moments of complete oblivion during a lifetime, or rather the
edges of them, I think is a very enlightening kind of experience to have.
Although there's a place here where the hard problem does emerge
because it's very difficult, perhaps impossible, to distinguish between a failure of memory and
oblivion. Has consciousness really been interrupted? Take anesthesia and deep sleep as
separate but similar in the sense that most people think there was a hiatus in consciousness. I'm prepared to believe that
that's not true of deep sleep, but we just don't remember what it's like to be deeply asleep.
I'm someone who often doesn't remember his dreams, and I'm prepared to believe that I dream every
night. And we know, even in the case with general anesthesia, they give amnesiac drugs so that you won't remember whatever they
don't want you to remember. And I recently had the experience of not going under a full anesthesia,
but having a, you know, what's called a twilight sleep for a procedure. And there was a whole
period afterwards where I was coming to about a half hour that I don't remember. And it was clear to my wife
that I wasn't going to remember it, but she and I were having a conversation. I was talking to her
about something. I was saying how perfectly recovered I was and how miraculous it was to be
back. And she said, yeah, but you're not going to remember any of this. You're not going to remember
this conversation. And I said, okay, well, let's test it. You know, you say something now and we'll see if I remember it. And she said, this is the test, dummy. You're
not going to remember this part of the conversation. And I have no memory of that part of the
conversation. So, good test. Yeah. You're right. Of course, even in stages of deep
sleep, people underestimate the presence of conscious experiences.
And this has been demonstrated by experiments
called serial awakening experiments,
where you just wake somebody up
at various times during sleep cycles
and ask them straight away,
what was in your mind?
And quite often people do.
If you'd like to continue listening to this conversation,
you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes
of the Making Sense podcast,
along with other subscriber-only content,
including bonus episodes and AMAs,
and the conversations I've been having on the Waking Up app.
The Making Sense podcast is ad-free
and relies entirely on listener support.
And you can subscribe now at SamHarris.org.