Making Sense with Sam Harris - #34 — The Light of the Mind
Episode Date: April 18, 2016
Sam Harris speaks with philosopher David Chalmers about the nature of consciousness, the challenges of understanding it scientifically, and the prospect that we will one day build it into our machines.
Transcript
To access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely
through the support of our subscribers. So if you enjoy what we're doing here,
please consider becoming one.
Today I'm speaking with David Chalmers.
David is a philosopher at NYU and also at the Australian National University, and he's the co-director of the Center for Mind, Brain, and Consciousness at NYU.
David, as you'll hear, though we've never met,
was instrumental in my turning my mind toward
philosophy and science, ultimately, because of the work he began doing on the topic of
consciousness in the early 90s. And I found it fascinating to talk to David. His interests and
intuitions in philosophy align with my own to a remarkable degree. We spend most of our time
talking about consciousness and what it is and
why it is so difficult to understand scientifically, conceptually. We talk about the hard problem of
consciousness, which is a phrase he introduced into philosophy that has been very useful in
shaping the conversation here. At least it's been useful for those of us who think that there really is a hard problem that resists any kind of easy neurophysiological solution or computational
solution. We talk about artificial intelligence and the possibility that the universe is a
simulation and other fascinating topics, some of which can seem so bizarre or abstract as to not have any real tangible
importance. But I would urge you not to be misled here. I think all of these topics will be more and
more relevant in the coming years as we build devices which, if they're not in fact conscious, will seem conscious to us. And as we
confront the prospect of augmenting our own conscious minds by integrating our brains with
machines more and more directly, and even copying our minds onto the hard drives of the future,
all of these arcane philosophical problems will become topics
of immediate and pressing personal and ethical importance. David is certainly one of the best
guides I know to the relevant terrain. So now it's with great pleasure that I bring you David Chalmers.
Well, I'm here with philosopher David Chalmers. David, thanks for coming on the podcast.
Thanks. It's a pleasure to be talking with you.
You know, I don't think we've ever met. Are you aware of whether or not we've met? I feel like we've met by email, but... I don't think so. I've had a couple of emails back and forth over the years and with Annika, your wife as well,
but never in person that I recall. Yeah, you know, the reason why I'm confused about this is because, and this is almost certainly something you don't know, you served quite an important intellectual role in my life. I went to one of those early Tucson
conferences on consciousness. Oh, I didn't know that. I think it was probably 95. Was 94 the first one?
Yeah, 94 was the first small one with about 300 people.
Then it got really big in 96 with about 2,000 people.
I think I went to 95 and probably 96 as well.
I had dropped out of school and was, I guess you could say, looking for some direction in life.
And I became very interested in the conversations that were happening in the philosophy of mind. I think
probably the first thing I saw was some of the sparring between Dan Dennett and John Searle.
Then I noticed you in the Journal of Consciousness Studies. And then I think I just saw an ad,
probably in the Journal of Consciousness Studies for the Tucson conference and showed up
and I quite distinctly remember your talk there. Your articulation of the hard problem of consciousness really just made me want to do philosophy, which led very directly into my wanting to know more about science and sent me back to the ivory tower.
And I think a significant percentage of my getting a PhD in neuroscience and continuing to be
interested in this issue was the result of just seeing that the conversation you started in Tucson
more than 20 years ago. Okay. Well, I'm really pleased to hear that. I had no idea.
Yeah. Might've been the 96 conference.
Was Dan Dennett there, you said?
I don't recall if Dan
was there. I went to
at least two of them and they were
in quick succession.
I think Roger Penrose was there.
I remember Stuart Hameroff
talking at least about their thesis
and it was a fascinating time.
Yeah, that's the event that people call the Woodstock of consciousness,
getting everyone together, getting the band together for the first time.
It was a crazy conference. It was a whole lot of fun.
It was the first time I had met a lot of these people too, myself, actually.
Oh, interesting.
It was very influential for me.
I feel like I am a bad judge of how familiar people are with the problem of
consciousness because I have been so steeped in it and fixated on it for decades now. So I'm always
surprised that people find this a novel problem and difficult to even notice as a problem. So
let's start at the beginning and let's just talk about what consciousness is. What do you mean by consciousness and how would you distinguish it from
the other topics that it's usually conflated with, like self-awareness and behavioral report
and access and all the rest? I mean, it's awfully hard to define consciousness, but I'd at least
like to start by saying consciousness is the subjective experience of the mind and the world. It's basically what it feels like from the first person point of view to be thinking and perceiving and judging and so on. So when I look out at a scene, like I'm doing now out my window, there are trees and there's grass and a pond and so on.
Well, there's a world of information processing going on, where, you know, photons hit my retina and send a signal along the optic nerve to my brain.
Eventually, I might say something about it.
That's all of a level of functioning and behavior.
But there's also really crucially something it feels like from the first person point
of view.
I might have an experience of the colors, a certain greenness of the green, a certain
reflection on the pond.
This is a little bit like the inner movie in the head.
And the crucial problem of consciousness, for me at least, is this subjective part,
what it feels like from the inside
This we can distinguish from questions about, say, behavior and about functioning. People sometimes use the word consciousness just for the fact that, for example, I'm awake and responsive. That's something that can be understood straightforwardly in terms of behavior, and there are going to be mechanisms for how I'm responding and so on. So I like to call those problems of consciousness the easy problems, the ones about
how we behave, how we respond, how we function. What I like to call the hard problem of consciousness
is the one about how it feels from the first person point of view. Yeah, there was another
very influential articulation of this problem, which I would assume influenced
you as well, which was Thomas Nagel's essay, What Is It Like To Be A Bat?
The formulation he gave there is, if it's like something to be a creature or a system
processing information, whatever it's like, even if it's something we can't understand,
the fact that it
is like something, the fact that there's an internal subjective qualitative character to
the thing, the fact that if you could switch places with it, it wouldn't be synonymous with
the lights going out, that fact, the fact that it's like something to be a bat, is the fact of
consciousness in the case of a bat or in any other system.
I know people who are not sympathetic with that formulation just think it's a kind of
tautology or it's just a question-begging formulation of it.
But as a rudimentary statement of what consciousness is, I've always found that to be an attractive
one.
Do you have any thoughts on that?
Yeah, I find that's about as good a definition as we're going to get for consciousness.
The idea is roughly that a system is conscious if there's something it's like to be that system.
So there's something it's like to be me.
Right now, I'm conscious.
There's nothing it's like, presumably, to be this glass of water on my desk.
If there's nothing it's like
to be that glass of water on my desk,
then it's not conscious.
Likewise, some of my mental states,
you know, my seeing the green leaves right now,
there's something it's like for me to see the green leaves.
So that's a conscious state for me.
But maybe there's some unconscious language processing, syntax going on in my head that doesn't feel like anything to me, or some motor processes in the cerebellum. Those might be states of me, but they're not conscious states of me, because there's nothing it's like for me to undergo those states. So I find this is a definition that's very vivid and useful for me. That said, it's just a bunch of words, like anything. For some people, this bunch of words, I think, is very useful in activating the idea of consciousness from the subjective point of view.
Other people hear something different in that set of words, like, what is it like? You're saying, what is it similar to? Well, it's like it's kind of similar to my brother, but it's different as well.
You know, for those people, that set of words doesn't work.
So what I've found over the years is this phrase of Nagel's is incredibly useful for at least some people in getting them on to the problem, although it doesn't work for
everybody.
What do you make of the fact that so many scientists and philosophers find the hardness
of the hard problem, and I think I should probably
get you to state why it's so hard or why you have distinguished the hard from the easy problems of
consciousness, but what do you make of the fact that people find it difficult to concede that
there's a problem here? Because it's, I mean, this is just a common phenomenon. There are people like Dan
Dennett and the Churchlands and other philosophers who just kind of ram their way past the mystery
here and declare that it's a pseudo mystery. And you and I have both had the experience of
witnessing people either seem to pretend that this problem doesn't exist or they acknowledge
it only to change the subject and then pretend that they've addressed it. And so let's state
what the hard problem is and perhaps you can say why it's not immediately compelling to everyone
that it's in fact hard. Yeah, I mean, there's obviously a huge amount of disagreement
in this area. I don't know what your sense is. My sense is that most people have at least got a reasonable appreciation of the fact that there's a big problem here.
Of course, what you do after that is very different in different cases.
Some people think, well, it's only an initial problem and we ought to kind of see it as an illusion and get past it. But yeah, to state the problem,
I find it useful to first start by distinguishing the easy problems,
which are problems basically about the performance of functions from the hard problem, which is about experience.
So the easy problems are, you know, how is it, for example,
we discriminate information in our environment and respond appropriately?
How does the brain integrate information from different sources and bring it together to
make a judgment and control behavior?
How indeed do we voluntarily control behavior to respond in a controlled way to our environment?
How does our brain monitor its own states?
These are all big mysteries and actually neuroscience has not gotten all that far on some of these problems.
They're all quite difficult.
But in those cases, we have a pretty clear sense of what the research program is and what it would take to explain them.
It's basically a matter of finding some mechanism in the brain that, for example, is responsible for discriminating the information and controlling the behavior.
Although it's pretty hard work finding the mechanism, we're on a path to doing that.
So a neural mechanism for discriminating information, a computational mechanism for the brain to
monitor its own states, and so on.
So for the easy problems,
they at least fall within the standard methods
of the brain and cognitive sciences.
But basically, we're trying to explain some kind of function
and we just find a mechanism.
The hard problem,
what makes the hard problem of experience hard
is it doesn't really seem to be a problem about behavior
or about functions.
You can in principle imagine explaining all of my behavioral responses to a given stimulus
and how my brain discriminates and integrates and monitors itself and controls.
You could explain all that with, say, a neural mechanism, and you might not have touched
the central question,
which is why does it feel like something from the first person point of view? That just doesn't
seem to be a problem about explaining behaviors and explaining functions. And as a result,
the usual methods that work for us so well in the brain and cognitive sciences,
finding a mechanism that does the job just doesn't obviously apply
here. We're going to get correlations. We're certainly finding correlations between processes
in the brain and bits of consciousness, an area of the brain that might light up when you see red
or when you feel pain. But nothing there seems yet to be giving us an explanation. Why does all
that processing feel like something from the
inside? Why doesn't it go on just in the dark, as if we were giant robots or zombies without any
subjective experience? So that's the hard problem. And I'm inclined to think that most people at
least recognize there is at least the appearance of a big problem here. From that point, people react in different ways.
Someone like Dan Dennett says, it's all an illusion or a confusion and one that we need to
get past. I mean, I respect that line. I think it's a hard enough problem that we need to be
exploring every avenue here. And one avenue that's very much worth exploring is the view that it's
an illusion. But there is something kind of faintly unbelievable about the whole idea that the data of consciousness here are an illusion. To me, that's the most real thing in the universe, you know, the feeling of pain, the experience of vision or of thinking. So it's a very, um, it's a very hard line to take, the line that Dan Dennett takes. He wrote a book, Consciousness Explained, back in the early 90s, where he tried to take that line. It was a very good and very influential book,
early 90s, where he tried to take that line. It was a very good and very influential book,
but I think most people have found that at the end of the day, it just doesn't seem to
do justice to the phenomenon. To be fair to Dan, it's been a long time since I've looked at that
book. That was actually, it might have been the first book I read on this topic back when it came out,
I think in 91. Does he actually say, and this is strange, I'm very aligned with you and people like
Thomas Nagel on these questions in the philosophy of mind, and yet I have had this alliance with Dan
for many years around the issue of religion. And so I've spent a lot of time with Dan. We've never really gotten into a conversation on consciousness. Perhaps we've been wary of colliding on this
topic and we had a somewhat unhappy collision on the topic of free will. Is it true that he says
that consciousness is an illusion, or is it just that the hardness of the hard problem is illusory, the claim that the hard problem is categorically different from the easy problems? I completely understand how he would want to push that
intuition. But as I've said before, and I really don't see another way of seeing this, it seems to
me that consciousness is the one thing in this universe that can't be an illusion. I mean, even if we are confused about everything, even if we are
even confused about the qualitative character of our experience in many important respects,
so that we're not subjectively incorrigible, you can be wrong about what it's like to be you
in terms of the details, which is to say you can become a better
judge of what it's like to be you in each moment. But the fact that it is like something to be you,
the fact that something seems to be happening, even if this is only a dream or you're a brain
in a vat or you're otherwise misled by everything, there is something seeming to happen. And that seeming is all you need to
assert the absolute undeniable reality of consciousness. I mean, that is the fact of
consciousness every bit as much as any other case in which you might assert its existence.
So I just don't see how a claim that consciousness itself is an illusion can ever
fly. Yeah, I think, yeah, I'm with you on this. I think Dan's views have actually evolved a bit
over the years on this. Back in maybe the 1980s or so, he used to say things that sounded much
more strongly like consciousness doesn't exist, it's an illusion. He wrote a paper called On the Absence of Phenomenology, saying there really isn't such a thing as what we call phenomenology, which is basically just another word for consciousness. He wrote another one called Quining Qualia, which said we just need to get rid of this whole idea of qualia, which again is a word that philosophers use for the qualitative character of experience, the thing that makes, you know, seeing red different from seeing green,
they seem to involve different qualities.
At one point, Dan was inclined to say, oh, that's just a mistake. There's nothing there. Over the years, I think he's found that people find that line just a bit too strong to be believable.
It's just, it just seems frankly unbelievable from the first-person point of view
that there are no qualia. There is no feeling of red versus the feeling of green, or there is no
consciousness. So he's evolved in the direction of saying that, yeah, there's consciousness,
but it's really just in the sense of, for example, there's functioning and behavior and information encoded. There's not
really consciousness in this strong phenomenological sense that drives the hard problem. I mean,
in a way, it's a bit of a verbal relabeling of the old line, you know, because you must be
familiar. I know you're familiar with these debates over free will, where one person says
there is no free will and the other person says, well,
there is free will, but it's just this much more deflated thing, which is compatible with
determinism.
And it's basically two ways of saying the same thing.
I think Dan used to say there is no consciousness.
Now he says, well, there is consciousness, but only in this very deflated sense.
And I think ultimately it's another way of saying the same thing.
He doesn't think there is consciousness in that strong subjective sense that poses the hard problem. I feel super
sensitized to the prospect of people not following the plot here, because if it's the first time
someone is hearing these concerns, it's easy to just lose sight of what the actual subject is.
So I just want to retrace
a little bit of what you said sketching the hardness of the hard problem. So you have
this, the distinction between understanding function and understanding the fact that experience
exists. And so we have functions like motor behavior or learning or visual perception
and it's very straightforward to think about explaining
these in mechanistic terms. I mean, so you have something like vision. We can talk about the
transduction of light energy into neurochemical events and then the mapping of the visual field
onto the relevant parts of, in our case, the visual cortex. And this is very complicated, but it's not in principle obscure.
The fact that it's like something to see, however, remains totally mysterious no matter how much of
this mapping you do. And if you imagine from the other side, if we built a robot that could do all
the things we can, it seems to me that at no point in refining its mechanism,
would we have reason to believe that it's now conscious, even if it passes the Turing test.
This is actually one of the things that concerns me about AI. It seems one of the likely paths we could take is that we could build machines that seem conscious, and the effect will be so convincing
that we will just lose sight of
the problem. All of our intuitions that lead us to ascribe consciousness to other people and to
certain animals will be played upon because we will build the machines so as to do that. And
it will cease to seem philosophically interesting or even ethically appropriate to wonder whether
there's something that it's like to be one of these robots. And yet, it seems to me that we still won't know whether these machines are
actually conscious unless we've understood how consciousness arises in the first place,
which is to say, unless we've solved the hard problem.
Yeah. And I think we can, maybe we should distinguish the question of whether a system
is conscious from how that consciousness is explained.
I mean, even in the case of other people, well, they're behaving as if they're conscious.
And we tend to be pretty confident that other people are conscious.
So we don't really regard there to be a question about whether other people are conscious.
Still, I think it's consistent to have that attitude and still find it very mysterious, this fact of consciousness, and to be utterly puzzled about how we might explain it in terms of the brain. And I think the same will go for machines. Even if there are machines hanging around with us, talking in a human-like way and
reflecting on their consciousness, those machines are saying, hey, I'm really puzzled by this
whole consciousness thing, because I know I'm just a collection of silicon circuits,
but it still feels like something from the inside. If machines are doing that,
I'm going to be pretty convinced that they are conscious, as I am conscious. But that won't make it any less mysterious. Well, maybe it'll just make it all the more mysterious. How on earth could this machine be conscious, even though it's a collection of silicon circuits? Likewise, how on earth could I be conscious just as a result of these processes in my brain? It's not that I see anything intrinsically worse about silicon than about brain processes here. There just seems to be this kind of mysterious gap in the explanation in both cases. And of course we can worry about other people too. There's a classic philosophical problem, the problem of other minds. How do you know that anybody else apart from yourself is conscious? Descartes said, well, I'm certain of one thing.
I'm conscious, I think, therefore I am.
That only gets you one data point.
It gets me to me being conscious.
Actually, it gets me to me being conscious right now.
Who knows if I was ever conscious in the past?
Anything else beyond that has got to be something of an inference or an extrapolation.
We end up taking for granted most of the time that other people are conscious,
but you could try to raise questions there if you wanted to.
And then as you move to questions about AI and robots, about animals and so on, the questions
just become very fuzzy and murky.
Yeah, I think the difference with AI or robots is that presumably we will build them, or
we may in fact build them along lines that are not at all analogous to the
emergence of our own nervous systems. And so if we follow the line we've taken, say, with like,
you know, chess playing computers, where we have something which we don't even have any reason to
believe is aware of chess, but it is all of a sudden the best chess player on earth, and now will always be so,
if we did that for a thousand different human attributes, so that we created a very compelling
case for its superior intelligence, it can function in every way we function, better than we can, and we have put this in some format so that it has the mimetic facial displays that we find attractive and compelling.
We get out of the uncanny valley, and these robots no longer seem weird to us.
In fact, they detect our emotions better than we can detect the emotions of other people, or than other people can detect ours. And so, all of a sudden, we are played upon by a system that is deeply unanalogous to our own
nervous system. And then we will just, then I think it'll be somewhat mysterious whether or
not this is conscious, because we have cobbled this thing together. Whereas in our case,
the reason why I don't think it's parsimonious
for me to be a solipsist and to say, well, maybe I'm the only one who's conscious, is because
there's this obviously deep analogy between how I came to be conscious and how you came to be
conscious. So I have to then do further work of arguing that there's something about your nervous
system or your situation in the universe that might not be a sufficient base of consciousness, and yet it is clearly in my own case.
So to worry about other people or even other higher animals seems a stretch.
At least it's unnecessary, and it's only falsely claimed to be parsimonious.
I think it's actually, you have to do extra work to doubt whether other
people are conscious rather than just simply not attribute consciousness to them.
How would you feel if we met Martians? Let's say they're intelligent Martians who are
behaviorally very sophisticated and we turn out to be able to communicate with them about
science and philosophy, but at the same time they've evolved through a completely independent evolutionary process from us. So they got there in a different way. Would you
have the same kind of doubts about whether they might be conscious?
Yeah, well, I think perhaps I would. It would be probably somewhere between our own case and
whatever we might build along lines that we have no good reason to think
track the emergence of consciousness in the universe.
Well, this is actually a topic I wanted to raise with you,
this issue of epiphenomenalism, because it is kind of mysterious.
The flip side of the hard problem, the fact that you can describe all of this functioning
and you seem to never need to introduce consciousness
in order to describe mere function, leaves you at the end of the day with the possible
problem which many people find deeply counterintuitive, which is that consciousness doesn't do anything.
That it is an epiphenomenon. An analogy often given for this is the smoke coming out of the smokestack of an old-fashioned locomotive. It's always associated with the progress of this train down the tracks, but it's not actually doing anything. It's a mere byproduct of the actual causes that are propelling the train. And so consciousness could be like the smoke rising out of the smokestack. It's not doing anything, and yet it's always there at a certain level of function.
If I recall correctly, in your first book, you seemed to be fairly sympathetic with epiphenomenalism.
Talk about that a little bit. I mean, epiphenomenalism is not a view that anyone
feels any initial attraction for. The idea that consciousness doesn't do anything? It sure seems to do so much.
But there is this puzzle that pretty well for any bit of behavior you try to explain,
it looks like there's the potential to explain it without invoking consciousness in this
subjective sense.
There'll be an explanation in terms of neurons or computational mechanisms of our various behavioral responses.
I mean, there's one place where you at least start to wonder: maybe consciousness doesn't have any function, maybe it doesn't do anything at all. Maybe, for example, consciousness gives value and meaning to our lives, which is something it could do without actually doing anything.
But then, obviously, there are all kinds of questions. Uh, how and why would it have evolved? Not to mention, how is it that we come to be having this extended conversation about consciousness, um, if consciousness isn't actually playing a role in the, uh, in the causal loop? So in my first book, I at least tried on the idea of epiphenomenalism. I didn't come out saying this is definitely true, but tried to say, okay, well, if we're
forced in that direction, that's one way we can go.
I mean, actually, my own view, and this is skipping ahead a few steps, is that either it's epiphenomenal, or it's outside the physical system but somehow playing a role in physics.
That's another kind of more traditionally dualist possibility.
Or third possibility, consciousness is somehow built in at the very basic level of physics.
So to get consciousness to play a causal role, you need to say some fairly radical things.
I'd like to track through each of those possibilities, but to stick with epiphenomenalism
for a moment, you've touched on it in passing here,
but remind us of the zombie argument. I don't know if that originates with you.
It's not something that I noticed before I heard you making it, but the zombie argument really is
the thought experiment that describes epiphenomenalism. Introduce the concept of a
zombie, and then I have a question about that.
So yeah, the idea of zombies actually, I mean, it's been out there for a while in philosophy before me, not to mention out there in the popular culture.
But the zombies which play a role in philosophy are a bit different from the zombies that
play a role in the movies or in Haitian voodoo culture.
You know, all the different kinds of zombies are missing something. The zombies in the movies are somehow lacking life. They're dead, but reanimated. The zombies in the voodoo tradition are lacking some kind of free will. Well, the zombies that play a role in philosophy
are lacking consciousness. And this is just a thought experiment, but the conceit is that we can at least imagine
a being at the very least behaviorally identical
to a normal human being,
but without any consciousness on the inside at all,
just acting and walking and talking
in a perfectly human-like way
without any consciousness.
The extreme version of this thought experiment says we can at least imagine a being physically
identical to a normal human being, but without any subjective consciousness.
When I talk about my zombie twin, a hypothetical being in the universe next door who's physically
identical to me, he's holding a conversation like this with a zombie analog of you right now. He's saying all the, uh, all the same stuff and responding, but without any consciousness. Now, no one thinks anything like this exists in our universe, but the idea at least seems imaginable or conceivable. There doesn't seem to be any contradiction in the idea. And the very fact that you can kind of make sense of the idea immediately raises some questions, like, why aren't we zombies? There's a contrast here. Zombies could have existed. Evolution could have produced zombies. Why didn't evolution produce zombies? It produced conscious beings. It looks like, for anything behavioral you could point to,
it starts to look as if a zombie could do all the same things without consciousness.
So if there was some function we could point to and say,
that's what you need consciousness for,
and you could not in principle do that without consciousness,
then we might have a function for consciousness.
But right now it seems, I mean, actually this corresponds to the science, for anything that we actually do. Perception, learning, memory, language, and so on. It sure looks like a
whole lot of it can be performed even in the actual world unconsciously. So the whole problem
of what consciousness is doing is just thrown into harsh relief by that thought experiment.
Yeah, as you say, most of what our minds are accomplishing is
unconscious, or at least it seems to be unconscious from the point of view of the two of us who are
having this conversation. So the fact that I can follow the rules of English grammar insofar as I
manage to do that, that is all being implemented in a way that is unconscious. And when I make an error, I, as the conscious
witness of my inner life, I'm just surprised at the appearance of the error. And I could be
surprised on all those occasions where I make no errors and I get to the end of a sentence in
something like grammatically correct form. I could be sensitive to the fundamental mysteriousness of that, which is to say that I'm following rules
that I have no conscious access to in the moment. And everything is like that. The fact that I
perceive my visual field, the fact that I hear your voice, the fact that I effortlessly and
actually helplessly decode meaning from your words, because I am an English speaker and
you're speaking in English, but if you were
speaking in Chinese, it would just be noise. And I mean, this is all unconsciously mediated. And so
again, it is a mystery why there should be something that it's like to be associated with
any part of this process because so much of the process can take place in the dark, or at least it seems to be in the dark.
This is a topic I raised in my last book, Waking Up, in discussing split-brain research.
But there is some reason to worry or wonder whether or not there are islands of consciousness in our brains that we're not aware of, which is to say we have
the problem of other minds with respect to our own brains. What do you think about that? What
do you put the chance of there being something that it's like to be associated with these zombie
parts of, or seemingly zombie parts of your own cognitive processing?
Well, I don't rule it out.
I think when it comes to the mind-body problem,
the puzzles are large enough.
One of the big puzzles is we don't know which systems are conscious.
At least some days I see a lot of attraction to the idea that consciousness is much more widespread than we think.
I guess most of us think, okay, humans are conscious, and probably a lot of the more sophisticated mammals, at least, are conscious. Apes, monkeys, dogs, cats. Around the point of mice, or maybe flies, some people start to wobble. But, you know, I'm attracted by the idea that for, you know, many at least reasonably
sophisticated information processing devices, there's some kind of consciousness.
And maybe this goes down very deep.
And, you know, at some point, maybe we can talk about the idea that consciousness is
everywhere.
But before even getting to that point, if you're prepared to say that, say, a fly is
conscious, or a worm with its 300 neurons, and so on, then you do start to have to worry about bits of the, uh, bits of the brain that are enormously more sophisticated than that, but that are also part of another conscious system. There's a, uh, there's a guy, Giulio Tononi, who's put forward a well-known recent theory of consciousness called the information integration theory. And he's got a mathematical measure called phi of the amount of information that a system integrates, and thinks that, roughly, whenever that's high enough, you get consciousness.
So then, yeah, you'd look at these different bits of the brain, a hemisphere, things like the cerebellum, and so on. Well, OK, the phi there is not as high as it is for the brain, but it's still pretty high, high enough that in an animal he would say it's conscious. So why isn't it? And he ends up having to throw in an arbitrary extra axiom that he calls the exclusion axiom, saying if you're part of a system that has a higher phi than you, then you're not conscious. So if, you know, if the hemisphere has a high phi, but the brain as a whole has a higher phi,
then the brain gets to be conscious,
but the hemisphere doesn't.
But to many people, that axiom looks kind of arbitrary.
And if it wasn't for that being in there,
then you'd be left with a whole lot of conscious subsystems all over.
I agree.
Who knows what it's like to be a subsystem,
what it's like to be my cerebellum,
or what it's like to be a hemisphere,
but it at least makes you worry and wonder.
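To make the flavor of a phi-style measure concrete, here is a minimal sketch in Python. The toy network, its made-up dynamics, and the crude definition of integration, the predictive information of the whole minus the best any bipartition of it can do, are all invented for illustration; this is not Tononi's actual phi, which is considerably more involved.

```python
import itertools
import math

# Toy network: 3 binary units with fixed deterministic dynamics.
# This is NOT Tononi's phi; it is a crude illustration of the idea
# "how much does the whole predict its own future beyond what its
# parts predict separately?"

def next_state(state):
    a, b, c = state
    return (b ^ c, a & c, a | b)  # arbitrary made-up update rule

def predictive_info(units, n=3):
    # Mutual information I(X_t ; X_{t+1}) restricted to a subset of
    # units, with all 2^n global states equally likely at time t.
    states = list(itertools.product((0, 1), repeat=n))
    joint = {}
    for s in states:
        t = next_state(s)
        key = (tuple(s[i] for i in units), tuple(t[i] for i in units))
        joint[key] = joint.get(key, 0.0) + 1.0 / len(states)
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items())

def toy_phi(n=3):
    # Integration proxy: information the whole carries about its future,
    # minus the most that any bipartition's parts carry on their own.
    whole = predictive_info(tuple(range(n)))
    best_parts = 0.0
    for r in range(1, n):
        for part in itertools.combinations(range(n), r):
            rest = tuple(sorted(set(range(n)) - set(part)))
            best_parts = max(best_parts,
                             predictive_info(part) + predictive_info(rest))
    return max(0.0, whole - best_parts)

print(round(toy_phi(), 3))
```

On this toy definition, a system whose units evolve independently scores near zero, while one whose units' futures depend on each other's states scores higher. The exclusion worry in the conversation is then about what to say when a high-scoring subsystem, like a hemisphere, sits inside an even higher-scoring whole.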
On the other hand, you know, there are these situations where one half of the brain basically gets destroyed and the other half keeps going fine. Yeah, yeah. Well, so I wanted to ask you about Tononi's
notion of consciousness as integrated information. And to my eye, it seems yet another case of someone just trying to ram past the hard problem.
And actually, I noticed Max Tegmark wrote a paper that actually took Tononi as a starting point.
And Max has been on this podcast.
I don't think we touched on consciousness, but he also did a version of this.
He just basically said, let's start here. We know that there are certain arrangements of matter that just are conscious.
We know this. There is no problem. This is a starting point. And now we just have to talk
about the plausible explanation for what makes them conscious. And then he sort of went on to
embrace Tononi and then did a lot of physics.
But is there anything in Tononi's discussion here that pries up the lid on the hard problem
more than the earlier work he did with Edelman, or anyone else's attempt to give some information-processing construal, or a synchrony-of-neural-firing construal, of consciousness? Yeah, to be fair to Giulio Tononi, I mean, I think it's true that in some of the
presentations of his work in the popular press and so on, you can get this idea,
oh, information integration is all there is. Let's explain that. We've explained everything.
He's actually very sensitive to the, uh, to the problem of consciousness. And when pressed on this, and even in some of the stuff he's written, he says, I'm not trying to solve the hard problem in the sense of showing how you can get consciousness from matter. He's not trying to cross the explanatory gap from physical processes to consciousness. Rather, he says, I'm starting with the fact of consciousness. I'm just taking it as a given that we are conscious, and I'm trying to map its properties. And he actually starts with some phenomenological axioms of consciousness.
It consists of information that's differentiated in certain ways, but integrated and unified in other ways.
And then what he tries to do is take those phenomenological axioms
and turn them into mathematics, turn them into mathematics of information
and say, what are the informational properties that consciousness has?
And then he comes up with this mathematical measure.
Then at a certain point, it somehow turns into the theory that this is all consciousness is, that what consciousness is, is a certain kind of integration of information. The way I would hear the theory, I don't know if he puts it this way, is that there's basically a correlation between different states of consciousness and different kinds of integration of information in the brain.
There's still a hard problem here because we still have no idea why all that integration of information in the brain should give you consciousness in the first place.
But even someone who believes there's a hard problem can believe there are still really
systematic correlations between brain processes and consciousness that we ought to be able
to give a rigorous mathematical
theory of, you know, just which physical states go with which kind of states of consciousness.
And I see Giulio's theory basically as at least a stab in that direction, of trying to give a
rigorous mathematical theory of the correlations. Yeah, yeah, well I should say that I certainly
agree that one can more or less throw up one's hands with respect to the hard problem and then
just go on to do the work of trying to map the neural correlates of consciousness and understand
what consciousness seems to be in our case as a matter of its instantiation in the brain and
never pretend that the mystery has been reduced thereby. I think I said this once in response to the work he did with Edelman. One of the criteria, I don't know if he still does this, but one of his criteria for information integration was that it had to occur within a window of something like 500 milliseconds, right? And I just,
by analogy, extrapolated that out to geological processes in the earth. And what if it was just
the fact that integrated processes in the earth over a time course of a few hundred years
was a sufficient basis of consciousness? If we just stipulate that that's true,
that's still just a statement of a miracle from my point of view, from the point of view of being
sympathetic with the hard problem. That would be an incredibly strange thing to believe, and yet
that is the sort of thing we are being forced to believe about our own brains, just under a
slightly different description. I do think there's something intermediate that you can go for here, even if you do believe
and you're very convinced there's a serious hard problem of consciousness that allows
the possibility of at least a broadly scientific approach to something in the neighborhood
of the hard problem.
It's not just, oh, let's look at the neural correlates and see what's going on in the human
case, but it's something like try to find the simplest, most fundamental principles that connect
physical processes to consciousness as a kind of basic, general, and universal principle. So we
might start with some correlations we find in the familiar human case between, say, certain neural systems
and certain kinds of consciousness, but then try and generalize those based on as much evidence
as possible. Of course, the evidence is limited, which is another limitation here. But then try
and find principles which might apply to other systems. Ultimately, look for really simple
bridging principles that cross the gap from
physical systems to consciousness, and that would in principle predict what kind of consciousness
you'd find in what kind of physical system.
So I would see something like Tononi's information integration principle, with this mathematical quantity phi, as a proposal, maybe a very early proposal, about a fundamental principle that might connect physical processes to consciousness.
Now, it doesn't exactly remove the hard problem, because at some point you've got to take that
principle as a basic axiom. Yeah, when there's information integration, there's consciousness.
But then you can at least go on to do science with that principle. And my take on this is that we know that, elsewhere in science, you have to take some laws and some principles as fundamental.
You know, the fundamental laws of physics, the law of gravity, or the unified field theory, or the laws of quantum mechanics.
Some things are just basic principles that we don't try and explain any further. It may well be that when it comes to consciousness, we're going to have to take
something like that for granted as well. So we don't try to explain space, or at least we didn't
try to explain space in terms of something more basic. Some things get taken as primitive,
and we look at the fundamental laws that involve them. Likewise, the same could be true for
consciousness. And we ended up, you know, we ended up pretty satisfied about what goes on in the case of space. Space is one of the primitives, but we've got a great scientific theory of how it works. We could end up in that position for consciousness too. Yes, we have to take something here as basic, but we'll get this really fundamental principle, say, like the information integration principle, that crosses the gap, and yet it won't remove the hard problem, because that'll be taken as basic. But that will at least be reduced to a situation we're familiar with elsewhere in science. Yeah, yeah, and actually I'm quite sympathetic with
that line. As you say, there are primitives or brute facts that we accept throughout science,
and they are no insult to our thinking about the rest of reality.
And so I want to get there, but I realize now I forgot to ask a question that Annika wanted me to
ask, my wife Annika wanted me to ask on the zombie argument. And she was wondering why,
I mean, whether it was actually conceivable that a zombie would or could talk about consciousness
itself. I mean, how is it that you take a zombie, you know, my zombie twin that has no experience,
there's nothing that it's like to be that thing, but it is talking just as I am and is functioning
just as I am. What could possibly motivate a zombie that is devoid of phenomenal experience to say things like,
I have experiences, but other creatures don't, or to worry about the possibility of zombies?
There would seem to be no basis to make this distinction, because everything he's doing,
he can easily ascribe to others that have no experience.
So there seems to be no basis for him to distinguish experience from non-experience.
So I just wanted to get your reaction to that on her behalf.
I mean, this is a big puzzle, and it's probably one of the biggest puzzles
when it comes to thinking through this idea of a zombie.
I don't know if zombies talk in the same way.
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes, AMAs, and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on listener support. And you can subscribe now at samharris.org.